In the classical twenty questions game, Bob chooses an element from 1 to $n$, and Alice’s goal is to identify it using as few Yes/No questions as possible.
The game becomes more interesting when Bob chooses the element according to a distribution $\mu$ known to both players, and Alice attempts to minimize the expected number of questions. The optimal strategy is given by a Huffman code.
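As a quick illustration (not part of the paper's contribution), the expected number of questions under the Huffman strategy equals the expected codeword length of a binary Huffman code for $\mu$, which can be computed with the standard merge trick: every merge of two subtrees adds one question for each element beneath the new node. A minimal Python sketch, assuming $\mu$ is given as a list of probabilities:

```python
import heapq

def expected_questions(probs):
    """Expected number of Yes/No questions under an optimal strategy:
    the expected codeword length of a binary Huffman code for `probs`."""
    if len(probs) <= 1:
        return 0.0
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p1 = heapq.heappop(heap)
        p2 = heapq.heappop(heap)
        # Merging two subtrees adds one question for every element below
        # the new internal node, i.e. probability mass p1 + p2.
        total += p1 + p2
        heapq.heappush(heap, p1 + p2)
    return total
```

For the uniform distribution on four elements this returns $2$, matching the two questions of binary search; in general the value lies between $H(\mu)$ and $H(\mu)+1$.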
Rényi and Ulam asked what happens when Bob is allowed to lie. Lies can be quantified in several different ways; we consider the setting in which Bob is allowed to lie at most $k$ times.
Rivest et al. showed that in the setting of the classical twenty questions game, allowing Bob to lie a fixed number of times requires Alice to ask $\log\log n$ additional questions per lie.
We extend the result of Rivest et al. to the distributional setting, in which the penalty is $H_2(\mu)$ per lie, where $H_2(\mu)=\sum_x \mu(x) \log\log (1/\mu(x))$.
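As a sanity check, the distributional penalty specializes to the classical one: when $\mu$ is the uniform distribution on $n$ elements,
\[
H_2(\mu) = \sum_{x=1}^{n} \frac{1}{n} \log\log n = \log\log n,
\]
recovering the $\log\log n$ additional questions per lie of Rivest et al.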
As an application, we extend the result of Moran and Yehudayoff on distributional sorting to the setting of lies.