The liar's guessing game is a game played between two players $A$ and $B$. The rules of the game depend on two positive integers $k$ and $n$ which are known to both players. At the start of the game $A$ chooses integers $x$ and $N$ with $1 \le x \le N.$ Player $A$ keeps $x$ secret, and truthfully tells $N$ to player $B$. Player $B$ now tries to obtain information about $x$ by asking player $A$ questions as follows: each question consists of $B$ specifying an arbitrary set $S$ of positive integers (possibly one specified in some previous question), and asking $A$ whether $x$ belongs to $S$. Player $B$ may ask as many questions as he wishes. After each question, player $A$ must immediately answer it with yes or no, but is allowed to lie as many times as she wants; the only restriction is that, among any $k+1$ consecutive answers, at least one answer must be truthful. After $B$ has asked as many questions as he wants, he must specify a set $X$ of at most $n$ positive integers. If $x$ belongs to $X$, then $B$ wins; otherwise, he loses. Prove that: 1. If $n \ge 2^k,$ then $B$ can guarantee a win. 2. For all sufficiently large $k$, there exists an integer $n \ge (1.99)^k$ such that $B$ cannot guarantee a win. Proposed by David Arthur, Canada
Problem
Source:
Tags: probability, IMO, IMO 2012, game, IMO Shortlist, guessing game, ilostthegame
10.07.2012 21:17
I only have a solution for part 1, although I think the ideas in the first part would probably help with part 2. First, we can observe that it obviously doesn't matter what the actual integers being guessed are. So we'll generalize the game so that $A$ chooses a finite set of positive integers $D$, tells the entire set to $B$, and picks $x \in D$ for $B$ to guess; clearly this game is still equivalent to the original one.

Lemma 1. With a fixed $k$ and $n$, player $B$ can guarantee a win for all $N$ iff $B$ can guarantee a win for $N = n + 1$.

Proof. (Only if) Trivial. (If) We use induction on $N$; the statement is trivial for $N \leq n$. For larger $N > n$, arbitrarily partition $D$ into $n+1$ nonempty sets $E_1, E_2, \ldots, E_{n+1}$. $B$ can use his strategy for $n + 1$ numbers on $R := \{1, 2, \ldots, n+1\}$, replacing each question $S'$ with the set $\bigcup_{i \in S'} E_i$. Then his strategy will yield a subset $T$ of $R$ that has size at most $n$, and $B$ will know that $x \in \bigcup_{i \in T} E_i$. Since $T \subsetneq R$, we have $\bigcup_{i \in T} E_i \subsetneq \bigcup_{i \in R} E_i = D$, so $\left| \bigcup_{i \in T} E_i \right| < |D| = N$. From here, $B$ has a winning strategy by the induction hypothesis. -- end lemma 1 --

1. Trivially, a strategy for $n = 2^k$ is also a strategy for $n \geq 2^k$, so we only need to consider $n = 2^k$, and by the lemma we only have to consider the case $N = 2^k + 1$. Now all $B$ needs to win is to know one element $d \in D$ that cannot be $x$. Identify each element of $D$ with a binary string of length $k$, with one extra element $e$ left over. Let $S_i$ be the set of elements corresponding to binary strings with $0$ in the $i$th position. First, $B$ asks the questions $S_1, S_2, \ldots, S_k$. Let $a_i$ be $0$ if $A$ answers that $x \in S_i$, and $1$ otherwise. Also denote the binary digit $\overline{a_i} = 1-a_i$. Now, let $w_i$ be the positive integer in $D$ corresponding to the binary word $a_1a_2\ldots a_i\overline{a_{i+1}a_{i+2}\ldots a_k}$ for $i = 0, 1, \ldots, k$. $B$ next asks the questions $\{w_0\}, \{w_1\}, \ldots, \{w_k\}$ in order.

If $A$ answers at least once that $x \neq w_i$, let $i$ be the smallest nonnegative integer for which $A$ gives this answer. Then if $x = w_i$, $A$ would have lied for the $k+1$ consecutive questions $S_{i+1}, S_{i+2}, \ldots, S_k, \{w_0\}, \{w_1\}, \ldots, \{w_i\}$. Therefore $B$ knows that $x \neq w_i$ and he wins. If $A$ always answers that $x = w_i$, then if $x = e$, $A$ would have lied for the $k + 1$ consecutive questions $\{w_0\}, \{w_1\}, \ldots, \{w_k\}$. Therefore $B$ knows that $x \neq e$ and he wins in this case as well. [edit: fixing typos]
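A quick brute-force check of the part-1 strategy above, added here as an illustration (it is not part of the original post): for a small $k$ it enumerates every possible sequence of answers by $A$ and confirms that the element $B$ rules out really would force $k+1$ consecutive lies.

```python
from itertools import product

# Candidates are the binary k-tuples plus one extra element 'e'.
# An answer is True for "yes".
def b_rules_out(k, answers):
    # answers[0..k-1]: replies to S_1..S_k; answers[k..2k]: replies to {w_0}..{w_k}
    a = [0 if ans else 1 for ans in answers[:k]]                  # a_i as in the post
    w = [tuple(a[:i] + [1 - b for b in a[i:]]) for i in range(k + 1)]
    for i in range(k + 1):
        if not answers[k + i]:                                    # A claims x != w_i
            return w[i]
    return 'e'                                                    # all "yes": rule out e

def max_lie_run(k, answers, d):
    # longest run of consecutive lies if the secret number were d
    a = [0 if ans else 1 for ans in answers[:k]]
    w = [tuple(a[:i] + [1 - b for b in a[i:]]) for i in range(k + 1)]
    truths = [answers[i] == (d != 'e' and d[i] == 0) for i in range(k)]   # questions S_i
    truths += [answers[k + i] == (d == w[i]) for i in range(k + 1)]       # questions {w_i}
    run = best = 0
    for t in truths:
        run = 0 if t else run + 1
        best = max(best, run)
    return best

k = 3
for answers in product([True, False], repeat=2 * k + 1):
    d = b_rules_out(k, answers)
    assert max_lie_run(k, answers, d) >= k + 1    # x = d would force k+1 straight lies
print("all", 2 ** (2 * k + 1), "answer sequences checked for k =", k)
```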
10.07.2012 22:16
1. Let $T_1 = \{ 1, 2, \ldots , N \}$. Let $S_i$ be the $i$-th question, let $f(S_i)$ be the answer to $S_i$, and let $g(S_i)$ be $S_i$ when $f(S_i)$ is yes and $(S_i)^C$ when $f(S_i)$ is no.

Lemma. If $|T_m| > 2^k$, then we can produce a $T_{m+1} \subset T_m$ such that $x \in T_{m+1}$ and $T_{m+1} \ne T_m$.

Proof. Let $T_m = \{ \alpha \} \cup P$ and ask $S_i = \{ \alpha \}$ for $i = 1, 2, \ldots$

Case 1. If some $f(S_i)$ is no, then $B$ can choose $S_{i+1}, \ldots , S_{i+k}$ so that $\left(\bigcup g(S_j)\right)^C$ is always nonempty, because the set $\left(\bigcup g(S_j)\right)^C$ has $2^k$ elements. Take $T_{m+1} = \bigcup g(S_j)$.

Case 2. If $f(S_1), \ldots, f(S_{k+1})$ are all yes, take $X = \{ \alpha \}$.

Therefore $B$ eventually reaches a $T_m$ with $|T_m| \leq 2^k$.
10.07.2012 23:49
Here's an outline of part 2: Rephrase the problem as "$A$ must make a series of answers so that for every sequence of $k+1$ consecutive questions, there is no number for which all the answers are false." If this is the case, then among any $k+1$ consecutive answers at least one must be true for $x$, and $B$ will learn no information from the possible lies. Now have $A$ act completely randomly. Show that the probability that a fixed set of $k+1$ consecutive questions are all lies is at most $n$ divided by $2^k$, no matter the questions or answers before those questions. This now lets you use a version of the Lovász Local Lemma. It's different from the usual version in that it's directional and you need to be a bit more careful, but the proof still works. The Lovász Local Lemma will give you arbitrarily long finite sequences of responses. Now, assume $B$ has a deterministic strategy. Finally, do a compactness argument with Tychonoff on $\{0,1\}^{\mathbb{N}}$ to turn them into an infinite sequence.
11.07.2012 00:58
This is a sketch for b); as far as I've seen somebody has posted a solution for a), though I did not read it carefully. What we want to do is to exhibit some $N$ such that no proper subset of $\{1,2,\ldots,N\}$ can be winning against a certain strategy of the liar. We say that a number $1\leq x\leq N$ requires an answer in $\leq p$ turns at some moment if the previous $k-p+1$ answers are false under the assumption that $x$ is the secret number. And we say that $x$ requires an answer in exactly $p$ turns if it requires an answer in $\leq p$ turns, but not in $\leq p-1$. At every moment $t$ we compute a weighted sum $P(t)$ in the following way: every number that requires an answer in exactly $k+1-i$ turns contributes a summand $\frac{1.995^i}{N}$ to $P(t)$. Suppose we (the liar) are given a new set $S$ and must give an immediate answer. Which one will we give? If $S$ carries more than half of $P(t)$ then we say 'yes', otherwise 'no'. Following this strategy, one can easily check that $P(t+1) \leq \frac{1.995}{2}\cdot P(t) + 1$. Then, by induction, $P(t)<400$. Assume that at some moment our opponent could guarantee that $x$ is not the secret number. Let $t$ be the first such moment. Then $x$ contributes $\frac{1.995^{k+1}}{N}$ to $P(t)$. This is impossible if $N<\frac{1.995^{k+1}}{400}$, which is exactly what we wanted for large $k$.
11.07.2012 12:03
Part 1 More generally, we'll let $A$ choose $x$ from $T \subset \mathbb{Z}_+$, where $|T|=N$, and $T$ is known to both players. We will proceed by induction on $N$. If $1\le N \le 2^k$, take $X=T$. Otherwise, suppose $x_1<...<x_{2^k} \in T$. Firstly, note that if we can prove that $x \ne x_j$, we will be finished since $B$ may now guess subsets of $T'=T \backslash x_j$, and we know $B$ has a winning strategy here from our inductive hypothesis, since $|T'|=N-1$. Let $P_1=\{x_1,...,x_{2^k}\}$. If a subset of $\mathbb{Z}_+$ has $2r$ elements, define its lower half to be the set of its $r$ smallest elements. Step 1: Ask if $x \in P_1$ repeatedly. If we get an answer of yes $k+1$ times, we can take $X=P_1$. Otherwise, $A$ will eventually say no. Step $\ell$, $2 \le \ell \le k+1$: If $A$ last said no, colour $P_{\ell-1}$ red, and set $P_{\ell}$ to be the lower half of $P_{\ell-1}$. If $A$ last said yes, colour $P_{\ell-1}$ blue, let $t$ be the greatest index such that $P_t$ is red, and set $P_{\ell}$ as the lower half of $P_t \backslash (P_{\ell-1} \cup ... \cup P_{t+1})$. Finally, ask $A$ if $x \in P_{\ell}$. Note that $|P_i|=2^{k+1-i}$. Let $P_{k+1}=\{x_m\}$. We have $x_m \in P_j \Longleftrightarrow x_{m+1} \in P_j$, for $1 \le j \le k$. Furthermore, $x_m$ belongs to precisely those $P_j$, $1 \le j \le k$, such that $P_j$ is coloured red. Case 1 $A$ says $x \not\in P_{k+1}$. Then the truthfulness of any answer from $A$ implies $x \ne x_m$, so we are done. Case 2 $A$ says $x \in P_{k+1}$. Then the truthfulness of any answer from $A$ implies $x \ne x_{m+1}$, so we are done.
12.07.2012 07:43
Brilliant problem with a brilliant solution! I was on the right track during the contest, but I solved it long after that. My solution is pretty much the same as magazinov's.
12.07.2012 21:22
My solution to part b (which happens to get a better bound of asymptotically $\frac{2^k}{k}$):

First of all, it is necessary to make B's strategy non-random. This will be very important later. To do this, if his strategy involves randomly picking a move at some point (depending on the previous moves and ANSWERS), fix his strategy by explicitly picking one of the moves which he might make at that point. This is not saying he cannot adapt his strategy; this is just saying he cannot flip a coin to decide which question to ask. Fixing his strategy can do no harm because this is a perfect information game and he needs to guarantee a win.

Now, I will slightly alter the rules of the game. Instead of A not being able to lie for k+1 moves in a row, he is able to lie for k+1 moves in a row, but he will lose at the end of the game if he ever does that. This is obviously equivalent.

Now, consider a sequence of moves and answers. I will write $S_{i,j}$ for the set of possible $x$ such that moves $i, i+1, \cdots, i+j$ are all lies. (Here $i$ is a positive integer, and $j$ is an integer that is at least $-1$. I found this convenient for a later induction.) I claim that it suffices for A to move so that all of the $S_{i,k}$ are empty. This is because if all the $S_{i,k}$ are empty, then first of all he cannot lose by lying k+1 moves in a row, as if that were the case, x would be in one of the sets. $B$ also cannot exclude any integer from possibly being $x$, as no possible $x$ would result in k+1 moves being lies.

We now make A's strategy purely random. At each move, he answers yes or no with probability one half each (and different moves are independent). Let the event $E_i$ be the event that $S_{i,k}$ is NONEMPTY. I will now claim the following result:

Claim: Assume i+k-m moves have been made so far, where m ranges from 0 to k+1. Then no matter those moves, the probability of $E_i$ occurring is at most $\frac{|S_{i,k-m}|}{2^m}.$

Proof: Induct on m.

Base Case: m=0: The probability of $E_i$ occurring is at most $|S_{i,k}|$ because if that set is empty, $E_i$ does not occur by definition, and if it is nonempty, the probability it occurs is 1 and the set has size at least 1.

Inductive Step: Assume it is true for m-1. We prove it for m. Consider what happens on the $(i+k-m+1)$th move. B will ask a question, and A has equal chance of answering either yes or no. This partitions the set $S_{i,k-m}$ into two possible sets $S_{i,k-m+1}$ and $T_{i,k-m+1}.$ This is because everything that was a lie for the last k-m+1 questions will be a lie for exactly one possible answer to the next question. Thus, by induction, the probability of $E_i$ occurring if $A$ answers yes is at most $\frac{|S_{i,k-m+1}|}{2^{m-1}}.$ A similar expression, but with T instead of S, holds if $A$ answers no. Thus, the probability of $E_i$ occurring is at most $\frac{1}{2}\left(\frac{|S_{i,k-m+1}|}{2^{m-1}}+\frac{|T_{i,k-m+1}|}{2^{m-1}}\right) = \frac{|S_{i,k-m}|}{2^m},$ as desired.

In particular, letting m = k+1, the probability of $E_i$ occurring for some fixed first i-1 moves is at most $\frac{|S_{i,-1}|}{2^{k+1}}.$ (This is where it's convenient to define $S_{i,-1}.$ It is the set of possible x where a set of 0 questions are all lies, in other words the set of all x, which has size N.) Thus, the probability that $E_i$ occurs, no matter the first i-1 moves, is at most $\frac{N}{2^{k+1}}.$ This lets us use what is essentially a variant of the Lovász local lemma!
Now, if $\frac{N}{2^{k+1}} \le p(1-p)^k$ for some $p$ between 0 and 1, I claim that for every $m$ the probability that $E_1,E_2, \cdots, E_m$ all don't occur is at least $(1-p)^m.$ Such a p exists if N is asymptotically $\frac{2^k}{k}$ (to see this, let p = $\frac{1}{k+1}$ and approximate $(1-p)^k$ by $1/e$). To prove this, I will first prove a stronger lemma. (Note: $P(A|B)$ means the conditional probability of A given B, $\bar{A}$ means the event that A doesn't happen, and A,B means A intersect B.)

Lemma: $P(E_m|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}}) \le p(1-p)^{z-1},$ where m is an integer at least 1 and z is an integer from 1 to $\min(k+1,m-1)$.

Proof: We do a double induction. Induct upwards on $m$, and then induct downwards on $z$ (starting from $k+1$). For the sake of the induction, it suffices to fix $m$, assume we have proven it for everything smaller than $m$ (which may be nothing!) and then induct on $z$. Then, we will only need to check the base case $z = k+1.$ If this induction worries you, rephrase the argument as "Take the smallest m that doesn't work for some z, and then take the largest z that doesn't work for that m."

Base case: We want to estimate $P(E_m|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-k-1}}).$ (The same logic works for when m-1 is less than k+1, you just need to be a bit more careful.) Note that all the events $E_i$ where $i$ is less than $m-k$ only involve moves before the $m$th move. Therefore, the conditional is just saying "for some certain possible first $m-1$ moves." But we showed earlier that this must be $\le \frac{N}{2^{k+1}} \le p(1-p)^k.$

Inductive step: We now do a bit of calculation. \[P(E_m|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}})\] \[=\frac{P(E_m,\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}})}{P(\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}})}\] \[=\frac{\frac{P(E_m,\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}})}{P(\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z-1}})}}{\frac{P(\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}})}{P(\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z-1}})}}\] \[\le \frac{P(E_m|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z-1}})}{P(\bar{E_{m-z}}|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z-1}})}\] By our induction on z, the numerator is at most $p(1-p)^z.$ The denominator is, by our induction on m (replace m by m-z), $1-P(E_{m-z}|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z-1}})\ge 1-p$ (we just changed $\bar{E_{m-z}}$ to $E_{m-z}$), so that gives us that $P(E_m|\bar{E_1},\bar{E_2},\cdots \bar{E_{m-z}}) \le p(1-p)^{z-1}$ as desired.

Now, note that we can prove that the probability that none of $E_1, E_2, \cdots, E_m$ happen is at least $(1-p)^m$ simply by noting that it equals $\prod_{i=1}^{m}P(\bar{E_i}|\bar{E_1},\bar{E_2},\cdots \bar{E_{i-1}}) \ge (1-p)^m.$ This tells us that there exists, for each m, a sequence of responses to B's strategy so that he does not get any information in those $m$ moves. But we still need an infinite sequence of responses; it could turn out that for each m, there is a way to stop $B$ from getting information in those $m$ moves, but each of those strategies will eventually start failing after some point. To get an infinite sequence, we encode each m-sequence as an infinite binary sequence with yes = 1, no = 0, and every move after the mth written as 0.
This is an element of the topological space $\{0,1\}^{\mathbb{N}}.$ But this is a metric space, and is compact by Tychonoff, so it is sequentially compact. Hence there is some element of the space such that, for every $i$, there are infinitely many $m$ for which the chosen $m$-sequence agrees with it in the first $i$ terms. As there are infinitely many such $m$, there is one with $m$ at least $i$, and therefore whether or not the events $E_1,E_2,\cdots, E_{i-k}$ happen must be the same for the two sequences. Therefore, for every $i$, each of these events fails for the new sequence, which is exactly what we want.
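As an added numerical illustration (not from the post above): with $p=\frac1{k+1}$ and $N$ about $\frac{2^{k+1}}{e(k+1)}$, the condition $\frac{N}{2^{k+1}}\le p(1-p)^k$ needed for the local-lemma step holds for every $k$, and this $N$ exceeds $1.99^k$ once $k$ is around 2000.

```python
import math

# Sanity check, treating N = 2^(k+1) / (e*(k+1)) as a real number (the floor only
# helps).  Work in logs to avoid overflow for large k.
def check(k):
    p = 1.0 / (k + 1)
    log_N = (k + 1) * math.log(2) - 1 - math.log(k + 1)   # log of 2^(k+1)/(e(k+1))
    lhs = log_N - (k + 1) * math.log(2)                   # log of N / 2^(k+1)
    rhs = math.log(p) + k * math.log(1 - p)               # log of p*(1-p)^k
    return lhs <= rhs, log_N >= k * math.log(1.99)        # (LLL condition, N >= 1.99^k)

for k in (10, 100, 1000, 2000, 5000):
    print(k, check(k))
```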
14.07.2012 12:55
In a parallel topic (The authors of the problems), the author of this one, David Arthur from Canada, sheds a little light on the process of creating it. He states "The basic $k=1$ version was inspired by ...", which brings me to sharing some musings. So for $n\geq 2^k$ player $B$ has a winning strategy (point a)), while for $\varepsilon > 0$ and large enough $k$ there exists $(2-\varepsilon)^k \leq n<2^k$ so that $B$ cannot anymore guarantee a win (liberal generalization of point b)). Of course, one is led to ask oneself "so what is happening for precisely $n= 2^k - 1$" ? Is it (via a better, deeper proof than that for point a)) that $B$ still has a winning strategy, or is it (via a better, deeper proof than that for point b)) that already $B$ cannot anymore guarantee a win ? Or does it depend on the value of $k$ ? Well, for $k=1$, that basic, initial case, the answer is quite simple. $B$ cannot anymore guarantee a win for $n=2^k - 1 = 1$, and $N=2$. Player $A$ simply mentally chooses if his first answer is a truth or a lie, and then alternates between truth and lie, with the exception of the cases when the set $S$ specified by $B$ contains both or neither of $1,2$, when he answers in truth (this is quite normal, because then $B$ knows the truth value of $A$'s answer, so $A$ may as well take advantage of it, and tell the truth). Now, $B$ not knowing the choice made by $A$ at his first non-forced answer, cannot decide if $x=1$ or $x=2$. So my question is, can it, or why can it not a "similar" strategy work for $A$ for $k>1$; in other words, is the exponential weakening of $(2-\varepsilon)^k$ just so because we do not know how to prove a better bound (like deciding on $n=2^k-1$) ? or is it some intrinsic reason why we cannot decide.
15.07.2012 03:01
One may have a look at Section 15.3 (tenure game) in Alon & Spencer's book The Probabilistic Method. This problem can be easily reduced to that (or a similar one)...
15.07.2012 03:14
liyi wrote: One may have a look at Section 15.3 (tenure game) in Alon & Spencer's book The Probabilistic Method. This problem can be easily reduced to that (or a similar one)... You may view the section on Google Books here.
15.07.2012 11:13
It is not good for P3. I think USA used Probabilistic method and in particular this book in their preparation. Although they didn't get crucial advantage and it's like they took only 3rd place.
15.07.2012 16:55
liyi wrote: One may have a look at Section 15.3 (tenure game) in Alon & Spencer's book The Probabilistic Method. This problem can be easily reduced to that (or a similar one)... Maybe at Section 14.3.
15.07.2012 17:01
yunxiu wrote: liyi wrote: One may have a look at Section 15.3 (tenure game) in Alon & Spencer's book The Probabilistic Method. This problem can be easily reduced to that (or a similar one)... Maybe at Section 14.3. Alon & Spencer is already in its third edition. In the 1st edition, it is Section 14.3. In the 3rd edition, it is Section 15.3.
15.07.2012 18:02
pavel kozlov wrote: It is not good for P3. I think USA used Probabilistic method and in particular this book in their preparation. Although they didn't get crucial advantage and it's like they took only 3rd place. US team does not use this book as preparation (though there is probabilistic training).
20.10.2012 00:44
Here is a solution to problem 3: http://www.zyymat.com/imo-2012-solutions.html 2. Let $p$ be a positive real number such that $1.99<p<2$. Let us choose $k_0$ such that for every $k>k_0$, the following relation holds: \[(2-p)p^{k+1}-1.99^k>2.\] We will prove that if $ N \in\left(1.99^k+1, (2-p)p^{k+1}\right)$, then $B$ cannot specify a set $X$ with $|X|\leqslant N-1$ that is guaranteed to contain $x$.
26.10.2012 02:10
What the heck??? I wish I understood what you guys are saying
13.11.2012 15:19
Results of Heidi Gebauer, Tibor Szab\'o, Gabor Tardos on so-called $(k,d)$-trees imply that the threshold for player $B$ winning versus player $B$ losing is at $n\approx\frac{1}{ek}\, 2^{k+1} \left(1+o(1)\right)$. (The proof method is above olympiad level.)
20.11.2012 17:22
Let $n(k)$ denote the precise threshold, depending on $k$, such that if $n \geq n(k)$ then $B$ wins, and if $n < n(k)$ then $A$ wins. In the official solution the following asymptotic bounds for $n(k)$ are given: $ q^k< n(k) \leq 2^k$, for each fixed $q <2$ and sufficiently big $k$. The lower bound is proved using a strategy which minimizes a certain weighted function. The main reason to write this post is that a sharper lower bound can be achieved using a modified weight function. Namely: for each $\alpha > 1$ and sufficiently big $k$, we have $\frac{2^k}{k^\alpha} < n(k) $.

Proof. First we will slightly restate the game. Let $M$ be a set with $n$ elements and $x_0\in M$. On each round player $B$ offers to $A$ some subset $X\subset M$, and $A$ chooses between $X$ and $M\setminus X$. The choice of $A$ can be interpreted as saying that $x_0$ does not belong to the chosen set. Let us attach to every element $x \in M$ a counter $i(x)$. At the initial moment all counters are $0$. Every time $A$ chooses a set $X$, the counters of all elements of $X$ are increased by $1$ and all counters of $M\setminus X$ are reset to $0$. The aim of $A$ is to not allow any counter to reach $k+1$.

The strategy of $A$. Let $\, 1 = q_0 \leq q_1 \leq \ldots \leq q_m \leq\ldots < 2$ be a sequence which will be chosen explicitly later. For each $X\subset M$ define $Q(X)=\sum_{x\in X} q_0q_1\ldots q_{i(x)}$. We will call it the potential of the set $X$. The strategy of $A$ is to look at $Q(X)$ and $Q(M\setminus X)$ and to choose the set which minimizes $Q$, i.e. the set with the smaller potential.

Let us fix $k$ and assume that:

(1) $n \leq \frac{1}{4} q_0q_1\ldots q_{k+1}(2-q_{k+1})$.

We will prove that following the above strategy keeps each counter below $k+1$. Assume that after some steps we are in a situation where $\frac{1}{2}q_0q_1\ldots q_{k+1} \leq Q(M) < q_0q_1\ldots q_{k+1}$. We will show that after the next round, the new potential $Q'$ does not exceed $Q := Q(M)$, i.e. $Q'\leq Q$. Assume that player $B$ offers to $A$ the choice between $X$ and $M\setminus X$, and WLOG let $Q(X) \leq Q(M\setminus X)$, so $A$ chooses $X$. We estimate the new potential $Q'$:
$Q' \leq q_{k+1}Q(X) + |M\setminus X | \leq q_{k+1}\frac{Q(M)}{2} + n \, \Rightarrow$
$Q'-Q \leq q_{k+1} \frac{Q}{2} - Q +n \leq \frac{1}{4}q_0q_1\ldots q_{k+1} (q_{k+1}-2) +\frac{1}{4}q_0q_1\ldots q_{k+1}(2-q_{k+1})=0 $.
Therefore $Q'\leq Q$. Since after each round the new potential is less than double the old one, we conclude that it cannot jump over the interval $[\frac{1}{2}q_0q_1\ldots q_{k+1}, \, q_0q_1\ldots q_{k+1})$. But it can't grow while it is in this interval. All this means that $Q(M) < q_0q_1\ldots q_{k+1}$ at every moment, so if (1) holds and $A$ adheres to the above strategy, it is impossible for any counter to reach $k+1$. This implies:

(2) $ \frac{1}{4} q_0q_1\ldots q_{k+1}(2-q_{k+1}) \leq n(k) $.

Now let us fix $\alpha > 1$ and choose $q_i=2-(i+1)^{-\alpha},\, i=0,1,2,\ldots$. Then:

(3) $n(k) \geq \frac{1}{4}(k+2)^{-\alpha} \prod_{i=1}^{k+2} (2-i^{-\alpha}) = \frac{1}{4}(k+2)^{-\alpha}\, 2^{k+2} \prod_{i=1}^{k+2} \left(1-\tfrac{1}{2}i^{-\alpha}\right) > C(\alpha) \frac{2^k}{k^\alpha} $,

where $C(\alpha)$ is a constant which depends only on $\alpha$ (the factor $\frac14$ is absorbed into it). If $1< \alpha < \beta $, then for sufficiently big $k$ we have $\frac{C(\alpha)}{k^{\alpha}} > \frac{1}{k^{\beta}} $. Finally, this allows us to restate (3) as follows: for every fixed $\alpha > 1$, if $k$ is big enough, we have $\frac{2^k}{k^\alpha} < n(k) $.
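As a quick numerical illustration (added here, not part of the post): working on a log scale, the quantity in (2) with $q_i = 2-(i+1)^{-\alpha}$ eventually exceeds $1.99^k$, as the asymptotics predict; for $\alpha = 1.5$ the crossover happens between $k=1000$ and $k=3000$.

```python
import math

# Numeric check of the lower bound (1/4) * q_0 q_1 ... q_{k+1} * (2 - q_{k+1})
# with q_i = 2 - (i+1)^(-alpha), done in logs to avoid overflow.
def log_lower_bound(k, alpha):
    s = math.log(0.25) - alpha * math.log(k + 2)   # log of 1/4 and of (2 - q_{k+1})
    for i in range(k + 2):                         # q_0, ..., q_{k+1}
        s += math.log(2 - (i + 1) ** (-alpha))
    return s

alpha = 1.5
for k in (100, 1000, 3000, 5000):
    lb, rhs = log_lower_bound(k, alpha), k * math.log(1.99)
    print(k, "bound exceeds 1.99^k:", lb > rhs)
```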
08.07.2013 18:32
I was a coordinator at IMO 2012 and I wrote this just after the IMO.
Attachments:
Liar's Guessing Game.pdf (210kb)
29.06.2017 12:56
The key observation for this problem is that the only info Ben ever receives is that any $k+1$ consecutive answers aren't all false. Thus the question is whether you can reduce the candidates to a set of size $n$ or less using that info only. The exact integers used don't really matter at all, so I'll simply use $0, \dots, N-1$. Asking about a set of size $\ge N/2$ is equivalent to asking about its complement of size $\le N/2$, so we only consider questions of the latter kind.

For part (a), suppose $N > 2^k$; it suffices to show that Ben can always rule out one value, since he can repeat this until only $2^k$ candidates remain. First Ben asks about $0$ until he gets a yes. If he gets $k+1$ no's, one of them is true and thus $x \ne 0$, allowing us to reduce the number of candidates. Now, if Ben got a yes for $0$, consider that to be the $0$th question. Ben can construct the $i$-th question inductively as follows: If the $i$-th answer was a yes, set $a_i=1$. Otherwise, if it's a no or not yet answered, we have $a_i=0$. Set $b_i$ as the number represented by the binary string $a_0a_1 \dots a_k$ after the $(i-1)$-th question was asked. Then ask the $i$-th question about whether the $2^{k-i}$ numbers up to $b_i$ contain $x$. E.g. if $k=3$, the first question would be about whether $\{1,2,3,4\}$ contains $x$. If the answer is yes, the second question is about $\{5,6\}$; if no, then it's about $\{1,2\}$. Since $N > 2^k$, these questions can be asked with no issue. Now after $k+1$ questions Ben can extract information from the answers. If Amy answered yes to all of them, we know that the statement that none of the numbers from $0$ to $2^k -1$ is $x$ is false, i.e. one of them is $x$. This allows us to reduce to $2^k$ candidates. Otherwise, the converses of the answers are consistent with precisely one number from $1$ to $2^k -1$ being $x$, and since the converses cannot all be true, we can safely say that that number isn't $x$. This allows us to reduce the number of candidates until we get to $2^k$.

Now part b is where this line of argumentation becomes tricky. I think that if you have $n<2^{k-1}$, Ben can no longer guarantee a win. First we prove that given any fixed first question and answer, if Amy can answer the next $k$ questions freely, Ben can't reduce to less than $n=2^k -1$. Note that if Amy answers no to some set, the converses of the answers must be consistent with one element of that set being $x$. This creates a consistent set of numbers which may be $x$ per the converses. If any question has a set which is disjoint from the current consistent set, Amy can simply answer yes to obtain a contradiction. We label the questions as $0, \dots , k$ again. We claim that if the $i$-th question for $1 \le i \le k$ introduces less than $2^{k-i}$ new integers which weren't asked about in the questions $0$ to $i-1$, Amy can answer no and thus create a consistent set smaller than $2^{k-i}$. Note that each subsequent question set must have a common subset with the current consistent set, and Amy can answer no if the largest common subset is not greater than half the size of the current consistent subset and yes otherwise, decreasing the size of the consistent subset by at least half. Thus since we started with less than $2^{k-i}$, we will end at less than 1, i.e. at 0. Then we don't have any consistent statement falsified, providing no information to Ben. If the answer to the $0$-th question was no, then by a similar argument the associated set must have been of size at least $2^k$.
This would lead to reducing by at most one, which doesn't fulfill our purpose to reducing below $2^k - 1$ Now if the $i$-th question for $1 \ge i \ge k$ introduces at least $2^{k-i}$ new integers which weren't asked about in the questions $0$ to $i-1$, alongside the set for question $0$ we must have at least $2^k$ distinct integers asked about. Then if Amy answers yes everywhere she restricts the numbers to a group of size more than $2^k$, which can't reduce us below $2^k - 1$ either. Since this line of logic holds, if we can prove that at most $c+1$ answers can be affected by previous answers Amy can freely answer $k-c$ questions and thus prevent $n$ from reducing below $2^{k-c} - 1$. I think one can only force 2 yeses at most, and if we have $N<2^{k-1}$ one can fearlessly say no except in perhaps the last two questions. But this still doesn't prevent other lines of answers from being forced, and it's all guesstimating around anyways. My mind is too clogged to take this idea and form it into a proper solution, so I'm posting here so that hopefully someone else can take a stab. mavropnevma wrote: In a parallel topic (The authors of the problems), the author of this one, David Arthur from Canada, sheds a little light on the process of creating it. He states "The basic $k=1$ version was inspired by ...", which brings me to sharing some musings. So for $n\geq 2^k$ player $B$ has a winning strategy (point a)), while for $\varepsilon > 0$ and large enough $k$ there exists $(2-\varepsilon)^k \leq n<2^k$ so that $B$ cannot anymore guarantee a win (liberal generalization of point b)). Of course, one is led to ask oneself "so what is happening for precisely $n= 2^k - 1$" ? Is it (via a better, deeper proof than that for point a)) that $B$ still has a winning strategy, or is it (via a better, deeper proof than that for point b)) that already $B$ cannot anymore guarantee a win ? Or does it depend on the value of $k$ ? Well, for $k=1$, that basic, initial case, the answer is quite simple. $B$ cannot anymore guarantee a win for $n=2^k - 1 = 1$, and $N=2$. Player $A$ simply mentally chooses if his first answer is a truth or a lie, and then alternates between truth and lie, with the exception of the cases when the set $S$ specified by $B$ contains both or neither of $1,2$, when he answers in truth (this is quite normal, because then $B$ knows the truth value of $A$'s answer, so $A$ may as well take advantage of it, and tell the truth). Now, $B$ not knowing the choice made by $A$ at his first non-forced answer, cannot decide if $x=1$ or $x=2$. So my question is, can it, or why can it not a "similar" strategy work for $A$ for $k>1$; in other words, is the exponential weakening of $(2-\varepsilon)^k$ just so because we do not know how to prove a better bound (like deciding on $n=2^k-1$) ? or is it some intrinsic reason why we cannot decide. You can use my algorithm to obtain two yeses for sufficiently small sets and then apply the strategy for part (a) again for $k-1$ answers, to reduce $n$ to some point very close to $2^{k-1}$. So I don't think $n=2^k-C$ will work as a bound. Although a constant multiple might work, if my idea is correct. Charlydif wrote: I was a coordinator at IMO 2012 and I wrote this just after the IMO. Thank you for sharing these with us. A very interesting read for sure.
11.07.2017 04:30
1. We prove this by induction on $N$. If $N\le 2^k$, then clearly $B$ can just name $\{1,2,\ldots,N\}$ and immediately win. Otherwise, $B$ repeatedly asks about $\{1,2,\ldots,2^k\}$. If $A$ answers yes $k+1$ times in a row, then $B$ knows that $x\in\{1,2,\ldots,2^k\}$ so he wins with that set. Otherwise, $A$ answers no at some point. After that, $B$ asks about the sets $S_1,S_2,\ldots,S_k$, where $S_i$ consists of the numbers in $\{1,2,\ldots,2^k\}$ that have a 0 in the $i$th digit of their binary representation. $B$ considers the number $x'$ such that for all $i$, if $A$ answered yes to $S_i$ then $x'$ has a 1 in the $i$th digit of its binary representation, and 0 otherwise. If $x=x'$, then $A$ would have lied $k+1$ times in a row, contradiction. Thus, $B$ has narrowed down the set that $x$ lies in to a set of $N-1$ integers and by induction can win.

2. We will show that $B$ cannot guarantee a win for all sufficiently large $k$ and $n<0.00001\cdot1.999^k$, which implies the result. Let $N=n+1$, so $B$ wins if and only if he finds an integer $x'\in\{1,2,\ldots,N\}$ that cannot be $x$, i.e. an $x'$ with which $A$'s answers are inconsistent $k+1$ times in a row. For each $m$, let $i_m$ be the number of answers inconsistent with $m$ that $A$ has given after her most recent answer consistent with $m$. At every point in the game, consider the quantity $\sum\frac1{1.999^{k+1-i_m}}$. Each answer $A$ gives increases $i_m$ by 1 if it is inconsistent with $m$ and sets $i_m$ to 0 if it is consistent with $m$. The main claim is that $A$ can indefinitely keep this quantity below 1. Then, we must always have $i_m<k+1$ for all $m$, which means that $B$ cannot rule out any number in $\{1,2,\ldots,N\}$ and win. In the beginning, $i_m=0$ for all $m$, so the quantity is $\frac{n+1}{1.999^{k+1}}<\frac{0.00001}{1.999}+\frac1{1.999^{k+1}}<1$. During the game, suppose that $B$ names the set $S$. Then, $A$ can either reply yes, which turns the quantity into $\sum_{m\in S}\frac1{1.999^{k-i_m}}+\sum_{m\not\in S}\frac1{1.999^{k+1}}$, or no, which turns the quantity into $\sum_{m\in S}\frac1{1.999^{k+1}}+\sum_{m\not\in S}\frac1{1.999^{k-i_m}}$. The sum of these two is $\sum\left(\frac1{1.999^{k-i_m}}+\frac1{1.999^{k+1}}\right)=1.999\left(\sum\frac1{1.999^{k+1-i_m}}\right)+\frac{n+1}{1.999^{k+1}}<1.999+\frac{0.00001}{1.999}+\frac1{1.999^{k+1}}<2$ for sufficiently large $k$, so at least one of them is below 1. $A$ just chooses the answer that keeps the quantity below 1 and can indefinitely prevent $B$ from winning, as desired.
15.04.2019 17:36
Call the players Alice and Bob.

Part (a): We prove the following. Claim: If $N \ge 2^k+1$, then in $2k+1$ questions, Bob can rule out some number in $\{1, \dots, 2^k+1\}$ from being equal to $x$. Proof. First, Bob asks the question $S_0 = \{ 2^k+1 \}$ until Alice answers ``yes'' or until Bob has asked $k+1$ questions. If Alice answers ``no'' to all of these then Bob rules out $2^k+1$. So let's assume Alice just said ``yes''. Now let $T = \{1, \dots, 2^k\}$. Then, he asks any $k$ follow-up questions $S_1$, \dots, $S_k$ defined as follows: $S_1 = \{1, 3, 5, 7, \dots, 2^k-1\}$ consists of all numbers in $T$ whose least significant digit in binary is $1$. $S_2 = \{ 2, 3, 6, 7, \dots, 2^k-2, 2^k-1\}$ consists of all numbers in $T$ whose second least significant digit in binary is $1$. More generally $S_i$ consists of all numbers in $T$ whose $i$th least significant digit in binary is $1$. WLOG Alice answers these all as ``yes'' (the other cases are similar). Among the last $k+1$ answers at least one must be truthful, and the number $2^k$ (having zeros in all relevant digits) does not appear in any of $S_0$, \dots, $S_k$ and is ruled out. $\blacksquare$ Thus in this way Bob can repeatedly find non-possibilities for $x$ (and then relabel the remaining candidates $1$, \dots, $N-1$) until he arrives at a set of at most $2^k$ numbers.

Part (b): It suffices to consider $n = \left\lceil 1.99^k \right\rceil$ and $N = n+1$ for large $k$. At the $t$th step, Bob asks some question $S_t$; we phrase each of Alice's answers in the form ``$x \notin B_t$'', where $B_t$ is either $S_t$ or its complement. (You may think of these as ``bad sets''; the idea is to show we can avoid having any number appear in $k+1$ consecutive bad sets, preventing Bob from ruling out any numbers.) Main idea: for every number $1 \le x \le N$, at time step $t$ we define its weight to be \[ w(x) = 1.998^e \]where $e$ is the largest number such that $x \in B_{t-1} \cap B_{t-2} \cap \dots \cap B_{t-e}$.

Claim: Alice can ensure the total weight never exceeds $1.998^{k+1}$ for all large enough $k$. Proof. Let $W_{t}$ denote the sum of weights after the $t$th question. We have $W_0 = N < 1000n$. We will prove inductively that $W_t < 1000n$ always. At time $t$, Bob specifies a question $S_t$. Let $B_t$ denote whichever of $S_t$ or $\overline{S_t}$ has lesser total weight, hence at most $W_t/2$. The weight of every element in $B_t$ increases by a factor of $1.998$, while the weight of each number in the complement $\overline{B_t}$ resets to $1$. So the new total weight after time $t$ is \[ W_{t+1} \le 1.998 \cdot \frac{W_t}{2} + \# \overline{B_t} \le 0.999 W_t + n. \]Thus if $W_t < 1000n$ then $W_{t+1} < 1000n$. To finish, note that $1000n < 1000 \left( 1.99^k + 1 \right) < 1.998^{k+1}$ if $k$ is big enough. $\blacksquare$

In particular, no individual number can have weight $1.998^{k+1}$. Thus for every time step $t$ we have \[ B_t \cap B_{t+1} \cap \dots \cap B_{t+k} = \varnothing. \]Then once Bob stops, if he declares a set of $n$ positive integers, and $x$ is an integer Bob did not choose, then Alice's answer history is consistent with $x$ being Alice's number, as among any $k+1$ consecutive answers she claimed that $x \in \overline{B_t}$ for some $t$ in that range.

Remark: [Motivation] In our $B_t$ setup, let's think backwards. The problem is equivalent to avoiding $e = k+1$ at any time step $t$, for any number $x$. 
That means having at most two elements with $e = k$ at time $t-1$, thus at most four elements with $e = k-1$ at time $t-2$, thus at most eight elements with $e = k-2$ at time $t-3$, and so on. We already exploited this in solving part (a). In any case it's now natural to try letting $w(x) = 2^e$, so that all the cases above sum to ``equally bad'' situations: since $8 \cdot 2^{k-2} = 4 \cdot 2^{k-1} = 2 \cdot 2^k$, say. However, we then get $W_{t+1} \le \frac{1}{2} (2W_t) + n$, which can increase without bound due to contributions from numbers resetting to zero. The way to fix this is to change the weight to $w(x) = 1.998^e$, taking advantage of the little extra space we have due to having $n \ge 1.99^k$ rather than $n \ge 2^k$.
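To put a concrete number on "big enough $k$" in the claim above (an added check, not part of the solution): the first $k$ for which $1000(1.99^k+1) < 1.998^{k+1}$ can be found directly, working in logs to avoid overflow.

```python
import math

# Find the first k with 1000 * (1.99^k + 1) < 1.998^(k+1), i.e. where the
# weight bound W_t < 1000n really is below 1.998^(k+1).
k = 1
while (math.log(1000) + k * math.log(1.99) + math.log1p(1.99 ** -k)
       >= (k + 1) * math.log(1.998)):
    k += 1
print(k)  # about 1550; the inequality then holds for every larger k as well
```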
16.07.2019 04:56
I agree with v_Enhance; this problem is quite linear in its thinking style. The trickiest part is just figuring out what is going on and distilling that down. Then a plausible approach is to find some weighting, although it seems very odd at first. Getting to the $1.99^k$ weighting would take a bit more effort understanding the situation, but it was spoiled by the problem statement. (You want to get the $\log_{1.99} n$ bound; realize the weight value turns into something like the inverse of the bound.)
15.06.2020 13:09
Wonderful problem! Here is a solution which is similar to v_Enhance's in spirit. Hopefully this offers more perspective on this problem.

First of all, we will replace $n$ by $n-1$ for later convenience. That is, $B$ must nail down the set of candidates to $n-1$ numbers. Since $B$ can keep eliminating candidates, we will assume that $N=n$. To make $A$'s job more concrete, we consider $n$ boxes, numbered $1$ to $n$. At any time, the box numbered $i$ will have a number of pebbles equal to the current streak of lies $A$ has told under the assumption that $x=i$. Since $B$'s goal is to eliminate at least one candidate while $A$'s goal is to prevent this, $A$ must ensure that in every case ($x=1,2,\hdots,n$) she fulfills the restriction that she lies at most $k$ consecutive times, that is, no box ever has more than $k$ pebbles. Thus the liar's guessing game is isomorphic to the following new game, from which we have removed all the information theory.

IMO 2012/3, rephrased wrote: Consider $n$ boxes. Initially, each box has $0$ pebbles. In each turn, $B$ splits the boxes into two groups while $A$ chooses to add $1$ pebble to each box in one group and clear all pebbles in the boxes in the other group. Then this turn ends and another turn starts. The game continues indefinitely. $B$'s goal is to make at least one box have $>k$ pebbles, while $A$'s goal is to prevent it. If $n=2^k+1$, then $B$ can guarantee a win. For all sufficiently large $k$, there exists $n\geq 1.99^k+1$ such that $A$ can prevent $B$ from winning.

Now let's do actual combinatorics.

Part 1. With this preparation, the solution to part 1 becomes very natural. Assume that $k\geq 3$ (the rest can be done manually). We will strengthen the bound a little bit by letting $n=2^k$. The idea is to ignore the last box first and divide the remaining $2^k$ boxes symmetrically. Here is an example of what happens in each turn when $k=3$. $$\begin{array}{c} 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0 \\ \downarrow^A\\ 1\ 1\ 1\ 1\ 0\ 0\ 0\ 0 \\ \downarrow^B\\ 1\ 1\ 0\ 0\ 1\ 1\ 0\ 0 \\ \downarrow^A\\ 2\ 2\ 1\ 1\ 0\ 0\ 0\ 0 \\ \downarrow^B\\ 2\ 1\ 0\ 0\ 2\ 1\ 0\ 0 \\ \downarrow^A\\ 3\ 2\ 1\ 1\ 0\ 0\ 0\ 0 \end{array}$$At this point, we will end up with one box with $k$ pebbles, one box with $k-1$ pebbles and $2^{i-1}$ boxes with $k-i$ pebbles for $i=2,3,\hdots,k$, so we are close. Now we focus on the four boxes with $k,k-1,k-2,k-2$ pebbles and do three moves to end the game. Divide the boxes into the one with $k$ pebbles and the rest: $A$ is forced to clear the box with $k$, hence we must remain with $0,k,k-1,k-1$. Keep doing the same; this time we will remain with $1,0,k,k$. Hence by putting both $k$'s in different groups, we have forced a win.

Part 2. Suppose that the $n$ boxes have $x_1,x_2,\hdots,x_n$ pebbles. Let $\lambda = 1.999$. We define the quantity $$S_t = \lambda^{x_1}+\lambda^{x_2}+\hdots+\lambda^{x_n},$$where $t$ is a time parameter. $A$'s strategy is to add the pebbles to the group whose sum of $\lambda^{x_i}$ is at most $\tfrac{S_t}{2}$. It is clear that after her operation, $$S_{t+1} < \frac{\lambda}{2}\cdot S_t + n \stackrel{\text{induction}}{\implies} S_t < \frac{2}{2-\lambda}n = 2000n$$for any $t$. When $n<1.99^k+2$, $S_t$ is less than $1.999^{k+1} = \lambda^{k+1}$ for large enough $k$. Hence no box can have more than $k$ pebbles, as desired.
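Here is a small simulation of the rephrased pebble game with the potential strategy, added as an illustration (it is not part of the solution above). Since the bound $S_t < 2000n$ only drops below $\lambda^{k+1}$ for very large $k$ when $n \approx 1.99^k$, the parameters below are chosen small enough that $2000n < 1.999^{k+1}$ already holds; the point is just to watch the strategy keep every box at or below $k$ pebbles against arbitrary (here random) splits by $B$.

```python
import random

# Pebble-game sketch: B splits the boxes at random, A adds a pebble to every box
# in the group with the smaller sum of lambda^pebbles and clears the other group.
# With 2000*n < lambda^(k+1) (true for k=15, n=30), no box should ever reach k+1.
def a_survives(k, n, rounds=20000, lam=1.999, seed=0):
    rng = random.Random(seed)
    pebbles = [0] * n
    for _ in range(rounds):
        group = [rng.random() < 0.5 for _ in range(n)]            # B's split
        w_in = sum(lam ** p for p, g in zip(pebbles, group) if g)
        w_out = sum(lam ** p for p, g in zip(pebbles, group) if not g)
        add_to = w_in <= w_out                                    # A adds to the lighter side
        for i in range(n):
            if group[i] == add_to:
                pebbles[i] += 1
                if pebbles[i] > k:
                    return False                                  # some counter hit k+1
            else:
                pebbles[i] = 0
    return True

print(a_survives(k=15, n=30))   # expected: True
```

For the actual problem one wants $n$ on the order of $1.99^k$, and the same inequality handles that once $k$ is in the thousands.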
19.12.2020 09:45
We will do the two parts separately.

Part a) We will devise a strategy. If $N \leq 2^k$ then $B$ chooses $\{1, 2, \ldots , N\}$ and wins. Else, have $B$ choose the set\[S = \{1, 2, \ldots ,2^k\}\]a total of $k+1$ consecutive times. If $A$ responds yes all times, then one of the "yes" responses was truthful and thus $B$ can win with $S$. Else, $A$ responds no at some point. $B$ then constructs $k$ sets $S_1, S_2, \ldots , S_k$ defined by $S_i$ containing all elements $g \in S$ for which the $i$th digit in the binary form of $g$ is $0$. $B$ proceeds to ask about $S_1$, then $S_2$, all the way to $S_k$, for a total of $k$ questions asked. At this point, $B$ can consider the number $X$ for which the $j$th digit of its binary representation is $1$ if $A$ responded "yes" to $S_j$, and is $0$ if $A$ responded "no". We claim that $x \neq X$; assume the contrary, and we will find that the original "no" was a lie since by definition $X \in S$. Furthermore, every single one of $A$'s $k$ responses to sets $S_1, S_2, \ldots , S_k$ must be false by the construction of $X$, hence $k+1$ consecutive lies would be told, impossible. So indeed, $x \neq X$, so we have reduced the scope of $x$ from $N$ to $N-1$. Relabeling the $N-1$ remaining candidates and then inducting down suffices. $\blacksquare$

Part b) Consider $N = n + 1$, where $c$ is a constant with $1.99 < c < 2$, $n = \lfloor (2-c)c^{k+1}\rfloor - 1$, and $k$ is sufficiently large that $n > 1.99^k$. $A$ then selects some random $x$ from the set $\{1, 2, \ldots , n+1\}$. Suppose that some amount of time has passed and $A$ has given $t$ responses to some $t$ of $B$'s questions. For each element $i \in \{1, \ldots , n+1\}$, let $a_i$ be the length of the most recent run of consecutive answers, among those $t$, that are untruthful under the assumption that $x = i$. Then, $A$ will consider the quantity\[Q = c^{a_1} + c^{a_2} + \ldots + c^{a_{n+1}}.\]Then, $B$ asks his $(t+1)$th question, and in response, $A$ answers so as to minimize $Q$. I claim that upon following this strategy, $A$ can keep $Q < c^{k+1}$, and thus it follows that all $a_i \leq k$, so there are no consecutive runs of lies of length $k+1$, hence $B$ cannot gain any information about $x$. This completes the problem.

We will show this with induction. Clearly, initially, all $a_i = 0$, hence $Q = n+1 < c^{k+1}$ indeed. Suppose that at some point, $Q < c^{k+1}$. Next, $B$ asks whether $x \in T$. If $A$ answers no, then for all $i \in T$, the values of $a_i$ increase by $1$, and for all $j \not \in T$, the values of $a_j$ become $0$. If $A$ answers yes, then for all $i \in T$, the values of $a_i$ become $0$, and for all $j \not \in T$, the values of $a_j$ increase by $1$. That is, for her next value of $Q$, $A$ must choose between\[Q_1 = (N - |T|) + c\sum_{i \in T} c^{a_i} \text{ or } Q_2 = |T| + c\sum_{j \not \in T} c^{a_j}\]with the goal of minimization. Note that\[\text{min}(Q_1, Q_2) \leq \frac{Q_1 + Q_2}{2} = \frac{n+1 + cQ}{2}\]and by our inductive hypothesis and initial assumptions, $n + 1 \leq (2-c)c^{k+1}$ and $Q < c^{k+1}$, so\[\frac12(n + 1 + cQ) < \frac12\left[(2-c)c^{k+1} + c^{k+2}\right] = c^{k+1},\]thus $\text{min}(Q_1, Q_2) < c^{k+1}$ as desired, completing our induction and therefore also our proof. $\blacksquare$
07.10.2021 09:58
Amazing problem! Solved with hints from p_square Rephrase the problem in the following way. Each turn, Bob, partitions the set $\{1,2,\cdots,M\}$ into two groups. Alice then chooses one of these groups. Alice must attempt to ensure that among every $k+1$ turns, the union of groups she has chosen is $\{1,2,\cdots,M\}$. This is equivalent because for any group of $k+1$ questions $B$ asks, one of them must be true. Now, Bob checks what possible values of $x$ could be possible if each of the $k+1$ questions were individually answered truthfully. If there is some number that is not a possible value of $x$ (If the union of groups misses some number), then Bob can eliminate that number. Now first, part (a). It asks to show that if $M = 2^k + 1$, then Bob can eliminate some number and therefore at the end, can ensure a set of size at most $n = 2^k$ containing $x$. So in the beginning, Bob does the following thing. Each time, he divides into groups such that the the numbers not chosen yet are as equally distributed among the groups as possible. This way, he can ensure that there must be some number, say $z$, that Alice must necessarily take on the $k+1$'th turn. Then, Bob partitions on the $k+1$th turn into two groups, one containing only $z$, and the other containing everything else, so Alice is forced to pick the one with a single element. Now, in the next $k$ turns, Bob once again uses the old strategy, but this time he can ensure, since one turn was already wasted, that on the last turn, Alice has two numbers needed. He then just puts these in different sets and wins. Next the harder part (b). We need to show that if $M = (1.99)^k + 1$, then Alice can succeed in not giving away any information. Let $p = 1.9999999$ for convenience. Suppose it is the $m$th turn and the last turn a number $l$ was chosen was in the $t$th turn. Then, to this number, assign a weight of $p^{m-t}$. Alice on each turn, chooses the group that has higher weight. I claim this will work, by showing that the sum of all weights on each turn is bounded. Initially, the weights is $Mp$. In general, if the weights were $K$ the previous turn, lets see how much it must be the next turn. The group which wasn't chosen the previous turn will have its weight multiplied by $p$ and for the rest, it becomes just $p$ for each. Say on the $m$th turn, the weight of the chosen group was $A$ and the one not chosen was $K-A$. So now, the weight is $cp + p(K-A)$ where $c$ is the number of elements in the chosen group. Since $c < M$, we have the new weight is $< Mp + p(K-A)$. Since we must have chosen the bigger weight, we have $A > K-A$, so the new weight is $< Mp + p(K-A) < Mp + \frac{pK}{2}$. I claim that that the sum of weights is $< CM$ for some constant $C$. Suppose it was true until $m$, then the next sum of weights we have is $< Mp + \frac{pK}{2} < Mp + \frac{pCM}{2} < CM$ if we pick some $C > \frac{2p}{2-p}$. Now, suppose that at some point, we have some thing of weight at least $p^{k+2}$, which would correspond to Bob having eliminated something. In that case, the sum of remaining things must be at least $p(M-1)$ and so the total sum is at least $p^{k+2} + p(M-1) < CM \implies p^{k+2} - p < M(C-p) \implies M > \frac{p^{k+2} - p}{C-p}$. But we had picked $M = (1.99)^k + 1$, so $(1.99)^k + 1 > \frac{p^{k+2} - p}{C-p} > \frac{p^{k+2}}{C} \implies C((1.99)^k + 1) > (1.9999999)^{k+2}$. but the RHS can be made much much bigger than the LHS for sufficiently large $k$ since $C$ is fixed. 
Therefore, for sufficiently large $k$, Bob cannot achieve his aim, as desired. $\blacksquare$
17.03.2022 22:17
Solved with Jeffrey Chen. Let the two players be Alice and Bob, and $0$ index for writeup reasons Part a) Bob asks the set $\{0,1,\ldots 2^k-1\}$ a total of $k+1$ times, or until Alice returns false. If Alice returns true for each time Bob asks it, then clearly the answer is in that set, so we just return $\{0,1,\ldots 2^k-1\}$ which has size $2^k \leq n$. Otherwise, If Alice returns false, then for each $1\leq i\leq k$, let Bob ask the set $S_i = \{x | x\&(1 << i) = 0\}$. If $k_i$ was the output for $S_i$, where $k_i = 0$ if the output was false and $1$ otherwise, then consider the binary number $t = \overline{0k_k k_{k-1} \ldots k_1}$. I claim $x\neq t$. This is because, if $x = t$, then for each $k_i$, it will be in $S_i$ if $k_i = 0$, so Alice must've lied. Furthermore, Alice lied the time we asked for $\{0,1,\ldots 2^{k} - 1\}$, so she lied $k+1$ times in a row, a contradiction. Therefore, for all $N \geq 2^k$, we can either find a satisfactory set of size $2^k$, or reduce the search space by $1$. This clearly means, that, after a number of such processes as the one described above, we will eventually find a set of size $2^k$ such that it contains $x$. This proves part a. Part b) Let Bob ask $x$ questions, $n = \lceil 1.99^k\rceil$, and $N = 2^k$. I claim, for large enough $k$, Alice can choose true or false answers such that, for $\lceil 1.99^k\rceil$ number of $x = 0,1,\ldots 2^k - 1$, each $k + 1$ consecutive questions will have at least one truth. This clearly means Bob can not pick a set of size $n$; thus, for large enough $k$, we have $n > (1.99)^k$ does not mean Bob can win. Define $f(x)$ as the number of binary strings of length $x$, such that every substring of length $k + 1$ has at least one $1$. I claim \[f(x) = 2f(x-1) - f(x-k-2)\]This is because, at the $x$th position, we can always place either a $0$ or a $1$, and get $f(x-1)$ of the remaining $x-1$ size string, unless the last $k$ elements (not including the last one) were all false, in which case the $k+1$th last element is true, so the $x-k-2$ elements before that form a desirable binary string. Then, in this case, we have $f(x-k-2)$ such strings, and we can only place a $1$ at the end, so we subtract this. We get a recursive relationship. The characteristic polynomial is $x^{k+2} = 2x^{k+1} - 1$. I claim that this has exactly one root $r$ with $|r| > 1$. First, observe as $k$ gets large, then such an $r$ must have $\lim_{k\rightarrow \infty} r = 2$. Then, for the rest of the roots, their symmetric sums will approach $0$ as $k$ increases and $r$ approaches $2$, by inductively calculating each symmetric sum using the previous (besides the product). This clearly means, if a root $z$ of the polynomial had magnitude greater than $1$, we get a contradiction by the triangle inequality. Now, we can express $f(x)$ as a linear sum of the roots. Finally, by choosing the answers randomly, Alice expects $\frac{f(x)}{2^x} \cdot 2^k$ of each value between $0$ and $2^k - 1$ to satisfy the conditions. However, for large enough $k$, this will be greater than $1.99^k$, since $f(x) = r^k + c$ for $c < k$ and $r > 1.99$. We conclude that there exists such a configuration of trues/falses, for each set of questions. Thus, Bob can not guarantee victory.
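The recursion for $f$ above is easy to sanity-check by brute force. The snippet below is an added illustration: it only verifies the recursion $f(x) = 2f(x-1) - f(x-k-2)$, not the rest of the argument (see the next reply for a concern about how the estimate is used).

```python
from itertools import product

# f(x) = number of binary strings of length x in which every window of k+1
# consecutive characters contains a 1 (equivalently: no run of k+1 zeros).
def brute(x, k):
    return sum(1 for s in product("01", repeat=x) if "0" * (k + 1) not in "".join(s))

def by_recursion(x, k):
    # f(j) = 2^j for j <= k (every short string is fine); f(k+1) excludes 0^(k+1)
    f = [2 ** j for j in range(k + 1)] + [2 ** (k + 1) - 1]
    for j in range(k + 2, x + 1):
        f.append(2 * f[j - 1] - f[j - k - 2])
    return f[x]

k = 2
for x in range(k + 2, 15):
    assert brute(x, k) == by_recursion(x, k)
print("recursion matches brute force for k =", k)
```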
18.05.2022 03:48
Part A): Obviously we can always divide the numbers below $N$ into $2^k+1$ buckets and eliminate one, so it suffices to prove that among $2^k+1$ numbers we can always determine one that is not $x$. Also, 0-index for convenience. Query the numbers $0$ through $2^k-1$ repeatedly; if the answer is "yes" $k+1$ times in a row, then we can eliminate $2^k$. Otherwise, assume that Alice says no at some point (we will be selecting a number at most $2^k-1$ to eliminate, so if Alice was telling the truth here it wouldn't matter). If we assume that Alice tells the truth from now on, we can deduce the binary digits of the number $m$ that is consistent with Alice's responses as $x$. Since Alice has to tell the truth at least once in these $k+1$ answers, any $x$ below $2^k$ must share at least one binary digit with $m$, so we can xor $m$ with $2^k-1$ to get something that definitely isn't $x$. I have an outline for part B) and will post a writeup later.
09.06.2023 02:35
The scales invariant is actually overused.

Part 1: Just repeatedly ask a question until it gives your hypothesized answer (true or false; $A$ must cave eventually, or else you gain information, and then you repeat until no more information is gained), and then ask $k$ more questions assuming your hypothesis is false. At least one of those $k$ answers (instead of $k+1$) must be true. This inducts down if you force the hypothesis to be false, giving at most $2^k$ viable solutions by induction.

Part 2: Given a number $s$, the scales invariant $N(s)$ is defined as $(1.999)^m$, where $m$ is the number of turns since $A$ last gave an answer that is truthful under the assumption $x = s$. We will design a strategy for $A$. Suppose $A$ chooses a candidate set $S$ of size $n+1$ and minimizes the sum $N(S) := \sum_{s \in S} N(s)$ after each answer. Then after each turn the new $N_{\textrm{new}}(S)$ is at most $1.999 (N_{\textrm{old}}(S)/2) + n$. Thus, since $N_0(S) = n+1$, it follows that $N(S)$ is bounded above by $2000n$. Thus as long as $(1.999)^k > 2000n$, $A$ will never tell a lie for more than $k$ rounds in a row for any given element $s \in S$. Thus if the condition holds, then $A$ can maintain $n+1$ different viable values of $x$, so that $B$ can never guarantee a win. However, for sufficiently large $k$ we always have $n = \lfloor \frac{(1.999)^k}{2000} \rfloor \geq (1.99)^k$, so we are done.
01.10.2023 22:44
Used way too many hints.

Claim: If there are $2^k + 1$ numbers, $B$ can rule out one of the numbers.

Proof. $B$ repeatedly queries $\{2^k + 1\}$ until $A$ says yes. If this never happens after $k + 1$ turns, then $B$ can rule out $2^k + 1$. For the remaining $k$ queries, $B$ asks, for each binary digit, about the subset of $\{1, 2, \dots, 2^k\}$ with that digit equal to $1$. In total, Alice can respond in at most $2^{k+1}$ ways to these $k+1$ queries. If the first query is truthful, then $x$ can only be $2^k + 1$. Else, if the first query is a lie, then Alice must tell the truth at least once in the remaining $k$ queries, giving $x$ at most $2^k - 1$ possible values (as they can't all be false). There must be a remaining $x$ that never occurs in the above, and it can be ruled out. $\blacksquare$ $(a)$ follows from this.

Claim: For all $\varepsilon$, $B$ cannot guarantee a win for $N = \left\lceil(2 - \varepsilon) ^k \right\rceil$ and sufficiently large $k$.

Proof. Suppose that the $t$th query gives a partition $A_t \sqcup B_t$, and $A$ chooses one part $B_t$ and says $x \notin B_t$. As long as $\bigcap_{n=t}^{t+k} B_n = \emptyset$ for every $t$, it follows that $B$ can't rule out any options, as one of the statements must be true. We now give a weighing scheme. Fix a real $\alpha = 2 - \frac{1}{2}\varepsilon$. On the $(t+1)$th query, weigh $n$ as $\alpha^c$ where $\bigcap_{n=t+1-c}^{t} B_n$ is the longest chain of sets containing $n$. Then $A$ greedily chooses between the partitions to minimize the sum of weights over all numbers, which changes the sum from $a$ to at most $\frac{\alpha}{2} \cdot a + N$. As such, the sum is bounded above by $N \cdot \frac{1}{1 - \frac{\alpha}{2}}$, which is less than $\alpha^{k+1}$ for sufficiently large $k$. $\blacksquare$
05.12.2023 17:18
any matholy kid born after 2003 can't monovariant, all they know is think about process, induct, fakesolve, do config geo, complain nt dead, and lie

Name the players Alice and Bob. For part 1, it suffices to prove that given $2^k+1$ fixed possibilities for $x$, Bob can eliminate at least one of them. WLOG suppose they are $\{1,\ldots,2^k+1\}$. Bob first queries $\{2^k+1\}$ repeatedly until Alice either responds with $k+1$ consecutive no's or a single yes. In the former case, Bob immediately eliminates $x=2^k+1$. Otherwise, Bob now defines an impossibility set $S$, initialized to $\{1,\ldots,2^k\}$, which tracks the set of numbers in $\{1,\ldots,2^k\}$ that don't agree with any of the following $k$ guesses. Then, for each of his next $k$ guesses, he queries any subset of $S$ with half its size. In this way, the size of $S$ halves after every guess (with $S$ known to Bob the entire time), so after $k$ guesses we end with $|S|=1$, i.e. some value of $x \in \{1,\ldots,2^k\}$ is eliminated, as desired. For $N>2^k$, this fact allows us to induct from $N$ to $N-1$, and since $N\leq 2^k$ clearly works (just specify $X=\{1,\ldots,N\}$) it follows that Bob can win.

For part 2, take $n=\lceil 1.99^k\rceil$ and $N=n+1$ for large $k$, so $N \leq 1.991^k$. Here, it will help to view the process as follows: Bob partitions $\{1,\ldots,N\}$ into two subsets and sends them to Alice. Alice then chooses one of the subsets and informs Bob of her decision (corresponding to a possibly false claim that $x$ lies in that subset). We will show that Alice can act in a way such that, among any $k+1$ consecutive subset choices from Alice, every element of $\{1,\ldots,N\}$ appears in at least one chosen subset. This would mean that Alice successfully prevents Bob from eliminating any of the $n+1$ possible choices for $x$.

Define a (changing) variable $d_i$ for each $i \in \{1,\ldots,N\}$ such that, immediately after Alice's decision, $d_i$ equals the number of decisions between now and the last time $i$ was contained in a chosen subset (so if $i$ was just in the chosen subset, $d_i=0$); for convenience, we "suppose" that every $i$ was included in a chosen subset right before the game actually starts (so $d_i=0$ initially). We want to avoid $d_i=k+1$. Consider the quantity $M=\sum_i 1.999^{d_i}$. I claim that it suffices for Alice to always act in a way such that this quantity is minimized. Note that $M$ initially equals $N$, and $1.99^k \leq N \leq 1.991^k$. Suppose that during some move, Bob sends $S_1$ and $S_2$ to Alice. Then, based on Alice's decision, $M$ either changes to $1.999M-\sum_{i \in S_1} (1.999^{d_i+1}-1)$ or to $1.999M-\sum_{i \in S_2} (1.999^{d_i+1}-1)$. If Alice picks the subset that results in the greater decrease, it follows that $M$ ends up being at most $\frac{1.999M+N}{2}$. For $k$ sufficiently large, this quantity is always at most $1.999^k$, since this holds initially and also $(1.999/2)1.999^k+1.991^k/2 \leq 1.999^k$. On the other hand, if some $d_i$ exists with $d_i=k+1$ then we have $M \geq 1.999^{k+1}$: contradiction. $\blacksquare$
05.12.2023 17:24
wack wrote: Finally, by choosing the answers randomly, Alice expects $\frac{f(x)}{2^x} \cdot 2^k$ of each value between $0$ and $2^k - 1$ to satisfy the conditions. However, for large enough $k$, this will be greater than $1.99^k$, since $f(x) = r^k + c$ for $c < k$ and $r > 1.99$. We conclude that there exists such a configuration of trues/falses, for each set of questions. Thus, Bob can not guarantee victory. The entire part (b) solution is a neat idea but I'm not sure if it's actually correct. Shouldn't we have $f(x)=r^x+c$ instead of $r^k$? Then $\frac{f(x)}{2^x}\cdot 2^k \approx ((2-\varepsilon)/2)^x\cdot 2^k$ for some fixed $\varepsilon$, which drops below $1.99^k$ as $x \to \infty$ with $k$ held constant, and $x$ can be much larger than $k$.
20.01.2024 22:01
Suppose that Bob is a librarian and Alice is borrowing books from a library. Every day, Bob gives Alice a subset of the books that Alice currently has (Bob's question), and Alice may choose to renew that subset of books (answering yes) or the complement of that subset of books (answering no). Suppose that Alice lost one of the books (the one numbered $x$), but Bob does not know which book it is. Alice may still renew the lost book.

Claim. Bob can guarantee that a book was not lost if and only if that book goes unrenewed for $k+1$ days in a row.

Proof. If the book was not renewed for $k+1$ days in a row, then one of those $k+1$ responses was truthful, and each of them claims $x$ lies in a set not containing the book, so the book is indeed not the lost one. If the book is renewed in every interval of $k+1$ days, then it is always possible for Alice to have told the truth exactly when she renewed that book, which is always consistent with that one being the lost book. $\blacksquare$

Thus we can continue our analogy: Alice must return a book if it goes unrenewed for some interval of $k+1$ days (and she may never borrow new books), and Bob wants to force Alice to end up with as few books as possible.

(a) We will prove that if Alice has $2^k+1$ books, then Bob can get Alice to return at least one book. Then we can just apply the same algorithm to any subset of $2^k+1$ books if Alice has more than $2^k+1$ books. Suppose that one of the books Alice has is EGMO. Then Bob will query the set {EGMO} every day until Alice either:
- doesn't renew EGMO for $k+1$ days in a row, in which case Bob has achieved his goal, or
- renews EGMO.
In the latter case, on the day EGMO is renewed, none of the other $2^k$ books are renewed. Bob then queries any subset of $2^{k-1}$ books besides EGMO, so that exactly $2^{k-1}$ books remain un-renewed since the EGMO day; then any subset of $2^{k-2}$ of those still-un-renewed books, so that exactly $2^{k-2}$ remain un-renewed; then $2^{k-3}$, and so on. After these $k$ queries, which together with the EGMO day span $k+1$ consecutive days, one book has still not been renewed in any of them, so Bob has achieved his goal. $\blacksquare$

(b) We will describe a strategy that lets Alice keep her favorite $\left\lfloor(1.992)^k\right\rfloor$ books; taking $n=\left\lfloor(1.992)^k\right\rfloor-1\ge 1.99^k$ for large $k$, Bob's final set of at most $n$ books must then miss one of them, so Bob cannot guarantee a win. Alice should disregard all books besides these favorites. Let a book's urgency be the number of days left that Alice has to renew it, so that the urgency of a book that Alice hasn't renewed for $k$ days in a row is $1$. Each day, assign a book the weight $1.996^{-\text{urgency of the book}}$, and let the weight of a configuration of urgencies be the sum of the weights of the books. It suffices to prove that the configurations Alice reaches always have weight less than $1$: if a book with urgency $1$ ever lay in the half of Bob's partition with the smaller total weight, that half alone would weigh at least $1.996^{-1}>0.5$, so the whole configuration would weigh more than $1$. I claim that Alice can simply renew the set of books (out of Bob's query and its complement) with the larger total weight. Suppose that the weight of a configuration is $2w$. After Alice renews the heavier set, the total weight of the renewed books is at most $W=(1.992)^k\cdot 1.996^{-k-1}$, and the weight of the non-renewed books, each multiplied by $1.996$, is at most $1.996w$. But for large $k$, $W$ can be as small as we want, so we indeed have \[1.996w+W<2w\]whenever $w\ge 0.1$, and when $w<0.1$ the new weight is still less than $0.2$. Therefore, the weight of the configuration decreases or stays under $0.2$, and since it starts at a value of at most $W$, it is always less than $1$, as desired. $\blacksquare$
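As an illustration only (not part of the post above), here is a Python simulation of Bob's part (a) strategy in the library language, with a randomly answering Alice and with the name `EGMO` standing for the book numbered $2^k+1$; it checks that after the halving phase some book has gone $k+1$ consecutive days without being renewed:

```python
import random

def simulate(k, seed=0):
    rng = random.Random(seed)
    EGMO = 2**k                      # books are 0..2^k, with 2^k playing the role of EGMO
    all_books = set(range(2**k + 1))
    history = []                     # the set of books renewed on each day

    # Phase 1: query {EGMO} until Alice renews it, or refuses k+1 days in a row.
    refusals = 0
    while True:
        if rng.random() < 0.5:                   # Alice renews everything except EGMO
            history.append(all_books - {EGMO})
            refusals += 1
            if refusals == k + 1:
                return EGMO                      # EGMO unrenewed k+1 days in a row: eliminated
        else:                                    # Alice renews {EGMO}
            history.append({EGMO})
            break

    # Phase 2: repeatedly query half of the books not renewed since the EGMO day.
    U = all_books - {EGMO}
    for _ in range(k):
        T = set(list(U)[: len(U) // 2])
        if rng.random() < 0.5:
            history.append(T)                    # Alice renews T
            U = U - T
        else:
            history.append(all_books - T)        # Alice renews the complement of T
            U = U & T
    (eliminated,) = U                            # exactly one book left unrenewed
    # It was excluded from the last k+1 renewed sets, so it cannot be the lost book.
    assert all(eliminated not in day for day in history[-(k + 1):])
    return eliminated

print("eliminated books:", [simulate(10, seed=s) for s in range(5)])
```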
23.07.2024 23:00
Part 1: It suffices to show that for $N= 2^k+1$, $B$ can eliminate a number. First, $B$ can keep querying $\{2^k+1\}$. If $k+1$ moves pass and $A$ only says $x\neq 2^k+1$, $B$ can rule out $2^k+1$. So, assume $A$ eventually says $x=2^k+1$. After that, $B$ can ask $A$ about each of the $k$ binary digits of $x-1$. Let $y\in\{1,\ldots,2^k\}$ be the value such that the digits of $y-1$ all differ from what $A$ says the digits of $x-1$ are. Then the most recent $k$ statements are inconsistent with $x=y$. The earlier statement that $x=2^k+1$ is also inconsistent with $x=y$. So the most recent $k+1$ statements are inconsistent with $x=y$. One of these statements must be true, so $x\neq y$. Thus, $B$ can rule out $y$.

Part 2: Let $N=\lfloor 1.999^k\rfloor$ and $n=N-1$. For sufficiently large $k$, $n>1.99^k$. We will show that for sufficiently large $k$, $A$ can prevent $B$ from ruling out any number; then $B$'s final set of at most $n$ numbers must miss a value still consistent with all of $A$'s answers, so $B$ cannot guarantee a win. Preventing any rule-out is equivalent to saying that for each $m\le N$, $A$ says at least one true statement about $m$ in every $k+1$ consecutive answers. For each $m\le N$, let $t_m$ be the number of false statements $A$ has said about $m$ since the last time $A$ said a true statement about $m$. All of the $t_i$'s start at $0$.

Claim: For sufficiently large $k$, $A$ can keep $S=\sum_{i=1}^N 1.9999^{t_i}$ below $20000\cdot 1.999^k$.

Proof: Note $S$ starts at $N<1.999^k$. Let $f(S)=\frac{1.9999}{2}S+1.999^k$. Note that $f$ is increasing and $f(20000\cdot 1.999^k)=20000\cdot 1.999^k$. Thus, if $S<20000\cdot 1.999^k$, then $f(S)<20000\cdot 1.999^k$. Let $X$ be the set $B$ queries and let $Y=[N]\setminus X$. WLOG assume $\sum_{y\in X} 1.9999^{t_y}\ge\frac12S$. Then $A$ can say that $x$ is in $X$, after which $S$ becomes $\sum_{y\in X} 1 + \sum_{y\in Y} 1.9999^{t_y+1} \le f(S)$. So $S$ always stays below $20000\cdot 1.999^k$.

For sufficiently large $k$, $20000\cdot 1.999^k<1.9999^{k+1}$. So none of the $t_i$'s ever exceeds $k$. Therefore, $B$ cannot rule out any number, and the conclusion follows.
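The true parameters here ($N\approx 1.999^k$ with $k$ in the thousands) are far too large to run, but the mechanism of the claim can be illustrated with surrogate numbers. The Python sketch below (an illustration only, not the actual parameters) uses base $b=1.5$, $N=1000$, $k=20$ in place of $1.9999$, $\lfloor 1.999^k\rfloor$, $k$; these satisfy the analogous fixed-point condition $2N/(2-b)<b^{k+1}$, and the same greedy rule then keeps every $t_i\le k$ no matter how $B$ chooses his queries (random here):

```python
import random

def simulate(N=1000, b=1.5, k=20, rounds=1000, seed=0):
    """A's greedy strategy from the proof, run with surrogate parameters.

    t[i] counts the consecutive recent answers that are false about i.
    A always claims x lies in the side of B's partition with the larger
    total weight sum(b**t[i]).  The proof's invariant is that
    S = sum(b**t[i]) never exceeds the fixed point 2N/(2-b), and since
    that bound is below b**(k+1) here, no t[i] ever reaches k+1.
    """
    rng = random.Random(seed)
    t = [0] * N
    bound = 2 * N / (2 - b)
    assert bound < b ** (k + 1)       # surrogate analogue of 20000*1.999^k < 1.9999^(k+1)
    worst = 0
    for _ in range(rounds):
        X = {i for i in range(N) if rng.random() < 0.5}     # B's (random) query
        wX = sum(b ** t[i] for i in X)
        wY = sum(b ** t[i] for i in range(N)) - wX
        claimed = X if wX >= wY else set(range(N)) - X      # claim x is in the heavier side
        t = [0 if i in claimed else t[i] + 1 for i in range(N)]
        S = sum(b ** ti for ti in t)
        worst = max(worst, max(t))
        assert S <= bound + 1e-6 and worst <= k
    return worst

print("largest t_i ever reached:", simulate())   # stays at most k = 20
```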