Problem
Source: IMO Shortlist 2018 C6
Tags: IMO Shortlist, combinatorics, IMO shortlist 2018
Let $a$ and $b$ be distinct positive integers. The following infinite process takes place on an initially empty board. $(i)$ If there is at least one pair of equal numbers on the board, we choose such a pair and increase one of its components by $a$ and the other by $b$. $(ii)$ If no such pair exists, we write the number $0$ twice. Prove that, no matter how we make the choices in $(i)$, operation $(ii)$ will be performed only finitely many times. Proposed by Serbia.
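The process is easy to simulate directly. Below is a minimal Python sketch (the function name `simulate` and the smallest-pair choice rule are my own illustration, not part of the problem) that plays the game and counts how often operation (ii) is used; for small $a,b$ the count stops growing quickly, which is exactly what the problem asks us to prove for every choice rule.

```python
from collections import Counter

def simulate(a, b, steps, pick=min):
    """Play `steps` operations of the board game; return how many were type (ii).

    `pick` is the choice rule for operation (i): given the list of numbers
    currently appearing at least twice, it selects which pair to operate on.
    """
    board = Counter()
    type_ii = 0
    for _ in range(steps):
        pairs = [n for n, c in board.items() if c >= 2]
        if pairs:
            n = pick(pairs)          # operation (i): n, n -> n + a, n + b
            board[n] -= 2
            board[n + a] += 1
            board[n + b] += 1
        else:
            board[0] += 2            # operation (ii): write 0 twice
            type_ii += 1
    return type_ii

# With a = 1, b = 2 and the smallest-pair rule, only two type-(ii)
# operations are ever needed, no matter how long we run.
print(simulate(1, 2, 200), simulate(1, 2, 2000))
```

Hand-tracing the $a=1,b=2$ game shows why: after the second write of two zeros the board reaches $\{2, n, n, n+1\}$-type positions that sustain operation (i) forever.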
17.07.2019 17:50
The official solution is really great (see the official IMO shortlist!), but this is a sketch of the solution I submitted during the test. (I will not go into detail, as it is super messy.)
17.07.2019 19:37
A similar problem appeared on China TST 2018.
08.08.2019 20:12
This is my idea (I hope it is correct).
13.11.2019 00:23
This solution, due to MathAwesome123, is different from the previous ones: it uses a brilliant algebraic invariant argument inspired by the classic escape of the clones / Conway's Checkers. Edit: this solution doesn't work - see goldenboy's comment below
https://artofproblemsolving.com/community/c996208h1950519_isl_2018_c6
25.01.2020 12:07
This problem can be solved very easily using the chip-firing lemma on an infinite directed graph. Notice that the problem reduces to proving that the chip-firing game with the initial setting of $k$ chips on node $0$ and $0$ chips on all other nodes never terminates for sufficiently large $k$.
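As a quick experiment with this chip-firing reformulation, the sketch below (my own illustration; the name `fire` and the smallest-node rule are assumptions, not the commenter's notation) fires nodes until the configuration stabilizes or a step budget runs out. With only $2$ chips on node $0$ the game stabilizes at once, while with enough chips it keeps firing past any budget, matching the claim that operation (ii) is needed only finitely often.

```python
from collections import Counter

def fire(a, b, k, budget=10_000):
    """Chip-firing on the nonnegative integers, starting with k chips on node 0.

    A node n holding at least 2 chips may fire, sending one chip to n + a and
    one to n + b.  Returns the number of firings before stabilizing, or None
    if the game is still running when the budget is exhausted.
    """
    chips = Counter({0: k})
    for step in range(budget):
        ready = [n for n, c in chips.items() if c >= 2]
        if not ready:
            return step              # stable: no node can fire
        n = min(ready)               # fire the smallest ready node
        chips[n] -= 2
        chips[n + a] += 1
        chips[n + b] += 1
    return None                      # still firing: the game looks infinite
```

For example, with $a=1,b=2$: `fire(1, 2, 2)` stabilizes after one firing, whereas `fire(1, 2, 4)` exhausts any budget.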
24.04.2020 23:19
rzlng wrote: This solution, due to MathAwesome123 is different from the previous ones: It uses a brilliant algebraic invariant argument inspired by the classic escape of the clones / Conway's Checkers.
https://artofproblemsolving.com/community/c996208h1950519_isl_2018_c6 What if y=a-1?
03.06.2020 10:52
rzlng wrote: Let $\mathcal{B}$ be the multiset of integers currently written on the board. Define the quantity\[S=\sum_{n\in\mathcal{B}}f(n). \]We claim that $S$ is preserved under type (i) operations. It suffices to show that $2f(n)=f(n+a)+f(n+b)$, which is $2\cdot \varphi^{2y-x}=\varphi^{2y+2-x}+\varphi^{2y-x-1}$, or $2\varphi=\varphi^2+\varphi^{-1}$. This follows from the fact that $\varphi^2=\varphi+1$ and $\varphi^{-1}=\varphi-1$ (by rearrangements of $\varphi^2-\varphi-1=0$).

Correct me if I'm wrong (I might just be really bad at algebra), but isn't this false? We have \[2f(n) = 2\varphi^{2y-x},\]while \[f(n+a)+f(n+b) = \varphi^{2y-x+2} + \varphi^{2y-x-1} = \varphi^{2y-x}(\varphi^2 + \varphi^{-1}) = 2\varphi^{2y-x+1},\]and these two are not equal. I had a similar idea of firing points, with $2 \cdot (x, y) \Rightarrow (x+1, y) \cup (x, y+1)$, where $(x, y)$ denotes $xa+yb$, but the only invariant I really bothered to find was $x-y$. However, I believe this is fixed by instead taking $f(n = xa + yb) = \varphi^{x-2y}$: then \[2f(n) = 2\varphi^{x-2y} = \varphi^{x-2y}\left(\left(1 - \tfrac{1}{\varphi}\right) + \left(1 + \tfrac{1}{\varphi}\right)\right) = \varphi^{x-2y}\left(\tfrac{1}{\varphi^2} + \varphi\right) = \varphi^{x-2y+1} + \varphi^{x-2y-2} = f((x+1)a + yb) + f(xa + (y+1)b) = f(n+a) + f(n+b).\]
03.06.2020 11:16
Sepled wrote: Proof In fact, $$\{0,0,a,2a,...,(b-1)a,b,2b,...,(a-1)b\}$$satisfy the Claim. The proof is bashing everything out. (The actual proof is too messy and I'm too lazy to show that.)

While this works, it is not achievable in a "real game", right? If you just choose $a$ to be a super small prime like $17$ and $b$ to be some huge prime in the millions, then it's obvious that reaching this configuration is impossible. Not to mention you're trying not to perform operation (i), so you can always choose pairs so the board never gets there and you're forced into (ii).
04.06.2020 05:50
@above: Since I already proved that the order of operations (even mixing (i) and (ii)) does not affect the final result, I just need to prove that each number is reachable from the starting point. This is pretty obvious, because we can make all numbers of the form $ax+by$ with $x,y \geq 0.$ There is no reason to care whether a number can be achieved under the condition that forces the second operation; we can swap moves any way we want. (I forgot to mention that WLOG $\gcd(a,b)=1$.) Also, this number is achievable even when the second rule still applies. Here's the trick: one can prove, for $f(n,k)$ the function counting the number of times $n$ appears after applying operation (ii) $k$ times, that $f(xa,k) \equiv 1 \pmod{2}$ for $x<b$ iff $c2^{x+1}+2^x \leq k <(c+1)2^{x+1} $ for some constant $c.$
13.06.2020 00:15
WLOG $\gcd(a,b)=1$. Suppose that, for some pair $(a,b)$ with $a<b$, we needed operation (ii) an infinite number of times. The key idea is that, if we start from a given position and want to reach an endstate, we can permute the steps to reach the endstate as long as the permutation is still executable (i.e., we only operate on $k$ when there are at least two $k$s). In particular, this means that, if we needed step (ii) an infinite number of times, we can consider our endstate to be a turn far in the future, right before the $(N+1)$st execution of step (ii) for $N\gg 1$. Rearranging, we now start with $2N$ $0$s, and our endstate occurs when no pairs are left.

Using at most $2^i$ $0$s, we can get at least one of each of the numbers $b,2b,\ldots, ib$. Similarly, we can use a finite number of steps to get $ka+b,ka+2b,\ldots, ka+ib$. Varying $k$, we can cover all residues mod $b$, so using finitely many $0$s we can get $b$ consecutive numbers. Now, duplicate the necessary steps so that there are at least two copies of each of the first $a$ consecutive numbers. So, if we set $N$ to be a sufficiently large integer, which is possible because we assumed we applied (ii) an infinite number of times, we can get, by rearranging the appropriate steps, some multiset of numbers of the form $k+\{1,1,2,2,\ldots, a,a,a+1,a+2,\ldots b\}$.

However, if we operate on $k+1$, we get $k+1+\{1,1,2,2,\ldots, a,a,a+1,a+2,\ldots b\}$. Hence, this pattern is self-sustaining and will never run out of pairs. Hence, there will never be an $(N+1)$st execution of step (ii), and we have a contradiction, as desired.
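The self-sustaining multiset in the last step can be checked mechanically. This small sketch (helper names are mine, not from the post) verifies, for a few $a<b$, that one type-(i) move on the smallest pair shifts the whole pattern $k+\{1,1,2,2,\ldots,a,a,a+1,\ldots,b\}$ up by $1$.

```python
from collections import Counter

def move_on_smallest_pair(board, a, b):
    """Operation (i) applied to the smallest number appearing at least twice."""
    n = min(m for m, c in board.items() if c >= 2)
    board[n] -= 2
    if board[n] == 0:
        del board[n]                 # keep the Counter free of zero entries
    board[n + a] += 1
    board[n + b] += 1

def sustaining(a, b, k=0):
    """The multiset k + {1,1,2,2,...,a,a, a+1,a+2,...,b} from the post."""
    counts = {k + i: 2 for i in range(1, a + 1)}
    counts.update({k + i: 1 for i in range(a + 1, b + 1)})
    return Counter(counts)

for a, b in [(1, 2), (2, 5), (3, 7)]:
    board = sustaining(a, b)
    move_on_smallest_pair(board, a, b)
    assert board == sustaining(a, b, k=1)   # the whole pattern shifted up by 1
```

Removing the two copies of $k+1$ adds a second copy of $k+1+a$ and a first copy of $k+1+b$, which is exactly the same pattern based at $k+1$.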
03.01.2022 05:31
This one's a mindbender. Scale $a,b$ down so that $\gcd(a,b) = 1.$

Claim: Suppose we start from a position and apply a sequence $S$ of $k$ moves of type (i), $A_1,A_2, \dots, A_k,$ until we get to an "end position" where we must apply a (ii) move. If we apply an arbitrary sequence $B_1,B_2, \dots$ of type (i) moves on the same starting position, we will get to the same end position.

Proof: We induct on $k$, with base case trivial. For the inductive step, if $B_1$ consists of replacing $(x,x)$ with $(x+a,x+b)$, then take the first move $A_i$ consisting of replacing $(x,x)$ with $(x+a,x+b)$ (one must exist, otherwise $S$ wouldn't lead to an end position). Move $A_i$ to the front of $S.$ Note that 1) all moves are still valid, since the $x$'s were not moved on before $A_i$; 2) the end position is unchanged; 3) we still cannot possibly reach an end position before $A_k.$ Then apply the inductive hypothesis. $\square$

Critical Claim: Apply a sequence $S$ of moves from the empty board leading to an end position after making $N$ type (ii) moves. Assume we start with $2N$ zeros and make type (i) moves arbitrarily. Then we will reach the same end position.

Proof: Follow the same sequence $S$ on the $2N$ zeros; if we get to a type (ii) move, skip it and let the next move be on two zeros that haven't been moved on. This leads to the same end position. Then apply the previous claim. $\square$

The key insight is that it suffices to show that, starting with enough zeros, there exists a sequence of moves allowing us to make (i) moves indefinitely (note the critical claim gives the freedom to choose any arbitrary sequence of (i) moves). It is not hard to see that with enough $0$s, we can get any large enough linear combination of $a$ and $b$ on the board. So we can get a multiset of numbers of the form $$\{C+1,C+1,C+2,C+2,\ldots, C+a,C+a,C+a+1,C+a+2,\ldots C+b\}$$for an integer $C$ on the board. We always move on the two smallest integers in this set; we can do this indefinitely. $\blacksquare$
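The order-independence claims are easy to sanity-check by brute force in small cases. In the sketch below (my own helper names, not the poster's notation), two different choice rules for operation (i), smallest pair versus largest pair, are played to exhaustion from $2N$ zeros and reach the same end position. A step cap guards against parameter choices where the game never halts.

```python
from collections import Counter

def play_out(a, b, zeros, pick, cap=100_000):
    """From `zeros` copies of 0, apply operation (i) via `pick` until no pair remains."""
    board = Counter({0: zeros})
    for _ in range(cap):
        pairs = [n for n, c in board.items() if c >= 2]
        if not pairs:
            # end position reached: return it without zero-count entries
            return Counter({n: c for n, c in board.items() if c > 0})
        n = pick(pairs)
        board[n] -= 2
        board[n + a] += 1
        board[n + b] += 1
    raise RuntimeError("no end position within cap (the game may be infinite)")

# Two different choice rules reach the same end position:
assert play_out(2, 3, 4, min) == play_out(2, 3, 4, max)
assert play_out(3, 5, 4, min) == play_out(3, 5, 4, max)
```

Note that operation (i) removes two numbers and adds two, so the total count of numbers on the board is invariant; with only $4$ starting zeros these games halt quickly.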
25.02.2022 09:15
Suppose not. First, note that we don't have a "choice" in operation (i): irrespective of the order in which we do operations, we eventually get the same result. This is because, suppose you want to do some operations and instead I want to perform operation (i) on, say, $x,x$. Clearly your set of operations will eventually have to do operation (i) on $x,x$, as otherwise you would never have to use operation (ii); when you do so, we may assume we operate on this pair first, so it doesn't matter. The same works if I want to perform operation (ii), since I can just "wait" until you're done with operations on the rest; since you can't go on infinitely, you will have to perform operation (ii) later.

Now start the game with $2^M$ applications of operation (ii); I claim that this will go on forever. Let $a \ge b$. It suffices to show that there exists a set of $a$ consecutive numbers, each of which appears at least twice: then for any number $n$ bigger than them, $n-a$ and $n-b$ each appear at least twice, so each contributes at least one appearance of $n$, so $n$ also appears at least twice, and so on, so we never have to use operation (ii) beyond this. Assume $a,b$ are coprime, since otherwise we can just divide everything by the gcd. There must exist a number $C$ such that every number bigger than $C$ can be written as $ap + bq$ for $p,q \ge 0$, by, say, Chicken McNugget. Consider the $a$ numbers $C+1, C+2, \cdots, C+a$: to each of them there is a "path" from $0$, and since the number of times a number is written at most halves along each step of the path, if we start with $> 2^{\frac{C+a}{a} + 1}$ things, then each of these numbers will get written at least twice, so we are done by the above paragraph. $\blacksquare$
29.05.2022 10:41
I think I have a solution, but I find it very difficult to put into words; I will do my best. (This solution is also potentially wrong, please catch any mistakes.) First, notice that the order of the moves does not change anything, so we have total control over the order of the moves. Second, it does not matter when we put a $0$ as long as we never miss any: you can write a $0$ before it is needed and it will not affect anything. Finally, if there are more $0$'s written than needed, this is a strictly stronger version of the setup; if the result can be shown for this arrangement, it can be shown for the weaker one too. So assume WLOG $a<b$. We will denote $S_1=\{ a,b \}$ and define $S_n$ inductively. If we have $S_{n-1}$ on the board, we can add $S_1$ to the board. Adding another $S_1$ gives $S_{n-1}$ and $S_{2}$ on the board. Keep adding $S_1$'s and combining until we have two $S_{n-1}$'s. Now, we can form an $S_n$: since there are two $S_{n-1}$'s on the board, we can pair their elements and apply the operation, so the sets $S_{n-1}+a$ and $S_{n-1}+b$ will be on the board. Now, proceed to pair numbers arbitrarily until all numbers are distinct. To show that we will use (ii) finitely often, it is sufficient to show there are finitely many $S_i$'s. It is easy to see by induction that $\min(S_k)=ak$, $\max(S_k)=bk$, and $|S_k|=2^k$. But all elements in $S_k$ are distinct, so \[ k(b-a)+1 \geq 2^k, \]which is false for sufficiently large $k$. Thus, only finitely many $S_i$'s exist, as desired.
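The bookkeeping on $S_k$ can be verified numerically. In this sketch (my notation), $S_n$ is built as the multiset $(S_{n-1}+a)$ together with $(S_{n-1}+b)$, and the facts $\min(S_k)=ak$, $\max(S_k)=bk$, $|S_k|=2^k$ are checked, along with the pigeonhole count that at most $k(b-a)+1$ distinct values fit between $ak$ and $bk$.

```python
def next_level(S, a, b):
    """S_n is two copies of S_{n-1}, paired element-by-element and operated on:
    as a multiset this is (S_{n-1} + a) together with (S_{n-1} + b)."""
    return [x + a for x in S] + [x + b for x in S]

a, b = 2, 5
S = [a, b]                        # S_1 = {a, b}
for k in range(2, 8):
    S = next_level(S, a, b)

# The three facts used in the post (k = 7 after the loop):
assert min(S) == a * k and max(S) == b * k and len(S) == 2 ** k
# Pigeonhole: far fewer distinct values are available than elements of S_k.
assert len(set(S)) <= k * (b - a) + 1 < len(S)
```

So for large $k$ the $2^k$ elements of $S_k$ cannot all be distinct, which is the contradiction the post relies on.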
31.12.2022 19:22
Assume the opposite: for some $a<b$ we have to use operation ($ii$) infinitely many times.

Lemma 1. Assume there is a sequence of moves $p_1,p_2,\dots,p_k$ of type ($i$) which leads from some initial configuration of numbers on the board to the final configuration, in which no moves of type ($i$) are possible. Then any arbitrary sequence of moves $q_1,q_2,\dots$ of type ($i$) from the same initial configuration will lead to the same final configuration.

Proof. Induct on $k;$ the base case $k=0$ is trivial. Notice that due to our assumption the sequence $q_i$ must end at some point. Now suppose that $q_1$ replaces $(x,x)\rightarrow (x+a,x+b)$ $(\star );$ in particular the initial configuration contains two copies of $x,$ so some move $p_n$ has to make the replacement $(\star)$. If we take the first such move, then the sequence $$p_n,p_1,p_2,\dots,p_{n-1},p_{n+1},\dots ,p_k$$leads to the same final configuration as $p_i.$ It suffices to apply the inductive step to the configuration appearing after $q_1$ $\Box$

Lemma 2. Assume there is a sequence $r_i$ of moves starting from an empty board which contains $k$ moves of type ($ii$) and leads to some final configuration. Then any sequence of moves of type ($i$) starting from the configuration of $2k$ zeroes will lead to the same final configuration.

Proof. For a configuration of $2k$ zeroes apply the moves $r_i$, skipping all those of type ($ii$); then Lemma 1 finishes $\Box$

By Lemma 2 we may assume we start from a sufficiently large number of zeroes. One may obtain the configuration $$X+1,X+2,\dots ,X+a,X+1,X+2,\dots ,X+b$$for $X=ab-a-b$ with some sequence of moves of type ($i$), because of the Chicken McNugget theorem. It is easy to see that from here we can perform moves of type ($i$) infinitely many times, contradiction $\blacksquare$
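The Chicken McNugget (Frobenius) fact used here, that every integer greater than $X=ab-a-b$ is a nonnegative combination of coprime $a$ and $b$ while $X$ itself is not, can be spot-checked by brute force. The helper `representable` below is my own hypothetical check, not part of the solution.

```python
def representable(n, a, b):
    """True iff n = a*x + b*y for some integers x, y >= 0 (brute force)."""
    return any((n - a * x) % b == 0 for x in range(n // a + 1))

a, b = 3, 5                        # coprime, as in the WLOG
frobenius = a * b - a - b          # X = 7 for (3, 5)
assert not representable(frobenius, a, b)
assert all(representable(n, a, b) for n in range(frobenius + 1, frobenius + 100))
```

This is why the configuration $X+1,\dots,X+b$ is reachable: each of its entries exceeds $X$, hence is a sum of $a$'s and $b$'s, hence is the endpoint of a chain of type-($i$) moves starting from zeros.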
04.09.2023 06:37
Assume without loss of generality that $\gcd(a, b) = 1$, since we may remove the common factor simply by dividing each number ever written on the blackboard by it. Suppose for contradiction that operation (ii) is performed infinitely many times in a sequence of operations. Define a round of operations to be a block of consecutive operations starting on a type (ii) operation and ending right before the next type (ii) operation. Note that each round is finite. Call the round beginning with the $i$-th type (ii) operation the $i$-th round/round $i$, and say that a nonnegative integer $n$ is used when a pair of the number $n$ is chosen in operation (i).

Lemma 1: If $n$ is a nonnegative integer expressible as a linear combination $n = ax + by$ with $x$ and $y$ nonnegative integers, then $n$ is used infinitely many times.

Proof: Strong induction on $n$. The base case $n=0$ follows from operation (ii) being performed infinitely many times, and the fact that after an operation (ii), $0$ must be used before the next round can begin. For the inductive step, notice that at least one of $n-a$ and $n-b$ already satisfies the lemma. Assume without loss of generality that $n-a$ does. Then, every time $n-a$ is used, an additional copy of $n$ appears on the board, and this will occur infinitely often. Every two copies of $n$ will force $n$ to be used before the current round ends. Since each round is finite, the process will repeat in a future round, and this will continue forever, proving the claim. $\square$

Lemma 2: If $n \ge \max(a, b)$ is an integer, $n-a$ is used in round $x$, and $n-b$ is used in round $y$, then there is some round $z$, between $x$ and $y$ (both inclusive), where $n$ is used.

Proof: Assume without loss of generality that $x \le y$. Then, after $n-a$ is used during round $x$, there exists a copy of $n$ on the board. If $n$ is never used from round $x$ to round $y$, then we cannot lose any copies of $n$, and in round $y$ we will gain a second copy of $n$.
But we must use the two copies of $n$ before round $y$ ends, contradicting the fact that we are not using $n$ from round $x$ to round $y$. $\square$ To finish, note that all sufficiently large integers (everything greater than $ab - a - b$, to be precise) satisfy the hypothesis of lemma 1, so they are used in some round. Then we can find $\max(a, b)$ consecutive integers that are used during some round. Let $R$ be an integer such that on or before round $R$, all these consecutive integers have been used at least once. Then by lemma 2, the number following these consecutive integers must be used sometime during or before round $R$, and the number following it, and so on forever. Therefore infinitely many integers must be used in some round up to $R$, contradicting the fact that each round is finite.
29.09.2023 03:17
Pretty easy for C6 (and way easier than C4, C5). First observe that the order in which we apply our two operations does not matter. WLOG $\gcd(a,b)=1$; otherwise we can just divide by the gcd and get an equivalent problem. Now by the Chicken McNugget theorem, for any number $k>ab-a-b$, we will be able to get a $k$ by applying the first operation, given we are supplied with sufficiently many $0$'s. Observe that it now suffices to find a multiset of integers such that, when the first operation is applied some nonzero number of times, the multiset is shifted. WLOG $a<b$. Let a "snowstorm" be the operation formed by finding the smallest value $k$ that is in a pair and applying the first operation to every pair of $k$'s on the board. I claim that we can find a multiset that is right-shifted by $1$ after applying a snowstorm. Represent our multiset by a $b$-dimensional vector containing the multiplicities of the smallest element, the number one more than the smallest element, the number two more than the smallest element, and so on. Then the vector consisting of $a$ $2$'s followed by $b-a$ $1$'s fits the bill.
30.09.2023 10:34
Groupsolved with leo.euler. More of a pain to write up than to actually solve.

Claim: We can make infinitely many moves given the board (as a multiset) $\mathcal{L} = \{0, a, \dots, (b-1)a\} \cup \{0, b, \dots, (a-1)b\}$

Proof. We can repeatedly apply the operation involving the $0$ to the first set in the union until we get $\{b, a + b, \dots, (b-1)a + b, ba\} \cup \{b, \dots, (a-1)b\}$. However, this is equivalent to shifting the entire board up by $b$. $\blacksquare$

Note that by considering the operation as removing two occurrences of $n$ and adding one occurrence each of $n + a$ and $n + b$, the operation becomes commutative up to a positivity condition. Now, FTSOC suppose that it's possible to do $(ii)$ infinitely many times in some order.

Claim: Each number $ma + nb$ appears infinitely many times.

Proof. Follows inductively. $\blacksquare$

Claim: This implies that it is also possible for a board containing $\mathcal{L}$ to have infinitely many occurrences of $(ii)$.

Proof. Wait until $(ii)$ has been done $N$ times and needs to be done an $(N+1)$st time, for arbitrarily large $N$. Then rearrange the moves so that $(ii)$ is done $N$ times at the beginning. It follows that the final config must be unique, by greedily running $(i)$ until unable to do so and counting the number of times $(i)$ is done to each number. In other words, given an intermediary config, we can reach the final config as well. We then simply take $N$ large enough that we can fully embed $\mathcal{L}$ in the board position as an intermediary config. $\blacksquare$

It remains to handle some formalities about the order of operations.

Claim: Any board containing $\mathcal{L}$ has infinitely many occurrences of $(i)$.

Proof. FTSOC suppose that it's possible to run out of the ability to do $(i)$ on such a board after finitely many moves. Since $(i)$ must be run at least once on $0$, we can move the occurrence of $(i)$ at $0$ to be the very first move, while respecting positivity. This creates a new occurrence of $a$ and $b$ immediately afterwards, which must also have $(i)$ run on them at some point. We can then shift up the move involving $(i)$ on $a$ while respecting positivity. Repeating this in the order of the first argument on $\mathcal{L}$, we end up requiring infinitely many moves through simple reordering, contradiction. $\blacksquare$
30.09.2023 10:53
Solved with YaoAoPS. WLOG $\gcd(a, b)=1$. Assume for contradiction that we apply operation (ii) infinitely many times. Essentially, a configuration can be seen as a multiset of lattice points in the first quadrant: if we take $a$ and $b$ arbitrary, then a given configuration is a process on lattice points that represent linear combinations of $a$ and $b$ in the obvious way. View two points as equivalent if they have the same signature, and use this as an equivalence relation on the points. This process starts with the origin, and operations (i) and (ii) are applied following the rules.

Lemma: The order in which operations (i) and (ii) are applied doesn't matter after a sufficient number of moves are applied.

Proof. Consider the coordinate setup as previously described, and the points corresponding to the configuration. The idea is that adding $a$ corresponds to moving $1$ unit right and adding $b$ corresponds to moving $1$ unit up, and any two coordinates are equivalent iff there exists a sequence of ``knight'' moves of the form $(\pm b, \mp a)$ between them. Operation (i) corresponds to up-right moves applied to points sufficiently far from the origin (and we eventually cover two or more distinct points with the same signature), while operation (ii) corresponds to adding another point at the origin. Thus we can eventually ``shuffle'' the order of operations and obtain the same set of points.

Claim: We can apply infinitely many moves on the configuration given by $\{0, a, \dots, (b-1)a\} \sqcup \{0, b, \dots, (a-1)b\}$.

Proof. Apply operation (i) repeatedly using the $0$ in $\{0, a, \dots, (b-1)a\}$. Essentially this shifts all the coordinates $1$ unit up, so we can apply infinitely many moves on the configuration.

The final fact we require is that by means of knight moves we have infinitely many equivalence classes. Now by the lemma let there be an infinite pile of $(0, 0)$ points, and apply operation (i) any number of times. Then using the coordinates it is easy to construct the set in the claim via a sequence of right moves and up moves, so by the claim we are done.

Remark: I think the main ideas involved in this problem are fairly natural, and in fact everything can be written up with respect to the coordinate setup. The ``Chicken McNugget'' idea here is essentially the same as not being able to move from $(-1, a-1)$ into the first quadrant by a series of knight moves, which was used in a number of other solutions.
23.10.2023 21:24
silly goofy problem

WLOG let $\gcd(a,b)=1$ and $a<b$. View the problem as follows: we have a number line indexed by the nonnegative integers. If any exist, we take some nonnegative integer $n$ with at least $2$ stones on it, and move them to $n+a$ and $n+b$. Otherwise, we add two stones to $0$. We wish to show that finitely many stones get added. Suppose that no matter what, infinitely many stones get added, so there are infinitely many moments at which every point on the number line has at most $1$ stone. Consider one of these moments and suppose that $2N$ stones are present in total. Then change the situation slightly: instead of adding stones to $0$, we start with $2N$ stones on $0$ and move them in any way we want, but don't add any new stones to $0$. Evidently, if we make two moves from different integers $n_1,n_2$ in that order such that $n_2$ has at least $2$ stones before we move from $n_1$, we could swap their order and nothing after these two moves changes. Therefore, starting from any previously "valid" series of moves (i.e. one where we only move from $0$ if there are no other options), we can reach any series of moves without the condition on moves from $0$ (for example, we could start by moving $N$ times from $0$), such that these two series produce the same ending position; in particular, this means that we can make any series of moves without "the condition" and it will terminate. Call this new set of rules, namely fixing a positive integer $N$, placing $2N$ stones on $0$, and making whatever moves we want, the "modified" version. If we start with enough stones at $0$ and make an appropriate series of moves, we can get a stone onto anything of the form $ax+by$, where $x,y$ are nonnegative integers. By Chicken McNugget, every integer greater than $ab-a-b$ can be written in this form. Thus if we make $N$ large enough and perform the appropriate moves in the "modified" version, we can get at least $2$ stones on each of $m,\ldots,m+a-1$ and at least $1$ stone on each of $m+a,\ldots,m+b-1$ for some large enough $m$. But from this state we can make infinitely many moves without adding any more stones to $0$, by moving from $m$, then $m+1$, then $m+2$, and so on. This contradicts our assumption that any sequence of "modified" moves starting with $2N$ stones on $0$ must terminate.
28.12.2023 17:17
Without loss of generality, let $a<b$ and $\gcd(a,b)=1.$ The configuration of the problem clearly prompts us to consider an alternate formulation. The following infinite process takes place on an initially empty coordinate grid. If there are two stones at the same point on the grid, or at points differing by an integer multiple of $(b,-a)$, we move one up by $1$ and one right by $1$. If no move of the above form is possible, we drop two stones at the origin. Clearly, a stone at $(x,y)$ represents the number $ax+by$. It is clear that the location of a stone depends only on the nature of the moves made on it, including its birth, and not on the order in which they were made. In particular, if we assume for contradiction that $(ii)$ is applied infinitely many times, then we can simply add an infinite number of stones at the origin. Hence, if we consider the configuration with a stone on each of the points $(0,1)$, $(0,2)$, $\dots$, $(0,a-1)$ and $(1,0)$, $(2,0)$, $\dots$, $(b-1,0)$, we can use the infinitely many origin stones to move everything up by one, contradiction! Since we can place infinitely many zeroes, we can always reach a configuration of this form, since by the Chicken McNugget theorem we can make all sufficiently large numbers.
05.01.2024 06:42
same as #4 First assume $\gcd{(a,b)}=1$ and $b>a$. Each integer on the board can be represented by $Xa+Yb$ with $Y\in \{0,\dots,a-1\}$. Consider the infinite lattice grid of points $(x,y)$ where $x\ge 0$ and $y\in \{0,\dots,a-1\}$. At each point $(X,Y)$ we assign a frequency of numbers $Xa+Yb$ on the board. Then the initial state is all zeros. Operation $(i)$ performs the following on a point $(x,y)$ with value at least $2$: If $y<a-1$, take \[(x,y),(x,y)\to (x+1,y),(x,y+1).\]If $y=a-1$, take \[(x,y),(x,y)\to (x+1,y),(x+b,0).\]Operation $(ii)$ simply adds $2$ to the value at $(0,0)$. Suppose that there is a sequence of moves in which operation $(ii)$ is applied infinitely many times. Consider the state after $2^{b-1}-1$ operations of $(ii)$ followed by arbitrary operations of $(i)$ until no such operations are possible. Trivially all values are either $0$ or $1$. The value at $(0,0)$ is clearly $0$ since all operations preserve the parity of this value. Claim: The values at $(1,0),\dots,(b-1,0)$ are all $1$. Proof: By the nature of operation $(i)$, the values at $(r,0)$ for $r\in \{1,\dots,b-1\}$ may only be incremented through the step \[(r-1,0),(r-1,0)\to (r,0).\]Evidently we begin with a value of $2^b-2$ at $(0,0)$. Then for each $r\in \{1,\dots,b-1\}$, we perform operation $(i)$ a total of $2^{b-r}-1$ times at $(r-1,0)$. We can now verify that all said values are equal to $1$. $\blacksquare$ Claim: The values at $(0,1),\dots,(0,a-1)$ are all $1$. Proof: The proof is essentially verbatim as the one above. It is important, however, that $b>a$. $\blacksquare$ It turns out that this is sufficient. Perform the $2^{b-1}$th operation of $(ii)$. The following claim is that operation $(i)$ may now be performed indefinitely, providing the desired contradiction. Claim: Operation $(i)$ must be performed at every point $(x,y)$ in the infinite lattice grid. Proof: Split the lattice grid into blocks: block $k\ge 0$ consists of lattice points $(x,y)$ where $x\in [kb,(k+1)b)$. 
We use induction to prove the claim. The base case is proven through yet another induction, this time on $y$ from $0$ to $a-1$. At the end, each point in block $0$ is exhausted. In addition, the original frequencies ($2$ at $(0,0)$, and $1$ at the "axes") have been maintained, but the $x$-coordinates have been incremented by $b$. The increment is enough to complete our induction, as we have achieved the exact same configuration within block $k+1$. This may also be thought of through the following argument: for $k\ge 0$, the residue from performing operation $(i)$ on block $k$ provides the jump-start for operation $(i)$ in the next block $k+1$. We are done.
22.10.2024 00:45
WLOG $\gcd(a,b)=1$ and $a<b$. Notice that the order of the operations does not matter. Let $n$ be large, and write $0$ $2n$ times. We claim that operation (i) can then always be performed. By the Chicken McNugget theorem, there exists $c$ such that $c+1$, $\ldots$, $c+b$ are all nonnegative linear combinations of $a$ and $b$. Then, for large $n$, we can get at least $2$ copies of each of $c+1$, $\ldots$, $c+a$ and at least one copy of each of $c+a+1$, $\ldots$, $c+b$. Now we can perform operation (i) forever: operate on $c+1$, then increase $c$ by $1$ and repeat.