Let $a_1, a_2, a_3, \dots$ be an infinite sequence of positive integers, and let $N$ be a positive integer. Suppose that, for each $n > N$, $a_n$ is equal to the number of times $a_{n-1}$ appears in the list $a_1, a_2, \dots, a_{n-1}$. Prove that at least one of the sequences $a_1, a_3, a_5, \dots$ and $a_2, a_4, a_6, \dots$ is eventually periodic. (An infinite sequence $b_1, b_2, b_3, \dots$ is eventually periodic if there exist positive integers $p$ and $M$ such that $b_{m+p} = b_m$ for all $m \ge M$.)
Problem
Source: IMO 2024 P3
Tags: combinatorics, IMO 2024
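For readers who want to experiment with the rule before reading the solutions below, here is a minimal Python sketch (an editorial illustration, not part of the problem; the starting block is an arbitrary made-up example) that extends an initial block $a_1,\dots,a_N$ and makes the eventual small/large alternation visible.

from collections import Counter

def extend(initial, steps):
    # a_n = number of times a_{n-1} appears among a_1, ..., a_{n-1}
    a = list(initial)
    counts = Counter(a)            # counts[v] = occurrences of v so far
    for _ in range(steps):
        nxt = counts[a[-1]]
        a.append(nxt)
        counts[nxt] += 1
    return a

print(extend([1, 1, 2, 1, 3], 30))   # tail alternates 1, 4, 1, 5, 1, 6, ...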
16.07.2024 18:02
I'm not sure which one is the actual thread. Solution sketch: Consider all numbers that occur infinitely often. This set is either $\mathbb{N}$ or $\{1,\dots,c\}$. If it is $\mathbb{N}$, consider a large $C$ and the minimal index with two numbers $>C$ to get a contradiction. Otherwise, the sequence goes small, large, small, large, and all large numbers appear exactly $c$ times. Prove that the pair $(n,1)$ occurs before $(n+1,1)$, and similarly for all of $1,2,\dots,c$. Prove that there exists $C_i$ such that $(n,i)$ is followed by $(n+C_i,\cdot)$. In every $c+1$ consecutive steps, two smalls repeat, so the differences between the larges are bounded by $c\max C_i$. If $n_i$ is the number of appearances of $i$, then this implies $n_i-n_1$ must be bounded, so it eventually repeats, giving periodicity. I will hopefully elaborate on this writeup after day 2.
16.07.2024 19:00
where are you geometry!!!!!!
16.07.2024 19:03
MathAssassin wrote: where are you geometry!!!!!! maybe in day $2$
16.07.2024 19:35
Just solved this problem in about 2 hours. Many people commented that it is "NNN" in Day 1, but I believe this is more like a combi problem. I think the difficulty is not so high, but it is very annoying to write up clearly, so here is a sketch. I think it is nice to understand the sequence as points in $\mathbb N^2.$ For $a_n,$ define $b_n$ as the number of times it has appeared in $a_1\sim a_n,$ and map it to the point $(a_n,b_n).$ Each time we look at the number of points in the column of the current point to generate a new point. What we actually want to prove is that after finitely many rounds, as in the picture below, only finitely many rows and columns extend infinitely, and the number of such columns and rows is the same. Now obviously when a point is in an "infinite row", the next point is in an "infinite column", and so on, so the odd/even subsequence is eventually periodic. There are many details to complete, but the main idea is clear. We want to prove $\min\{a_n,b_n\}$ is bounded. First, $a_n$ is unbounded, so there exists $a_M$ sufficiently big with $a_M>a_1,\ldots ,a_{M-1}$ (this means a new column); now each column to the right of that column has at most $a_M-1$ points (coming from columns $1\sim a_M-1$), so it is bounded.$\Box$
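A quick numerical rendering of this picture (a hedged sketch with an invented starting block; point_view is my own helper name): print each point $(a_n,b_n)$, where $b_n$ counts the occurrences of $a_n$ among $a_1,\dots,a_n$, and the finitely many infinite rows and columns become visible as the coordinates that keep recurring.

from collections import Counter

def point_view(initial, steps):
    a = list(initial)
    counts = Counter()
    pts = []
    for x in a:                      # points for the initial block
        counts[x] += 1
        pts.append((x, counts[x]))
    for _ in range(steps):
        nxt = counts[a[-1]]          # rule: next term = current size of that column
        a.append(nxt)
        counts[nxt] += 1
        pts.append((nxt, counts[nxt]))
    return pts

for p in point_view([2, 1, 1, 3], 20):
    print(p)                         # (a_n, b_n) = (column, height within the column)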
16.07.2024 20:03
I'll try to post a personal solution to P3 once I finish the full write-up.
16.07.2024 20:18
Based on the subject of P1, this problem must be a C. But how on earth is this a C?? It seems as "A" as 2009 P3 and 2015 P6 (yes I know the latter was also C, but that was evidently misclassified).
16.07.2024 20:40
This is C. There is no algebra part in the solution
16.07.2024 21:54
Interesting combinatorics in the disguise of a sequence problem. The key is to prove that sufficiently large numbers appear finitely many times in such a sequence, and that the number of appearances is eventually the same for all sufficiently large numbers. Denote by $f(n, i)$ the number of occurrences of $i$ among the first $n$ entries $a_1, \ldots, a_n$ of the sequence. We have $a_n = f(n-1, a_{n-1})$ for all $n > N$. Select a sufficiently large integer $M > N$ such that $f(N, j) = 0$ for all $j \ge M$ and $f(N, j) < M$ for all $j <M$. We begin by observing that starting from $n = N+1$, any pair $(a_n, a_{n+1})$ appears only once in the sequence. This means that all the numbers that precede occurrences of the same number $j$ in the sequence must be mutually distinct. Claim 1: For all $t > N$ and $M \le i < j$, $f(t, i) \ge f(t, j)$.
Claim 2: $f(t, K)\le K-1$ for all $t$.
Due to claim 1 and claim 2, for all $t$ and $i \ge K$, we have $f(t, i) \le K-1$. This means that for any given $i$, $f(t, i)$ converges to some $B_i \le K-1$ as $t \to \infty$ (due to monotonicity in $t$). Due to claim 1 again we can see that $B_i\ge B_j$ for all $K\le i < j$, so $\lim_{i\to \infty} B_i = c$ for some $0\le c \le K-1$. Hence, there exist constants $T, M, c$ such that $f(t, i) = c$ for all $t > T, i > M$. Claim 3: $c \ge 1$ and the numbers $1, 2, \ldots, c$ are the only numbers that occur infinitely often in the sequence.
Claim 3 implies that there exist sufficiently large integers $N'$ and $M$ such that for all $n > N'$, either $a_n > M$ or $a_n \le c$. Note that we can again select $N'$ to be large enough so that $f(N', i) > c$ for all $1\le i \le c$, so that any number that comes after some $i$ with $1\le i \le c$ must be larger than $M$. This creates a sequence alternating between "large" and "small" numbers. It is sufficient to show that the sequence of "small" numbers is eventually periodic. The tricky part of this is to control the gap between occurrences of the same number. Claim 4: There exists a constant $D$ such that for any $t$, $\max_{1\le i \le c} f(t, i) - \min_{1\le i\le c} f(t, i) \le D$.
For a number $n > M$, define $i_{n, 1} < i_{n, 2} < \ldots < i_{n, c}$ to be the indices at which $a_{i_{n, k}} = n$. Denote the difference tuple $T_n = (i_{n, 2} - i_{n, 1}, \ldots, i_{n, c} - i_{n, c-1})$. Claim 5: There is a constant $D'$ such that $i_{n, h} - i_{n, h-1} \le D'$ for all $n > M$ and $2\le h\le c$.
Now, since there are at most $D'^{c-1}$ values for the tuples $T_n$ for $n \ge M$, $T_n$ must be eventually periodic. Let $T$ be the period of $T_n$; then for every sufficiently large $n$ and $1\le h\le c$, we have $i_{n+T, h} - i_{n, h} = i_{n+T, h} - i_{n+T, 1} - (i_{n, h} - i_{n, 1}) + i_{n+T,1} - i_{n, 1} = i_{n+T,1} - i_{n, 1}$. Now, we can proceed similarly to claim 5 to show that there is a constant $B$ such that $i_{n+T,1} - i_{n, 1} \le B$ for all $n$ (brief idea: the difference $i_{n+T,1} - i_{n, 1}$ is the difference between the $n^\text{th}$ 1 and the $(n+T)^\text{th}$ 1, hence occurrences of $2, \ldots, c$ can only be at most $T+2D$ for each of them). This again means that $i_{n+T,1} - i_{n, 1}$ is eventually periodic, so there is a period $C$ such that $i_{n+T+C,1} - i_{n+C, 1} = i_{n+T,1} - i_{n, 1}$ for all $n$ sufficiently large. It is important to note that eventual periodicity is implied due to the non-trivial fact that once a pattern repeats, then by induction it will keep repeating. I'll leave this as some details for others to work on. Let $b = i_{n+T+C, 1} - i_{n, 1}$. Then for any $n$ sufficiently large and $1\le h\le c$, $a_{i_{n, h} +1} = h = a_{i_{n+T+C, h} + 1} = a_{i_{n, h} + b+1}$. This implies that the "small" sequence is eventually periodic, as desired. Due to my laziness, I might have avoided many small details of the proof. Any comments are appreciated.
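To see the constant $c$ from this argument on a concrete example (an illustrative sketch only; the starting block and the helper name f_table are made up, and $f$ is as defined in the post above), one can tabulate $f(t,i)$ and watch the counts of large values freeze while the counts of $1,\dots,c$ keep growing.

from collections import Counter

def f_table(initial, steps, values):
    a = list(initial)
    counts = Counter(a)
    for _ in range(steps):
        nxt = counts[a[-1]]          # a_n = f(n-1, a_{n-1})
        a.append(nxt)
        counts[nxt] += 1
    return {i: counts[i] for i in values}    # f(t, i) at t = len(initial) + steps

print(f_table([2, 1, 1, 3], 200, values=range(1, 12)))
print(f_table([2, 1, 1, 3], 400, values=range(1, 12)))

For this particular block, the output shows that the small values are $1, 2, 3$ and every later large value appears exactly $3$ times, so $c = 3$ here.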
16.07.2024 22:19
Redacted
17.07.2024 03:48
I don't think the claim that the number of appearances of $1$ minus the number of appearances of $c$ is bounded is easy. On the other hand, if I'm wrong, funny joke "If I had a nickel for every IMO P3 which consisted of a sequence where the sol was tuple pigeonhole I'd have two nickels." For instance, the proof of claim $4$ in #11 cites this without really proving why. #2 also doesn't really show this. "In every c+1, two smalls repeat, so the difference between the larges are bounded by c max Ci" doesn't actually show this, I don't think. Consider the sequence $3, 1, 2, 3, 2, 2, 2, 1, \dots$ where $1$s occur whenever the index is $2^{2k+1}$ and $3$s occur whenever the index is $2^{2k}$. Even if the change between each two elements is bounded, you don't get a nice eventual bound between $1$ and $2$ at the same time as $2$ and $3$. There may be a way to patch this but I don't see it immediately. I don't see the main ideas behind #10 and #12 but I might be crazy. My solution to that is a world of pain, but the gist of it is that the eventual behavior of the sequence corresponds to a cycle on the integers that repeats infinitely. And yeah +1 to the people who say this is combo. EDIT: Some sols below successfully show this, as well as #11. Red text means I no longer agree with the statement. Let $s_t(k)$ denote the number of occurrences of $k$ in $a_1, \dots, a_t$. Claim: Let $C = \max\{a_1, a_2, \dots, a_N, N\} + 1$. Then all numbers $\ge C$ appear at most $C-1$ times in the sequence. Proof. FTSOC suppose that $k \ge C$ is the first integer to appear $C$ times at time $t$. If $k \ne C$, then for each integer $l$ such that $s_t(l) \ge k$, \[ s_0(l) < C < k < s_t(l) \]so by discrete continuity $C$ must've appeared $C$ times before $k$. However, $C$ can't appear $C$ times first, because the only numbers which contribute to the number of times it appears are in fact $1, \dots, C-1$, which gives at most $C-1$ occurrences. $\blacksquare$ Claim: We have that \[ s_t(C) > s_t(C+1) > s_t(C+2) > \dots \]Proof. The earlier discrete continuity argument gives this. $\blacksquare$ Define $s(x)$ as the eventual value of $s_t(x)$ for large $t$. Now, let $d$ be the limit of $s(C) > s(C+1) > \dots$. We then get that all the integers $\{1, \dots, d\}$ appear infinitely many times and $s(K) = d$ for sufficiently large $K$. Let $N = \max\{s(d+1), \dots, s(C-1), C-1\}$. Then note that eventually, we have that $s_t(i) > N$ for all $i \le d$ and $s_t(i) < N$ for all $i > d$. This means that we effectively alternate between "big" numbers and "small" numbers. Now, the idea is as follows. Consider only the sequence after the movement of the small numbers (so skipping $2$ each time). Eventually, we always have an ordering of $\{a_1, a_2, \dots, a_d\}$ of $\{1, 2, \dots, d\}$ such that \[ s_t(a_d) \ge s_t(a_{d-1}) \ge s_t(a_{d-2}) \ge \dots \ge s_t(a_1) \]and we have that $s_t(i) = l$ for $l \in (s_t(a_{i+1}), s_t(a_i)]$ (where for simplicity $a_{d+1} = 0$). Whenever $a_t = a_{t+1} = \dots = a_{t+e}$, we don't choose a specific ordering of these values for now, only later. Now suppose we just had $s_t(a_i)$ move up by $1$. This then makes some large number have $i$ appearances, so this then increases $s_t(i)$. Remark: Motivation-wise, we can consider there to be $d$ frogs on numbers, and the numbers between the frogs to be some ordering. Then define $\sigma$ as the permutation $a_i \to i$. Now suppose that $a_l$ is the last increase. We construct the cycle $C$ by repeatedly taking $a_l \to l$.
Whenever more than one $a_i$ have the same image under $s_t$, we select the ordering such that the $i$ added to the cycle is ordered first among these $a_i$. The point of this strange ordering is that we can then check that the ordering we choose never has an $a_j$ in $C$ jump over another $a_i$ in $C$, and thus that no element repeats in $C$. This then forces $C$ to eventually become a cycle on $a_l$. Then we end up repeatedly increasing the numbers in $C$ until some element in $C$ jumps over an element not in $C$ (if this never happens, either $C$ has size $d$ or some numbers in $\{1, 2, \dots, d\}$ are left behind; the latter is not allowed). Claim: Jumping over an element not in $C$ increases the size of $C$. Proof. Consider the first occurrence of such a jump. We have that $s_{t+1}(a_i) > s_t(a_{i-k}) = s_t(a_{i-k+1}) = \dots = s_t(a_i)$. By the ordering given by the selection process, we must have that $a_{i-k}, \dots, a_{i-1}$ are not in $C$ (or else they would've jumped before). Then consider the permutation $\sigma \colon a_i \to i$, which $C$ is a cycle in. We end up creating the new ordering where $a_i \mapsto i-k, a_{i-1} \mapsto i, a_{i-2} \mapsto i-1, \dots, a_{i-k} \mapsto (i-k+1)$. Effectively, we've ended up increasing the size of $C$ by grafting the cycle that $a_{i-k}$ was part of onto it. We can check that this also preserves the property that no element of $C$ jumps over another element of $C$, modulo choosing a potential ordering on $a_{i-k}, \dots, a_{i-1}$. $\blacksquare$ We then keep using the earlier process until $C$ becomes all of $\{1, 2, \dots, d\}$, at which point $C$ is our desired periodicity. Remark: If no frogs share positions, then this becomes a lot easier to word. Here's an example in that case. We first have the frogs ordered $6, 1, 5, 3, 4, 2$ with cycle $(2154)$. Then this eventually forces $3$ and $5$ to swap and we get $6, 1, 3, 5, 4, 2$ which has $(21534)$ as a complete cycle. This cycle then detaches completely, contradiction. Alternatively, in other cases the cycle containing $6$ is grafted on. Another way of finishing is to note that $s_t(a_d) > s_t(a_{d+1}) + C_d$ for some $d$ and constant $C_d$. So the frogs are roughly in the order that we expect them to be. Then if we have $s_t(a_d) - s_t(a_{d+1})$ too large, then it detaches the entire cycle, contradiction. Tuple pigeonhole then finishes.
17.07.2024 05:34
YaoAOPS wrote: For instance, the proof of claim $4$ in #11 cites this without really proving why. I omitted quite a few details to justify this, but essentially it is the line rtyu852 wrote: Note that when $t > N'$, any occurrence of $y$ must be preceded by an occurrence of a number $j > M$. Such $j > M$ must give an occurrence of $x < y$ some time before that. Why is this relevant? What this means is that when $t$ is sufficiently large, the occurrences of $y$ happen exactly when some "large" number $j >M$ appears for the $y$-th time. When $j$ occurs $x$ times, which happens at some $t' < t$, the number $x$ appears as $a_{t'+1}$. In short, every occurrence of $y$ when $t$ is sufficiently large must be accompanied by an occurrence of $x$ before it. So if you exclude a finite number of first occurrences of $y$, say $L_{x, y}$ of them, then $x$ must occur at least as many times as $y$, and that explains $f(t, x) \ge f(t, y) - L_{x, y}$ for all $t$.
17.07.2024 05:46
Redacted
17.07.2024 05:51
Didn't anyone consider Graph Theory for this?
17.07.2024 06:02
Obviously $(a_i)$ is unbounded. Let $M=\max\{a_1,\ldots,a_N\}$. For each $k$ and $i>N$, let $S_{k,i}$ denote the set of all integers $t$ such that $t$ appears at least $k$ times in $a_1,\ldots,a_i$; obviously $S_{k,i} \subseteq S_{k,i+1},S_{k-1,i}$. For convenience, define $m_{k,i}=\max S_{k,i}$. We have the following fact about $S_{k,i}$. Claim 1: For each $i$, if $m_{k,i}\geq\max\{M,N\}$ then we have $m_{k,i+1}-m_{k,i} \in \{0,1\}$. Proof: Suppose for contradiction that $m_{k,i+1}\geq m_{k,i}+2$, i.e. the above difference is larger. Since $m_{k,i+1}>M$ any occurrences of $m_{k,i+1}$ must occur after $a_N$. Hence there must have existed at least $k$ distinct "$m_{k,i+1}$-th occurrences" occurring after $a_N$ and before $a_{i+1}$. But each $m_{k,i+1}$-th occurrence implies the existence of a (unique) $(m_{k,i+1}-1)$-st occurrence before it, and since $m_{k,i+1}-1>N$ all of these must occur after $a_N$, hence are each followed by $m_{k,i+1}-1$. Thus $m_{k,i+1}-1$ occurs at least $k$ times before $a_{i+1}$; contradiction. $\blacksquare$ This implies that for each fixed $k$, $[m_{k,i}] \setminus S_{k,i}$ is eventually constant: either $m_{k,i}$ is bounded by $\max\{M,N\}$ and eventually constant, after which the result is clear, or the aforementioned expression forms a chain of subsets from the point where $m_{k,i}>\max\{M,N\}$ onwards. Moreover we have the following claim. Claim 2: Actually, if $k>M$ then $m_{k,i} \leq M$ for all $i$. Proof: Strong induction on $k$ ("base case" of $k=M$ vacuous). All occurrences of $k$ must occur after $a_N$, so they occur right after some $t$ appears the $k$-th time. But if the hypothesis holds then the only choices for $t$ are $1,\ldots,M$, which is too few. $\blacksquare$ Corollary: The terms of $(a_i)$ eventually alternate between greater than $M$ and at most $M$. Indeed each term greater than $M$ must be followed by a term at most $M$ by the above claim. Furthermore, since there are only finitely many values at most $M$, there exists a point in the sequence after which every value at most $M$ has either appeared more than $M$ times or no longer appears for the rest of the sequence; from this point on every term at most $M$ will be followed by a term greater than $M$. Now since $S_{k,i} \subseteq S_{k-1,i}$ there exists some positive integer $t\leq M$ such that the elements of the sequence which appear infinitely many times are precisely $1,\ldots,t$ (these are the same as the $k$ with $S_{k,i}$ unbounded). Because of claim 1 and our fact on $[m_{k,i}] \setminus S_{k,i}$, it is not hard to see that for all $k \leq t$ there exists an integer constant $c_k$ such that the number of occurrences of $k$ in $a_1,\ldots,a_i$ is equal to $m_{k,i-1}+c_k$ for all large enough $i$: eventually $S_{k,i-1}$ only gains elements "at the top", and if $a_{i-1}$ is a $k$-th occurrence then $a_i=k$ by definition. Now we consider the following scenario for large enough $i$. Suppose that $a_i=k \leq t$. Then by the above $a_{i+1}=m_{k,i-1}+c_k=m_{k,i}+c_k$ (since $a_i$ is not a $k$-th occurrence). This must be a large element, hence it's an $\ell$-th occurrence for some $\ell \leq t$. Furthermore by claim 1 it follows that $a_{i+1}=m_{\ell,i}+1$ as well, and then $a_{i+2}=\ell$ and $m_{\ell,i+1}=m_{\ell,i}+1$. In particular $m_{k,i}-m_{\ell,i}=1-c_k$. This equation, along with the obvious fact that if multiple $\ell$ satisfy it then we should take the smallest, uniquely determines $\ell$. Now note that $m_{k,i}$ is nonincreasing in $k$ for fixed $i$.
Consider some $i$ such that $a_i=1$ in the above scenario (there are clearly infinitely many) and suppose we had $1 \leq r<t$ such that $m_{r,i}-m_{r+1,i}$ was larger than the maximum of $|1-c_k|$. Then no $\ell$ can ever be at least $r+1$, since prior to $\ell$ being at least $r+1$ none of $m_{r+1,j},\ldots,m_{t,j}$ can increase, while none of $m_{1,j},\ldots,m_{r,j}$ can decrease as $j \geq i$ increases, so at any point $m_{(\leq r),j}-m_{(>r),j} \geq m_{r,j}-m_{r+1,j} \geq m_{r,i}-m_{r+1,i}$ which is too large to equal any $1-c_k$. But this contradicts the fact that $r+1 \leq t$ should appear infinitely many times. Hence for large enough $i$, whenever $a_i=1$ the tuple $(m_{1,i}-m_{2,i},\ldots,m_{t-1,i}-m_{t,i})$ must have all entries bounded independently of $i$, so there are finitely many tuples. Hence by pigeonhole there exist $i \neq j$ such that $a_i=a_j=1$ (so by the corollary $i \equiv j \bmod 2$) and the corresponding tuples are identical, which implies $(m_{1,j},\ldots,m_{t,j})=(m_{1,i}+p,\ldots,m_{t,i}+p)$ for some integer $p$. But then by simultaneous induction it's not hard to see that $a_{i+2n}=a_{j+2n}$ and $a_{i+2n-1}+p=a_{j+2n-1}$ for all positive integers $n$, which establishes the desired result of eventual periodicity. $\blacksquare$ Remark: This solution is very slick. None of the steps are difficult to prove, but I found some of them very difficult to find and put together. There's a lot of structure in the periodic blocks, but we sort of ignore all of it in our finish. The finish requires viewing the problem in a pretty strange/unnatural fashion. I was tempted for a long time to break the sequence into blocks between maximal elements, which feels like it should capture the structure quite well, but I couldn't find a way to finish. After being stuck on this approach I eventually tried to do random things. One of which was viewing the problem "through" $m_{k,i}$—I had proven all of the previous claims about $S_{k,i}$, but was using them as "tools" rather than foundations for the entire approach. Once I viewed the problem through $m_{k,i}$ this approach became clear, especially since I had some vague notion that $c_k$ should exist already.
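A quick empirical check of Claim 1 above (purely illustrative; the starting block is invented and track_m is my own helper name): track $m_{k,i}$, the largest value occurring at least $k$ times among $a_1,\dots,a_i$, and confirm that once it exceeds $\max\{M,N\}$ it only ever increases in steps of $0$ or $1$.

from collections import Counter

def track_m(initial, steps, k):
    a = list(initial)
    counts = Counter(a)
    m = max((v for v, c in counts.items() if c >= k), default=0)
    history = [m]
    for _ in range(steps):
        nxt = counts[a[-1]]
        a.append(nxt)
        counts[nxt] += 1
        if counts[nxt] >= k:
            m = max(m, nxt)          # the set {v : v occurs >= k times} only grows
        history.append(m)
    return history

hist = track_m([2, 1, 1, 3], 300, k=2)
# max{M, N} = max{3, 4} = 4 for this block; check the increments beyond that point
print(all(cur - prev in (0, 1) for prev, cur in zip(hist, hist[1:]) if prev >= 4))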
17.07.2024 06:24
Sketch? Let $M=\max(a_1,\dots,a_N)$. Claim 1: If $a_n>M$ then $a_{n+1}\le M$. Note that if the pair $x,y$ occurs in the sequence where $x>M$, then the pair $x,y-1$ must have occurred before that. Furthermore, the $i$th occurrence of $x$ will occur before the $i$th occurrence of $x+1$, because to get an element appearing $x+1$ times, it must first appear $x$ times. Hence, the pair $x-1,y$ must have also occurred before. Hence, if there exists $a_n>M$ and $a_{n+1}>M$, then the first time this happens will be at $M+1,M+1$. To get $M+1$s, we have the pairs $1,M+1$ ... $M,M+1$. How are we going to get another $M+1$ after this to get $M+1,M+1$? We will need $x,M+1,M+1$ where $x>M+1$, a contradiction. Hence, for sufficiently large $n$, $a_n$ alternates between $>M$ and $\le M$. (Pick $n$ such that $1,\dots,M$ have occurred $>M$ times.) Reset $M$ to be the largest integer that appears infinitely many times. Claim 2: For big $n$, define $f_i(n)$ to be the number of $i$s from $a_1$ to $a_n$. Then, $f_i(n)-f_{i+1}(n)$ is bounded. Obviously, after some time, for every $x,i+1$ that we add, we will first add an $x,i$. This is true for all $i$, so $f_a(n)-f_b(n)$ is always lower bounded for $a<b$. Suppose otherwise. Then, there exists $i$ such that $f_i(n)-f_{i+1}(n)$ can take arbitrarily large values. Then, for large enough $k$, the pairs $1,k$ to $i,k$ will occur way before the pairs $i+1,k$ to $M,k$. Consider what happens after $i,k$. The next number, $j$, must be $\le i$, since $k$ only appeared $\le i$ times. Let the number after $j$ be $x$. $x$ is also large enough so that $1,x$ to $i,x$ occur way before the pairs $i+1,x$ to $M,x$. Hence, $x\le i$. By induction, this means that we won't see $M$ from some point onwards, a contradiction. Claim 3: The giant pigeonhole. Note that at any point, the number of $x$ such that $x,1$ exists but $x,M$ doesn't is bounded, by claim 2. Call these the set of "active integers", where the value of $x$ in this set is equal to the number of times it has appeared. For any $n$ where $a_n\le M$, consider the values:
1. The tuple $f_1(n)-f_2(n),\dots,f_1(n)-f_M(n)$. This can take finitely many values by claim 2.
2. $a_n$.
3. The values of the integers in the active set (all $<M$, and the active set has bounded size, so finitely many possibilities).
4. $a_{n+1}$'s position in the active set.
By pigeonhole, there are two of these that are the same. Yet, all this information determines the rest of the sequence, hence the small half of the sequence is periodic. I can imagine this is an absolute pain to write up in contest.
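To make the data behind the giant pigeonhole concrete (a rough sketch only; the starting block, the choice $M=3$, and the helper name small_states are my own), one can record, at each index where $a_n$ is small, the pair consisting of $a_n$ and the difference vector $(f_1(n)-f_2(n),\dots,f_1(n)-f_M(n))$, and observe that only finitely many distinct states ever occur.

from collections import Counter

def small_states(initial, steps, M):
    a = list(initial)
    counts = Counter(a)
    states = []
    for _ in range(steps):
        nxt = counts[a[-1]]
        a.append(nxt)
        counts[nxt] += 1
        if nxt <= M:                 # a_n is a "small" term
            diffs = tuple(counts[1] - counts[i] for i in range(2, M + 1))
            states.append((nxt, diffs))
    return states

# for the block below, the recurring small values turn out to be {1, 2, 3}, so M = 3
for s in small_states([2, 2, 1, 1, 3], 60, M=3)[-12:]:
    print(s)                         # finitely many distinct (a_n, difference vector) states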
17.07.2024 06:58
Augh. I probably fakesolved, but I do not care; this took quite some time and a headache. From a brief scroll this seems isomorphic to some of the solutions posted above, except for maybe the final pigeonhole application (IM SORRY)
17.07.2024 07:26
Here's a buffed version that seems to be true (I've tested a few sequences). Buffed IMO 2024/3 wrote: Let $a_1, a_2, a_3, \dots$ be an infinite sequence of positive integers, and let $N$ be a positive integer. Suppose that, for each $n > N$, $a_n$ is equal to the number of times $a_{n-1}$ appears in the list $a_1, a_2, \dots, a_{n-1}$. Prove that at least one of the sequence $a_1, a_3, a_5, \dots$ and $a_2, a_4, a_6, \dots$ is eventually periodic with minimal period $p$, and the periodic part is a permutation of $\{1, 2, \dots, p\}$.
17.07.2024 07:28
guys are u sure that this is the actual imo problems like if this is fake yall wasting ur time
17.07.2024 07:56
nmlikesmath wrote: guys are u sure that this is the actual imo problems like if this is fake yall wasting ur time It is. Even if it is not so, it is still an insanely good problem that is worth solving.
17.07.2024 09:35
YaoAOPS wrote: #2 also doesn't really show this. "In every c+1, two smalls repeat, so the difference between the larges are bounded by c max Ci" doesn't actually show this I don't think. Consider the sequence $3, 1, 2, 3, 2, 2, 2, 1, \dots$ where $1$s occur whenever the index is $2^{2k+1}$ and $3$s occur at every $2^{2k}$. Even if the change between each two elements is bounded, you don't get a nice eventual bound between $1$ and $2$ at the same time as $2$ and $3$. There may be a way to patch this but I don't see it immediately. The key idea is that every large number appears $c$ times, so if there is a number $N+x$ that appears once before $N$ is done, then if $x$ is large enough, $N$ cannot ever be reached by bounded difference.
17.07.2024 12:20
YaoAOPS wrote: Here's a buffed version that seems to be true (I've tested a few sequences). Buffed IMO 2024/3 wrote: Let $a_1, a_2, a_3, \dots$ be an infinite sequence of positive integers, and let $N$ be a positive integer. Suppose that, for each $n > N$, $a_n$ is equal to the number of times $a_{n-1}$ appears in the list $a_1, a_2, \dots, a_{n-1}$. Let $M = \max\{a_1, a_2, \dots, a_N\}$ Prove that at least one of the sequence $a_1, a_3, a_5, \dots$ and $a_2, a_4, a_6, \dots$ is eventually periodic with period $M$, and the periodic part is a permutation of $\{1, 2, \dots, M\}$. Not sure if this is what you want, but if you take the sequence $2,1,1,\dots,1$, where there are $1434$ $1$s, the statement seems to be false?
17.07.2024 12:42
Yeah, there's no guarantee to the value of $M$. I've tried to correct it but it's a lot weaker now. Here's a short python script for testing values.

k = [1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 4, 8, 8, 8, 8, 8, 8, 8, 9, 13, 14, 17, 17, 21]
for i in k:
    print(i)
while True:
    t = k.count(k[-1])
    print(t)
    k.append(t)
17.07.2024 17:17
rtyu852 wrote: Now, since there are at most $D'^{c-1}$ values for the tuples $T_n$ for $n \ge M$, $T_n$ must be eventually periodic. Let $T$ be the period of $T_n$, and let $b = i_{n+T, 1} - i_{n, 1}$, then for every sufficiently large $n$ and $1\le h\le c$, we have $i_{n+T, h} - i_{n, h} = i_{n+T, H} - i_{n+T, 1} - (i_{n, h} - i_{n, 1}) + i_{n+T,1} - i_{n, 1} = i_{n+T,1} - i_{n, 1}$. Now, we can proceed similar to claim 5 to show that there is a constant $B$ such that $i_{n+T,1} - i_{n, 1} \le B$ for all $n$ (brief idea: the difference $i_{n+T,1} - i_{n, 1}$ is the difference between the $n^\text{th}$ 1 and $(n+T)^\text{th}$ 1, hence occurrences of $2, \ldots, c$ can only be at most $T+2D$ for each of them). This again means that $i_{n+T,1} - i_{n, 1} \le B$ is eventually periodic, so there is a period $C$ such that $i_{n+T+C,1} - i_{n+C, 1} = i_{n+T,1} - i_{n, 1}$ for all $n$ sufficiently large. Near the end, why must $T_n$ be periodic? Showing that there are finitely many options for $T_n$ only shows that eventually something must repeat, but it doesn't necessarily mean periodic. E.g. digits of pi use finitely many choices (0 to 9) but not periodic.
17.07.2024 23:02
YaoAOPS wrote: Here's a buffed version that seems to be true (I've tested a few sequences). Buffed IMO 2024/3 wrote: Let $a_1, a_2, a_3, \dots$ be an infinite sequence of positive integers, and let $N$ be a positive integer. Suppose that, for each $n > N$, $a_n$ is equal to the number of times $a_{n-1}$ appears in the list $a_1, a_2, \dots, a_{n-1}$. Prove that at least one of the sequence $a_1, a_3, a_5, \dots$ and $a_2, a_4, a_6, \dots$ is eventually periodic with minimal period $p$, and the periodic part is a permutation of $\{1, 2, \dots, p\}$. A couple others have already mentioned this is not true, but here is an argument that kills this statement and many others. The point is that given any infinite sequence of the type given in the problem, we can stop the sequence at any point, adjust the value of $N$ to be a few entries further than where we currently are, and add any small entries we would like without influencing the later portion of the sequence. For example, given the sequence: \begin{align*} 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, \ldots \end{align*}I can adjust the value of $N$ up to $8$ and start the sequence as follows: \begin{align*} \textcolor{red}{1, 1, 2, 2, 1, 3, 1, 4,} 1, 5, 1, 6, \ldots \end{align*}This tells us that unless we modify what starting sequences we can have, there's no guarantee that we can make the ``corner square'' nice in any way. So once you get that you are alternating between big and small numbers you know that really you have to work with the dynamics of what kinds of subsequences the small squares can form.
18.07.2024 00:37
You are correct that this is a counterexample to the original buffed version I gave (where $p$ had a set value), but it is not a counterexample to the edited version (the one you quoted). The sequence given is eventually periodic with $p=1$.
18.07.2024 01:08
solved with gigamilkmen'tgeg Figure 1. We are definitely high schoolers solving the problem using the allowed resources. Quote: The milkmen have been hired to build the skyline of the city of milkgington, consisting of buildings of height $h_i$ located at each positive integer $i$. At every time step, they increase the height of some building by $1$. At time $t=0$, the rightmost building is at location $M$, and the government passes a law saying that if you have just incremented the building at location $i$, you must next increment the building at location $h_i$. Show periodicity lol. Say a building is a tall boi if its height is $\ge M$, and a short boi otherwise. Claim 1. Buildings at locations $i > M$ can never become tall bois. Proof. Note that for $i > M$ the height of building $i$ is at most the number of buildings of height $i$ (because we only increment $h_i$ when we make a new building of height $i$). So before creating the first tall boi with location $>M$, we must already have more than $M$ tall bois, impossible. $\square$ Claim 2. For any $i$, the quantity $h_{i+1}-h_i$ is bounded from above. Proof. Consider starting from time $t=0$ and evolving. Besides the buildings which initially had height $i$, whenever we make a new building of height $i+1$, we must also have made a new building of height $i$. $\square$ Heuristically the meaning of this is that the skyline must eventually take the approximate shape of an L, as shown in Figure 1. Say a building $i$ is a skyscraper if it becomes arbitrarily tall. By Claim 2, if $i$ is a skyscraper then $i-1$ must also be a skyscraper, so we can let the skyscrapers be at locations $1,\dots,S$. Note that any building to the far right (e.g. say a building is to the far right if it only receives contributions from skyscrapers) will eventually have height $S$. Say a building to the far right is a townhouse if it has not yet reached its eventual final height of $S$. It's clear that the milkmen will eventually be alternating between the skyscrapers and the townhouses. We wish to show the skyscraper work is periodic; it suffices to show the relative heights of the skyscrapers are bounded (pigeonhole will give periodicity; note that the relative heights of the skyscrapers fully determine the heights of the townhouses). Recall Claim 2. The final step: assume for contradiction there's some skyscraper index $1 \le L < S$ such that $h_{L+1}-h_L$ becomes arbitrarily negative. In other words, say the buildings $1,\dots , L$ are on the extreme left; then at some point in time, the extreme left will be taller than everyone else (by Claim 2 and triangle inequality). Pick some large $C$ and consider the first time at which $h_{L+1}+C < h_L$ (note we must have just incremented some building on the extreme left), let $R=\max\{h_{L+1},\dots, h_S\}+1$, and say a building on the extreme right if it's positioned at $i \ge R$. Then everyone on the extreme right has height at most $L$ and everyone on the extreme left has height at least $R$, so we'll get stuck in a never ending exchange between the extreme left and the extreme right, never getting back to the moderate skyscrapers! It's joever.
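A toy simulation of the milkmen reformulation (a sketch under the obvious dictionary, with a fabricated starting skyline and my own helper name build): building $i$'s height is the number of occurrences of $i$ so far, and after incrementing the building at location $i$ the law sends the milkmen to location $h_i$.

from collections import defaultdict

def build(heights, start, steps):
    # heights: initial skyline {location: height}; start: the building incremented last
    h = defaultdict(int, heights)
    loc = start
    snapshots = []
    for _ in range(steps):
        loc = h[loc]                 # the law: next increment happens at location h_i
        h[loc] += 1
        snapshots.append(dict(h))
    return snapshots

# skyline corresponding to the hypothetical initial block 2, 1, 1, 3
for s in build({1: 2, 2: 1, 3: 1}, start=3, steps=12):
    print(s)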
18.07.2024 11:06
Finally did up a solution video which I think can help explain some of the solutions in a more visual way! https://www.youtube.com/watch?v=ASV1dZCuWGs
19.07.2024 10:41
tywebb wrote: here are 2 solutions from the official solutions Cool! Can you tell me where you got the official solutions?
19.07.2024 10:42
see https://artofproblemsolving.com/community/c6t1295829f6h3361245_imo_2024_solutions_link for link
19.07.2024 10:45
tywebb wrote: see https://artofproblemsolving.com/community/c6t1295829f6h3361245_imo_2024_solutions_link for link Thanks!!
19.07.2024 15:39
This was quite fun!
20.07.2024 02:37
Someone has deleted the official solutions link I put before. Another way to get it is to go to https://www.imo2024.uk/ and scroll to the bottom of the page, where you will see “IMO2024 - Paper 1 Solutions”. So, are you going to delete that too?
28.07.2024 12:23
Pick a sufficiently large integer $M$ such that it is greater than all the terms, and than the number of occurrences of each term, in the initial sequence. Also, for convenience, I will say that $a_{n-1}$ makes $a_n$. Let $\#k$ denote the number of occurrences of $k$.
28.07.2024 20:20
Playing around with this problem for days, I finally came up with a solution. Hope it's not a fake solve. Claim 1: $a_n$ is unbounded. Proof: Suppose otherwise that $a_n$ is bounded; then there exists an integer that occurs infinitely many times. Call these indices $n_k$; then $a_{n_k+1}$ runs through arbitrarily large integers, giving a contradiction. Claim 2: New terms of $a_i$ (that is, the first appearances of the integers) eventually occur in order. Proof: A term $a_i$ is a first appearance when the previous term occurs exactly that many times (if $i$ is big enough), which eventually happens in order. Claim 3: There exist $m_0, d_0$ so that for each $j>m_0$, we have $$ d_0 < i < i' \implies id_j(i) \geq id_j(i')$$where $id_j(i)$ is the number of occurrences of $i$ in the multiset $\{a_1, ..., a_j\}$. Proof: Claim 4: There exist $m_1>m_0$ and $d \in \mathbb{N}$, so that for each $j>m_1$, $$ x > d \iff id_j(x) \leq d. $$ Claim 5: There exists $m_2>m_1$ so that $a_{j+1} = id_j(a_j)|_{j > m_2}$ switches between greater than $d$ and less than $d$. Claim 6: Consider the subsequence of odd or even terms chosen so that it is eventually less than $d$. Then each subsequence of it after some $m_3$-th term, starting with a $1$ and ending before the next $1$, consists of no repeated terms. Claim 7: Furthermore, from Claim 6, the dictionary order of such subsequences is non-increasing (ordered by the natural order in the even/odd subsequence). Since the length of the subsequences described above has an obvious upper bound of $d$, and by Claim 7 their dictionary order is non-increasing, we know there is a final subsequence that it stabilizes into, that is, it eventually loops as desired. $_\square$ It is 1 a.m., so I'll give the proofs of the claims later...
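To eyeball Claims 6 and 7 on an example (a hedged experiment only, with an arbitrary made-up starting block and my own helper name small_blocks, so this checks nothing rigorously), one can extract the small terms and print the runs starting at each $1$.

from collections import Counter

def small_blocks(initial, steps, d):
    a = list(initial)
    counts = Counter(a)
    for _ in range(steps):
        nxt = counts[a[-1]]
        a.append(nxt)
        counts[nxt] += 1
    smalls = [x for x in a[len(initial):] if x <= d]   # eventually these form one parity class
    blocks, cur = [], []
    for x in smalls:
        if x == 1 and cur:
            blocks.append(cur)
            cur = []
        cur.append(x)
    return blocks

for b in small_blocks([2, 2, 1, 1, 3], 80, d=3):
    print(b)     # after a transient, each run starts with 1 and has no repeats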
02.08.2024 01:18
In this problem, there were $88$ contestants who made partial progress, $12$ who nearly solved it, and $8$ who solved it entirely. I didn't feel there was a breakthrough point in this problem, but I found it very challenging, nevertheless. Instead of going back along the sequence whenever generating a successive term, I marked every new term with a dot in a row of its number, left aligned. For example, if $a_N$ occurs $n$ times, then I put a dot in row $n$. Then, if there are $n'$ dots in this row, I put a dot in row $n'$ after that, iterating the rule. Using this notation, it is easy to test many examples. Let's draw an $L\times L$ black box around all the dots counting the multiplicities before the rule begins to apply. Upon iteration, I claim there will be three kinds of dots: inside the box, in the first $L$ rows outside it ($\rightarrow$), or in the first $L$ columns outside it ($\downarrow$). After a dot in the first $L$ columns, there will be a dot in row number at most $L$. If such a dot is outside the box, the dot after that has row number more than $L$, but no more than $L$ dots can be in that row already, because those count how many dots occur in a row numbered at most $L$. Thus, by induction, these are the three kinds of dots, and only finitely many occur inside the box. Moreover, once those have been exhausted, $\rightarrow$ and $\downarrow$ dots alternate. In all examples I have checked, in this regime, the $\rightarrow$ sequence is periodic, but I did not have to prove this conjectural fact. Returning to the $\rightarrow$ dots, which all arise according to the rule, the $\downarrow$ dots, which arise from the $\rightarrow$ dots, number as many in each row as there are $\rightarrow$ dots there are in the respective column. If a $\rightarrow$ dot arises following a $\downarrow$ dot, its row number is equal to this. Hence, the configuration of $\rightarrow$ dots and current row value determine the next value if the $\rightarrow$ and $\downarrow$ dots arise alternately. In particular, the next row value is given by the number of dots in the current column. For the conclusion, suppose the value $n$ appears in the sequence. That means there were $n$ occurrences of the previous term, and so there were $n-1$ occurrences of it beforehand. By definition, $n-1$ appeared thereafter, excepting at most $L$ cases. Thus, we obtain an injective map from times that $n$ appears to times that $n-1$ appears or $L$ exceptions. Furthermore, if the value that occurs $n-1$ times will occur $n$ times, too, then this map has an inverse. If $n$ appears infinitely many times, then every value larger than $L$ occurs $n$ times, and so this holds with at most $L$ exceptions. Let $m\le L$ be the largest value that appears infinitely many times. It follows inductively that $m-1,m-2,\dots,1$ also appear infinitely many times. Now it is clear that between these rows, the number of dots differ by a bounded amount, and it is these differences, along with a current value, that can determine the next value with differences. By quoting how a sequence with finite range in which every term is determined by the preceding one shall be periodic eventually, it follows that every other term, the row values of the $\rightarrow$ dots, yields an eventually periodic sequence.
09.09.2024 21:33
Very ezz question for IMO p3
14.09.2024 21:24
Similar sequence as 2001 c5
20.09.2024 00:59
Proposed by William Steinberg, Australia (also author of RMM 2024/2).
03.10.2024 18:23
Cute! Posting on AoPS after a while. Here is my solution: Lemma 1: $a_i$ contains infinitely many distinct integers. Proof: Assume the contrary; by infinite pigeonhole there exists some integer $c$ which repeats infinitely often. Eventually, every time $a_{n-1}=c$ we will be writing its count in $a_n$, but by this $a_i$ will grow arbitrarily large, which is a contradiction! Now pick some integer $L=\max\{a_1,\dots,a_{N}\}$. For any term $a_n$, define its happiness score to be the number of distinct integers in $a_{1},\dots,a_{n}$ which are at least $a_{n}$; call this new sequence $h_n$. Call a positive integer $c$ "recurring" if it occurs infinitely often in $a_i$ and non-recurring otherwise. We already showed $1$ is recurring, by Lemma 1. Now say $b_1<b_2<\dots<b_k$ are all the numbers in $[1,L]$ which are recurring. Lemma 2: There exist only finitely many $n$ such that $a_{n} \neq b_1,\ldots,b_k$ and $h_{n+1} >k$. Proof: Basically, observe that for large enough $n$ the number of terms equal to $b_i$, for any fixed $1 \leq i \leq k$, in $a_{1},\dots,a_{n}$ will be greater than the number of times $a_{n}$ has appeared, because $a_n$ is larger and it wasn't among the initial $N$ terms. Define $S$ to be the set of all integers which occur infinitely often in $h_i$ and which are at most $k$. Lemma 3: For large $n$, if $h_{n}=e$ then $a_{n-1}$ is fixed; in fact, for that fixed $a_{n-1}$, $h_n$ is also fixed, where $e$ is any element in $S$. Proof: Say the smallest value of $a_{n-1}$ which repeats infinitely often across the $n$'s such that $h_{n}=e$ is $c$; then it's easy to prove $a_{n-1}$ is eventually always $c$, and the other direction is also easy to establish. Now time to wrap things up. Consider a large number $a_n$ in the sequence with $h_n=1$. Then $a_{n-1}$ is fixed and $a_{n+1}$ has to be $1$, and from there $a_{n+2}=a_n-1$, so it has happiness score $2$, and so on; by this it is easy to see the periodic nature among the $a_{i}$, $i \equiv n \pmod 2$.
19.10.2024 17:56
I don't think there is a single idea that solves this problem. Most of the contestants in our team made all the necessary observations, unfortunately without rigorous proofs. So, the difficulty of this problem lies in the technique of proving the facts we know. The main observation is that the sequence eventually alternates between small-large-small, ... numbers, where the "small" numbers are picked from the set $\{1,2,\dots,\ell\}$, for some $\ell\in\mathbb{N}$. The large number that follows each small number equals its frequency in the sequence up to that moment. Proving this gets you 3 points. The additional 4 points are for proving that the pattern of the small numbers becomes periodic at some point. The last sentence is the most difficult to prove. It's helpful to interpret the sequence in the following way. Imagine a finite number of checkers enumerated from $1$ to $\ell$, placed on some natural numbers on the $Ox$ axis. The checkers are the "small" numbers, and the $i$-th checker is placed on the number equal to the number of occurrences of the small number $i$ in the sequence $a_1,a_2,\dots,a_{T}$ up to the moment $T$. At each subsequent moment one of the checkers, say the $j$-th, moves from position $b_j$ to position $b_j+1$, which means that the number of occurrences of $j$ increases from $b_j$ to $b_{j}+1$. That is, when a checker $j$ steps on a position $b:=b_j+1$, two new numbers are placed at the end of the sequence: $j$ and $b$. We want to translate the rule of how the sequence evolves into this interpretation. So, after the checker $j$ steps on the $b$-th position, what is the next checker to be moved forward? The answer is: the checker whose number is equal to the number of occurrences of $b$ in the original sequence so far. But $b$ appears in the sequence if and only if a checker steps on the $b$-th position. This means that after checker $j$ moves to the $b$-th position, the next checker to move will be the one whose number is equal to the number of checkers that have already stepped on the $b$-th position, that is, the number of checkers currently at positions $b, b+1,\dots$ This interpretation makes it easier to prove that the ensemble of checkers moves in a pack, i.e. no one is left behind. Once this is proven, the rest is easy. There are finitely many patterns of the moving checkers, so two positions will repeat. Since each subsequent move is determined by this pattern, the moves from here on out will follow a periodic cycle. Here is a note on this problem I wrote shortly after the IMO.
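A rough sketch of the checkers picture (illustrative only, with made-up data and my own helper name checker_positions): checker $j$ sits at the number of occurrences of the small value $j$, and printing the positions over time shows the ensemble staying within a bounded spread, i.e. moving in a pack.

from collections import Counter

def checker_positions(initial, steps, ell):
    # positions of checkers 1..ell = occurrence counts of the small values over time
    a = list(initial)
    counts = Counter(a)
    rows = []
    for _ in range(steps):
        nxt = counts[a[-1]]
        a.append(nxt)
        counts[nxt] += 1
        rows.append([counts[j] for j in range(1, ell + 1)])
    return rows

for row in checker_positions([2, 2, 1, 1, 3], 60, ell=3)[::6]:
    print(row, "spread:", max(row) - min(row))   # the spread stays bounded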