Let $n > 2$ be an integer and let $\ell \in \{1, 2,\dots, n\}$. A collection $A_1,\dots,A_k$ of (not necessarily distinct) subsets of $\{1, 2,\dots, n\}$ is called $\ell$-large if $|A_i| \ge \ell$ for all $1 \le i \le k$. Find, in terms of $n$ and $\ell$, the largest real number $c$ such that the inequality \[ \sum_{i=1}^k\sum_{j=1}^k x_ix_j\frac{|A_i\cap A_j|^2}{|A_i|\cdot|A_j|}\ge c\left(\sum_{i=1}^k x_i\right)^2 \]holds for all positive integers $k$, all nonnegative real numbers $x_1,x_2,\dots,x_k$, and all $\ell$-large collections $A_1,A_2,\dots,A_k$ of subsets of $\{1,2,\dots,n\}$.

Proposed by Titu Andreescu and Gabriel Dospinescu
Source: 2024 USAMO Problem 6
Tags: AMC, USA(J)MO, USAMO, inequalities
21.03.2024 07:05
It is a good idea to express $$x_ix_j\frac{|A_i\cap A_j|^2}{|A_i|\cdot|A_j|}=\frac{x_i}{|A_i|}\cdot\frac{x_j}{|A_j|}\sum_{a=1}^n\sum_{b=1}^n1_{a\in A_i}1_{b\in A_i}1_{a\in A_j}1_{b\in A_j}$$
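A quick numerical sanity check of this expansion (a minimal Python sketch; the sets, sizes, and weights below are arbitrary illustrative choices, not from the thread):

```python
# Verify: x_i * x_j * |A_i ∩ A_j|^2 / (|A_i||A_j|) equals the double sum
# of indicator products, for randomly chosen sets.
import random

n = 8
A_i = set(random.sample(range(1, n + 1), 4))
A_j = set(random.sample(range(1, n + 1), 5))
x_i, x_j = 2.0, 3.0

lhs = x_i * x_j * len(A_i & A_j) ** 2 / (len(A_i) * len(A_j))
rhs = (x_i / len(A_i)) * (x_j / len(A_j)) * sum(
    (a in A_i) * (b in A_i) * (a in A_j) * (b in A_j)
    for a in range(1, n + 1) for b in range(1, n + 1)
)
assert abs(lhs - rhs) < 1e-9
```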
21.03.2024 07:10
By Cauchy's inequality we can show that the inequality holds for $c=\frac{1}{n}+\frac{(\ell-1)^2}{n(n-1)}$. Letting $k=\binom{n}{\ell}$, all $x_i=1$, and the $A_i$ be exactly the subsets of cardinality $\ell$ of $\{1,2,\cdots,n\}$ gives the matching upper bound.
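A hedged numerical check of this equality case (a sketch, not a proof; `n` and `ell` are small illustrative values):

```python
# Take all ell-element subsets with x_i = 1 and compare the quadratic
# form against c = 1/n + (ell-1)^2/(n(n-1)); the two sides should agree.
from itertools import combinations

def quadratic_form(subsets, x):
    return sum(
        xi * xj * len(Ai & Aj) ** 2 / (len(Ai) * len(Aj))
        for xi, Ai in zip(x, subsets) for xj, Aj in zip(x, subsets)
    )

n, ell = 6, 3
subsets = [set(A) for A in combinations(range(1, n + 1), ell)]
x = [1.0] * len(subsets)
c = 1 / n + (ell - 1) ** 2 / (n * (n - 1))
assert abs(quadratic_form(subsets, x) - c * sum(x) ** 2) < 1e-6
```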
21.03.2024 07:16
nvm factors of 2 are hard
21.03.2024 07:25
Let $v_i$ be a vector with entries indexed by $[n]^2$, such that $(v_i)_{\alpha\beta}$ is $1/\lvert A_i\rvert$ if $\alpha,\beta \in A_i$ and $0$ otherwise. Then we find that $\textstyle v_i \cdot v_j = \frac{\lvert A_i \cap A_j\rvert^2}{\lvert A_i\rvert\lvert A_j\rvert}$, so the LHS rewrites as $\textstyle \lvert \sum_i x_i v_i \rvert^2$. Let $u$ be such that $u_{\alpha\beta}$ is $n-1$ if $\alpha = \beta$ and $\ell-1$ otherwise. Then we can compute that \[v_i \cdot u = (n-1) + (\lvert A_i\rvert - 1)(\ell-1) \geq \ell^2-2\ell+n.\]Also, \[\lvert u\rvert^2 = n(n-1)^2 + n(n-1)(\ell-1)^2 = n(n-1)(\ell^2-2\ell+n).\]Thus, the length of $v_i$ projected onto $u$ is at least $\textstyle \sqrt{\frac{\ell^2-2\ell+n}{n(n-1)}}$, and all these projections point in the same direction. Now, letting $P_u$ denote projection onto $u$, we have \[\left\lvert \sum_i x_i v_i \right\rvert^2 \geq \left\lvert P_u \left(\sum_i x_i v_i\right) \right\rvert^2 = \left\lvert \sum_i x_i P_u v_i \right\rvert^2 = \left(\sum_i x_i \lvert P_u v_i\rvert \right)^2 \geq \frac{\ell^2-2\ell+n}{n(n-1)} \left(\sum_i x_i\right)^2.\] This proves the bound for $\textstyle c = \frac{\ell^2-2\ell+n}{n(n-1)}$.

To see that this is tight, we claim that if we let the $A_i$ range across all the subsets of size $\ell$, then the sum of the $v_i$ is proportional to $u$. To see this, note that if we choose a uniformly random $\ell$-element subset $A_i$, the expected value of $(v_i)_{\alpha\beta}$ is $1/\ell \cdot \ell/n$ if $\alpha = \beta$ and \[\frac{1}{\ell} \cdot \frac{\binom{\ell}{2}}{\binom{n}{2}} = \frac{1}{\ell} \cdot \frac{\ell}{n} \cdot \frac{\ell-1}{n-1}\]if $\alpha \neq \beta$; the ratio of these two values is $(n-1) : (\ell-1)$, matching the entries of $u$. Now, if we let all the $x_i$ be equal, we have equality everywhere in the preceding argument.
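A small numpy sketch of this construction (illustrative; `v` and `u` are hypothetical helper names; it checks the dot-product identity and the projection lower bound for one small case):

```python
# Check v_i . v_j = |A_i ∩ A_j|^2 / (|A_i||A_j|), and that each v_i's
# projection onto u has length at least sqrt((l^2-2l+n)/(n(n-1))).
import numpy as np
from itertools import combinations

n, l = 5, 2

def v(A):
    vec = np.zeros((n, n))
    idx = [a - 1 for a in A]
    vec[np.ix_(idx, idx)] = 1 / len(A)   # 1/|A| on A x A, else 0
    return vec.ravel()

u = np.where(np.eye(n, dtype=bool), n - 1, l - 1).ravel()
bound = np.sqrt((l**2 - 2*l + n) / (n * (n - 1)))

sets = [set(A) for A in combinations(range(1, n + 1), 3)]  # all 3-subsets (size >= l)
for A in sets:
    for B in sets:
        assert np.isclose(v(A) @ v(B), len(A & B) ** 2 / (len(A) * len(B)))
    assert v(A) @ u / np.linalg.norm(u) >= bound - 1e-9
```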
21.03.2024 07:28
I knew it was Cauchy-Schwarz but that’s as far as I got :(
21.03.2024 07:31
I drew a picture of the volcano Mt. Inequality erupting, three sad faces, and wrote Cauchy-Schwarz on the side of the paper
21.03.2024 07:37
Who let the writers cook this year? First JMO P1, then this monstrosity of an AMO P6.
21.03.2024 07:41
bryanguo wrote: I drew a picture of the volcano Mt. Inequality erupting, three sad faces, and wrote Cauchy-Schwarz on the side of the paper

I drew sad faces and aligned them to make a bigger sad face.
21.03.2024 07:45
Question for you smart people. Is this problem as hard to solve as it is to read?
21.03.2024 08:33
I wrote verbatim: "Trivial by Jensen's (maybe). I conjecture $\binom{n}{\ell}$."
21.03.2024 08:34
Here's a write-up of the solution using just Cauchy-Schwarz, in an attempt to show how such a solution can be motivated. This is based on numbersandnumbers's glorious solution.

Construction: By trying all $\binom n\ell$ subsets of size $\ell$ with equal $x_i$, we can get $c = \frac{\ell^2-2\ell+n}{n(n-1)}$. Trying other configurations suggests this is the equality case.

Factorizing: Off the bat, $|A_i \cap A_j|$ is a bit messy, so it'd be nice if we could factor the left-hand side. Let's first replace it with \[ \sum_{i} \sum_{j} x_ix_j\frac{\left(\sum_{a=1}^{n} 1_{a \in A_i, a \in A_j} \right)^2}{|A_i|\cdot|A_j|} \]where $1_{a \in A_i, a \in A_j}$ is an indicator variable which is $1$ when the condition holds and $0$ otherwise. This allows us to work with a product instead. Let's also try to split things further by expanding the square: \[ \sum_{i} \sum_{j} x_ix_j\frac{\left(\sum_{a=1}^{n} 1_{a \in A_i}1_{a \in A_j}\right)^2}{|A_i|\cdot|A_j|} = \sum_{i} \sum_{j} x_ix_j\frac{\left(\sum_{a=1}^{n}\sum_{b=1}^{n} 1_{a \in A_i}1_{a \in A_j}1_{b \in A_i}1_{b \in A_j}\right)}{|A_i|\cdot|A_j|} \]That's a mess now. Let's not lose sight of our goal of factorizing based on $i$ and $j$. Our indicator variables are jumbled up with $a$ and $b$, and if we want to factorize, we must work with fixed $a$ and $b$. Reordering the sums, we get \[ \sum_{a=1}^{n}\sum_{b=1}^{n} \left(\sum_{i} \sum_{j} x_ix_j\frac{1_{a \in A_i}1_{a \in A_j}1_{b \in A_i}1_{b \in A_j}}{|A_i|\cdot|A_j|}\right) \]Now, we can factorize the inner sum to get \[ \sum_{i} \sum_{j} x_ix_j\frac{|A_i\cap A_j|^2}{|A_i|\cdot|A_j|}= \sum_{a=1}^n\sum_{b=1}^n \left(\sum_{i} \frac{x_i}{|A_i|} 1_{a\in A_i}1_{b\in A_i}\right)^2. \]

Remark: As IAmTheHazard notes, this can be thought of as an $n \times n$ grid. Check out their writeup!

Remark: If you think about it, $\cap$ is a similar function to $\min$. So it might not be surprising that a similar problem, USAMO 2000/6, has a solution through factorizing as well, though with a bit more ingenuity.
Bounding: Now, this looks like Cauchy-Schwarz. Let's define $t(a, b)$ for $1 \le a, b \le n$ to fudge the coefficients of Cauchy-Schwarz: take $t(a, b)$ to be $s$ when $a = b$ and $d$ when $a \ne b$. What's the motivation for separating out the case $a = b$? One reason is that we want the equality case to hold exactly (this is called the sharpness principle), and in the equality case the inner sum takes a different value when $a = b$. Another reason is that when $a = b$, we can expect the term to be "larger" in a way. Anyways, by Cauchy-Schwarz we get that \begin{align*} &\left(\sum_{a=1}^n\sum_{b=1}^n \left(\sum_{i} \frac{x_i}{|A_i|} 1_{a\in A_i}1_{b\in A_i}\right)^2\right) \left(\sum_{a=1}^n\sum_{b=1}^n t(a, b)^2\right) \ge \\ &\left(\sum_{a=1}^n\sum_{b=1}^n \left(\sum_{i} \frac{x_i}{|A_i|} 1_{a\in A_i}1_{b\in A_i}\right) \cdot t(a, b)\right)^2 \end{align*}We now handle the different parts of this expression. We can check that \[ \left(\sum_{a=1}^n\sum_{b=1}^n t(a, b)^2\right) = n \cdot s^2 + n(n-1) \cdot d^2. \]Similarly, we can combine to get \begin{align*} &\left(\sum_{a=1}^n\sum_{b=1}^n \left(\sum_{i} \frac{x_i}{|A_i|} 1_{a\in A_i}1_{b\in A_i}\right) \cdot t(a, b)\right)^2 = \\ &\left(\sum_{i} \frac{x_i}{|A_i|} \sum_{a=1}^n\sum_{b=1}^n \left(1_{a\in A_i}1_{b\in A_i}\right) \cdot t(a, b)\right)^2 = \\ &\left(\sum_{i} x_i \cdot (s + d(|A_i| - 1)) \right)^2 \ge \left(\sum_{i} x_i \cdot (s + d(\ell - 1)) \right)^2 = \\ &(s + d(\ell - 1))^2 \left(\sum_{i} x_i \right)^2. \end{align*}The motivation for the last few lines is to try to produce the right-hand side of the original inequality. We can also take the gamble that $|A_i| = \ell$ should give equality, by looking at our equality case again. Combining all this, we get \[ \left(\sum_{a=1}^n\sum_{b=1}^n \left(\sum_{i} \frac{x_i}{|A_i|} 1_{a\in A_i}1_{b\in A_i}\right)^2\right) (n \cdot s^2 + n(n-1) \cdot d^2) \ge (s + d(\ell - 1))^2 \left(\sum_{i} x_i \right)^2. \]So the inequality holds with $c = \frac{(s + d(\ell - 1))^2}{n \cdot s^2 + n(n-1) \cdot d^2}$ for any choice of $s, d > 0$. Because equality in Cauchy-Schwarz requires the two sequences to be proportional, we expect $\frac{s}{d} = \frac{n-1}{\ell - 1}$ to maximize this expression and give the desired $c$.

Remark: Alternatively, we can homogenize to $s = 1$; then taking a derivative with respect to $d$ works (the numerator ends up being quadratic).
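The optimization in the remark can be cross-checked symbolically; a sympy sketch, assuming the homogenization $s = 1$ suggested above:

```python
# Homogenize s = 1 and verify that d = (l-1)/(n-1), i.e. s/d = (n-1)/(l-1),
# is a critical point of the ratio, and that it yields the claimed c.
import sympy as sp

n, l, d = sp.symbols('n l d', positive=True)
ratio = (1 + d * (l - 1)) ** 2 / (n + n * (n - 1) * d ** 2)
d_star = (l - 1) / (n - 1)
assert sp.simplify(sp.diff(ratio, d).subs(d, d_star)) == 0
best = sp.simplify(ratio.subs(d, d_star))
# best should be equivalent to (l**2 - 2*l + n) / (n*(n - 1))
assert sp.simplify(best - (l**2 - 2*l + n) / (n * (n - 1))) == 0
```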
21.03.2024 10:45
Definitely the easiest problem of the day (at least for experienced contestants). I thought the "indicator function trick" was known to all. Here are some related problems for @djmathman: 2018 CGMO P3, 2018 China TST P6, and 2021 CGMO P6. The problem is readily in hand once you know how to manipulate $|A_i \cap A_j|$.
21.03.2024 11:02
Solved with mira74 and Th3Numb3rThr33.

Associate \(\mathbb R^{(n^2)}\) with \(n\times n\) real matrices. For \(A\subseteq\{1,2,\ldots,n\}\), let \(w(A)\in\mathbb R^{(n^2)}\) be such that \[w(A)_{ij}=\begin{cases} 1/|A|&i,j\in A\\ 0&\text{else}. \end{cases}\]Let \(w_i=w(A_i)\) for each \(i=1,\ldots,k\). Then we have \[ \sum_{i=1}^k\sum_{j=1}^kx_ix_j \frac{|A_i\cap A_j|^2}{|A_i|\cdot|A_j|} =\sum_{i=1}^k\sum_{j=1}^kx_ix_j(w_i\cdot w_j) =\left\|\sum_{i=1}^k x_iw_i\right\|^2. \] Now let \(I\) be the \(n\times n\) identity matrix (so \(I_{ij}=1\) when \(i=j\) and 0 else) and let \(J\) be the all-ones matrix. Then by Cauchy-Schwarz, \begin{align*} \sqrt{n(n-1)\Big[n+\ell^2-2\ell\Big]} \cdot\left\|\sum_{i=1}^k x_iw_i\right\| &= \|(n-\ell)I+(\ell-1)J\| \cdot\left\|\sum_{i=1}^k x_iw_i\right\|\\ &\ge(n-\ell)\sum_{i=1}^kx_iw_i\cdot I +(\ell-1)\sum_{i=1}^kx_iw_i\cdot J\\ &=(n-\ell)\sum_{i=1}^kx_i +(\ell-1)\sum_{i=1}^kx_i\cdot|A_i|\\ &\ge\Big[n+\ell^2-2\ell\Big]\cdot\sum_{i=1}^k x_i, \end{align*}and thus we conclude \[ \sum_{i=1}^k\sum_{j=1}^kx_ix_j \frac{|A_i\cap A_j|^2}{|A_i|\cdot|A_j|} =\left\|\sum_{i=1}^k x_iw_i\right\|^2 \ge\frac{n+\ell^2-2\ell}{n(n-1)}\cdot\left(\sum_{i=1}^k x_i\right)^2. \] To see sharpness, let \(A\) be a uniformly random subset of \(\{1,\ldots,n\}\) of size \(\ell\). Then \[ \mathbb E[w(A)]=\frac1\ell\left[ \frac\ell nI+ \frac{\ell(\ell-1)}{n(n-1)}(J-I)\right] =\frac1{n(n-1)}\Big[(n-1)I+(\ell-1)(J-I)\Big] \]is a scalar multiple of \((n-\ell)I+(\ell-1)J\). Hence taking all \(\binom n\ell\) subsets of size \(\ell\) with equal \(x_i\) makes \(\sum_{i=1}^kx_iw_i\) a scalar multiple of \((n-\ell)I+(\ell-1)J\) as well, which establishes equality in the above bound.
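A quick numpy check of this sharpness computation (exact enumeration over all $\ell$-subsets for one small $n$ and $\ell$; names are illustrative):

```python
# Average w(A) over all l-subsets and confirm it is a scalar multiple
# of (n-l)I + (l-1)J, as the expectation computation above claims.
import numpy as np
from itertools import combinations

n, l = 6, 3
I, J = np.eye(n), np.ones((n, n))

subsets = list(combinations(range(n), l))
avg = np.zeros((n, n))
for A in subsets:
    w = np.zeros((n, n))
    w[np.ix_(A, A)] = 1 / l          # w(A): 1/|A| on A x A, else 0
    avg += w / len(subsets)

target = (n - l) * I + (l - 1) * J   # all entries nonzero when l >= 2
ratio = avg / target                 # elementwise; should be constant
assert np.allclose(ratio, ratio.flat[0])
```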
21.03.2024 18:10
Linear Algebra $\geq$ Cauchy-Schwarz

The answer is $\boxed{\tfrac{\ell^2-2\ell+n}{n(n-1)}}$. This is achieved by letting $k=\tbinom n\ell$, letting the $A_i$ run through all subsets in $\tbinom{[n]}\ell$ exactly once, and taking $x_1=x_2=\dots=x_k=1$. A direct computation shows that this works; alternatively, one can check through the bound below that this case attains equality. For each $i\in\{1,2,\dots,k\}$, we let $v_i\in\mathbb R^{[n]^2}$ be such that $$(v_i)_{\alpha\beta} = \begin{cases} 1/\lvert A_i\rvert & \alpha,\beta\in A_i \\ 0 & \text{else.} \end{cases}$$Next, WLOG $x_1+\dots+x_k=1$ (the inequality is homogeneous in the $x_i$). Then, we need to minimize $$\sum_{i=1}^k \sum_{j=1}^k x_ix_j \frac{\lvert A_i\cap A_j\rvert^2}{\lvert A_i\rvert \lvert A_j\rvert} = \sum_{i=1}^k \sum_{j=1}^k x_ix_j (v_i\cdot v_j) = \lvert x_1v_1 + \dots +x_kv_k\rvert^2$$(that is, we minimize $\lvert u\rvert^2$ over all $u$ in the convex hull of $v_1,\dots,v_k$). Next, the key observation is that for every $v=v_i$, we have \begin{align*} v_{11} + v_{22} + \dots + v_{nn} &= 1 \\ \sum_{\alpha\neq\beta} v_{\alpha\beta} &= \tfrac{\lvert A_i\rvert^2}{\lvert A_i\rvert} - 1 \geq \ell-1. \end{align*}In particular, this means that for any $u$ in the convex hull of $v_1,\dots,v_k$, we must have $u_{11} + \dots + u_{nn} = 1$ and $\textstyle\sum_{\alpha\neq\beta} u_{\alpha\beta}\geq \ell-1$. Hence, the minimum length of $u$ must be at least the distance from $0$ to the intersection of the two hyperplanes $H_1 : u_{11} + \dots + u_{nn} = 1$ and $H_2: \textstyle\sum_{\alpha\neq\beta} u_{\alpha\beta}= \ell-1$ (this distance is increasing in the right-hand side of $H_2$, so the off-diagonal constraint is tightest at exactly $\ell-1$). The normals of these two hyperplanes are orthogonal, so the nearest point $p$ decomposes as the sum of the projections of the origin onto each constraint. We get $p_{\alpha\alpha} = \tfrac 1n$ from $H_1$ alone and, for $\alpha\neq\beta$, $p_{\alpha\beta} = \tfrac{\ell-1}{n(n-1)}$ from $H_2$ alone. Hence, we have $$\lvert p\rvert^2 = n\cdot\left(\tfrac 1n\right)^2 + n(n-1) \cdot \left(\tfrac{\ell-1}{n(n-1)}\right)^2 = \frac{\ell^2-2\ell+n}{n(n-1)},$$as desired.
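This distance computation can be cross-checked numerically; a sketch using the pseudoinverse, which returns the least-norm solution of an underdetermined linear system (small $n$, $\ell$ chosen for illustration):

```python
# Minimum-norm u in R^(n^2) with diagonal entries summing to 1 and
# off-diagonal entries summing to l-1; compare |u|^2 against
# (l^2 - 2l + n) / (n(n-1)).
import numpy as np

n, l = 7, 3
diag = np.eye(n).ravel()          # normal direction of H1
off = 1.0 - diag                  # normal direction of H2 (orthogonal to diag)
A = np.vstack([diag, off])
b = np.array([1.0, l - 1.0])
p = np.linalg.pinv(A) @ b         # least-norm point on H1 ∩ H2
assert np.isclose(p @ p, (l**2 - 2*l + n) / (n * (n - 1)))
```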
21.03.2024 20:06
NTstrucker wrote: Definitely the easiest problem of the day (at least for experienced contestants). I thought the "indicator function trick" is known to all. I'm not sure if it's "well-known" because there aren't a ton of problems that showcase this idea. That said, I agree it's highly motivated. (Although I didn't solve this myself, I did notice the problem statement amounts to showing a corresponding quadratic form is positive-definite, which highly motivates algebraic techniques over combinatorial ones.) (EDIT: this isn't strictly true because the variables $\color{red}x_i$ are nonnegative, rather than being arbitrary real numbers. However, I still maintain that the "structure" of the inequality can motivate the solution quite well.) For another example of this idea, see KoMaL A.492 (November 2009).
22.03.2024 03:26
How many points would I get for parts of setup and saying it has to do with a particular quadratic form, and also noting projection vector stuff? In particular, is this >0 points? EDIT: Oh, I also mentioned Cauchy Schwarz EDIT: Apparently $1$.
22.03.2024 22:20
The above solutions in the thread are misleading. This is a very nice combinatorics problem! Thanks to islander7 for sharing this idea with me (I didn't have time to actually think about this problem on my own, whoops).

The answer is $\frac{1}{n}+\frac{(\ell-1)^2}{n(n-1)}$, where equality can be achieved by making all the $x_i$ equal and letting $\{A_i\}$ range over every $\ell$-element subset of $[n]$. To see this, note that for a fixed $i$ we can compute $\mathbb{E}[|A_i \cap A_j|^2]$, over a uniformly random $\ell$-subset $A_j$, using linearity of expectation as $$\ell \cdot\mathbb{P}(x \in A_i \mid x \in [n])+\ell(\ell-1)\cdot\mathbb{P}(x,y \in A_i \mid x \neq y \in [n])=\frac{\ell^2}{n}+\frac{(\ell(\ell-1))^2}{n(n-1)},$$and note that each term of the LHS is, on average, precisely $\frac{1}{\ell^2}$ times this.

Construct a grid with $k$ rows $r_1,\ldots,r_k$ and $n$ columns $c_1,\ldots,c_n$ corresponding to the $A_i$: we label the cell in row $i$ and column $j$ with $\frac{x_i}{|A_i|}$ if $j \in A_i$ and $0$ otherwise. Note that the sum of the grid elements is $\sum_{i=1}^k x_i$. If we fix two rows $(r_i,r_j)$, then $|A_i \cap A_j|^2$ counts the number of ways to select two (possibly identical) columns $(c_p,c_q)$ such that the four intersections between $c_p \cup c_q$ and $r_i \cup r_j$ are all nonzero (one can think of this intersection as the vertices of a possibly degenerate rectangle). Thus the LHS is simply the sum, over all such nonzero rectangles on the grid, of the product of the entry at $c_p \cap r_i$ with the entry at $c_q \cap r_j$, where we treat the two vertical edges and two horizontal edges as distinguishable, so a nondegenerate rectangle gets counted a total of $4$ times.

We now count this sum across all pairs of (possibly identical) columns. It's not hard to see that if we fix two columns $(c_i,c_j)$, the relevant quantity is equal to the square of the sum of elements in $c_i \cap c_j$ (this is an abuse of notation; one can think of this as the intersection of either $c_i$ or $c_j$ with every row $r$ such that $r \cap c_i$ and $r \cap c_j$ are both nonzero). If we sum over all $(c_i,c_i)$, then we obtain $\sum_{1 \leq i \leq n} \Sigma(c_i)^2$. Since the columns collectively contain each element in the grid exactly once, by Cauchy-Schwarz (or QM-AM) this is at least $\frac{1}{n}\left(\sum_{1 \leq i \leq n} \Sigma(c_i)\right)^2=\frac{1}{n}\left(\sum_{i=1}^k x_i\right)^2$. If we sum over all $(c_i,c_j)$ with $i \neq j$, then we obtain $\sum_{1 \leq i\neq j \leq n} \Sigma(c_i \cap c_j)^2$, which by Cauchy-Schwarz is at least $\frac{1}{n(n-1)}\left(\sum_{1 \leq i \neq j \leq n} \Sigma(c_i \cap c_j)\right)^2$. But for each cell $(x,y)$, since $|A_x|\geq \ell$, there are at least $\ell-1$ choices of columns $c_j$ with $j \neq y$ such that the label of $(x,y)$ appears in $c_y \cap c_j$, hence $\sum_{1 \leq i\neq j \leq n} \Sigma(c_i \cap c_j)$ counts every cell at least $\ell-1$ times, and the LHS is at least $$\frac{1}{n}\left(\sum_{i=1}^k x_i\right)^2+\frac{1}{n(n-1)}\left((\ell-1) \sum_{i=1}^k x_i\right)^2,$$from which we recover the answer. $\blacksquare$

Remark: I think computing the value of $c$ in the equality case guides the splitting into $(c_i,c_i)$ and $(c_i,c_j)$ with $i \neq j$: the former corresponds to $\mathbb{P}(x \in A_i)$ and the latter to $\mathbb{P}(x,y \in A_i)$.
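A sketch verifying the grid accounting on random data (the collection, weights, and sizes below are arbitrary; this checks only the column-pair identity, not the inequality steps):

```python
# The LHS should equal the sum, over column pairs (c_p, c_q), of the
# squared total of entries lying in rows that are nonzero in both columns.
import random

n, ell, k = 6, 3, 10
A = [set(random.sample(range(n), random.randint(ell, n))) for _ in range(k)]
x = [random.uniform(0.1, 1.0) for _ in range(k)]
grid = [[x[i] / len(A[i]) if j in A[i] else 0.0 for j in range(n)]
        for i in range(k)]

lhs = sum(x[i] * x[j] * len(A[i] & A[j]) ** 2 / (len(A[i]) * len(A[j]))
          for i in range(k) for j in range(k))

col_pairs = 0.0
for p in range(n):
    for q in range(n):
        s = sum(grid[r][p] for r in range(k) if q in A[r])  # rows hitting both columns
        col_pairs += s ** 2
assert abs(lhs - col_pairs) < 1e-9
```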
23.03.2024 00:46
The answer is $c = \boxed{\frac{1}{n} + \frac{(\ell - 1)^2}{n(n-1)}}$.

For integers $1 \le i, j \le n$, let \[m_{ij} = \sum_{\substack{1 \le s \le k \\ i, j \in A_s}} \frac{x_s}{|A_s|}.\]Let $S = \sum_{i=1}^k x_i$. Then the LHS of the given inequality can be rewritten as $\sum_{i=1}^n \sum_{j=1}^n m_{ij}^2$. Let $M_1 = \sum_{i=1}^n m_{ii}$ and $M_2 = \sum_{1 \le i, j \le n, i \neq j} m_{ij}$. For each $1 \le i \le k$, the set $A_i$ contributes $x_i$ to $S$, $x_i$ to $M_1$, and $x_i(|A_i| - 1)$ to $M_2$, and taking the sum over all $i$ gives $M_1 = S$ and \[M_2 = \sum_{i=1}^k x_i(|A_i| - 1) \ge \sum_{i=1}^k x_i(\ell - 1) = (\ell-1)S.\]By Jensen, \[\sum_{i=1}^n m_{ii}^2 \ge n\left(\frac{M_1}{n}\right)^2 = \frac{M_1^2}{n} = \frac{S^2}{n},\]and \[ \sum_{1 \le i, j \le n, i \neq j} m_{ij}^2 \ge n(n-1)\left(\frac{M_2}{n(n-1)}\right)^2 = \frac{M_2^2}{n(n-1)} \ge \frac{S^2(\ell-1)^2}{n(n-1)}. \]Adding the two inequalities gives \[ \sum_{i=1}^n \sum_{j=1}^n m_{ij}^2 \ge \left(\frac{1}{n} + \frac{(\ell - 1)^2}{n(n-1)}\right)S^2, \]from which we can read off the answer. Since equality occurs when each of the $\binom{n}{\ell}$ subsets of size $\ell$ appears in the collection exactly once with equal $x_i$, this constant $c$ is optimal.
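A numerical sanity check of this write-up (a sketch on a random $\ell$-large collection with hypothetical names; it verifies the $m_{ij}$ identity and the two Jensen bounds separately):

```python
# Check: LHS == sum of m_ij^2; diagonal part >= S^2 / n; off-diagonal
# part >= (ell-1)^2 * S^2 / (n(n-1)).
import random

n, ell, k = 7, 3, 12
A = [set(random.sample(range(n), random.randint(ell, n))) for _ in range(k)]
x = [random.uniform(0.0, 1.0) for _ in range(k)]
S = sum(x)

m = [[sum(x[s] / len(A[s]) for s in range(k) if i in A[s] and j in A[s])
      for j in range(n)] for i in range(n)]

lhs = sum(x[i] * x[j] * len(A[i] & A[j]) ** 2 / (len(A[i]) * len(A[j]))
          for i in range(k) for j in range(k))
assert abs(lhs - sum(m[i][j] ** 2 for i in range(n) for j in range(n))) < 1e-9

diag = sum(m[i][i] ** 2 for i in range(n))
off = sum(m[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
assert diag >= S ** 2 / n - 1e-9
assert off >= (ell - 1) ** 2 * S ** 2 / (n * (n - 1)) - 1e-9
```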
27.03.2024 16:03
Glorious one, but it took me less time than Problem 3.
31.03.2024 00:12