Let $n \ge 2$ be an integer. Let $x_1 \ge x_2 \ge ... \ge x_n$ and $y_1 \ge y_2 \ge ... \ge y_n$ be $2n$ real numbers such that $$0 = x_1 + x_2 + ... + x_n = y_1 + y_2 + ... + y_n $$$$\text{and} \hspace{2mm} 1 =x_1^2 + x_2^2 + ... + x_n^2 = y_1^2 + y_2^2 + ... + y_n^2.$$Prove that $$\sum_{i = 1}^n (x_iy_i - x_iy_{n + 1 - i}) \ge \frac{2}{\sqrt{n-1}}.$$Proposed by David Speyer and Kiran Kedlaya
Problem
Source: USOMO #6
Tags: inequalities, Hi
22.06.2020 02:20
Comments and Motivation: When one looks at the condition on $x_i$ and $y_i,$ one might think of 2012 USAMO Problem 6, because the same condition appears there. Indeed, just like 2012 USAMO Problem 6, one solution to this algebra problem involves evaluating the variance of a relevant random sum. Nevertheless, there are some new ideas here, and I like this problem overall. The $x_i$ and $y_i$ are sorted, so by the rearrangement inequality, $\sum\limits_{i=1}^nx_iy_i$ and $\sum\limits_{i=1}^nx_iy_{n+1-i}$ are the maximum and the minimum over all permutations of the $y_i,$ respectively. Thus, if one chooses a random such permutation, the LHS of the problem is the range of the resulting sum. In addition, we see a square root on the RHS. When using the probabilistic method, and more specifically the second moment method, square roots usually come from taking the square root of a variance, also known as the standard deviation.

TL;DR: The problem looks like a lower bound on the range of a random variable in terms of its standard deviation.

Solution:

Lemma: If $X$ is a random variable uniformly sampled from a multiset $\mathcal S,$ and $R=\max(\mathcal S)-\min(\mathcal S),$ then $R$ is at least twice the standard deviation of $X.$

Proof: $\operatorname{Var}(X)=\operatorname E[X^2]-\operatorname E[X]^2$ is a quadratic in terms of the elements of $\mathcal S.$ In particular, if one takes $v\in\mathcal S,$ fixes $\mathcal S-\{v\},$ and expresses $\operatorname{Var}(X)$ as a quadratic in $v,$ the coefficient of $v^2$ in $\operatorname E[X^2]$ is $\frac1{|\mathcal S|},$ but in $\operatorname E[X]^2$ it is $\frac1{|\mathcal S|^2},$ so as $|\mathcal S|\ge1,$ $\operatorname{Var}(X)$ is convex in $v.$ Thus, over $v\in[\min(\mathcal S),\max(\mathcal S)],$ it is maximized at some $v\in\{\min(\mathcal S),\max(\mathcal S)\},$ so by smoothing we can assume $$\forall v\in\mathcal S,\quad v\in\{\min(\mathcal S),\max(\mathcal S)\}$$in order to maximize the variance. Let $p=P(X=\min(\mathcal S)).$ We have \begin{align*} \operatorname{Var}(X) &=\operatorname E[X^2]-\operatorname E[X]^2\\ &=\left(p\min(\mathcal S)^2+(1-p)\max(\mathcal S)^2\right)-\left(p\min(\mathcal S)+(1-p)\max(\mathcal S)\right)^2\\ &=(p-p^2)\min(\mathcal S)^2+((1-p)-(1-p)^2)\max(\mathcal S)^2-2p(1-p)\min(\mathcal S)\max(\mathcal S)\\ &=p(1-p)\left(\max(\mathcal S)-\min(\mathcal S)\right)^2\\ &\le\frac12\cdot\frac12\cdot R^2, \end{align*}so the standard deviation of $X$ is at most $\frac R2,$ as desired. 
$\square\qquad$ Let $\sigma$ be a random permutation of $[n]=\{1,\ldots,n\}$ and let $X=\sum\limits_{i=1}^nx_iy_{\sigma(i)}.$ By linearity of expectation, we have $\operatorname E[X]=\sum\limits_{i=1}^nx_i\operatorname E[y_{\sigma(i)}]=0$ and \begin{align*} \operatorname E[X^2] &=\sum\limits_{i=1}^n\sum\limits_{j=1}^nx_ix_j\operatorname E[y_{\sigma(i)}y_{\sigma(j)}]\\ &=\sum\limits_{i=1}^nx_i^2\operatorname E[y_{\sigma(i)}^2]+\sum\limits_{\substack{i\ne j\\1\le i,j\le n}}^nx_ix_j\operatorname E[y_{\sigma(i)}y_{\sigma(j)}]\\ &=\left(\sum\limits_{i=1}^nx_i^2\right)\left(\frac1n\sum\limits_{i=1}^ny_{\sigma(i)}^2\right)+\left(\sum\limits_{\substack{i\ne j\\1\le i,j\le n}}^nx_ix_j \right)\left(\frac1{n(n-1)}\sum\limits_{\substack{i\ne j\\1\le i,j\le n}}^ny_{\sigma(i)}y_{\sigma(j)}\right)\\ &=1\cdot\frac1n+\frac1{n(n-1)}\left(\left(\sum\limits_{i=1}^nx_i\right)^2-\left(\sum\limits_{i=1}^nx_i^2\right)\right)\left(\left(\sum\limits_{i=1}^ny_{\sigma(i)}\right)^2-\left(\sum\limits_{i=1}^ny_{\sigma(i)}^2\right)\right)\\ &=\frac1n+\frac1{n(n-1)}\left(0^2-1\right)\left(0^2-1\right)\\ &=\frac1{n-1}. \end{align*}Thus, $\operatorname{Var}(X)=\operatorname E[X^2]-\operatorname E[X]^2=\frac1{n-1}-0^2,$ so the standard deviation of $X$ is $\frac1{\sqrt{n-1}},$ so by the above lemma, $\operatorname{Range}(X)$ is at least $\frac2{\sqrt{n-1}}.$ By the rearrangement inequality, $\max(X)=\sum\limits_{i=1}^nx_iy_i,$ and $\min(X)=\sum\limits_{i=1}^nx_iy_{n+1-i},$ giving the desired. $\blacksquare\qquad$
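For anyone who wants to sanity-check the two computations above, here is a small brute-force sketch (the helper `random_centered_unit`, the tolerances, and the choice $n = 6$ are my own, purely illustrative, not part of the solution): it generates an admissible pair $(x_i), (y_i)$, enumerates all $n!$ permutation sums, and confirms that $\operatorname E[X] = 0$, $\operatorname E[X^2] = \frac1{n-1}$, and that the range is at least $\frac{2}{\sqrt{n-1}}$.

```python
# Brute-force numerical check of E[X] = 0, E[X^2] = 1/(n-1), and
# max - min >= 2/sqrt(n-1) for a random admissible pair (x, y); small n only.
import itertools
import math
import random

def random_centered_unit(n):
    """Random vector with sum 0 and sum of squares 1, sorted decreasingly."""
    v = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(v) / n
    v = [t - mean for t in v]
    norm = math.sqrt(sum(t * t for t in v))
    return sorted((t / norm for t in v), reverse=True)

n = 6
x, y = random_centered_unit(n), random_centered_unit(n)

# all n! values of sum_i x_i * y_sigma(i)
sums = [sum(xi * yi for xi, yi in zip(x, perm))
        for perm in itertools.permutations(y)]

mean = sum(sums) / len(sums)
second_moment = sum(s * s for s in sums) / len(sums)

print("E[X]   ~", round(mean, 12))                      # ~ 0
print("E[X^2] =", second_moment, "vs 1/(n-1) =", 1 / (n - 1))
print("range  =", max(sums) - min(sums), ">= 2/sqrt(n-1) =", 2 / math.sqrt(n - 1))
```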
22.06.2020 02:21
We will compute $\mathbb E \left [\left( \sum_{i = 1}^{n} x_i \left( y_{\alpha(i)} - y_{\beta(i)} \right) \right)^2 \right]$, where $\alpha, \beta$ are permutations of $\{1, 2, \cdots, n\}$ chosen independently and uniformly at random. Before doing so, notice that $\mathbb E \left[ y_{\alpha(i)} \right] = \mathbb E \left[ y_{\beta(i)} \right] = 0,$ $\mathbb E \left[ y_{\alpha(i)}^2 \right] = \mathbb E \left[ y_{\beta(i)}^2 \right] = \frac{1}{n}$, $\sum_{i \neq j} x_i x_j = \sum_{i \neq j} y_i y_j = -\frac{1}{2}$ (here and below, sums over $i \neq j$ range over unordered pairs $\{i, j\}$), and $\mathbb E \left[ y_{\alpha(i)} y_{\alpha(j)} \right] = \mathbb E \left[ y_{\beta(i)} y_{\beta(j)} \right] = \frac{-\frac{1}{2}}{\binom{n}{2}} = -\frac{1}{n(n-1)}$, for any $1 \le i < j \le n.$

In view of these observations, we are ready to compute the above quantity. In the expansion, the diagonal cross terms $-2x_i^2 y_{\alpha(i)} y_{\beta(i)}$ are dropped immediately, since $\mathbb E \left[ y_{\alpha(i)} y_{\beta(i)} \right] = \mathbb E \left[ y_{\alpha(i)} \right] \mathbb E \left[ y_{\beta(i)} \right] = 0$ by independence. Indeed, we have that: $$\mathbb E \left [\left( \sum_{i = 1}^{n} x_i \left( y_{\alpha(i)} - y_{\beta(i)} \right) \right)^2 \right]$$ $$= \sum_{i=1}^{n} \left( \mathbb E \left[ x_i^2 y_{\alpha(i)}^2 \right] + \mathbb E \left[ x_i^2 y_{\beta(i)}^2 \right] \right)$$ $$+ \sum_{i \neq j} 2x_i x_j \mathbb E \left[ y_{\alpha(i)} y_{\alpha(j)} \right]$$ $$+ \sum_{i \neq j} 2x_i x_j \mathbb E \left[ y_{\beta(i)} y_{\beta(j)} \right]$$ $$- \sum_{i \neq j} \left( 2x_i x_j \left( \mathbb E \left[ y_{\alpha(i)} y_{\beta(j)} \right] + \mathbb E \left[ y_{\alpha(j)} y_{\beta(i)} \right] \right) \right)$$ $$= \sum_{i = 1}^{n} x_i^2 \cdot \frac2{n}$$ $$+ 2 \cdot \left( \sum_{i \neq j} x_i x_j \right) \cdot \left( \mathbb E \left[ y_{\alpha(i)} y_{\alpha(j)} \right] \right)$$ $$+ 2 \cdot \left( \sum_{i \neq j} x_i x_j \right) \cdot \left( \mathbb E \left[ y_{\beta(i)} y_{\beta(j)} \right] \right)$$ $$- \left(\sum_{i \neq j} 2x_i x_j \right) \left( \mathbb E \left[ y_{\alpha(i)} \right] \cdot \mathbb E \left[ y_{\beta(j)} \right] + \mathbb E \left[ y_{\alpha(j)} \right] \cdot \mathbb E \left[ y_{\beta(i)} \right] \right)$$ $$= \frac2{n} + 2 \cdot \frac{-1}{2} \cdot \frac{-1}{n(n-1)} + 2 \cdot \frac{-1}{2} \cdot \frac{-1}{n(n-1)} - \left( \sum_{i \neq j} 2 x_i x_j \right) \cdot (0 \cdot 0 + 0 \cdot 0) $$ $$= \frac{2}{n} + \frac{2}{n(n-1)} = \frac{2}{n-1}.$$

From here, the finish is straightforward. Define $M$ to be the multiset of size $n!$ consisting of all numbers $\sum_{i = 1}^{n} x_i y_{\sigma(i)}$ over all permutations $\sigma$ of $[n]$. The following lemma essentially finishes the problem off.

Lemma. For a multiset $S = \{z_1 \le z_2 \le \cdots \le z_k\}$ of real numbers, we have: $$\sum_{i = 1}^{k} \sum_{j = 1}^{k} (z_i - z_j)^2 \le \frac{k^2(z_k - z_1)^2}{2}.$$

Proof. Observe that $(z_i - z_j)^2 \le |z_i - z_j| \cdot (z_k - z_1).$ Hence, it suffices to show that $$\sum_{i = 1}^{k} \sum_{j = 1}^{k} |z_i - z_j| \le \frac{k^2(z_k - z_1)}{2},$$ which is equivalent to $$\sum_{i < j} (z_j - z_i) \le \frac{k^2(z_k - z_1)}{4}.$$ For $1 \le i \le k-1$, write $d_i = z_{i+1} - z_i.$ Then, it suffices to show that: $$\sum_{i < j} (d_i + d_{i+1} + \cdots + d_{j-1}) \le \frac{k^2(d_1 + d_2 + \cdots + d_{k-1})}{4}.$$ Notice, however, that for $1 \le t \le k-1$, the coefficient of $d_t$ on the LHS is $t(k-t)$ while its coefficient on the RHS is $\frac{k^2}{4}$. By AM-GM, $\frac{k^2}{4} \ge t(k-t)$ for all $1 \le t \le k-1$, and so the lemma is proven. $\blacksquare$

The finish is clear now. 
Observe that if we let $M = \{ m_1 \le m_2 \le \cdots \le m_{n!} \}$, then the quantity considered earlier is $$\frac{2}{n-1} = \mathbb E \left [\left( \sum_{i = 1}^{n} x_i \left( y_{\alpha(i)} - y_{\beta(i)} \right) \right)^2 \right] = \frac{\sum_{i = 1}^{n!} \sum_{j = 1}^{n!} (m_i - m_j)^2}{(n!)^2},$$ which by the lemma is at most $$\frac{(m_1 - m_{n!})^2}{2}.$$ This implies that $(m_{n!} - m_1)^2 \ge \frac{4}{n-1}$, or $m_{n!} - m_1 \ge \frac{2}{\sqrt{n-1}}.$ By the Rearrangement Inequality, $\sum_{i = 1}^{n} x_i y_i$ is $m_{n!}$ and $\sum_{i = 1}^{n} x_i y_{n+1-i}$ is $m_1$, which means that $$\sum_{i = 1}^{n} (x_i y_i - x_i y_{n+1-i}) \ge \frac{2}{\sqrt{n-1}},$$ as desired. $\square$
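The double-sum lemma used here is also easy to test numerically; below is a quick sketch (random multisets, hypothetical variable names; an illustration only, not part of the proof) checking $\sum_{i}\sum_{j}(z_i - z_j)^2 \le \frac{k^2(z_k - z_1)^2}{2}$.

```python
# Numerical check of the lemma: over all ordered pairs, the sum of (z_i - z_j)^2
# is at most k^2 * (max - min)^2 / 2 for any finite multiset of reals.
import random

for _ in range(1000):
    k = random.randint(2, 10)
    z = [random.uniform(-5, 5) for _ in range(k)]
    lhs = sum((a - b) ** 2 for a in z for b in z)
    rhs = k * k * (max(z) - min(z)) ** 2 / 2
    assert lhs <= rhs + 1e-9, (z, lhs, rhs)
print("lemma held on all random multisets tested")
```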
22.06.2020 02:25
Well, guess who predicted that Mount Inequality would erupt? But after reading Pathological's solution, I was like, Mount Bash has erupted.
22.06.2020 02:27
nice ccc day 2 edit: hey the WOOT Wk 5 extra handout (Probability) didn't teach us all this
22.06.2020 02:35
Here is another proof of the key claim: Key Claim wrote: Let $X$ be a bounded random variable; then $\max(X) - \min(X) \geq 2 \sqrt{\operatorname{Var}(X)}$. Shift so that $\mathbb E[X] = 0$. Then, $\operatorname{Var}(X) = \mathbb E[X^2]$. Let $2\Delta = \max(X) - \min(X)$ and $\mu = \frac{\min(X) + \max(X)}{2}$; note that $\mu$ need not equal $\mathbb E[X] = 0$. Then, \[ \Delta^2 \geq \mathbb E[(X-\mu)^2] = \mathbb E[X^2] + \mu^2 \geq \mathbb E[X^2], \]and we're done!
22.06.2020 03:05
Here's an even shorter proof of that claim: Shift so that $\min X + \max X = 0$. Then \[\operatorname{Var} X = \mathbb{E}[X^2] - \mathbb{E}[X]^2 \leq \mathbb{E}[X^2] \leq \left(\frac{\max X - \min X}{2}\right)^2.\]
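This claim is also easy to spot-check numerically. A small sketch (random finitely supported distributions; the sampling scheme and names are mine, purely illustrative) confirming $\max X - \min X \ge 2\sqrt{\operatorname{Var} X}$:

```python
# Numerical illustration of the key claim: range >= 2 * standard deviation
# for random finitely supported distributions.
import math
import random

for _ in range(1000):
    values = [random.uniform(-10, 10) for _ in range(random.randint(2, 8))]
    weights = [random.random() + 0.01 for _ in values]
    total = sum(weights)
    probs = [w / total for w in weights]
    mean = sum(p * v for p, v in zip(probs, values))
    var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
    assert max(values) - min(values) >= 2 * math.sqrt(var) - 1e-9
print("max(X) - min(X) >= 2*sqrt(Var(X)) held in every trial")
```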
22.06.2020 03:32
Does this work? (3 variable smooth) Given that $x_i+x_j+x_k$ and $x_i^2+x_j^2+x_k^2$ are fixed, to maximize $ax_i+bx_j+cx_k$ we can take derivatives and get that we need $0=a(x_j-x_k)+b(x_k-x_i)+c(x_i-x_j)$. Now if we apply this to $i<n+1-j<n+1-i$ and to $i<j<n+1-i$ and subtract the equations, we get $(y_i+y_{n+1-j}-y_j-y_{n+1-i})(x_j+x_{n+1-j}-x_i-x_{n+1-i})=0$, so either $y_i=\dots=y_j$ and $y_{n+1-j}=\dots=y_{n+1-i}$, or $x_j+x_{n+1-j}=x_i+x_{n+1-i}$. So we basically have a middle block of $y$'s where the first half and second half are equal, and the outside terms of the $x$ sequence are symmetric. Obviously not all $x$'s and $y$'s are equal blocks, so assume $y$ is not all equal blocks, so $x_1+x_n=0$. This gives that the $x$ sequence should be $\frac{1}{\sqrt{n}}$ a total of $n/2$ times and $-\frac{1}{\sqrt{n}}$ the other $n/2$ times. Now, since we have the $x$ sequence, it's easy to see what the $y$ sequence should be, giving our equality case ($y_1=\dots=y_{n-1}$, actually). Not sure if this works; I don't have the time to type up a full solution, sorry. Did anyone have a similar approach?
22.06.2020 04:15
Proposed by David Speyer and Kiran Kedlaya
22.06.2020 04:24
We present two approaches. In both approaches, it's helpful motivation that for even $n$, equality occurs at \begin{align*} (x_i) &= \Big( \underbrace{\frac{1}{\sqrt n}, \dots, \frac{1}{\sqrt n}}_{n/2}, \underbrace{-\frac{1}{\sqrt n}, \dots, -\frac{1}{\sqrt n}}_{n/2} \Big) \\ (y_i) &= \Big( \frac{n-1}{\sqrt{n(n-1)}}, \underbrace{- \frac{1}{\sqrt{n(n-1)}}, \dots, - \frac{1}{\sqrt{n(n-1)}}}_{n-1} \Big) \end{align*}

First approach (expected value). For a permutation $\sigma$ on $\{1, 2, \dots, n\}$ we define \[ S_\sigma = \sum_{i=1}^n x_i y_{\sigma(i)}. \]

Claim: For random permutations $\sigma$, $\mathbb E[S_\sigma] = 0$ and $\mathbb E[S_\sigma^2] = \frac{1}{n-1}$.

Proof. The first one is clear. Since $\sum_{i < j} 2y_i y_j = -1$, it follows that (for fixed $i \neq j$) $\mathbb E[y_{\sigma(i)} y_{\sigma(j)}] = -\frac{1}{n(n-1)}$; likewise $\sum_{i < j} 2x_i x_j = -1$. Thus \begin{align*} \sum_{i} x_i^2 \cdot \mathbb E \left[ y_{\sigma(i)}^2 \right] &= \frac 1n \\ 2\sum_{i < j} x_i x_j \cdot \mathbb E \left[ y_{\sigma(i)} y_{\sigma(j)} \right] &= (-1) \cdot \left( -\frac{1}{n(n-1)} \right) = \frac{1}{n(n-1)} \end{align*}and the conclusion follows. $\blacksquare$

Claim: [A random variable in $[0,1]$ has variance at most $1/4$] If $A$ is a random variable with mean $\mu$ taking values in the closed interval $[m, M]$ then \[ \mathbb E[(A-\mu)^2] \le \frac14 (M-m)^2. \]Proof. By shifting and scaling, we may assume $m = 0$ and $M = 1$, so $A \in [0,1]$ and hence $A^2 \le A$. Then \[ \mathbb E[(A-\mu)^2] = \mathbb E[A^2] - \mu^2 \le \mathbb E[A] - \mu^2 = \mu-\mu^2 \le \frac14. \]This concludes the proof. $\blacksquare$

Thus the previous two claims together give \[ \max_\sigma S_\sigma - \min_\sigma S_\sigma \ge \sqrt{\frac{4}{n-1}} = \frac{2}{\sqrt{n-1}}. \]But $\sum_i x_i y_i = \max_\sigma S_\sigma$ and $\sum_i x_i y_{n+1-i} = \min_\sigma S_\sigma$ by the rearrangement inequality, and therefore we are done.

Outline of second approach (by convexity, due to Alex Zhai). We will instead prove a converse result: given the hypotheses $x_1 \ge \dots \ge x_n$, $y_1 \ge \dots \ge y_n$, $\sum_i x_i = \sum_i y_i = 0$, and $\sum_i x_i y_i - \sum_i x_i y_{n+1-i} = \frac{2}{\sqrt{n-1}}$, we will prove that $\sum x_i^2 \sum y_i^2 \le 1$. Fix the choice of $y$'s. We see that we are trying to maximize a convex function in $n$ variables $(x_1, \dots, x_n)$ over a convex domain (actually the intersection of two hyperplanes with several half-spaces). So a maximum can only happen on the boundary: when the $x_i$ take on at most two distinct values. An analogous argument applies to the $y_i$. At this point the problem can be checked directly.
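As a quick numerical sanity check of the equality case quoted in this post (using $n = 8$; the script and its names are mine, not part of the solution), one can verify that the displayed pair satisfies both constraints and attains exactly $\frac{2}{\sqrt{n-1}}$:

```python
# Verify the claimed equality case for an even n: constraints hold and the
# left-hand side equals 2/sqrt(n-1).
import math

n = 8  # any even n >= 2
x = [1 / math.sqrt(n)] * (n // 2) + [-1 / math.sqrt(n)] * (n // 2)
y = [(n - 1) / math.sqrt(n * (n - 1))] + [-1 / math.sqrt(n * (n - 1))] * (n - 1)

assert abs(sum(x)) < 1e-9 and abs(sum(y)) < 1e-9          # both sums are 0
assert abs(sum(t * t for t in x) - 1) < 1e-9              # sum of squares of x is 1
assert abs(sum(t * t for t in y) - 1) < 1e-9              # sum of squares of y is 1

lhs = (sum(xi * yi for xi, yi in zip(x, y))
       - sum(xi * yi for xi, yi in zip(x, reversed(y))))
print(lhs, "vs", 2 / math.sqrt(n - 1))   # the two values coincide
```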
04.07.2020 11:51
<Proof without Probability> First, there exists a minimum value since our domain is compact. Assume $(x_1, x_2, \ldots, x_n, y_1, \ldots, y_n)$ is the minimal case. Since it is minimal, any perturbation should not decrease $\sum_i (x_i y_i - x_i y_{n+1-i})$.
Assume the $x_i$ take at least $3$ distinct values, say $x_1=\dots=x_{a} > x_{a+1}=\dots=x_{a+b} > \dots > x_{n-c+1}=\dots=x_n$. We will perturb $(\alpha, \beta, \gamma)$, maintaining $\alpha=x_1=\dots=x_a$, $\beta=x_{a+1}=\dots=x_{a+b}$, $\gamma=x_{n-c+1}=\dots=x_n$, and the two conditions. The set of $(\alpha, \beta, \gamma)$ satisfying the restriction that $a\alpha+b\beta+c\gamma$ and $a\alpha^2+b\beta^2+c\gamma^2$ are constant is the intersection of an ellipsoid and a plane. Note that this cannot be empty or just a point, since $\alpha>\beta>\gamma$ by assumption. The restriction $\alpha \ge \beta \ge \gamma$ cuts this ellipse into an arc, whose boundary consists of the two points with $\alpha=\beta$ and with $\beta=\gamma$. Considering the coefficients, we would like to minimize $p\alpha+q\beta+r\gamma$ where $p/a \ge q/b \ge r/c$. We may find a point outside the arc with a smaller value; in fact it is enough to prove that $p\alpha+q\beta+r\gamma$ does not increase when passing the boundary (from inside the arc to outside). For this, we examine the tangent vector at each boundary point in the direction pointing out of the arc. The tangent vector of the ellipse is perpendicular to both $(a,b,c)$ and $(a\alpha, b\beta, c\gamma)$; note that these two vectors cannot be parallel. Therefore, the tangent vector is parallel to $\left(\frac{\beta-\gamma}{a}, \frac{\gamma-\alpha}{b}, \frac{\alpha-\beta}{c}\right)$. At the point with $\alpha=\beta$, the tangent vector pointing out of the arc has positive $\beta$-component, so it is a negative multiple of this vector; at the point with $\beta=\gamma$, the tangent vector pointing out of the arc has positive $\gamma$-component, so it is a positive multiple. Taking the inner product with $(p,q,r)$, in both cases the value is less than or equal to zero (zero is possible exactly when $p/a=q/b$ or $q/b=r/c$, respectively). Since the linear function attains a unique maximum value and a unique minimum value on the ellipse, this is only possible if the minimum over the ellipse lies outside the arc or on its boundary; otherwise $p/a=q/b=r/c$ and any move is possible. In both cases, a small perturbation of $(\alpha, \beta, \gamma)$ gives a smaller value of $p\alpha+q\beta+r\gamma$, since our point is not on the boundary of the arc.
12.07.2020 04:28
Let $A$ be a random variable such that $P\left(a\le A\le b\right)=1.$ Define $B=A-\mu,$ where $\mu=\mathbb E(A).$ Then $P\left(a'\le B\le b'\right)=1$ with $a'=a-\mu,\ b'=b-\mu.$ Since $$\left|B-\frac{a'+b'}{2}\right|\le\frac {b'-a'}{2}=\frac {b-a}{2},$$squaring gives $$B^2-(a'+b')B+\left(\frac {a'+b'}{2}\right)^2\le\left(\frac {b-a}{2}\right)^2.$$Taking expectations on both sides of the inequality, and using $\mathbb E(B)=0$ and $\mathbb E (B^2)=\mathrm{Var}(A),$ we obtain $$\mathrm {Var}(A)+\left (\frac {a'+b'}{2}\right)^2\le\left (\frac {b-a}{2}\right)^2.$$Thus, $\mathrm {Var}(A)\le\left(\frac {b-a}{2}\right)^2.$
12.07.2020 18:39
What is the mount inequality?
12.07.2020 18:51
richardrichard wrote: What is the mount inequality? It's a reference to 2017 USAMO #6; it's not an actual inequality.
10.04.2021 04:17
oops does this work Define $P(\sigma)$ to be the value of the product you get when you do permutation $\sigma$ to the sequence of $y$'s to $(y_1', y_2', \cdots, y_n')$, and evaluate \begin{align*} P(\sigma) = x_1y_1' + x_2y_2' + \cdots + x_ny_n'. \end{align*}Observe that the conclusion is then equivalent to \begin{align*} \max(P) - \min(P) \geq \frac{2}{\sqrt{n - 1}} \end{align*}by Rearrangement. Now we spam. We'll use the fact that \begin{align*} \sum_{sym} x_ix_j = \sum_{sym} y_iy_j = - 1 \end{align*}freely throughout the proof from now on. The idea is to calculate \begin{align*} E(P(\sigma)^2) &= E \left( \sum_{i = 1}^n (x_iy_i')^2 + \sum_{sym} x_ix_jy_i'y_j' \right) \\ &= E \left(\sum_{i = 1}^n (x_iy_i')^2 \right) + E \left( \sum_{sym} x_ix_jy_i'y_j' \right) \\ &= \dfrac{(x_1^2 + x_2^2 + \cdots + x_n^2)(y_1^2 + y_2^2 + \cdots y_n^2)}{n} + \dfrac{\left( \sum_{sym} x_ix_j \right) \left( \sum_{sym} y_iy_j \right)}{n(n - 1)} \\ &= \dfrac{1}{n} + \dfrac{1}{n(n - 1)} = \frac{1}{n - 1}. \end{align*}Now we basically do more probability spam: Observe \begin{align*} \left( \frac{\max - \min}{2} \right)^2 &\geq E \left[ \left(P - \frac{\max + \min}{2} \right)^2 \right] \\ &= E[P^2] - [\max + \min ] E[P] + \left( \frac{\max + \min}{2} \right)^2 \\ &\geq \dfrac{1}{n} + \dfrac{1}{n(n - 1)} \\ &= \frac{1}{n - 1} \end{align*}which gives us $\max - \min \geq \frac{2}{\sqrt{n - 1}}$ as desired.
06.07.2021 20:54
Solved with Elliott Liu

Let $T_{\sigma}$ be the random variable, over uniformly random permutations $\sigma$, given by \[\sum x_i y_{\sigma(i)}.\]Then, since $\sum x_i = \sum y_i=0$, we clearly have $\mathbb E[T_{\sigma}]=0.$

$\textbf{Claim 1:}$ \[\mathbb E\left[\sum x_i^2 y_{\sigma(i)}^2\right] = \sum_{i=1}^n x_i^2\,\mathbb E\left[y_{\sigma(i)}^2\right]=\sum_{i=1}^n x_i^2\cdot \frac{1}{n}=\frac{1}{n}\]

$\textbf{Claim 2:}$ \[\sum_{1\leq i<j\leq n}x_ix_j = \frac12\left(\Big(\sum x_i\Big)^2-\sum x_i^2 \right)= -\frac12\]Applying Claim 2 to both the $x_i$ and the $y_i$ (the latter gives $\mathbb E[y_{\sigma(i)}y_{\sigma(j)}]=\frac{-\frac 12}{\binom n2}$ for fixed $i<j$), \[2\cdot \mathbb E\left[ \sum_{1\leq i<j\leq n} x_i x_j y_{\sigma(i)}y_{\sigma(j)}\right] = 2 \sum_{1\leq i<j\leq n} x_i x_j\, \mathbb E\left[y_{\sigma(i)}y_{\sigma(j)}\right] = 2\cdot \left(-\frac 12\right)\cdot \frac{-\frac 12}{\binom n2} = \frac{1}{n(n-1)}\]Now, combining these two results, we get \[\mathbb E[T_{\sigma}^2] = \mathbb E\left[\left(\sum_{i=1}^n x_i y_{\sigma(i)}\right)^2\right] = \mathbb E\left[\sum_{i=1}^n x_i^2 y_{\sigma(i)}^2 + 2 \sum_{1\leq i<j\leq n} x_i x_j y_{\sigma(i)}y_{\sigma(j)} \right]=\frac 1n +\frac{1}{n(n-1)} = \frac{1}{n-1}\]Thus, \[\operatorname{Var}(T_{\sigma}) = \mathbb E[T_{\sigma}^2] - \mathbb E[T_{\sigma}]^2 = \frac{1}{n-1}\]

For the sake of contradiction, assume that there is a real number $\theta$ such that $\theta - \frac{1}{\sqrt{n-1}}\leq T_\sigma \leq \theta +\frac{1}{\sqrt{n-1}}$ for every $\sigma$, with at most one of the two endpoints ever attained.

$\textbf{Claim 3:}$ Let $a$ be a random variable taking finitely many equally likely values, all in the range $[\theta-c,\theta+c]$, with $\mathbb E(a)=0$. Then \[\operatorname{Var}(a) \leq c^2-\theta^2\]$\textbf{Proof 3:}$ Let $p_i$ be the positive values taken, and $n_i$ be the magnitudes of the negative values taken. Assume there are $k$ positives and $l$ negatives among the $N$ values in total. Then \[\sum_{i=1}^k p_i^2 \leq (\theta+c) \sum_{i=1}^k p_i = (\theta+c)\sum_{j=1}^l n_j \leq (\theta+c)\cdot l \cdot (c-\theta)=(c^2-\theta^2)\cdot l\]Similarly, $\sum_{j=1}^l n_j^2 \leq (c^2-\theta^2)\cdot k$. Thus, \[\operatorname{Var}(a) = \mathbb E(a^2)-\mathbb E(a)^2 \leq \frac{1}{N}\left((c^2-\theta^2)\cdot k+(c^2-\theta^2)\cdot l\right)-0^2 \leq c^2-\theta^2\]$\square$.

Applying Claim 3 with $c=\frac{1}{\sqrt{n-1}}$, the variance of $T_\sigma$ is at most $c^2-\theta^2\leq\frac{1}{n-1}$. Since we computed that the variance is exactly $\frac{1}{n-1}$, we are at equality, so the equality conditions must hold: $\theta=0$ and the $T_{\sigma}$ values must all lie at the endpoints $\theta-c$ or $\theta+c$. As $\mathbb E[T_\sigma]=0$, both endpoints are attained, so the difference between the min and max is $\geq 2c$, contradicting our assumption. This finishes, because by rearrangement \[\sum_{i=1}^n (x_iy_i-x_iy_{n+1-i}) =\text{ max ($T_{\sigma}$) - min($T_{\sigma}$)} \geq \frac{2}{\sqrt{n-1}}\]$\blacksquare$
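Claim 3 can also be spot-checked numerically. The sketch below builds random mean-zero distributions supported in $[\theta - c, \theta + c]$ (the rescaling construction and all names are mine, purely illustrative) and confirms $\operatorname{Var} \le c^2 - \theta^2$:

```python
# Numerical check of Claim 3: a mean-zero random variable supported in
# [theta - c, theta + c] has variance at most c^2 - theta^2.
import random

for _ in range(1000):
    c = random.uniform(0.5, 3.0)
    theta = random.uniform(-0.9 * c, 0.9 * c)    # keep 0 strictly inside the interval
    lo, hi = theta - c, theta + c
    # support points, forcing at least one positive and one negative value
    vals = [hi, lo] + [random.uniform(lo, hi) for _ in range(random.randint(0, 8))]
    w = [random.random() + 0.01 for _ in vals]
    # rescale positive and negative parts so the weighted values cancel (mean 0)
    pos = sum(wi * v for wi, v in zip(w, vals) if v > 0)
    neg = -sum(wi * v for wi, v in zip(w, vals) if v < 0)
    scaled = [wi / pos if v > 0 else wi / neg if v < 0 else wi
              for wi, v in zip(w, vals)]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    mean = sum(p * v for p, v in zip(probs, vals))
    var = sum(p * v * v for p, v in zip(probs, vals)) - mean ** 2
    assert abs(mean) < 1e-9
    assert var <= c * c - theta * theta + 1e-9
print("Var <= c^2 - theta^2 held in every trial")
```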
14.07.2023 15:00
Let $\sigma$ be a uniformly random permutation of $1,2,\cdots ,n.$ Define $\mathcal{X}=\sum\limits_{i=1}^nx_iy_{\sigma(i)}.$ Then by the rearrangement inequality, $\sum\limits_{i = 1}^n x_iy_i - \sum\limits_{i=1}^nx_iy_{n + 1 - i}=\max\mathcal{X}-\min\mathcal{X}.$ Using a famous Lemma (proved several times above) we have $\max\mathcal{X}-\min\mathcal{X}\geqslant 2\sqrt{\operatorname{Var}(\mathcal{X})}=2\sqrt{\mathbb E(\mathcal X^2)-\left(\mathbb E(\mathcal X)\right)^2}.$ Now we only need to calculate $\mathbb E(\mathcal X^2)$ and $\left(\mathbb E(\mathcal X)\right)^2.$ Since $\mathbb E(y_{\sigma(i)})=\frac 1n\sum\limits_{k=1}^ny_k=0,$ we get $\mathbb E(\mathcal X)=\mathbb E\left(\sum\limits_{i=1}^nx_iy_{\sigma(i)}\right)=\sum\limits_{i=1}^nx_i\mathbb E(y_{\sigma(i)})=0.$ Similarly $\mathbb E(y_{\sigma(i)}^2)=\frac 1n$ and, for $i\neq j,$ $\mathbb E(y_{\sigma(i)}y_{\sigma(j)})=\frac{1}{n(n-1)}\sum\limits_{1\le k\neq l\le n}y_ky_l=\frac{-1}{n(n-1)}.$ Hence $\begin{aligned}\mathbb E(\mathcal X^2)&=\mathbb E\left(\left(\sum\limits_{i=1}^nx_iy_{\sigma(i)}\right)^2\right)=\mathbb E\left(\sum\limits_{i=1}^n\sum\limits_{j=1}^nx_ix_jy_{\sigma(i)}y_{\sigma(j)}\right)\\&=\mathbb E\left(\sum\limits_{i=1}^nx_i^2y_{\sigma(i)}^2\right)+\mathbb E\left(\sum\limits_{1\leq i\neq j\leq n}x_ix_jy_{\sigma(i)}y_{\sigma(j)}\right)\\&=\sum\limits_{i=1}^nx_i^2\,\mathbb E(y_{\sigma(i)}^2)+\frac{-1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}x_ix_j=\frac 1n+\frac{(-1)(-1)}{n(n-1)}=\frac 1{n-1}.\end{aligned}$ Therefore $\max\mathcal{X}-\min\mathcal{X}\geqslant\frac{2}{\sqrt{n-1}}.\blacksquare$
05.09.2023 19:43
Inequality can be rewritten as \[ \sum_{i = 1}^n x_i y_i - \sum_{i=1}^n x_i y_{n+1-i} \ge \frac{2}{\sqrt{n-1}} \] We know with Cauchy-Schwarz Inequality that \[ 1 = \left(\sum_{i=1}^n x_i^2 \right) \cdot \left(\sum_{i=1}^n y_i^2 \right) \ge \left(\sum_{i=1}^n x_i y_i \right)^2 \Longrightarrow 1 \ge \sum_{i=1}^n x_i y_i \ge -1\] Analogously \[ 1 = \left(\sum_{i=1}^n x_i^2 \right) \cdot \left (\sum_{i=1}^n y_{n+1-i}^2 \right) \ge \left(\sum_{i=1}^n x_i y_{n+1-i} \right)^2 \Longrightarrow 1 \ge \sum_{i=1}^n x_i y_{n+1-i} \ge -1 \] So \[ \sum_{i = 1}^n x_i y_i - \sum_{i=1}^n x_i y_{n+1-i} \ge 1 - (-1) = 2 \] All that's left is to prove \[ 2 \ge \frac{2}{\sqrt{n-1}} \] which is simple induction: $\newline$ Base case: $n=2 \Longrightarrow 2 \ge \frac{2}{\sqrt{2-1}} = 2$ Now let the inequality be true for n. \[ 2 \ge \frac{2}{\sqrt{n-1}} > \frac{2}{\sqrt{n}} \]
21.02.2024 06:38
Consider a permutation of the $y_i$ given by $\pi = (z_1, z_2, z_3, \dots, z_n)$. Then over all such permutations $\pi$ consider the sum \begin{align*} S_\pi = \sum_{i=1}^{n} x_iz_i \end{align*}and add it as a new element to the multiset $T$. Note that $|T| = n!$, each element corresponding to a permutation. Then define $X$ to be a uniformly random element of $T$. Note that, \begin{align*} \mathbb{E}[X] &= \frac{1}{n!} \sum_{t \in T} t \\ &= \frac{(n-1)!}{n!}\left( \sum_{i = 1}^n x_i \right) \cdot \left(\sum_{i = 1}^n y_i \right)\\ &= 0 \end{align*}Also we may note that, \begin{align*} \mathbb{E}[X^2] &= \frac{1}{n!} \left(\sum_{t \in T} t^2 \right)\\ &= \frac{(n-1)!}{n!} \left(\sum_{i = 1}^n x_i^2 \right)\left(\sum_{i = 1}^n y_i^2 \right) + \frac{4(n-2)!}{n!} \left(\sum_{1 \leq a < b \leq n} x_ax_b \right)\left(\sum_{1 \leq c < d \leq n} y_cy_d \right)\\ &= \frac{1}{n} + \frac{1}{n(n-1)} \left( \sum_{1 \leq i < j \leq n} 2x_ix_j \right)\left( \sum_{1 \leq i < j \leq n} 2y_iy_j \right)\\ &= \frac{1}{n} + \frac{1}{n(n-1)} \left( -1 \right) \left( -1 \right)\\ &= \frac{1}{n} + \frac{1}{n(n-1)}\\ &= \frac{1}{n-1} \end{align*}Now note that \begin{align*} \text{Var}(X) &= \mathbb{E}[X^2] - \mathbb{E}[X]^2\\ &= \frac{1}{n-1} \end{align*}and hence we have, \begin{align*} \text{Dev}(X) = \frac{1}{\sqrt{n-1}} \end{align*}Thus it remains to show that, \begin{align*} \max(T) - \min(T) &\geq 2\text{Dev}(X)\\ \iff \left( \frac{\max(T) - \min(T)}{2} \right)^2 &\geq \text{Var}(X)\\ \end{align*}However this follows from the fact that, \begin{align*} \left( \frac{\max(T) - \min(T)}{2} \right)^2 &\geq \mathbb{E}\left[\left(X - \frac{\max(T) + \min(T)}{2} \right)^2 \right]\\ & = \mathbb{E}[X^2] - \mathbb{E}[X](\max(T) + \min(T)) + \left( \frac{\max(T) + \min(T)}{2} \right)^2\\ &= \mathbb{E}[X^2] + \left( \frac{\max(T) + \min(T)}{2} \right)^2\\ &\geq \mathbb{E}[X^2] \\ &= \mathbb{E}[X^2] - \mathbb{E}[X]^2\\ &= \text{Var}(X) \end{align*}Thus our claim follows and we find that, \begin{align*} \max(T) - \min(T) &\geq 2\text{Dev}(X)\\ &= \frac{2}{\sqrt{n-1}} \end{align*}Noting that by rearrangement $\max(T) = \sum_{i = 1}^n x_iy_i$ and $\min(T) = \sum_{i = 1}^n x_iy_{n + 1 - i}$, the problem follows. Remarks: Solution heavily motivated by 19sla2.
25.06.2024 20:58
We give a combinatorially flavored proof of this.
15.07.2024 09:05
Let $\pi$ be a uniformly random permutation of $\{1, \dots, n\}$. Then we note that $\sum_{i < j} x_ix_j = \frac{1}{2}\left(\left( \sum_i x_i \right)^2 - \sum_i x_i^2\right) = -\frac{1}{2}$, and likewise $\sum_{i < j} y_iy_j = -\frac{1}{2}$. Let $\mathcal{P}_i = x_iy_{\pi(i)}$. First note that $\mathbb{E}\left(\sum_i \mathcal{P}_i\right) = \frac{1}{n}\left(\sum_i x_i\right)\left(\sum_i y_i\right) = 0$. Then $\mathbb{E}\left(\left(\sum_i \mathcal{P}_i\right)^2\right) = \mathbb{E}\left(\sum_i x_i^2y_{\pi(i)}^2\right) + 2\mathbb{E}\left(\sum_{i < j} x_ix_jy_{\pi(i)}y_{\pi(j)}\right)$. Now $\mathbb{E}\left(\sum_i x_i^2y_{\pi(i)}^2\right) = \frac{1}{n}\left(\sum_i x_i^2\right)\left(\sum_i y_i^2\right) = \frac{1}{n}$, and $\mathbb{E}\left(\sum_{i < j} x_ix_jy_{\pi(i)}y_{\pi(j)}\right) = \left(\sum_{i < j} x_ix_j\right)\cdot\frac{\sum_{k < l} y_ky_l}{\binom{n}{2}} = \left(-\frac{1}{2}\right)^2 \cdot \frac{1}{\binom{n}{2}} = \frac{1}{2n(n-1)}$. Summing up, we get $\mathbb{E}\left(\left(\sum_i \mathcal{P}_i\right)^2\right) = \frac{1}{n} + \frac{2}{2n(n-1)} = \frac{1}{n-1}$, so $\mathbf{Dev}\left(\sum_i \mathcal{P}_i\right) = \frac{1}{\sqrt{n-1}}$. By the Rearrangement Inequality we have $\mathbf{max}\left(\sum_i \mathcal{P}_i\right) = \sum_i x_iy_i$ and $\mathbf{min}\left(\sum_i \mathcal{P}_i\right) = \sum_i x_iy_{n+1-i}$. So it suffices to show that for any random variable $X$ bounded in some finite interval $[a, b]$ with $a < b$, we have $2\mathbf{Dev}(X) \leq b - a$. Shifting so that $a + b = 0$, we have $|X| \leq \frac{b-a}{2}$, so $\mathbf{Var}(X) \leq \mathbb{E}(X^2) \leq \left(\frac{b-a}{2}\right)^2$, which gives the desired result, so we are done.