In this short post, I would like to discuss a special case of the construction introduced in the first part of the series, namely computing the set {K_{A_n}}, where {A_n \subset \mathbb C^n \otimes \mathbb C^n} is the antisymmetric subspace of the tensor product. This example plays an important role in the additivity problem for the minimum output entropy of quantum channels, as was shown in [ghp].

1. Antisymmetric vectors and matrices

For order two tensors {x \in \mathbb C^n \otimes \mathbb C^n}, there are only two symmetry classes, the symmetric and the antisymmetric vectors. The antisymmetric vectors form a vector space

\displaystyle A_n = \mathrm{span}\left\{\frac{1}{\sqrt 2} (e_i \otimes e_j - e_j \otimes e_i) \, : \, 1 \leq i < j \leq n\right\},

where the vectors in the span can be shown to form an orthonormal basis of the {{n \choose 2}}-dimensional subspace {A_n}, whenever {(e_i)} is an orthonormal basis of {\mathbb C^n}. Let {P_n \in \mathcal M_{n^2}(\mathbb C)} be the orthogonal projection on the subspace {A_n}. It is easy to see that {P_n} has entries in {\{0,\pm 1/2\}} and it looks as below ({n=10}).
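To make this concrete, here is a small numerical sketch (numpy; the function name is mine) constructing {P_n} as {(I-S)/2}, where {S} is the flip operator {S(e_i \otimes e_j) = e_j \otimes e_i}:

```python
import numpy as np

def antisym_projection(n):
    """Orthogonal projection P_n onto A_n, written as (I - S)/2,
    where S is the flip operator S(e_i ⊗ e_j) = e_j ⊗ e_i."""
    S = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            S[i * n + j, j * n + i] = 1.0
    return (np.eye(n * n) - S) / 2

P = antisym_projection(10)
assert np.allclose(P @ P, P) and np.allclose(P, P.T)  # orthogonal projection
assert set(np.unique(P)) <= {-0.5, 0.0, 0.5}          # entries in {0, ±1/2}
assert round(np.trace(P)) == 45                       # rank = C(10, 2)
```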


Via the usual isomorphism {\mathbb C^n \otimes \mathbb C^n \simeq \mathbb C^n \otimes (\mathbb C^n)^* = \mathcal M_n(\mathbb C)}, one can see antisymmetric tensors as antisymmetric (or skew-symmetric) matrices: one simply has to rearrange the {n^2} complex coordinates of the tensor in an {n \times n} matrix, respecting the ordering of bases. Note that in this way we obtain antisymmetric ({X^t = -X}) and not anti-Hermitian ({X^* = -X}) elements.
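A quick illustration of this identification (a numpy sketch; the row-major reshaping matches the ordering of bases):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
v = rng.standard_normal(n * n) + 1j * rng.standard_normal(n * n)
# antisymmetrize v by subtracting its coordinate-swapped copy
a = (v - v.reshape(n, n).T.reshape(-1)) / 2
X = a.reshape(n, n)            # the matrix side of the isomorphism
assert np.allclose(X.T, -X)             # antisymmetric: X^t = -X
assert not np.allclose(X.conj().T, -X)  # generically not anti-Hermitian
```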

2. Singular values of vectors in the antisymmetric subspace

It is well known that complex antisymmetric matrices can be 2-block diagonalized by a unitary congruence (the Youla decomposition):

\displaystyle X = U \begin{bmatrix} 0 & \lambda_1 & & & & \\ -\lambda_1 & 0 & & & & \\ & & 0 & \lambda_2 & & \\ & & -\lambda_2 & 0 & & \\ &&&& \ddots &\\ &&&& & 0 \end{bmatrix}U^t,

where {U} is a unitary matrix and {\lambda_i \geq 0}. The matrix in the middle has either diagonal blocks of size 2, as shown, or null diagonal elements. Since both {U} and {U^t} are unitary, {X} has the same singular values as the middle matrix, and each {2 \times 2} block contributes the pair {\lambda_i, \lambda_i}; hence, the non-zero singular values of an antisymmetric matrix have multiplicity at least 2. (Beware that complex antisymmetric matrices need not be normal, so one cannot simply argue via moduli of eigenvalues.) Recalling that the singular value vector {\lambda(x)} of a tensor consists of the squared singular values of the corresponding matrix, and that squaring preserves the pairing, we conclude that

\displaystyle K_{A_n} \subset \{ (\lambda_1, \lambda_1, \lambda_2, \lambda_2, \ldots) \, : \, \lambda_i \geq 0, \, \sum_{i=1}^{\lfloor n/2 \rfloor} \lambda_i = 1/2\}.
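The pairing of singular values is easy to observe numerically (a sketch with numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (A - A.T) / 2                          # a complex antisymmetric matrix
s = np.linalg.svd(X, compute_uv=False)     # singular values, sorted
assert np.allclose(s[0::2], s[1::2])       # they come in equal pairs
```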

Actually, it is easy to see that we have equality, since for any such {\lambda} the vector

\displaystyle x_\lambda = \sum_{i=1}^{\lfloor n/2 \rfloor} \sqrt{ \lambda_i } (e_i \otimes e_{\lfloor n/2 \rfloor+i} - e_{\lfloor n/2 \rfloor+i} \otimes e_i)

is a unit norm element of {A_n}. We summarize everything in the following theorem, where {\prec} denotes the majorization relation.

Theorem 1. The set of all possible singular values of antisymmetric vectors inside {\mathbb C^n \otimes \mathbb C^n} is given by

\displaystyle K_{A_n} = \{ (\lambda_1, \lambda_1, \lambda_2, \lambda_2, \ldots) \in \Delta_n \, : \, \lambda_i \geq 0, \, \sum_{i=1}^{\lfloor n/2 \rfloor} \lambda_i = 1/2\}.

In particular, the set {K_{A_n}} is convex and we have that {\lambda \prec (1/2, 1/2, 0, \ldots, 0)} for all {\lambda \in K_{A_n}}. Hence, the minimum entropy of an element of {K_{A_n}} is 1 bit, attained at {(1/2, 1/2, 0, \ldots, 0)}.
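As a numerical sanity check of Theorem 1 (a numpy sketch; recall that {\lambda(x)} collects the squared singular values of the corresponding matrix):

```python
import numpy as np

n, m = 5, 2                       # m = floor(n/2)
lam = np.array([0.3, 0.2])        # lambda_i >= 0, summing to 1/2
X = np.zeros((n, n))
for i in range(m):                # the matrix of the vector x_lambda
    X[i, m + i] = np.sqrt(lam[i])
    X[m + i, i] = -np.sqrt(lam[i])
assert np.isclose((X**2).sum(), 1.0)          # unit norm
schmidt = np.sort(np.linalg.svd(X, compute_uv=False)**2)[::-1]
assert np.allclose(schmidt, [0.3, 0.3, 0.2, 0.2, 0.0])
p = schmidt[schmidt > 1e-12]                  # entropy is at least 1 bit
assert -(p * np.log2(p)).sum() >= 1.0
```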


[ghp] A. Grudka, M. Horodecki and L. Pankowski, Constructive counterexamples to the additivity of the minimum output Rényi entropy of quantum channels for all {p > 2}, J. Phys. A: Math. Theor. 43 (2010) 425304.


A couple of days ago, I remembered the following fun fact from my high-school days in Romania:

\displaystyle \frac{1}{9801}=0,(000102030405\cdots939495969799).

First, note that I am using the European notation for periodic decimal expansions (or repeating decimals). What you get is a periodic expansion in which all the two-digit numbers from 00 to 99, except 98, appear in the periodic part. OK, there seems to be some magic going on here, so let us try to understand what is happening.

First, note that {9801 = 99^2}, so probably there’s nothing special about the 2 in there. Indeed, try

\displaystyle \frac{1}{9^2} = \frac{1}{81} = 0,(012345679).
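These expansions are easy to verify by programming the long division; a quick Python sketch (the helper name is my own):

```python
def period_digits(p, q, base=10):
    """Digits of the repeating part of p/q in the given base
    (assumes gcd(q, base) = 1, so the expansion is purely periodic)."""
    digits, seen, r = [], {}, p % q
    while r not in seen:
        seen[r] = len(digits)
        r *= base
        digits.append(r // q)
        r %= q
    return digits[seen[r]:]

# 1/81 = 0,(012345679): the digit 8 is missing
assert period_digits(1, 81) == [0, 1, 2, 3, 4, 5, 6, 7, 9]

# 1/9801: every two-digit block from 00 to 99 appears except 98
blocks = period_digits(1, 9801)
pairs = [10 * a + b for a, b in zip(blocks[0::2], blocks[1::2])]
assert pairs == [k for k in range(100) if k != 98]
```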

Good, let us go ahead and generalize this a little more. The 9 in there is the number of fingers you are left with when you lose one, so the same should be true for creatures anatomically different (by the way, there is ice on Mercury, and maybe more!):

\displaystyle \frac{1}{\text{E1}_{16}} = \frac{1}{(\text{F}_{16})^2} = 0,(0123456789\text{ABCDF})_{16}.
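This hexadecimal expansion can be double-checked by direct long division (E1 in base 16 is 225 = 15² in decimal; note that only the digit E, which plays the role of 8, is skipped):

```python
# long division of 1 by E1 (= 225) in base 16
digits, r = [], 1
for _ in range(15):
    r *= 16
    digits.append("0123456789ABCDEF"[r // 225])
    r %= 225
assert r == 1                                  # remainder returns: period 15
assert "".join(digits) == "0123456789ABCDF"    # the digit E is skipped
```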

Before stating and proving a general result, let us discuss the easiest case (for us humans), 1/81. The idea comes from [cgo] and the starting point is the formula

\displaystyle \sum_{n=1}^\infty nx^{n-1} = \frac{1}{(1-x)^2},

which is the power series expansion of {(1-x)^{-2}} around 0. For {x=1/10}, we get

\displaystyle  \begin{array}{rcl}  \frac{1}{9^2} &=& \left(\frac{1}{10}\right)^2 \left[ 1 + \frac{2}{10} + \frac{3}{10^2} + \cdots \right] \\[2mm] &=& \frac{0}{10} + \frac{1}{10^2} + \frac{2}{10^3} + \cdots + \frac{7}{10^8} + \frac{8}{10^9} + \frac{9}{10^{10}} + \frac{10}{10^{11}} + \frac{11}{10^{12}} + \frac{12}{10^{13}} + \cdots. \end{array}

At this point, the expression above starts to look like a decimal expansion – the problem is that some of the numerators in the brackets are not digits! To get around this, write

\displaystyle  \begin{array}{rcl}  \frac{10}{10^{11}} &=& \frac{1}{10^{10}} + \frac{0}{10^{11}} \\[2mm] \frac{11}{10^{12}} &=& \frac{1}{10^{11}} + \frac{1}{10^{12}} \\[2mm] \frac{12}{10^{13}} &=& \frac{1}{10^{12}} + \frac{2}{10^{13}} \\[2mm] &\vdots& \end{array}

to get

\displaystyle  \begin{array}{rcl}  \frac{1}{9^2} &=& \frac{0}{10} + \cdots + \frac{7}{10^8} + \frac{8}{10^9} + \frac{9}{10^{10}} + \left( \frac{1}{10^{10}} + \frac{0}{10^{11}} \right) + \left( \frac{1}{10^{11}} + \frac{1}{10^{12}}\right) + \left( \frac{1}{10^{12}} + \frac{2}{10^{13}} \right) + \cdots \\[2mm] &=& \frac{0}{10} + \cdots + \frac{7}{10^8} + \frac{8}{10^9} + \left(\frac{9}{10^{10}} + \frac{1}{10^{10}}\right) + \left( \frac{0}{10^{11}} + \frac{1}{10^{11}} \right) + \left( \frac{1}{10^{12}}+\frac{1}{10^{12}} \right) + \frac{2}{10^{13}} + \cdots \\[2mm] &=& \frac{0}{10} + \cdots + \frac{7}{10^8} + \frac{8}{10^9} + \frac{10}{10^{10}} + \frac{1}{10^{11}}+\frac{2}{10^{12}}+\cdots \\[2mm] &=& \frac{0}{10} + \cdots + \frac{7}{10^8} + \frac{8}{10^9} + \left( \frac{1}{10^{9}} +\frac{0}{10^{10}} \right)+ \frac{1}{10^{11}}+\frac{2}{10^{12}}+\cdots \\[2mm] &=& \frac{0}{10} + \cdots + \frac{7}{10^8} + \frac{9}{10^9} + \frac{0}{10^{10}} + \frac{1}{10^{11}}+\frac{2}{10^{12}}+\cdots, \end{array}

which is the announced formula. The general result can be stated as follows.

Proposition 1. Let {b \geq 3} be a fixed base. Then, when everything is written in base {b}, for any {n \geq 1}, one has

\displaystyle  \begin{array}{rcl}  \frac{1}{\left(\overline{b-1} \, \overline{b-1}\cdots\overline{b-1} \, \overline{b-1}\right)^{2}} = 0,(&&00\cdots00 \\ &&00\cdots01 \\ &&00\cdots02 \\ && \qquad \vdots\\ &&\overline{b-1} \, \overline{b-1}\cdots\overline{b-1} \, \overline{b-4} \\ &&\overline{b-1} \, \overline{b-1}\cdots\overline{b-1} \, \overline{b-3} \\ &&\overline{b-1} \, \overline{b-1}\cdots\overline{b-1} \, \overline{b-1}), \end{array}

where each group containing {\cdots} in the expression above has {n} digits and {\overline{x}} denotes the digit {0 \leq x <b} in base {b}. Note that the only group missing from the period is {\overline{b-1} \, \overline{b-1}\cdots\overline{b-1} \, \overline{b-2}}.

Proof: We shall start from the RHS of the equation and work our way towards the LHS. Recall that a purely periodic expansion in base {b} can be written as a fraction as follows:

\displaystyle 0,(\overline{x_{k-1}} \, \overline{x_{k-2}} \, \overline{x_{k-3}} \cdots \overline{x_{1}} \, \overline{x_0} ) = \frac{\sum_{i=0}^{k-1} \overline{x_i} b^i}{ (b-1) \sum_{i=0}^{k-1} b^i}.

First, let us focus on the case {n=1} and treat the general case at the end of the proof. In this particular case, the RHS of the equation in the statement is equal to

\displaystyle 0,(012\cdots \overline{b-3} \, \overline{b-1}) = \frac{1+\sum_{i=0}^{b-2} (b-2-i)b^i}{(b-1)\sum_{i=0}^{b-2}b^i},

where the 1 in front of the sum stands for the fact that {\overline{b-2}} is replaced by {\overline{b-1}} as the last digit of the period. Using the formulas

\displaystyle  \begin{array}{rcl}  \sum_{i=0}^k b^i &=& \frac{b^{k+1}-1}{b-1}\\ \sum_{i=0}^k i b^i &=& \frac{b+b^{k+1}(kb-k-1)}{(b-1)^2}, \end{array}

we can write

\displaystyle  \begin{array}{rcl}  1+\sum_{i=0}^{b-2} (b-2-i)b^i &=& 1+(b-2)\frac{b^{b-1}-1}{b-1} - \frac{b+b^{b-1}[(b-2)b-(b-2)-1]}{(b-1)^2} \\[2mm] &=& \frac{b^{b-1}-1}{(b-1)^2}, \end{array}

from which the conclusion easily follows: the denominator equals {(b-1)\sum_{i=0}^{b-2}b^i = b^{b-1}-1}, so the fraction reduces to {1/(b-1)^2}. For the general case, note that the following expansion holds for groups {\widetilde y_i} of {n} digits:

\displaystyle 0,(\widetilde{y_{k-1}} \, \widetilde{y_{k-2}} \, \widetilde{y_{k-3}} \cdots \widetilde{y_{1}} \, \widetilde{y_0} ) = \frac{\sum_{i=0}^{k-1} \widetilde{y_i} b^{ni}}{(b-1)\sum_{i=0}^{nk-1} b^i}.

The proof then follows the same steps as above. \Box
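Proposition 1, together with the two summation formulas used in its proof, can also be checked numerically; below is a small sketch using a base-{b} long-division helper (names of my choosing):

```python
def period_digits(p, q, base):
    """Repeating part of p/q in the given base (assumes gcd(q, base) = 1)."""
    digits, seen, r = [], {}, p % q
    while r not in seen:
        seen[r] = len(digits)
        r *= base
        digits.append(r // q)
        r %= q
    return digits[seen[r]:]

def predicted(b, n):
    """Blocks 0, 1, ..., b^n - 1 in base b, n digits each, skipping b^n - 2."""
    return [(k // b**i) % b
            for k in range(b**n) if k != b**n - 2
            for i in range(n - 1, -1, -1)]

# the two summation formulas used in the proof
for b in (3, 10, 16):
    for k in (1, 4, 7):
        assert (b - 1) * sum(b**i for i in range(k + 1)) == b**(k + 1) - 1
        assert ((b - 1)**2 * sum(i * b**i for i in range(k + 1))
                == b + b**(k + 1) * (k * b - k - 1))

# the proposition itself, for a few bases b and block sizes n
for b, n in [(10, 1), (10, 2), (16, 1), (7, 2)]:
    q = (b**n - 1) ** 2       # the denominator: n digits (b-1), squared
    assert period_digits(1, q, b) == predicted(b, n)
```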


[cgo] A. Cheer and D. Goldston, Explaining the decimal expansion of 1/81 using calculus, Mathematics and Computer Education 25(3) (1991).

Together with Adrian Basarab and Denis Kouamé, we are looking for a master's student to work on compressive sensing, random matrices and applications to medical imaging. We are looking for a candidate with a solid background in mathematics, with knowledge of linear algebra, probability theory and numerical simulations. If interested, have a look at the proposal (in French) and contact me!

This is the first post in a series about a problem inside RMT {\cap} QIT that I have been working on for some time now [cn2,bcn]. Since I find it to be very simple and interesting, I will present it in a series of blog notes that should be accessible to a large audience. I will also use this material to prepare the talks I will be giving this summer on this topic ;).

In what follows, all vector spaces shall be assumed to be complex and {k \leq n} are fixed constants. For a vector {y \in \mathbb R^k}, the symbol {y^\downarrow} denotes its ordered version, i.e. {y} and {y^\downarrow} are the same up to permutation of coordinates and {y^\downarrow_1 \geq \cdots \geq y^\downarrow_k}.

1. Singular values of vectors in a tensor product

Using the non-canonical isomorphism {\mathbb C^k \otimes \mathbb C^n \simeq \mathbb C^k \otimes (\mathbb C^n)^* = \mathcal M_{k \times n}(\mathbb C)}, one can see any vector

\displaystyle \mathbb C^k \otimes \mathbb C^n \ni x = \sum_{i=1}^k \sum_{j=1}^n x_{ij} e_i \otimes f_j

as a matrix

\displaystyle \mathcal M_{k \times n} \ni X = \sum_{i=1}^k \sum_{j=1}^n x_{ij} e_if_j^*.

In this way, by using the singular value decomposition of the matrix {X} (keep in mind that we assume {k \leq n}), one can write

\displaystyle x = \sum_{i=1}^k \sqrt{\lambda_i} e'_i \otimes f'_i,

where {(e'_i)}, resp. {(f'_i)}, are orthonormal families in {\mathbb C^k}, resp. {\mathbb C^n}. The vector {\lambda = \lambda(x) \in \mathbb R_+^{k}} is the singular value vector of {x} and we shall always assume that it is ordered: {\lambda(x) = \lambda(x)^\downarrow}. It satisfies the normalization condition

\displaystyle \sum_{i=1}^k \lambda_i(x)= \|x\|^2.

In particular, if {x} is a unit vector, then {\lambda(x) \in \Delta^\downarrow_k}, where {\Delta_k} is the probability simplex

\displaystyle \Delta_k = \left\{ y \in \mathbb R_+^k \, : \, \sum_{i=1}^k y_i = 1\right\}

and {\Delta^\downarrow_k} is its ordered version.

In QIT, the decomposition of {x} above is called the Schmidt decomposition and the numbers {\lambda_i(x)} are called the Schmidt coefficients of the pure state {|x\rangle}.
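In practice, the Schmidt coefficients of a pure state are computed with a single SVD of the corresponding matrix; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 5
X = rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))
X /= np.linalg.norm(X)            # a unit vector x, stored as its matrix X
lam = np.sort(np.linalg.svd(X, compute_uv=False)**2)[::-1]
assert np.isclose(lam.sum(), 1.0)        # normalization: sum = ||x||^2 = 1
assert np.all(lam[:-1] >= lam[1:])       # the ordered version lambda(x)
```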

2. The singular value set of a vector subspace

Consider now a subspace {V \subset \mathbb C^k \otimes \mathbb C^n} of dimension {\mathrm{dim} V = d} and define the set

\displaystyle K_V = \{ \lambda(x) \, : \, x \in V \text{ and } \|x\| = 1 \} \subseteq \Delta^\downarrow_k,

called the singular value subset of the subspace {V}.

Below are some examples of sets {K_{V}} in the case {k=3}, where the simplex {\Delta_{3}} is two-dimensional. In all four cases, {k=n=3} and {d=2}. In the last two pictures, one of the vectors spanning the subspace {V} has singular values {(1/3,1/3,1/3)}.
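Pictures like these can be generated by sampling unit vectors from the subspace; a hedged sketch (numpy, for a random {d=2}-dimensional {V \subset \mathbb C^3 \otimes \mathbb C^3}):

```python
import numpy as np

rng = np.random.default_rng(3)
k = n = 3
d = 2
# an orthonormal basis of a random d-dimensional subspace V (columns of B)
B, _ = np.linalg.qr(rng.standard_normal((k * n, d))
                    + 1j * rng.standard_normal((k * n, d)))
points = []
for _ in range(2000):
    c = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    c /= np.linalg.norm(c)                   # a random unit vector of V
    X = (B @ c).reshape(k, n)
    s = np.linalg.svd(X, compute_uv=False)
    points.append(np.sort(s**2)[::-1])       # its point lambda(x) in K_V
points = np.asarray(points)
assert np.allclose(points.sum(axis=1), 1.0)  # K_V sits inside Delta_3
```

Scatter-plotting two coordinates of `points` then gives an approximation of {K_V} in the two-dimensional simplex.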

3. Basic properties

Below is a list of very simple properties of the sets {K_{V}}.

Proposition 1. The set {K_V} is a compact subset of the ordered probability simplex {\Delta_k^\downarrow} having the following properties:

  1. Local invariance: {K_{(U_1 \otimes U_2)V} = K_V}, for unitary matrices {U_1 \in \mathcal U(k)} and {U_2 \in \mathcal U(n)}.
  2. Monotonicity: if {V_1 \subset V_2}, then {K_{V_1} \subset K_{V_2}}.
  3. If {d=1}, {V=\mathbb C x}, then {K_V=\{\lambda(x)\}}.
  4. If {d > (k-1)(n-1)}, then {(1,0,\ldots,0) \in K_V}.

Proof: The first three statements are trivial. The last one is contained in [cmw], Proposition 6 and follows from a standard result in algebraic geometry about the dimension of the intersection of projective varieties. \Box
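The local invariance in the proposition follows from the fact that {(U_1 \otimes U_2)x} corresponds to the matrix {U_1 X U_2^t}, which has the same singular values as {X}; a quick numerical check of this fact (numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 3, 4
# a random unit vector x (as a k x n matrix) and two random unitaries
X = rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))
X /= np.linalg.norm(X)
U1, _ = np.linalg.qr(rng.standard_normal((k, k))
                     + 1j * rng.standard_normal((k, k)))
U2, _ = np.linalg.qr(rng.standard_normal((n, n))
                     + 1j * rng.standard_normal((n, n)))
Y = U1 @ X @ U2.T                  # the matrix of (U1 ⊗ U2) x
lam = lambda M: np.sort(np.linalg.svd(M, compute_uv=False)**2)[::-1]
assert np.allclose(lam(Y), lam(X))  # lambda(x) is invariant
```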

4. So, what is the problem ?

The question one would like to answer is the following:

What does a typical {K_V} look like?

In order to address this, I will introduce random subspaces in a future post. In the next post, I will look at the special case of antisymmetric tensors.


[bcn] S. Belinschi, B. Collins and I. Nechita, Laws of large numbers for eigenvectors and eigenvalues associated to random subspaces in a tensor product, to appear in Invent. Math.

[cn2] B. Collins and I. Nechita, Random quantum channels II: Entanglement of random subspaces, Rényi entropy estimates and additivity problems, Adv. in Math. 226 (2011), 1181–1201.

[cmw] T. Cubitt, A. Montanaro and A. Winter, On the dimension of subspaces with bounded Schmidt rank, J. Math. Phys. 49, 022107 (2008).