### matrix factorization

This question was floating around: can a determinant-1 diagonal matrix be factored into a product of triangular matrices with unit diagonal, and if so, how?

At least in the 2-by-2 case, this is answered in the affirmative here (Section 8.3). Here is a recapitulation.

First, we note that the product of two lower triangular matrices is lower triangular, and the product of two upper triangular matrices is upper triangular. So if the decomposition is possible at all, we should be able to write it as an alternating product of lower and upper triangular matrices. Since the identity matrix is both lower and upper triangular, let's assume the product begins with a lower triangular matrix. That is, is there a decomposition

\[{\bf A}={\bf L}_1{\bf U}_1{\bf L}_2{\bf U}_2\cdots{\bf L}_k{\bf U}_k,\]
where \({\bf L}_i\) and \({\bf U}_i\) are lower and upper triangular matrices, respectively, and \({\bf A}\) is diagonal.

Consider the case of a 2-by-2 matrix \({\bf A}={\bf diag}(\lambda, 1/\lambda)\), \(\lambda\neq1\). In general, a unit triangular matrix moves vectors in a restricted way, confined to a subspace; for a 2-by-2 matrix this is especially simple. Under a unit lower triangular matrix, any vector is changed only in its second component: the standard basis vector \(e_1=(1,0)^T\) picks up a second component, while \(e_2=(0,1)^T\) is fixed. Analogously, under a unit upper triangular matrix, any vector is changed only in its first component: \(e_1\) is fixed, while \(e_2\) picks up a first component. A diagonal matrix, by contrast, scales the standard basis vectors.
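These observations are easy to spot-check numerically. A small Python sketch (the `matvec` helper and the parameter values 3 and 5 are mine, purely for illustration):

```python
# A small sketch of how 2-by-2 unit triangular matrices move the
# standard basis vectors (helper and parameters chosen arbitrarily).

def matvec(M, v):
    """Multiply a 2x2 matrix (list of rows) by a length-2 vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

L = [[1, 0], [3, 1]]   # unit lower triangular (subdiagonal entry 3, arbitrary)
U = [[1, 5], [0, 1]]   # unit upper triangular (superdiagonal entry 5, arbitrary)
e1, e2 = [1, 0], [0, 1]

print(matvec(L, e1))   # [1, 3]: only the second component changes
print(matvec(L, e2))   # [0, 1]: e2 is fixed
print(matvec(U, e1))   # [1, 0]: e1 is fixed
print(matvec(U, e2))   # [5, 1]: only the first component changes
```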

Put together, we see that if the decomposition is possible, then transforming \(e_1\) to \((\lambda, 0)^T\) alone already requires a product of at least three nontrivial lower/upper factors: a lower factor to give \(e_1\) a nonzero second component, an upper factor to scale the first component, and a lower factor to zero out the second component again; the same count holds for transforming \(e_2\) to \((0, 1/\lambda)^T\). Moreover, the right-most factor in such a product does nothing to one of the standard vectors while changing the other, so a single three-factor product cannot transform both standard vectors correctly: at least four nontrivial lower/upper factors are required.

Conversely, any 2-by-2 determinant-1 matrix can be written as a product of no more than four unit triangular matrices:

Let \({\bf A}=\left( \begin{array}{cc}x & y \\ z & w \end{array}\right)\). Let \({\bf L}_1=\left( \begin{array}{cc}1 & 0 \\ a & 1 \end{array}\right)\), \({\bf U}_1=\left( \begin{array}{cc}1 & b \\ 0 & 1 \end{array}\right)\), \({\bf L}_2=\left( \begin{array}{cc}1 & 0 \\ c & 1 \end{array}\right)\), and \({\bf U}_2=\left( \begin{array}{cc}1 & d \\ 0 & 1 \end{array}\right)\).

\(({\bf L}_1{\bf U}_1)({\bf L}_2{\bf U}_2)=\left( \begin{array}{cc}1 & b\\a & 1+ab\end{array}\right)\left( \begin{array}{cc}1 & d\\c & 1+cd\end{array}\right)\). Equating this product with \({\bf A}\), together with the determinant condition, gives the following constraints:

(0): \(xw-yz=1\)

(1): \(1+bc=x\)

(2): \(b+d+bcd=y\)

(3): \(a+c+abc=z\)

(4): \(ad+1+ab+cd+abcd=w\)
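Before solving, the constraint list can be spot-checked numerically. A throwaway Python sketch (the `matmul` helper and the random parameters are mine):

```python
# Numeric spot-check that the product (L1 U1)(L2 U2) has exactly the
# entries listed in constraints (1)-(4), for random a, b, c, d.
import random

def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(0)
a, b, c, d = (random.uniform(-2, 2) for _ in range(4))
L1, U1 = [[1, 0], [a, 1]], [[1, b], [0, 1]]
L2, U2 = [[1, 0], [c, 1]], [[1, d], [0, 1]]
P = matmul(matmul(L1, U1), matmul(L2, U2))

assert abs(P[0][0] - (1 + b*c)) < 1e-12                        # (1)
assert abs(P[0][1] - (b + d + b*c*d)) < 1e-12                  # (2)
assert abs(P[1][0] - (a + c + a*b*c)) < 1e-12                  # (3)
assert abs(P[1][1] - (a*d + 1 + a*b + c*d + a*b*c*d)) < 1e-12  # (4)
```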

(5): substitute (1) into (2): \(b=y-xd\)

For \(x\neq1\) (choosing any \(d\) with \(xd\neq y\)):

(6): substitute (5) into (1): \(c=\frac{x-1}{y-xd}\)

(7): substitute (0), (5), (6) into (3): \(a=\frac{yz-xzd-x+1}{x(y-xd)}=\frac{w-zd-1}{y-xd}\) (for \(x=0\) the intermediate expression is undefined, but one can check directly that the simplified form still gives a valid solution)

(8): substitute (0), (5), (6), (7) into (4): tautological; (4) is automatically satisfied for any \(d\) such that \(xd\neq y\).
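Here is the \(x\neq1\) recipe carried out numerically on a made-up determinant-1 example, \(\left( \begin{array}{cc}2 & 3 \\ 1 & 2 \end{array}\right)\) (the example matrix and the `matmul` helper are mine):

```python
# Sketch of the x != 1 recipe on the made-up matrix [[2, 3], [1, 2]]
# (determinant 1): pick d with x*d != y, then read off b, c, a via (5)-(7).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x, y, z, w = 2.0, 3.0, 1.0, 2.0   # x*w - y*z = 1
d = 0.0                           # any d with x*d != y
b = y - x*d                       # (5): b = 3
c = (x - 1) / b                   # (6): c = 1/3
a = (w - z*d - 1) / b             # (7): a = 1/3
P = matmul(matmul([[1, 0], [a, 1]], [[1, b], [0, 1]]),
           matmul([[1, 0], [c, 1]], [[1, d], [0, 1]]))
# P reproduces [[2, 3], [1, 2]] up to rounding
```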

For \(x=1\):

From (1), \(b=0\) or \(c=0\). W.l.o.g., put \(b=0\). Then (2) gives \(d=y\), and (3) and (4) become linear in \(a\) and \(c\): any \(a\) and \(c\) that sum to \(z\) work.
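A quick numeric check of the \(x=1\) case, on a made-up determinant-1 example with \(x=1\) (both the example matrix and the split of \(z\) into \(a+c\) are arbitrary choices of mine):

```python
# Sketch of the x = 1 case: b = 0, d = y, and any a, c with a + c = z.
# Made-up det-1 example: [[1, 3], [1, 4]].

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x, y, z, w = 1.0, 3.0, 1.0, 4.0   # x*w - y*z = 1
b, d = 0.0, y
a, c = 0.25, 0.75                 # any split with a + c = z = 1
P = matmul(matmul([[1, 0], [a, 1]], [[1, b], [0, 1]]),
           matmul([[1, 0], [c, 1]], [[1, d], [0, 1]]))
# P reproduces [[1, 3], [1, 4]]
```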

So it takes exactly four unit triangular factors for a diagonal 2-by-2 determinant-1 non-identity matrix.
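Specializing the recipe to \({\bf A}={\bf diag}(\lambda, 1/\lambda)\) means \(x=\lambda\), \(y=z=0\), \(w=1/\lambda\), and any \(d\neq0\) keeps \(y-xd\) nonzero. A sketch with \(\lambda=2\) (the parameter choices are mine):

```python
# The recipe applied to the diagonal case A = diag(lam, 1/lam), i.e.
# x = lam, y = z = 0, w = 1/lam; any d != 0 keeps y - x*d nonzero.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam, d = 2.0, 1.0
b = -lam * d               # (5) with y = 0: b = -2
c = (lam - 1) / b          # (6): c = -1/2
a = (1/lam - 1) / b        # (7) with z = 0: a = 1/4
P = matmul(matmul([[1, 0], [a, 1]], [[1, b], [0, 1]]),
           matmul([[1, 0], [c, 1]], [[1, d], [0, 1]]))
# P recovers diag(2, 1/2), and all four factors are nontrivial
```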

For larger matrices, whether such a decomposition is possible, and if so how to construct it, is not yet clear. Group-theoretic properties of \(SL_n(\mathbb R)\) may be of use. This will be revisited.