# Matrix triangulation calculators

Matrix triangulation using Gauss and Bareiss methods.

### This page exists thanks to the efforts of the following people:

#### Karen Luckhurst

Created: 2015-08-09 16:51:09, last updated: 2020-12-04 10:52:17

This content is licensed under the Creative Commons Attribution/Share-Alike 3.0 (Unported) License. This means you may freely redistribute or modify this content under the same license conditions and must attribute the original author by placing a hyperlink to this work on your site. Also, please do not modify any references to the original work (if any) contained in this content.

Below are two calculators for matrix triangulation.
The first uses the Gauss method, the second the Bareiss method. A description of the methods and their theory is below.

#### Matrix triangulation (Gauss method)

Digits after the decimal point: 4

The calculator outputs:

- Triangular matrix (Gauss method)
- Triangular matrix (Gauss method with maximum selection in a column)
- Triangular matrix (Gauss method with maximum choice in the entire matrix)

#### Matrix triangulation (Bareiss method)

Digits after the decimal point: 4

The calculator outputs:

- Triangular matrix (Bareiss method)
- Triangular matrix (Bareiss method with maximum selection in a column)
- Triangular matrix (Bareiss method with maximum choice in the entire matrix)

First, let's define a triangular, or row echelon, matrix.
A matrix is in row echelon form if:

1. All nonzero rows (rows with at least one nonzero element) are above any rows of all zeroes, i.e. zero rows, if any, are at the bottom of the matrix
2. The leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it

Row echelon matrix example:

```
1 0 2 5
0 3 0 0
0 0 0 4
```
The notion of a triangular matrix is narrower and applies to square matrices only: an upper triangular matrix is a square matrix in which all elements below the main diagonal are zero.

Example of an upper triangular matrix:

```
1 0 2 5
0 3 1 3
0 0 4 2
0 0 0 3
```
By the way, the determinant of a triangular matrix is calculated by simply multiplying all its diagonal elements.
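This property is easy to check with a minimal Python sketch, using the upper triangular matrix from the example above:

```python
# Determinant of a triangular matrix = product of its diagonal elements.
U = [
    [1, 0, 2, 5],
    [0, 3, 1, 3],
    [0, 0, 4, 2],
    [0, 0, 0, 3],
]

det = 1
for i in range(len(U)):
    det *= U[i][i]

print(det)  # 1 * 3 * 4 * 3 = 36
```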

You may ask, what's so interesting about these row echelon (and triangular) matrices? Well, they have an amazing property: any rectangular matrix can be reduced to row echelon form using elementary transformations.

So what are these elementary transformations, you may ask?
Elementary matrix transformations are the following operations:

1. Row switching (a row within the matrix can be switched with another row)
2. Row multiplication (each element in a row can be multiplied by a nonzero constant)
3. Row addition (a row can be replaced by the sum of that row and a multiple of another row)
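The three operations can be sketched in Python on a plain list-of-lists matrix (the function names here are just illustrative, not from the calculators):

```python
def swap_rows(m, i, j):
    """Row switching: exchange rows i and j."""
    m[i], m[j] = m[j], m[i]

def scale_row(m, i, c):
    """Row multiplication: multiply row i by a nonzero constant c."""
    assert c != 0
    m[i] = [c * x for x in m[i]]

def add_row_multiple(m, i, j, c):
    """Row addition: replace row i with row i + c * row j."""
    m[i] = [a + c * b for a, b in zip(m[i], m[j])]

m = [[2, 3, 4], [6, 3, -4]]
add_row_multiple(m, 1, 0, -3)   # eliminate the first element of row 1
print(m[1])  # [0, -6, -16]
```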

What now?
Elementary matrix transformations preserve the equivalence of matrices. And since a matrix equation is just a system of linear algebraic equations written in matrix form, elementary transformations don't change the set of solutions of the system that the matrix represents.

By triangulating the matrix of the linear system AX = B to A'X = B', i.e. applying the same transformations to the column B, you can then perform so-called "back substitution".

To explain we will use the triangular matrix above and rewrite the equation system in a more common form (I've made up column B):

$1*x_1 + 0*x_2 + 2 * x_3 + 5 * x_4 = 10 \\ 0*x_1 + 3*x_2 + 1 * x_3 + 3 * x_4 = 7 \\ 0*x_1 + 0*x_2 + 4 * x_3 + 2 * x_4 = 5 \\ 0*x_1 + 0*x_2 + 0 * x_3 + 3 * x_4 = 9$

It's clear that first we'll find $x_4$, then substitute it into the previous equation to find $x_3$, and so on, moving from the last equation to the first. That is what is called back substitution.
This row reduction algorithm is called the Gauss method, a classical method for solving systems of linear equations. It is also called Gaussian elimination, because it eliminates variables successively: elementary transformations reduce the system to row echelon (or triangular) form, after which the variables are found one by one, starting from the last.
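Back substitution for an upper triangular system can be sketched in Python (here solving the made-up system above):

```python
def back_substitution(U, b):
    """Solve U x = b for an upper triangular matrix U."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):    # from the last equation up
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[1, 0, 2, 5],
     [0, 3, 1, 3],
     [0, 0, 4, 2],
     [0, 0, 0, 3]]
b = [10, 7, 5, 9]
x = back_substitution(U, b)
print(x)  # x4 = 3.0 comes out first, then x3, x2, x1 in turn
```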

How can you zero the variable $x_1$ in the second equation?
By subtracting from it the first equation multiplied by the factor $\frac{a_{21}}{a_{11}}$.
Here is an example:

$2*x_1 + 3*x_2 + 4 * x_3 = 10 \\ 6*x_1 + 3*x_2 - 4 * x_3 = 7$

Zero $x_1$ in the second equation:

$6*x_1 + 3*x_2 - 4 * x_3 - \frac{6}{2}(2*x_1 + 3*x_2 + 4 * x_3)= 7 - \frac{6}{2}10 \\ 6*x_1 + 3*x_2 - 4 * x_3 - 6*x_1 - 9*x_2 - 12 * x_3 = 7 - 30 \\ -6*x_2 - 16 * x_3 = -23$

Now there is no $x_1$ in the second equation.
In general form, the Gauss method can be written as follows:

$\begin{matrix}For\, j = 0,..., N-2 \\ \qquad \qquad For\,i = j + 1,..., N - 1 \\ \qquad \qquad \qquad \vec{a_i} \leftarrow \vec{a_i} - \frac{a_{ij}}{a_{jj}} \vec{a_j} \end{matrix}$

where N is the number of rows,

$\vec{a_i}$ is row i,
$a_{ij}$ is the element in row i, column j.
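The double loop above translates almost directly into Python. Here is a plain sketch without pivoting, operating in place on an augmented matrix stored as a list of rows:

```python
def gauss_eliminate(a):
    """Forward elimination (no pivoting): reduce the rows of the
    (possibly augmented) matrix a to row echelon form in place."""
    n = len(a)                        # number of rows N
    for j in range(n - 1):            # for j = 0, ..., N-2
        for i in range(j + 1, n):     # for i = j+1, ..., N-1
            f = a[i][j] / a[j][j]     # the factor a_ij / a_jj
            a[i] = [x - f * y for x, y in zip(a[i], a[j])]
    return a

# The two-equation example from the text, augmented with column B:
a = [[2, 3, 4, 10],
     [6, 3, -4, 7]]
gauss_eliminate(a)
print(a[1])  # [0.0, -6.0, -16.0, -23.0], matching the worked example
```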

It seems to be a great method, but there is one catch: the division by $a_{jj}$ in the formula. First, if a diagonal element equals zero, the method won't work. Second, rounding errors accumulate during the calculation, growing the further it proceeds, so the result loses precision.
To reduce these errors, modifications of the Gauss method are used. They rely on the fact that the larger the denominator, the smaller the error. These modifications are the Gauss method with maximum selection in a column and the Gauss method with maximum choice in the entire matrix. As the names imply, before each variable elimination step the element with the maximum absolute value is found in the column (or in the entire remaining matrix), and the rows (and columns) are permuted so that it takes the place of $a_{jj}$.
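Maximum selection in a column is a small change to the plain elimination loop: before eliminating column j, find the row with the largest absolute value in that column and swap it up. A hedged sketch (not the calculator's exact code):

```python
def gauss_eliminate_pivot(a):
    """Forward elimination with maximum selection in a column
    (partial pivoting)."""
    n = len(a)
    for j in range(n - 1):
        # pick the row with the largest |a[i][j]| among i >= j
        p = max(range(j, n), key=lambda i: abs(a[i][j]))
        a[j], a[p] = a[p], a[j]           # row permutation
        for i in range(j + 1, n):
            f = a[i][j] / a[j][j]
            a[i] = [x - f * y for x, y in zip(a[i], a[j])]
    return a

# A zero pivot that breaks plain elimination is handled by the swap:
a = [[0.0, 1.0, 2.0],
     [1.0, 1.0, 3.0]]
gauss_eliminate_pivot(a)
print(a)  # [[1.0, 1.0, 3.0], [0.0, 1.0, 2.0]]
```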

However, there is a more radical modification of the Gauss method: the Bareiss method.
How can you get rid of the division? By multiplying the row $\vec{a_i}$ by $a_{jj}$ before subtracting, and then subtracting $\vec{a_j}$ multiplied by $a_{ij}$, with no division at all:
$\vec{a_i} \leftarrow a_{jj}\vec{a_i} - a_{ij} \vec{a_j}$.
It looks good, but there is a problem: the element values grow as the calculation proceeds.

Bareiss proposed dividing the expression above by $a_{j-1,j-1}$ and showed that, when the initial matrix elements are integers, the resulting numbers are integers too (the division is always exact). It is also assumed that for the zeroth row $a_{-1,-1}=1$.

By the way, the fact that the Bareiss algorithm reduces a matrix with integer elements to a triangular matrix with integer elements, i.e. without accumulating rounding errors, is quite an important feature from the standpoint of machine arithmetic.

The Bareiss algorithm can be represented as:
$\begin{matrix} a_{-1,-1}=1 \\For\, j = 0,..., N-2 \\ \qquad \qquad For\,i = j + 1,..., N - 1 \\ \qquad \qquad \qquad \vec{a_i} \leftarrow \frac{a_{jj}\vec{a_i} - a_{ij} \vec{a_j}}{a_{j-1,j-1}} \end{matrix}$
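In Python the fraction-free elimination looks like this (an integer matrix is assumed; a handy side effect is that the last diagonal element then equals the determinant, which the example uses as a check):

```python
def bareiss_eliminate(a):
    """Fraction-free (Bareiss) elimination of an integer matrix a,
    in place. All intermediate values stay integers."""
    n = len(a)
    prev = 1                          # a_{-1,-1} = 1
    for j in range(n - 1):
        for i in range(j + 1, n):
            aij = a[i][j]
            # exact integer division: Bareiss showed prev divides evenly
            a[i] = [(a[j][j] * x - aij * y) // prev
                    for x, y in zip(a[i], a[j])]
        prev = a[j][j]
    return a

a = [[2, 1, 1],
     [1, 3, 2],
     [1, 0, 0]]
bareiss_eliminate(a)
print(a)  # [[2, 1, 1], [0, 5, 3], [0, 0, -1]]; a[2][2] == det == -1
```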

This algorithm can be upgraded, similarly to Gauss, with maximum selection in a column (entire matrix) and rearrangement of the corresponding rows (rows and columns).
