Matrices Matrices are represented in &GAP; by lists of row vectors (see ) (for future changes to this policy see Chapter ). The vectors must all have the same length, and their elements must lie in a common ring. However, since checking rectangularity can be expensive, functions and methods of operations for matrices often will not give an error message for non-rectangular lists of lists; in such cases the result is undefined.

Because matrices are just a special case of lists, all operations and functions for lists are applicable to matrices also (see chapter ). This especially includes accessing elements of a matrix (see ), changing elements of a matrix (see ), and comparing matrices (see ).

Note that, since a matrix is a list of lists, the behaviour of ShallowCopy for matrices is just a special case of ShallowCopy for lists (see ); called with an immutable matrix mat, ShallowCopy returns a mutable matrix whose rows are identical to the rows of mat. In particular the rows are still immutable. To get a matrix whose rows are mutable, one can use List( mat, ShallowCopy ).

InfoMatrix (Info Class) <#Include Label="InfoMatrix">
Categories of Matrices <#Include Label="IsMatrix"> <#Include Label="IsOrdinaryMatrix"> <#Include Label="IsLieMatrix">
Operators for Matrices The rules for arithmetic operations involving matrices are in fact special cases of those for the arithmetic of lists, given in Section  and the following sections; here we reiterate that definition, in the language of vectors and matrices.

Note that the additive behaviour sketched below is defined only for lists in the category IsGeneralizedRowVector, and the multiplicative behaviour is defined only for lists in the category IsMultiplicativeGeneralizedRowVector (see ).

addition mat1 + mat2

returns the sum of the two matrices mat1 and mat2. Probably the most usual situation is that mat1 and mat2 have the same dimensions and are defined over a common field; in this case the sum is a new matrix over the same field where each entry is the sum of the corresponding entries of the matrices.

In more general situations the sum of two matrices need not be a matrix; for example, adding an integer matrix mat1 and a matrix mat2 over a finite field yields the table of pointwise sums, which will be a mixture of finite field elements and integers if mat1 has bigger dimensions than mat2.
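An illustrative session for the usual case, adding two integer matrices of equal dimensions entrywise:

```gap
gap> [ [ 1, 2 ], [ 3, 4 ] ] + [ [ 5, 6 ], [ 7, 8 ] ];
[ [ 6, 8 ], [ 10, 12 ] ]
```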

addition scalar + mat

addition mat + scalar

returns the sum of the scalar scalar and the matrix mat. Probably the most usual situation is that the entries of mat lie in a common field with scalar; in this case the sum is a new matrix over the same field where each entry is the sum of the scalar and the corresponding entry of the matrix.

More general situations are for example the sum of an integer scalar and a matrix over a finite field, or the sum of a finite field element and an integer matrix.
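An illustrative session, adding an integer scalar to each entry of an integer matrix:

```gap
gap> 10 + [ [ 1, 2 ], [ 3, 4 ] ];
[ [ 11, 12 ], [ 13, 14 ] ]
```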

subtraction mat1 - mat2

subtraction scalar - mat

subtraction mat - scalar

Subtracting a matrix or scalar is defined as adding its additive inverse, so the statements for the addition hold likewise.

multiplication scalar * mat

multiplication mat * scalar

returns the product of the scalar scalar and the matrix mat. Probably the most usual situation is that the elements of mat lie in a common field with scalar; in this case the product is a new matrix over the same field where each entry is the product of the scalar and the corresponding entry of the matrix.

More general situations are for example the product of an integer scalar and a matrix over a finite field, or the product of a finite field element and an integer matrix.
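An illustrative session, multiplying each entry of an integer matrix by a scalar:

```gap
gap> 2 * [ [ 1, 2 ], [ 3, 4 ] ];
[ [ 2, 4 ], [ 6, 8 ] ]
```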

multiplication vec * mat

returns the product of the row vector vec and the matrix mat. Probably the most usual situation is that vec and mat have the same lengths and are defined over a common field, and that all rows of mat have the same length m, say; in this case the product is a new row vector of length m over the same field which is the sum of the scalar multiples of the rows of mat with the corresponding entries of vec.

More general situations are for example the product of an integer vector and a matrix over a finite field, or the product of a vector over a finite field and an integer matrix.
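An illustrative session: the product [ 2, 3 ] * mat is the linear combination 2 * mat[1] + 3 * mat[2] of the rows of mat:

```gap
gap> [ 2, 3 ] * [ [ 1, 0, 1 ], [ 0, 1, 1 ] ];
[ 2, 3, 5 ]
```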

multiplication mat * vec

returns the product of the matrix mat and the row vector vec. (This is the standard product of a matrix with a column vector.) Probably the most usual situation is that the length of vec and of all rows of mat are equal, and that the elements of mat and vec lie in a common field; in this case the product is a new row vector of the same length as mat and over the same field which is the sum of the scalar multiples of the columns of mat with the corresponding entries of vec.

More general situations are for example the product of an integer matrix and a vector over a finite field, or the product of a matrix over a finite field and an integer vector.
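An illustrative session: entry i of the result is the product of row i of the matrix with the vector:

```gap
gap> [ [ 1, 0, 1 ], [ 0, 1, 1 ] ] * [ 2, 3, 4 ];
[ 6, 7 ]
```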

multiplication mat1 * mat2

This form evaluates to the (Cauchy) product of the two matrices mat1 and mat2. Probably the most usual situation is that the number of columns of mat1 equals the number of rows of mat2, and that the elements of mat1 and mat2 lie in a common field; if mat1 is a matrix with m rows and n columns, say, and mat2 is a matrix with n rows and o columns, the result is a new matrix with m rows and o columns. The element in row i at position j of the product is the sum of mat1[i][l] * mat2[l][j], with l running from 1 to n.
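An illustrative session; right multiplication by this permutation matrix swaps the two columns:

```gap
gap> [ [ 1, 2 ], [ 3, 4 ] ] * [ [ 0, 1 ], [ 1, 0 ] ];
[ [ 2, 1 ], [ 4, 3 ] ]
```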

inverse Inverse( mat )

returns the inverse of the matrix mat, which must be a square matrix. If mat is not invertible then fail is returned.
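An illustrative session over the rationals (the second matrix is singular):

```gap
gap> Inverse( [ [ 1, 2 ], [ 3, 4 ] ] );
[ [ -2, 1 ], [ 3/2, -1/2 ] ]
gap> Inverse( [ [ 1, 2 ], [ 2, 4 ] ] );
fail
```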

quotient mat1 / mat2

quotient scalar / mat

quotient mat / scalar

quotient vec / mat

In general, left / right is defined as left * right^-1. Thus in the above forms the right operand must always be invertible.
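An illustrative session for the scalar and matrix forms of the quotient:

```gap
gap> [ [ 1, 2 ], [ 3, 4 ] ] / 2;
[ [ 1/2, 1 ], [ 3/2, 2 ] ]
gap> [ [ 4, 0 ], [ 0, 4 ] ] / [ [ 2, 0 ], [ 0, 1 ] ];
[ [ 2, 0 ], [ 0, 4 ] ]
```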

power mat ^ int

conjugate mat1 ^ mat2

image vec ^ mat

Powering a square matrix mat by an integer int yields the int-th power of mat; if int is negative then mat must be invertible; if int is 0 then the result is the identity matrix One( mat ), even if mat is not invertible.

Powering a square matrix mat1 by an invertible square matrix mat2 of the same dimensions yields the conjugate of mat1 by mat2, i.e., the matrix mat2^-1 * mat1 * mat2.

Powering a row vector vec by a matrix mat is in every respect equivalent to vec * mat. This operation reflects the fact that matrices act naturally on row vectors by multiplication from the right, and that the powering operator is &GAP;'s standard for group actions.
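An illustrative session for the three forms of powering:

```gap
gap> m := [ [ 1, 1 ], [ 0, 1 ] ];;
gap> m^3;
[ [ 1, 3 ], [ 0, 1 ] ]
gap> m^0;
[ [ 1, 0 ], [ 0, 1 ] ]
gap> m ^ [ [ 0, 1 ], [ 1, 0 ] ];  # conjugation
[ [ 1, 0 ], [ 1, 1 ] ]
gap> [ 1, 2 ] ^ m;  # same as [ 1, 2 ] * m
[ 1, 3 ]
```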

matrices Comm( mat1, mat2 )

returns the commutator of the square invertible matrices mat1 and mat2 of the same dimensions and over a common field, which is the matrix mat1^-1 * mat2^-1 * mat1 * mat2.
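An illustrative session with two integer matrices of determinant 1 (so their inverses are again integer matrices):

```gap
gap> a := [ [ 1, 1 ], [ 0, 1 ] ];;  b := [ [ 1, 0 ], [ 1, 1 ] ];;
gap> Comm( a, b );
[ [ 3, 1 ], [ -1, 0 ] ]
```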

The following cases are still special cases of the general list arithmetic defined in .

addition scalar + matlist

addition matlist + scalar

subtraction scalar - matlist

subtraction matlist - scalar

multiplication scalar * matlist

multiplication matlist * scalar

quotient matlist / scalar

A scalar scalar may also be added, subtracted, multiplied with, or divided into a list matlist of matrices. The result is a new list of matrices where each matrix is the result of performing the operation with the corresponding matrix in matlist.
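An illustrative session, multiplying each matrix in a list by the scalar 2:

```gap
gap> 2 * [ [ [ 1, 0 ], [ 0, 1 ] ], [ [ 1, 2 ], [ 3, 4 ] ] ];
[ [ [ 2, 0 ], [ 0, 2 ] ], [ [ 2, 4 ], [ 6, 8 ] ] ]
```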

multiplication mat * matlist

multiplication matlist * mat

A matrix mat may also be multiplied with a list matlist of matrices. The result is a new list of matrices, where each entry is the product of mat and the corresponding entry in matlist.

quotient matlist / mat

Dividing a list matlist of matrices by an invertible matrix mat evaluates to matlist * mat^-1.

multiplication vec * matlist

returns the product of the vector vec and the list matlist of matrices. The lengths l of vec and matlist must be equal. All matrices in matlist must have the same dimensions. The elements of vec and the elements of the matrices in matlist must lie in a common ring. The product is the sum over vec[i] * matlist[i] with i running from 1 to l.
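An illustrative session: the result is 1 * matlist[1] + 2 * matlist[2]:

```gap
gap> [ 1, 2 ] * [ [ [ 1, 0 ], [ 0, 1 ] ], [ [ 0, 1 ], [ 1, 0 ] ] ];
[ [ 1, 2 ], [ 2, 1 ] ]
```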

For the mutability of results of arithmetic operations, see .

Properties and Attributes of Matrices <#Include Label="DimensionsMat"> <#Include Label="DefaultFieldOfMatrix"> <#Include Label="TraceMat"> <#Include Label="DeterminantMat"> <#Include Label="DeterminantMatDestructive"> <#Include Label="DeterminantMatDivFree"> <#Include Label="MatObj_IsEmptyMatrix"> <#Include Label="IsMonomialMatrix"> <#Include Label="IsDiagonalMat"> <#Include Label="IsUpperTriangularMat"> <#Include Label="IsLowerTriangularMat">
Matrix Constructions <#Include Label="IdentityMat"> <#Include Label="NullMat"> <#Include Label="EmptyMatrix"> <#Include Label="DiagonalMat"> <#Include Label="PermutationMat"> <#Include Label="TransposedMatImmutable"> <#Include Label="TransposedMatDestructive"> <#Include Label="KroneckerProduct"> <#Include Label="ReflectionMat"> <#Include Label="PrintArray">
Random Matrices <#Include Label="RandomMat"> <#Include Label="RandomInvertibleMat"> <#Include Label="RandomUnimodularMat">
Matrices Representing Linear Equations and the Gaussian Algorithm Gaussian algorithm <#Include Label="RankMat"> <#Include Label="TriangulizedMat"> <#Include Label="TriangulizeMat"> <#Include Label="NullspaceMat"> <#Include Label="NullspaceMatDestructive"> <#Include Label="SolutionMat"> <#Include Label="SolutionMatDestructive"> <#Include Label="BaseFixedSpace">
Eigenvectors and eigenvalues <#Include Label="GeneralisedEigenvalues"> <#Include Label="GeneralisedEigenspaces"> <#Include Label="Eigenvalues"> <#Include Label="Eigenspaces"> <#Include Label="Eigenvectors">
Elementary Divisors See also chapter . <#Include Label="ElementaryDivisorsMat"> <#Include Label="ElementaryDivisorsTransformationsMat"> <#Include Label="DiagonalizeMat">
Echelonized Matrices <#Include Label="SemiEchelonMat"> <#Include Label="SemiEchelonMatDestructive"> <#Include Label="SemiEchelonMatTransformation"> <#Include Label="SemiEchelonMats"> <#Include Label="SemiEchelonMatsDestructive">
Matrices as Basis of a Row Space See also chapter <#Include Label="BaseMat"> <#Include Label="BaseMatDestructive"> <#Include Label="BaseOrthogonalSpaceMat"> <#Include Label="SumIntersectionMat"> <#Include Label="BaseSteinitzVectors">
Triangular Matrices <#Include Label="DiagonalOfMat"> <#Include Label="UpperSubdiagonal"> <#Include Label="DepthOfUpperTriangularMatrix">
Matrices as Linear Mappings <#Include Label="CharacteristicPolynomial"> <#Include Label="RationalCanonicalFormTransform"> <#Include Label="JordanDecomposition"> <#Include Label="BlownUpMat"> <#Include Label="BlownUpVector"> <#Include Label="CompanionMat">
Matrices over Finite Fields Just as for row vectors (see section ), &GAP; has a special representation for matrices over small finite fields.

To be eligible to be represented in this way, each row of the matrix must be representable as a compressed row vector of the same length over the same finite field.

gap> v := Z(2)*[1,0,0,1,1];
[ Z(2)^0, 0*Z(2), 0*Z(2), Z(2)^0, Z(2)^0 ]
gap> ConvertToVectorRep(v,2);
2
gap> v;
<a GF2 vector of length 5>
gap> m := [v];; ConvertToMatrixRep(m,GF(2));; m;
<a 1x5 matrix over GF2>
gap> m := [v,v];; ConvertToMatrixRep(m,GF(2));; m;
<a 2x5 matrix over GF2>
gap> m := [v,v,v];; ConvertToMatrixRep(m,GF(2));; m;
<a 3x5 matrix over GF2>
gap> v := Z(3)*[1..8];
[ Z(3), Z(3)^0, 0*Z(3), Z(3), Z(3)^0, 0*Z(3), Z(3), Z(3)^0 ]
gap> ConvertToVectorRep(v);
3
gap> m := [v];; ConvertToMatrixRep(m,GF(3));; m;
[ [ Z(3), Z(3)^0, 0*Z(3), Z(3), Z(3)^0, 0*Z(3), Z(3), Z(3)^0 ] ]
gap> RepresentationsOfObject(m);
[ "IsPositionalObjectRep", "Is8BitMatrixRep" ]
gap> m := [v,v,v,v];; ConvertToMatrixRep(m,GF(3));; m;
< mutable compressed matrix 4x8 over GF(3) >

All compressed matrices over GF(2) are viewed as <a nxm matrix over GF2>, while over fields GF(q) for q between 3 and 256, matrices with 25 or more entries are viewed in this way, and smaller ones as lists of lists.

Matrices can be converted to this special representation via the following functions.

Note that the main advantage of this special representation of matrices is in low dimensions, where various overheads can be reduced. In higher dimensions, a list of compressed vectors will be almost as fast. Note also that list access and assignment will be somewhat slower for compressed matrices than for plain lists.

In order to form a row of a compressed matrix a vector must satisfy certain restrictions. Specifically, it cannot change its length or the field over which it is compressed. The main consequences of this are: that only elements of the appropriate field can be assigned to entries of the vector, and only to positions between 1 and the original length; and that the vector cannot be shared between two matrices compressed over different fields.

This is enforced by the filter IsLockedRepresentationVector. When a vector becomes part of a compressed matrix, this filter is set for it. Assignment, Unbind, and ConvertToVectorRep are all prevented from altering a vector with this filter.

gap> v := [Z(2),Z(2)];; ConvertToVectorRep(v,GF(2));; v;
<a GF2 vector of length 2>
gap> m := [v,v];
[ <a GF2 vector of length 2>, <a GF2 vector of length 2> ]
gap> ConvertToMatrixRep(m,GF(2));
2
gap> m2 := [m[1], [Z(4),Z(4)]]; # now try and mix in some GF(4)
[ <a GF2 vector of length 2>, [ Z(2^2), Z(2^2) ] ]
gap> ConvertToMatrixRep(m2); # but m2[1] is locked
#I  ConvertToVectorRep: locked vector not converted to different field
fail
gap> m2 := [ShallowCopy(m[1]), [Z(4),Z(4)]]; # a fresh copy of row 1
[ <a GF2 vector of length 2>, [ Z(2^2), Z(2^2) ] ]
gap> ConvertToMatrixRep(m2); # now it works
4
gap> m2;
[ [ Z(2)^0, Z(2)^0 ], [ Z(2^2), Z(2^2) ] ]
gap> RepresentationsOfObject(m2);
[ "IsPositionalObjectRep", "Is8BitMatrixRep" ]

Arithmetic operations (see  and the following sections) preserve the compression status of matrices in the sense that if all arguments are compressed matrices written over the same field and the result is a matrix then also the result is a compressed matrix written over this field.

There are also operations that are only available for matrices written over finite fields. <#Include Label="ImmutableMatrix"> <#Include Label="ConvertToMatrixRep"> <#Include Label="ProjectiveOrder"> <#Include Label="SimultaneousEigenvalues">

Inverse and Nullspace of an Integer Matrix Modulo an Ideal The following operations deal with matrices over a ring, but only care about the residues of their entries modulo some ring element. In the case of the integers and a prime number p, say, this is effectively computation in a matrix over the prime field in characteristic p. <#Include Label="InverseMatMod"> <#Include Label="BasisNullspaceModN"> <#Include Label="NullspaceModN">
Special Multiplication Algorithms for Matrices over GF(2) When multiplying two compressed matrices M and N over GF(2) of dimensions a \times b and b \times c, say, where a, b and c are all greater than or equal to 128, &GAP; by default uses a more sophisticated matrix multiplication algorithm, in which linear combinations of groups of 8 rows of M are remembered and re-used in constructing various rows of the product. This is called level 8 grease. To optimise memory access patterns, these combinations are stored for (b+255)/256 sets of 8 rows at once. This number is called the blocking level.

These levels of grease and blocking are found experimentally to give good performance across a range of processors and matrix sizes, but other levels may do even better in some cases. You can control the levels exactly using the functions below.

We plan to include greased blocked matrix multiplication for other finite fields, and greased blocked algorithms for inversion and other matrix operations in a future release.

This function performs the standard unblocked and ungreased matrix multiplication for matrices of any size.

This function computes the product of m1 and m2, which must be compressed matrices over GF(2) of compatible dimensions, using level g grease and level b blocking.

Block Matrices <#Include Label="[1]{matblock}"> <#Include Label="AsBlockMatrix"> <#Include Label="BlockMatrix"> <#Include Label="MatrixByBlockMatrix">
Linear Programming <#Include Label="SimplexMethod">