Linear Algebra
Standard Functions
Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse factorizations call functions from SuiteSparse.

*(A, B)

Matrix multiplication.

\(A, B)

Matrix division using a polyalgorithm. For input matrices `A` and `B`, the result `X` is such that `A*X == B` when `A` is square. The solver that is used depends upon the structure of `A`. If `A` is upper or lower triangular (or diagonal), no factorization of `A` is required and the system is solved with either forward or backward substitution. For non-triangular square matrices, an LU factorization is used.

For rectangular `A` the result is the minimum-norm least squares solution computed by a pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.

When `A` is sparse, a similar polyalgorithm is used. For indefinite matrices, the `LDLt` factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.
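A small sketch of both cases (the matrices and right-hand sides here are made up for the example):

```julia
# Square A: solved via LU factorization.
A = [2.0 1.0; 1.0 3.0]
b = [3.0, 5.0]
x = A \ b              # x ≈ [0.8, 1.4], so that A*x == b up to roundoff

# Rectangular A: minimum-norm least squares via pivoted QR.
C = [1.0 0.0; 0.0 1.0; 1.0 1.0]
d = [1.0, 2.0, 3.0]
y = C \ d              # least squares solution minimizing norm(C*y - d)
```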

dot(x, y)

⋅(x, y)

Compute the dot product. For complex vectors, the first vector is conjugated.

vecdot(x, y)

For any iterable containers `x` and `y` (including arrays of any dimension) of numbers (or any element type for which `dot` is defined), compute the Euclidean dot product (the sum of `dot(x[i],y[i])`) as if they were vectors.

cross(x, y)

×(x, y)

Compute the cross product of two 3-vectors.
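A quick illustration of both products on made-up 3-vectors:

```julia
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
d = dot(x, y)    # 1*4 + 2*5 + 3*6 = 32.0
c = cross(x, y)  # [2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4] = [-3.0, 6.0, -3.0]
```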

factorize(A)

Compute a convenient factorization of `A`, based upon the type of the input matrix. `factorize` checks `A` to see if it is symmetric/triangular/etc. if `A` is passed as a generic matrix. `factorize` checks every element of `A` to verify/rule out each property. It will short-circuit as soon as it can rule out symmetry/triangular structure. The return value can be reused for efficient solving of multiple systems. For example: `A=factorize(A); x=A\b; y=A\C`.

| Properties of `A` | Type of factorization |
| --- | --- |
| Positive-definite | Cholesky (see `cholfact()`) |
| Dense symmetric/Hermitian | Bunch-Kaufman (see `bkfact()`) |
| Sparse symmetric/Hermitian | LDLt (see `ldltfact()`) |
| Triangular | Triangular |
| Diagonal | Diagonal |
| Bidiagonal | Bidiagonal |
| Tridiagonal | LU (see `lufact()`) |
| Symmetric real tridiagonal | LDLt (see `ldltfact()`) |
| General square | LU (see `lufact()`) |
| General non-square | QR (see `qrfact()`) |

If `factorize` is called on a Hermitian positive-definite matrix, for instance, then `factorize` will return a Cholesky factorization.

Example:

A = diagm(rand(5)) + diagm(rand(4),1)  # A is really bidiagonal
factorize(A)  # factorize will check to see that A is already factorized

This returns a `5×5 Bidiagonal{Float64}`, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for `Bidiagonal` types.

full(F)

Reconstruct the matrix `A` from the factorization `F=factorize(A)`.

Diagonal(A::AbstractMatrix)

Constructs a matrix from the diagonal of `A`.

Diagonal(V::AbstractVector)

Constructs a matrix with `V` as its diagonal.

Bidiagonal(dv, ev, isupper::Bool)

Constructs an upper (`isupper=true`) or lower (`isupper=false`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `full()`. The length of `ev` must be one less than the length of `dv`.

Example

dv = rand(5)
ev = rand(4)
Bu = Bidiagonal(dv, ev, true)  # ev is on the first superdiagonal
Bl = Bidiagonal(dv, ev, false) # ev is on the first subdiagonal

Bidiagonal(dv, ev, uplo::Char)

Constructs an upper (`uplo='U'`) or lower (`uplo='L'`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `full()`. The length of `ev` must be one less than the length of `dv`.

Example

dv = rand(5)
ev = rand(4)
Bu = Bidiagonal(dv, ev, 'U')  # ev is on the first superdiagonal
Bl = Bidiagonal(dv, ev, 'L')  # ev is on the first subdiagonal

Bidiagonal(A, isupper::Bool)

Construct a `Bidiagonal` matrix from the main diagonal of `A` and its first super-diagonal (if `isupper=true`) or sub-diagonal (if `isupper=false`).

Example

A = rand(5,5)
Bu = Bidiagonal(A, true)  # contains the main diagonal and first superdiagonal of A
Bl = Bidiagonal(A, false) # contains the main diagonal and first subdiagonal of A

SymTridiagonal(dv, ev)

Construct a symmetric tridiagonal matrix from the diagonal and first sub/super-diagonal, respectively. The result is of type `SymTridiagonal` and provides efficient specialized eigensolvers, but may be converted into a regular matrix with `full()`.

Tridiagonal(dl, d, du)

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type `Tridiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `full()`. The lengths of `dl` and `du` must be one less than the length of `d`.
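A sketch with illustrative band vectors, also showing the symmetric variant described above:

```julia
dl = [1.0, 2.0]        # first subdiagonal (length n-1)
d  = [4.0, 5.0, 6.0]   # diagonal (length n)
du = [7.0, 8.0]        # first superdiagonal (length n-1)
T = Tridiagonal(dl, d, du)
M = full(T)            # dense 3×3 matrix with the three bands filled in
S = SymTridiagonal(d, dl)  # symmetric variant: a single off-diagonal vector
```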

Symmetric(A, uplo=:U)

Construct a `Symmetric` matrix from the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of `A`.

Example

A = randn(10,10)
Supper = Symmetric(A)
Slower = Symmetric(A,:L)
eigfact(Supper)

`eigfact` will use a method specialized for matrices known to be symmetric. Note that `Supper` will not be equal to `Slower` unless `A` is itself symmetric (e.g. if `A == A.'`).

Hermitian(A, uplo=:U)

Construct a `Hermitian` matrix from the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of `A`.

Example

A = randn(10,10)
Hupper = Hermitian(A)
Hlower = Hermitian(A,:L)
eigfact(Hupper)

`eigfact` will use a method specialized for matrices known to be Hermitian. Note that `Hupper` will not be equal to `Hlower` unless `A` is itself Hermitian (e.g. if `A == A'`).

lu(A) → L, U, p

Compute the LU factorization of `A`, such that `A[p,:] = L*U`.
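A minimal sketch with an illustrative matrix, verifying the stated identity:

```julia
A = [4.0 3.0; 6.0 3.0]
L, U, p = lu(A)
# The permuted rows of A equal the product of the triangular factors.
@assert isapprox(L*U, A[p, :])
```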

lufact(A[, pivot=Val{true}]) → F::LU

Compute the LU factorization of `A`.

In most cases, if `A` is a subtype `S` of `AbstractMatrix{T}` with an element type `T` supporting `+`, `-`, `*` and `/`, the return type is `LU{T,S{T}}`. If pivoting is chosen (default) the element type should also support `abs` and `<`.

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description |
| --- | --- |
| `F[:L]` | `L` (lower triangular) part of `LU` |
| `F[:U]` | `U` (upper triangular) part of `LU` |
| `F[:p]` | (right) permutation `Vector` |
| `F[:P]` | (right) permutation `Matrix` |

The relationship between `F` and `A` is `F[:L]*F[:U] == A[F[:p], :]`.

`F` further supports the following functions:

| Supported function | `LU` | `LU{T,Tridiagonal{T}}` |
| --- | --- | --- |
| `/()` | ✓ | |
| `\()` | ✓ | ✓ |
| `cond()` | ✓ | |
| `det()` | ✓ | ✓ |
| `logdet()` | ✓ | ✓ |
| `logabsdet()` | ✓ | ✓ |
| `size()` | ✓ | ✓ |
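A sketch with an illustrative matrix, showing the indexing and reuse described above:

```julia
A = [4.0 3.0; 6.0 3.0]
F = lufact(A)
# The component identity from the table above.
@assert isapprox(F[:L]*F[:U], A[F[:p], :])
x = F \ [1.0, 2.0]   # reuse the factorization to solve A*x == b
dA = det(F)          # determinant computed from the LU factors
```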

lufact(A::SparseMatrixCSC) → F::UmfpackLU

Compute the LU factorization of a sparse matrix `A`.

For sparse `A` with real or complex element type, the return type of `F` is `UmfpackLU{Tv, Ti}`, with `Tv` = `Float64` or `Complex128` respectively, and `Ti` an integer type (`Int32` or `Int64`).

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description |
| --- | --- |
| `F[:L]` | `L` (lower triangular) part of `LU` |
| `F[:U]` | `U` (upper triangular) part of `LU` |
| `F[:p]` | right permutation `Vector` |
| `F[:q]` | left permutation `Vector` |
| `F[:Rs]` | `Vector` of scaling factors |
| `F[:(:)]` | `(L,U,p,q,Rs)` components |

The relation between `F` and `A` is `F[:L]*F[:U] == (F[:Rs] .* A)[F[:p], F[:q]]`.

`F` further supports the functions `\`, `cond`, and `det`.

**Implementation note**

`lufact(A::SparseMatrixCSC)` uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with `Float64` or `Complex128` elements, `lufact` converts `A` into a copy that is of type `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

lufact!(A) → LU

`lufact!` is the same as `lufact()`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

chol(A) → U

Compute the Cholesky factorization of a positive definite matrix `A` and return the `UpperTriangular` matrix `U` such that `A = U'U`.
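A minimal sketch on an illustrative positive definite matrix:

```julia
A = [4.0 2.0; 2.0 3.0]   # symmetric positive definite
U = chol(A)
# U is upper triangular and reproduces A.
@assert isapprox(U'*U, A)
```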

chol(x::Number) → y

Compute the square root of a nonnegative number `x`.

cholfact(A, [uplo::Symbol, ]Val{false}) → Cholesky

Compute the Cholesky factorization of a dense symmetric positive definite matrix `A` and return a `Cholesky` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `StridedMatrix` or a perfectly symmetric or Hermitian `StridedMatrix`. In the latter case, the optional argument `uplo` may be `:L` for using the lower part or `:U` for the upper part of `A`. The default is to use `:U`. The triangular Cholesky factor can be obtained from the factorization `F` with `F[:L]` and `F[:U]`. The following functions are available for `Cholesky` objects: `size`, `\`, `inv`, `det`. A `PosDefException` exception is thrown in case the matrix is not positive definite.
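A sketch with an illustrative matrix, showing factor extraction and solving:

```julia
A = [4.0 2.0; 2.0 3.0]
F = cholfact(A)
# The upper factor reproduces A.
@assert isapprox(F[:U]'*F[:U], A)
x = F \ [1.0, 2.0]      # solve A*x == b using the factorization
```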

cholfact(A, [uplo::Symbol, ]Val{true}; tol = 0.0) → CholeskyPivoted

Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix `A` and return a `CholeskyPivoted` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `StridedMatrix` or a perfectly symmetric or Hermitian `StridedMatrix`. In the latter case, the optional argument `uplo` may be `:L` for using the lower part or `:U` for the upper part of `A`. The default is to use `:U`. The triangular Cholesky factor can be obtained from the factorization `F` with `F[:L]` and `F[:U]`. The following functions are available for `CholeskyPivoted` objects: `size`, `\`, `inv`, `det`, and `rank`. The argument `tol` determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.

cholfact(A; shift = 0.0, perm = Int[]) → CHOLMOD.Factor

Compute the Cholesky factorization of a sparse positive definite matrix `A`. `A` must be a `SparseMatrixCSC`, `Symmetric{SparseMatrixCSC}`, or `Hermitian{SparseMatrixCSC}`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. `F = cholfact(A)` is most frequently used to solve systems of equations with `F\b`, but also the methods `diag`, `det`, `logdet` are defined for `F`. You can also extract individual factors from `F`, using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it's typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`).

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the `Base.SparseArrays.CHOLMOD` module.

cholfact!(F::Factor, A; shift = 0.0) → CHOLMOD.Factor

Compute the Cholesky (\(LL'\)) factorization of `A`, reusing the symbolic factorization `F`. `A` must be a `SparseMatrixCSC`, `Symmetric{SparseMatrixCSC}`, or `Hermitian{SparseMatrixCSC}`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

cholfact!(A, [uplo::Symbol, ]Val{false}) → Cholesky

The same as `cholfact`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

cholfact!(A, [uplo::Symbol, ]Val{true}; tol = 0.0) → CholeskyPivoted

The same as `cholfact`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

lowrankupdate(C::Cholesky, v::StridedVector) → CC::Cholesky

Update a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] + v*v')` but the computation of `CC` only uses `O(n^2)` operations.
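A sketch on an illustrative matrix, checking that the updated factors reproduce `A + v*v'`:

```julia
A = [4.0 2.0; 2.0 3.0]
C = cholfact(A)
v = [1.0, 1.0]
CC = lowrankupdate(C, v)      # factorization of A + v*v' in O(n^2) work
@assert isapprox(CC[:U]'*CC[:U], A + v*v')
```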

lowrankdowndate(C::Cholesky, v::StridedVector) → CC::Cholesky

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] - v*v')` but the computation of `CC` only uses `O(n^2)` operations.

lowrankupdate!(C::Cholesky, v::StridedVector) → CC::Cholesky

Update a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] + v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

lowrankdowndate!(C::Cholesky, v::StridedVector) → CC::Cholesky

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] - v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

ldltfact(::SymTridiagonal) → LDLt

Compute an `LDLt` factorization of a real symmetric tridiagonal matrix such that `A = L*Diagonal(d)*L'` where `L` is a unit lower triangular matrix and `d` is a vector. The main use of an `LDLt` factorization `F = ldltfact(A)` is to solve the linear system of equations `Ax = b` with `F\b`.
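A minimal sketch on an illustrative symmetric tridiagonal system:

```julia
S = SymTridiagonal([3.0, 4.0, 5.0], [1.0, 2.0])
F = ldltfact(S)
b = [6.0, 7.0, 8.0]
x = F \ b              # solve S*x == b via the LDLt factors
@assert isapprox(S*x, b)
```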

ldltfact(A; shift = 0.0, perm=Int[]) → CHOLMOD.Factor

Compute the \(LDL'\) factorization of a sparse matrix `A`. `A` must be a `SparseMatrixCSC`, `Symmetric{SparseMatrixCSC}`, or `Hermitian{SparseMatrixCSC}`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. `F = ldltfact(A)` is most frequently used to solve systems of equations `A*x = b` with `F\b`. The returned factorization object `F` also supports the methods `diag`, `det`, and `logdet`. You can extract individual factors from `F` using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*D*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`). The complete list of supported factors is `:L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP`.

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the `Base.SparseArrays.CHOLMOD` module.

ldltfact!(F::Factor, A; shift = 0.0) → CHOLMOD.Factor

Compute the \(LDL'\) factorization of `A`, reusing the symbolic factorization `F`. `A` must be a `SparseMatrixCSC`, `Symmetric{SparseMatrixCSC}`, or `Hermitian{SparseMatrixCSC}`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

ldltfact!(::SymTridiagonal) → LDLt

Same as `ldltfact`, but saves space by overwriting the input `A`, instead of creating a copy.

qr(v::AbstractVector)

Computes the polar decomposition of a vector.

Input:

- `v::AbstractVector` - vector to normalize

Outputs:

- `w` - A unit vector in the direction of `v`
- `r` - The norm of `v`

See also: `normalize`, `normalize!`, `LinAlg.qr!`


LinAlg.qr!(v::AbstractVector)

Computes the polar decomposition of a vector. Instead of returning a new vector as `qr(v::AbstractVector)`, this function mutates the input vector `v` in place.

Input:

- `v::AbstractVector` - vector to normalize

Outputs:

- `w` - A unit vector in the direction of `v` (this is a mutation of `v`)
- `r` - The norm of `v`

See also: `normalize`, `normalize!`, `qr`


qr(A[, pivot=Val{false}][;thin=true]) → Q, R, [p]

Compute the (pivoted) QR factorization of `A` such that either `A = Q*R` or `A[:,p] = Q*R`. Also see `qrfact`. The default is to compute a thin factorization. Note that `R` is not extended with zeros when the full `Q` is requested.
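A sketch on an illustrative rectangular matrix, showing both the unpivoted and pivoted forms:

```julia
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
Q, R = qr(A)                  # thin by default: Q is 3×2, R is 2×2
@assert isapprox(Q*R, A)
Qp, Rp, p = qr(A, Val{true})  # pivoted: columns of A are permuted by p
@assert isapprox(Qp*Rp, A[:, p])
```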

qrfact(A[, pivot=Val{false}]) → F

Computes the QR factorization of `A`. The return type of `F` depends on the element type of `A` and whether pivoting is specified (with `pivot==Val{true}`).

| Return type | `eltype(A)` | `pivot` | Relationship between `F` and `A` |
| --- | --- | --- | --- |
| `QR` | not `BlasFloat` | either | `A==F[:Q]*F[:R]` |
| `QRCompactWY` | `BlasFloat` | `Val{false}` | `A==F[:Q]*F[:R]` |
| `QRPivoted` | `BlasFloat` | `Val{true}` | `A[:,F[:p]]==F[:Q]*F[:R]` |

`BlasFloat` refers to any of: `Float32`, `Float64`, `Complex64` or `Complex128`.

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description | `QR` | `QRCompactWY` | `QRPivoted` |
| --- | --- | --- | --- | --- |
| `F[:Q]` | `Q` (orthogonal/unitary) part of `QR` | ✓ (`QRPackedQ`) | ✓ (`QRCompactWYQ`) | ✓ (`QRPackedQ`) |
| `F[:R]` | `R` (upper right triangular) part of `QR` | ✓ | ✓ | ✓ |
| `F[:p]` | pivot `Vector` | | | ✓ |
| `F[:P]` | (pivot) permutation `Matrix` | | | ✓ |

The following functions are available for the `QR` objects: `size`, `\`. When `A` is rectangular, `\` will return a least squares solution, and if the solution is not unique, the one with smallest norm is returned.

Multiplication with respect to either thin or full `Q` is allowed, i.e. both `F[:Q]*F[:R]` and `F[:Q]*A` are supported. A `Q` matrix can be converted into a regular matrix with `full()`, which has a named argument `thin`.

Note

`qrfact` returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the `Q` and `R` matrices can be stored compactly rather than as two separate dense matrices.

The data contained in `QR` or `QRPivoted` can be used to construct the `QRPackedQ` type, which is a compact representation of the rotation matrix:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T)\]

where \(\tau_i\) is the scale factor and \(v_i\) is the projection vector associated with the \(i^{th}\) Householder elementary reflector.

The data contained in `QRCompactWY` can be used to construct the `QRCompactWYQ` type, which is a compact representation of the rotation matrix

\[Q = I + Y T Y^T\]

where `Y` is \(m \times r\) lower trapezoidal and `T` is \(r \times r\) upper triangular. The compact WY representation [Schreiber1989] is not to be confused with the older WY representation [Bischof1987]. (The LAPACK documentation uses `V` in lieu of `Y`.)

[Bischof1987] (1, 2) C Bischof and C Van Loan, "The WY representation for products of Householder matrices", SIAM J Sci Stat Comput 8 (1987), s2-s13. doi:10.1137/0908009

[Schreiber1989] R Schreiber and C Van Loan, "A storage-efficient WY representation for products of Householder transformations", SIAM J Sci Stat Comput 10 (1989), 53-57. doi:10.1137/0910005

qrfact(A) → SPQR.Factorization

Compute the QR factorization of a sparse matrix `A`. A fill-reducing permutation is used. The main application of this type is to solve least squares problems with `\`. The function calls the C library SPQR, and a few additional functions from the library are wrapped but not exported.

qrfact!(A[, pivot=Val{false}])

`qrfact!` is the same as `qrfact()` when `A` is a subtype of `StridedMatrix`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

full(QRCompactWYQ[, thin=true]) → Matrix

Converts an orthogonal or unitary matrix stored as a `QRCompactWYQ` object, i.e. in the compact WY format [Bischof1987], to a dense matrix.

Optionally takes a `thin` Boolean argument, which if `true` omits the columns that span the rows of `R` in the QR factorization that are zero. The resulting matrix is the `Q` in a thin QR factorization (sometimes called the reduced QR factorization). If `false`, returns a `Q` that spans all rows of `R` in its corresponding QR factorization.

lqfact!(A) → LQ

Compute the LQ factorization of `A`, using the input matrix as a workspace. See also `lq()`.

lqfact(A) → LQ

Compute the LQ factorization of `A`. See also `lq()`.

lq(A; [thin=true]) → L, Q

Perform an LQ factorization of `A` such that `A = L*Q`. The default is to compute a thin factorization. The LQ factorization is the QR factorization of `A.'`. `L` is not extended with zeros if the full `Q` is requested.
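A minimal sketch on an illustrative wide matrix:

```julia
A = [1.0 2.0 3.0; 4.0 5.0 6.0]
L, Q = lq(A)
# L is lower triangular, and L*Q reproduces A.
@assert isapprox(L*Q, A)
```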

bkfact(A) → BunchKaufman

Compute the Bunch-Kaufman [Bunch1977] factorization of a real symmetric or complex Hermitian matrix `A` and return a `BunchKaufman` object. The following functions are available for `BunchKaufman` objects: `size`, `\`, `inv`, `issymmetric`, `ishermitian`.

[Bunch1977] J R Bunch and L Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Mathematics of Computation 31:137 (1977), 163-179. url.

bkfact!(A) → BunchKaufman

`bkfact!` is the same as `bkfact()`, but saves space by overwriting the input `A`, instead of creating a copy.

eig(A,[irange,][vl,][vu,][permute=true,][scale=true]) → D, V

Computes eigenvalues (`D`) and eigenvectors (`V`) of `A`. See `eigfact()` for details on the `irange`, `vl`, and `vu` arguments and the `permute` and `scale` keyword arguments. The eigenvectors are returned columnwise.

julia> eig([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
([1.0,3.0,18.0], [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0])

`eig` is a wrapper around `eigfact()`, extracting all parts of the factorization to a tuple; where possible, using `eigfact()` is recommended.

eig(A, B) → D, V

Computes generalized eigenvalues (`D`) and vectors (`V`) of `A` with respect to `B`.

`eig` is a wrapper around `eigfact()`, extracting all parts of the factorization to a tuple; where possible, using `eigfact()` is recommended.

julia> A = [1 0; 0 1]
2×2 Array{Int64,2}:
 1  0
 0  1

julia> B = [0 1; -1 0]
2×2 Array{Int64,2}:
  0  1
 -1  0

julia> eig(A, B)
(Complex{Float64}[0.0+1.0im,0.0-1.0im], Complex{Float64}[0.0-1.0im 0.0+1.0im; -1.0-0.0im -1.0+0.0im])

eigvals(A,[irange,][vl,][vu]) → values

Returns the eigenvalues of `A`. If `A` is `Symmetric`, `Hermitian` or `SymTridiagonal`, it is possible to calculate only a subset of the eigenvalues by specifying either a `UnitRange` `irange` covering indices of the sorted eigenvalues, or a pair `vl` and `vu` for the lower and upper boundaries of the eigenvalues.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.
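A sketch with illustrative matrices, including the subset-selection form described above:

```julia
A = Symmetric([2.0 1.0; 1.0 2.0])
ev = eigvals(A)            # sorted ascending for symmetric input: [1.0, 3.0]

S = SymTridiagonal([1.0, 2.0, 3.0], [0.0, 0.0])
lo = eigvals(S, 1:2)       # only the two smallest eigenvalues of S
```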

eigvals!(A,[irange,][vl,][vu]) → values

Same as `eigvals()`, but saves space by overwriting the input `A`, instead of creating a copy.

eigmax(A; permute::Bool=true, scale::Bool=true)

Returns the largest eigenvalue of `A`. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of `A` are complex, this method will fail, since complex numbers cannot be sorted.

julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmax(A)
1.0

julia> A = [0 im; 1 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 1+0im  0+0im

julia> eigmax(A)
ERROR: DomainError:
 in #eigmax#30(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:186
 in eigmax(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:184
 ...

eigmin(A; permute::Bool=true, scale::Bool=true)

Returns the smallest eigenvalue of `A`. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of `A` are complex, this method will fail, since complex numbers cannot be sorted.

julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmin(A)
-1.0

julia> A = [0 im; 1 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 1+0im  0+0im

julia> eigmin(A)
ERROR: DomainError:
 in #eigmin#31(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:226
 in eigmin(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:224
 ...

eigvecs(A, [eigvals,][permute=true,][scale=true]) → Matrix

Returns a matrix `M` whose columns are the eigenvectors of `A`. (The `k`th eigenvector can be obtained from the slice `M[:, k]`.) The `permute` and `scale` keywords are the same as for `eigfact()`.

For `SymTridiagonal` matrices, if the optional vector of eigenvalues `eigvals` is specified, returns the specific corresponding eigenvectors.

eigfact(A,[irange,][vl,][vu,][permute=true,][scale=true]) → Eigen

Computes the eigenvalue decomposition of `A`, returning an `Eigen` factorization object `F` which contains the eigenvalues in `F[:values]` and the eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

The following functions are available for `Eigen` objects: `inv()`, `det()`, and `isposdef()`.

If `A` is `Symmetric`, `Hermitian` or `SymTridiagonal`, it is possible to calculate only a subset of the eigenvalues by specifying either a `UnitRange` `irange` covering indices of the sorted eigenvalues, or a pair `vl` and `vu` for the lower and upper boundaries of the eigenvalues.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.

eigfact(A, B) → GeneralizedEigen

Computes the generalized eigenvalue decomposition of `A` and `B`, returning a `GeneralizedEigen` factorization object `F` which contains the generalized eigenvalues in `F[:values]` and the generalized eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th generalized eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

eigfact!(A[, B])

Same as `eigfact()`, but saves space by overwriting the input `A` (and `B`), instead of creating a copy.

hessfact(A)

Compute the Hessenberg decomposition of `A` and return a `Hessenberg` object. If `F` is the factorization object, the unitary matrix can be accessed with `F[:Q]` and the Hessenberg matrix with `F[:H]`. When `Q` is extracted, the resulting type is the `HessenbergQ` object, and may be converted to a regular matrix with `full()`.

hessfact!(A)

`hessfact!` is the same as `hessfact()`, but saves space by overwriting the input `A`, instead of creating a copy.

schurfact(A::StridedMatrix) → F::Schur

Computes the Schur factorization of the matrix `A`. The (quasi) triangular Schur factor can be obtained from the `Schur` object `F` with either `F[:Schur]` or `F[:T]`, and the orthogonal/unitary Schur vectors can be obtained with `F[:vectors]` or `F[:Z]`, such that `A = F[:vectors]*F[:Schur]*F[:vectors]'`. The eigenvalues of `A` can be obtained with `F[:values]`.
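A sketch on an illustrative matrix, checking the identity above:

```julia
A = [4.0 1.0; 2.0 3.0]
F = schurfact(A)
# The Schur vectors and factor reproduce A.
@assert isapprox(F[:vectors]*F[:T]*F[:vectors]', A)
vals = F[:values]    # eigenvalues of A (here 5.0 and 2.0, in some order)
```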

schurfact!(A::StridedMatrix) → F::Schur

Same as `schurfact` but uses the input argument as workspace.

schur(A::StridedMatrix) → T::Matrix, Z::Matrix, λ::Vector

Computes the Schur factorization of the matrix `A`. The methods return the (quasi) triangular Schur factor `T` and the orthogonal/unitary Schur vectors `Z` such that `A = Z*T*Z'`. The eigenvalues of `A` are returned in the vector `λ`.

See `schurfact`.

ordschur(F::Schur, select::Union{Vector{Bool}, BitVector}) → F::Schur

Reorders the Schur factorization `F` of a matrix `A = Z*T*Z'` according to the logical array `select`, returning the reordered factorization `F` object. The selected eigenvalues appear in the leading diagonal of `F[:Schur]` and the corresponding leading columns of `F[:vectors]` form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via `select`.

ordschur!(F::Schur, select::Union{Vector{Bool}, BitVector}) → F::Schur

Same as `ordschur` but overwrites the factorization `F`.

ordschur(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool}, BitVector}) → T::StridedMatrix, Z::StridedMatrix, λ::Vector

Reorders the Schur factorization of a real matrix `A = Z*T*Z'` according to the logical array `select`, returning the reordered matrices `T` and `Z` as well as the vector of eigenvalues `λ`. The selected eigenvalues appear in the leading diagonal of `T` and the corresponding leading columns of `Z` form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via `select`.

ordschur!(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool}, BitVector}) → T::StridedMatrix, Z::StridedMatrix, λ::Vector

Same as `ordschur` but overwrites the input arguments.

schurfact(A::StridedMatrix, B::StridedMatrix) → F::GeneralizedSchur

Computes the Generalized Schur (or QZ) factorization of the matrices `A` and `B`. The (quasi) triangular Schur factors can be obtained from the `Schur` object `F` with `F[:S]` and `F[:T]`, the left unitary/orthogonal Schur vectors can be obtained with `F[:left]` or `F[:Q]`, and the right unitary/orthogonal Schur vectors can be obtained with `F[:right]` or `F[:Z]`, such that `A=F[:left]*F[:S]*F[:right]'` and `B=F[:left]*F[:T]*F[:right]'`. The generalized eigenvalues of `A` and `B` can be obtained with `F[:alpha]./F[:beta]`.

schurfact!(A::StridedMatrix, B::StridedMatrix) → F::GeneralizedSchur

Same as `schurfact` but uses the input matrices `A` and `B` as workspace.

ordschur(F::GeneralizedSchur, select::Union{Vector{Bool}, BitVector}) → F::GeneralizedSchur

Reorders the Generalized Schur factorization `F` of a matrix pair `(A, B) = (Q*S*Z', Q*T*Z')` according to the logical array `select` and returns a `GeneralizedSchur` object `F`. The selected eigenvalues appear in the leading diagonal of both `F[:S]` and `F[:T]`, and the left and right orthogonal/unitary Schur vectors are also reordered such that `(A, B) = F[:Q]*(F[:S], F[:T])*F[:Z]'` still holds and the generalized eigenvalues of `A` and `B` can still be obtained with `F[:alpha]./F[:beta]`.

ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool}, BitVector}) → F::GeneralizedSchur

Same as `ordschur` but overwrites the factorization `F`.

ordschur(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector

Reorders the Generalized Schur factorization of a matrix pair `(A, B) = (Q*S*Z', Q*T*Z')` according to the logical array `select` and returns the matrices `S`, `T`, `Q`, `Z` and the vectors `α` and `β`. The selected eigenvalues appear in the leading diagonal of both `S` and `T`, and the left and right unitary/orthogonal Schur vectors are also reordered such that `(A, B) = Q*(S, T)*Z'` still holds and the generalized eigenvalues of `A` and `B` can still be obtained with `α./β`.

ordschur!(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector

Same as `ordschur` but overwrites the input arguments.

schur(A::StridedMatrix, B::StridedMatrix) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector

See `schurfact`.

svdfact(A[, thin=true]) → SVD

Compute the singular value decomposition (SVD) of `A` and return an `SVD` object. `U`, `S`, `V` and `Vt` can be obtained from the factorization `F` with `F[:U]`, `F[:S]`, `F[:V]` and `F[:Vt]`, such that `A = U*diagm(S)*Vt`. The algorithm produces `Vt` and hence `Vt` is more efficient to extract than `V`.

If `thin=true` (default), a thin SVD is returned. For a \(M \times N\) matrix `A`, `U` is \(M \times M\) for a full SVD (`thin=false`) and \(M \times \min(M, N)\) for a thin SVD.

svdfact!(A[, thin=true]) → SVD

svdfact!
is the same assvdfact()
, but saves space by overwriting the inputA
, instead of creating a copy.If
thin=true
(default), a thin SVD is returned. For a \(M \times N\) matrixA
,U
is \(M \times M\) for a full SVD (thin=false
) and \(M \times \min(M, N)\) for a thin SVD.

svd(A[, thin=true]) → U, S, V

Computes the SVD of
A
, returningU
, vectorS
, andV
such thatA == U*diagm(S)*V'
.If
thin=true
(default), a thin SVD is returned. For a \(M \times N\) matrixA
,U
is \(M \times M\) for a full SVD (thin=false
) and \(M \times \min(M, N)\) for a thin SVD.svd
is a wrapper aroundsvdfact(A)()
, extracting all parts of theSVD
factorization to a tuple. Direct use ofsvdfact
is therefore more efficient.

svdvals(A)

Returns the singular values of
A
.

svdvals!(A)

Returns the singular values of
A
, saving space by overwriting the input.
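As a concrete sketch of the relationship stated above (written against the Base API this manual documents), the factors returned by svd reconstruct the original matrix:

```julia
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]   # 3×2 matrix, so the thin SVD has a 3×2 U

U, S, V = svd(A)                  # thin SVD by default (thin=true)

# A is recovered (up to rounding) from the factors
err = norm(A - U*diagm(S)*V')
println(err)
```

When several pieces of the factorization are needed, svdfact(A) returns the SVD object once so the individual factors can be extracted without recomputation.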

svdfact(A, B) → GeneralizedSVD

Compute the generalized SVD of A and B, returning a GeneralizedSVD factorization object F, such that A = F[:U]*F[:D1]*F[:R0]*F[:Q]' and B = F[:V]*F[:D2]*F[:R0]*F[:Q]'.

For an M-by-N matrix A and P-by-N matrix B,

F[:U] is an M-by-M orthogonal matrix,
F[:V] is a P-by-P orthogonal matrix,
F[:Q] is an N-by-N orthogonal matrix,
F[:R0] is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,
F[:D1] is an M-by-(K+L) diagonal matrix with 1s in the first K entries,
F[:D2] is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
K+L is the effective numerical rank of the matrix [A; B].

The entries of F[:D1] and F[:D2] are related, as explained in the LAPACK documentation for the generalized SVD and the xGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer).

svd(A, B) → U, V, Q, D1, D2, R0

Wrapper around svdfact(A, B), extracting all parts of the factorization to a tuple. Direct use of svdfact is therefore generally more efficient. The function returns the generalized SVD of A and B, returning U, V, Q, D1, D2, and R0 such that A = U*D1*R0*Q' and B = V*D2*R0*Q'.

svdvals(A, B)

Return the generalized singular values from the generalized singular value decomposition of A and B.

LinAlg.Givens(i1, i2, c, s) → G

A Givens rotation linear operator. The fields c and s represent the cosine and sine of the rotation angle, respectively. The Givens type supports left multiplication G*A and conjugated transpose right multiplication A*G'. The type doesn’t have a size and can therefore be multiplied with matrices of arbitrary size as long as i2 <= size(A,1) for G*A or i2 <= size(A,2) for A*G'.

See also: givens()

givens{T}(f::T, g::T, i1::Integer, i2::Integer) → (G::Givens, r::T)

Computes the Givens rotation G and scalar r such that for any vector x where

x[i1] = f
x[i2] = g

the result of the multiplication

y = G*x

has the property that

y[i1] = r
y[i2] = 0

See also: LinAlg.Givens

givens(x::AbstractVector, i1::Integer, i2::Integer) → (G::Givens, r)

Computes the Givens rotation G and scalar r such that the result of the multiplication B = G*x has the property that

B[i1] = r
B[i2] = 0

See also: LinAlg.Givens

givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) → (G::Givens, r)

Computes the Givens rotation G and scalar r such that the result of the multiplication B = G*A has the property that

B[i1,j] = r
B[i2,j] = 0

See also: LinAlg.Givens
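To make the effect concrete, here is a minimal sketch (using the Base givens documented above) that introduces a zero into a vector, as in the first step of a QR factorization:

```julia
x = [3.0, 4.0]

# Build a rotation that zeroes x[2] against x[1]
G, r = givens(x[1], x[2], 1, 2)

y = G*x
# y[1] equals r (the norm of x, 5.0) and y[2] is zero, up to rounding
println(y)
```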

triu(M)

Upper triangle of a matrix.

triu(M, k)

Returns the upper triangle of M starting from the kth superdiagonal.

triu!(M)

Upper triangle of a matrix, overwriting M in the process.

triu!(M, k)

Returns the upper triangle of M starting from the kth superdiagonal, overwriting M in the process.

tril(M)

Lower triangle of a matrix.

tril(M, k)

Returns the lower triangle of M starting from the kth superdiagonal.

tril!(M)

Lower triangle of a matrix, overwriting M in the process.

tril!(M, k)

Returns the lower triangle of M starting from the kth superdiagonal, overwriting M in the process.

diagind(M[, k])

A Range giving the indices of the kth diagonal of the matrix M.

diag(M[, k])

The kth diagonal of a matrix, as a vector. Use diagm to construct a diagonal matrix.

diagm(v[, k])

Construct a diagonal matrix and place v on the kth diagonal.
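A short sketch tying these functions together on one matrix:

```julia
M = [1 2 3; 4 5 6; 7 8 9]

triu(M)           # zeros everything below the main diagonal
tril(M, -1)       # strictly lower triangle (start from the first subdiagonal)
diag(M)           # main diagonal as a vector: [1, 5, 9]
diagm(diag(M))    # rebuild a diagonal matrix from that vector
M[diagind(M, 1)]  # entries of the first superdiagonal: [2, 6]
```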

scale!(A, b)

scale!(b, A)

Scale an array A by a scalar b overwriting A in-place.

If A is a matrix and b is a vector, then scale!(A,b) scales each column i of A by b[i] (similar to A*Diagonal(b)), while scale!(b,A) scales each row i of A by b[i] (similar to Diagonal(b)*A), again operating in-place on A. An InexactError exception is thrown if the scaling produces a number not representable by the element type of A, e.g. for integer types.

Tridiagonal(dl, d, du)

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type Tridiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with full(). The lengths of dl and du must be one less than the length of d.
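For example, a tridiagonal system can be set up and solved without ever forming the dense matrix; the solver dispatched for Tridiagonal runs in O(n) time (sketch using the full() conversion documented above):

```julia
dl = [1.0, 1.0]        # subdiagonal (length n-1)
d  = [4.0, 4.0, 4.0]   # diagonal (length n)
du = [1.0, 1.0]        # superdiagonal (length n-1)

T = Tridiagonal(dl, d, du)
b = [1.0, 2.0, 3.0]

x = T \ b              # specialized O(n) tridiagonal solve

println(norm(full(T)*x - b))   # residual is on the order of machine epsilon
```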

rank(M)

Compute the rank of a matrix.

norm(A[, p])

Compute the p-norm of a vector or the operator norm of a matrix A, defaulting to the p=2-norm.

For vectors, p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, norm(A, Inf) returns the largest value in abs(A), whereas norm(A, -Inf) returns the smallest.

For matrices, the matrix norm induced by the vector p-norm is used, where valid values of p are 1, 2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use vecnorm() to compute the Frobenius norm.

vecnorm(A[, p])

For any iterable container A (including arrays of any dimension) of numbers (or any element type for which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the corresponding length.

For example, if A is a matrix and p=2, then this is equivalent to the Frobenius norm.
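The distinction between the operator norm and the entrywise (Frobenius) norm is easy to see on a diagonal matrix (a sketch against the Base norm/vecnorm API documented above):

```julia
A = [3.0 0.0; 0.0 4.0]

norm([3.0, 4.0])             # vector 2-norm: 5.0
norm(A)                      # operator 2-norm: the largest singular value, 4.0
vecnorm(A)                   # Frobenius norm: sqrt(3^2 + 4^2) = 5.0
norm([1.0, -7.0, 2.0], Inf)  # largest absolute entry: 7.0
```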

normalize!(v[, p=2])

Normalize the vector v in-place with respect to the p-norm.

Inputs:

v::AbstractVector - vector to be normalized
p::Real - The p-norm to normalize with respect to. Default: 2

Output:

v - A unit vector being the input vector, rescaled to have norm 1. The input vector is modified in-place.

See also: normalize, qr

normalize(v[, p=2])

Normalize the vector v with respect to the p-norm.

Inputs:

v::AbstractVector - vector to be normalized
p::Real - The p-norm to normalize with respect to. Default: 2

Output:

v - A unit vector being a copy of the input vector, scaled to have norm 1

See also: normalize!, qr


cond(M[, p])

Condition number of the matrix M, computed using the operator p-norm. Valid values for p are 1, 2 (default), or Inf.

condskeel(M[, x, p])

\[\begin{split}\kappa_S(M, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\ \kappa_S(M, x, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p\end{split}\]

Skeel condition number \(\kappa_S\) of the matrix M, optionally with respect to the vector x, as computed using the operator p-norm. p is Inf by default, if not provided. Valid values for p are 1, 2, or Inf.

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.

trace(M)

Matrix trace.

det(M)

Matrix determinant.

logdet(M)

Log of matrix determinant. Equivalent to log(det(M)), but may provide increased accuracy and/or speed.

logabsdet(M)

Log of absolute value of determinant of real matrix. Equivalent to (log(abs(det(M))), sign(det(M))), but may provide increased accuracy and/or speed.
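The accuracy advantage is easiest to see when the determinant itself overflows Float64 but its logarithm does not (a sketch; the matrix size 1100 is chosen only so that 2^1100 exceeds the Float64 range):

```julia
M = eye(1100)*2.0   # determinant is 2^1100, far beyond the Float64 range

det(M)              # Inf — the product of pivots overflows
logdet(M)           # ≈ 1100*log(2) ≈ 762.46, computed without overflow

l, s = logabsdet(M) # log|det(M)| and sign(det(M)), also overflow-free
```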

inv(M)

Matrix inverse.

pinv(M[, tol])

Computes the Moore-Penrose pseudoinverse.

For matrices M with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values above a given threshold, tol.

The optimal choice of tol varies both with the value of M and the intended application of the pseudoinverse. The default value of tol is eps(real(float(one(eltype(M)))))*maximum(size(M)), which is essentially machine epsilon for the real part of a matrix element multiplied by the larger matrix dimension. For inverting dense ill-conditioned matrices in a least-squares sense, tol = sqrt(eps(real(float(one(eltype(M)))))) is recommended.

For more information, see [issue8859], [B96], [S84], [KY88].

[issue8859] Issue 8859, “Fix least squares”, https://github.com/JuliaLang/julia/pull/8859
[B96] Åke Björck, “Numerical Methods for Least Squares Problems”, SIAM Press, Philadelphia, 1996, “Other Titles in Applied Mathematics”, Vol. 51. doi:10.1137/1.9781611971484
[S84] G. W. Stewart, “Rank Degeneracy”, SIAM Journal on Scientific and Statistical Computing, 5(2), 1984, 403-413. doi:10.1137/0905030
[KY88] Konstantinos Konstantinides and Kung Yao, “Statistical analysis of effective singular values in matrix rank determination”, IEEE Transactions on Acoustics, Speech and Signal Processing, 36(5), 1988, 757-763. doi:10.1109/29.1585
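For a rectangular full-column-rank matrix, pinv acts as a left inverse and gives the least-squares solution of an overdetermined system (a sketch using the Base eye helper):

```julia
A = [1.0 0.0; 0.0 1.0; 1.0 1.0]   # 3×2, full column rank

P = pinv(A)                       # 2×3 Moore-Penrose pseudoinverse

# For full-column-rank A, pinv(A)*A recovers the identity
println(norm(P*A - eye(2)))

# and x = P*b is the least-squares solution of the overdetermined system A*x ≈ b
b = [1.0, 2.0, 3.0]
x = P*b
```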

nullspace(M)

Basis for nullspace of M.

repmat(A, n, m)

Construct a matrix by repeating the given matrix n times in dimension 1 and m times in dimension 2.

repeat(A::AbstractArray; inner=ntuple(x->1, ndims(A)), outer=ntuple(x->1, ndims(A)))

Construct an array by repeating the entries of A. The i-th element of inner specifies the number of times that the individual entries of the i-th dimension of A should be repeated. The i-th element of outer specifies the number of times that a slice along the i-th dimension of A should be repeated. If inner or outer are omitted, no repetition is performed.

julia> repeat(1:2, inner=2)
4-element Array{Int64,1}:
 1
 1
 2
 2

julia> repeat(1:2, outer=2)
4-element Array{Int64,1}:
 1
 2
 1
 2

julia> repeat([1 2; 3 4], inner=(2, 1), outer=(1, 3))
4×6 Array{Int64,2}:
 1  2  1  2  1  2
 1  2  1  2  1  2
 3  4  3  4  3  4
 3  4  3  4  3  4

kron(A, B)

Kronecker tensor product of two vectors or two matrices.
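A small sketch of the block structure of the Kronecker product, together with the mixed-product identity kron(A,B)*kron(x,y) == kron(A*x, B*y):

```julia
A = [1 2; 3 4]
B = [0 1; 1 0]

K = kron(A, B)   # 4×4: each entry A[i,j] is replaced by the block A[i,j]*B

# Mixed-product property: (A ⊗ B)(x ⊗ y) = (A*x) ⊗ (B*y)
x = [1, 0]
y = [0, 1]
println(K*kron(x, y) == kron(A*x, B*y))
```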

blkdiag(A...)

Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.

linreg(x, y)

Perform simple linear regression using Ordinary Least Squares. Returns a and b such that a + b*x is the closest straight line to the given points (x, y), i.e., such that the squared error between y and a + b*x is minimized.

Examples:

using PyPlot
x = 1.0:12.0
y = [5.5, 6.3, 7.6, 8.8, 10.9, 11.79, 13.48, 15.02, 17.77, 20.81, 22.0, 22.99]
a, b = linreg(x, y)   # Linear regression
plot(x, y, "o")       # Plot (x, y) points
plot(x, a + b*x)      # Plot line determined by linear regression

See also: \, cov, std, mean

expm(A)

Compute the matrix exponential of A, defined by

\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]

For symmetric or Hermitian A, an eigendecomposition (eigfact()) is used, otherwise the scaling and squaring algorithm (see [H05]) is chosen.

[H05] Nicholas J. Higham, “The scaling and squaring method for the matrix exponential revisited”, SIAM Journal on Matrix Analysis and Applications, 26(4), 2005, 1179-1193. doi:10.1137/090768539

logm(A::StridedMatrix)

If A has no negative real eigenvalue, compute the principal matrix logarithm of A, i.e. the unique matrix \(X\) such that \(e^X = A\) and \(-\pi < Im(\lambda) < \pi\) for all the eigenvalues \(\lambda\) of \(X\). If A has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.

If A is symmetric or Hermitian, its eigendecomposition (eigfact()) is used, if A is triangular an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). For general matrices, the complex Schur form (schur()) is computed and the triangular algorithm is used on the triangular factor.

[AH12] Awad H. Al-Mohy and Nicholas J. Higham, “Improved inverse scaling and squaring algorithms for the matrix logarithm”, SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. doi:10.1137/110852553
[AHR13] Awad H. Al-Mohy, Nicholas J. Higham and Samuel D. Relton, “Computing the Fréchet derivative of the matrix logarithm and estimating the condition number”, SIAM Journal on Scientific Computing, 35(4), 2013, C394-C410. doi:10.1137/120885991

sqrtm(A)

If A has no negative real eigenvalues, compute the principal matrix square root of A, that is the unique matrix \(X\) with eigenvalues having positive real part such that \(X^2 = A\). Otherwise, a nonprincipal square root is returned.

If A is symmetric or Hermitian, its eigendecomposition (eigfact()) is used to compute the square root. Otherwise, the square root is determined by means of the Björck-Hammarling method, which computes the complex Schur form (schur()) and then the complex square root of the triangular factor.

[BH83] Åke Björck and Sven Hammarling, “A Schur method for the square root of a matrix”, Linear Algebra and its Applications, 52-53, 1983, 127-140. doi:10.1016/0024-3795(83)80010-X
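A quick residual check of the principal square root (a sketch; the matrix is triangular with positive eigenvalues 4 and 9, so the principal-root conditions above hold):

```julia
A = [4.0 1.0; 0.0 9.0]   # no negative real eigenvalues

X = sqrtm(A)             # principal square root

println(norm(X*X - A))   # ≈ 0 up to rounding
```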

lyap(A, C)

Computes the solution X to the continuous Lyapunov equation AX + XA' + C = 0, where no eigenvalue of A has a zero real part and no two eigenvalues are negative complex conjugates of each other.

sylvester(A, B, C)

Computes the solution X to the Sylvester equation AX + XB + C = 0, where A, B and C have compatible dimensions and A and -B have no eigenvalues with equal real part.
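Both solvers can be sanity-checked by substituting the solution back into the defining equation (a sketch; the diagonal matrices chosen satisfy the eigenvalue conditions stated above):

```julia
A = [-1.0 0.0; 0.0 -2.0]   # eigenvalues -1, -2: nonzero real parts
C = [1.0 0.0; 0.0 1.0]

X = lyap(A, C)
println(norm(A*X + X*A' + C))   # Lyapunov residual, ≈ 0

B = [3.0 0.0; 0.0 4.0]
Y = sylvester(A, B, C)
println(norm(A*Y + Y*B + C))    # Sylvester residual, ≈ 0
```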

issymmetric(A) → Bool

Test whether a matrix is symmetric.

isposdef(A) → Bool

Test whether a matrix is positive definite.

isposdef!(A) → Bool

Test whether a matrix is positive definite, overwriting A in the process.

istril(A) → Bool

Test whether a matrix is lower triangular.

istriu(A) → Bool

Test whether a matrix is upper triangular.

isdiag(A) → Bool

Test whether a matrix is diagonal.

ishermitian(A) → Bool

Test whether a matrix is Hermitian.

transpose(A)

The transposition operator (.').

transpose!(dest, src)

Transpose array src and store the result in the preallocated array dest, which should have a size corresponding to (size(src,2), size(src,1)). No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions.

ctranspose(A)

The conjugate transposition operator (').

ctranspose!(dest, src)

Conjugate transpose array src and store the result in the preallocated array dest, which should have a size corresponding to (size(src,2), size(src,1)). No in-place transposition is supported and unexpected results will happen if src and dest have overlapping memory regions.

eigs(A; nev=6, ncv=max(20, 2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0, ))) → (d,[v,],nconv,niter,nmult,resid)

Computes eigenvalues d of A using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.

The following keyword arguments are supported:

nev: Number of eigenvalues
ncv: Number of Krylov vectors used in the computation; should satisfy nev+1 <= ncv <= n for real symmetric problems and nev+2 <= ncv <= n for other problems, where n is the size of the input matrix A. The default is ncv = max(20, 2*nev+1). Note that these restrictions limit the input matrix A to be of dimension at least 2.
which: type of eigenvalues to compute. See the note below.

:LM - eigenvalues of largest magnitude (default)
:SM - eigenvalues of smallest magnitude
:LR - eigenvalues of largest real part
:SR - eigenvalues of smallest real part
:LI - eigenvalues of largest imaginary part (nonsymmetric or complex A only)
:SI - eigenvalues of smallest imaginary part (nonsymmetric or complex A only)
:BE - compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end (real symmetric A only)

tol: parameter defining the relative tolerance for convergence of Ritz values (eigenvalue estimates). A Ritz value \(θ\) is considered converged when its associated residual is less than or equal to the product of tol and \(max(ɛ^{2/3}, |θ|)\), where ɛ = eps(real(eltype(A)))/2 is LAPACK’s machine epsilon. The residual associated with \(θ\) and its corresponding Ritz vector \(v\) is defined as the norm \(\|Av - vθ\|\). The specified value of tol should be positive; otherwise, it is ignored and \(ɛ\) is used instead. Default: \(ɛ\).
maxiter: Maximum number of iterations (default = 300)
sigma: Specifies the level shift used in inverse iteration. If nothing (default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to sigma using shift and invert iterations.
ritzvec: Returns the Ritz vectors v (eigenvectors) if true
v0: starting vector from which to start the iterations

eigs returns the nev requested eigenvalues in d, the corresponding Ritz vectors v (only if ritzvec=true), the number of converged eigenvalues nconv, the number of iterations niter and the number of matrix vector multiplications nmult, as well as the final residual vector resid.

Note

The sigma and which keywords interact: the description of eigenvalues searched for by which do not necessarily refer to the eigenvalues of A, but rather the linear operator constructed by the specification of the iteration mode implied by sigma.

sigma = nothing: ordinary (forward) iteration mode; which refers to the eigenvalues of \(A\)
sigma real or complex: inverse iteration with level shift sigma; which refers to the eigenvalues of \((A - \sigma I)^{-1}\)

Note

Although tol has a default value, the best choice depends strongly on the matrix A. We recommend that users _always_ specify a value for tol which suits their specific needs.

For details of how the errors in the computed eigenvalues are estimated, see:

B. N. Parlett, “The Symmetric Eigenvalue Problem”, SIAM: Philadelphia, 2/e (1998), Ch. 13.2, “Accessing Accuracy in Lanczos Problems”, pp. 290-292 ff.
R. B. Lehoucq and D. C. Sorensen, “Deflation Techniques for an Implicitly Restarted Arnoldi Iteration”, SIAM Journal on Matrix Analysis and Applications (1996), 17(4), 789-821. doi:10.1137/S0895479895281484



eigs(A, B; nev=6, ncv=max(20, 2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0, ))) → (d,[v,],nconv,niter,nmult,resid)

Computes generalized eigenvalues d of A and B using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.

The following keyword arguments are supported:

nev: Number of eigenvalues
ncv: Number of Krylov vectors used in the computation; should satisfy nev+1 <= ncv <= n for real symmetric problems and nev+2 <= ncv <= n for other problems, where n is the size of the input matrices A and B. The default is ncv = max(20, 2*nev+1). Note that these restrictions limit the input matrix A to be of dimension at least 2.
which: type of eigenvalues to compute. See the note below.

:LM - eigenvalues of largest magnitude (default)
:SM - eigenvalues of smallest magnitude
:LR - eigenvalues of largest real part
:SR - eigenvalues of smallest real part
:LI - eigenvalues of largest imaginary part (nonsymmetric or complex A only)
:SI - eigenvalues of smallest imaginary part (nonsymmetric or complex A only)
:BE - compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end (real symmetric A only)

tol: relative tolerance used in the convergence criterion for eigenvalues, similar to tol in the eigs() method for the ordinary eigenvalue problem, but effectively for the eigenvalues of \(B^{-1} A\) instead of \(A\). See the documentation for the ordinary eigenvalue problem in eigs() and the accompanying note about tol.
maxiter: Maximum number of iterations (default = 300)
sigma: Specifies the level shift used in inverse iteration. If nothing (default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to sigma using shift and invert iterations.
ritzvec: Returns the Ritz vectors v (eigenvectors) if true
v0: starting vector from which to start the iterations

eigs returns the nev requested eigenvalues in d, the corresponding Ritz vectors v (only if ritzvec=true), the number of converged eigenvalues nconv, the number of iterations niter and the number of matrix vector multiplications nmult, as well as the final residual vector resid.

Example

X = sprand(10, 10, 0.2)
eigs(X, nev = 2, tol = 1e-3)

Note

The sigma and which keywords interact: the description of eigenvalues searched for by which do not necessarily refer to the eigenvalue problem \(Av = Bv\lambda\), but rather the linear operator constructed by the specification of the iteration mode implied by sigma.

sigma = nothing: ordinary (forward) iteration mode; which refers to the problem \(Av = Bv\lambda\)
sigma real or complex: inverse iteration with level shift sigma; which refers to the problem \((A - \sigma B)^{-1}B v = v\nu\)

svds(A; nsv=6, ritzvec=true, tol=0.0, maxiter=1000, ncv=2*nsv, u0=zeros((0, )), v0=zeros((0, ))) → (SVD([left_sv,] s, [right_sv,]), nconv, niter, nmult, resid)

Computes the largest singular values s of A using implicitly restarted Lanczos iterations derived from eigs().

Inputs

A: Linear operator whose singular values are desired. A may be represented as a subtype of AbstractArray, e.g., a sparse matrix, or any other type supporting the four methods size(A), eltype(A), A * vector, and A' * vector.
nsv: Number of singular values. Default: 6.
ritzvec: If true, return the left and right singular vectors left_sv and right_sv. If false, omit the singular vectors. Default: true.
tol: tolerance, see eigs().
maxiter: Maximum number of iterations, see eigs(). Default: 1000.
ncv: Maximum size of the Krylov subspace, see eigs() (there called nev). Default: 2*nsv.
u0: Initial guess for the first left Krylov vector. It may have length m (the first dimension of A), or 0.
v0: Initial guess for the first right Krylov vector. It may have length n (the second dimension of A), or 0.

Outputs

svd: An SVD object containing the left singular vectors, the requested values, and the right singular vectors. If ritzvec = false, the left and right singular vectors will be empty.
nconv: Number of converged singular values.
niter: Number of iterations.
nmult: Number of matrix-vector products used.
resid: Final residual vector.

Example

X = sprand(10, 5, 0.2)
svds(X, nsv = 2)

Implementation note

svds(A) is formally equivalent to calling eigs to perform implicitly restarted Lanczos tridiagonalization on the Hermitian matrix \(\begin{pmatrix} 0 & A^\prime \\ A & 0 \end{pmatrix}\), whose eigenvalues are plus and minus the singular values of \(A\).

peakflops(n; parallel=false)

peakflops computes the peak flop rate of the computer by using double precision Base.LinAlg.BLAS.gemm!(). By default, if no arguments are specified, it multiplies a matrix of size n x n, where n = 2000. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set with BLAS.set_num_threads(n).

If the keyword argument parallel is set to true, peakflops is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argument n still refers to the size of the problem that is solved on each processor.
Low-level matrix operations

Matrix operations involving transpositions, like A' \ B, are converted by the Julia parser into calls to specially named functions like Ac_ldiv_B. If you want to overload these operations for your own types, then it is useful to know the names of these functions.

Also, in many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with ! below (e.g. A_mul_B!) according to the usual Julia convention.
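For instance, since the parser lowers an adjoint solve to one of these specially named functions, the two expressions below dispatch to the same method (a sketch):

```julia
A = [2.0 0.0; 0.0 3.0]
B = [1.0, 1.0]

# The parser lowers A' \ B into Ac_ldiv_B(A, B), so these are equivalent:
x1 = A' \ B
x2 = Ac_ldiv_B(A, B)

println(norm(x1 - x2))
```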

A_ldiv_B!([Y, ]A, B) → Y

Compute A \ B in-place and store the result in Y, returning the result. If only two arguments are passed, then A_ldiv_B!(A, B) overwrites B with the result.

The argument A should not be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by factorize() or cholfact()). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., lufact!()), and performance-critical situations requiring A_ldiv_B! usually also require fine-grained control over the factorization of A.

A_ldiv_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A\) \ \(Bᴴ\).

A_ldiv_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A\) \ \(Bᵀ\).

A_mul_B!(Y, A, B) → Y

Calculates the matrix-matrix or matrix-vector product \(A⋅B\) and stores the result in Y, overwriting the existing value of Y. Note that Y must not be aliased with either A or B.

julia> A=[1.0 2.0; 3.0 4.0]; B=[1.0 1.0; 1.0 1.0]; Y = similar(B); A_mul_B!(Y, A, B);

julia> Y
2×2 Array{Float64,2}:
 3.0  3.0
 7.0  7.0

A_mul_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A⋅Bᴴ\).

A_mul_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A⋅Bᵀ\).

A_rdiv_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A / Bᴴ\).

A_rdiv_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(A / Bᵀ\).

Ac_ldiv_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ\) \ \(B\).

Ac_ldiv_B!([Y, ]A, B) → Y

Similar to A_ldiv_B!(), but return \(Aᴴ\) \ \(B\), computing the result in-place in Y (or overwriting B if Y is not supplied).

Ac_ldiv_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ\) \ \(Bᴴ\).

Ac_mul_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ⋅B\).

Ac_mul_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ⋅Bᴴ\).

Ac_rdiv_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ / B\).

Ac_rdiv_Bc(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ / Bᴴ\).

At_ldiv_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ\) \ \(B\).

At_ldiv_B!([Y, ]A, B) → Y

Similar to A_ldiv_B!(), but return \(Aᵀ\) \ \(B\), computing the result in-place in Y (or overwriting B if Y is not supplied).

At_ldiv_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ\) \ \(Bᵀ\).

At_mul_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ⋅B\).

At_mul_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ⋅Bᵀ\).

At_rdiv_B(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ / B\).

At_rdiv_Bt(A, B)

For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ / Bᵀ\).
BLAS Functions

In Julia (as in much of scientific computation), dense linear algebra operations are based on the LAPACK library, which in turn is built on top of basic linear-algebra building blocks known as the BLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.

Base.LinAlg.BLAS provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in '!'. Usually, a BLAS function has four methods defined, for Float64, Float32, Complex128, and Complex64 arrays.

dot(n, X, incx, Y, incy)

Dot product of two vectors consisting of n elements of array X with stride incx and n elements of array Y with stride incy.

dotu(n, X, incx, Y, incy)

Dot function for two complex vectors.

dotc(n, X, incx, U, incy)

Dot function for two complex vectors, conjugating the first vector.

blascopy!(n, X, incx, Y, incy)

Copy n elements of array X with stride incx to array Y with stride incy. Returns Y.

nrm2(n, X, incx)

2-norm of a vector consisting of n elements of array X with stride incx.

asum(n, X, incx)

Sum of the absolute values of the first n elements of array X with stride incx.

axpy!(a, X, Y)

Overwrite Y with a*X + Y. Returns Y.
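A minimal sketch of the level-1 axpy! update, using the Base.LinAlg.BLAS module referenced earlier in this chapter:

```julia
# Compute y ← 2.0*x + y in-place via BLAS level-1 axpy!
x = [1.0, 2.0, 3.0]
y = [10.0, 10.0, 10.0]

Base.LinAlg.BLAS.axpy!(2.0, x, y)

println(y)   # y now holds 2*x + the old y
```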

scal!(n, a, X, incx)

Overwrite X with a*X for the first n elements of array X with stride incx. Returns X.

scal(n, a, X, incx)

Returns X scaled by a for the first n elements of array X with stride incx.

ger!(alpha, x, y, A)

Rank-1 update of the matrix A with vectors x and y as alpha*x*y' + A.

syr!(uplo, alpha, x, A)

Rank-1 update of the symmetric matrix A with vector x as alpha*x*x.' + A. When uplo is ‘U’ the upper triangle of A is updated (‘L’ for lower triangle). Returns A.

syrk!(uplo, trans, alpha, A, beta, C)

Rank-k update of the symmetric matrix C as alpha*A*A.' + beta*C or alpha*A.'*A + beta*C according to whether trans is ‘N’ or ‘T’. When uplo is ‘U’ the upper triangle of C is updated (‘L’ for lower triangle). Returns C.

syrk(uplo, trans, alpha, A)

Returns either the upper triangle or the lower triangle, according to uplo (‘U’ or ‘L’), of alpha*A*A.' or alpha*A.'*A, according to trans (‘N’ or ‘T’).

her!(uplo, alpha, x, A)

Methods for complex arrays only. Rank-1 update of the Hermitian matrix A with vector x as alpha*x*x' + A. When uplo is ‘U’ the upper triangle of A is updated (‘L’ for lower triangle). Returns A.

herk!(uplo, trans, alpha, A, beta, C)

Methods for complex arrays only. Rank-k update of the Hermitian matrix C as alpha*A*A' + beta*C or alpha*A'*A + beta*C according to whether trans is ‘N’ or ‘T’. When uplo is ‘U’ the upper triangle of C is updated (‘L’ for lower triangle). Returns C.

herk(uplo, trans, alpha, A)

Methods for complex arrays only. Returns either the upper triangle or the lower triangle, according to uplo (‘U’ or ‘L’), of alpha*A*A' or alpha*A'*A, according to trans (‘N’ or ‘T’).

gbmv!(trans, m, kl, ku, alpha, A, x, beta, y)

Update vector y as alpha*A*x + beta*y or alpha*A'*x + beta*y according to trans (‘N’ or ‘T’). The matrix A is a general band matrix of dimension m by size(A,2) with kl subdiagonals and ku superdiagonals. Returns the updated y.

gbmv(trans, m, kl, ku, alpha, A, x, beta, y)

Returns alpha*A*x or alpha*A'*x according to trans (‘N’ or ‘T’). The matrix A is a general band matrix of dimension m by size(A,2) with kl subdiagonals and ku superdiagonals.

sbmv!(uplo, k, alpha, A, x, beta, y)

Update vector y as alpha*A*x + beta*y where A is a symmetric band matrix of order size(A,2) with k superdiagonals stored in the argument A. The storage layout for A is described in the reference BLAS module, level-2 BLAS at <http://www.netlib.org/lapack/explore-html/>.

Returns the updated y.

sbmv(uplo, k, alpha, A, x)

Returns alpha*A*x where A is a symmetric band matrix of order size(A,2) with k superdiagonals stored in the argument A.

sbmv(uplo, k, A, x)

Returns A*x where A is a symmetric band matrix of order size(A,2) with k superdiagonals stored in the argument A.

gemm!(tA, tB, alpha, A, B, beta, C)

Update C as alpha*A*B + beta*C or the other three variants according to tA (transpose A) and tB. Returns the updated C.
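A minimal sketch of the in-place update, again via the Base.LinAlg.BLAS wrappers:

```julia
A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]
C = [1.0 1.0; 1.0 1.0]

# C ← 1.0*A*B + 2.0*C, with no transposition of either factor ('N', 'N')
Base.LinAlg.BLAS.gemm!('N', 'N', 1.0, A, B, 2.0, C)

println(C)   # A*B plus twice the old C
```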

gemm(tA, tB, alpha, A, B)

Returns alpha*A*B or the other three variants according to tA (transpose A) and tB.

gemm(tA, tB, A, B)

Returns A*B or the other three variants according to tA (transpose A) and tB.

gemv!(tA, alpha, A, x, beta, y)

Update the vector y as alpha*A*x + beta*y or alpha*A'x + beta*y according to tA (transpose A). Returns the updated y.

gemv(tA, alpha, A, x)

Returns alpha*A*x or alpha*A'x according to tA (transpose A).

gemv(tA, A, x)

Returns A*x or A'x according to tA (transpose A).

symm!(side, ul, alpha, A, B, beta, C)

Update C as alpha*A*B + beta*C or alpha*B*A + beta*C according to side. A is assumed to be symmetric. Only the ul triangle of A is used. Returns the updated C.

symm(side, ul, alpha, A, B)

Returns alpha*A*B or alpha*B*A according to side. A is assumed to be symmetric. Only the ul triangle of A is used.

symm(side, ul, A, B)

Returns A*B or B*A according to side. A is assumed to be symmetric. Only the ul triangle of A is used.

symm(tA, tB, alpha, A, B)

Returns alpha*A*B or the other three variants according to tA (transpose A) and tB.

symv!(ul, alpha, A, x, beta, y)

Update the vector y as alpha*A*x + beta*y. A is assumed to be symmetric. Only the ul triangle of A is used. Returns the updated y.

symv(ul, alpha, A, x)

Returns alpha*A*x. A is assumed to be symmetric. Only the ul triangle of A is used.

symv(ul, A, x)

Returns A*x. A is assumed to be symmetric. Only the ul triangle of A is used.

trmm!(side, ul, tA, dA, alpha, A, B)

Update B as alpha*A*B or one of the other three variants determined by side (A on left or right) and tA (transpose A). Only the ul triangle of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones). Returns the updated B.

trmm(side, ul, tA, dA, alpha, A, B)

Returns alpha*A*B or one of the other three variants determined by side (A on left or right) and tA (transpose A). Only the ul triangle of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones).

trsm!(side, ul, tA, dA, alpha, A, B)

Overwrite B with the solution to A*X = alpha*B or one of the other three variants determined by side (A on left or right of X) and tA (transpose A). Only the ul triangle of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones). Returns the updated B.

trsm(side, ul, tA, dA, alpha, A, B)

Returns the solution to A*X = alpha*B or one of the other three variants determined by side (A on left or right of X) and tA (transpose A). Only the ul triangle of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones).
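To see trsm's semantics concretely, here is a sketch using SciPy's wrapper for the same routine (assuming SciPy; side=0 and lower=0 put an upper-triangular A on the left):

```python
import numpy as np
from scipy.linalg import blas

A = np.array([[2.0, 1.0], [0.0, 4.0]])  # upper triangular
B = np.array([[4.0], [8.0]])

# solve A*X = alpha*B, the operation trsm!/trsm perform
X = blas.dtrsm(1.0, A, B, side=0, lower=0)
assert np.allclose(A @ X, B)
```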

trmv!(ul, tA, dA, A, b)

Returns op(A)*b, where op is determined by tA (N for identity, T for transpose A, and C for conjugate transpose A). Only the ul triangle (U for upper, L for lower) of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones if U, or non-unit if N). The multiplication occurs in-place on b.

trmv(ul, tA, dA, A, b)

Returns op(A)*b, where op is determined by tA (N for identity, T for transpose A, and C for conjugate transpose A). Only the ul triangle (U for upper, L for lower) of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones if U, or non-unit if N).

trsv!(ul, tA, dA, A, b)

Overwrite b with the solution to A*x = b or one of the other two variants determined by tA (transpose A) and ul (triangle of A used). dA indicates if A is unit-triangular (the diagonal is assumed to be all ones). Returns the updated b.

trsv(ul, tA, dA, A, b)

Returns the solution to A*x = b or one of the other two variants determined by tA (transpose A) and ul (triangle of A used). dA indicates if A is unit-triangular (the diagonal is assumed to be all ones).
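trsv solves a single triangular system by forward or back substitution. The same operation is available in SciPy as solve_triangular (a sketch, assuming SciPy; lower=False and unit_diagonal=False match ul = 'U' and dA = 'N'):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[2.0, 1.0], [0.0, 4.0]])  # upper triangular
b = np.array([4.0, 8.0])

# back substitution: same semantics as trsv with ul = 'U', tA = 'N', dA = 'N'
x = solve_triangular(A, b, lower=False, unit_diagonal=False)
assert np.allclose(A @ x, b)
```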

set_num_threads(n)

Set the number of threads the BLAS library should use.

I

An object of type UniformScaling, representing an identity matrix of any size.
LAPACK Functions
Base.LinAlg.LAPACK provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in '!'.
Usually a function has 4 methods defined, one each for Float64, Float32, Complex128, and Complex64 arrays.
Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.

gbtrf!(kl, ku, m, AB) → (AB, ipiv)

Compute the LU factorization of a banded matrix AB. kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. Returns the LU factorization in-place and ipiv, the vector of pivots used.

gbtrs!(trans, kl, ku, m, AB, ipiv, B)

Solve the equation AB * X = B. trans determines the orientation of AB. It may be N (no transpose), T (transpose), or C (conjugate transpose). kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. ipiv is the vector of pivots returned from gbtrf!. Returns the vector or matrix X, overwriting B in-place.
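The banded storage convention (kl subdiagonals and ku superdiagonals packed into the rows of AB) is easiest to see on a small example. SciPy's solve_banded uses the same LAPACK-style layout, ab[u + i - j, j] = A[i, j] (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import solve_banded

# dense tridiagonal matrix: kl = 1 subdiagonal, ku = 1 superdiagonal
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# banded storage: row 0 = superdiagonal, row 1 = diagonal, row 2 = subdiagonal
ab = np.array([[0.0, 1.0, 1.0],
               [4.0, 4.0, 4.0],
               [1.0, 1.0, 0.0]])

x = solve_banded((1, 1), ab, b)
assert np.allclose(A @ x, b)
```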

gebal!(job, A) → (ilo, ihi, scale)

Balance the matrix A before computing its eigensystem or Schur factorization. job can be one of N (A will not be permuted or scaled), P (A will only be permuted), S (A will only be scaled), or B (A will be both permuted and scaled). Modifies A in-place and returns ilo, ihi, and scale. If permuting was turned on, A[i,j] = 0 if j > i and 1 < j < ilo or j > ihi. scale contains information about the scaling/permutations performed.

gebak!(job, side, ilo, ihi, scale, V)

Transform the eigenvectors V of a matrix balanced using gebal! to the unscaled/unpermuted eigenvectors of the original matrix. Modifies V in-place. side can be L (left eigenvectors are transformed) or R (right eigenvectors are transformed).

gebrd!(A) → (A, d, e, tauq, taup)

Reduce A in-place to bidiagonal form A = QBP'. Returns A, containing the bidiagonal matrix B; d, containing the diagonal elements of B; e, containing the off-diagonal elements of B; tauq, containing the elementary reflectors representing Q; and taup, containing the elementary reflectors representing P.

gelqf!(A, tau)

Compute the LQ factorization of A, A = LQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A. Returns A and tau modified in-place.

gelqf!(A) → (A, tau)

Compute the LQ factorization of A, A = LQ. Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

geqlf!(A, tau)

Compute the QL factorization of A, A = QL. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A. Returns A and tau modified in-place.

geqlf!(A) → (A, tau)

Compute the QL factorization of A, A = QL. Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

geqrf!(A, tau)

Compute the QR factorization of A, A = QR. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A. Returns A and tau modified in-place.

geqrf!(A) → (A, tau)

Compute the QR factorization of A, A = QR. Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.
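geqrf! packs R into the upper triangle of A and the Householder reflectors below it; orgqr! (documented below) then rebuilds Q explicitly. A sketch of the same two-step pattern through SciPy's LAPACK bindings (assuming SciPy; dgeqrf/dorgqr are the Float64 variants):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# factor: R sits in the upper triangle, the reflectors below it, tau alongside
res = lapack.dgeqrf(A)
qr, tau, info = res[0], res[1], res[-1]
assert info == 0
R = np.triu(qr[:2, :])

# rebuild the thin Q from the stored reflectors
res = lapack.dorgqr(qr, tau)
Q, info = res[0], res[-1]
assert info == 0
assert np.allclose(Q @ R, A)
```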

geqp3!(A, jpvt, tau)

Compute the pivoted QR factorization of A, AP = QR using BLAS level 3. P is a pivoting matrix, represented by jpvt. tau stores the elementary reflectors. jpvt must have length greater than or equal to n if A is an (m x n) matrix. tau must have length greater than or equal to the smallest dimension of A. A, jpvt, and tau are modified in-place.

geqp3!(A, jpvt) → (A, jpvt, tau)

Compute the pivoted QR factorization of A, AP = QR using BLAS level 3. P is a pivoting matrix, represented by jpvt. jpvt must have length greater than or equal to n if A is an (m x n) matrix. Returns A and jpvt, modified in-place, and tau, which stores the elementary reflectors.

geqp3!(A) → (A, jpvt, tau)

Compute the pivoted QR factorization of A, AP = QR using BLAS level 3. Returns A, modified in-place, jpvt, which represents the pivoting matrix P, and tau, which stores the elementary reflectors.

gerqf!(A, tau)

Compute the RQ factorization of A, A = RQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A. Returns A and tau modified in-place.

gerqf!(A) → (A, tau)

Compute the RQ factorization of A, A = RQ. Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.

geqrt!(A, T)

Compute the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A. Returns A and T modified in-place.

geqrt!(A, nb) → (A, T)

Compute the blocked QR factorization of A, A = QR. nb sets the block size and it must be between 1 and n, the second dimension of A. Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

geqrt3!(A, T)

Recursively computes the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A. Returns A and T modified in-place.

geqrt3!(A) → (A, T)

Recursively computes the blocked QR factorization of A, A = QR. Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

getrf!(A) → (A, ipiv, info)

Compute the pivoted LU factorization of A, A = LU. Returns A, modified in-place, ipiv, the pivoting information, and an info code which indicates success (info = 0), a singular value in U (info = i, in which case U[i,i] is singular), or an error code (info < 0).
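getrf! produces a factorization that can be reused by getrs! for many right-hand sides. A sketch of the factor-then-solve pattern via SciPy's wrappers for the same two routines (assuming SciPy):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([[10.0], [12.0]])

lu, piv, info = lapack.dgetrf(A)
assert info == 0                     # info = 0 signals success

x, info = lapack.dgetrs(lu, piv, b)  # reuse the factorization, as getrs! does
assert info == 0
assert np.allclose(A @ x, b)
```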

tzrzf!(A) → (A, tau)

Transforms the upper trapezoidal matrix A to upper triangular form in-place. Returns A and tau, the scalar parameters for the elementary reflectors of the transformation.

ormrz!(side, trans, A, tau, C)

Multiplies the matrix C by Q from the transformation supplied by tzrzf!. Depending on side or trans the multiplication can be left-sided (side = L, Q*C) or right-sided (side = R, C*Q) and Q can be unmodified (trans = N), transposed (trans = T), or conjugate transposed (trans = C). Returns matrix C, which is modified in-place with the result of the multiplication.

gels!(trans, A, B) → (F, B, ssr)

Solves the linear equation A * X = B, A.' * X = B, or A' * X = B using a QR or LQ factorization. Modifies the matrix/vector B in place with the solution. A is overwritten with its QR or LQ factorization. trans may be one of N (no modification), T (transpose), or C (conjugate transpose). gels! searches for the minimum norm/least squares solution. A may be under- or over-determined. The solution is returned in B.

gesv!(A, B) → (B, A, ipiv)

Solves the linear equation A * X = B where A is a square matrix using the LU factorization of A. A is overwritten with its LU factorization and B is overwritten with the solution X. ipiv contains the pivoting information for the LU factorization of A.
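gesv! bundles the factor and solve steps into one call. The corresponding SciPy wrapper for the same routine returns the same outputs (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[4.0, 3.0], [6.0, 3.0]])
B = np.array([[10.0], [12.0]])

# one call: the LU factorization of A plus the solution X
lu, piv, x, info = lapack.dgesv(A, B)
assert info == 0
assert np.allclose(A @ x, B)
```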

getrs!(trans, A, ipiv, B)

Solves the linear equation A * X = B, A.' * X = B, or A' * X = B for square A. Modifies the matrix/vector B in place with the solution. A is the LU factorization from getrf!, with ipiv the pivoting information. trans may be one of N (no modification), T (transpose), or C (conjugate transpose).

getri!(A, ipiv)

Computes the inverse of A, using its LU factorization found by getrf!. ipiv is the pivot information output and A contains the LU factorization of getrf!. A is overwritten with its inverse.

gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) → (X, equed, R, C, B, rcond, ferr, berr, work)

Solves the linear equation A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) using the LU factorization of A. fact may be E, in which case A will be equilibrated and copied to AF; F, in which case AF and ipiv from a previous LU factorization are inputs; or N, in which case A will be copied to AF and then factored. If fact = F, equed may be N, meaning A has not been equilibrated; R, meaning A was multiplied by diagm(R) from the left; C, meaning A was multiplied by diagm(C) from the right; or B, meaning A was multiplied by diagm(R) from the left and diagm(C) from the right. If fact = F and equed = R or B the elements of R must all be positive. If fact = F and equed = C or B the elements of C must all be positive. Returns the solution X; equed, which is an output if fact is not N, and describes the equilibration that was performed; R, the row equilibration diagonal; C, the column equilibration diagonal; B, which may be overwritten with its equilibrated form diagm(R)*B (if trans = N and equed = R,B) or diagm(C)*B (if trans = T,C and equed = C,B); rcond, the reciprocal condition number of A after equilibrating; ferr, the forward error bound for each solution vector in X; berr, the backward error bound for each solution vector in X; and work, the reciprocal pivot growth factor.

gesvx!(A, B)

The no-equilibration, no-transpose simplification of gesvx!.

gelsd!(A, B, rcond) → (B, rnk)

Computes the least norm solution of A * X = B by finding the SVD factorization of A, then dividing-and-conquering the problem. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

gelsy!(A, B, rcond) → (B, rnk)

Computes the least norm solution of A * X = B by finding the full QR factorization of A, then dividing-and-conquering the problem. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.

gglse!(A, c, B, d) → (X,res)

Solves the equation A * x = c where x is subject to the equality constraint B * x = d. Uses the formula ||c - A*x||^2 = 0 to solve. Returns X and the residual sum-of-squares.

geev!(jobvl, jobvr, A) → (W, VL, VR)

Finds the eigensystem of A. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. Returns the eigenvalues in W, the right eigenvectors in VR, and the left eigenvectors in VL.
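In SciPy's wrapper for the same routine the jobvl/jobvr switches appear as compute_vl/compute_vr flags, and for a real input the eigenvalues come back split into real and imaginary parts wr and wi (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[2.0, 0.0], [0.0, 3.0]])

res = lapack.dgeev(A)  # real dgeev returns wr, wi, vl, vr, info
wr, wi, info = res[0], res[1], res[-1]
assert info == 0
assert np.allclose(sorted(wr), [2.0, 3.0])
assert np.allclose(wi, 0.0)  # this diagonal real matrix has real eigenvalues
```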

gesdd!(job, A) → (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V', using a divide and conquer approach. If job = A, all the columns of U and the rows of V' are computed. If job = N, no columns of U or rows of V' are computed. If job = O, A is overwritten with the columns of (thin) U and the rows of (thin) V'. If job = S, the columns of (thin) U and the rows of (thin) V' are computed and returned separately.
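The thin variants (job = S) keep only min(m,n) columns of U and rows of V'. SciPy's high-level svd can be forced onto the same divide-and-conquer driver (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import svd

A = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

# full_matrices=False is the thin form; gesdd is the divide-and-conquer driver
u, s, vt = svd(A, full_matrices=False, lapack_driver='gesdd')
assert np.allclose(u @ np.diag(s) @ vt, A)
assert np.allclose(s, [3.0, 1.0])  # singular values in descending order
```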

gesvd!(jobu, jobvt, A) → (U, S, VT)

Finds the singular value decomposition of A, A = U * S * V'. If jobu = A, all the columns of U are computed. If jobvt = A, all the rows of V' are computed. If jobu = N, no columns of U are computed. If jobvt = N, no rows of V' are computed. If jobu = O, A is overwritten with the columns of (thin) U. If jobvt = O, A is overwritten with the rows of (thin) V'. If jobu = S, the columns of (thin) U are computed and returned separately. If jobvt = S, the rows of (thin) V' are computed and returned separately. jobu and jobvt can't both be O. Returns U, S, and Vt, where S are the singular values of A.

ggsvd!(jobu, jobv, jobq, A, B) → (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V, the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

ggsvd3!(jobu, jobv, jobq, A, B) → (U, V, Q, alpha, beta, k, l, R)

Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V, the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function requires LAPACK 3.6.0.

geevx!(balanc, jobvl, jobvr, sense, A) → (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv)

Finds the eigensystem of A with matrix balancing. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. If balanc = N, no balancing is performed. If balanc = P, A is permuted but not scaled. If balanc = S, A is scaled but not permuted. If balanc = B, A is permuted and scaled. If sense = N, no reciprocal condition numbers are computed. If sense = E, reciprocal condition numbers are computed for the eigenvalues only. If sense = V, reciprocal condition numbers are computed for the right eigenvectors only. If sense = B, reciprocal condition numbers are computed for the eigenvalues and the right eigenvectors. If sense = E or B, the right and left eigenvectors must be computed.

ggev!(jobvl, jobvr, A, B) → (alpha, beta, vl, vr)

Finds the generalized eigendecomposition of A and B. If jobvl = N, the left eigenvectors aren't computed. If jobvr = N, the right eigenvectors aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed.

gtsv!(dl, d, du, B)

Solves the equation A * X = B where A is a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal. Overwrites B with the solution X and returns it.
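The dl/d/du convention can be checked by assembling the dense matrix those three vectors describe (a NumPy sketch, assuming NumPy; the dense solve stands in for gtsv! only to verify the layout):

```python
import numpy as np

dl = np.array([1.0, 1.0])       # subdiagonal (length n-1)
d = np.array([4.0, 4.0, 4.0])   # diagonal (length n)
du = np.array([1.0, 1.0])       # superdiagonal (length n-1)

# the dense tridiagonal matrix gtsv! operates on
A = np.diag(d) + np.diag(dl, -1) + np.diag(du, 1)
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```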

gttrf!(dl, d, du) → (dl, d, du, du2, ipiv)

Finds the LU factorization of a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal. Modifies dl, d, and du in-place and returns them along with the second superdiagonal du2 and the pivoting vector ipiv.

gttrs!(trans, dl, d, du, du2, ipiv, B)

Solves the equation A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) using the LU factorization computed by gttrf!. B is overwritten with the solution X.

orglq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an LQ factorization after calling gelqf! on A. Uses the output of gelqf!. A is overwritten by Q.

orgqr!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QR factorization after calling geqrf! on A. Uses the output of geqrf!. A is overwritten by Q.

orgql!(A, tau, k = length(tau))

Explicitly finds the matrix Q of a QL factorization after calling geqlf! on A. Uses the output of geqlf!. A is overwritten by Q.

orgrq!(A, tau, k = length(tau))

Explicitly finds the matrix Q of an RQ factorization after calling gerqf! on A. Uses the output of gerqf!. A is overwritten by Q.

ormlq!(side, trans, A, tau, C)

Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an LQ factorization of A computed using gelqf!. C is overwritten.

ormqr!(side, trans, A, tau, C)

Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrf!. C is overwritten.

ormql!(side, trans, A, tau, C)

Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QL factorization of A computed using geqlf!. C is overwritten.

ormrq!(side, trans, A, tau, C)

Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an RQ factorization of A computed using gerqf!. C is overwritten.

gemqrt!(side, trans, V, T, C)

Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrt!. C is overwritten.

posv!(uplo, A, B) → (A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix. If uplo = U, the upper Cholesky decomposition of A is computed. If uplo = L, the lower Cholesky decomposition of A is computed. A is overwritten by its Cholesky decomposition. B is overwritten with the solution X.

potrf!(uplo, A)

Computes the Cholesky (upper if uplo = U, lower if uplo = L) decomposition of positive-definite matrix A. A is overwritten and returned with an info code.

potri!(uplo, A)

Computes the inverse of positive-definite matrix A after calling potrf! to find its (upper if uplo = U, lower if uplo = L) Cholesky decomposition. A is overwritten by its inverse and returned.

potrs!(uplo, A, B)

Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by potrf!. If uplo = U, the upper Cholesky decomposition of A was computed. If uplo = L, the lower Cholesky decomposition of A was computed. B is overwritten with the solution X.
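The potrf!/potrs! pair mirrors getrf!/getrs! for the positive-definite case: factor once, then solve repeatedly. A sketch via SciPy's wrappers for the same routines (assuming SciPy; lower=1 matches uplo = 'L'):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive definite
b = np.array([[6.0], [5.0]])

c, info = lapack.dpotrf(A, lower=1)     # lower Cholesky factor, uplo = 'L'
assert info == 0

x, info = lapack.dpotrs(c, b, lower=1)  # reuse the factor, as potrs! does
assert info == 0
assert np.allclose(A @ x, b)
```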

pstrf!(uplo, A, tol) → (A, piv, rank, info)

Computes the (upper if uplo = U, lower if uplo = L) pivoted Cholesky decomposition of positive-definite matrix A with a user-set tolerance tol. A is overwritten by its Cholesky decomposition. Returns A, the pivots piv, the rank of A, and an info code. If info = 0, the factorization succeeded. If info = i > 0, then A is indefinite or rank-deficient.

ptsv!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A. D is the diagonal of A and E is the off-diagonal. B is overwritten with the solution X and returned.

pttrf!(D, E)

Computes the LDLt factorization of a positive-definite tridiagonal matrix with D as diagonal and E as off-diagonal. D and E are overwritten and returned.

pttrs!(D, E, B)

Solves A * X = B for positive-definite tridiagonal A with diagonal D and off-diagonal E after computing A's LDLt factorization using pttrf!. B is overwritten with the solution X.

trtri!(uplo, diag, A)

Finds the inverse of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. A is overwritten with its inverse.

trtrs!(uplo, trans, diag, A, B)

Solves A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) for (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. B is overwritten with the solution X.

trcon!(norm, uplo, diag, A)

Finds the reciprocal condition number of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. If norm = I, the condition number is found in the infinity norm. If norm = O or 1, the condition number is found in the one norm.

trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T))

Finds the eigensystem of an upper triangular matrix T. If side = R, the right eigenvectors are computed. If side = L, the left eigenvectors are computed. If side = B, both sets are computed. If howmny = A, all eigenvectors are found. If howmny = B, all eigenvectors are found and back-transformed using VL and VR. If howmny = S, only the eigenvectors corresponding to the values in select are computed.

trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) → (Ferr, Berr)

Estimates the error in the solution to A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) for side = L, or the equivalent right-handed equations in X * A for side = R, after computing X using trtrs!. If uplo = U, A is upper triangular. If uplo = L, A is lower triangular. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. Ferr and Berr are optional inputs. Ferr is the forward error and Berr is the backward error, each component-wise.

stev!(job, dv, ev) → (dv, Zmat)

Computes the eigensystem for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If job = N, only the eigenvalues are found and returned in dv. If job = V, then the eigenvectors are also found and returned in Zmat.

stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) → (dv, iblock, isplit)

Computes the eigenvalues for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. If order = B, eigenvalues are ordered within a block. If order = E, they are ordered across all the blocks. abstol can be set as a tolerance for convergence.

stegr!(jobz, range, dv, ev, vl, vu, il, iu) → (w, Z)

Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. The eigenvalues are returned in w and the eigenvectors in Z.

stein!(dv, ev_in, w_in, iblock_in, isplit_in)

Computes the eigenvectors for a symmetric tridiagonal matrix with dv as diagonal and ev_in as off-diagonal. w_in specifies the input eigenvalues for which to find corresponding eigenvectors. iblock_in specifies the submatrices corresponding to the eigenvalues in w_in. isplit_in specifies the splitting points between the submatrix blocks.

syconv!(uplo, A, ipiv) → (A, work)

Converts a symmetric matrix A (which has been factorized into a triangular matrix) into two matrices L and D. If uplo = U, A is upper triangular. If uplo = L, it is lower triangular. ipiv is the pivot vector from the triangular factorization. A is overwritten by L and D.

sysv!(uplo, A, B) → (B, A, ipiv)

Finds the solution to A * X = B for symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.

sytrf!(uplo, A) → (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

sytri!(uplo, A, ipiv)

Computes the inverse of a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

sytrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

hesv!(uplo, A, B) → (B, A, ipiv)

Finds the solution to A * X = B for Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.

hetrf!(uplo, A) → (A, ipiv, info)

Computes the Bunch-Kaufman factorization of a Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info, which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.

hetri!(uplo, A, ipiv)

Computes the inverse of a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.

hetrs!(uplo, A, ipiv, B)

Solves the equation A * X = B for a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.

syev!(jobz, uplo, A)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used.
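A sketch of syev's two modes through SciPy's wrapper for the same routine (assuming SciPy; compute_v=1 is jobz = 'V', lower=0 is uplo = 'U'):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[2.0, 1.0], [1.0, 2.0]])

w, v, info = lapack.dsyev(A, compute_v=1, lower=0)
assert info == 0
assert np.allclose(w, [1.0, 3.0])                # eigenvalues come back ascending
assert np.allclose(A @ v[:, 0], w[0] * v[:, 0])  # columns of v are eigenvectors
```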

syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) → (W, Z)

Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. abstol can be set as a tolerance for convergence. The eigenvalues are returned in W and the eigenvectors in Z.

sygvd!(itype, jobz, uplo, A, B) → (w, A, B)

Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A and symmetric positive-definite matrix B. If uplo = U, the upper triangles of A and B are used. If uplo = L, the lower triangles of A and B are used. If itype = 1, the problem to solve is A * x = lambda * B * x. If itype = 2, the problem to solve is A * B * x = lambda * x. If itype = 3, the problem to solve is B * A * x = lambda * x.

bdsqr!(uplo, d, e_, Vt, U, C) → (d, Vt, U, C)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. Can optionally also compute the product Q' * C. Returns the singular values in d, and the matrix C overwritten with Q' * C.

bdsdc!(uplo, compq, d, e_) → (d, e, u, vt, q, iq)

Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal using a divide and conquer method. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. If compq = N, only the singular values are found. If compq = I, the singular values and vectors are found. If compq = P, the singular values and vectors are found in compact form. Only works for real types. Returns the singular values in d, and if compq = P, the compact singular vectors in iq.

gecon!(normtype, A, anorm)

Finds the reciprocal condition number of matrix A. If normtype = I, the condition number is found in the infinity norm. If normtype = O or 1, the condition number is found in the one norm. A must be the result of getrf! and anorm is the norm of A in the relevant norm.

gehrd!(ilo, ihi, A) → (A, tau)

Converts a matrix A to Hessenberg form. If A is balanced with gebal!, then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(A,2). tau contains the elementary reflectors of the factorization.

orghr!(ilo, ihi, A, tau)

Explicitly finds Q, the orthogonal/unitary matrix from gehrd!. ilo, ihi, A, and tau must correspond to the input/output to gehrd!.

gees!(jobvs, A) → (A, vs, w)

Computes the eigenvalues (jobvs = N) or the eigenvalues and Schur vectors (jobvs = V) of matrix A. A is overwritten by its Schur form. Returns A, vs, containing the Schur vectors, and w, containing the eigenvalues.

gges!(jobvsl, jobvsr, A, B) → (A, B, alpha, beta, vsl, vsr)

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B. The generalized eigenvalues are returned in alpha and beta. The left Schur vectors are returned in vsl and the right Schur vectors are returned in vsr.

trexc!(compq, ifst, ilst, T, Q) → (T, Q)

Reorder the Schur factorization of a matrix. If compq = V, the Schur vectors Q are reordered. If compq = N, they are not modified. ifst and ilst specify the reordering of the vectors.

trsen!(compq, job, select, T, Q) → (T, Q, w)

Reorder the Schur factorization of a matrix and optionally find reciprocal condition numbers. If job = N, no condition numbers are found. If job = E, only the condition number for this cluster of eigenvalues is found. If job = V, only the condition number for the invariant subspace is found. If job = B, then the condition numbers for the cluster and subspace are found. If compq = V, the Schur vectors Q are updated. If compq = N, the Schur vectors are not modified. select determines which eigenvalues are in the cluster. Returns T, Q, and the reordered eigenvalues in w.

tgsen!(select, S, T, Q, Z) → (S, T, alpha, beta, Q, Z)

Reorders the vectors of a generalized Schur decomposition. select specifies the eigenvalues in each cluster.

trsyl!(transa, transb, A, B, C, isgn=1) → (C, scale)

Solves the Sylvester matrix equation A * X +/- X * B = scale*C where A and B are both quasi-upper triangular. If transa = N, A is not modified. If transa = T, A is transposed. If transa = C, A is conjugate transposed. Similarly for transb and B. If isgn = 1, the equation A * X + X * B = scale * C is solved. If isgn = -1, the equation A * X - X * B = scale * C is solved. Returns X (overwriting C) and scale.
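The isgn = 1 case (A * X + X * B = scale * C) is also exposed in SciPy as solve_sylvester, which performs the Schur reduction internally and folds scale into the result (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[3.0, 1.0], [0.0, 2.0]])  # (quasi-)upper triangular
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

# solves A*X + X*B = C, i.e. the isgn = 1 Sylvester equation
X = solve_sylvester(A, B, C)
assert np.allclose(A @ X + X @ B, C)
```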
© 2009–2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors
Licensed under the MIT License.
https://docs.julialang.org/en/release0.5/stdlib/linalg/