Linear Algebra
Standard Functions
Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse factorizations call functions from SuiteSparse.
-
*(A, B) -
Matrix multiplication.
-
\(A, B) -
Matrix division using a polyalgorithm. For input matrices A and B, the result X is such that A*X == B when A is square. The solver that is used depends upon the structure of A. If A is upper or lower triangular (or diagonal), no factorization of A is required and the system is solved with either forward or backward substitution. For non-triangular square matrices, an LU factorization is used.
For rectangular A the result is the minimum-norm least squares solution computed by a pivoted QR factorization of A and a rank estimate of A based on the R factor.
When A is sparse, a similar polyalgorithm is used. For indefinite matrices, the LDLt factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.
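A minimal sketch of the square case (values chosen only for illustration):

```julia
A = [1.0 2.0; 3.0 4.0]
b = [5.0, 6.0]
x = A \ b            # square A: solved via an LU factorization
isapprox(A*x, b)     # true
```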
-
dot(x, y) -
⋅(x, y) -
Compute the dot product. For complex vectors, the first vector is conjugated.
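For example (values chosen for illustration):

```julia
dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # 32.0
dot([im], [im])                         # 1 + 0im, since the first vector is conjugated
```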
-
vecdot(x, y) -
For any iterable containers x and y (including arrays of any dimension) of numbers (or any element type for which dot is defined), compute the Euclidean dot product (the sum of dot(x[i], y[i])) as if they were vectors.
-
cross(x, y) -
×(x, y) -
Compute the cross product of two 3-vectors.
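For example, the cross product of the first two standard basis vectors is the third:

```julia
cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # [0.0, 0.0, 1.0]
```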
-
factorize(A) -
Compute a convenient factorization of A, based upon the type of the input matrix. If A is passed as a generic matrix, factorize checks every element of A to verify or rule out symmetry, triangular structure, and the other properties below, short-circuiting as soon as it can rule one out. The return value can be reused for efficient solving of multiple systems. For example: A = factorize(A); x = A\b; y = A\C.

Properties of A             Type of factorization
Positive-definite           Cholesky (see cholfact())
Dense symmetric/Hermitian   Bunch-Kaufman (see bkfact())
Sparse symmetric/Hermitian  LDLt (see ldltfact())
Triangular                  Triangular
Diagonal                    Diagonal
Bidiagonal                  Bidiagonal
Tridiagonal                 LU (see lufact())
Symmetric real tridiagonal  LDLt (see ldltfact())
General square              LU (see lufact())
General non-square          QR (see qrfact())

If factorize is called on a Hermitian positive-definite matrix, for instance, then factorize will return a Cholesky factorization.

Example:

A = diagm(rand(5)) + diagm(rand(4), 1)  # A is really bidiagonal
factorize(A)  # factorize will check to see that A is already factorized

This returns a 5×5 Bidiagonal{Float64}, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for Bidiagonal types.
-
full(F) -
Reconstruct the matrix A from the factorization F = factorize(A).
-
Diagonal(A::AbstractMatrix) -
Constructs a matrix from the diagonal of A.
-
Diagonal(V::AbstractVector) -
Constructs a matrix with V as its diagonal.
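For example, multiplying by a Diagonal scales each component of a vector:

```julia
D = Diagonal([1.0, 2.0, 3.0])
D * [1.0, 1.0, 1.0]        # [1.0, 2.0, 3.0]
```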
-
Bidiagonal(dv, ev, isupper::Bool) -
Constructs an upper (isupper=true) or lower (isupper=false) bidiagonal matrix using the given diagonal (dv) and off-diagonal (ev) vectors. The result is of type Bidiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with full(). The length of ev must be one less than the length of dv.

Example:

dv = rand(5)
ev = rand(4)
Bu = Bidiagonal(dv, ev, true)   # ev is on the first superdiagonal
Bl = Bidiagonal(dv, ev, false)  # ev is on the first subdiagonal
-
Bidiagonal(dv, ev, uplo::Char) -
Constructs an upper (uplo='U') or lower (uplo='L') bidiagonal matrix using the given diagonal (dv) and off-diagonal (ev) vectors. The result is of type Bidiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with full(). The length of ev must be one less than the length of dv.

Example:

dv = rand(5)
ev = rand(4)
Bu = Bidiagonal(dv, ev, 'U')   # ev is on the first superdiagonal
Bl = Bidiagonal(dv, ev, 'L')   # ev is on the first subdiagonal
-
Bidiagonal(A, isupper::Bool) -
Construct a Bidiagonal matrix from the main diagonal of A and its first super- (if isupper=true) or sub-diagonal (if isupper=false).

Example:

A = rand(5,5)
Bu = Bidiagonal(A, true)   # contains the main diagonal and first superdiagonal of A
Bl = Bidiagonal(A, false)  # contains the main diagonal and first subdiagonal of A
-
SymTridiagonal(dv, ev) -
Construct a symmetric tridiagonal matrix from the diagonal and first sub/super-diagonal, respectively. The result is of type SymTridiagonal and provides efficient specialized eigensolvers, but may be converted into a regular matrix with full().
-
Tridiagonal(dl, d, du) -
Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type Tridiagonal and provides efficient specialized linear solvers, but may be converted into a regular matrix with full(). The lengths of dl and du must be one less than the length of d.
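A small sketch of the constructor (values chosen for illustration):

```julia
dl = [1.0, 2.0]            # first subdiagonal
d  = [4.0, 5.0, 6.0]       # diagonal
du = [7.0, 8.0]            # first superdiagonal
T = Tridiagonal(dl, d, du)
full(T)                    # dense 3×3 matrix holding the three bands
```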
-
Symmetric(A, uplo=:U) -
Construct a Symmetric matrix from the upper (if uplo = :U) or lower (if uplo = :L) triangle of A.

Example:

A = randn(10,10)
Supper = Symmetric(A)
Slower = Symmetric(A, :L)
eigfact(Supper)

eigfact will use a method specialized for matrices known to be symmetric. Note that Supper will not be equal to Slower unless A is itself symmetric (e.g. if A == A.').
-
Hermitian(A, uplo=:U) -
Construct a Hermitian matrix from the upper (if uplo = :U) or lower (if uplo = :L) triangle of A.

Example:

A = randn(10,10)
Hupper = Hermitian(A)
Hlower = Hermitian(A, :L)
eigfact(Hupper)

eigfact will use a method specialized for matrices known to be Hermitian. Note that Hupper will not be equal to Hlower unless A is itself Hermitian (e.g. if A == A').
-
lu(A) → L, U, p -
Compute the LU factorization of A, such that A[p,:] = L*U.
-
lufact(A[, pivot=Val{true}]) → F::LU -
Compute the LU factorization of A.
In most cases, if A is a subtype S of AbstractMatrix{T} with an element type T supporting +, -, * and /, the return type is LU{T,S{T}}. If pivoting is chosen (default) the element type should also support abs and <.
The individual components of the factorization F can be accessed by indexing:

Component  Description
F[:L]      L (lower triangular) part of LU
F[:U]      U (upper triangular) part of LU
F[:p]      (right) permutation Vector
F[:P]      (right) permutation Matrix

The relationship between F and A is F[:L]*F[:U] == A[F[:p], :].
F further supports the following functions:

Supported function  LU   LU{T,Tridiagonal{T}}
/()                 ✓
\()                 ✓    ✓
cond()              ✓
det()               ✓    ✓
logdet()            ✓    ✓
logabsdet()         ✓    ✓
size()              ✓    ✓
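The component relationship can be checked directly (values chosen for illustration):

```julia
A = [4.0 3.0; 6.0 3.0]
F = lufact(A)
isapprox(F[:L]*F[:U], A[F[:p], :])   # true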
-
lufact(A::SparseMatrixCSC) → F::UmfpackLU -
Compute the LU factorization of a sparse matrix A.
For sparse A with real or complex element type, the return type of F is UmfpackLU{Tv,Ti}, with Tv = Float64 or Complex128 respectively, and Ti an integer type (Int32 or Int64).
The individual components of the factorization F can be accessed by indexing:

Component  Description
F[:L]      L (lower triangular) part of LU
F[:U]      U (upper triangular) part of LU
F[:p]      right permutation Vector
F[:q]      left permutation Vector
F[:Rs]     Vector of scaling factors
F[:(:)]    (L,U,p,q,Rs) components

The relation between F and A is F[:L]*F[:U] == (F[:Rs] .* A)[F[:p], F[:q]].

Implementation note: lufact(A::SparseMatrixCSC) uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or Complex128 elements, lufact converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate.
-
lufact!(A) → LU -
lufact! is the same as lufact(), but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorisation produces a number not representable by the element type of A, e.g. for integer types.
-
chol(A) → U -
Compute the Cholesky factorization of a positive definite matrix A and return the UpperTriangular matrix U such that A = U'U.
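For example, with a small symmetric positive definite matrix:

```julia
A = [4.0 2.0; 2.0 3.0]
U = chol(A)
isapprox(U'U, A)     # true
```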
-
chol(x::Number) → y -
Compute the square root of a non-negative number x.
-
cholfact(A, [uplo::Symbol, ]Val{false}) → Cholesky -
Compute the Cholesky factorization of a dense symmetric positive definite matrix A and return a Cholesky factorization. The matrix A can either be a Symmetric or Hermitian StridedMatrix or a perfectly symmetric or Hermitian StridedMatrix. In the latter case, the optional argument uplo may be :L for using the lower part or :U for the upper part of A. The default is to use :U. The triangular Cholesky factor can be obtained from the factorization F with F[:L] and F[:U]. The following functions are available for Cholesky objects: size, \, inv, det. A PosDefException is thrown in case the matrix is not positive definite.
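A typical use is to reuse the factorization for solving (values chosen for illustration):

```julia
A = [4.0 2.0; 2.0 3.0]     # symmetric positive definite
F = cholfact(A)
x = F \ [2.0, 5.0]         # solve A*x == [2.0, 5.0] via the factorization
isapprox(A*x, [2.0, 5.0])  # true
```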
-
cholfact(A, [uplo::Symbol, ]Val{true}; tol = 0.0) → CholeskyPivoted -
Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix A and return a CholeskyPivoted factorization. The matrix A can either be a Symmetric or Hermitian StridedMatrix or a perfectly symmetric or Hermitian StridedMatrix. In the latter case, the optional argument uplo may be :L for using the lower part or :U for the upper part of A. The default is to use :U. The triangular Cholesky factor can be obtained from the factorization F with F[:L] and F[:U]. The following functions are available for CholeskyPivoted objects: size, \, inv, det, and rank. The argument tol determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.
-
cholfact(A; shift = 0.0, perm = Int[]) → CHOLMOD.Factor -
Compute the Cholesky factorization of a sparse positive definite matrix A. A must be a SparseMatrixCSC, Symmetric{SparseMatrixCSC}, or Hermitian{SparseMatrixCSC}. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. F = cholfact(A) is most frequently used to solve systems of equations with F\b, but the methods diag, det, and logdet are also defined for F. You can also extract individual factors from F, using F[:L]. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like PtL = F[:PtL] (the equivalent of P'*L) and LtP = F[:UP] (the equivalent of L'*P).
Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is nonempty, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).
Note: This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate.
Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.
-
cholfact!(F::Factor, A; shift = 0.0) → CHOLMOD.Factor -
Compute the Cholesky (\(LL'\)) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC, Symmetric{SparseMatrixCSC}, or Hermitian{SparseMatrixCSC}. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.
Note: This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate.
-
cholfact!(A, [uplo::Symbol, ]Val{false}) → Cholesky -
The same as cholfact, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorisation produces a number not representable by the element type of A, e.g. for integer types.
-
cholfact!(A, [uplo::Symbol, ]Val{true}; tol = 0.0) → CholeskyPivoted -
The same as cholfact, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorisation produces a number not representable by the element type of A, e.g. for integer types.
-
lowrankupdate(C::Cholesky, v::StridedVector) → CC::Cholesky -
Update a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] + v*v') but the computation of CC only uses O(n^2) operations.
-
lowrankdowndate(C::Cholesky, v::StridedVector) → CC::Cholesky -
Downdate a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] - v*v') but the computation of CC only uses O(n^2) operations.
-
lowrankupdate!(C::Cholesky, v::StridedVector) → CC::Cholesky -
Update a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] + v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.
-
lowrankdowndate!(C::Cholesky, v::StridedVector) → CC::Cholesky -
Downdate a Cholesky factorization C with the vector v. If A = C[:U]'C[:U] then CC = cholfact(C[:U]'C[:U] - v*v') but the computation of CC only uses O(n^2) operations. The input factorization C is updated in place such that on exit C == CC. The vector v is destroyed during the computation.
-
ldltfact(::SymTridiagonal) → LDLt -
Compute an LDLt factorization of a real symmetric tridiagonal matrix such that A = L*Diagonal(d)*L' where L is a unit lower triangular matrix and d is a vector. The main use of an LDLt factorization F = ldltfact(A) is to solve the linear system of equations Ax = b with F\b.
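A quick sketch of that use (values chosen for illustration):

```julia
S = SymTridiagonal([3.0, 4.0, 5.0], [1.0, 2.0])
F = ldltfact(S)
b = [6.0, 7.0, 8.0]
x = F \ b
isapprox(full(S)*x, b)   # true
```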
-
ldltfact(A; shift = 0.0, perm=Int[]) → CHOLMOD.Factor -
Compute the \(LDL'\) factorization of a sparse matrix A. A must be a SparseMatrixCSC, Symmetric{SparseMatrixCSC}, or Hermitian{SparseMatrixCSC}. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. F = ldltfact(A) is most frequently used to solve systems of equations A*x = b with F\b. The returned factorization object F also supports the methods diag, det, and logdet. You can extract individual factors from F using F[:L]. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*D*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like PtL = F[:PtL] (the equivalent of P'*L) and LtP = F[:UP] (the equivalent of L'*P). The complete list of supported factors is :L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP.
Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is nonempty, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).
Note: This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate.
Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.
-
ldltfact!(F::Factor, A; shift = 0.0) → CHOLMOD.Factor -
Compute the \(LDL'\) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC, Symmetric{SparseMatrixCSC}, or Hermitian{SparseMatrixCSC}. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.
Note: This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to SparseMatrixCSC{Float64} or SparseMatrixCSC{Complex128} as appropriate.
-
ldltfact!(::SymTridiagonal) → LDLt -
Same as ldltfact, but saves space by overwriting the input A, instead of creating a copy.
-
qr(v::AbstractVector) -
Computes the polar decomposition of a vector.
Input:
- v::AbstractVector - vector to normalize
Outputs:
- w - A unit vector in the direction of v
- r - The norm of v
See also: normalize, normalize!, LinAlg.qr!
-
LinAlg.qr!(v::AbstractVector) -
Computes the polar decomposition of a vector. Instead of returning a new vector as qr(v::AbstractVector), this function mutates the input vector v in place.
Input:
- v::AbstractVector - vector to normalize
Outputs:
- w - A unit vector in the direction of v (this is a mutation of v)
- r - The norm of v
See also: normalize, normalize!, qr
-
qr(A[, pivot=Val{false}][;thin=true]) → Q, R, [p] -
Compute the (pivoted) QR factorization of A such that either A = Q*R or A[:,p] = Q*R. Also see qrfact. The default is to compute a thin factorization. Note that R is not extended with zeros when the full Q is requested.
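For example, with the default thin factorization:

```julia
A = rand(4, 2)
Q, R = qr(A)               # thin: Q is 4×2, R is 2×2
isapprox(Q*R, A)           # true
```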
-
qrfact(A[, pivot=Val{false}]) → F -
Computes the QR factorization of A. The return type of F depends on the element type of A and whether pivoting is specified (with pivot==Val{true}).

Return type  eltype(A)      pivot       Relationship between F and A
QR           not BlasFloat  either      A == F[:Q]*F[:R]
QRCompactWY  BlasFloat      Val{false}  A == F[:Q]*F[:R]
QRPivoted    BlasFloat      Val{true}   A[:,F[:p]] == F[:Q]*F[:R]

BlasFloat refers to any of: Float32, Float64, Complex64 or Complex128.
The individual components of the factorization F can be accessed by indexing:

Component  Description                            QR             QRCompactWY       QRPivoted
F[:Q]      Q (orthogonal/unitary) part of QR      ✓ (QRPackedQ)  ✓ (QRCompactWYQ)  ✓ (QRPackedQ)
F[:R]      R (upper right triangular) part of QR  ✓              ✓                 ✓
F[:p]      pivot Vector                                                            ✓
F[:P]      (pivot) permutation Matrix                                              ✓

The following functions are available for the QR objects: size, \. When A is rectangular, \ will return a least squares solution, and if the solution is not unique, the one with smallest norm is returned.
Multiplication with respect to either thin or full Q is allowed, i.e. both F[:Q]*F[:R] and F[:Q]*A are supported. A Q matrix can be converted into a regular matrix with full(), which has a named argument thin.
Note: qrfact returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the Q and R matrices can be stored compactly rather than as two separate dense matrices.
The data contained in QR or QRPivoted can be used to construct the QRPackedQ type, which is a compact representation of the rotation matrix:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T)\]

where \(\tau_i\) is the scale factor and \(v_i\) is the projection vector associated with the \(i^{th}\) Householder elementary reflector.
The data contained in QRCompactWY can be used to construct the QRCompactWYQ type, which is a compact representation of the rotation matrix

\[Q = I + Y T Y^T\]

where Y is \(m \times r\) lower trapezoidal and T is \(r \times r\) upper triangular. The compact WY representation [Schreiber1989] is not to be confused with the older WY representation [Bischof1987]. (The LAPACK documentation uses V in lieu of Y.)

[Bischof1987] C Bischof and C Van Loan, "The WY representation for products of Householder matrices", SIAM J Sci Stat Comput 8 (1987), s2-s13. doi:10.1137/0908009
[Schreiber1989] R Schreiber and C Van Loan, "A storage-efficient WY representation for products of Householder transformations", SIAM J Sci Stat Comput 10 (1989), 53-57. doi:10.1137/0910005
-
qrfact(A) → SPQR.Factorization -
Compute the QR factorization of a sparse matrix A. A fill-reducing permutation is used. The main application of this type is to solve least squares problems with \. The function calls the C library SPQR, and a few additional functions from the library are wrapped but not exported.
-
qrfact!(A[, pivot=Val{false}]) -
qrfact! is the same as qrfact() when A is a subtype of StridedMatrix, but saves space by overwriting the input A, instead of creating a copy. An InexactError exception is thrown if the factorisation produces a number not representable by the element type of A, e.g. for integer types.
-
full(QRCompactWYQ[, thin=true]) → Matrix -
Converts an orthogonal or unitary matrix stored as a QRCompactWYQ object, i.e. in the compact WY format [Bischof1987], to a dense matrix.
Optionally takes a thin Boolean argument, which if true omits the columns that span the rows of R in the QR factorization that are zero. The resulting matrix is the Q in a thin QR factorization (sometimes called the reduced QR factorization). If false, returns a Q that spans all rows of R in its corresponding QR factorization.
-
lqfact!(A) → LQ -
Compute the LQ factorization of A, using the input matrix as a workspace. See also lq().
-
lqfact(A) → LQ -
Compute the LQ factorization of A. See also lq().
-
lq(A; [thin=true]) → L, Q -
Perform an LQ factorization of A such that A = L*Q. The default is to compute a thin factorization. The LQ factorization is the QR factorization of A.'. L is not extended with zeros if the full Q is requested.
-
bkfact(A) → BunchKaufman -
Compute the Bunch-Kaufman [Bunch1977] factorization of a real symmetric or complex Hermitian matrix A and return a BunchKaufman object. The following functions are available for BunchKaufman objects: size, \, inv, issymmetric, ishermitian.

[Bunch1977] J R Bunch and L Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Mathematics of Computation 31:137 (1977), 163-179.
-
bkfact!(A) → BunchKaufman -
bkfact! is the same as bkfact(), but saves space by overwriting the input A, instead of creating a copy.
-
eig(A,[irange,][vl,][vu,][permute=true,][scale=true]) → D, V -
Computes eigenvalues (D) and eigenvectors (V) of A. See eigfact() for details on the irange, vl, and vu arguments and the permute and scale keyword arguments. The eigenvectors are returned columnwise.

julia> eig([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
([1.0,3.0,18.0], [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0])

eig is a wrapper around eigfact(), extracting all parts of the factorization to a tuple; where possible, using eigfact() is recommended.
-
eig(A, B) → D, V -
Computes generalized eigenvalues (D) and vectors (V) of A with respect to B.
eig is a wrapper around eigfact(), extracting all parts of the factorization to a tuple; where possible, using eigfact() is recommended.

julia> A = [1 0; 0 -1]
2×2 Array{Int64,2}:
 1   0
 0  -1

julia> B = [0 1; 1 0]
2×2 Array{Int64,2}:
 0  1
 1  0

julia> eig(A, B)
(Complex{Float64}[0.0+1.0im,0.0-1.0im], Complex{Float64}[0.0-1.0im 0.0+1.0im; -1.0-0.0im -1.0+0.0im])
-
eigvals(A,[irange,][vl,][vu]) → values -
Returns the eigenvalues of A. If A is Symmetric, Hermitian or SymTridiagonal, it is possible to calculate only a subset of the eigenvalues by specifying either a UnitRange irange covering indices of the sorted eigenvalues, or a pair vl and vu for the lower and upper boundaries of the eigenvalues.
For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is true for both options.
-
eigvals!(A,[irange,][vl,][vu]) → values -
Same as eigvals(), but saves space by overwriting the input A, instead of creating a copy.
-
eigmax(A; permute::Bool=true, scale::Bool=true) -
Returns the largest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.

julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmax(A)
1.0

julia> A = [0 im; -1 0]
2×2 Array{Complex{Int64},2}:
  0+0im  0+1im
 -1+0im  0+0im

julia> eigmax(A)
ERROR: DomainError:
 in #eigmax#30(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:186
 in eigmax(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:184
 ...
-
eigmin(A; permute::Bool=true, scale::Bool=true) -
Returns the smallest eigenvalue of A. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of A are complex, this method will fail, since complex numbers cannot be sorted.

julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
 0+0im  0+1im
 0-1im  0+0im

julia> eigmin(A)
-1.0

julia> A = [0 im; -1 0]
2×2 Array{Complex{Int64},2}:
  0+0im  0+1im
 -1+0im  0+0im

julia> eigmin(A)
ERROR: DomainError:
 in #eigmin#31(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:226
 in eigmin(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:224
 ...
-
eigvecs(A, [eigvals,][permute=true,][scale=true]) → Matrix -
Returns a matrix M whose columns are the eigenvectors of A. (The kth eigenvector can be obtained from the slice M[:, k].) The permute and scale keywords are the same as for eigfact().
For SymTridiagonal matrices, if the optional vector of eigenvalues eigvals is specified, returns the specific corresponding eigenvectors.
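For example, for a diagonal (hence symmetric) matrix, whose eigenvalues are returned sorted:

```julia
A = [1.0 0.0; 0.0 2.0]
V = eigvecs(A)
V[:, 2]     # eigenvector associated with the larger eigenvalue 2.0
```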
-
eigfact(A,[irange,][vl,][vu,][permute=true,][scale=true]) → Eigen -
Computes the eigenvalue decomposition of A, returning an Eigen factorization object F which contains the eigenvalues in F[:values] and the eigenvectors in the columns of the matrix F[:vectors]. (The kth eigenvector can be obtained from the slice F[:vectors][:, k].)
The following functions are available for Eigen objects: inv(), det(), and isposdef().
If A is Symmetric, Hermitian or SymTridiagonal, it is possible to calculate only a subset of the eigenvalues by specifying either a UnitRange irange covering indices of the sorted eigenvalues or a pair vl and vu for the lower and upper boundaries of the eigenvalues.
For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option permute=true permutes the matrix to become closer to upper triangular, and scale=true scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is true for both options.
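A small sketch of accessing the factorization components (values chosen for illustration):

```julia
A = [2.0 0.0; 0.0 3.0]
F = eigfact(A)
F[:values]            # [2.0, 3.0]
F[:vectors][:, 1]     # eigenvector associated with the eigenvalue 2.0
```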
-
eigfact(A, B) → GeneralizedEigen -
Computes the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F[:values] and the generalized eigenvectors in the columns of the matrix F[:vectors]. (The kth generalized eigenvector can be obtained from the slice F[:vectors][:, k].)
-
eigfact!(A[, B]) -
Same as eigfact(), but saves space by overwriting the input A (and B), instead of creating a copy.
-
hessfact(A) -
Compute the Hessenberg decomposition of A and return a Hessenberg object. If F is the factorization object, the unitary matrix can be accessed with F[:Q] and the Hessenberg matrix with F[:H]. When Q is extracted, the resulting type is the HessenbergQ object, and may be converted to a regular matrix with full().
-
hessfact!(A) -
hessfact! is the same as hessfact(), but saves space by overwriting the input A, instead of creating a copy.
-
schurfact(A::StridedMatrix) → F::Schur -
Computes the Schur factorization of the matrix A. The (quasi) triangular Schur factor can be obtained from the Schur object F with either F[:Schur] or F[:T], and the orthogonal/unitary Schur vectors can be obtained with F[:vectors] or F[:Z], such that A = F[:vectors]*F[:Schur]*F[:vectors]'. The eigenvalues of A can be obtained with F[:values].
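The defining relation can be checked directly:

```julia
A = rand(4, 4)
F = schurfact(A)
isapprox(F[:vectors]*F[:Schur]*F[:vectors]', A)   # true
```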
-
schurfact!(A::StridedMatrix) → F::Schur -
Same as schurfact but uses the input argument as workspace.
-
schur(A::StridedMatrix) → T::Matrix, Z::Matrix, λ::Vector -
Computes the Schur factorization of the matrix A. The methods return the (quasi) triangular Schur factor T and the orthogonal/unitary Schur vectors Z such that A = Z*T*Z'. The eigenvalues of A are returned in the vector λ.
See schurfact.
-
ordschur(F::Schur, select::Union{Vector{Bool}, BitVector}) → F::Schur -
Reorders the Schur factorization F of a matrix A = Z*T*Z' according to the logical array select, returning the reordered factorization F object. The selected eigenvalues appear in the leading diagonal of F[:Schur] and the corresponding leading columns of F[:vectors] form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via select.
-
ordschur!(F::Schur, select::Union{Vector{Bool}, BitVector}) → F::Schur -
Same as ordschur but overwrites the factorization F.
-
ordschur(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool}, BitVector}) → T::StridedMatrix, Z::StridedMatrix, λ::Vector -
Reorders the Schur factorization of a real matrix A = Z*T*Z' according to the logical array select, returning the reordered matrices T and Z as well as the vector of eigenvalues λ. The selected eigenvalues appear in the leading diagonal of T and the corresponding leading columns of Z form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via select.
-
ordschur!(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool}, BitVector}) → T::StridedMatrix, Z::StridedMatrix, λ::Vector -
Same as ordschur but overwrites the input arguments.
-
schurfact(A::StridedMatrix, B::StridedMatrix) → F::GeneralizedSchur -
Computes the Generalized Schur (or QZ) factorization of the matrices A and B. The (quasi) triangular Schur factors can be obtained from the Schur object F with F[:S] and F[:T], the left unitary/orthogonal Schur vectors can be obtained with F[:left] or F[:Q], and the right unitary/orthogonal Schur vectors can be obtained with F[:right] or F[:Z], such that A = F[:left]*F[:S]*F[:right]' and B = F[:left]*F[:T]*F[:right]'. The generalized eigenvalues of A and B can be obtained with F[:alpha]./F[:beta].
-
schurfact!(A::StridedMatrix, B::StridedMatrix) → F::GeneralizedSchur -
Same as schurfact but uses the input matrices A and B as workspace.
-
ordschur(F::GeneralizedSchur, select::Union{Vector{Bool}, BitVector}) → F::GeneralizedSchur -
Reorders the Generalized Schur factorization F of a matrix pair (A, B) = (Q*S*Z', Q*T*Z') according to the logical array select and returns a GeneralizedSchur object F. The selected eigenvalues appear in the leading diagonal of both F[:S] and F[:T], and the left and right orthogonal/unitary Schur vectors are also reordered such that (A, B) = F[:Q]*(F[:S], F[:T])*F[:Z]' still holds and the generalized eigenvalues of A and B can still be obtained with F[:alpha]./F[:beta].
-
ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool}, BitVector}) → F::GeneralizedSchur -
Same as ordschur but overwrites the factorization F.
-
ordschur(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector -
Reorders the Generalized Schur factorization of a matrix pair (A, B) = (Q*S*Z', Q*T*Z') according to the logical array select and returns the matrices S, T, Q, Z and vectors α and β. The selected eigenvalues appear in the leading diagonal of both S and T, and the left and right unitary/orthogonal Schur vectors are also reordered such that (A, B) = Q*(S, T)*Z' still holds and the generalized eigenvalues of A and B can still be obtained with α./β.
-
ordschur!(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector -
Same as ordschur but overwrites the input arguments.
-
schur(A::StridedMatrix, B::StridedMatrix) → S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector -
See schurfact.
-
svdfact(A[, thin=true]) → SVD -
Compute the singular value decomposition (SVD) of A and return an SVD object. U, S, V and Vt can be obtained from the factorization F with F[:U], F[:S], F[:V] and F[:Vt], such that A = U*diagm(S)*Vt. The algorithm produces Vt, and hence Vt is more efficient to extract than V.
If thin=true (default), a thin SVD is returned. For a \(M \times N\) matrix A, U is \(M \times M\) for a full SVD (thin=false) and \(M \times \min(M, N)\) for a thin SVD.
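The components multiply back to A:

```julia
A = rand(5, 3)
F = svdfact(A)
isapprox(F[:U]*diagm(F[:S])*F[:Vt], A)   # true
```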
-
svdfact!(A[, thin=true]) → SVD -
svdfact! is the same as svdfact(), but saves space by overwriting the input A, instead of creating a copy.
If thin=true (default), a thin SVD is returned. For a \(M \times N\) matrix A, U is \(M \times M\) for a full SVD (thin=false) and \(M \times \min(M, N)\) for a thin SVD.
-
svd(A[, thin=true]) → U, S, V -
Computes the SVD of A, returning U, vector S, and V such that A == U*diagm(S)*V'.
If thin=true (default), a thin SVD is returned. For a \(M \times N\) matrix A, U is \(M \times M\) for a full SVD (thin=false) and \(M \times \min(M, N)\) for a thin SVD. svd is a wrapper around svdfact(A), extracting all parts of the SVD factorization to a tuple. Direct use of svdfact is therefore more efficient.
-
svdvals(A) -
Returns the singular values of
A.
-
svdvals!(A) -
Returns the singular values of
A, saving space by overwriting the input.
-
svdfact(A, B) → GeneralizedSVD -
Compute the generalized SVD of
AandB, returning aGeneralizedSVDfactorization objectF, such thatA = F[:U]*F[:D1]*F[:R0]*F[:Q]'andB = F[:V]*F[:D2]*F[:R0]*F[:Q]'.For an M-by-N matrix
Aand P-by-N matrixB,-
F[:U]is a M-by-M orthogonal matrix, -
F[:V]is a P-by-P orthogonal matrix, -
F[:Q]is a N-by-N orthogonal matrix, -
F[:R0]is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular, -
F[:D1]is a M-by-(K+L) diagonal matrix with 1s in the first K entries, -
F[:D2]is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,
K+Lis the effective numerical rank of the matrix[A; B].The entries of
F[:D1]andF[:D2]are related, as explained in the LAPACK documentation for the generalized SVD and the xGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer). -
-
svd(A, B) → U, V, Q, D1, D2, R0 -
Wrapper around
svdfact(A, B), extracting all parts of the factorization to a tuple. Direct use ofsvdfactis therefore generally more efficient. The function returns the generalized SVD ofAandB, returningU,V,Q,D1,D2, andR0such thatA = U*D1*R0*Q'andB = V*D2*R0*Q'.
-
svdvals(A, B) -
Return the generalized singular values from the generalized singular value decomposition of
AandB.
-
LinAlg.Givens(i1, i2, c, s) → G -
A Givens rotation linear operator. The fields
candsrepresent the cosine and sine of the rotation angle, respectively. TheGivenstype supports left multiplicationG*Aand conjugate transpose right multiplicationA*G'. The type doesn’t have asizeand can therefore be multiplied with matrices of arbitrary size as long asi2<=size(A,2)forG*Aori2<=size(A,1)forA*G'.See also:
givens()
-
givens{T}(f::T, g::T, i1::Integer, i2::Integer) → (G::Givens, r::T) -
Computes the Givens rotation
Gand scalarrsuch that for any vectorxwherex[i1] = f x[i2] = g
the result of the multiplication
y = G*x
has the property that
y[i1] = r y[i2] = 0
See also:
LinAlg.Givens
-
givens(x::AbstractVector, i1::Integer, i2::Integer) → (G::Givens, r) -
Computes the Givens rotation
Gand scalarrsuch that the result of the multiplicationB = G*x
has the property that
B[i1] = r B[i2] = 0
See also:
LinAlg.Givens
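A small sketch of the vector method above, which zeroes one entry of a vector (the values here are illustrative only):

```julia
x = [3.0, 0.0, 4.0]
G, r = givens(x, 1, 3)   # rotation acting on entries 1 and 3
y = G*x                  # y[1] == r == hypot(3.0, 4.0), y[3] is zeroed
```

Chaining such rotations against successive subdiagonal entries is the classical way to build a QR factorization one zero at a time.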
-
givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) → (G::Givens, r) -
Computes the Givens rotation
Gand scalarrsuch that the result of the multiplicationB = G*A
has the property that
B[i1,j] = r B[i2,j] = 0
See also:
LinAlg.Givens
-
triu(M) -
Upper triangle of a matrix.
-
triu(M, k) -
Returns the upper triangle of
Mstarting from thekth superdiagonal.
-
triu!(M) -
Upper triangle of a matrix, overwriting
Min the process.
-
triu!(M, k) -
Returns the upper triangle of
Mstarting from thekth superdiagonal, overwritingMin the process.
-
tril(M) -
Lower triangle of a matrix.
-
tril(M, k) -
Returns the lower triangle of
Mstarting from thekth superdiagonal.
-
tril!(M) -
Lower triangle of a matrix, overwriting
Min the process.
-
tril!(M, k) -
Returns the lower triangle of
Mstarting from thekth superdiagonal, overwritingMin the process.
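A quick sketch of the non-mutating variants (illustrative values only):

```julia
M = [1 2 3; 4 5 6; 7 8 9]
U = triu(M)        # zeros below the main diagonal
L = tril(M, -1)    # strictly lower part: zeros on and above the main diagonal
```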
-
diagind(M[, k]) -
A
Rangegiving the indices of thekth diagonal of the matrixM.
-
diag(M[, k]) -
The
kth diagonal of a matrix, as a vector. Usediagmto construct a diagonal matrix.
-
diagm(v[, k]) -
Construct a diagonal matrix and place
von thekth diagonal.
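A minimal sketch showing that diagm and diag are inverses on the main diagonal, and the effect of the k argument (values illustrative only):

```julia
D = diagm([1, 2, 3])    # 3×3 matrix with 1, 2, 3 on the main diagonal
d = diag(D)             # recovers [1, 2, 3]
E = diagm([1, 2], 1)    # 3×3 matrix with [1, 2] on the first superdiagonal
```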
-
scale!(A, b) -
scale!(b, A) -
Scale an array
Aby a scalarboverwritingAin-place.If
Ais a matrix andbis a vector, thenscale!(A,b)scales each columniofAbyb[i](similar toA*Diagonal(b)), whilescale!(b,A)scales each rowiofAbyb[i](similar toDiagonal(b)*A), again operating in-place onA. AnInexactErrorexception is thrown if the scaling produces a number not representable by the element type ofA, e.g. for integer types.
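A sketch of the two argument orders described above (illustrative values only):

```julia
A = [1.0 2.0; 3.0 4.0]
scale!(A, [10.0, 100.0])   # column i scaled by b[i]: A is now [10.0 200.0; 30.0 400.0]

B = [1.0 2.0; 3.0 4.0]
scale!([10.0, 100.0], B)   # row i scaled by b[i]: B is now [10.0 20.0; 300.0 400.0]
```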
-
Tridiagonal(dl, d, du) -
Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type
Tridiagonaland provides efficient specialized linear solvers, but may be converted into a regular matrix withfull(). The lengths ofdlanddumust be one less than the length ofd.
-
rank(M) -
Compute the rank of a matrix.
-
norm(A[, p]) -
Compute the
p-norm of a vector or the operator norm of a matrixA, defaulting to thep=2-norm.For vectors,
pcan assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular,norm(A, Inf)returns the largest value inabs(A), whereasnorm(A, -Inf)returns the smallest.For matrices, the matrix norm induced by the vector
p-norm is used, where valid values ofpare1,2, orInf. (Note that for sparse matrices,p=2is currently not implemented.) Usevecnorm()to compute the Frobenius norm.
-
vecnorm(A[, p]) -
For any iterable container
A(including arrays of any dimension) of numbers (or any element type for whichnormis defined), compute thep-norm (defaulting top=2) as ifAwere a vector of the corresponding length.For example, if
Ais a matrix andp=2, then this is equivalent to the Frobenius norm.
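A sketch contrasting the vector norms, the operator norm, and the Frobenius norm (values illustrative only):

```julia
v = [3.0, -4.0]
norm(v)        # 5.0, the 2-norm
norm(v, 1)     # 7.0
norm(v, Inf)   # 4.0

A = [1.0 0.0; 0.0 2.0]
norm(A)        # 2.0, the operator 2-norm (largest singular value)
vecnorm(A)     # sqrt(5.0), the Frobenius norm
```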
-
normalize!(v[, p=2]) -
Normalize the vector
vin-place with respect to thep-norm.Inputs:
-
v::AbstractVector- vector to be normalized -
p::Real- Thep-norm to normalize with respect to. Default: 2
Output:
-
v- A unit vector being the input vector, rescaled to have norm 1. The input vector is modified in-place.
See also:
normalize,qr -
-
normalize(v[, p=2]) -
Normalize the vector
vwith respect to thep-norm.Inputs:
-
v::AbstractVector- vector to be normalized -
p::Real- Thep-norm to normalize with respect to. Default: 2
Output:
-
v- A unit vector being a copy of the input vector, scaled to have norm 1
See also:
normalize!,qr -
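A minimal sketch of the non-mutating variant (values illustrative only):

```julia
v = [3.0, 4.0]
u = normalize(v)   # [0.6, 0.8]; v itself is unchanged
nu = norm(u)       # 1.0
```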
-
cond(M[, p]) -
Condition number of the matrix
M, computed using the operatorp-norm. Valid values forpare1,2(default), orInf.
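For p=2 the condition number is the ratio of the largest to the smallest singular value, as this sketch shows (values illustrative only):

```julia
M = [1.0 0.0; 0.0 1.0e-8]
c = cond(M)   # 1.0e8: singular values are 1.0 and 1.0e-8
```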
-
condskeel(M[, x, p]) -
\[\begin{split}\kappa_S(M, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\ \kappa_S(M, x, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p\end{split}\]
Skeel condition number \(\kappa_S\) of the matrix
M, optionally with respect to the vectorx, as computed using the operatorp-norm.pdefaults toInfif not provided. Valid values forpare1,2, orInf.This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
-
trace(M) -
Matrix trace.
-
det(M) -
Matrix determinant.
-
logdet(M) -
Log of matrix determinant. Equivalent to
log(det(M)), but may provide increased accuracy and/or speed.
-
logabsdet(M) -
Log of absolute value of determinant of real matrix. Equivalent to
(log(abs(det(M))), sign(det(M))), but may provide increased accuracy and/or speed.
-
inv(M) -
Matrix inverse.
-
pinv(M[, tol]) -
Computes the Moore-Penrose pseudoinverse.
For matrices
Mwith floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values above a given threshold,tol.The optimal choice of
tolvaries both with the value ofMand the intended application of the pseudoinverse. The default value oftoliseps(real(float(one(eltype(M)))))*maximum(size(M)), which is essentially machine epsilon for the real part of a matrix element multiplied by the larger matrix dimension. For inverting dense ill-conditioned matrices in a least-squares sense,tol = sqrt(eps(real(float(one(eltype(M))))))is recommended.For more information, see [issue8859], [B96], [S84], [KY88].
[issue8859] Issue 8859, “Fix least squares”, https://github.com/JuliaLang/julia/pull/8859
[B96] Åke Björck, “Numerical Methods for Least Squares Problems”, SIAM Press, Philadelphia, 1996, “Other Titles in Applied Mathematics”, Vol. 51. doi:10.1137/1.9781611971484
[S84] G. W. Stewart, “Rank Degeneracy”, SIAM Journal on Scientific and Statistical Computing, 5(2), 1984, 403-413. doi:10.1137/0905030
[KY88] Konstantinos Konstantinides and Kung Yao, “Statistical analysis of effective singular values in matrix rank determination”, IEEE Transactions on Acoustics, Speech and Signal Processing, 36(5), 1988, 757-763. doi:10.1109/29.1585 -
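A sketch of pinv applied to an overdetermined system; the result coincides with the least-squares solution from \ (values illustrative only):

```julia
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [1.0, 2.0, 3.0]
x = pinv(A)*b              # minimum-norm least-squares solution
res = norm(A'*(A*x - b))   # ≈ 0: x satisfies the normal equations
```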
-
nullspace(M) -
Basis for nullspace of
M.
-
repmat(A, n, m) -
Construct a matrix by repeating the given matrix
ntimes in dimension 1 andmtimes in dimension 2.
-
repeat(A::AbstractArray; inner=ntuple(x->1, ndims(A)), outer=ntuple(x->1, ndims(A))) -
Construct an array by repeating the entries of
A. The i-th element ofinnerspecifies the number of times that the individual entries of the i-th dimension ofAshould be repeated. The i-th element ofouterspecifies the number of times that a slice along the i-th dimension ofAshould be repeated. Ifinnerorouterare omitted, no repetition is performed.julia> repeat(1:2, inner=2) 4-element Array{Int64,1}: 1 1 2 2 julia> repeat(1:2, outer=2) 4-element Array{Int64,1}: 1 2 1 2 julia> repeat([1 2; 3 4], inner=(2, 1), outer=(1, 3)) 4×6 Array{Int64,2}: 1 2 1 2 1 2 1 2 1 2 1 2 3 4 3 4 3 4 3 4 3 4 3 4
-
kron(A, B) -
Kronecker tensor product of two vectors or two matrices.
-
blkdiag(A...) -
Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.
-
linreg(x, y) -
Perform simple linear regression using Ordinary Least Squares. Returns
aandbsuch thata + b*xis the closest straight line to the given points(x, y), i.e., such that the squared error betweenyanda + b*xis minimized.Examples:
using PyPlot x = 1.0:12.0 y = [5.5, 6.3, 7.6, 8.8, 10.9, 11.79, 13.48, 15.02, 17.77, 20.81, 22.0, 22.99] a, b = linreg(x, y) # Linear regression plot(x, y, "o") # Plot (x, y) points plot(x, a + b*x) # Plot line determined by linear regression
See also:
\,cov,std,mean
-
expm(A) -
Compute the matrix exponential of
A, defined by\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]For symmetric or Hermitian
A, an eigendecomposition (eigfact()) is used, otherwise the scaling and squaring algorithm (see [H05]) is chosen.[H05] Nicholas J. Higham, “The squaring and scaling method for the matrix exponential revisited”, SIAM Journal on Matrix Analysis and Applications, 26(4), 2005, 1179-1193. doi:10.1137/090768539
-
logm(A::StridedMatrix) -
If
Ahas no negative real eigenvalue, compute the principal matrix logarithm ofA, i.e. the unique matrix \(X\) such that \(e^X = A\) and \(-\pi < Im(\lambda) < \pi\) for all the eigenvalues \(\lambda\) of \(X\). IfAhas nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.If
Ais symmetric or Hermitian, its eigendecomposition (eigfact()) is used, ifAis triangular an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). For general matrices, the complex Schur form (schur()) is computed and the triangular algorithm is used on the triangular factor.[AH12] Awad H. Al-Mohy and Nicholas J. Higham, “Improved inverse scaling and squaring algorithms for the matrix logarithm”, SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. doi:10.1137/110852553 [AHR13] Awad H. Al-Mohy, Nicholas J. Higham and Samuel D. Relton, “Computing the Fréchet derivative of the matrix logarithm and estimating the condition number”, SIAM Journal on Scientific Computing, 35(4), 2013, C394-C410. doi:10.1137/120885991
-
sqrtm(A) -
If
Ahas no negative real eigenvalues, compute the principal matrix square root ofA, that is the unique matrix \(X\) with eigenvalues having positive real part such that \(X^2 = A\). Otherwise, a nonprincipal square root is returned.If
Ais symmetric or Hermitian, its eigendecomposition (eigfact()) is used to compute the square root. Otherwise, the square root is determined by means of the Björck-Hammarling method, which computes the complex Schur form (schur()) and then the complex square root of the triangular factor.[BH83] Åke Björck and Sven Hammarling, “A Schur method for the square root of a matrix”, Linear Algebra and its Applications, 52-53, 1983, 127-140. doi:10.1016/0024-3795(83)80010-X
-
lyap(A, C) -
Computes the solution
Xto the continuous Lyapunov equationAX + XA' + C = 0, where no eigenvalue ofAhas a zero real part and no two eigenvalues are negative complex conjugates of each other.
-
sylvester(A, B, C) -
Computes the solution
Xto the Sylvester equationAX + XB + C = 0, whereA,BandChave compatible dimensions andAand-Bhave no eigenvalues with equal real part.
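A minimal sketch verifying the defining equation (the diagonal matrices here are illustrative only; A and -B have disjoint real parts of eigenvalues as required):

```julia
A = [3.0 0.0; 0.0 4.0]
B = [1.0 0.0; 0.0 2.0]
C = ones(2, 2)
X = sylvester(A, B, C)
resid = norm(A*X + X*B + C)   # ≈ 0
```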
-
issymmetric(A) → Bool -
Test whether a matrix is symmetric.
-
isposdef(A) → Bool -
Test whether a matrix is positive definite.
-
isposdef!(A) → Bool -
Test whether a matrix is positive definite, overwriting
Ain the process.
-
istril(A) → Bool -
Test whether a matrix is lower triangular.
-
istriu(A) → Bool -
Test whether a matrix is upper triangular.
-
isdiag(A) → Bool -
Test whether a matrix is diagonal.
-
ishermitian(A) → Bool -
Test whether a matrix is Hermitian.
-
transpose(A) -
The transposition operator (
.').
-
transpose!(dest, src) -
Transpose array
srcand store the result in the preallocated arraydest, which should have a size corresponding to(size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen ifsrcanddesthave overlapping memory regions.
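A minimal sketch of the preallocated form (illustrative values only; note dest has the reversed dimensions of src):

```julia
src = [1 2 3; 4 5 6]
dest = zeros(Int, 3, 2)
transpose!(dest, src)   # dest is now [1 4; 2 5; 3 6]
```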
-
ctranspose(A) -
The conjugate transposition operator (
').
-
ctranspose!(dest, src) -
Conjugate transpose array
srcand store the result in the preallocated arraydest, which should have a size corresponding to(size(src,2),size(src,1)). No in-place transposition is supported and unexpected results will happen ifsrcanddesthave overlapping memory regions.
-
eigs(A; nev=6, ncv=max(20, 2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0, ))) → (d,[v,],nconv,niter,nmult,resid) -
Computes eigenvalues
dofAusing implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.The following keyword arguments are supported:
-
nev: Number of eigenvalues -
ncv: Number of Krylov vectors used in the computation; should satisfynev+1 <= ncv <= nfor real symmetric problems andnev+2 <= ncv <= nfor other problems, wherenis the size of the input matrixA. The default isncv = max(20,2*nev+1). Note that these restrictions limit the input matrixAto be of dimension at least 2. -
which: type of eigenvalues to compute. See the note below.
whichtype of eigenvalues :LMeigenvalues of largest magnitude (default) :SMeigenvalues of smallest magnitude :LReigenvalues of largest real part :SReigenvalues of smallest real part :LIeigenvalues of largest imaginary part (nonsymmetric or complex Aonly):SIeigenvalues of smallest imaginary part (nonsymmetric or complex Aonly):BEcompute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric Aonly)-
tol: parameter defining the relative tolerance for convergence of Ritz values (eigenvalue estimates). A Ritz value \(θ\) is considered converged when its associated residual is less than or equal to the product oftoland \(max(ɛ^{2/3}, |θ|)\), whereɛ = eps(real(eltype(A)))/2is LAPACK’s machine epsilon. The residual associated with \(θ\) and its corresponding Ritz vector \(v\) is defined as the norm \(||Av - vθ||\). The specified value oftolshould be positive; otherwise, it is ignored and \(ɛ\) is used instead. Default: \(ɛ\). -
maxiter: Maximum number of iterations (default = 300) -
sigma: Specifies the level shift used in inverse iteration. Ifnothing(default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close tosigmausing shift and invert iterations. -
ritzvec: Returns the Ritz vectorsv(eigenvectors) iftrue -
v0: starting vector from which to start the iterations
eigsreturns thenevrequested eigenvalues ind, the corresponding Ritz vectorsv(only ifritzvec=true), the number of converged eigenvaluesnconv, the number of iterationsniterand the number of matrix vector multiplicationsnmult, as well as the final residual vectorresid.Note
The
sigmaandwhichkeywords interact: the description of eigenvalues searched for bywhichdo not necessarily refer to the eigenvalues ofA, but rather the linear operator constructed by the specification of the iteration mode implied bysigma.sigmaiteration mode whichrefers to eigenvalues ofnothingordinary (forward) \(A\) real or complex inverse with level shift sigma\((A - \sigma I )^{-1}\) Note
Although
tolhas a default value, the best choice depends strongly on the matrixA. We recommend that users _always_ specify a value fortolwhich suits their specific needs.For details of how the errors in the computed eigenvalues are estimated, see:
- B. N. Parlett, “The Symmetric Eigenvalue Problem”, SIAM: Philadelphia, 2/e (1998), Ch. 13.2, “Accessing Accuracy in Lanczos Problems”, pp. 290-292 ff.
- R. B. Lehoucq and D. C. Sorensen, “Deflation Techniques for an Implicitly Restarted Arnoldi Iteration”, SIAM Journal on Matrix Analysis and Applications (1996), 17(4), 789–821. doi:10.1137/S0895479895281484
-
eigs(A, B; nev=6, ncv=max(20, 2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0, ))) → (d,[v,],nconv,niter,nmult,resid) -
Computes generalized eigenvalues
dofAandBusing implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.The following keyword arguments are supported:
-
nev: Number of eigenvalues -
ncv: Number of Krylov vectors used in the computation; should satisfynev+1 <= ncv <= nfor real symmetric problems andnev+2 <= ncv <= nfor other problems, wherenis the size of the input matricesAandB. The default isncv = max(20,2*nev+1). Note that these restrictions limit the input matrixAto be of dimension at least 2. -
which: type of eigenvalues to compute. See the note below.
whichtype of eigenvalues :LMeigenvalues of largest magnitude (default) :SMeigenvalues of smallest magnitude :LReigenvalues of largest real part :SReigenvalues of smallest real part :LIeigenvalues of largest imaginary part (nonsymmetric or complex Aonly):SIeigenvalues of smallest imaginary part (nonsymmetric or complex Aonly):BEcompute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric Aonly)-
tol: relative tolerance used in the convergence criterion for eigenvalues, similar totolin theeigs()method for the ordinary eigenvalue problem, but effectively for the eigenvalues of \(B^{-1} A\) instead of \(A\). See the documentation for the ordinary eigenvalue problem ineigs()and the accompanying note abouttol. -
maxiter: Maximum number of iterations (default = 300) -
sigma: Specifies the level shift used in inverse iteration. Ifnothing(default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close tosigmausing shift and invert iterations. -
ritzvec: Returns the Ritz vectorsv(eigenvectors) iftrue -
v0: starting vector from which to start the iterations
eigsreturns thenevrequested eigenvalues ind, the corresponding Ritz vectorsv(only ifritzvec=true), the number of converged eigenvaluesnconv, the number of iterationsniterand the number of matrix vector multiplicationsnmult, as well as the final residual vectorresid.Example
A = sprand(10, 10, 0.2); A = A + A'  # symmetric sparse matrix B = speye(10) eigs(A, B, nev = 2, tol = 1e-3)
Note
The
sigmaandwhichkeywords interact: the description of eigenvalues searched for bywhichdo not necessarily refer to the eigenvalue problem \(Av = Bv\lambda\), but rather the linear operator constructed by the specification of the iteration mode implied bysigma.sigmaiteration mode whichrefers to the problemnothingordinary (forward) \(Av = Bv\lambda\) real or complex inverse with level shift sigma\((A - \sigma B )^{-1}B = v\nu\) -
-
svds(A; nsv=6, ritzvec=true, tol=0.0, maxiter=1000, ncv=2*nsv, u0=zeros((0, )), v0=zeros((0, ))) → (SVD([left_sv,] s, [right_sv,]), nconv, niter, nmult, resid) -
Computes the largest singular values
sofAusing implicitly restarted Lanczos iterations derived fromeigs().Inputs
-
A: Linear operator whose singular values are desired.Amay be represented as a subtype ofAbstractArray, e.g., a sparse matrix, or any other type supporting the four methodssize(A),eltype(A),A * vector, andA' * vector. -
nsv: Number of singular values. Default: 6. -
ritzvec: Iftrue, return the left and right singular vectorsleft_svandright_sv. Iffalse, omit the singular vectors. Default:true. -
tol: tolerance, seeeigs(). -
maxiter: Maximum number of iterations, seeeigs(). Default: 1000. -
ncv: Maximum size of the Krylov subspace, seeeigs()(there calledncv). Default:2*nsv.
u0: Initial guess for the first left Krylov vector. It may have lengthm(the first dimension ofA), or 0. -
v0: Initial guess for the first right Krylov vector. It may have lengthn(the second dimension ofA), or 0.
Outputs
-
svd: AnSVDobject containing the left singular vectors, the requested values, and the right singular vectors. Ifritzvec = false, the left and right singular vectors will be empty. -
nconv: Number of converged singular values. -
niter: Number of iterations. -
nmult: Number of matrix–vector products used. -
resid: Final residual vector.
Example
X = sprand(10, 5, 0.2) svds(X, nsv = 2)
Implementation note
svds(A)is formally equivalent to callingeigsto perform implicitly restarted Lanczos tridiagonalization on the Hermitian matrix \(\begin{pmatrix} 0 & A^\prime \\ A & 0 \end{pmatrix}\), whose eigenvalues are plus and minus the singular values of \(A\). -
-
peakflops(n; parallel=false) -
peakflopscomputes the peak flop rate of the computer by multiplying two double-precision matrices of sizen x nusingBase.LinAlg.BLAS.gemm!().ndefaults to 2000. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set withBLAS.set_num_threads(n).If the keyword argument
parallelis set totrue,peakflopsis run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argumentnstill refers to the size of the problem that is solved on each processor.
Low-level matrix operations
Matrix operations involving transposition, such as A' \ B, are converted by the Julia parser into calls to specially named functions like Ac_ldiv_B. If you want to overload these operations for your own types, it is useful to know the names of these functions.
Also, in many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with ! below (e.g. A_mul_B!) according to the usual Julia convention.
-
A_ldiv_B!([Y, ]A, B) → Y -
Compute
A \ Bin-place and store the result inY, returning the result. If only two arguments are passed, thenA_ldiv_B!(A, B)overwritesBwith the result.The argument
Ashould not be a matrix. Instead, it should be a factorization object (e.g. produced byfactorize()orcholfact()). The reason is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g.,lufact!()), and performance-critical situations requiringA_ldiv_B!usually also require fine-grained control over the factorization ofA.
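A minimal sketch of the two-argument form, factoring once and solving in-place (values illustrative only):

```julia
A = [4.0 3.0; 6.0 3.0]
F = lufact(A)       # factorize once, reusable for many right-hand sides
b = [7.0, 9.0]
A_ldiv_B!(F, b)     # b is overwritten with the solution of A*x = [7.0, 9.0]
```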
-
A_ldiv_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A\) \ \(Bᴴ\).
-
A_ldiv_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A\) \ \(Bᵀ\).
-
A_mul_B!(Y, A, B) → Y -
Calculates the matrix-matrix or matrix-vector product \(A⋅B\) and stores the result in
Y, overwriting the existing value ofY. Note thatYmust not be aliased with eitherAorB.julia> A=[1.0 2.0; 3.0 4.0]; B=[1.0 1.0; 1.0 1.0]; Y = similar(B); A_mul_B!(Y, A, B); julia> Y 2×2 Array{Float64,2}: 3.0 3.0 7.0 7.0
-
A_mul_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A⋅Bᴴ\).
-
A_mul_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A⋅Bᵀ\).
-
A_rdiv_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A / Bᴴ\).
-
A_rdiv_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(A / Bᵀ\).
-
Ac_ldiv_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ\) \ \(B\).
-
Ac_ldiv_B!([Y, ]A, B) → Y -
Similar to
A_ldiv_B!(), but return \(Aᴴ\) \ \(B\), computing the result in-place inY(or overwritingBifYis not supplied).
-
Ac_ldiv_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ\) \ \(Bᴴ\).
-
Ac_mul_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ⋅B\).
-
Ac_mul_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ Bᴴ\).
-
Ac_rdiv_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ / B\).
-
Ac_rdiv_Bc(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᴴ / Bᴴ\).
-
At_ldiv_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ\) \ \(B\).
-
At_ldiv_B!([Y, ]A, B) → Y -
Similar to
A_ldiv_B!(), but return \(Aᵀ\) \ \(B\), computing the result in-place inY(or overwritingBifYis not supplied).
-
At_ldiv_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ\) \ \(Bᵀ\).
-
At_mul_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ⋅B\).
-
At_mul_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ⋅Bᵀ\).
-
At_rdiv_B(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ / B\).
-
At_rdiv_Bt(A, B) -
For matrices or vectors \(A\) and \(B\), calculates \(Aᵀ / Bᵀ\).
BLAS Functions
In Julia (as in much of scientific computation), dense linear-algebra operations are based on the LAPACK library, which in turn is built on top of basic linear-algebra building-blocks known as the BLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.
Base.LinAlg.BLAS provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in '!'. Usually, a BLAS function has four methods defined, for Float64, Float32, Complex128, and Complex64 arrays.
-
dot(n, X, incx, Y, incy) -
Dot product of two vectors consisting of
nelements of arrayXwith strideincxandnelements of arrayYwith strideincy.
-
dotu(n, X, incx, Y, incy) -
Dot function for two complex vectors.
-
dotc(n, X, incx, Y, incy) -
Dot function for two complex vectors conjugating the first vector.
-
blascopy!(n, X, incx, Y, incy) -
Copy
nelements of arrayXwith strideincxto arrayYwith strideincy. ReturnsY.
-
nrm2(n, X, incx) -
2-norm of a vector consisting of
nelements of arrayXwith strideincx.
-
asum(n, X, incx) -
Sum of the absolute values of the first
nelements of arrayXwith strideincx.
-
axpy!(a, X, Y) -
Overwrite
Ywitha*X + Y. ReturnsY.
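A minimal sketch (values illustrative only; the module is qualified since Base.LinAlg.BLAS does not export its wrappers):

```julia
import Base.LinAlg.BLAS
X = [1.0, 2.0]
Y = [10.0, 20.0]
BLAS.axpy!(3.0, X, Y)   # Y is now [13.0, 26.0]; X is unchanged
```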
-
scal!(n, a, X, incx) -
Overwrite
Xwitha*Xfor the firstnelements of arrayXwith strideincx. ReturnsX.
-
scal(n, a, X, incx) -
Returns
Xscaled byafor the firstnelements of arrayXwith strideincx.
-
ger!(alpha, x, y, A) -
Rank-1 update of the matrix
Awith vectorsxandyasalpha*x*y' + A.
-
syr!(uplo, alpha, x, A) -
Rank-1 update of the symmetric matrix
Awith vectorxasalpha*x*x.' + A. Whenuplois ‘U’ the upper triangle ofAis updated (‘L’ for lower triangle). ReturnsA.
-
syrk!(uplo, trans, alpha, A, beta, C) -
Rank-k update of the symmetric matrix
Casalpha*A*A.' + beta*Coralpha*A.'*A + beta*Caccording to whethertransis ‘N’ or ‘T’. Whenuplois ‘U’ the upper triangle ofCis updated (‘L’ for lower triangle). ReturnsC.
-
syrk(uplo, trans, alpha, A) -
Returns either the upper triangle or the lower triangle, according to
uplo(‘U’ or ‘L’), ofalpha*A*A.'oralpha*A.'*A, according totrans(‘N’ or ‘T’).
-
her!(uplo, alpha, x, A) -
Methods for complex arrays only. Rank-1 update of the Hermitian matrix
Awith vectorxasalpha*x*x' + A. Whenuplois ‘U’ the upper triangle ofAis updated (‘L’ for lower triangle). ReturnsA.
-
herk!(uplo, trans, alpha, A, beta, C) -
Methods for complex arrays only. Rank-k update of the Hermitian matrix
Casalpha*A*A' + beta*Coralpha*A'*A + beta*Caccording to whethertransis ‘N’ or ‘T’. Whenuplois ‘U’ the upper triangle ofCis updated (‘L’ for lower triangle). ReturnsC.
-
herk(uplo, trans, alpha, A) -
Methods for complex arrays only. Returns either the upper triangle or the lower triangle, according to
uplo(‘U’ or ‘L’), ofalpha*A*A'oralpha*A'*A, according totrans(‘N’ or ‘T’).
-
gbmv!(trans, m, kl, ku, alpha, A, x, beta, y) -
Update vector
yasalpha*A*x + beta*yoralpha*A'*x + beta*yaccording totrans(‘N’ or ‘T’). The matrixAis a general band matrix of dimensionmbysize(A,2)withklsub-diagonals andkusuper-diagonals. Returns the updatedy.
-
gbmv(trans, m, kl, ku, alpha, A, x, beta, y) -
Returns
alpha*A*xoralpha*A'*xaccording totrans(‘N’ or ‘T’). The matrixAis a general band matrix of dimensionmbysize(A,2)withklsub-diagonals andkusuper-diagonals.
-
sbmv!(uplo, k, alpha, A, x, beta, y) -
Update vector
yasalpha*A*x + beta*ywhereAis a symmetric band matrix of ordersize(A,2)withksuper-diagonals stored in the argumentA. The storage layout forAis described in the reference BLAS module, level-2 BLAS at <http://www.netlib.org/lapack/explore-html/>.Returns the updated
y.
-
sbmv(uplo, k, alpha, A, x) -
Returns
alpha*A*xwhereAis a symmetric band matrix of ordersize(A,2)withksuper-diagonals stored in the argumentA.
-
sbmv(uplo, k, A, x) -
Returns
A*xwhereAis a symmetric band matrix of ordersize(A,2)withksuper-diagonals stored in the argumentA.
-
gemm!(tA, tB, alpha, A, B, beta, C) -
Update
Casalpha*A*B + beta*Cor the other three variants according totA(transposeA) andtB. Returns the updatedC.
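A minimal sketch of the in-place update (values illustrative only; with beta = 0.0 the prior contents of C are ignored):

```julia
import Base.LinAlg.BLAS
A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]
C = zeros(2, 2)
BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)   # C = 1.0*A*B + 0.0*C
```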
-
gemm(tA, tB, alpha, A, B) -
Returns
alpha*A*Bor the other three variants according totA(transposeA) andtB.
-
gemm(tA, tB, A, B) -
Returns
A*Bor the other three variants according totA(transposeA) andtB.
-
gemv!(tA, alpha, A, x, beta, y) -
Update the vector
yasalpha*A*x + beta*yoralpha*A'x + beta*yaccording totA(transposeA). Returns the updatedy.
-
gemv(tA, alpha, A, x) -
Returns
alpha*A*xoralpha*A'xaccording totA(transposeA).
-
gemv(tA, A, x) -
Returns
A*xorA'xaccording totA(transposeA).
-
symm!(side, ul, alpha, A, B, beta, C) -
Update
Casalpha*A*B + beta*Coralpha*B*A + beta*Caccording toside.Ais assumed to be symmetric. Only theultriangle ofAis used. Returns the updatedC.
-
symm(side, ul, alpha, A, B) -
Returns
alpha*A*Boralpha*B*Aaccording toside.Ais assumed to be symmetric. Only theultriangle ofAis used.
-
symm(side, ul, A, B) -
Returns
A*BorB*Aaccording toside.Ais assumed to be symmetric. Only theultriangle ofAis used.
-
symv!(ul, alpha, A, x, beta, y) -
Update the vector
yasalpha*A*x + beta*y.Ais assumed to be symmetric. Only theultriangle ofAis used. Returns the updatedy.
-
symv(ul, alpha, A, x) -
Returns
alpha*A*x.Ais assumed to be symmetric. Only theultriangle ofAis used.
-
symv(ul, A, x) -
Returns
A*x.Ais assumed to be symmetric. Only theultriangle ofAis used.
-
trmm!(side, ul, tA, dA, alpha, A, B) -
Update
Basalpha*A*Bor one of the other three variants determined byside(Aon left or right) andtA(transposeA). Only theultriangle ofAis used.dAindicates ifAis unit-triangular (the diagonal is assumed to be all ones). Returns the updatedB.
-
trmm(side, ul, tA, dA, alpha, A, B) -
Returns
alpha*A*Bor one of the other three variants determined byside(Aon left or right) andtA(transposeA). Only theultriangle ofAis used.dAindicates ifAis unit-triangular (the diagonal is assumed to be all ones).
-
trsm!(side, ul, tA, dA, alpha, A, B) -
Overwrite
Bwith the solution toA*X = alpha*Bor one of the other three variants determined byside(Aon left or right ofX) andtA(transposeA). Only theultriangle ofAis used.dAindicates ifAis unit-triangular (the diagonal is assumed to be all ones). Returns the updatedB.
-
trsm(side, ul, tA, dA, alpha, A, B) -
Returns the solution to
A*X = alpha*Bor one of the other three variants determined byside(Aon left or right ofX) andtA(transposeA). Only theultriangle ofAis used.dAindicates ifAis unit-triangular (the diagonal is assumed to be all ones).
-
trmv!(ul, tA, dA, A, b) -
Returns op(A)*b, where op is determined by tA (N for identity, T for transpose A, and C for conjugate transpose A). Only the ul triangle (U for upper, L for lower) of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones if U, or non-unit if N). The multiplication occurs in-place on b.
-
trmv(ul, tA, dA, A, b) -
Returns op(A)*b, where op is determined by tA (N for identity, T for transpose A, and C for conjugate transpose A). Only the ul triangle (U for upper, L for lower) of A is used. dA indicates if A is unit-triangular (the diagonal is assumed to be all ones if U, or non-unit if N).
-
trsv!(ul, tA, dA, A, b) -
Overwrite b with the solution to A*x = b or one of the other two variants determined by tA (transpose A) and ul (triangle of A used). dA indicates if A is unit-triangular (the diagonal is assumed to be all ones). Returns the updated b.
-
trsv(ul, tA, dA, A, b) -
Returns the solution to A*x = b or one of the other two variants determined by tA (transpose A) and ul (triangle of A used). dA indicates if A is unit-triangular (the diagonal is assumed to be all ones).
-
set_num_threads(n) -
Set the number of threads the BLAS library should use.
-
I -
An object of type UniformScaling, representing an identity matrix of any size.
LAPACK Functions
Base.LinAlg.LAPACK provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in '!'.
Usually a function has 4 methods defined, one each for Float64, Float32, Complex128 and Complex64 arrays.
Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.
-
gbtrf!(kl, ku, m, AB) → (AB, ipiv) -
Compute the LU factorization of a banded matrix AB. kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. Returns the LU factorization in-place and ipiv, the vector of pivots used.
-
gbtrs!(trans, kl, ku, m, AB, ipiv, B) -
Solve the equation AB * X = B. trans determines the orientation of AB. It may be N (no transpose), T (transpose), or C (conjugate transpose). kl is the first subdiagonal containing a nonzero band, ku is the last superdiagonal containing one, and m is the first dimension of the matrix AB. ipiv is the vector of pivots returned from gbtrf!. Returns the vector or matrix X, overwriting B in-place.
-
gebal!(job, A) → (ilo, ihi, scale) -
Balance the matrix A before computing its eigensystem or Schur factorization. job can be one of N (A will not be permuted or scaled), P (A will only be permuted), S (A will only be scaled), or B (A will be both permuted and scaled). Modifies A in-place and returns ilo, ihi, and scale. If permuting was turned on, A[i,j] = 0 if j > i and 1 < j < ilo or j > ihi. scale contains information about the scaling/permutations performed.
-
gebak!(job, side, ilo, ihi, scale, V) -
Transform the eigenvectors V of a matrix balanced using gebal! to the unscaled/unpermuted eigenvectors of the original matrix. Modifies V in-place. side can be L (left eigenvectors are transformed) or R (right eigenvectors are transformed).
-
gebrd!(A) → (A, d, e, tauq, taup) -
Reduce A in-place to bidiagonal form A = QBP'. Returns A, containing the bidiagonal matrix B; d, containing the diagonal elements of B; e, containing the off-diagonal elements of B; tauq, containing the elementary reflectors representing Q; and taup, containing the elementary reflectors representing P.
-
gelqf!(A, tau) -
Compute the LQ factorization of A, A = LQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.
Returns A and tau modified in-place.
-
gelqf!(A) → (A, tau) -
Compute the LQ factorization of A, A = LQ.
Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.
-
geqlf!(A, tau) -
Compute the QL factorization of A, A = QL. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.
Returns A and tau modified in-place.
-
geqlf!(A) → (A, tau) -
Compute the QL factorization of A, A = QL.
Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.
-
geqrf!(A, tau) -
Compute the QR factorization of A, A = QR. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.
Returns A and tau modified in-place.
-
geqrf!(A) → (A, tau) -
Compute the QR factorization of A, A = QR.
Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.
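geqrf! stores R in A's upper triangle and the Householder reflectors (plus tau) below it; a separate routine (orgqr!, further down) assembles Q explicitly. As a sketch via SciPy's wrappers of the same LAPACK routines (dgeqrf/dorgqr; their tuple return shapes are SciPy's convention, not Julia's):

```python
# Sketch of geqrf-style compact QR storage, then assembling the thin Q.
import numpy as np
from scipy.linalg import lapack

A = np.array([[2., 1.],
              [1., 3.],
              [0., 1.]])
out = lapack.dgeqrf(A)              # compact QR: R above the diagonal, reflectors below
qr, tau = out[0], out[1]
Q = lapack.dorgqr(qr, tau)[0]       # explicitly form the 3x2 (thin) Q
R = np.triu(qr)[:2, :]              # extract the 2x2 R factor
```

Q's columns are orthonormal and Q*R reconstructs A.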
-
geqp3!(A, jpvt, tau) -
Compute the pivoted QR factorization of A, AP = QR, using BLAS level 3. P is a pivoting matrix, represented by jpvt. tau stores the elementary reflectors. jpvt must have length greater than or equal to n if A is an (m x n) matrix. tau must have length greater than or equal to the smallest dimension of A. A, jpvt, and tau are modified in-place.
-
geqp3!(A, jpvt) → (A, jpvt, tau) -
Compute the pivoted QR factorization of A, AP = QR, using BLAS level 3. P is a pivoting matrix, represented by jpvt. jpvt must have length greater than or equal to n if A is an (m x n) matrix.
Returns A and jpvt, modified in-place, and tau, which stores the elementary reflectors.
-
geqp3!(A) → (A, jpvt, tau) -
Compute the pivoted QR factorization of A, AP = QR, using BLAS level 3.
Returns A, modified in-place, jpvt, which represents the pivoting matrix P, and tau, which stores the elementary reflectors.
-
gerqf!(A, tau) -
Compute the RQ factorization of A, A = RQ. tau contains scalars which parameterize the elementary reflectors of the factorization. tau must have length greater than or equal to the smallest dimension of A.
Returns A and tau modified in-place.
-
gerqf!(A) → (A, tau) -
Compute the RQ factorization of A, A = RQ.
Returns A, modified in-place, and tau, which contains scalars which parameterize the elementary reflectors of the factorization.
-
geqrt!(A, T) -
Compute the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A.
Returns A and T modified in-place.
-
geqrt!(A, nb) → (A, T) -
Compute the blocked QR factorization of A, A = QR. nb sets the block size and it must be between 1 and n, the second dimension of A.
Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.
-
geqrt3!(A, T) -
Recursively computes the blocked QR factorization of A, A = QR. T contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of T sets the block size and it must be between 1 and n. The second dimension of T must equal the smallest dimension of A.
Returns A and T modified in-place.
-
geqrt3!(A) → (A, T) -
Recursively computes the blocked QR factorization of A, A = QR.
Returns A, modified in-place, and T, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.
-
getrf!(A) → (A, ipiv, info) -
Compute the pivoted LU factorization of A, A = LU.
Returns A, modified in-place, ipiv, the pivoting information, and an info code which indicates success (info = 0), a singular value in U (info = i, in which case U[i,i] is singular), or an error code (info < 0).
-
tzrzf!(A) → (A, tau) -
Transforms the upper trapezoidal matrix A to upper triangular form in-place. Returns A and tau, the scalar parameters for the elementary reflectors of the transformation.
-
ormrz!(side, trans, A, tau, C) -
Multiplies the matrix C by Q from the transformation supplied by tzrzf!. Depending on side or trans the multiplication can be left-sided (side = L, Q*C) or right-sided (side = R, C*Q) and Q can be unmodified (trans = N), transposed (trans = T), or conjugate transposed (trans = C). Returns matrix C, which is modified in-place with the result of the multiplication.
-
gels!(trans, A, B) → (F, B, ssr) -
Solves the linear equation A * X = B, A.' * X = B, or A' * X = B using a QR or LQ factorization. Modifies the matrix/vector B in-place with the solution. A is overwritten with its QR or LQ factorization. trans may be one of N (no modification), T (transpose), or C (conjugate transpose). gels! searches for the minimum norm/least squares solution. A may be under- or over-determined. The solution is returned in B.
-
gesv!(A, B) → (B, A, ipiv) -
Solves the linear equation A * X = B where A is a square matrix using the LU factorization of A. A is overwritten with its LU factorization and B is overwritten with the solution X. ipiv contains the pivoting information for the LU factorization of A.
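gesv! is the one-shot "factor and solve" driver: it produces the LU factors, the pivots, and the solution in a single call. A sketch via SciPy's wrapper of the same LAPACK routine (dgesv; the four-element return tuple is SciPy's convention):

```python
# Sketch of gesv: LU-factor A and solve A * X = B in one driver call.
import numpy as np
from scipy.linalg import lapack

A = np.array([[4., 1.],
              [2., 3.]])
B = np.array([[1.],
              [2.]])
lu, piv, x, info = lapack.dgesv(A, B)   # lu: packed L and U factors; piv: pivot indices
```

info = 0 signals a successful factorization; a positive info would flag an exactly singular U.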
-
getrs!(trans, A, ipiv, B) -
Solves the linear equation A * X = B, A.' * X = B, or A' * X = B for square A. Modifies the matrix/vector B in-place with the solution. A is the LU factorization from getrf!, with ipiv the pivoting information. trans may be one of N (no modification), T (transpose), or C (conjugate transpose).
-
getri!(A, ipiv) -
Computes the inverse of A, using its LU factorization found by getrf!. ipiv is the pivot information output and A contains the LU factorization of getrf!. A is overwritten with its inverse.
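The getrf!/getri! pair above splits inversion into a reusable factorization step plus an inversion step. Sketched via SciPy's wrappers of the same LAPACK routines (dgetrf/dgetri; tuple shapes are SciPy's):

```python
# Sketch: pivoted LU factorization, then matrix inverse from the LU factors.
import numpy as np
from scipy.linalg import lapack

A = np.array([[4., 2.],
              [1., 3.]])
lu, piv, info = lapack.dgetrf(A)        # getrf-style factorization: A = P*L*U
Ainv, info2 = lapack.dgetri(lu, piv)    # getri-style inversion reusing lu and piv
```

The same lu/piv pair could equally feed a getrs-style solve, which is usually preferable to forming the inverse explicitly.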
-
gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) → (X, equed, R, C, B, rcond, ferr, berr, work) -
Solves the linear equation A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) using the LU factorization of A. fact may be E, in which case A will be equilibrated and copied to AF; F, in which case AF and ipiv from a previous LU factorization are inputs; or N, in which case A will be copied to AF and then factored. If fact = F, equed may be N, meaning A has not been equilibrated; R, meaning A was multiplied by diagm(R) from the left; C, meaning A was multiplied by diagm(C) from the right; or B, meaning A was multiplied by diagm(R) from the left and diagm(C) from the right. If fact = F and equed = R or B the elements of R must all be positive. If fact = F and equed = C or B the elements of C must all be positive.
Returns the solution X; equed, which is an output if fact is not N, and describes the equilibration that was performed; R, the row equilibration diagonal; C, the column equilibration diagonal; B, which may be overwritten with its equilibrated form diagm(R)*B (if trans = N and equed = R,B) or diagm(C)*B (if trans = T,C and equed = C,B); rcond, the reciprocal condition number of A after equilibrating; ferr, the forward error bound for each solution vector in X; berr, the backward error bound for each solution vector in X; and work, the reciprocal pivot growth factor.
-
gesvx!(A, B) -
The no-equilibration, no-transpose simplification of gesvx!.
-
gelsd!(A, B, rcond) → (B, rnk) -
Computes the least-norm solution of A * X = B by finding the SVD factorization of A, using a divide-and-conquer approach. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.
-
gelsy!(A, B, rcond) → (B, rnk) -
Computes the least-norm solution of A * X = B by computing a complete orthogonal factorization of A with column pivoting. B is overwritten with the solution X. Singular values below rcond will be treated as zero. Returns the solution in B and the effective rank of A in rnk.
-
gglse!(A, c, B, d) → (X,res) -
Solves the equation A * x = c where x is subject to the equality constraint B * x = d, by minimizing the residual norm ||c - A*x||^2 subject to that constraint. Returns X and the residual sum-of-squares.
-
geev!(jobvl, jobvr, A) → (W, VL, VR) -
Finds the eigensystem of A. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. Returns the eigenvalues in W, the right eigenvectors in VR, and the left eigenvectors in VL.
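The geev! driver above computes a general (possibly complex) eigensystem. A sketch via SciPy's wrapper of the same LAPACK routine (dgeev, which for real input splits eigenvalues into real parts wr and imaginary parts wi; that split is the Fortran/SciPy convention):

```python
# Sketch of geev on a symmetric matrix, so all eigenvalues come out real.
import numpy as np
from scipy.linalg import lapack

A = np.array([[2., 1.],
              [1., 2.]])
wr, wi, vl, vr, info = lapack.dgeev(A)   # wr/wi: eigenvalue parts; vl/vr: left/right vectors
```

Each column of vr satisfies A*v = lambda*v for the matching entry of wr.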
-
gesdd!(job, A) → (U, S, VT) -
Finds the singular value decomposition of A, A = U * S * V', using a divide and conquer approach. If job = A, all the columns of U and the rows of V' are computed. If job = N, no columns of U or rows of V' are computed. If job = O, A is overwritten with the columns of (thin) U and the rows of (thin) V'. If job = S, the columns of (thin) U and the rows of (thin) V' are computed and returned separately.
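The gesdd! divide-and-conquer SVD can be sketched through SciPy's wrapper of the same LAPACK routine (dgesdd; the return order u, s, vt is SciPy's convention):

```python
# Sketch of gesdd: divide-and-conquer SVD, A = U * diag(S) * Vt.
import numpy as np
from scipy.linalg import lapack

A = np.array([[3., 0.],
              [4., 5.]])
u, s, vt, info = lapack.dgesdd(A)   # s holds the singular values in descending order
```

Note that the routine returns V' (Vt), not V, matching the A = U * S * V' convention in the text.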
-
gesvd!(jobu, jobvt, A) → (U, S, VT) -
Finds the singular value decomposition of A, A = U * S * V'. If jobu = A, all the columns of U are computed. If jobvt = A all the rows of V' are computed. If jobu = N, no columns of U are computed. If jobvt = N no rows of V' are computed. If jobu = O, A is overwritten with the columns of (thin) U. If jobvt = O, A is overwritten with the rows of (thin) V'. If jobu = S, the columns of (thin) U are computed and returned separately. If jobvt = S the rows of (thin) V' are computed and returned separately. jobu and jobvt can't both be O.
Returns U, S, and Vt, where S are the singular values of A.
-
ggsvd!(jobu, jobv, jobq, A, B) → (U, V, Q, alpha, beta, k, l, R) -
Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv or jobq is N, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.
-
ggsvd3!(jobu, jobv, jobq, A, B) → (U, V, Q, alpha, beta, k, l, R) -
Finds the generalized singular value decomposition of A and B, U'*A*Q = D1*R and V'*B*Q = D2*R. D1 has alpha on its diagonal and D2 has beta on its diagonal. If jobu = U, the orthogonal/unitary matrix U is computed. If jobv = V the orthogonal/unitary matrix V is computed. If jobq = Q, the orthogonal/unitary matrix Q is computed. If jobu, jobv, or jobq is N, that matrix is not computed. This function requires LAPACK 3.6.0.
-
geevx!(balanc, jobvl, jobvr, sense, A) → (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv) -
Finds the eigensystem of A with matrix balancing. If jobvl = N, the left eigenvectors of A aren't computed. If jobvr = N, the right eigenvectors of A aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed. If balanc = N, no balancing is performed. If balanc = P, A is permuted but not scaled. If balanc = S, A is scaled but not permuted. If balanc = B, A is permuted and scaled. If sense = N, no reciprocal condition numbers are computed. If sense = E, reciprocal condition numbers are computed for the eigenvalues only. If sense = V, reciprocal condition numbers are computed for the right eigenvectors only. If sense = B, reciprocal condition numbers are computed for the eigenvalues and the right eigenvectors. If sense = E or B, both the left and right eigenvectors must be computed.
-
ggev!(jobvl, jobvr, A, B) → (alpha, beta, vl, vr) -
Finds the generalized eigendecomposition of A and B. If jobvl = N, the left eigenvectors aren't computed. If jobvr = N, the right eigenvectors aren't computed. If jobvl = V or jobvr = V, the corresponding eigenvectors are computed.
-
gtsv!(dl, d, du, B) -
Solves the equation A * X = B where A is a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.
Overwrites B with the solution X and returns it.
-
gttrf!(dl, d, du) → (dl, d, du, du2, ipiv) -
Finds the LU factorization of a tridiagonal matrix with dl on the subdiagonal, d on the diagonal, and du on the superdiagonal.
Modifies dl, d, and du in-place and returns them along with the second superdiagonal du2 and the pivoting vector ipiv.
-
gttrs!(trans, dl, d, du, du2, ipiv, B) -
Solves the equation A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) using the LU factorization computed by gttrf!. B is overwritten with the solution X.
-
orglq!(A, tau, k = length(tau)) -
Explicitly finds the matrix Q of an LQ factorization after calling gelqf! on A. Uses the output of gelqf!. A is overwritten by Q.
-
orgqr!(A, tau, k = length(tau)) -
Explicitly finds the matrix Q of a QR factorization after calling geqrf! on A. Uses the output of geqrf!. A is overwritten by Q.
-
orgql!(A, tau, k = length(tau)) -
Explicitly finds the matrix Q of a QL factorization after calling geqlf! on A. Uses the output of geqlf!. A is overwritten by Q.
-
orgrq!(A, tau, k = length(tau)) -
Explicitly finds the matrix Q of an RQ factorization after calling gerqf! on A. Uses the output of gerqf!. A is overwritten by Q.
-
ormlq!(side, trans, A, tau, C) -
Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an LQ factorization of A computed using gelqf!. C is overwritten.
-
ormqr!(side, trans, A, tau, C) -
Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrf!. C is overwritten.
-
ormql!(side, trans, A, tau, C) -
Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QL factorization of A computed using geqlf!. C is overwritten.
-
ormrq!(side, trans, A, tau, C) -
Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from an RQ factorization of A computed using gerqf!. C is overwritten.
-
gemqrt!(side, trans, V, T, C) -
Computes Q * C (trans = N), Q.' * C (trans = T), or Q' * C (trans = C) for side = L, or the equivalent right-sided multiplication for side = R, using Q from a QR factorization of A computed using geqrt!. C is overwritten.
-
posv!(uplo, A, B) → (A, B) -
Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix. If uplo = U the upper Cholesky decomposition of A is computed. If uplo = L the lower Cholesky decomposition of A is computed. A is overwritten by its Cholesky decomposition. B is overwritten with the solution X.
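posv! is the positive-definite analogue of gesv!: it Cholesky-factors A and solves in one driver call. A sketch via SciPy's wrapper of the same LAPACK routine (dposv; the (c, x, info) return is SciPy's convention):

```python
# Sketch of posv: Cholesky-factor a positive-definite A and solve A * X = B.
import numpy as np
from scipy.linalg import lapack

M = np.array([[2., 1.],
              [0., 1.]])
A = M @ M.T + np.eye(2)           # symmetric positive definite by construction
B = np.array([[1.],
              [1.]])
c, x, info = lapack.dposv(A, B)   # c: Cholesky factor; x: solution
```

A positive info would mean the leading minor of that order is not positive definite, i.e. the Cholesky factorization broke down.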
-
potrf!(uplo, A) -
Computes the Cholesky (upper if uplo = U, lower if uplo = L) decomposition of positive-definite matrix A. A is overwritten and returned with an info code.
-
potri!(uplo, A) -
Computes the inverse of positive-definite matrix A after calling potrf! to find its (upper if uplo = U, lower if uplo = L) Cholesky decomposition. A is overwritten by its inverse and returned.
-
potrs!(uplo, A, B) -
Finds the solution to A * X = B where A is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by potrf!. If uplo = U the upper Cholesky decomposition of A was computed. If uplo = L the lower Cholesky decomposition of A was computed. B is overwritten with the solution X.
-
pstrf!(uplo, A, tol) → (A, piv, rank, info) -
Computes the (upper if uplo = U, lower if uplo = L) pivoted Cholesky decomposition of positive-definite matrix A with a user-set tolerance tol. A is overwritten by its Cholesky decomposition.
Returns A, the pivots piv, the rank of A, and an info code. If info = 0, the factorization succeeded. If info = i > 0, then A is indefinite or rank-deficient.
-
ptsv!(D, E, B) -
Solves A * X = B for positive-definite tridiagonal A. D is the diagonal of A and E is the off-diagonal. B is overwritten with the solution X and returned.
-
pttrf!(D, E) -
Computes the LDLt factorization of a positive-definite tridiagonal matrix with D as diagonal and E as off-diagonal. D and E are overwritten and returned.
-
pttrs!(D, E, B) -
Solves A * X = B for positive-definite tridiagonal A with diagonal D and off-diagonal E after computing A's LDLt factorization using pttrf!. B is overwritten with the solution X.
-
trtri!(uplo, diag, A) -
Finds the inverse of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. A is overwritten with its inverse.
-
trtrs!(uplo, trans, diag, A, B) -
Solves A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) for (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. B is overwritten with the solution X.
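The trtrs! solve above can be sketched with SciPy's wrapper of the same LAPACK routine (dtrtrs; the lower/trans/unitdiag keywords are SciPy's spellings of uplo/trans/diag and are assumptions of this sketch):

```python
# Sketch of trtrs: triangular solve A * X = B with upper-triangular, non-unit A.
import numpy as np
from scipy.linalg import lapack

A = np.triu(np.ones((3, 3))) + np.eye(3)   # upper triangular, diagonal entries = 2
B = np.array([[1.],
              [2.],
              [3.]])
x, info = lapack.dtrtrs(A, B, lower=0, trans=0, unitdiag=0)
```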
-
trcon!(norm, uplo, diag, A) -
Finds the reciprocal condition number of (upper if uplo = U, lower if uplo = L) triangular matrix A. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. If norm = I, the condition number is found in the infinity norm. If norm = O or 1, the condition number is found in the one norm.
-
trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T)) -
Finds the eigensystem of an upper triangular matrix T. If side = R, the right eigenvectors are computed. If side = L, the left eigenvectors are computed. If side = B, both sets are computed. If howmny = A, all eigenvectors are found. If howmny = B, all eigenvectors are found and backtransformed using VL and VR. If howmny = S, only the eigenvectors corresponding to the values in select are computed.
-
trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) → (Ferr, Berr) -
Estimates the error in the solution to A * X = B (trans = N), A.' * X = B (trans = T), or A' * X = B (trans = C) for side = L, or the equivalent right-sided equations X * A for side = R, after computing X using trtrs!. If uplo = U, A is upper triangular. If uplo = L, A is lower triangular. If diag = N, A has non-unit diagonal elements. If diag = U, all diagonal elements of A are one. Ferr and Berr are optional inputs. Ferr is the forward error and Berr is the backward error, each component-wise.
-
stev!(job, dv, ev) → (dv, Zmat) -
Computes the eigensystem for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If job = N only the eigenvalues are found and returned in dv. If job = V then the eigenvectors are also found and returned in Zmat.
-
stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) → (dv, iblock, isplit) -
Computes the eigenvalues for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. If order = B, eigenvalues are ordered within a block. If order = E, they are ordered across all the blocks. abstol can be set as a tolerance for convergence.
-
stegr!(jobz, range, dv, ev, vl, vu, il, iu) → (w, Z) -
Computes the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) for a symmetric tridiagonal matrix with dv as diagonal and ev as off-diagonal. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. The eigenvalues are returned in w and the eigenvectors in Z.
-
stein!(dv, ev_in, w_in, iblock_in, isplit_in) -
Computes the eigenvectors for a symmetric tridiagonal matrix with dv as diagonal and ev_in as off-diagonal. w_in specifies the input eigenvalues for which to find corresponding eigenvectors. iblock_in specifies the submatrices corresponding to the eigenvalues in w_in. isplit_in specifies the splitting points between the submatrix blocks.
-
syconv!(uplo, A, ipiv) → (A, work) -
Converts a symmetric matrix A (which has been factorized into a triangular matrix) into two matrices L and D. If uplo = U, A is upper triangular. If uplo = L, it is lower triangular. ipiv is the pivot vector from the triangular factorization. A is overwritten by L and D.
-
sysv!(uplo, A, B) → (B, A, ipiv) -
Finds the solution to A * X = B for symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.
-
sytrf!(uplo, A) → (A, ipiv, info) -
Computes the Bunch-Kaufman factorization of a symmetric matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.
Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.
-
sytri!(uplo, A, ipiv) -
Computes the inverse of a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.
-
sytrs!(uplo, A, ipiv, B) -
Solves the equation A * X = B for a symmetric matrix A using the results of sytrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.
-
hesv!(uplo, A, B) → (B, A, ipiv) -
Finds the solution to A * X = B for Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X. A is overwritten by its Bunch-Kaufman factorization. ipiv contains pivoting information about the factorization.
-
hetrf!(uplo, A) → (A, ipiv, info) -
Computes the Bunch-Kaufman factorization of a Hermitian matrix A. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored.
Returns A, overwritten by the factorization, a pivot vector ipiv, and the error code info which is a non-negative integer. If info is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position info.
-
hetri!(uplo, A, ipiv) -
Computes the inverse of a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. A is overwritten by its inverse.
-
hetrs!(uplo, A, ipiv, B) -
Solves the equation A * X = B for a Hermitian matrix A using the results of hetrf!. If uplo = U, the upper half of A is stored. If uplo = L, the lower half is stored. B is overwritten by the solution X.
-
syev!(jobz, uplo, A) -
Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used.
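The symmetric eigensolver syev! can be sketched through SciPy's wrapper of the same LAPACK routine (dsyev; the (w, v, info) return order and ascending eigenvalue ordering are that routine's convention):

```python
# Sketch of syev: symmetric eigendecomposition, eigenvalues ascending in w.
import numpy as np
from scipy.linalg import lapack

A = np.array([[2., 1.],
              [1., 2.]])
w, v, info = lapack.dsyev(A)   # w: eigenvalues; v's columns: orthonormal eigenvectors
```

Unlike the general geev driver, the symmetric routine guarantees real eigenvalues and an orthonormal eigenvector basis.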
-
syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) → (W, Z) -
Finds the eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A. If uplo = U, the upper triangle of A is used. If uplo = L, the lower triangle of A is used. If range = A, all the eigenvalues are found. If range = V, the eigenvalues in the half-open interval (vl, vu] are found. If range = I, the eigenvalues with indices between il and iu are found. abstol can be set as a tolerance for convergence.
The eigenvalues are returned in W and the eigenvectors in Z.
-
sygvd!(itype, jobz, uplo, A, B) → (w, A, B) -
Finds the generalized eigenvalues (jobz = N) or eigenvalues and eigenvectors (jobz = V) of a symmetric matrix A and symmetric positive-definite matrix B. If uplo = U, the upper triangles of A and B are used. If uplo = L, the lower triangles of A and B are used. If itype = 1, the problem to solve is A * x = lambda * B * x. If itype = 2, the problem to solve is A * B * x = lambda * x. If itype = 3, the problem to solve is B * A * x = lambda * x.
-
bdsqr!(uplo, d, e_, Vt, U, C) → (d, Vt, U, C) -
Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. Can optionally also compute the product Q' * C.
Returns the singular values in d, and the matrix C overwritten with Q' * C.
-
bdsdc!(uplo, compq, d, e_) → (d, e, u, vt, q, iq) -
Computes the singular value decomposition of a bidiagonal matrix with d on the diagonal and e_ on the off-diagonal using a divide and conquer method. If uplo = U, e_ is the superdiagonal. If uplo = L, e_ is the subdiagonal. If compq = N, only the singular values are found. If compq = I, the singular values and vectors are found. If compq = P, the singular values and vectors are found in compact form. Only works for real types.
Returns the singular values in d, and if compq = P, the compact singular vectors in iq.
-
gecon!(normtype, A, anorm) -
Finds the reciprocal condition number of matrix A. If normtype = I, the condition number is found in the infinity norm. If normtype = O or 1, the condition number is found in the one norm. A must be the result of getrf! and anorm is the norm of A in the relevant norm.
-
gehrd!(ilo, ihi, A) → (A, tau) -
Converts a matrix A to Hessenberg form. If A is balanced with gebal! then ilo and ihi are the outputs of gebal!. Otherwise they should be ilo = 1 and ihi = size(A,2). tau contains the elementary reflectors of the factorization.
-
orghr!(ilo, ihi, A, tau) -
Explicitly finds Q, the orthogonal/unitary matrix from gehrd!. ilo, ihi, A, and tau must correspond to the input/output to gehrd!.
-
gees!(jobvs, A) → (A, vs, w) -
Computes the eigenvalues (jobvs = N) or the eigenvalues and Schur vectors (jobvs = V) of matrix A. A is overwritten by its Schur form.
Returns A, vs containing the Schur vectors, and w, containing the eigenvalues.
-
gges!(jobvsl, jobvsr, A, B) → (A, B, alpha, beta, vsl, vsr) -
Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (jobvsl = V), or right Schur vectors (jobvsr = V) of A and B.
The generalized eigenvalues are returned in alpha and beta. The left Schur vectors are returned in vsl and the right Schur vectors are returned in vsr.
-
trexc!(compq, ifst, ilst, T, Q) → (T, Q) -
Reorder the Schur factorization of a matrix. If compq = V, the Schur vectors Q are reordered. If compq = N they are not modified. ifst and ilst specify the reordering of the vectors.
-
trsen!(compq, job, select, T, Q) → (T, Q, w) -
Reorder the Schur factorization of a matrix and optionally find reciprocal condition numbers. If job = N, no condition numbers are found. If job = E, only the condition number for this cluster of eigenvalues is found. If job = V, only the condition number for the invariant subspace is found. If job = B then the condition numbers for the cluster and subspace are found. If compq = V the Schur vectors Q are updated. If compq = N the Schur vectors are not modified. select determines which eigenvalues are in the cluster.
Returns T, Q, and reordered eigenvalues in w.
-
tgsen!(select, S, T, Q, Z) → (S, T, alpha, beta, Q, Z) -
Reorders the vectors of a generalized Schur decomposition. select specifies the eigenvalues in each cluster.
-
trsyl!(transa, transb, A, B, C, isgn=1) → (C, scale) -
Solves the Sylvester matrix equation A * X +/- X * B = scale*C where A and B are both quasi-upper triangular. If transa = N, A is not modified. If transa = T, A is transposed. If transa = C, A is conjugate transposed. Similarly for transb and B. If isgn = 1, the equation A * X + X * B = scale * C is solved. If isgn = -1, the equation A * X - X * B = scale * C is solved.
Returns X (overwriting C) and scale.
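The trsyl! equation above can be sketched with SciPy's wrapper of the same LAPACK routine (dtrsyl, defaults trana='N', tranb='N', isgn=1; the (x, scale, info) return tuple is SciPy's convention):

```python
# Sketch of trsyl: solve A * X + X * B = scale * C with triangular A and B.
import numpy as np
from scipy.linalg import lapack

A = np.array([[1., 2.],
              [0., 3.]])           # upper triangular (a special case of quasi-triangular)
B = np.array([[2., 1.],
              [0., 4.]])
C = np.array([[1., 0.],
              [2., 1.]])
x, scale, info = lapack.dtrsyl(A, B, C)
```

The solve is well-posed here because no eigenvalue of A equals the negative of an eigenvalue of B; scale is returned <= 1 to guard against overflow.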
© 2009–2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors
Licensed under the MIT License.
https://docs.julialang.org/en/release-0.5/stdlib/linalg/