Linear algebra

In addition to (and as part of) its support for multi-dimensional arrays, Julia provides native implementations of many common and useful linear algebra operations. Basic operations, such as trace, det, and inv, are all supported:

julia> A = [1 2 3; 4 1 6; 7 8 1]
3×3 Array{Int64,2}:
 1  2  3
 4  1  6
 7  8  1

julia> trace(A)
3

julia> det(A)
104.0

julia> inv(A)
3×3 Array{Float64,2}:
 -0.451923   0.211538    0.0865385
  0.365385  -0.192308    0.0576923
  0.240385   0.0576923  -0.0673077

As well as other useful operations, such as finding eigenvalues or eigenvectors:

julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
3×3 Array{Float64,2}:
   1.5   2.0  -4.0
   3.0  -1.0  -6.0
 -10.0   2.3   4.0

julia> eigvals(A)
3-element Array{Complex{Float64},1}:
  9.31908+0.0im
 -2.40954+2.72095im
 -2.40954-2.72095im

julia> eigvecs(A)
3×3 Array{Complex{Float64},2}:
 -0.488645+0.0im  0.182546-0.39813im   0.182546+0.39813im
 -0.540358+0.0im  0.692926+0.0im       0.692926-0.0im
   0.68501+0.0im  0.254058-0.513301im  0.254058+0.513301im

In addition, Julia provides many factorizations which can be used to speed up problems such as linear solve or matrix exponentiation by pre-factorizing a matrix into a form more amenable (for performance or memory reasons) to the problem. See the documentation on factorize for more information. As an example:

julia> A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
3×3 Array{Float64,2}:
   1.5   2.0  -4.0
   3.0  -1.0  -6.0
 -10.0   2.3   4.0

julia> factorize(A)
Base.LinAlg.LU{Float64,Array{Float64,2}} with factors L and U:
[1.0 0.0 0.0; -0.15 1.0 0.0; -0.3 -0.132196 1.0]
[-10.0 2.3 4.0; 0.0 2.345 -3.4; 0.0 0.0 -5.24947]
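The object returned by factorize can also be reused: once the LU factors are computed, solving against several right-hand sides does not refactorize the matrix. Below is a minimal sketch (Julia 0.6 syntax); the right-hand sides b1 and b2 are illustrative.

F  = factorize(A)     # computes the LU factorization of A once
b1 = [1.0, 2.0, 3.0]  # illustrative right-hand sides
b2 = [4.0, 5.0, 6.0]
x1 = F \ b1           # each solve reuses the stored L and U factors
x2 = F \ b2           # no refactorization of A happens here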

Since A is not Hermitian, symmetric, triangular, tridiagonal, or bidiagonal, an LU factorization may be the best we can do. Compare with:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Array{Float64,2}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> factorize(B)
Base.LinAlg.BunchKaufman{Float64,Array{Float64,2}}([-1.64286 0.142857 -0.8; 2.0 -2.8 -0.6; -4.0 -3.0 5.0], [1, 2, 3], 'U', true, false, 0)

Here, Julia was able to detect that B is in fact symmetric, and used a more appropriate factorization. Often it's possible to write more efficient code for a matrix that is known to have certain properties, e.g. that it is symmetric or tridiagonal. Julia provides some special types so that you can "tag" matrices as having these properties. For instance:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Array{Float64,2}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64,Array{Float64,2}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

sB has been tagged as a matrix that's (real) symmetric, so for later operations we might perform on it, such as eigenfactorization or computing matrix-vector products, efficiencies can be gained by referencing only half of it. For example:

julia> B = [1.5 2 -4; 2 -1 -3; -4 -3 5]
3×3 Array{Float64,2}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> sB = Symmetric(B)
3×3 Symmetric{Float64,Array{Float64,2}}:
  1.5   2.0  -4.0
  2.0  -1.0  -3.0
 -4.0  -3.0   5.0

julia> x = [1; 2; 3]
3-element Array{Int64,1}:
 1
 2
 3

julia> sB\x
3-element Array{Float64,1}:
 -1.73913
 -1.1087
 -1.45652

The \ operation here performs the linear solve. Julia's parser provides convenient dispatch to specialized methods for the transpose of a matrix left-divided by a vector, or for the various combinations of transpose operations in matrix-matrix solves. Many of these are further specialized for certain special matrix types. For example, A\B will end up calling Base.LinAlg.A_ldiv_B! while A'\B will end up calling Base.LinAlg.Ac_ldiv_B, even though we used the same left-division operator. This works for matrices too: A.'\B.' would call Base.LinAlg.At_ldiv_Bt. The left-division operator is powerful, and it makes it easy to write compact, readable code that is flexible enough to solve all sorts of systems of linear equations.
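As a hedged illustration of that dispatch (Julia 0.6 syntax), the code we write only ever uses \, while the parser selects the specialized method; A and b below are illustrative:

A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
b = [1.0, 2.0, 3.0]
x1 = A \ b    # lowers to an A_ldiv_B!-style call after factorization
x2 = A' \ b   # adjoint left-hand side: an Ac_ldiv_B-style call
x3 = A.' \ b  # transposed left-hand side: an At_ldiv_B-style call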

Special matrices

Matrices with special symmetries and structures arise often in linear algebra and are frequently associated with various matrix factorizations. Julia features a rich collection of special matrix types, which allow fast computation with routines specialized for particular matrix types.

The following tables summarize the types of special matrices that have been implemented in Julia, as well as whether hooks to various optimized methods for them in LAPACK are available.

Type             Description
Hermitian        Hermitian matrix
UpperTriangular  Upper triangular matrix
LowerTriangular  Lower triangular matrix
Tridiagonal      Tridiagonal matrix
SymTridiagonal   Symmetric tridiagonal matrix
Bidiagonal       Upper/lower bidiagonal matrix
Diagonal         Diagonal matrix
UniformScaling   Uniform scaling operator
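These types can be constructed directly from the structured data they store. Below is a minimal sketch (Julia 0.6); the numeric values are illustrative:

# Tagging structure explicitly instead of storing a full dense matrix.
d  = [1.0, 2.0, 3.0]                     # main diagonal
dl = [4.0, 5.0]                          # sub-diagonal
du = [6.0, 7.0]                          # super-diagonal

D = Diagonal(d)                          # stores only the diagonal
T = Tridiagonal(dl, d, du)               # stores only the three diagonals
S = SymTridiagonal(d, du)                # diagonal plus one off-diagonal
U = UpperTriangular([1.0 2.0; 0.0 3.0])  # wraps a dense matrix, using only its upper triangle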

Elementary operations

Matrix type      +  -  *    \    Other functions with optimized methods
Hermitian                   MV   inv(), sqrtm(), expm()
UpperTriangular        MV   MV   inv(), det()
LowerTriangular        MV   MV   inv(), det()
SymTridiagonal   M  M  MS   MV   eigmax(), eigmin()
Tridiagonal      M  M  MS   MV
Bidiagonal       M  M  MS   MV
Diagonal         M  M  MV   MV   inv(), det(), logdet(), /()
UniformScaling   M  M  MVS  MVS  /()

Legend:

Key         Description
M (matrix)  An optimized method for matrix-matrix operations is available
V (vector)  An optimized method for matrix-vector operations is available
S (scalar)  An optimized method for matrix-scalar operations is available
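For instance, here is a minimal sketch of how a Diagonal matrix benefits from such optimized methods (Julia 0.6; the entries are illustrative), since \, inv() and det() reduce to element-wise work on the stored diagonal:

D = Diagonal([2.0, 4.0, 8.0])
b = [1.0, 1.0, 1.0]
D \ b     # element-wise division by the diagonal: [0.5, 0.25, 0.125]
inv(D)    # Diagonal of the reciprocals of the diagonal entries
det(D)    # product of the diagonal entries: 64.0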

Matrix factorizations

Matrix type      LAPACK  eig()  eigvals()  eigvecs()  svd()  svdvals()
Hermitian        HE             ARI
UpperTriangular  TR      A      A          A
LowerTriangular  TR      A      A          A
SymTridiagonal   ST      A      ARI        AV
Tridiagonal      GT
Bidiagonal       BD                                   A      A
Diagonal         DI             A

Legend:

Key           Description
A (all)       An optimized method to find all the characteristic values and/or vectors is available, e.g. eigvals(M)
R (range)     An optimized method to find the il-th through the ih-th characteristic values is available, e.g. eigvals(M, il, ih)
I (interval)  An optimized method to find the characteristic values in the interval [vl, vh] is available, e.g. eigvals(M, vl, vh)
V (vectors)   An optimized method to find the characteristic vectors corresponding to the characteristic values x = [x1, x2, ...] is available, e.g. eigvecs(M, x)
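For example, here is a minimal sketch of the range and interval variants for a SymTridiagonal matrix (Julia 0.6; the range form takes a UnitRange of indices and the interval form a pair of real bounds; the data is illustrative):

S = SymTridiagonal([1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5])
eigvals(S)            # all four eigenvalues
eigvals(S, 1:2)       # only the two smallest eigenvalues, selected by index
eigvals(S, 0.0, 2.5)  # only the eigenvalues lying between 0.0 and 2.5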

The uniform scaling operator

A UniformScaling operator represents a scalar times the identity operator, λ*I. The identity operator I is defined as a constant and is an instance of UniformScaling. The size of these operators is generic and matches the other matrix in the binary operations +, -, * and \. For A+I and A-I this means that A must be square. Multiplication with the identity operator I is a no-op (except for checking that the scaling factor is one) and therefore almost without overhead.
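A minimal sketch of I in use (Julia 0.6; the matrix is illustrative):

A = [1.0 2.0; 3.0 4.0]
A + I         # adds 1 to each diagonal entry; A must be square for + and -
A - 3I        # a UniformScaling can be scaled like a number
I * A == A    # multiplying by I is a no-op up to the scaling-factor check
(2I) \ A      # the same as dividing A by 2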

Matrix factorizations

Matrix factorizations (a.k.a. matrix decompositions) compute the factorization of a matrix into a product of matrices, and are one of the central concepts in linear algebra.

The following table summarizes the types of matrix factorizations that have been implemented in Julia. Details of their associated methods can be found in the Linear Algebra section of the standard library documentation.

Type             Description
Cholesky         Cholesky factorization
CholeskyPivoted  Pivoted Cholesky factorization
LU               LU factorization
LUTridiagonal    LU factorization for Tridiagonal matrices
UmfpackLU        LU factorization for sparse matrices (computed by UMFPack)
QR               QR factorization
QRCompactWY      Compact WY form of the QR factorization
QRPivoted        Pivoted QR factorization
Hessenberg       Hessenberg decomposition
Eigen            Spectral decomposition
SVD              Singular value decomposition
GeneralizedSVD   Generalized SVD
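As a minimal sketch of how these objects typically arise (Julia 0.6; the matrices are illustrative), the *fact family of functions returns the corresponding factorization types, whose components can then be extracted or reused in solves:

A = [1.5 2 -4; 3 -1 -6; -10 2.3 4]
B = [4.0 1.0; 1.0 3.0]   # symmetric positive definite

F = lufact(A)    # LU; F[:L], F[:U] and F[:p] give the factors and permutation
Q = qrfact(A)    # QRCompactWY for a dense input
C = cholfact(B)  # Cholesky; C[:U] is the upper triangular factor
E = eigfact(B)   # Eigen; E[:values] and E[:vectors]
S = svdfact(A)   # SVD; S[:U], S[:S] and S[:Vt]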

© 2009–2016 Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors
Licensed under the MIT License.
https://docs.julialang.org/en/release-0.6/manual/linear-algebra/
