Addons/math/lapack
math/lapack
General Explanation
LAPACK (Linear Algebra Package) is a set of routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers.
The math/lapack addon was replaced by the math/lapack2 addon as of J9.01.
Labs and Demos
See the LAPACK lab in the Studio|Labs menu.
For perspective, see the addon 'math/eigenpic'. (Historically, the eigenpictures demo in the Studio|Demos menu did the same thing using LAPACK, but LAPACK is primarily valuable for its speed in handling certain large matrices.)
An Example of Using LAPACK
Here's an example of using eigenvalues to find the (matrix) square root of a matrix, i.e. for matrix A, find B such that A -: B +/ . * B.
First, load the lapack scripts we'll need and make "jlapack" functions available without namespace qualification.
   load 'math/lapack math/lapack/dgeev'
   coinsert 'jlapack'
Next, define a couple of utilities we'll need.
NB.* dm: put vector as major diagonal of diagonal matrix of 0s.
dm=: *=@i.@#

NB.* round: round number y to nearest x, e.g. 0.05 round 0.92 0.94 0.98.
round=: 13 : 'x*<.0.5+y%x'
(The definition of dm above, from Essays/Eigenvalues, replaces the previously used verb diagvec.)
   diagvec=: (([: - [: i. #) |."0 1 ] {.~&> #)"1
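To see what these utilities do, here is a quick illustrative check (not part of the original page): dm places a vector on the main diagonal, and round rounds to the nearest multiple of its left argument.

   dm 1 2 3                     NB. vector placed on the main diagonal
1 0 0
0 2 0
0 0 3
   0.05 round 0.92 0.94 0.98    NB. round to the nearest 0.05
0.9 0.95 1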
Now, write the square root function.
NB.* matSqrt: return (matrix) square root of (square) matrix y.
matSqrt=: 3 : 0
 y=. (]{.~2$[: >./$) y    NB. Force to be square mat: pad w/0s.
 'leigv eigv reigv'=. dgeev y
 reigv +/ . * (%:dm eigv) +/ . * %.reigv
)
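As the multiple assignment suggests, dgeev returns a three-item boxed list: left eigenvectors, eigenvalues and right eigenvectors. A small structural check (illustrative; the eigenvalues themselves depend on the argument):

   'lv ev rv'=. dgeev i.2 2
   $ev          NB. one (possibly complex) eigenvalue per row of the argument
2
   $rv          NB. the right eigenvectors form a square matrix
2 2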
To see this in action, we'll find the square root of the i.2 2 matrix.
   i.2 2
0 1
2 3
   ]mm=. matSqrt i.2 2
 0.2570312j0.64730689 0.45771509j_0.1817485
0.91543019j_0.36349701 1.6301765j0.10206138
Square this result to ensure it's really the square root.
   mm+/ . *mm
6.1062266e_16j_1.110223e_16 1j8.3266727e_17
2j_2.220446e_16 3j_5.5511151e_17
This looks funny because floating-point imprecision has left us with some small values very close to zero. Let's round the result to the nearest trillionth so that it's clearer.
   1e_12 round mm+/ . *mm
0 1
2 3
Some Caveats
Our insistence on a square matrix will change the shape of a non-square matrix we use as an argument.
   matSqrt 10+i. 2 3
2.0507211j0.20540582 2.2280857j_0.15996935 2.4054503j_0.52534453
2.6331921j_0.18905469 2.860934j0.14723515 3.0886759j0.48352499
0 0 0
Checking the result:
   1e_12 round (+/ . *)~matSqrt 10+i. 2 3
10 11 12
13 14 15
 0  0  0
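The shape change comes from the padding phrase at the start of matSqrt, which overtakes to a square of the larger dimension and fills with zeros. Applied on its own (an illustrative check):

   (]{.~2$[: >./$) 10+i. 2 3
10 11 12
13 14 15
 0  0  0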
A more general problem with the LAPACK functions is that they convert their results to complex numbers. This often doesn't matter, but the difference is not obvious; we have to use the 3!:0 "type" foreign to see it.
So, we start with a matrix of real numbers.
   ]rfpm=: <:+:?2 2$0
_0.55999291 0.20008783
 _0.4484063 0.92774329
   3!:0 rfpm
8
Type 8 is (real) "floating point". However, when we check our answer by squaring the square root, it may look the same after we've done our rounding, but it is subtly different.
   1e_12 round (+/ . *)~matSqrt rfpm
_0.55999291 0.20008783
 _0.4484063 0.92774329
   3!:0 ] 1e_12 round (+/ . *)~matSqrt rfpm
16
Type 16 is "complex". To convert the result to the type we're expecting, assuming we know that we don't really need the imaginary portion, we can use +. to separate the real and imaginary portions, then {."1 to take only the real part.
   ([:{."1 +.) 1e_12 round (+/ . *)~matSqrt rfpm
_0.55999291 0.20008783
 _0.4484063 0.92774329
   3!:0 ] ([:{."1 +.) 1e_12 round (+/ . *)~matSqrt rfpm
8
Type 8 is (real) "floating point".
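As an aside not in the original write-up, the circle function 9 o. extracts the real part directly and has the same effect as the +. / {."1 combination above:

   9 o. 1e_12 round (+/ . *)~matSqrt rfpm
_0.55999291 0.20008783
 _0.4484063 0.92774329
   3!:0 ] 9 o. 1e_12 round (+/ . *)~matSqrt rfpm
8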
Some Explanation of the Method Used
This square-root method is an instance of a more general method of applying a function to a matrix, based on "Sylvester's Matrix Theorem". We can implement this more general method as an adverb.
NB.* matFnc: apply arbitrary function to (square) matrix y using Sylvester's
NB.  matrix theorem.
matFnc=: 1 : 0
 y=. (]{.~2$[: >./$) y    NB. Force to be square mat: pad w/0s.
 'leigv eigv reigv'=. dgeev y
 reigv +/ . * (u dm eigv) +/ . * %.reigv
)
Here, instead of using the square-root function on the last line, we apply the provided function "u".
   ]mm=. 3+i.3 3
3  4  5
6  7  8
9 10 11
   ]rr=. *: matFnc mm
 78  90 102
132 153 174
186 216 246
Checking the result:
   mm +/ . * mm
 78  90 102
132 153 174
186 216 246
Check that we can replicate our earlier square root function with this more general method:
   (%: matFnc mm)-:matSqrt mm
1
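Other scalar functions can be lifted to matrices the same way. A minimal sketch with hypothetical names (outputs not shown; as with matSqrt, results come back complex and may need rounding):

   matExp=: ^  matFnc     NB. matrix exponential via the eigenvalues
   matLog=: ^. matFnc     NB. matrix logarithm, principal branch per eigenvalue
   matInv=: %  matFnc     NB. for a diagonalizable, invertible matrix this
                          NB. should agree with %. up to rounding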
Interestingly, the square-root of the square does not give us the original matrix.
   matSqrt *: matFnc mm
4.1060503 4.2385035 4.3709568
6.0928488 7.0200216 7.9471941
8.0796474 9.8015394 11.523432
However, the square of this square root does get us back to where we started.
   (+/ . *)~matSqrt *: matFnc mm
 78  90 102
132 153 174
186 216 246
   *: matFnc matSqrt *: matFnc mm
 78  90 102
132 153 174
186 216 246
Discussion
You are really solving A = X^2. Even in the 1x1 case, you expect 2 solutions. In the nxn case you generally expect 2^n solutions, corresponding to +/- the square root of |e|, where e is an eigenvalue. An easier and more common example than finding square roots is finding powers of a matrix after diagonalizing it. In this case it is clear that you can replace matrix multiplication by ordinary multiplication, and you do not need the full power of Sylvester's theorem. -- John Randall 2007-08-28
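A sketch of the power case mentioned above, reusing the matFnc adverb defined earlier (illustrative only; the name matCube is hypothetical and, as before, the result is complex and may need rounding and real-part extraction):

   matCube=: ^&3 matFnc     NB. A^3 via the eigendecomposition
   NB. after rounding, this should match the explicit product
   NB.   mm +/ . * mm +/ . * mm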