Chapter 22: Vectors and Matrices

In this chapter we look at built-in functions which support computation with vectors and matrices.

22.1 The Dot Product Conjunction

Recall the composition of verbs, from Chapter 08.
A sum-of-products verb can be composed from sum and product
with the @: conjunction.
There is a conjunction . (dot, called "Dot Product").
It can be used instead of @: to compute the sum-of-products of two lists.
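For example (the particular values of P and Q here are illustrative assumptions):

   P =: 2 3 4
   Q =: 1 0 2
   P (+/ @: *) Q
10
   P (+/ . *) Q
10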
Evidently, the . conjunction is a form of composition, a variation of @: or @. We will see below that it is more convenient for working with vectors and matrices.

22.2 Scalar Product of Vectors

Recall that P is a list of 3 numbers. If we interpret these numbers as coordinates of a point in 3-dimensional space, then P can be regarded as defining a vector, a line-segment with length and direction, from the origin at 0 0 0 to the point P. We can refer to the vector P.
A textbook definition of the scalar product of vectors P and Q may appear in the form:

   (magnitude P) * (magnitude Q) * (cos alpha)

where the magnitude (or length) of a vector is the square root of the sum of the squares of its components, and alpha is the smallest non-negative angle between P and Q. To show the equivalence of this form with P dot Q, we can define utility verbs ma for magnitude-of-a-vector and ca for cos-of-angle-between-vectors.

   ma =: %: @: (+/ @: *:)
   ca =: 4 : '(-/ *: b,(ma x-y), c) % (2*(b=.ma x)*(c=. ma y))'

We expect the magnitude of vector 3 4 to be 5, and expect the angle between P and itself to be zero, and thus its cosine to be 1.
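To check, we would expect a session along these lines:

   ma 3 4
5
   P ca P
1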
Then we see that the dot verb is equivalent to the textbook form above:
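With dot defined as the sum-of-products verb (recalled in section 22.4 below as (+/ . *)), and the assumed values of P and Q above, we would expect:

   dot =: +/ . *
   P dot Q
10
   (ma P) * (ma Q) * (P ca Q)
10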
22.3 Matrix Product

The verb we called dot is the "matrix product" for vectors and matrices. To compute Z =: A dot B, the last dimension of A must equal the first dimension of B.

   A =: 2 5 $ 1
   B =: 5 4 $ 2
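For example, we would expect:

   Z =: A dot B
   Z
10 10 10 10
10 10 10 10
   $ Z
2 4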
The example shows that the last-and-first
dimensions disappear from the result. If these two
dimensions are not equal then an error is signalled.
22.4 Generalizations

22.4.1 Various Verbs

The "Dot Product" conjunction forms the dot-product verb with (+/ . *). Other verbs can be formed on the pattern (u . v).

   g =: +. / . *.

Taking the "child" relationship to be the matrix C:

   C =: 4 4 $ 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0

Then the grandchild relationship is, so to speak, the child relationship squared.
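Displaying C and computing the grandchild matrix G (the name G is as used below):

   C
0 0 0 0
1 0 0 0
1 0 0 0
0 1 0 0
   G =: C g C
   G
0 0 0 0
0 0 0 0
0 0 0 0
1 0 0 0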
We can see from C that person 3 is a child of person 1, and person 1 is a child of person 0. Hence, as we see in G, person 3 is a grandchild of person 0.

22.4.2 Symbolic Arithmetic

By 'symbolic arithmetic' is meant, for example, symbolically adding the strings 'a' and 'b' to get the string 'a+b'. Here is a small collection of utility functions to do some limited symbolic arithmetic on (boxed) strings.

   pa =: ('('&,) @: (,&')')
   cp =: [ ` pa @. (+./ @: ('+-*' & e.))
   symbol =: (1 : (':';'< (cp > x), u, (cp > y)'))(" 0 0)
   splus =: '+' symbol
   sminus =: '-' symbol
   sprod =: '*' symbol
   a =: <'a'
   b =: <'b'
   c =: <'c'
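For example, we would expect:

   a splus b
┌───┐
│a+b│
└───┘
   (a splus b) sprod c
┌───────┐
│(a+b)*c│
└───────┘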
As a variant of the symbolic product, we could elide the multiplication symbol to give an effect more like conventional notation:

   sprodc =: '' symbol
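giving, for example:

   a sprodc b
┌──┐
│ab│
└──┘
   (a splus b) sprodc c
┌──────┐
│(a+b)c│
└──────┘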
As arguments to the "Dot Product" conjunction we could supply verbs to perform symbolic arithmetic. For the dot verb, which we recall is (+/ . *), a symbolic version is:

   sdot =: splus / . sprodc

To illustrate:

   S =: 3 2 $ < "0 'abcdef'
   T =: 2 3 $ < "0 'pqrstu'
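The expected result is:

   S sdot T
┌─────┬─────┬─────┐
│ap+bs│aq+bt│ar+bu│
├─────┼─────┼─────┤
│cp+ds│cq+dt│cr+du│
├─────┼─────┼─────┤
│ep+fs│eq+ft│er+fu│
└─────┴─────┴─────┘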
22.4.3 Matrix Product in More than 2 Dimensions

An example in 3 dimensions will be sufficiently general. Symbolically:

   A =: 1 2 3 $ <"0 'abcdef'
   B =: 3 2 2 $ <"0 'mnopqrstuvwx'
The last dimension of A must equal the first dimension of B. The shape of the result Z is the leading dimensions of A followed by the trailing dimensions of B.
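For example, using the symbolic sdot verb defined above, we would expect:

   Z =: A sdot B
   $ Z
1 2 2 2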
The last-and-first dimensions of A and B have disappeared, because each dimensionless scalar in Z combines a "row" of A with a "column" of B. We see in the result Z that each row of A is combined separately with the whole of B.

22.4.4 Dot Compared With @:

Recall from Chapter 07 that a dyadic verb v has a left and right rank. Here are some utility functions to extract the ranks from a given verb.

   RANKS =: 1 : 'u b. 0'
   LRANK =: 1 : '1 { (u RANKS)'   NB. left rank only
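For example, the multiplication verb * has ranks 0 0 0, so its left rank is 0:

   * RANKS
0 0 0
   * LRANK
0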
The general scheme defining dyadic verbs of the form (u . v) is:

   u . v   means   u @ (v " (1+L, _))   where L = (v LRANK)

or equivalently,

   u . v   means   (u @: v) " (1+L, _)

and hence +/ . * means (+/ @: *)"1 _ and so we see the difference between . and @:. For simple vector arguments they are the same, in which case the dimensions of the arguments must be the same, but this is not the condition we require for matrix multiplication in general, where (in the example above) each row of A is combined with the whole of B.

22.5 Determinant

The monadic verb (- / . *) computes the determinant of a matrix.

   det =: - / . *
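For example, we would expect:

   det 2 2 $ 3 4 2 3
1
   det 2 2 $ 1 2 3 4
_2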
Symbolically:

   sdet =: sminus / . sprodc
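Applied to a general 2-by-2 symbolic matrix, the expected result is:

   sdet 2 2 $ <"0 'abcd'
┌─────┐
│ad-cb│
└─────┘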
22.5.1 Singular Matrices

A matrix is said to be singular if its rows (or columns) are not linearly independent; for example, if one row (or column) can be obtained from another by multiplying by a constant. A singular matrix has a zero determinant.
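For example, numerically, and then symbolically with a second row which is m times the first (the names m and SING are introduced here for illustration):

   det 2 2 $ 1 2 2 4
0
   m =: <'m'
   SING =: 2 2 $ a , b , (m sprodc a) , (m sprodc b)
   sdet SING
┌───────┐
│amb-mab│
└───────┘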
We see that the resulting term (amb-mab) must be zero for all a, b and m.

22.6 Matrix Divide
22.6.1 Simultaneous Equations

The built-in verb %. (percent dot) is called "Matrix Divide". It can be used to find solutions to systems of simultaneous linear equations. For example, consider the equations written conventionally as:

   3x + 4y = 11
   2x + 3y = 8

Rewriting as a matrix equation, we have, informally, M dot U = R, where M is the matrix of coefficients, U is the vector of unknowns x,y and R is the vector of right-hand-side values:
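For this example:

   M =: 2 2 $ 3 4 2 3
   R =: 11 8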
The vector of unknowns U (that is, x,y) can be found by dividing R by matrix M:
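We would expect:

   U =: R %. M
   U
1 2
   M dot U
11 8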
We see that M dot U equals R, that is, U solves the equations.

22.6.2 Complex, Rational and Vector Variables

The equations to be solved may be in complex variables.
For example:
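Here is one possible session (these particular complex values are assumed for illustration):

   M =: 2 2 $ 1 1 1 _1
   R =: 1j1 1j_1
   R %. M
1 0j1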
or in rationals. In this case both M and R must be rationals to give a rational result.

   M =: 2 2 $ 3x 4x 2x 3x
   R =: 15r22 11r16
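We would expect an exact rational result:

   R %. M
_31r44 123r176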
In the previous examples the unknowns in U were scalars. Now suppose the unknowns are vectors and our equations for solving are:

   3x + 4y = 15 22
   2x + 3y = 11 16

so we write:

   M =: 2 2 $ 3 4 2 3
   R =: 2 2 $ 15 22 11 16
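We would expect:

   U =: R %. M
   U
1 2
3 4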
The unknowns x and y are the rows of U, that is, vectors.

22.6.3 Curve Fitting

Suppose we aim to plot the best straight line fitting a set of data points. If the data points are x,y pairs given as:

   x =: 10 20 30
   y =: 31 49 70

we aim to find a and b for the equation:

   y = a + bx

The 3 data points give us 3 equations in the 2 unknowns a and b. Conventionally:

   1.a + 10.b = 31
   1.a + 20.b = 49
   1.a + 30.b = 70

so we take the matrix of coefficients M to be:

   M =: 3 2 $ 1 10 1 20 1 30

and divide y by matrix M to get the vector of unknowns U (that is, a,b):
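We would expect:

   U =: y %. M
   U
11 1.95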
Here we have more equations than unknowns (more rows than columns in M), and so the solution U is the best fit to all the equations together. We see that M dot U is close to, but not exactly equal to, y.
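That is:

   M dot U
30.5 50 69.5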
The method extends straightforwardly to fitting a polynomial to a set of data points. Suppose we aim to fit

   y = a + b.x + c.x^2

to the data points:

   x =: 0 1 2 3
   y =: 1 6 17 34.1

The four equations to be solved are:

   1.a + b.x0 + c.x0^2 = y0
   1.a + b.x1 + c.x1^2 = y1
   1.a + b.x2 + c.x2^2 = y2
   1.a + b.x3 + c.x3^2 = y3

and so the columns of matrix M are 1, x and x^2, conveniently given by:

   x ^/ 0 1 2
1 0 0
1 1 1
1 2 4
1 3 9
   M =: x ^/ 0 1 2

and the unknowns a, b, c are given by vector U as follows:
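We would expect:

   U =: y %. M
   U
1.005 1.955 3.025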
There may be more equations than unknowns, as this example shows, but evidently there cannot be fewer. That is, in R %. M matrix M must have no more columns than rows.

22.6.4 Dividing by Higher-Rank Arrays

Here is an example of U =: R %. M, in which the divisor M is of rank 3.

   M =: 3 2 2 $ 3 4 2 3 0 3 1 2 3 1 2 3
   R =: 21 42
The dyadic rank of %. is _ 2:

   %. b. 0
2 _ 2

and so in this example the whole of R is combined separately with each of the 3 matrices in M. That is, we have 3 separate sets of equations, each with the same right-hand side R.
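We would expect:

   U =: R %. M
   U
_105 84
  28  7
   3 12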
Thus we have 3 separate solutions (the rows of U).

22.7 Identity Matrix

A (non-singular) square matrix M divided by itself yields an "identity matrix", I say, such that (M dot I) = M.

   M =: 3 3 $ 3 4 7 0 0 4 6 0 3
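For example, we would expect (allowing for possible rounding in the display):

   I =: M %. M
   I
1 0 0
0 1 0
0 0 1
   M dot I
3 4 7
0 0 4
6 0 3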
22.8 Matrix Inverse

The monadic verb %. computes the inverse of a matrix. That is, %. M is equivalent to I %. M for a suitable identity matrix I:
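Continuing with M and I from the previous section, we would expect both forms to give the same result (the display layout may vary slightly):

   %. M
   0   _0.125 0.166667
0.25 _0.34375   _0.125
   0     0.25        0
   I %. M
   0   _0.125 0.166667
0.25 _0.34375   _0.125
   0     0.25        0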
For a vector V, the inverse W has the reciprocal magnitude and the same direction. Thus the product of the magnitudes is 1, and the cosine of the angle between them is 1.
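For example (the names V and W and the value of V are assumed for illustration; ma and ca are the utility verbs from section 22.2):

   V =: 3 4
   W =: %. V
   W
0.12 0.16
   (ma V) * (ma W)
1
   V ca W
1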
This is the end of Chapter 22.
The examples in this chapter were executed using J version 802 beta.
This chapter last updated 19 May 2015
Copyright © Roger Stokes 2014.
This material may be freely reproduced,
provided that acknowledgement is made.