It looks like the nullspace (kernel) of a matrix with a float entry fails. How do I fix or work around this? I need it!
> interface(version);
        TTY Iris, Release 5, SGI MIPS UNIX, Jan 11 1998
> with(linalg):
> nullspace( matrix(1,2, [ 1 , 3/2 ] ) );
                                 {[-3/2, 1]}
> nullspace( matrix(1,2, [ 1 , 1.5 ] ) );
                                     {}
It is corrected in Maple 6. (U. Klein)
Nullspace appears to work for square matrices, but not for non-square ones. So a work-around would be to use the following instead of nullspace(A):
> AA := evalm( htranspose(A) &* A );
> nullspace(AA);
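To see why this workaround is sound: for a real matrix A, A^T A x = 0 implies x^T A^T A x = ||A x||^2 = 0, so A and A^T A have the same nullspace, and A^T A is always square. Below is a minimal sketch in Python (not Maple) checking this for the 1 x 2 example from the question; the helper names are mine.

```python
# Sketch: nullspace(A^T A) = nullspace(A) for real A, since
# A^T A x = 0  implies  ||A x||^2 = x^T A^T A x = 0.
# Checked on the 1x2 example A = [1, 1.5]; helper names are mine.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1.0, 1.5]]
AA = matmul(transpose(A), A)      # square 2x2: [[1.0, 1.5], [1.5, 2.25]]

# The vector (-3/2, 1) spans nullspace(A); it lies in nullspace(A^T A) too.
v = [[-1.5], [1.0]]
print(matmul(A, v))               # [[0.0]]
print(matmul(AA, v))              # [[0.0], [0.0]]
```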
(1) You can try (for your example)
> A := matrix(1,2,[1,1.5]):
> linsolve(A, [0]);
But then you have to factor out the parameters to get the basis.
(2) You can multiply your matrix by a power of 10 to eliminate the decimals. This has no effect on the nullspace.
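A quick sketch of point (2) in Python rather than Maple: scaling A by a power of 10 clears the decimals and cannot change the nullspace, since A x = 0 iff (10^k A) x = 0.

```python
# Sketch: multiplying the matrix by a power of 10 clears the decimals
# without changing the nullspace, since A x = 0 iff (10^k A) x = 0.
from fractions import Fraction

row   = [Fraction(1), Fraction(3, 2)]    # the example row [1, 3/2]
row10 = [10 * e for e in row]            # [10, 15] -- integer entries

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [Fraction(-3, 2), Fraction(1)]       # spans the nullspace
print(dot(row, x), dot(row10, x))        # 0 0
```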
(3) If rref gives

    [ I  F ]

then a basis for the nullspace is given by the columns of

    [ -F ]
    [  I ]

But if rref doesn't produce this form, you have to do some column swapping first.
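A small Python sketch of this construction (the function name is mine): given the F block from rref(A) = [ I F ], build the columns of [ -F ; I ] and check them against the 1 x 2 example, where rref is already [1 | 1.5] so F = [[1.5]].

```python
# Sketch: if rref(A) = [ I  F ] with I of size r and F of size r x (n-r),
# the columns of [ -F ; I ] form a nullspace basis.  Demo for A = [1, 1.5],
# whose rref is [1 | 1.5], i.e. F = [[1.5]].  Function name is mine.

def nullspace_basis_from_F(F):
    """Return the columns of [ -F ; I ] as a list of vectors."""
    r = len(F)
    k = len(F[0]) if r else 0            # number of free columns
    basis = []
    for j in range(k):
        top = [-F[i][j] for i in range(r)]
        bottom = [1.0 if t == j else 0.0 for t in range(k)]
        basis.append(top + bottom)
    return basis

F = [[1.5]]
basis = nullspace_basis_from_F(F)
print(basis)                              # [[-1.5, 1.0]]

A = [1.0, 1.5]
print(sum(a * b for a, b in zip(A, basis[0])))   # 0.0
```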
(4) (The best?) You can do evalf(Svd(A,U,V)). The columns of V corresponding to the zero singular values (i.e., the last n columns of V, where n is the nullity of A: n = # of columns of A - rank(A)) will be a basis for Nul A. An orthonormal basis, in fact.
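For the simplest case, a nonzero 1 x 2 row, the SVD can be written down by hand, which makes point (4) concrete. This Python sketch (not Maple) does that for A = [1, 1.5]: the only singular value is ||A||, the first right singular vector is A/||A||, and the remaining right singular vector is an orthonormal basis for Nul A. It reproduces the orthonormal vector that appears elsewhere in this thread.

```python
# Sketch of the SVD idea for a nonzero 1x2 row A = [a, b]:
# the single singular value is ||A||, the first right singular vector
# is A/||A||, and the orthogonal unit vector [-b, a]/||A|| is an
# orthonormal basis for Nul A.
import math

a, b = 1.0, 1.5
norm = math.hypot(a, b)                  # the single singular value
v2 = (-b / norm, a / norm)               # orthonormal kernel vector

print(v2)       # approximately (-0.8320502943, 0.5547001962)
print(a * v2[0] + b * v2[1])             # ~0
```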
Maybe this fixes your problem:
> with(linalg):
> map(convert, matrix(1,2,[1,1.5]), rational);
                                  [1  3/2]
> nullspace(%);
                                 {[-3/2, 1]}
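The same convert-to-rational trick can be sketched in Python: 1.5 is exactly representable in binary floating point, so converting it to a rational recovers 3/2 exactly, and exact arithmetic then gives the exact nullspace vector. (Caveat: not every decimal is exact this way; 0.1, for instance, converts to a large binary fraction rather than 1/10.)

```python
# Sketch of the convert-to-rational trick: 1.5 is exactly representable
# in binary floating point, so Fraction(1.5) recovers 3/2 exactly, and
# exact arithmetic then finds the exact nullspace vector of [1, 3/2].
from fractions import Fraction

entry = Fraction(1.5)
print(entry)                     # 3/2

# Exact kernel of [1, 3/2]: solve 1*x1 + (3/2)*x2 = 0 with x2 = 1.
x2 = Fraction(1)
x1 = -entry * x2
print([x1, x2])                  # [Fraction(-3, 2), Fraction(1, 1)]
```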
On a PC with NT 4.0, Release 4 did it right, but Release 5 did not.
The nullspace of a square matrix (for example) contains only the zero vector, unless the matrix can be reduced by Gaussian elimination to a matrix with one or more rows of zeros.
Using floats to approximate the entries will make this unlikely. What tends to happen is that you get very small numbers in the reduced matrix, which should be zero. (Perhaps you know this already.) This same problem occurs when finding eigenvalues.
I ran across this while writing a linear algebra text for my students here at Kwantlen.
My solution is to find the reduced matrix and set to zero any elements at the ends of the lower rows that are extremely small (e.g., 10^(-18)). This can be done manually, by inspection, or by applying a "filter" that zeros any element smaller than some given amount.
The result should be a close approximation to the nullspace. Hopefully this will do the job for you.
The following procedure does this:
> newnullspace := proc(m, threshold)
> local filter, ht, augmat, redmat;
>   filter := (x,t) -> if abs(x) < t then 0 else x fi;
>   ht := linalg[coldim](m);
>   augmat := linalg[augment](m, vector(ht, 0));
>   redmat := linalg[gausselim](augmat);
>   redmat := map(filter, evalm(redmat), threshold);
>   RETURN(linalg[backsub](redmat));
> end:
> linalg[nullspace](matrix(3,3,[1,2,4,2,3,5,6,6,6.001]));
                                      {}
> newnullspace(matrix(3,3,[1,2,4,2,3,5,6,6,6.001]), 0.01);
             [1.999500000 _t[1], -2.999666667 _t[1], _t[1]]
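The same filter idea can be sketched in Python rather than Maple: run Gaussian elimination, zero any entry below a threshold, then back-substitute with the last variable as the free parameter. The matrix is the near-singular example above; the function name is mine, and this sketch assumes no pivoting is needed (true for this matrix).

```python
# Sketch of the threshold-filter approach: eliminate, zero tiny entries,
# then back-substitute with the last variable as free parameter t = 1.
# Assumes no pivoting is needed; function name is mine.

def filtered_nullvector(A, threshold):
    """Approximate nullspace vector of a nearly singular square matrix."""
    n = len(A)
    M = [row[:] for row in A]
    for k in range(n - 1):                     # forward elimination
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(n):
                M[i][j] -= f * M[k][j]
    # The filter: zero any entry smaller than the threshold.
    M = [[0.0 if abs(e) < threshold else e for e in row] for row in M]
    x = [0.0] * n
    x[n - 1] = 1.0                             # free parameter t = 1
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = -sum(M[i][j] * x[j] for j in range(i + 1, n)) / M[i][i]
    return x

A = [[1, 2, 4], [2, 3, 5], [6, 6, 6.001]]
x = filtered_nullvector(A, 0.01)
print(x)                                       # [2.0, -3.0, 1.0]
# The residual A.x is tiny but not exactly zero:
print([sum(a * b for a, b in zip(row, x)) for row in A])
```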
The problem lies in the procedure `linalg/kernel/float`. It computes the nullity of a matrix by counting the number of singular values that are 0. Alas, if the matrix has fewer rows than columns, this count does not equal the nullity.
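The arithmetic behind the bug is simple enough to spell out. A matrix with m rows, n columns and rank r has nullity n - r, but counting zero singular values only gives min(m,n) - r, which is too small whenever m < n. A tiny Python sketch (function names are mine):

```python
# Sketch of the arithmetic behind the bug: nullity is n - r, but counting
# zero singular values among the min(m, n) of them gives min(m, n) - r,
# which undercounts whenever m < n.  Function names are mine.

def correct_nullity(m, n, r):
    return n - r

def buggy_nullity(m, n, r):            # what Release 5 effectively computes
    return min(m, n) - r

# The 1x2 example [1, 1.5]: full row rank, r = 1.
m, n, r = 1, 2, 1
print(correct_nullity(m, n, r))        # 1 -- one kernel vector exists
print(buggy_nullity(m, n, r))          # 0 -- hence the empty result {}

# For square or tall matrices (m >= n) the two agree:
print(correct_nullity(3, 3, 2), buggy_nullity(3, 3, 2))   # 1 1
```

This also explains why padding the matrix with zero rows until it is square (as suggested elsewhere in the thread) works: it forces min(m,n) = n.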
One workaround is to make the matrix square by stacking it onto a zero matrix:
> M := matrix(1,2,[1,1.5]):
> M0 := stackmatrix(M, [0$2]):
> kernel(M0);
                  {vector([-.8320502943, .5547001962])}
A second is to convert the entries to rational values; however, this may be undesirable for performance reasons.
A third is to correct the offending procedure. Following is what I believe to be a corrected version [no guarantees].
`linalg/kernel/float` := proc(A, nullity)
local K, p, m, n, S, U, V;
option `Copyright (c) 1996 Waterloo Maple Inc. All rights reserved.`;
  m := linalg[rowdim](A);
  n := linalg[coldim](A);
  S := evalf(Svd(A, 'U', 'V'));
  # Determine the position, p, of the first zero in S.
  # Because the values of S are in decreasing order,
  # the remaining terms must also be zero.
  # If S has no zeros then p exits the loop assigned min(m,n)+1.
  # Columns p to n of V form a basis for the kernel of A.
  for p to min(m,n) while S[p] <> 0 and 10^(Digits - 1)*abs(S[p]) >= abs(S[1]) do od;
  if nargs > 1 then nullity := n - p + 1 fi;
  K := {linalg['col'](V, p .. n)};
  if type(A, 'array'(2)) then K else map(convert, K, list) fi
end:
Comment: The loop that counts the number of singular values that are 0 assumes that the singular values in S are arranged in decreasing order. This seems to be true, but the help page for Svd does not mention this fact. Presumably that is how LINPACK does it...
> kernel(M);
                  {vector([-.8320502943, .5547001962])}
> evalm(M &* %[]);
                                 vector([0])
If you want to test this procedure, you should execute readlib(`linalg/kernel`) before assigning it; otherwise it will be replaced by the old version.
To install this procedure into your own library use
save `linalg/kernel`, `linalg/kernel/float`, ``.yourlibrary.`/linalg/kernel.m`:
where yourlibrary expands to the path of your personal library (not the main maple library).
If it precedes the main maple library in your libname assignment, then subsequent invocations of kernel will work properly [assuming my patch is correct].
Sorry, my mistake. I put in place code to do an SVD decomposition of the matrix but made a simple error. A workaround is to pad the matrix with enough zero rows to make it square. I.e.,
> nullspace( matrix( [[1,1.5],[0,0]] ) );
                    {[-.8320502943, .5547001962]}
It seems that the problem only occurs in the case of 1 x n matrices. linsolve works well, but returns the nullspace in parametric form (with _t[i]'s). To convert this to a basis you could use (general case):
> fnullspace := proc(B::matrix)
> local i, r, q;
>   q := linsolve(B, vector(rowdim(B), [0$rowdim(B)]));
>   r := `union`(seq(indets(q[i]), i=1..vectdim(q)));
>   {seq(map(coeff, q, r[i]), i=1..nops(r))}
> end:
> A := matrix(1,2,[1,1.5]):
> fnullspace(A);
                         {[1, -.6666666667]}
> B := matrix(3,3,[[1,1.5,1],[1,1.5,1],[1,1.5,1]]):
> fnullspace(B);
           {[1, -.6666666667, 0], [0, -.6666666667, 1]}
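The parameter-extraction step can be sketched in Python rather than Maple: each solution component is a linear combination of parameters t1, t2, ..., and taking the coefficient of each parameter in turn yields one basis vector per parameter. The data below mimics the 3 x 3 example's parametric solution; the representation as dicts is my own.

```python
# Sketch: a linsolve-style parametric solution gives each component as a
# linear combination of parameters; collecting the coefficient of each
# parameter yields one basis vector per parameter.  The dict encoding
# below is mine, mimicking the 3x3 example's solution.

# Each component is a dict {parameter_name: coefficient}.
solution = [
    {"t1": 1.0},                       # x1 =  t1
    {"t1": -2/3, "t2": -2/3},          # x2 = -2/3 t1 - 2/3 t2
    {"t2": 1.0},                       # x3 =  t2
]

params = sorted({p for comp in solution for p in comp})
basis = [[comp.get(p, 0.0) for comp in solution] for p in params]
print(basis)
# [[1.0, -0.6666666666666666, 0.0], [0.0, -0.6666666666666666, 1.0]]
```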
This is in response to a recent demonstration of a bug in linalg. I've lost the original, so I can't reply directly to it. The problem is caused by an incorrect calculation of the nullity (N) in `linalg/kernel/float`, introduced in Release 5.
Release 4 doesn't make this error. Use of the SVD is a good idea, but please, developers, let's do the implementation right! Below is an illustration of the problem, a rerun using a fixed version of `linalg/kernel/float`, and the fixed version of `linalg/kernel/float` itself.
> with(linalg):
Warning, new definition for norm
Warning, new definition for trace
> A := matrix(1,2,[1,-1.5]);
                              A := [1  -1.5]
> kernel(A);
                                     {}
> B := matrix(2,3,[[1,-1.5,0],[1,-1.5,0]]);
                              [1  -1.5  0]
                         B := [          ]
                              [1  -1.5  0]
> kernel(B);
                               {[0, 0, 1.]}
> read `patch.lna`;  # new `linalg/kernel/float`
> kernel(A);
                   {[-.8320502943, -.5547001962]}
> kernel(B);
            {[0, 0, 1.], [-.8320502943, -.5547001962, 0]}
The new calculation of N (nullity), and its new use, in the fixed `linalg/kernel/float`:
> eval(`linalg/kernel/float`);
proc(A, nullity)
local K, N, i, m, n, S, U, V;
  m := linalg[rowdim](A);
  n := linalg[coldim](A);
  S := evalf(Svd(A, 'U', 'V'));
  N := max(n-m, 0);                # Note new calculation of N
  for i to min(m, n) do
    if S[i] = 0 or 10^(Digits - 1)*abs(S[i]) < abs(S[1]) then
      N := N + min(m, n) - i + 1;
      break                        # Note different use of N
    fi
  od;
  if nargs = 2 then nullity := N fi;
  K := {linalg['col'](V, n + 1 - N .. n)};
  if type(A, 'array'(2)) then K else map(convert, K, list) fi
end
An analogous problem seems to occur with version 4.0:
> with(linalg):
> kernel(matrix(1,2,[1,3/2]));
                                 {[-3/2, 1]}
> kernel(matrix(1,2,[1,1.5]));
                            {[1, -.6666666667]}
Does anyone know how to fix or work around it for the 4.0 version as well?
This is neither analogous nor a problem. It is simply correct (the basis vector in the second case being a multiple of that of the first case).
On the other hand, Release 4 does have numerical problems with "kernel" for matrices with floats. For example, about half the time it returns {} for

> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));

where a correct result would be {[1, -2, 1]}.
The problem is that Release 4 uses essentially the method we’re taught in elementary linear
algebra courses: solve the system A x = 0
by Gaussian elimination (using linsolve).
Unfortunately, rounding error can get you into trouble when A is singular. In the case of
A = matrix([[3.,4,5],[6,7,8],[9,8,7]])
, Gaussian elimination is likely to return a
result such as
    [ 9    8              7           ]
    [ 0    1.666666666    3.333333333 ]
    [ 0    0              x           ]
where x is nearly, but (because of rounding error) not quite, 0.
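This is easy to reproduce outside Maple. The Python sketch below runs Gaussian elimination with partial pivoting on the same singular matrix; the bottom-right entry should be 0, but in floating point it typically comes out only nearly 0 (depending on the rounding it may occasionally be exactly 0).

```python
# Sketch of the rounding problem: partial-pivoting Gaussian elimination
# on the singular matrix from the thread.  The bottom-right entry should
# be 0, but floating-point arithmetic usually leaves a tiny residue.

A = [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 8.0, 7.0]]
n = len(A)
for k in range(n - 1):
    p = max(range(k, n), key=lambda i: abs(A[i][k]))   # partial pivot
    A[k], A[p] = A[p], A[k]
    for i in range(k + 1, n):
        f = A[i][k] / A[k][k]
        for j in range(n):
            A[i][j] -= f * A[k][j]

x = A[2][2]
print(x)    # tiny (on the order of 1e-16 or smaller), not reliably 0.0
```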
The much better method which Release 5 uses (but doesn’t quite get right for matrices with
more columns than rows) is to use the singular value decomposition as suggested by
Denis Sevee: do evalf(Svd(A,U,V))
, and then if A has n columns and there are r
nonzero singular values (where values close enough to 0 to be probable results of
rounding error are considered as 0), take the last n-r
columns of V as a basis of
kernel(A).
The bug in Release 5 is that it takes the last min(m,n)-r
columns rather than the last n-r
columns.
Another work-around for this bug is to use "stackmatrix" to add enough rows of 0’s to make a square matrix. You could define:
> newkernel := proc(M)
> local m, n;
>   m := linalg[rowdim](M);
>   n := linalg[coldim](M);
>   if m >= n then
>     linalg[kernel](args)
>   else
>     linalg[kernel](linalg[stackmatrix](M, matrix(n-m, n, 0)), args[2..-1])
>   fi
> end;
| On Wed, 18 Nov 1998, Robert Israel wrote: ...
This is very interesting. In release 4, I can get any of the following results:
> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));
               {[-.5000000000, 1, -.5000000000]}
> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));
                                  {}
> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));
               {[-.4999999999, 1, -.5000000001]}
> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));
                                  {}
Release 5 consistently gives the result:
> kernel(matrix([[3.,4,5],[6,7,8],[9,8,7]]));
             {[.4082482905, -.8164965809, .4082482905]}