In some of my recent research (related to section 4 of this) I’ve needed to estimate the one norm of a large (n^4 × n^2) dense matrix without computing each element. All we can assume is the ability to compute matrix-vector products (via some rather complicated function), meaning we only know the entries of the matrix implicitly.
To compute the one norm of a matrix known only implicitly we can use the algorithm of Higham and Tisseur (paper), which needs only matrix-vector products. This algorithm has been implemented in MATLAB for a while, but only made the transition to Python in the latest SciPy release.
I’ve recently discovered that we can define an implicit matrix using the LinearOperator class from scipy.sparse.linalg. For instance, suppose we want to apply an explicit finite difference method (wiki); then we need matrix-vector products with the tridiagonal Toeplitz matrix diag(-1, 2, -1), of which there is a nice explanation here. However, we don’t need to store the matrix, since we can describe its matrix-vector products with the following function.
from numpy import zeros

def mymatvec(z):
    y = zeros(z.size)
    y[0] = 2*z[0] - z[1]                        # first row of diag(-1, 2, -1)
    y[z.size-1] = 2*z[z.size-1] - z[z.size-2]   # last row
    for i in range(1, z.size-1):                # interior rows
        y[i] = 2*z[i] - z[i-1] - z[i+1]
    return y
We can then make the implicit finite difference matrix of size n.
from scipy.sparse.linalg import LinearOperator

n = 1000
A = LinearOperator((n, n), matvec=mymatvec)
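As a quick sanity check (my addition, not part of the original post), we can compare the implicit operator’s action on a vector with an explicitly stored copy of the same tridiagonal matrix at a small size:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def mymatvec(z):
    # matrix-vector product with the tridiagonal Toeplitz matrix diag(-1, 2, -1)
    z = np.asarray(z).ravel()  # accept 1-D or column-vector input
    y = np.zeros(z.size)
    y[0] = 2*z[0] - z[1]
    y[-1] = 2*z[-1] - z[-2]
    for i in range(1, z.size - 1):
        y[i] = 2*z[i] - z[i-1] - z[i+1]
    return y

n = 10
A = LinearOperator((n, n), matvec=mymatvec)

# Dense version of the same matrix, built here for comparison only
M = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

z = np.random.rand(n)
print(np.allclose(A.matvec(z), M.dot(z)))  # True: implicit and explicit products agree
```

The point of the exercise: for the full problem we never form M, since only mymatvec is needed.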
The LinearOperator class also takes some other useful parameters: matmat lets you define matrix-matrix multiplication, and rmatvec multiplication by a vector on the left.
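For instance (a sketch of those parameters, not from the original post), rmatvec computes the product with A.T, i.e. multiplication by a vector on the left, and matmat the product with a matrix; since our matrix is symmetric, rmatvec can simply reuse mymatvec:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def mymatvec(z):
    # matrix-vector product with diag(-1, 2, -1)
    z = np.asarray(z).ravel()
    y = np.zeros(z.size)
    y[0] = 2*z[0] - z[1]
    y[-1] = 2*z[-1] - z[-2]
    for i in range(1, z.size - 1):
        y[i] = 2*z[i] - z[i-1] - z[i+1]
    return y

def mymatmat(B):
    # matrix-matrix product: apply mymatvec to each column of B
    return np.column_stack([mymatvec(B[:, j]) for j in range(B.shape[1])])

n = 1000
# A is symmetric, so A.T @ x == A @ x and rmatvec can reuse mymatvec
A = LinearOperator((n, n), matvec=mymatvec, rmatvec=mymatvec, matmat=mymatmat)
```

If matmat is omitted, LinearOperator falls back to applying matvec column by column, so supplying it only matters when a blocked product can be computed more efficiently.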
Strangely, the new onenormest function doesn’t use this built-in mechanism to multiply with vectors from the left; we need to define A.T ourselves! Luckily, in our example A is symmetric, so this is easy.
A.T = LinearOperator((n, n), matvec=mymatvec)
We can now use the norm estimator.
from scipy.sparse.linalg import onenormest

onenormest(A)
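As a check on the result (my addition, sketched for a recent SciPy, where A.T is derived from the rmatvec parameter rather than assigned directly): each interior column of diag(-1, 2, -1) has absolute column sum |-1| + 2 + |-1| = 4 and the two end columns sum to 3, so the exact one norm is 4. The estimator returns a lower bound that in practice usually attains this value:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, onenormest

def mymatvec(z):
    # matrix-vector product with diag(-1, 2, -1)
    z = np.asarray(z).ravel()
    y = np.zeros(z.size)
    y[0] = 2*z[0] - z[1]
    y[-1] = 2*z[-1] - z[-2]
    for i in range(1, z.size - 1):
        y[i] = 2*z[i] - z[i-1] - z[i+1]
    return y

n = 1000
# onenormest needs products with A.T as well; A is symmetric, so reuse mymatvec
A = LinearOperator((n, n), matvec=mymatvec, rmatvec=mymatvec)

est = onenormest(A)
print(est)  # a lower bound on the exact one norm, which is 4 for this matrix
```

Every estimate the algorithm produces is ||A e_j||_1 for some column j, so for this matrix it can only return 3 or 4, and it finds 4 essentially always.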