The goal of this project is to investigate iterative methods for solving linear systems and for computing eigenvalues. In practice, these methods are used on large structured matrices. We will use MATLAB in our numerical investigations.
   M x^(k+1) = N x^(k) + b,    k = 0, 1, ...
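As an illustration (in NumPy rather than MATLAB, purely for a quick cross-check), the iteration above can be sketched for a generic splitting A = M - N; the small system below is a made-up example:

```python
import numpy as np

# Made-up diagonally dominant test system A*x = b.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

# Splitting A = M - N; here M is the diagonal of A.
M = np.diag(np.diag(A))
N = M - A

x = np.zeros(2)
for k in range(100):
    # One step of the iteration M x^(k+1) = N x^(k) + b.
    x = np.linalg.solve(M, N @ x + b)

print(np.allclose(A @ x, b))
```

For this choice of M the iterates converge to the solution of A*x = b, because A is diagonally dominant.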
For M the diagonal of A and N = M-A, we obtain the method of Jacobi. The formula to update the solution vector x componentwise at each step is
   x_i^(k+1) = x_i^(k) + (1/a_ii) * ( b_i - sum_{j=1}^{n} a_ij x_j^(k) ),    i = 1, 2, ..., n.
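To check the componentwise formula, here is a small NumPy sketch (an illustrative translation of the update above; the 3-by-3 system is a made-up example):

```python
import numpy as np

# Made-up diagonally dominant test system.
A = np.array([[10.0, 1.0, 2.0],
              [ 1.0, 8.0, 1.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
for k in range(50):
    xk = x.copy()                 # every component uses the old iterate x^(k)
    for i in range(3):
        s = A[i, :] @ xk          # sum_{j=1}^{n} a_ij * x_j^(k)
        x[i] = xk[i] + (b[i] - s) / A[i, i]

print(np.allclose(A @ x, b))
```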
One immediate improvement to the method of Jacobi is to use the j-th component of the new solution vector x^(k+1) in the update of the i-th component, for j < i:
   x_i^(k+1) = x_i^(k) + (1/a_ii) * ( b_i - sum_{j=1}^{i-1} a_ij x_j^(k+1) - sum_{j=i}^{n} a_ij x_j^(k) ),    i = 1, 2, ..., n.

The formula above is used in the method of Gauss-Seidel, which corresponds to taking M as the lower triangular part of A, diagonal included. Read also section 2.10 of the text book.
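The only difference from Jacobi is that each sweep reads the already-updated components. The NumPy sketch below (an illustrative translation, on a made-up diagonally dominant system) counts the sweeps each method needs; the helper name `sweeps_needed` is my own:

```python
import numpy as np

# Made-up diagonally dominant test system.
A = np.array([[10.0, 1.0, 2.0],
              [ 1.0, 8.0, 1.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

def sweeps_needed(gauss_seidel, tol=1e-8, nmax=100):
    """Count sweeps until the update norm drops below tol."""
    x = np.zeros(3)
    for k in range(1, nmax + 1):
        xk = x.copy()
        for i in range(3):
            if gauss_seidel:
                s = A[i, :] @ x    # components j < i are already updated
            else:
                s = A[i, :] @ xk   # Jacobi: only old components
            x[i] = xk[i] + (b[i] - s) / A[i, i]
        if np.linalg.norm(x - xk) < tol:
            return k
    return nmax

print(sweeps_needed(False), sweeps_needed(True))
```

On this example Gauss-Seidel needs fewer sweeps than Jacobi, in line with the comparison asked for in the table below.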
>> Q = orth(rand(5)); D = diag(rand(1,5));
>> A = Q*D*Q'; z = rand(5,1); b = A*z;
>> x0 = z + rand(5,1)*1e-4;
>> [x,dx] = jacobi(A,b,x0,1e-8,50)
>> [x,dx] = gauss_seidel(A,b,x0,1e-8,50)
Make a table with three columns: number of test case, number of iterations needed by Jacobi, and number of iterations needed by Gauss-Seidel.
Suppose A has eigenvalues lambda_1, ..., lambda_n with corresponding eigenvectors v^(1), ..., v^(n), and write a start vector x in this basis:

   x = sum_{i=1}^{n} alpha_i v^(i),    for some coefficients alpha_i.
Observe what happens if we multiply x by A:
   A x = sum_{i=1}^{n} alpha_i A v^(i)
       = sum_{i=1}^{n} alpha_i lambda_i v^(i)
       = lambda_1 sum_{i=1}^{n} alpha_i (lambda_i / lambda_1) v^(i).

If lambda_1 has the largest modulus, then
   | lambda_i / lambda_1 | < 1,    for i = 2, ..., n,

and, provided alpha_1 is nonzero, A^k x converges to a vector parallel to v^(1), as k goes to infinity.
This observation forms the idea behind the power method, see pages 543-544 in the text book.
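A minimal NumPy sketch of the power method (illustrative only; the symmetric matrix below is a made-up example with a well-separated dominant eigenvalue):

```python
import numpy as np

# Made-up symmetric matrix; its eigenvalues are 3+sqrt(3), 3, and 3-sqrt(3).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

x = np.ones(3)
x /= np.linalg.norm(x)
for k in range(200):
    y = A @ x
    x = y / np.linalg.norm(y)   # normalize to avoid overflow/underflow

lam = x @ A @ x                  # Rayleigh quotient estimate of lambda_1
print(np.allclose(A @ x, lam * x))
```

The error in the direction of x shrinks by roughly a factor |lambda_2 / lambda_1| per step, which is what the plot of the logarithms of the errors should reveal.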
>> A = rand(5);
>> [v,d] = eig(A);
>> x0 = v(:,2);
>> [lambda,x,error] = powers(A,x0,eps,50)

In the experiment above, the start vector is one of the eigenvectors. What do you observe on the plot of the logarithms of the errors? Is there a difference with the case of a random start vector? In either case, explain.
If you have questions, comments, or difficulties, feel free to come to my office for help.
function [x,dx] = jacobi(A,b,x,eps,N)
%
% The function jacobi applies Jacobi's method to solve A*x = b.
% On entry:
%   A    coefficient matrix of the linear system;
%   b    right-hand side vector of the linear system;
%   x    initial approximation;
%   eps  accuracy requirement: stop when norm(dx) < eps;
%   N    maximal number of steps allowed.
% On return:
%   x    approximation for the solution of A*x = b;
%   dx   vector last used to update x,
%        if success, then norm(dx) < eps.
%
% Example:
%   Q = orth(rand(5)); D = diag(rand(1,5));
%   A = Q*D*Q'; z = rand(5,1); b = A*z;
%   x = z + rand(5,1)*1e-4;
%   [x,dx] = jacobi(A,b,x,1e-8,50)
%
n = size(A,1);
fprintf('Running the method of Jacobi...\n');
for i = 1:N
   dx = b - A*x;
   for j = 1:n
      dx(j) = dx(j)/A(j,j);
      x(j) = x(j) + dx(j);
   end;
   fprintf('  norm(dx) = %.4e\n', norm(dx));
   if (norm(dx) < eps)
      fprintf('Succeeded in %d steps.\n', i);
      return;
   end;
end;
fprintf('Failed to reach accuracy requirement in %d steps.\n', N);
function [x,dx] = gauss_seidel(A,b,x,eps,N)
%
% The function gauss_seidel applies the method of Gauss-Seidel
% to solve A*x = b.
% On entry:
%   A    coefficient matrix of the linear system;
%   b    right-hand side vector of the linear system;
%   x    initial approximation;
%   eps  accuracy requirement: stop when norm(dx) < eps;
%   N    maximal number of steps allowed.
% On return:
%   x    approximation for the solution of A*x = b;
%   dx   vector last used to update x,
%        if success, then norm(dx) < eps.
%
% Example:
%   Q = orth(rand(5)); D = diag(rand(1,5));
%   A = Q*D*Q'; z = rand(5,1); b = A*z;
%   x = z + rand(5,1)*1e-4;
%   [x,dx] = gauss_seidel(A,b,x,1e-8,50)
%
n = size(A,1);
fprintf('Running the method of Gauss-Seidel...\n');
for i = 1:N
   dx = zeros(n,1);
   for j = 1:n
      % components 1..j-1 of x are already updated in this sweep
      dx(j) = (b(j) - A(j,:)*x)/A(j,j);
      x(j) = x(j) + dx(j);
   end;
   fprintf('  norm(dx) = %.4e\n', norm(dx));
   if (norm(dx) < eps)
      fprintf('Succeeded in %d steps.\n', i);
      return;
   end;
end;
fprintf('Failed to reach accuracy requirement in %d steps.\n', N);
function [lambda,x,err] = powers(A,x,eps,N)
%
% Applies the power method to find the dominant eigenvalue lambda
% and corresponding eigenvector x of the matrix A.
% On entry:
%   A    a matrix;
%   x    initial approximation for the eigenvector;
%   eps  accuracy requirement: stop when successive normalized
%        iterates satisfy norm(x-x0) < eps;
%   N    maximal number of iterations allowed.
% On return:
%   lambda  modulus of the dominant eigenvalue of A;
%   x       eigenvector corresponding to lambda;
%   err     estimates for the errors;
%           the logarithms of the errors are plotted.
%
% Example:
%   A = rand(5); x = rand(5,1);
%   [lambda,x,error] = powers(A,x,1.0e-13,50)
%
x0 = x/norm(x);
fprintf('Running the power method...\n');
for i = 1:N
   x = A*x0;
   x = x/norm(x);
   err(i) = norm(x-x0);
   fprintf('  error : %.4e\n', err(i));
   if err(i) < eps
      fprintf('Succeeded in %d steps.\n', i);
      lambda = norm(A*x);
      plot(1:i,log10(err),'*');
      return;
   end;
   x0 = x;
end;
fprintf('Failed to reach accuracy requirement in %d steps.\n', N);
lambda = norm(A*x);
plot(1:N,log10(err),'*');