MCS 471 Practice Problems 2: Computational Linear Algebra

Hand Calculator and Maple Calculations

Do NOT Hand In: Practice Problems ONLY!



Background Reading:


Practice Problems 2 Topics:
  1. Floating Point Arithmetic, Efficiency and Accuracy are part of all exams.
  2. Forward Gaussian Elimination with Back Substitution
  3. Virtual Row (Partial) Pivoting and Row Scaling
  4. Virtual Full Pivoting
  5. Linear Algebra Computational Complexity
  6. Determinants, Inverse and Multiple RHSs by FGE
  7. LU Decomposition with and without Pivoting
  8. Norms, Condition Numbers and Error Propagation
  9. Multidimensional Newton's Method (Nonlinear Algebra)
  10. Direct and Inverse Power Method for Eigenvalues and Eigenvectors

Practice Problems:

In the following computational questions, use 4 Digit Exam Precision: round to 4 significant decimal digits only when you record an intermediate or final answer in your exam booklet, and continue calculations with these rounded, recorded numbers.

  1. Using Forward Gaussian Elimination with virtual partial pivoting, virtual row scaling, and back substitution, solve the following algebraic system for the vector x=[x_i]_{3×1},

    Record the augmented matrices with marked multipliers, pivot vectors, and scale vectors, i.e., [A | b || P || S], at each elimination step, including the initial step. (A code sketch of this bookkeeping, and of the Doolittle conversion asked for in question #2, follows this problem list.)

  2. Find the corresponding actually pivoted Doolittle form of the LU Decomposition by converting the virtually pivoted rows, including multipliers, from the answer to question #1; i.e., separately record L_{ap} and U_{ap}, the actual lower and upper triangular matrices for the matrix A_{ap}, respectively, where the subscript ``ap'' means Actually Pivoted. {Note: L_{ap}·U_{ap} must approximately equal the actually pivoted A_{ap}. Hint: This is actually an easy question.}

  3. Using the actually pivoted LU Decomposition {L_{ap}, U_{ap}} of question #2, find the first approximation to the eigenvalue of smallest absolute magnitude, \lambda_3^{(1)}, and its normalized associated eigenvector X_3^{(1)} by the Inverse Power Method, starting from the zeroth iteration vector

    Use the infinity norm. {Caution: Do NOT compute any inverses directly, else there will be a large deduction of points for inefficiency. Ignore the fact that A_{ap} is not a symmetric matrix.} (A sketch of one inverse power iteration using these LU factors follows this list.)

  4. Show that the norm of the relative solution error is no larger than the condition number times the norm of the relative residual, i.e.,

    ||e||_p / ||x||_p <= cond_p(A) · ||r||_p / ||b||_p ,

    Give reasons for every step. {You may assume that A·e = r is known, that A^{-1} exists, and that ||x||_p ≠ 0 and ||b||_p ≠ 0.} (A sketch of the inequality chain follows this list.)
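
Note: the "virtual" pivoting above means rows are never physically swapped; a pivot vector P records the elimination order, a scale vector S holds the largest absolute entry of each original row, and each multiplier is marked in the position it eliminates. The following Python/NumPy sketch of that bookkeeping is illustrative only (the routine names and the commented example system are not from the exam), but it covers both question #1 and the Doolittle conversion of question #2:

    import numpy as np

    def fge_virtual_scaled_pivot(A, b):
        """Forward Gaussian elimination with *virtual* scaled partial pivoting.

        Rows are never swapped: p[] records the pivot order, s[] holds the row
        scales (max |entry| of each original row), and each multiplier is
        stored in the position it eliminates, Doolittle style.
        """
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        p = np.arange(n)                        # virtual pivot vector P
        s = np.max(np.abs(A), axis=1)           # scale vector S
        for k in range(n - 1):
            # pick the pivot row by the largest scaled magnitude in column k
            j = k + int(np.argmax(np.abs(A[p[k:], k]) / s[p[k:]]))
            p[k], p[j] = p[j], p[k]             # swap indices only, never rows
            for i in p[k + 1:]:
                m = A[i, k] / A[p[k], k]        # multiplier
                A[i, k] = m                     # store it in place (becomes L)
                A[i, k + 1:] -= m * A[p[k], k + 1:]
                b[i] -= m * b[p[k]]
        x = np.zeros(n)                         # back substitution in pivot order
        for k in range(n - 1, -1, -1):
            x[k] = (b[p[k]] - A[p[k], k + 1:] @ x[k + 1:]) / A[p[k], k]
        return A, b, p, s, x

    # Question #2's conversion: list the rows of the factored array in pivot
    # order; the strict lower triangle then holds the multipliers (L_ap, with
    # unit diagonal) and the upper triangle is U_ap, so L_ap @ U_ap is
    # approximately A_ap, the original rows of A listed in pivot order.
    def doolittle_from_virtual(Afac, p):
        F = Afac[p, :]                          # factored rows in pivot order
        L = np.tril(F, -1) + np.eye(len(p))
        U = np.triu(F)
        return L, U

    # Example use on a made-up 3x3 system (the exam system is not reproduced here):
    # A0 = np.array([[8.0, 1.0, 2.0], [2.0, 9.0, 1.0], [1.0, 2.0, 7.0]])
    # b0 = np.array([11.0, 12.0, 10.0])
    # Afac, bmod, p, s, x = fge_virtual_scaled_pivot(A0, b0)
    # L_ap, U_ap = doolittle_from_virtual(Afac, p)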
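
Note: for question #3, one inverse power iteration needs only a forward solve with L_{ap} and a back solve with U_{ap}; no inverse is ever formed. A NumPy sketch is below, using the factors from the routine above and the infinity norm; the zeroth iteration vector of the problem is not reproduced on this page, so x0 stands for whatever starting vector is given, and the eigenvalue estimated is that of L_{ap}·U_{ap} (the actually pivoted A_{ap}), as the problem directs:

    import numpy as np

    def inverse_power_step(L, U, x):
        """One inverse power iteration: solve L (U y) = x, then normalize in
        the infinity norm.  The y-component largest in magnitude estimates the
        dominant eigenvalue of (L U)^(-1); its reciprocal estimates the
        eigenvalue of L U of smallest absolute magnitude."""
        n = len(x)
        z = np.zeros(n)
        for i in range(n):                      # forward solve L z = x (unit diagonal)
            z[i] = x[i] - L[i, :i] @ z[:i]
        y = np.zeros(n)
        for i in range(n - 1, -1, -1):          # back solve U y = z
            y[i] = (z[i] - U[i, i + 1:] @ y[i + 1:]) / U[i, i]
        k = int(np.argmax(np.abs(y)))           # infinity-norm normalization index
        lam = 1.0 / y[k]                        # estimate of the smallest |eigenvalue|
        return lam, y / y[k]                    # (lambda_3^(1), X_3^(1)) in the notation above

    # lam1, X1 = inverse_power_step(L_ap, U_ap, x0)   # x0 = the given zeroth iteration vector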
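
Note: for question #4, the bound follows by combining two standard norm inequalities. A sketch in an arbitrary p-norm, with x the exact solution of A·x = b and using only the stated assumptions (A·e = r, A^{-1} exists, ||x||_p ≠ 0, ||b||_p ≠ 0):

    % e = A^{-1} r  and  b = A x, so
    \|e\|_p = \|A^{-1} r\|_p \le \|A^{-1}\|_p \, \|r\|_p ,
    \qquad
    \|b\|_p = \|A x\|_p \le \|A\|_p \, \|x\|_p
    \;\Longrightarrow\;
    \frac{1}{\|x\|_p} \le \frac{\|A\|_p}{\|b\|_p} .
    % multiplying the two bounds:
    \frac{\|e\|_p}{\|x\|_p}
    \le \|A\|_p \, \|A^{-1}\|_p \, \frac{\|r\|_p}{\|b\|_p}
    = \mathrm{cond}_p(A) \, \frac{\|r\|_p}{\|b\|_p} .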


In the following older problems, from before Fall 1999, use CHOPPING EXAM PRECISION: the answers were calculated by chopping to 4 significant digits (4C), since chopping was the convention in use at the time.
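
Note: a small Python helper for this chopping convention, assuming the usual definition of chopping as truncation toward zero (the helper is an editorial sketch, not part of the original problems; binary floating point can perturb the last digit in edge cases):

    import math

    def chop(x, sig=4):
        """Chop (truncate toward zero) x to `sig` significant decimal digits,
        e.g. chop(8.8784) -> 8.878 and chop(-120.88) -> -120.8 for sig = 4."""
        if x == 0.0:
            return 0.0
        e = math.floor(math.log10(abs(x)))      # decimal exponent of x
        scale = 10.0 ** (sig - 1 - e)
        return math.trunc(x * scale) / scale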

Note: Maple comments are not part of these sample exam problems, but were added afterward in the editing stage to aid in analyzing the problems.

  1. Solve: using only forward Gaussian elimination with back substitution, to 3 significant digits. Record multipliers. Use no partial pivoting and no scaling.
  2. Solve: using forward Gaussian elimination with "virtual" partial pivoting and back substitution, to 3 significant digits. Record multipliers and pivot vectors for each elimination step, but use no scaling.
  3. Solve: using forward Gaussian elimination with "virtual" partial pivoting, "virtual" scaling and back substitution, chopping to 3 significant digits. Show multipliers, scale vectors and pivot vectors for each step. Record the final answer rounded to only 2 significant digits.
  4. Solve: using forward Gaussian elimination, "virtual" scaling, "virtual" partial pivoting and back substitution to 4 significant digits. Record augmented matrices with scale and pivot vectors at each elimination step. Round your final solution to 3 significant digits.
    (Ans.:  [A|B|P|S]=
    8.955       9.230     13.53     63.40   1  13.53
    4.479       5.770     7.058     79.60   2  7.058 
    8.955       12.31     3.530     95.50   3  12.31
    
    =(4c) 
    (+1.000)m  -3.080     10.00    -32.10   3  10.00
    (+.5001)m  -.3862     5.292     31.84   2  5.292
    8.955       12.31     3.530     95.50   1  -----
    
     =(4c)   
    -----      -3.080     10.00    -32.10   3  -----
    -----     (+.1253)m   4.039     35.86   1  -----  ,
    8.955       12.31     3.530     95.50   2  -----
     
    X(3) =(4c) 35.86/4.039 =(4c) 8.878 =(3r) 8.88
    X(2) =(4c) (-32.10-10.00*8.878)/(-3.080) =(4c) 39.24 =(3r) 39.2
    X(1) =(4c) (95.5-3.53*8.878-12.31*39.24)/8.955 =(4c) -46.77 =(3r) -46.8
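    
    Note: a quick full-precision NumPy cross-check of this problem, using the augmented matrix recorded above; double-precision results differ in the last digit or so from the 4-significant-digit chopped hand arithmetic:
    
        import numpy as np
    
        A = np.array([[8.955,  9.230, 13.53],
                      [4.479,  5.770,  7.058],
                      [8.955, 12.31,   3.530]])
        b = np.array([63.40, 79.60, 95.50])
        print(np.linalg.solve(A, b))   # ~ [-46.83, 39.28, 8.888]; chopped hand answer: (-46.8, 39.2, 8.88)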
    
    
  5. Solve A*X=B and simultaneously find the inverse A^{-1} for the given system, using forward Gaussian elimination with back substitution only. (Note: use no pivoting or scaling.) Compute the condition number of A in the 1-norm.
    (partial (corrected) ans.: X =(4c) [10.37    0.9960]^T; 
    
    A^{-1} =(4c) 
    3.996   0.4473e-1;
    0.2582  -7.215e-7
    
    ||A^{-1}||_1 =(4c) 4.254;    
    cond_1(A) =(4c) 38.98
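    
    Note: the 1-norm condition number asked for here is cond_1(A) = ||A||_1 · ||A^{-1}||_1, where the matrix 1-norm is the largest absolute column sum. A NumPy sketch follows; the 2x2 matrix below is a made-up stand-in, since the exam's A is not reproduced on this page, and the exam itself obtains A^{-1} by forward Gaussian elimination rather than by a library call:
    
        import numpy as np
    
        A = np.array([[4.0, 1.0],
                      [2.0, 3.0]])               # hypothetical matrix, NOT the exam's A
        Ainv = np.linalg.inv(A)                  # library inverse, for checking only
        n1 = lambda M: np.linalg.norm(M, 1)      # largest absolute column sum
        print(n1(A), n1(Ainv), n1(A) * n1(Ainv)) # ||A||_1 = 6.0, ||A^-1||_1 = 0.5, cond_1(A) = 3.0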
    
    
  6. Using Newton's method and forward Gaussian elimination, approximate the vector zero of the given system by starting at [X,Y]^{(1)} = [1., 1.] and finding the next iterate, as well as its vector f and its Jacobian (matrix of gradients).
    (partial ans.:  
    (X,Y)=(4c) (.8619,1.052); (f1,f2)=(4c) (.07819,.001472);
    
    jacobian(f)=(6.895,2.104,-1.,2.863) by cols.;
    
    (alternate ans.: 
    (X,Y)=(4c) (.8618,1.052); (f1,f2)=(4c)(-.07750,.001572);
    
    jacobian(f)=(6.894,2.104,-1.,2.863) by columns.
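    
    Note: one multidimensional Newton step solves the linear system J(X,Y)·delta = -f(X,Y) for the correction (by Gaussian elimination, never by inverting J) and then updates the iterate. A NumPy sketch with a made-up two-equation system, since the exam's f is not reproduced on this page:
    
        import numpy as np
    
        def newton_step(f, jac, v):
            """Solve J(v) * delta = -f(v) and return the next iterate v + delta."""
            F = np.asarray(f(v), dtype=float)
            J = np.asarray(jac(v), dtype=float)
            delta = np.linalg.solve(J, -F)       # the 2x2 solve stands in for hand FGE
            return v + delta
    
        # Illustrative system (NOT the exam's): f1 = x^2 + y^2 - 2, f2 = x*y - 1
        f   = lambda v: [v[0]**2 + v[1]**2 - 2.0, v[0]*v[1] - 1.0]
        jac = lambda v: [[2.0*v[0], 2.0*v[1]], [v[1], v[0]]]
        print(newton_step(f, jac, np.array([1.5, 0.5])))   # -> [1.25 0.75], heading toward the root (1, 1)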
    
    
  7. (masters-exam/a/w83). Approximate the solution of the following linear algebraic system using forward Gaussian elimination, partial pivoting with "virtual" scaling (i.e., not actual scaling) and back substitution. At each elimination step record the augmented matrix (A/B), pivot vector, scale vector and multipliers chopped to 4 significant digits and continue calculations with these recorded results. Calculate the absolute value norm of the residuals relative to the approximate solution in the same norm.
    8.955*X(1) + 9.230*X(2) + 13.53*X(3) = 63.40
    4.479*X(1) + 5.770*X(2) + 7.058*X(3) = 79.60
    8.955*X(1) + 12.31*X(2) + 3.530*X(3) = 92.87
    
    (partial ans.:  X =(4c) (-47.08,39.18, 9.121);
    norm of rel. residuals = ||r||/||x|| =(4c)  .9997e-3 ,
    or = ||r||/||b|| =(4c)  0.4043e-3 .
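    
    Note: these residual norms can be cross-checked in NumPy from the system as given and the recorded 4c solution; full precision reproduces them to roughly the digits shown:
    
        import numpy as np
    
        A = np.array([[8.955,  9.230, 13.53],
                      [4.479,  5.770,  7.058],
                      [8.955, 12.31,   3.530]])
        b = np.array([63.40, 79.60, 92.87])
        x = np.array([-47.08, 39.18, 9.121])     # the 4c hand solution recorded above
        r = b - A @ x
        print(np.linalg.norm(r, 1) / np.linalg.norm(x, 1))   # ~ 1.00e-3  (cf. .9997e-3)
        print(np.linalg.norm(r, 1) / np.linalg.norm(b, 1))   # ~ 4.04e-4  (cf. 0.4043e-3)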
    
    
  8. (masters-exam/a/s82). Approximate the solution of the following linear algebraic system using forward Gaussian elimination along with "virtual" scaling, partial pivoting and back substitution. At each elimination step record the augmented matrix [A|B], scale vector, pivot vector and multipliers chopped to 4 significant digits, and continue calculations with these chopped results. Also record the number of multiplications or divisions and the number of additions or subtractions used.
    4.477*X(1) + 1.538*X(2) + 13.53*X(3) = 31.72
    8.958*X(1) + 3.846*X(2) + 28.23*X(3) = 159.2
    8.955*X(1) + 4.103*X(2) + 7.063*X(3) = 95.49
    
    (Answer: [A|B|P|S] =     
    4.477  1.538  13.53  31.72  1  13.53
    8.958  3.846  28.23  159.2  2  28.23
    8.955  4.103  7.063  95.49  3  8.955
    
    =(4c) 
    (0.4999)m -0.5130  9.999 -16.01  3  9.999
    (1.000)m  -0.2570  21.16  63.71  2  21.16
    8.955      4.103   7.063  95.49  1  -----
    
    =(4c)  
    ----- -0.5130     9.999 -16.01  3  ----- ;
    ----- (0.5009)m   16.15  71.72  1  ----- ;
    8.955  4.103      7.063  95.49  2  ----- ;
     
    X(3) =(4c)  4.440 =(3r) 4.44
    X(2) =(4c)  117.7 =(3r) 118.
    X(1) =(4c) -46.76 =(3r) -46.8
     
    sum-mults. = 19 (or 22, depending on what is counted);  sum-adds = 11
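    
    Note: one consistent tally behind these counts for a 3x3 augmented system is sketched below; whether the scaled-ratio divisions used to choose pivots are included is what moves the multiplication/division total, and the intermediate figure of 19 presumably counts only some of them:
    
        % forward elimination: step 1: 2 rows x (1 div + 3 mults) = 8,  step 2: 1 row x (1 div + 2 mults) = 3
        % back substitution:   3 divisions + (0 + 1 + 2) multiplications = 6
        \text{mult/div} = 8 + 3 + 6 = 17, \qquad \text{add/sub} = 6 + 2 + 3 = 11,
        \text{plus } 3 + 2 = 5 \text{ scaled-ratio divisions for pivot selection} \;\Rightarrow\; 22 .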
    

  9. Inverse Power Method with LU Decomposition Example Simulated on MapleVrel3 Worksheet.


Web Source: http://www.math.uic.edu/~hanson/mcs471/pp2.html

Email Comments or Questions to hanson A T math uic edu
