Lecture 10: Maple for Nonlinear Equations; Secant Method Convergence
(GW1.11)
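The secant iteration analyzed in Lecture 10 might be sketched as follows (the course tools are Maple and Octave; Python is used below purely for illustration, and all names are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: x_{k+1} = x_k - f(x_k)*(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))."""
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                      # flat secant line: cannot divide
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```

The standard result, presumably the point of the convergence discussion, is order (1 + sqrt(5))/2 ≈ 1.618: superlinear, but below Newton's quadratic rate, while needing only one new function evaluation per step.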
Lecture 11: Fixed Point Method (GW1.6); Muller's Method (GW1.5)
Lecture 12: Application of Inverse Quadratic Interpolation to
Muller Root Finding; Optimization Using Golden Section Search
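For the golden section search of Lecture 12, a minimal Python sketch (unimodal f assumed; the interval shrinks by 1/phi ≈ 0.618 per step, and one interior function value is reused each iteration):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0        # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                              # minimum lies in [a, d]
            b, d, fd = d, c, fc                  # old c becomes the new d
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                    # minimum lies in [c, b]
            a, c, fc = c, d, fd                  # old d becomes the new c
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

xmin = golden_section_min(lambda x: (x - 1.5) ** 2, 0.0, 4.0)
```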
Lecture 13: Introduction to Linear Algebra (GW2.2)
Lecture 14: Show the Computational Complexity of Cramer's Rule: O(e*(n+1)!)
Lecture 15: Forward Gaussian Elimination and Backward Substitution
(GW2.3-4)
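Lecture 15's forward Gaussian elimination and backward substitution, in the naive no-pivoting form (pivoting is the subject of Lectures 19-21), might look like this Python sketch:

```python
def solve_fge(A, b):
    """Forward Gaussian elimination then backward substitution (no pivoting)."""
    n = len(b)
    A = [row[:] for row in A]            # work on copies
    b = b[:]
    for k in range(n - 1):               # forward elimination, O(n^3/3) flops
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]        # multiplier (fails if pivot is zero)
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                        # backward substitution, O(n^2) flops
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = solve_fge([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # solution (0.8, 1.4)
```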
Lecture 16: Computational Complexity of FGE & BS, also Gauss-Jordan
Elimination (GW2.4)
Lecture 17: Computational Complexity of Inverse and Determinants by FGE
(GW2.4,2.7); (Emphasize Efficiency by Correcting Inefficient Notions Taught in
Linear Algebra Courses)
Lecture 18: First Hour Exam on Precision and Nonlinear Equations
Lecture 19: Review First Hour Exam; Problem of Small Pivots (GW2.6)
Lecture 20: FGE with Virtual Row Pivoting and Virtual Row Scaling;
Algorithm (GW2.4); (Emphasize Computational Accuracy with Ill-Conditioned
Examples)
Lecture 21: FGE with Virtual Full Pivoting (GW2.4)
Lecture 22: LU Decomposition (Doolittle Version using FGE with Saved
Multipliers); Maple and Octave for Computational Linear Algebra (GW2.5)
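A Python sketch of the Doolittle LU factorization with saved multipliers, as in Lecture 22 (no pivoting; names illustrative):

```python
def lu_doolittle(A):
    """Doolittle LU in place: U above the diagonal, multipliers (L, unit
    diagonal implied) saved below it, exactly where FGE created the zeros."""
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]                  # saved multiplier = L[i][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    """Solve L y = b (forward), then U x = y (backward)."""
    n = len(b)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x
```

The point of saving the multipliers: the O(n^3/3) factorization is done once, and each new right-hand side then costs only O(n^2).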
Lecture 27: Introduction to Interpolation; Newton-Horner's Method
of Fast Polynomial Computation (GW3.1,3.2); Calculus of Finite Differences
Lecture 28: Finite Difference and Divided Difference Tables;
Newton-Gregory Polynomial Form (GW3.3, 3.4)
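The divided-difference table and Newton-form evaluation of Lectures 27-28 might be sketched in Python as follows (the nested evaluation echoes the Newton-Horner theme; names illustrative):

```python
def newton_divided_diff(xs, ys):
    """Divided-difference coefficients for the Newton-form interpolant,
    built column by column, overwriting in place."""
    n = len(xs)
    c = ys[:]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(c, xs, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (x - xs[k]) + c[k]
    return p

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 9.0]                 # samples of p(x) = 2x^2 + 1
c = newton_divided_diff(xs, ys)      # [1.0, 2.0, 2.0]
```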
Lecture 29: Second Hour Exam on Computational Linear Algebra
Lecture 30: Review Second Exam
Lecture 31: Theoretical Errors in Interpolation (GW3.9)
Lecture 32: Computational Complexity of Polynomial Interpolation: Lagrange
versus Newton; Mention Splines If Time Permits; Dispel the Notion of High-Degree
Polynomial Approximation by Noting Its Problems of Nonuniform Convergence
Lecture 33: Discrete Approximation of Coordinates, Ordinates and
Derivatives (GW4.2,4.3) (Caution: Text is Overly Complicated Here as Elsewhere,
Emphasize Forward, Backward and Central Finite Difference Forms, Using Central
Only for Second Order)
Lecture 34: Numerical Integration, Simple Newton-Cotes Rules and
Local Theoretical Errors: Rectangular (LRR1, RRR1, MPRR1), Trapezoidal (TR2)
and Simpson's (1/3: SR3) Rules (GW4.4, 4.5, 4.6)
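The simple rules of Lecture 34, using the course's own labels (LRR1/RRR1/MPRR1 rectangular, TR2 trapezoidal, SR3 Simpson's 1/3), admit one-line Python sketches:

```python
def lrr1(f, a, b):  return (b - a) * f(a)                  # left rectangular
def rrr1(f, a, b):  return (b - a) * f(b)                  # right rectangular
def mprr1(f, a, b): return (b - a) * f((a + b) / 2.0)      # midpoint rectangular

def tr2(f, a, b):   return (b - a) * (f(a) + f(b)) / 2.0   # trapezoidal

def sr3(f, a, b):                                          # Simpson's 1/3 rule
    m = (a + b) / 2.0
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0
```

TR2 is exact for linear integrands; SR3, thanks to symmetry, is exact through cubics.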
Lecture 35: Composite Rules and Global Theoretical Errors (GW4.5-4.7);
(Emphasize Efficient Algorithms Saving Floating Point Operations and Function
Evaluations)
Lecture 36: More Composite Rules; Use of Global Theoretical Error
Estimates to Estimate Minimal Step Size or Number of Nodes; Extrapolation
To The Limit Techniques (GW4.4.4-4.7); Use of Maple int and student Package
for TRN and SRN
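Composite rules as in Lectures 35-36, written in Python so that each interior node is evaluated exactly once (the function-evaluation economy the notes emphasize; names illustrative):

```python
def composite_trapezoid(f, a, b, n):
    """Composite TRN on n subintervals: n + 1 function evaluations total."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def composite_simpson(f, a, b, n):
    """Composite SRN (n even): weight pattern 1, 4, 2, 4, ..., 4, 1."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return h * s / 3.0
```

Global theoretical errors are O(h^2) and O(h^4) respectively, which is what the step-size estimates in this lecture exploit.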
Lecture 37: Gaussian Quadrature Rules: GR1=MPRR1, GR2, GR3, MXGRN
(Gauss-Legendre); Briefly Mention Gauss-Laguerre, -Hermite, -Chebyshev for
Singular Integrals (GW4.9); Briefly Explain Idea of Adaptive Quadrature and
Relation to Extrapolation (GW4.10)
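The low-order Gauss-Legendre rules GR2 and GR3 of Lecture 37, mapped from [-1, 1] to [a, b] (GR2 is exact through degree 3, GR3 through degree 5), might be sketched in Python as:

```python
import math

def gr2(f, a, b):
    """Two-point Gauss-Legendre rule: nodes +/- 1/sqrt(3), equal weights."""
    m, h = (a + b) / 2.0, (b - a) / 2.0
    t = 1.0 / math.sqrt(3.0)
    return h * (f(m - h * t) + f(m + h * t))

def gr3(f, a, b):
    """Three-point Gauss-Legendre rule: nodes 0, +/- sqrt(3/5),
    weights 8/9 and 5/9."""
    m, h = (a + b) / 2.0, (b - a) / 2.0
    t = math.sqrt(3.0 / 5.0)
    return h * (5.0 * f(m - h * t) + 8.0 * f(m) + 5.0 * f(m + h * t)) / 9.0
```

Note GR2 matches SR3's cubic exactness with one fewer function evaluation, which is the selling point of Gaussian quadrature.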
Lecture 38: Theoretical versus Truncation (Rounding/Chopping) Errors
(Dispel the Silly Pure-Math Notion of Taking the Step Size as Small as You
Please: If Global Theoretical Error = O(h^{q+1}) and Global Truncation Error
= O(MacEps/h), where StepSize = h, MacEps = O(b^{1-p}), and p = Digits of
Precision in Base b, Then Minimizing C_1 h^{q+1} + C_2 MacEps/h Gives a
Finite Optimal Step Size h* = O(MacEps^{1/(q+2)})); Begin Euler's Method for
Numerical Solution of ODEs (GW5.3)
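The theoretical-versus-roundoff trade-off of Lecture 38 can be seen numerically with a forward-difference derivative, whose total error O(h) + O(MacEps/h) is minimized near h* = O(sqrt(MacEps)) ≈ 1e-8 in IEEE double precision (a Python sketch; names illustrative):

```python
import math

def fd(f, x, h):
    """Forward-difference approximation of f'(x); theoretical error O(h),
    roundoff error O(MacEps/h) from the cancellation in f(x+h) - f(x)."""
    return (f(x + h) - f(x)) / h

# f = exp, so f'(0) = 1 exactly: a large h is truncation-dominated, while
# h near sqrt(MacEps) ~ 1e-8 balances truncation against roundoff
err_big  = abs(fd(math.exp, 0.0, 1e-1) - 1.0)   # ~5e-2
err_near = abs(fd(math.exp, 0.0, 1e-8) - 1.0)   # ~1e-8
```

Shrinking h below h* makes the computed derivative worse, not better, which is exactly the notion this lecture dispels.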
Lecture 39: More Numerical Solution of ODEs, Euler's Method, Local
Theoretical Error, Modified Euler's Method (Approximate TR2 via Prediction
and Correction) (GW5.3, some 5.10?); (Motivate by Equivalence of EM to LRR1 and
Calculus Tangent Line Approximation)
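Euler and modified Euler (predict with Euler, correct with the TR2 average), per Lecture 39, in a minimal Python sketch (names illustrative):

```python
def euler(f, t0, y0, h, n):
    """Forward Euler for y' = f(t, y): the tangent-line / LRR1 analogue."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def modified_euler(f, t0, y0, h, n):
    """Modified Euler: Euler predictor, trapezoidal (TR2-style) corrector."""
    t, y = t0, y0
    for _ in range(n):
        p = y + h * f(t, y)                       # predict
        y += h * (f(t, y) + f(t + h, p)) / 2.0    # correct
        t += h
    return y
```

On y' = y, y(0) = 1 over [0, 1], the correction step upgrades the global error from O(h) to O(h^2).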
Lecture 40: Runge-Kutta Method (RK4: Approximate Simpson's Rule SR3;
Notion Helps to Remember Form); Algorithm (GW5.4); Start Multistep (GW5.5);
Use of Maple and Octave
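The classical RK4 step of Lecture 40, sketched in Python; the (1, 2, 2, 1)/6 weights mirror Simpson's rule SR3, which is the mnemonic the notes suggest:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0   # Simpson-like weights

def rk4(f, t0, y0, h, n):
    """Integrate n steps of size h from (t0, y0)."""
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```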
Lecture 41: More Predictor-Corrector Multistep Methods (Emphasize
Adams-Moulton Method Only; Text is Overdone Here; Skip Milne's Method)
(GW5.5, 5.7); Global Discretization Error for Euler's Method (GW5.10);
(Caution: Global Estimates Apply Only to the Unstable Case)
Lecture 42: More Euler Global Theoretical Error but with Global
Truncation (Rounding/Chopping) Error (If Global Theoretical Euler Error
= O(h) and Global Roundoff Error = O(MacEps/h), Then There Exists a
Finite Optimal Step Size h* = O(\sqrt{MacEps})) (GW5.9-5.10, poorly)
Lecture 43: Boundary Value Problems and Shooting Method (GW6.1)
(Caution: Motivate by a Nonlinear Example, like y''(x) = -y^2(x), since the Method
Most Useful for Nonlinear Problems, Unlike Text Example)
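A Python sketch of the shooting method of Lecture 43 for y'' = f(x, y), y(a) = alpha, y(b) = beta: integrate the initial value problem with a guessed slope s and drive the endpoint mismatch to zero with the secant method. (Crude forward Euler is used inside for brevity; RK4 would be the practical choice, and, as the caution above says, a nonlinear f is where the method pays off. The test below uses a linear problem only because its solution is known.)

```python
def shoot(f, a, b, alpha, beta, n=1000):
    """Shooting method: find the initial slope s so that the IVP solution
    of y'' = f(x, y) with y(a) = alpha, y'(a) = s hits y(b) = beta."""
    def mismatch(s):
        h = (b - a) / n
        x, y, v = a, alpha, s                    # v = y'
        for _ in range(n):                       # forward Euler on the system
            y, v, x = y + h * v, v + h * f(x, y), x + h
        return y - beta
    s0, s1 = 0.0, 1.0                            # two starting slope guesses
    F0, F1 = mismatch(s0), mismatch(s1)
    for _ in range(50):                          # secant iteration on F(s) = 0
        if abs(F1) < 1e-10 or F1 == F0:
            break
        s0, s1, F0 = s1, s1 - F1 * (s1 - s0) / (F1 - F0), F1
        F1 = mismatch(s1)
    return s1
```

For the linear test problem y'' = y, y(0) = 0, y(1) = 1, the exact slope is y'(0) = 1/sinh(1) ≈ 0.8509.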
Lecture 44: Boundary Value Problems and Linear Algebra Methods (GW6.2)
(Emphasize Thomas Elimination Algorithm for Tridiagonal System, noting the
Importance of Tridiagonal Form Efficiency in Numerics; Caution: Do Nontrivial,
Nonconstant Coefficient, Second Order, Linear Boundary Value Problem Example)
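The Thomas algorithm of Lecture 44 solves a tridiagonal system in O(n) flops versus O(n^3/3) for general elimination; a Python sketch with the sub-, main, and super-diagonals passed as separate lists (names illustrative):

```python
def thomas(sub, diag, sup, rhs):
    """Thomas elimination for a tridiagonal system.
    sub[i] multiplies x[i] in row i+1, sup[i] multiplies x[i+1] in row i."""
    n = len(diag)
    dp, bp = diag[:], rhs[:]
    for i in range(1, n):                # forward sweep: eliminate sub-diagonal
        m = sub[i - 1] / dp[i - 1]
        dp[i] -= m * sup[i - 1]
        bp[i] -= m * bp[i - 1]
    x = [0.0] * n                        # backward substitution
    x[-1] = bp[-1] / dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (bp[i] - sup[i] * x[i + 1]) / dp[i]
    return x

# the classic -1, 2, -1 matrix from a second-difference discretization
x = thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
```

This is exactly the structure produced by finite-difference discretization of the boundary value problems above, which is why tridiagonal efficiency matters.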
Lecture 45: General Form of Thomas Tridiagonal Algorithm; Review for
Final Examination; Mention Eigenvalue Problems if Time Permits.
CAUTION: For the Summer Session, the Lecture Schedule Must Be Compressed in
a Different Order.