Numerical Linear Algebra with Applications: Using MATLAB, 1st Edition
By William Ford

Publication Date: 02 Sep 2014
Description

Numerical Linear Algebra with Applications is designed for those who want to gain a practical knowledge of modern computational techniques for the numerical solution of linear algebra problems, using MATLAB as the vehicle for computation. The book contains all the material necessary for a first-year graduate or advanced undergraduate course on numerical linear algebra, with numerous applications to engineering and science. By unifying the presentation of computation, basic algorithm analysis, and numerical methods for computing solutions, it prepares readers to solve real-world problems.

The text opens with six introductory chapters that provide a thorough background for readers who have not taken a course in applied or theoretical linear algebra. It then explains in detail the algorithms needed to compute accurate solutions to the problems that occur most frequently in numerical linear algebra. In addition to examples from engineering and science applications, proofs of required results are given without omitting critical details. The Preface suggests ways in which the book can be used with or without an intensive study of proofs.
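
As a small, hypothetical illustration of the style of computation the book emphasizes (this sketch is not taken from the text), the following MATLAB code solves a linear system Ax = b via the LU decomposition with partial pivoting, a topic treated in Chapters 2 and 11; the matrix and right-hand side are made-up values.

    A = [4 -1 0; -1 4 -1; 0 -1 4];    % illustrative tridiagonal matrix
    b = [1; 2; 3];                    % illustrative right-hand side
    [L, U, P] = lu(A);                % LU decomposition with partial pivoting, so P*A = L*U
    y = L \ (P*b);                    % forward substitution with the lower triangular factor
    x = U \ y;                        % back substitution with the upper triangular factor
    relres = norm(b - A*x)/norm(b);   % relative residual as a check on accuracy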

This book will be a useful reference for graduate or advanced undergraduate students in engineering, science, and mathematics. It will also appeal to professionals in engineering and science, such as practicing engineers who want to see how numerical linear algebra problems can be solved using a programming language such as MATLAB, MAPLE, or Mathematica.

Key Features

  • Six introductory chapters that provide a thorough background for those who have not taken a course in applied or theoretical linear algebra
  • Detailed explanations and examples
  • A thorough discussion of the algorithms necessary for the accurate computation of the solution to the most frequently occurring problems in numerical linear algebra
  • Examples from engineering and science applications
About the author
William Ford, University of the Pacific, Stockton, California, USA
Table of Contents
  • Dedication
  • List of Figures
  • List of Algorithms
  • Preface
    • Topics
    • Intended Audience
    • Ways to Use the Book
    • MATLAB Library
    • Supplement
    • Acknowledgments
  • Chapter 1: Matrices
    • Abstract
    • 1.1 Matrix Arithmetic
    • 1.2 Linear Transformations
    • 1.3 Powers of Matrices
    • 1.4 Nonsingular Matrices
    • 1.5 The Matrix Transpose and Symmetric Matrices
    • 1.6 Chapter Summary
    • 1.7 Problems
  • Chapter 2: Linear Equations
    • Abstract
    • 2.1 Introduction to Linear Equations
    • 2.2 Solving Square Linear Systems
    • 2.3 Gaussian Elimination
    • 2.4 Systematic Solution of Linear Systems
    • 2.5 Computing the Inverse
    • 2.6 Homogeneous Systems
    • 2.7 Application: A Truss
    • 2.8 Application: Electrical Circuit
    • 2.9 Chapter Summary
    • 2.10 Problems
  • Chapter 3: Subspaces
    • Abstract
    • 3.1 Introduction
    • 3.2 Subspaces of ℝⁿ
    • 3.3 Linear Independence
    • 3.4 Basis of a Subspace
    • 3.5 The Rank of a Matrix
    • 3.6 Chapter Summary
    • 3.7 Problems
  • Chapter 4: Determinants
    • Abstract
    • 4.1 Developing the Determinant of a 2 × 2 and a 3 × 3 Matrix
    • 4.2 Expansion by Minors
    • 4.3 Computing a Determinant Using Row Operations
    • 4.4 Application: Encryption
    • 4.5 Chapter Summary
    • 4.6 Problems
  • Chapter 5: Eigenvalues and Eigenvectors
    • Abstract
    • 5.1 Definitions and Examples
    • 5.2 Selected Properties of Eigenvalues and Eigenvectors
    • 5.3 Diagonalization
    • 5.4 Applications
    • 5.5 Computing Eigenvalues and Eigenvectors Using MATLAB
    • 5.6 Chapter Summary
    • 5.7 Problems
  • Chapter 6: Orthogonal Vectors and Matrices
    • Abstract
    • 6.1 Introduction
    • 6.2 The Inner Product
    • 6.3 Orthogonal Matrices
    • 6.4 Symmetric Matrices and Orthogonality
    • 6.5 The L2 Inner Product
    • 6.6 The Cauchy-Schwarz Inequality
    • 6.7 Signal Comparison
    • 6.8 Chapter Summary
    • 6.9 Problems
  • Chapter 7: Vector and Matrix Norms
    • Abstract
    • 7.1 Vector Norms
    • 7.2 Matrix Norms
    • 7.3 Submultiplicative Matrix Norms
    • 7.4 Computing the Matrix 2-Norm
    • 7.5 Properties of the Matrix 2-Norm
    • 7.6 Chapter Summary
    • 7.7 Problems
  • Chapter 8: Floating Point Arithmetic
    • Abstract
    • 8.1 Integer Representation
    • 8.2 Floating-Point Representation
    • 8.3 Floating-Point Arithmetic
    • 8.4 Minimizing Errors
    • 8.5 Chapter Summary
    • 8.6 Problems
  • Chapter 9: Algorithms
    • Abstract
    • 9.1 Pseudocode Examples
    • 9.2 Algorithm Efficiency
    • 9.3 The Solution to Upper and Lower Triangular Systems
    • 9.4 The Thomas Algorithm
    • 9.5 Chapter Summary
    • 9.6 Problems
  • Chapter 10: Conditioning of Problems and Stability of Algorithms
    • Abstract
    • 10.1 Why Do We Need Numerical Linear Algebra?
    • 10.2 Computation Error
    • 10.3 Algorithm Stability
    • 10.4 Conditioning of a Problem
    • 10.5 Perturbation Analysis for Solving a Linear System
    • 10.6 Properties of the Matrix Condition Number
    • 10.7 MATLAB Computation of a Matrix Condition Number
    • 10.8 Estimating the Condition Number
    • 10.9 Introduction to Perturbation Analysis of Eigenvalue Problems
    • 10.10 Chapter Summary
    • 10.11 Problems
  • Chapter 11: Gaussian Elimination and the LU Decomposition
    • Abstract
    • 11.1 LU Decomposition
    • 11.2 Using LU to Solve Equations
    • 11.3 Elementary Row Matrices
    • 11.4 Derivation of the LU Decomposition
    • 11.5 Gaussian Elimination with Partial Pivoting
    • 11.6 Using the LU Decomposition to Solve Axᵢ = bᵢ, 1 ≤ i ≤ k
    • 11.7 Finding A⁻¹
    • 11.8 Stability and Efficiency of Gaussian Elimination
    • 11.9 Iterative Refinement
    • 11.10 Chapter Summary
    • 11.11 Problems
  • Chapter 12: Linear System Applications
    • Abstract
    • 12.1 Fourier Series
    • 12.2 Finite Difference Approximations
    • 12.3 Least-Squares Polynomial Fitting
    • 12.4 Cubic Spline Interpolation
    • 12.5 Chapter Summary
    • 12.6 Problems
  • Chapter 13: Important Special Systems
    • Abstract
    • 13.1 Tridiagonal Systems
    • 13.2 Symmetric Positive Definite Matrices
    • 13.3 The Cholesky Decomposition
    • 13.4 Chapter Summary
    • 13.5 Problems
  • Chapter 14: Gram-Schmidt Orthonormalization
    • Abstract
    • 14.1 The Gram-Schmidt Process
    • 14.2 Numerical Stability of the Gram-Schmidt Process
    • 14.3 The QR Decomposition
    • 14.4 Applications of the QR Decomposition
    • 14.5 Chapter Summary
    • 14.6 Problems
  • Chapter 15: The Singular Value Decomposition
    • Abstract
    • 15.1 The SVD Theorem
    • 15.2 Using the SVD to Determine Properties of a Matrix
    • 15.3 SVD and Matrix Norms
    • 15.4 Geometric Interpretation of the SVD
    • 15.5 Computing the SVD Using MATLAB
    • 15.6 Computing A⁻¹
    • 15.7 Image Compression Using the SVD
    • 15.8 Final Comments
    • 15.9 Chapter Summary
    • 15.10 Problems
  • Chapter 16: Least-Squares Problems
    • Abstract
    • 16.1 Existence and Uniqueness of Least-Squares Solutions
    • 16.2 Solving Overdetermined Least-Squares Problems
    • 16.3 Conditioning of Least-Squares Problems
    • 16.4 Rank-Deficient Least-Squares Problems
    • 16.5 Underdetermined Linear Systems
    • 16.6 Chapter Summary
    • 16.7 Problems
  • Chapter 17: Implementing the QR Decomposition
    • Abstract
    • 17.1 Review of the QR Decomposition Using Gram-Schmidt
    • 17.2 Givens Rotations
    • 17.3 Creating a Sequence of Zeros in a Vector Using Givens Rotations
    • 17.4 Product of a Givens Matrix with a General Matrix
    • 17.5 Zeroing-Out Column Entries in a Matrix Using Givens Rotations
    • 17.6 Accurate Computation of the Givens Parameters
    • 17.7 The Givens Algorithm for the QR Decomposition
    • 17.8 Householder Reflections
    • 17.9 Computing the QR Decomposition Using Householder Reflections
    • 17.10 Chapter Summary
    • 17.11 Problems
  • Chapter 18: The Algebraic Eigenvalue Problem
    • Abstract
    • 18.1 Applications of the Eigenvalue Problem
    • 18.2 Computation of Selected Eigenvalues and Eigenvectors
    • 18.3 The Basic QR Iteration
    • 18.4 Transformation to Upper Hessenberg Form
    • 18.5 The Unshifted Hessenberg QR Iteration
    • 18.6 The Shifted Hessenberg QR Iteration
    • 18.7 Schur's Triangularization
    • 18.8 The Francis Algorithm
    • 18.9 Computing Eigenvectors
    • 18.10 Computing Both Eigenvalues and Their Corresponding Eigenvectors
    • 18.11 Sensitivity of Eigenvalues to Perturbations
    • 18.12 Chapter Summary
    • 18.13 Problems
  • Chapter 19: The Symmetric Eigenvalue Problem
    • Abstract
    • 19.1 The Spectral Theorem and Properties of a Symmetric Matrix
    • 19.2 The Jacobi Method
    • 19.3 The Symmetric QR Iteration Method
    • 19.4 The Symmetric Francis Algorithm
    • 19.5 The Bisection Method
    • 19.6 The Divide-and-Conquer Method
    • 19.7 Chapter Summary
    • 19.8 Problems
  • Chapter 20: Basic Iterative Methods
    • Abstract
    • 20.1 Jacobi Method
    • 20.2 The Gauss-Seidel Iterative Method
    • 20.3 The SOR Iteration
    • 20.4 Convergence of the Basic Iterative Methods
    • 20.5 Application: Poisson's Equation
    • 20.6 Chapter Summary
    • 20.7 Problems
  • Chapter 21: Krylov Subspace Methods
    • Abstract
    • 21.1 Large, Sparse Matrices
    • 21.2 The CG Method
    • 21.3 Preconditioning
    • 21.4 Preconditioning for CG
    • 21.5 Krylov Subspaces
    • 21.6 The Arnoldi Method
    • 21.7 GMRES
    • 21.8 The Symmetric Lanczos Method
    • 21.9 The MINRES Method
    • 21.10 Comparison of Iterative Methods
    • 21.11 Poisson's Equation Revisited
    • 21.12 The Biharmonic Equation
    • 21.13 Chapter Summary
    • 21.14 Problems
  • Chapter 22: Large Sparse Eigenvalue Problems
    • Abstract
    • 22.1 The Power Method
    • 22.2 Eigenvalue Computation Using the Arnoldi Process
    • 22.3 The Implicitly Restarted Arnoldi Method
    • 22.4 Eigenvalue Computation Using the Lanczos Process
    • 22.5 Chapter Summary
    • 22.6 Problems
  • Chapter 23: Computing the Singular Value Decomposition
    • Abstract
    • 23.1 Development of the One-Sided Jacobi Method for Computing the Reduced SVD
    • 23.2 The One-Sided Jacobi Algorithm
    • 23.3 Transforming a Matrix to Upper-Bidiagonal Form
    • 23.4 Demmel and Kahan Zero-Shift QR Downward Sweep Algorithm
    • 23.5 Chapter Summary
    • 23.6 Problems
  • Appendix A: Complex Numbers
    • A.1 Constructing the Complex Numbers
    • A.2 Calculating with Complex Numbers
    • A.3 Geometric Representation of ℂ
    • A.4 Complex Conjugate
    • A.5 Complex Numbers in MATLAB
    • A.6 Euler’s Formula
    • A.7 Problems
    • A.7.1 MATLAB Problems
  • Appendix B: Mathematical Induction
    • B.1 Problems
  • Appendix C: Chebyshev Polynomials
    • C.1 Definition
    • C.2 Properties
    • C.3 Problems
  • Glossary
  • Bibliography
  • Index
Book details
ISBN: 9780123944351
Page Count: 628
Retail Price: £93.99
  • Elementary Linear Algebra, 4th Edition, Andrilli/Hecker, 97801203747518, Jan. 2010, $99.95
  • Matrix Methods: Applied Linear Algebra, 3rd Edition, Bronson, 9780123744272, Sep. 2008, $94.95
  • Essential MATLAB for Scientists and Engineers, Hahn/Valentine, 9780750684170, Mar. 2010, $41.95
  • Linear Algebra, Bronson, 9780120887842, Mar. 2007, $79.95
Audience

Graduate and advanced undergraduate students in engineering, science, and mathematics; professionals in engineering and science, such as practicing engineers who want to see how numerical linear algebra problems can be solved using a programming language such as MATLAB, MAPLE, or Mathematica.
