Generalized Minimal Residual Method and Lanczos Method - Slides | CSE 275, Study notes of Computer Science

Material Type: Notes; Professor: Yang; Class: Matrix Computation; Subject: Computer Science & Engineering; University: University of California-Merced; Term: Unknown 1989;

Uploaded on 08/18/2009

CSE 275 Matrix Computation
Ming-Hsuan Yang
Electrical Engineering and Computer Science
University of California at Merced
Merced, CA 95344
http://faculty.ucmerced.edu/mhyang

Lecture 19

Overview

- Generalized minimal residuals (GMRES)
- Lanczos method

Residual minimization in Kn (cont'd)

- Use the Arnoldi iteration to construct a sequence of Krylov matrices Qn whose columns q1, q2, . . . span the successive Krylov subspaces Kn
- Thus we write xn = Qn y instead of xn = Kn c, and the least squares problem is to find y ∈ Cn such that ‖AQn y − b‖ = minimum
- Superficially this problem has dimensions m × n; because of the special structure of Krylov subspaces, it is essentially of dimension (n + 1) × n
- Recall that AQn = Qn+1 H̃n in the Arnoldi iteration, and thus ‖Qn+1 H̃n y − b‖ = minimum
- Both vectors inside the norm lie in the column space of Qn+1, so multiplying on the left by Q∗n+1 does not change the norm: ‖H̃n y − Q∗n+1 b‖ = minimum

Residual minimization in Kn (cont'd)

- By construction of the Krylov matrices {Qn}, Q∗n+1 b is equal to ‖b‖ e1, where e1 = [1, 0, 0, . . . , 0]∗
- Thus we have ‖H̃n y − ‖b‖ e1‖ = minimum
- At step n of GMRES we solve this problem for y and then set xn = Qn y

Mechanics of GMRES

GMRES
    q1 = b/‖b‖
    for n = 1, 2, . . .
        do step n of the Arnoldi iteration
        find y to minimize ‖rn‖ = ‖H̃n y − ‖b‖ e1‖
        xn = Qn y
    end for

- At each step, GMRES minimizes the norm of the residual rn = b − Axn over all vectors xn ∈ Kn
- The quantity ‖rn‖ is computed in the course of finding y
- The "find y" step is an (n + 1) × n least squares problem with Hessenberg structure
- Use a QR factorization to solve for y; the Hessenberg structure reduces the cost of minimizing ‖rn‖ to O(n²) flops

GMRES and polynomial approximation (cont'd)

- The corresponding residual rn = b − Axn is rn = (I − A qn(A)) b; with pn the polynomial defined by pn(z) = 1 − z qn(z), we have rn = pn(A) b for some polynomial pn ∈ Pn
- The GMRES process chooses the coefficients of pn to minimize the norm of this residual
- GMRES thus solves the following approximation problem successively for n = 1, 2, 3, . . . : find pn ∈ Pn such that ‖pn(A) b‖ = minimum

Lanczos iteration

- The Lanczos iteration is the Arnoldi iteration specialized to the case where A is Hermitian
- We further assume A is real and symmetric in the following analysis
- First, note that Hn is symmetric and real, so its eigenvalues, the Ritz values or Lanczos estimates, are also real
- Second, since Hn is both symmetric and Hessenberg, it is tridiagonal
- This means that in the inner loop of the Arnoldi iteration, the limits 1 to n can be replaced by n − 1 to n
- Instead of the (n + 1)-term recurrence at step n, the Lanczos iteration involves just a three-term recurrence
- Each step of the Lanczos iteration is therefore much cheaper than the corresponding step of the Arnoldi iteration

Three-term recurrence

- The fact that Hn is tridiagonal is important
- From the QR factors of the Krylov matrix, the Hessenberg matrices Hn are Hn = Q∗n A Qn
- For real matrices A, Hn, and Qn we can write this entry-wise as hij = qiᵀ A qj, which implies that hij = 0 for i > j + 1, since A qj ∈ ⟨q1, q2, . . . , qj+1⟩ and the Krylov vectors are orthogonal
- Taking the transpose gives hij = qjᵀ Aᵀ qi; as A = Aᵀ, hij = 0 for j > i + 1 by the same reasoning

The Lanczos iteration algorithm (cont'd)

Theorem. The matrices Qn of vectors qn generated by the Lanczos iteration are reduced QR factors of the Krylov matrix, Kn = Qn Rn. The tridiagonal matrices Tn are the corresponding projections Tn = Q∗n A Qn, and the successive iterates are related by AQn = Qn+1 T̃n, which we can write in the form of a three-term recurrence at step n:

    A qn = βn−1 qn−1 + αn qn + βn qn+1

As long as the Lanczos iteration does not break down (i.e., Kn has full rank n), the characteristic polynomial of Tn is the unique polynomial pn ∈ Pn that solves the Arnoldi/Lanczos approximation problem, i.e., that achieves ‖pn(A) b‖ = minimum.

Numerical example

- Let A be the 203 × 203 matrix A = diag(0, 0.01, 0.02, . . . , 1.99, 2, 2.5, 3.0)
- The spectrum of A consists of a dense collection of eigenvalues throughout [0, 2] together with two outliers, 2.5 and 3.0
- Run the Lanczos iteration beginning with a random starting vector q1
- At step 9, seven Ritz values lie in [0, 2], and the associated Lanczos polynomial is uniformly small on that interval; the leading Ritz values are 1.93, 2.48, 2.999962
- The polynomial is small throughout [0, 2] ∪ {2.5} ∪ {3.0}
- At step 20, the leading Ritz values are 1.9906, 2.49999999999987, 3.00000000000000
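The GMRES loop in "Mechanics of GMRES" can be sketched in NumPy. This is a toy dense version, not from the slides: the function name `gmres_toy` and all helper choices are ours, and it solves the (n + 1) × n Hessenberg least squares problem with a generic solver rather than the incremental O(n²) Givens/QR update the slides mention.

```python
import numpy as np

def gmres_toy(A, b, n_steps):
    """Toy GMRES: Arnoldi iteration plus a Hessenberg least squares solve.

    At step n we extend Q_{n+1} and the (n+1) x n matrix H~_n, solve
    min_y ||H~_n y - ||b|| e1||, and set x_n = Q_n y.  Dense and
    unoptimized; a production solver would update a QR factorization
    of H~_n with Givens rotations instead of calling lstsq each step.
    """
    m = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((m, n_steps + 1))
    H = np.zeros((n_steps + 1, n_steps))
    Q[:, 0] = b / beta
    residuals = []
    x = None
    for n in range(n_steps):
        # One step of the Arnoldi iteration (modified Gram-Schmidt)
        v = A @ Q[:, n]
        for i in range(n + 1):
            H[i, n] = Q[:, i] @ v
            v -= H[i, n] * Q[:, i]
        H[n + 1, n] = np.linalg.norm(v)
        if H[n + 1, n] > 1e-14:          # guard against breakdown
            Q[:, n + 1] = v / H[n + 1, n]
        # Least squares problem: min_y || H~_n y - ||b|| e1 ||
        rhs = np.zeros(n + 2)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(H[:n + 2, :n + 1], rhs, rcond=None)
        x = Q[:, :n + 1] @ y
        residuals.append(np.linalg.norm(b - A @ x))
    return x, residuals
```

After m steps on an m × m system (and exact arithmetic), the Krylov subspace is the whole space, so the final residual is zero up to rounding.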
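The three-term recurrence A qn = βn−1 qn−1 + αn qn + βn qn+1 translates into a short loop. The following is a minimal sketch under the slides' assumptions (A real symmetric, no breakdown); the name `lanczos_ritz` is ours, and there is no reorthogonalization, so for many steps on large problems the orthogonality of the qn degrades.

```python
import numpy as np

def lanczos_ritz(A, b, n_steps):
    """Lanczos three-term recurrence for real symmetric A.

    Builds the diagonal (alpha) and off-diagonal (beta) entries of the
    tridiagonal T_n = Q_n^T A Q_n and returns its eigenvalues, i.e. the
    Ritz values.  No reorthogonalization; assumes no breakdown.
    """
    m = len(b)
    alpha = np.zeros(n_steps)
    beta = np.zeros(n_steps)
    q_prev = np.zeros(m)
    b_prev = 0.0
    q = b / np.linalg.norm(b)
    for n in range(n_steps):
        v = A @ q
        alpha[n] = q @ v
        # Three-term recurrence: only the two most recent q's are needed
        v = v - alpha[n] * q - b_prev * q_prev
        beta[n] = np.linalg.norm(v)
        if n < n_steps - 1:              # last beta is never used to normalize
            q_prev, q, b_prev = q, v / beta[n], beta[n]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)         # Ritz values, ascending
```

This is where the cost advantage over Arnoldi shows up: each step touches only the two most recent Krylov vectors instead of all n of them.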
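The numerical example is easy to reproduce. The script below (our own code; the seed is arbitrary, so the in-[0, 2] Ritz values will differ from the slides' run) inlines the same three-term recurrence, runs 20 steps on the 203 × 203 diagonal matrix, and prints the three largest Ritz values; the outliers 2.5 and 3.0 are resolved to many digits, as on the last slide.

```python
import numpy as np

# The 203 x 203 test matrix from the slides: eigenvalues 0, 0.01, ..., 2
# densely filling [0, 2], plus the two outliers 2.5 and 3.0.
eigs = np.concatenate([np.linspace(0.0, 2.0, 201), [2.5, 3.0]])
A = np.diag(eigs)

rng = np.random.default_rng(0)           # arbitrary seed for the random start
q = rng.standard_normal(203)
q /= np.linalg.norm(q)

n_steps = 20
alpha = np.zeros(n_steps)
beta = np.zeros(n_steps)
q_prev = np.zeros(203)
b_prev = 0.0
for n in range(n_steps):                 # plain three-term recurrence
    v = A @ q
    alpha[n] = q @ v
    v = v - alpha[n] * q - b_prev * q_prev
    beta[n] = np.linalg.norm(v)
    if n < n_steps - 1:
        q_prev, q, b_prev = q, v / beta[n], beta[n]

T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
ritz = np.linalg.eigvalsh(T)
print(ritz[-3:])   # leading Ritz values; the last two approximate 2.5 and 3.0
```

The rapid convergence to the outliers reflects the polynomial picture: the Lanczos polynomial can afford to be tiny at two isolated points while staying uniformly small on [0, 2].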