Ebook Info
- Published: 1997
- Number of pages: 360
- Format: PDF
- File Size: 22.38 MB
- Authors: Rajendra Bhatia
Description
This book presents a substantial part of matrix analysis that is functional analytic in spirit. Topics covered include the theory of majorization, variational principles for eigenvalues, operator monotone and convex functions, and perturbation of matrix functions and matrix inequalities. The book offers several powerful methods and techniques of wide applicability, and it discusses connections with other areas of mathematics.
User’s Reviews
Editorial Reviews

Review: R. Bhatia, Matrix Analysis

"A highly readable and attractive account of the subject. The book is a must for anyone working in matrix analysis; it can be recommended to graduate students as well as to specialists." ―ZENTRALBLATT MATH

"There is an ample selection of exercises carefully positioned throughout the text. In addition each chapter includes problems of varying difficulty in which themes from the main text are extended." ―MATHEMATICAL REVIEWS

From the Back Cover

The aim of this book is to present a substantial part of matrix analysis that is functional analytic in spirit. Much of this will be of interest to graduate students and research workers in operator theory, operator algebras, mathematical physics, and numerical analysis. The book can be used as a basic text for graduate courses on advanced linear algebra and matrix analysis. It can also be used as a supplementary text for courses in operator theory and numerical analysis. Among the topics covered are the theory of majorization, variational principles for eigenvalues, operator monotone and convex functions, perturbation of matrix functions, and matrix inequalities. Much of this is presented for the first time in a unified way in a textbook. The reader will learn several powerful methods and techniques of wide applicability, and see connections with other areas of mathematics. A large selection of matrix inequalities makes this book a valuable reference for students and researchers working in numerical analysis, mathematical physics and operator theory.
Reviews from Amazon users, collected at the time this book was published on the website:
⭐This book is fascinating! Bhatia has made an excellent selection of topics. It is frequently cited in the quantum information literature, and I assume also in the literature of other research subjects. This is a book on matrix analysis with the flavor of finite-dimensional functional analysis; it is concise and has a very interesting selection of topics. I have a few suggested tweaks for future(?) editions or classroom discussions.

Remarks on chapter 2: The presentation at the beginning of chapter 2 would be better motivated if one operationally defined x to majorize y iff y = Ax for some doubly stochastic matrix A. Bhatia uses an algebraic definition and then proves the equivalence six pages later. Immediately giving an unmotivated algebraic condition robs the reader of the chance to discover or prove the condition for himself.

There is a very confusing typo in the proof of Theorem II.2.8: the statement "Let r be the smallest of the positive coordinates of x" should read "Let r be the smallest of the positive coordinates of y."

Another small remark: just after the statement of Corollary II.3.4, Bhatia states that "one part of Theorem II.3.1 and Exercise II.3.2 is subsumed by [Corollary II.3.4]." In fact, they are equivalent! That II.3.1 and II.3.2 imply II.3.4 follows immediately from the following observation: if f: R -> R and g: R -> R are convex and f is monotonically increasing, then f composed with g is convex.

Notes on chapter 4: It would be nice to have the isomorphism between balls and norms presented, perhaps just as an exercise. Then the reader could form a visual mental picture of the various conditions for a norm to be a symmetric gauge function.
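The operational characterization of majorization mentioned in the chapter 2 remarks above (x majorizes y iff y = Ax for some doubly stochastic A) is, by the Hardy–Littlewood–Pólya theorem, equivalent to a partial-sum test that is easy to code. A minimal sketch in Python — the helper name and tolerance are illustrative, not from the book:

```python
def majorizes(x, y, tol=1e-9):
    """Return True if x majorizes y: the partial sums of the
    decreasingly sorted x dominate those of y, and the total
    sums agree. By Hardy-Littlewood-Polya / Birkhoff this is
    equivalent to y = Ax for some doubly stochastic matrix A."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    sx = sy = 0.0
    for a, b in zip(xs, ys):
        sx += a
        sy += b
        if sx < sy - tol:       # a partial sum of y exceeds that of x
            return False
    return abs(sx - sy) <= tol  # total sums must agree

# Averaging decreases the order: (1, 0, 0) majorizes (1/2, 1/2, 0),
# which in turn majorizes the uniform vector (1/3, 1/3, 1/3).
```

This makes the "mixing" intuition concrete: applying a doubly stochastic matrix can only move a vector down the majorization order.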
It might also be nice to move Theorem IV.2.1 to the very beginning of that chapter, so that the reader sees the point of Section IV.1 immediately. A small remark: the proof of Theorem IV.1.8 is made slightly more transparent by the observation that, by Theorem IV.1.6, one has

[Phi(x^p)]^(1/p) = sup Phi(xz),

where the supremum is over z such that [Phi(z^q)]^(1/q) = 1. (The sup is attained when x^p = z^q.) Then Theorem IV.1.8 follows immediately from the triangle inequality and subadditivity of suprema:

[Phi((x+y)^p)]^(1/p) = sup Phi((x+y)z) <= sup [Phi(xz) + Phi(yz)] <= sup Phi(xz) + sup Phi(yz).

Chapter 5: This chapter covers some of the most interesting and surprising mathematics I have ever seen. Remarks:

1. All the regularity needed to classify the matrix monotone functions is already present in the case of 2x2 matrix monotone functions. Concretely classifying them first would modularize the parts of a complicated proof, allowing some separation between the discussion of operator convexity and of monotonicity. (Let f: R -> R be non-constant. Then f is 2x2 matrix monotone iff f is differentiable with df/dt > 0 everywhere and (df/dt)^(-1/2) concave. Furthermore, the first two estimates of Lemma V.4.1 continue to hold for 2x2 matrix monotone functions.)

2. Theorem V.3.3 has somewhat restrictive assumptions: let f: R -> R be extended to a map on self-adjoint matrices using the functional calculus. Then all that is needed to differentiate f(A + tH) at t = 0, where A and H are self-adjoint and t is a real parameter, is for f to be differentiable on the spectrum of A. (For example, f could be discontinuous everywhere except on spec(A).)

3. I would have preferred the definition of the "second divided difference" of f at the points {a, b, c} to be "the highest-degree coefficient of the at-most-quadratic polynomial P that interpolates f on the set {a, b, c}. When a = b, one chooses P such that P'(a) = f'(a) as well.
When a = b = c, one also takes P''(a) = f''(a)." This is the point of Exercise V.3.7, but it makes for easier reading when the definition is conceptual and the exercise is to work out the algebraic consequences. Furthermore, if desired one can avoid this calculation altogether and proceed directly to the proof of Theorem V.3.10. (Just replace f by interpolating polynomials and evaluate everything by algebra. It has the flavor of Feynman diagrams.)

4. In Hansen and Pedersen, "Jensen's operator inequality," Bulletin of the London Mathematical Society 35, pp. 553-564 (2003); arXiv:math.OA/0204049 (2002), the original authors of the non-commutative Jensen inequality state: "With hindsight we must admit that we unfortunately proved and used [a different formulation of the noncommutative Jensen's inequality]. However, this necessitated the further conditions that 0 is an element of I and that f(0) < 0, conditions that have haunted the theory since then." Bhatia's presentation is somewhat out of date because it does not include the sharper Jensen's inequality from this more recent work. (Note that the paper appeared after the current edition of Bhatia was published.) Furthermore, in the same paper Hansen and Pedersen introduce a nice version of Jensen's trace inequality. It is the same as their sharper form of Jensen's operator inequality, except that both sides carry a trace and the operator convex function f is replaced by an arbitrary (scalar) convex function f: R -> R (acting on matrices via the functional calculus). In particular, the trace inequality is much simpler to prove and more widely applicable, although less powerful.

5. It would be nice in future editions(?)
to include a reference to Petz and Nielsen's nice little proof of strong subadditivity of the von Neumann entropy.

Chapter 7: I would have liked to see Section 7.1 replaced with the following theorem statement (very similar to what is already in 7.1), proved without choosing an arbitrary basis. (Using an arbitrary basis makes Bhatia's proof of the CS theorem a bit messy; a reformulation avoids that.)

Definition: A unitary map U on a Hilbert space is a planar rotation iff U restricts to the identity on a subspace P of codimension 2, and on the orthogonal complement of P it acts as the rotation

 cos(t)  sin(t)
-sin(t)  cos(t)

Theorem: Let E and F be distinct subspaces of the Hilbert space H, with dim E = dim F. Then there exists a set of planar rotations {R_i} with the following properties:

1. The two-dimensional rotation subspaces of the R_i are mutually orthogonal and intersect E and F. (In particular, the R_i commute.)
2. Each R_i rotates by an angle theta_i in (0, pi/2].
3. E is rotated onto F by the product of the R_i.

Furthermore, the collection of angles theta_i, including multiplicity, is uniquely determined by E and F. If the angles theta_i are distinct and strictly less than pi/2, then the corresponding R_i are also uniquely determined.

Further remark on chapter VII: There is an error on page 223. The author states, "we have a bijection psi from H tensor H onto L(H), that is linear in the first variable and conjugate linear in the second variable." This is impossible, since (lambda v) tensor w = v tensor (lambda w); in particular, any map on H tensor H that is linear in the first variable is necessarily linear in the second variable. The practice of introducing such a map from H tensor H to L(H) is a cause of much ugly breaking of basis invariance in quantum information theory and should consequently be discouraged.
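The angles theta_i in the reformulated theorem above are the principal (canonical) angles between the subspaces E and F, and they can be read off from the singular values of the product of orthonormal bases; this computation is the heart of the CS decomposition the reviewer mentions. A minimal numpy sketch — the function name is illustrative, not from the book:

```python
import numpy as np

def principal_angles(E, F):
    """Principal angles between the column spans of E and F.
    These are the rotation angles theta_i in the planar-rotation
    decomposition sketched above. Columns of E and F need not be
    orthonormal; we orthonormalize them via QR first."""
    QE, _ = np.linalg.qr(E)
    QF, _ = np.linalg.qr(F)
    # Singular values of QE^* QF are the cosines of the angles.
    s = np.linalg.svd(QE.conj().T @ QF, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Example: two lines in R^2 meeting at 45 degrees.
E = np.array([[1.0], [0.0]])
F = np.array([[1.0], [1.0]])
# principal_angles(E, F) is approximately [pi/4]
```

The uniqueness claim in the theorem corresponds to the fact that the singular values (though not the singular vectors, when angles repeat) are uniquely determined.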
⭐Excellent. The deepest book in matrix analysis I have seen.
⭐This book is an expansion of the author's lecture notes "Perturbation Bounds for Matrix Eigenvalues," published in 1987. I have used both versions for my students' projects. The book under review centers on the themes of matrix inequalities and the perturbation of eigenvalues and eigenspaces. The first half covers the "classical" material of majorisation and matrix inequalities in a very clear and readable manner. The second half is a survey of the modern treatment of perturbation of matrix eigenvalues and eigenspaces, including many recent research results obtained by the author and others within the last ten years. The book has a large collection of challenging exercises. It is an excellent text for a senior undergraduate or graduate course on matrix analysis.
⭐Nice book. Many useful facts combined in one volume; a real pleasure to read. The only drawback is the sketchy last chapter (almost no proofs, due to lack of space, I believe).
Keywords
Free Download: Matrix Analysis (Graduate Texts in Mathematics, 169), 1997 Edition, in PDF format