
Documents 15A18: 2 results


When is the resolvent like a rank one matrix? - Greenbaum, Anne (Speaker) | CIRM

Multi angle

Let $A$ be a square matrix. The resolvent, $(A-zI)^{-1}$, $z \in \mathbb{C}$, plays an important role in many applications; for example, in studying functions of $A$, one often uses the Cauchy integral formula,
$$f(A)=-\frac{1}{2\pi i}\int_{\Gamma}(A-zI)^{-1} f(z)\,dz,$$
where $\Gamma$ is the boundary of a region $\Omega$ that contains the spectrum of $A$ and on which $f$ is analytic. If $z$ is very close to a simple eigenvalue $\lambda$ of $A$, much closer to $\lambda$ than to any other eigenvalue of $A$, then $(A-zI)^{-1} \approx \frac{1}{\lambda-z} x y^*$, where $x$ and $y$ are right and left normalized eigenvectors of $A$ corresponding to the eigenvalue $\lambda$. It is sometimes observed, however, that $(A-zI)^{-1}$ is close to a rank one matrix even when $z$ is not very close to an eigenvalue of $A$. In this case, one can write $(A-zI)^{-1} \approx \sigma_1(z) u_1(z) v_1(z)^*$, where $\sigma_1(z)$ is the largest singular value of $(A-zI)^{-1}$ and $u_1(z)$ and $v_1(z)$ are the corresponding left and right singular vectors. We use singular value/vector perturbation theory to describe conditions under which $(A-zI)^{-1}$ can be well approximated by rank one matrices for a wide range of $z$ values. If $\lambda$ is a simple ill-conditioned eigenvalue of $A$, if the smallest nonzero singular value of $A-\lambda I$ is well separated from $0$, and if a certain other condition involving the singular vectors of $A-\lambda I$ is satisfied, then it is shown that $(A-zI)^{-1}$ is close to a rank one matrix for a wide range of $z$ values. An application of this result in comparing bounds on $\|f(A)\|$ is described [1].
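As a minimal numerical sketch of the phenomenon the abstract describes (not code from the talk), one can compute the resolvent at a few points $z$ and check the ratio $\sigma_2/\sigma_1$ of its two largest singular values: the spectral-norm relative error of the best rank one approximation is exactly this ratio, so a small value means $(A-zI)^{-1} \approx \sigma_1 u_1 v_1^*$. The matrix and the $z$ values below are made-up illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# A random upper triangular matrix: non-normal, with eigenvalues on the
# diagonal that are typically ill-conditioned (illustrative choice only).
A = np.triu(rng.standard_normal((n, n)))

for z in [0.1 + 0.1j, 1.0 + 0.5j, 3.0 - 2.0j]:
    R = np.linalg.inv(A - z * np.eye(n))        # resolvent (A - zI)^{-1}
    s = np.linalg.svd(R, compute_uv=False)      # singular values, descending
    # sigma_2 / sigma_1 is the relative error of the best rank one
    # approximation sigma_1 * u_1 * v_1^* in the spectral norm.
    print(f"z = {z}: sigma_1 = {s[0]:.3e}, sigma_2/sigma_1 = {s[1]/s[0]:.3e}")
```

If the ratio stays small even for $z$ far from the spectrum, the resolvent behaves like a rank one matrix over that whole range, which is the regime the talk's perturbation-theory conditions characterize.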

15A60 ; 15A18 ; 65F99


Subspace iteration and variants, revisited - Saad, Yousef (Speaker) | CIRM

Multi angle

Computing invariant subspaces is at the core of many applications, from machine learning to signal processing and control theory, to name just a few examples. Often one wishes to compute the subspace associated with eigenvalues located at one end of the spectrum, i.e., either the largest or the smallest eigenvalues. In addition, it is quite common that the data at hand undergoes frequent changes and one is required to keep updating or tracking the target invariant subspace. The talk will present standard tools for computing invariant subspaces, with a focus on methods that do not require solving linear systems. One of the best known techniques for computing invariant subspaces is the subspace iteration algorithm [2]. While this algorithm tends to be slower than a Krylov subspace approach such as the Lanczos algorithm, it has many attributes that make it the method of choice in many applications. One of these attributes is its tolerance of changes in the matrix. An alternative framework that will be emphasized is that of Grassmann manifolds [1]. We will derive gradient-type methods and show the many connections that exist between different viewpoints adopted by practitioners, e.g., the TraceMin algorithm [3]. The talk will end with a few illustrative examples.
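For orientation, here is a minimal sketch of subspace iteration in its standard textbook form (not necessarily the variant discussed in the talk): repeatedly multiply an orthonormal basis by $A$ and re-orthonormalize, then extract Ritz values by Rayleigh-Ritz projection. The function name, tolerances, and test matrix below are illustrative assumptions.

```python
import numpy as np

def subspace_iteration(A, p, tol=1e-8, max_iter=500):
    """Approximate the invariant subspace of the p dominant (largest-magnitude)
    eigenvalues of a symmetric matrix A by basic subspace iteration."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))  # random orthonormal start
    for _ in range(max_iter):
        Z = A @ Q
        # Convergence test: span(Q) is (nearly) invariant when A Q stays in it.
        if np.linalg.norm(Z - Q @ (Q.T @ Z)) < tol * np.linalg.norm(Z):
            break
        Q, _ = np.linalg.qr(Z)                        # re-orthonormalize A Q
    # Rayleigh-Ritz: eigenvalues of the projected p x p matrix approximate
    # the dominant eigenvalues of A.
    theta = np.linalg.eigvalsh(Q.T @ A @ Q)
    return Q, theta

# Illustrative example: dominant 3-dimensional invariant subspace of a
# symmetric positive semidefinite matrix (so dominant = largest eigenvalues).
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = B @ B.T
Q, theta = subspace_iteration(A, 3)
print("Ritz values:          ", theta)
print("Largest eigenvalues:  ", np.sort(np.linalg.eigvalsh(A))[-3:])
```

Note the attribute the abstract highlights: the iteration only touches $A$ through matrix-vector products, so if $A$ changes slightly, the current basis $Q$ remains a good starting point and the iteration simply continues, which is what makes the method well suited to tracking a slowly varying invariant subspace.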

65F15 ; 15A23 ; 15A18
