We give a new expression for the law of the eigenvalues of the discrete Anderson model on the finite interval [0, N], in terms of two random processes starting at both ends of the interval. Using this formula, we deduce that the tail of the eigenvectors behaves approximately like the exponential of a Brownian motion with drift. A similar result has recently been shown by B. Rifkind and B. Virag in the critical case, that is, when the random potential is multiplied by a factor 1/√N.
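For orientation, here is a minimal sketch of the setting in standard notation; the normalisation of the operator and the informal form of the tail estimate are our assumptions, not taken from the talk:

```latex
% Sketch of the setting (normalisations assumed, not from the talk).
% The discrete Anderson model on [0, N] acts on \psi = (\psi(0), \dots, \psi(N)) by
\[
  (H_N \psi)(n) = \psi(n+1) + \psi(n-1) + V_n\,\psi(n), \qquad 0 \le n \le N,
\]
% with (V_n) an i.i.d. random potential. The tail statement above then reads,
% informally, for some drift c > 0 and a Brownian motion (B_t),
\[
  |\psi(n)| \;\approx\; \exp\bigl(-c\,n + B_n\bigr),
\]
% and in the critical case of Rifkind and Virag the potential is V_n/\sqrt{N}.
```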
60B20 ; 65F15
Structure is a fundamental concept in linear algebra: matrices arising from applications often inherit a special form from the original problem, and this special form can be analysed and exploited to design efficient algorithms. In this short course we will present some examples of matrix structure and related applications. Here we are interested in data-sparse structure, that is, structure that allows us to represent an n × n matrix using only O(n) parameters. One notable example is provided by quasiseparable matrices, a class of (generally dense) rank-structured matrices where off-diagonal blocks have low rank.
We will give an overview of the properties of these structured classes and present a few examples of how algorithms that perform basic tasks – e.g., solving linear systems, computing eigenvalues, approximating matrix functions – can be tailored to specific structures.
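To make the "dense but rank-structured" idea concrete, here is a small sketch using a generator-based semiseparable construction, one standard special case of the quasiseparable class; the variable names and the rank r are our choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 2  # matrix size and off-diagonal rank (assumed small)

# Build a dense matrix whose off-diagonal blocks have rank <= r:
# strictly lower triangle from u v^T, strictly upper triangle from p q^T,
# plus an arbitrary diagonal.
u, v = rng.standard_normal((n, r)), rng.standard_normal((n, r))
p, q = rng.standard_normal((n, r)), rng.standard_normal((n, r))
A = np.tril(u @ v.T, -1) + np.triu(p @ q.T, 1) + np.diag(rng.standard_normal(n))

# Check the defining property: an off-diagonal block has rank <= r,
# so O(n) parameters (u, v, p, q, diagonal) encode all n^2 entries.
k = n // 2
block = A[k:, :k]  # a lower off-diagonal block
print(np.linalg.matrix_rank(block))  # at most r (here: 2)
```

Algorithms for this class operate on the generators u, v, p, q rather than on the n² entries, which is where the O(n) storage, and often near-linear arithmetic, come from.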
15B99 ; 65F15 ; 65F60
Computing invariant subspaces is at the core of many applications, from machine learning and signal processing to control theory, to name just a few. Often one wishes to compute the subspace associated with eigenvalues located at one end of the spectrum, i.e., either the largest or the smallest eigenvalues. In addition, it is quite common that the data at hand undergoes frequent changes, and one is required to keep updating or tracking the target invariant subspace. The talk will present standard tools for computing invariant subspaces, with a focus on methods that do not require solving linear systems. One of the best-known techniques for computing invariant subspaces is the subspace iteration algorithm [2]. While this algorithm tends to be slower than a Krylov subspace approach such as the Lanczos algorithm, it has many attributes that make it the method of choice in many applications. One of these attributes is its tolerance of changes in the matrix. An alternative framework that will be emphasized is that of Grassmann manifolds [1]. We will derive gradient-type methods and show the many connections that exist between different viewpoints adopted by practitioners, e.g., the TraceMin algorithm [3]. The talk will end with a few illustrative examples.
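A minimal sketch of subspace iteration in the spirit described above: repeated block multiplication plus QR re-orthonormalization, with no linear solves. The stopping rule and the symmetric test matrix are our assumptions, not the talk's:

```python
import numpy as np

def subspace_iteration(A, k, tol=1e-8, max_iter=500, seed=0):
    """Approximate the invariant subspace of the k dominant (largest-magnitude)
    eigenvalues of a symmetric matrix A by repeated multiplication and QR
    re-orthonormalization; no linear systems are solved."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(max_iter):
        Z = A @ Q                    # one block matrix product per step
        Q_new, _ = np.linalg.qr(Z)   # re-orthonormalize the block
        # Stop when the new basis is (nearly) contained in the old subspace;
        # the projection makes the test insensitive to sign/basis changes.
        if np.linalg.norm(Q_new - Q @ (Q.T @ Q_new)) < tol:
            Q = Q_new
            break
        Q = Q_new
    # Rayleigh quotient on the subspace gives approximate eigenvalues.
    return Q, np.linalg.eigvalsh(Q.T @ A @ Q)

# Example: track the dominant 3-dimensional subspace of a random symmetric matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((100, 100))
A = (M + M.T) / 2
Q, evals = subspace_iteration(A, 3)
print(evals)
```

Because each step needs only products with the current matrix, the iterate Q can be reused as a warm start when the matrix changes slightly, which illustrates the tolerance of changes mentioned above.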
65F15 ; 15A23 ; 15A18