
Documents 65F10: 14 results


Algebraic multigrid and subdivision - Charina, Maria (Conference speaker) | CIRM H

Multi angle

Multigrid is an iterative method for solving large linear systems of equations whose Toeplitz system matrix is positive definite. One of the crucial steps of any multigrid method is based on multivariate subdivision. We derive sufficient conditions for convergence and optimality of multigrid in terms of trigonometric polynomials associated with the corresponding subdivision schemes.
(This is joint work with Marco Donatelli, Lucia Romani and Valentina Turati.)
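To make the multigrid/subdivision link concrete, here is a minimal two-grid sketch in Python (not from the talk; all names and parameters are illustrative) for the 1D Toeplitz Laplacian. The prolongation stencil [1/2, 1, 1/2] used below is the mask of the linear B-spline subdivision scheme, the simplest instance of the connection the abstract refers to.

```python
import numpy as np

def two_grid(A, b, x, P, n_smooth=2, omega=2/3):
    """One two-grid cycle: damped Jacobi smoothing plus coarse-grid correction."""
    d = np.diag(A)
    for _ in range(n_smooth):                      # pre-smoothing
        x = x + omega * (b - A @ x) / d
    Ac = P.T @ A @ P                               # Galerkin coarse-grid operator
    x = x + P @ np.linalg.solve(Ac, P.T @ (b - A @ x))   # coarse-grid correction
    for _ in range(n_smooth):                      # post-smoothing
        x = x + omega * (b - A @ x) / d
    return x

n = 31                                             # fine grid; coarse grid has (n-1)/2 points
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) # SPD Toeplitz matrix (1D Laplacian)
P = np.zeros((n, (n - 1) // 2))                    # prolongation from the mask [1/2, 1, 1/2]
for j in range(P.shape[1]):
    i = 2 * j + 1                                  # coarse point j sits at fine node i
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5

x, b = np.zeros(n), np.ones(n)
for it in range(10):
    x = two_grid(A, b, x, P)
    print(it, np.linalg.norm(b - A @ x))           # residual contracts at a grid-independent rate
```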

65N55 ; 65N30 ; 65F10 ; 65F35

The performance of numerical algorithms, both regarding stability and complexity, can be understood in a unified way in terms of condition numbers. This requires identifying the appropriate geometric settings and characterizing condition in geometric ways.
A probabilistic analysis of numerical algorithms can be reduced to a corresponding analysis of condition numbers, which leads to fascinating problems of geometric probability and integral geometry. The best-known example is Smale's 17th problem, which asks to find a solution of a given system of $n$ complex homogeneous polynomial equations in $n+1$ unknowns. This problem can be solved in average (and even smoothed) polynomial time.
In the course we will explain the concepts necessary to state and solve Smale's 17th problem. We also show how these ideas lead to new numerical algorithms for computing eigenpairs of matrices that provably run in average polynomial time. Making these algorithms more efficient or adapting them to structured settings are challenging and rewarding research problems. We intend to address some of these issues at the end of the course.
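As a minimal illustration of the first paragraph (not part of the course material), the following snippet shows the simplest way in which condition numbers govern stability: the attainable accuracy of a linear solve degrades with the condition number, here demonstrated on the classically ill-conditioned Hilbert matrices.

```python
import numpy as np
from scipy.linalg import hilbert

for n in (4, 8, 12):
    A = hilbert(n)                        # notoriously ill-conditioned
    x_true = np.ones(n)
    x = np.linalg.solve(A, A @ x_true)    # solve with an exactly known solution
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"n={n:2d}  cond={np.linalg.cond(A):9.1e}  rel. error={err:9.1e}")
```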

65F35 ; 65K05 ; 68Q15 ; 15A12 ; 65F10 ; 90C51 ; 65H10

This talk focuses on challenges that we address when designing linear solvers that aim at achieving scalability on large-scale computers, while also preserving numerical robustness. We will consider preconditioned Krylov subspace solvers. Getting scalability relies on reducing global synchronizations between processors, while also increasing the arithmetic intensity on one processor. Achieving robustness relies on ensuring that the condition number of the preconditioned matrix is bounded. We will discuss two different approaches for this. The first approach relies on enlarged Krylov subspace methods that aim at computing an enlarged subspace and obtaining faster convergence of the iterative method. The second approach relies on a multilevel Schwarz preconditioner, a multilevel extension of the GenEO preconditioner, that is based on robustly constructing a hierarchy of coarse spaces. Numerical results on large-scale computers, in particular for linear systems arising from solving linear elasticity problems, will demonstrate the efficiency of the proposed methods.
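For orientation, here is a hedged sketch of standard preconditioned CG with the per-iteration global reductions marked; these dot products are the synchronization points that enlarged and communication-avoiding Krylov variants aim to reduce. The GenEO-style multilevel preconditioner itself is not reproduced; the Jacobi `M_inv` below is only a stand-in for any SPD preconditioner.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=500):
    """Standard preconditioned CG; each marked line is a global reduction."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z                                   # reduction
    nb = np.linalg.norm(b)                       # reduction (once)
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)                    # reduction
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * nb:        # reduction
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z                           # reduction
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 200
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x, iters = pcg(A, np.ones(n), lambda r: r / np.diag(A))   # Jacobi preconditioner as a stand-in
print("iterations:", iters)
```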

65F08 ; 65F10 ; 65N55

Linear matrix equations such as the Lyapunov and Sylvester equations and their generalizations have classically played an important role in the analysis of dynamical systems, in control theory and in eigenvalue computation. More recently, matrix equations have emerged as a natural linear algebra framework for the discretized version of (systems of) partial differential equations (PDEs), possibly evolving in time. In this new framework, new challenges have arisen. In this talk we review some of the key methodologies for solving large-scale linear and quadratic matrix equations. We will also discuss recent matrix-based strategies for the numerical solution of time-dependent problems arising in control and in the analysis of spatial pattern formation in certain electrodeposition models.
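As a small concrete example (SciPy's dense Bartels-Stewart solver, not the large-scale projection methods reviewed in the talk), one can solve a Sylvester equation $AX + XB = C$ directly and check the residual; the shifts below are only there to keep the spectra of $A$ and $-B$ disjoint, which guarantees a unique solution.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, m = 50, 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so A and -B share no eigenvalue
B = rng.standard_normal((m, m)) + m * np.eye(m)
C = rng.standard_normal((n, m))

X = solve_sylvester(A, B, C)                      # solves A @ X + X @ B = C
print(np.linalg.norm(A @ X + X @ B - C))          # residual near machine precision
```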

65F10 ; 65M22 ; 15A24

Both multigrid and domain decomposition methods are so-called optimal solvers for Laplace-type problems, but how do they compare? I will start by showing in what sense these methods are optimal for the Laplace equation, which will reveal that while both multigrid and domain decomposition are iterative solvers, there are fundamental differences between them. Multigrid for Laplace's equation is a standalone solver, while classical domain decomposition methods like the additive Schwarz method or Neumann-Neumann and FETI methods need Krylov acceleration to work. I will explain in detail for each case why this is so, and then also present modifications so that Krylov acceleration is no longer necessary. For overlapping methods, this leads to the use of partitions of unity, while for non-overlapping methods, the coarse space can be a remedy. Good coarse spaces in domain decomposition methods are very different from coarse spaces in multigrid, due to the very aggressive coarsening in domain decomposition. I will introduce the concept of optimal coarse spaces for domain decomposition, in a sense very different from the optimality above, and then present approximations of this coarse space. Together with optimized transmission conditions, this leads to a two-level domain decomposition method of Schwarz type which is competitive with multigrid for Laplace's equation in wall-clock time.
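As a minimal illustration (not from the talk; sizes and subdomain boundaries are illustrative), the following sketch runs the classical alternating Schwarz method for $-u'' = f$ on $(0,1)$ with two overlapping subdomains. For this 1D Laplace problem the one-level method converges even without Krylov acceleration or a coarse space, at a rate set by the overlap.

```python
import numpy as np

n = 99                                    # interior points of (0,1), h = 1/(n+1)
h = 1.0 / (n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

u = np.zeros(n + 2)                       # u[0] = u[n+1] = 0 are the boundary values
end1, start2 = 60, 40                     # subdomain 1: nodes 1..end1; subdomain 2: nodes start2..n
u_direct = np.linalg.solve(A, f)          # reference solution on the whole domain

for it in range(15):
    # subdomain 1 solve, Dirichlet data u[end1+1] taken from the other subdomain
    rhs1 = f[:end1].copy()
    rhs1[-1] += u[end1 + 1] / h**2
    u[1:end1 + 1] = np.linalg.solve(A[:end1, :end1], rhs1)
    # subdomain 2 solve, Dirichlet data u[start2-1] from the freshly updated subdomain 1
    rhs2 = f[start2 - 1:].copy()
    rhs2[0] += u[start2 - 1] / h**2
    u[start2:n + 1] = np.linalg.solve(A[start2 - 1:, start2 - 1:], rhs2)
    print(it, np.abs(u[1:n + 1] - u_direct).max())   # error contracts at an overlap-dependent rate
```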

65N55 ; 65N22 ; 65F10

Iterative methods for linear systems were invented for the same reasons as they are used today, namely to reduce computational cost. Gauss stated in a letter to his friend Gerling in 1823: 'you will in the future hardly eliminate directly, at least not when you have more than two unknowns'.
Richardson's paper from 1910 was then very influential, and is a model of a modern numerical analysis paper: modeling, discretization, approximate solution of the discrete problem, and a real application. Richardson's method is much more sophisticated than how it is usually presented today, and his dream became reality in the PhD thesis of Gene Golub.
The work of Stiefel, Hestenes and Lanczos in the early 1950s sparked the success story of Krylov methods, and these methods can also be understood in the context of extrapolation, pioneered by Brezinski and Sidi, based on seminal work by Wynn.
This brings us to the modern iterative methods for solving partial differential equations, which come in two main classes: domain decomposition methods and multigrid methods. Domain decomposition methods go back to the alternating Schwarz method invented by Hermann Amandus Schwarz in 1869 to close a gap in the proof of Riemann's famous mapping theorem. Multigrid goes back to the seminal work by Fedorenko in 1961, with main contributions by Brandt and Hackbusch in the seventies.
I will show in my presentation how these methods function on the same model problem of the temperature distribution in a simple room. All these methods are today used as preconditioners for Krylov methods, which leads to the most powerful iterative solvers currently known for linear systems.

65N22 ; 65F10 ; 65B05 ; 65-02 ; 65-03

There are more and more computing elements in modern supercomputers, which increases the probability of computer errors. Errors that do not stop the computation are called soft errors or silent errors. Of course, they can have a negative impact on the output of the code, so it is of interest to be able to detect these silent errors and to correct them.
In this talk we are concerned with the detection and correction of silent errors in the conjugate gradient (CG) algorithm to solve linear systems $Ax = b$ with a symmetric positive definite matrix $A$. Silent errors in CG may affect or even prevent the convergence of the algorithm. We propose a new way to detect silent errors using a scalar relation that must be satisfied by the CG variables,
$$\alpha_{k-1}^{2}\,\frac{(Ap_{k-1},\,Ap_{k-1})}{(r_{k-1},\,r_{k-1})} = 1+\beta_{k}, \qquad (1)$$
where the $r_j$ are the residual vectors, the $p_j$ the descent directions, and
$$\alpha_{k-1}=\frac{(r_{k-1},\,r_{k-1})}{(p_{k-1},\,Ap_{k-1})}, \qquad \beta_{k}=\frac{(r_{k},\,r_{k})}{(r_{k-1},\,r_{k-1})}$$
are the coefficients computed in CG.
We study how relation (1) is modified in finite precision arithmetic and define a criterion to detect when this relation is not satisfied.
Checking relation (1) involves computing an additional dot product, but, as was shown some time ago in [1] and more recently in [2], relation (1) can be used to introduce more parallelism in the algorithm.
Assuming that the input data $(A, b)$ is not corrupted, we model silent errors by bit flips in the output of some CG steps. When an error is detected at some iteration $k$, we can restore the CG data from iteration $k-2$ to be able to continue the computation safely.
Numerical experiments will show the efficiency of this approach.
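A hedged sketch of the detection idea (the rollback to iteration $k-2$ is not reproduced): run CG, evaluate relation (1) at every step using the one extra dot product $(Ap_{k-1}, Ap_{k-1})$, and flag the iteration where the relation breaks after an injected bit flip. The test matrix, tolerance and flip position are all illustrative.

```python
import numpy as np

def cg_with_check(A, b, maxit=200, tol=1e-10, flip_at=50):
    """CG with the scalar check (1); a bit flip is injected at step flip_at."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                                   # (r_{k-1}, r_{k-1})
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)                    # alpha_{k-1}
        x += alpha * p
        r -= alpha * Ap
        if k == flip_at:                         # inject a silent error: flip the top
            bits = r.view(np.uint64)             # mantissa bit of one component of r
            bits[0] ^= np.uint64(1 << 51)
        rr_new = r @ r
        beta = rr_new / rr                       # beta_k
        lhs = alpha**2 * (Ap @ Ap) / rr          # left-hand side of relation (1)
        if abs(lhs - (1.0 + beta)) > 1e-8 * (1.0 + beta):
            print(f"iteration {k}: relation (1) violated, silent error detected")
            break                                # rollback to step k-2 not reproduced here
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            print(f"converged in {k} iterations")
            break
        p = r + beta * p
        rr = rr_new
    return x

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(np.linspace(1.0, 100.0, 100)) @ Q.T   # SPD test matrix
cg_with_check(A, np.ones(100))
```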

65F10 ; 65F30 ; 65F50

The parallel-in-time integration of wave-type equations is well known to be a difficult task. When applying classical waveform-relaxation (WR) and parareal-type methods, one generally experiences rapid error growth before reaching convergence in a finite number of iterations. This negative behavior prevents, in general, the successful application of these domain decomposition methods. In this talk, the focus is on WR-type methods. Classical WR convergence analyses use classical Laplace/Fourier techniques. However, these approaches provide analyses for unbounded time intervals, and do not allow one to describe precisely the WR convergence behavior on finite time intervals. In this talk, we present a novel analysis based on the method of characteristics, which allows us, on the one hand, to obtain a detailed characterization of the error growth along the iterations and, on the other hand, to introduce a new parallel-in-time computational strategy. Numerical experiments support our new theoretical and numerical findings. This is joint work with Martin J. Gander and Ilario Mazzieri.

65M55 ; 35L05 ; 65F10


Preconditioning for parallel-in-time - Wathen, Andy (Conference speaker) | CIRM H

Multi angle

This talk consists of two parts, one elementary and one related to the solution of complicated systems of evolutionary partial differential equations. In the first part we show how preconditioning for all-at-once descriptions of linear time-dependent differential equations can defeat the ever-present danger of high-index nilpotency associated with the principle of causality. In particular we will describe some theory for periodic preconditioning of initial value problems that establishes it as a viable parallel-in-time (PinT) approach. The second part builds on much excellent work on PinT methods for scalar parabolic PDEs such as the diffusion equation to propose PinT methods for more complicated evolutionary PDE systems. We will explain the idea with reference to the time-dependent incompressible Stokes and Navier-Stokes equations and indicate its broader applicability. This part of the talk is joint work with Federico Danieli and Ben Southworth.
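As a small dense illustration of the first part (a reconstruction under standard assumptions, not code from the talk): for backward Euler applied to $u' = -Lu$, the all-at-once matrix is block lower bidiagonal, and its periodic variant, which differs in a single block, is a natural preconditioner because the preconditioned spectrum clusters at 1.

```python
import numpy as np

m, N, tau = 20, 16, 0.05
L = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) * (m + 1)**2   # 1D Laplacian
B = np.eye(m) + tau * L                       # backward Euler step: B u_n = u_{n-1} + tau f_n

S = np.eye(N, k=-1)                           # shift coupling step n to step n-1
C = S.copy()
C[0, -1] = 1.0                                # circulant shift: adds periodic coupling in time

T = np.kron(np.eye(N), B) - np.kron(S, np.eye(m))   # all-at-once matrix (block lower bidiagonal)
P = np.kron(np.eye(N), B) - np.kron(C, np.eye(m))   # periodic (block circulant) preconditioner

# P differs from T in a single block, so inv(P) @ T is a low-rank perturbation
# of the identity and its spectrum clusters at 1.
ev = np.linalg.eigvals(np.linalg.solve(P, T))
print("eigenvalues within 1e-6 of 1:", int(np.sum(np.abs(ev - 1) < 1e-6)), "of", m * N)
```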

65F10 ; 65L99 ; 65M99


Krylov subspace solvers and preconditioners - Vuik, Kees (Conference speaker) | CIRM H

Multi angle
