
Documents 65N55: 10 results

I will review (some of) the HPC solution strategies developed in Feel++. We present our advances in developing a language specific to partial differential equations embedded in C++. We have been developing the Feel++ framework (Finite Element method Embedded Language in C++) to the point where it allows the use of a very wide range of Galerkin methods and advanced numerical methods such as domain decomposition methods (including mortar and three-field methods), fictitious domain methods or certified reduced basis methods. We shall present an overview of the various ingredients as well as some illustrations. The ingredients include a very expressive embedded language, seamless interpolation, mesh adaptation and seamless parallelisation. As to the illustrations, they exercise the versatility of the framework either by allowing the development and/or numerical verification of (new) mathematical methods, or by supporting the development of large multi-physics applications: e.g. fluid-structure interaction using either an Arbitrary Lagrangian Eulerian formulation or a level-set based one; high-field magnet modelling, which involves electro-thermal, magnetostatic, mechanical and thermo-hydraulic models; etc. The range of users spans from mechanical engineers in industry, physicists in complex fluids and computer scientists in biomedical applications to applied mathematicians, thanks to the shared common mathematical embedded language hiding linear algebra and computer science complexities.
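To give a flavour of the embedded language, here is a minimal Poisson solve written in the style of the published Feel++ tutorials. This is a hedged sketch rather than a verbatim Feel++ example: the header, the keyword arguments (_range, _expr, _rhs, ...) and helpers such as loadMesh, Pch, form1/form2, integrate, on and cst follow the tutorials as commonly shown, but their exact signatures may differ between Feel++ versions.

```cpp
// Sketch of a Poisson solve in the Feel++ embedded language (tutorial style).
// Exact keyword arguments and helper names may differ across Feel++ versions.
#include <feel/feel.hpp>

int main(int argc, char** argv)
{
    using namespace Feel;
    Environment env(_argc = argc, _argv = argv);

    // Load a 2D simplicial mesh (described via command line / config file).
    auto mesh = loadMesh(_mesh = new Mesh<Simplex<2>>);

    // Continuous piecewise-quadratic Lagrange space, trial and test functions.
    auto Vh = Pch<2>(mesh);
    auto u  = Vh->element();
    auto v  = Vh->element();

    // Linear form: right-hand side f = 1.
    auto l = form1(_test = Vh);
    l = integrate(_range = elements(mesh), _expr = id(v));

    // Bilinear form: grad(u) . grad(v), plus homogeneous Dirichlet conditions.
    auto a = form2(_trial = Vh, _test = Vh);
    a = integrate(_range = elements(mesh), _expr = gradt(u) * trans(grad(v)));
    a += on(_range = boundaryfaces(mesh), _rhs = l, _element = u, _expr = cst(0.0));

    // Assemble and solve; the algebraic backend and the parallelism are hidden.
    a.solve(_rhs = l, _solution = u);
}
```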

65N30 ; 65N55 ; 65Y05 ; 65Y15

I will present an efficient implementation of the highly robust and scalable GenEO preconditioner in the high-performance PDE framework DUNE. The GenEO coarse space is constructed by combining low-energy solutions of local generalised eigenproblems using a partition of unity. In this talk, both weak and strong scaling for the GenEO solver on over 15,000 cores will be demonstrated by solving an industrially motivated problem with over 200 million degrees of freedom in aerospace composites modelling. Further, it will be shown that for highly complex parameter distributions in certain real-world applications, established methods can become intractable while GenEO remains fully effective. In the context of multilevel Markov chain Monte Carlo (MLMCMC), the GenEO coarse space also plays an important role as an effective surrogate model in PDE-constrained Bayesian inference. The second part will therefore focus on the approximation properties of the GenEO coarse space and on a high-performance parallel implementation of MLMCMC.
This is joint work with Tim Dodwell (Exeter), Anne Reinarz (TU Munich) and Linus Seelinger (Heidelberg).
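As a concrete, toy-sized illustration of the coarse-space construction described above, the following C++/Eigen sketch builds GenEO-style coarse vectors for a 1D Laplacian: per subdomain, a generalised eigenproblem weighted by the partition of unity is solved and the low-energy modes, extended by zero, become coarse basis vectors. The eigenproblem used here is one common form from the literature, and the local Dirichlet matrix stands in for the Neumann (subdomain-assembled) matrix; the DUNE implementation discussed in the talk differs in many details.

```cpp
// Toy GenEO-style coarse space for a 1D Laplacian (Eigen, dense, illustration
// only). One common form of the GenEO eigenproblem is used; the local
// Dirichlet matrix stands in for the Neumann (subdomain-assembled) matrix.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

int main()
{
    using Mat = Eigen::MatrixXd;
    using Vec = Eigen::VectorXd;

    const int n = 60;                        // global unknowns
    Mat A = Mat::Zero(n, n);                 // 1D Laplacian (Dirichlet)
    for (int i = 0; i < n; ++i) {
        A(i, i) = 2.0;
        if (i > 0)     A(i, i - 1) = -1.0;
        if (i + 1 < n) A(i, i + 1) = -1.0;
    }

    // Two overlapping subdomains: indices [0, 35) and [25, 60).
    struct Subdomain { int begin, size; };
    std::vector<Subdomain> subs = { {0, 35}, {25, 35} };

    std::vector<Vec> coarse_basis;           // columns of the coarse space Z
    for (const auto& s : subs) {
        // Local (Dirichlet) matrix A_j = R_j A R_j^T.
        Mat Aj = A.block(s.begin, s.begin, s.size, s.size);

        // Partition of unity D_j: weight 1 on owned nodes, 1/2 in the overlap.
        Vec dj = Vec::Ones(s.size);
        for (int k = 0; k < s.size; ++k) {
            int g = s.begin + k;
            if (g >= 25 && g < 35) dj(k) = 0.5;
        }
        Mat Dj = dj.asDiagonal();

        // GenEO-style eigenproblem  A_j p = lambda (D_j A_j D_j) p; the
        // low-energy modes (smallest lambda) go into the coarse space.
        Mat Bj = Dj * Aj * Dj;
        Eigen::GeneralizedSelfAdjointEigenSolver<Mat> ges(Aj, Bj);
        const int nev = 2;                   // modes kept per subdomain (toy choice)
        for (int k = 0; k < nev; ++k) {
            Vec z = Vec::Zero(n);
            z.segment(s.begin, s.size) = Dj * ges.eigenvectors().col(k);
            coarse_basis.push_back(z);       // coarse vector R_j^T D_j p_k
        }
    }

    // Coarse matrix A0 = Z^T A Z; in a two-level method it is factorised once
    // and reused in the coarse correction Z A0^{-1} Z^T.
    Mat Z(n, (int)coarse_basis.size());
    for (int c = 0; c < Z.cols(); ++c) Z.col(c) = coarse_basis[c];
    Mat A0 = Z.transpose() * A * Z;
    std::cout << "coarse space dimension: " << Z.cols()
              << ", coarse matrix is " << A0.rows() << " x " << A0.cols() << "\n";
}
```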

65F08 ; 65N22 ; 65N30 ; 65N55

Both multigrid and domain decomposition methods are so-called optimal solvers for Laplace-type problems, but how do they compare? I will start by showing in what sense these methods are optimal for the Laplace equation, which will reveal that while both multigrid and domain decomposition are iterative solvers, there are fundamental differences between them. Multigrid for Laplace's equation is a standalone solver, while classical domain decomposition methods like the additive Schwarz method or Neumann-Neumann and FETI methods need Krylov acceleration to work. I will explain in detail for each case why this is so, and then also present modifications so that Krylov acceleration is no longer necessary. For overlapping methods, this leads to the use of partitions of unity, while for non-overlapping methods, the coarse space can be a remedy. Good coarse spaces in domain decomposition methods are very different from coarse spaces in multigrid, due to the very aggressive coarsening in domain decomposition. I will introduce the concept of optimal coarse spaces for domain decomposition, in a sense very different from the optimality above, and then present approximations of this coarse space. Together with optimized transmission conditions, this leads to a two-level domain decomposition method of Schwarz type which is competitive with multigrid for Laplace's equation in wall-clock time.
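The remark that overlapping Schwarz needs either Krylov acceleration or a partition of unity can be seen on a toy problem. The C++/Eigen sketch below applies the additive Schwarz correction to a 1D Laplacian as a stationary iteration, once with plain (unweighted) corrections and once with partition-of-unity weights (the restricted variant); the subdomain layout and overlap are arbitrary toy choices, and the printed residuals are meant to be compared.

```cpp
// Stationary overlapping Schwarz iterations on a 1D Laplacian (Eigen, dense):
// plain additive Schwarz (AS) versus the partition-of-unity-weighted variant.
// Plain AS generally needs Krylov acceleration, while the weighted update
// converges as a standalone iteration; compare the two printed residuals.
#include <Eigen/Dense>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    using Mat = Eigen::MatrixXd;
    using Vec = Eigen::VectorXd;

    const int n = 120;
    Mat A = Mat::Zero(n, n);                 // 1D Laplacian (Dirichlet)
    for (int i = 0; i < n; ++i) {
        A(i, i) = 2.0;
        if (i > 0)     A(i, i - 1) = -1.0;
        if (i + 1 < n) A(i, i + 1) = -1.0;
    }
    Vec b = Vec::Ones(n);

    // Four overlapping subdomains of equal width w, each extended by ov nodes.
    const int nsub = 4, ov = 6, w = n / nsub;
    struct Sub { int begin, size; };
    std::vector<Sub> subs;
    for (int j = 0; j < nsub; ++j) {
        int lo = std::max(0, j * w - ov);
        int hi = std::min(n, (j + 1) * w + ov);
        subs.push_back({lo, hi - lo});
    }

    // Boolean partition of unity: node g is owned by the subdomain whose core
    // [j*w, (j+1)*w) contains it.
    auto weight = [&](int j, int g) {
        return (g >= j * w && g < (j + 1) * w) ? 1.0 : 0.0;
    };

    auto run = [&](bool weighted) {
        Vec x = Vec::Zero(n);
        for (int it = 0; it < 30; ++it) {
            Vec r = b - A * x;
            Vec corr = Vec::Zero(n);
            for (int j = 0; j < nsub; ++j) {
                const auto& s = subs[j];
                Mat Aj = A.block(s.begin, s.begin, s.size, s.size);
                Vec cj = Aj.ldlt().solve(r.segment(s.begin, s.size));
                for (int k = 0; k < s.size; ++k)
                    corr(s.begin + k) += (weighted ? weight(j, s.begin + k) : 1.0) * cj(k);
            }
            x += corr;
        }
        return (b - A * x).norm();
    };

    std::cout << "residual after 30 iterations, plain AS : " << run(false) << "\n";
    std::cout << "residual after 30 iterations, weighted : " << run(true)  << "\n";
}
```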

65N55 ; 65N22 ; 65F10


Time parallel time integration - Gander, Martin (Conference speaker) | CIRM H

Multi angle

In domain decomposition methods, most of the computational cost lies in the successive solutions of the local problems in the subdomains via forward-backward substitutions and in the orthogonalization of interface search directions. All these operations are performed, in the best case, via BLAS-1 or BLAS-2 routines, which are inefficient on multicore systems with hierarchical memory. A way to improve the parallel efficiency of the method consists in working with several search directions, since multiple forward-backward substitutions and reorthogonalizations involve BLAS-3 routines. In the case of a problem with several right-hand sides, using a block Krylov method is a straightforward way to work with multiple search directions. This will be illustrated with an application in electromagnetism using the FETI-2LM method. For problems with a single right-hand side, deriving several meaningful search directions from the optimal one constructed by the Krylov method is not so easy. The recently developed S-FETI method gives a very good approach that not only improves parallel efficiency but can also reduce the global computational cost in the case of very heterogeneous problems.
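For the multiple right-hand-side case, the block Krylov idea can be sketched with a textbook block conjugate gradient in C++/Eigen: all right-hand sides share the search-direction block, and each iteration works with small dense Gram matrices and matrix-matrix products (BLAS-3-like kernels). No deflation or breakdown handling is included, and this is only the generic block CG mechanism, not the FETI-2LM or S-FETI algorithms themselves.

```cpp
// Minimal block conjugate gradient (Eigen, dense) for an SPD system with
// several right-hand sides: per iteration, one matrix-matrix product with A
// and a few small nrhs x nrhs Gram-matrix solves. Textbook variant, no
// deflation or breakdown handling.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    using Mat = Eigen::MatrixXd;

    const int n = 200, nrhs = 4;
    Mat A = Mat::Zero(n, n);                 // 1D Laplacian (Dirichlet)
    for (int i = 0; i < n; ++i) {
        A(i, i) = 2.0;
        if (i > 0)     A(i, i - 1) = -1.0;
        if (i + 1 < n) A(i, i + 1) = -1.0;
    }
    Mat B = Mat::Random(n, nrhs);            // block of right-hand sides

    Mat X = Mat::Zero(n, nrhs);
    Mat R = B - A * X;                        // block residual
    Mat P = R;                                // block of search directions

    for (int it = 0; it < 200; ++it) {
        Mat AP    = A * P;
        Mat gram  = P.transpose() * AP;                        // nrhs x nrhs
        Mat alpha = gram.ldlt().solve(R.transpose() * R);
        X += P * alpha;
        Mat Rnew = R - AP * alpha;
        if (Rnew.norm() < 1e-10 * B.norm()) { R = Rnew; break; }
        Mat beta = (R.transpose() * R).ldlt().solve(Rnew.transpose() * Rnew);
        P = Rnew + P * beta;
        R = Rnew;
    }
    std::cout << "final block residual norm: " << (B - A * X).norm() << "\n";
}
```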

65N22 ; 65N30 ; 65N55 ; 65Y05 ; 65F10


Algebraic multigrid and subdivision - Charina, Maria (Conference speaker) | CIRM H

Multi angle

Multigrid is an iterative method for solving large linear systems of equations whose Toeplitz system matrix is positive definite. One of the crucial steps of any multigrid method is based on multivariate subdivision. We derive sufficient conditions for convergence and optimality of multigrid in terms of trigonometric polynomials associated with the corresponding subdivision schemes.
(This is joint work with Marco Donatelli, Lucia Romani and Valentina Turati.)
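The link between grid transfer and subdivision can be hinted at with a toy two-grid cycle for a 1D Toeplitz system (the discrete Laplacian), written in C++/Eigen. The prolongation uses the linear-interpolation stencil {1/2, 1, 1/2}, i.e. the mask of the simplest binary subdivision scheme; the talk's actual analysis is carried out in terms of the associated trigonometric polynomials (symbols), which this sketch does not touch.

```cpp
// Toy two-grid cycle for a 1D Toeplitz system (discrete Laplacian) in Eigen.
// The prolongation stencil {1/2, 1, 1/2} is the mask of the linear
// (binary) subdivision scheme; restriction is its scaled transpose.
#include <Eigen/Dense>
#include <iostream>

using Mat = Eigen::MatrixXd;
using Vec = Eigen::VectorXd;

Mat laplacian(int n)
{
    Mat A = Mat::Zero(n, n);
    for (int i = 0; i < n; ++i) {
        A(i, i) = 2.0;
        if (i > 0)     A(i, i - 1) = -1.0;
        if (i + 1 < n) A(i, i + 1) = -1.0;
    }
    return A;
}

int main()
{
    const int nc = 31, n = 2 * nc + 1;       // coarse and fine grid sizes
    Mat A = laplacian(n);
    Vec b = Vec::Random(n);

    // Prolongation P: coarse node j maps to fine node 2j+1 with weight 1 and
    // to its fine neighbours with weight 1/2 (linear interpolation).
    Mat P = Mat::Zero(n, nc);
    for (int j = 0; j < nc; ++j) {
        P(2 * j + 1, j) = 1.0;
        P(2 * j, j)     = 0.5;
        P(2 * j + 2, j) = 0.5;
    }
    Mat R  = 0.5 * P.transpose();            // restriction
    Mat Ac = R * A * P;                       // Galerkin coarse operator

    Vec x = Vec::Zero(n);
    for (int cycle = 0; cycle < 10; ++cycle) {
        // Pre-smoothing: two damped Jacobi sweeps (diag(A) = 2 I).
        for (int s = 0; s < 2; ++s)
            x += (2.0 / 3.0) * (b - A * x) / 2.0;
        // Coarse-grid correction.
        Vec rc = R * (b - A * x);
        x += P * Ac.ldlt().solve(rc);
        // Post-smoothing.
        for (int s = 0; s < 2; ++s)
            x += (2.0 / 3.0) * (b - A * x) / 2.0;
        std::cout << "cycle " << cycle << "  residual " << (b - A * x).norm() << "\n";
    }
}
```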

65N55 ; 65N30 ; 65F10 ; 65F35

This talk focuses on challenges that we address when designing linear solvers that aim at achieving scalability on large-scale computers while also preserving numerical robustness. We will consider preconditioned Krylov subspace solvers. Getting scalability relies on reducing global synchronizations between processors, while also increasing the arithmetic intensity on each processor. Achieving robustness relies on ensuring that the condition number of the preconditioned matrix is bounded. We will discuss two different approaches for this. The first approach relies on enlarged Krylov subspace methods, which aim at computing an enlarged subspace and thus obtaining faster convergence of the iterative method. The second approach relies on a multilevel Schwarz preconditioner, a multilevel extension of the GenEO preconditioner, that is based on robustly constructing a hierarchy of coarse spaces. Numerical results on large-scale computers, in particular for linear systems arising from linear elasticity problems, will demonstrate the efficiency of the proposed methods.
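The enlargement step behind enlarged Krylov methods can be sketched very simply: the initial residual is split into subdomain-local pieces, which then play the role of multiple search directions processed together (fewer global reductions, matrix-matrix kernels). The C++/Eigen fragment below only builds this splitting and checks that the columns sum back to the original residual; the enlarged CG iteration itself then handles these columns much as in the block CG sketch above.

```cpp
// "Enlargement" operator used by enlarged Krylov subspace methods: split a
// residual vector into subdomain-local columns. The columns sum back to the
// original residual, so the enlarged Krylov subspace generated from them
// contains the classical one generated from r alone.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    using Mat = Eigen::MatrixXd;
    using Vec = Eigen::VectorXd;

    const int n = 24, nsub = 4, w = n / nsub;
    Vec r = Vec::Random(n);                  // initial residual r0 = b - A x0

    // T(r): column j keeps the entries owned by subdomain j, zero elsewhere.
    Mat T = Mat::Zero(n, nsub);
    for (int j = 0; j < nsub; ++j)
        T.col(j).segment(j * w, w) = r.segment(j * w, w);

    std::cout << "splitting error: " << (T.rowwise().sum() - r).norm() << "\n";
}
```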

65F08 ; 65F10 ; 65N55

This presentation will be kept at a basic level: both continuous and algebraic versions of the methods will be given in their most common variants, and the main ingredients of domain decomposition methods will be presented. The content will follow chapters 1 and 3 of the domain decomposition book. A short introduction to the FreeFem software will be given, which will allow the students to quickly use the codes illustrating the methods.
Outcomes: At the end of this first lecture, students will have a basic understanding of the methods as well as of their implementation.

65N55

Domain decomposition methods are meant to be used as parallel solvers, and scalability (behaviour independent of the number of subdomains/processors) and robustness with respect to the physical parameters are very important issues. An introduction to coarse spaces and two-level methods for symmetric positive definite (SPD) problems will be given, together with the presentation of a few variants of domain decomposition preconditioners (AS, RAS, ORAS, SORAS). The content will follow chapters 4 and 5 of the book, although more recent research results will also be included.
Outcomes: Students will be able to understand the use and the impact of the two-level methods, both for scalability and robustness (even if at this stage the codes are sequential).
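To see why a coarse space matters for scalability, the following C++/Eigen sketch compares the spectral condition numbers of a one-level additive Schwarz preconditioned 1D Laplacian and of its two-level variant, where the coarse space consists of partition-of-unity-weighted constants (a Nicolaides-type space, one vector per subdomain). This is only the simplest such construction; the AS, RAS, ORAS and SORAS variants discussed in the lecture differ in their weights and transmission conditions.

```cpp
// One-level versus two-level additive Schwarz on a 1D Laplacian (Eigen, dense).
// The two-level preconditioner adds a coarse correction built from a simple
// Nicolaides-type coarse space (partition-of-unity-weighted constants).
#include <Eigen/Dense>
#include <algorithm>
#include <iostream>

using Mat = Eigen::MatrixXd;

// Spectral condition number of Minv * A, computed from the similar symmetric
// matrix L^T A L, where Minv = L L^T.
double condition(const Mat& Minv, const Mat& A)
{
    Mat L = Eigen::LLT<Mat>(Minv).matrixL();
    Eigen::SelfAdjointEigenSolver<Mat> es(L.transpose() * A * L);
    return es.eigenvalues().maxCoeff() / es.eigenvalues().minCoeff();
}

int main()
{
    const int n = 240, nsub = 8, ov = 4, w = n / nsub;
    Mat A = Mat::Zero(n, n);                 // 1D Laplacian (Dirichlet)
    for (int i = 0; i < n; ++i) {
        A(i, i) = 2.0;
        if (i > 0)     A(i, i - 1) = -1.0;
        if (i + 1 < n) A(i, i + 1) = -1.0;
    }

    Mat Minv1 = Mat::Zero(n, n);             // one-level AS: sum_j Rj^T Aj^{-1} Rj
    Mat Z     = Mat::Zero(n, nsub);          // coarse space, one column per subdomain
    for (int j = 0; j < nsub; ++j) {
        int lo = std::max(0, j * w - ov), hi = std::min(n, (j + 1) * w + ov);
        Mat Aj = A.block(lo, lo, hi - lo, hi - lo);
        Minv1.block(lo, lo, hi - lo, hi - lo) += Aj.inverse();
        for (int g = j * w; g < (j + 1) * w; ++g)
            Z(g, j) = 1.0;                   // partition-of-unity-weighted constant
    }

    // Two-level additive preconditioner: Minv2 = Z (Z^T A Z)^{-1} Z^T + Minv1.
    Mat E      = Z.transpose() * A * Z;      // small nsub x nsub coarse matrix
    Mat coarse = Z * E.ldlt().solve(Z.transpose());
    Mat Minv2  = Minv1 + coarse;

    std::cout << "condition number, one-level AS: " << condition(Minv1, A) << "\n";
    std::cout << "condition number, two-level AS: " << condition(Minv2, A) << "\n";
}
```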

65N55
