
Numerical Analysis and Scientific Computing 211 results


Extended Lagrange spaces and optimal control - Mehrmann, Volker (Author of the conference) | CIRM H

Post-edited

Mathematical modeling and numerical mathematics today are largely Lagrangian, and modern automated modeling techniques lead to differential-algebraic systems. The optimal control of such systems cannot, in general, be obtained using the classical Euler-Lagrange approach or the maximum principle, but it is shown how this approach can be extended.
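
The extension to differential-algebraic systems is the subject of the talk; the classical Lagrangian starting point it generalizes can be sketched in a few lines. The sketch below (all parameter values illustrative, not from the talk) discretizes a toy linear-quadratic problem by direct transcription, so that the Euler-Lagrange/KKT stationarity conditions become a single saddle-point linear system.

```python
import numpy as np

# Minimal sketch (not the speaker's extended framework): direct transcription
# of   min  h * sum(x_k^2 + u_k^2)   s.t.  x_{k+1} = x_k + h*(-x_k + u_k),
# x_0 = 1. Stationarity of the Lagrangian yields one KKT saddle-point system.

N, T, x0 = 50, 1.0, 1.0
h = T / N

n = 2 * N                          # unknowns z = (x_1..x_N, u_0..u_{N-1})
Q = 2 * h * np.eye(n)              # Hessian of the quadratic cost
A = np.zeros((N, n))               # linearized dynamics constraints A z = b
b = np.zeros(N)
for k in range(N):
    A[k, k] = 1.0                  # coefficient of x_{k+1}
    if k > 0:
        A[k, k - 1] = -(1.0 - h)   # coefficient of x_k
    A[k, N + k] = -h               # coefficient of u_k
b[0] = (1.0 - h) * x0              # known initial state enters the rhs

# KKT system [[Q, A^T], [A, 0]] [z; lam] = [0; b]
KKT = np.block([[Q, A.T], [A, np.zeros((N, N))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), b]))
x, u = sol[:N], sol[N:n]
print("terminal state x_N:", x[-1])
```

For a genuine DAE the constraint block additionally contains algebraic equations without time derivatives, which is precisely where the classical approach breaks down and the extension discussed in the talk is needed.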

93C05 ; 93C15 ; 49K15 ; 34H05

One of the important "products" of wavelet theory consists in the insight that it is often beneficial to consider sparsity in signal processing applications. In fact, wavelet compression relies on the fact that wavelet expansions of real-world signals and images are usually sparse. Compressive sensing builds on sparsity and tells us that sparse signals (expansions) can be recovered from incomplete linear measurements (samples) efficiently. This finding triggered an enormous research activity in recent years, both in signal processing applications and in their mathematical foundations. The present talk discusses connections between compressive sensing and time-frequency analysis (the sister of wavelet theory). In particular, we give an overview of recent results on compressive sensing with time-frequency structured random matrices.

Keywords: compressive sensing - time-frequency analysis - wavelets - sparsity - random matrices - $\ell_1$-minimization - radar - wireless communications
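
The recovery step the abstract refers to can be illustrated with a standard algorithm. The sketch below (illustrative, not from the talk) recovers a sparse vector from few random measurements with Orthogonal Matching Pursuit, a common companion to $\ell_1$-minimization; a Gaussian matrix stands in for the time-frequency structured random matrices discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Recover an s-sparse x from m < n linear samples y = A x by greedy
# Orthogonal Matching Pursuit. A Gaussian A is a stand-in for the
# time-frequency structured random matrices of the talk.

n, m, s = 256, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized random measurements
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x                                      # incomplete linear samples

def omp(A, y, s):
    """Greedy OMP: repeatedly pick the column most correlated with the residual."""
    residual, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    xhat = np.zeros(A.shape[1])
    xhat[idx] = coef
    return xhat

xhat = omp(A, y, s)
print("relative recovery error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))
```

With these dimensions ($m \gg 2s\log n$), exact recovery holds with overwhelming probability, which is the kind of guarantee the compressive sensing literature makes precise.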

94A20 ; 94A08 ; 42C40 ; 60B20 ; 90C25


Linear solvers for reservoir simulation - Hénon, Pascal (Author of the conference) | CIRM H

Multi angle

In this presentation, we will first present the main goals and principles of reservoir simulation. Then we will focus on linear systems that arise in such simulation. The main HPC challenge is to solve those systems efficiently on massively parallel computers. The specificity of those systems is that their convergence is mostly governed by the elliptic part of the equations, and the linear solver needs to take advantage of it to be efficient. The reference method in reservoir simulation is CPR-AMG, which usually relies on AMG to solve the quasi-elliptic part of the system. We will present some work on improving AMG scalability for the reservoir linear systems (done in collaboration with CERFACS). We will then introduce ongoing work with INRIA to take advantage of their enlarged Krylov method (EGMRES) in the CPR method.

65F10 ; 65N22 ; 65Y05

We review Optimized Schwarz waveform relaxation methods, which are space-time domain decomposition methods. The main ideas are explained on the heat equation, and extensions to advection-diffusion equations are illustrated by numerical results. We present the "Schwarz for TrioCFD" project, which aims at using this kind of method for the Stokes equations.
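
The mechanics of waveform relaxation — each subdomain solves the heat equation over the whole time window, then the subdomains exchange interface traces in time and iterate — can be sketched as follows. This is the *classical* Schwarz waveform relaxation with overlapping subdomains and Dirichlet transmission conditions; the optimized variants of the talk replace these with Robin-type conditions to accelerate convergence, which this sketch omits.

```python
import numpy as np

# Classical Schwarz waveform relaxation for u_t = u_xx on (0,1),
# u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x), with two overlapping subdomains.

M, nt, T = 41, 40, 0.1
dx, dt = 1.0 / (M - 1), T / nt
x = np.linspace(0.0, 1.0, M)
u0 = np.sin(np.pi * x)                   # initial condition

ia, ib = 24, 16                          # interfaces: overlap is x[ib..ia]

def heat_solve(u_init, left_trace, right_trace):
    """Implicit Euler on a subgrid; Dirichlet data given at both ends in time."""
    n = len(u_init)
    r = dt / dx**2
    A = (np.diag((1 + 2 * r) * np.ones(n - 2))
         + np.diag(-r * np.ones(n - 3), 1) + np.diag(-r * np.ones(n - 3), -1))
    u, traj = u_init.copy(), [u_init.copy()]
    for k in range(nt):
        rhs = u[1:-1].copy()
        rhs[0] += r * left_trace[k + 1]
        rhs[-1] += r * right_trace[k + 1]
        u = np.concatenate([[left_trace[k + 1]], np.linalg.solve(A, rhs),
                            [right_trace[k + 1]]])
        traj.append(u)
    return np.array(traj)                # shape (nt+1, n)

zero = np.zeros(nt + 1)
g1 = np.zeros(nt + 1)                    # iterated trace for Omega1 at x[ia]
g2 = np.zeros(nt + 1)                    # iterated trace for Omega2 at x[ib]
for it in range(30):
    U1 = heat_solve(u0[:ia + 1], zero, g1)     # Omega1 = x[0..ia]
    U2 = heat_solve(u0[ib:], g2, zero)         # Omega2 = x[ib..M-1]
    g1_new, g2_new = U2[:, ia - ib], U1[:, ib] # exchange space-time traces
    err = max(np.abs(g1_new - g1).max(), np.abs(g2_new - g2).max())
    g1, g2 = g1_new, g2_new
    if err < 1e-10:
        break
print("Schwarz waveform relaxation iterations:", it + 1)
```

On a bounded time window the iteration converges superlinearly; optimized (Robin) transmission conditions reduce the iteration count further, which is the subject of the talk.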

65M55 ; 65M60 ; 65M12 ; 65Y20

We will first recall, for a general audience, the use of Monte Carlo and Multi-level Monte Carlo methods in the context of Uncertainty Quantification. Then we will discuss the recently developed Adaptive Multilevel Monte Carlo (MLMC) Methods for (i) Itô Stochastic Differential Equations, (ii) Stochastic Reaction Networks modeled by Pure Jump Markov Processes and (iii) Partial Differential Equations with random inputs. In this context, the notion of adaptivity includes several aspects such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time stepping methods and the selection of the total number of levels and the number of samples at different levels. Our Adaptive MLMC estimator uses a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity with an improved control of the multiplicative constant factor in the asymptotic analysis. In this context, we developed novel techniques for estimation of parameters needed in our MLMC algorithms, such as the variance of the difference between consecutive approximations. These techniques take particular care of the deepest levels, where for efficiency reasons only a few realizations are available to produce essential estimates. Moreover, we show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate that allows prescribing both the required accuracy and confidence level in the final result. We present several examples to illustrate the above results and the corresponding computational savings.
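
The uniform-discretization MLMC estimator of Giles and Heinrich that the adaptive methods generalize can be sketched compactly. The example below (illustrative parameter values, not from the talk) estimates $E[X_T]$ for geometric Brownian motion with Euler-Maruyama, coupling each fine path to a coarse path driven by the same Brownian increments so that the level corrections have small variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform-timestep MLMC for E[X_T], dX = mu*X dt + sigma*X dW, X_0 = x0.
# The telescoping sum  E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]  is estimated
# level by level with coupled Euler paths.

mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0

def level_estimator(level, n_samples):
    """Mean of P_l - P_{l-1} over n_samples coupled Euler-Maruyama paths."""
    nf = 2**level                            # fine steps on this level
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, nf))
    xf = np.full(n_samples, x0)
    for k in range(nf):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if level == 0:
        return xf.mean()
    xc = np.full(n_samples, x0)              # coarse path: summed increments
    for k in range(nf // 2):
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
        xc = xc + mu * xc * 2 * dt + sigma * xc * dWc
    return (xf - xc).mean()

L = 6
N = [200_000, 100_000, 50_000, 25_000, 12_000, 6_000, 3_000]  # samples/level
estimate = sum(level_estimator(l, N[l]) for l in range(L + 1))
print("MLMC estimate:", estimate, " exact:", x0 * np.exp(mu * T))
```

The adaptive methods of the talk replace the fixed sample counts and uniform time grids above with refinements driven by error estimates.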

65C30 ; 65C05 ; 60H15 ; 60H35 ; 35R60

Stochastic models are used in many scientific fields, including mechanics, physics, chemistry, the life sciences, and the study of queues and social networks. Stochastic modeling is necessary when deterministic models cannot capture features of the dynamics, for instance, to represent effects of unresolved small-scale fluctuations, or when systems are subjected to important inherent noise. Often, stochastic models are not completely known and involve some calibrated parameters that should be considered as uncertain. In this case, it is critical to assess the impact of the uncertain model parameters on the stochastic model predictions. This is usually achieved by performing a sensitivity analysis (SA), which characterizes changes in a model output when the uncertain parameters are varied. In the case of a stochastic model, one classically applies the SA to statistical moments of the prediction, estimating, for instance, the derivatives of the output mean and variance with respect to the uncertain parameters. In this presentation, we introduce new approaches of SA in a stochastic system based on variance decomposition methods (ANOVA, Sobol). Compared to previous methods, our SA methods are global, with respect to both the parameters and stochasticity, and decompose the variance into stochastic, parametric and mixed contributions.
We consider first the case of uncertain Stochastic Differential Equations (SDE), that is, systems with external noisy forcing and uncertain parameters. A polynomial chaos (PC) analysis with stochastic expansion coefficients is proposed to approximate the SDE solution. We first use a Galerkin formalism to determine the expansion coefficients, leading to a hierarchy of SDEs. Under the mild assumption that the noise and uncertain parameters are independent, the Galerkin formalism naturally separates parametric uncertainty and stochastic forcing dependencies, enabling an orthogonal decomposition of the variance and, consequently, the identification of contributions arising from the uncertainty in parameters, the stochastic forcing, and a coupled term. Non-intrusive approaches are subsequently considered for application to more complex systems hardly amenable to Galerkin projection. We also discuss parallel implementations and applications to derived quantities of interest, in particular, a novel sampling strategy for non-smooth quantities of interest but smooth SDE solutions. Numerical examples are provided to illustrate the output of the SA and the computational complexity of the method.
Second, we consider the case of stochastic simulators governed by a set of reaction channels with stochastic dynamics. Reformulating the system dynamics in terms of independent standardized Poisson processes permits the identification of individual realizations of each reaction channel dynamic and a quantitative characterization of the inherent stochasticity sources. By judiciously exploiting the inherent stochasticity of the system, we can then compute the global sensitivities associated with individual reaction channels, as well as the importance of channel interactions. This approach is subsequently extended to account for the effects of uncertain parameters, and we propose dedicated algorithms to perform the Sobol decomposition of the variance into contributions from an arbitrary subset of uncertain parameters and stochastic reaction channels. The algorithms are illustrated in simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. The sensitivity analysis output is also contrasted with a local derivative-based sensitivity analysis method.
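
The core idea — splitting the output variance of a stochastic model into a parametric part and an inherent-noise part — can be illustrated without the PC/Sobol machinery of the talk by the law of total variance, $\mathrm{Var}[Q] = \mathrm{Var}_\theta(E[Q\,|\,\theta]) + E_\theta(\mathrm{Var}[Q\,|\,\theta])$, estimated with nested Monte Carlo. The model and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Q = X_T for the Ornstein-Uhlenbeck SDE dX = -theta*X dt + dW, X_0 = 1,
# with uncertain rate theta ~ U(0.5, 1.5). Outer loop: parameter draws;
# inner loop: realizations of the driving noise for a fixed parameter.

T, nsteps, x0 = 1.0, 200, 1.0
dt = T / nsteps
n_outer, n_inner = 400, 400

cond_mean = np.empty(n_outer)
cond_var = np.empty(n_outer)
for i in range(n_outer):
    theta = rng.uniform(0.5, 1.5)            # one uncertain-parameter draw
    x = np.full(n_inner, x0)
    for _ in range(nsteps):                  # Euler-Maruyama, shared theta
        x = x - theta * x * dt + np.sqrt(dt) * rng.standard_normal(n_inner)
    cond_mean[i] = x.mean()                  # estimates E[Q | theta]
    cond_var[i] = x.var()                    # estimates Var[Q | theta]

var_parametric = cond_mean.var()             # Var_theta(E[Q | theta])
var_stochastic = cond_var.mean()             # E_theta(Var[Q | theta])
print("parametric contribution:", var_parametric)
print("stochastic contribution:", var_stochastic)
```

The talk's methods obtain such decompositions (including mixed terms) far more efficiently, via Galerkin or non-intrusive PC expansions rather than brute-force nesting.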

60H35 ; 65C30 ; 65D15

Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit when $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that the perturbation of a single agent's strategy does not influence the statistical states distribution, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short), a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
The latter system of PDEs has closed form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate the systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
These schemes have several important features:

- The discrete problem has the same structure as the continuous one, so existence, energy estimates, and possibly uniqueness can be obtained with the same kind of arguments

- Monotonicity guarantees the stability of the scheme: it is robust in the deterministic limit

- Convergence to classical or weak solutions can be proved

Finally, there are particular cases named variational MFGs in which the system of PDEs can be seen as the optimality conditions of some optimal control problem driven by a PDE. In such cases, augmented Lagrangian methods can be used for solving the discrete nonlinear system. The mini-course will be organized as follows:

1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.

2. Monotone finite difference schemes

3. Examples of applications

4. Variational MFG and related algorithms for solving the discrete system of nonlinear equations
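
The role of monotonicity can be seen already on a scalar model problem. The sketch below (not from the mini-course; values illustrative) applies a monotone Lax-Friedrichs discretization of the kind used for the Hamilton-Jacobi-Bellman part of the MFG system to $u_t + \frac{1}{2}|u_x|^2 = 0$, $u(x,0) = x^2/2$, whose Hopf-Lax solution is $u(x,t) = x^2/(2(1+t))$.

```python
import numpy as np

# Monotone Lax-Friedrichs scheme for u_t + H(u_x) = 0 with H(p) = p^2/2.
# Monotonicity requires theta >= max|H'(u_x)| and the CFL condition
# dt * theta / dx <= 1; both hold for the values below.

H = lambda p: 0.5 * p**2
theta = 1.0                                  # artificial viscosity parameter
x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]                             # 0.01
dt, nt = 0.005, 200                          # integrate to t = 1
u = 0.5 * x**2                               # initial condition

for n in range(nt):
    t_next = (n + 1) * dt
    p = (u[2:] - u[:-2]) / (2 * dx)          # centered gradient
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / (2 * dx)
    u[1:-1] = u[1:-1] - dt * (H(p) - theta * lap)
    u[0] = 0.5 * x[0]**2 / (1 + t_next)      # exact Dirichlet data at the
    u[-1] = 0.5 * x[-1]**2 / (1 + t_next)    # boundary, for simplicity

u_exact = 0.5 * x**2 / (1 + nt * dt)         # Hopf-Lax solution at t = 1
err = np.abs(u - u_exact).max()
print("max error vs Hopf-Lax solution:", err)
```

In the full MFG system such a monotone HJB scheme is coupled with a compatible weak discretization of the Fokker-Planck equation, which is what yields the discrete structure and stability properties listed above.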

49K20 ; 49N70 ; 35F21 ; 35K40 ; 35K55 ; 35Q84 ; 65K10 ; 65M06 ; 65M12 ; 91A23 ; 91A15


Forward and backward simulation of Euler scheme - Gobet, Emmanuel (Author of the conference) | CIRM H

Multi angle

We analyse how a reverting Random Number Generator can be used efficiently to save memory when solving the dynamic programming equation. For SDEs, this takes the form of a forward and backward Euler scheme. Surprisingly, the error induced by time reversion is of order 1.
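
The memory-saving idea can be sketched with a counter-based generator: each Brownian increment is regenerated on demand from a seed and a step index, so a backward sweep needs no stored path. In the illustrative case below (geometric Brownian motion; not the talk's setting), the forward Euler recursion happens to be exactly invertible, so the backward sweep recovers the initial state up to roundoff; for a general SDE the backward step is only an approximation, which is where a time-reversion error enters.

```python
import numpy as np

# Regenerate increments with numpy's counter-based Philox generator instead
# of storing the path: O(1) memory for both the forward and backward sweeps.
# Forward recursion: X_{k+1} = X_k * (1 + mu*dt + sigma*dW_k).

mu, sigma, x0, T, N = 0.05, 0.2, 1.0, 1.0, 1000
dt = T / N
root = 12345

def increment(k):
    """Recreate the k-th Gaussian increment from (root, k) alone."""
    g = np.random.Generator(np.random.Philox(key=root, counter=k))
    return np.sqrt(dt) * g.standard_normal()

# forward sweep: keep only the current state
x = x0
for k in range(N):
    x = x * (1.0 + mu * dt + sigma * increment(k))
x_T = x

# backward sweep: revisit earlier states without ever having stored them
x = x_T
for k in reversed(range(N)):
    x = x / (1.0 + mu * dt + sigma * increment(k))
print("reconstructed X_0:", x, " (started from", x0, ")")
```

In a dynamic programming or adjoint computation, the backward sweep would evaluate the backward equation at each revisited state instead of merely undoing the forward step.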

60H10 ; 60H15 ; 60H30 ; 65C10


Combining cut element methods and hybridization - Burman, Erik (Author of the conference) | CIRM H

Multi angle

Recently there has been a surge in interest in cut, or unfitted, finite element methods. In this class of methods, the computational mesh is typically independent of the geometry. Interfaces and boundaries are allowed to cut through the mesh in a very general fashion. Constraints on the boundaries such as boundary or transmission conditions are typically imposed weakly using Nitsche's method. In this talk we will discuss how these ideas can be combined in a fruitful way with the idea of hybridization, where additional degrees of freedom are added on the interfaces to further improve the decoupling of the systems, allowing for static condensation of interior unknowns. In the first part of the talk we will discuss how hybridization can be combined with the classical cut finite element method, using standard $H^1$-conforming finite elements in each subdomain, leading to a robust method allowing for the integration of polytopal geometries, where the subdomains are independent of the underlying mesh. This leads to a framework where it is easy to integrate multiscale features such as strongly varying coefficients, or multidimensional coupling, as in flow in fractured domains. Some examples of such applications will be given. In the second part of the talk we will focus on the Hybrid High-Order method (HHO) and show how cut techniques can be introduced in this context. The HHO is a recently introduced nonconforming method that allows for arbitrary order discretization of diffusive problems on polytopal meshes. HHO methods have hybrid unknowns, made of polynomials in the mesh elements and on the faces, without any continuity requirement. They rely on high-order local reconstructions, which are used to build consistent Galerkin contributions and appropriate stabilization terms designed to preserve the high-order approximation properties of the local reconstructions.
Here we will show how cut element techniques can be introduced as a tool for the handling of (possibly curved) interfaces or boundaries that are allowed to cut through the polytopal mesh. In this context the cut element method plays the role of a local interface model, where the associated degrees of freedom are eliminated in the static condensation step. Issues of robustness and accuracy will be discussed and illustrated by some numerical examples.
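
The weak imposition of boundary conditions via Nitsche's method, which the cut methods above rely on, can be shown in its simplest fitted 1D setting. The sketch below (illustrative, not from the talk) uses P1 elements for $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$ imposed weakly through consistency, symmetry and penalty terms; the exact solution $u = \sin(\pi x)$ is chosen just to check the error.

```python
import numpy as np

# Symmetric Nitsche method, 1D:  a(u,v) = int u'v'
#   - [d_n(u) v + d_n(v) u]_boundary + (gamma/h) [u v]_boundary,
# with penalty gamma large enough for coercivity (gamma = 10 suffices here).

n_el = 50
h = 1.0 / n_el
x = np.linspace(0.0, 1.0, n_el + 1)
f = lambda s: np.pi**2 * np.sin(np.pi * s)

A = np.zeros((n_el + 1, n_el + 1))
b = np.zeros(n_el + 1)
for e in range(n_el):                        # standard stiffness + load
    A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    b[e:e + 2] += h / 2.0 * f(x[e:e + 2])    # trapezoidal load vector

gamma = 10.0
for node, other in [(0, 1), (n_el, n_el - 1)]:
    dn = np.zeros(n_el + 1)                  # outward normal derivative of a
    dn[node], dn[other] = 1.0 / h, -1.0 / h  # P1 function, as a row vector
    uval = np.zeros(n_el + 1)
    uval[node] = 1.0                         # boundary trace, as a row vector
    A -= np.outer(uval, dn) + np.outer(dn, uval)   # consistency + symmetry
    A += (gamma / h) * np.outer(uval, uval)        # penalty

u = np.linalg.solve(A, b)
err = np.abs(u - np.sin(np.pi * x)).max()
print("max nodal error:", err)
```

In the cut setting, the same boundary terms are integrated over an interface that crosses the elements arbitrarily, and in the hybridized variants the interface unknowns are eliminated by static condensation.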

65N30 ; 34A38
