## An introduction to molecular dynamics

Stoltz, Gabriel (Lecture author) | CIRM (Publisher)

The aim of this two-hour lecture is to present the mathematical underpinnings of some common numerical approaches for computing average properties as predicted by statistical physics. The first part provides an overview of the most important concepts of statistical physics (in particular, thermodynamic ensembles). The second part is an introduction to the practical computation of averages with respect to the Boltzmann-Gibbs measure using appropriate stochastic dynamics of Langevin type. Rigorous ergodicity results as well as elements on the estimation of numerical errors are provided. The last part is devoted to the computation of transport coefficients, such as the mobility or the self-diffusion coefficient in fluids, relying either on integrated equilibrium correlations à la Green-Kubo or on the linear response of nonequilibrium dynamics in their steady states.
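As a minimal illustration of the second part, the sketch below uses an Euler-Maruyama discretization of the overdamped Langevin dynamics to estimate a Boltzmann-Gibbs average as an ergodic time average. The double-well potential and all numerical parameters are hypothetical choices for illustration, not taken from the lecture.

```python
import numpy as np

# Euler-Maruyama discretization of the overdamped Langevin dynamics
#   dq_t = -V'(q_t) dt + sqrt(2/beta) dW_t,
# whose invariant law is the Boltzmann-Gibbs measure Z^{-1} exp(-beta V(q)) dq.
# V(q) = q^4/4 - q^2/2 (double well) is a hypothetical test potential.

def grad_V(q):
    return q**3 - q

def langevin_average(observable, beta=1.0, dt=1e-3, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    q = 0.0
    acc = 0.0
    for _ in range(n_steps):
        q += -grad_V(q) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        acc += observable(q)
    return acc / n_steps  # ergodic time average approximating the Gibbs average

# Estimate E[q^2] under the Gibbs measure of the double-well potential.
print(langevin_average(lambda q: q**2))
```

In practice one would discard an initial burn-in portion of the trajectory and quantify the variance of the estimator; this sketch only shows the basic time-averaging principle.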

Belomestny, Denis (Lecture author) | CIRM (Publisher)

We propose a novel projection-based particle method for solving McKean-Vlasov stochastic differential equations. Our approach is based on a projection-type estimation of the marginal density of the solution at each time step.

The projection-based particle method leads in many situations to a significant reduction of numerical complexity compared to the widely used kernel density estimation algorithms.

We derive strong convergence rates and rates of density estimation. The convergence analysis in the case of linearly growing coefficients turns out to be rather challenging and requires a new type of averaging technique.

This case is exemplified by explicit solutions to a class of McKean-Vlasov equations with affine drift.

The performance of the proposed algorithm is illustrated by several numerical examples.
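To make the two ingredients concrete, here is a hedged sketch combining a projection-type density estimator (on an orthonormal Hermite basis) with a plain interacting-particle scheme for a McKean-Vlasov equation with affine drift, dX_t = (a E[X_t] - X_t) dt + dW_t. The basis, dynamics, and parameters are illustrative assumptions, not the paper's algorithm.

```python
import math
import numpy as np

def hermite_psi(x, K):
    """Orthonormal Hermite functions psi_0..psi_{K-1} on the real line."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    H = np.zeros((K, x.size))
    H[0] = 1.0
    if K > 1:
        H[1] = 2.0 * x
    for k in range(2, K):  # physicists' Hermite recurrence
        H[k] = 2.0 * x * H[k - 1] - 2.0 * (k - 1) * H[k - 2]
    norms = np.array([math.sqrt(math.sqrt(math.pi) * (2.0 ** k) * math.factorial(k))
                      for k in range(K)])
    return (H / norms[:, None]) * np.exp(-x**2 / 2.0)

def projection_density(samples, K=8):
    """Projection estimator p_hat(x) = sum_k c_k psi_k(x), c_k = mean_i psi_k(X_i)."""
    coeffs = hermite_psi(samples, K).mean(axis=1)
    return lambda x: coeffs @ hermite_psi(x, K)

def particle_scheme(N=5000, T=1.0, n_steps=100, a=0.5, seed=1):
    """Euler particle system for dX = (a E[X] - X) dt + dW (affine drift)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = rng.standard_normal(N)        # X_0 ~ N(0, 1)
    for _ in range(n_steps):
        m = X.mean()                  # plug-in estimate of E[X_t]
        X = X + (a * m - X) * dt + np.sqrt(dt) * rng.standard_normal(N)
    return X

X_T = particle_scheme()
p_hat = projection_density(X_T)
grid = np.linspace(-6.0, 6.0, 1201)
print(np.sum(p_hat(grid)) * (grid[1] - grid[0]))  # total mass, close to 1
```

Unlike a kernel estimator, evaluating the projection estimate costs only K basis evaluations per point, independently of the number of particles, which is the source of the complexity reduction mentioned above.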


Le Maître, Olivier (Lecture author) | CIRM (Publisher)

Stochastic models are used in many scientific fields, including mechanics, physics, chemistry, the life sciences, and the study of queues and social networks. Stochastic modeling is necessary when deterministic models cannot capture features of the dynamics, for instance, to represent the effects of unresolved small-scale fluctuations, or when systems are subject to significant inherent noise. Often, stochastic models are not completely known and involve calibrated parameters that should be considered uncertain. In that case, it is critical to assess the impact of the uncertain model parameters on the stochastic model predictions. This is usually achieved by performing a sensitivity analysis (SA), which characterizes changes in a model output when the uncertain parameters are varied. In the case of a stochastic model, one classically applies the SA to statistical moments of the prediction, estimating, for instance, the derivatives of the output mean and variance with respect to the uncertain parameters. In this presentation, we introduce new approaches to SA in stochastic systems based on variance decomposition methods (ANOVA, Sobol'). Compared to previous methods, our SA methods are global, with respect to both the parameters and the stochasticity, and decompose the variance into stochastic, parametric, and mixed contributions.

We first consider the case of uncertain stochastic differential equations (SDEs), that is, systems with external noisy forcing and uncertain parameters. A polynomial chaos (PC) analysis with stochastic expansion coefficients is proposed to approximate the SDE solution. We first use a Galerkin formalism to determine the expansion coefficients, leading to a hierarchy of SDEs. Under the mild assumption that the noise and the uncertain parameters are independent, the Galerkin formalism naturally separates parametric uncertainty from stochastic forcing dependencies, enabling an orthogonal decomposition of the variance and, consequently, the identification of contributions arising from the uncertainty in parameters, the stochastic forcing, and a coupled term. Non-intrusive approaches are subsequently considered for application to more complex systems hardly amenable to Galerkin projection. We also discuss parallel implementations and the application to derived quantities of interest, in particular a novel sampling strategy for quantities of interest that are non-smooth functionals of a smooth SDE solution. Numerical examples are provided to illustrate the output of the SA and the computational complexity of the method.

Second, we consider the case of stochastic simulators governed by a set of reaction channels with stochastic dynamics. Reformulating the system dynamics in terms of independent, standardized Poisson processes permits the identification of individual realizations of each reaction channel's dynamics and a quantitative characterization of the inherent sources of stochasticity. By judiciously exploiting the inherent stochasticity of the system, we can then compute the global sensitivities associated with individual reaction channels, as well as the importance of channel interactions. This approach is subsequently extended to account for the effects of uncertain parameters, and we propose dedicated algorithms to perform the Sobol' decomposition of the variance into contributions from an arbitrary subset of uncertain parameters and stochastic reaction channels. The algorithms are illustrated on simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. The sensitivity analysis output is also contrasted with a local, derivative-based sensitivity analysis method.
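The core idea of a global variance decomposition separating parametric from intrinsic stochastic contributions can be sketched with a pick-freeze (Sobol') estimator on a toy linear model Y = theta1 + 0.5 theta2 + sigma W, where theta1 and theta2 play the role of uncertain parameters and W of the inherent noise. This is a hypothetical illustration of the variance decomposition only, not the authors' Galerkin/PC machinery.

```python
import numpy as np

# Pick-freeze estimate of first-order Sobol' indices for the toy model
#   Y = theta1 + 0.5 * theta2 + sigma * W,
# with theta1, theta2, W independent standard normals and sigma = 0.5.
# Analytic variances: Var(Y) = 1 + 0.25 + 0.25 = 1.5, so the exact
# first-order indices are (2/3, 1/6, 1/6).

def model(t1, t2, w, sigma=0.5):
    return t1 + 0.5 * t2 + sigma * w

def first_order_index(which, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((3, n))   # rows: theta1, theta2, W
    B = rng.standard_normal((3, n))   # independent copy
    C = B.copy()
    C[which] = A[which]               # "freeze" the studied input
    yA = model(*A)
    yC = model(*C)
    # Covariance of the two runs estimates Var(E[Y | input `which`]).
    return np.mean(yA * yC) - np.mean(yA) * np.mean(yC)

var_Y = 1.5                            # analytic total variance
S = [first_order_index(i) / var_Y for i in range(3)]
print(S)  # approximately [0.667, 0.167, 0.167]
```

Here the index of W is the "stochastic" share of the variance and the indices of theta1, theta2 the "parametric" shares; with interacting inputs, mixed contributions appear as the gap between first-order and total indices.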

Turkedjiev, Plamen (Lecture author) | CIRM (Publisher)

In this lecture, we shall discuss the key steps involved in the use of least squares regression for approximating the solution to BSDEs. This includes how to obtain explicit error estimates, and how these error estimates can be used to tune the parameters of the numerical scheme based on complexity considerations.

The algorithms are based on a two-stage approximation process. First, a suitable discrete-time process is chosen to approximate the continuous-time solution of the BSDE. The nodes of the discrete-time process can be expressed as conditional expectations. As we shall demonstrate, the choice of discrete-time process is very important, as its properties will impact the performance of the overall numerical scheme. In the second stage, the conditional expectation is approximated in functional form using least squares regression on synthetically generated data - Monte Carlo simulations drawn from a suitable probability distribution. A key feature of the regression step is that the explanatory variables are built on a user-chosen, finite-dimensional linear space of functions, which the user specifies by setting basis functions. The choice of basis functions is made on the hypothesis that it contains the solution, so regularity and boundedness assumptions are used in its construction. The impact of the choice of basis functions is exposed in the error estimates.

In addition to the choice of the discrete-time approximation and the basis functions, the Markovian structure of the problem gives significant additional freedom with regard to the Monte Carlo simulations. We demonstrate how to use this additional freedom to develop generic stratified sampling approaches that are independent of the underlying transition density function. Moreover, we demonstrate how to leverage the stratification method to develop an HPC algorithm for implementation on GPUs.

Thanks to the Feynman-Kac relation between the solution of a BSDE and its associated semilinear PDE, the approximation of the BSDE can be used directly to approximate the solution of the PDE. Moreover, the smoothness properties of the PDE play a crucial role in the selection of the hypothesis space of regression functions, so this relationship is vitally important for the numerical scheme.

We conclude with some drawbacks of the regression approach, notably the curse of dimensionality.
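The regression step can be illustrated on a single backward step in a case with a known answer: approximating u(x) = E[g(X_{t+h}) | X_t = x] by least squares on a small polynomial basis, with X a Brownian motion and g(x) = x^2, so that the exact value is u(x) = x^2 + h. This is a hypothetical toy case, not the lecture's BSDE scheme.

```python
import numpy as np

# One backward step of regression Monte Carlo: regress the "payoff"
# g(X_{t+h}) = X_{t+h}^2 on a basis of functions of X_t to approximate
# the conditional expectation u(x) = E[g(X_{t+h}) | X_t = x] = x^2 + h.

rng = np.random.default_rng(0)
n, h = 100_000, 0.1
X_t = rng.standard_normal(n)                          # states at time t
X_next = X_t + np.sqrt(h) * rng.standard_normal(n)    # one exact step of BM
y = X_next**2                                         # g(X_{t+h})

basis = np.vander(X_t, 4, increasing=True)            # [1, x, x^2, x^3]
coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)    # least squares fit

# Evaluate the regression estimate at x = 1; exact value is 1 + h = 1.1.
x = 1.0
u_hat = float(np.polynomial.polynomial.polyval(x, coeffs))
print(u_hat)
```

Here the basis visibly "contains the solution" (x^2 + h is a polynomial of degree 2), which is exactly the hypothesis discussed above; with a poorly chosen basis, the bias term in the error estimates dominates.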

Tempone, Raul (Lecture author) | CIRM (Publisher)

We describe and analyze the Multi-Index Monte Carlo (MIMC) and Multi-Index Stochastic Collocation (MISC) methods for computing statistics of the solution of a PDE with random data. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel, and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. These mixed differences yield new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence rate. In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided there is enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that, in the optimal case, the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally efficient. We show the effectiveness of MIMC and MISC in computational tests using the mimclib open-source library, including PDEs with random coefficients and stochastic interacting particle systems. Finally, we briefly discuss the use of Markovian projection for the approximation of prices in the context of American basket options.
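The variance-reduction mechanism behind first-order (MLMC-style) differences can be sketched in one dimension: coupling a fine and a coarse Euler path of an SDE through the same Brownian increments makes the level difference have much smaller variance than either level alone. The GBM example and parameters below are illustrative assumptions, not taken from mimclib or the talk.

```python
import numpy as np

# Two coupled Euler-Maruyama discretizations of geometric Brownian motion
#   dX = mu X dt + sigma X dW,  E[X_T] = x0 * exp(mu * T).
# The coarse path uses the sums of consecutive fine Brownian increments,
# which couples the levels and shrinks Var(X_fine - X_coarse).

def euler_pair(n_coarse, n_samples, mu=0.05, sigma=0.2, T=1.0, x0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_fine = 2 * n_coarse
    dt_f = T / n_fine
    dW = np.sqrt(dt_f) * rng.standard_normal((n_samples, n_fine))
    Xf = np.full(n_samples, x0)
    Xc = np.full(n_samples, x0)
    for k in range(n_fine):
        Xf = Xf * (1 + mu * dt_f + sigma * dW[:, k])
    for k in range(n_coarse):
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]   # coupled coarse increment
        Xc = Xc * (1 + mu * 2 * dt_f + sigma * dWc)
    return Xf, Xc

Xf, Xc = euler_pair(n_coarse=32, n_samples=100_000)
# Telescoping identity E[Xf] = E[Xc] + E[Xf - Xc]; in a real MLMC estimator
# each term uses independent samples, with most of the work on the cheap level.
print(Xc.mean() + (Xf - Xc).mean())
print(np.var(Xf - Xc), "<<", np.var(Xf))
```

MIMC generalizes this by taking mixed (tensorized) differences over several discretization parameters at once, so the same cancellation happens in every direction simultaneously.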

Tempone, Raul (Lecture author) | CIRM (Publisher)

We will first recall, for a general audience, the use of Monte Carlo and Multilevel Monte Carlo methods in the context of uncertainty quantification. Then we will discuss the recently developed Adaptive Multilevel Monte Carlo (MLMC) methods for (i) Itô stochastic differential equations, (ii) stochastic reaction networks modeled by pure jump Markov processes, and (iii) partial differential equations with random inputs. In this context, the notion of adaptivity includes several aspects, such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time-stepping methods, and the selection of the total number of levels and the number of samples at different levels. Our adaptive MLMC estimator uses a hierarchy of adaptively refined, non-uniform time discretizations and, as such, may be considered a generalization of the uniform-discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity, with improved control of the multiplicative constant factor in the asymptotic analysis. In this context, we developed novel techniques for estimating the parameters needed in our MLMC algorithms, such as the variance of the difference between consecutive approximations. These techniques take particular care of the deepest levels, where, for efficiency reasons, only a few realizations are available to produce the essential estimates. Moreover, we show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate, which allows prescribing both the required accuracy and the confidence level in the final result. We present several examples to illustrate the above results and the corresponding computational savings.
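One of the parameter-selection steps mentioned above, choosing the number of samples per level, has a well-known closed form in standard (non-adaptive) MLMC: given estimated variances V_l and costs C_l of the level differences, the cost-optimal allocation under a statistical-error budget of eps^2/2 is N_l proportional to sqrt(V_l / C_l). The numbers below are illustrative, not from the talk.

```python
import math

# Standard MLMC sample allocation (Giles-style):
#   N_l = ceil( (2/eps^2) * sqrt(V_l / C_l) * sum_k sqrt(V_k * C_k) ),
# which minimizes total cost sum_l N_l C_l subject to
# sum_l V_l / N_l <= eps^2 / 2.

def mlmc_allocation(V, C, eps):
    s = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 / eps**2 * math.sqrt(v / c) * s) for v, c in zip(V, C)]

V = [1.0, 0.25, 0.0625, 0.015625]   # variance of the difference, decaying per level
C = [1.0, 2.0, 4.0, 8.0]            # cost per sample, doubling per level
N = mlmc_allocation(V, C, eps=0.01)
print(N)  # most samples on the cheap coarse levels, few on the deep ones
```

This is exactly why the deep levels end up with only a few realizations, motivating the careful variance-estimation techniques described in the abstract.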

Bossy, Mireille (Lecture author) | CIRM (Publisher)

Crisan, Dan (Lecture author) | CIRM (Publisher)

The talk will have two parts: In the first part, I will go over some of the basic feature of cubature methods for approximating solutions of classical SDEs and how they can be adapted to solve Backward SDEs. In the second part, I will introduce some recent results on the use of cubature method for approximating solutions of McKean-Vlasov SDEs.
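The simplest instance of the cubature idea in one dimension replaces the Brownian increment over a step of length h by the two deterministic values +sqrt(h) and -sqrt(h), each with weight 1/2 (a degree-3 cubature formula); propagating a scheme along all 2^n sign sequences yields a deterministic weak approximation of E[f(X_T)]. The GBM example below is a hedged toy sketch, not the construction from the talk.

```python
import itertools
import math

# Degree-3 cubature on Wiener space in 1D: each Brownian increment over a
# step of length h is replaced by +-sqrt(h) with weight 1/2. Enumerating
# the 2^n sign paths through an Euler scheme for geometric Brownian motion
#   dX = mu X dt + sigma X dW
# gives a deterministic approximation of E[f(X_T)] = f-average over paths.

def cubature_expectation(f, mu=0.05, sigma=0.2, T=1.0, x0=1.0, n=10):
    h = T / n
    total = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = x0
        for s in signs:
            x *= 1 + mu * h + sigma * s * math.sqrt(h)
        total += f(x)
    return total / 2**n

# For f(x) = x, the exact value is x0 * exp(mu * T) ~= 1.0513.
print(cubature_expectation(lambda x: x))
```

The exponential growth of the path tree with n is the practical obstacle, which is why cubature methods are combined with recombination or tree-pruning techniques in higher dimensions and for longer horizons.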

Rey-Bellet, Luc (Lecture author) | CIRM (Publisher)
