Documents 65C05 | records found: 18


In this short course, we recall the basics of Markov chain Monte Carlo (Gibbs & Metropolis samplers) along with the most recent developments like Hamiltonian Monte Carlo, Rao-Blackwellisation, divide & conquer strategies, pseudo-marginal and other noisy versions. We also cover the specific approximate method of ABC that is currently used in many fields to handle complex models in manageable conditions, from the original motivation in population genetics to the several reinterpretations of the approach found in the recent literature. Time allowing, we will also comment on the programming developments like BUGS, STAN and Anglican that stemmed from those specific algorithms.

65C05 ; 65C40 ; 60J10 ; 62F15


We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals, when the target probability measure is the $n$-fold product of a one-dimensional law. It is well known that, in the limit as $n$ tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as a function of the dimension $n$, a diffusive limit is obtained for each component of the Markov chain. We generalize this result when the initial distribution is not the target probability measure. The obtained diffusive limit is the solution to a stochastic differential equation nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation. We discuss practical counterparts in order to optimize the variance of the proposal distribution to accelerate convergence to equilibrium. Our analysis confirms the interest of the constant acceptance rate strategy (with acceptance rate between 1/4 and 1/3).
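As a minimal illustration of this tuning, here is a hedged sketch of Random Walk Metropolis on a product of standard Gaussians, with per-coordinate proposal variance $\ell^2/n$; the dimension, the scale $\ell$ and the target are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                      # dimension (illustrative)
ell = 2.38                                  # proposal scale; variance ell^2/n is the classical tuning

def log_target(x):                          # assumed target: product of standard 1-d Gaussians
    return -0.5 * np.sum(x ** 2)

x = rng.normal(size=n)                      # starting point (need not be at equilibrium)
n_iter, accepted = 20000, 0
for _ in range(n_iter):
    y = x + (ell / np.sqrt(n)) * rng.normal(size=n)   # Gaussian proposal, variance ell^2/n per coordinate
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x, accepted = y, accepted + 1

print("acceptance rate:", accepted / n_iter)          # lands near the 1/4-1/3 range discussed above
```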

60J22 ; 60J10 ; 60G50 ; 60F17 ; 60J60 ; 60G09 ; 65C40 ; 65C05


Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of unsupervised classification methods known as “clustering”. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementation of such clustering/quantization methods more or less relies on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of those procedures can also be implemented when dealing with a dataset (or more generally a discrete distribution).
More formally, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*} -\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$ where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dotsb, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
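A hedged sketch of the batch (randomized) Lloyd's procedure on an empirical measure, in the spirit of the k-means fixed-point search mentioned above; the Gaussian data, codebook size and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 8, 5000, 2                         # codebook size, sample size, dimension (illustrative)
xi = rng.normal(size=(n, d))                 # empirical measure of a toy 2-d Gaussian, standing in for mu

# Batch Lloyd's procedure: alternate nearest-prototype assignment (Voronoi cells)
# and prototype update (cell means), i.e. a fixed-point iteration on the codebook.
x = xi[rng.choice(n, N, replace=False)]      # initial codebook drawn from the data
for _ in range(50):
    dist = ((xi[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # squared distances to prototypes
    cell = dist.argmin(axis=1)                               # Voronoi cell of each sample
    for i in range(N):
        members = xi[cell == i]
        if len(members):
            x[i] = members.mean(axis=0)                      # mean (fixed-point) update

dist = ((xi[:, None, :] - x[None, :, :]) ** 2).sum(-1)
cell = dist.argmin(axis=1)
weights = np.bincount(cell, minlength=N) / n                 # estimates of mu(C(x_i*))
print("distortion:", dist.min(axis=1).mean(), "weights:", weights)
```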

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05


We will first recall, for a general audience, the use of Monte Carlo and Multi-level Monte Carlo methods in the context of Uncertainty Quantification. Then we will discuss the recently developed Adaptive Multilevel Monte Carlo (MLMC) Methods for (i) Itô Stochastic Differential Equations, (ii) Stochastic Reaction Networks modeled by Pure Jump Markov Processes and (iii) Partial Differential Equations with random inputs. In this context, the notion of adaptivity includes several aspects such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time stepping methods and the selection of the total number of levels and the number of samples at different levels. Our Adaptive MLMC estimator uses a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity with an improved control of the multiplicative constant factor in the asymptotic analysis. In this context, we developed novel techniques for estimation of parameters needed in our MLMC algorithms, such as the variance of the difference between consecutive approximations. These techniques take particular care of the deepest levels, where for efficiency reasons only a few realizations are available to produce essential estimates. Moreover, we show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate that allows prescribing both the required accuracy and confidence level in the final result. We present several examples to illustrate the above results and the corresponding computational savings.
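As background, here is a hedged sketch of a plain (non-adaptive, uniform-timestep) MLMC estimator in the sense of Giles, applied to a toy geometric Brownian motion; the SDE, the number of levels and the per-level sample sizes are illustrative assumptions rather than the adaptive choices described in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SDE: geometric Brownian motion dX = mu X dt + sig X dW, X_0 = 1 (assumed example).
mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0

def mlmc_level(l, n, M=2):
    """Return n samples of P_l - P_{l-1} with coupled Euler paths, where P = X_T."""
    nf = M ** l                              # fine time steps on level l
    hf = T / nf
    Xf = np.full(n, X0)
    if l == 0:                               # coarsest level: no coarse path
        dW = rng.normal(0, np.sqrt(hf), n)
        return Xf + mu * Xf * hf + sig * Xf * dW
    nc = nf // M
    hc = T / nc
    Xc = np.full(n, X0)
    for _ in range(nc):
        dWc = np.zeros(n)
        for _ in range(M):                   # fine steps, sharing Brownian increments with the coarse step
            dW = rng.normal(0, np.sqrt(hf), n)
            Xf += mu * Xf * hf + sig * Xf * dW
            dWc += dW
        Xc += mu * Xc * hc + sig * Xc * dWc
    return Xf - Xc

# Fixed number of levels and samples for illustration; adaptive rules would choose
# these from estimated variances, as discussed in the abstract.
L, N = 4, [100000, 50000, 20000, 10000, 5000]
estimate = sum(mlmc_level(l, N[l]).mean() for l in range(L + 1))
print("MLMC estimate of E[X_T]:", estimate, " exact:", X0 * np.exp(mu * T))
```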

65C30 ; 65C05 ; 60H15 ; 60H35 ; 35R60


We describe and analyze the Multi-Index Monte Carlo (MIMC) and the Multi-Index Stochastic Collocation (MISC) method for computing statistics of the solution of a PDE with random data. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. These mixed differences yield new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence. In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that, in the optimal case, the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally efficient. We show the effectiveness of MIMC and MISC in some computational tests using the mimclib open source library, including PDEs with random coefficients and Stochastic Interacting Particle Systems. Finally, we will briefly discuss the use of Markovian projection for the approximation of prices in the context of American basket options.
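To make the mixed differences concrete, here is a hedged toy sketch (not from the talk): a tensor midpoint quadrature indexed by two discretization levels, together with the first-order mixed difference whose rapid, product-like decay is what MIMC and MISC exploit.

```python
import numpy as np

def P(l1, l2, f=lambda x, y: np.exp(x * y)):
    """Tensor midpoint rule for the integral of f over [0,1]^2 with 2^l1 x 2^l2 cells (toy solver)."""
    n1, n2 = 2 ** l1, 2 ** l2
    x = (np.arange(n1) + 0.5) / n1
    y = (np.arange(n2) + 0.5) / n2
    return f(x[:, None], y[None, :]).mean()

def mixed_difference(l1, l2):
    """First-order mixed difference used by MIMC/MISC; boundary indices fall back to P itself."""
    d = P(l1, l2)
    if l1 > 0: d -= P(l1 - 1, l2)
    if l2 > 0: d -= P(l1, l2 - 1)
    if l1 > 0 and l2 > 0: d += P(l1 - 1, l2 - 1)
    return d

for l1 in range(4):
    print([f"{abs(mixed_difference(l1, l2)):.1e}" for l2 in range(4)])
# The entries decay roughly like the product 4^{-l1} * 4^{-l2}, which is why
# truncated multi-index sets of such differences are so effective.
```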

65C30 ; 65C05 ; 60H15 ; 60H35 ; 35R60 ; 65M70


In this lecture, we shall discuss the key steps involved in the use of least squares regression for approximating the solution to BSDEs. This includes how to obtain explicit error estimates, and how these error estimates can be used to tune the parameters of the numerical scheme based on complexity considerations.
The algorithms are based on a two-stage approximation process. Firstly, a suitable discrete time process is chosen to approximate the continuous time solution of the BSDE. The nodes of the discrete time process can be expressed as conditional expectations. As we shall demonstrate, the choice of discrete time process is very important, as its properties will impact the performance of the overall numerical scheme. In the second stage, the conditional expectation is approximated in functional form using least squares regression on synthetically generated data - Monte Carlo simulations drawn from a suitable probability distribution. A key feature of the regression step is that the explanatory variables are built on a user-chosen finite dimensional linear space of functions, which the user specifies by setting basis functions. The choice of basis functions is made on the hypothesis that it contains the solution, so regularity and boundedness assumptions are used in its construction. The impact of the choice of the basis functions is exposed in error estimates.
In addition to the choice of discrete time approximation and the basis functions, the Markovian structure of the problem gives significant additional freedom with regards to the Monte Carlo simulations. We demonstrate how to use this additional freedom to develop generic stratified sampling approaches that are independent of the underlying transition density function. Moreover, we demonstrate how to leverage the stratification method to develop a HPC algorithm for implementation on GPUs.
Thanks to the Feynman-Kac relation between the solution of a BSDE and its associated semilinear PDE, the approximation of the BSDE can be directly used to approximate the solution of the PDE. Moreover, the smoothness properties of the PDE play a crucial role in the selection of the hypothesis space of regression functions, so this relationship is vitally important for the numerical scheme.
We conclude with some drawbacks of the regression approach, notably the curse of dimensionality.
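A hedged illustration of the regression step in isolation: approximating a single conditional expectation by least squares on a small polynomial basis, for a toy problem where the exact answer is known. The dynamics, basis and sample size are assumptions made for the sketch, not the scheme of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate y(x) = E[ g(X_{t+dt}) | X_t = x ] for Brownian motion X and g(x) = x^2,
# for which the exact answer y(x) = x^2 + dt is known (assumed toy problem).
n, dt = 10000, 0.1
X = rng.normal(size=n)                         # simulated values of X_t
Xnext = X + np.sqrt(dt) * rng.normal(size=n)   # one step of the (exact) dynamics
g = Xnext ** 2                                 # "data" whose conditional expectation we regress

# User-chosen finite-dimensional hypothesis space: polynomials 1, x, x^2.
basis = np.vander(X, 3, increasing=True)       # columns [1, X, X^2]
coeff, *_ = np.linalg.lstsq(basis, g, rcond=None)
print("fitted coefficients:", coeff)           # close to [dt, 0, 1]
```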

65C05 ; 65C30 ; 93E24 ; 60H35 ; 60H10


In this talk we first quickly present a classical and simple model used to describe flow in porous media (based on Darcy's Law). The high heterogeneity of the media and the lack of data are taken into account by the use of random permeability fields. We then present some mathematical particularities of the random fields frequently used for such applications and the corresponding theoretical and numerical issues.
After giving a short overview of various applications of this basic model, we study in more detail the problem of the contamination of an aquifer by migration of pollutants. We present a numerical method to compute the mean spreading of a diffusive set of particles representing a tracer plume in an advecting flow field. We deal with the uncertainty thanks to a Monte Carlo method and use a stochastic particle method to approximate the solution of the transport-diffusion equation. Error estimates will be established and numerical results (obtained by A. Beaudoin et al. using the PARADIS software) will be presented. In particular, the influence of the molecular diffusion and the heterogeneity on the asymptotic longitudinal macrodispersion will be investigated thanks to numerical experiments. Studying qualitatively and quantitatively the influence of molecular diffusion, correlation length and standard deviation is an important question in hydrogeology.
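A heavily simplified, hedged sketch of the two nested Monte Carlo loops (not the PARADIS method itself): an outer loop over realizations of a random velocity field, standing in for a Darcy velocity derived from a random permeability field, and an inner stochastic particle method whose longitudinal spreading is averaged. Every model ingredient here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Outer Monte Carlo over random velocity fields, inner particle method with
# molecular diffusion; output: mean longitudinal spreading of the plume.
n_fields, n_part, n_steps, dt, D = 50, 2000, 200, 0.05, 0.01

def random_shear(n_modes=10):
    """A toy random shear velocity u(y): mean flow plus random Fourier modes."""
    a = rng.normal(size=n_modes)
    b = rng.uniform(0, 2 * np.pi, n_modes)
    return lambda y: 1.0 + sum(a[k] * np.cos((k + 1) * y + b[k]) for k in range(n_modes)) / n_modes

spread = []
for _ in range(n_fields):
    u = random_shear()
    x = np.zeros(n_part)                      # longitudinal particle positions
    y = rng.uniform(0, 2 * np.pi, n_part)     # transverse positions
    for _ in range(n_steps):
        x += u(y) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_part)
        y += np.sqrt(2 * D * dt) * rng.normal(size=n_part)
    spread.append(x.var())                    # longitudinal spreading for this field realization

print("mean longitudinal spreading:", np.mean(spread))
```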

76S05 ; 76M28 ; 65C05


Uncertainty quantification (UQ) in the context of engineering applications aims at quantifying the effects of uncertainty in the input parameters of complex models on their output responses. Due to the increased availability of computational power and advanced modelling techniques, current simulation tools can provide unprecedented insight into the behaviour of complex systems. However, the associated computational costs have also increased significantly, often hindering the applicability of standard UQ techniques based on Monte Carlo sampling. To overcome this limitation, metamodels (also referred to as surrogate models) have become a staple tool in the engineering UQ community. This lecture will introduce a general framework for dealing with uncertainty in the presence of expensive computational models, in particular for reliability analysis (also known as rare event estimation). Reliability analysis focuses on the tail behaviour of a stochastic model response, so as to compute the probability of exceedance of a given performance measure that would result in a critical failure of the system under study. Classical approximation-based techniques, as well as their modern metamodel-based counterparts, will be introduced.
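To motivate the need for surrogates, here is a hedged sketch of crude Monte Carlo for a failure probability, with a toy limit-state function standing in for an expensive simulator; the function, input law and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # assumed toy performance function: failure when g <= 0
    return 6.0 - x[:, 0] - x[:, 1]

n = 10 ** 6
X = rng.normal(size=(n, 2))     # input uncertainty: two standard Gaussian parameters
p_f = np.mean(g(X) <= 0)        # crude Monte Carlo estimate of the probability of exceedance
cov = np.sqrt((1 - p_f) / (n * p_f)) if p_f > 0 else np.inf   # estimator coefficient of variation
print(f"p_f = {p_f:.2e}, CoV = {cov:.1%}")
# The CoV scales like 1/sqrt(n * p_f): for very small p_f each percent of accuracy
# costs enormous numbers of model runs, which is what makes metamodel-based
# reliability methods attractive when each evaluation of g is expensive.
```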

62P30 ; 65C05 ; 90B25 ; 62N05


Consider a problem involving Markovian trajectories of particles for which you are trying to estimate the probability of an event.
Under the assumption that you can represent this event as the last event of a nested sequence of events, it is possible to design a splitting algorithm to estimate the probability of the last event in an efficient way. Moreover, you can obtain a sequence of trajectories which realize this particular event, giving access to a statistical representation of quantities conditional on the event being realized.
In this talk I will present the "Adaptive Multilevel Splitting" algorithm and its application to various toy models. I will explain why it creates an unbiased estimator of a probability, and I will give results obtained from numerical simulations.
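A hedged sketch of one common variant of Adaptive Multilevel Splitting (kill the lowest-scoring replicas, rebranch them from survivors above the discarded level), applied to the toy event "the running maximum of a Gaussian random walk exceeds z"; the score function, path model and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 50, 0.2           # path length and step size (illustrative values)
N, z = 200, 5.0              # number of replicas, rare level

def fresh_path():
    return np.concatenate(([0.0], np.cumsum(sigma * rng.normal(size=T))))

def score(path):             # reaction coordinate: running maximum of the path
    return path.max()

paths = [fresh_path() for _ in range(N)]
p_hat = 1.0
while True:
    scores = np.array([score(p) for p in paths])
    m = scores.min()
    if m >= z:                                   # all replicas realize the event
        break
    killed = np.flatnonzero(scores == m)
    survivors = np.flatnonzero(scores > m)
    if len(survivors) == 0:                      # extinction: estimator is 0
        p_hat = 0.0
        break
    p_hat *= (N - len(killed)) / N               # splitting correction at this level
    for k in killed:                             # rebranch each killed replica from a survivor
        donor = paths[rng.choice(survivors)]
        i = int(np.argmax(donor > m))            # first index where the donor exceeds level m
        tail = donor[i] + np.cumsum(sigma * rng.normal(size=T - i))
        paths[k] = np.concatenate((donor[:i + 1], tail))

print("AMS estimate of P(max > z):", p_hat)      # compare with ~2*P(W_T > z) for Brownian motion
```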

60J22 ; 65C35 ; 65C05 ; 65C40


This talk is devoted to the presentation of algorithms for simulating rare events in a molecular dynamics context, e.g., the simulation of reactive paths. We will consider $\mathbb{R}^d$ as the space of configurations for a given system, where the probability of a specific configuration is given by a Gibbs measure depending on a temperature parameter. The dynamics of the system is given by an overdamped Langevin (or gradient) equation. The problem is to find how the system can evolve from a local minimum of the potential to another, following the above dynamics. After a brief overview of classical Monte Carlo methods, we will expose recent results on adaptive multilevel splitting techniques.
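As a small companion (not from the talk), a hedged sketch of the overdamped Langevin dynamics in a one-dimensional double-well potential, where well-to-well transitions play the role of the reactive paths; the potential, temperature and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped Langevin dynamics dX = -V'(X) dt + sqrt(2/beta) dW in a 1-d double well
# (an assumed toy stand-in for a molecular system); V(x) = (x^2 - 1)^2.
V_prime = lambda x: 4 * x * (x ** 2 - 1)
beta, dt, n_steps = 5.0, 1e-3, 10 ** 6
x, crossings, side = -1.0, 0, -1
for _ in range(n_steps):
    x += -V_prime(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()   # Euler-Maruyama step
    if side < 0 and x > 1.0:
        crossings, side = crossings + 1, +1     # transition from the left well to the right one
    elif side > 0 and x < -1.0:
        crossings, side = crossings + 1, -1

print("observed well-to-well transitions:", crossings)
# At low temperature (large beta) such transitions become rare events, the regime
# where splitting methods such as adaptive multilevel splitting become necessary.
```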

65C05 ; 65C60 ; 65C35 ; 62L12 ; 62D05


During this talk, I will present how the development of non-reversible algorithms by piecewise deterministic Markov processes (PDMP) was first motivated by the impressive successes of cluster algorithms for the simulation of lattice spin systems. I will especially stress how the spin involution symmetry crucial to the cluster schemes was replaced by the exploitation of more general symmetry, in particular thanks to the factorization of the energy function.
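A hedged sketch of a one-dimensional Zig-Zag sampler, one of the PDMP-based non-reversible algorithms alluded to above, for a standard Gaussian target where the event times can be sampled by exact inversion; the target and run length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zig-Zag sampler in 1-d for U(x) = x^2/2: the event rate in direction theta is
# (theta * x)^+ and the first event time can be obtained by inverting the
# integrated rate in closed form (a convenience of this toy target).
x, theta, t_end = 0.0, 1.0, 10 ** 4
events_t, events_x = [0.0], [x]
t = 0.0
while t < t_end:
    a = theta * x
    E = rng.exponential()
    tau = np.sqrt(2 * E + max(a, 0.0) ** 2) - a   # first event of the inhomogeneous rate (a + s)^+
    x += theta * tau                              # deterministic straight-line motion between events
    t += tau
    theta = -theta                                # velocity flip at the event
    events_t.append(t); events_x.append(x)

# Discretize the piecewise-linear trajectory on a uniform time grid to get samples.
grid = np.linspace(0, t_end, 10 ** 5)
samples = np.interp(grid, events_t, events_x)
print("mean:", samples.mean(), "variance:", samples.var())   # close to 0 and 1
```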
65C05 ; 65C40 ; 60K35 ; 68K87


Bayesian computational methods
Robert, Christian P. (Conference author) | CIRM (Publisher)

This is a short introduction to the many directions of current research in Bayesian computational statistics, from accelerating MCMC algorithms, to using partly deterministic Markov processes like the bouncy particle and the zigzag samplers, to approximating the target or the proposal distributions in such methods. The main illustration focuses on the evaluation of normalising constants and ratios of normalising constants.
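A hedged sketch of the simplest route to a normalising constant, importance sampling against a tractable proposal; the unnormalised target and the proposal here are toy assumptions, not the methods surveyed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# For an unnormalised density p_tilde, Z = E_q[ p_tilde(X) / q(X) ] with X ~ q.
# Toy example (assumed): p_tilde(x) = exp(-x^2/2), whose true Z is sqrt(2*pi).
p_tilde = lambda x: np.exp(-0.5 * x ** 2)

sig_q = 2.0                                    # proposal: N(0, sig_q^2), wider than the target
q = lambda x: np.exp(-0.5 * (x / sig_q) ** 2) / (sig_q * np.sqrt(2 * np.pi))

n = 10 ** 6
x = sig_q * rng.normal(size=n)
Z_hat = np.mean(p_tilde(x) / q(x))             # importance-sampling estimate of Z
print("estimate:", Z_hat, "truth:", np.sqrt(2 * np.pi))
```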

62C10 ; 65C60 ; 62F15 ; 65C05


An introduction to particle filters
Chopin, Nicolas (Conference author) | CIRM (Publisher)

This course will give a gentle introduction to SMC (Sequential Monte Carlo algorithms):
• motivation: state-space (hidden Markov) models, sequential analysis of such models; non-sequential problems that may be tackled using SMC.
• Formalism: Markov kernels, Feynman-Kac distributions.
• Monte Carlo tricks: importance sampling and resampling
• standard particle filters: bootstrap, guided, auxiliary
• maximum likelihood estimation of state-space models
• Bayesian estimation of these models: PMCMC, SMC$^2$.
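A hedged sketch of the bootstrap particle filter from the list above, run on a toy linear-Gaussian state-space model (chosen because the exact Kalman filter is available for comparison); the model, parameters and resampling scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (assumed): X_t = rho X_{t-1} + sigma_x eps_t,  Y_t = X_t + sigma_y eta_t.
rho, sigma_x, sigma_y, T, N = 0.9, 1.0, 1.0, 100, 1000

# Simulate synthetic data from the model.
x_true, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x_true[t] = rho * (x_true[t - 1] if t else 0.0) + sigma_x * rng.normal()
    y[t] = x_true[t] + sigma_y * rng.normal()

particles = rng.normal(0, sigma_x / np.sqrt(1 - rho ** 2), N)   # draw from the stationary law
loglik, filt_mean = 0.0, np.zeros(T)
for t in range(T):
    particles = rho * particles + sigma_x * rng.normal(size=N)  # propagate with the state kernel
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2 * np.pi))
    m = logw.max()
    loglik += m + np.log(np.mean(np.exp(logw - m)))             # running log-likelihood estimate
    w = np.exp(logw - m); w /= w.sum()                          # normalised importance weights
    filt_mean[t] = np.sum(w * particles)                        # filtering mean estimate
    particles = particles[rng.choice(N, N, p=w)]                # multinomial resampling

print("estimated log-likelihood:", loglik)
```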

62F15 ; 62D05 ; 65C05 ; 60J22 ; 62M05 ; 62M20


After an overview of some approaches to define random sequences, we will discuss pseudorandom sequences and low-discrepancy sequences. Applications to numerical integration, Koksma-Hlawka inequality, and Niederreiter’s uniform point sets will be discussed. We will then present randomized quasi-Monte Carlo sequences.
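A hedged sketch comparing plain Monte Carlo with a quasi-Monte Carlo rule built from the van der Corput sequence (the one-dimensional building block of low-discrepancy constructions); the integrand and sample size are illustrative assumptions.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence in the given base."""
    pts = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0 / base, 0.0
        while k > 0:                 # reverse the base-b digits of k across the radix point
            x += (k % base) * f
            k //= base
            f /= base
        pts[i] = x
    return pts

# Compare plain Monte Carlo and quasi-Monte Carlo for int_0^1 exp(x) dx = e - 1.
n = 2 ** 12
rng = np.random.default_rng(0)
mc = np.exp(rng.uniform(size=n)).mean()
qmc = np.exp(van_der_corput(n)).mean()
exact = np.e - 1
print(f"MC error  {abs(mc - exact):.2e}   QMC error  {abs(qmc - exact):.2e}")
```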

65C20 ; 65C05


In the first part, we briefly recall the theory of stochastic differential equations (SDEs) and present Maruyama's classical theorem on strong convergence of the Euler-Maruyama method, for which both drift and diffusion coefficient of the SDE need to be Lipschitz continuous.
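A hedged sketch of the Euler-Maruyama method and its strong error, on a geometric Brownian motion where the exact solution is available along the same Brownian path; the coefficients and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SDE (assumed): dX = mu X dt + sigma X dW with known exact solution.
mu, sigma, T, X0, n_paths = 0.1, 0.4, 1.0, 1.0, 10 ** 4

for n_steps in [8, 16, 32, 64, 128]:
    h = T / n_steps
    X = np.full(n_paths, X0)
    W = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = np.sqrt(h) * rng.normal(size=n_paths)
        X += mu * X * h + sigma * X * dW      # Euler-Maruyama step
        W += dW
    X_exact = X0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * W)
    strong_err = np.mean(np.abs(X - X_exact))
    print(f"h = {h:.4f}   strong error = {strong_err:.4f}")   # decays roughly like h^(1/2)
```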

65C05 ; 91G60 ; 60H10


The models of Bachelier and Samuelson will be introduced. Methods for generating random numbers from non-uniform distributions, such as inverse transformation and acceptance-rejection, as well as generation of stochastic processes, will be discussed. Applications to pricing options via randomized quasi-Monte Carlo methods will be presented.
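Hedged sketches of the two generation methods named above: inverse transformation for an exponential law, and acceptance-rejection for a standard normal using a Laplace proposal with the classical constant $c=\sqrt{2e/\pi}$; the specific targets and proposal are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10 ** 5

# Inverse transform: if U ~ Uniform(0,1) then F^{-1}(U) has distribution F.
# Example: Exp(lam) via X = -log(U)/lam.
lam = 2.0
exp_samples = -np.log(rng.uniform(size=n)) / lam
print("Exp mean:", exp_samples.mean(), "(target:", 1 / lam, ")")

# Acceptance-rejection: standard normal from a Laplace(1) proposal g,
# using the bound f(x) <= c g(x) with c = sqrt(2e/pi).
c = np.sqrt(2 * np.e / np.pi)
accepted = []
while len(accepted) < n:
    u = rng.uniform(size=n)
    y = np.where(rng.uniform(size=n) < 0.5, 1, -1) * rng.exponential(size=n)   # Laplace(1) draws
    ratio = np.exp(-0.5 * y ** 2) / (np.sqrt(2 * np.pi) * c * 0.5 * np.exp(-np.abs(y)))  # f/(c g)
    accepted.extend(y[u < ratio])
normal_samples = np.array(accepted[:n])
print("Normal mean/var:", normal_samples.mean(), normal_samples.var())
```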

65C20 ; 65C05 ; 91G60


After an overview of some approaches to define random sequences, we will discuss pseudorandom sequences and low-discrepancy sequences. Applications to numerical integration, Koksma-Hlawka inequality, and Niederreiter’s uniform point sets will be discussed. We will then present randomized quasi-Monte Carlo sequences.

65C20 ; 65C05


In the second part we show how the classical result can be used also for SDEs with drift that may be discontinuous and diffusion that may be degenerate. In that context I will present a concept of (multidimensional) piecewise Lipschitz drift where the set of discontinuities is a sufficiently smooth hypersurface in the multi-dimensional Euclidean space. We discuss geometric properties of the set of discontinuities that are needed to transfer the convergence result from the Lipschitz case to the piecewise Lipschitz case.

65C05 ; 91G60 ; 60H10
