
Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit when $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that the perturbations of a single agent's strategy does not influence the statistical states distribution, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short), a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
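For orientation, and in notation not spelled out in the abstract (so the precise form below is only an assumption about the standard second-order setting), the coupled system typically reads
$-\partial_t u - \nu\Delta u + H(x,\nabla u) = f(x, m(t,\cdot))$ (backward Hamilton-Jacobi-Bellman),
$\partial_t m - \nu\Delta m - \mathrm{div}\big(m\,\partial_p H(x,\nabla u)\big) = 0$ (forward Fokker-Planck),
$u(T,\cdot) = g(\cdot, m(T,\cdot))$, $m(0,\cdot) = m_0$,
where $u$ is the value function of a representative agent and $m$ the distribution of the agents' states.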
The latter system of PDEs has closed-form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate these systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
These schemes have several important features:

- The discrete problem has the same structure as the continuous one, so existence, energy estimates, and possibly uniqueness can be obtained with the same kind of arguments

- Monotonicity guarantees the stability of the scheme: it is robust in the deterministic limit

- Convergence to classical or weak solutions can be proved

Finally, there are particular cases, named variational MFGs, in which the system of PDEs can be seen as the optimality conditions of some optimal control problem driven by a PDE. In such cases, augmented Lagrangian methods can be used for solving the discrete nonlinear system. The mini-course will be organized as follows:

1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.

2. Monotone finite difference schemes

3. Examples of applications

4. Variational MFG and related algorithms for solving the discrete system of nonlinear equations

49K20 ; 49N70 ; 35K40 ; 35K55 ; 35Q84 ; 65K10 ; 65M06 ; 65M12 ; 91A23 ; 91A15

We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals, when the target probability measure is the $n$-fold product of a one-dimensional law. It is well known that, in the limit as $n$ tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as a function of the dimension $n$, a diffusive limit is obtained for each component of the Markov chain. We generalize this result when the initial distribution is not the target probability measure. The obtained diffusive limit is the solution to a stochastic differential equation nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation. We discuss practical counterparts in order to optimize the variance of the proposal distribution and thus accelerate convergence to equilibrium. Our analysis confirms the interest of the constant acceptance rate strategy (with acceptance rate between 1/4 and 1/3).
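As a companion to this abstract, here is a minimal NumPy sketch of the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals and the classical dimension-dependent scaling of the proposal variance; the function names and the standard Gaussian one-dimensional marginal used in the example are illustrative choices, not taken from the talk.

```python
import numpy as np

def rwm_product_target(log_f1, n, n_steps, ell=2.38, seed=None):
    """Random Walk Metropolis on R^n for the product target prod_i f1(x_i),
    with Gaussian proposals of variance ell^2 / n (dimension-dependent scaling).
    Returns the chain and the empirical acceptance rate."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                 # arbitrary start (not at equilibrium)
    logp = np.sum(log_f1(x))
    sigma = ell / np.sqrt(n)
    chain = np.empty((n_steps, n))
    accepted = 0
    for k in range(n_steps):
        proposal = x + sigma * rng.standard_normal(n)
        logp_prop = np.sum(log_f1(proposal))
        if np.log(rng.random()) < logp_prop - logp:    # Metropolis accept/reject step
            x, logp = proposal, logp_prop
            accepted += 1
        chain[k] = x
    return chain, accepted / n_steps

# Example: one-dimensional marginal = standard Gaussian.
chain, acc = rwm_product_target(lambda x: -0.5 * x**2, n=50, n_steps=20_000)
print("acceptance rate:", acc)
```

Tuning the constant $\ell$ so that the empirical acceptance rate falls roughly between 1/4 and 1/3 is the practical counterpart of the constant acceptance rate strategy mentioned above.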

60J22 ; 60J10 ; 60G50 ; 60F17 ; 60J60 ; 60G09 ; 65C40 ; 65C05

In this talk we first quickly present a classical and simple model used to describe flow in porous media (based on Darcy's law). The high heterogeneity of the media and the lack of data are taken into account by the use of random permeability fields. We then present some mathematical particularities of the random fields frequently used for such applications and the corresponding theoretical and numerical issues.
After giving a short overview of various applications of this basic model, we study in more detail the problem of the contamination of an aquifer by migration of pollutants. We present a numerical method to compute the mean spreading of a diffusive set of particles representing a tracer plume in an advecting flow field. We deal with the uncertainty thanks to a Monte Carlo method and use a stochastic particle method to approximate the solution of the transport-diffusion equation. Error estimates will be established and numerical results (obtained by A. Beaudoin et al. using the PARADIS software) will be presented. In particular, the influence of the molecular diffusion and the heterogeneity on the asymptotic longitudinal macrodispersion will be investigated thanks to numerical experiments. Studying qualitatively and quantitatively the influence of molecular diffusion, correlation length and standard deviation is an important question in hydrogeology.
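A schematic version of the Monte Carlo / stochastic particle approach just described might look as follows; the velocity sampler is a toy stand-in for the Darcy velocity computed from a random permeability field (the actual flow solver and the PARADIS software are not reproduced), and every name below is an illustrative assumption.

```python
import numpy as np

def mean_plume_spreading(velocity_sampler, n_fields=20, n_particles=1000,
                         n_steps=200, dt=0.1, D=1e-2, seed=None):
    """Monte Carlo over realizations of the random velocity field combined with
    a stochastic particle method for advection-diffusion of a tracer plume.
    Returns the mean longitudinal spread (variance of x-positions) over time."""
    rng = np.random.default_rng(seed)
    spread = np.zeros(n_steps)
    for _ in range(n_fields):              # Monte Carlo loop over the uncertainty
        u = velocity_sampler(rng)          # one realization of the velocity field
        x = np.zeros((n_particles, 2))     # particles released at the origin
        for k in range(n_steps):           # Euler scheme for dX = u(X) dt + sqrt(2D) dW
            x += u(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
            spread[k] += np.var(x[:, 0])
    return spread / n_fields

def toy_velocity(rng):
    """Toy stand-in for the Darcy velocity: unit mean flow plus a frozen
    sinusoidal perturbation depending on the transverse coordinate."""
    phase, amp = rng.uniform(0, 2 * np.pi), rng.uniform(0.2, 0.5)
    def u(x):
        v = np.zeros_like(x)
        v[:, 0] = 1.0 + amp * np.sin(x[:, 1] + phase)
        return v
    return u

mean_spread = mean_plume_spreading(toy_velocity)
```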

76S05 ; 76M28 ; 65C05

The mathematical framework of variational inequalities is a powerful tool to model problems arising in mechanics such as elasto-plasticity, where the physical laws change when some state variables reach a certain threshold [1]. Somehow, it is not surprising that the models used in the literature for the hysteresis effect of non-linear elasto-plastic oscillators submitted to random vibrations [2] are equivalent to (finite dimensional) stochastic variational inequalities (SVIs) [3]. This presentation concerns (a) cycle properties of an SVI modeling an elasto-perfectly-plastic oscillator excited by a white noise, together with an application to the risk of failure [4,5]; (b) a set of backward Kolmogorov equations for computing means, moments and correlations [6]; (c) free boundary value problems and HJB equations for the control of SVIs; for engineering applications, this is related to the problem of critical excitation [7], which is what we are working on during the CEMRACS research project; (d) (if time permits) on-going research on the modeling of a moving plate on turbulent convection [8]. This is a mixture of joint works and/or discussions with, amongst others, A. Bensoussan, L. Borsoi, C. Feau, M. Huang, M. Laurière, G. Stadler, J. Wylie, J. Zhang and J.Q. Zhong.

74H50 ; 35R60 ; 60H10 ; 60H30 ; 74C05

Consider a problem of Markovian trajectories of particles for which you are trying to estimate the probability of an event.
Under the assumption that you can represent this event as the last event of a nested sequence of events, it is possible to design a splitting algorithm that estimates the probability of the last event in an efficient way. Moreover, you can obtain a sequence of trajectories which realize this particular event, giving access to a statistical representation of quantities conditional on the realization of the event.
In this talk I will present the "Adaptive Multilevel Splitting" algorithm and its application to various toy models. I will explain why it creates an unbiased estimator of a probability, and I will give results obtained from numerical simulations.
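To make the algorithm concrete, here is a compact sketch of Adaptive Multilevel Splitting for a toy Markov chain, where the event is "the trajectory exceeds a given level before a fixed horizon" and the score of a trajectory is its running maximum; the chain and all parameter values are illustrative assumptions, not the models of the talk.

```python
import numpy as np

def simulate_from(parent, start, horizon, a, sigma, rng):
    """Copy `parent` up to index `start`, then continue the chain X_{k+1} = a X_k + sigma N(0,1)."""
    path = np.empty(horizon + 1)
    path[:start + 1] = parent[:start + 1]
    for k in range(start, horizon):
        path[k + 1] = a * path[k] + sigma * rng.standard_normal()
    return path

def ams_probability(level, n_rep=200, horizon=200, a=0.95, sigma=0.3, seed=None):
    """Adaptive Multilevel Splitting estimate of P(max_{0<=k<=horizon} X_k >= level), X_0 = 0."""
    rng = np.random.default_rng(seed)
    zeros = np.zeros(horizon + 1)
    paths = np.array([simulate_from(zeros, 0, horizon, a, sigma, rng) for _ in range(n_rep)])
    scores = paths.max(axis=1)                       # score = running maximum of the trajectory
    estimate = 1.0
    while scores.min() < level:
        z = scores.min()
        killed = np.flatnonzero(scores <= z)         # discard the worst trajectories
        survivors = np.flatnonzero(scores > z)
        if survivors.size == 0:                      # extinction: the event was never approached
            return 0.0
        estimate *= 1.0 - killed.size / n_rep        # multiplicative update of the estimator
        for i in killed:
            j = rng.choice(survivors)                # branch from a surviving trajectory
            branch = int(np.argmax(paths[j] > z))    # first time the parent exceeds level z
            paths[i] = simulate_from(paths[j], branch, horizon, a, sigma, rng)
            scores[i] = paths[i].max()
    return estimate

print(ams_probability(level=2.0))
```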

60J22 ; 65C35 ; 65C05 ; 65C40

The valuation of American options (a widespread type of financial contract) requires the numerical solution of an optimal stopping problem. Numerical methods for such problems have been widely investigated. Monte-Carlo methods are based on the implementation of dynamic programming principles coupled with regression techniques. In lower dimension, one can choose to tackle the related free boundary PDE with deterministic schemes.
Pricing American options will therefore inevitably be heavier than pricing European options, which only requires the computation of a (linear) expectation. The calibration (fitting) of a stochastic model to market quotes for American options is therefore an a priori demanding task. Yet, often this cannot be avoided: on exchange markets one is typically provided only with market quotes for American options on single stocks (as opposed to large stock indexes - e.g. S&P 500 - for which large amounts of liquid European options are typically available).
In this talk, we show how one can derive (approximate, but accurate enough) explicit formulas - therefore replacing other numerical methods, at least in a low-dimensional case - based on asymptotic calculus for diffusions.
More precisely: based on a suitable representation of the PDE free boundary, we derive an approximation of this boundary close to final time that refines the expansions known so far in the literature. Via the early exercise premium formula, this allows us to derive semi-closed expressions for the price of the American put/call. The final product is a calibration recipe for a Dupire local volatility from American option data.
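For reference, in the simplest Black-Scholes setting with constant rate $r$, strike $K$ and no dividends (a special case recalled here only to fix ideas, not a statement of the talk's general framework), the early exercise premium representation of the American put reads
$P^{Am}(t,S) = P^{Eur}(t,S) + \int_t^T r K e^{-r(u-t)}\, \mathbb{P}\big(S_u \le b(u)\,|\,S_t = S\big)\, du$,
where $b(\cdot)$ is the exercise boundary whose behaviour close to final time is the object of the refined expansion mentioned above.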
Based on joint work with Pierre Henry-Labordère.

93E20 ; 91G60

We propose a novel projection-based particle method for solving McKean-Vlasov stochastic differential equations. Our approach is based on a projection-type estimation of the marginal density of the solution at each time step.
The projection-based particle method leads in many situations to a significant reduction of numerical complexity compared to the widely used kernel density estimation algorithms.
We derive strong convergence rates and rates of density estimation. The convergence analysis in the case of linearly growing coefficients turns out to be rather challenging and requires a new type of averaging technique.
This case is exemplified by explicit solutions to a class of McKean-Vlasov equations with affine drift.
The performance of the proposed algorithm is illustrated by several numerical examples.
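The sketch below illustrates the generic idea of a projection-based particle scheme, projecting the particle cloud on a finite orthonormal (here Hermite) basis at every time step to estimate the marginal density; it is not the authors' algorithm, and the basis, truncation level and toy drift are assumptions made for illustration only.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_functions(x, K):
    """Orthonormal Hermite functions h_0, ..., h_{K-1} evaluated at the points x."""
    out = np.empty((len(x), K))
    for k in range(K):
        coef = np.zeros(k + 1); coef[k] = 1.0
        norm = 1.0 / sqrt(2.0**k * factorial(k) * sqrt(pi))
        out[:, k] = norm * hermval(x, coef) * np.exp(-0.5 * x**2)
    return out

def mckean_vlasov_projection(b, sigma, x0, T, n_steps, n_particles, K=8, seed=None):
    """Particle scheme for dX = b(X, p_t(X)) dt + sigma dW, where the marginal density
    p_t is replaced, at every step, by its projection on the first K Hermite functions
    computed from the particle cloud."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        phi = hermite_functions(x, K)
        coeffs = phi.mean(axis=0)                # c_k = (1/N) sum_i h_k(X_i)
        dens = np.maximum(phi @ coeffs, 1e-12)   # projected density evaluated at the particles
        x = x + b(x, dens) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

# Toy example with a density-dependent drift (illustrative only).
samples = mckean_vlasov_projection(b=lambda x, p: -x + 0.5 * p, sigma=1.0, x0=0.0,
                                   T=1.0, n_steps=100, n_particles=5000)
```

The clipping of the estimated density away from zero is a purely numerical safeguard, since a truncated projection may take negative values.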

65C30 ; 65C35

Multi angle  Project evaluation under uncertainty
Zubelli, Jorge P. (Speaker) | CIRM (Publisher)

Industrial strategic decisions have evolved tremendously in the last decades towards a higher degree of quantitative analysis. Such decisions require taking into account a large number of uncertain variables and volatile scenarios, much like financial market investments. Furthermore, they can be evaluated by comparing to portfolios of investments in financial assets such as in stocks, derivatives and commodity futures. This revolution led to the development of a new field of managerial science known as Real Options.
Real Options techniques also incorporate the value of flexibility and give a broader view of many business decisions, bringing in techniques from quantitative finance and risk management. Such techniques are now part of the decision making process of many corporations and require a substantial amount of mathematical background. Yet, there has been substantial debate concerning the use of risk-neutral pricing and hedging arguments in the context of project evaluation. We discuss some alternatives to risk-neutral pricing that could be suitable for the evaluation of projects in a realistic context, with special attention to projects dependent on commodities and non-hedgeable uncertainties. More precisely, we make use of a variant of the hedged Monte-Carlo method of Potters, Bouchaud and Sestovic to tackle strategic decisions. Furthermore, we extend this to different investor risk profiles. This is joint work with Edgardo Brigatti, Felipe Macias, and Max O. de Souza.
If time allows we shall also discuss the situation when the historical data for the project evaluation is very limited and we can make use of certain symmetries of the problem to perform (with good estimates) a nonintrusive stratified resampling of the data. This is joint work with E. Gobet and G. Liu.

91B26 ; 91B06 ; 91B30 ; 91B24

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as clustering. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods more or less rely on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
More formally, if $\mu$ is a probability distribution on the Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots , x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $\big(\mu(C(x_{i}^{*}))\big)_{1\le i\le N}$, where $\big(C(x_{i}^{*})\big)_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*}-\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dotsb, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitted to the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
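As an illustration of the (randomized) Lloyd's procedure mentioned above, here is a short NumPy sketch operating on a large i.i.d. sample of $\mu$ and returning both the codebook and the weights of its Voronoi cells; the initialization rule and the Gaussian example are illustrative choices.

```python
import numpy as np

def lloyd(sample, N, n_iter=50, seed=None):
    """Randomized Lloyd's procedure: fixed-point iteration on the codebook x_1, ..., x_N
    using an i.i.d. sample of the target distribution mu.  Returns the quantizer and the
    empirical weights mu(C(x_i)) of its Voronoi cells."""
    rng = np.random.default_rng(seed)
    # initialize the codebook with N distinct points drawn from the sample
    codebook = sample[rng.choice(len(sample), size=N, replace=False)].copy()
    for _ in range(n_iter):
        # nearest-neighbour (Voronoi) assignment of every sample point
        d2 = ((sample[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        cell = d2.argmin(axis=1)
        # Lloyd fixed point: move each code point to the centroid of its cell
        for i in range(N):
            members = sample[cell == i]
            if len(members) > 0:
                codebook[i] = members.mean(axis=0)
    weights = np.bincount(cell, minlength=N) / len(sample)
    return codebook, weights

# Example: quantize a two-dimensional Gaussian at level N = 20 from 10^5 samples.
rng = np.random.default_rng(0)
quantizer, weights = lloyd(rng.standard_normal((100_000, 2)), N=20)
```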

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

We propose a mean field kinetic model for systems of rational agents interacting in a game theoretical framework. This model is inspired by non-cooperative anonymous games with a continuum of players and Mean-Field Games. The large time behavior of the system is given by a macroscopic closure with a Nash equilibrium serving as the local thermodynamic equilibrium. Applications of the presented theory to social and economic models will be given.

91B80 ; 35Q82 ; 35Q91

In this work, we consider the discretization of some nonlinear Fokker-Planck-Kolmogorov equations. The scheme we propose preserves the non-negativity of the solution, conserves the mass and, as the discretization parameters tend to zero, has limit measure-valued trajectories which are shown to solve the equation. This convergence result is proved by assuming only that the coefficients are continuous and satisfy a suitable linear growth property with respect to the space variable. In particular, under these assumptions, we obtain a new proof of existence of solutions for such equations.
We apply our results to several examples, including Mean Field Games systems and variations of the Hughes model for pedestrian dynamics.
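As a simple illustration of a scheme enjoying the two structural properties above (not necessarily the scheme of the talk), one can think of a Markov-chain update of the discrete measure,
$m_{k+1}(y) = \sum_{x} m_k(x)\, p_k(x,y; m_k)$ with $p_k(x,y;m) \ge 0$ and $\sum_{y} p_k(x,y;m) = 1$,
where the (possibly nonlinear) transition probabilities $p_k$ discretize the drift and diffusion coefficients: non-negativity and conservation of the total mass then hold by construction.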

35K55 ; 35Q84 ; 60H15 ; 60H30

We describe and analyze the Multi-Index Monte Carlo (MIMC) and the Multi-Index Stochastic Collocation (MISC) methods for computing statistics of the solution of a PDE with random data. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. These mixed differences yield new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence. In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that, in the optimal case, the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally efficient. We show the effectiveness of MIMC and MISC in some computational tests using the mimclib open source library, including PDEs with random coefficients and Stochastic Interacting Particle Systems. Finally, we will briefly discuss the use of Markovian projection for the approximation of prices in the context of American basket options.
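In standard MLMC/MIMC notation (the notation is mine, not the abstract's), the contrast between first-order and mixed differences can be summarized as follows: MLMC writes
$\mathbb{E}[g(u)] \approx \sum_{\ell=0}^{L} \mathbb{E}\big[g(u_\ell) - g(u_{\ell-1})\big]$, with the convention $g(u_{-1}) \equiv 0$,
whereas MIMC applies the tensorized first-order difference over a multi-index $\alpha = (\alpha_1,\dots,\alpha_d)$,
$\Delta[g(u_\alpha)] = \big(\prod_{i=1}^{d} \Delta_i\big) g(u_\alpha)$ with $\Delta_i\, g(u_\alpha) = g(u_\alpha) - g(u_{\alpha - e_i})$,
and estimates $\mathbb{E}[g(u)] \approx \sum_{\alpha \in \mathcal{I}} \mathbb{E}\big[\Delta[g(u_\alpha)]\big]$ over an index set $\mathcal{I}$ selected by the optimization procedures mentioned above (terms with a negative index component are set to zero).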

65C30 ; 65C05 ; 60H15 ; 60H35 ; 35R60 ; 65M70

We will first recall, for a general audience, the use of Monte Carlo and Multilevel Monte Carlo methods in the context of Uncertainty Quantification. Then we will discuss the recently developed Adaptive Multilevel Monte Carlo (MLMC) methods for (i) Itô Stochastic Differential Equations, (ii) Stochastic Reaction Networks modeled by Pure Jump Markov Processes and (iii) Partial Differential Equations with random inputs. In this context, the notion of adaptivity includes several aspects such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time stepping methods and the selection of the total number of levels and the number of samples at different levels. Our Adaptive MLMC estimator uses a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity, with an improved control of the multiplicative constant factor in the asymptotic analysis. In this context, we developed novel techniques for the estimation of parameters needed in our MLMC algorithms, such as the variance of the difference between consecutive approximations. These techniques take particular care of the deepest levels, where for efficiency reasons only few realizations are available to produce essential estimates. Moreover, we show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate that allows prescribing both the required accuracy and confidence level in the final result. We present several examples to illustrate the above results and the corresponding computational savings.

65C30 ; 65C05 ; 60H15 ; 60H35 ; 35R60

Multi angle  Model-free control and deep learning
Bellemare, Marc (Speaker) | CIRM (Publisher)

In this talk I will present some recent developments in model-free reinforcement learning applied to large state spaces, with an emphasis on deep learning and its role in estimating action-value functions. The talk will cover a variety of model-free algorithms, including variations on Q-Learning, and some of the main techniques that make the approach practical. I will illustrate the usefulness of these methods with examples drawn from the Arcade Learning Environment, the popular set of Atari 2600 benchmark domains.
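For readers unfamiliar with action-value methods, the tabular Q-Learning update underlying these variations is
$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\big(r_{t+1} + \gamma \max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)\big)$,
where $\alpha$ is a step size and $\gamma$ a discount factor; the deep variants discussed in the talk replace the table by a parametric network $Q_\theta$ trained on the squared temporal-difference error (the notation here is the standard one, not taken from the talk).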

68Q32 ; 91A25 ; 68T05

Uncertainty quantification (UQ) in the context of engineering applications aims at quantifying the effects of uncertainty in the input parameters of complex models on their output responses. Due to the increased availability of computational power and advanced modelling techniques, current simulation tools can provide unprecedented insight into the behaviour of complex systems. However, the associated computational costs have also increased significantly, often hindering the applicability of standard UQ techniques based on Monte-Carlo sampling. To overcome this limitation, metamodels (also referred to as surrogate models) have become a staple tool in the Engineering UQ community. This lecture will introduce a general framework for dealing with uncertainty in the presence of expensive computational models, in particular for reliability analysis (also known as rare event estimation). Reliability analysis focuses on the tail behaviour of a stochastic model response, so as to compute the probability of exceedance of a given performance measure that would result in a critical failure of the system under study. Classical approximation-based techniques, as well as their modern metamodel-based counterparts, will be introduced.
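In standard reliability-analysis notation (an assumption on my part, not spelled out in the abstract), the quantity of interest is the probability of failure
$P_f = \mathbb{P}\big(g(\mathbf{X}) \le 0\big) = \int_{\{g(\mathbf{x}) \le 0\}} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x} \approx \frac{1}{n}\sum_{k=1}^{n} \mathbf{1}_{\{g(\mathbf{x}^{(k)}) \le 0\}}$,
where $g$ is the performance (limit-state) function and $f_{\mathbf{X}}$ the density of the input parameters; metamodel-based methods replace the expensive $g$ inside the indicator by a cheap surrogate.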

62P30 ; 65C05 ; 90B25 ; 62N05

The theory of mean field type control (or control of McKean-Vlasov type) aims at describing the behaviour of a large number of agents using a common feedback control and interacting through some mean field term. The solution to this type of control problem can be seen as a collaborative optimum. We will present the system of partial differential equations (PDEs) arising in this setting: a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation. They describe respectively the evolution of the distribution of the agents' states and the evolution of the value function. Since it comes from a control problem, this PDE system differs in general from the one arising in mean field games.
Recently, this kind of model has been applied to crowd dynamics. More precisely, in this talk we will be interested in modeling congestion effects: the agents move but try to avoid very crowded regions. One way to take into account such effects is to let the cost of displacement increase in the regions where the density of agents is large. The cost may depend on the density in a non-local or in a local way. We will present one class of models for each case and study the associated PDE systems. The first one has classical solutions whereas the second one has weak solutions. Numerical results based on the Newton algorithm and the Augmented Lagrangian method will be presented.
This is joint work with Yves Achdou.

35K40 ; 35K55 ; 35K65 ; 35D30 ; 49N70 ; 49K20 ; 65K10 ; 65M06

We introduce a new strategy for the solution of Mean Field Games in the presence of major and minor players. This approach is based on a formulation of the fixed point step in spaces of controls. We use it to highlight the differences between open and closed loop problems. We illustrate the implementation of this approach for linear quadratic and finite state space games, and we provide numerical results motivated by applications in biology and cyber-security.

93E20 ; 60H10 ; 60K35 ; 49K45
