
Search by event 1556: 31 results


Mean field games with major and minor players - Carmona, René (Author of the Conference) | CIRM H

Multi angle

We introduce a new strategy for the solution of Mean Field Games in the presence of major and minor players. This approach is based on a formulation of the fixed point step in spaces of controls. We use it to highlight the differences between open and closed loop problems. We illustrate the implementation of this approach for linear quadratic and finite state space games, and we provide numerical results motivated by applications in biology and cyber-security.

93E20 ; 60H10 ; 60K35 ; 49K45

We consider competitive capacity investment for a duopoly of two distinct producers. The producers are exposed to stochastically fluctuating costs and interact through aggregate supply. Capacity expansion is irreversible and modeled in terms of timing strategies characterized through threshold rules. Because the impact of changing costs on the producers is asymmetric, we are led to a nonzero-sum timing game describing the transitions among the discrete investment stages. Working in a continuous-time diffusion framework, we characterize and analyze the resulting Nash equilibrium and game values. Our analysis quantifies the dynamic competition effects and yields insight into dynamic preemption and over-investment in a general asymmetric setting. A case-study considering the impact of fluctuating emission costs on power producers investing in nuclear and coal-fired plants is also presented.

93E20 ; 91B38 ; 91A80


Cubature methods and applications - Crisan, Dan (Author of the Conference) | CIRM H

Multi angle

The talk will have two parts: In the first part, I will go over some of the basic features of cubature methods for approximating solutions of classical SDEs and how they can be adapted to solve Backward SDEs. In the second part, I will introduce some recent results on the use of cubature methods for approximating solutions of McKean-Vlasov SDEs.

65C30 ; 60H10 ; 34F05 ; 60H35 ; 91G60


On the interplay between kinetic theory and game theory - Degond, Pierre (Author of the Conference) | CIRM H

Multi angle

We propose a mean field kinetic model for systems of rational agents interacting in a game theoretical framework. This model is inspired from non-cooperative anonymous games with a continuum of players and Mean-Field Games. The large time behavior of the system is given by a macroscopic closure with a Nash equilibrium serving as the local thermodynamic equilibrium. Applications of the presented theory to social and economic models will be given.

91B80 ; 35Q82 ; 35Q91

We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals, when the target probability measure is the $n$-fold product of a one-dimensional law. It is well known that, as $n$ tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as a function of the dimension $n$, a diffusive limit is obtained for each component of the Markov chain. We generalize this result when the initial distribution is not the target probability measure. The obtained diffusive limit is the solution to a stochastic differential equation which is nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation. We discuss practical counterparts in order to optimize the variance of the proposal distribution and accelerate convergence to equilibrium. Our analysis confirms the interest of the constant acceptance rate strategy (with acceptance rate between 1/4 and 1/3).
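A minimal sketch of the Random Walk Metropolis algorithm on a Gaussian product target, assuming the classical $\ell^2/n$ scaling of the proposal variance; the function names, the target and the value of $\ell$ are placeholders for illustration, not code from the talk.

```python
import numpy as np

def random_walk_metropolis(log_density, x0, n_steps, step_std, rng):
    """Random Walk Metropolis with isotropic Gaussian proposals."""
    x = np.array(x0, dtype=float)
    n_accept = 0
    for _ in range(n_steps):
        proposal = x + step_std * rng.standard_normal(x.shape)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
            x = proposal
            n_accept += 1
    return x, n_accept / n_steps

# Target: n-fold product of a standard Gaussian one-dimensional law.
n = 100
log_pi = lambda x: -0.5 * np.dot(x, x)
rng = np.random.default_rng(0)

# Dimension scaling of the proposal variance: step_std^2 = l^2 / n.
l = 2.2
_, acc_rate = random_walk_metropolis(log_pi, np.zeros(n), 20_000, l / np.sqrt(n), rng)
print(f"empirical acceptance rate: {acc_rate:.3f}")  # empirically close to the 1/4-1/3 range discussed above
```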

60J22 ; 60J10 ; 60G50 ; 60F17 ; 60J60 ; 60G09 ; 65C40 ; 65C05

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as “clustering”. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, Backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely more or less on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or more generally a discrete distribution).
More formally, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \, \mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*}-\xi| \le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$ where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dotsb, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
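As an illustration of the batch (randomized) Lloyd procedure mentioned above, here is a minimal Python sketch applied to the empirical measure of a two-dimensional sample; the initialization, the fixed iteration count and the variable names are simplifying assumptions, not the implementation discussed in the talk.

```python
import numpy as np

def lloyd(points, n_codes, n_iter=50, seed=0):
    """Batch Lloyd iteration: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with N distinct points drawn from the dataset.
    codebook = points[rng.choice(len(points), size=n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # Assignment step: Voronoi cell index of each point.
        d2 = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        cells = d2.argmin(axis=1)
        # Update step: each code moves to the mean of its cell (the fixed-point step).
        for i in range(n_codes):
            members = points[cells == i]
            if len(members) > 0:
                codebook[i] = members.mean(axis=0)
    # Weights of the quantizer: empirical mass of each Voronoi cell.
    weights = np.bincount(cells, minlength=n_codes) / len(points)
    return codebook, weights

# Empirical measure of a two-dimensional Gaussian sample, quantized at level N = 10.
sample = np.random.default_rng(1).standard_normal((2000, 2))
codebook, weights = lloyd(sample, n_codes=10)
print(codebook.shape, weights.sum())  # (10, 2) 1.0
```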

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

In this lecture, we shall discuss the key steps involved in the use of least squares regression for approximating the solution to BSDEs. This includes how to obtain explicit error estimates, and how these error estimates can be used to tune the parameters of the numerical scheme based on complexity considerations.
The algorithms are based on a two-stage approximation process. Firstly, a suitable discrete time process is chosen to approximate the continuous time solution of the BSDE. The nodes of the discrete time process can be expressed as conditional expectations. As we shall demonstrate, the choice of discrete time process is very important, as its properties will impact the performance of the overall numerical scheme. In the second stage, the conditional expectation is approximated in functional form using least squares regression on synthetically generated data, namely Monte Carlo simulations drawn from a suitable probability distribution. A key feature of the regression step is that the explanatory variables are built on a user-chosen finite dimensional linear space of functions, which the user specifies by setting basis functions. The choice of basis functions is made on the hypothesis that it contains the solution, so regularity and boundedness assumptions are used in its construction. The impact of the choice of the basis functions is exposed in the error estimates.
In addition to the choice of discrete time approximation and the basis functions, the Markovian structure of the problem gives significant additional freedom with regards to the Monte Carlo simulations. We demonstrate how to use this additional freedom to develop generic stratified sampling approaches that are independent of the underlying transition density function. Moreover, we demonstrate how to leverage the stratification method to develop an HPC algorithm for implementation on GPUs.
Thanks to the Feynman-Kac relation between the solution of a BSDE and its associated semilinear PDE, the approximation of the BSDE can be directly used to approximate the solution of the PDE. Moreover, the smoothness properties of the PDE play a crucial role in the selection of the hypothesis space of regression functions, so this relationship is vitally important for the numerical scheme.
We conclude with some drawbacks of the regression approach, notably the curse of dimensionality.
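A minimal sketch of the regression step alone, assuming a one-dimensional state, a monomial basis and a synthetic payoff: it approximates a conditional expectation $E[Y \mid X_t]$ by least squares on simulated pairs, which is the building block described above, not the full BSDE scheme of the lecture.

```python
import numpy as np

def regress_conditional_expectation(x, y, degree=3):
    """Least squares regression of y on a polynomial basis in x: approximates E[Y | X = x]."""
    # Design matrix built from user-chosen basis functions (here: monomials 1, x, ..., x^degree).
    basis = np.vander(x, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return lambda x_new: np.vander(x_new, degree + 1, increasing=True) @ coeffs

# Synthetic illustration: one backward step where Y = g(X_{t+dt}) must be projected on X_t.
rng = np.random.default_rng(0)
x_t = rng.standard_normal(100_000)                 # simulated X at time t
x_next = x_t + 0.1 * rng.standard_normal(100_000)  # Euler step of a toy forward SDE
y = np.maximum(x_next, 0.0)                        # terminal-type payoff g(X_{t+dt})
cond_exp = regress_conditional_expectation(x_t, y)
print(cond_exp(np.array([0.0, 1.0])))              # regression estimate of E[Y | X_t = x]
```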

65C05 ; 65C30 ; 93E24 ; 60H35 ; 60H10

In this talk we first quickly present a classical and simple model used to describe flow in porous media (based on Darcy's law). The high heterogeneity of the media and the lack of data are taken into account by the use of random permeability fields. We then present some mathematical particularities of the random fields frequently used for such applications and the corresponding theoretical and numerical issues.
After giving a short overview of various applications of this basic model, we study in more detail the problem of the contamination of an aquifer by migration of pollutants. We present a numerical method to compute the mean spreading of a diffusive set of particles representing a tracer plume in an advecting flow field. We deal with the uncertainty thanks to a Monte Carlo method and use a stochastic particle method to approximate the solution of the transport-diffusion equation. Error estimates will be established and numerical results (obtained by A. Beaudoin et al. using the PARADIS software) will be presented. In particular, the influence of the molecular diffusion and the heterogeneity on the asymptotic longitudinal macrodispersion will be investigated thanks to numerical experiments. Studying qualitatively and quantitatively the influence of molecular diffusion, correlation length and standard deviation is an important question in hydrogeology.
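A minimal sketch of a stochastic particle approximation of the transport-diffusion equation and of the longitudinal spread of the resulting plume, assuming a toy velocity field and placeholder parameters; it is not the PARADIS implementation nor the flow fields considered in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle approximation of the transport-diffusion equation:
# each particle follows dX = u(X) dt + sqrt(2 D) dW.
n_particles, n_steps, dt, D = 10_000, 200, 0.05, 1e-2
u = lambda x: np.column_stack([1.0 + 0.1 * np.sin(x[:, 1]), np.zeros(len(x))])  # toy velocity field

x = np.zeros((n_particles, 2))  # initial tracer plume concentrated at the origin
for _ in range(n_steps):
    x += u(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)

# Longitudinal spread of the plume: variance of the particle cloud along the mean flow direction.
longitudinal_spread = x[:, 0].var()
print(f"longitudinal spread after {n_steps * dt:.1f} time units: {longitudinal_spread:.3f}")
```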

76S05 ; 76M28 ; 65C05


Global sensitivity analysis in stochastic systems - Le Maître, Olivier (Author of the Conference) | CIRM H

Multi angle

Stochastic models are used in many scientific fields, including mechanics, physics, the life sciences, queueing and social-network studies, and chemistry. Stochastic modeling is necessary when deterministic models cannot capture features of the dynamics, for instance to represent the effects of unresolved small-scale fluctuations, or when systems are subject to significant inherent noise. Often, stochastic models are not completely known and involve calibrated parameters that should be considered uncertain. In this case, it is critical to assess the impact of the uncertain model parameters on the stochastic model predictions. This is usually achieved by performing a sensitivity analysis (SA), which characterizes changes in a model output when the uncertain parameters are varied. In the case of a stochastic model, one classically applies the SA to statistical moments of the prediction, estimating, for instance, the derivatives of the output mean and variance with respect to the uncertain parameters. In this presentation, we introduce new approaches to SA in a stochastic system based on variance decomposition methods (ANOVA, Sobol). Compared to previous methods, our SA methods are global, with respect to both the parameters and the stochasticity, and decompose the variance into stochastic, parametric and mixed contributions.
We consider first the case of uncertain Stochastic Differential Equations (SDEs), that is, systems with external noisy forcing and uncertain parameters. A polynomial chaos (PC) analysis with stochastic expansion coefficients is proposed to approximate the SDE solution. We first use a Galerkin formalism to determine the expansion coefficients, leading to a hierarchy of SDEs. Under the mild assumption that the noise and the uncertain parameters are independent, the Galerkin formalism naturally separates the parametric uncertainty from the stochastic forcing dependencies, enabling an orthogonal decomposition of the variance and, consequently, the identification of contributions arising from the uncertainty in parameters, from the stochastic forcing, and from a coupled term. Non-intrusive approaches are subsequently considered for application to more complex systems hardly amenable to Galerkin projection. We also discuss parallel implementations and the application to derived quantities of interest, in particular a novel sampling strategy for non-smooth quantities of interest but smooth SDE solutions. Numerical examples are provided to illustrate the output of the SA and the computational complexity of the method.
Second, we consider the case of stochastic simulators governed by a set of reaction channels with stochastic dynamics. Reformulating the system dynamics in terms of independent standardized Poisson processes permits the identification of individual realizations of each reaction channel's dynamics and a quantitative characterization of the inherent stochasticity sources. By judiciously exploiting the inherent stochasticity of the system, we can then compute the global sensitivities associated with individual reaction channels, as well as the importance of channel interactions. This approach is subsequently extended to account for the effects of uncertain parameters, and we propose dedicated algorithms to perform the Sobol decomposition of the variance into contributions from an arbitrary subset of uncertain parameters and stochastic reaction channels. The algorithms are illustrated on simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. The sensitivity analysis output is also contrasted with a local derivative-based sensitivity analysis method.
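A minimal sketch of the underlying variance-splitting idea, assuming a toy Ornstein-Uhlenbeck SDE with a single uncertain drift parameter and a nested Monte Carlo estimator based on the law of total variance; the PC-Galerkin and reaction-channel algorithms of the talk are not reproduced here, and all names and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(theta, n_paths, n_steps=200, dt=0.01, sigma=0.5):
    """Euler-Maruyama paths of dX = -theta * X dt + sigma dW, started at X0 = 1; returns X_T."""
    x = np.ones(n_paths)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# Nested Monte Carlo: outer loop over the uncertain parameter, inner loop over the driving noise.
n_outer, n_inner = 200, 500
thetas = rng.uniform(0.5, 2.0, size=n_outer)                    # uncertain drift parameter
outputs = np.array([simulate_ou(t, n_inner) for t in thetas])   # shape (n_outer, n_inner)

# Law of total variance: Var(Y) = Var_theta(E[Y | theta]) + E_theta[Var(Y | theta)].
total_var = outputs.var()
parametric_share = outputs.mean(axis=1).var() / total_var   # "parametric" contribution
stochastic_share = outputs.var(axis=1).mean() / total_var   # "stochastic forcing" contribution
print(f"parametric: {parametric_share:.2f}, stochastic: {stochastic_share:.2f}")
```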

60H35 ; 65C30 ; 65D15
