
CEMRACS: 149 results

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as "clustering". In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered renewed interest in recent years in various application fields such as automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely, in countless variants, on two procedures: Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd procedure (also known as the k-means algorithm, or nuées dynamiques), which is a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
More formally, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\operatorname{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \, \mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*}) \subset \lbrace \xi \in \mathbb{R}^d : |x_{i}^{*} - \xi| \le \min_{1\le j\le N} |x_{j}^{*} - \xi| \rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu = \frac{1}{n}\sum_{k=1}^{n} \delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd procedures rely on massive sampling of the distribution $\mu$.
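As an illustration of these two procedures, here is a minimal, hedged sketch in Python/numpy for the quadratic distortion; the function names, step-size schedule and initializations are ours, chosen for readability rather than taken from any reference implementation.

```python
import numpy as np

def clvq(sample_fn, N, n_steps, rng):
    """Competitive Learning Vector Quantization: a stochastic gradient
    descent on the distortion potential, fed by samples of mu."""
    x = np.array([sample_fn(rng) for _ in range(N)])       # initial codebook
    for t in range(1, n_steps + 1):
        xi = sample_fn(rng)
        i = int(np.argmin(np.sum((x - xi) ** 2, axis=1)))  # winning prototype
        gamma = 1.0 / (t + 100.0)                          # decreasing step size
        x[i] += gamma * (xi - x[i])                        # move winner toward sample
    return x

def lloyd(samples, N, n_iter, rng):
    """Randomized Lloyd (k-means) procedure: a fixed-point search that
    alternates Voronoi assignment and centroid update."""
    x = samples[rng.choice(len(samples), size=N, replace=False)].copy()
    for _ in range(n_iter):
        idx = np.argmin(((samples[:, None, :] - x[None, :, :]) ** 2).sum(-1), axis=1)
        for i in range(N):
            cell = samples[idx == i]                       # Voronoi cell C(x_i)
            if len(cell) > 0:
                x[i] = cell.mean(axis=0)                   # centroid update
    return x

rng = np.random.default_rng(0)
gauss = lambda r: r.normal(size=2)                         # mu: standard 2d Gaussian
codebook = clvq(gauss, N=20, n_steps=50_000, rng=rng)
data = rng.normal(size=(10_000, 2))
centers = lloyd(data, N=20, n_iter=30, rng=rng)
```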
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dotsb, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structural process.
We will (briefly) explore this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit as $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that perturbations of a single agent's strategy do not influence the statistical distribution of the states, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short): a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
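For concreteness, one common form of this forward-backward system (with viscosity $\nu > 0$, Hamiltonian $H$, coupling cost $f$ and terminal cost $g$; the precise structural assumptions vary across the literature) is

$$
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m(t)), \\
\ \ \partial_t m - \nu \Delta m - \operatorname{div}\!\big(m \, \partial_p H(x, \nabla u)\big) = 0, \\
\ \ u(T, x) = g(x, m(T)), \qquad m(0, x) = m_0(x),
\end{cases}
$$

where $u$ is the value function of a representative agent and $m$ is the distribution of the agents' states.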
This system of PDEs has closed-form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate such systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
These schemes have several important features:

- The discrete problem has the same structure as the continuous one, so existence, energy estimates, and possibly uniqueness can be obtained with the same kind of arguments

- Monotonicity guarantees the stability of the scheme: it is robust in the deterministic limit (a minimal sketch of a monotone discretization is given after this list)

- Convergence to classical or weak solutions can be proved
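
As a hedged illustration of what monotonicity means at the discrete level, the following sketch shows a Godunov-type upwind approximation of the model Hamiltonian $H(p) = \frac{1}{2}|p|^2$ in one dimension with periodic boundary conditions; the schemes of the course handle the full coupled system and more general Hamiltonians.

```python
import numpy as np

def upwind_hamiltonian(u, h):
    """Monotone (Godunov/upwind) approximation of H(u_x) = 0.5 * u_x**2
    on a uniform periodic grid of step h. Combining the positive part of
    the backward difference with the negative part of the forward
    difference yields a scheme that is monotone in the neighboring values."""
    dp = (np.roll(u, -1) - u) / h          # forward difference D+ u
    dm = (u - np.roll(u, 1)) / h           # backward difference D- u
    return 0.5 * (np.maximum(dm, 0.0) ** 2 + np.minimum(dp, 0.0) ** 2)

# toy usage: evaluate the discrete Hamiltonian on a sampled profile
x = np.linspace(0.0, 1.0, 100, endpoint=False)
print(upwind_hamiltonian(np.sin(2 * np.pi * x), h=1.0 / 100).max())
```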

Finally, there are particular cases, named variational MFGs, in which the system of PDEs can be seen as the optimality conditions of some optimal control problem driven by a PDE. In such cases, augmented Lagrangian methods can be used for solving the discrete nonlinear system; a generic sketch is given after the outline below. The mini-course will be organized as follows:

1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.

2. Monotone finite difference schemes

3. Examples of applications

4. Variational MFGs and related algorithms for solving the discrete system of nonlinear equations
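
Regarding point 4, the following is a generic, illustrative augmented Lagrangian loop for a problem of the form minimize $F(u)$ subject to $Au = b$; the actual discrete variational MFG functional and the inner solvers used in the course are more involved.

```python
import numpy as np

def augmented_lagrangian(grad_F, A, b, u0, rho=1.0, n_outer=50, n_inner=200, lr=1e-2):
    """Minimize F(u) subject to A u = b using the augmented Lagrangian
    L(u, lam) = F(u) + lam . (A u - b) + (rho / 2) |A u - b|^2.
    Inner loop: gradient descent on u; outer loop: multiplier ascent."""
    u, lam = u0.astype(float).copy(), np.zeros(len(b))
    for _ in range(n_outer):
        for _ in range(n_inner):               # approximate inner minimization
            grad = grad_F(u) + A.T @ (lam + rho * (A @ u - b))
            u = u - lr * grad
        lam = lam + rho * (A @ u - b)          # update the multiplier
    return u, lam

# toy usage: minimize 0.5 * |u|^2 subject to sum(u) = 1 (solution: u_i = 1/4)
A, b = np.ones((1, 4)), np.array([1.0])
u, lam = augmented_lagrangian(lambda u: u, A, b, u0=np.zeros(4))
print(u)   # close to [0.25, 0.25, 0.25, 0.25]
```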

49K20 ; 49N70 ; 35F21 ; 35K40 ; 35K55 ; 35Q84 ; 65K10 ; 65M06 ; 65M12 ; 91A23 ; 91A15

Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit as $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that perturbations of a single agent's strategy do not influence the statistical distribution of the states, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short): a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
The latter system of PDEs has closed-form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate such systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
These schemes have several important features:

- The discrete problem has the same structure as the continuous one, so existence, energy estimates, and possibly uniqueness can be obtained with the same kind of arguments

- Monotonicity guarantees the stability of the scheme: it is robust in the deterministic limit

- Convergence to classical or weak solutions can be proved

Finally, there are particular cases, named variational MFGs, in which the system of PDEs can be seen as the optimality conditions of some optimal control problem driven by a PDE. In such cases, augmented Lagrangian methods can be used for solving the discrete nonlinear system. The mini-course will be organized as follows:

1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.

2. Monotone finite difference schemes

3. Examples of applications

4. Variational MFGs and related algorithms for solving the discrete system of nonlinear equations

49K20 ; 49N70 ; 35F21 ; 35K40 ; 35K55 ; 35Q84 ; 65K10 ; 65M06 ; 65M12 ; 91A23 ; 91A15

Recently, an important research activity on mean field games (MFGs for short) has been initiated by the pioneering works of Lasry and Lions: it aims at studying the asymptotic behavior of stochastic differential games (Nash equilibria) as the number $n$ of agents tends to infinity. The field is now rapidly growing in several directions, including stochastic optimal control, analysis of PDEs, calculus of variations, numerical analysis and computing, and the potential applications to economics and social sciences are numerous.
In the limit as $n \to +\infty$, a given agent feels the presence of the others through the statistical distribution of the states. Assuming that perturbations of a single agent's strategy do not influence the statistical distribution of the states, the latter acts as a parameter in the control problem to be solved by each agent. When the dynamics of the agents are independent stochastic processes, MFGs naturally lead to a coupled system of two partial differential equations (PDEs for short): a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation.
The latter system of PDEs has closed-form solutions in very few cases only. Therefore, numerical simulations are crucial in order to address applications. The present mini-course will be devoted to numerical methods that can be used to approximate such systems of PDEs.
The numerical schemes that will be presented rely basically on monotone approximations of the Hamiltonian and on a suitable weak formulation of the Fokker-Planck equation.
These schemes have several important features:

- The discrete problem has the same structure as the continuous one, so existence, energy estimates, and possibly uniqueness can be obtained with the same kind of arguments

- Monotonicity guarantees the stability of the scheme: it is robust in the deterministic limit

- Convergence to classical or weak solutions can be proved

Finally, there are particular cases, named variational MFGs, in which the system of PDEs can be seen as the optimality conditions of some optimal control problem driven by a PDE. In such cases, augmented Lagrangian methods can be used for solving the discrete nonlinear system. The mini-course will be organized as follows:

1. Introduction to the system of PDEs and its interpretation. Uniqueness of classical solutions.

2. Monotone finite difference schemes

3. Examples of applications

4. Variational MFGs and related algorithms for solving the discrete system of nonlinear equations

49K20 ; 49N70 ; 35F21 ; 35K40 ; 35K55 ; 35Q84 ; 65K10 ; 65M06 ; 65M12 ; 91A23 ; 91A15

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated when parameters vary in a certain range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometrical PDE properties in addressing classical bottlenecks.
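To make one classical linear reduction technique concrete, here is a minimal sketch of Proper Orthogonal Decomposition (POD) computed by an SVD of a snapshot matrix; the function name and the tolerance-based truncation rule are illustrative choices, not a specific library API.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Columns of `snapshots` are solutions u(mu_k) for sampled parameters.
    Return an orthonormal reduced basis capturing all but a fraction
    tol**2 of the snapshot energy (squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol ** 2)) + 1   # truncation rank
    return U[:, :r]

# toy usage: snapshots of u(x; mu) = sin(mu * x) for parameters mu in [1, 2]
x = np.linspace(0.0, np.pi, 200)
S = np.stack([np.sin(mu * x) for mu in np.linspace(1.0, 2.0, 20)], axis=1)
V = pod_basis(S)
coeffs = V.T @ S                                   # reduced coordinates
print(V.shape[1], np.linalg.norm(S - V @ coeffs))  # small rank, small error
```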

65N21 ; 65D99

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated when parameters vary in a certain range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometrical PDE properties in addressing classical bottlenecks.

65N21 ; 65D99


Learning operators - Lecture 1 - Mishra, Siddhartha (Lecture author) | CIRM H

Multi angle

Operators are mappings between infinite-dimensional spaces, which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional context. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs. A large number of numerical examples will be provided to illustrate them.
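To fix ideas about the first architecture, here is a minimal numpy sketch of a (randomly initialized, untrained) DeepONet forward pass: a branch net encodes the input function sampled at fixed sensor points, a trunk net encodes the query point, and the output is their inner product. All sizes and the initialization are illustrative assumptions, not the lecture's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights for a small fully connected net (illustrative init)."""
    return [(rng.normal(scale=1.0 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh activations on all but the last layer."""
    for k, (W, b) in enumerate(params):
        x = x @ W + b
        if k < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                       # number of sensors, latent dimension
branch = mlp_params([m, 64, p])     # encodes (u(s_1), ..., u(s_m))
trunk = mlp_params([1, 64, p])      # encodes the query point y

def deeponet(u_at_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>: the DeepONet output functional."""
    return mlp(branch, u_at_sensors) @ mlp(trunk, y)

sensors = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * sensors)     # input function sampled at the sensors
print(deeponet(u, np.array([0.3]))) # structure only: the net is untrained
```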

65Mxx ; 65Nxx ; 68Txx


Learning operators - Lecture 3 - Mishra, Siddhartha (Lecture author) | CIRM H

Multi angle

Operators are mappings between infinite-dimensional spaces, which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional context. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs. A large number of numerical examples will be provided to illustrate them.

65Mxx ; 65Nxx ; 68Txx

High-fidelity numerical simulation of physical systems modeled by time-dependent partial differential equations (PDEs) has been at the center of many technological advances in the last century. However, for engineering applications such as design, control, optimization, data assimilation, and uncertainty quantification, which require repeated model evaluation over a potentially large number of parameters or initial conditions, these simulations remain prohibitively expensive, even with state-of-the-art PDE solvers. The necessity of reducing the overall cost for such downstream applications has led to the development of surrogate models, which capture the core behavior of the target system at a fraction of the cost. In this context, recent advances in machine learning provide a new path for developing surrogate models, particularly when the PDEs are not known and the system is advection-dominated. In a nutshell, we seek to find a data-driven latent representation of the state of the system, and then learn the latent-space dynamics. This allows us to compress the information and evolve it in compressed form, thereby accelerating the models. In this series of lectures, I will present recent advances on two fronts: deterministic and probabilistic modeling of latent representations. In particular, I will introduce the notions of hyper-networks, neural networks that output other neural networks, and diffusion models, a framework that allows us to represent probability distributions of trajectories directly. I will provide the foundations of such methodologies, show how they can be adapted to scientific computing, and discuss which physical properties they need to satisfy. Finally, I will provide several examples of applications to scientific computing.
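As a schematic of the "compress, then evolve in latent space" idea (and only a schematic: the encoder, decoder and latent dynamics below are random linear stand-ins for the trained networks discussed in the lectures):

```python
import numpy as np

rng = np.random.default_rng(0)
d_full, d_lat = 256, 8             # full state dimension vs latent dimension

# stand-ins for trained networks: random linear maps, for illustration only
E = rng.normal(scale=1.0 / np.sqrt(d_full), size=(d_lat, d_full))  # encoder
D = E.T                            # decoder (here simply the transpose)
F = 0.99 * np.eye(d_lat)           # latent dynamics (would be learned)

def surrogate_rollout(u0, n_steps):
    """Encode once, take cheap time steps in the latent space,
    and decode whenever the full state is needed."""
    z = E @ u0
    states = []
    for _ in range(n_steps):
        z = F @ z                  # latent time step
        states.append(D @ z)       # decode back to the full space
    return np.array(states)

u0 = rng.normal(size=d_full)
print(surrogate_rollout(u0, n_steps=10).shape)   # (10, 256)
```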

37N30 ; 65C20 ; 65L20
