Motivated by the task of sampling measures in high dimensions, we will discuss a number of gradient flows in spaces of measures, including the Wasserstein gradient flow of the Maximum Mean Discrepancy, the Hellinger gradient flow of the relative entropy, Stein Variational Gradient Descent, and a new projected dynamic gradient flow. For all of these flows we will consider their deterministic interacting-particle approximations. The talk will highlight some of the properties of the flows and indicate their differences. In particular, we will discuss how well the interacting particles can approximate the target measures. The talk is based on joint works with Anna Korba, Lantian Xu, Sangmin Park, Yulong Lu, and Lihan Wang.
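As a minimal numerical sketch of one of the flows listed above, the following implements a Stein Variational Gradient Descent particle update with an RBF kernel; the standard Gaussian target, the fixed bandwidth, the step size, and the particle initialisation are illustrative assumptions, not material from the talk.

```python
import numpy as np

def grad_log_p(x):
    # Score of a standard Gaussian target, used here as a stand-in for grad log(pi).
    return -x

def svgd_step(particles, step_size=0.1, bandwidth=1.0):
    """One SVGD update: kernel-weighted drift toward high density plus a repulsion term."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]       # diffs[i, j] = x_i - x_j
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2.0 * bandwidth ** 2))  # RBF kernel matrix
    # Repulsion: sum_j grad_{x_j} k(x_j, x_i) = sum_j (x_i - x_j) k(x_i, x_j) / h^2
    repulsion = (diffs * K[:, :, None]).sum(axis=1) / bandwidth ** 2
    drift = K @ grad_log_p(particles)                           # attraction toward the target
    return particles + step_size * (drift + repulsion) / n

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=0.5, size=(200, 2))               # particles started off-target
for _ in range(500):
    x = svgd_step(x)
print(x.mean(axis=0), x.std(axis=0))                            # mean should drift toward (0, 0)
```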
35Q62 ; 35Q70 ; 82C21 ; 62D05 ; 45M05
This talk is devoted to the presentation of algorithms for simulating rare events in a molecular dynamics context, e.g., the simulation of reactive paths. We will consider $\mathbb{R}^d$ as the space of configurations for a given system, where the probability of a specific configuration is given by a Gibbs measure depending on a temperature parameter. The dynamics of the system is given by an overdamped Langevin (or gradient) equation. The problem is to find how the system can evolve from one local minimum of the potential to another, following the above dynamics. After a brief overview of classical Monte Carlo methods, we will present recent results on adaptive multilevel splitting techniques.
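As a rough illustration of the setting above, here is an Euler-Maruyama discretisation of the overdamped Langevin dynamics $dX_t = -\nabla V(X_t)\,dt + \sqrt{2/\beta}\,dW_t$, whose invariant law is the Gibbs measure proportional to $e^{-\beta V}$. The double-well potential and all numerical parameters are illustrative assumptions, and none of the rare-event (splitting) machinery of the talk is reproduced here.

```python
import numpy as np

def grad_V(x):
    # Double-well potential V(x) = (x^2 - 1)^2, with local minima at x = -1 and x = +1.
    return 4.0 * x * (x ** 2 - 1.0)

def langevin_path(x0=-1.0, beta=5.0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama: X_{k+1} = X_k - grad V(X_k) dt + sqrt(2 dt / beta) xi_k."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_scale = np.sqrt(2.0 * dt / beta)
    for k in range(n_steps):
        x[k + 1] = x[k] - grad_V(x[k]) * dt + noise_scale * rng.standard_normal()
    return x

path = langevin_path()
# At low temperature (large beta), transitions from the well at -1 to the well at +1
# are rare events; most of the trajectory stays near the starting minimum.
print("fraction of time near +1:", np.mean(path > 0.5))
```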
65C05 ; 65C60 ; 65C35 ; 62L12 ; 62D05
This course will give a gentle introduction to SMC (Sequential Monte Carlo) algorithms:
• motivation: state-space (hidden Markov) models, sequential analysis of such models; non-sequential problems that may be tackled using SMC.
• Formalism: Markov kernels, Feynman-Kac distributions.
• Monte Carlo tricks: importance sampling and resampling
• standard particle filters: bootstrap, guided, auxiliary (a minimal bootstrap filter is sketched after this list)
• maximum likelihood estimation of state-space models
• Bayesian estimation of these models: PMCMC, SMC$^2$.
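To make the importance sampling/resampling and bootstrap filter items above concrete, here is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model; the model, the multinomial resampling scheme, and all parameters are illustrative assumptions rather than material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 1).
T, rho = 100, 0.9
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = rho * x_true[t - 1] + rng.standard_normal()
y = x_true + rng.standard_normal(T)

# Bootstrap filter: propagate particles with the state transition, weight by the
# observation likelihood, then resample to fight weight degeneracy.
N = 1000
particles = rng.standard_normal(N)
filter_means = np.zeros(T)
for t in range(T):
    particles = rho * particles + rng.standard_normal(N)    # propagate through the transition
    log_w = -0.5 * (y[t] - particles) ** 2                  # Gaussian observation log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    filter_means[t] = np.sum(w * particles)                 # estimate of E[x_t | y_{1:t}]
    particles = particles[rng.choice(N, size=N, p=w)]       # multinomial resampling

print("RMSE of the filtering mean:", np.sqrt(np.mean((filter_means - x_true) ** 2)))
```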
62F15 ; 62D05 ; 65C05 ; 60J22 ; 62M05 ; 62M20
We consider the convergence of the iterative projected gradient (IPG) algorithm for arbitrary (typically nonconvex) sets and when both the gradient and projection oracles are only computed approximately. We consider different notions of approximation, among which we show that the Progressive Fixed Precision (PFP) and $(1+\epsilon)$-optimal oracles can achieve the same accuracy as the exact IPG algorithm. We also show that the former scheme is able to maintain the (linear) rate of convergence of the exact algorithm, under the same embedding assumption, while the latter requires a stronger embedding condition, moderate compression ratios, and typically exhibits slower convergence. We apply our results to accelerate the solution of a class of data-driven compressed sensing problems, where we replace iterative exhaustive searches over large datasets by fast approximate nearest-neighbour search strategies based on the cover tree data structure. Finally, if there is time, we will give examples of this theory applied in practice to obtain rapid, enhanced solutions for an emerging MRI protocol called magnetic resonance fingerprinting for quantitative MRI.
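For concreteness, here is a minimal sketch of the IPG iteration on a standard compressed sensing instance, using exact hard thresholding as the projection onto the nonconvex set of $s$-sparse vectors; in the setting of the talk this exact projection would be replaced by an approximate oracle (e.g. an approximate nearest-neighbour search over a dataset). The problem sizes, step-size rule, and iteration count are illustrative assumptions.

```python
import numpy as np

def project_sparse(x, s):
    # Exact Euclidean projection onto s-sparse vectors: keep the s largest
    # entries in magnitude and zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def ipg(A, y, s, n_iters=500):
    """Iterative projected gradient for min 0.5*||A x - y||^2 over s-sparse x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative 1/L step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                 # gradient of the least-squares objective
        x = project_sparse(x - step * grad, s)   # (here: exact) projection oracle
    return x

rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

x_hat = ipg(A, y, s)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```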
65C60 ; 62D05 ; 94A12