
Documents: 62F15 (28 results)

Bayesian computational methods - Robert, Christian P. (Author of the conference) | CIRM H

Multi angle

This is a short introduction to the many directions of current research in Bayesian computational statistics, from accelerating MCMC algorithms, to using piecewise deterministic Markov processes like the bouncy particle and Zigzag samplers, to approximating the target or the proposal distributions in such methods. The main illustration focuses on the evaluation of normalising constants and ratios of normalising constants.
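
As a concrete illustration of the ratio-of-normalising-constants problem mentioned in the abstract, the sketch below uses the basic importance-sampling identity E_{p0}[q1(X)/q0(X)] = Z1/Z0 on a pair of toy Gaussian-shaped densities. The densities, sample size and variable names are illustrative assumptions only, not material from the talk, and more robust estimators (e.g. bridge sampling) would usually be preferred in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalised densities whose constants are known, so the estimate can be checked:
#   q0(x) = exp(-x^2 / 8)        has Z0 = sqrt(2*pi*4)  (the shape of a N(0, 2^2) density)
#   q1(x) = exp(-(x - 1)^2 / 2)  has Z1 = sqrt(2*pi)    (the shape of a N(1, 1) density)
def q0(x):
    return np.exp(-x**2 / 8.0)

def q1(x):
    return np.exp(-(x - 1.0)**2 / 2.0)

# Importance-sampling identity: E_{p0}[q1(X) / q0(X)] = Z1 / Z0, since p0 = q0 / Z0
# and we can draw exact samples from p0 = N(0, 2^2).
x = rng.normal(0.0, 2.0, size=200_000)
estimate = np.mean(q1(x) / q0(x))
print(estimate)   # close to the true ratio Z1 / Z0 = 0.5
```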

62C10 ; 65C60 ; 62F15 ; 65C05


Bayesian computation with INLA - Rue, Havard (Author of the conference) | CIRM H

Multi angle

This talk focuses on the estimation of the distribution of unobserved nodes in large random graphs from the observation of very few edges. These graphs naturally model tournaments involving a large number of players (the nodes) where each player's ability to win is unknown. The players are only partially observed through discrete-valued scores (edges) describing the results of contests between players. In this very sparse setting, we present the first nonasymptotic risk bounds for maximum likelihood estimators (MLE) of the unknown distribution of the nodes. The proof relies on the construction of a graphical model encoding conditional dependencies that is extremely efficient for studying the n-regular graphs obtained by round-robin scheduling. This graphical model allows us to prove geometric loss-of-memory properties and deduce the asymptotic behavior of the likelihood function. Following a classical construction in learning theory, the asymptotic likelihood is used to define a measure of performance for the MLE. Risk bounds for the MLE are finally obtained via sub-Gaussian deviation results derived from concentration inequalities for Markov chains applied to our graphical model.

62F15 ; 62C10 ; 65C60 ; 65C40


Model assessment, selection and averaging - Vehtari, Aki (Author of the conference) | CIRM H

Multi angle

The tutorial covers cross-validation and projection predictive approaches for model assessment, selection, and inference after model selection, as well as Bayesian stacking for model averaging. The talk is accompanied by R notebooks using the rstanarm, bayesplot, loo, and projpred packages.
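
The talk's notebooks are in R (rstanarm, loo, projpred); as a language-agnostic sketch of the stacking step only, the snippet below computes Bayesian stacking weights from a matrix of pointwise (e.g. leave-one-out) log predictive densities. The softmax parameterisation, the optimiser and the toy data are assumptions made for this illustration, not the workflow from the tutorial.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(lpd):
    """Bayesian stacking weights from pointwise log predictive densities.

    lpd has shape (n_points, n_models); entry (i, k) is, e.g., the leave-one-out
    log predictive density of model k at data point i. We maximise
    sum_i log( sum_k w_k exp(lpd[i, k]) ) over the simplex, using a softmax
    parameterisation to keep the optimisation unconstrained.
    """
    n_points, n_models = lpd.shape

    def neg_objective(z):
        w = np.exp(z - z.max())
        w /= w.sum()                                  # softmax -> weights on the simplex
        m = lpd.max(axis=1, keepdims=True)            # stabilise the log-sum-exp
        return -np.sum(m.ravel() + np.log(np.exp(lpd - m) @ w))

    res = minimize(neg_objective, np.zeros(n_models), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

# Toy check: model 0 predicts most points better, so it should get most of the weight.
rng = np.random.default_rng(0)
lpd = np.column_stack([rng.normal(-1.0, 0.3, 200), rng.normal(-1.5, 0.3, 200)])
print(stacking_weights(lpd))
```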

62C10 ; 62F15 ; 65C60 ; 62M20

In this talk, we derive a novel non-reversible, continuous-time Markov chain Monte Carlo (MCMC) sampler, called the Coordinate Sampler, based on a piecewise deterministic Markov process (PDMP), which can be seen as a variant of the Zigzag sampler. In addition to providing a theoretical validation of this new sampling algorithm, we show that the Markov chain it induces is geometrically ergodic for distributions whose tails decay at least as fast as an exponential distribution and at most as fast as a Gaussian distribution. Several numerical examples highlight that our Coordinate Sampler is more efficient than the Zigzag sampler in terms of effective sample size.
[This is joint work with Wu Changye, ref. arXiv:1809.03388]
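
To give a feel for the PDMP machinery that both the Zigzag and Coordinate samplers build on, here is a minimal one-dimensional Zigzag sampler for a standard Gaussian target, where the integrated event rate can be inverted in closed form so event times are simulated exactly. This sketches the Zigzag baseline only, not the Coordinate Sampler of the talk, and all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def zigzag_1d_gaussian(horizon=10_000.0, x0=0.0, v0=1.0):
    """One-dimensional Zigzag sampler for a standard Gaussian target, U(x) = x^2 / 2.

    The velocity-flip rate along the trajectory is lambda(s) = max(0, v * (x + v s));
    its integral can be inverted in closed form, so event times are simulated exactly.
    """
    x, v, t = x0, v0, 0.0
    events = [(t, x, v)]                      # (event time, position, velocity used next)
    while t < horizon:
        a = v * x
        e = rng.exponential()                 # Exp(1) threshold for the next event
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += v * tau                          # deterministic linear motion between events
        t += tau
        v = -v                                # flip the velocity at the event
        events.append((t, x, v))
    return events

def discretise(events, n=100_000):
    """Read the continuous trajectory on a regular time grid to get correlated samples."""
    ts, xs, vs = (np.array(col) for col in zip(*events))
    grid = np.linspace(0.0, ts[-1], n)
    idx = np.searchsorted(ts, grid, side="right") - 1
    return xs[idx] + vs[idx] * (grid - ts[idx])

samples = discretise(zigzag_1d_gaussian())
print(samples.mean(), samples.var())          # roughly 0 and 1 for the standard Gaussian
```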

62F15 ; 60J25

Ill-posed linear inverse problems that combine knowledge of the forward measurement model with prior models arise frequently in various applications, from computational photography to medical imaging. Recent research has focused on solving these problems with score-based generative models (SGMs) that produce perceptually plausible images, especially in inpainting problems. In this study, we exploit the particular structure of the prior defined by the SGM to formulate recovery in a Bayesian framework as a Feynman–Kac model adapted from the forward diffusion model used to construct score-based diffusion. To solve this Feynman–Kac problem, we propose the use of Sequential Monte Carlo methods. The proposed algorithm, MCGdiff, is shown to be theoretically grounded, and we provide numerical simulations showing that it outperforms competing baselines when dealing with ill-posed inverse problems.
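
The Sequential Monte Carlo machinery invoked above can be illustrated in a far simpler setting than MCGdiff: a bootstrap particle filter with systematic resampling on a toy linear-Gaussian state-space model. The model, particle count and variable names are assumptions made for this sketch and are unrelated to the paper's diffusion setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space model:  x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, sigma_y = 100, 0.5
x_true = np.zeros(T)
x_true[0] = rng.normal()
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + sigma_y * rng.normal(size=T)

def systematic_resample(w, rng):
    """Systematic resampling: low-variance selection of particle indices."""
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

N = 2000
particles = rng.normal(size=N)                              # initial particles ~ N(0, 1)
filter_means = []
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)    # propagate through the dynamics
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2       # reweight by the likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means.append(np.sum(w * particles))              # filtering mean E[x_t | y_1:t]
    particles = particles[systematic_resample(w, rng)]      # resample against weight degeneracy

print(filter_means[-1], x_true[-1])                         # the two should be close
```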

62F15 ; 65C05 ; 65C60

The flexibility of the Bayesian approach to uncertainty, and its notable practical successes, have made it an increasingly popular tool for uncertainty quantification. The scope of application has widened from the finite sample spaces considered by Bayes and Laplace to very high-dimensional systems, or even infinite-dimensional ones such as PDEs. It is natural to ask about the accuracy of Bayesian procedures from several perspectives: e.g., the frequentist questions of well-specification and consistency, or the numerical analysis questions of stability and well-posedness with respect to perturbations of the prior, the likelihood, or the data. This talk will outline positive and negative results (both classical ones from the literature and new ones due to the authors) on the accuracy of Bayesian inference. There will be a particular emphasis on the consequences for high- and infinite-dimensional complex systems. In particular, for such systems, subtle details of geometry and topology play a critical role in determining the accuracy or instability of Bayesian procedures. Joint work with Houman Owhadi and Clint Scovel (Caltech).

62F15 ; 62G35


Markov Chain Monte Carlo Methods - Part 1 - Robert, Christian P. (Author of the conference) | CIRM H

Post-edited

In this short course, we recall the basics of Markov chain Monte Carlo (Gibbs & Metropolis samplers) along with the most recent developments like Hamiltonian Monte Carlo, Rao-Blackwellisation, divide & conquer strategies, pseudo-marginal and other noisy versions. We also cover the specific approximate method of ABC that is currently used in many fields to handle complex models in manageable conditions, from the original motivation in population genetics to the several reinterpretations of the approach found in the recent literature. Time allowing, we will also comment on programming developments like BUGS, STAN and Anglican that stemmed from those specific algorithms.
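
As a reminder of the most basic building block reviewed in the course, here is a minimal random-walk Metropolis sampler for a toy bimodal target. The target, proposal step size and run length are illustrative choices, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised log-density of a toy bimodal target (mixture of N(-2, 1) and N(2, 1))."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def random_walk_metropolis(n_iter=50_000, step=1.0, x0=0.0):
    x, lp = x0, log_target(x0)
    chain = np.empty(n_iter)
    accepted = 0
    for i in range(n_iter):
        prop = x + step * rng.normal()              # symmetric Gaussian random-walk proposal
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject step
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_iter

chain, rate = random_walk_metropolis()
print(f"acceptance rate {rate:.2f}, sample mean {chain.mean():.2f}")   # mean close to 0
```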

65C05 ; 65C40 ; 60J10 ; 62F15

Approximate Bayesian computation (ABC) techniques, also known as likelihood-free methods, have become a standard tool for the analysis of complex models, primarily in population genetics. The development of new ABC methodologies has accelerated in recent years, as shown by the growing number of publications, conferences and software packages. In this lecture, we introduce some recent advances in ABC techniques, notably for model choice problems.
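
A minimal sketch of the basic ABC rejection algorithm, on a toy Gaussian-mean problem where the exact posterior is known and the sample mean is a sufficient summary statistic; the prior, tolerance and sample sizes are illustrative assumptions, and real applications (e.g. in population genetics) rely on much richer simulators and summaries.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: n draws from N(theta_true, 1); we pretend the likelihood is unavailable.
theta_true, n = 1.5, 100
y_obs = rng.normal(theta_true, 1.0, size=n)
s_obs = y_obs.mean()                                 # summary statistic (sufficient here)

def abc_rejection(n_sim=200_000, eps=0.05):
    theta = rng.normal(0.0, 3.0, size=n_sim)         # draw parameters from the N(0, 3^2) prior
    # Simulate the summary directly: the sample mean of n N(theta, 1) draws is N(theta, 1/n).
    s_sim = rng.normal(theta, 1.0 / np.sqrt(n))
    keep = np.abs(s_sim - s_obs) < eps               # accept when the summaries are close
    return theta[keep]

post = abc_rejection()
print(post.mean(), post.std(), post.size)            # compare with the exact Gaussian posterior
```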

62F15 ; 65C60

Faced with data containing a large number of inter-related explanatory variables, finding ways to investigate complex multi-factorial effects is an important statistical task. This is particularly relevant for epidemiological study designs where large numbers of covariates are typically collected in an attempt to capture complex interactions between host characteristics and risk factors. A related task, which is of great interest in stratified medicine, is to use multi-omics data to discover subgroups of patients with distinct molecular phenotypes and clinical outcomes, thus providing the potential to target treatments more precisely. Flexible clustering is a natural way to tackle such problems. It can be used in an unsupervised or a semi-supervised manner by adding a link between the clustering structure and outcomes and performing joint modelling; in this case, the clustering structure is used to help predict the outcome. This latter approach, known as profile regression, has been implemented recently using a Bayesian nonparametric Dirichlet process (DP) modelling framework, which specifies a joint clustering model for covariates and outcome, with an additional variable selection step to uncover the variables driving the clustering (Papathomas et al., 2012).

In this talk, two related issues will be discussed. Firstly, we will focus on categorical covariates, a common situation in epidemiological studies, and examine the relation between: (i) dependence structures highlighted by Bayesian partitioning of the covariate space incorporating variable selection; and (ii) log-linear modelling with interaction terms, a traditional approach to modelling dependence. We will show how the clustering approach can be employed to assist log-linear model determination, a challenging task as the model space quickly becomes very large (Papathomas and Richardson, 2015).

Secondly, we will discuss clustering as a tool for integrating information from multiple datasets, with a view to discovering useful structure for prediction. In this context several related issues arise. It is clear that each dataset may carry a different amount of information for the predictive task; methods for learning how to reweight each data type for this task will therefore be presented. In the context of multi-omics datasets, the efficiency of different methods for performing integrative clustering will also be discussed, contrasting joint modelling and stepwise approaches. This will be illustrated by the analysis of cancer genomics datasets.

Joint work with Michael Papathomas and Paul Kirk.
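
For readers unfamiliar with the Dirichlet process prior underlying profile regression, the sketch below samples a random partition from the Chinese restaurant process, the clustering prior implied by a DP mixture. The concentration parameter and the toy call are illustrative; the full profile-regression machinery (joint covariate/outcome model, variable selection) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def chinese_restaurant_process(n, alpha=1.0):
    """Sample a random partition of n items from the CRP(alpha) prior.

    This is the partition prior implied by a Dirichlet-process mixture: item i
    joins an existing cluster with probability proportional to its current size,
    or opens a new cluster with probability proportional to alpha.
    """
    labels = np.zeros(n, dtype=int)
    counts = [1]                              # item 0 starts cluster 0
    for i in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                  # open a new cluster
        else:
            counts[k] += 1
        labels[i] = k
    return labels

labels = chinese_restaurant_process(200, alpha=2.0)
print("number of clusters:", labels.max() + 1)
```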

62F15 ; 62P10

The Expectation-Propagation (EP) algorithm was introduced by Minka in 2001, and remains one of the most effective algorithms for approximate inference. It is relatively difficult to implement well, but in certain cases it can give results that are almost exact, while being much faster than MCMC. In this course I will review EP and classical applications to Generalised Linear Models and Gaussian Process models. I will also introduce some recent developments, including applications of EP to ABC problems, and discuss how to parallelise EP effectively.
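
A minimal, illustrative EP loop in one dimension, approximating a Gaussian prior times Laplace likelihood factors by Gaussian sites stored in natural parameters; tilted moments are obtained by brute-force quadrature on a grid to keep the sketch short. The model, grid and update schedule are assumptions made for this sketch, not Minka's original formulation nor the GLM/GP applications covered in the course.

```python
import numpy as np

def ep_1d(y, n_sweeps=20):
    """Minimal EP in one dimension (an illustrative sketch only).

    Target: p(theta) proportional to N(theta; 0, 1) * prod_i exp(-|y_i - theta|).
    Each Laplace factor is approximated by an unnormalised Gaussian 'site' stored
    in natural parameters (precision lam_i, precision-mean eta_i). The Laplace
    factor is log-concave, so the updated site precisions stay non-negative.
    """
    n = len(y)
    lam = np.zeros(n)          # site precisions
    eta = np.zeros(n)          # site precision-means
    lam0, eta0 = 1.0, 0.0      # N(0, 1) prior in natural parameters
    for _ in range(n_sweeps):
        for i in range(n):
            # Cavity: remove site i from the current global approximation.
            lam_c = lam0 + lam.sum() - lam[i]
            eta_c = eta0 + eta.sum() - eta[i]
            m_c, v_c = eta_c / lam_c, 1.0 / lam_c
            # Tilted distribution: cavity Gaussian times the exact factor, on a grid.
            grid = np.linspace(m_c - 10 * np.sqrt(v_c), m_c + 10 * np.sqrt(v_c), 2001)
            logw = -0.5 * (grid - m_c) ** 2 / v_c - np.abs(y[i] - grid)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            mean = np.sum(w * grid)
            var = np.sum(w * (grid - mean) ** 2)
            # Moment matching: the new site is the tilted moments minus the cavity.
            lam[i] = 1.0 / var - lam_c
            eta[i] = mean / var - eta_c
    lam_post = lam0 + lam.sum()
    eta_post = eta0 + eta.sum()
    return eta_post / lam_post, 1.0 / lam_post   # approximate posterior mean and variance

mean, var = ep_1d(np.array([0.5, 1.0, 2.0]))
print(mean, var)
```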

62F15 ; 62J12
