Documents 62F15 | records found: 22


In this short course, we recall the basics of Markov chain Monte Carlo (Gibbs & Metropolis samplers) along with the most recent developments like Hamiltonian Monte Carlo, Rao-Blackwellisation, divide & conquer strategies, pseudo-marginal and other noisy versions. We also cover the specific approximate method of ABC that is currently used in many fields to handle complex models in manageable conditions, from the original motivation in population genetics to the several reinterpretations of the approach found in the recent literature. Time allowing, we will also comment on the programming developments like BUGS, STAN and Anglican that stemmed from those specific algorithms.
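
As a minimal illustration of the Metropolis sampler recalled in this course (an illustrative sketch, not the course material; the standard-normal target, function names and tuning values are assumptions):

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_iter=10_000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step*eps, accept with
    probability min(1, pi(x')/pi(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + step * rng.normal()
        # log acceptance ratio; the symmetric proposal density cancels out
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Illustrative target: a standard normal "posterior"
draws = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.std())
```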

65C05 ; 65C40 ; 60J10 ; 62F15


Post-edited  Bayesian modelling
Mengersen, Kerrie (Conference Author) | CIRM (Publisher)

This tutorial will be a beginner’s introduction to Bayesian statistical modelling and analysis. Simple models and computational tools will be described, followed by a discussion about implementing these approaches in practice. A range of case studies will be presented and possible solutions proposed, followed by an open discussion about other ways that these problems could be tackled.

62C10 ; 62F15 ; 62P12 ; 62P10


This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.
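
To make the combination of proximal convex optimisation with MCMC concrete, here is a minimal sketch of a Moreau-Yosida regularised unadjusted Langevin (MYULA-type) iteration for a log-concave target exp(-||y-x||^2/(2*sigma^2) - theta*||x||_1); the 1-D signal, parameter values and function names are illustrative assumptions, not the implementation used in the course.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def myula(y, sigma=0.5, theta=1.0, lam=0.1, delta=0.05, n_iter=5000, seed=1):
    """Moreau-Yosida unadjusted Langevin algorithm targeting
    pi(x) proportional to exp(-||y-x||^2/(2 sigma^2) - theta*||x||_1)."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    chain = []
    for _ in range(n_iter):
        grad_f = (x - y) / sigma**2                             # gradient of the smooth data-fidelity term
        grad_g_my = (x - soft_threshold(x, lam * theta)) / lam  # gradient of the Moreau-Yosida envelope of the l1 prior
        x = x - delta * (grad_f + grad_g_my) + np.sqrt(2 * delta) * rng.normal(size=x.shape)
        chain.append(x.copy())
    return np.array(chain)

# Illustrative 1-D "image": a sparse signal observed with Gaussian noise
rng = np.random.default_rng(0)
x_true = np.zeros(50); x_true[[5, 20, 40]] = [3.0, -2.0, 4.0]
y = x_true + 0.5 * rng.normal(size=50)
posterior_mean = myula(y)[1000:].mean(axis=0)   # MMSE-type estimate after discarding burn-in
```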

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08


Data mining methods based on finite mixture models are quite common in many areas of applied science, such as marketing, to segment data and to identify subgroups with specific features. Recent work shows that these methods are also useful in microeconometrics to analyze the behavior of workers in labor markets. Since these data are typically available as time series with discrete states, clustering kernels based on Markov chains with group-specific transition matrices are applied to capture both persistence in the individual time series as well as cross-sectional unobserved heterogeneity. Markov chain clustering has been applied to data from the Austrian labor market, (a) to understand the effect of labor market entry conditions on long-run career developments for male workers (Frühwirth-Schnatter et al., 2012), (b) to study mothers’ long-run career patterns after first birth (Frühwirth-Schnatter et al., 2016), and (c) to study the effects of a plant closure on future career developments for male workers (Frühwirth-Schnatter et al., 2018). To capture non-stationary effects for the latter study, time-inhomogeneous Markov chains based on time-varying group-specific transition matrices are introduced as clustering kernels. For all applications, a mixture-of-experts formulation helps to understand which workers are likely to belong to a particular group. Finally, it will be shown that Markov chain clustering is also useful in a business application in marketing and helps to identify loyal consumers within a customer relationship management (CRM) program.
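
To make the clustering kernel concrete, the following sketch (illustrative; the states, matrices and function names are assumptions, not the authors' code) computes the posterior probability that a single discrete-state trajectory belongs to each group, given group-specific transition matrices and prior group weights, which is the basic building block of Markov chain clustering.

```python
import numpy as np

def transition_counts(seq, n_states):
    """Count observed transitions i -> j in one discrete-state time series."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(seq[:-1], seq[1:]):
        counts[s, t] += 1
    return counts

def group_responsibilities(seq, transition_mats, weights):
    """Posterior probability that the trajectory belongs to each group,
    given group-specific transition matrices and prior group weights."""
    n_states = transition_mats[0].shape[0]
    counts = transition_counts(seq, n_states)
    log_post = np.array([
        np.log(w) + np.sum(counts * np.log(P))
        for P, w in zip(transition_mats, weights)
    ])
    log_post -= log_post.max()          # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative example: two groups, three labour-market states
# (0 = employed, 1 = unemployed, 2 = out of the labour force)
P1 = np.array([[0.9, 0.05, 0.05], [0.4, 0.5, 0.1], [0.2, 0.1, 0.7]])
P2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
seq = [0, 0, 0, 1, 0, 0, 0, 0, 2, 0]
print(group_responsibilities(seq, [P1, P2], weights=[0.5, 0.5]))
```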

62C10 ; 62M05 ; 62M10 ; 62H30 ; 62P20 ; 62F15


Bayesian posterior distributions can be numerically intractable, even by means of Markov chain Monte Carlo methods. Bayesian variational methods can then be used to compute directly (and fast) a deterministic approximation of these posterior distributions. In this course, I describe the principles of the variational methods and their application in Bayesian inference, review the main theoretical results and discuss their use on examples.
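
As a small worked example of these variational methods (a standard textbook mean-field case, not taken from the course; names and prior values are assumptions), the coordinate-ascent updates below approximate the posterior of a Gaussian model with unknown mean and precision under a Normal-Gamma prior by a factorised q(mu)q(tau).

```python
import numpy as np

def cavi_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, n_iter=50):
    """Mean-field variational Bayes for x_i ~ N(mu, 1/tau),
    mu ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0).
    Returns parameters of q(mu)=N(mu_n, 1/lam_n) and q(tau)=Gamma(a_n, b_n)."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    e_tau = a0 / b0                              # initial guess for E[tau]
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)  # fixed across iterations
    a_n = a0 + (n + 1) / 2.0
    for _ in range(n_iter):
        lam_n = (lam0 + n) * e_tau
        # E_q(mu)[ sum_i (x_i - mu)^2 + lam0 * (mu - mu0)^2 ]
        e_quad = np.sum((x - mu_n) ** 2) + n / lam_n \
                 + lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n)
        b_n = b0 + 0.5 * e_quad
        e_tau = a_n / b_n
    return mu_n, lam_n, a_n, b_n

data = np.random.default_rng(2).normal(loc=3.0, scale=2.0, size=200)
print(cavi_normal(data))
```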

62F15 ; 62H12 ; 49J40


The Expectation-Propagation algorithm was introduced by Minka in 2001, and is today still one of the most effective algorithms for approximate inference. It is relatively difficult to implement well, but in certain cases it can give results that are almost exact, while being much faster than MCMC. In this course I will review EP and classical applications to Generalised Linear Models and Gaussian Process models. I will also introduce some recent developments, including applications of EP to ABC problems, and discuss how to parallelise EP effectively.
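
The moment-matching loop at the heart of EP can be sketched in a few lines for a one-dimensional posterior with a Gaussian prior and probit sites; the model, the grid-based moment matching and all names below are illustrative assumptions rather than the course's implementation.

```python
import numpy as np
from scipy.stats import norm

def ep_1d_probit(y, prior_var=4.0, n_sweeps=20):
    """EP for p(theta | y) proportional to N(theta; 0, prior_var) * prod_i Phi(y_i * theta),
    with y_i in {-1, +1}. Each site is an unnormalised Gaussian stored in
    natural parameters (precision r_i, precision-mean b_i)."""
    n = len(y)
    r = np.zeros(n)                 # site precisions
    b = np.zeros(n)                 # site precision-means
    grid = np.linspace(-10, 10, 4001)
    for _ in range(n_sweeps):
        for i in range(n):
            # global natural parameters = prior + all sites
            r_tot = 1.0 / prior_var + r.sum()
            b_tot = b.sum()
            # cavity distribution: remove site i
            r_cav, b_cav = r_tot - r[i], b_tot - b[i]
            m_cav, v_cav = b_cav / r_cav, 1.0 / r_cav
            # tilted distribution: cavity times the exact likelihood term,
            # with its mean and variance computed numerically on a grid
            tilted = norm.pdf(grid, m_cav, np.sqrt(v_cav)) * norm.cdf(y[i] * grid)
            tilted /= tilted.sum()
            m_new = np.sum(grid * tilted)
            v_new = np.sum((grid - m_new) ** 2 * tilted)
            # new site = tilted natural parameters minus cavity natural parameters
            r[i] = 1.0 / v_new - r_cav
            b[i] = m_new / v_new - b_cav
    r_tot = 1.0 / prior_var + r.sum()
    return b.sum() / r_tot, 1.0 / r_tot   # approximate posterior mean and variance

print(ep_1d_probit(np.array([1, 1, 1, -1, 1])))
```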

62F15 ; 62J12


Multi angle  Selective inference in genetics
Sabatti, Chiara (Conference Author) | CIRM (Publisher)

Geneticists have always been aware that, when looking for signal across the entire genome, one has to be very careful to avoid false discoveries. Contemporary studies often involve a very large number of traits, increasing the challenges of "looking everywhere". I will discuss novel approaches that allow an adaptive exploration of the data, while guaranteeing reproducible results.

62F15 ; 62J15 ; 62P10 ; 92D10


Multi angle  Model assessment, selection and averaging
Vehtari, Aki (Conference Author) | CIRM (Publisher)

The tutorial covers cross-validation and projection predictive approaches for model assessment, selection and inference after model selection, as well as Bayesian stacking for model averaging. The talk is accompanied by R notebooks using the rstanarm, bayesplot, loo, and projpred packages.
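
As an illustration of the stacking step (a Python sketch of the stacking objective only, not the rstanarm/loo/projpred workflow; the input matrix is invented), the function below finds simplex weights maximising the summed log pointwise leave-one-out predictive density.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(lpd_matrix):
    """Bayesian stacking: find simplex weights w maximising
    sum_i log( sum_k w_k * p_ik ), where p_ik is the LOO predictive
    density of observation i under model k (rows: observations)."""
    # rescaling each row by a constant does not change the maximiser, only improves stability
    p = np.exp(lpd_matrix - lpd_matrix.max(axis=1, keepdims=True))
    n_models = p.shape[1]

    def neg_obj(z):                   # softmax parameterisation keeps w on the simplex
        w = np.exp(z) / np.exp(z).sum()
        return -np.sum(np.log(p @ w))

    res = minimize(neg_obj, np.zeros(n_models), method="Nelder-Mead")
    return np.exp(res.x) / np.exp(res.x).sum()

# Illustrative pointwise log predictive densities for 3 candidate models
rng = np.random.default_rng(3)
lpd = np.column_stack([rng.normal(-1.0, 0.3, 100),
                       rng.normal(-1.2, 0.3, 100),
                       rng.normal(-2.0, 0.3, 100)])
print(stacking_weights(lpd))
```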

62C10 ; 62F15 ; 65C60 ; 62M20


With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. I will present a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is of order n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.
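
To illustrate the NNGP construction (a didactic sketch under an exponential covariance; the neighbour rule, parameter values and names are assumptions, not the speaker's code), the snippet below regresses each ordered location on its m nearest preceding neighbours and assembles the resulting precision matrix, which is sparse in practice.

```python
import numpy as np

def nngp_precision(coords, m=5, phi=1.0, sigma2=1.0):
    """Nearest-Neighbor Gaussian Process: for an ordered set of locations,
    write w_i = B_i w_{N(i)} + eta_i with N(i) the m nearest *preceding*
    neighbours, giving precision Q = (I - B)^T F^{-1} (I - B)."""
    n = coords.shape[0]
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sigma2 * np.exp(-phi * dist)          # exponential covariance function
    B = np.zeros((n, n))
    F = np.zeros(n)
    F[0] = C[0, 0]
    for i in range(1, n):
        nbrs = np.argsort(dist[i, :i])[:m]    # m nearest earlier-ordered locations
        C_nn = C[np.ix_(nbrs, nbrs)]
        C_in = C[i, nbrs]
        B[i, nbrs] = np.linalg.solve(C_nn, C_in)
        F[i] = C[i, i] - B[i, nbrs] @ C_in    # conditional variance given the neighbours
    I_minus_B = np.eye(n) - B
    # dense arrays are used here for clarity; real implementations exploit sparsity
    return I_minus_B.T @ np.diag(1.0 / F) @ I_minus_B

coords = np.random.default_rng(4).uniform(size=(200, 2))
Q = nngp_precision(coords)
```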

62P12 ; 62M30 ; 62F15


Faced with data containing a large number of inter-related explanatory variables, finding ways to investigate complex multi-factorial effects is an important statistical task. This is particularly relevant for epidemiological study designs where large numbers of covariates are typically collected in an attempt to capture complex interactions between host characteristics and risk factors. A related task, which is of great interest in stratified medicine, is to use multi-omics data to discover subgroups of patients with distinct molecular phenotypes and clinical outcomes, thus providing the potential to target treatments more precisely. Flexible clustering is a natural way to tackle such problems. It can be used in an unsupervised or a semi-supervised manner, by adding a link between the clustering structure and outcomes and performing joint modelling. In this case, the clustering structure is used to help predict the outcome. This latter approach, known as profile regression, has been implemented recently using a Bayesian nonparametric Dirichlet process (DP) modelling framework, which specifies a joint clustering model for covariates and outcome, with an additional variable selection step to uncover the variables driving the clustering (Papathomas et al., 2012). In this talk, two related issues will be discussed. Firstly, we will focus on categorical covariates, a common situation in epidemiological studies, and examine the relation between: (i) dependence structures highlighted by Bayesian partitioning of the covariate space incorporating variable selection; and (ii) log-linear modelling with interaction terms, a traditional approach to model dependence. We will show how the clustering approach can be employed to assist log-linear model determination, a challenging task as the model space becomes quickly very large (Papathomas and Richardson, 2015). Secondly, we will discuss clustering as a tool for integrating information from multiple datasets, with a view to discovering useful structure for prediction. In this context several related issues arise. It is clear that each dataset may carry a different amount of information for the predictive task. Methods for learning how to reweight each data type for this task will therefore be presented. In the context of multi-omics datasets, the efficiency of different methods for performing integrative clustering will also be discussed, contrasting joint modelling and stepwise approaches. This will be illustrated by analysis of genomic cancer datasets.
Joint work with Michael Papathomas and Paul Kirk.

62F15 ; 62P10


In many health studies, interest often lies in assessing health effects on a large set of outcomes or specific outcome subtypes, which may be sparsely observed, even in big data settings. For example, while the overall prevalence of birth defects is not low, the vast heterogeneity in types of congenital malformations leads to challenges in estimation for sparse groups. However, lumping small groups together to facilitate estimation is often controversial and may have limited scientific support.
There is a very rich literature proposing Bayesian approaches for clustering, starting with a prior probability distribution on partitions. Most approaches assume exchangeability, leading to simple representations in terms of Exchangeable Partition Probability Functions (EPPF). Gibbs-type priors encompass a broad class of such cases, including Dirichlet and Pitman-Yor processes. Even though there have been some proposals to relax the exchangeability assumption, allowing covariate-dependence and partial exchangeability, limited consideration has been given to how to include concrete prior knowledge on the partition. We wish to cluster birth defects into groups to facilitate estimation, and we have prior knowledge of an initial clustering provided by experts. As a general approach for including such prior knowledge, we propose a Centered Partition (CP) process that modifies the EPPF to favor partitions close to an initial one. Some properties of the CP prior are described, a general algorithm for posterior computation is developed, and we illustrate the methodology through simulation examples and an application to the motivating epidemiology study of birth defects.

62F15 ; 62H30 ; 60G09 ; 60G57 ; 62G05 ; 62P10


The flexibility of the Bayesian approach to uncertainty, and its notable practical successes, have made it an increasingly popular tool for uncertainty quantification. The scope of application has widened from the finite sample spaces considered by Bayes and Laplace to very high-dimensional systems, or even infinite-dimensional ones such as PDEs. It is natural to ask about the accuracy of Bayesian procedures from several perspectives: e.g., the frequentist questions of well-specification and consistency, or the numerical analysis questions of stability and well-posedness with respect to perturbations of the prior, the likelihood, or the data. This talk will outline positive and negative results (both classical ones from the literature and new ones due to the authors) on the accuracy of Bayesian inference. There will be a particular emphasis on the consequences for high- and infinite-dimensional complex systems. In particular, for such systems, subtle details of geometry and topology play a critical role in determining the accuracy or instability of Bayesian procedures. Joint work with Houman Owhadi and Clint Scovel (Caltech).

62F15 ; 62G35


The term ‘Public Access Defibrillation’ (PAD) refers to programs based on the placement of Automated External Defibrillators (AED) in key locations across a city’s territory, together with the development of a training plan for users (first responders). PAD programs are considered necessary since the time for intervention in cases of sudden cardiac arrest outside of a medical environment (out-of-hospital cardiocirculatory arrest, OHCA) is strongly limited: survival potential decreases from a 67% baseline by 7 to 10% for each minute of delay in first defibrillation. However, it is widely recognized that current PAD performance is largely below its full potential. We provide a Bayesian spatio-temporal statistical model for predicting OHCAs. Then we construct a risk map for Ticino, adjusted for demographic covariates, that explains and forecasts the spatial distribution of OHCAs, their temporal dynamics, and how the spatial distribution changes over time. The objective is twofold: to efficiently estimate, in each area of interest, the occurrence intensity of the OHCA event and to suggest a new optimized distribution of AEDs that accounts for population exposure to the geographic risk of OHCA occurrence and that includes both displacement of current devices and installation of new ones.

62F15 ; 62P10 ; 62H11 ; 91B30


Hawkes processes form a class of point processes whose intensity can be written as

$\lambda(t)= \int_{0}^{t^-} h(t-s)dN_s +\nu$

where $N$ denotes the Hawkes process and $\nu > 0$. Multivariate Hawkes processes have a similar intensity, except that interactions between the different components of the Hawkes process are allowed. The parameters of this model are therefore the interaction functions $h_{k,\ell}, k, \ell \le M$ and the constants $\nu_\ell, \ell \le M$. In this work we study a nonparametric Bayesian approach to estimate the functions $h_{k,\ell}$ and the constants $\nu_\ell$. We present a general theorem characterising the concentration rate of the posterior distribution in such models. The interest of this approach is that it yields a characterisation of convergence in the $L_1$ norm and requires rather few assumptions on the form of the prior. A characterisation of convergence in the $L_2$ norm is also considered. We will study an example of priors suited to the study of neuronal interactions. Joint work with S. Donnet and V. Rivoirard.
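
A small illustration of the intensity above in action (assuming an exponential interaction function h(t) = alpha*beta*exp(-beta*t); this is Ogata-style thinning for simulation, not the estimation procedure studied in the talk):

```python
import numpy as np

def hawkes_intensity(t, events, nu, alpha, beta):
    """lambda(t) = nu + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return nu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

def simulate_hawkes(nu=0.5, alpha=0.6, beta=2.0, horizon=50.0, seed=5):
    """Ogata's thinning algorithm for a univariate Hawkes process."""
    rng = np.random.default_rng(seed)
    events, t = np.array([]), 0.0
    while t < horizon:
        # alpha*beta is the maximum possible jump of the intensity, so this bound dominates
        lam_bar = hawkes_intensity(t, events, nu, alpha, beta) + alpha * beta
        t += rng.exponential(1.0 / lam_bar)        # candidate point
        if t < horizon and rng.uniform() * lam_bar <= hawkes_intensity(t, events, nu, alpha, beta):
            events = np.append(events, t)
    return events

events = simulate_hawkes()
print(len(events), "events; empirical rate ≈", len(events) / 50.0)
```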

62Gxx ; 62G05 ; 62F15 ; 62G20


This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08


This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08


This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08


Multi angle  Bayesian computational methods
Robert, Christian P. (Conference Author) | CIRM (Publisher)

This is a short introduction to the many directions of current research in Bayesian computational statistics, from accelerating MCMC algorithms, to using partly deterministic Markov processes like the bouncy particle and the zigzag samplers, to approximating the target or the proposal distributions in such methods. The main illustration focuses on the evaluation of normalising constants and ratios of normalising constants.
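
As a toy illustration of the normalising-constant problem mentioned at the end (an elementary importance-sampling sketch with invented names and target, far simpler than the methods discussed in the talk):

```python
import numpy as np
from scipy.stats import norm

def importance_sampling_Z(log_p_tilde, q_sampler, q_logpdf, n=100_000, seed=6):
    """Estimate Z = integral of p_tilde(x) dx via Z ≈ (1/n) sum_i p_tilde(x_i)/q(x_i), x_i ~ q."""
    rng = np.random.default_rng(seed)
    x = q_sampler(rng, n)
    log_w = log_p_tilde(x) - q_logpdf(x)
    return np.exp(np.logaddexp.reduce(log_w) - np.log(n))   # numerically stable averaging of the weights

# Unnormalised density exp(-x^2/2); true Z = sqrt(2*pi) ≈ 2.5066
Z_hat = importance_sampling_Z(
    log_p_tilde=lambda x: -0.5 * x**2,
    q_sampler=lambda rng, n: rng.normal(0.0, 2.0, size=n),   # overdispersed Gaussian proposal
    q_logpdf=lambda x: norm.logpdf(x, 0.0, 2.0),
)
print(Z_hat)
```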

62C10 ; 65C60 ; 62F15 ; 65C05


Multi angle  Bayesian computation with INLA
Rue, Havard (Conference Author) | CIRM (Publisher)

This talk focuses on the estimation of the distribution of unobserved nodes in large random graphs from the observation of very few edges. These graphs naturally model tournaments involving a large number of players (the nodes) where the ability to win of each player is unknown. The players are only partially observed through discrete-valued scores (edges) describing the results of contests between players. In this very sparse setting, we present the first nonasymptotic risk bounds for maximum likelihood estimators (MLE) of the unknown distribution of the nodes. The proof relies on the construction of a graphical model encoding conditional dependencies that is extremely efficient to study n-regular graphs obtained using a round-robin scheduling. This graphical model allows one to prove geometric loss-of-memory properties and deduce the asymptotic behavior of the likelihood function. Following a classical construction in learning theory, the asymptotic likelihood is used to define a measure of performance for the MLE. Risk bounds for the MLE are finally obtained by subgaussian deviation results derived from concentration inequalities for Markov chains applied to our graphical model.

62F15 ; 62C10 ; 65C60 ; 65C40


Capture-Recapture (CR) methodology provides a way to estimate the size of a population from multiple, independent samples. While the method was developed more than a century ago to count animal populations, it has only recently become important in Data For Social Good. The large number of samples, with varying amounts of intersection and developed over a period of time, so often found in Data For Social Good projects, can greatly complicate conventional CR methodology. These conditions are ideal, however, for Bayesian capture-recapture. This presentation describes the use of Bayesian capture-recapture to estimate populations in Data For Social Good. Examples illustrating this method include new work by the author in estimating numbers of human trafficking victims and in estimating the size of hate groups from the analysis of hate speech in social media.
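
A toy two-sample version of the Bayesian capture-recapture idea (hypothetical numbers; the applications described in the talk involve many overlapping samples and richer models) computes the posterior over the population size N under a hypergeometric likelihood and a discrete uniform prior:

```python
import numpy as np
from scipy.stats import hypergeom

def capture_recapture_posterior(n1, n2, m, N_max=5000):
    """Posterior over population size N for a two-sample study:
    n1 individuals marked in sample 1, n2 caught in sample 2, of which m were marked.
    Hypergeometric likelihood, discrete uniform prior on N."""
    N_min = n1 + n2 - m                       # smallest population consistent with the data
    N = np.arange(N_min, N_max + 1)
    like = hypergeom.pmf(m, N, n1, n2)        # P(m marked | population N, n1 marked, sample of n2)
    post = like / like.sum()                  # uniform prior, so posterior ∝ likelihood
    return N, post

N, post = capture_recapture_posterior(n1=200, n2=150, m=30)
print("posterior mean N ≈", (N * post).sum())
print("posterior mode N =", N[post.argmax()])
```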

62P25 ; 62F15 ; 62M10
