We investigate the mean-field limit of large networks of interacting biological neurons. The neurons are represented by so-called integrate-and-fire models, which follow the membrane potential of each neuron and capture individual spikes. However, we do not assume any structure on the graph of interactions; instead, we allow arbitrary connection weights between neurons, provided they obey a generic mean-field scaling. We are able to extend the concept of extended graphons, introduced in Jabin-Poyato-Soler, by introducing a novel notion of discrete observables in the system. This is joint work with D. Zhou.
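The model class in the abstract can be made concrete with a small simulation. Below is a minimal sketch, not the construction analyzed in the talk: it uses standard leaky integrate-and-fire dynamics with illustrative parameters (threshold, reset, external drive), and draws arbitrary connection weights with a generic $1/N$ mean-field scaling, without assuming any graph structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the talk): N neurons, leaky
# integrate-and-fire dynamics with threshold 1, reset to 0, and a
# constant external drive so that neurons actually spike.
N, T, dt = 200, 2.0, 1e-3
threshold, reset, leak, drive = 1.0, 0.0, 1.0, 1.5

# Arbitrary connection weights w[i, j]: no graph structure is assumed,
# only a generic mean-field scaling making each weight O(1/N).
w = rng.uniform(0.0, 1.0, size=(N, N)) / N

v = rng.uniform(0.0, 1.0, size=N)        # membrane potentials
spike_count = 0

for _ in range(int(T / dt)):
    fired = v >= threshold               # neurons crossing the threshold
    spike_count += int(fired.sum())
    v[fired] = reset                     # reset the neurons that spiked
    # Relaxation toward the driven rest state, plus the jump that each
    # spike induces on the potentials of the postsynaptic neurons.
    v += dt * leak * (drive - v) + w @ fired.astype(float)

print("total spikes:", spike_count)
```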
35Q49 ; 35Q83 ; 35R02 ; 35Q70 ; 05C90 ; 60G09 ; 35R06 ; 35Q89 ; 35Q92 ; 49N80 ; 92B20 ; 65N75
We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals, when the target probability measure is the $n$-fold product of a one-dimensional law. It is well known that, in the limit as $n$ tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as functions of the dimension $n$, a diffusive limit is obtained for each component of the Markov chain. We generalize this result to the case where the initial distribution is not the target probability measure. The diffusive limit obtained is the solution to a stochastic differential equation that is nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation and discuss practical counterparts aimed at optimizing the variance of the proposal distribution to accelerate convergence to equilibrium. Our analysis confirms the interest of the constant-acceptance-rate strategy (with an acceptance rate between 1/4 and 1/3).
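As a concrete companion to the abstract, here is a minimal sketch of the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals and an $n$-fold product target. The target (a standard Gaussian) and the tuning constant $\ell$ are illustrative choices, not those of the talk; the proposal variance is scaled as $\ell^2/n$, the dimension-dependent scaling under which a diffusive limit is obtained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: n-fold product of a one-dimensional law, here a standard
# Gaussian (an illustrative choice; the talk allows a general density).
def log_target(x):
    return -0.5 * np.sum(x**2)

n, n_steps, ell = 100, 20_000, 2.38   # ell = 2.38 is a classical tuning
sigma = ell / np.sqrt(n)              # proposal std: variance scales as 1/n

x = rng.standard_normal(n)            # start (here: at equilibrium)
accepted = 0

for _ in range(n_steps):
    # Gaussian random-walk proposal with dimension-dependent scaling
    y = x + sigma * rng.standard_normal(n)
    # Metropolis-Hastings acceptance step (the proposal is symmetric)
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x, accepted = y, accepted + 1

# For this tuning and target, the empirical rate is close to 0.23
print("acceptance rate:", accepted / n_steps)
```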
60J22 ; 60J10 ; 60G50 ; 60F17 ; 60J60 ; 60G09 ; 65C40 ; 65C05
In many health studies, interest often lies in assessing health effects on a large set of outcomes or specific outcome subtypes, which may be sparsely observed, even in big data settings. For example, while the overall prevalence of birth defects is not low, the vast heterogeneity in types of congenital malformations leads to challenges in estimation for sparse groups. However, lumping small groups together to facilitate estimation is often controversial and may have limited scientific support.
There is a very rich literature proposing Bayesian approaches to clustering that start from a prior probability distribution on partitions. Most approaches assume exchangeability, leading to simple representations in terms of Exchangeable Partition Probability Functions (EPPFs). Gibbs-type priors encompass a broad class of such cases, including the Dirichlet and Pitman-Yor processes. Even though there have been some proposals to relax the exchangeability assumption, allowing covariate dependence and partial exchangeability, limited consideration has been given to how to include concrete prior knowledge on the partition. We wish to cluster birth defects into groups to facilitate estimation, and we have prior knowledge of an initial clustering provided by experts. As a general approach for including such prior knowledge, we propose a Centered Partition (CP) process that modifies the EPPF to favor partitions close to an initial one. Some properties of the CP prior are described, a general algorithm for posterior computation is developed, and we illustrate the methodology through simulation examples and an application to the motivating epidemiological study of birth defects.
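To make the mechanism concrete, here is a minimal sketch of a centered-partition-type prior under stated assumptions: the baseline EPPF is taken to be that of a Dirichlet process (Chinese restaurant process), the distance between partitions is a simple pairwise-disagreement (Binder-type) count, and the penalty parameter psi and initial clustering pi0 are illustrative. The talk's CP process is more general; this only shows how an EPPF can be reweighted toward a given partition.

```python
import math

def partitions(items):
    """Enumerate all set partitions of a list (Bell-number many)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i in range(len(smaller)):       # put `first` in an existing block
            yield smaller[:i] + [smaller[i] + [first]] + smaller[i + 1:]
        yield [[first]] + smaller           # or in its own block

def crp_eppf(part, alpha=1.0):
    """Unnormalized Dirichlet-process EPPF: alpha^K * prod_k (n_k - 1)!."""
    return alpha ** len(part) * math.prod(math.factorial(len(b) - 1) for b in part)

def binder_distance(p, q, n):
    """Number of pairs (i, j) on which the two partitions disagree."""
    def comembership(part):
        label = {i: k for k, block in enumerate(part) for i in block}
        return {(i, j): label[i] == label[j]
                for i in range(n) for j in range(i + 1, n)}
    cp, cq = comembership(p), comembership(q)
    return sum(cp[key] != cq[key] for key in cp)

# Centered-partition-style prior: baseline EPPF times a penalty
# exp(-psi * distance to pi0), so partitions near pi0 are favored.
n, psi = 4, 2.0
pi0 = [[0, 1], [2, 3]]                      # illustrative expert clustering
weights = {tuple(map(tuple, p)):
           crp_eppf(p) * math.exp(-psi * binder_distance(p, pi0, n))
           for p in partitions(list(range(n)))}
Z = sum(weights.values())
for p, w in sorted(weights.items(), key=lambda kv: -kv[1])[:3]:
    print(p, round(w / Z, 3))               # highest-probability partitions
```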
62F15 ; 62H30 ; 60G09 ; 60G57 ; 62G05 ; 62P10