
Documents 60J10: 13 results

Variable length memory Markov chains form a class of probabilistic sources. This talk will address the existence and uniqueness of an invariant measure for a collection of examples of such chains. We will also be interested in the asymptotic behaviour of a random walk whose jump lengths are not necessarily integrable. The jump laws depend partially on the past of the trajectory: more precisely, the probability of going up or down depends on the time already spent moving in the direction in which the walker is currently advancing. A recurrence/transience criterion expressed in terms of the parameters of the model will be stated, followed by several examples illustrating how unstable the type of the walk is under small perturbations of the parameters.
The work described in this talk was carried out in collaboration with B. Chauvin, F. Paccaut and N. Pouyanne, or with B. de Loynes, A. Le Ny and Y. Offret.
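As a concrete illustration of the second model (a sketch, not the authors' construction), the following simulates a walk on Z whose decision to keep or reverse its direction depends on the time already spent in the current direction; the function p_continue is a hypothetical choice of parameters.

```python
import random

def persistent_walk(n_steps, p_continue, seed=0):
    """Toy walk on Z whose next step depends on how long the walker has
    been moving in its current direction.  p_continue(k) is a hypothetical
    probability of keeping the current direction after k consecutive
    steps in that direction."""
    rng = random.Random(seed)
    position, direction, run_length = 0, 1, 0
    path = [position]
    for _ in range(n_steps):
        if rng.random() < p_continue(run_length):
            run_length += 1              # keep going the same way
        else:
            direction *= -1              # turn around and reset the run
            run_length = 1
        position += direction
        path.append(position)
    return path

# Example: the longer the current run, the more likely the walker turns back.
path = persistent_walk(10_000, p_continue=lambda k: 1.0 / (1.0 + 0.1 * k))
print(path[-1])
```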

60J10 ; 60J27 ; 60F05 ; 60K15


Random walks on dynamical percolation - Sousi, Perla (Conference speaker) | CIRM H

Multi angle

We study the behaviour of random walk on dynamical percolation. In this model, the edges of a graph $G$ are either open or closed and refresh their status at rate $\mu$, while at the same time a random walker moves on $G$ at rate 1, but only along edges which are open. On the $d$-dimensional torus with side length $n$, when the bond parameter is subcritical, the mixing times for both the full system and the random walker were determined by Peres, Stauffer and Steif. I will talk about the supercritical case, which was left open, but can be analysed using evolving sets.

Joint work with Y. Peres and J. Steif.
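As a toy illustration of this model (a sketch, not the construction analysed in the talk), the event-driven simulation below runs a walker on the two-dimensional torus while the edges refresh at rate $\mu$; the parameter names and the choice d = 2 are assumptions made for the example.

```python
import random

def dynamical_percolation_walk(n, p, mu, t_max, seed=0):
    """Event-driven toy simulation of a random walk on dynamical percolation
    on the two-dimensional torus (Z/nZ)^2.  Each edge refreshes at rate mu
    (becoming open with probability p), the walker rings at rate 1 and, when
    it rings, picks a uniform neighbour and moves only if the connecting
    edge is currently open.  Parameter names are illustrative."""
    rng = random.Random(seed)
    # edges keyed by (endpoint, axis): axis 0 points right, axis 1 points up
    edges = [((x, y), a) for x in range(n) for y in range(n) for a in (0, 1)]
    state = {e: rng.random() < p for e in edges}          # open / closed
    pos, t = (0, 0), 0.0
    refresh_rate = mu * len(edges)
    while t < t_max:
        t += rng.expovariate(refresh_rate + 1.0)          # next event
        if rng.random() < refresh_rate / (refresh_rate + 1.0):
            state[rng.choice(edges)] = rng.random() < p   # an edge refreshes
        else:                                             # the walker rings
            x, y = pos
            axis, step = rng.choice([(0, 1), (0, -1), (1, 1), (1, -1)])
            if axis == 0:
                edge = ((x if step == 1 else (x - 1) % n, y), 0)
                nxt = ((x + step) % n, y)
            else:
                edge = ((x, y if step == 1 else (y - 1) % n), 1)
                nxt = (x, (y + step) % n)
            if state[edge]:                               # move along open edges only
                pos = nxt
    return pos

print(dynamical_percolation_walk(n=10, p=0.6, mu=0.5, t_max=50.0))
```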

60K35 ; 60J10 ; 60G50 ; 82B43


Self-interacting walks and uniform spanning forests - Peres, Yuval (Conference speaker) | CIRM H

Post-edited

In the first half of the talk, I will survey results and open problems on transience of self-interacting martingales. In particular, I will describe joint works with S. Popov, P. Sousi, R. Eldan and F. Nazarov on the tradeoff between the ambient dimension and the number of different step distributions needed to obtain a recurrent process. In the second, unrelated, half of the talk, I will present joint work with Tom Hutchcroft, showing that the component structure of the uniform spanning forest in $\mathbb{Z}^d$ changes in every dimension for $d > 8$. This sharpens an earlier result of Benjamini, Kesten, Schramm and the speaker (Annals of Math. 2004), where we established a phase transition every four dimensions. The proofs are based on the connection to loop-erased random walks.
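For readers unfamiliar with the last ingredient, here is a minimal sketch of chronological loop erasure, the operation that turns a walk into the loop-erased random walk underlying this connection; the example path is invented.

```python
def loop_erase(path):
    """Chronological loop erasure of a finite walk given as a list of
    vertices: whenever the walk revisits a vertex, the loop created since
    the previous visit is erased.  This is the basic operation behind the
    loop-erased random walks mentioned in the abstract."""
    erased = []
    for v in path:
        if v in erased:
            erased = erased[:erased.index(v) + 1]   # cut the loop back to v
        else:
            erased.append(v)
    return erased

print(loop_erase([0, 1, 2, 3, 1, 4, 2, 5]))         # -> [0, 1, 4, 2, 5]
```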

05C05 ; 05C80 ; 60G50 ; 60J10 ; 60K35 ; 82B43


Sur les mesures stationnaires des VLMC - Pouyanne, Nicolas (Conference speaker) | CIRM H

Multi angle

Variable length memory Markov chains (VLMC) are probabilistic sources for which the production of a letter depends on a finite past whose length depends on time and is not bounded. They are defined from a tree T which is a subtree of the tree of all words. Unlike standard finite-order Markov chains, these sources do not always admit a stationary probability measure, and they may admit several. The shape of the tree T plays an essential role here. We will present some tools adapted to the question and, under certain hypotheses, give a necessary and sufficient condition for the existence and uniqueness of such a probability measure.
Joint work with P. Cénac, B. Chauvin and F. Paccaut.
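To make the notion concrete, here is a minimal sketch (not taken from the talk) of how a VLMC produces letters: the dictionary `contexts` plays the role of a small context tree T, and the emission probability is read from the shortest context matching the reversed past. The particular tree and all names are illustrative.

```python
import random

def generate_vlmc(contexts, n, seed=0):
    """Minimal sketch of a variable length memory Markov chain (VLMC) on
    the alphabet {'a', 'b'}.  `contexts` maps a context -- the most recent
    letters, read from the present towards the past -- to the probability
    of emitting 'a'.  At each step the shortest matching context is used,
    a toy stand-in for reading the context tree T of the abstract."""
    rng = random.Random(seed)
    word = ['b']                        # initial letter chosen so a context always matches
    for _ in range(n):
        past = ''.join(reversed(word))  # most recent letter first
        ctx = next(past[:k] for k in range(1, len(past) + 1)
                   if past[:k] in contexts)
        word.append('a' if rng.random() < contexts[ctx] else 'b')
    return ''.join(word)

# Illustrative context tree: after a 'b' the next letter depends only on that
# 'b'; after an 'a' it depends on whether the letter before it was 'a' or 'b'.
print(generate_vlmc({'b': 0.5, 'ab': 0.9, 'aa': 0.2}, 40))
```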

60J10 ; 60G10


Bayesian methods for inverse problems - lecture 2 - Dashti, Masoumeh (Conference speaker) | CIRM H

Virtual conference

We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach the data and the unknown parameter are modelled as random variables, the distribution of the data is given and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained through the Bayes rule. We will talk about the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us to study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly talk about the Markov chain Monte Carlo methods in this context.
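A minimal sketch of the pipeline described above, on an invented scalar toy problem rather than the lecture's examples: a Gaussian prior, a nonlinear forward map, Gaussian observational noise, and a random walk Metropolis chain targeting the resulting posterior.

```python
import math
import random

def toy_bayesian_inversion(y_obs, forward, sigma_noise, sigma_prior,
                           n_samples=20_000, step=0.5, seed=0):
    """Scalar unknown u with Gaussian prior N(0, sigma_prior^2), data
    y_i = forward(u) + Gaussian noise, posterior sampled with random walk
    Metropolis.  All names and numbers are illustrative."""
    rng = random.Random(seed)

    def log_posterior(u):
        misfit = sum((y - forward(u)) ** 2 for y in y_obs) / (2 * sigma_noise ** 2)
        return -(misfit + u ** 2 / (2 * sigma_prior ** 2))   # unnormalised

    u, lp = 0.0, log_posterior(0.0)
    samples = []
    for _ in range(n_samples):
        u_new = u + rng.gauss(0.0, step)                     # Gaussian proposal
        lp_new = log_posterior(u_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):   # accept/reject
            u, lp = u_new, lp_new
        samples.append(u)
    return samples

# Synthetic data from a nonlinear forward map G(u) = u^3 with true u = 1.2.
data_rng = random.Random(1)
data = [1.2 ** 3 + data_rng.gauss(0.0, 0.3) for _ in range(10)]
post = toy_bayesian_inversion(data, forward=lambda u: u ** 3,
                              sigma_noise=0.3, sigma_prior=2.0)
print(sum(post[5_000:]) / len(post[5_000:]))                 # posterior mean estimate
```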

35R30 ; 65M32 ; 65M12 ; 65C05 ; 65C50 ; 76D07 ; 60J10

Let $G$ be an infinite, locally finite and transitive graph. We investigate the relation between supercritical transient branching random walk (BRW) and the Martin boundary of its underlying random walk. We show results regarding the typical (and some atypical) asymptotic directions taken by the particles. We focus on the behavior of BRW inside given subgraphs by relating geometrical properties of the subgraph itself to the behavior of BRW on it. We will also present some examples and counterexamples. (Based on joint works with T. Hutchcroft, D. Bertacchi and F. Zucca.)
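A minimal sketch of a supercritical branching random walk, on Z rather than a general transitive graph and with an invented offspring law; it only illustrates the object, not the results of the talk.

```python
import random

def branching_random_walk(n_generations, seed=0):
    """Minimal sketch of a supercritical branching random walk on Z (a toy
    stand-in for the transitive graph G of the abstract): each particle
    independently has 0 to 3 children (mean 1.5 > 1), and each child takes
    one +/-1 step of the underlying walk from its parent's position."""
    rng = random.Random(seed)
    particles = [0]                              # generation 0: one particle at the origin
    for _ in range(n_generations):
        offspring = [x + rng.choice((-1, 1))     # each child's displacement
                     for x in particles
                     for _ in range(rng.choice((0, 1, 2, 3)))]
        if not offspring:                        # the process may die out
            break
        particles = offspring
    return particles

pop = branching_random_walk(12)
print(len(pop), min(pop), max(pop))
```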

60J80 ; 60J10 ; 60J45


Random walk on random digraph - Salez, Justin (Conference speaker) | CIRM H

Multi angle

A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. Here we consider the non-reversible case of random walks on sparse directed graphs, for which even the equilibrium measure is far from being understood. We work under the configuration model, allowing both the in-degrees and the out-degrees to be freely specified. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a universal shape. We also provide a detailed description of the equilibrium measure.
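The sketch below (an illustration, not the proof technique) builds a small directed configuration model with prescribed in- and out-degrees and tracks the total variation distance of the walk to a long-run proxy for its equilibrium measure, where the abrupt drop characteristic of cutoff can be seen empirically; all sizes and degrees are invented.

```python
import random

def directed_configuration_model(out_deg, in_deg, seed=0):
    """Toy directed configuration model: out-stubs are matched to in-stubs
    uniformly at random (the degree sums must agree).  Returns adjacency
    lists; self-loops and multiple edges are allowed, as in the model."""
    rng = random.Random(seed)
    out_stubs = [v for v, d in enumerate(out_deg) for _ in range(d)]
    in_stubs = [v for v, d in enumerate(in_deg) for _ in range(d)]
    rng.shuffle(in_stubs)
    adj = [[] for _ in out_deg]
    for u, v in zip(out_stubs, in_stubs):
        adj[u].append(v)
    return adj

def walk_distribution(adj, t, start=0):
    """Distribution of the random walk after t steps, started at `start`."""
    mu = [0.0] * len(adj)
    mu[start] = 1.0
    for _ in range(t):
        nxt = [0.0] * len(adj)
        for u, mass in enumerate(mu):
            if mass:
                share = mass / len(adj[u])
                for v in adj[u]:
                    nxt[v] += share
        mu = nxt
    return mu

# Sparse digraph with all in- and out-degrees equal to 3; the equilibrium
# measure is approximated by running the walk much longer than the mixing time.
n = 500
adj = directed_configuration_model([3] * n, [3] * n)
pi = walk_distribution(adj, 200)
for t in range(0, 16, 3):
    mu = walk_distribution(adj, t)
    print(t, round(0.5 * sum(abs(a - b) for a, b in zip(mu, pi)), 3))
```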

05C80 ; 05C81 ; 60G50 ; 60J10


Markov Chain Monte Carlo Methods - Part 1 - Robert, Christian P. (Conference speaker) | CIRM H

Post-edited

In this short course, we recall the basics of Markov chain Monte Carlo (Gibbs & Metropolis samplers) along with the most recent developments like Hamiltonian Monte Carlo, Rao-Blackwellisation, divide & conquer strategies, pseudo-marginal and other noisy versions. We also cover the specific approximate method of ABC that is currently used in many fields to handle complex models in manageable conditions, from the original motivation in population genetics to the several reinterpretations of the approach found in the recent literature. Time allowing, we will also comment on the programming developments like BUGS, STAN and Anglican that stemmed from those specific algorithms.
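As one concrete instance of the ABC idea mentioned in the course description (a sketch with an invented toy model, not the course's examples), here is a plain rejection-ABC loop: draw from the prior, simulate, and keep the draws whose summary lands within eps of the observed one.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws, seed=0):
    """Plain ABC rejection sampler: draw a parameter from the prior, simulate
    data, keep the parameter when the simulated summary falls within eps of
    the observed one.  The arguments are illustrative callables, not a
    library API."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a Gaussian from the sample mean alone.
obs_rng = random.Random(1)
obs = sum(obs_rng.gauss(2.0, 1.0) for _ in range(50)) / 50        # "observed" summary

post = abc_rejection(
    observed=obs,
    simulate=lambda th, r: sum(r.gauss(th, 1.0) for _ in range(50)) / 50,
    prior_sample=lambda r: r.uniform(-10.0, 10.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n_draws=20_000,
)
print(len(post), sum(post) / len(post))
```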

65C05 ; 65C40 ; 60J10 ; 62F15

In recent years, interest in time changes of stochastic processes according to irregular measures has arisen from various sources. Fundamental examples of such time-changed processes include the so-called Fontes-Isopi-Newman (FIN) diffusion and fractional kinetics (FK) processes, the introduction of which was partly motivated by the study of the localization and aging properties of physical spin systems, and the two-dimensional Liouville Brownian motion, which is the diffusion naturally associated with planar Liouville quantum gravity.
The FIN diffusions and FK processes are known to be the scaling limits of the Bouchaud trap models, and the two-dimensional Liouville Brownian motion is conjectured to be the scaling limit of simple random walks on random planar maps.
In the first part of my talk, I will provide a general framework for studying such time-changed processes and their discrete approximations in the case when the underlying stochastic process is strongly recurrent, in the sense that it can be described by a resistance form, as introduced by J. Kigami. In particular, this includes the case of Brownian motion on tree-like spaces and low-dimensional self-similar fractals.
In the second part of my talk, I will discuss heat kernel estimates for (generalized) FIN diffusions and FK processes on metric measure spaces.
This talk is based on joint works with D. Croydon (Warwick) and B.M. Hambly (Oxford) and with Z.-Q. Chen (Seattle), P. Kim (Seoul) and J. Wang (Fuzhou).
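A minimal sketch (with invented parameters) of the one-dimensional Bouchaud trap model mentioned above: an ordinary nearest-neighbour walk time-changed by heavy-tailed holding times, the kind of irregular time change whose scaling limit is the FIN diffusion.

```python
import random

def bouchaud_trap_walk(n_sites, alpha, t_max, seed=0):
    """Toy one-dimensional Bouchaud trap model: each site x of Z carries a
    heavy-tailed trap depth tau_x (Pareto with index alpha < 1) and the
    walker waits an exponential time of mean tau_x at x before jumping to
    a uniformly chosen neighbour.  Parameters are illustrative."""
    rng = random.Random(seed)
    # heavy-tailed landscape: P(tau > u) = u^(-alpha) for u >= 1
    tau = {x: rng.paretovariate(alpha) for x in range(-n_sites, n_sites + 1)}
    x, t = 0, 0.0
    trajectory = [(0.0, 0)]
    while t < t_max and abs(x) < n_sites:
        t += rng.expovariate(1.0 / tau[x])     # time spent trapped at x
        x += rng.choice((-1, 1))               # nearest-neighbour jump
        trajectory.append((t, x))
    return trajectory

traj = bouchaud_trap_walk(n_sites=200, alpha=0.5, t_max=10_000.0)
print(len(traj), traj[-1])
```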

60J35 ; 60J55 ; 60J10 ; 60J45 ; 60K37

We first introduce the Metropolis-Hastings algorithm. We then consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with Gaussian proposals, when the target probability measure is the $n$-fold product of a one-dimensional law. It is well known that, in the limit as $n$ tends to infinity, starting at equilibrium and for an appropriate scaling of the variance and of the timescale as a function of the dimension $n$, a diffusive limit is obtained for each component of the Markov chain. We generalize this result when the initial distribution is not the target probability measure. The obtained diffusive limit is the solution to a stochastic differential equation nonlinear in the sense of McKean. We prove convergence to equilibrium for this equation. We discuss practical counterparts in order to optimize the variance of the proposal distribution to accelerate convergence to equilibrium. Our analysis confirms the interest of the constant acceptance rate strategy (with acceptance rate between 1/4 and 1/3).
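The setting lends itself to a short numerical sketch (an illustration, not the paper's analysis): Random Walk Metropolis on a product of standard normals with proposal standard deviation ell/sqrt(n), where one can watch how the acceptance rate responds to the scaling parameter ell. The target law and the tested values of ell are assumptions made for the example.

```python
import math
import random

def rwm_product_target(n, ell, n_steps=5_000, seed=0):
    """Random Walk Metropolis on R^n with i.i.d. Gaussian proposal increments
    of standard deviation ell / sqrt(n), targeting the n-fold product of the
    standard normal (a convenient stand-in for the one-dimensional law).
    Returns the empirical acceptance rate."""
    rng = random.Random(seed)

    def log_pi(v):                                    # unnormalised log target
        return -0.5 * sum(c * c for c in v)

    x = [rng.gauss(0.0, 1.0) for _ in range(n)]       # start at equilibrium
    sd = ell / math.sqrt(n)
    accepted = 0
    for _ in range(n_steps):
        y = [c + rng.gauss(0.0, sd) for c in x]       # Gaussian proposal
        if rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x))):
            x = y
            accepted += 1
    return accepted / n_steps

# Tuning ell trades jump size against acceptance rate; the theory recalled in
# the abstract points towards acceptance rates between 1/4 and 1/3.
for ell in (1.0, 2.4, 5.0):
    print(ell, round(rwm_product_target(n=50, ell=ell), 3))
```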

60J22 ; 60J10 ; 60G50 ; 60F17 ; 60J60 ; 60G09 ; 65C40 ; 65C05
