The human brain contains billions of neurones and glial cells that are tightly interconnected. Describing their electrical and chemical activity is mind-boggling, hence the idea of studying the thermodynamic limit of the equations that describe these activities, i.e. of looking at what happens when the number of cells grows arbitrarily large. It turns out that under reasonable hypotheses the number of equations to deal with drops sharply from millions to a handful, albeit more complex ones. There are many different approaches to this, usually called mean-field analyses. I present two mathematical methods to illustrate these approaches. They both enjoy the feature that they propagate chaos, a notion I connect to physiological measurements of the correlations between neuronal activities. In the first method, the limit equations can be read off the network equations, and methods 'à la Sznitman' can be used to prove convergence and propagation of chaos, as in the case of a network of biologically plausible neurone models. The second method requires more sophisticated tools, such as large deviations, to identify the limit and do the rest of the job, as in the case of networks of Hopfield neurones such as those present in the trendy deep neural networks.
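To fix ideas, here is a generic sketch of the kind of mean-field limit and propagation-of-chaos statement referred to above (an illustrative setting, not necessarily the specific models of the talk): for $N$ diffusively coupled neurones with mean-field interaction,
\[
dX^{i,N}_t = b\big(X^{i,N}_t\big)\,dt + \frac{1}{N}\sum_{j=1}^{N} \Phi\big(X^{i,N}_t, X^{j,N}_t\big)\,dt + \sigma\,dW^i_t, \qquad i = 1,\dots,N,
\]
whose limit, as $N \to \infty$, is the McKean-Vlasov dynamics
\[
d\bar{X}_t = b(\bar{X}_t)\,dt + \int \Phi(\bar{X}_t, y)\,\mu_t(dy)\,dt + \sigma\,dW_t, \qquad \mu_t = \mathrm{Law}(\bar{X}_t).
\]
Propagation of chaos means that, for every fixed $k$, the law of $(X^{1,N},\dots,X^{k,N})$ converges to $\mu^{\otimes k}$: any finite group of neurones becomes asymptotically independent, which is the notion connected above to the measured decay of correlations between neuronal activities.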
60F99 ; 60B10 ; 92B20 ; 82C32 ; 82C80 ; 35Q80
In this talk, I will focus on a Fokker-Planck equation modeling interacting neurons in a network where each neuron is governed by an integrate-and-fire type dynamic. When the network is excitatory, neurons that discharge instantaneously increase the membrane potential of the other neurons of the network, at a speed proportional to the amplitude of the global activity of the network. The self-excitable nature of these neurons in the case of excitatory networks leads to blow-up phenomena once the proportion of neurons that are close to their action potential is too high. In this talk, we are interested in understanding the regimes where solutions exist globally. Using new entropy and upper-solution methods, we give criteria under which blow-up cannot occur and specify, in some cases, the asymptotic behavior of the solution.
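For reference, a standard form of such a nonlinear noisy integrate-and-fire (NNLIF) Fokker-Planck model reads as follows (the precise equation studied in the talk may differ): the density $p(v,t)$ of neurons at membrane potential $v \le V_F$ solves
\[
\partial_t p(v,t) + \partial_v\!\big[(-v + b\,N(t))\,p(v,t)\big] - a\,\partial_{vv} p(v,t) = N(t)\,\delta_{v = V_R},
\]
with $p(V_F,t) = 0$ and firing rate $N(t) = -a\,\partial_v p(V_F,t) \ge 0$; the excitatory case corresponds to $b > 0$. Blow-up then corresponds to $N(t)$ becoming infinite in finite time when too much mass concentrates near the firing threshold $V_F$.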
integrate-and-fire - neural networks - Fokker-Planck equation - blow-up
92B20 ; 82C32 ; 35Q84