
Documents 92B20: 9 results

We investigate the mean-field limit of large networks of interacting biological neurons. The neurons are represented by so-called integrate-and-fire models, which follow the membrane potential of each neuron and capture individual spikes. However, we do not assume any structure on the graph of interactions but consider instead any connection weights between neurons that obey a generic mean-field scaling. We are able to extend the concept of extended graphons, introduced in Jabin-Poyato-Soler, by introducing a novel notion of discrete observables in the system. This is joint work with D. Zhou.

35Q49 ; 35Q83 ; 35R02 ; 35Q70 ; 05C90 ; 60G09 ; 35R06 ; 35Q89 ; 35Q92 ; 49N80 ; 92B20 ; 65N75
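As a rough illustration of the setup in this abstract, here is a minimal particle simulation of an integrate-and-fire network with arbitrary connection weights under the 1/N mean-field scaling. The drift, noise level, threshold, and weight distribution are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def simulate_if_network(N=200, T=2.0, dt=1e-3, v_th=1.0, seed=0):
    """Euler scheme for N integrate-and-fire neurons: each spike of neuron j
    increments neuron i's potential by w[i, j] / N (generic mean-field
    scaling, with no structural assumption on the weight matrix)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, size=(N, N))   # arbitrary non-negative weights
    v = rng.uniform(0.0, 0.5, size=N)        # membrane potentials
    spikes = np.zeros(N)
    for _ in range(int(T / dt)):
        # subthreshold drift plus noise on the membrane potential
        v += 0.5 * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(N)
        fired = v >= v_th
        if fired.any():
            spikes[fired] += 1
            kick = w[:, fired].sum(axis=1) / N   # mean-field interaction
            v = np.where(fired, 0.0, v + kick)   # reset spikers, kick the rest
    return spikes / T                            # empirical firing rates
```

The 1/N factor in the kick is the "generic mean-field scaling" of the abstract: it keeps the total input to a neuron of order one as N grows.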

We investigate the mean-field limit of large networks of interacting biological neurons. The neurons are represented by so-called integrate-and-fire models, which follow the membrane potential of each neuron and capture individual spikes. However, we do not assume any structure on the graph of interactions but consider instead any connection weights between neurons that obey a generic mean-field scaling. We are able to extend the concept of extended graphons, introduced in Jabin-Poyato-Soler, by introducing a novel notion of discrete observables in the system. This is joint work with D. Zhou.

35Q49 ; 35Q83 ; 35R02 ; 35Q70 ; 05C90 ; 60G09 ; 35R06 ; 35Q89 ; 49N80 ; 92B20 ; 65N75


Still searching the engram: should we? - Mongillo, Gianluigi (Author of the conference) ; Segal, Menahem (Author of the conference) | CIRM H

Multi angle

Start the video and click on the track button in the timeline to move to talks 1 and 2 and to the discussion.

- Talk 1: Gianluigi Mongillo - Inhibitory connectivity defines the realm of excitatory plasticity

- Talk 2: Menahem Segal - Determinants of network activity: Lessons from dissociated hippocampal cultures

- Discussion with Gianluigi Mongillo and Menahem Segal

92B20 ; 92C20 ; 68T05 ; 68Uxx


New hints from the reward system - Apicella, Paul (Author of the conference) ; Loewenstein, Yonatan (Author of the conference) | CIRM H

Multi angle

Start the video and click on the track button in the timeline to move to talks 1 and 2 and to the discussion.

- Talk 1: Paul Apicella - Striatal dopamine and acetylcholine mechanisms involved in reward-related learning

The midbrain dopamine system has been identified as a major component of motivation and reward processing. One of its main targets is the striatum, which plays an important role in motor control and learning functions. Other subcortical neurons work in parallel with dopamine neurons. In particular, striatal cholinergic interneurons participate in signaling the reward-related significance of stimuli, and they may act in concert with dopamine to encode prediction error signals and control the learning of stimulus–response associations. Recent studies have revealed functional cooperativity between these two neuromodulatory systems of a complexity far greater than previously appreciated. In this talk I will review the differences and similarities between dopamine and acetylcholine reward-signaling systems and the possible nature of reward representation in each system, and discuss the involvement of striatal dopamine-acetylcholine interactions during learning and behavior.

- Talk 2: Yonatan Loewenstein - Modeling operant learning: from synaptic plasticity to behavior

- Discussion with Paul Apicella and Yonatan Loewenstein

68T05 ; 68Uxx ; 92B20 ; 92C20 ; 92C40

The human brain contains billions of neurones and glial cells that are tightly interconnected. Describing their electrical and chemical activity is mind-boggling, hence the idea of studying the thermodynamic limit of the equations that describe these activities, i.e. looking at what happens when the number of cells grows arbitrarily large. It turns out that under reasonable hypotheses the number of equations to deal with drops sharply from millions to a handful, albeit more complex ones. There are many different approaches to this, usually called mean-field analyses. I present two mathematical methods to illustrate these approaches. They both enjoy the feature that they propagate chaos, a notion I connect to physiological measurements of the correlations between neuronal activities. In the first method, the limit equations can be read off the network equations, and methods 'à la Sznitman' can be used to prove convergence and propagation of chaos, as in the case of a network of biologically plausible neurone models. The second method requires more sophisticated tools, such as large deviations, to identify the limit and do the rest of the job, as in the case of networks of Hopfield neurones such as those present in the trendy deep neural networks.

60F99 ; 60B10 ; 92B20 ; 82C32 ; 82C80 ; 35Q80

Inspired by modeling in neuroscience, we discuss the well-posedness of a networked integrate-and-fire model describing an infinite population of companies which interact with one another through their common statistical distribution. The interaction is of the self-excitatory type: at any time, the debt of a company increases when some of the others default; precisely, the loss it receives is proportional to the instantaneous proportion of companies that default at the same time. From a mathematical point of view, the coefficient of proportionality, denoted by a, is of great importance, as the resulting system is known to blow up when a takes large values, a blow-up meaning that a macroscopic proportion of companies may default at the same time. In this talk, we focus on the complementary regime and prove that existence and uniqueness hold for arbitrary times, without any blow-up, when the excitatory parameter is small enough.

35K60 ; 82C31 ; 92B20
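A discrete-time caricature of the default mechanism in this abstract: each defaulting company knocks every survivor's distance-to-default down by a/N, and the resulting cascade is resolved instantaneously. For small a the total default fraction stays moderate, while a close to 1 produces a near-macroscopic cascade, mirroring the blow-up dichotomy. All parameters (initial distribution, common shock) are illustrative assumptions.

```python
import numpy as np

def default_cascade(N=1000, alpha=0.05, seed=0):
    """One-shot cascade: firm i defaults when its distance-to-default drops
    below 0; each new default lowers every survivor by alpha / N (loss
    proportional to the instantaneous proportion of defaults)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=N)   # initial distances to default
    x -= 0.1                            # common initial shock
    defaulted = np.zeros(N, dtype=bool)
    while True:
        new = (x <= 0.0) & ~defaulted
        if not new.any():
            break                       # cascade has stopped
        defaulted |= new
        # self-excitation with coefficient alpha and mean-field scaling 1/N
        x -= alpha * new.sum() / N
    return defaulted.mean()             # final fraction of defaults
```

With a uniform initial density, the final fraction roughly solves f = 0.1 + alpha * f, i.e. f = 0.1 / (1 - alpha), which diverges to the whole population as alpha approaches 1; this is the finite-population shadow of the small-a well-posedness regime in the talk.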

In this talk, I will focus on a Fokker-Planck equation modeling interacting neurons in a network where each neuron is governed by integrate-and-fire-type dynamics. When the network is excitatory, neurons that discharge instantaneously increase the membrane potential of the other neurons in the network at a speed proportional to the amplitude of the global activity of the network. The self-excitable nature of these neurons in the case of excitatory networks leads to blow-up phenomena once the proportion of neurons that are close to their action potential is too high. In this talk, we are interested in understanding the regimes where solutions exist globally. Using new entropy and upper-solution methods, we give criteria under which blow-up cannot occur and specify, in some cases, the asymptotic behavior of the solution.

integrate-and-fire - neural networks - Fokker-Planck equation - blow-up

92B20 ; 82C32 ; 35Q84
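The blow-up dichotomy in this abstract can be illustrated with a particle version of the model, where spikes within a time step are resolved as an instantaneous cascade; a large synchronous avalanche plays the role of the blow-up of the Fokker-Planck equation. The drift, noise, and coupling values below are illustrative assumptions.

```python
import numpy as np

def largest_avalanche(N=500, b=0.2, T=5.0, dt=1e-3, seed=3):
    """Particle version of the excitatory NNLIF model: each spike kicks every
    other neuron up by b / N, and spikes within one time step are resolved as
    an instantaneous cascade. Returns the largest fraction of the network
    firing at once, a finite-N proxy for blow-up."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, 0.8, size=N)
    vf, vr = 1.0, 0.0                       # firing threshold and reset
    worst = 0.0
    for _ in range(int(T / dt)):
        # mean-reverting drift plus noise on the membrane potentials
        v += (0.5 - v) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal(N)
        fired = np.zeros(N, dtype=bool)
        new = v >= vf
        while new.any():                    # instantaneous cascade
            fired |= new
            v[new] = vr
            v[~fired] += b * new.sum() / N  # excitatory kick to survivors
            new = (v >= vf) & ~fired
        worst = max(worst, fired.mean())
    return worst
```

For small b avalanches involve only a handful of neurons, while a large b makes the cascade supercritical, so that a macroscopic proportion of the network fires simultaneously, the discrete counterpart of the blow-up criterion.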

Neural networks form a varied class of computational models, used in machine learning for both supervised and unsupervised learning. Several network topologies have been proposed in the literature since the pioneering work of the late 1950s, including models based on undirected probabilistic graphical models, such as (Restricted) Boltzmann Machines, and on multi-layer feed-forward computational graphs. The training of a neural network is usually performed by minimizing a cost function, such as the negative log-likelihood. During the talk we will review alternative geometries used to describe the space of the functions encoded by a neural network, parametrized by its connection weights, and the implications for the optimization of the cost function during training, from the perspective of Riemannian optimization. In the first part of the presentation, we will introduce a probabilistic interpretation of neural networks, which goes back to the work of Amari and coauthors in the 90s and is based on the Fisher-Rao metric studied in Information Geometry. In this framework, the weights of a Boltzmann Machine, and similarly of feed-forward neural networks, are interpreted as the parameters of a (joint) statistical model for the observed, and possibly latent, variables. In the second part of the talk, we will review other approaches, motivated by invariance principles in neural networks and not explicitly based on probabilistic models, to the definition of alternative geometries for the space of the parameters of a neural network. The use of alternative non-Euclidean geometries has a direct impact on the training algorithms: modeling the space of the functions associated to a neural network as a Riemannian manifold makes the gradient depend on the choice of metric tensor.
We conclude the presentation by reviewing some recently proposed training algorithms for neural networks, based on Riemannian optimization algorithms.

53B21 ; 65K10 ; 68T05 ; 92B20
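A concrete instance of the Fisher-Rao viewpoint in this abstract is the natural gradient: preconditioning the Euclidean gradient by the inverse Fisher information matrix, the metric tensor of the model manifold. The sketch below does this for plain logistic regression, a deliberately simple stand-in for the models in the talk; the function name and parameters are our own assumptions.

```python
import numpy as np

def natural_gradient_step(w, X, y, lr=0.5, eps=1e-6):
    """One natural-gradient step for logistic regression: the Euclidean
    gradient of the negative log-likelihood is preconditioned by the inverse
    Fisher information matrix (with a small ridge for numerical safety)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))                  # model probabilities
    grad = X.T @ (p - y) / len(y)                     # Euclidean gradient
    fisher = (X.T * (p * (1.0 - p))) @ X / len(y)     # Fisher information
    return w - lr * np.linalg.solve(fisher + eps * np.eye(len(w)), grad)
```

For this model the Fisher matrix coincides with the expected Hessian of the loss, so the natural-gradient update is invariant under reparametrization of the weights, which is exactly the property that motivates the Riemannian view of training.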

In the scenario where multiple instances of networks with the same nodes are available and nodes are attached to spatial features, it is worth combining both sources of information in order to explain the role of the nodes. Explaining node roles in complex networks is very difficult, yet crucial in different application scenarios such as social science, neuroscience, and computer science. Many efforts have been made to quantify hubs, revealing particular nodes in a network using a given structural property. Yet, for spatio-temporal networks, the identification of node roles remains largely unexplored. In this talk, I will show the limitations of classical methods on a real dataset from brain connectivity, comparing healthy subjects to coma patients. Then, I will present recent work using an equivalence relation on the nodal structural properties. Graphs with the same node set are compared using a new similarity score based on graph structural patterns. This score provides a nodal index to determine node-role distinctiveness in a graph family. Finally, illustrations on different datasets concerning human brain functional connectivity will be described.

05C75 ; 92B20 ; 90B15 ; 62P10
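One simple nodal structural property in the spirit of the hub quantification discussed in this abstract is the within-graph degree z-score, averaged over a family of graphs on the same node set. This is only a hypothetical stand-in for the similarity score of the talk, which the abstract does not specify; the function name is our own.

```python
import numpy as np

def hub_index(adjs):
    """Per-node hub score for a family of graphs on the same node set:
    average the within-graph degree z-score of each node across the family,
    so nodes that act as hubs in every instance score consistently high."""
    scores = []
    for a in adjs:
        deg = a.sum(axis=1)                            # nodal degree
        scores.append((deg - deg.mean()) / (deg.std() + 1e-12))
    return np.mean(scores, axis=0)
```

Averaging z-scores rather than raw degrees makes the index comparable across graphs of different densities, which matters when contrasting, say, connectivity matrices from healthy subjects and patients.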
