
2024 - Semester 1 - Carrillo - Nouri (12 results)


Dynamics of strategic agents and algorithms as PDEs - Hoffmann, Franca (Speaker) | CIRM

Multi angle

We propose a PDE framework for modeling the distribution shift of a strategic population interacting with a learning algorithm. We consider two particular settings: one in which the objectives of the algorithm and the population are aligned, and one in which the algorithm and the population have opposite goals. We present a convergence analysis for both settings, including three different timescales for the opposing-goal dynamics. We illustrate how our framework can accurately model real-world data, and show via synthetic examples how it captures sophisticated distribution changes that cannot be modeled with simpler methods.
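
The PDE in the talk can be read as the mean-field limit of coupled agent/algorithm dynamics. As a purely illustrative particle-level sketch of the opposing-goals setting (the logistic score, quadratic effort cost and fixed-acceptance-rate objective below are hypothetical choices, not the model from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical opposing-goals dynamics: agents gradient-ascend a utility
# (logistic acceptance score minus quadratic effort cost); the algorithm
# gradient-descends the squared gap between its acceptance rate and a target.
n, tau, lam = 2000, 0.5, 1.0
eta_x, eta_th, target_rate = 0.1, 0.05, 0.3
x0 = rng.normal(0.0, 1.0, n)   # innate features
x = x0.copy()                  # strategically shifted features
theta = 1.0                    # published decision threshold

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    s = sigmoid((x - theta) / tau)                      # acceptance scores
    # agents: d/dx [ s(x) - lam/2 (x - x0)^2 ] = s(1-s)/tau - lam (x - x0)
    x += eta_x * (s * (1.0 - s) / tau - lam * (x - x0))
    # algorithm: d/dtheta (mean s - target)^2, with ds/dtheta = -s(1-s)/tau
    grad = 2.0 * (s.mean() - target_rate) * (-(s * (1.0 - s)).mean() / tau)
    theta -= eta_th * grad

print(f"threshold: {theta:.3f}   mean feature shift: {(x - x0).mean():.3f}")
```

Each round, agents near the threshold inflate their feature and the algorithm raises the threshold in response; the evolving histogram of x is the discrete analogue of the distribution shift the PDE tracks.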

Global existence of classical solutions is investigated for a chemotaxis model with local sensing, for a general class of mobility functions. In contrast to the classical Keller-Segel chemotaxis model, no finite-time blowup occurs, but the formation of singularities is possibly shifted to infinite time. In addition, some classes of mobility functions for which solutions are bounded are identified. Joint works with Jie Jiang (Wuhan) and Yanyan Zhang (Shanghai).

35K51 ; 35K55 ; 35B40 ; 35A01
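
As a numerical aside, one common form of the local-sensing model is u_t = Δ(γ(v)u), v_t = Δv + u - v, with a decreasing mobility γ. A minimal explicit 1D scheme, with γ(v) = e^{-v} and all parameters chosen purely for illustration:

```python
import numpy as np

# Explicit 1D scheme for a local-sensing chemotaxis model (one common variant):
#   u_t = (gamma(v) u)_xx,   v_t = v_xx - v + u,   gamma(v) = exp(-v),
# on a periodic domain. Parameters are illustrative only.
N, L, T = 256, 2 * np.pi, 2.0
dx = L / N
dt = 0.2 * dx**2                     # explicit diffusion stability limit
x = np.linspace(0.0, L, N, endpoint=False)
u = 1.0 + 0.5 * np.cos(x)            # cell density; total mass is preserved
v = np.ones(N)                       # chemical signal

def lap(w):                          # periodic 3-point Laplacian
    return (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / dx**2

t = 0.0
while t < T:
    u, v = u + dt * lap(u * np.exp(-v)), v + dt * (lap(v) - v + u)
    t += dt

print(f"mass: {u.mean() * L:.4f} (constant), max density: {u.max():.4f}")
```

Note the structural difference from the classical Keller-Segel flux: here the diffusion operator acts on the product γ(v)u, which is what lies behind the absence of finite-time blowup discussed above.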

We discuss the natural Lagrangian and Eulerian formulations of multi-agent deterministic optimal control problems, analyzing their relations with a novel Kantorovich formulation. We exhibit some equivalence results among the various representations and compare the respective value functions, combining techniques and ideas from optimal transport, control theory, Young measures and evolution equations in Banach spaces. We further exploit the connections between the Lagrangian and Eulerian descriptions to derive consistency results as the number of particles/agents tends to infinity. (In collaboration with Giulia Cavagnari, Stefano Lisini and Carlo Orrieri)

49N80 ; 49Q22
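
To make the two descriptions concrete: in the Lagrangian view each agent follows a controlled ODE, while in the Eulerian view the agent density solves the continuity equation driven by the same velocity field. A toy consistency check with an arbitrary stand-in drift b(x) = -x (not the optimal control of the talk):

```python
import numpy as np

# Same dynamics in both descriptions: particles follow dx/dt = b(x) (Lagrangian);
# the density solves rho_t + (rho b)_x = 0 (Eulerian, upwind finite volumes).
# b(x) = -x is an arbitrary stand-in for an optimal feedback control.
rng = np.random.default_rng(1)
n, N, L, T, dt = 50_000, 400, 8.0, 1.0, 2e-3
dx = L / N
centers = -L / 2 + (np.arange(N) + 0.5) * dx
faces = -L / 2 + np.arange(N + 1) * dx

b = lambda z: -z

X = rng.normal(1.0, 0.5, n)                                        # particles
rho = np.exp(-(centers - 1.0) ** 2 / 0.5) / np.sqrt(0.5 * np.pi)   # same law

for _ in range(int(T / dt)):
    X += dt * b(X)                                  # Lagrangian Euler step
    bf = b(faces)                                   # face velocities
    re = np.pad(rho, 1)                             # zero ghost cells
    flux = np.maximum(bf, 0.0) * re[:-1] + np.minimum(bf, 0.0) * re[1:]
    rho -= dt / dx * (flux[1:] - flux[:-1])         # Eulerian upwind step

hist, _ = np.histogram(X, bins=N, range=(-L / 2, L / 2), density=True)
print(f"L1 gap between the two descriptions: {np.abs(hist - rho).sum() * dx:.4f}")
```

The small L1 gap (shrinking as n grows and dx, dt shrink) is a discrete shadow of the consistency results mentioned in the abstract.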

Motivated by the task of sampling measures in high dimensions, we will discuss a number of gradient flows in spaces of measures, including the Wasserstein gradient flow of the Maximum Mean Discrepancy, the Hellinger gradient flow of the relative entropy, Stein Variational Gradient Descent, and a new projected dynamic gradient flow. For all of these flows we will consider their deterministic interacting-particle approximations. The talk will highlight some of the properties of the flows and indicate their differences. In particular, we will discuss how well the interacting particles can approximate the target measures. The talk is based on joint works with Anna Korba, Lantian Xu, Sangmin Park, Yulong Lu, and Lihan Wang.

35Q62 ; 35Q70 ; 82C21 ; 62D05 ; 45M05
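
Of the flows listed above, the Wasserstein gradient flow of the Maximum Mean Discrepancy has a particularly simple deterministic particle approximation: each particle descends the witness function k*μ - k*ν. A sketch with a Gaussian kernel (kernel width, step size and the two-cluster data are assumptions):

```python
import numpy as np

# Deterministic particle approximation of the Wasserstein gradient flow of
# MMD with a Gaussian kernel: each particle moves along the negative gradient
# of the witness function k*mu - k*nu evaluated at its own position.
rng = np.random.default_rng(2)
n, m, d, sigma, lr, steps = 300, 300, 2, 0.5, 0.5, 500

Y = rng.normal((2.0, 0.0), 0.7, size=(m, d))    # target samples (nu)
X = rng.normal((-2.0, 0.0), 0.3, size=(n, d))   # initial particles (mu)

def mean_grad_k(A, B):
    """Mean over b in B of grad_a k(a, b), with k(a,b) = exp(-|a-b|^2/(2 sigma^2))."""
    diff = A[:, None, :] - B[None, :, :]
    k = np.exp(-(diff**2).sum(-1) / (2.0 * sigma**2))
    return (-diff / sigma**2 * k[..., None]).mean(axis=1)

for _ in range(steps):
    # particle velocity: -grad(k*mu - k*nu)
    X -= lr * (mean_grad_k(X, X) - mean_grad_k(X, Y))

print("particle mean:", X.mean(0).round(3), " target mean:", Y.mean(0).round(3))
```

A known caveat, independent of this talk: MMD particle flows can stall far from the target when the kernel is too narrow, one of the behaviors in which the various flows differ.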


Suppression of chemotactic blow-up by buoyancy - Yao, Yao (Speaker) | CIRM

Multi angle

Chemotactic blow-up in the context of the Keller-Segel equation is an extensively studied phenomenon. In recent years, it has been shown that when the Keller-Segel equation is coupled with passive advection, blow-up can be prevented if the flow possesses mixing or diffusion-enhancing properties and its amplitude is sufficiently strong. In this talk, we consider the Keller-Segel equation coupled with an active advection: an incompressible flow obeying Darcy's law for the incompressible porous medium equation, driven by the buoyancy force. We prove that, in contrast with passive advection, this active coupling is capable of suppressing chemotactic blow-up at arbitrarily small coupling strength: namely, the system always has globally regular solutions. (Joint work with Zhongtian Hu and Alexander Kiselev)

35B35 ; 35K55 ; 76B03
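
For orientation, the uncoupled parabolic-elliptic Keller-Segel system reads ρ_t = Δρ - χ∇·(ρ∇c), -Δc + c = ρ. The 2D buoyancy-coupled system of the talk is beyond a short sketch, but a 1D toy version (where no blow-up occurs, yet the aggregation mechanism is visible) can be simulated explicitly; all parameters are illustrative:

```python
import numpy as np

# 1D parabolic-elliptic Keller-Segel toy:
#   rho_t = rho_xx - chi (rho c_x)_x,   -c_xx + c = rho   (periodic domain).
# In 1D no blow-up occurs; the sketch only exhibits chemotactic aggregation.
N, L, chi, T = 256, 2 * np.pi, 5.0, 0.5
dx = L / N
dt = 0.1 * dx**2
x = np.linspace(0.0, L, N, endpoint=False)
rho = 1.0 + 0.5 * np.cos(x)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # wavenumbers for the c-solve

def d1(w):                                     # centered derivative, periodic
    return (np.roll(w, -1) - np.roll(w, 1)) / (2.0 * dx)

def lap(w):
    return (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / dx**2

t = 0.0
while t < T:
    c = np.fft.ifft(np.fft.fft(rho) / (k**2 + 1.0)).real   # -c_xx + c = rho
    rho = rho + dt * (lap(rho) - chi * d1(rho * d1(c)))
    t += dt

print(f"mass: {rho.mean() * L:.4f}, max density: {rho.max():.3f}")
```

The growing density maximum shows the chemotactic focusing that, in 2D and at large mass, drives the blow-up that the buoyancy coupling of the talk suppresses.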

Sampling is a fundamental task in machine learning. For instance, in Bayesian machine learning, one has to sample from the posterior distribution over the parameters of a learning model, whose density is known up to a normalizing constant. In other settings, such as generative modelling, one has to sample from a distribution from which some samples (e.g. images) are available. The task of sampling can be seen as an optimization problem over the space of probability measures. The mathematical theory providing the tools and concepts for optimization over the space of probability measures is the theory of optimal transport. The topic of this course will be the connection between optimization and sampling, more precisely, how to solve sampling problems using optimization ideas. The goal of the first part of the course will be to present two important concepts from optimal transport: Wasserstein gradient flows and geodesic convexity. We will introduce them by analogy with their Euclidean counterparts, which are well known in optimization. The goal of the second part will be to show how these concepts, along with standard optimization techniques, make it possible to design, improve and analyze various sampling algorithms. In particular, we will focus on several interacting-particle schemes that achieve state-of-the-art performance in machine learning.
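
A concrete instance of "sampling as optimization": the unadjusted Langevin algorithm is, up to time discretization, the Wasserstein gradient flow of the KL divergence to the target π ∝ e^{-U}. A minimal sketch with a standard Gaussian target (step size and particle count are arbitrary):

```python
import numpy as np

# Unadjusted Langevin Algorithm (ULA): a time discretization of the
# Wasserstein gradient flow of KL(. || pi) for pi proportional to exp(-U).
# Target (standard Gaussian), step size and particle count are illustrative.
rng = np.random.default_rng(3)

def grad_U(x):
    return x          # U(x) = |x|^2 / 2  =>  pi = N(0, I)

n, d, eta, steps = 5000, 2, 0.05, 2000
x = rng.normal(0.0, 3.0, (n, d))              # deliberately bad initialization

for _ in range(steps):
    # gradient step on U plus Gaussian noise: the optimization view of sampling
    x += -eta * grad_U(x) + np.sqrt(2.0 * eta) * rng.normal(size=(n, d))

print("mean:", x.mean(0).round(3), " marginal variances:", x.var(0).round(3))
```

The step size eta plays exactly the role of a learning rate, and the discrete chain carries an O(eta) bias relative to the continuous-time flow, one instance of the optimization/sampling dictionary the course develops.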

The aggregation-diffusion equation is a nonlocal PDE that arises in the collective motion of cells. Mathematically, it is driven by two competing effects: local repulsion, modelled by nonlinear diffusion, and long-range attraction, modelled by nonlocal interaction. In this course, I will discuss several qualitative properties of its steady states and dynamical solutions. Using continuous Steiner symmetrization techniques, we show that all steady states are radially symmetric up to a translation (joint with Carrillo, Hittmeir and Volzone). Once the symmetry is known, we further investigate whether steady states are unique within the radial class, and show that for a given mass, the uniqueness/non-uniqueness of steady states is determined by the power of the degenerate diffusion, with the critical power being m = 2 (joint with Delgadino and Yan). I will also discuss some properties of the long-time behavior of the aggregation-diffusion equation with linear diffusion (joint with Carrillo, Gomez-Castro and Zeng), and the global well-posedness of the Keller-Segel equation when coupled with an active advection term (joint with Hu and Kiselev).

35B35 ; 35K55 ; 76B03
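
As a numerical illustration of the competition described above, consider the 1D model u_t = Δu^m + ∇·(u ∇W*u) with the attractive quadratic potential W(x) = x²/2, for which (∇W*u)(x) = M(x - x_cm) (M the total mass, x_cm the center of mass). The scheme and all parameters below are assumptions; m = 3 is chosen above the critical power m = 2 mentioned in the abstract:

```python
import numpy as np

# 1D aggregation-diffusion sketch: u_t = (u^m)_xx + (u (W' * u))_x with the
# attractive quadratic W(x) = x^2/2, so (W' * u)(x) = M (x - x_cm).
# Upwind finite volumes for the transport part; scheme is illustrative only.
N, L, m, T, dt = 400, 10.0, 3.0, 2.0, 1e-4
dx = L / N
centers = -L / 2 + (np.arange(N) + 0.5) * dx
faces = -L / 2 + np.arange(N + 1) * dx
u = np.where(np.abs(centers) < 2.0, 0.3, 0.0)     # box initial datum

t = 0.0
while t < T:
    M = u.sum() * dx
    xcm = (centers * u).sum() * dx / M
    drift = -M * (faces - xcm)                    # velocity -(W' * u) at faces
    ue = np.pad(u, 1)                             # zero ghost cells
    flux = np.maximum(drift, 0.0) * ue[:-1] + np.minimum(drift, 0.0) * ue[1:]
    um = np.pad(u**m, 1, mode="edge")             # no-flux diffusion at walls
    u = u + dt * ((um[2:] - 2.0 * u**m + um[:-2]) / dx**2
                  - (flux[1:] - flux[:-1]) / dx)
    t += dt

print(f"mass: {u.sum() * dx:.4f}, support width: {(u > 1e-4).sum() * dx:.2f}")
```

The solution settles onto a compactly supported bump in which outward degenerate diffusion balances the inward drift toward the center of mass, a discrete caricature of the steady states whose symmetry and uniqueness the course analyzes.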

We investigate the mean-field limit of large networks of interacting biological neurons. The neurons are represented by so-called integrate-and-fire models, which follow the membrane potential of each neuron and capture individual spikes. However, we do not assume any structure on the graph of interactions; instead we consider arbitrary connection weights between neurons that obey a generic mean-field scaling. We are able to extend the concept of extended graphons, introduced in Jabin-Poyato-Soler, by introducing a novel notion of discrete observables in the system. This is a joint work with D. Zhou.

35Q49 ; 35Q83 ; 35R02 ; 35Q70 ; 05C90 ; 60G09 ; 35R06 ; 35Q89 ; 35Q92 ; 49N80 ; 92B20 ; 65N75
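
A toy discrete counterpart of the setting above: N leaky integrate-and-fire neurons with arbitrary nonnegative weights of size O(1/N) and no structural assumption on the interaction graph. The dynamics and every parameter are illustrative, not the model of the talk:

```python
import numpy as np

# Toy network of leaky integrate-and-fire neurons with random nonnegative
# weights under a generic mean-field scaling (w_ij = O(1/N)); the graph of
# interactions is otherwise unstructured.
rng = np.random.default_rng(4)
N, T, dt = 500, 2.0, 1e-3
v_th, v_reset, tau, I_ext = 1.0, 0.0, 1.0, 1.2

W = rng.uniform(0.0, 2.0 / N, size=(N, N))    # generic mean-field scaling

v = rng.uniform(0.0, v_th, N)                 # membrane potentials
spikes = 0
t = 0.0
while t < T:
    fired = v >= v_th                         # threshold crossing = spike
    spikes += int(fired.sum())
    v[fired] = v_reset                        # reset after a spike
    # leak + external drive + spike input routed through the random graph
    v += dt * (-v / tau + I_ext) + W @ fired.astype(float)
    t += dt

print(f"mean rate: {spikes / (N * T):.2f} spikes per neuron per unit time")
```

Population-level quantities such as the empirical firing rate are examples of the kind of observables whose N → ∞ behavior the mean-field limit describes.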
