While message-passing neural networks (MPNNs) are the most popular architectures for graph learning, their expressive power is inherently limited. In order to gain expressive power while retaining efficiency, several recent works apply MPNNs to subgraphs of the original graph. As a starting point, the talk will introduce the Equivariant Subgraph Aggregation Networks (ESAN) architecture, which is a representative framework for this class of methods. In ESAN, each graph is represented as a set of subgraphs, selected according to a predefined policy. The sets of subgraphs are then processed using an equivariant architecture designed specifically for this purpose. I will then present a recent follow-up work that revisits the symmetry group proposed in ESAN and shows that a more precise choice can be made if we restrict our attention to a specific popular family of subgraph selection policies. We will see that, using this observation, one can make a direct connection between subgraph GNNs and Invariant Graph Networks (IGNs), thus providing new insights into the expressive power and design space of subgraph GNNs.
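To make the notion of a subgraph selection policy concrete, here is a minimal sketch of one policy commonly discussed in this line of work (node deletion): each graph becomes the bag of subgraphs obtained by deleting a single node, which a shared MPNN would then process equivariantly. The function name and the use of networkx are illustrative assumptions, not the ESAN authors' implementation.

```python
# Illustrative sketch of a subgraph selection policy in the spirit of ESAN.
# Names are hypothetical, not the authors' code.
import networkx as nx

def node_deletion_policy(graph: nx.Graph):
    """Return the bag of subgraphs obtained by deleting one node at a time."""
    subgraphs = []
    for v in graph.nodes():
        sg = graph.copy()
        sg.remove_node(v)
        subgraphs.append(sg)
    return subgraphs

g = nx.cycle_graph(5)
bag = node_deletion_policy(g)
# 5 subgraphs, each a path with 3 edges
print(len(bag), [sg.number_of_edges() for sg in bag])
```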
68T05 ; 05C60 ; 68R10
A non-backtracking walk on a graph is a directed path in which no edge is the inverse of its preceding edge. The non-backtracking matrix of a graph is indexed by its directed edges and can be used to count non-backtracking walks of a given length. It has been used recently in the context of community detection and has appeared previously in connection with the Ihara zeta function and in some generalizations of Ramanujan graphs. In this work, we study the largest eigenvalues of the non-backtracking matrix of the Erdős–Rényi random graph and of the Stochastic Block Model in the regime where the number of edges is proportional to the number of vertices. Our results confirm the "spectral redemption" conjecture that community detection can be performed on the basis of the leading eigenvectors above the feasibility threshold.
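As an illustration of the object under study, the following sketch builds the non-backtracking matrix of a graph directly from the definition above and computes its leading eigenvalues. The directed-edge indexing convention is one standard choice, and the graph parameters are illustrative of the sparse regime.

```python
# Sketch: non-backtracking matrix B, indexed by directed edges, with
# B[e, f] = 1 iff edge f continues edge e without immediate backtracking.
import numpy as np
import networkx as nx

def non_backtracking_matrix(graph: nx.Graph):
    edges = [(u, v) for u, v in graph.edges()] + [(v, u) for u, v in graph.edges()]
    index = {e: i for i, e in enumerate(edges)}
    B = np.zeros((len(edges), len(edges)))
    for (u, v) in edges:
        for w in graph.neighbors(v):
            if w != u:  # forbid the walk from reversing its last step
                B[index[(u, v)], index[(v, w)]] = 1.0
    return B

# Sparse Erdős–Rényi graph with mean degree ~ 3 (illustrative parameters).
g = nx.erdos_renyi_graph(200, 3 / 200, seed=0)
B = non_backtracking_matrix(g)
eigvals = np.linalg.eigvals(B)
print(sorted(eigvals, key=abs, reverse=True)[:3])
```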
05C50 ; 05C80 ; 68T05 ; 91D30
The advent of "Big Data" is profoundly changing our understanding of the algorithmic processing of information. The centre of gravity has shifted from computation to data, and scalability has become a central notion. In particular, taking into account the geographic location of data, the cost of moving it, and its availability have become major factors in application design.
This new data-centric, scaling-aware vision completely renews the problems of algorithmics and programming, both in the theoretical tools used and in the practical methodologies deployed. This talk will present some of the aspects affected by this shift and propose directions for adapting computer science education to this new landscape.
68P05 ; 68T05 ; 68W40
In this presentation, based on online demonstrations of algorithms and on the examination of several practical examples, I will reflect on the problem of modeling a detection task in images. I will place myself in the (very frequent) case where the detection task cannot be formulated in a Bayesian framework or, rather equivalently, cannot be solved by simultaneous learning of a model of the object and a model of the background. (In the case where there are plenty of examples of the background and of the object to be detected, neural networks provide a practical answer, but one without explanatory power.) Nevertheless, for detection without "learning", I will show that we cannot avoid building a background model, or possibly learning it. But this will not require many examples.
Joint work with Axel Davy, Tristan Dagobert, Agnes Desolneux, Thibaud Ehret.
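As a hedged illustration of detection against a background model without learning the object (this is not the speaker's actual method, only a toy instance of the idea), the sketch below declares detections only when the expected number of false alarms over the whole image, under a pure-noise background model, stays below a target.

```python
# Illustrative sketch: threshold detections so that the expected number of
# false alarms under the background model is at most epsilon.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(128, 128))  # pure-noise background model
image[60:64, 60:64] += 5.0                     # hypothetical bright target

n_tests = image.size
epsilon = 1.0  # allow ~1 false alarm on average over the image
# Threshold t chosen so that n_tests * P(noise > t) <= epsilon.
t = norm.isf(epsilon / n_tests)
detections = image > t
print(detections.sum(), "pixels detected above threshold", round(t, 2))
```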
65D18 ; 68U10 ; 68T05
Many kinds of data can be represented as rankings or permutations, raising the question of developing machine learning models on the symmetric group. When the number of items in the permutations gets large, manipulating permutations quickly becomes computationally intractable. I will discuss two computationally efficient embeddings of the symmetric group into Euclidean spaces that lead to fast machine learning algorithms, and illustrate their relevance on biological applications and image classification.
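One simple embedding of this flavor, written here as a hedged sketch (the exact constructions presented in the talk may differ), maps a permutation to its vector of pairwise-order indicators, so that inner products of embeddings recover the unnormalized Kendall tau statistic and linear methods on the embedding amount to Kendall-kernel machines.

```python
# Sketch: embed a permutation of n items into R^{n(n-1)/2} via pairwise
# order indicators; computable in O(n^2).
import numpy as np
from itertools import combinations

def kendall_embedding(sigma):
    """Embed a permutation (given as ranks) as signs of pairwise differences."""
    sigma = np.asarray(sigma)
    return np.array([np.sign(sigma[i] - sigma[j])
                     for i, j in combinations(range(len(sigma)), 2)])

a = kendall_embedding([0, 1, 2, 3])
b = kendall_embedding([1, 0, 2, 3])  # one adjacent transposition away
# Dot product = concordant minus discordant pairs: 5 - 1 = 4 of 6 pairs agree.
print(a @ b, len(a))
```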
62H30 ; 62P10 ; 68T05
In this talk I will present some recent developments in model-free reinforcement learning applied to large state spaces, with an emphasis on deep learning and its role in estimating action-value functions. The talk will cover a variety of model-free algorithms, including variations on Q-Learning, and some of the main techniques that make the approach practical. I will illustrate the usefulness of these methods with examples drawn from the Arcade Learning Environment, the popular set of Atari 2600 benchmark domains.
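As a hedged reminder of the core update behind these methods, here is a minimal tabular Q-Learning sketch; deep variants such as DQN replace the table with a neural network trained on the same temporal-difference target. State/action counts and hyperparameters are illustrative.

```python
# Minimal tabular Q-Learning sketch (illustrative sizes and hyperparameters).
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

def q_learning_step(s, a, r, s_next, done):
    # TD target bootstraps from the greedy value of the next state.
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

def epsilon_greedy(s):
    # Explore uniformly with probability epsilon, otherwise act greedily.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())

# One illustrative transition.
s = 0
a = epsilon_greedy(s)
q_learning_step(s, a, r=1.0, s_next=1, done=False)
print(Q[s, a])
```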
68Q32 ; 91A25 ; 68T05
In this talk, I will introduce the classical theory of multi-armed bandits, a field at the junction of statistics, optimization, game theory and machine learning, discuss possible applications, and highlight the new perspectives and open questions that they raise.
We consider competitive capacity investment for a duopoly of two distinct producers. The producers are exposed to stochastically fluctuating costs and interact through aggregate supply. Capacity expansion is irreversible and modeled in terms of timing strategies characterized through threshold rules. Because the impact of changing costs on the producers is asymmetric, we are led to a nonzero-sum timing game describing the transitions among the discrete investment stages. Working in a continuous-time diffusion framework, we characterize and analyze the resulting Nash equilibrium and game values. Our analysis quantifies the dynamic competition effects and yields insight into dynamic preemption and over-investment in a general asymmetric setting. A case study considering the impact of fluctuating emission costs on power producers investing in nuclear and coal-fired plants is also presented.
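To make the bandit setting in the first part concrete, here is a hedged sketch of the classical UCB1 strategy: play the arm whose empirical mean plus an exploration bonus is largest. The Bernoulli arm means and horizon are synthetic, for illustration only.

```python
# Sketch of UCB1 on synthetic Bernoulli arms (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])      # hypothetical arm means
counts = np.ones(len(means))           # play each arm once to initialize
sums = rng.binomial(1, means).astype(float)

for t in range(len(means), 10_000):
    # Empirical mean plus exploration bonus; the bonus shrinks as an arm
    # is played more often.
    ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
    arm = int(ucb.argmax())
    sums[arm] += rng.binomial(1, means[arm])
    counts[arm] += 1

print(counts)  # the best arm (mean 0.7) should dominate the plays
```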
62L05 ; 68T05 ; 91A26 ; 91A80 ; 91B26
Neural networks form a varied class of computational models, used in machine learning for both supervised and unsupervised learning. Several network topologies have been proposed in the literature since the preliminary work of the late 50s, including models based on undirected probabilistic graphical models, such as (Restricted) Boltzmann Machines, and on multi-layer feed-forward computational graphs. The training of a neural network is usually performed by minimizing a cost function, such as the negative log-likelihood. During the talk we will review alternative geometries used to describe the space of the functions encoded by a neural network, parametrized by its connection weights, and their implications for the optimization of the cost function during training, from the perspective of Riemannian optimization.
In the first part of the presentation, we will introduce a probabilistic interpretation of neural networks, which goes back to the work of Amari and coauthors in the 90s and is based on the Fisher-Rao metric studied in Information Geometry. In this framework, the weights of a Boltzmann Machine, and similarly those of a feed-forward neural network, are interpreted as the parameters of a (joint) statistical model for the observed, and possibly latent, variables.
In the second part of the talk, we will review other approaches to defining alternative geometries for the space of the parameters of a neural network, motivated by invariance principles in neural networks and not explicitly based on probabilistic models. The use of alternative non-Euclidean geometries has a direct impact on training algorithms: modeling the space of functions associated with a neural network as a Riemannian manifold makes the gradient depend on the choice of metric tensor. We conclude the presentation by reviewing some recently proposed training algorithms for neural networks based on Riemannian optimization.
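As a hedged one-parameter illustration of this Riemannian viewpoint, the sketch below performs natural-gradient descent on a toy Bernoulli model: the Euclidean gradient of the negative log-likelihood is preconditioned by the inverse Fisher information, i.e., steepest descent in the Fisher-Rao geometry rather than the Euclidean one. The model, data, and step size are assumptions for illustration only.

```python
# Natural-gradient sketch on a toy Bernoulli(theta) model (illustrative only).
import numpy as np

def fisher_information(theta):
    # Fisher information of a Bernoulli(theta) model.
    return 1.0 / (theta * (1.0 - theta))

def nll_grad(theta, x):
    # Gradient of the negative log-likelihood for one observation x in {0, 1}.
    return -(x / theta) + (1 - x) / (1 - theta)

theta, lr = 0.2, 0.05
data = [1, 1, 0, 1, 1, 1, 0, 1]
for x in data:
    g = nll_grad(theta, x)
    theta -= lr * g / fisher_information(theta)  # natural gradient: F^{-1} grad
    theta = float(np.clip(theta, 1e-3, 1 - 1e-3))
print(round(theta, 3))  # moves toward the empirical mean 0.75
```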
53B21 ; 65K10 ; 68T05 ; 92B20