
Documents 68T05: 35 results

A non-backtracking walk on a graph is a directed path such that no edge is the inverse of its preceding edge. The non-backtracking matrix of a graph is indexed by its directed edges and can be used to count non-backtracking walks of a given length. It has been used recently in the context of community detection and has appeared previously in connection with the Ihara zeta function and in some generalizations of Ramanujan graphs. In this work, we study the largest eigenvalues of the non-backtracking matrix of the Erdős–Rényi random graph and of the Stochastic Block Model in the regime where the number of edges is proportional to the number of vertices. Our results confirm the "spectral redemption" conjecture that community detection can be made on the basis of the leading eigenvectors above the feasibility threshold.
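
To make the definition concrete, here is a minimal sketch (not from the talk) that builds the non-backtracking matrix of a sparse Erdős–Rényi graph and prints its largest eigenvalues. It assumes numpy and networkx; the parameters n and c are arbitrary illustrative choices.

```python
# Minimal sketch: build the non-backtracking matrix B of a small
# Erdos-Renyi graph and inspect its leading eigenvalues.
# Assumes networkx and numpy; parameters are illustrative.
import numpy as np
import networkx as nx

n, c = 200, 3.0                      # n vertices, average degree c
G = nx.erdos_renyi_graph(n, c / n, seed=0)

# Directed edges: each undirected edge {u, v} gives (u, v) and (v, u).
edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(edges)}

# B[(u,v),(x,y)] = 1 iff the walk can continue: v == x and y != u
# (the next edge starts where the previous one ends and is not its inverse).
B = np.zeros((len(edges), len(edges)))
for (u, v) in edges:
    for y in G.neighbors(v):
        if y != u:
            B[index[(u, v)], index[(v, y)]] = 1

# Powers of B count non-backtracking walks; in this sparse regime the
# top eigenvalue should concentrate near the average degree c.
eigvals = np.linalg.eigvals(B)
print(sorted(eigvals, key=abs, reverse=True)[:5])
```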

05C50 ; 05C80 ; 68T05 ; 91D30


Big Data: Tremendous challenges, great solutions - Bougé, Luc (Speaker) | CIRM H

Multi angle

The emergence of "Big Data" is profoundly changing our understanding of the algorithmic processing of information. The center of gravity has shifted from computation to data, and scalability has become a central notion. In particular, accounting for the geographic location of data, for the cost of moving it, and for its availability has become a major factor in application design.
This new data-centric, scaling-aware vision completely reshapes the problems of algorithmics and programming, both in the theoretical tools used and in the practical methodologies deployed. This talk presents some of the aspects affected and suggests directions for adapting computer science education to this new landscape.

68P05 ; 68T05 ; 68W40


Algorithms in high-dimensional non-convex landscapes - Zdeborova, Lenka (Speaker) | CIRM H

Multi angle

Analysis of algorithms in noisy, high-dimensional probabilistic problems poses many current challenges. In a subclass of these problems, the corresponding challenges can be overcome with methods from statistical mechanics. I will review some related recent work, together with progress on the rigorous justification of the corresponding results.

68T05 ; 62P35 ; 68W25

While message-passing neural networks (MPNNs) are the most popular architectures for graph learning, their expressive power is inherently limited. In order to gain increased expressive power while retaining efficiency, several recent works apply MPNNs to subgraphs of the original graph. As a starting point, the talk will introduce the Equivariant Subgraph Aggregation Networks (ESAN) architecture, which is a representative framework for this class of methods. In ESAN, each graph is represented as a set of subgraphs, selected according to a predefined policy. The sets of subgraphs are then processed using an equivariant architecture designed specifically for this purpose. I will then present a recent follow-up work that revisits the symmetry group suggested in ESAN and suggests that a more precise choice can be made if we restrict our attention to a specific popular family of subgraph selection policies. We will see that using this observation, one can make a direct connection between subgraph GNNs and Invariant Graph Networks (IGNs), thus providing new insights into subgraph GNNs' expressive power and design space.
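
As a rough illustration of the bag-of-subgraphs idea (not the actual ESAN architecture), the toy numpy sketch below embeds a graph as the aggregated set of its node-deleted subgraphs, each processed by a shared one-layer message-passing encoder. All function names and dimensions are made up for illustration.

```python
# Toy sketch of the bag-of-subgraphs idea: represent a graph as its
# node-deleted subgraphs, run a shared message-passing encoder on each,
# then aggregate the set of subgraph embeddings. Assumes numpy only.
import numpy as np

def mpnn_layer(A, X, W):
    """One round of message passing: average neighbor features, then a
    linear map and ReLU (a minimal GCN-style layer)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    return np.maximum((A @ X) / deg @ W, 0)

def bag_of_subgraphs_embedding(A, X, W):
    """Embed a graph as the mean over node-deleted subgraph embeddings."""
    n = A.shape[0]
    embeddings = []
    for v in range(n):                      # policy: delete one node at a time
        keep = [u for u in range(n) if u != v]
        A_sub, X_sub = A[np.ix_(keep, keep)], X[keep]
        H = mpnn_layer(A_sub, X_sub, W)
        embeddings.append(H.mean(axis=0))   # pool nodes -> subgraph embedding
    return np.mean(embeddings, axis=0)      # aggregate the set of subgraphs

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T              # random undirected graph
X = rng.standard_normal((6, 4))             # node features
W = rng.standard_normal((4, 8))             # shared encoder weights
print(bag_of_subgraphs_embedding(A, X, W))
```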

68T05 ; 05C60 ; 68R10


Clustering with tangles - von Luxburg, Ulrike (Speaker) ; Klepper, Solveig (Speaker) | CIRM H

Multi angle

Originally, tangles were invented as an abstract tool in mathematical graph theory to prove the famous graph minor theorem. In the talk, I will showcase the potential of tangles in machine learning applications. Given a collection of cuts of any dataset, tangles aggregate these cuts to point in the direction of a dense structure. As a result, a cluster is softly characterized by a set of consistent pointers. This highly flexible approach can solve clustering problems in various setups, ranging from questionnaires, through community detection in graphs, to clustering points in metric spaces.
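
A toy version of the aggregation step, under simplifying assumptions: cuts are threshold bipartitions of a 1-D point set, and an orientation of the cuts counts as a tangle when every triple of chosen sides still overlaps in at least a few points, so that all cuts point at a dense region. The code and the agreement parameter are illustrative, not the talk's construction.

```python
# Toy tangle sketch: brute-force all orientations of a few cuts and keep
# those where every triple of oriented sides overlaps substantially.
# Assumes numpy; data and thresholds are made up.
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(1)
# Two 1-D clusters, around 0 and around 10.
points = np.concatenate([rng.normal(0, 1, 20), rng.normal(10, 1, 20)])

# Cuts: threshold bipartitions; side True = "points above t".
cuts = [points > t for t in (2.0, 4.0, 6.0, 8.0)]

def tangles(cuts, agreement=5):
    """Return orientations in which every triple of chosen sides shares
    at least `agreement` points (the consistency condition)."""
    found = []
    for signs in product([True, False], repeat=len(cuts)):
        sides = [c if s else ~c for c, s in zip(cuts, signs)]
        if all((s1 & s2 & s3).sum() >= agreement
               for s1, s2, s3 in combinations(sides, 3)):
            found.append(signs)
    return found

# Expect two consistent orientations, one per cluster: all cuts oriented
# toward the cluster near 0, or all toward the cluster near 10.
print(tangles(cuts))
```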

68T05


Statistical contract theory - Jordan, Michael (Speaker) | CIRM H

Multi angle

Contract theory is the study of economic incentives when parties transact in the presence of private information. We augment classical contract theory to incorporate a role for learning from data, where the overall goal of the adaptive mechanism is to obtain desired statistical behavior. We consider applications of this framework to problems in federated learning, the delegation of data collection, and principal-agent regulatory mechanisms.

68T05


Privacy: definitions, procedures, open problems - Duchi, John (Speaker) | CIRM H

Multi angle

I will provide a broad overview of differential privacy, which gives guarantees that a data analysis protects the privacy of data contributors. The main focus will be on the private computation and release of statistics, both classical (low-dimensional) and high-dimensional. In addition to giving a high-level program for the development of optimal private estimators, I will likely discuss a few open questions as well.
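
As background for the kind of guarantee discussed here, a minimal sketch of the textbook Laplace mechanism (not the talk's own estimators): release a clipped mean with epsilon-differential privacy by adding noise scaled to the statistic's sensitivity. Assumes numpy; the clipping range and epsilon are illustrative.

```python
# Laplace mechanism for a private mean: clipping bounds the sensitivity,
# and Laplace noise of scale sensitivity/epsilon yields epsilon-DP.
import numpy as np

def private_mean(x, lo, hi, epsilon, rng):
    """Release the mean of values clipped to [lo, hi] with
    epsilon-differential privacy. The sensitivity of the clipped mean
    to changing one record is (hi - lo) / n."""
    x = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = rng.normal(50, 10, size=1000)       # made-up data
print(private_mean(data, lo=0, hi=100, epsilon=0.5, rng=rng))
```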

68T05 ; 94A60


Large-scale machine learning and convex optimization 1/2 - Bach, Francis (Speaker) | CIRM H

Multi angle

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
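
A minimal single-pass sketch of this stochastic approximation setting on least squares, with Polyak-Ruppert averaging of the iterates; the step sizes, problem dimensions, and averaging scheme are illustrative choices, not the algorithms analyzed in the tutorial.

```python
# Single-pass SGD on least squares with iterate averaging: each sample
# is used exactly once, and the running average of the iterates is
# typically more accurate than the last iterate. Assumes numpy.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10_000, 20
w_star = rng.standard_normal(p)              # ground-truth weights
X = rng.standard_normal((n, p))
y = X @ w_star + 0.1 * rng.standard_normal(n)

w = np.zeros(p)
w_bar = np.zeros(p)
for t in range(n):                           # one pass over the data
    x_t, y_t = X[t], y[t]
    grad = (x_t @ w - y_t) * x_t             # stochastic gradient of squared loss
    w -= 0.5 / (p * np.sqrt(t + 1)) * grad   # decaying step, scaled for stability
    w_bar += (w - w_bar) / (t + 1)           # running average of the iterates

print("last iterate error:    ", np.linalg.norm(w - w_star))
print("averaged iterate error:", np.linalg.norm(w_bar - w_star))
```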

62L20 ; 68T05 ; 90C06 ; 90C25
