Documents - Pustelnik, Nelly - 8 results


Optimization - lecture 1 - Pustelnik, Nelly (Author of the talk) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms and (ii) deep learning and stochastic optimization. This course aims to illustrate these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and to highlight the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several, possibly non-smooth, convex terms, together with the large size of the problems at hand, make standard optimization methods, such as those based on subgradient descent, computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated separately in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in nonconvex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to contributions in inverse problems and compressed sensing. This concept will be described, together with the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has begun to provide a new framework for solving imaging problems, ranging from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First-order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
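As a concrete companion to items 3 and 4 of the outline, the sketch below shows the proximity operator of the l1 norm (soft-thresholding) and plain forward-backward (ISTA) iterations on an l1-regularized least-squares toy problem. It is an illustrative example under assumed data and step size, written in NumPy, and not material from the lecture.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    # Forward-backward (ISTA) iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    gamma = 1.0 / L                            # step size in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward (proximal) step
    return x

# Toy example: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[3, 40, 90]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(64)
x_hat = forward_backward(A, y, lam=0.1)
print("support found:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```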

49N45 ; 94A08


Optimization - lecture 2 - Pustelnik, Nelly (Author of the talk) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms and (ii) deep learning and stochastic optimization. This course aims to illustrate these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and to highlight the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several, possibly non-smooth, convex terms, together with the large size of the problems at hand, make standard optimization methods, such as those based on subgradient descent, computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated separately in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in nonconvex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to contributions in inverse problems and compressed sensing. This concept will be described, together with the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has begun to provide a new framework for solving imaging problems, ranging from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First-order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
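Among the splitting schemes of item 4, Douglas-Rachford only calls the proximity operators of the two terms. The toy sketch below assumes the objective 0.5*||x - b||^2 + lam*||x||_1, for which both proximity operators are explicit; the data and parameters are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def prox_l1(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    # Proximity operator of t * 0.5 * ||. - b||^2, available in closed form.
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, lam, gamma=1.0, n_iter=200):
    # Douglas-Rachford iterations for min_x 0.5*||x - b||^2 + lam*||x||_1.
    z = np.zeros_like(b)
    for _ in range(n_iter):
        x = prox_quad(z, gamma, b)              # prox of the quadratic term
        y = prox_l1(2 * x - z, gamma * lam)     # reflected step through the l1 prox
        z = z + (y - x)                         # relaxation parameter set to 1
    return prox_quad(z, gamma, b)

b = np.array([3.0, 0.2, -1.5, 0.05])
print(douglas_rachford(b, lam=0.5))             # small entries are shrunk to zero
```

For this simple objective the output can be checked against the closed-form solution, namely soft-thresholding of b at level lam.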

49N45 ; 94A08


Optimization - lecture 3 - Pustelnik, Nelly (Author of the talk) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms and (ii) deep learning and stochastic optimization. This course aims to illustrate these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and to highlight the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several, possibly non-smooth, convex terms, together with the large size of the problems at hand, make standard optimization methods, such as those based on subgradient descent, computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated separately in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in nonconvex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to contributions in inverse problems and compressed sensing. This concept will be described, together with the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has begun to provide a new framework for solving imaging problems, ranging from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First-order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
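Item 5 of the outline covers proximal primal-dual algorithms; a standard showcase is 1D total-variation denoising, min_x 0.5*||x - y||^2 + lam*||Dx||_1 with D a finite-difference operator. The Chambolle-Pock-style sketch below uses arbitrarily chosen step sizes satisfying tau*sigma*||D||^2 <= 1 and is illustrative only, not an excerpt from the course.

```python
import numpy as np

def tv_denoise_primal_dual(y, lam, n_iter=300):
    # Primal-dual iterations for min_x 0.5*||x - y||^2 + lam*||Dx||_1.
    n = y.size
    D = np.diff(np.eye(n), axis=0)            # forward finite differences, shape (n-1, n)
    tau = sigma = 0.5                          # tau * sigma * ||D||^2 <= 1 since ||D||^2 <= 4
    x = y.copy()
    x_bar = x.copy()
    u = np.zeros(n - 1)                        # dual variable
    for _ in range(n_iter):
        u = np.clip(u + sigma * (D @ x_bar), -lam, lam)       # prox of the conjugate of lam*||.||_1
        x_new = (x - tau * (D.T @ u) + tau * y) / (1 + tau)   # prox of tau*0.5*||. - y||^2
        x_bar = 2 * x_new - x                  # extrapolation step (theta = 1)
        x = x_new
    return x

rng = np.random.default_rng(1)
y = np.concatenate([np.ones(20), 3 * np.ones(20)]) + 0.2 * rng.standard_normal(40)
print(np.round(tv_denoise_primal_dual(y, lam=1.0), 2))        # roughly piecewise-constant output
```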

49N45 ; 94A08


Optimization - lecture 4 - Pustelnik, Nelly (Author of the talk) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms and (ii) deep learning and stochastic optimization. This course aims to illustrate these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and to highlight the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several, possibly non-smooth, convex terms, together with the large size of the problems at hand, make standard optimization methods, such as those based on subgradient descent, computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated separately in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in nonconvex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to contributions in inverse problems and compressed sensing. This concept will be described, together with the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has begun to provide a new framework for solving imaging problems, ranging from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First-order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
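Item 7 mentions acceleration; a FISTA-type variant adds an inertial extrapolation step to forward-backward iterations. The sketch below reuses the same assumed l1-regularized least-squares toy problem and is illustrative only.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=300):
    # Accelerated forward-backward (FISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = np.zeros(A.shape[1])
    v = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ v - y)
        x = soft_threshold(v - gamma * grad, gamma * lam)     # proximal-gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        v = x + ((t - 1) / t_next) * (x - x_prev)             # inertial extrapolation
        x_prev, t = x, t_next
    return x_prev

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[5, 60]] = [1.5, -2.0]
x_hat = fista(A, A @ x_true, lam=0.05)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])      # expected support: [5, 60]
```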

49N45 ; 94A08


Statistical comparisons of spatio-temporal networks - Achard, Sophie (Author of the talk) | CIRM H

Multi angle

In the scenario where multiple instances of networks with the same nodes are available and nodes are attached to spatial features, it is worth combining both sources of information in order to explain the role of the nodes. Explaining node roles in complex networks is very difficult, yet crucial in various application domains such as social science, neuroscience and computer science. Many efforts have been made to quantify hubs, i.e., particular nodes in a network singled out by a given structural property. Yet, for spatio-temporal networks, the identification of node roles remains largely unexplored. In this talk, I will show the limitations of classical methods on a real dataset of brain connectivity comparing healthy subjects to coma patients. Then, I will present recent work using an equivalence relation on nodal structural properties. The comparison of graphs with the same node set is evaluated with a new similarity score based on graph structural patterns. This score provides a nodal index to determine node role distinctiveness in a graph family. Finally, illustrations on different datasets concerning human brain functional connectivity will be described.
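The abstract does not spell out the similarity score, so the sketch below only illustrates the general setting it describes: a family of graphs sharing the same node set, a nodal structural property (here betweenness centrality via networkx) computed on each graph, and a naive placeholder dispersion score flagging nodes whose role varies most across the family. None of this is the method presented in the talk.

```python
import numpy as np
import networkx as nx

def nodal_profiles(graphs, nodes):
    # One structural property (betweenness centrality) per node and per graph.
    rows = []
    for g in graphs:
        bc = nx.betweenness_centrality(g)
        rows.append([bc[v] for v in nodes])
    return np.array(rows)                      # shape: (n_graphs, n_nodes)

# Toy family: random graphs drawn independently on the same node set {0, ..., 29}.
family = [nx.erdos_renyi_graph(30, 0.15, seed=s) for s in range(5)]
nodes = list(range(30))

profiles = nodal_profiles(family, nodes)
variability = profiles.std(axis=0) / (profiles.mean(axis=0) + 1e-9)   # naive per-node dispersion score
print("most variable node roles:", np.argsort(variability)[-5:])
```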

05C75 ; 92B20 ; 90B15 ; 62P10

In this talk I will discuss how a variant of the classical optimal transport problem, known as the Gromov-Wasserstein distance, can help in designing learning tasks over graphs, and allows one to transpose classical signal processing or data analysis tools, such as dictionary learning or online change detection, to learning over these types of structured objects. Both theoretical and practical aspects will be discussed.
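As a rough illustration of the tool discussed here, the sketch below compares two toy graphs through the Gromov-Wasserstein discrepancy between their shortest-path distance matrices. It assumes the POT library (ot.gromov.gromov_wasserstein2) and networkx; the graphs and uniform node weights are arbitrary choices, not data or code from the talk.

```python
import numpy as np
import networkx as nx
import ot  # POT: Python Optimal Transport

def gw_discrepancy(g1, g2):
    # Gromov-Wasserstein discrepancy between two graphs, each represented by
    # its shortest-path distance matrix with uniform weights on the nodes.
    C1 = np.asarray(nx.floyd_warshall_numpy(g1))
    C2 = np.asarray(nx.floyd_warshall_numpy(g2))
    p = ot.unif(C1.shape[0])
    q = ot.unif(C2.shape[0])
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')

cycle = nx.cycle_graph(10)
path = nx.path_graph(12)
print("GW(cycle, cycle):", gw_discrepancy(cycle, nx.cycle_graph(10)))   # near 0 for identical graphs
print("GW(cycle, path): ", gw_discrepancy(cycle, path))                 # strictly larger
```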

68Q32 ; 68T05

While message-passing neural networks (MPNNs) are the most popular architectures for graph learning, their expressive power is inherently limited. In order to gain increased expressive power while retaining efficiency, several recent works apply MPNNs to subgraphs of the original graph. As a starting point, the talk will introduce the Equivariant Subgraph Aggregation Networks (ESAN) architecture, which is a representative framework for this class of methods. In ESAN, each graph is represented as a set of subgraphs, selected according to a predefined policy. The sets of subgraphs are then processed using an equivariant architecture designed specifically for this purpose. I will then present a recent follow-up work that revisits the symmetry group suggested in ESAN and suggests that a more precise choice can be made if we restrict our attention to a specific popular family of subgraph selection policies. We will see that using this observation, one can make a direct connection between subgraph GNNs and Invariant Graph Networks (IGNs), thus providing new insights into subgraph GNNs' expressive power and design space.
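A deliberately simplified sketch of the bag-of-subgraphs idea (plain NumPy, not the ESAN implementation): a node-deletion policy produces one subgraph per node, a single shared message-passing layer processes each subgraph, and mean readouts are averaged over the bag, which is a symmetric aggregation. The layer, readout and dimensions are placeholder assumptions.

```python
import numpy as np

def node_deletion_policy(adj):
    # Bag of subgraphs: one copy of the graph per deleted node (its row/column zeroed out).
    bag = []
    for v in range(adj.shape[0]):
        sub = adj.copy()
        sub[v, :] = 0.0
        sub[:, v] = 0.0
        bag.append(sub)
    return bag

def mpnn_layer(adj, feats, W):
    # One mean-aggregation message-passing layer with self-loops and ReLU.
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    agg = (adj @ feats + feats) / deg
    return np.maximum(agg @ W, 0.0)

def subgraph_gnn_embedding(adj, feats, W):
    # Shared MPNN on every subgraph, mean readout per subgraph, mean over the bag.
    readouts = [mpnn_layer(sub, feats, W).mean(axis=0) for sub in node_deletion_policy(adj)]
    return np.mean(readouts, axis=0)

rng = np.random.default_rng(0)
adj = (rng.random((8, 8)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                              # random undirected toy graph
feats = rng.standard_normal((8, 4))            # node features
W = rng.standard_normal((4, 16))               # shared layer weights
print(subgraph_gnn_embedding(adj, feats, W).shape)   # (16,)
```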

68T05 ; 05C60 ; 68R10


Clustering with tangles - von Luxburg, Ulrike (Author of the talk) ; Klepper, Solveig (Author of the talk) | CIRM H

Multi angle

Originally, tangles were invented as an abstract tool in mathematical graph theory to prove the famous graph minor theorem. In the talk, I will showcase the potential of tangles in machine learning applications. Given a collection of cuts of any dataset, tangles aggregate these cuts to point in the direction of a dense structure. As a result, a cluster is softly characterized by a set of consistent pointers. This highly flexible approach can solve clustering problems in various setups, ranging from questionnaires, through community detection in graphs, to clustering points in metric spaces.
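The consistency idea behind tangles can be illustrated in a few lines: given a collection of cuts (bipartitions) of a point set, orient each cut toward the side judged dense (here, simply the side containing more of a chosen seed set) and check that every three chosen sides still intersect. This is a toy illustration of the consistency condition only, not the algorithm from the talk.

```python
from itertools import combinations

def orient_cuts(cuts, seed):
    # Orient each cut (a pair of complementary sets) toward the side containing more seed points.
    oriented = []
    for side, other in cuts:
        oriented.append(side if len(side & seed) >= len(other & seed) else other)
    return oriented

def is_consistent(oriented_sides, min_overlap=1):
    # Tangle-style consistency: every three chosen sides must share at least min_overlap points.
    return all(len(a & b & c) >= min_overlap
               for a, b, c in combinations(oriented_sides, 3))

points = set(range(12))
cuts = [({0, 1, 2, 3, 4, 5}, points - {0, 1, 2, 3, 4, 5}),
        ({0, 1, 2, 6, 7, 8}, points - {0, 1, 2, 6, 7, 8}),
        ({0, 1, 2, 3, 9, 10}, points - {0, 1, 2, 3, 9, 10})]
sides = orient_cuts(cuts, seed={0, 1, 2})
print(is_consistent(sides))   # True: every oriented side points at the dense region {0, 1, 2}
```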

68T05
