
Documents 49N45 (10 results)

One of the goals of shape analysis is to model and characterise shape evolution. We focus on methods where this evolution is modelled by the action of a time-dependent diffeomorphism, which is characterised by its time-derivatives: vector fields. Reconstructing the evolution of a shape from observations then amounts to determining an optimal path of vector fields whose flow of diffeomorphisms deforms the initial shape in accordance with the observations. However, if the space of considered vector fields is not constrained, optimal paths may be inaccurate from a modelling point of view. To overcome this problem, the notion of deformation module makes it possible to incorporate prior information from the data into the set of considered deformations and the associated metric. I will present this generic framework as well as the Python library IMODAL, which performs registration using such structured deformations. More specifically, I will focus on a recent implicit formulation in which the prior is expressed as a property that the generated vector field should satisfy. This imposed property can belong to different categories adapted to many use cases, such as constraining a growth pattern or imposing divergence-free fields.
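
The deformation model described above lends itself to a short numerical illustration. The sketch below is not the IMODAL API; it is a minimal, hypothetical NumPy illustration of the underlying idea: a time-dependent vector field, here parametrised by control points and momenta through a Gaussian kernel, whose flow transports the points of a shape.

```python
import numpy as np

def gaussian_velocity(x, controls, momenta, sigma=0.25):
    """Velocity field v(x) = sum_j K(x, c_j) m_j with a Gaussian kernel.
    One simple way to parametrise the time-dependent vector fields
    mentioned in the abstract (not the IMODAL API itself)."""
    d2 = ((x[:, None, :] - controls[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K @ momenta

def flow(shape_pts, controls, momenta_path, dt=0.1):
    """Deform `shape_pts` by the flow of the vector fields along the path.
    `momenta_path` holds one set of momenta per time step (the 'path of
    vector fields' optimised in a registration problem)."""
    x = shape_pts.copy()
    for m in momenta_path:
        x = x + dt * gaussian_velocity(x, controls, m)   # explicit Euler step
    return x

# Toy example: deform a circle with two control points over 10 time steps.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
controls = np.array([[1.0, 0.0], [-1.0, 0.0]])
momenta_path = [np.array([[0.05, 0.0], [-0.05, 0.0]]) for _ in range(10)]
deformed = flow(circle, controls, momenta_path)
print(deformed.shape)  # (50, 2)
```

In a registration setting, the momenta along the path would be the unknowns optimised so that the deformed shape matches the observations.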

68U10 ; 49N90 ; 49N45 ; 51P05 ; 53-04 ; 53Z05 ; 58D30 ; 65D18 ; 68-04 ; 92C15


Dirichlet-Neumann shape optimization problems - Buttazzo, Giuseppe (Conference speaker) | CIRM H

Multi angle

We consider spectral optimization problems of the form

$\min\lbrace\lambda_1(\Omega;D):\Omega\subset D,|\Omega|=1\rbrace$

where $D$ is a given subset of the Euclidean space $\textbf{R}^d$. Here $\lambda_1(\Omega;D)$ is the first eigenvalue of the Laplace operator $-\Delta$ with Dirichlet conditions on $\partial\Omega\cap D$ and Neumann or Robin conditions on $\partial\Omega\cap\partial D$. The equivalent variational formulation

$\lambda_1(\Omega;D)=\min\lbrace\int_\Omega|\nabla u|^2\,dx+k\int_{\partial D}u^2\,d\mathcal{H}^{d-1}\;:\;u\in H^1(D),\ u=0$ on $\partial\Omega\cap D,\ ||u||_{L^2(\Omega)}=1\rbrace$

is reminiscent of the classical drop problems, where the first eigenvalue replaces the total variation functional. We prove an existence result for general shape cost functionals and we show some qualitative properties of the optimal domains. The case of a Dirichlet condition on a $\textit{fixed}$ part and of a Neumann condition on the $\textit{free}$ part of the boundary is also considered.
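
To make the mixed boundary conditions in the definition of $\lambda_1(\Omega;D)$ concrete, here is a minimal one-dimensional sketch, added purely for illustration and not part of the talk: the first eigenvalue of $-u''$ on $(0,1)$ with a Dirichlet condition at $x=0$ and a Neumann condition at $x=1$, computed by finite differences and compared with the exact value $(\pi/2)^2$.

```python
import numpy as np

def first_eigenvalue_mixed(n=200):
    """First eigenvalue of -u'' on (0,1), Dirichlet at x=0, Neumann at x=1,
    via a second-order finite-difference scheme (ghost point at x=1).
    Exact value: (pi/2)**2 ~ 2.4674."""
    h = 1.0 / n
    A = np.zeros((n, n))          # unknowns u_1..u_n at x = h, 2h, ..., 1
    for i in range(n):
        A[i, i] = 2.0 / h**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        if i < n - 1:
            A[i, i + 1] = -1.0 / h**2
    # Neumann condition u'(1) = 0 via the ghost point u_{n+1} = u_{n-1}
    A[n - 1, n - 2] = -2.0 / h**2
    eigvals = np.linalg.eigvals(A)
    return float(np.min(eigvals.real))

print(first_eigenvalue_mixed())   # ~ 2.467
print((np.pi / 2) ** 2)           # exact: 2.4674...
```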

49Q10 ; 49J20 ; 49N45

This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.
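
One well-known example of the methods mentioned above that tightly combine proximal convex optimisation with Markov chain Monte Carlo is the MYULA-type Langevin sampler. The following is a minimal sketch, assuming a hypothetical toy posterior $p(x\mid y)\propto\exp(-\Vert y-x\Vert^2/(2\sigma^2)-\lambda\Vert x\Vert_1)$ rather than anything from the course; the non-smooth $\ell_1$ prior enters each step only through its proximity operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior: p(x | y) ∝ exp( -||y - x||^2 / (2 sigma^2) - lam * ||x||_1 )
y = np.array([1.0, -0.2, 0.0, 2.0])
sigma, lam = 0.5, 1.0

def grad_f(x):
    """Gradient of the smooth negative log-likelihood term."""
    return (x - y) / sigma**2

def prox_g(x, gamma):
    """Proximity operator of gamma * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

def myula(n_iter=20000, delta=1e-2, gamma=1e-1):
    """MYULA-type Langevin sampler: the non-smooth prior is replaced by its
    Moreau-Yosida envelope, whose gradient is (x - prox_g(x)) / gamma."""
    x = np.zeros_like(y)
    samples = []
    for _ in range(n_iter):
        grad = grad_f(x) + (x - prox_g(x, gamma)) / gamma
        x = x - delta * grad + np.sqrt(2 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

samples = myula()
print("posterior mean estimate:", samples[5000:].mean(axis=0))
```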

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08


Optimization - lecture 1 - Pustelnik, Nelly (Conference speaker) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms, and (ii) deep learning and stochastic optimization. This course illustrates these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and highlights the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several possibly non-smooth convex terms, together with the large sizes of the problems at hand, make standard optimization methods such as subgradient descent computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated individually in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in non-convex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to the contributions in inverse problems and compressed sensing. This concept will be described, as well as the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has started to provide a new framework for solving imaging problems, going from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence.
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
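
As an illustration of item 4 of the outline, here is a minimal sketch of forward-backward splitting (ISTA), assuming a toy $\ell_1$-regularised least-squares problem rather than code from the course: a gradient step on the smooth data-fidelity term followed by a proximal step on the non-smooth penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = [3, -2, 1.5, -1, 2]
b = A @ x_true + 0.01 * rng.standard_normal(40)
lam = 0.1

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(n_iter=500):
    """Forward-backward splitting (ISTA): gradient step on the smooth
    data-fidelity term, then a proximal step on the l1 penalty."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x

x_hat = forward_backward()
print("non-zeros recovered:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```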

49N45 ; 94A08


Optimization - lecture 2 - Pustelnik, Nelly (Conference speaker) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms, and (ii) deep learning and stochastic optimization. This course illustrates these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and highlights the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several possibly non-smooth convex terms, together with the large sizes of the problems at hand, make standard optimization methods such as subgradient descent computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated individually in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in non-convex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to the contributions in inverse problems and compressed sensing. This concept will be described, as well as the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has started to provide a new framework for solving imaging problems, going from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence.
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
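
As an illustration of item 5 of the outline (proximal primal-dual algorithms), here is a minimal sketch of one such method, the Chambolle-Pock iteration, assuming a toy one-dimensional total-variation denoising problem rather than code from the course.

```python
import numpy as np

# Toy 1D total-variation denoising: min_x 0.5*||x - b||^2 + lam*||D x||_1,
# where D is the forward-difference operator.
n = 200
t = np.linspace(0, 1, n)
signal = np.where(t < 0.5, 1.0, -0.5)                 # piecewise-constant signal
b = signal + 0.2 * np.random.default_rng(2).standard_normal(n)
lam = 0.5

D = np.diff(np.eye(n), axis=0)                        # (n-1) x n difference operator

def chambolle_pock(n_iter=500, tau=0.45, sigma=0.45, theta=1.0):
    """Primal-dual (Chambolle-Pock) iteration for F(Dx) + G(x) with
    F = lam*||.||_1 and G = 0.5*||x - b||^2; step sizes chosen so that
    tau * sigma * ||D||^2 < 1."""
    x = b.copy(); x_bar = x.copy()
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)       # prox of sigma*F*
        x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)   # prox of tau*G
        x_bar = x_new + theta * (x_new - x)                   # extrapolation
        x = x_new
    return x

x_hat = chambolle_pock()
print("residual norm:", np.linalg.norm(x_hat - signal))
```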

49N45 ; 94A08


Optimization - lecture 3 - Pustelnik, Nelly (Conference speaker) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms, and (ii) deep learning and stochastic optimization. This course illustrates these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and highlights the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several possibly non-smooth convex terms, together with the large sizes of the problems at hand, make standard optimization methods such as subgradient descent computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated individually in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in non-convex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to the contributions in inverse problems and compressed sensing. This concept will be described, as well as the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has started to provide a new framework for solving imaging problems, going from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence.
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
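
As an illustration of the Douglas-Rachford splitting mentioned in item 4 of the outline, here is a minimal sketch on a toy problem with a closed-form solution (again an assumption made for illustration, not code from the course), so the iterates can be checked against soft-thresholding.

```python
import numpy as np

# Toy problem with a closed-form solution for verification:
#   min_x 0.5*||x - b||^2 + lam*||x||_1   (solution: soft-threshold of b)
b = np.array([2.0, -0.3, 0.8, -1.5, 0.05])
lam = 0.5

def prox_f(v, gamma):
    """Prox of gamma * 0.5*||x - b||^2."""
    return (v + gamma * b) / (1.0 + gamma)

def prox_g(v, gamma):
    """Prox of gamma * lam * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def douglas_rachford(n_iter=200, gamma=1.0):
    """Douglas-Rachford splitting: alternate reflected proximal steps on g
    and f; the shadow sequence x = prox_g(z) converges to a minimizer."""
    z = np.zeros_like(b)
    for _ in range(n_iter):
        x = prox_g(z, gamma)
        z = z + prox_f(2 * x - z, gamma) - x
    return prox_g(z, gamma)

x_hat = douglas_rachford()
print(x_hat)
print(np.sign(b) * np.maximum(np.abs(b) - lam, 0.0))   # closed-form check
```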

49N45 ; 94A08


Optimization - lecture 4 - Pustelnik, Nelly (Conference speaker) | CIRM H

Virtual conference

During the last 20 years, imaging sciences, including inverse problems, segmentation and classification, have undergone two major revolutions: (i) sparsity and proximal algorithms, and (ii) deep learning and stochastic optimization. This course illustrates these major advances in the context of imaging problems that can be formulated as the minimization of an objective function, and highlights the evolution of these objective functions jointly with advances in optimization.

Since 2003, convex optimization has become the main thrust behind significant advances in signal processing, image processing and machine learning. The increasingly complex variational formulations encountered in these areas, which may involve a sum of several possibly non-smooth convex terms, together with the large sizes of the problems at hand, make standard optimization methods such as subgradient descent computationally intractable. Since their introduction in the signal processing arena, splitting techniques have emerged as a central tool to circumvent these roadblocks: they operate by breaking the problem down into individual components that can be activated individually in the solution algorithm. In the past decade, numerous convex optimization algorithms based on splitting techniques have been proposed or rediscovered in an attempt to deal efficiently with such problems. We will provide the basic building blocks of the major proximal algorithmic strategies and their recent advances in non-convex and stochastic optimization. Behind non-smooth functions lies the concept of sparsity, which is central to the contributions in inverse problems and compressed sensing. This concept will be described, as well as the objective functions relying on it, ranging from the Mumford-Shah model to sparse SVMs. Ten years after the start of the proximal revolution, deep learning has started to provide a new framework for solving imaging problems, going from agnostic techniques to models combining deep learning with standard regularized formulations. The main objective functions encountered, as well as the associated algorithmic strategies, will be discussed.

1/ Introduction
2/ Optimization: basics
3/ Subdifferential and proximity operator
4/ First order schemes (gradient descent, proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, Douglas-Rachford splitting): weak and linear convergence.
5/ Conjugate, duality, proximal primal-dual algorithms
6/ Unfolded algorithms
7/ Acceleration, non-convex optimization, stochastic optimization
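
As an illustration of the acceleration mentioned in item 7 of the outline, here is a minimal FISTA sketch on the same kind of toy $\ell_1$-regularised least-squares problem used above (an assumption for illustration, not code from the course); the only change with respect to plain forward-backward splitting is the extrapolation step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy l1-regularised least-squares problem, now with FISTA acceleration.
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = [3, -2, 1.5, -1, 2]
b = A @ x_true + 0.01 * rng.standard_normal(40)
lam = 0.1

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(n_iter=300):
    """FISTA: a forward-backward step taken at an extrapolated point y_k,
    which improves the O(1/k) rate of ISTA to O(1/k^2)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = np.zeros(A.shape[1]); y = x_prev.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x = soft_threshold(y - step * grad, step * lam)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation step
        x_prev, t = x, t_next
    return x_prev

x_hat = fista()
print("non-zeros recovered:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```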

49N45 ; 94A08
