
Documents 68U10: 10 results

One of the goals of shape analysis is to model and characterise shape evolution. We focus on methods where this evolution is modeled by the action of a time-dependent diffeomorphism, which is characterised by its time-derivatives: vector fields. Reconstructing the evolution of a shape from observations then amounts to determining an optimal path of vector fields whose flow of diffeomorphisms deforms the initial shape in accordance with the observations. However, if the space of considered vector fields is not constrained, optimal paths may be inaccurate from a modeling point of view. To overcome this problem, the notion of deformation module makes it possible to incorporate prior information from the data into the set of considered deformations and the associated metric. I will present this generic framework as well as the Python library IMODAL, which performs registration using such structured deformations. More specifically, I will focus on a recent implicit formulation in which the prior is expressed as a property that the generated vector field should satisfy. This imposed property can take different forms adapted to many use cases, such as constraining a growth pattern or imposing divergence-free fields.

68U10 ; 49N90 ; 49N45 ; 51P05 ; 53-04 ; 53Z05 ; 58D30 ; 65D18 ; 68-04 ; 92C15
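A minimal sketch of the core construction, assuming nothing about IMODAL's actual interface: points are transported along a time-dependent vector field by explicit Euler integration of its flow. The divergence-free rotation field below is an illustrative choice, not an example from the talk.

```python
import numpy as np

def flow_points(points, vector_field, n_steps, dt):
    """Transport points along the flow of a time-dependent vector field
    using explicit Euler steps: x_{k+1} = x_k + dt * v(t_k, x_k)."""
    x = np.asarray(points, dtype=float).copy()
    for step in range(n_steps):
        x = x + dt * vector_field(step * dt, x)
    return x

def rotation_field(t, x):
    # Divergence-free field v(x, y) = (-y, x): pure rotation about the origin.
    return np.stack([-x[:, 1], x[:, 0]], axis=1)

pts = np.array([[1.0, 0.0]])
# Integrate up to time 1: the point is rotated by roughly one radian.
moved = flow_points(pts, rotation_field, n_steps=100, dt=0.01)
```

Deformation modules constrain which fields `vector_field` may produce; the implicit formulation instead imposes a property, such as zero divergence, on the generated field.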

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. These penalty functionals are crucial to force the regularized solution to conform to some notion of simplicity/low complexity. Classical priors of this kind include sparsity, piecewise regularity and low rank. These are natural assumptions for many applications, ranging from medical imaging to machine learning.
imaging - image processing - sparsity - convex optimization - inverse problem - super-resolution

62H35 ; 65D18 ; 94A08 ; 68U10 ; 90C31 ; 80M50 ; 47N10
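The effect of a sparsity prior can be sketched with the classical proximal gradient method (ISTA) on l1-regularised least squares; the problem sizes, random seed and regularisation weight below are arbitrary illustrative choices, not taken from the talk.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# A 3-sparse signal observed through a random Gaussian operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60)
x_true[[5, 20, 40]] = [2.0, -1.5, 1.0]
x_hat = ista(A, A @ x_true, lam=0.1)
```

The l1 penalty is one member of the class of convex regularizers discussed; piecewise regularity (total variation) and low rank (nuclear norm) fit the same proximal template with a different proximal operator.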

Large image and 3D model repositories of everyday objects are now ubiquitous and are increasingly being used in computer graphics and computer vision, both for analysis and synthesis. However, images of objects in the real world have a richness of appearance that these repositories do not capture, largely because most existing 3D models are untextured. In this work we develop an automated pipeline capable of linking the two collections, and transporting texture information from images of real objects to 3D models of similar objects. This is a challenging problem, as an object's texture as seen in a photograph is distorted by many factors, including pose, geometry, and illumination. These geometric and photometric distortions must be undone in order to transfer the pure underlying texture to a new object: the 3D model. Instead of using problematic dense correspondences, we factorize the problem into the reconstruction of a set of base textures (materials) and an illumination model for the object in the image. By exploiting the geometry of the similar 3D model, we reconstruct certain reliable texture regions and correct for the illumination, from which a full texture map can be recovered and applied to the model. Our method allows for large-scale unsupervised production of richly textured 3D models directly from image data, providing high quality virtual objects for 3D scene design or photo editing applications, as well as a wealth of data for training machine learning algorithms for various inference tasks in graphics and vision. For more details, please visit: geometry.cs.ucl.ac.uk.

68U10 ; 65D18
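The photometric side of the pipeline can be caricatured with a toy intrinsic-image split, assuming a multiplicative model observed = albedo * shading with spatially smooth shading; this is an illustrative stand-in, not the paper's illumination model.

```python
import numpy as np

def smooth(img, n_iter=30):
    # Crude low-pass filter: repeated averaging with the 4 neighbours.
    s = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(s, 1, mode="edge")
        s = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
    return s

def separate_illumination(observed, eps=1e-6):
    # Toy intrinsic-image split: shading is the smooth part of the image,
    # and dividing it out undoes the photometric distortion.
    shading = smooth(observed)
    albedo = observed / (shading + eps)
    return albedo, shading

# Synthetic example: a checkerboard albedo under a linear shading gradient.
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
albedo_true = np.where((xx // 2 + yy // 2) % 2 == 0, 1.0, 0.5)
shading_true = 0.5 + xx / (w - 1)          # ranges from 0.5 to 1.5
observed = albedo_true * shading_true
albedo_est, shading_est = separate_illumination(observed)
```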


Detection theory and novelty filters - Morel, Jean-Michel (Speaker) | CIRM

Post-edited

In this presentation, based on online demonstrations of algorithms and on the examination of several practical examples, I will reflect on the problem of modeling a detection task in images. I will place myself in the (very frequent) case where the detection task cannot be formulated in a Bayesian framework or, rather equivalently, cannot be solved by simultaneous learning of a model of the object and a model of the background. (In the case where there are plenty of examples of both the background and the object to be detected, neural networks provide a practical answer, but one without explanatory power.) Nevertheless, for detection without "learning", I will show that we cannot avoid building a background model, or possibly learning it. But this will not require many examples.

Joint work with Axel Davy, Tristan Dagobert, Agnès Desolneux, Thibaud Ehret.

65D18 ; 68U10 ; 68T05
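For detection without learning, the a-contrario methodology of Desolneux, Moisan and Morel controls detections through a Number of False Alarms (NFA): the expected number of observations at least as extreme as the observed one under a background noise model. A minimal sketch with a binomial background model; the window counts and probability below are illustrative.

```python
import math

def binom_tail(n, k, p):
    # P[B(n, p) >= k], computed exactly.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of False Alarms: expected number of tests at least as extreme
    as the observed one under the background model alone. A detection is
    declared epsilon-meaningful when NFA <= epsilon (typically epsilon = 1)."""
    return n_tests * binom_tail(n, k, p)

# 100 candidate windows of 50 pixels; under the background model each pixel
# is "aligned" with probability 0.1 (expected count per window: 5).
nfa_strong = nfa(100, n=50, k=20, p=0.1)   # 20 aligned pixels: very unlikely
nfa_weak = nfa(100, n=50, k=6, p=0.1)      # barely above the mean of 5
```

Only the background model is specified; no example of the object is needed, which is exactly the point made in the abstract.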

This course presents an overview of modern Bayesian strategies for solving imaging inverse problems. We will start by introducing the Bayesian statistical decision theory framework underpinning Bayesian analysis, and then explore efficient numerical methods for performing Bayesian computation in large-scale settings. We will pay special attention to high-dimensional imaging models that are log-concave w.r.t. the unknown image, related to so-called “convex imaging problems”. This will provide an opportunity to establish connections with the convex optimisation and machine learning approaches to imaging, and to discuss some of their relative strengths and drawbacks. Examples of topics covered in the course include: efficient stochastic simulation and optimisation numerical methods that tightly combine proximal convex optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.

49N45 ; 65C40 ; 65C60 ; 65J22 ; 68U10 ; 62C10 ; 62F15 ; 94A08
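A basic instance of the stochastic simulation methods mentioned above is the unadjusted Langevin algorithm (ULA), shown here on a toy one-dimensional Gaussian denoising posterior. The model and step size are illustrative assumptions; the proximal MCMC variants discussed in the course refine exactly this scheme for non-smooth log-concave posteriors.

```python
import numpy as np

def ula(grad_log_post, x0, step, n_samples, rng):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * noise."""
    x = np.array(x0, dtype=float)
    out = np.empty((n_samples, x.size))
    for k in range(n_samples):
        x = x + step * grad_log_post(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
        out[k] = x
    return out

# Toy posterior: y = x + N(0, sigma2) with Gaussian prior x ~ N(0, tau2),
# so the exact posterior mean is (y / sigma2) / (1/sigma2 + 1/tau2) = 4/3.
y = np.array([2.0])
sigma2, tau2 = 0.5, 1.0

def grad_log_post(x):
    return -(x - y) / sigma2 - x / tau2

rng = np.random.default_rng(1)
samples = ula(grad_log_post, y, step=0.05, n_samples=20000, rng=rng)
post_mean = float(samples[2000:].mean())   # discard burn-in
```

The sample cloud also yields the credible intervals used for the uncertainty quantification analyses mentioned in the abstract.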


Mirror-symmetry in images and 3D shapes - Patraucean, Viorica (Speaker) | CIRM

Multi angle

Several psychophysical studies stress the importance of object symmetries in human perception when dealing with tasks related to object detection and recognition. Paradoxically, state-of-the-art methods in image and 3D shape analysis not only make very limited use of object symmetry information, but often regard symmetries as an issue. This talk will tackle this paradox by addressing aspects linked to symmetry detection in images and to 3D shape matching in the presence of symmetries. Specifically, a method for reducing the number of false positives in symmetry detection in images will be presented, as well as a 3D shape matching approach that resolves the ambiguity induced by intrinsic symmetries in the shape matching problem.

68U10
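A crude global version of the axis test, correlating an image with its left-right mirror, sketches the idea; real detectors search over candidate axis positions and orientations, so this fixed-axis score is an illustrative simplification, not the method of the talk.

```python
import numpy as np

def mirror_symmetry_score(img):
    """Correlation between an image and its left-right mirror: 1.0 for a
    perfectly mirror-symmetric image, near 0 for unstructured content."""
    a = img.astype(float).ravel()
    b = img[:, ::-1].astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

symmetric = np.tile([1.0, 2.0, 3.0, 2.0, 1.0], (5, 1))   # mirror-symmetric rows
random_img = np.random.default_rng(0).standard_normal((32, 32))
```

False positives arise when unrelated structure happens to correlate under the flip, which is why the talk's method focuses on pruning spurious candidate axes.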


The Hough transform ... 55 years later! - Sequeira, Jean (Speaker) | CIRM

Multi angle

Paul Hough devised his "Hough transform" at the very beginning of the 1960s to reveal alignments of points in an image. About ten years later, Duda and Hart showed, in a landmark paper, that the principle introduced by Paul Hough went beyond line detection, enabling the detection of parameterized curves depending on m parameters within a point cloud. Today, research around this approach continues to develop, in particular to introduce "knowledge" into the search for occurrences of parameterized models within a data set.

68U05 ; 68U10
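The voting principle can be sketched in a few lines of NumPy, using the normal parameterisation x cos(theta) + y sin(theta) = rho popularised by Duda and Hart; the grid resolutions and the synthetic point set below are illustrative.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Minimal Hough transform: each point votes for every line
    x*cos(theta) + y*sin(theta) = rho passing through it; peaks in the
    accumulator reveal alignments."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.abs(pts).max() * np.sqrt(2) + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rho_max

# Ten collinear points on the line y = x, plus three stray points.
points = [(i, i) for i in range(10)] + [(3, 7), (8, 1), (0, 5)]
acc, thetas, rho_max = hough_lines(points)
i_theta, i_rho = np.unravel_index(np.argmax(acc), acc.shape)
best_theta = float(thetas[i_theta])   # the line y = x has theta = 3*pi/4
```

The same accumulator idea generalises to any parameterised curve with m parameters, at the cost of an m-dimensional vote space, which is the extension Duda and Hart made explicit.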
