One of the goals of shape analysis is to model and characterise shape evolution. We focus on methods where this evolution is modeled by the action of a time-dependent diffeomorphism, which is characterised by its time derivatives: vector fields. Reconstructing the evolution of a shape from observations then amounts to determining an optimal path of vector fields whose flow of diffeomorphisms deforms the initial shape in accordance with the observations. However, if the space of considered vector fields is not constrained, optimal paths may be inaccurate from a modeling point of view. To overcome this problem, the notion of deformation module makes it possible to incorporate prior information from the data into the set of considered deformations and the associated metric. I will present this generic framework as well as the Python library IMODAL, which performs registration using such structured deformations. More specifically, I will focus on a recent implicit formulation in which the prior is expressed as a property that the generated vector field should satisfy. This imposed property can be of different categories, adapted to many use cases, such as constraining a growth pattern or imposing divergence-free fields.
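As an illustration of the basic mechanism described above (not taken from the talk or from IMODAL), the sketch below deforms a point cloud by integrating the flow of a time-dependent vector field with forward-Euler steps; the example field is divergence-free, one of the constraints mentioned in the abstract. All names here are hypothetical.

```python
import numpy as np

def flow(points, velocity, t0=0.0, t1=1.0, n_steps=100):
    """Deform a point cloud by integrating dx/dt = v(t, x).

    `velocity(t, x)` must return the vector field at time t evaluated at
    the points x (an array of shape (n, d)).  Forward-Euler integration.
    """
    x = np.asarray(points, dtype=float).copy()
    dt = (t1 - t0) / n_steps
    for k in range(n_steps):
        t = t0 + k * dt
        x += dt * velocity(t, x)
    return x

def rotation_field(t, x):
    # v(x, y) = (-y, x) has zero divergence; its flow is a rotation.
    return np.stack([-x[:, 1], x[:, 0]], axis=1)

theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
deformed = flow(circle, rotation_field, n_steps=200)
# Since the exact flow is a rotation, point norms are (approximately,
# up to the Euler discretisation error) preserved.
```

In the deformation-module setting, the vector field would additionally be parametrised by a small number of interpretable controls rather than given in closed form.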
68U10 ; 49N90 ; 49N45 ; 51P05 ; 53-04 ; 53Z05 ; 58D30 ; 65D18 ; 68-04 ; 92C15
In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. These penalty functionals are crucial to force the regularized solution to conform to some notion of simplicity/low complexity. Classical priors of this kind include sparsity, piecewise regularity and low rank. These are natural assumptions for many applications, ranging from medical imaging to machine learning.
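A minimal sketch (not from the talk) of the most classical example in this family: the l1 penalty promoting sparsity, minimised by iterative soft-thresholding (ISTA) for a small random inverse problem. The problem sizes and the regularization weight are illustrative choices.

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))         # underdetermined operator
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]     # sparse ground truth
y = A @ x_true                             # noiseless observations
x_hat = ista(A, y, lam=0.1)
# The sparsity prior lets us recover a 100-dimensional signal from
# only 40 measurements: the largest entries of x_hat sit on the support.
```

The structural results discussed in the talk concern exactly this phenomenon: when and how such a convex prior forces the solution onto a low-complexity model set.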
imaging - image processing - sparsity - convex optimization - inverse problem - super-resolution
62H35 ; 65D18 ; 94A08 ; 68U10 ; 90C31 ; 80M50 ; 47N10
Large image and 3D model repositories of everyday objects are now ubiquitous and are increasingly being used in computer graphics and computer vision, both for analysis and synthesis. However, images of objects in the real world have a richness of appearance that these repositories do not capture, largely because most existing 3D models are untextured. In this work we develop an automated pipeline capable of linking the two collections, and transporting texture information from images of real objects to 3D models of similar objects. This is a challenging problem, as an object's texture as seen in a photograph is distorted by many factors, including pose, geometry, and illumination. These geometric and photometric distortions must be undone in order to transfer the pure underlying texture to a new object: the 3D model. Instead of using problematic dense correspondences, we factorize the problem into the reconstruction of a set of base textures (materials) and an illumination model for the object in the image. By exploiting the geometry of the similar 3D model, we reconstruct certain reliable texture regions and correct for the illumination, from which a full texture map can be recovered and applied to the model. Our method allows for large-scale unsupervised production of richly textured 3D models directly from image data, providing high quality virtual objects for 3D scene design or photo editing applications, as well as a wealth of data for training machine learning algorithms for various inference tasks in graphics and vision. For more details, please visit: geometry.cs.ucl.ac.uk.
68U10 ; 65D18
In this presentation, based on online demonstrations of algorithms and on the examination of several practical examples, I will reflect on the problem of modeling a detection task in images. I will place myself in the (very frequent) case where the detection task cannot be formulated in a Bayesian framework or, equivalently, cannot be solved by jointly learning a model of the object and a model of the background. (In the case where there are plenty of examples of both the background and the object to be detected, neural networks provide a practical answer, but one without explanatory power.) Nevertheless, for detection without "learning", I will show that we cannot avoid building a background model, or possibly learning it. But this will not require many examples.
Joint work with Axel Davy, Tristan Dagobert, Agnes Desolneux, and Thibaud Ehret.
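The abstract does not name a specific algorithm, but a standard way to detect against a background model with no object model is an a-contrario test: an event is detected when its expected number of occurrences under the background model (the number of false alarms, NFA) falls below 1. The toy sketch below, with illustrative window sizes and thresholds of our own choosing, detects an unusually dense block in a binary noise image.

```python
import numpy as np
from math import comb

def nfa(k, n, p, n_tests):
    """Number of false alarms: n_tests * P(Binomial(n, p) >= k)."""
    tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
    return n_tests * tail

# Background model: each pixel is "on" independently with probability p.
rng = np.random.default_rng(1)
p = 0.1
img = rng.random((64, 64)) < p
img[10:14, 10:14] = True                   # planted 4x4 block of "on" pixels

# Test every 4x4 window; detect when NFA < 1, i.e. such a concentration
# is not expected even once among all tested windows under the background.
n_tests = 61 * 61
detections = []
for i in range(61):
    for j in range(61):
        k = int(img[i:i + 4, j:j + 4].sum())
        if nfa(k, 16, p, n_tests) < 1.0:
            detections.append((i, j))
# The window covering the planted block (16 of 16 pixels on) has
# NFA = 3721 * 0.1**16, far below 1, so it is detected.
```

The only modeling effort went into the background (here, i.i.d. Bernoulli pixels); no example of the object was needed, which is the point the abstract makes.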
65D18 ; 68U10 ; 68T05
Proteins are flexible molecules involved in all biological functions. Understanding these functions actually requires delving into protein structure, thermodynamics, and kinetics. This talk will be devoted to two problems in this area. The first one is the (uniform) generation of conformations of a protein backbone in the so-called rigid geometry model. We will present a method based on algebraic solutions of the so-called tripeptide loop closure. The second one deals with the calculation of the volume of a high-dimensional polytope, a question closely related to the computation of densities of states in statistical physics.
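For intuition on the second problem (this sketch is not from the talk), here is the naive Monte Carlo estimator of a polytope volume by rejection sampling in a bounding box. It works in low dimension, but its acceptance rate decays exponentially with dimension, which is precisely why high-dimensional volume computation requires random-walk samplers and telescoping ratio estimators.

```python
import numpy as np

def polytope_volume_mc(A, b, box_lo, box_hi, n_samples=200_000, seed=0):
    """Estimate vol{x : Ax <= b} by uniform rejection sampling in a box.

    Low-dimensional illustration only: the fraction of box samples landing
    inside the polytope shrinks exponentially as the dimension grows.
    """
    rng = np.random.default_rng(seed)
    box_lo = np.asarray(box_lo, dtype=float)
    box_hi = np.asarray(box_hi, dtype=float)
    pts = rng.uniform(box_lo, box_hi, size=(n_samples, A.shape[1]))
    inside = np.all(pts @ A.T <= b, axis=1)   # check all inequalities at once
    box_vol = np.prod(box_hi - box_lo)
    return box_vol * inside.mean()

# Example: the standard 3-simplex {x >= 0, x1 + x2 + x3 <= 1},
# whose exact volume is 1/3! = 1/6.
A = np.vstack([-np.eye(3), np.ones((1, 3))])
b = np.concatenate([np.zeros(3), [1.0]])
vol = polytope_volume_mc(A, b, [0, 0, 0], [1, 1, 1])
```

With 200,000 samples the estimate lands within about 0.002 of 1/6; in dimension 50 the same simplex occupies a 1/50! fraction of the unit cube, and rejection sampling becomes hopeless.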
46N55 ; 92E10 ; 65D18