Documents 65D18 | records found: 4

I will describe a recent framework for robust shape reconstruction based on optimal transportation between measures, where the input measurements are seen as distributions of mass. In addition to robustness to defect-laden point sets (hampered by noise and outliers), this approach can reconstruct smooth closed shapes as well as piecewise smooth shapes with boundaries.

68Rxx ; 65D17 ; 65D18
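
The abstract frames the input point set as a measure; as a rough illustration of that viewpoint (not the reconstruction algorithm of the talk), the sketch below computes the optimal-transport cost between a noisy, outlier-ridden sampling of a segment and a candidate reconstruction, treating both as uniform discrete measures. The point counts, noise levels and the SciPy-based solver are illustrative assumptions.

# Minimal sketch (not the talk's method): optimal transport between two point
# sets viewed as uniform discrete measures, solved as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Noisy "measurements": samples along a segment, corrupted by noise and outliers.
t = rng.uniform(0.0, 1.0, size=100)
samples = np.stack([t, np.zeros_like(t)], axis=1) + 0.02 * rng.normal(size=(100, 2))
samples[:5] = rng.uniform(-1.0, 2.0, size=(5, 2))   # a few gross outliers

# Candidate reconstruction: 100 points regularly spaced on the same segment.
u = np.linspace(0.0, 1.0, 100)
candidate = np.stack([u, np.zeros_like(u)], axis=1)

# With equal-size uniform measures, optimal transport reduces to an assignment
# problem; the total squared-distance cost measures how well the candidate
# explains the mass of the input measure.
cost = cdist(samples, candidate, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)
print("transport cost:", cost[row, col].sum() / len(samples))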

Post-edited: Detection theory and novelty filters
Morel, Jean-Michel (Conference Author) | CIRM (Publisher)

In this presentation, based on on-line demonstrations of algorithms and on the examination of several practical examples, I will reflect on the problem of modeling a detection task in images. I will place myself in the (very frequent) case where the detection task cannot be formulated in a Bayesian framework or, equivalently, cannot be solved by simultaneously learning a model of the object and a model of the background. (In the case where there are plenty of examples of both the background and the object to be detected, neural networks provide a practical answer, but one without explanatory power.) Nevertheless, for detection without "learning", I will show that we cannot avoid building a background model, or possibly learning one; but this will not require many examples.

Joint work with Axel Davy, Tristan Dagobert, Agnes Desolneux, and Thibaud Ehret.

65D18 ; 68U10 ; 68T05
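
The abstract does not spell out an algorithm; as a hedged illustration of detection against an explicit background model (in the spirit of a-contrario analysis), the sketch below estimates a Gaussian background from the image, scores each patch by how unlikely its mean is under that model, and corrects for the number of tests. The image size, patch size, Gaussian assumption and threshold are all illustrative.

# Illustrative sketch only: detection against a learned background model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

image = rng.normal(0.0, 1.0, size=(128, 128))   # pure background (noise)
image[40:48, 60:68] += 1.5                      # a faint object to detect

k = 8                                           # patch side (k*k pixels)
# Background model estimated from the image itself; robust estimates would be
# preferable in practice, plain mean/std keeps the sketch short.
mu, sigma = image.mean(), image.std()

n_tests = (128 - k + 1) ** 2                    # number of tested patches
detections = []
for i in range(128 - k + 1):
    for j in range(128 - k + 1):
        m = image[i:i + k, j:j + k].mean()
        # Probability of a patch mean at least this large under the background;
        # the std of the mean of k*k i.i.d. samples is sigma / k.
        p = norm.sf(m, loc=mu, scale=sigma / k)
        nfa = n_tests * p                       # expected number of false alarms
        if nfa < 1.0:
            detections.append((i, j, nfa))

print(f"{len(detections)} patches with NFA < 1")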

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. These penalty functionals are crucial to force the regularized solution to conform to some notion of simplicity/low complexity. Classical priors of this kind include sparsity, piecewise regularity and low rank. These are natural assumptions for many applications, ranging from medical imaging to machine learning.
imaging - image processing - sparsity - convex optimization - inverse problem - super-resolution

62H35 ; 65D18 ; 94A08 ; 68U10 ; 90C31 ; 80M50 ; 47N10
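
As a minimal, generic example of the kind of convex low-complexity prior discussed here (not a result from the talk), the sketch below solves an l1-regularized least-squares problem by proximal gradient descent (ISTA); the operator, dimensions and regularization weight are made-up.

# Sparse recovery with a convex sparsity prior: min_x 0.5*||Ax - y||^2 + lam*||x||_1
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 60, 200, 8                    # measurements, unknowns, sparsity level

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.02                              # regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)            # gradient of the data-fidelity term
    z = x - step * grad
    # Proximal operator of the l1 norm: soft-thresholding.
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))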

Large image and 3D model repositories of everyday objects are now ubiquitous and are increasingly being used in computer graphics and computer vision, both for analysis and synthesis. However, images of objects in the real world have a richness of appearance that these repositories do not capture, largely because most existing 3D models are untextured. In this work we develop an automated pipeline capable of linking the two collections, and transporting texture information from images of real objects to 3D models of similar objects. This is a challenging problem, as an object's texture as seen in a photograph is distorted by many factors, including pose, geometry, and illumination. These geometric and photometric distortions must be undone in order to transfer the pure underlying texture to a new object: the 3D model. Instead of using problematic dense correspondences, we factorize the problem into the reconstruction of a set of base textures (materials) and an illumination model for the object in the image. By exploiting the geometry of the similar 3D model, we reconstruct certain reliable texture regions and correct for the illumination, from which a full texture map can be recovered and applied to the model. Our method allows for large-scale unsupervised production of richly textured 3D models directly from image data, providing high quality virtual objects for 3D scene design or photo editing applications, as well as a wealth of data for training machine learning algorithms for various inference tasks in graphics and vision. For more details, please visit: geometry.cs.ucl.ac.uk.

68U10 ; 65D18
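
This is not the authors' pipeline, but a toy sketch of one of its photometric ingredients: assuming Lambertian shading predicted from per-pixel normals (e.g. rendered from the matched 3D model) and a known light direction, dividing the observed intensity by the predicted shading yields an illumination-corrected texture. All data and parameters here are synthetic assumptions.

# Toy illumination correction: observed = albedo * shading, so albedo = observed / shading.
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64

# Per-pixel unit normals (e.g. rendered from the matched 3D model).
normals = rng.normal(size=(h, w, 3))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

light = np.array([0.3, 0.4, 0.866])              # assumed directional light
light /= np.linalg.norm(light)

albedo_true = rng.uniform(0.2, 0.9, size=(h, w))  # "pure" underlying texture
shading = np.clip(normals @ light, 1e-3, None)    # Lambertian n.l shading term
observed = albedo_true * shading                  # intensity seen in the photograph

# Illumination correction: divide the observation by the geometry-predicted shading.
albedo_est = observed / shading
print("max albedo error:", np.abs(albedo_est - albedo_true).max())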
