
Stochastic normalizing flows and the power of patches in inverse problems

Author: Steidl, Gabriele (Conference speaker)


Abstract: Learning neural networks using only a small amount of data is an important ongoing research topic with tremendous potential for applications. We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows, called patchNR. It involves a normalizing flow learned on patches of very few images. The subsequent reconstruction method is completely unsupervised, and the same regularizer can be used for different forward operators acting on the same class of images.
By investigating the distribution of patches versus that of the whole image class, we prove that our variational model is indeed a MAP approach. Numerical examples for low-dose CT, limited-angle CT and superresolution of material images demonstrate that our method provides high-quality results among unsupervised methods while requiring only very few data. Furthermore, the approach also works if only the low-resolution image is available.
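A hedged sketch, for orientation only (the notation below is an assumption, not taken from the abstract): writing $F$ for the forward operator, $y$ for the observed data, $\mathcal{D}$ for a data-fidelity term, $P_1,\dots,P_N$ for patch-extraction operators and $\mathcal{T}_\theta$ for the normalizing flow learned on patches with latent density $p_Z$, the patchNR variational model can be written roughly as

  \hat{x} \in \operatorname*{arg\,min}_{x} \; \mathcal{D}\big(F(x), y\big) \;+\; \lambda \sum_{i=1}^{N} \big(-\log p_\theta(P_i x)\big),
  \qquad
  p_\theta(p) \;=\; p_Z\big(\mathcal{T}_\theta^{-1}(p)\big)\,\big|\det \nabla \mathcal{T}_\theta^{-1}(p)\big|,

so that the learned patch density acts as the regularizer; the MAP interpretation mentioned above refers to a functional of this form.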
In the second part of the talk I will generalize normalizing flows to stochastic normalizing flows in order to improve their expressivity. Normalizing flows, diffusion normalizing flows and variational autoencoders are powerful generative models, and a unified framework for handling these approaches appears to be that of Markov chains. We consider stochastic normalizing flows as a pair of Markov chains fulfilling certain properties and show how many state-of-the-art models for data generation fit into this framework. Indeed, including stochastic layers improves the expressivity of the network and allows for generating multimodal distributions from unimodal ones. The Markov chain point of view enables us to couple deterministic layers, namely invertible neural networks, with stochastic layers such as Metropolis-Hastings layers, Langevin layers, variational autoencoders and diffusion normalizing flows in a mathematically sound way. Our framework thus establishes a useful mathematical tool for combining the various approaches.
Joint work with F. Altekrüger, A. Denker, P. Hagemann, J. Hertrich, P. Maass
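Again only a hedged sketch of the second part (the notation and the exact loss are assumptions, not given in the abstract): a stochastic normalizing flow can be viewed as a pair of Markov chains $(X_0,\dots,X_T)$ and $(Y_T,\dots,Y_0)$, where $X_0$ follows a simple latent distribution, each forward layer is either a deterministic invertible network or a stochastic kernel (e.g. a Metropolis-Hastings or Langevin step), and the reverse chain runs through corresponding reverse kernels. One possible training criterion (the precise formulation and KL direction depend on the model) compares the joint laws of the two chains,

  \min_\theta \; \mathrm{KL}\Big( P_{(X_0,\dots,X_T)} \,\Big\|\, P_{(Y_0,\dots,Y_T)} \Big),

which, in the purely deterministic and invertible case, is closely related to standard maximum-likelihood training of a normalizing flow via the change-of-variables formula.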

Keywords: normalizing flows; variational auto-encoders; Markov chain; diffusion flows

MSC codes:
60J20 - Applications of Markov chains and discrete-time Markov processes on general state spaces
60J22 - Computational methods in Markov chains
62F15 - Bayesian inference
65C05 - Monte Carlo methods
65C40 - Computational Markov chains (numerical analysis)
68T07 - Artificial neural networks and deep learning

Additional resources:
https://www.cirm-math.fr/RepOrga/2551/Slides/STEIDL.pdf

    Video information

    Director: Hennenfent, Guillaume
    Language: English
    Publication date: 10/11/2022
    Recording date: 04/10/2022
    Subcollection: Research talks
    arXiv category: Machine Learning; Probability
    Domain: Mathematics in Science & Technology; Probability & Statistics
    Format: MP4 (.mp4) - HD
    Duration: 00:42:41
    Audience: Researchers; Graduate Students; Doctoral Students; Post-Doctoral Students
    Download: https://videos.cirm-math.fr/2022-10-04_Steidl.mp4

Meeting information

Meeting name: Learning and Optimization in Luminy - LOL2022
Meeting organizers: Boyer, Claire; d'Aspremont, Alexandre; Dieuleveut, Aymeric; Moreau, Thomas; Villar, Soledad
Dates: 03/10/2022 - 07/10/2022
Meeting year: 2022
Conference URL: https://conferences.cirm-math.fr/2551.html

Citation data

DOI: 10.24350/CIRM.V.19966003
Cite this video: Steidl, Gabriele (2022). Stochastic normalizing flows and the power of patches in inverse problems. CIRM. Audiovisual resource. doi:10.24350/CIRM.V.19966003
URI: http://dx.doi.org/10.24350/CIRM.V.19966003

See also

Bibliography

  • ALTEKRÜGER, Fabian, DENKER, Alexander, HAGEMANN, Paul, et al. PatchNR: Learning from Small Data by Patch Normalizing Flow Regularization. arXiv preprint arXiv:2205.12021, 2022. - https://doi.org/10.48550/arXiv.2205.12021



