
Documents 65C50 - 2 results


Bayesian methods for inverse problems - lecture 2 - Dashti, Masoumeh (conference speaker) | CIRM

Virtual conference

We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach the data and the unknown parameter are modelled as random variables, the distribution of the data is given, and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained through Bayes' rule. We will discuss the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly discuss Markov chain Monte Carlo methods in this context.
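As a minimal illustration (not taken from the lecture), the following Python sketch samples the posterior of a toy linear inverse problem y = G u + noise with a Gaussian prior and Gaussian observational noise, using a random-walk Metropolis sampler. The forward operator G, the dimensions, and the step size are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward operator G, true parameter, and noisy indirect measurements.
G = rng.normal(size=(5, 2))          # 5 measurements of a 2-d unknown
u_true = np.array([1.0, -0.5])
sigma = 0.1                          # noise standard deviation
y = G @ u_true + sigma * rng.normal(size=5)

def log_posterior(u, prior_std=1.0):
    # Bayes' rule on the log scale: log posterior = log likelihood + log prior
    # (up to an additive constant).
    log_lik = -0.5 * np.sum((y - G @ u) ** 2) / sigma**2
    log_prior = -0.5 * np.sum(u**2) / prior_std**2
    return log_lik + log_prior

def metropolis(n_samples=5000, step=0.05):
    u = np.zeros(2)
    lp = log_posterior(u)
    samples = []
    for _ in range(n_samples):
        u_prop = u + step * rng.normal(size=2)    # symmetric random-walk proposal
        lp_prop = log_posterior(u_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            u, lp = u_prop, lp_prop
        samples.append(u)
    return np.array(samples)

samples = metropolis()
print("posterior mean estimate:", samples.mean(axis=0))

The posterior mean computed from the samples is one point estimator of the unknown; the posterior mode discussed in the lecture would instead be obtained by maximizing log_posterior.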

35R30 ; 65M32 ; 65M12 ; 65C05 ; 65C50 ; 76D07 ; 60J10

Modern machine learning architectures often embed their inputs into a lower-dimensional latent space before generating a final output. A vast set of empirical results, and some emerging theory, predicts that these lower-dimensional codes often are highly structured, capturing lower-dimensional variation in the data. Based on this observation, in this talk I will describe efforts in my group to develop lightweight algorithms that navigate, restructure, and reshape learned latent spaces. Along the way, I will consider a variety of practical problems in machine learning, including low-rank adaptation of large models, regularization to promote local latent structure, and efficient training/evaluation of generative models.
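As a minimal sketch of the low-rank adaptation idea mentioned above (not the speaker's method), the Python snippet below adapts a frozen weight matrix with a rank-r update, W_adapted = W + B A, where only the small factors A and B would be trained. All sizes and names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 128, 4                  # layer sizes and low adaptation rank

W = rng.normal(size=(d_out, d_in))           # pretrained weights, kept frozen
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable low-rank factor
B = np.zeros((d_out, r))                     # zero init so adaptation starts at W

def forward(x):
    # Only A and B change during fine-tuning; W stays fixed.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
print(forward(x).shape)   # (64,)

The design choice here is that the update touches only r * (d_in + d_out) parameters instead of d_in * d_out, which is what makes adaptation of large models lightweight.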

62E20 ; 62F99 ; 62G07 ; 62P30 ; 65C50 ; 68T99
