
Documents: Mula Hernandez, Olga (13 results)

A ubiquitous problem in applied science is the recovery of physical phenomena, represented by multivariate functions, from incomplete measurements. These measurements typically take the form of pointwise data, but could also be given by linear functionals. Most often, recovery techniques are based on some form of approximation by a finite-dimensional space that should accurately capture the unknown multivariate function. The first part of the course will review fundamental tools from approximation theory that describe how well relevant classes of multivariate functions can be described by such finite-dimensional spaces. The notion of (linear or nonlinear) n-width will be developed, in relation with reduced modeling strategies that allow one to construct near-optimal approximation spaces for classes of parametrized PDEs. Functions of many variables, which are subject to the curse of dimensionality, will also be discussed. The second part of the course will review two recovery strategies from incomplete measurements: weighted least-squares and parametrized-background data-weak methods. An emphasis will be put on the derivation of sample distributions of minimal size ensuring optimal convergence estimates.
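As an illustration of the weighted least-squares strategy mentioned above, here is a minimal univariate sketch (the basis, sampling density, and weights are illustrative choices, not the course's specific prescriptions): samples are drawn from the Chebyshev (arcsine) density and weighted by the reciprocal of that density, a classical recipe for keeping the weighted least-squares problem well conditioned on polynomial spaces.

```python
# Minimal sketch of weighted least-squares recovery of a function from
# pointwise data (illustrative; the course treats the multivariate
# theory and the question of minimal sample size).
import numpy as np

def recover_weighted_ls(f, n=10, m=50, rng=np.random.default_rng(0)):
    u = rng.uniform(0.0, np.pi, m)
    x = np.cos(u)                      # ~ arcsine density 1/(pi*sqrt(1-x^2))
    w = np.pi * np.sqrt(1.0 - x**2)    # weights = 1 / sampling density
    V = np.polynomial.legendre.legvander(x, n - 1)   # Legendre basis, dim n
    sw = np.sqrt(w)[:, None]           # solve the weighted LS problem
    coef, *_ = np.linalg.lstsq(sw * V, np.sqrt(w) * f(x), rcond=None)
    return np.polynomial.legendre.Legendre(coef)

approx = recover_weighted_ls(lambda x: np.exp(x) * np.sin(3 * x))
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(approx(xs) - np.exp(xs) * np.sin(3 * xs))))
```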


Bayesian methods for inverse problems - lecture 2 - Dashti, Masoumeh (Author of the talk) | CIRM H

Virtual conference

We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach the data and the unknown parameter are modelled as random variables; the distribution of the data is given and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained through Bayes' rule. We will talk about the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us to study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly talk about Markov chain Monte Carlo methods in this context.
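As a concrete illustration of the Markov chain Monte Carlo methods mentioned at the end of the abstract, here is a minimal random-walk Metropolis sketch for a toy finite-dimensional problem; the forward map, prior, and noise level are invented for illustration, and the lecture's setting (including the infinite-dimensional theory) is far more general.

```python
# Toy Bayesian inverse problem: recover a scalar theta from noisy
# indirect data y = G(theta) + noise, sampled by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(1)
G = lambda theta: np.array([theta**2, np.sin(theta)])   # toy forward map
theta_true, sigma = 0.8, 0.05
y = G(theta_true) + sigma * rng.normal(size=2)          # synthetic data

def log_post(theta):
    misfit = np.sum((y - G(theta))**2) / (2 * sigma**2)  # Gaussian likelihood
    return -(misfit + theta**2 / 2)                      # N(0,1) prior

samples, theta = [], 0.0
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.2 * rng.normal()         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean ~", np.mean(samples[5000:]))       # discard burn-in
```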

35R30 ; 65M32 ; 65M12 ; 65C05 ; 65C50 ; 76D07 ; 60J10

Many problems in computational and data science require the approximation of high-dimensional functions. Examples of such problems can be found in physics, stochastic analysis, statistics, machine learning or uncertainty quantification. The approximation of high-dimensional functions requires the introduction of approximation tools that capture specific features of these functions.
In this lecture, we will give an introduction to tree tensor networks (TNs), or tree-based tensor formats. In part I, we will present some general notions about tensors, tensor ranks, tensor formats and the tensorization of vectors and functions. Then, in part II, we will introduce approximation tools based on TNs, present results on the approximation power (or expressivity) of TNs, and discuss the role of tensorization and of the architecture of TNs. Finally, in part III, we will present algorithms for computing with TNs. This includes algorithms for tensor truncation, for the solution of optimization problems, for learning functions from samples...
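To make tensorization and truncation concrete, here is a minimal sketch (our own toy example, not the lecture's algorithms): a vector of length 2^d is tensorized into a d-way array and compressed into a tensor train, the simplest tree-based tensor format, by successive truncated SVDs. Samples of exp(x) on a dyadic grid are exactly rank one under this tensorization, so every core ends up tiny.

```python
# TT-SVD sketch: tensorize a vector of length 2**d and compress it
# into a tensor train by successive truncated SVDs.
import numpy as np

def tt_svd(vec, d, tol=1e-10):
    cores, r = [], 1
    M = vec.reshape([2] * d).reshape(r * 2, -1)
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))    # truncation rank
        cores.append(U[:, :keep].reshape(r, 2, keep))
        r = keep
        M = (s[:keep, None] * Vt[:keep]).reshape(r * 2, -1)
    cores.append(M.reshape(r, 2, 1))
    return cores

d = 10
x = np.arange(2**d) / 2**d
cores = tt_svd(np.exp(x), d)        # exp is separable: all TT ranks are 1
print([c.shape for c in cores])
```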

15A69

Many problems in computational and data science require the approximation of high-dimensional functions. Examples of such problems can be found in physics, stochastic analysis, statistics, machine learning or uncertainty quantification. The approximation of high-dimensional functions requires the introduction of approximation tools that capture specific features of these functions.
In this lecture, we will give an introduction to tree tensor networks (TNs), or tree-based tensor formats. In part I, we will present some general notions about tensors, tensor ranks, tensor formats and the tensorization of vectors and functions. Then, in part II, we will introduce approximation tools based on TNs, present results on the approximation power (or expressivity) of TNs, and discuss the role of tensorization and of the architecture of TNs. Finally, in part III, we will present algorithms for computing with TNs.
This includes algorithms for tensor truncation, for the solution of optimization problems, for learning functions from samples...

15A69

Recently, a lot of progress has been made regarding the theoretical understanding of machine learning methods, in particular deep learning. One of the most promising directions is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The lecture series surveys this field and describes future challenges.
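A toy illustration of the overparametrization phenomenon mentioned above (entirely our own example, not taken from the lecture series): when a linear model has more random features than data points, np.linalg.lstsq returns the minimum-norm interpolant, which fits the noisy training data exactly and yet can still generalize reasonably.

```python
# Minimum-norm interpolation with random Fourier features, p >> n.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 500                              # n data points, p features
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=n)

W = 3.0 * rng.normal(size=p)                # random frequencies
b = rng.uniform(0, 2 * np.pi, p)            # random phases
feats = lambda t: np.cos(np.outer(t, W) + b)

# Underdetermined system: lstsq picks the minimum-norm solution.
theta, *_ = np.linalg.lstsq(feats(x), y, rcond=None)

xt = np.linspace(-1, 1, 200)
print("train residual:", np.linalg.norm(feats(x) @ theta - y))
print("test MSE:", np.mean((feats(xt) @ theta - np.sin(np.pi * xt))**2))
```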

68T07 ; 65Mxx

Recently, a lot of progress has been made regarding the theoretical understanding of machine learning methods, in particular deep learning. One of the most promising directions is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The lecture series surveys this field and describes future challenges.

68T07 ; 65Mxx

Coupling models or coupling codes: two examples where the coupled model has better properties than each model taken separately.

In a simple setting (1D in space), a porous media model (characterized by a degenerate parabolic equation) is coupled with the behavior of the medium; the resulting system is a non-degenerate heat equation.
Similarly, the diffusion equation of neutronics, whose coefficients depend on the local temperature of the nuclear core, is (classically) an eigenvalue problem with infinitely many eigenvalues. Coupling it with the equation for the local temperature yields a unique solution.
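Here is a minimal sketch in the spirit of the second example: a 1D diffusion eigenvalue problem whose coefficient depends on a temperature field, coupled to a toy heat balance by a fixed-point iteration. The coefficients and the coupling law are invented for illustration and are not the talk's model.

```python
# Toy coupling: eigenproblem for -(D(T) u')' with Dirichlet BCs,
# alternated with a temperature update driven by the fundamental mode.
import numpy as np

N, h = 100, 1.0 / 101                         # interior grid on (0, 1)

def leading_mode(T):
    D = 1.0 + 0.5 * T                         # toy T-dependent diffusion
    Dm = np.empty(N + 1)                      # interface values D_{i+1/2}
    Dm[1:N] = 0.5 * (D[:-1] + D[1:])
    Dm[0], Dm[N] = D[0], D[-1]
    A = (np.diag((Dm[:-1] + Dm[1:]) / h**2)   # symmetric FD matrix
         - np.diag(Dm[1:N] / h**2, 1)
         - np.diag(Dm[1:N] / h**2, -1))
    vals, vecs = np.linalg.eigh(A)
    phi = np.abs(vecs[:, 0])                  # fundamental mode
    return vals[0], phi / phi.max()

T = np.zeros(N)                               # initial temperature guess
for k in range(50):                           # fixed-point coupling loop
    lam, phi = leading_mode(T)
    T_new = 0.2 * phi                         # invented heat-balance closure
    if np.max(np.abs(T_new - T)) < 1e-10:
        break
    T = T_new

print("leading eigenvalue:", lam, "after", k + 1, "iterations")
```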


How to compute transition times? - Lelièvre, Tony (Author of the talk) | CIRM H

Multi angle

We illustrate how the Hill relation and the notion of quasi-stationary distribution can be used to analyse the error introduced by many algorithms that have been proposed in the literature, in particular in molecular dynamics, to compute mean reaction times between metastable states for Markov processes. In particular, we present how this analysis gives rigorous foundations to methods using splitting algorithms to sample reactive trajectories.
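For contrast with the accelerated methods analysed in the talk, here is a brute-force Monte Carlo sketch of a mean transition time for an overdamped Langevin dynamics in a double-well potential (our own toy example; it is precisely this direct approach that becomes intractable in the strongly metastable regimes the talk targets).

```python
# Mean first-hitting time of the right well for dX = -V'(X) dt + sqrt(2/beta) dW,
# with V(x) = (x^2 - 1)^2, estimated by Euler-Maruyama simulation.
import numpy as np

rng = np.random.default_rng(3)
beta, dt = 3.0, 1e-3
grad_V = lambda x: 4 * x * (x**2 - 1)

def first_hitting_time(x0=-1.0, target=0.9):
    x, t = x0, 0.0
    while x < target:                 # run until the right well is reached
        x += -grad_V(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        t += dt
    return t

times = [first_hitting_time() for _ in range(200)]
print("mean transition time ~", np.mean(times))
```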

60J22 ; 65C40 ; 82C31

We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach the data and the unknown parameter are modelled as random variables; the distribution of the data is given and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained through Bayes' rule. We will talk about the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us to study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly talk about Markov chain Monte Carlo methods in this context.


PinT schemes using time as a parameter - Mula Hernandez, Olga (Author of the talk) | CIRM H

Multi angle

When thinking about parallel-in-time schemes, one often tends to view time as a variable to discretize within a numerical scheme (usually one involving a time-marching strategy). In this talk, I propose to review alternative strategies where time can be seen as a parameter, so that computing the PDE solution at a given time consists in evaluating closed formulas or in solving tasks of very low computational cost that do not involve any time marching. This type of approach is by nature entirely parallelizable. It can be achieved either by leveraging analytic formulas (whose existence strongly depends on the nature of the PDE), or by learning techniques such as model order reduction. For the latter strategy, convection-dominated problems are challenging (just as in classical PinT schemes such as parareal), and I will present recent contributions addressing this type of problem.
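A minimal sketch of the "time as a parameter" viewpoint in the simplest case where an analytic formula exists (a toy example of ours, not taken from the talk): for the periodic heat equation u_t = u_xx, the Fourier coefficient of mode k at time t is exp(-k^2 t) times the initial coefficient, so the solution at any collection of times can be evaluated independently, with no time marching.

```python
# Closed-formula-in-time solution of the periodic heat equation:
# each time evaluation is independent, hence embarrassingly parallel.
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
u0_hat = np.fft.fft(np.exp(np.cos(x)))      # initial condition in Fourier space
k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers

def u_at(t):
    # No dependence on earlier times: evaluate the closed formula at t.
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * u0_hat))

solutions = {t: u_at(t) for t in (0.01, 0.1, 1.0)}   # could be a parallel map
print({t: float(sol.max()) for t, sol in solutions.items()})
```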
