A ubiquitous problem in applied science is the recovery of physical phenomena, represented by multivariate functions, from incomplete measurements. These measurements typically take the form of pointwise data, but could also be obtained through linear functionals. Most often, recovery techniques are based on some form of approximation by a finite-dimensional space that should accurately capture the unknown multivariate function. The first part of the course will review fundamental tools from approximation theory that describe how well relevant classes of multivariate functions can be described by such finite-dimensional spaces. The notion of (linear or nonlinear) n-width will be developed, in relation with reduced modeling strategies that allow one to construct near-optimal approximation spaces for classes of parametrized PDEs. Functions of many variables, which are subject to the curse of dimensionality, will also be discussed. The second part of the course will review two recovery strategies from incomplete measurements: weighted least-squares and parametrized-background data-weak methods. Emphasis will be put on the derivation of sampling distributions of minimal size ensuring optimal convergence estimates.
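To make the least-squares strategy concrete, here is a minimal Python sketch (the target function, space dimension, and sample size are arbitrary choices for the illustration, not taken from the course) of weighted least-squares approximation from random pointwise samples in a Legendre polynomial space, with samples drawn from the Christoffel-function density and weighted by its inverse, in the spirit of the optimal sampling results the course discusses.

```python
# A minimal sketch (not the lecturer's code): weighted least-squares recovery
# of a univariate function from random pointwise samples, using a Legendre
# polynomial space and inverse-Christoffel-function weights.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
f = lambda x: np.exp(x) * np.sin(3 * x)    # unknown function (assumption)
n = 10                                     # dimension of the approximation space
m = 100                                    # number of samples

def basis(x):
    # Orthonormal Legendre basis on [-1, 1] w.r.t. the uniform measure dx/2
    B = np.stack([legendre.legval(x, np.eye(n)[j]) for j in range(n)], axis=1)
    return B * np.sqrt(2 * np.arange(n) + 1)

def christoffel(x):
    # k_n(x) = sum_j |L_j(x)|^2; the optimal sampling density is k_n(x) / (2n)
    return np.sum(basis(x) ** 2, axis=1)

# Rejection sampling from the Christoffel density on [-1, 1]
kmax = n ** 2                              # k_n attains its max n^2 at x = +-1
xs = []
while len(xs) < m:
    cand = rng.uniform(-1, 1, size=4 * m)
    keep = rng.uniform(0, kmax, size=cand.size) < christoffel(cand)
    xs.extend(cand[keep].tolist())
x = np.array(xs[:m])

w = n / christoffel(x)                     # weights = inverse Christoffel function
B = basis(x)
# Weighted least squares: minimize sum_i w_i (f(x_i) - (B c)_i)^2
c, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * B, np.sqrt(w) * f(x), rcond=None)

xt = np.linspace(-1, 1, 1000)
print(f"sup-norm error with m={m} weighted samples:",
      np.max(np.abs(f(xt) - basis(xt) @ c)))
```

The sampling and weighting are chosen so that the weighted design matrix is well conditioned with high probability for a sample size nearly proportional to the space dimension, which is the point of the "minimal size" sampling distributions mentioned above.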
We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach, the data and the unknown parameter are modelled as random variables: the distribution of the data is given, and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained through the Bayes rule. We will discuss the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly discuss Markov chain Monte Carlo methods in this context.
35R30 ; 65M32 ; 65M12 ; 65C05 ; 65C50 ; 76D07 ; 60J10
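As a concrete illustration of the Bayes rule and of a Markov chain Monte Carlo sampler, here is a toy Python sketch; the forward map, noise level, prior, and proposal step size are arbitrary choices made for the example, not taken from the lectures.

```python
# A toy sketch of the Bayesian approach: unknown u, forward map G, noisy data
# y = G(u) + eta, Gaussian prior and noise, posterior explored with a
# random-walk Metropolis Markov chain.
import numpy as np

rng = np.random.default_rng(1)
G = lambda u: np.array([u ** 3, np.sin(u)])    # hypothetical indirect measurements
u_true, sigma = 0.7, 0.05                      # truth and noise level (assumptions)
y = G(u_true) + sigma * rng.normal(size=2)

def log_posterior(u):
    # Bayes rule: log posterior = log likelihood + log prior (up to a constant)
    log_lik = -np.sum((y - G(u)) ** 2) / (2 * sigma ** 2)
    log_prior = -u ** 2 / 2                    # standard Gaussian prior
    return log_lik + log_prior

# Random-walk Metropolis
u, chain = 0.0, []
for _ in range(20000):
    v = u + 0.2 * rng.normal()                 # proposal step size (tuning choice)
    if np.log(rng.uniform()) < log_posterior(v) - log_posterior(u):
        u = v
    chain.append(u)
chain = np.array(chain[5000:])                 # discard burn-in
print(f"posterior mean ~ {chain.mean():.3f}, true u = {u_true}")
```

The posterior mean computed from the chain is one point estimator; the posterior mode (the MAP estimator) is the one connected to variational regularization, since maximizing the log posterior above amounts to minimizing a data-misfit term plus a Tikhonov-type penalty coming from the prior.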
Many problems in computational and data science require the approximation of high-dimensional functions. Examples of such problems can be found in physics, stochastic analysis, statistics, machine learning or uncertainty quantification. The approximation of high-dimensional functions requires the introduction of approximation tools that capture specific features of these functions.
In this lecture, we will give an introduction to tree tensor networks (TNs), or tree-based tensor formats. In Part I, we will present some general notions about tensors, tensor ranks, tensor formats, and the tensorization of vectors and functions. In Part II, we will introduce approximation tools based on TNs, present results on the approximation power (or expressivity) of TNs, and discuss the role of tensorization and of the architecture of TNs. Finally, in Part III, we will present algorithms for computing with TNs, including algorithms for tensor truncation, for the solution of optimization problems, and for learning functions from samples.
15A69
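As a minimal illustration, the sketch below computes a tensor-train approximation (the tree-based tensor format associated with a linear tree) of a tensorized function by sequential truncated SVDs; the tolerance and the example function are arbitrary choices for the illustration, not material from the lecture.

```python
# A minimal sketch of TT-SVD tensor truncation: sequential truncated SVDs
# compute a low-rank tensor-train approximation, with ranks chosen from the
# singular-value decay.
import numpy as np

def tt_svd(T, tol=1e-8):
    """Decompose an order-d tensor T into TT cores of shape (r_k, n_k, r_{k+1})."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))   # rank from singular values
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (to check the error)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])

# Tensorization of a vector: a smooth function has rapidly decaying TT ranks
x = np.linspace(0, 1, 2 ** 10)
T = np.sin(7 * x).reshape((2,) * 10)
cores = tt_svd(T, tol=1e-10)
print("TT ranks:", [G.shape[2] for G in cores[:-1]])
print("error:", np.linalg.norm(tt_to_full(cores).reshape(-1) - T.reshape(-1)))
```

The example also illustrates tensorization: a vector of 2^10 samples is reshaped into an order-10 tensor, and the smoothness of the underlying function translates into small TT ranks.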
Recently, a lot of progress has been made regarding the theoretical understanding of machine learning methods, in particular deep learning. One very promising direction is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The lecture series surveys this field and describes future challenges.
68T07 ; 65Mxx
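As a small numerical illustration of the overparametrization phenomenon (a toy setup invented for this note, not material from the lectures), the sketch below fits a random-features model with far more parameters than data points: the minimum-norm least-squares solution interpolates the noisy training data while remaining a reasonable predictor.

```python
# Overparametrization in miniature: a random-features regression with
# p >> n parameters interpolates noisy data, yet the minimum-norm solution
# can still generalize.
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.cos(2 * np.pi * x)
n_train, n_feat = 30, 300                      # p >> n: overparametrized
x_tr = rng.uniform(-1, 1, n_train)
y_tr = f(x_tr) + 0.1 * rng.normal(size=n_train)

W = rng.normal(scale=5.0, size=n_feat)         # feature bandwidth (tuning choice)
b = rng.uniform(0, 2 * np.pi, n_feat)
phi = lambda x: np.cos(np.outer(x, W) + b)     # random Fourier features

theta = np.linalg.pinv(phi(x_tr)) @ y_tr       # minimum-norm interpolant
x_te = np.linspace(-1, 1, 500)
print("train error:", np.abs(phi(x_tr) @ theta - y_tr).max())   # ~0: interpolation
print("test RMSE  :", np.sqrt(np.mean((phi(x_te) @ theta - f(x_te)) ** 2)))
```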
Coupling models or coupling codes: two examples where the coupled model has better properties than each model taken separately.
In a simple setting (1D in space), a model of porous media (characterized by a degenerate parabolic equation) is coupled with the behavior of the medium; the resulting system is a non-degenerate heat equation.
Similarly, the diffusion equation of neutronics, whose coefficients depend on the local temperature of the nuclear core, is (classically) an eigenvalue problem with infinitely many eigenvalues. Coupling it with the equation for the local temperature yields a unique solution.
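The sketch below is a toy 1D illustration of this second coupling (all coefficients and the temperature equation are invented for the example, not the speaker's model): alternating between the fundamental eigenmode of the discretized diffusion problem, whose coefficient depends on temperature, and a temperature update driven by the power converges to a single coupled solution.

```python
# Toy 1D neutronics coupling: the eigenproblem -u'' = lam * sigma(T) * u has
# infinitely many eigenpairs, but alternating it with a temperature equation
# driven by the power u^2 singles out one coupled solution (fixed point).
import numpy as np

N = 200
h = 1.0 / (N + 1)
# Second-difference matrix for -u'' with homogeneous Dirichlet conditions
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2

sigma = lambda T: 1.0 + 0.5 / (1.0 + T)        # temperature-dependent coefficient
T = np.zeros(N)
for it in range(100):
    # Generalized eigenproblem A u = lam * diag(sigma(T)) u, symmetrized as
    # D^{-1/2} A D^{-1/2}; keep the smallest eigenvalue (fundamental mode)
    d = np.sqrt(sigma(T))
    lam, V = np.linalg.eigh(A / np.outer(d, d))
    u = np.abs(V[:, 0] / d)
    u /= u.max()                                # normalize the neutron flux
    T_new = np.linalg.solve(A, u ** 2)          # stand-in heat equation -T'' = u^2
    if np.abs(T_new - T).max() < 1e-12:
        break
    T = T_new
print(f"fixed point after {it} iterations; fundamental eigenvalue = {lam[0]:.4f}")
```

Only the fundamental (sign-definite) mode is physically meaningful here, which is why the iteration tracks the smallest eigenvalue; the temperature feedback then pins down the amplitude and the coefficient, so the coupled system has a unique solution where the uncoupled eigenproblem had infinitely many.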