
Documents 15A69: 5 results

Many problems in computational and data science require the approximation of high-dimensional functions. Examples of such problems can be found in physics, stochastic analysis, statistics, machine learning or uncertainty quantification. The approximation of high-dimensional functions requires the introduction of approximation tools that capture specific features of these functions.
In this lecture, we will give an introduction to tree tensor networks (TNs), or tree-based tensor formats. In part I, we will present some general notions about tensors, tensor ranks, tensor formats and the tensorization of vectors and functions. In part II, we will introduce approximation tools based on TNs, present results on the approximation power (or expressivity) of TNs, and discuss the role of tensorization and of the architecture of TNs. Finally, in part III, we will present algorithms for computing with TNs, including algorithms for tensor truncation, for the solution of optimization problems, and for learning functions from samples.
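The truncation algorithm mentioned in part III can be illustrated, for the simplest tree (a linear chain, i.e. the tensor-train format), by a sequence of truncated SVDs of matricizations. The following is a minimal numpy sketch, not code from the lecture; all function names are our own.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # Sequential SVD: peel off one mode at a time, truncating each
    # matricization to at most max_rank (TT / linear-tree format).
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))
        mat = s[:r, None] * vt[:r]
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_contract(cores):
    # Contract the chain of 3-way cores back into a full array.
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A sum of two rank-one tensors has TT-ranks at most 2, so
# truncation at max_rank=2 recovers it exactly.
rng = np.random.default_rng(0)
a = rng.standard_normal((2, 4))
b = rng.standard_normal((2, 5))
c = rng.standard_normal((2, 6))
t = np.einsum('ri,rj,rk->ijk', a, b, c)
err = np.linalg.norm(tt_contract(tt_svd(t, max_rank=2)) - t)
```

For general dimension trees the same idea applies, with one truncated SVD per edge of the tree.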

15A69

We introduce effective computations of tensor rank and tensor decomposition via apolarity and nonabelian apolarity, with some examples described in detail. The first lecture treats the symmetric case, where symmetric tensors are identified with homogeneous polynomials. The second lecture treats the general case.
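In the symmetric case, one concrete apolarity tool is the catalecticant matrix of a form: its rank lower-bounds the symmetric (Waring) rank. As a hedged illustration (ours, not from the lectures), here is the middle catalecticant of a binary cubic in numpy:

```python
import numpy as np

def catalecticant(c, a):
    # Catalecticant (Hankel) matrix of a binary form of degree
    # d = len(c) - 1, written f = sum_i comb(d, i) c[i] x^(d-i) y^i,
    # for the splitting d = a + (d - a): entry (j, k) = c[j + k].
    d = len(c) - 1
    return np.array([[c[j + k] for k in range(d - a + 1)]
                     for j in range(a + 1)], dtype=float)

# f = x^3 + y^3 has scaled coefficients c = (1, 0, 0, 1).
C = catalecticant([1, 0, 0, 1], a=1)
rank_bound = np.linalg.matrix_rank(C)  # = 2
```

The bound says the symmetric rank of f is at least 2, and it is attained: f = x^3 + y^3 is itself a sum of two cubes.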

14N07 ; 14Q20 ; 15A69 ; 15A72

Tensor methods have emerged as an indispensable tool for the numerical solution of high-dimensional problems in computational science, and in particular problems arising in stochastic and parametric analyses. In many practical situations, the approximation of functions of multiple parameters (or random variables) is made computationally tractable by using low-rank tensor formats. Here, we present some results on rank-structured approximations and we discuss the connection between best approximation problems in tree-based low-rank formats and the problem of finding optimal low-dimensional subspaces for the projection of a tensor. Then, we present constructive algorithms that adopt a subspace point of view for the computation of sub-optimal low-rank approximations with respect to a given norm. These algorithms are based on the construction of sequences of suboptimal but nested subspaces.
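The subspace point of view can be sketched, in its simplest form, by the truncated higher-order SVD: for each mode one computes a low-dimensional subspace from the corresponding unfolding and projects the tensor onto the product of these subspaces. This numpy sketch is our own illustration of that idea, not the constructive algorithms of the talk:

```python
import numpy as np

def hosvd_project(t, ranks):
    # For each mode k, take the ranks[k] leading left singular
    # vectors of the mode-k unfolding, then project t onto the
    # tensor product of these subspaces (truncated HOSVD).
    bases = []
    for k, r in enumerate(ranks):
        unfold = np.moveaxis(t, k, 0).reshape(t.shape[k], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        bases.append(u[:, :r])
    core = t
    for u in bases:
        # Contracting axis 0 each time cycles the modes back in order.
        core = np.tensordot(core, u, axes=([0], [0]))
    approx = core
    for u in bases:
        approx = np.tensordot(approx, u, axes=([0], [1]))
    return approx

# A tensor with multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(1)
g = rng.standard_normal((2, 2, 2))
us = [rng.standard_normal((n, 2)) for n in (4, 5, 6)]
t = np.einsum('abc,ia,jb,kc->ijk', g, *us)
err = np.linalg.norm(hosvd_project(t, (2, 2, 2)) - t)
```

The constructive algorithms of the talk replace these fixed SVD subspaces by sequences of nested subspaces built greedily with respect to a given norm.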

Keywords: high dimensional problems - tensor numerical methods - projection-based model order reduction - low-rank tensor formats - greedy algorithms - proper generalized decomposition - uncertainty quantification - parametric equations

65D15 ; 35J50 ; 41A63 ; 65N12 ; 15A69 ; 46B28 ; 46A32 ; 41A46 ; 41A15
