
Documents 41A46: 4 results

Given a domain D in $C^n$ and a compact subset K of D, the set $A^D_K$ of all restrictions to K of functions holomorphic on D whose modulus is bounded by 1 is a compact subset of the Banach space $C(K)$ of continuous functions on K. The sequence $d_m(A^D_K)$ of Kolmogorov m-widths of $A^D_K$ provides a measure of the degree of compactness of the set $A^D_K$ in $C(K)$, and the study of its asymptotics has a long history, essentially going back to Kolmogorov's work on epsilon-entropy of compact sets in the 1950s. In the 1980s Zakharyuta gave, for suitable D and K, the exact asymptotics of these diameters (1), and showed that it is implied by a conjecture, now known as Zakharyuta's Conjecture, concerning the approximability of the regularised relative extremal function of K and D by certain pluricomplex Green functions. Zakharyuta's Conjecture was proved by Nivoche in 2004, thus settling (1) at the same time. We shall give a new proof of the asymptotics (1) for D strictly hyperconvex and K non-pluripolar which does not rely on Zakharyuta's Conjecture. Instead we proceed more directly by a two-pronged approach, establishing sharp upper and lower bounds for the Kolmogorov widths. The lower bounds follow from concentration results, of independent interest, for the eigenvalues of a certain family of Toeplitz operators, while the upper bounds follow from an application of the Bergman–Weil formula together with an exhaustion procedure by special holomorphic polyhedra.
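For reference, the Kolmogorov m-width appearing above has the standard definition (here with $A = A^D_K$ and $X = C(K)$):

```latex
d_m(A;X) \;=\; \inf_{\substack{L\subset X \\ \dim L\le m}}\;\sup_{f\in A}\;\inf_{g\in L}\,\|f-g\|_X ,
```

i.e. the error of the best possible uniform approximation of the set A by an m-dimensional linear subspace of X.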

41A46 ; 32A36 ; 32U20 ; 32W20 ; 35P15

Tensor methods have emerged as an indispensable tool for the numerical solution of high-dimensional problems in computational science, and in particular problems arising in stochastic and parametric analyses. In many practical situations, the approximation of functions of multiple parameters (or random variables) is made computationally tractable by using low-rank tensor formats. Here, we present some results on rank-structured approximations and we discuss the connection between best approximation problems in tree-based low-rank formats and the problem of finding optimal low-dimensional subspaces for the projection of a tensor. Then, we present constructive algorithms that adopt a subspace point of view for the computation of sub-optimal low-rank approximations with respect to a given norm. These algorithms are based on the construction of sequences of suboptimal but nested subspaces.

Keywords: high dimensional problems - tensor numerical methods - projection-based model order reduction - low-rank tensor formats - greedy algorithms - proper generalized decomposition - uncertainty quantification - parametric equations
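As a minimal illustration of the low-rank idea (a sketch, not the talk's constructive algorithms): for an order-2 tensor, i.e. a matrix of samples of a function of two parameters, the truncated SVD gives the best rank-r approximation in the Frobenius norm, and smoothness of the function typically produces fast singular-value decay, hence low-rank compressibility.

```python
import numpy as np

def truncated_svd(A, r):
    """Best rank-r approximation of A in the Frobenius/spectral norm
    (Eckart-Young), via the truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Sample a smooth function f(x, y) = exp(-x*y) of two "parameters" on a grid.
x = np.linspace(0.0, 1.0, 200)
F = np.exp(-np.outer(x, x))

# A rank-5 approximation already captures F to high relative accuracy.
F5 = truncated_svd(F, 5)
err = np.linalg.norm(F - F5) / np.linalg.norm(F)
```

Tree-based tensor formats generalize this subspace viewpoint to functions of many parameters, with nested low-dimensional subspaces playing the role of the leading singular vectors.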

65D15 ; 35J50 ; 41A63 ; 65N12 ; 15A69 ; 46B28 ; 46A32 ; 41A46 ; 41A15

Smooth parametrization consists in subdividing the mathematical objects under consideration into simple pieces, and then representing each piece parametrically, while keeping control of high-order derivatives. The main goal of the talk is to provide a short overview of some results and open problems on smooth parametrization and its applications in several apparently separate domains: Smooth Dynamics, Diophantine Geometry, and Approximation Theory. The structure of the results, open problems, and conjectures in each of these domains shows in many cases a remarkable similarity, which I'll try to stress. Sometimes this similarity can be easily explained; sometimes the reasons remain somewhat obscure, and this motivates some natural questions discussed in the talk. I plan to present also some new results, connecting smooth parametrization with “Remez-type” (or “Norming”) inequalities for polynomials restricted to analytic varieties.
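For orientation, one standard formulation of the classical one-dimensional Remez inequality, which the “Remez-type” inequalities above generalize: if $p$ is a polynomial of degree $n$ and $E\subset[-1,1]$ is measurable with $|E|\ge 2-s$ for some $0<s<2$, then

```latex
\sup_{x\in[-1,1]}|p(x)| \;\le\; T_n\!\left(\frac{2+s}{2-s}\right)\,\sup_{x\in E}|p(x)|,
```

where $T_n$ is the Chebyshev polynomial of degree $n$; equality is attained by suitably rescaled Chebyshev polynomials. “Norming” inequalities of the kind mentioned in the talk concern analogues for polynomials restricted to analytic varieties.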

37C05 ; 11Gxx ; 41A46

We discuss classification problems in high dimension. We study classification problems using three classical notions: the complexity of the decision boundary, noise, and margin. We demonstrate that under suitable conditions on the decision boundary, classification problems can be approximated very efficiently, even in high dimensions. If a margin condition is assumed, then arbitrarily fast approximation rates can be achieved, despite the problem being high-dimensional and discontinuous. We extend the approximation results to learning results and show close-to-optimal learning rates for empirical risk minimization in high-dimensional classification.
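A toy sketch of the two ingredients, margin and empirical risk minimization (illustrative only, not the talk's high-dimensional construction): labels are determined by a one-dimensional decision boundary at $x = 0.3$, a margin condition keeps samples away from the boundary, and ERM is carried out over the hypothesis class of threshold classifiers $h_t(x) = \mathrm{sign}(x - t)$.

```python
import numpy as np

rng = np.random.default_rng(0)
margin = 0.05

# Samples with a margin condition: no points within `margin` of the
# true decision boundary at x = 0.3.
x = rng.uniform(0.0, 1.0, 500)
x = x[np.abs(x - 0.3) > margin]
y = np.sign(x - 0.3)                     # true labels in {-1, +1}

def empirical_risk(t):
    """Fraction of samples misclassified by the threshold classifier h_t."""
    return np.mean(np.sign(x - t) != y)

# ERM: pick the candidate threshold with the smallest empirical risk.
# Thanks to the margin, every threshold inside the margin band achieves
# zero empirical risk.
candidates = np.linspace(0.0, 1.0, 1001)
t_hat = candidates[np.argmin([empirical_risk(t) for t in candidates])]
```

The margin makes the empirical risk exactly zero on a whole band of thresholds, which is the mechanism behind the fast rates: the learner only needs to locate the boundary up to the margin, not resolve the discontinuity itself.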

68T05 ; 62C20 ; 41A25 ; 41A46
