



Documents 65N12 | records found: 5



Post-edited  25+ years of wavelets for PDEs
Kunoth, Angela (Conference Author) | CIRM (Publisher)

Ingrid Daubechies' construction of orthonormal wavelet bases with compact support, published in 1988, sparked general interest in employing these functions also for the numerical solution of partial differential equations (PDEs). Concentrating on linear elliptic and parabolic PDEs, I will start from theoretical topics such as the well-posedness of the problem in appropriate function spaces and the regularity of solutions, and will then address quality and optimality of approximations and related concepts from approximation theory. We will see that wavelet bases can serve as a basic ingredient, both for the theory and for algorithmic realizations. Particularly in situations where solutions exhibit singularities, wavelet concepts enable adaptive approximations for which convergence and optimal algorithmic complexity can be established. I will describe corresponding implementations based on biorthogonal spline-wavelets.
Moreover, wavelet-related concepts have triggered new developments for efficiently solving complex systems of PDEs, as they arise from optimization problems with PDEs.
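The construction the abstract refers to can be made concrete with a small sketch. The following pure-Python fragment (an illustration, not the speaker's adaptive solver) builds the four-tap Daubechies db2 filter pair, whose coefficients are standard, and applies one level of a periodized discrete wavelet transform; the helper name `dwt_step` is ours.

```python
import math

# Daubechies db2 low-pass filter coefficients (compact support, 4 taps).
# These generate an orthonormal wavelet basis from Daubechies' 1988 construction.
s3 = math.sqrt(3)
h = [(1 + s3) / (4 * math.sqrt(2)),
     (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)),
     (1 - s3) / (4 * math.sqrt(2))]

# High-pass (wavelet) filter via the quadrature-mirror relation g[k] = (-1)^k h[3-k].
g = [(-1) ** k * h[3 - k] for k in range(4)]

def dwt_step(signal):
    """One level of the periodized discrete wavelet transform:
    returns (approximation, detail) coefficients at half the length."""
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(h[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(g[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail

# Orthonormality check: the filter has unit energy.
print(round(sum(c * c for c in h), 10))  # 1.0

# One vanishing moment: the detail coefficients of a constant signal vanish,
# which is why wavelet coefficients are large only near singularities.
approx, detail = dwt_step([1.0] * 8)
print(all(abs(x) < 1e-12 for x in detail))  # True
```

The last check is the mechanism behind adaptive approximation: away from singularities the detail coefficients are negligible and can be discarded.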

65T60 ; 94A08 ; 65N12 ; 65N30 ; 49J20


We present a lowest-order Serendipity Virtual Element method and show its use for the numerical solution of linear magnetostatic problems in three dimensions. The method can be applied to very general decompositions of the computational domain (as is natural for Virtual Element Methods) and uses as unknowns the (constant) tangential component of the magnetic field H on each edge, and the vertex values of the Lagrange multiplier p (used to enforce the solenoidality of the magnetic induction B = µH). In this respect the method can be seen as the natural generalization of the lowest-order Edge Finite Element Method (the so-called "first kind Nédélec" elements) to polyhedra of almost arbitrary shape, and as we show on some numerical examples it exhibits very good accuracy (for a lowest-order element) and excellent robustness with respect to distortions. Hints on a whole family of elements will also be given.
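To illustrate the edge unknown mentioned above (a hypothetical helper, not code from the talk): for both Nédélec and these virtual elements, the lowest-order degree of freedom on an edge e is the average tangential component of H, i.e. (1/|e|) ∫_e H · t ds.

```python
import math

def edge_tangential_dof(H, a, b, n_quad=4):
    """Average tangential component of the vector field H along segment [a, b],
    approximated with a composite midpoint rule (exact for fields linear in x)."""
    t = [bi - ai for ai, bi in zip(a, b)]          # edge vector (not normalized)
    length = math.sqrt(sum(c * c for c in t))
    tau = [c / length for c in t]                  # unit tangent
    total = 0.0
    for i in range(n_quad):
        s = (i + 0.5) / n_quad                     # midpoint of sub-interval i
        x = [ai + s * ti for ai, ti in zip(a, t)]  # quadrature point on the edge
        Hx = H(x)
        total += sum(hc * tc for hc, tc in zip(Hx, tau)) / n_quad
    return total  # already averaged over the edge

# A uniform field H = (0, 0, 1): the DOF is the z-component of the unit tangent.
dof = edge_tangential_dof(lambda x: (0.0, 0.0, 1.0),
                          (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(round(dof, 4))  # 0.5774, i.e. 1/sqrt(3)
```

Because the DOF is defined on edges alone, it makes sense on polyhedra of almost arbitrary shape, which is the generalization the abstract describes.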

65N30 ; 65N12


Many problems in computational science require the approximation of a high-dimensional function from limited amounts of data. For instance, a common task in Uncertainty Quantification (UQ) involves building a surrogate model for a parametrized computational model. Complex physical systems involve computational models with many parameters, resulting in multivariate functions of many variables. Although the amount of data may be large, the curse of dimensionality essentially prohibits collecting or processing enough data to reconstruct such a function using classical approximation techniques. Over the last five years, spurred by its successful application in signal and image processing, compressed sensing has begun to emerge as a potential tool for surrogate model construction in UQ. In this talk, I will give an overview of the application of compressed sensing to high-dimensional approximation. I will demonstrate how an appropriate implementation of compressed sensing overcomes the curse of dimensionality (up to a log factor). This is based on weighted ℓ1 regularizers and structured sparsity in so-called lower sets. If time permits, I will also discuss several variations and extensions relevant to UQ applications, many of which have links to the standard compressed sensing theory. These include dealing with corrupted data, the effect of model error, functions defined on irregular domains, and incorporating additional information such as gradient data. I will also highlight several challenges and open problems.
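A minimal sketch of the weighted ℓ1 idea (our toy illustration, not the speaker's method): iterative soft thresholding (ISTA) for min_x ½‖Ax − y‖² + λ Σᵢ wᵢ|xᵢ|, on a tiny underdetermined system whose sparsest solution is (0, 0, 1). The matrix, weights, and parameters below are illustrative.

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft(z, t):
    """Soft-thresholding: the proximal map of t*|.| applied to a scalar."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def ista(A, y, w, lam=0.01, step=0.25, iters=3000):
    """ISTA for 0.5*||Ax - y||^2 + lam * sum_i w[i]*|x[i]| (weighted l1)."""
    n = len(A[0])
    x = [0.0] * n
    At = [list(col) for col in zip(*A)]                    # transpose of A
    for _ in range(iters):
        r = [ri - yi for ri, yi in zip(matvec(A, x), y)]   # residual Ax - y
        grad = matvec(At, r)                               # gradient of smooth part
        x = [soft(xi - step * gi, step * lam * wi)         # gradient + prox step
             for xi, gi, wi in zip(x, grad, w)]
    return x

A = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]   # 2 measurements, 3 unknowns
y = [1.0, 1.0]                            # consistent with the sparse vector (0, 0, 1)
x = ista(A, y, w=[1.0, 1.0, 1.0])
print([round(v, 3) for v in x])           # [0.0, 0.0, 0.995]: mass on the third entry
```

The ℓ1 penalty selects the sparse solution (0, 0, 1) from the line of least-squares solutions (up to the small shrinkage λ); in the lower-set setting of the talk, the weights wᵢ grow with the polynomial degree of the i-th basis function.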

41A05 ; 41A10 ; 65N12 ; 65N15 ; 94A12


Tensor methods have emerged as an indispensable tool for the numerical solution of high-dimensional problems in computational science, and in particular problems arising in stochastic and parametric analyses. In many practical situations, the approximation of functions of multiple parameters (or random variables) is made computationally tractable by using low-rank tensor formats. Here, we present some results on rank-structured approximations and we discuss the connection between best approximation problems in tree-based low-rank formats and the problem of finding optimal low-dimensional subspaces for the projection of a tensor. Then, we present constructive algorithms that adopt a subspace point of view for the computation of sub-optimal low-rank approximations with respect to a given norm. These algorithms are based on the construction of sequences of suboptimal but nested subspaces.

Keywords: high dimensional problems - tensor numerical methods - projection-based model order reduction - low-rank tensor formats - greedy algorithms - proper generalized decomposition - uncertainty quantification - parametric equations
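The simplest instance of the subspace point of view is the matrix case: alternating least squares (ALS) for a best rank-1 approximation A ≈ u vᵀ, which tree-based tensor formats nest across dimensions. A hedged pure-Python sketch on toy data (not the speaker's algorithms):

```python
import math

def als_rank1(A, iters=50):
    """Alternating least squares for a rank-1 approximation A ~ u v^T.
    Each half-step solves a linear least-squares problem in closed form."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    u = [0.0] * m
    for _ in range(iters):
        nv = sum(c * c for c in v)
        u = [sum(A[i][j] * v[j] for j in range(n)) / nv for i in range(m)]  # u = Av/|v|^2
        nu = sum(c * c for c in u)
        v = [sum(A[i][j] * u[i] for i in range(m)) / nu for j in range(n)]  # v = A^T u/|u|^2
    return u, v

# Toy data: an exactly rank-1 matrix (outer product of (2,1,3) and (1,2,3)),
# so ALS recovers it and the residual vanishes.
A = [[2.0, 4.0, 6.0],
     [1.0, 2.0, 3.0],
     [3.0, 6.0, 9.0]]
u, v = als_rank1(A)
resid = math.sqrt(sum((A[i][j] - u[i] * v[j]) ** 2
                      for i in range(3) for j in range(3)))
print(round(resid, 10))  # 0.0
```

The greedy algorithms mentioned in the keywords build up an approximation by repeating such rank-1 corrections on the residual; the tree-based formats instead fix nested subspaces spanned by factors like u and v.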

65D15 ; 35J50 ; 41A63 ; 65N12 ; 15A69 ; 46B28 ; 46A32 ; 41A46 ; 41A15
