
Documents — Després, Bruno — 17 results

Lagrange - history of mathematics - 19th century - fluid mechanics

01A55 ; 70H03 ; 76M30 ; 76B15

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated as the parameters vary over a certain range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometric PDE properties in addressing classical bottlenecks.
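The linear reduction step described in this abstract can be sketched with a proper orthogonal decomposition (POD) of a snapshot matrix. Everything below (the parametric family, grid size, number of snapshots, truncation rank) is an illustrative assumption, not material from the course:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a PDE solution u(x; mu_j)
# sampled on a grid, for parameters mu_j in some range (assumed data).
n_grid, n_params = 200, 30
x = np.linspace(0.0, 1.0, n_grid)
mus = np.linspace(1.0, 3.0, n_params)
snapshots = np.array([np.sin(mu * np.pi * x) for mu in mus]).T  # (n_grid, n_params)

# POD: SVD of the snapshot matrix; the leading left singular vectors
# span a low-dimensional linear reduced space.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5
basis = U[:, :r]                               # reduced basis, (n_grid, r)

# Orthogonal projection of a new parametric solution onto the reduced space;
# the relative error measures how well the linear space captures it.
u_new = np.sin(2.2 * np.pi * x)
u_red = basis @ (basis.T @ u_new)
rel_err = np.linalg.norm(u_new - u_red) / np.linalg.norm(u_new)
```

The decay of the singular values `s` is what decides whether such a linear space can be small; the "classical bottlenecks" mentioned above (e.g. transport-dominated problems) are precisely the cases where this decay is slow and nonlinear methods become attractive.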

65N21 ; 65D99

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated as the parameters vary over a certain range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometric PDE properties in addressing classical bottlenecks.

65N21 ; 65D99

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated as the parameters vary over a certain range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometric PDE properties in addressing classical bottlenecks.

65N21 ; 65D99


Finite neuron method - Lecture 1 - Xu, Jinchao (Speaker) | CIRM

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success as well as the challenges of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (whereas the gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.
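The flavor of NN-based PDE solvers described above can be illustrated with a deliberately simplified cousin: a one-hidden-layer network with *fixed* random inner weights, fitted to a 1D boundary-value problem by least-squares collocation on the strong residual. This is not the finite neuron method or PINN training as presented in the lectures (those optimize all weights, typically with SGD/Adam); it is only a linear sketch of the same "network ansatz + PDE residual" idea, with every numerical choice below an assumption:

```python
import numpy as np

# Toy problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with f = pi^2 sin(pi x), so the exact solution is u = sin(pi x).
rng = np.random.default_rng(0)
m = 50                                    # number of neurons (random features)
w = rng.uniform(-8.0, 8.0, m)             # fixed (untrained) inner weights
b = rng.uniform(-8.0, 8.0, m)             # fixed inner biases

def phi(x):                               # tanh features, shape (len(x), m)
    return np.tanh(np.outer(x, w) + b)

def phi_dd(x):                            # second derivative of each feature
    t = np.tanh(np.outer(x, w) + b)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

xc = np.linspace(0.0, 1.0, 100)           # collocation points
f = np.pi**2 * np.sin(np.pi * xc)

# Stack PDE-residual rows and (weighted) boundary rows. The outer
# coefficients c enter linearly, so "training" is one least-squares solve.
A = np.vstack([-phi_dd(xc), 10.0 * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_hat = phi(xc) @ c                       # approximate solution on the grid
```

Making the inner weights trainable turns this linear solve into the nonconvex optimization problem whose difficulties (and remedies) the lectures analyze.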


Finite neuron method - Lecture 2 - Xu, Jinchao (Speaker) | CIRM

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success as well as the challenges of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (whereas the gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.


Finite neuron method - Lecture 3 - Xu, Jinchao (Speaker) | CIRM

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success as well as the challenges of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (whereas the gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.


Learning operators - Lecture 1 - Mishra, Siddhartha (Speaker) | CIRM

Multi angle

Operators are mappings between infinite-dimensional spaces which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional context. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs. A large number of numerical examples will be provided as illustration.
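The DeepONet architecture mentioned above can be sketched as two small networks combined by an inner product: a *branch* net encodes the input function from its values at fixed sensor points, a *trunk* net encodes the query location, and their dot product gives the output function's value there. The forward pass below uses random, untrained weights and made-up sizes purely to show the data flow; it is a sketch of the architecture, not an implementation from the course:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_params(sizes, rng):
    """Random weights/biases for a small fully connected net (assumed sizes)."""
    return [(rng.normal(0.0, 1.0 / np.sqrt(m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass: tanh on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m_sensors, p = 32, 16                      # sensor count and latent dimension
branch = mlp_params([m_sensors, 64, p], rng)
trunk = mlp_params([1, 64, p], rng)

# One input function a(x), sampled at fixed sensor locations.
xs = np.linspace(0.0, 1.0, m_sensors)
a = np.sin(2.0 * np.pi * xs)[None, :]      # batch of 1 input function
y = np.linspace(0.0, 1.0, 50)[:, None]     # 50 query locations

# (G a)(y) ≈ sum_k branch_k(a) * trunk_k(y): one output value per query point.
G = mlp(branch, a) @ mlp(trunk, y).T       # shape (1, 50)
```

Training would fit both nets to input/output function pairs of the target operator; an FNO replaces this pointwise branch/trunk split with learned convolutions in Fourier space.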

65Mxx ; 65Nxx ; 68Txx


Learning operators - Lecture 2 - Mishra, Siddhartha (Speaker) | CIRM

Multi angle

Operators are mappings between infinite-dimensional spaces which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional context. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs. A large number of numerical examples will be provided as illustration.

65Mxx ; 65Nxx ; 68Txx


Learning operators - Lecture 3 - Mishra, Siddhartha (Speaker) | CIRM

Multi angle

Operators are mappings between infinite-dimensional spaces which arise in the context of differential equations. Learning operators is challenging due to the inherent infinite-dimensional context. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs. A large number of numerical examples will be provided as illustration.

65Mxx ; 65Nxx ; 68Txx
