
Documents: Frénod, Emmanuel (23 results)

Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the resulting sets of parametric PDE solutions that are generated as the parameters vary over a certain range. These solution sets are difficult to handle, since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods. The techniques need to be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods as applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometrical PDE properties in addressing classical bottlenecks.
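
To make the linear reduction idea concrete, here is a minimal sketch of Proper Orthogonal Decomposition (POD), one classical linear model order reduction technique. The parametric family u(x; mu) used to generate snapshots is a hypothetical stand-in for a full-order solver's output, and the 0.9999 energy threshold is an arbitrary illustrative choice; this is not material from the course itself.

```python
# Minimal POD sketch (illustrative only): the "snapshots" below come from
# a made-up parametric family u(x; mu), standing in for solutions that a
# full-order parametric PDE solver would produce.
import numpy as np

x = np.linspace(0.0, 1.0, 200)    # spatial grid
mus = np.linspace(0.5, 3.0, 40)   # parameter samples

# Snapshot matrix: one column per parameter value.
S = np.column_stack([np.sin(mu * np.pi * x) * np.exp(-mu * x) for mu in mus])

# POD basis = leading left singular vectors, truncated by an energy criterion.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # reduced dimension
V = U[:, :r]

# A new solution is approximated by its orthogonal projection onto span(V).
u_new = np.sin(1.7 * np.pi * x) * np.exp(-1.7 * x)
u_rb = V @ (V.T @ u_new)
rel_err = np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)
print(f"reduced dimension r = {r}, relative error = {rel_err:.2e}")
```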

65N21 ; 65D99

Finite neuron method - Lecture 1 - Xu, Jinchao (Author of the conference) | CIRM H

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of the FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success, as well as the challenges, of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (while gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.
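
To fix ideas, here is a minimal sketch of the generic PINN approach mentioned above: minimize a PDE residual plus a boundary penalty with a gradient-based optimizer (Adam). This is a textbook illustration, not the finite neuron method from the lectures, and every hyperparameter in it is an arbitrary assumption.

```python
# Minimal PINN sketch for -u''(x) = pi^2 sin(pi x) on (0, 1) with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
# Network size, sampling, step count, and the boundary weight are all
# illustrative assumptions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)   # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    pde_loss = (-d2u - f).pow(2).mean()          # PDE residual at collocation points
    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = net(xb).pow(2).mean()              # Dirichlet boundary penalty
    loss = pde_loss + 100.0 * bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0.0, 1.0, 5).unsqueeze(1)
print(net(x_test).detach().squeeze())            # should roughly track sin(pi x)
```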

Finite neuron method - Lecture 2 - Xu, Jinchao (Author of the conference) | CIRM H

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of the FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success, as well as the challenges, of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (while gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.

Finite neuron method - Lecture 3 - Xu, Jinchao (Author of the conference) | CIRM H

Multi angle

In this series of lectures, I will report on some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of the FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success, as well as the challenges, of PINN and FNM when trained by gradient-based methods such as SGD and Adam. I will then present some new classes of training algorithms that can theoretically achieve, and numerically exhibit, the asymptotic rate of the underlying discretization algorithms (while gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.

Learning operators - Lecture 1 - Mishra, Siddhartha (Author of the conference) | CIRM H

Multi angle

Operators are mappings between infinite-dimensional spaces that arise in the context of differential equations. Learning operators is challenging due to this inherently infinite-dimensional setting. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets, and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures learn operators arising from PDEs, and a large number of numerical examples to illustrate them.
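
As a concrete illustration of one of these architectures, here is a minimal DeepONet-style forward pass: a branch network encodes the input function through its values at fixed sensor locations, a trunk network encodes the query point, and the operator output is their inner product. The number of sensors, the latent width, and the small MLPs are illustrative assumptions, not the course's setup.

```python
# Minimal DeepONet-style sketch: G(u)(y) is approximated by the inner
# product of branch(u at sensors) and trunk(y). Sizes are made up.
import torch

m, p = 100, 64                                   # sensor count, latent width

branch = torch.nn.Sequential(                    # encodes u(x_1), ..., u(x_m)
    torch.nn.Linear(m, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))
trunk = torch.nn.Sequential(                     # encodes the query location y
    torch.nn.Linear(1, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))

def deeponet(u_sensors, y):
    # u_sensors: (batch, m) input-function samples; y: (batch, 1) query points.
    return (branch(u_sensors) * trunk(y)).sum(dim=-1, keepdim=True)

u = torch.randn(8, m)                            # a batch of input functions
y = torch.rand(8, 1)
print(deeponet(u, y).shape)                      # (8, 1): operator output G(u)(y)
```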

65Mxx ; 65Nxx ; 68Txx

Learning operators - Lecture 2 - Mishra, Siddhartha (Author of the conference) | CIRM H

Multi angle

Operators are mappings between infinite-dimensional spaces that arise in the context of differential equations. Learning operators is challenging due to this inherently infinite-dimensional setting. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets, and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures learn operators arising from PDEs, and a large number of numerical examples to illustrate them.

65Mxx ; 65Nxx ; 68Txx

Learning operators - Lecture 3 - Mishra, Siddhartha (Author of the conference) | CIRM H

Multi angle

Operators are mappings between infinite-dimensional spaces that arise in the context of differential equations. Learning operators is challenging due to this inherently infinite-dimensional setting. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets, and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures learn operators arising from PDEs, and a large number of numerical examples to illustrate them.

65Mxx ; 65Nxx ; 68Txx

High-fidelity numerical simulation of physical systems modeled by time-dependent partial differential equations (PDEs) has been at the center of many technological advances in the last century. However, for engineering applications such as design, control, optimization, data assimilation, and uncertainty quantification, which require repeated model evaluation over a potentially large number of parameters or initial conditions, these simulations remain prohibitively expensive, even with state-of-the-art PDE solvers. The need to reduce the overall cost of such downstream applications has led to the development of surrogate models, which capture the core behavior of the target system at a fraction of the cost. In this context, recent advances in machine learning provide a new path for developing surrogate models, particularly when the PDEs are not known and the system is advection-dominated. In a nutshell, we seek a data-driven latent representation of the state of the system and then learn the latent-space dynamics. This allows us to compress the information and evolve it in compressed form, thereby accelerating the models. In this series of lectures, I will present recent advances on two fronts: deterministic and probabilistic modeling of latent representations. In particular, I will introduce the notions of hyper-networks, neural networks that output other neural networks, and diffusion models, a framework that allows us to represent probability distributions over trajectories directly. I will lay out the foundations of these methodologies, how they can be adapted to scientific computing, and which physical properties they need to satisfy. Finally, I will provide several examples of applications to scientific computing.
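
The compress-then-evolve idea described above can be sketched in a few lines. The encoder, decoder, and latent dynamics below are deliberately tiny placeholder networks, and the residual update rule is one plausible choice among many; none of this is the lectures' specific architecture (in particular, no hyper-network or diffusion component is shown).

```python
# Minimal encode/evolve/decode surrogate sketch. All sizes are made up,
# and the residual (Euler-like) latent update is one plausible choice.
import torch

n, d = 256, 8                                    # full state dim, latent dim
encoder = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, d))
decoder = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, n))
dynamics = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                               torch.nn.Linear(64, d))

def rollout(u0, steps):
    # Compress once, advance entirely in latent space, decode only at the end.
    z = encoder(u0)
    for _ in range(steps):
        z = z + dynamics(z)                      # z_{t+1} = z_t + F(z_t)
    return decoder(z)

u0 = torch.randn(1, n)                           # an initial PDE state snapshot
print(rollout(u0, steps=10).shape)               # (1, n): predicted final state
```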

37N30 ; 65C20 ; 65L20
