In this talk, I will present ColDICE [1, 2], a publicly available parallel numerical solver designed to solve the Vlasov-Poisson equations in the cold limit. The method is based on representing the phase-space sheet as a conforming, self-adaptive simplicial tessellation whose vertices follow the Lagrangian equations of motion. I will mainly focus on describing the underlying algorithm and its practical implementation, and I will show a few examples demonstrating its capabilities.
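As a toy illustration of the Lagrangian idea behind the method (not ColDICE's actual algorithm), one can sample the cold phase-space sheet by vertices (x, v) that integrate dx/dt = v, dv/dt = a(x). The sketch below uses a fixed harmonic acceleration a(x) = -x; ColDICE itself computes the acceleration self-consistently from the Poisson equation and adaptively refines the simplicial tessellation between vertices, none of which is attempted here.

```python
import numpy as np

def leapfrog(x, v, accel, dt, steps):
    """Advance sheet vertices with a kick-drift-kick leapfrog scheme."""
    for _ in range(steps):
        v = v + 0.5 * dt * accel(x)   # half kick
        x = x + dt * v                # drift
        v = v + 0.5 * dt * accel(x)   # half kick
    return x, v

# Vertices initially on the line v = 0 (a "cold" initial condition).
x0 = np.linspace(-1.0, 1.0, 101)
v0 = np.zeros_like(x0)

# Integrate for half a period of the harmonic oscillator (omega = 1):
# the sheet rotates by ~180 degrees in phase space, so x -> -x.
x1, v1 = leapfrog(x0, v0, lambda x: -x, dt=0.01, steps=int(np.pi / 0.01))
```

In the self-gravitating case the sheet winds up and multistreams, which is exactly where tracking it as a tessellation (rather than as discrete particles) pays off.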
65Mxx ; 45K05 ; 65Y05 ; 76W05 ; 85A30
Gyrokinetic simulation is considered an essential tool for studying turbulent transport driven by micro-scale instabilities in tokamak plasmas. It is roughly categorized into two approaches: the delta-$f$ local approach and the full-$f$ global approach. In the full-$f$ approach, both turbulent transport and profile evolutions are solved self-consistently under the power balance set by external heat sources and sinks. In this talk, we address (A) numerical techniques for treating the full-$f$ gyrokinetic Vlasov-Poisson equations [1] and (B) characteristics of global ion-scale turbulence and transport barriers [2]. We also discuss (C) the role of stable modes in collisionless or weakly collisional plasmas [3].
76X05 ; 65Mxx ; 76F10 ; 82D10
Recently, a lot of progress has been made in the theoretical understanding of machine learning methods, in particular deep learning. One very promising direction is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. This lecture series surveys the field and describes future challenges.
68T07 ; 65Mxx
In this course, we will consider the development and analysis of numerical methods for kinetic partial differential equations. Kinetic equations describe the time evolution of a system consisting of a large number of particles. Due to the high number of dimensions and their intrinsic physical properties, the construction of numerical methods represents a challenge and requires a careful balance between accuracy and computational complexity. In the first part, we will review the basic numerical techniques for such equations, including semi-Lagrangian methods, discrete-velocity models and spectral methods. In the second part, we give an overview of the current state of the art of numerical methods for kinetic equations. This covers the derivation of fast algorithms, the notion of asymptotic-preserving methods and the construction of hybrid schemes. Since all models implicitly embed a degree of uncertainty, due for instance to limited knowledge of the microscopic interaction details or to incomplete information on the initial state or at the boundaries, a last part will be dedicated to an overview of numerical methods for quantifying uncertainties in kinetic equations. Applications of the models and numerical methods to fields ranging from physics to biology and the social sciences will be discussed as well.
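As a minimal sketch of one technique named above, a backward semi-Lagrangian step for constant-coefficient 1D advection $f_t + c f_x = 0$ on a periodic grid traces each grid point back along its characteristic and interpolates at the departure point. Linear interpolation is used here for brevity; kinetic solvers typically use high-order (e.g. cubic spline) interpolation, applied dimension by dimension in phase space.

```python
import numpy as np

def semi_lagrangian_step(f, c, dx, dt):
    """Trace characteristics back by c*dt and interpolate linearly (periodic)."""
    n = f.size
    shift = c * dt / dx                  # displacement in grid units
    j = np.arange(n) - shift             # departure points (fractional index)
    j0 = np.floor(j).astype(int)
    w = j - j0                           # linear interpolation weight
    return (1.0 - w) * f[j0 % n] + w * f[(j0 + 1) % n]

n, L, c = 64, 1.0, 1.0
dx = L / n
x = dx * np.arange(n)
f = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)  # square pulse
dt = 0.5 * dx                # time step is not CFL-restricted for this scheme
g = f.copy()
for _ in range(2 * n):       # advect over exactly one spatial period
    g = semi_lagrangian_step(g, c, dx, dt)
```

With linear interpolation the pulse returns to its starting position but is visibly diffused; mass is conserved exactly and no new extrema are created, which is part of why interpolation order is a central design choice in these methods.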
65Zxx ; 65Mxx ; 70-XX
Operators are mappings between infinite-dimensional spaces, which arise in the context of differential equations. Learning operators is challenging due to this inherently infinite-dimensional setting. In this course, we present different architectures for learning operators from data. These include operator networks such as DeepONets, and neural operators such as Fourier Neural Operators (FNOs) and their variants. We will present theoretical results showing that these architectures can learn operators arising from PDEs, and a large number of numerical examples will be provided to illustrate them.
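The DeepONet architecture mentioned above can be sketched in a few lines: a branch net encodes the input function sampled at fixed sensor points, a trunk net encodes the query coordinate, and the operator output is their inner product. The NumPy sketch below uses random, untrained weights and is purely illustrative of the data flow; a real DeepONet trains both nets jointly on triples $(u, y, G(u)(y))$ and adds an output bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Plain tanh MLP; params is a list of (W, b) pairs."""
    for W, b in params[:-1]:
        x = np.tanh(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def init(sizes):
    """Random (out, in) weight matrices and zero biases for each layer."""
    return [(rng.standard_normal((m, n)) * 0.5, np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

m, p = 20, 8                     # number of sensors, latent width
branch = init([m, 32, p])        # encodes the sampled input function
trunk = init([1, 32, p])         # encodes the query coordinate y

def deeponet(u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>."""
    return float(mlp(branch, u_sensors) @ mlp(trunk, np.array([y])))

xs = np.linspace(0.0, 1.0, m)    # fixed sensor locations
u = np.sin(2 * np.pi * xs)       # an example input function
out = deeponet(u, 0.3)           # scalar prediction at query point y = 0.3
```

Note the key structural point this sketch makes visible: the branch input dimension is tied to the fixed sensor grid, while the trunk evaluates at arbitrary query points, which is what lets the model be queried off the training grid.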
65Mxx ; 65Nxx ; 68Txx