This talk is devoted to solitons and wave collapses, which can be considered as two alternative scenarios in the evolution of nonlinear wave systems described by a certain class of dispersive PDEs (see, for instance, the review [1]). For the former case, it suffices that the Hamiltonian be bounded from below (or above); the soliton realizing its minimum (or maximum) is then Lyapunov stable. The extremum is approached via the radiation of small-amplitude waves, a process absent in systems with finitely many degrees of freedom. The framework of the nonlinear Schrödinger equation, the ZK equation and the three-wave system is used to show how the boundedness of the Hamiltonian H, and hence the stability of the soliton minimizing H, can be proved rigorously using the integral estimate method based on the Sobolev embedding theorems. Wave systems with Hamiltonians unbounded from below must evolve to a collapse, which can be regarded as the fall of a particle in an unbounded potential. The radiation of small-amplitude waves promotes collapse in this case.
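As an illustration of the kind of estimate meant here, the following sketch (for the focusing nonlinear Schrödinger equation only, and not necessarily the exact formulation used in the talk) shows how a Gagliardo–Nirenberg inequality, a consequence of the Sobolev embedding theorems, controls the quartic part of the Hamiltonian by the wave action and the gradient norm:

```latex
% Focusing NLS in d spatial dimensions (illustrative sketch):
%   i\,\psi_t + \Delta\psi + |\psi|^2\psi = 0
\[
  H = \int \left( |\nabla\psi|^2 - \tfrac{1}{2}\,|\psi|^4 \right) d^d x,
  \qquad
  N = \int |\psi|^2 \, d^d x .
\]
% Gagliardo--Nirenberg (Sobolev-type) estimate, valid for d < 4:
\[
  \int |\psi|^4 \, d^d x \;\le\; C_d \, N^{(4-d)/2}
  \left( \int |\nabla\psi|^2 \, d^d x \right)^{d/2} .
\]
% With X = \|\nabla\psi\|_{L^2}, the case d = 1 gives
% H >= X^2 - (C_1/2) N^{3/2} X, so H is bounded from below at fixed N and the
% soliton minimizing H is Lyapunov stable. For d = 2 the bound survives only
% below a critical N, and for d = 3 it is lost altogether, so the Hamiltonian
% is unbounded from below and collapse becomes possible.
```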
This work was supported by the Russian Science Foundation (project no. 14-22-00174).
35Q53 ; 35Q55 ; 37K10 ; 37N10 ; 76B15
In this talk, I will give an overview of recent successes (and some failures) in combining modern, high-order discretization schemes with machine learning submodels and their application to large-scale computations. The primary focus will be on supervised learning strategies, where a multivariate, non-linear function approximation of given data sets is found through a high-dimensional, non-convex optimization problem that is efficiently solved on modern GPUs. This approach can thus be employed, for example, in cases where submodels in the discretization schemes currently rely on heuristics. A prime example of this is shock detection and shock capturing for high-order methods, where essentially all known approaches require some expert user knowledge as guiding input. As an illustrative example, I will show how modern, multiscale neural network architectures originally designed for image segmentation can ameliorate this problem and provide parameter-free and grid-independent shock front detection on a subelement level. With this information, we can then inform a high-order artificial viscosity operator for inner-element shock capturing. In the second part of my talk, I will present data-driven approaches to LES modeling for implicitly filtered high-order discretizations. Whereas supervised learning of the Reynolds force tensor based on non-local data can provide highly accurate results with higher a priori correlation than any existing closures, a posteriori stability remains an issue. I will give reasons for this and introduce reinforcement learning as an alternative optimization approach. Our experiments with this method suggest that it is much better suited to account for the uncertainties introduced by the numerical scheme and its induced filter form on the modeling task. For this coupled RL-DG framework, I will present discretization-aware model approaches for the LES equations and discuss the future potential of these solver-in-the-loop optimizations.
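To make the first ingredient concrete, here is a minimal sketch (my own illustration, not the speaker's actual architecture or training setup) of a small convolutional network that maps the nodal solution values inside one DG element to a per-node shock indicator, which is then scaled into an artificial viscosity coefficient; the class names, tensor shapes, layer sizes and the mu_max parameter are all assumptions made for illustration.

```python
# Illustrative sketch only: a tiny convolutional network mapping the nodal
# values of one DG element to a per-node shock indicator, which is then
# scaled into an artificial viscosity. Shapes and parameters are assumed.
import torch
import torch.nn as nn

class ElementShockSensor(nn.Module):
    """Predicts a subelement shock indicator in [0, 1] from (p+1) x (p+1) nodal data."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (n_elements, 1, p+1, p+1) normalized nodal density values
        return torch.sigmoid(self.decode(self.encode(u)))

def artificial_viscosity(indicator: torch.Tensor, mu_max: float = 1e-2) -> torch.Tensor:
    # Scale a prescribed maximum viscosity by the predicted indicator.
    return mu_max * indicator

# Usage on dummy data: 32 elements at polynomial degree p = 5.
sensor = ElementShockSensor()
u = torch.rand(32, 1, 6, 6)
mu = artificial_viscosity(sensor(u))   # per-node viscosity field, shape (32, 1, 6, 6)
```

The multiscale segmentation-type detector and the RL-based LES closure described in the abstract are of course far more elaborate; this sketch only fixes the data flow from a learned subelement indicator to an artificial viscosity operator.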
37N10 ; 76F55 ; 76F65 ; 76M22 ; 35L67