Vidéothèque | recordings found: 7


The mathematical framework of variational inequalities is a powerful tool for modelling problems arising in mechanics, such as elasto-plasticity, where the physical laws change when some state variables reach a certain threshold [1]. It is therefore not surprising that the models used in the literature for the hysteresis effect of non-linear elasto-plastic oscillators subjected to random vibrations [2] are equivalent to (finite-dimensional) stochastic variational inequalities (SVIs) [3]. This presentation concerns (a) cycle properties of an SVI modelling an elasto-perfectly-plastic oscillator excited by a white noise, together with an application to the risk of failure [4,5]; (b) a set of backward Kolmogorov equations for computing means, moments and correlations [6]; (c) free boundary value problems and HJB equations for the control of SVIs, which for engineering applications is related to the problem of critical excitation [7] and is the topic of our current CEMRACS research project; and (d), if time permits, on-going research on the modelling of a moving plate on turbulent convection [8]. This is a mixture of joint works and/or discussions with, amongst others, A. Bensoussan, L. Borsoi, C. Feau, M. Huang, M. Laurière, G. Stadler, J. Wylie, J. Zhang and J.Q. Zhong.

74H50 ; 35R60 ; 60H10 ; 60H30 ; 74C05
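
As a reading aid, here is a minimal Python sketch of the kind of SVI dynamics described above: an Euler-Maruyama scheme for an elasto-perfectly-plastic oscillator driven by white noise, in which a projection step enforces the plastic threshold. The specific formulation (velocity y, elastic deformation z confined to [-Y, Y]) and all parameter values are illustrative assumptions, not taken from the talk or the cited references.

```python
import numpy as np

# Illustrative sketch (assumed formulation): Euler-Maruyama scheme for an
# elasto-perfectly-plastic oscillator viewed as a finite-dimensional SVI.
# y = velocity, z = elastic deformation constrained to [-Y, Y];
# the projection (np.clip) realizes the variational-inequality constraint,
# and |z| = Y corresponds to the plastic phases of the hysteresis loop.
def simulate_epp_oscillator(c0=1.0, k=1.0, Y=1.0, dt=1e-3, T=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    ys, zs = np.empty(n), np.empty(n)
    y, z = 0.0, 0.0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))        # white-noise increment
        y = y - (c0 * y + k * z) * dt + dw       # velocity update
        z = float(np.clip(z + y * dt, -Y, Y))    # projected (SVI) update
        ys[i], zs[i] = y, z
    return ys, zs

ys, zs = simulate_epp_oscillator()
# Fraction of time spent in a plastic phase (z stuck at the threshold Y = 1):
print("plastic fraction:", np.mean(np.abs(zs) >= 1.0 - 1e-12))
```

The long-run statistics of such trajectories, e.g. the cycles between elastic and plastic phases, are the objects studied in points (a) and (b) of the abstract.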

We analyse how a reverting Random Number Generator can be used efficiently to save memory when solving dynamic programming equations. For SDEs, this takes the form of a forward and backward Euler scheme. Surprisingly, the error induced by time reversion is of order 1.

60H10 ; 60H15 ; 60H30 ; 65C10
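
Here is a minimal Python sketch of the memory-saving idea, under the assumption that "reverting" the generator means the noise increments can be reproduced in arbitrary order (emulated below by reseeding a generator per time step); the drift, diffusion coefficient and all parameters are illustrative, not from the talk.

```python
import numpy as np

def dW(step, base_seed=42, dt=1e-3):
    """Gaussian increment for a given step, reproducible in any order."""
    rng = np.random.default_rng([base_seed, step])
    return rng.normal(0.0, np.sqrt(dt))

def b(x):        # drift (assumed for illustration)
    return -x

sigma, dt, n = 0.5, 1e-3, 1000

# Forward Euler scheme: only the terminal value is kept, not the path.
x = 1.0
for i in range(n):
    x = x + b(x) * dt + sigma * dW(i)
x_T = x

# Backward pass: rewind the Euler scheme with the *same* increments,
# regenerated on the fly, so no trajectory needs to be stored for the
# dynamic programming sweep. Per the abstract, the error induced by this
# time reversion is of order 1 in the time step, not the usual order 1/2.
x = x_T
for i in reversed(range(n)):
    x = x - b(x) * dt - sigma * dW(i)
print("reconstructed x_0:", x)   # close to the true initial value 1.0
```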

Neural networks constitute a varied class of computational models, used in machine learning for both supervised and unsupervised learning. Several network topologies have been proposed in the literature since the preliminary work of the late 1950s, including models based on undirected probabilistic graphical models, such as (Restricted) Boltzmann Machines, and on multi-layer feed-forward computational graphs. The training of a neural network is usually performed by minimizing a cost function, such as the negative log-likelihood. During the talk we will review alternative geometries used to describe the space of the functions encoded by a neural network, parametrized by its connection weights, and the implications for the optimization of the cost function during training, from the perspective of Riemannian optimization. In the first part of the presentation, we will introduce a probabilistic interpretation of neural networks, which goes back to the work of Amari and coauthors from the 1990s and is based on the Fisher-Rao metric studied in Information Geometry. In this framework, the weights of a Boltzmann Machine, and similarly of a feed-forward neural network, are interpreted as the parameters of a (joint) statistical model for the observed, and possibly latent, variables. In the second part of the talk, we will review other approaches to the definition of alternative geometries for the space of the parameters of a neural network, motivated by invariance principles in neural networks and not explicitly based on probabilistic models. The use of alternative non-Euclidean geometries has a direct impact on the training algorithms: modelling the space of the functions associated to a neural network as a Riemannian manifold makes the gradient depend on the choice of metric tensor. We conclude the presentation by reviewing some recently proposed training algorithms for neural networks based on Riemannian optimization.

53B21 ; 65K10 ; 68T05 ; 92B20
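
As an illustration of how the choice of metric tensor changes the training update, here is a minimal Python sketch of a natural-gradient step under the Fisher-Rao metric for a categorical softmax model; the model, toy data and step size are assumptions made for the example, not material from the talk.

```python
import numpy as np

# Natural-gradient descent on a categorical softmax model: the Euclidean
# gradient of the negative log-likelihood is premultiplied by the inverse
# Fisher information matrix, i.e. the gradient w.r.t. the Fisher-Rao metric.

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def fisher(theta):
    """Fisher information of the softmax family: diag(p) - p p^T."""
    p = softmax(theta)
    return np.diag(p) - np.outer(p, p)

def nll_grad(theta, counts):
    """Euclidean gradient of the negative log-likelihood for count data."""
    p = softmax(theta)
    return counts.sum() * p - counts

theta = np.zeros(3)
counts = np.array([10.0, 30.0, 60.0])     # toy observations (assumed)
eta = 0.5                                  # step size (assumed)

for _ in range(100):
    g = nll_grad(theta, counts)
    F = fisher(theta) + 1e-8 * np.eye(3)   # damping: F is singular along 1-shifts
    theta -= eta * np.linalg.solve(F, g)   # natural-gradient update

print(softmax(theta))  # approaches the empirical frequencies [0.1, 0.3, 0.6]
```

With the Euclidean metric the update would simply be `theta -= eta * g`; replacing the metric tensor by the Fisher information changes the descent direction, which is exactly the dependence on the geometry discussed above.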

Multi angle  Geometry of quantum entanglement
Zyczkowski, Karol (Author of the talk) | CIRM (Publisher)

Multi angle  Minicourse: shape spaces and geometric statistics
Pennec, Xavier (Author of the talk) ; Trouvé, Alain (Author of the talk) | CIRM (Publisher)
