
Documents 62F99: 2 results

The class of integer-valued trawl processes has recently been introduced for modelling univariate and multivariate integer-valued time series with short or long memory.

In this talk, I will discuss recent developments with regard to model estimation, model selection and forecasting of such processes. The new methods will be illustrated in an empirical study of high-frequency financial data.

This is joint work with Mikkel Bennedsen (Aarhus University), Asger Lunde (Aarhus University) and Neil Shephard (Harvard University).
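For readers who want to experiment, here is a minimal, hypothetical Python sketch (not from the talk) of the simplest member of this class: a Poisson-based trawl process with an exponential trawl function, simulated by counting the points of a planar Poisson random measure that fall inside the moving trawl set. The parameter names (`nu`, `lam`) and the truncation of the past are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters: Poisson intensity nu and an
# exponential trawl function d(u) = exp(-lam * u), so the marginal law
# of X(t) is Poisson with mean nu / lam.
nu, lam = 10.0, 0.5
T, burn_in = 50.0, 20.0  # observation horizon and pre-period (truncates the past)

# Simulate a homogeneous Poisson random measure with intensity nu
# on the strip [-burn_in, T] x [0, 1].
n_points = rng.poisson(nu * (T + burn_in) * 1.0)
s = rng.uniform(-burn_in, T, n_points)  # time coordinates of the points
x = rng.uniform(0.0, 1.0, n_points)     # height coordinates of the points

# Trawl process: X(t) counts the points inside the moving trawl set
# A_t = {(s, x) : s <= t, 0 <= x <= exp(-lam * (t - s))}.
t_grid = np.arange(0.0, T, 0.1)
X = np.array([np.sum((s <= t) & (x <= np.exp(-lam * (t - s)))) for t in t_grid])

print(X[:10])  # an integer-valued, stationary sample path
```

With this exponential trawl the marginal law is Poisson with mean nu/lam and the autocorrelation decays exponentially, giving short memory; trawl functions with heavier tails yield the long-memory case mentioned in the abstract.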

37M10 ; 60G10 ; 60G55 ; 62F99 ; 62M10 ; 62P05

Learning with differentiable perturbed optimizers - Berthet, Quentin (Author of the talk) | CIRM


Machine learning pipelines often rely on optimization procedures to make discrete decisions (e.g. sorting, picking closest neighbors, finding shortest paths or optimal matchings). Although these discrete decisions are easily computed in a forward manner, they cannot be used to modify model parameters using first-order optimization techniques because they break the back-propagation of computational graphs. In order to expand the scope of learning problems that can be solved in an end-to-end fashion, we propose a systematic method to transform a block that outputs an optimal discrete decision into a differentiable operation. Our approach relies on stochastic perturbations of these parameters, and can be used readily within existing solvers without the need for ad hoc regularization or smoothing. These perturbed optimizers yield solutions that are differentiable and never locally constant. The amount of smoothness can be tuned via the chosen noise amplitude, whose impact we analyze. The derivatives of these perturbed solvers can be evaluated efficiently. We also show how this framework can be connected to a family of losses developed in structured prediction, and describe how these can be used in unsupervised and supervised learning, with theoretical guarantees.
We demonstrate the performance of our approach on several machine learning tasks in experiments on synthetic and real data.
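As a concrete illustration of the perturbation idea, here is a minimal NumPy sketch (a toy under stated assumptions, not the authors' code) for the simplest discrete decision, an argmax over one-hot vectors: the perturbed optimizer averages hard decisions under Gaussian noise, and a score-function estimator gives a usable Jacobian where the hard argmax has a zero or undefined one. The function names and the parameters `eps` and `n_samples` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot_argmax(theta):
    """Hard discrete decision: the one-hot vertex maximizing <y, theta>."""
    y = np.zeros_like(theta)
    y[np.argmax(theta)] = 1.0
    return y

def perturbed_argmax(theta, eps=0.5, n_samples=10_000):
    """Monte Carlo estimate of the perturbed optimizer
    y_eps(theta) = E[argmax_y <y, theta + eps * Z>],  Z ~ N(0, I),
    and of its Jacobian via the Gaussian score-function identity
    J(theta) = E[ y*(theta + eps * Z) Z^T ] / eps."""
    Z = rng.standard_normal((n_samples, theta.size))
    idx = np.argmax(theta + eps * Z, axis=1)                  # hard decisions under noise
    Y = np.eye(theta.size)[idx]                               # one-hot encode each decision
    y_eps = Y.mean(axis=0)                                    # smoothed decision
    jac = (Y[:, :, None] * Z[:, None, :]).mean(axis=0) / eps  # d y_eps / d theta
    return y_eps, jac

theta = np.array([1.0, 1.2, 0.3])
y_eps, jac = perturbed_argmax(theta)
print(y_eps)  # differentiable, never locally constant surrogate for argmax
print(jac)    # nonzero Jacobian, usable for back-propagation
```

The noise amplitude `eps` plays the smoothing role discussed in the abstract: a larger value gives a smoother, more uniform y_eps and lower-variance gradient estimates, at the cost of a decision further from the hard argmax.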

90C06 ; 68W20 ; 62F99
