Videothèque2 | records found: 5


Post-edited  Growth of normalizing sequences in limit theorems  Gouëzel, Sébastien (Speaker) | CIRM (Publisher)

Assume that a renormalized Birkhoff sum $S_n f/B_n$ converges in distribution to a nontrivial limit. What can one say about the sequence $B_n$? Most natural statements in the literature involve sequences $B_n$ of the form $B_n = n^\alpha L(n)$, where $L$ is slowly varying. We will discuss the possible growth rate of $B_n$ both in the probability preserving case and the conservative case. In particular, we will describe examples where $B_n$ grows superpolynomially, or where $B_{n+1}/B_n$ does not tend to $1$.
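As background (not part of the abstract): the normalization $B_n = n^\alpha L(n)$ above uses Karamata's notion of a slowly varying function, which can be stated as follows.

```latex
% Karamata's definition, as used in the normalization B_n = n^\alpha L(n):
% a measurable function L : (0,\infty) \to (0,\infty) is slowly varying if
\lim_{x \to \infty} \frac{L(\lambda x)}{L(x)} = 1
\qquad \text{for every } \lambda > 0.
% Standard examples: L(x) = \log x, \quad L(x) = \log\log x, \quad L(x) \equiv c.
```

Note that the talk's point is precisely that $B_n$ need not be of this regularly varying form: superpolynomial growth, or $B_{n+1}/B_n \not\to 1$, both rule it out.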


Post-edited  Were the foundations of measurement without theory laid in the 1920s?  Pradier, Pierre-Charles (Speaker) | CIRM (Publisher)

In his 1947 essay, Tjalling Koopmans criticized the development of an empirical science that had no theoretical basis, what he referred to as "measurement without theory". The controversy over the status of relations based on mere statistical inference has not ceased since then. Instead of looking for the contemporary consequences, however, I will inquire into its early beginnings. As early as the 1900s, Walras, Pareto and Juglar exchanged views on the status of theory and its relation to economic data. These private exchanges acquired the status of scientific controversy in the aftermath of the First World War, with the dissemination of Pareto's work. It is precisely this moment that I will try to grasp, when engineers began to read and write pure economic treatises, questioning the relation between theory and empirical problems, the nature of their project, and the expectations that the subsequent development of economics has tried to fulfill.

Cournot Centre session devoted to the transformations that took place in mathematical economics during the interwar period.


Post-edited  Inexact gradient projection and fast data driven compressed sensing: theory and application  Davies, Michael E. (Speaker) | CIRM (Publisher)

We consider the convergence of the iterative projected gradient (IPG) algorithm for arbitrary (typically nonconvex) sets, when both the gradient and projection oracles are only computed approximately. We consider different notions of approximation, and show that the Progressive Fixed Precision (PFP) and $(1+\epsilon)$-optimal oracles can achieve the same accuracy as the exact IPG algorithm. We also show that the former scheme maintains the (linear) rate of convergence of the exact algorithm, under the same embedding assumption, while the latter requires a stronger embedding condition, moderate compression ratios, and typically exhibits slower convergence. We apply our results to accelerate solving a class of data driven compressed sensing problems, where we replace iterative exhaustive searches over large datasets by fast approximate nearest neighbour search strategies based on the cover tree data structure. Finally, time permitting, we will give examples of this theory applied in practice for rapid enhanced solutions to an emerging MRI protocol called magnetic resonance fingerprinting for quantitative MRI.
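To make the abstract's setting concrete, here is a minimal sketch (not the authors' code) of iterative projected gradient with an exact versus an inexact projection oracle. The constraint set is taken to be the $k$-sparse vectors, so exact projection is hard thresholding, and the "inexact" oracle perturbs the support choice to mimic an approximate search; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, k):
    """Exact projection onto k-sparse vectors: keep the k largest-magnitude entries."""
    idx = np.argsort(np.abs(x))[-k:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def inexact_project(x, k, rng, slack=1):
    """Approximate oracle: pick k entries from the (k + slack) largest-magnitude ones."""
    idx = np.argsort(np.abs(x))[-(k + slack):]
    keep = rng.choice(idx, size=k, replace=False)
    out = np.zeros_like(x)
    out[keep] = x[keep]
    return out

def ipg(A, y, k, n_iter=200, exact=True, seed=0):
    """Iterative projected gradient for f(x) = 0.5 * ||Ax - y||^2 on k-sparse vectors."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad                  # gradient step
        x = hard_threshold(z, k) if exact else inexact_project(z, k, rng)
    return x

# Usage: recover a 3-sparse signal from 40 Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 60]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ipg(A, y, k)
print(np.linalg.norm(x_hat - x_true))  # residual; small under RIP-type conditions
```

The exact variant here is iterative hard thresholding; the talk's contribution concerns what happens to accuracy and rate when the projection (or gradient) step is only approximate, as with cover-tree-based nearest neighbour search over a large dictionary.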


Post-edited  Bayesian econometrics in the Big Data Era  Frühwirth-Schnatter, Sylvia (Speaker) | CIRM (Publisher)

Data mining methods based on finite mixture models are quite common in many areas of applied science, such as marketing, to segment data and to identify subgroups with specific features. Recent work shows that these methods are also useful in microeconometrics to analyze the behavior of workers in labor markets. Since these data are typically available as time series with discrete states, clustering kernels based on Markov chains with group-specific transition matrices are applied to capture both persistence in the individual time series and cross-sectional unobserved heterogeneity. Markov chain clustering has been applied to data from the Austrian labor market, (a) to understand the effect of labor market entry conditions on long-run career developments for male workers (Frühwirth-Schnatter et al., 2012), (b) to study mothers' long-run career patterns after first birth (Frühwirth-Schnatter et al., 2016), and (c) to study the effects of a plant closure on future career developments for male workers (Frühwirth-Schnatter et al., 2018). To capture non-stationary effects for the latter study, time-inhomogeneous Markov chains based on time-varying group-specific transition matrices are introduced as clustering kernels. For all applications, a mixture-of-experts formulation helps to understand which workers are likely to belong to a particular group. Finally, it will be shown that Markov chain clustering is also useful in a business application in marketing and helps to identify loyal consumers within a customer relationship management (CRM) program.
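The idea of clustering discrete-state time series by group-specific transition matrices can be illustrated with a deliberately simplified sketch: each individual's sequence is summarized by its empirical transition matrix, and a small k-means is run in that space. This is not the Bayesian mixture/MCMC estimator of the cited papers, only an illustration of the clustering kernel; all names and the simulated data are assumptions.

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Empirical transition matrix of one discrete-state time series."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    # rows with no observed transitions fall back to a uniform distribution
    return np.divide(T, rows, out=np.full_like(T, 1.0 / n_states), where=rows > 0)

def cluster_chains(seqs, n_states, K, n_iter=20):
    """k-means on flattened empirical transition matrices (farthest-point init)."""
    feats = np.array([transition_matrix(s, n_states).ravel() for s in seqs])
    centers = [feats[0]]
    for _ in range(K - 1):                    # deterministic greedy initialization
        d = np.min([((feats - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(feats[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels

# Usage: two groups differing in persistence ("sticky" vs "mobile" careers).
rng = np.random.default_rng(2)
def simulate(P, T=300):
    s = [0]
    for _ in range(T):
        s.append(int(rng.choice(len(P), p=P[s[-1]])))
    return s

P_sticky = np.array([[0.95, 0.05], [0.05, 0.95]])
P_mobile = np.array([[0.30, 0.70], [0.70, 0.30]])
seqs = [simulate(P_sticky) for _ in range(10)] + [simulate(P_mobile) for _ in range(10)]
labels = cluster_chains(seqs, n_states=2, K=2)
print(labels)  # first ten and last ten sequences should form two groups
```

The full approach in the abstract replaces the hard k-means assignment with a finite mixture likelihood over Markov chains, adds a mixture-of-experts layer linking covariates to group membership, and (for the plant-closure study) lets the group-specific transition matrices vary over time.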


Post-edited  Introduction to Big Data technologies and applications  Allemand, Sylvain (Speaker) | CIRM (Publisher)

Since the 2000s, computing has seen the emergence of new technologies, cloud and big data, which have transformed industry with the arrival of large-scale processing tools.
New needs have appeared, such as the ability to extract value from data using tools that meet these new technological requirements.
Distributed architectures such as Hadoop, non-relational databases, and parallelized processing with MapReduce are tools that address the massive growth of data, whether in volume, count, or type. This data explosion gave rise to the term Big Data.
We will cover the main concepts of Big Data systems and what terms such as NoSQL database, MapReduce, data lake, ETL, and ELT mean.
We will focus on two major Big Data tools: Hadoop and MongoDB.
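The MapReduce model mentioned above can be conveyed with a minimal single-process word-count sketch (map, then shuffle/group by key, then reduce). Real frameworks such as Hadoop distribute these phases across a cluster; the function names here are illustrative.

```python
from collections import defaultdict

def map_phase(docs):
    """Map: emit (word, 1) pairs from each input document."""
    for doc in docs:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, a sum gives word counts)."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data tools", "big clusters process big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

The value of the model is that the map and reduce phases are embarrassingly parallel, so the framework can shard the input and the key space across many machines while the programmer writes only these two functions.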
