
Documents 62M10: 11 results

We prove the consistency and asymptotic normality of the Laplacian Quasi-Maximum Likelihood Estimator (QMLE) for a general class of causal time series including ARMA, AR($\infty$), GARCH, ARCH($\infty$), ARMA-GARCH, APARCH, and ARMA-APARCH processes, among others. We notably exhibit the advantages (moment order and robustness) of this estimator compared to the classical Gaussian QMLE. Numerical simulations confirm the accuracy of this estimator.
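As a hedged illustration of the comparison, the sketch below fits both the Gaussian and the Laplacian quasi-likelihood criteria to a simulated GARCH(1,1) series with heavy-tailed innovations; the Laplace criterion replaces the squared standardized residual with an absolute value, which is what buys the weaker moment requirements. The GARCH(1,1) setup, parameter values and function names are illustrative assumptions, not the paper's general causal framework, and the Laplacian criterion identifies the volatility scale only up to the innovation normalization E|η| = 1.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_garch11(n, omega, alpha, beta):
    """Simulate GARCH(1,1) with heavy-tailed (Student-t(5)) unit-variance innovations."""
    eta = rng.standard_t(df=5, size=n) / np.sqrt(5 / 3)
    x, sig2 = np.zeros(n), np.zeros(n)
    sig2[0] = omega / (1 - alpha - beta)
    for t in range(1, n):
        sig2[t] = omega + alpha * x[t - 1] ** 2 + beta * sig2[t - 1]
        x[t] = np.sqrt(sig2[t]) * eta[t]
    return x

def vol_path(params, x):
    """Reconstruct the conditional volatility path implied by the parameters."""
    omega, alpha, beta = params
    sig2 = np.empty_like(x)
    sig2[0] = np.var(x)
    for t in range(1, len(x)):
        sig2[t] = omega + alpha * x[t - 1] ** 2 + beta * sig2[t - 1]
    return np.sqrt(sig2)

def gaussian_qml(params, x):
    # Gaussian kernel: squared standardized residuals (needs higher innovation moments)
    s = vol_path(params, x)
    return np.sum(np.log(s) + 0.5 * (x / s) ** 2)

def laplacian_qml(params, x):
    # Laplace kernel: |x|/sigma instead of x^2/sigma^2, hence weaker moment needs;
    # identifies the scale under the normalization E|eta| = 1 (rescales omega, alpha)
    s = vol_path(params, x)
    return np.sum(np.log(s) + np.abs(x) / s)

x = simulate_garch11(2000, omega=0.1, alpha=0.1, beta=0.8)
bounds = [(1e-6, None), (1e-6, 0.999), (1e-6, 0.999)]
for crit in (gaussian_qml, laplacian_qml):
    fit = minimize(crit, x0=(0.05, 0.05, 0.5), args=(x,), bounds=bounds)
    print(crit.__name__, np.round(fit.x, 3))
```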

62F12 ; 62M10

This talk develops a new test for local white noise, which also doubles as a test for the lack of aliasing in a locally stationary wavelet process. We compare and contrast our new test with the aliasing test for stationary time series due to Hinich and co-authors. We show that the test is robust to a mismatch between the analysis and synthesis wavelets. We demonstrate the effectiveness of the test on some simulated examples and on an example from wind energy.
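The test statistic itself is not reproduced here, but a crude diagnostic in the same spirit can be sketched: under local white noise the raw wavelet periodogram (squared non-decimated wavelet coefficients) should be roughly flat over time at every scale, so large ratios between block energies flag local structure. Everything below (the db4 wavelet, the block count, the ratio diagnostic) is an illustrative stand-in for the talk's actual test.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
x = rng.normal(size=1024)                  # H0: white noise
# x[512:] *= 2.0                           # uncomment for a locally non-stationary alternative

level = 4
coeffs = pywt.swt(x, 'db4', level=level)   # non-decimated (stationary) wavelet transform
for j, (cA, cD) in enumerate(coeffs):
    # split each scale's detail coefficients into time blocks and compare energies;
    # under local white noise the max/min block-energy ratio should be near 1
    blocks = np.array_split(cD, 8)
    energies = np.array([np.mean(b ** 2) for b in blocks])
    print(f"scale {j}: max/min block energy = {energies.max() / energies.min():.2f}")
```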

42C40 ; 60G10 ; 62M10 ; 62M15

In this talk we introduce a class of statistics for spatial data observed on an irregular set of locations. Our aim is to obtain a unified framework for inference, and the statistics we consider include both parametric and nonparametric estimators of the spatial covariance function, Whittle likelihood estimation, goodness-of-fit tests and a test for second-order spatial stationarity. To ensure that the statistics are computationally feasible, they are defined within the Fourier domain, and in most cases can be expressed as a quadratic form of a discrete Fourier-type transform of the spatial data. Evaluating such a statistic is computationally tractable, requiring $O(nb)$ operations, where $b$ is the number of Fourier frequencies used in the definition of the statistic (which varies according to the application) and $n$ is the sample size. The asymptotic sampling properties of the statistics are derived using mixed spatial asymptotics, where the number of locations grows at a faster rate than the size of the spatial domain, under the assumption that the spatial random field is stationary and the irregularly placed locations are independent, identically distributed random variables. We show that there are quite intriguing differences in the behaviour of the statistic depending on whether the spatial process is Gaussian or non-Gaussian. In particular, the choice of the number of frequencies $b$ in the construction of the statistic depends on whether the spatial process is Gaussian or not. If time permits, we describe how the results can also be used in variance estimation, and present some simulations and real data.
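To make the Fourier-domain construction concrete, here is a minimal sketch of the irregular-sampling discrete Fourier transform $J_n(\omega) = n^{-1/2}\sum_j Z(s_j)e^{i\omega^\top s_j}$ and a simple quadratic-form statistic built from it, at $O(nb)$ cost. The toy field, the frequency grid and the averaged-periodogram statistic are illustrative assumptions, not the talk's actual estimators.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 500, 5.0                       # n observations on the square [0, lam]^2
s = rng.uniform(0, lam, size=(n, 2))    # iid uniform sampling locations
# toy stationary field: a random cosine surface plus nugget noise, mean-corrected
z = np.cos(s @ rng.normal(size=2)) + 0.3 * rng.normal(size=n)
z -= z.mean()

# frequency grid 2*pi*k/lam, k = -K..K in each coordinate, so b = (2K+1)^2 frequencies
K = 5
k = np.arange(-K, K + 1)
omegas = np.array([(2 * np.pi * k1 / lam, 2 * np.pi * k2 / lam)
                   for k1 in k for k2 in k])

# irregular-sampling DFT J(omega) = n^{-1/2} sum_j z_j exp(i omega . s_j): O(n*b) work
J = (np.exp(1j * omegas @ s.T) * z[None, :]).sum(axis=1) / np.sqrt(n)

# a quadratic-form statistic: the periodogram |J|^2 averaged over the b frequencies
stat = np.mean(np.abs(J) ** 2)
print(f"b = {len(omegas)} frequencies, statistic = {stat:.3f}")
```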

62M10 ; 62M30 ; 62F12 ; 62G05


Bayesian econometrics in the Big Data Era - Frühwirth-Schnatter, Sylvia (Speaker) | CIRM H

Post-edited

Data mining methods based on finite mixture models are quite common in many areas of applied science, such as marketing, to segment data and to identify subgroups with specific features. Recent work shows that these methods are also useful in microeconometrics to analyze the behavior of workers in labor markets. Since these data are typically available as time series with discrete states, clustering kernels based on Markov chains with group-specific transition matrices are applied to capture both persistence in the individual time series and cross-sectional unobserved heterogeneity. Markov chain clustering has been applied to data from the Austrian labor market, (a) to understand the effect of labor market entry conditions on long-run career developments for male workers (Frühwirth-Schnatter et al., 2012), (b) to study mothers' long-run career patterns after first birth (Frühwirth-Schnatter et al., 2016), and (c) to study the effects of a plant closure on future career developments for male workers (Frühwirth-Schnatter et al., 2018). To capture non-stationary effects for the latter study, time-inhomogeneous Markov chains based on time-varying group-specific transition matrices are introduced as clustering kernels. For all applications, a mixture-of-experts formulation helps to understand which workers are likely to belong to a particular group. Finally, it will be shown that Markov chain clustering is also useful in a business application in marketing and helps to identify loyal consumers within a customer relationship management (CRM) program.
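A minimal sketch of Markov chain clustering for discrete-state sequences, assuming a plain finite mixture with group-specific transition matrices fitted by EM; the papers cited above use Bayesian estimation with mixture-of-experts priors, so the two-group setup and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
S, G = 3, 2                                     # states, groups

def transition_counts(seq, S):
    """S x S matrix of observed transitions in one sequence."""
    C = np.zeros((S, S))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C

def em_markov_mixture(seqs, S, G, iters=50):
    counts = np.array([transition_counts(s, S) for s in seqs])
    P = rng.dirichlet(np.ones(S), size=(G, S))  # group-specific transition matrices
    w = np.full(G, 1.0 / G)                     # mixing weights
    for _ in range(iters):
        # E-step: responsibility of group g for sequence i (log-domain for stability)
        logp = np.log(w)[None, :] + np.einsum('iab,gab->ig', counts, np.log(P))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp); r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted transition counts -> row-normalized transition matrices
        wc = np.einsum('ig,iab->gab', r, counts) + 1e-6
        P = wc / wc.sum(axis=2, keepdims=True)
        w = r.mean(axis=0)
    return w, P, r

# two ground-truth groups: "persistent" vs "mobile" workers
P_true = [np.array([[.9, .05, .05], [.05, .9, .05], [.05, .05, .9]]),
          np.array([[.4, .3, .3], [.3, .4, .3], [.3, .3, .4]])]

def sim(P, T=100):
    s = [0]
    for _ in range(T - 1):
        s.append(rng.choice(S, p=P[s[-1]]))
    return s

seqs = [sim(P_true[i % 2]) for i in range(200)]
w, P, r = em_markov_mixture(seqs, S, G)
print("estimated mixing weights:", np.round(w, 2))
```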

62C10 ; 62M05 ; 62M10 ; 62H30 ; 62P20 ; 62F15

Arctic sea-ice extent has been of considerable interest to scientists in recent years, mainly due to its decreasing trend over the past 20 years. In this talk, I propose a hierarchical spatio-temporal generalized linear model (GLM) for binary Arctic sea-ice data, where data dependencies are introduced through a latent, dynamic, spatio-temporal mixed-effects model. By using a fixed number of spatial basis functions, the resulting model achieves both dimension reduction and non-stationarity for spatial fields at different time points. An EM algorithm is used to estimate model parameters, and an MCMC algorithm is developed to obtain the predictive distribution of the latent spatio-temporal process. The methodology is applied to spatial, binary, Arctic sea-ice data for each September over the past 20 years, and several posterior summaries are computed to detect changes in Arctic sea-ice cover. The fully Bayesian version is under development and will be discussed.
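To illustrate just the basis-function dimension-reduction step for a single time point: a logistic GLM for binary data in which the spatial effect is expanded in a small number of fixed Gaussian radial basis functions, so 900 sites are summarized by 16 coefficients. The full hierarchical dynamic model with EM and MCMC is not reproduced; the basis choice and all numbers are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# grid of observation sites and r = 16 fixed Gaussian radial basis functions
sites = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(0, 1, 30),
                                                        np.linspace(0, 1, 30))])
centers = rng.uniform(0, 1, size=(16, 2))
d2 = ((sites[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * 0.15 ** 2))             # n x r basis matrix

# binary "ice / no ice" data generated from a smooth latent field
alpha_true = rng.normal(size=16)
p = 1 / (1 + np.exp(-Phi @ alpha_true))
y = rng.binomial(1, p)

# GLM fit: 900 sites reduced to 16 basis coefficients (large C = weak regularization)
fit = LogisticRegression(C=1e6, max_iter=1000).fit(Phi, y)
print("recovered coefficients (first 4):", np.round(fit.coef_[0][:4], 2))
print("true coefficients      (first 4):", np.round(alpha_true[:4], 2))
```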

62M30 ; 62M10 ; 62M15


Bayesian capture-recapture in social justice research - Corliss, David (Speaker) | CIRM H

Multi angle

Capture-Recapture (RC) methodology provides a way to estimate the size of a population from multiple, independent samples. While the method was developed more than a century ago to count animal populations, it has only recently become important in Data For Social Good. The large number of samples with varying amounts of intersection, collected over a period of time, so often found in Data For Social Good projects, can greatly complicate conventional RC methodology. These conditions are ideal, however, for Bayesian capture-recapture. This presentation describes the use of Bayesian capture-recapture to estimate populations in Data For Social Good. Examples illustrating this method include new work by the author in estimating numbers of human trafficking victims and in estimating the size of hate groups from the analysis of hate speech in social media.
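A minimal two-list example of Bayesian capture-recapture, assuming independent samples, a hypergeometric likelihood for the overlap and a flat truncated prior on the population size N; the counts are made up. Real Data For Social Good settings involve many dependent lists collected over time, which is where the Bayesian machinery described in the talk becomes essential.

```python
import numpy as np
from scipy.special import gammaln

# two independent "captures": n1 on list A, n2 on list B, m on both (illustrative counts)
n1, n2, m = 120, 150, 30

def log_lik(N):
    """Hypergeometric likelihood of overlap m: C(n1,m) C(N-n1,n2-m) / C(N,n2)."""
    return (gammaln(n1 + 1) - gammaln(m + 1) - gammaln(n1 - m + 1)
            + gammaln(N - n1 + 1) - gammaln(n2 - m + 1) - gammaln(N - n1 - n2 + m + 1)
            - (gammaln(N + 1) - gammaln(n2 + 1) - gammaln(N - n2 + 1)))

Ns = np.arange(n1 + n2 - m, 5000)          # support: N >= number of distinct individuals seen
ll = log_lik(Ns.astype(float))
post = np.exp(ll - ll.max())               # flat prior -> posterior proportional to likelihood
post /= post.sum()
mean = (Ns * post).sum()
cdf = post.cumsum()
lo, hi = Ns[cdf.searchsorted(0.025)], Ns[cdf.searchsorted(0.975)]
print(f"posterior mean N = {mean:.0f}, 95% credible interval [{lo}, {hi}]")
# classical Lincoln-Petersen point estimate for comparison: n1*n2/m = 600
```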

62P25 ; 62F15 ; 62M10

In this paper we study asymptotic properties of random forests within the framework of nonlinear time series modeling. While random forests have been successfully applied in various fields, their use in a time series setting has lacked theoretical justification. Under mild conditions, we prove a uniform concentration inequality for regression trees built on nonlinear autoregressive processes and, subsequently, use this result to prove consistency for a large class of random forests. The results are supported by various simulations. (This is joint work with Mikkel Slot Nielsen.)
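The object the theory covers can be sketched directly: build a lagged design matrix from the series and fit an off-the-shelf random forest regression of $X_t$ on $(X_{t-1},\dots,X_{t-p})$. The toy nonlinear AR(1) below only illustrates the setup, not the paper's concentration or consistency results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# toy nonlinear autoregression: X_t = 0.8*tanh(2*X_{t-1}) + noise
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * np.tanh(2 * x[t - 1]) + 0.3 * rng.normal()

# embed the series: predict X_t from the previous p observations
p = 3
X = np.column_stack([x[p - k - 1: n - k - 1] for k in range(p)])  # lags 1..p
y = x[p:]

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:-200], y[:-200])
pred = rf.predict(X[-200:])
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y[-200:]) ** 2)))
```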

62G10 ; 60G10 ; 60J05 ; 62M05 ; 62M10

In time series analysis there is an apparent dichotomy between time and frequency domain methods. The aim of this paper is to draw connections between frequency and time domain methods. Our focus will be on reconciling the Gaussian likelihood and the Whittle likelihood. We derive an exact, interpretable bound between the Gaussian and Whittle likelihood of a second-order stationary time series. The derivation is based on obtaining the transformation which is biorthogonal to the discrete Fourier transform of the time series. Such a transformation yields a new decomposition for the inverse of a Toeplitz matrix and enables the representation of the Gaussian likelihood within the frequency domain. We show that the difference between the Gaussian and Whittle likelihood is due to the omission of the best linear predictions outside the domain of observation in the periodogram associated with the Whittle likelihood. Based on this result, we obtain an approximation for the difference between the Gaussian and Whittle likelihoods in terms of the best fitting, finite order autoregressive parameters. These approximations are used to define two new frequency domain quasi-likelihood criteria. We show that these new criteria yield a better approximation of the spectral divergence criterion, as compared to both the Gaussian and Whittle likelihoods. In simulations, we show that the proposed estimators have satisfactory finite sample properties.
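For concreteness, a sketch comparing the two criteria on a simulated AR(1): the exact Gaussian negative log-likelihood via the Toeplitz autocovariance matrix, and the Whittle negative log-likelihood via the periodogram and the AR(1) spectral density. The bound and the new biorthogonal-transform criteria from the paper are not reproduced; the model and the grid search are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(6)
n, phi_true = 256, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

def gaussian_nll(phi, x):
    """Exact Gaussian likelihood via the Toeplitz autocovariance of AR(1)."""
    m = len(x)
    acf = phi ** np.arange(m) / (1 - phi ** 2)
    Sigma = toeplitz(acf)
    sign, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + x @ np.linalg.solve(Sigma, x))

def whittle_nll(phi, x):
    """Whittle likelihood: periodogram vs AR(1) spectral density at Fourier frequencies."""
    m = len(x)
    I = np.abs(np.fft.rfft(x)) ** 2 / m          # 2*pi times the standard periodogram
    w = 2 * np.pi * np.arange(len(I)) / m
    f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w)) ** 2)  # spectral density
    return np.sum(np.log(f[1:]) + I[1:] / (2 * np.pi * f[1:]))      # skip frequency 0

grid = np.linspace(0.3, 0.95, 66)
est_g = grid[np.argmin([gaussian_nll(p, x) for p in grid])]
est_w = grid[np.argmin([whittle_nll(p, x) for p in grid])]
print(f"Gaussian-likelihood estimate of phi: {est_g:.3f}")
print(f"Whittle-likelihood estimate of phi:  {est_w:.3f}")
```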

62M10 ; 62M15 ; 62F10

The class of integer-valued trawl processes has recently been introduced for modelling univariate and multivariate integer-valued time series with short or long memory.

In this talk, I will discuss recent developments with regard to model estimation, model selection and forecasting of such processes. The new methods will be illustrated in an empirical study of high-frequency financial data.

This is joint work with Mikkel Bennedsen (Aarhus University), Asger Lunde (Aarhus University) and Neil Shephard (Harvard University).
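A simulation sketch of the simplest member of the class: a Poisson trawl process with exponential trawl function, obtained by scattering Poisson points in the (time, height) half-plane and counting those inside the moving trawl set $A_t = \{(s,u): s \le t,\ 0 \le u \le e^{-\lambda(t-s)}\}$, which gives $E[X_t] = \nu/\lambda$ and short memory. All parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
nu, lam, T = 10.0, 0.5, 100.0      # Poisson intensity, trawl decay rate, time horizon

# Poisson points of intensity nu in the strip [-B, T] x [0, 1];
# B pushes the truncation of the infinite past far enough to be negligible
B = 20 / lam
npts = rng.poisson(nu * (T + B) * 1.0)
s = rng.uniform(-B, T, size=npts)  # point times
u = rng.uniform(0, 1, size=npts)   # point heights

t_grid = np.arange(0, T, 1.0)
# X_t counts the points inside the trawl set A_t (vectorized over the grid)
inside = ((s[None, :] <= t_grid[:, None])
          & (u[None, :] <= np.exp(-lam * (t_grid[:, None] - s[None, :]))))
X = inside.sum(axis=1)
print("theoretical mean nu/lam =", nu / lam, "| sample mean =", X.mean())
print("first values of the integer-valued series:", X[:10])
```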

37M10 ; 60G10 ; 60G55 ; 62F99 ; 62M10 ; 62P05

It is generally admitted that financial time series have heavy-tailed marginal distributions. When time series models are fitted to such data, the non-existence of appropriate moments may invalidate standard statistical tools used for inference. Moreover, the existence of moments can be crucial for risk management. This talk considers testing the existence of moments in the framework of standard and augmented GARCH models. In the case of standard GARCH, even-moment conditions involve moments of the independent innovation process. We propose tests for the existence of moments of the returns process that are based on the joint asymptotic distribution of the estimator of the volatility parameters and empirical moments of the residuals. To achieve efficiency gains we consider non-Gaussian QML estimators founded on reparametrizations of the GARCH model, and we discuss optimality issues. We also consider augmented GARCH processes, for which moment conditions are less explicit. We establish the asymptotic distribution of the empirical moment generating function (MGF) of the model, defined as the MGF of the random autoregressive coefficient in the volatility dynamics, from which a test is deduced. An alternative test is based on the estimation of the maximal exponent characterizing the existence of moments. Our results will be illustrated with Monte Carlo experiments and real financial data.
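The even-moment condition for a standard GARCH(1,1) makes the object under test concrete: the 2m-th moment of the returns exists if and only if $E[(\alpha\eta^2+\beta)^m] < 1$, where $\eta$ is the innovation. The sketch below evaluates this expectation by Monte Carlo from hypothetical fitted parameters and standardized residuals; the talk's actual tests rest on the joint asymptotic distribution of the estimators, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# hypothetical fitted GARCH(1,1) parameters and standardized residuals
alpha, beta = 0.12, 0.85
eta = rng.standard_t(df=7, size=5000)
eta /= eta.std()                   # unit-variance standardized residuals

# E[(alpha*eta^2 + beta)^m] < 1  <=>  the 2m-th moment of the returns exists
for m in (1, 2, 3, 4):
    val = np.mean((alpha * eta ** 2 + beta) ** m)
    verdict = "exists" if val < 1 else "does not exist (point estimate)"
    print(f"m = {m}: E[(a*eta^2+b)^m] = {val:.3f} -> moment of order {2 * m} {verdict}")
```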

37M10 ; 62M10 ; 62P20
