We start with a brief historical account of wavelets and of the way they shattered some of the preconceptions of twentieth-century statistical signal processing, which is founded on the Gaussian hypothesis. The advent of wavelets led to the emergence of the concept of sparsity and resulted in important advances in image processing, compression, and the resolution of ill-posed inverse problems, including compressed sensing. In support of this paradigm shift, we introduce an extended class of stochastic processes specified by a generic (non-Gaussian) innovation model or, equivalently, as solutions of linear stochastic differential equations driven by white Lévy noise. Starting from first principles, we prove that the solutions of such equations are either Gaussian or sparse, to the exclusion of any other behavior. Moreover, we show that these processes admit a representation in a matched wavelet basis that is "sparse" and (approximately) decoupled. The proposed model lends itself well to an analytic treatment. It also has strong predictive power, in that it justifies the type of sparsity-promoting reconstruction methods currently being deployed in the field.
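The Gaussian-versus-sparse dichotomy can be made concrete numerically. The following Python sketch (an illustration under simple assumptions, not the construction of the abstract) integrates two white noises, one Gaussian and one with rare compound-Poisson jumps, and compares how the energy of their Haar wavelet coefficients concentrates; the 2% jump rate and the top-5% energy measure are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 12

# Two driving white noises for the innovation model L x = w:
#   Gaussian increments          -> Brownian motion (the classical, non-sparse case)
#   rare compound-Poisson jumps  -> a prototypical "sparse" Levy process
gauss_incr = rng.normal(size=n)
jump_incr = rng.normal(size=n) * (rng.random(n) < 0.02)  # ~2% nonzero increments

def haar_details(x):
    """All Haar detail coefficients of a length-2^J signal, fine to coarse."""
    coeffs, s = [], x.astype(float)
    while s.size > 1:
        even, odd = s[0::2], s[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # wavelet (detail) coefficients
        s = (even + odd) / np.sqrt(2)             # approximation at the next scale
    return np.concatenate(coeffs)

for name, incr in [("Gaussian", gauss_incr), ("compound Poisson", jump_incr)]:
    path = np.cumsum(incr)            # discrete analogue of integrating the noise
    d = np.sort(np.abs(haar_details(path)))[::-1]
    top = (d[: d.size // 20] ** 2).sum() / (d ** 2).sum()
    print(f"{name:>16}: largest 5% of coefficients carry {100 * top:.1f}% of the energy")
```

The jump-driven path is piecewise constant, so most of its Haar coefficients vanish and the energy concentrates in a few large ones, whereas the Brownian path spreads its energy across all coefficients.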
Keywords: wavelets - fractals - stochastic processes - sparsity - independent component analysis - differential operators - iterative thresholding - infinitely divisible laws - Lévy processes
42C40 ; 60G20 ; 60G22 ; 60G18 ; 60H40
Consider a non-linear function $G(X_t)$, where $X_t$ is a stationary Gaussian sequence with long-range dependence. The usual reduction principle states that the partial sums of $G(X_t)$ behave asymptotically like the partial sums of the first term in the expansion of $G$ in Hermite polynomials. In the context of the wavelet estimation of the long-range dependence parameter, one replaces the partial sums of $G(X_t)$ by the wavelet scalogram, namely the partial sum of squares of the wavelet coefficients. Is there a reduction principle in the wavelet setting, that is, is the asymptotic behavior of the scalogram for $G(X_t)$ the same as that of the scalogram for the first term in the expansion of $G$ in Hermite polynomials? The answer is negative in general. This paper provides a minimal growth condition on the scales of the wavelet coefficients which ensures that the reduction principle also holds for the scalogram. The results are applied to testing the hypothesis that the long-range dependence parameter takes a specific value. Joint work with François Roueff and Murad S. Taqqu.
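The scalogram-and-regression machinery referred to above can be sketched numerically. The following Python sketch (an illustration only, not the paper's estimator or its growth condition) simulates fractional Gaussian noise by circulant embedding, applies a non-linear function of Hermite rank 2, and regresses the log-scalogram on the scale index; the Haar wavelet, the value H = 0.8, and the scale range j = 1, ..., 8 are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fgn(n, H):
    """Fractional Gaussian noise of length n via circulant embedding (exact in law)."""
    k = np.arange(n + 1)
    g = 0.5 * ((k + 1.0) ** (2 * H) - 2 * k ** (2.0 * H) + np.abs(k - 1.0) ** (2 * H))
    row = np.concatenate([g, g[-2:0:-1]])        # first row of the 2n x 2n circulant
    lam = np.fft.fft(row).real.clip(min=0.0)     # eigenvalues (nonnegative for fGn)
    z = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)
    return np.fft.fft(np.sqrt(lam / (2 * n)) * z)[:n].real

def haar_scalogram(x, jmax):
    """Mean squared Haar wavelet coefficient at scales j = 1, ..., jmax."""
    out, s = [], x.astype(float)
    for _ in range(jmax):
        m = 2 * (s.size // 2)
        even, odd = s[0:m:2], s[1:m:2]
        out.append(np.mean(((even - odd) / np.sqrt(2)) ** 2))  # scalogram at scale j
        s = (even + odd) / np.sqrt(2)                          # coarsen and repeat
    return np.array(out)

H = 0.8                                  # long-range dependent for H > 1/2
X = fgn(2 ** 16, H)
j = np.arange(1, 9)
for name, y in [("X (rank 1)", X), ("G(X) = X^2 - 1 (rank 2)", X ** 2 - 1)]:
    S = haar_scalogram(y, j.size)
    slope = np.polyfit(j, np.log2(S), 1)[0]   # regression of log2(scalogram) on scale
    print(f"{name:>24}: slope {slope:+.2f}")
# For X itself the slope should be close to 2H - 1 = 0.6; for the rank-2
# functional the memory is weaker and the slope is governed by the Hermite rank.
```

Here $G = H_2$ is itself the first term of its Hermite expansion; the point of the paper is that for general $G$ the scalogram of $G(X_t)$ need not behave like that of this first term unless the scales grow fast enough.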
Keywords: long-range dependence; long memory; self-similarity; wavelet transform; estimation; hypothesis testing
42C40 ; 60G18 ; 62M15 ; 60G20 ; 60G22