Documents 94A12 | records found: 11


Retrieving an arbitrary signal from the magnitudes of its inner products with the elements of a frame is not possible in infinite dimensions. However, under certain conditions, signals can still be retrieved satisfactorily.
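To make the finite-dimensional contrast concrete, here is a minimal numpy sketch (our illustration, not the method of the abstract; all names are ours) that recovers a vector, up to an unavoidable global phase, from the magnitudes of its inner products with a heavily redundant random frame, via Gerchberg-Saxton-type alternating projections:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 256                                  # signal dimension, frame size (8x redundant)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # unknown signal
b = np.abs(A @ x)                               # phaseless measurements |<x, a_k>|

A_pinv = np.linalg.pinv(A)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # random initialization
for _ in range(500):
    u = A @ z
    u = b * u / np.maximum(np.abs(u), 1e-12)    # keep current phases, impose known magnitudes
    z = A_pinv @ u                              # project back onto the range of A

z *= np.exp(-1j * np.angle(np.vdot(x, z)))      # quotient out the global phase ambiguity
print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```

With this much redundancy the iteration typically converges to the true signal; the point of the abstract is that no such finite-dimensional guarantee carries over to infinite dimensions in general.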

42C15 ; 46C05 ; 94A12 ; 94A15 ; 94A20

Time-frequency (or Gabor) frames are constructed from time and frequency shifts of one (or several) basic analysis windows and thus carry a very particular structure. On the other hand, owing to their close relation to standard signal processing tools such as the short-time Fourier transform, but also local cosine bases or lapped transforms, time-frequency frames have in recent years increasingly been applied to solve problems in audio signal processing.
In this course, we will introduce the basic concepts of time-frequency frames, keeping their connection to audio applications as a guideline. We will show how standard mathematical tools such as the Walnut representation can be used to obtain convenient reconstruction methods, as well as generalizations such as the non-stationary Gabor transform. Applications such as the realization of an invertible constant-Q transform will be presented. Finally, we will introduce the basic notions of transform-domain modelling, in particular those based on sparsity and structured sparsity, and their applications to denoising, multilayer decomposition and declipping. (Slides in attachment.)
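As a concrete companion to the reconstruction theme, here is a minimal numpy sketch (our own, with hypothetical function names; the course's actual tools are richer) of a painless-case Gabor frame on $\mathbb{C}^L$: analysis by FFTs of windowed, circularly shifted segments, and perfect reconstruction by overlap-add with the canonical dual window, obtained by dividing by the diagonal of the frame operator:

```python
import numpy as np

def gabor(x, g, a):
    # Painless-case Gabor analysis: hop a <= M = len(g), circular time shifts.
    L, M = len(x), len(g)
    idx = (a * np.arange(L // a)[:, None] + np.arange(M)[None, :]) % L
    return np.fft.fft(x[idx] * g, axis=1)               # c[k, m] = <x, M_m T_{ka} g>

def igabor(c, gd, a, L):
    # Synthesis by overlap-add with a window gd (perfect if gd is the dual window).
    M = len(gd)
    idx = (a * np.arange(c.shape[0])[:, None] + np.arange(M)[None, :]) % L
    x = np.zeros(L, dtype=complex)
    np.add.at(x, idx.ravel(), (np.fft.ifft(c, axis=1) * M * gd).ravel())
    return x

L, a = 1024, 128
g = np.hanning(512)                                     # analysis window, M = 512 channels
w = np.zeros(L)                                         # diagonal of the frame operator
for k in range(L // a):
    w[(np.arange(len(g)) + k * a) % L] += g ** 2
gd = g / (len(g) * w[: len(g)])                         # canonical dual window (painless case)

x = np.random.default_rng(0).standard_normal(L)
xr = igabor(gabor(x, g, a), gd, a, L)
print("reconstruction error:", np.linalg.norm(xr.real - x))
```

In the painless case the frame operator is diagonal, so the dual window is an elementwise quotient; non-stationary Gabor transforms generalize exactly this mechanism to time-varying windows and hop sizes.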

94A12

Uncertainty principles go back to the early years of quantum mechanics. Originally introduced to describe the impossibility for a function to be sharply localized in both the direct and Fourier spaces (localization being measured by variance), the uncertainty principle has since been generalized to many other situations, including different representation spaces and different localization measures.
In this talk we first review classical results on variance uncertainty inequalities (in particular the Heisenberg, Robertson and Breitenberger inequalities). We then focus on discrete (and in particular finite-dimensional) situations, where variance has to be replaced with more suitable localization measures. Finally, we present recent results on support and entropic inequalities, describing joint localization properties of vector expansions with respect to two frames.
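As a finite-dimensional taste of a support inequality (our illustrative example, not a result from the talk itself): for the standard and discrete Fourier bases of $\mathbb{C}^N$, the Donoho-Stark bound states $|\mathrm{supp}(x)| \cdot |\mathrm{supp}(\hat{x})| \ge N$, with equality for "picket fence" signals when $N$ is a perfect square:

```python
import numpy as np

N, p = 64, 8                      # N = p**2, so the picket fence achieves equality
x = np.zeros(N)
x[::p] = 1.0                      # Dirac comb with spacing p: N/p = 8 spikes
xhat = np.fft.fft(x)              # its DFT is again a comb, now with p = 8 spikes

supp = lambda v: int(np.count_nonzero(np.abs(v) > 1e-9))
print(supp(x), "*", supp(xhat), "=", supp(x) * supp(xhat), ">=", N)   # 8 * 8 = 64
```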

Keywords: uncertainty principle - variance of a function - Heisenberg inequality - support inequalities - entropic inequalities

94A12 ; 94A17 ; 26D20 ; 42C40

In this talk, I start by presenting the first applications and developments of wavelet methods carried out in Marseille in 1985 in the framework of sound and music. A description of the earliest wavelet transform implementation, using the SYTER processor, is given, followed by a discussion of the first signal analysis investigations. Sound examples of the initial sound transformations obtained by altering the wavelet representation are also presented. Methods aiming at estimating sound synthesis parameters such as amplitude and frequency modulation laws are then described. Finally, new challenges raised by these early works are presented, focusing on the relationship between low-level synthesis parameters and sound perception and cognition. An example of the use of the wavelet transform to estimate sound invariants related to the evocation of the "object" and the "action" is presented.

Keywords: sound and music - first wavelet applications - signal analysis - sound synthesis - fast wavelet algorithms - instantaneous frequency estimation - sound invariants

00A65 ; 42C40 ; 65T60 ; 94A12 ; 97M10 ; 97M80

In this talk, we will briefly look at the history of wavelets, from signal processing algorithms originating in speech and image processing to harmonic analysis constructions of orthonormal bases. We review the promises, the achievements, and some of the limitations of wavelet applications, with JPEG and JPEG2000 as examples. We then take two key insights from the wavelet and signal processing experience, namely the time-frequency-scale view of the world and the sparsity property of wavelet expansions, and present two recent results. First, we show new bounds for the time-frequency spread of sequences and construct maximally compact sequences; interestingly, they differ from sampled Gaussians. Next, we review work on sampling of finite-rate-of-innovation signals, which are sparse continuous-time signals for which sampling theorems are possible. We conclude by arguing that the interface of signal processing and applied harmonic analysis has been both fruitful and fun, and try to identify lessons learned from this experience.
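To give a flavour of finite-rate-of-innovation sampling, here is a noiseless numpy sketch of the classical annihilating-filter (Prony) step, not the talk's own code: the parameters of a stream of $K$ weighted exponentials are recovered exactly from $N \ge 2K$ uniform samples.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
K, N = 3, 16                                   # K innovations, N >= 2K samples
u = np.exp(2j * np.pi * rng.uniform(size=K))   # unknown "locations" on the unit circle
c = 1.0 + rng.random(K)                        # unknown weights
n = np.arange(N)
x = (u[None, :] ** n[:, None]) @ c             # samples x[n] = sum_k c_k u_k^n

# Annihilating filter: find h of length K+1 with sum_j h[j] x[n-j] = 0 for all n >= K.
T = toeplitz(x[K:], x[K::-1])                  # rows [x[n], x[n-1], ..., x[n-K]]
h = np.linalg.svd(T)[2][-1].conj()             # null vector of the Toeplitz system
u_est = np.roots(h)                            # the filter's roots are exactly the u_k
c_est = np.linalg.lstsq(u_est[None, :] ** n[:, None], x, rcond=None)[0]

print(np.sort(np.angle(u)))                    # true vs. recovered locations
print(np.sort(np.angle(u_est)))
```

The sparsity insight is the same as for wavelets: the signal has only $2K$ degrees of freedom, so $2K$ generic samples determine it, far below any bandlimited Nyquist count.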

Keywords: wavelets - filter banks - subband coding - uncertainty principle - sampling theory - sparse sampling

94A08 ; 94A12 ; 65T60 ; 42C40

Keywords: sparsity - morphological diversity - inpainting - cosmology - weak lensing - cosmic microwave background

65T60 ; 94A12 ; 85A35

If $f$ is the sum of a lacunary trigonometric series, it is well defined once its restriction to a small interval is given. But how can it be recovered from this restriction? This is possible through a convex-analysis procedure, namely the minimal extension in the Wiener algebra. This minimal extension is the key to compressed sensing ("échantillonnage parcimonieux") as presented by Emmanuel Candès at the 2006 ICM and in an article by Candès, Romberg and Tao from the same year; I will give an overview of variants in the methods and results that I published in 2013 in the Annales de l'Institut Fourier.
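In a finite-dimensional analogue (our sketch, with hypothetical names; the paper works in the Wiener algebra itself), the minimal extension becomes an $\ell^1$ minimization over the coefficients, i.e. a linear program: recover a sparse cosine expansion from its samples on a small interval.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, k = 128, 48, 3                         # signal length, observed samples, sparsity
d = N // 2                                   # number of cosine frequencies used
A = np.cos(2 * np.pi * np.outer(np.arange(N), np.arange(d)) / N)   # (N, d) dictionary

c_true = np.zeros(d)
c_true[rng.choice(d, size=k, replace=False)] = rng.choice([-1.0, 1.0], k)
s = A @ c_true                               # full signal; only s[:m] will be observed

# Minimal extension: minimize ||c||_1 subject to matching s on the small interval.
# LP in variables (c, t): min sum(t)  s.t.  -t <= c <= t  and  A[:m] c = s[:m].
I = np.eye(d)
res = linprog(
    c=np.r_[np.zeros(d), np.ones(d)],
    A_ub=np.block([[I, -I], [-I, -I]]),
    b_ub=np.zeros(2 * d),
    A_eq=np.c_[A[:m], np.zeros((m, d))],
    b_eq=s[:m],
    bounds=[(None, None)] * d + [(0, None)] * d,
)
print("max coefficient error:", np.abs(res.x[:d] - c_true).max())
```

With few enough active frequencies relative to the observed window, this typically recovers the coefficients exactly; the precise lacunarity and sample-size conditions are the subject of the theory.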

42A38 ; 42A55 ; 42A61 ; 65T50 ; 94A12 ; 94A20

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as clustering. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields such as automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely more or less on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
In a more formal form, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*}) \subset \lbrace \xi \in \mathbb{R}^d : |x_{i}^{*}-\xi| \le \min_{1\le j\le N} |x_{j}^{*}-\xi| \rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$, where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*})$, $i = 1, \dots, N$, of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
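As an illustration of the two procedures named above, here is a minimal numpy sketch (our own, with hypothetical names) of the batch Lloyd's fixed-point iteration and of one CLVQ stochastic-gradient pass on an empirical measure in $\mathbb{R}^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
xi = rng.standard_normal((5000, 2))          # dataset, i.e. empirical measure mu
N = 20                                       # quantization level (number of prototypes)

def lloyd(xi, N, iters=50):
    """Batch Lloyd / k-means: alternate Voronoi assignment and centroid update."""
    x = xi[rng.choice(len(xi), N, replace=False)].copy()
    for _ in range(iters):
        d2 = ((xi[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)                   # nearest-prototype (Voronoi) assignment
        for i in range(N):
            pts = xi[idx == i]
            if len(pts):
                x[i] = pts.mean(0)           # fixed-point step: centroid of the cell
    w = np.bincount(idx, minlength=N) / len(xi)   # weights mu(C(x_i))
    return x, w

def clvq(xi, N, gamma0=0.5):
    """CLVQ: stochastic gradient descent on the distortion, one sample at a time."""
    x = xi[rng.choice(len(xi), N, replace=False)].copy()
    for t, s in enumerate(rng.permutation(xi)):
        i = ((x - s) ** 2).sum(1).argmin()               # competition: pick the winner
        x[i] += gamma0 / (1 + t * gamma0) * (s - x[i])   # learning: move it toward s
    return x

x_star, w = lloyd(xi, N)
print("distortion:", ((xi[:, None, :] - x_star[None]) ** 2).sum(-1).min(1).mean())
```

Both procedures converge only to local minima of the distortion, which is why practical implementations combine them with randomized restarts and the many heuristics alluded to in the abstract.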

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

Multi angle - Minicourse: shape spaces and geometric statistics
Pennec, Xavier (Conference author); Trouvé, Alain (Conference author) | CIRM (Publisher)
