
Documents 94A12: 19 results

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as "clustering". In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods more or less rely on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd procedure (also known as the k-means algorithm, or nuées dynamiques), which is but a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or more generally a discrete distribution).
More formally, if $\mu$ is a probability distribution on the Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace\xi\in\mathbb{R}^d : |x_{i}^{*}-\xi|\le \min_{1\le j\le N} |x_{j}^{*}-\xi|\rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$ where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*})$, $i = 1, \dots, N$, of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structural process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.
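The sketch below is not part of the abstract: it illustrates the two procedures named above, the CLVQ stochastic-gradient update and the randomized Lloyd (k-means) fixed-point iteration, on a sampled two-dimensional distribution. Function names, the step-size schedule and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def clvq(samples, N, steps=20_000):
    """Competitive Learning Vector Quantization: a stochastic gradient
    descent on the distortion potential, processing one sample per step."""
    x = samples[rng.choice(len(samples), N, replace=False)].astype(float)
    for t in range(steps):
        xi = samples[rng.integers(len(samples))]
        i = np.argmin(((x - xi) ** 2).sum(axis=1))  # competition: winning prototype
        gamma = 1.0 / (t + 100.0)                   # step-size schedule (an assumption)
        x[i] += gamma * (xi - x[i])                 # learning: move winner toward xi
    return x

def lloyd(samples, N, iters=50):
    """Randomized Lloyd / k-means: alternate Voronoi assignment and centroid
    update, i.e. a fixed-point search for a stationary quantizer."""
    x = samples[rng.choice(len(samples), N, replace=False)].astype(float)
    for _ in range(iters):
        d = ((samples[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
        cell = d.argmin(axis=1)                     # Voronoi cells C(x_i)
        for i in range(N):
            if (cell == i).any():
                x[i] = samples[cell == i].mean(axis=0)
    weights = np.bincount(cell, minlength=N) / len(samples)  # mu(C(x_i))
    return x, weights

# Toy example: quantize a 2-d Gaussian cloud at level N = 8.
samples = rng.normal(size=(5000, 2))
print(clvq(samples, 8))
print(lloyd(samples, 8))
```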

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05

Many problems in computational science require the approximation of a high-dimensional function from limited amounts of data. For instance, a common task in Uncertainty Quantification (UQ) involves building a surrogate model for a parametrized computational model. Complex physical systems involve computational models with many parameters, resulting in multivariate functions of many variables. Although the amount of data may be large, the curse of dimensionality essentially prohibits collecting or processing enough data to reconstruct such a function using classical approximation techniques. Over the last five years, spurred by its successful application in signal and image processing, compressed sensing has begun to emerge as a potential tool for surrogate model construction in UQ. In this talk, I will give an overview of the application of compressed sensing to high-dimensional approximation. I will demonstrate how the appropriate implementation of compressed sensing overcomes the curse of dimensionality (up to a log factor). This is based on weighted l1 regularizers and structured sparsity in so-called lower sets. If time permits, I will also discuss several variations and extensions relevant to UQ applications, many of which have links to the standard compressed sensing theory. These include dealing with corrupted data, the effect of model error, functions defined on irregular domains and incorporating additional information such as gradient data. I will also highlight several challenges and open problems.
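The sketch below is not from the talk: it illustrates the weighted-l1 idea in its simplest form, recovering a sparse coefficient vector from a few random measurements by iterative soft thresholding. The index-dependent weights, which mimic the heavier penalization of high-order coefficients suggested by lower-set structure, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: an s-sparse coefficient vector in R^n, m < n random measurements.
n, m, s = 200, 60, 5
z_true = np.zeros(n)
z_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ z_true

# Weights growing with the index penalize high-order coefficients more
# (an illustrative stand-in for lower-set-type weighting).
w = np.sqrt(1.0 + np.arange(n))

def weighted_ista(A, y, w, lam=1e-3, iters=5000):
    """ISTA for  min_z  0.5*||A z - y||^2 + lam * sum_j w_j |z_j|."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ z - y)              # gradient of the quadratic term
        u = z - g / L
        thr = lam * w / L
        z = np.sign(u) * np.maximum(np.abs(u) - thr, 0.0)  # weighted soft threshold
    return z

z_hat = weighted_ista(A, y, w)
print("relative error:", np.linalg.norm(z_hat - z_true) / np.linalg.norm(z_true))
```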

41A05 ; 41A10 ; 65N12 ; 65N15 ; 94A12


Signal processing for nonlinear diffractive imaging - Kamilov, Ulugbek (Conference Author) | CIRM H

Multi angle

Can modern signal processing be used to overcome the diffraction limit? The classical diffraction limit states that the resolution of a linear imaging system is fundamentally limited by one half of the wavelength of light. This implies that conventional light microscopes cannot distinguish two objects placed within a distance closer than 0.5 × 400 = 200 nm (blue) or 0.5 × 700 = 350 nm (red). This significantly impedes biomedical discovery by restricting our ability to observe biological structures and processes smaller than 100 nm. Recent progress in sparsity-driven signal processing has created a powerful paradigm for increasing both the resolution and overall quality of imaging by promoting model-based image acquisition and reconstruction. This has led to multiple influential results demonstrating super-resolution in practical imaging systems. To date, however, the vast majority of work in signal processing has neglected the fundamental nonlinearity of the object-light interaction and its potential to lead to resolution enhancement. As a result, modern theory heavily focuses on linear measurement models that are truly effective only when object-light interactions are weak. Without a solid signal processing foundation for understanding such nonlinear interactions, we undervalue their impact on information transfer in the image formation. This ultimately limits our capability to image a large class of objects, such as biological tissue, that generally occupy large volumes and interact strongly and nonlinearly with light.
The goal of this talk is to present recent progress in model-based imaging under multiple scattering. We will discuss several key applications including optical diffraction tomography, Fourier ptychography, and large-scale holographic microscopy. We will show that all these applications can benefit from models, such as the Rytov approximation and the beam propagation method, that take light scattering into account. We will discuss the integration of such models into state-of-the-art optimization algorithms such as FISTA and ADMM. Finally, we will describe the most recent work that uses learned priors for improving the quality of image reconstruction under multiple scattering.
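To make the beam propagation method mentioned above concrete, here is a minimal one-dimensional split-step sketch, not taken from the talk: it alternates a paraxial free-space diffraction step with a thin phase screen for each slice of the object. Grid sizes, the sign convention, and the random medium are illustrative assumptions.

```python
import numpy as np

def beam_propagation(u0, dz, dx, k0, dn):
    """Split-step beam propagation: alternate free-space diffraction
    (applied in the Fourier domain) with a thin phase screen per slice."""
    u = u0.astype(complex)
    kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    H = np.exp(-1j * kx ** 2 * dz / (2 * k0))    # paraxial transfer function
    for dn_slice in dn:                          # one thin slice at a time
        u = np.fft.ifft(H * np.fft.fft(u))       # diffraction step
        u = u * np.exp(1j * k0 * dn_slice * dz)  # refraction: phase screen
    return u

# Toy example: a Gaussian beam through 100 slices of a weak random medium.
x = np.linspace(-50e-6, 50e-6, 512)              # 100 um field of view
u0 = np.exp(-(x / 10e-6) ** 2)                   # input beam
k0 = 2 * np.pi / 0.5e-6                          # wavenumber at 500 nm
dn = 1e-3 * np.random.default_rng(3).normal(size=(100, x.size))
u_out = beam_propagation(u0, dz=1e-6, dx=x[1] - x[0], k0=k0, dn=dn)
print(np.abs(u_out).max())
```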

94A12 ; 94A08 ; 65T50 ; 65N21 ; 65K10 ; 62H35

We consider a general class of filtering equations, where all coefficients depend upon the observation process, and the signal and observation noises are correlated. We prove uniqueness of the measure-valued solution of the Zakai equation via a duality argument with a backward stochastic partial differential equation.
This is joint work with Dan Crisan, Imperial College, London.
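For orientation (an addition, not part of the abstract): in the simpler setting where the coefficients do not depend on the observation and the signal and observation noises are independent, the Zakai equation for the unnormalized conditional law $\rho_t$ of the signal reads, in weak form,

$$\rho_t(\varphi) = \rho_0(\varphi) + \int_0^t \rho_s(L\varphi)\,ds + \int_0^t \rho_s(h\varphi)\,dY_s,$$

where $L$ is the generator of the signal, $h$ the sensor function in $dY_t = h(X_t)\,dt + dW_t$, and $\varphi$ a test function. In the correlated, observation-dependent setting of the talk, additional cross terms appear in the stochastic integral.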

60G35 ; 93E11 ; 94A12

Time-frequency (or Gabor) frames are constructed from time and frequency shifts of one or several basic analysis windows and thus carry a very particular structure. On the other hand, due to their close relation to standard signal processing tools such as the short-time Fourier transform, but also local cosine bases or lapped transforms, time-frequency frames have in recent years increasingly been applied to solve problems in audio signal processing.
In this course, we will introduce the basic concepts of time-frequency frames, keeping their connection to audio applications as a guideline. We will show how standard mathematical tools such as the Walnut representation can be used to obtain convenient reconstruction methods, as well as generalizations such as the non-stationary Gabor transform. Applications such as the realization of an invertible constant-Q transform will be presented. Finally, we will introduce the basic notions of transform-domain modelling, in particular those based on sparsity and structured sparsity, and their applications to denoising, multilayer decomposition and declipping. (Slides in attachment.)
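As a concrete complement, not contained in the course abstract, the sketch below implements a painless-case Gabor analysis/synthesis pair: the frame operator is diagonal (a sum of shifted squared windows, as in Walnut-type representations), so dividing by that diagonal gives perfect reconstruction. Window, hop size and channel count are arbitrary choices.

```python
import numpy as np

def gabor_analysis(f, g, a, M):
    """Gabor / STFT coefficients: window g, hop size a, M channels.
    Painless case (len(g) <= M): frames are zero-padded before the FFT."""
    L = len(g)
    starts = range(0, len(f) - L + 1, a)
    return np.array([np.fft.fft(f[s:s + L] * g, n=M) for s in starts])

def gabor_synthesis(c, g, a, length):
    """Inversion with the canonical dual window: in the painless case the
    frame operator is diagonal -- a sum of shifted squared windows."""
    L = len(g)
    f, diag = np.zeros(length), np.zeros(length)
    for row, s in zip(c, range(0, length - L + 1, a)):
        frame = np.fft.ifft(row)[:L].real        # recovers f[s:s+L] * g
        f[s:s + L] += frame * g
        diag[s:s + L] += g ** 2
    return f / np.where(diag > 0, diag, 1.0)

# Round-trip test: Hann window of length 128, hop 32, 256 channels.
rng = np.random.default_rng(4)
f = rng.normal(size=1024)
g = np.hanning(128)
c = gabor_analysis(f, g, a=32, M=256)
f_rec = gabor_synthesis(c, g, a=32, length=1024)
print(np.max(np.abs(f - f_rec)[128:-128]))       # tiny except near the edges
```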

94A12


Continuous and discrete uncertainty principles - Torrésani, Bruno (Conference Author) | CIRM H

Multi angle

Uncertainty principles go back to the early years of quantum mechanics. Originally introduced to describe the impossibility for a function to be sharply localized in both the direct and Fourier domains, localization being measured by variance, the principle has been generalized to many other situations, including different representation spaces and different localization measures.
In this talk we first review classical results on variance uncertainty inequalities (in particular the Heisenberg, Robertson and Breitenberger inequalities). We then focus on discrete (and in particular finite-dimensional) situations, where variance has to be replaced with more suitable localization measures. Finally, we present recent results on support and entropic inequalities, describing joint localization properties of vector expansions with respect to two frames.

Keywords: uncertainty principle - variance of a function - Heisenberg inequality - support inequalities - entropic inequalities
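For orientation (an addition, not from the abstract), the classical Heisenberg inequality takes the following form: with the convention $\hat f(\xi)=\int_{\mathbb{R}} f(t)\,e^{-2i\pi t\xi}\,dt$ and $\|f\|_2=1$, the time and frequency variances satisfy

$$\inf_{\bar t\in\mathbb{R}}\int_{\mathbb{R}}(t-\bar t)^2|f(t)|^2\,dt \;\cdot\; \inf_{\bar\xi\in\mathbb{R}}\int_{\mathbb{R}}(\xi-\bar\xi)^2|\hat f(\xi)|^2\,d\xi \;\ge\; \frac{1}{16\pi^2},$$

with equality exactly for translated and modulated Gaussians.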

94A12 ; 94A17 ; 26D20 ; 42C40


Sound, music and wavelets in Marseille - Kronland-Martinet, Richard (Conference Author) | CIRM H

Multi angle

In this conference, I start by presenting the first applications and developments of wavelet methods made in Marseille in 1985 in the framework of sounds and music. A description of the earliest wavelet transform implementation using the SYTER processor is given, followed by a discussion related to the first signal analysis investigations. Sound examples of the initial sound transformations obtained by altering the wavelet representation are further presented. Then methods aiming at estimating sound synthesis parameters such as amplitude and frequency modulation laws are described. Finally, new challenges brought by these early works are presented, focusing on the relationship between low-level synthesis parameters and sound perception and cognition. An example of the use of wavelet transforms to estimate sound invariants related to the evocation of the "object" and the "action" is presented.

Keywords: sound and music - first wavelet applications - signal analysis - sound synthesis - fast wavelet algorithms - instantaneous frequency estimation - sound invariants

00A65 ; 42C40 ; 65T60 ; 94A12 ; 97M10 ; 97M80

In this talk, we will briefly look at the history of wavelets, from signal processing algorithms originating in speech and image processing to harmonic analysis constructions of orthonormal bases. We review the promises, the achievements, and some of the limitations of wavelet applications, with JPEG and JPEG2000 as examples. We then take two key insights from the wavelet and signal processing experience, namely the time-frequency-scale view of the world and the sparsity property of wavelet expansions, and present two recent results. First, we show new bounds for the time-frequency spread of sequences, and construct maximally compact sequences. Interestingly, they differ from sampled Gaussians. Next, we review work on sampling of finite-rate-of-innovation signals, which are sparse continuous-time signals for which sampling theorems are possible. We conclude by arguing that the interface of signal processing and applied harmonic analysis has been both fruitful and fun, and try to identify lessons learned from this experience.

Keywords: wavelets - filter banks - subband coding - uncertainty principle - sampling theory - sparse sampling
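The sketch below is an illustration added here, not material from the talk: it shows the annihilating-filter recovery at the heart of finite-rate-of-innovation sampling, reconstructing the locations and amplitudes of K Diracs from 2K+1 Fourier samples in the noiseless case. The toy setup and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Finite-rate-of-innovation toy model: K Diracs on [0, 1).
K = 4
t_true = np.sort(rng.uniform(0, 1, K))
a_true = rng.uniform(1, 2, K)

# 2K+1 Fourier samples X[m] = sum_k a_k exp(-2*pi*1j*m*t_k), m = -K..K.
ms = np.arange(-K, K + 1)
X = np.exp(-2j * np.pi * np.outer(ms, t_true)) @ a_true

# Annihilating filter h (length K+1): sum_l h[l] X[m-l] = 0 for all valid m.
# Its roots are u_k = exp(-2*pi*1j*t_k), so the locations sit in the phases.
T = np.array([[X[i + K - l] for l in range(K + 1)] for i in range(K + 1)])
h = np.linalg.svd(T)[2][-1].conj()            # null vector of the Toeplitz system
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1))

# Amplitudes from a Vandermonde least-squares fit at the found locations.
a_hat = np.linalg.lstsq(np.exp(-2j * np.pi * np.outer(ms, t_hat)), X, rcond=None)[0]

print(np.max(np.abs(t_hat - t_true)), np.max(np.abs(a_hat.real - a_true)))
```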

94A08 ; 94A12 ; 65T60 ; 42C40

If $f$ is a function given as the sum of a lacunary trigonometric series, it is well defined once its restriction to a small interval is given. But how can it be recovered from this restriction? This is possible through a convex analysis procedure, namely minimal extrapolation in the Wiener algebra. This minimal extrapolation is the key to compressed sensing as presented by Emmanuel Candès at the 2006 ICM in Zurich and in a paper by Candès, Romberg and Tao from the same year; I will give an overview of variants, in both methods and results, that I published in 2013 in the Annales de l'Institut Fourier.
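A toy illustration added here (not part of the abstract): in a real-valued, finite-dimensional simplification, minimal extrapolation amounts to minimizing the l1 norm of the coefficients (the Wiener-algebra norm) subject to matching the function on the observation interval, which can be cast as a linear program. The frequency grid, interval length and all parameters are assumptions, and recovery degrades as the interval shrinks.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)

# Real-valued simplification: f(t) = sum_k c_k cos(2*pi*k*t) with a sparse,
# lacunary set of active frequencies, observed only on a small interval.
n, s = 80, 4                                   # frequencies 0..n-1, sparsity s
support = np.sort(rng.choice(np.arange(1, n), s, replace=False))
c_true = np.zeros(n)
c_true[support] = rng.normal(size=s)

t_obs = np.linspace(0.0, 0.2, 60)              # restriction to [0, 0.2]
A = np.cos(2 * np.pi * np.outer(t_obs, np.arange(n)))
b = A @ c_true

# Minimal extrapolation: minimize ||c||_1 subject to matching f on the
# interval.  Split c = p - q with p, q >= 0 and solve the resulting LP.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
c_hat = res.x[:n] - res.x[n:]
print("max coefficient error:", np.max(np.abs(c_hat - c_true)))
```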

42A38 ; 42A55 ; 42A61 ; 65T50 ; 94A12 ; 94A20


Phase retrieval in infinite dimensions - Daubechies, Ingrid (Conference Author) | CIRM H

Post-edited

Retrieving an arbitrary signal from the magnitudes of its inner products with the elements of a frame is not possible in infinite dimensions. Under certain conditions, however, signals can be retrieved satisfactorily.
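By way of finite-dimensional contrast (an addition, not from the abstract), the sketch below runs a Gerchberg-Saxton style alternating projection that recovers a vector, up to a global phase, from the magnitudes of its inner products with an oversampled random frame. Frame size, oversampling factor and iteration count are arbitrary, and convergence from a random start is not guaranteed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy problem: recover x (up to a global phase) from the magnitudes of its
# inner products with m random frame vectors (the rows of A).
n, m = 32, 160
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
b = np.abs(A @ x_true)                          # phaseless measurements

# Alternating projections: impose the measured magnitudes, then project
# back onto the range of A via the pseudoinverse.
Apinv = np.linalg.pinv(A)
x = rng.normal(size=n) + 1j * rng.normal(size=n)    # random initialization
for _ in range(500):
    y = A @ x
    y = b * np.exp(1j * np.angle(y))            # keep phase, fix magnitude
    x = Apinv @ y

# Compare up to the unavoidable global phase ambiguity.
phase = np.vdot(x, x_true)
phase /= abs(phase)
print(np.linalg.norm(x * phase - x_true) / np.linalg.norm(x_true))
```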

42C15 ; 46C05 ; 94A12 ; 94A15 ; 94A20
