
Documents 94A12 19 results

Since 2012, deep neural networks have led to outstanding results in many applications, outperforming previously existing methods on texts, images, sounds, videos, graphs... They consist of a cascade of parametrized linear and non-linear operators whose parameters are optimized to achieve a fixed task. This talk addresses four aspects of deep learning through the lens of signal processing. First, we explain image classification in the context of supervised learning. Then, we show several empirical results that give some insight into the black box of neural networks. Third, we explain how neural networks create invariant representations: in the specific case of translation, it is possible to design predefined neural networks that are stable to translation, namely the Scattering Transform. Finally, we discuss several recent statistical learning results about the generalization and approximation properties of this deep machinery.
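As a toy illustration of the translation stability mentioned above (a numpy sketch with an arbitrary filter, not the actual Scattering Transform, which cascades wavelet filter banks): convolving, taking a pointwise modulus, and then averaging produces features that change little when the input is shifted.

```python
import numpy as np

def invariant_features(x, filt, width=8):
    """Toy one-layer 'scattering-like' feature map: convolution, pointwise
    modulus, then average pooling. The averaging step makes the output
    nearly invariant to small shifts of the input."""
    y = np.abs(np.convolve(x, filt, mode="same"))   # linear filter + modulus
    n = len(y) // width * width
    return y[:n].reshape(-1, width).mean(axis=1)    # average pooling

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
filt = np.array([1.0, -1.0])                 # a crude high-pass filter
f1 = invariant_features(x, filt)
f2 = invariant_features(np.roll(x, 1), filt) # features of the shifted input
# relative difference is small compared to the feature norm
print(np.linalg.norm(f1 - f2) / np.linalg.norm(f1))
```

Without the averaging step the two feature vectors would differ by a full shift; with it, the difference is only a boundary effect within each pooling window.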

68T07 ; 94A12

Optimal vector quantization was originally introduced in signal processing as a discretization method for random signals, leading to an optimal trade-off between the speed of transmission and the quality of the transmitted signal. In machine learning, similar methods applied to a dataset are the historical core of the unsupervised classification methods known as “clustering”. In both cases it appears as an optimal way to produce a set of weighted prototypes (or codebook) which makes up a kind of skeleton of a dataset, of a signal and, more generally, from a mathematical point of view, of a probability distribution.
Quantization has encountered in recent years a renewed interest in various application fields like automatic classification, learning algorithms, optimal stopping and stochastic control, backward SDEs and, more generally, numerical probability. In all these applications, practical implementations of such clustering/quantization methods rely more or less on two procedures (and their countless variants): the Competitive Learning Vector Quantization (CLVQ), which appears as a stochastic gradient descent derived from the so-called distortion potential, and the (randomized) Lloyd's procedure (also known as the k-means algorithm, or nuées dynamiques), which is a fixed-point search procedure. Batch versions of these procedures can also be implemented when dealing with a dataset (or, more generally, a discrete distribution).
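As a concrete illustration of the batch Lloyd's procedure on a dataset, here is a minimal one-dimensional sketch in plain Python (function names and the toy data are illustrative, not from the talk): each iteration assigns every point to its nearest prototype (its Voronoi cell), then moves each prototype to the mean of its cell.

```python
import random

def lloyd(points, codebook, n_iter=20):
    """One-dimensional Lloyd / k-means iteration: alternately assign each
    point to its nearest prototype (Voronoi cell), then move each prototype
    to the mean of its cell -- a fixed-point search for the distortion
    potential."""
    codebook = list(codebook)
    for _ in range(n_iter):
        cells = [[] for _ in codebook]
        for p in points:  # nearest-prototype (Voronoi) assignment
            i = min(range(len(codebook)), key=lambda j: abs(p - codebook[j]))
            cells[i].append(p)
        for i, cell in enumerate(cells):  # centroid update
            if cell:
                codebook[i] = sum(cell) / len(cell)
    return codebook

random.seed(0)
# two well-separated 1-D clusters around 0 and 10
data = [random.gauss(0, 0.5) for _ in range(100)] + \
       [random.gauss(10, 0.5) for _ in range(100)]
print(sorted(lloyd(data, [1.0, 9.0])))  # prototypes end up near 0 and 10
```

The CLVQ counterpart would instead process one sample at a time, moving only the nearest prototype toward it with a decreasing step size, which is exactly a stochastic gradient descent on the distortion potential.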
More formally, if $\mu$ is a probability distribution on a Euclidean space $\mathbb{R}^d$, the optimal quantization problem at level $N$ boils down to exhibiting an $N$-tuple $(x_{1}^{*}, \dots, x_{N}^{*})$, solution to

$\mathrm{argmin}_{(x_1,\dots,x_N)\in(\mathbb{R}^d)^N} \int_{\mathbb{R}^d} \min_{1\le i\le N} |x_i-\xi|^2 \,\mu(d\xi)$

and its distribution, i.e. the weights $(\mu(C(x_{i}^{*})))_{1\le i\le N}$, where $(C(x_{i}^{*}))_{1\le i\le N}$ is a (Borel) partition of $\mathbb{R}^d$ satisfying

$C(x_{i}^{*})\subset \lbrace \xi\in\mathbb{R}^d : |x_{i}^{*}-\xi| \le \min_{1\le j\le N} |x_{j}^{*}-\xi| \rbrace$.

To produce an unsupervised classification (or clustering) of a (large) dataset $(\xi_k)_{1\le k\le n}$, one considers its empirical measure

$\mu=\frac{1}{n}\sum_{k=1}^{n}\delta_{\xi_k}$

whereas in numerical probability $\mu = \mathcal{L}(X)$ where $X$ is an $\mathbb{R}^d$-valued simulatable random vector. In both situations, the CLVQ and Lloyd's procedures rely on massive sampling of the distribution $\mu$.
As for clustering, the classification into $N$ clusters is produced by the partition of the dataset induced by the Voronoi cells $C(x_{i}^{*}), i = 1, \dots, N$ of the optimal quantizer.
In this second case, which is of interest for solving nonlinear problems like optimal stopping problems (variational inequalities in terms of PDEs) or stochastic control problems (HJB equations) in medium dimensions, the idea is to produce a quantization tree optimally fitting the dynamics of (a time discretization of) the underlying structure process.
We will explore (briefly) this vast panorama with a focus on the algorithmic aspects, where few theoretical results coexist with many heuristics in a burgeoning literature. We will present a few simulations in two dimensions.

62L20 ; 93E25 ; 94A12 ; 91G60 ; 65C05


Signal processing tutorial - part 2 - Oudre, Laurent (Author of the conference) | CIRM H

Virtual conference

Processing signals presents many challenges, owing to the quantity, structure, faults, and heterogeneity of sensor data recorded over time. Supporting decisions through prediction or detection based on data streams naturally calls for Machine Learning techniques (and theory!) as backup. The latter field has witnessed a tremendous development since the publication of Vladimir Vapnik's best-seller 'The Nature of Statistical Learning Theory' and the invention of Support Vector Machines, Bagging, Boosting and Random Forests between 1995 and 1999, up until the latest technological breakthroughs based on Deep Learning. However, most of its reference frameworks and methods consider vector observations, which are essentially invariant up to a permutation of the indices of the vector components. Beyond the obvious approach of featurizing (embedding) time series into vectors of characteristics (features), there are other, more subtle interactions between the two fields of SP and ML, but these first require addressing some fundamental questions such as:
- how to monitor the lack of stationarity in time-dependent data
- how to supervise such data
- what is the objective of learning (prediction goal) in this context, and more generally what can be learned with signals
- how to account for additional structure in signals
- how Signal Processing as a field may benefit from modern optimization techniques

The purpose of this course is to offer an overview on some Signal Processing problems from the angle of Machine Learning philosophy and techniques in order to develop insights on the fundamental questions formulated above. In other terms, this is not a standard course on Signal Processing and we may skip some of the very fundamental concepts that would belong to such a course.

The topics presented in this doctoral course will include:
- local stationarity
- event detection methodology
- prediction problems with signals
- representation learning
- graph signal processing
In the practical sessions, a concrete example in the context of precision medicine will be developed. In particular, the central issues of segmentation, quantification, representation will be addressed with code.

94A12

Since 2012, deep neural networks have led to outstanding results in many applications, outperforming previously existing methods on texts, images, sounds, videos, graphs... They consist of a cascade of parametrized linear and non-linear operators whose parameters are optimized to achieve a fixed task. This talk addresses four aspects of deep learning through the lens of signal processing. First, we explain image classification in the context of supervised learning. Then, we show several empirical results that give some insight into the black box of neural networks. Third, we explain how neural networks create invariant representations: in the specific case of translation, it is possible to design predefined neural networks that are stable to translation, namely the Scattering Transform. Finally, we discuss several recent statistical learning results about the generalization and approximation properties of this deep machinery.

68T07 ; 94A12

Time-frequency (or Gabor) frames are constructed from time and frequency shifts of one (or several) basic analysis window(s) and thus carry a very particular structure. On the other hand, due to their close relation to standard signal processing tools such as the short-time Fourier transform, but also local cosine bases or lapped transforms, time-frequency frames have in recent years increasingly been applied to solve problems in audio signal processing.
In this course, we will introduce the basic concepts of time-frequency frames, keeping their connection to audio applications as a guideline. We will show how standard mathematical tools such as the Walnut representation can be used to obtain convenient reconstruction methods, as well as generalizations such as the non-stationary Gabor transform. Applications such as the realization of an invertible constant-Q transform will be presented. Finally, we will introduce the basic notions of transform-domain modelling, in particular those based on sparsity and structured sparsity, and their applications to denoising, multilayer decomposition and declipping. (Slides in attachment.)
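As a minimal numpy sketch of the analysis/synthesis mechanics behind such frames (our own conventions, not the course's code): a Hann window with 50% overlap satisfies a constant-overlap-add condition, so overlap-adding the inverse FFTs of the windowed frames reconstructs the interior of the signal exactly.

```python
import numpy as np

def gabor_analysis(x, L=8, a=4):
    """STFT/Gabor analysis with a periodic Hann window of length L and
    hop a = L/2. This window satisfies sum_m w[n - m*a] = 1 on the
    interior, so plain overlap-add of the inverse FFTs inverts the
    transform there (a simple instance of frame invertibility)."""
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(L) / L)  # periodic Hann
    frames = [np.fft.fft(w * x[m:m + L])
              for m in range(0, len(x) - L + 1, a)]
    return np.array(frames), w

def gabor_synthesis(frames, w, n, a=4):
    """Overlap-add synthesis: IFFT each frame and accumulate."""
    x = np.zeros(n)
    for m, F in enumerate(frames):
        x[m * a:m * a + len(w)] += np.fft.ifft(F).real
    return x

x = np.sin(0.3 * np.arange(64))
coeffs, w = gabor_analysis(x)
x_rec = gabor_synthesis(coeffs, w, len(x))
# interior samples are reconstructed exactly; boundaries lack full overlap
print(np.max(np.abs(x[4:-4] - x_rec[4:-4])))
```

Non-stationary Gabor transforms generalize exactly this setup by letting the window and hop vary over time (or frequency), which is what makes an invertible constant-Q transform possible.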

94A12


Continuous and discrete uncertainty principles - Torrésani, Bruno (Author of the conference) | CIRM H

Multi angle

Uncertainty principles go back to the early years of quantum mechanics. Originally introduced to describe the impossibility for a function to be sharply localized in both the direct and Fourier domains, with localization measured by variance, the principle has been generalized to many other situations, including different representation spaces and different localization measures.
In this talk we first review classical results on variance uncertainty inequalities (in particular the Heisenberg, Robertson and Breitenberger inequalities). We then focus on discrete (and in particular finite-dimensional) situations, where variance has to be replaced with more suitable localization measures. Finally, we present recent results on support and entropic inequalities, describing joint localization properties of vector expansions with respect to two frames.
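For reference, the classical variance inequality mentioned above can be written as follows (with the unitary Fourier convention $\hat f(\omega)=(2\pi)^{-1/2}\int f(t)e^{-i\omega t}\,dt$ and $\|f\|_2=1$; the constant depends on this normalization):

```latex
% Heisenberg (variance) uncertainty inequality for f with ||f||_2 = 1:
\sigma_t^2 \,\sigma_\omega^2 \;\ge\; \frac{1}{4},
\qquad
\sigma_t^2 = \int_{\mathbb{R}} (t-\bar t)^2\,|f(t)|^2\,dt,
\quad
\sigma_\omega^2 = \int_{\mathbb{R}} (\omega-\bar\omega)^2\,|\hat f(\omega)|^2\,d\omega,
```

with equality exactly for translated and modulated Gaussians; it is this variance-based measure that fails to make sense in the finite-dimensional settings the talk turns to next.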

Keywords: uncertainty principle - variance of a function - Heisenberg inequality - support inequalities - entropic inequalities

94A12 ; 94A17 ; 26D20 ; 42C40

In this talk, I start by presenting the first applications and developments of wavelet methods made in Marseille in 1985 in the context of sound and music. A description of the earliest wavelet transform implementation, using the SYTER processor, is given, followed by a discussion of the first signal analysis investigations. Sound examples of the initial sound transformations obtained by altering the wavelet representation are then presented. Next, methods aiming at estimating sound synthesis parameters such as amplitude and frequency modulation laws are described. Finally, new challenges brought by these early works are presented, focusing on the relationship between low-level synthesis parameters and sound perception and cognition. An example of the use of wavelet transforms to estimate sound invariants related to the evocation of the "object" and the "action" is presented.

Keywords: sound and music - first wavelet applications - signal analysis - sound synthesis - fast wavelet algorithms - instantaneous frequency estimation - sound invariants

00A65 ; 42C40 ; 65T60 ; 94A12 ; 97M10 ; 97M80

In this talk, we briefly look at the history of wavelets, from signal processing algorithms originating in speech and image processing to harmonic analysis constructions of orthonormal bases. We review the promises, the achievements, and some of the limitations of wavelet applications, with JPEG and JPEG2000 as examples. We then take two key insights from the wavelet and signal processing experience, namely the time-frequency-scale view of the world and the sparsity property of wavelet expansions, and present two recent results. First, we show new bounds for the time-frequency spread of sequences and construct maximally compact sequences; interestingly, they differ from sampled Gaussians. Next, we review work on sampling of finite-rate-of-innovation signals, which are sparse continuous-time signals for which sampling theorems are possible. We conclude by arguing that the interface of signal processing and applied harmonic analysis has been both fruitful and fun, and try to identify lessons learned from this experience.

Keywords: wavelets – filter banks - subband coding – uncertainty principle – sampling theory – sparse sampling

94A08 ; 94A12 ; 65T60 ; 42C40


Phase retrieval in infinite dimensions - Daubechies, Ingrid (Author of the conference) | CIRM H

Post-edited

Retrieving an arbitrary signal from the magnitudes of its inner products with the elements of a frame is not possible in infinite dimensions. Under certain conditions, however, signals can be retrieved satisfactorily.
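For reference, the problem this abstract alludes to can be stated as follows (our notation): given a frame $(\varphi_i)_{i\in I}$ of a Hilbert space $\mathcal{H}$, recover $x\in\mathcal{H}$, up to a global unimodular phase, from its phaseless frame measurements:

```latex
% Phase retrieval from frame coefficients: only magnitudes are observed,
% so x can at best be recovered up to x ~ e^{i\theta} x.
\text{find } x \in \mathcal{H}
\quad\text{from}\quad
\{\, |\langle x,\varphi_i\rangle| \,\}_{i\in I}.
```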

42C15 ; 46C05 ; 94A12 ; 94A15 ; 94A20
