
Documents 90C25: 4 results

One of the important "products" of wavelet theory consists in the insight that it is often beneficial to consider sparsity in signal processing applications. In fact, wavelet compression relies on the fact that wavelet expansions of real-world signals and images are usually sparse. Compressive sensing builds on sparsity and tells us that sparse signals (expansions) can be recovered efficiently from incomplete linear measurements (samples). This finding triggered enormous research activity in recent years, both in signal processing applications and in their mathematical foundations. The present talk discusses connections between compressive sensing and time-frequency analysis (the sister of wavelet theory). In particular, we give an overview of recent results on compressive sensing with time-frequency structured random matrices.
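As a rough, hypothetical illustration of the sparse-recovery idea sketched above (not material from the talk), the following Python snippet draws a random Gaussian measurement matrix, takes a few linear measurements of a synthetic sparse vector, and recovers it by $\ell_1$-regularized least squares solved with iterative soft-thresholding; all dimensions, the regularization weight and the choice of solver are arbitrary and made only for this example.

# Hypothetical toy example (not from the talk): recover a sparse vector from a
# few random linear measurements via l1-regularized least squares (LASSO),
#   min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1,
# solved with ISTA (iterative soft-thresholding).
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 60, 5                 # ambient dimension, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                 # incomplete linear measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

With noiseless measurements and a small regularization weight, the reported relative error should be small, reflecting the kind of recovery guarantee alluded to in the abstract.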

Keywords: compressive sensing - time-frequency analysis - wavelets - sparsity - random matrices - $\ell_1$-minimization - radar - wireless communications

94A20 ; 94A08 ; 42C40 ; 60B20 ; 90C25

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rate of these algorithms is $O(1/\sqrt{n})$ for general convex functions and reaches $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
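To make the single-pass online setting concrete, here is a minimal Python sketch, not the tutorial's code, of stochastic gradient descent over n observations of a least-squares model, with step sizes proportional to $1/\sqrt{t}$ and Polyak-Ruppert averaging of the iterates; the problem sizes, noise level and step-size constant are hypothetical choices for illustration only.

# Hypothetical illustration (not the tutorial's code): one pass of stochastic
# gradient descent over n observations of a least-squares model, with decaying
# step sizes ~ 1/sqrt(t) and Polyak-Ruppert averaging, matching the
# O(1/sqrt(n)) regime mentioned above for general convex objectives.
import numpy as np

rng = np.random.default_rng(1)
n, p = 10000, 20                        # "large n", moderate p; sizes are made up
w_true = rng.standard_normal(p)
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(p)
w_avg = np.zeros(p)
R2 = p                                  # E[||x_t||^2] = p for these features
for t in range(n):                      # a single pass: each observation used once
    grad = (X[t] @ w - y[t]) * X[t]     # stochastic gradient of 0.5*(x_t.w - y_t)^2
    w -= grad / (R2 * np.sqrt(t + 1))   # decaying step size ~ 1/sqrt(t)
    w_avg += (w - w_avg) / (t + 1)      # running (Polyak-Ruppert) average

print("error of last iterate    :", np.linalg.norm(w - w_true))
print("error of averaged iterate:", np.linalg.norm(w_avg - w_true))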

62L20 ; 68T05 ; 90C06 ; 90C25

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rate of these algorithms is $O(1/\sqrt{n})$ for general convex functions and reaches $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
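As one concrete example of combining a batch pass with cheap online updates on finite data, the sketch below implements an SVRG-style variance-reduced gradient method in Python; this is only a representative of that family, chosen here for illustration, and is not claimed to be the algorithm presented in the tutorial. Problem sizes, the regularization parameter and the step size are made up.

# Hypothetical sketch of one batch/online hybrid (SVRG-style variance reduction)
# on an l2-regularized least-squares finite sum; an example of the general idea
# only, not the tutorial's algorithm.
import numpy as np

rng = np.random.default_rng(2)
n, p, mu = 2000, 10, 0.1
w_true = rng.standard_normal(p)
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)

def full_grad(w):
    # gradient of (1/(2n))*||X w - y||^2 + (mu/2)*||w||^2
    return X.T @ (X @ w - y) / n + mu * w

w = np.zeros(p)
step = 0.02
for epoch in range(20):                 # each epoch: one full (batch) gradient ...
    w_ref = w.copy()
    g_ref = full_grad(w_ref)
    for _ in range(n):                  # ... plus n cheap stochastic corrections
        i = rng.integers(n)
        gi = (X[i] @ w - y[i]) * X[i] + mu * w
        gi_ref = (X[i] @ w_ref - y[i]) * X[i] + mu * w_ref
        w -= step * (gi - gi_ref + g_ref)

print("gradient norm at the end:", np.linalg.norm(full_grad(w)))

For strongly convex finite sums, methods of this kind converge linearly while each inner update costs about as much as a stochastic gradient step, which is the kind of behavior described at the end of the abstract.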

62L20 ; 68T05 ; 90C06 ; 90C25

We present a list of counterexamples to conjectures in smooth convex coercive optimization. We will detail two extensions of the gradient descent method, of interest in machine learning: gradient descent with exact line search, and Bregman descent (also known as mirror descent). We show that both are non-convergent in general. These examples are based on general smooth convex interpolation results. Given a decreasing sequence of convex compact sets in the plane, whose boundaries are $C^k$ curves ($k \geq 1$, arbitrary) with positive curvature, there exists a $C^k$ convex function for which each set of the sequence is a sublevel set. The talk will provide proof arguments for this result and detail how it can be used to construct the announced counterexamples.
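To fix the definition of the first method discussed above, the short Python sketch below runs gradient descent with exact line search on a two-dimensional convex quadratic, where the exact step has a closed form; the quadratic and its conditioning are arbitrary choices, and the example converges here, whereas the talk constructs smooth convex coercive functions on which this scheme (and Bregman descent) is non-convergent.

# Minimal sketch (not from the talk): gradient descent with exact line search
# on the convex quadratic f(x) = 0.5 * x^T Q x, where the optimal step along
# the negative gradient has a closed form.
import numpy as np

Q = np.array([[10.0, 0.0],
              [0.0, 1.0]])             # ill-conditioned positive-definite quadratic
x = np.array([1.0, 10.0])

for k in range(50):
    g = Q @ x                          # gradient of 0.5 * x^T Q x
    if np.linalg.norm(g) < 1e-12:
        break
    t = (g @ g) / (g @ (Q @ g))        # exact minimizer of f(x - t g) over t
    x = x - t * g

print("final iterate:", x, " f(x) =", 0.5 * x @ Q @ x)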

52A41 ; 90C25 ; 52A10 ; 52A27
