
Documents: Chaux, Caroline (24 results)

Motivated by the spectrogram (or short-time Fourier transform), the basic principles of linear algebra are explained, preparing for the more general case of Gabor frames in time-frequency analysis. The importance of the singular value decomposition and of the four fundamental subspaces associated with a matrix is pointed out; based on this, the pseudo-inverse (leading later to the dual Gabor frame) and the Loewdin (symmetric) orthogonalization are explained.
CIRM - Chaire Jean-Morlet 2014 - Aix-Marseille Université
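As a concrete companion to this abstract, here is a minimal NumPy sketch of the pseudo-inverse computed from the SVD and of the Loewdin (symmetric) orthogonalization; the matrices are random stand-ins, not actual Gabor frame matrices:

```python
import numpy as np

# A rank-deficient "analysis" matrix, standing in for a frame matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4, rank <= 3

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))  # numerical rank

# Pseudo-inverse from the SVD: A+ = V diag(1/s) U^T over the nonzero singular values.
A_pinv = (Vt[:r].T / s[:r]) @ U[:, :r].T
assert np.allclose(A_pinv, np.linalg.pinv(A))

# Loewdin (symmetric) orthogonalization of a full-rank matrix B:
# replace B by B (B^T B)^{-1/2} = U V^T, the closest matrix with orthonormal columns.
B = rng.standard_normal((6, 3))
Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
B_loewdin = Ub @ Vbt
assert np.allclose(B_loewdin.T @ B_loewdin, np.eye(3))
```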

15-XX ; 41-XX ; 42-XX ; 46-XX


Wavelets, shearlets and geometric frames - Part 1 - Grohs, Philipp (Author of the lecture) | CIRM

Multi angle

In several applications in signal processing it has proven useful to decompose a given signal in a multiscale dictionary, for instance to achieve compression by coefficient thresholding or to solve inverse problems. The most popular family of such dictionaries is undoubtedly that of wavelets, which have had a tremendous impact in applied mathematics since Daubechies' construction of orthonormal wavelet bases with compact support in the 1980s. While wavelets are now a well-established tool in numerical signal processing (for instance the JPEG2000 coding standard is based on a wavelet transform), it has been recognized in the past decades that they also possess several shortcomings, in particular with respect to the treatment of multidimensional data, where anisotropic structures such as edges in images are typically present. This deficiency of wavelets has given birth to the research area of geometric multiscale analysis, where frame constructions that are optimally adapted to anisotropic structures are sought. A milestone in this area has been the construction of curvelet and shearlet frames, which are indeed capable of optimally resolving curved singularities in multidimensional data.
In this course we will outline these developments, starting with a short introduction to wavelets and then moving on to more recent constructions of curvelets, shearlets and ridgelets. We will discuss their applicability to diverse problems in signal processing such as compression, denoising, morphological component analysis, or the solution of transport PDEs. Implementation aspects will also be covered. (Slides in attachment.)
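The coefficient-thresholding idea mentioned above can be sketched in a few lines with the Haar wavelet, the simplest orthonormal wavelet (used here in place of the Daubechies or shearlet constructions discussed in the course); the test signal and threshold are arbitrary:

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sign(np.sin(2 * np.pi * 3 * t))     # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(t.size)

# 3-level decomposition.
a, details = noisy, []
for _ in range(3):
    a, d = haar_step(a)
    details.append(d)

# Soft-threshold the detail coefficients (kills most of the noise,
# keeps the large coefficients carrying the jumps).
thr = 0.5
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]

# Reconstruct.
for d in reversed(details):
    a = haar_inverse(a, d)
denoised = a

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

For a piecewise-constant signal the Haar detail coefficients are nonzero only near the jumps, so thresholding removes noise while preserving the edges; this is the anisotropy argument that shearlets extend to curved edges in images.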

42C15 ; 42C40


L'échantillonnage (Sampling) - Chaux, Caroline (Author of the lecture) | CIRM

Multi angle

To mark the centenary of Claude Shannon's birth, the SMF, the SMAI and CIRM organized, following the SIGMA conference, an afternoon of public lectures on Claude Shannon's scientific work, information theory and its applications.

94-XX ; 00A06 ; 68Qxx


Basics in machine learning - lecture 1 - Clausel, Marianne (Author of the lecture) | CIRM

Multi angle

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to use machine learning techniques.
Class (4h)
- General introduction to machine learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
- Overview of supervised learning: true risk/empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection
- Classical machine learning models: support vector machines, kernel methods, decision trees and random forests
- An introduction to uncertainty in ML: Gaussian processes, quantile regression with random forests
Labs (4h)
- Introduction to scikit-learn
- Classical machine learning models with scikit-learn
- Uncertainty in ML
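As a hint of what the labs look like, here is a minimal scikit-learn sketch fitting two of the classical models listed above; the synthetic dataset and the hyperparameters are illustrative choices, not taken from the course material:

```python
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary classification problem.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A kernel SVM and a random forest, the two model families named in the outline.
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("SVM test accuracy:   ", svm.score(X_te, y_te))
print("Forest test accuracy:", forest.score(X_te, y_te))
```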

68-06 ; 68T05 ; 93B47


Basics in machine learning - lecture 2 - Clausel, Marianne (Author of the lecture) | CIRM

Multi angle

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to use machine learning techniques.
Class (4h)
- General introduction to machine learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
- Overview of supervised learning: true risk/empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection
- Classical machine learning models: support vector machines, kernel methods, decision trees and random forests
- An introduction to uncertainty in ML: Gaussian processes, quantile regression with random forests
Labs (4h)
- Introduction to scikit-learn
- Classical machine learning models with scikit-learn
- Uncertainty in ML

68-06 ; 68T05 ; 93B47


Basics in machine learning - practical session 1 - Clausel, Marianne (Author of the lecture) | CIRM

Multi angle

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to use machine learning techniques.
Class (4h)
- General introduction to machine learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
- Overview of supervised learning: true risk/empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection
- Classical machine learning models: support vector machines, kernel methods, decision trees and random forests
- An introduction to uncertainty in ML: Gaussian processes, quantile regression with random forests
Labs (4h)
- Introduction to scikit-learn
- Classical machine learning models with scikit-learn
- Uncertainty in ML

68-06 ; 68T05 ; 93B47


Basics in machine learning - practical session 2 - Clausel, Marianne (Author of the lecture) | CIRM

Multi angle

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to use machine learning techniques.
Class (4h)
- General introduction to machine learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
- Overview of supervised learning: true risk/empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection
- Classical machine learning models: support vector machines, kernel methods, decision trees and random forests
- An introduction to uncertainty in ML: Gaussian processes, quantile regression with random forests
Labs (4h)
- Introduction to scikit-learn
- Classical machine learning models with scikit-learn
- Uncertainty in ML

68-06 ; 68T05 ; 93B47


Reinforcement learning - lecture 1 - Lazaric, Alessandro (Author of the lecture) | CIRM

Virtual conference

Reinforcement learning (RL) studies the problem of learning how to optimally control a dynamical and stochastic environment. Unlike in supervised learning, an RL agent does not receive direct supervision on which actions to take in order to maximize the long-term reward, and it needs to learn from the samples collected through direct interaction with the environment. RL algorithms combined with deep learning tools have recently achieved impressive results in a variety of problems ranging from recommendation systems to computer games, often reaching human-competitive performance (e.g., in the game of Go). In this course, we will review the mathematical foundations of RL and the most popular algorithmic strategies. In particular, we will build on the model of Markov decision processes (MDPs) to formalize the agent-environment interaction and ground RL algorithms in popular dynamic programming algorithms, such as value and policy iteration. We will study how such algorithms can be made online and incremental, and how to integrate approximation techniques from the deep learning literature. Finally, we will discuss the exploration-exploitation dilemma in the simpler bandit scenario as well as in the full RL case. Throughout the course, we will try to identify the main current limitations of RL algorithms and the main open questions in the field.

Theoretical part
- Introduction to reinforcement learning (recent advances and current limitations)
- How to model an RL problem: Markov decision processes (MDPs)
- How to solve an MDP: dynamic programming methods (value and policy iteration)
- How to solve an MDP from direct interaction: RL algorithms (Monte-Carlo, temporal difference, SARSA, Q-learning)
- How to solve an MDP with approximation (aka deep RL): value-based (e.g., DQN) and policy gradient methods (e.g., REINFORCE, TRPO)
- How to efficiently explore an MDP: from bandits to RL

Practical part
- Simple example of value iteration and Q-learning
- More advanced example with policy gradient
- Simple bandit example for exploration
- More advanced example for exploration in RL
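The "simple example of value iteration" mentioned in the practical part can be sketched in a few lines of NumPy; the 2-state, 2-action MDP below is an arbitrary toy, not one from the course:

```python
import numpy as np

# Tabular MDP: P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P[s,a,s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break                      # fixed point reached (up to tolerance)
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy w.r.t. the optimal values
```

Because the Bellman operator is a gamma-contraction, the iterates converge geometrically to the optimal value function, which is the key fact the dynamic programming part of the course builds on.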

68T05 ; 62C05 ; 68Q87 ; 90C15 ; 93B47


Reinforcement learning - lecture 2 - Lazaric, Alessandro (Author of the lecture) | CIRM

Virtual conference

Reinforcement learning (RL) studies the problem of learning how to optimally control a dynamical and stochastic environment. Unlike in supervised learning, an RL agent does not receive direct supervision on which actions to take in order to maximize the long-term reward, and it needs to learn from the samples collected through direct interaction with the environment. RL algorithms combined with deep learning tools have recently achieved impressive results in a variety of problems ranging from recommendation systems to computer games, often reaching human-competitive performance (e.g., in the game of Go). In this course, we will review the mathematical foundations of RL and the most popular algorithmic strategies. In particular, we will build on the model of Markov decision processes (MDPs) to formalize the agent-environment interaction and ground RL algorithms in popular dynamic programming algorithms, such as value and policy iteration. We will study how such algorithms can be made online and incremental, and how to integrate approximation techniques from the deep learning literature. Finally, we will discuss the exploration-exploitation dilemma in the simpler bandit scenario as well as in the full RL case. Throughout the course, we will try to identify the main current limitations of RL algorithms and the main open questions in the field.

Theoretical part
- Introduction to reinforcement learning (recent advances and current limitations)
- How to model an RL problem: Markov decision processes (MDPs)
- How to solve an MDP: dynamic programming methods (value and policy iteration)
- How to solve an MDP from direct interaction: RL algorithms (Monte-Carlo, temporal difference, SARSA, Q-learning)
- How to solve an MDP with approximation (aka deep RL): value-based (e.g., DQN) and policy gradient methods (e.g., REINFORCE, TRPO)
- How to efficiently explore an MDP: from bandits to RL

Practical part
- Simple example of value iteration and Q-learning
- More advanced example with policy gradient
- Simple bandit example for exploration
- More advanced example for exploration in RL
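The "simple bandit example for exploration" listed above can likewise be sketched with the classical UCB1 rule; the arm means and horizon below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])   # true (unknown to the agent) Bernoulli arm means
n_arms, horizon = means.size, 5000

counts = np.zeros(n_arms)           # number of pulls per arm
sums = np.zeros(n_arms)             # cumulative reward per arm

for t in range(horizon):
    if t < n_arms:
        arm = t                     # pull each arm once to initialize
    else:
        # UCB1: empirical mean plus an optimism bonus that shrinks with pulls.
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        arm = int(np.argmax(ucb))
    reward = rng.binomial(1, means[arm])
    counts[arm] += 1
    sums[arm] += reward

most_pulled = int(np.argmax(counts))
```

The optimism bonus forces under-sampled arms to be tried occasionally, so suboptimal arms are pulled only O(log T) times; this is the exploration-exploitation trade-off the course then lifts from bandits to full MDPs.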

68T05 ; 62C05 ; 68Q87 ; 90C15 ; 93B47
