Video library | records found: 35


Andreï Kolmogorov (1903-1987) was a Russian mathematician who made striking contributions to probability theory, ergodic theory, turbulence, classical mechanics, mathematical logic, topology, algorithmic information theory and the analysis of the complexity of algorithms. Alexander Bufetov, CNRS Research Director (I2M - Aix-Marseille Université, CNRS, Centrale Marseille) and local holder of the Jean-Morlet Chair (Tamara Grava Chair 2019 - semester 1), will give a talk on the outstanding contributions and dramatic life of a great genius of the 20th century.

00A06 ; 00A09 ; 01Axx ; 01A60


Owing to the interaction between modes, difficulties arise in deriving amplitude equations when non-normality and nonlinearity are both present in the original system. For example, if amplitude equations are derived via weakly nonlinear analysis, then approximating via the critical mode only (the least stable eigenvalue) does not work at higher orders, where the mixing of the modes needs to be taken into consideration. However, using a different homogenisation technique, namely the stochastic singular perturbation theory of authors such as Papanicolaou and Blömker et al., where noise is applied to the stable modes only, the linear operator in question is no longer non-self-adjoint. The difficulty of the problem then shifts to showing that we can use a rigged Hilbert space construction: if the original problem is posed in a Hilbert space H, we force the main operator of our problem to be Hilbert-Schmidt by choosing our noise in a dense subspace S of H. We demonstrate this on the complex Ginzburg-Landau equation with cubic nonlinearity.
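For concreteness, one standard normalization of the cubic complex Ginzburg-Landau equation mentioned above reads (signs and scalings vary in the literature; this representative form is supplied for this summary and is not necessarily the one used in the talk):

$\partial_{t}u=(1+i\alpha)\Delta u+Ru-(1+i\beta)|u|^{2}u$,

where $u$ is complex-valued, $\alpha,\beta\in\mathbb{R}$ are dispersion parameters and $R$ is the bifurcation parameter; the amplitude equation describes the slow modulation of the dominant modes near criticality.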

76E09


A subgroup of a group is confined if the closure of its conjugacy class in the Chabauty space does not contain the trivial subgroup. Such subgroups arise naturally as stabilisers of non-free actions on compact spaces. I will explain a result establishing a relation between the confined subgroups of a group and its highly transitive actions. We will see how this result allows us to understand the highly transitive actions of a class of groups of dynamical origin. This is joint work with Adrien Le Boudec.

20B22 ; 37B05 ; 22F05


Multi angle  Interview at CIRM: Fabien Durand & Samuel Petite
Durand, Fabien (Interviewee) ; Petite, Samuel (Interviewee) | CIRM (Publisher)

This conference will gather researchers working on different topics such as combinatorics, computer science, probability, geometry, physics and quasicrystallography, but sharing a common interest: dynamical systems, and more precisely subshifts, tilings and group actions. It will focus on algebraic and dynamical invariants such as group automorphisms, growth of symbolic complexity, Rauzy graphs, dimension groups, cohomology groups, full groups, dynamical spectrum, amenability and proximal pairs. With this conference we aim to spread these invariants beyond their original domains and to deepen their connections with combinatorial and dynamical properties.

Multi angle  Teasing poster: mathematics, signal processing and learning
Antonsanti, Pierre-Louis (Speaker) ; Belotto Da Silva, André (Speaker) ; Cano, Cyril (Speaker) ; Cohen, Jeremy (Speaker) ; Doz, Cyprien (Speaker) ; Lazzaretti, Marta (Speaker) ; Pilavci, Yusuf Yigit (Speaker) ; Rodriguez, Willy (Speaker) ; Stergiopoulou, Vasiliki (Speaker) ; Kaloga, Yacouba (Speaker) ; Safaa, Al-Ali (Speaker) | CIRM (Publisher)


Multi angle  Basics in machine learning - lecture 1
Clausel, Marianne (Speaker) | CIRM (Publisher)

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to start using machine learning techniques.
Class (4h)
General Introduction to Machine Learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
Overview of Supervised Learning: True risk/Empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection.
Classical machine learning models: Support Vector Machines, Kernel Methods, Decision trees and Random Forests.
An introduction to uncertainty in ML: Gaussian Processes, Quantile Regression with RF
Labs (4h)
Introduction to scikit-learn
Classical Machine learning Models with scikit-learn
Uncertainty in ML
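A minimal sketch of the kind of scikit-learn workflow covered in the labs (the dataset, model and hyperparameters below are illustrative choices made for this summary, not taken from the course material):

# Fit an RBF-kernel Support Vector Machine on a toy dataset and
# estimate generalization on a held-out test set.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)  # toy data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # C controls regularization
clf.fit(X_train, y_train)

print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy:", clf.score(X_test, y_test))

Comparing train and test accuracy gives a first feel for the overfitting/underfitting trade-off mentioned in the class outline.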

68-06 ; 68T05 ; 93B47


Multi angle  Basics in machine learning - lecture 2
Clausel, Marianne (Speaker) | CIRM (Publisher)

This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to start using machine learning techniques.
Class (4h)
General Introduction to Machine Learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
Overview of Supervised Learning: True risk/Empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection.
Classical machine learning models: Support Vector Machines, Kernel Methods, Decision trees and Random Forests
An introduction to uncertainty in ML: Gaussian Processes, Quantile Regression with RF
Labs (4h)
Introduction to scikit-learn
Classical Machine learning Models with scikit-learn
Uncertainty in ML
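A companion sketch for the tree-based models listed above (again illustrative, assuming nothing about the actual lab notebooks):

# Compare a single decision tree with a random forest using 5-fold
# cross-validation; averaging many decorrelated trees typically
# reduces variance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("tree   CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())
print("forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())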

68-06 ; 68T05 ; 93B47


This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to start using machine learning techniques.
Class (4h)
General Introduction to Machine Learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
Overview of Supervised Learning: True risk/Empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection.
Classical machine learning models: Support Vector Machines, Kernel Methods, Decision trees and Random Forests
An introduction to uncertainty in ML: Gaussian Processes, Quantile Regression with RF
Labs (4h)
Introduction to scikit-learn
Classical Machine learning Models with scikit-learn
Uncertainty in ML
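A small sketch of the uncertainty topic in the outline, using Gaussian process regression (an illustrative example written for this summary):

# Gaussian process regression returns a predictive standard deviation
# alongside the mean, giving a built-in uncertainty estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)  # noisy sine

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gpr.fit(X, y)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"prediction {m:+.3f} +/- {s:.3f}")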

68-06 ; 68T05 ; 93B47


This course introduces fundamental concepts in machine learning and presents some classical approaches and algorithms. The scikit-learn library is presented during the practical sessions. The course aims to provide the fundamentals needed to start using machine learning techniques.
Class (4h)
General Introduction to Machine Learning (learning settings, curse of dimensionality, overfitting/underfitting, etc.)
Overview of Supervised Learning: True risk/Empirical risk, loss functions, regularization, sparsity, norms, bias/variance trade-off, PAC generalization bounds, model selection.
Classical machine learning models: Support Vector Machines, Kernel Methods, Decision trees and Random Forests.
An introduction to uncertainty in ML: Gaussian Processes, Quantile Regression with RF
Labs (4h)
Introduction to scikit-learn
Classical Machine learning Models with scikit-learn
Uncertainty in ML
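The outline mentions quantile regression with random forests; base scikit-learn does not ship a quantile random forest, so the sketch below uses gradient boosting with a quantile loss as a stand-in (an assumption made for this summary):

# Quantile regression via gradient boosting: one model per quantile
# yields a simple prediction interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X).ravel() + 0.3 * rng.standard_normal(500)

models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)  # lower bound, median, upper bound
}
X_new = np.array([[2.5]])
for q, m in models.items():
    print(f"quantile {q}: {m.predict(X_new)[0]:+.3f}")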

68-06 ; 68T05 ; 93B47


Since 2012, deep neural networks have led to outstanding results in a wide variety of applications, exceeding previously existing methods on texts, images, sounds, videos and graphs. They consist of a cascade of parametrized linear and non-linear operators whose parameters are optimized to achieve a fixed task. This talk addresses four aspects of deep learning through the lens of signal processing. First, we explain image classification in the context of supervised learning. Then, we show several empirical results that allow us to get some insight into the black box of neural networks. Third, we explain how neural networks create invariant representations: in the specific case of translation, it is possible to design predefined neural networks that are stable to translation, namely the Scattering Transform. Finally, we discuss several recent statistical learning results about the generalization and approximation properties of this deep machinery.
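As a toy illustration of the translation-stability idea behind the Scattering Transform (a one-layer, one-dimensional sketch written for this summary; the function names and parameters are illustrative, and the real construction cascades wavelet transforms):

# Toy 1-D "scattering-like" feature: modulus of a band-pass filter
# response followed by local averaging. Shifting the input changes
# the raw signal a lot but these features only slightly.
import numpy as np

def scatter_like(x, psi, width=16):
    u = np.abs(np.convolve(x, psi, mode="same"))  # |x * psi|: removes phase
    window = np.ones(width) / width               # local average: shift stability
    return np.convolve(u, window, mode="same")

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
x_shift = np.roll(x, 3)  # small translation

t = np.arange(-8, 8)
psi = np.exp(-t**2 / 8) * np.cos(2 * np.pi * t / 4)  # crude band-pass filter

d_raw = np.linalg.norm(x - x_shift)
d_feat = np.linalg.norm(scatter_like(x, psi) - scatter_like(x_shift, psi))
print("raw distance:", d_raw, " feature distance:", d_feat)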

68T07 ; 94A12


Since 2012, deep neural networks have led to outstanding results in a wide variety of applications, exceeding previously existing methods on texts, images, sounds, videos and graphs. They consist of a cascade of parametrized linear and non-linear operators whose parameters are optimized to achieve a fixed task. This talk addresses four aspects of deep learning through the lens of signal processing. First, we explain image classification in the context of supervised learning. Then, we show several empirical results that allow us to get some insight into the black box of neural networks. Third, we explain how neural networks create invariant representations: in the specific case of translation, it is possible to design predefined neural networks that are stable to translation, namely the Scattering Transform. Finally, we discuss several recent statistical learning results about the generalization and approximation properties of this deep machinery.

68T07 ; 94A12


The development of quantum information processing and quantum computation goes hand in hand with the ability to address and manipulate quantum systems. Quantum Control Theory has provided a successful framework, both theoretical and experimental, to design and develop the control of such systems, in particular for finite-dimensional quantum systems or finite-dimensional approximations to them. The theory for infinite-dimensional systems is much less developed.
In this talk I propose a scheme of infinite dimensional quantum control on quantum graphs based on interacting with the system by changing the self-adjoint boundary conditions. I will show the existence of solutions of the time-dependent Schrödinger equation, the stability of the solutions and the (approximate) controllability of the state of a quantum system by modifying the boundary conditions on generic quantum graphs.
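In the simplest picture, the control scheme can be summarised as follows (a sketch in the spirit of the standard parametrization of self-adjoint boundary conditions by unitaries, e.g. the Asorey-Ibort-Marmo framework; the precise setting of the talk may differ):

$i\partial_{t}\psi(t)=-\Delta_{U(t)}\psi(t)$, with $\varphi-i\dot{\varphi}=U(t)(\varphi+i\dot{\varphi})$ at the vertices,

where $\varphi$ and $\dot{\varphi}$ collect the boundary values of $\psi$ and of its normal derivative, each unitary $U(t)$ on the boundary data selects one self-adjoint extension $-\Delta_{U(t)}$ of the Laplacian on the graph, and the control is the time-dependent family $t\mapsto U(t)$.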

81Q10 ; 47N40 ; 35Q41


This talk is devoted to two-dimensional Dirac operators on bounded domains coupled to a magnetic field perpendicular to the plane, with a focus on the MIT bag boundary condition. I will describe recent results about accurate asymptotic estimates for the low-lying (positive and negative) eigenvalues in the limit of a strong magnetic field.
This is a joint work with J.-M. Barbaroux, L. Le Treust and E. Stockmeyer.
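For reference, the MIT bag boundary condition can be written in the standard form (recalled here for orientation; sign conventions vary):

$-i\beta(\alpha\cdot\mathbf{n})\psi=\psi$ on $\partial\Omega$,

where $\mathbf{n}$ is the outward unit normal and $\alpha_{1},\alpha_{2},\beta$ are the Dirac matrices; it makes the normal component of the current vanish at the boundary, confining the particle to $\Omega$.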

35P15 ; 32A70 ; 81Q20


Multi angle  Normal and non-normal numbers
Madritsch, Manfred (Speaker) | CIRM (Publisher)

We fix a positive integer $q\geq 2$. Then every real number $x\in[0,1]$ admits a representation of the form

$x=\sum_{n\geq 1}\frac{a_{n}}{q^{n}}$,

where $a_{n}\in \mathcal{N}:=\{0,1,\ldots,q-1\}$ for $n\geq 1$. For given $x\in[0,1]$, $N\geq 1$, and $\mathrm{d}=d_{1}\ldots d_{k}\in \mathcal{N}^{k}$ we denote by $\Pi(x,\mathrm{d},N)$ the frequency of occurrences of the block $\mathrm{d}$ among the first $N$ digits of $x$, i.e.

$\Pi(x,\mathrm{d},N):=\frac{1}{N}\left|\{0\leq n< N : a_{n+1}=d_{1},\ldots,a_{n+k}=d_{k}\}\right|$.

From a probabilistic point of view we would expect that in a randomly chosen $x\in[0,1]$ each block $\mathrm{d}$ of $k$ digits occurs with the same frequency $q^{-k}$. In this respect we call a real $x\in[0,1]$ normal to base $q$ if $\Pi(x,\mathrm{d},N)\rightarrow q^{-k}$ as $N\rightarrow\infty$ for each $k\geq 1$ and each block $\mathrm{d}$ with $|\mathrm{d}|=k$. When Borel introduced this concept he showed that almost all reals (with respect to Lebesgue measure) are normal to all bases $q\geq 2$ simultaneously. However, still today all constructions of normal numbers have an artificial touch, and we do not know whether given reals such as $\sqrt{2}$, $\log 2$, $e$ or $\pi$ are normal to a single base.
On the other hand, the set of non-normal numbers is large from a topological point of view. We say that a typical element (in the sense of Baire) $x\in[0,1]$ has property $P$ if the set $S:=\{x\in[0,1] : x \text{ has property } P\}$ is residual, meaning that it contains a countable intersection of dense open sets. The set of non-normal numbers is residual.
In the present talk we will consider the construction of sets of normal and non-normal numbers with respect to recent results on absolutely normal and extremely non-normal numbers.
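As a quick numerical illustration (written for this summary, not part of the talk): the empirical block frequencies $\Pi(\sqrt{2}-1,\mathrm{d},N)$ in base $2$ can be estimated with exact integer arithmetic, and they do approach $2^{-k}$, consistent with the conjectured normality of $\sqrt{2}$:

# Estimate block frequencies in the binary expansion of sqrt(2) - 1.
# isqrt(2 * 4**N) = floor(2**N * sqrt(2)); its binary digits after the
# leading 1 are the first N fractional digits of sqrt(2).
from collections import Counter
from math import isqrt

N = 100_000
digits = bin(isqrt(2 * 4**N))[3:]  # strip "0b1" (the integer part)

k = 3  # block length
counts = Counter(digits[n:n + k] for n in range(N - k + 1))
for block, c in sorted(counts.items()):
    print(block, c / (N - k + 1))  # each should be close to 2**-k = 0.125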

11K16 ; 11A63


Multi angle  Metric discrepancy theory
Tichy, Robert (Speaker) | CIRM (Publisher)

This is a survey on progress in metric discrepancy theory and probabilistic aspects of harmonic analysis. We start with classical limit theorems of Salem and Zygmund as well as with the work of Erdős and Gál and of Walter Philipp. A focus lies on laws of the iterated logarithm for discrepancy functions of lacunary sequences. We show the connection to certain Diophantine properties of the underlying lacunary sequences, obtaining precise asymptotic formulas. Different phenomena for subexponentially growing, exponentially growing and superexponentially growing sequences are established. Furthermore, relations to arithmetic dynamical systems and to Donald Knuth's concept of pseudorandomness are discussed. Recent results are contained in joint work with Christoph Aistleitner and Istvan Berkes, and it is planned to publish parts of it in a Jean-Morlet Springer Lecture Notes volume.
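For orientation, a classical benchmark (the Chung-Smirnov law of the iterated logarithm, recalled here for context; the talk concerns the lacunary analogues): for i.i.d. uniform points,

$\limsup_{N\rightarrow\infty}\frac{ND_{N}}{\sqrt{2N\log\log N}}=\frac{1}{2}$ almost surely,

where $D_{N}$ is the discrepancy of the first $N$ points. For lacunary sequences $(n_{k}x)$ the corresponding limsup is almost everywhere finite and positive, but its value depends on arithmetic properties of $(n_{k})$, which is the phenomenon quantified in the survey.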

11K38 ; 11J83 ; 11K60


I will comment on recent results concerning the topological properties of finite rank Cantor minimal systems. I will mention some ideas to estimate their word complexity and pose a few open problems.

54H20 ; 37B10 ; 37B20


Subshifts on finite alphabets form a class of dynamical systems that bridge topological/ergodic dynamical systems with word combinatorics. In 1984, M. Boshernitzan used word combinatorics to provide a bound on the number of ergodic measures for a minimal subshift with bounds on its linear factor complexity growth rate. He further asked if the correct bound for subshifts naturally coded by interval exchange transformations (IETs) could be obtained by word combinatoric methods. (The "correct" bound is roughly half that obtained by Boshernitzan's work.) In 2017, jointly with M. Damron, we slightly improved Boshernitzan's bound by restricting to a smaller class of subshifts that still contains IET subshifts. In recent work, we have further proved the "correct" bound for subshifts whose languages satisfy a specific word combinatoric condition, which we called the Regular Bispecial Condition. (This condition is equivalent to being Eventually Dendric, as independently introduced by F. Dolce and D. Perrin.)
During the same time we worked on our 2017 paper, V. Cyr and B. Kra were independently improving Boshernitzan's results. In 2019, they relaxed the conditions to no longer require minimality and extended Boshernitzan's bound to generic measures. (Generic measures are those that have generic points, meaning they satisfy the averaging limits as stated in the Pointwise Ergodic Theorem. However, there are non-ergodic generic measures.) We have obtained the improved 2017 bound, but for generic measures (and on a more general class of subshifts). It should be noted that, to our current knowledge, there does not exist a proof of the correct bound for generic measures for minimal IETs (by any method). In this talk, I will discuss these recent results and highlight related open problems.

37B10 ; 28D05 ; 37A05


We say a pointed dynamical system is asymptotically nilpotent if every point tends to zero. We study group actions whose endomorphism actions are nilrigid, meaning that for all asymptotically nilpotent endomorphisms the convergence to zero is uniform. We show that this happens for a large class of expansive group actions on a large class of groups. The main examples are cellular automata on subshifts of finite type.

37B05 ; 37B15 ; 54H15


Virtual conference  Rotated odometers
Lukina, Olga (Speaker) | CIRM (Publisher)

We consider infinite interval exchange transformations (IETs) obtained as a composition of a finite IET and the von Neumann-Kakutani map, called rotated odometers, and study their dynamical and ergodic properties by means of an associated Bratteli-Vershik system. We show that every rotated odometer is measurably isomorphic to the first return map of a rational parallel flow on a translation surface of finite area with infinite genus and a finite number of ends, with respect to the Lebesgue measure. This is one motivation for the study of rotated odometers. We also prove a few results about the factors of the unique minimal subsystem of a rotated odometer. This is joint work with Henk Bruin.
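For readers unfamiliar with the von Neumann-Kakutani map: it is the binary adding machine on $[0,1)$, which flips the leading 1s of the binary expansion to 0 up to the first 0, which becomes 1. A small finite-precision sketch (written for this summary; the talk works with the exact map):

# Von Neumann-Kakutani map (binary odometer) acting on the first
# `prec` binary digits of x in [0,1): "add one with carry" on the
# digit string, starting from the most significant fractional digit.
def kakutani(x, prec=53):
    digits = []
    y = x
    for _ in range(prec):           # binary expansion of x
        y *= 2
        d = int(y)
        digits.append(d)
        y -= d
    for i, d in enumerate(digits):  # add 1 with carry
        if d == 0:
            digits[i] = 1
            break
        digits[i] = 0
    return sum(d / 2**(i + 1) for i, d in enumerate(digits))

# The orbit of 0 is the van der Corput sequence: 0, 1/2, 1/4, 3/4, 1/8, ...
x = 0.0
for _ in range(5):
    print(x)
    x = kakutani(x)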

37C83 ; 37E05 ; 28D05


This talk introduces, in a simplified setting, a novel commutator method to obtain averaging lemma estimates. Averaging lemmas are a type of regularizing effect on averages in velocity of solutions to kinetic equations. We introduce a new bilinear approach that naturally leads to velocity averages in $L^{2}\left ( \left [ 0,T \right ],H_{x}^{s} \right )$. The new method outperforms classical averaging lemma results when the right-hand side of the kinetic equation has enough integrability. It also allows a perturbative approach to averaging lemmas which provides, for the first time, explicit regularity results for non-homogeneous velocity fluxes.
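For context, the prototypical averaging lemma (the classical $L^{2}$ statement, recalled here for orientation) asserts: if $f$ and $v\cdot \nabla_{x}f$ both lie in $L^{2}_{x,v}$, then for every smooth compactly supported $\varphi$ the velocity average

$\rho_{\varphi}(x)=\int f(x,v)\varphi(v)\,dv$

belongs to $H_{x}^{1/2}$; the average in $v$ is half a derivative smoother in $x$ than $f$ itself, and the results of the talk quantify such gains in $L^{2}\left ( \left [ 0,T \right ],H_{x}^{s} \right )$.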

35Q83 ; 35L65 ; 35B65
