
## Multi angle  Geometric control and dynamics. Rifford, Ludovic (Speaker) | CIRM (Publisher)

Geometric control theory is concerned with the study of finite-dimensional control systems, that is, dynamical systems on which one can act through a control. After a brief introduction to controllability properties of control systems, we will see how basic techniques from control theory can be used to obtain, for example, generic properties in Hamiltonian dynamics.
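As one concrete instance of the controllability properties mentioned above (an illustrative example of our own, not taken from the talk): for a linear system $\dot{x}=Ax+Bu$, controllability can be checked with the Kalman rank condition, i.e. whether $[B, AB, \dots, A^{n-1}B]$ has full rank.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator: x1' = x2, x2' = u (a classic controllable example)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C))  # 2 = full rank, so the system is controllable
```
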


We consider spectral optimization problems of the form

$\min\lbrace\lambda_1(\Omega;D):\Omega\subset D,|\Omega|=1\rbrace$

where $D$ is a given subset of the Euclidean space $\textbf{R}^d$. Here $\lambda_1(\Omega;D)$ is the first eigenvalue of the Laplace operator $-\Delta$ with Dirichlet conditions on $\partial\Omega\cap D$ and Neumann or Robin conditions on $\partial\Omega\cap\partial D$. The equivalent variational formulation

$\lambda_1(\Omega;D)=\min\Big\lbrace\int_\Omega|\nabla u|^2\,dx+k\int_{\partial D}u^2\,d\mathcal{H}^{d-1}\ :\ u\in H^1(D),\ u=0\ \text{on}\ \partial\Omega\cap D,\ \|u\|_{L^2(\Omega)}=1\Big\rbrace$

is reminiscent of the classical drop problems, where the first eigenvalue replaces the total variation functional. We prove an existence result for general shape cost functionals and show some qualitative properties of the optimal domains. The case of a Dirichlet condition on a $\textit{fixed}$ part and a Neumann condition on the $\textit{free}$ part of the boundary is also considered.
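For intuition, the quantity $\lambda_1$ appearing in the cost can be approximated numerically. Here is a minimal sketch (not from the talk; the grid resolution and the choice of the unit square with pure Dirichlet conditions are illustrative assumptions), using the standard 5-point finite-difference Laplacian, whose smallest eigenvalue approaches the exact value $2\pi^2\approx 19.74$ as the grid is refined:

```python
import numpy as np

def dirichlet_lambda1(n=40):
    """Smallest eigenvalue of -Delta on the unit square with Dirichlet
    boundary conditions, via the 5-point finite-difference stencil.
    The 2-D Laplacian is the Kronecker sum T (+) T of the 1-D
    second-difference matrix T, so its smallest eigenvalue is twice
    the smallest eigenvalue of T."""
    h = 1.0 / (n + 1)
    T = (np.diag(2.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1)) / h**2
    return 2.0 * np.linalg.eigvalsh(T)[0]

print(dirichlet_lambda1())  # close to 2*pi^2 ~ 19.74
```
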
## Multi angle  Linear transformations for the stabilization of nonlinear partial differential equations. Coron, Jean-Michel (Speaker) | CIRM (Publisher)

We start by presenting some results on the stabilization, rapid or in finite time, of control systems modeled by ordinary differential equations. We study the interest and the limitations of the damping method for the stabilization of control systems. We then describe methods to transform a given linear control system into new ones for which rapid stabilization is easy to obtain. As an application of these methods, we show how to get rapid stabilization for Korteweg-de Vries equations and how to stabilize $1$-D linear parabolic equations in finite time by means of periodic time-varying feedback laws.

## Multi angle  Large-scale machine learning and convex optimization 1/2. Bach, Francis (Speaker) | CIRM (Publisher)

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large $n$") and each of these is large ("large $p$"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given $n$ observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.

## Multi angle  Large-scale machine learning and convex optimization 2/2. Bach, Francis (Speaker) | CIRM (Publisher)

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large $n$") and each of these is large ("large $p$"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given $n$ observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
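To make the stochastic-approximation discussion concrete, here is a toy sketch of our own (not one of the tutorial's algorithms): SGD with decaying step sizes on a one-dimensional strongly convex least-squares objective, together with Polyak-Ruppert iterate averaging, which typically tracks the optimum more tightly than the last iterate.

```python
import random

def sgd_mean_estimate(n=20000, w_star=3.0, seed=0):
    """Toy strongly convex problem: minimize E[(w - y)^2]/2 where
    y ~ w* + Gaussian noise. Runs SGD with step 1/sqrt(t) and also
    maintains the running (Polyak-Ruppert) average of the iterates."""
    rng = random.Random(seed)
    w, w_avg = 0.0, 0.0
    for t in range(1, n + 1):
        y = w_star + rng.gauss(0.0, 1.0)  # one fresh observation
        grad = w - y                      # stochastic gradient of (w - y)^2 / 2
        w -= grad / t ** 0.5              # decaying step size
        w_avg += (w - w_avg) / t          # running average of iterates
    return w, w_avg

last, avg = sgd_mean_estimate()
print(last, avg)  # both estimates are near w* = 3.0
```
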


## Multi angle  Dissipativity in nonautonomous linear-quadratic control processes. Núñez, Carmen (Speaker) | CIRM (Publisher)

This talk concerns the concept of dissipativity in the sense of Willems for nonautonomous linear-quadratic (LQ) control systems. A nonautonomous system of Hamiltonian ODEs can be associated with such an LQ system, and the analysis of the corresponding symplectic dynamics provides valuable information on the dissipativity properties. The presence of exponential dichotomy, the occurrence of weak disconjugacy, and the existence of nonnegative solutions of the Riccati equation provided by the Hamiltonian system are closely related to the presence of (normal or strict) dissipativity and to the definition of the (normal or strong) storage functions.
This is joint work with Roberta Fabbri, Russell Johnson, Sylvia Novo, and Rafael Obaya.
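For intuition about the Riccati equations mentioned above, here is a minimal scalar sketch (the constants are illustrative and not from the talk): for $\dot{x}=ax+bu$ with cost $\int(qx^2+ru^2)\,dt$, the algebraic Riccati equation $2ap-\frac{b^2}{r}p^2+q=0$ has a nonnegative solution $p$, and the feedback $u=-\frac{b}{r}px$ makes the closed loop stable.

```python
import math

def scalar_are(a, b, q, r):
    """Nonnegative root of the scalar algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0, arising from the LQ problem
    min integral(q*x**2 + r*u**2) dt subject to x' = a*x + b*u."""
    c = b * b / r
    # Rearranged as c*p**2 - 2*a*p - q = 0; take the nonnegative root.
    return (2 * a + math.sqrt(4 * a * a + 4 * c * q)) / (2 * c)

p = scalar_are(a=1.0, b=1.0, q=1.0, r=1.0)
k = p  # feedback gain, u = -(b/r)*p*x
print(p, "closed-loop pole:", 1.0 - k)  # a - (b**2/r)*p is negative: stable
```
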


## Multi angle  Dichotomy, the closed range theorem and optimal control. Brunovsky, Pavel (Speaker) | CIRM (Publisher)

Necessary conditions for infinite-horizon optimal control problems can be obtained from the alternative theorem. This theorem requires that the range of a shift operator on a function space be closed. It will be shown that this is the case if the dynamics of the problem are hyperbolic, but may fail to be so if they are not.