# Control Theory and Optimization | records found: 21

## Post-edited  Some new inequalities for the Cheeger constant - Fragalà, Ilaria (Conference Author) | CIRM (Publisher)

We discuss some new results for the Cheeger constant in dimension two, including:
- a polygonal version of Faber-Krahn inequality;
- a reverse isoperimetric inequality for convex bodies;
- a Mahler-type inequality in the axisymmetric setting;
- asymptotic behaviour of optimal partition problems.
Based on some recent joint works with D. Bucur, and for the last part also with B. Velichkov and G. Verzini.
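
For reference (our addition, not part of the abstract), the Cheeger constant of a bounded domain $\Omega\subset\textbf{R}^2$ is

$h(\Omega)=\inf\left\lbrace \frac{P(E)}{|E|} : E\subset\Omega,\ |E|>0\right\rbrace,$

where $P(E)$ denotes the perimeter of $E$ and $|E|$ its area; the inequalities above compare this quantity across classes of planar domains.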

## Post-edited  The moment-LP and moment-SOS hierarchies - Lasserre, Jean Bernard (Conference Author) | CIRM (Publisher)

We review basic properties of the moment-LP and moment-SOS hierarchies for polynomial optimization and compare them. We also illustrate how to use such a methodology in two applications outside optimization, namely:
- for approximating (as closely as desired, in a strong sense) sets defined with quantifiers of the form
$R_f =\{ x\in B : f(x,y)\leq 0$ for all $y$ such that $(x,y) \in K \}$,
$D_f =\{ x\in B : f(x,y)\leq 0$ for some $y$ such that $(x,y) \in K \}$,
by a hierarchy of inner sublevel set approximations
$\Theta_k = \left \{ x\in B : J_k(x)\leq 0 \right \}\subset R_f$
or outer sublevel set approximations
$\Theta_k = \left \{ x\in B : J_k(x)\leq 0 \right \}\supset D_f$,
for some polynomials $(J_k)$ of increasing degree;
- for computing convex polynomial underestimators of a given polynomial $f$ on a box $B \subset \textbf{R}^n$.
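
As a hedged illustration of the SOS side of the hierarchy (our toy sketch, not taken from the talk; it assumes the cvxpy package is available): the first level of the moment-SOS hierarchy for minimizing a univariate polynomial $f$ looks for the largest $\gamma$ such that $f-\gamma$ is a sum of squares, which is a small semidefinite program.

```python
import cvxpy as cp  # assumed available; any SDP-capable solver works

# Toy first-level SOS relaxation: maximize gamma such that
# f(x) - gamma = [1, x, x^2] Q [1, x, x^2]^T  with  Q PSD,
# for f(x) = x^4 - 3 x^2 + x.
gamma = cp.Variable()
Q = cp.Variable((3, 3), symmetric=True)   # Gram matrix of the SOS certificate

constraints = [
    Q >> 0,                        # positive semidefiniteness
    Q[0, 0] == -gamma,             # constant coefficient of f - gamma
    2 * Q[0, 1] == 1,              # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -3,   # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]

cp.Problem(cp.Maximize(gamma), constraints).solve()
print("SOS lower bound on min f:", gamma.value)
```

For univariate polynomials nonnegativity coincides with being a sum of squares, so this bound is tight; in several variables one climbs the hierarchy by enlarging the monomial basis.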

## Post-edited  Extended Lagrange spaces and optimal control - Mehrmann, Volker (Conference Author) | CIRM (Publisher)

Mathematical modeling and the numerical mathematics of today are very much Lagrangian, and modern automated modeling techniques lead to differential-algebraic systems. The optimal control of such systems in general cannot be obtained using the classical Euler-Lagrange approach or the maximum principle, but it is shown how this approach can be extended.
Keywords: differential-algebraic equations - optimal control - Lagrangian subspace - necessary optimality conditions - Hamiltonian system - symplectic flow
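
For orientation (our sketch of the classical setting the abstract contrasts with, not the extension presented in the talk), the standard necessary optimality conditions for an ODE-constrained problem

$\min_u \int_0^T L(x,u)\,dt$ subject to $\dot x = f(x,u),\ x(0)=x_0,$

take the Hamiltonian form

$H(x,\lambda,u) = L(x,u) + \lambda^\top f(x,u),\qquad \dot x = \partial_\lambda H,\quad \dot\lambda = -\partial_x H,\quad 0 = \partial_u H,$

with costate $\lambda$ (plus endpoint and transversality conditions). The talk discusses how such conditions must be adapted when the dynamics is a differential-algebraic equation $F(\dot x, x, u) = 0$ rather than an explicit ODE.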

## Post-edited  A spectral inequality for the bi-Laplace operator - Robbiano, Luc (Conference Author) | CIRM (Publisher)

In this talk we present an inequality, obtained with Jérôme Le Rousseau, for sums of eigenfunctions of the bi-Laplace operator with clamped boundary conditions. These boundary conditions do not allow one to reduce the problem to a Laplacian with adapted boundary conditions. The proof follows the strategy used for the Laplacian: we consider a problem with an extra variable and prove Carleman estimates for this new problem. The main difficulty is to obtain a Carleman estimate up to the boundary.
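
For context (a standard statement recalled here, not quoted from the talk), the model spectral inequality for the Dirichlet Laplacian, with eigenfunctions $\phi_j$ and eigenvalues $\lambda_j$, reads: for every nonempty open $\omega\subset\Omega$ there is a constant $C>0$ such that

$\sum_{\lambda_j\leq\mu}|a_j|^2\leq Ce^{C\sqrt{\mu}}\int_\omega\Big|\sum_{\lambda_j\leq\mu}a_j\phi_j(x)\Big|^2dx$ for all $\mu>0$ and all coefficients $(a_j)$.

The talk establishes an analogue for the clamped bi-Laplace operator, for which a reduction to such a Laplacian estimate is not available.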

## Post-edited  On the space highway to Lagrange points! - Trélat, Emmanuel (Conference Author) | CIRM (Publisher)

Everything is under control: mathematics optimizes everyday life.
In an empirical way, we are able to do many things with more or less efficiency or success. When attempting a parallel parking manoeuvre, the consequences may sometimes be ridiculous... But when launching a rocket or planning an interplanetary mission, it is better to be sure of what we are doing.
Control theory is a branch of mathematics that makes it possible to control, optimize and guide systems on which one can act by means of a control, such as a car, a robot, a space shuttle, a chemical reaction or, more generally, a process that one aims to steer to some desired target state.
Emmanuel Trélat will give an overview of the range of applications of this theory through several examples, sometimes amusing but also historical. He will show that the study of simple cases from our everyday life, far from insignificant, allows one to approach problems like orbit transfer or interplanetary mission design.
Keywords: control theory - optimal control - stabilization - optimization - aerospace - Lagrange points - dynamical systems - mission design

## Multi angle  Large-scale machine learning and convex optimization 1/2 - Bach, Francis (Conference Author) | CIRM (Publisher)

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
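
As a hedged illustration (our toy sketch, not code from the tutorial; it only assumes numpy), single-pass stochastic gradient descent with iterate averaging on a synthetic least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10_000, 20                         # many observations ("large n"), dimension p
X = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

theta = np.zeros(p)                       # last SGD iterate
theta_avg = np.zeros(p)                   # running (Polyak-Ruppert) average of the iterates
for i in range(n):                        # a single pass over the data
    step = 1.0 / (p * np.sqrt(i + 1))     # simple decreasing step-size heuristic
    grad = (X[i] @ theta - y[i]) * X[i]   # stochastic gradient of 0.5*(x_i^T theta - y_i)^2
    theta -= step * grad
    theta_avg += (theta - theta_avg) / (i + 1)

print("error of last iterate    :", np.linalg.norm(theta - theta_star))
print("error of averaged iterate:", np.linalg.norm(theta_avg - theta_star))
```

Averaging the iterates is one standard way to approach the $O(1/\sqrt{n})$ and $O(1/n)$ rates mentioned above without tuning the step size to problem constants.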

## Multi angle  Large-scale machine learning and convex optimization 2/2 - Bach, Francis (Conference Author) | CIRM (Publisher)

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.

## Multi angle  Dichotomy, the closed range theorem and optimal control - Brunovsky, Pavel (Conference Author) | CIRM (Publisher)

Necessary conditions for infinite horizon optimal control problems can be obtained by the alternative theorem. This theorem requires that the range of a shift operator on a function space be closed. It will be shown that this is the case if the dynamics of the problem is hyperbolic, but that it may fail to be so if it is not.
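
For background (a standard functional-analytic fact recalled here, not a statement from the talk), the closed range theorem supplies the alternative: for a bounded operator $T:X\to Y$ between Banach spaces, $\mathrm{ran}\,T$ is closed if and only if $\mathrm{ran}\,T^*$ is closed, and in that case $Tx=y$ is solvable exactly when $\langle y,y^*\rangle=0$ for every $y^*\in\ker T^*$. In the talk, hyperbolicity of the dynamics (a dichotomy) is what guarantees this closedness for the shift operator in question.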

## Multi angle  Dirichlet-Neumann shape optimization problems - Buttazzo, Giuseppe (Conference Author) | CIRM (Publisher)

We consider spectral optimization problems of the form

$\min\lbrace\lambda_1(\Omega;D):\Omega\subset D,|\Omega|=1\rbrace$

where $D$ is a given subset of the Euclidean space $\textbf{R}^d$. Here $\lambda_1(\Omega;D)$ is the first eigenvalue of the Laplace operator $-\Delta$ with Dirichlet conditions on $\partial\Omega\cap D$ and Neumann or Robin conditions on $\partial\Omega\cap\partial D$. The equivalent variational formulation

$\lambda_1(\Omega;D)=\min\lbrace\int_\Omega|\nabla u|^2dx+k\int_{\partial D}u^2d\mathcal{H}^{d-1}:u\in H^1(D),\ u=0$ on $\partial\Omega\cap D,\ \|u\|_{L^2(\Omega)}=1\rbrace$

recalls the classical drop problems, where the first eigenvalue replaces the total variation functional. We prove an existence result for general shape cost functionals and show some qualitative properties of the optimal domains. The case of a Dirichlet condition on a $\textit{fixed}$ part and of a Neumann condition on the $\textit{free}$ part of the boundary is also considered.
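
As a hedged numerical aside (our toy sketch, not the speaker's method; it assumes numpy and scipy): the quantity being optimized, $\lambda_1$, can be approximated on a fixed domain with a standard five-point finite-difference discretization. The sketch below computes the first Dirichlet eigenvalue on the unit square, where the exact value is $2\pi^2$; the mixed Dirichlet/Neumann or Robin conditions of the talk would require modifying the boundary rows.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

m = 60                                    # interior grid points per direction
h = 1.0 / (m + 1)
# 1-D second-difference matrix with Dirichlet boundary conditions
T = diags([-np.ones(m - 1), 2 * np.ones(m), -np.ones(m - 1)], [-1, 0, 1]) / h**2
# 2-D Dirichlet Laplacian on the unit square (5-point stencil)
A = kron(identity(m), T) + kron(T, identity(m))

# smallest eigenvalue via shift-invert around 0
lam1 = eigsh(A, k=1, sigma=0, which="LM", return_eigenvectors=False)[0]
print("lambda_1 on the unit square:", lam1, "(exact: 2*pi^2 =", 2 * np.pi**2, ")")
```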

## Multi angle  On the stability of the Bossel-Daners inequality - Trombetti, Cristina (Conference Author) | CIRM (Publisher)

The Bossel-Daners inequality is a Faber-Krahn-type inequality for the first Laplacian eigenvalue with Robin boundary conditions. We prove a stability result for this inequality.
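
For context (a standard formulation recalled here, not quoted from the talk), the Bossel-Daners inequality states that for every $\beta>0$ the ball minimizes the first Robin eigenvalue among sets of given volume:

$\lambda_1^\beta(\Omega)\geq\lambda_1^\beta(B)$ whenever $|B|=|\Omega|$,

where $\lambda_1^\beta$ denotes the first eigenvalue of $-\Delta$ with boundary condition $\partial_\nu u+\beta u=0$. A stability result quantifies how close $\Omega$ must be to a ball when the two eigenvalues are close.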
