
Non-convex SGD and Lojasiewicz-type conditions for deep learning

Multi angle
Authors : Scaman, Kevin (Author of the conference)
CIRM (Publisher)


Abstract : First-order non-convex optimization is at the heart of neural network training. Recent analyses showed that the Polyak-Lojasiewicz condition is particularly well-suited to analyzing the convergence of the training error for these architectures. In this short presentation, I will propose extensions of this condition that allow for more flexibility and more application scenarios, and show how stochastic gradient descent converges under these conditions. Then, I will show how to use these conditions to prove the convergence of the test error for simple deep learning architectures in an online setting.
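To illustrate the phenomenon the abstract refers to, here is a minimal sketch (not from the talk) of stochastic gradient descent on the standard non-convex example f(x) = x² + 3 sin²(x), which satisfies a Polyak-Lojasiewicz inequality ‖f′(x)‖² ≥ 2μ (f(x) − f*) despite not being convex; under such a condition SGD still drives the objective toward the global minimum f* = 0. Step size and noise level below are arbitrary illustrative choices.

```python
import numpy as np

# Non-convex but PL objective: f(x) = x^2 + 3 sin^2(x), global minimum f* = 0 at x = 0.
def f(x):
    return x**2 + 3 * np.sin(x)**2

def grad(x):
    return 2 * x + 3 * np.sin(2 * x)  # exact gradient of f

rng = np.random.default_rng(0)
x = 3.0          # arbitrary non-convex starting point
lr, noise = 0.05, 0.1
for _ in range(500):
    g = grad(x) + noise * rng.standard_normal()  # stochastic gradient oracle
    x -= lr * g

print(f(x))  # near the global minimum f* = 0, up to noise-driven fluctuations
```

Despite the sinusoidal non-convexity, the iterate settles near x = 0; the PL inequality is exactly what rules out spurious flat regions where the gradient vanishes far from the optimum.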

Keywords : deep learning; optimization; Lojasiewicz; non-convex; stochastic gradient descent

MSC Codes :
68T05 - Learning and adaptive systems

    Information on the Video

    Film maker : Hennenfent, Guillaume
    Language : English
    Available date : 10/11/2022
    Conference Date : 04/10/2022
    Subseries : Research talks
    arXiv category : Machine Learning
    Mathematical Area(s) : Computer Science ; Control Theory & Optimization
    Format : MP4 (.mp4) - HD
    Video Time : 00:47:22
    Targeted Audience : Researchers ; Graduate Students ; Doctoral Students ; Post-Doctoral Students
    Download : https://videos.cirm-math.fr/2022-10-04_Scaman.mp4

Information on the Event

Event Title : Learning and Optimization in Luminy - LOL2022 / Apprentissage et Optimisation à Luminy - LOL2022
Event Organizers : Boyer, Claire ; d'Aspremont, Alexandre ; Dieuleveut, Aymeric ; Moreau, Thomas ; Villar, Soledad
Dates : 03/10/2022 - 07/10/2022
Event Year : 2022
Event URL : https://conferences.cirm-math.fr/2551.html

Citation Data

DOI : 10.24350/CIRM.V.19965303
Cite this video as: Scaman, Kevin (2022). Non-convex SGD and Lojasiewicz-type conditions for deep learning. CIRM. Audiovisual resource. doi:10.24350/CIRM.V.19965303
URI : http://dx.doi.org/10.24350/CIRM.V.19965303

See Also

Bibliography

  • SCAMAN, Kevin, MALHERBE, Cédric, and DOS SANTOS, Ludovic. Convergence Rates of Non-Convex Stochastic Gradient Descent Under a Generic Lojasiewicz Condition and Local Smoothness. In : International Conference on Machine Learning. PMLR, 2022. p. 19310-19327. - https://proceedings.mlr.press/v162/scaman22a.html

  • ROBIN, David, SCAMAN, Kevin, and LELARGE, Marc. Convergence beyond the over-parameterized regime using Rayleigh quotients. Poster. NeurIPS, 2022.
