
Documents: Bartlett, Peter (2 results)


Benign overfitting - Lecture 1 - Bartlett, Peter (Author of the conference)

Multi angle

These lectures present some recent results on two phenomena that have been observed in deep neural networks. The first is benign overfitting: even without any explicit effort to control model complexity, deep learning methods find functions that give a near-perfect fit to noisy training data and yet exhibit good prediction performance in practice. We describe results that characterize this phenomenon in linear regression and in ridge regression. The second phenomenon that we consider is that of adversarial examples: functions computed by deep networks can be extremely sensitive to small changes in their inputs. We show that this occurs in ReLU networks of constant depth with independent Gaussian parameters because the functions that these networks compute are close to linear. The lectures include joint work with Seb Bubeck, Yeshwanth Cherapanamjeri, Phil Long, Gábor Lugosi, and Alex Tsigler.
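As a rough illustration of the first phenomenon, the sketch below (a hypothetical setup, not taken from the lectures; all dimensions and scales are illustrative assumptions) fits a minimum-norm least-squares interpolant in an overparameterized linear model whose feature covariance has a few high-variance directions carrying the signal and a long low-variance tail. The interpolant fits the noisy labels exactly, yet the noise it absorbs is spread over the tail directions and contributes little to prediction on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benign-overfitting setup: k high-variance directions carry
# the signal, a long tail of low-variance directions absorbs label noise.
n, d, k = 100, 5000, 5
scales = np.concatenate([np.full(k, 1.0), np.full(d - k, 0.01)])
theta_star = np.zeros(d)
theta_star[:k] = 1.0                                  # sparse "true" signal

X = rng.standard_normal((n, d)) * scales              # anisotropic features
y = X @ theta_star + 0.5 * rng.standard_normal(n)     # noisy labels

# Minimum-norm interpolant theta_hat = X^+ y: fits the training set exactly.
theta_hat = np.linalg.pinv(X) @ y
print("train MSE:", np.mean((X @ theta_hat - y) ** 2))   # ~ 0 (interpolation)

# Despite interpolating the noise, prediction on fresh samples stays good,
# because the noise was spread across the many low-variance tail directions.
X_test = rng.standard_normal((2000, d)) * scales
y_test = X_test @ theta_star
print("test MSE:", np.mean((X_test @ theta_hat - y_test) ** 2))
```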
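For the second phenomenon, here is a minimal sketch (assumed architecture and weight scaling, not from the lectures) of a constant-depth ReLU network with independent Gaussian parameters. Because the network is piecewise linear, its input gradient can be computed exactly, and the near-linearity of such random networks suggests that a step of length about |f(x)| / ||grad f(x)|| along the gradient direction flips the sign of the output while barely moving the input.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu_net(x, weights):
    # Forward pass of a fully connected ReLU network with scalar output.
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return (weights[-1] @ h).item()

def relu_grad(x, weights):
    # Gradient of the output w.r.t. x; exact away from activation
    # boundaries, since the network is piecewise linear.
    h, masks = x, []
    for W in weights[:-1]:
        z = W @ h
        masks.append(z > 0)
        h = np.maximum(z, 0.0)
    g = weights[-1]
    for W, m in zip(weights[-2::-1], masks[::-1]):
        g = (g * m) @ W
    return g.ravel()

# Constant-depth network, independent Gaussian weights (He-style scaling;
# the specific depth/width are illustrative assumptions).
d, width = 1000, 1000
dims = [d, width, width, 1]
weights = [rng.standard_normal((dout, din)) * np.sqrt(2.0 / din)
           for din, dout in zip(dims[:-1], dims[1:])]

x = rng.standard_normal(d)
f_x = relu_net(x, weights)
g = relu_grad(x, weights)

# Near-linearity: f(x + delta) ~ f(x) + g . delta, with ||g|| large relative
# to |f(x)| / ||x||, so a short step along -sign(f(x)) g typically flips the
# output's sign while ||delta|| / ||x|| stays small.
step = 1.5 * abs(f_x) / np.linalg.norm(g)
delta = -np.sign(f_x) * step * g / np.linalg.norm(g)
print("||delta|| / ||x||:", np.linalg.norm(delta) / np.linalg.norm(x))
print("f(x) =", f_x, "  f(x + delta) =", relu_net(x + delta, weights))
```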

