Linearly-convergent stochastic gradient algorithms.

Hippocrates, August 30, 2017, 1:30 pm - 2:30 pm

Dr. Francis Bach

Département d'Informatique de l'École Normale Supérieure, Centre de Recherche INRIA de Paris

Many machine learning and signal processing problems are traditionally cast as convex optimization problems in which the objective function is a sum of many simple terms. In this situation, batch algorithms compute the gradient of the objective by summing all individual gradients at every iteration, and they exhibit a linear convergence rate for strongly-convex problems. Stochastic methods instead select a single function at random at every iteration, classically leading to cheaper iterations but at the cost of a convergence rate that decays only as the inverse of the number of iterations. In this talk, I will present the stochastic average gradient (SAG) algorithm, which is dedicated to minimizing finite sums of smooth functions; it has a linear convergence rate for strongly-convex problems, but with an iteration cost similar to stochastic gradient descent, thus leading to faster convergence for machine learning and signal processing problems. I will also mention several extensions, in particular to saddle-point problems, showing that this new class of incremental algorithms applies more generally.
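The core idea described in the abstract, keeping a memory of the last gradient seen for each term of the sum and stepping along the average of the stored gradients, can be sketched as follows. This is a minimal illustration on a least-squares problem, not the speaker's code; the data, the smoothness estimate `L`, and the step size `1 / (16 * L)` are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the stochastic average gradient (SAG) idea on a
# strongly-convex least-squares problem: minimize (1/n) sum_i (a_i^T x - b_i)^2.
# The problem data and step-size choice below are illustrative assumptions.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

x = np.zeros(d)
grad_memory = np.zeros((n, d))   # last gradient seen for each f_i
grad_sum = np.zeros(d)           # running sum of the stored gradients

L = np.max(np.sum(A**2, axis=1))  # crude per-example smoothness constant
step = 1.0 / (16 * L)             # conservative constant step size

for _ in range(20000):
    i = rng.integers(n)                    # pick one term at random
    g_new = 2 * (A[i] @ x - b[i]) * A[i]   # gradient of f_i at the current x
    grad_sum += g_new - grad_memory[i]     # refresh only example i's slot
    grad_memory[i] = g_new
    x -= step * grad_sum / n               # step along the average stored gradient

print(np.linalg.norm(x - x_true))
```

Each iteration touches a single example, like stochastic gradient descent, yet the update direction aggregates information from all terms, which is what yields the linear convergence rate mentioned in the abstract.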

Presentation Slides

You can download the slides here