Abstract:
The momentum technique has improved both deep learning and regularized learning by accelerating the convergence of gradient descent (GD). Nesterov's accelerated gradient (NAG) and the heavy-ball (HB) method are the two classical momentum methods, and most acceleration analyses have focused on NAG, with only a few addressing HB. This article studies the individual convergence of HB, i.e., the convergence of its last iterate, for nonsmooth optimization problems with constraints. Many machine learning tasks impose constraints on the learned model, and the individual output is needed to actually guarantee the desired structure while maintaining an optimal convergence rate. We prove that HB attains an individual convergence rate of $O({1}/{\sqrt {t}})$, where $t$ is the number of iterations, which shows that both momentum methods can accelerate the individual convergence of basic GD to the optimal level. Our result avoids the drawbacks of previous work, which restricted the optimization problem to be unconstrained and required the number of iterations to be predefined, even for averaged iterates. The novel convergence analysis in this article explains how HB momentum accelerates individual convergence and clarifies how the averaged and individual convergence rates are similar and how they differ. The derived projection-based operation produces an optimal individual solution in regularized and stochastic settings, and, unlike the averaged output, it does so without significantly sacrificing sparsity or the theoretically optimal rate. Several experiments confirm the effectiveness of the HB momentum strategy.
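For reference, a minimal sketch of the kind of projected heavy-ball update considered in constrained nonsmooth settings such as this one is given below; the symbols $P_Q$, $Q$, $\alpha_t$, $\beta_t$, $f$, and $w_t$ are standard notation assumed here and are not taken verbatim from the paper:
\[
w_{t+1} = P_{Q}\!\big( w_t - \alpha_t\, g_t + \beta_t \,(w_t - w_{t-1}) \big), \qquad g_t \in \partial f(w_t),
\]
where $P_Q$ denotes the Euclidean projection onto the constraint set $Q$, $\alpha_t$ is the step size, $\beta_t$ is the momentum parameter, and $g_t$ is a subgradient of the nonsmooth objective $f$ at the current iterate. Individual convergence then refers to the rate at which $f(w_t) - f(w^\ast)$ decreases for the last iterate $w_t$ itself, rather than for an average of the iterates.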