FASFA: A Novel Next-Generation Backpropagation Optimizer
Philip Naveen
Godwin High School

Corresponding Author: [email protected]

This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. The method is built on Nesterov-enhanced first and second moment estimates. It is simple to implement and effective in practice because its hyperparameterization is intuitive and familiar. The training dynamics can be progressive or conservative depending on the sum of the decay rates, and the method performs well with a low learning rate and a small mini-batch size. Experiments and statistical analysis provide convincing evidence that FASFA is a strong candidate for optimizing stochastic objective functions, particularly those generated by multilayer perceptrons with convolution and dropout layers. In addition, the convergence properties and regret bound align with the online convex optimization framework. A first of its kind, FASFA addresses the growing need for diverse optimizers by providing next-generation training dynamics for artificial intelligence algorithms. Future experiments could modify FASFA based on the infinity norm.
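The abstract does not give FASFA's exact update rule, but an optimizer built on Nesterov-enhanced first and second moment estimates can be sketched along NAdam-like lines. Everything below (the function name, hyperparameter defaults, and the specific look-ahead formula) is an illustrative assumption, not the published method:

```python
import numpy as np

def nesterov_adam_step(theta, grad, m, v, t,
                       lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One parameter update combining Nesterov-style momentum with
    Adam-like second-moment scaling (a NAdam-flavored sketch only;
    FASFA's actual rule may differ)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2     # second-moment estimate
    m_hat = m / (1 - beta1**t)                # bias-corrected first moment
    v_hat = v / (1 - beta2**t)                # bias-corrected second moment
    # Nesterov enhancement: look one step ahead along the momentum direction
    m_nes = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1**t)
    theta = theta - lr * m_nes / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For example, iterating this step on the quadratic f(θ) = θ² (gradient 2θ) drives θ toward the minimum at 0, with the effective step size bounded by the learning rate regardless of the gradient's scale.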