The generalization performance of deep neural networks trained by a given optimization algorithm is one of the major concerns in machine learning, and it can be affected by various factors. In this paper, we theoretically prove that the Lipschitz constant of the loss function is an important factor in reducing the generalization error of the model produced by Adam or AdamW. The results can be used as a guideline for choosing the loss function when the optimization algorithm is Adam or AdamW. In addition, to evaluate the theoretical bound in a practical setting, we consider the human age estimation problem in computer vision. To better assess generalization, the training and test datasets are drawn from different distributions. Our experimental evaluation shows that a loss function with a lower Lipschitz constant and smaller maximum value improves the generalization of a model trained by Adam or AdamW.
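To illustrate the guideline (this sketch is not from the paper itself), consider two common regression losses on a bounded prediction error, as arises in age estimation; the bound B on the absolute error is an assumption introduced here for illustration:

\[
\ell_{\mathrm{MAE}}(\hat{y}, y) = |\hat{y} - y|,
\qquad
\left| \frac{\partial \ell_{\mathrm{MAE}}}{\partial \hat{y}} \right| = 1
\;\Rightarrow\; L_{\mathrm{MAE}} = 1,
\]
\[
\ell_{\mathrm{MSE}}(\hat{y}, y) = (\hat{y} - y)^2,
\qquad
\left| \frac{\partial \ell_{\mathrm{MSE}}}{\partial \hat{y}} \right| = 2\,|\hat{y} - y| \le 2B
\;\Rightarrow\; L_{\mathrm{MSE}} = 2B \ \text{(only when } |\hat{y}-y| \le B\text{)}.
\]

For ages in, say, [0, 100], B can be as large as 100, so under this assumption the MSE loss has a much larger Lipschitz constant and maximum value than the MAE loss, which is consistent with the guideline stated in the abstract.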
Lashkari, M., & Gheibi, A. (2024). Lipschitzness effect of a loss function on generalization performance of deep neural networks trained by Adam and AdamW optimizers. AUT Journal of Mathematics and Computing, 5(4), 361-375. doi: 10.22060/ajmc.2023.22182.1139