Abstract
In deep learning, robust loss functions are crucial for addressing challenges such as outliers and noise. This paper introduces a novel family of adaptive robust loss functions, Fractional Loss Functions (FLFs), generated by applying the fractional derivative operator to conventional loss functions. We demonstrate that adjusting the fractional derivative order α yields a diverse spectrum of FLFs while preserving the essential properties required for gradient-based learning. Tuning α morphs the loss landscape so as to reduce the influence of large residuals; α thus serves as an interpretable hyperparameter that defines the robustness level of an FLF. However, fixing α before training requires manual exploration to pinpoint an FLF that aligns with the learning task. To overcome this issue, we show that tuning α lets FLFs trade robustness against outliers for increased penalization of inliers. This inherent trade-off makes it feasible to treat α as an adaptive parameter that is learned in a balanced manner during training. FLFs can therefore dynamically adapt their loss landscape, facilitating error minimization while remaining robust. Experiments across diverse tasks show that FLFs significantly enhance performance. Our source code is available at https://github.com/mertcankurucu/Fractional-Loss-Functions.
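The abstract does not give the paper's exact construction, so the following is only a minimal sketch of the core idea, assuming the standard fractional power rule D^α|r|^p = Γ(p+1)/Γ(p−α+1)·|r|^(p−α) applied to the squared error, which yields a loss proportional to |r|^(2−α). The class name `FractionalL2Loss`, the sigmoid parameterization of α, and `alpha_init` are illustrative choices, not the authors' implementation; see the linked repository for the real one.

```python
import torch
import torch.nn as nn

class FractionalL2Loss(nn.Module):
    """Hypothetical sketch of a fractional loss function (FLF).

    Applies the fractional power rule to the squared error:
        D^alpha |r|^2 = Gamma(3) / Gamma(3 - alpha) * |r|^(2 - alpha),
    with 0 < alpha < 1. alpha = 0 recovers the squared error, while
    alpha -> 1 approaches an L1-like loss, reducing the influence of
    large residuals. alpha is learnable so the loss landscape can
    adapt during training, as the abstract describes.
    """

    def __init__(self, alpha_init: float = 0.5):
        super().__init__()
        # Unconstrained raw parameter, squashed to (0, 1) via sigmoid.
        self._alpha_raw = nn.Parameter(torch.tensor(alpha_init).logit())

    @property
    def alpha(self) -> torch.Tensor:
        return torch.sigmoid(self._alpha_raw)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        a = self.alpha
        # Clamp to avoid pow/grad issues at exactly zero residual.
        r = (pred - target).abs().clamp_min(1e-12)
        # Gamma(3) = 2; lgamma keeps the normalization differentiable in alpha.
        scale = 2.0 * torch.exp(-torch.lgamma(3.0 - a))
        return (scale * r.pow(2.0 - a)).mean()

# Illustrative joint optimization of the model weights and alpha:
model = nn.Linear(4, 1)
loss_fn = FractionalL2Loss(alpha_init=0.5)
opt = torch.optim.Adam([*model.parameters(), *loss_fn.parameters()], lr=1e-2)

x, y = torch.randn(32, 4), torch.randn(32, 1)
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

Because the gradient of the loss with respect to α flows through both the exponent and the Γ-based normalization, α is updated alongside the network weights, which is one plausible way to realize the adaptive behavior the abstract claims.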
| Original language | English |
|---|---|
| Article number | 113136 |
| Journal | Knowledge-Based Systems |
| Volume | 312 |
| DOIs | |
| Publication status | Published - 15 Mar 2025 |
Bibliographical note
Publisher Copyright: © 2025 Elsevier B.V.
Keywords
- Adaptive loss
- Deep learning
- Fractional calculus
- Loss function
- Robust loss