Regularization and False Alarms Quantification: Towards an Approach to Assess the Economic Value of Machine Learning
Nima Safaei · Pooria Assadi
Abstract
Regularization is a well-established technique in machine learning (ML) that facilitates an optimal bias-variance trade-off, thereby reducing model complexity and enhancing explainability. In this article, we provide a reinterpretation of the regularization hyper-parameter and argue that the failure to quantify the costs and risks of false alarms in the loss function undermines the measurability of the economic value of using ML, to the extent that it may be rendered practically useless.
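To make the abstract's central idea concrete, the following is a minimal illustrative sketch (not the authors' method) of a regularized loss in which the cost of false alarms is made explicit. The cost parameters `c_fp` (false-alarm cost) and `c_fn` (missed-detection cost), and the function itself, are assumptions introduced here for illustration; the regularization hyper-parameter `lam` plays its usual role.

```python
import numpy as np

def cost_sensitive_loss(w, X, y, lam=0.1, c_fp=5.0, c_fn=1.0):
    """L2-regularized logistic loss with asymmetric error costs.

    c_fp scales the penalty on negative examples (false-alarm cost),
    c_fn scales the penalty on positive examples (missed-detection cost),
    lam is the standard L2 regularization hyper-parameter.
    """
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))  # predicted P(y = 1)
    eps = 1e-12                   # guard against log(0)
    # Per-example cross-entropy, weighted by the cost of each error type.
    ce = -(c_fn * y * np.log(p + eps)
           + c_fp * (1.0 - y) * np.log(1.0 - p + eps))
    return ce.mean() + lam * np.dot(w, w)
```

Raising `c_fp` makes predictions on negative examples more expensive, so a model trained against this loss would trade recall for fewer false alarms; leaving such costs out of the loss, as the abstract argues, leaves the economic value of the resulting model unmeasured.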