
On BiasWrappers: New Regularization Techniques for Machine Learning Regression
Karthik Singaravadivelan

Corresponding Author: [email protected]


Abstract

Regression models in machine learning require regularization to balance the bias-variance tradeoff and produce realistic predictions on real-world data. This paper introduces two new regularization techniques, collectively referred to as BiasWrappers: BiasWrapperC1 and BiasWrapperC2. BiasWrapperC1 applies a form of penalization that discourages models from consistently overshooting or undershooting the target. BiasWrapperC2 uses a modified layer of regression stacking to capture correlations in a regression model's errors. The logic of each technique is presented as pseudocode in the context of machine learning regression. Both techniques are applied to machine learning models and compared against established regularization methods on a series of carefully chosen datasets, and the resulting metrics are used to reason about the implications of the new techniques. All implementations are specified as pseudocode in the paper, with external testing wrappers programmed in Python. An experimental study on standard regression datasets demonstrates the regularizations' value on multi-output data and data containing outliers.
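To make the two ideas concrete, the sketch below implements them in a scikit-learn style, matching the Python tooling the paper mentions. The class names, parameters, and details here are illustrative assumptions, not the paper's actual BiasWrapper implementations: the first class corrects systematic over/undershoot with a learned constant offset, and the second stacks a secondary regressor on the base model's residuals.

```python
# Minimal sketch of the two abstract-level ideas, assuming a
# scikit-learn-style estimator API. Names are hypothetical and
# do not correspond to the paper's pseudocode.
import numpy as np
from sklearn.base import clone


class BiasCorrectedRegressor:
    """Sketch of the BiasWrapperC1 idea: shift a fitted model's
    predictions by its mean signed training error, so the model
    no longer consistently overshoots or undershoots."""

    def __init__(self, base_estimator):
        self.base_estimator = base_estimator

    def fit(self, X, y):
        self.model_ = clone(self.base_estimator).fit(X, y)
        # Mean signed error captures systematic bias; axis=0 also
        # handles multi-output targets, one offset per output.
        self.offset_ = np.mean(self.model_.predict(X) - y, axis=0)
        return self

    def predict(self, X):
        return self.model_.predict(X) - self.offset_


class ResidualStackedRegressor:
    """Sketch of the BiasWrapperC2 idea: a second regressor is
    stacked on the first model's residuals to learn structure
    (correlations) in its errors."""

    def __init__(self, base_estimator, error_estimator):
        self.base_estimator = base_estimator
        self.error_estimator = error_estimator

    def fit(self, X, y):
        self.model_ = clone(self.base_estimator).fit(X, y)
        residuals = y - self.model_.predict(X)
        self.error_model_ = clone(self.error_estimator).fit(X, residuals)
        return self

    def predict(self, X):
        # Base prediction plus the learned error correction.
        return self.model_.predict(X) + self.error_model_.predict(X)
```

In this reading, both wrappers leave the base estimator untouched and only post-process its predictions, which is consistent with the abstract's framing of the techniques as wrappers around existing regression models.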

01 Feb 2024: Submitted to TechRxiv
12 Feb 2024: Published in TechRxiv