
On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data

Publication at Faculty of Mathematics and Physics, Central Library of Charles University | 2022

Abstract

Multilayer perceptrons (MLPs) remain in common use for nonlinear regression modeling across numerous applications. Available robust approaches to training MLPs, which yield reliable results even for data contaminated by outliers, have so far seen little adoption in real applications.

Moreover, there is a lack of systematic comparisons of the performance of robust MLPs trained with one of the regularization techniques available for standard MLPs to prevent overfitting. This paper compares the performance of MLPs trained with various combinations of robust loss functions and regularization types on small datasets.

The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of a set of 163,251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. Huber loss combined with $L_2$-regularization turns out to outperform the other choices; this combination is recommended whenever the data do not contain a large proportion of outliers.
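To make the recommended combination concrete, the following is a minimal numpy sketch (not the paper's implementation) of a one-hidden-layer MLP for regression trained with Huber loss and $L_2$ weight decay by plain gradient descent. All function names, hyperparameters, and the network size are illustrative assumptions; the Huber threshold $\delta$ controls where the loss switches from quadratic to linear.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss w.r.t. the residual r:
    # quadratic (gradient = r) near zero, linear (gradient = +/- delta)
    # in the tails, which bounds the influence of outliers.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def train_mlp_huber_l2(X, y, hidden=8, lr=0.01, lam=1e-3,
                       delta=1.0, epochs=2000, seed=0):
    """Illustrative one-hidden-layer MLP regression trained with
    Huber loss plus L2 regularization (weight decay) lam."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass with tanh hidden units
        H = np.tanh(X @ W1 + b1)
        pred = (H @ W2 + b2).ravel()
        r = pred - y
        # backward pass: Huber gradient replaces the plain residual
        g = huber_grad(r, delta)[:, None] / n
        dW2 = H.T @ g + lam * W2           # L2 penalty adds lam * W
        db2 = g.sum(axis=0)
        dH = (g @ W2.T) * (1.0 - H ** 2)   # tanh derivative
        dW1 = X.T @ dH + lam * W1
        db1 = dH.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

As a usage example, fitting a small dataset with a single gross outlier shows the intended behavior: the bounded Huber gradient keeps the outlier from dominating the fit, while the weight decay keeps the network from overfitting the few remaining points.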