Neural networks that are reliable in practice require adequate generalization capability, low sensitivity to noise in the processed data, and a transparent network structure. In this paper, we introduce a general framework for sensitivity control in neural networks of the back-propagation type (BP-networks) with an arbitrary number of hidden layers.
Experiments performed so far confirm that sensitivity inhibition combined with an enforced internal representation significantly improves generalization. The transparent network structure formed during training also facilitates architecture optimization.
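To make the idea of sensitivity inhibition concrete, the following is a minimal sketch of one plausible realization, assuming sensitivity is measured as the squared norm of the network's output-input gradient (one common reading of "low sensitivity to noise") and penalized alongside the task error during back-propagation training. The paper's exact penalty and its enforced internal-representation term may differ; names such as sens_weight and train_step are illustrative, not taken from the paper.

```python
# Hedged sketch: sensitivity-inhibited training of a BP-network.
# Assumption: "sensitivity" = squared gradient of outputs w.r.t. inputs,
# added to the task loss with a small weight (sens_weight is hypothetical).
import torch
import torch.nn as nn

# A BP-network with an arbitrary number of hidden layers.
net = nn.Sequential(
    nn.Linear(10, 20), nn.Sigmoid(),
    nn.Linear(20, 20), nn.Sigmoid(),
    nn.Linear(20, 1),
)

opt = torch.optim.SGD(net.parameters(), lr=0.1)
sens_weight = 1e-3  # trade-off between task error and sensitivity penalty

def train_step(x, target):
    x = x.requires_grad_(True)  # enable gradients w.r.t. the inputs
    out = net(x)
    task_loss = nn.functional.mse_loss(out, target)
    # Sensitivity penalty: squared norm of d(output)/d(input) per sample.
    grads = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    sens_penalty = grads.pow(2).sum(dim=1).mean()
    loss = task_loss + sens_weight * sens_penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random data:
x = torch.randn(32, 10)
target = torch.randn(32, 1)
print(train_step(x, target))
```

Penalizing the input-output gradient flattens the learned mapping around the training points, which is one way a network can become less sensitive to input noise; how this interacts with the enforced internal representation is specific to the framework described in the paper.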