In standard BP-networks, the outputs of hidden neurons are usually spread over the whole interval (0, 1). In this paper, we propose an efficient framework for enforcing a transparent internal knowledge representation in BP-networks during training.
We want the internal representations formed during training to differ as much as possible for different outputs. At the same time, the hidden neuron outputs are forced to group around three possible values, namely 0, 0.5, and 1, as sketched below.
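To make the mechanism concrete, the following sketch adds a penalty term to plain back-propagation on a toy task. The particular penalty y²(1 − y)²(y − 0.5)², the weighting coefficient c_F, the network size, and the XOR data are illustrative assumptions, not details taken from this paper; what matters is that the penalty vanishes exactly at 0, 0.5, and 1, so gradient descent pushes each hidden output toward one of those three values while the ordinary error term is minimized.

```python
# Minimal sketch: BP training with an assumed clustering penalty on
# hidden activations. Penalty f(y) = (y (1 - y) (y - 0.5))^2 is zero
# exactly at y in {0, 0.5, 1} and positive elsewhere on (0, 1).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def penalty(y):
    # zero at y in {0, 0.5, 1}; positive elsewhere
    return (y * (1.0 - y) * (y - 0.5)) ** 2

def penalty_grad(y):
    # derivative of penalty(y) w.r.t. y (product rule)
    u = y * (1.0 - y) * (y - 0.5)
    du = (1.0 - 2.0 * y) * (y - 0.5) + y * (1.0 - y)
    return 2.0 * u * du

# Toy data: XOR, a classic BP benchmark (illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden, lr, c_F = 4, 0.5, 0.1          # c_F weighs the penalty term
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(20000):
    # Forward pass: hidden activations H lie in (0, 1)
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Output delta for squared error with a sigmoid output
    dY = (Y - T) * Y * (1.0 - Y)

    # Hidden delta: back-propagated error PLUS the penalty gradient,
    # both passed through the sigmoid derivative H * (1 - H)
    dH = (dY @ W2.T + c_F * penalty_grad(H)) * H * (1.0 - H)

    # Gradient-descent weight updates
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(H, 2))                    # hidden outputs near 0, 0.5, 1
print("residual penalty:", penalty(H).sum())
```

Without the penalty gradient in `dH`, this reduces to ordinary BP and the hidden activations typically settle at arbitrary points inside (0, 1); with it, they condense toward the three target values, which is the transparency effect the paper aims for.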