The back propagation learning rule converges significantly faster when the expected values of source units, rather than their raw outputs, are used to update the weights. The expected value of a unit can be approximated as the sum of the unit's output and its error term. Results from numerous simulations demonstrate the comparative advantage of the new rule.
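The modified update can be sketched as follows. This is a minimal illustration of the rule described above, not the authors' implementation; the function names and the specific numbers in the example are hypothetical.

```python
def standard_update(w, lr, delta_j, o_i):
    """Standard back propagation update for one weight:
    dw = lr * delta_j * o_i, where o_i is the source unit's output."""
    return w + lr * delta_j * o_i

def expected_value_update(w, lr, delta_j, o_i, delta_i):
    """Modified rule: replace the source output o_i with its expected
    value, approximated as o_i + delta_i (output plus error term)."""
    return w + lr * delta_j * (o_i + delta_i)

# Hypothetical single-connection example: source unit with output 0.8
# and error term 0.05, target unit with error term 0.2, learning rate 0.1.
w_std = standard_update(0.5, 0.1, 0.2, 0.8)        # uses o_i = 0.8
w_exp = expected_value_update(0.5, 0.1, 0.2, 0.8, 0.05)  # uses 0.85
```

When the source unit's error term is zero, the two rules coincide; a nonzero error term shifts the effective source activity toward the value the unit "should" have produced, which is the mechanism behind the faster convergence reported above.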
Copyright 2014 Elsevier B.V. All rights reserved.
- Back propagation
- Neural networks
- Supervised learning