
Stochastic gradient learning and instability: an example

Publication at Faculty of Social Sciences, Faculty of Mathematics and Physics, Centre for Economic Research and Graduate Education, Central Library of Charles University | 2016

Abstract

In this paper, we investigate real-time behavior of constant-gain stochastic gradient (SG) learning, using the Phelps model of monetary policy as a testing ground. We find that whereas the self-confirming equilibrium is stable under the mean dynamics in a very large region, real-time learning diverges for all but the very smallest gain values.

We employ a stochastic Lyapunov function approach to demonstrate that the SG mean dynamics is easily destabilized by the noise associated with real-time learning, because its Jacobian contains eigenvalues that are stable but very small in magnitude. We therefore caution against the use of perpetual-learning algorithms in such settings, as the real-time dynamics may diverge from an equilibrium that is stable under the mean dynamics.
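The mechanism described above can be illustrated with a deliberately simplified sketch (this is a hypothetical one-dimensional example, not the paper's Phelps-model setup): a constant-gain stochastic gradient recursion on a quadratic objective whose curvature `lam` plays the role of a small stable eigenvalue of the mean-dynamics Jacobian. The mean dynamics contract toward the equilibrium at rate `lam`, but with a constant gain the noise sustains fluctuations whose stationary variance is roughly `gain * sigma**2 / (2 * lam)`, which becomes large as `lam` shrinks relative to the gain.

```python
import random

def sg_path(gain, lam, sigma, steps, seed=0):
    """Constant-gain SG recursion theta -= gain * (lam*theta + noise).

    The noiseless (mean) dynamics contract to 0 at rate lam; with
    noise, the iterate fluctuates around 0 with variance on the
    order of gain * sigma**2 / (2 * lam).
    """
    rng = random.Random(seed)
    theta = 1.0
    for _ in range(steps):
        noisy_grad = lam * theta + sigma * rng.gauss(0.0, 1.0)
        theta -= gain * noisy_grad
    return theta

# Without noise, the mean dynamics converge even for small lam.
# With noise and the same gain, a small lam leaves the iterate
# fluctuating far more widely than a well-conditioned one.
finals_small_lam = [sg_path(0.1, 0.01, 1.0, 20000, seed=s) for s in range(30)]
finals_big_lam = [sg_path(0.1, 1.0, 1.0, 20000, seed=s) for s in range(30)]
```

Comparing the mean-squared final iterates of the two samples shows the spread growing as the eigenvalue shrinks, which is the sense in which small stable eigenvalues leave the real-time dynamics vulnerable to noise even though the mean dynamics are stable.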