Complexity-Calibrated Benchmarks for Machine Learning Reveal When Next-Generation Reservoir Computer Predictions Succeed and Mislead

Sarah E. Marzen
Paul M. Riechers
James P. Crutchfield

**ABSTRACT:**
Recurrent neural networks are used to forecast time series in
finance, climate, language, and many other domains. Reservoir
computers are a particularly easily trainable form of recurrent
neural network. Recently, a “next-generation” reservoir
computer was introduced in which the memory trace involves only a
finite number of previous symbols. We explore the inherent
limitations of finite-past memory traces in this intriguing
proposal. A lower bound from Fano's inequality shows that, on
highly non-Markovian processes generated by large probabilistic
state machines, next-generation reservoir computers with
reasonably long memory traces have an error probability that is at
least ~ 60% higher than the minimal attainable error
probability in predicting the next observation. More generally, it
appears that popular recurrent neural networks fall far short of
optimally predicting such complex processes. These results
highlight the need for a new generation of optimized recurrent
neural network architectures. Alongside this finding, we present
concentration-of-measure results for randomly-generated but
complex processes. One conclusion is that large probabilistic
state machines—specifically, large ε-machines—are key to
generating challenging and structurally-unbiased stimuli for
ground-truthing recurrent neural network architectures.
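The Fano-inequality bound mentioned above can be illustrated numerically. The following Python sketch (an illustration of the standard inequality, not the paper's actual computation; the function names and setup are assumptions) solves Fano's inequality, h(P_e) + P_e·log2(k−1) ≥ H(X|R), for the smallest error probability P_e consistent with a given residual conditional entropy H(X|R) of the next symbol given a predictor's memory trace:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def fano_lower_bound(cond_entropy_bits: float, alphabet_size: int) -> float:
    """Smallest error probability Pe consistent with Fano's inequality,
    h(Pe) + Pe * log2(k - 1) >= H(X | R), over a k-symbol alphabet.

    Any predictor whose memory trace R leaves conditional entropy
    H(X | R) over the next symbol X must err at least this often.
    """
    k = alphabet_size
    if cond_entropy_bits <= 0.0:
        return 0.0  # zero residual entropy permits error-free prediction
    # The left-hand side increases from 0 to log2(k) as Pe runs
    # over [0, (k - 1) / k], so we can bisect for the root.
    hi_cap = (k - 1) / k
    if cond_entropy_bits >= math.log2(k):
        return hi_cap  # only uniform guessing is consistent

    def lhs(p: float) -> float:
        return binary_entropy(p) + p * math.log2(k - 1)

    lo, hi = 0.0, hi_cap
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid) < cond_entropy_bits:
            lo = mid
        else:
            hi = mid
    return hi
```

For a binary process with a full bit of residual entropy, the bound forces an error probability of 1/2; as a representation captures more predictive information and H(X|R) shrinks, the attainable lower bound on error shrinks with it.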

Sarah E. Marzen, Paul M. Riechers, and James P. Crutchfield, “Complexity-Calibrated Benchmarks for Machine Learning Reveal When Next-Generation Reservoir Computer Predictions Succeed and Mislead”, (2023).



arXiv.org:2303.14553.