Spectral Radius
The spectral radius of the recurrent weight matrix is the single most important hyperparameter in reservoir computing. It controls whether the reservoir has stable fading memory, operates at the edge of chaos with maximal computational capacity, or diverges into instability. This page defines the spectral radius, develops the mathematical theory behind its role, and explains how SPIRES uses it to configure reservoir dynamics.
Definition
The spectral radius of a square matrix $W \in \mathbb{R}^{N \times N}$ is the largest absolute value of its eigenvalues:

$$\rho(W) = \max_i |\lambda_i|,$$

where $\lambda_1, \ldots, \lambda_N$ are the eigenvalues of $W$ (which may be complex). The absolute value $|\lambda_i|$ denotes the modulus of $\lambda_i$ in the complex plane.
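As a quick sanity check, the definition can be evaluated directly with NumPy (a self-contained sketch; the matrix here is illustrative, not a SPIRES reservoir):

```python
import numpy as np

def spectral_radius(W: np.ndarray) -> float:
    """Largest modulus among the (possibly complex) eigenvalues of W."""
    return float(np.max(np.abs(np.linalg.eigvals(W))))

# A scaled rotation has the complex eigenvalue pair 0.8 * exp(+/- i*theta),
# so its spectral radius is exactly 0.8 even though no entry equals 0.8.
theta = 0.7
W = 0.8 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
print(spectral_radius(W))  # → 0.8 (up to floating-point error)
```

The example deliberately uses a matrix with complex eigenvalues to show that the modulus, not the real part, is what matters.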
Gelfand’s Formula
An equivalent characterization that does not require explicit eigenvalue computation is Gelfand’s formula:

$$\rho(W) = \lim_{k \to \infty} \lVert W^k \rVert^{1/k},$$

which holds for any matrix norm $\lVert \cdot \rVert$. This formula connects the spectral radius to the asymptotic growth rate of matrix powers. If $\rho(W) < 1$, then $\lVert W^k \rVert \to 0$ as $k \to \infty$: the effect of any initial perturbation decays to zero. If $\rho(W) > 1$, then $\lVert W^k \rVert \to \infty$: perturbations grow without bound.
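Gelfand's formula can be checked numerically. The Jordan block below (an assumed toy example) has spectral radius 0.5 but norm greater than 1; the $k$-th root of $\lVert W^k \rVert$ still converges to 0.5:

```python
import numpy as np

# A Jordan block: its only eigenvalue is 0.5, so rho(W) = 0.5, yet the
# off-diagonal entry makes ||W|| > 1. Gelfand's formula still recovers
# rho(W) from the asymptotic growth rate of the matrix powers.
W = np.array([[0.5, 1.0],
              [0.0, 0.5]])

estimates = {k: np.linalg.norm(np.linalg.matrix_power(W, k), 2) ** (1.0 / k)
             for k in (1, 10, 100, 1000)}
print(estimates)  # values decrease toward rho(W) = 0.5 as k grows
```

This also illustrates why a single matrix norm at $k = 1$ can badly overestimate the long-run growth rate.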
Intuition via Power Iteration
Gelfand’s formula can be understood through the lens of power iteration. Consider repeatedly multiplying an arbitrary vector $x_0$ by $W$:

$$x_{k+1} = W x_k, \qquad k = 0, 1, 2, \ldots$$

After many iterations, this sequence aligns with the eigenvector corresponding to the eigenvalue of largest modulus (provided that eigenvalue is unique and $x_0$ has a component along its eigenvector), and the ratio $\lVert x_{k+1} \rVert / \lVert x_k \rVert$ converges to $\rho(W)$. This is exactly the power iteration algorithm for computing the dominant eigenvalue.
In the context of a reservoir, the vector $x_k$ represents a perturbation to the reservoir state. If $\rho(W) < 1$, the perturbation shrinks at each time step: the reservoir forgets its initial conditions and becomes a function of the input history alone. If $\rho(W) > 1$, the perturbation amplifies, and the reservoir’s state becomes dominated by internal dynamics rather than external input.
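The power-iteration estimate can be sketched in a few lines. The nonnegative weight matrix below is an assumed choice that guarantees (by Perron-Frobenius) a real, positive dominant eigenvalue, so the plain iteration converges cleanly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Nonnegative weights give a real, positive dominant eigenvalue
# (Perron-Frobenius), so plain power iteration converges cleanly.
W = rng.uniform(0.0, 1.0, size=(200, 200))

x = rng.normal(size=200)
for _ in range(200):
    y = W @ x
    growth = float(np.linalg.norm(y) / np.linalg.norm(x))
    x = y / np.linalg.norm(y)           # renormalize to avoid overflow

rho_exact = float(np.max(np.abs(np.linalg.eigvals(W))))
print(growth, rho_exact)  # the growth factor matches the spectral radius
```

When the dominant eigenvalue is a complex conjugate pair, the per-step ratio oscillates and more careful estimators are needed; the nonnegative case sidesteps that complication.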
Three Dynamical Regimes
The spectral radius partitions the space of reservoir behavior into three qualitatively distinct regimes.
Regime 1: $\rho(W) < 1$ (Ordered / Stable)
When $\rho(W) < 1$, the reservoir is in the ordered regime:
- State perturbations decay geometrically: $\lVert \delta x_t \rVert \approx \rho(W)^t \, \lVert \delta x_0 \rVert$.
- The reservoir possesses the echo state property (see below).
- Memory is finite and fading: information about past inputs decays exponentially at a rate set by $\rho(W)$.
- The reservoir is a contractive map — all trajectories converge regardless of initial conditions.
- Computation is limited: the reservoir acts as a lossy buffer.
The decay rate of memory is approximately $\rho(W)^k$ at lag $k$ for linear reservoirs, so a higher $\rho(W)$ (closer to 1) produces longer memory.
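The fading-memory behavior is easy to observe: drive two copies of a tanh reservoir, started from different states, with the same input, and watch the state gap shrink roughly geometrically. This is a NumPy sketch with illustrative sizes and seeds, not SPIRES's spiking model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, rho_target = 100, 0.8
W = rng.normal(size=(N, N))
W *= rho_target / np.max(np.abs(np.linalg.eigvals(W)))   # set rho(W) = 0.8
w_in = rng.normal(size=N)

u = rng.normal(size=300)                  # one shared input stream
x1 = np.zeros(N)                          # two different initial states
x2 = rng.normal(size=N)

gaps = []
for t in range(300):
    x1 = np.tanh(W @ x1 + w_in * u[t])
    x2 = np.tanh(W @ x2 + w_in * u[t])
    gaps.append(float(np.linalg.norm(x1 - x2)))

# The gap shrinks toward zero: the reservoir forgets its initial
# condition and tracks the input history alone.
print(gaps[0], gaps[50], gaps[150])
```

The same experiment run with $\rho(W) > 1$ shows the opposite behavior, which is exactly the echo state property test described below.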
Regime 2: $\rho(W) \approx 1$ (Edge of Chaos / Critical)
When $\rho(W) \approx 1$, the reservoir operates at the edge of chaos:
- Perturbations neither grow nor decay on average.
- Memory is maximized: information persists for the longest possible duration before fading.
- The reservoir exhibits critical slowing down — it takes many time steps to relax from perturbations.
- Dynamical range is maximized: the reservoir responds sensitively to inputs across a wide range of amplitudes.
- Computational capacity (as measured by memory capacity and information processing) is maximal.
This is the optimal operating point for most reservoir computing tasks. The reservoir balances stability (it does not diverge) with sensitivity (it does not forget too quickly).
Regime 3: $\rho(W) > 1$ (Chaotic / Unstable)
When $\rho(W) > 1$, the reservoir enters the chaotic regime:
- State perturbations grow geometrically: $\lVert \delta x_t \rVert \approx \rho(W)^t \, \lVert \delta x_0 \rVert$ (for the linearized system).
- The echo state property is lost: the reservoir’s state depends on initial conditions, not just input history.
- In nonlinear reservoirs, bounded activation functions prevent true divergence, but the dynamics become chaotic — sensitive to initial conditions, unpredictable, and ergodic.
- Memory of specific inputs is destroyed by the chaotic mixing.
- The reservoir generates complex internal dynamics that may be useful for some tasks but are generally not controllable.
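Chaotic sensitivity can be demonstrated numerically: with $\rho(W) > 1$ and no input, a perturbation of size $10^{-10}$ is amplified until the two trajectories decorrelate, while tanh keeps both bounded. Sizes, seed, and the value 1.5 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
W = rng.normal(size=(N, N))
W *= 1.5 / np.max(np.abs(np.linalg.eigvals(W)))   # rho(W) = 1.5: chaotic

x1 = rng.normal(size=N)
x2 = x1 + 1e-10 * rng.normal(size=N)   # almost identical initial states

for _ in range(400):                    # autonomous (input-free) dynamics
    x1 = np.tanh(W @ x1)
    x2 = np.tanh(W @ x2)

# The tiny perturbation has been amplified to macroscopic size; the
# states themselves stay bounded because tanh saturates.
gap = float(np.linalg.norm(x1 - x2))
print(gap)
```

Compare with the $\rho(W) = 0.8$ experiment above: same code shape, opposite fate of the perturbation.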
The Echo State Property
The echo state property (ESP) formalizes the requirement that a reservoir’s state should be uniquely determined by its input history. A reservoir possesses the ESP if, for any input sequence $\{u_t\}$ and any two initial states $x_0, x_0'$:

$$\lim_{t \to \infty} \left\lVert x_t(x_0, \{u_t\}) - x_t(x_0', \{u_t\}) \right\rVert = 0.$$
In other words, the reservoir “forgets” its initial conditions and its state becomes a deterministic function of the input history alone.
Sufficient condition: For a reservoir with activation function $f$ satisfying $|f(a) - f(b)| \le |a - b|$ (Lipschitz constant 1, as with tanh), the echo state property holds if the largest singular value satisfies $\sigma_{\max}(W) < 1$. This is stricter than $\rho(W) < 1$, since $\rho(W) \le \sigma_{\max}(W)$.
Necessary condition: If $\rho(W) > 1$ and $f$ is the identity (linear reservoir), then the ESP is violated; $\rho(W) < 1$ is therefore necessary in the linear case.
For spiking reservoirs (as in SPIRES), the situation is more nuanced. The spike-and-reset nonlinearity is not a smooth contraction, and spiking reservoirs can maintain the ESP at values of $\rho(W)$ slightly above 1. However, the general principle holds: increasing $\rho(W)$ toward and beyond 1 transitions the reservoir from stable to chaotic dynamics.
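A commonly cited sufficient condition for the ESP requires the largest singular value $\sigma_{\max}(W) < 1$, which is stricter than $\rho(W) < 1$ because $\rho(W) \le \sigma_{\max}(W)$ always. A quick numerical check (NumPy sketch, assumed random matrix) shows how large the gap typically is:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(100, 100))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rho(W) = 0.9 < 1

rho = float(np.max(np.abs(np.linalg.eigvals(W))))
sigma_max = float(np.linalg.norm(W, 2))           # largest singular value

# sigma_max >= rho always; here sigma_max typically exceeds 1 even though
# rho < 1, so the strict sufficient condition fails while the ESP is
# observed to hold in practice.
print(rho, sigma_max)
```

This is why $\rho(W) < 1$ is the working design rule in practice even though it is not, formally, a sufficient condition.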
Weight Matrix Scaling
SPIRES generates the reservoir weight matrix in two steps:
Step 1: Raw Matrix Generation
A raw weight matrix $W_{\text{raw}}$ is generated according to the chosen topology (Erdős–Rényi, Watts–Strogatz, or Barabási–Albert). The nonzero weights are drawn from a specified distribution (e.g., uniform on $[-1, 1]$).
Step 2: Spectral Radius Rescaling
The raw matrix is rescaled to achieve the user-specified target spectral radius $\rho_{\text{target}}$:

$$W = \frac{\rho_{\text{target}}}{\rho(W_{\text{raw}})} \, W_{\text{raw}}.$$

This works because scaling a matrix by a constant $c$ scales all eigenvalues by $c$, and therefore the spectral radius by $|c|$:

$$\rho(cW) = |c| \, \rho(W).$$

After rescaling, $\rho(W) = \rho_{\text{target}}$ exactly. This allows the user to control the reservoir’s dynamical regime independently of the topology and weight distribution.
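The two-step procedure can be sketched in a few lines of NumPy. This is an illustration of the math, not SPIRES's implementation; the size, density, and Erdős–Rényi-style mask are arbitrary assumptions:

```python
import numpy as np

def scale_to_spectral_radius(W_raw: np.ndarray, rho_target: float) -> np.ndarray:
    """Rescale W_raw so its spectral radius equals rho_target.

    Relies on rho(c * W) = |c| * rho(W).
    """
    rho_raw = np.max(np.abs(np.linalg.eigvals(W_raw)))
    return (rho_target / rho_raw) * W_raw

rng = np.random.default_rng(4)
N, density = 300, 0.05

# Step 1: raw sparse matrix (random mask, uniform weights on [-1, 1]).
mask = rng.random((N, N)) < density
W_raw = np.where(mask, rng.uniform(-1.0, 1.0, size=(N, N)), 0.0)

# Step 2: rescale to the target spectral radius.
W = scale_to_spectral_radius(W_raw, 0.9)
rho_final = float(np.max(np.abs(np.linalg.eigvals(W))))
print(rho_final)  # → 0.9 to numerical precision
```

Note that the rescaling divides by $\rho(W_{\text{raw}})$, so a degenerate raw matrix with spectral radius zero (e.g., strictly triangular) would need special handling.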
Computing $\rho(W)$
For large sparse matrices (typical in reservoir computing), SPIRES computes the spectral radius using iterative eigenvalue algorithms rather than a full eigendecomposition. The cost is $O(\mathrm{nnz})$ per iteration (where $\mathrm{nnz}$ is the number of nonzero entries of $W$), and convergence typically requires only a modest number of iterations, far fewer than the matrix dimension.
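A sketch of why the per-iteration cost is $O(\mathrm{nnz})$: store the matrix as coordinate triplets, so each matrix-vector product touches only the stored entries. This is pure NumPy with an assumed random sparsity pattern, not SPIRES's solver; the dense eigendecomposition at the end is only a cross-check and is exactly the $O(N^3)$ step the iterative method avoids:

```python
import numpy as np

rng = np.random.default_rng(5)
N, nnz = 1000, 10000
rows = rng.integers(0, N, size=nnz)
cols = rng.integers(0, N, size=nnz)
vals = rng.uniform(0.0, 1.0, size=nnz)   # nonnegative -> real dominant eigenvalue

def matvec(x: np.ndarray) -> np.ndarray:
    """y[row] += val * x[col] for each stored triplet: O(nnz) work."""
    y = np.zeros(N)
    np.add.at(y, rows, vals * x[cols])
    return y

x = np.ones(N)
for _ in range(200):
    y = matvec(x)
    rho_est = float(np.linalg.norm(y) / np.linalg.norm(x))
    x = y / np.linalg.norm(y)

# Cross-check against a full dense eigendecomposition (feasible here
# only because N is small by production standards).
A = np.zeros((N, N))
np.add.at(A, (rows, cols), vals)
rho_exact = float(np.max(np.abs(np.linalg.eigvals(A))))
print(rho_est, rho_exact)
```

In a real implementation the triplet loop would be a CSR sparse matrix-vector product, but the asymptotic cost is the same.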
Spectral Radius and Memory Capacity
For linear reservoirs (a useful theoretical benchmark), the relationship between spectral radius and memory capacity has a closed-form characterization. The linear memory capacity at lag $k$ decays geometrically:

$$\mathrm{MC}_k \propto \rho(W)^{2k}.$$

The total linear memory capacity satisfies:

$$\mathrm{MC} = \sum_{k=1}^{\infty} \mathrm{MC}_k \le N,$$

where $N$ is the number of reservoir units. As $\rho(W) \to 1$, the memory profile becomes flatter (more uniform across lags), and the total MC approaches its maximum value of $N$. As $\rho(W)$ decreases, MC is concentrated at short lags and the total is reduced.
For spiking reservoirs, these relationships are approximate but qualitatively correct.
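The trend can be reproduced with a small linear-reservoir experiment: estimate $\mathrm{MC}_k$ as the squared correlation between the input delayed by $k$ steps and its best least-squares reconstruction from the state, then sum over lags. All sizes, seeds, and the uniform input distribution are illustrative assumptions, not SPIRES defaults:

```python
import numpy as np

def total_memory_capacity(rho: float, N: int = 50, T: int = 5000,
                          lags: int = 100, washout: int = 200) -> float:
    """Sum of MC_k for a linear reservoir x[t+1] = W x[t] + w_in u[t]."""
    rng = np.random.default_rng(6)
    W = rng.normal(size=(N, N))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # fix spectral radius
    w_in = rng.uniform(-1.0, 1.0, size=N)
    u = rng.uniform(-0.5, 0.5, size=T)

    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = W @ x + w_in * u[t]
        X[t] = x

    S = X[washout:]                                   # discard transient
    total = 0.0
    for k in range(1, lags + 1):
        y = u[washout - k: T - k]                     # input delayed by k
        beta, *_ = np.linalg.lstsq(S, y, rcond=None)  # linear readout
        c = np.corrcoef(y, S @ beta)[0, 1]
        total += c * c                                # MC_k = corr^2
    return total

# Larger rho -> larger total MC; neither can exceed N (up to estimation noise).
mc_low, mc_high = total_memory_capacity(0.5), total_memory_capacity(0.95)
print(mc_low, mc_high)
```

With finite data and finite precision the measured capacity at low $\rho$ falls off quickly with lag, which is the practical reason long-memory tasks favor $\rho$ close to 1.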
Interaction with Fractional Order
The spectral radius $\rho(W)$ and the fractional order $\alpha$ both influence memory, but through complementary mechanisms:
- Spectral radius controls the gain of the recurrent dynamics: how strongly the reservoir’s current state influences the next state.
- Fractional order controls the kernel shape: how the reservoir’s current state is influenced by the entire history of past states.
In a fractional-order spiking reservoir, the effective memory is determined by both parameters jointly. A useful mental model:
- $\rho(W)$ sets the “volume” of the recurrent feedback loop.
- $\alpha$ sets the “shape” of the temporal integration window.
- $\rho(W)$ close to 1 combined with an intermediate $\alpha$ provides both strong recurrence and broad temporal integration: the optimal regime for most tasks.
The two-dimensional parameter space $(\rho(W), \alpha)$ provides a richer landscape for optimization than either parameter alone. This is one of the key advantages of fractional-order reservoir computing as implemented in SPIRES.
Practical Guidelines
| Spectral Radius | Recommended Use |
|---|---|
| $\rho(W) \ll 1$ | Short-memory tasks, highly nonlinear dynamics |
| $\rho(W)$ moderately below 1 | General-purpose reservoir computing |
| $\rho(W)$ just below 1 | Long-memory tasks, near-linear dynamics |
| $\rho(W) > 1$ | Use with caution; may work for spiking networks due to reset nonlinearity |
In SPIRES, the spectral radius is set via the `spectral_radius` field of the `spires_reservoir_config` struct. The AGILE optimizer can search over the spectral radius jointly with other hyperparameters, including the fractional order $\alpha$.
References
- Jaeger, H. (2001). The “echo state” approach to analysing and training recurrent neural networks. GMD Report 148, German National Research Center for Information Technology.
- Jaeger, H. (2002). Short-term memory in echo state networks. GMD Report 152.
- Yildiz, I. B., Jaeger, H., & Kiebel, S. J. (2012). Re-visiting the echo state property. Neural Networks, 35, 1–9.
- Verstraeten, D., Schrauwen, B., D’Haene, M., & Stroobandt, D. (2007). An experimental unification of reservoir computing methods. Neural Networks, 20(3), 391–403.
- Lukoševičius, M., & Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127–149.