Grunwald-Letnikov Fractional LIF
The Grunwald-Letnikov (GL) Fractional Leaky Integrate-and-Fire model is the primary fractional neuron implementation in SPIRES. It provides an efficient and numerically robust discretization of fractional-order dynamics using precomputed binomial coefficients and a circular buffer for voltage history.
The GL Fractional Derivative
The Grunwald-Letnikov fractional derivative of order $\alpha$ is defined as a limit of a generalized difference quotient:

$$
{}^{GL}D^\alpha f(t) = \lim_{h \to 0} \frac{1}{h^\alpha} \sum_{k=0}^{\lfloor t/h \rfloor} (-1)^k \binom{\alpha}{k} f(t - kh)
$$

where the generalized binomial coefficient is:

$$
\binom{\alpha}{k} = \frac{\Gamma(\alpha + 1)}{\Gamma(k + 1)\,\Gamma(\alpha - k + 1)}
$$

For $\alpha = 1$, this reduces to the standard backward difference: $D^1 f(t) = \lim_{h \to 0} \frac{f(t) - f(t - h)}{h}$.
For $0 < \alpha < 1$, the sum extends over the entire history of $f$, with weights that decay as a power law. The GL derivative is equivalent to the Caputo derivative for sufficiently smooth functions with consistent initial conditions.
GL Discretization of the FLIF
Applying the GL derivative to the fractional LIF equation:

$$
\tau_m \, D^\alpha V(t) = -\bigl(V(t) - V_{\text{rest}}\bigr) + I(t)
$$

yields the discrete update rule. With step size $\Delta t$ and truncation length $L$, isolating $V_n$ on the left-hand side:

$$
V_n = \frac{V_{\text{rest}} + I_n - \dfrac{\tau_m}{\Delta t^\alpha} \displaystyle\sum_{j=1}^{L} c_j^{(\alpha)} V_{n-j}}{1 + \dfrac{\tau_m}{\Delta t^\alpha}}
$$

where the GL coefficients are:

$$
c_j^{(\alpha)} = (-1)^j \binom{\alpha}{j}
$$

These coefficients satisfy the recurrence relation:

$$
c_0^{(\alpha)} = 1, \qquad c_j^{(\alpha)} = \left(1 - \frac{\alpha + 1}{j}\right) c_{j-1}^{(\alpha)}
$$
Properties of the GL Coefficients
The GL coefficients have important properties that govern the memory behavior of the neuron:
- $c_0^{(\alpha)} = 1$ for all $\alpha$
- For $0 < \alpha < 1$, the binomial coefficients $\binom{\alpha}{j}$ alternate in sign, so $c_j^{(\alpha)} < 0$ for all $j \geq 1$; the magnitudes decay as $j^{-(\alpha + 1)}$ for large $j$
- The partial sums $\sum_{j=0}^{L} c_j^{(\alpha)}$ converge to $0$ as $L \to \infty$, which ensures that the truncated history approximation is well-behaved
The power-law decay of the coefficients is the discrete manifestation of the long memory property: contributions from the distant past never fully vanish but become progressively smaller.
Implementation Details
Precomputed Coefficients
At reservoir creation time, SPIRES precomputes the array $c_0^{(\alpha)}, \ldots, c_L^{(\alpha)}$ using the recurrence relation. This costs $O(L)$ operations and is done once. The coefficients are stored in a contiguous array for cache-efficient access during the per-step update.
Circular Buffer
Each neuron maintains a circular buffer of length $L$ to store its recent voltage history $V_{n-1}, \ldots, V_{n-L}$. The circular buffer avoids the $O(L)$ cost of shifting elements at every time step:
- Write: Store $V_n$ at position $n \bmod L$.
- Read: Access $V_{n-j}$ at position $(n - j) \bmod L$.
This gives $O(1)$ bookkeeping per step (beyond the $O(L)$ history sum).
Per-Step Update Algorithm
At each time step $n$ for neuron $i$:
- Compute the synaptic input current $I_n$ from incoming spikes.
- Compute the history sum: $S_n = \sum_{j=1}^{L} c_j^{(\alpha)} V_{n-j}$
- Update: $V_n = \dfrac{V_{\text{rest}} + I_n - (\tau_m / \Delta t^\alpha)\, S_n}{1 + \tau_m / \Delta t^\alpha}$
- If $V_n \geq V_{\text{th}}$: emit spike, reset $V_n \leftarrow V_{\text{reset}}$
- Write $V_n$ into the circular buffer.
History Length Trade-Off
The truncation length $L$ controls the trade-off between accuracy and computational cost:
| History Length | Memory Accuracy | Cost per Step | Use Case |
|---|---|---|---|
| 50–100 | Captures ~90% of memory | Low | Real-time, large reservoirs |
| 100–300 | Good approximation | Moderate | General-purpose |
| 300–500 | High accuracy | Higher | Research, benchmarking |
| 500+ | Diminishing returns | Expensive | Theoretical validation |
The truncation error from using $L$ terms instead of the full history decays as $O(L^{-\alpha})$. For $\alpha = 0.5$, doubling $L$ reduces the error by a factor of $2^{0.5} \approx 1.4$. For $\alpha$ closer to 1, the decay is faster, meaning shorter histories suffice.
Parameters
| Parameter | Symbol | Description | Typical Range |
|---|---|---|---|
| Fractional order | $\alpha$ | Order of the GL derivative | 0.1–1.0 |
| Membrane time constant | $\tau_m$ | Governs the decay rate | 10–50 ms |
| Resting potential | $V_{\text{rest}}$ | Equilibrium potential | -65 mV |
| Threshold | $V_{\text{th}}$ | Spike threshold | -50 mV |
| Reset potential | $V_{\text{reset}}$ | Post-spike reset value | -65 mV |
| Bias | $b$ | Constant offset current | -0.1 to 0.1 |
| History length | $L$ | Circular buffer size | 50–500 |
SPIRES API
```c
cfg.neuron_type = SPIRES_NEURON_FLIF_GL;
```

Example Configuration
```c
spires_flif_params params = {
    .alpha = 0.5,
    .tau_m = 20.0,
    .v_rest = -65.0,
    .v_th = -50.0,
    .v_reset = -65.0,
    .bias = 0.0,
    .L = 200,
};

spires_reservoir_config cfg = {
    .num_neurons = 500,
    .num_inputs = 1,
    .num_outputs = 1,
    .spectral_radius = 0.95,
    .ei_ratio = 0.8,
    .input_strength = 0.1,
    .connectivity = 0.1,
    .dt = 1.0,
    .connectivity_type = SPIRES_CONN_RANDOM,
    .neuron_type = SPIRES_NEURON_FLIF_GL,
    .neuron_params = &params,
};
```

Sweeping the Fractional Order
A common experimental workflow is to sweep $\alpha$ to find the optimal value for a given task:
```c
double alphas[] = {0.3, 0.5, 0.7, 0.9, 1.0};
int n_alphas = sizeof(alphas) / sizeof(alphas[0]);

for (int i = 0; i < n_alphas; i++) {
    spires_flif_params params = {
        .alpha = alphas[i],
        .tau_m = 20.0,
        .v_rest = -65.0,
        .v_th = -50.0,
        .v_reset = -65.0,
        .bias = 0.0,
        .L = 200,
    };
    cfg.neuron_params = &params;

    spires_reservoir *r = NULL;
    spires_status s = spires_reservoir_create(&cfg, &r);
    if (s != SPIRES_OK) continue;

    /* Train, evaluate, record performance for alphas[i] */
    spires_reservoir_destroy(r);
}
```

Complexity Analysis
For a reservoir with $N$ neurons and history length $L$:
- Memory: $O(NL)$ for voltage histories, plus $O(L)$ for the shared coefficient array
- Time per step: $O(NL)$ for history sums, parallelized across neurons with OpenMP
- Initialization: $O(L)$ for coefficient precomputation (done once)
The $O(NL)$ per-step cost is the dominant expense for fractional reservoirs. The SPIRES implementation parallelizes the outer loop over neurons, so with $T$ threads the effective cost is $O(NL / T)$.
Why GL is the Primary Fractional Method
The GL discretization is the default choice in SPIRES for several reasons:
- Direct discretization: No integral approximation is needed. The GL formula naturally produces a discrete update rule.
- Efficient coefficients: The recurrence relation avoids computing gamma functions at every step.
- Circular buffer: The implementation is cache-friendly and avoids memory allocation during simulation.
- Equivalence: For well-behaved initial conditions, the GL and Caputo discretizations produce identical results.
- Parallelism: The per-neuron history sums are independent and parallelize trivially.