
Grünwald–Letnikov Fractional LIF

The Grünwald–Letnikov (GL) Fractional Leaky Integrate-and-Fire model is the primary fractional neuron implementation in SPIRES. It provides an efficient and numerically robust discretization of fractional-order dynamics using precomputed binomial coefficients and a circular buffer for voltage history.

The GL Fractional Derivative

The Grünwald–Letnikov fractional derivative of order $\alpha$ is defined as a limit of a generalized difference quotient:

$$D^\alpha x(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t^\alpha} \sum_{k=0}^{\lfloor t/\Delta t \rfloor} (-1)^k \binom{\alpha}{k}\, x(t - k\,\Delta t)$$

where the generalized binomial coefficient is:

$$\binom{\alpha}{k} = \frac{\alpha\,(\alpha-1)\,(\alpha-2)\cdots(\alpha-k+1)}{k!} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)}$$

For $\alpha = 1$, this reduces to the standard backward difference: $D^1 x(t) \approx (x(t) - x(t - \Delta t)) / \Delta t$.

For $0 < \alpha < 1$, the sum extends over the entire history of $x$, with weights that decay as a power law. The GL derivative is equivalent to the Caputo derivative for sufficiently smooth functions with consistent initial conditions.

GL Discretization of the FLIF

Applying the GL derivative to the fractional LIF equation:

$${}_C D_t^\alpha V(t) = -\frac{1}{\tau_m}\bigl(V(t) - V_{\text{rest}}\bigr) + I(t) + b$$

yields the discrete update rule. Isolating $V_n$ on the left-hand side:

$$V_n = \Delta t^\alpha \left( -\frac{V_{n-1} - V_{\text{rest}}}{\tau_m} + I_n + b \right) - \sum_{k=1}^{\min(n, L)} c_k(\alpha)\, V_{n-k}$$

where the GL coefficients are:

$$c_k(\alpha) = (-1)^k \binom{\alpha}{k}$$

These coefficients satisfy the recurrence relation:

$$c_0(\alpha) = 1, \qquad c_k(\alpha) = \left(1 - \frac{\alpha + 1}{k}\right) c_{k-1}(\alpha)$$

Properties of the GL Coefficients

The GL coefficients have important properties that govern the memory behavior of the neuron:

  • $c_0(\alpha) = 1$ for all $\alpha$
  • $c_1(\alpha) = -\alpha$
  • For $0 < \alpha < 1$, the coefficients alternate in sign and decay as $|c_k| \sim k^{-\alpha-1}$ for large $k$
  • The partial sums $\sum_{k=0}^{L} c_k(\alpha)$ converge, which ensures that the truncated history approximation is well-behaved

The power-law decay of the coefficients is the discrete manifestation of the long memory property: contributions from the distant past never fully vanish but become progressively smaller.

Implementation Details

Precomputed Coefficients

At reservoir creation time, SPIRES precomputes the array $\{c_k(\alpha)\}_{k=1}^{L}$ using the recurrence relation. This costs $O(L)$ operations and is done once. The coefficients are stored in a contiguous array for cache-efficient access during the per-step update.

Circular Buffer

Each neuron maintains a circular buffer of length $L$ to store its recent voltage history $\{V_{n-1}, V_{n-2}, \ldots, V_{n-L}\}$. The circular buffer avoids the cost of shifting elements at every time step:

  • Write: Store $V_n$ at position $n \bmod L$.
  • Read: Access $V_{n-k}$ at position $(n - k) \bmod L$.

This gives $O(1)$ bookkeeping per step (beyond the $O(L)$ history sum).

Per-Step Update Algorithm

At each time step $n$ for neuron $i$:

  1. Compute the synaptic input current $I_n^{(i)}$ from incoming spikes.
  2. Compute the history sum: $S = \sum_{k=1}^{\min(n,L)} c_k(\alpha)\, V_{n-k}^{(i)}$
  3. Update: $V_n^{(i)} = \Delta t^\alpha \left(-\frac{V_{n-1}^{(i)} - V_{\text{rest}}}{\tau_m} + I_n^{(i)} + b\right) - S$
  4. If $V_n^{(i)} \geq V_{\text{th}}$: emit a spike and reset $V_n^{(i)} = V_{\text{reset}}$.
  5. Write $V_n^{(i)}$ into the circular buffer.

History Length Trade-Off

The truncation length $L$ controls the trade-off between accuracy and computational cost:

| History Length $L$ | Memory Accuracy | Cost per Step | Use Case |
|---|---|---|---|
| 50–100 | Captures ~90% of memory | Low | Real-time, large reservoirs |
| 100–300 | Good approximation | Moderate | General-purpose |
| 300–500 | High accuracy | Higher | Research, benchmarking |
| 500+ | Diminishing returns | Expensive | Theoretical validation |

The truncation error from using $L$ terms instead of the full history decays as $O(L^{-\alpha})$. For $\alpha = 0.5$, doubling $L$ reduces the error by a factor of $2^{0.5} \approx 1.41$. For $\alpha = 0.9$, the decay is faster ($2^{0.9} \approx 1.87$), meaning shorter histories suffice.

Parameters

| Parameter | Symbol | Description | Typical Range |
|---|---|---|---|
| Fractional order | $\alpha$ | Order of the GL derivative | 0.1–1.0 |
| Membrane time constant | $\tau_m$ | Governs the decay rate | 10–50 ms |
| Resting potential | $V_{\text{rest}}$ | Equilibrium potential | -65 mV |
| Threshold | $V_{\text{th}}$ | Spike threshold | -50 mV |
| Reset potential | $V_{\text{reset}}$ | Post-spike reset value | -65 mV |
| Bias | $b$ | Constant offset current | -0.1 to 0.1 |
| History length | $L$ | Circular buffer size | 50–500 |

SPIRES API

```c
cfg.neuron_type = SPIRES_NEURON_FLIF_GL;
```

Example Configuration

```c
spires_flif_params params = {
    .alpha   = 0.5,
    .tau_m   = 20.0,
    .v_rest  = -65.0,
    .v_th    = -50.0,
    .v_reset = -65.0,
    .bias    = 0.0,
    .L       = 200,
};

spires_reservoir_config cfg = {
    .num_neurons       = 500,
    .num_inputs        = 1,
    .num_outputs       = 1,
    .spectral_radius   = 0.95,
    .ei_ratio          = 0.8,
    .input_strength    = 0.1,
    .connectivity      = 0.1,
    .dt                = 1.0,
    .connectivity_type = SPIRES_CONN_RANDOM,
    .neuron_type       = SPIRES_NEURON_FLIF_GL,
    .neuron_params     = &params,
};
```

Sweeping the Fractional Order

A common experimental workflow is to sweep $\alpha$ to find the optimal value for a given task:

```c
double alphas[] = {0.3, 0.5, 0.7, 0.9, 1.0};
int n_alphas = sizeof(alphas) / sizeof(alphas[0]);

for (int i = 0; i < n_alphas; i++) {
    spires_flif_params params = {
        .alpha   = alphas[i],
        .tau_m   = 20.0,
        .v_rest  = -65.0,
        .v_th    = -50.0,
        .v_reset = -65.0,
        .bias    = 0.0,
        .L       = 200,
    };
    cfg.neuron_params = &params;

    spires_reservoir *r = NULL;
    spires_status s = spires_reservoir_create(&cfg, &r);
    if (s != SPIRES_OK) continue;

    /* Train, evaluate, record performance for alphas[i] */

    spires_reservoir_destroy(r);
}
```

Complexity Analysis

For a reservoir with $N$ neurons and history length $L$:

  • Memory: $O(N \cdot L)$ for voltage histories, plus $O(L)$ for the shared coefficient array
  • Time per step: $O(N \cdot L)$ for history sums, parallelized across neurons with OpenMP
  • Initialization: $O(L)$ for coefficient precomputation (done once)

The $O(N \cdot L)$ per-step cost is the dominant expense for fractional reservoirs. The SPIRES implementation parallelizes the outer loop over neurons, so with $P$ threads the effective cost is $O((N/P) \cdot L)$.

Why GL is the Primary Fractional Method

The GL discretization is the default choice in SPIRES for several reasons:

  1. Direct discretization: No integral approximation is needed. The GL formula naturally produces a discrete update rule.
  2. Efficient coefficients: The recurrence relation avoids computing gamma functions at every step.
  3. Circular buffer: The implementation is cache-friendly and avoids memory allocation during simulation.
  4. Equivalence: For well-behaved initial conditions, the GL and Caputo discretizations produce identical results.
  5. Parallelism: The per-neuron history sums are independent and parallelize trivially.

