
LIF Dynamics

The Leaky Integrate-and-Fire (LIF) neuron is the workhorse model of computational neuroscience. It captures the essential dynamics of a biological neuron — passive membrane decay, synaptic integration, and threshold-triggered spiking — while remaining analytically tractable and computationally efficient. This page develops the classical LIF model, extends it to fractional order, derives the Grünwald-Letnikov discretization used in SPIRES, and analyzes the effects of the fractional order $\alpha$ on neural dynamics.

The Classical LIF Neuron

Biophysical Form

A biological neuron’s membrane can be modeled as a parallel RC circuit. The membrane capacitance $C$ charges in response to input current $I(t)$, while the membrane resistance $R$ causes passive leakage toward the resting potential $V_{\text{rest}}$. The resulting equation is:

$$C \frac{dV(t)}{dt} = -\frac{1}{R}\bigl(V(t) - V_{\text{rest}}\bigr) + I(t) \tag{1}$$

When the membrane potential $V(t)$ reaches the threshold $V_{\text{th}}$, the neuron emits a spike and $V$ is reset to $V_{\text{reset}}$. The membrane time constant is $\tau_m = RC$.

Normalized Form

Dividing Equation (1) by $C$ and using $\tau_m = RC$:

$$\tau_m \frac{dV(t)}{dt} = -\bigl(V(t) - V_{\text{rest}}\bigr) + R \, I(t) + b \tag{2}$$

where $b$ is an optional bias current. This is the standard form used in most reservoir computing implementations. The dynamics are straightforward:

  • Leak: The term $-(V - V_{\text{rest}})$ drives $V$ toward the resting potential with time constant $\tau_m$.
  • Integration: The input current $I(t)$ charges the membrane.
  • Spike-and-reset: When $V \geq V_{\text{th}}$, emit a spike and set $V \leftarrow V_{\text{reset}}$.

The solution in the absence of spiking and for constant input $I$ is an exponential approach to equilibrium:

$$V(t) = V_{\text{rest}} + (V_0 - V_{\text{rest}}) \, e^{-t/\tau_m} + R I \left(1 - e^{-t/\tau_m}\right)$$

The exponential decay $e^{-t/\tau_m}$ means the classical LIF neuron has a single characteristic timescale. Memory of past inputs fades exponentially — fast and uniform.
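As a quick sanity check, the normalized LIF of Equation (2) can be integrated with a forward-Euler step and compared against the closed-form solution above. This is a minimal sketch for illustration (function names and parameter defaults are hypothetical, not the SPIRES API):

```python
import math

def lif_euler(V0, I, tau_m=20.0, R=1.0, V_rest=0.0, dt=0.1, T=100.0):
    """Forward-Euler integration of Eq. (2) without spiking; returns V(T)."""
    V = V0
    for _ in range(int(T / dt)):
        V += (dt / tau_m) * (-(V - V_rest) + R * I)
    return V

def lif_exact(V0, I, t, tau_m=20.0, R=1.0, V_rest=0.0):
    """Closed-form solution for constant input I."""
    decay = math.exp(-t / tau_m)
    return V_rest + (V0 - V_rest) * decay + R * I * (1.0 - decay)
```

With $\Delta t = 0.1$ and $\tau_m = 20$, the Euler trace agrees with the analytic solution to within about $10^{-4}$.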

The Fractional LIF (FLIF) Neuron

Motivation

The exponential decay of the classical LIF is at odds with biological observations. Cortical neurons exhibit power-law adaptation, ion channels display non-Markovian kinetics, and dendritic processing involves anomalous subdiffusion. All of these phenomena are naturally described by fractional-order dynamics.

The fractional LIF (FLIF) model replaces the integer-order time derivative with a Caputo fractional derivative of order $\alpha \in (0, 1]$.

Biophysical Form

$$C \, {}_{C}D_t^\alpha V(t) = -\frac{1}{R}\bigl(V(t) - V_{\text{rest}}\bigr) + I(t) \tag{3}$$

Normalized Form

$${}_{C}D_t^\alpha V(t) = -\frac{1}{\tau_m}\bigl(V(t) - V_{\text{rest}}\bigr) + I(t) + b \tag{4}$$

When $\alpha = 1$, the Caputo derivative reduces to the ordinary derivative and Equation (4) reduces to the classical LIF. When $\alpha < 1$, the neuron acquires a power-law memory kernel: the present voltage depends on the entire history of inputs and states, weighted by $(t - \tau)^{-\alpha}$.

Free Response and Mittag-Leffler Decay

For the unforced FLIF ($I(t) = 0$) with initial condition $V(0) = V_0$, the solution is:

$$V(t) = V_{\text{rest}} + (V_0 - V_{\text{rest}}) \, E_\alpha\!\left(-\frac{t^\alpha}{\tau_m}\right) \tag{5}$$

where $E_\alpha(z) = \sum_{k=0}^\infty z^k / \Gamma(\alpha k + 1)$ is the Mittag-Leffler function. This function interpolates between exponential and power-law behavior:

  • For $\alpha = 1$: $E_1(-t/\tau_m) = e^{-t/\tau_m}$ (exponential decay)
  • For small $t$: $E_\alpha(-t^\alpha/\tau_m) \approx 1 - t^\alpha/(\tau_m \Gamma(\alpha + 1))$ (stretched exponential)
  • For large $t$: $E_\alpha(-t^\alpha/\tau_m) \sim \tau_m \, t^{-\alpha}/\Gamma(1 - \alpha)$ (power-law tail)

The power-law tail is the key feature. While the classical LIF forgets its initial condition exponentially fast, the FLIF retains a memory that decays only algebraically. This slow forgetting is precisely the behavior needed to capture long-range temporal dependencies.
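For moderate arguments, the Mittag-Leffler function can be evaluated directly from its defining series. The helper below is an illustrative sketch only (the truncated series is ill-conditioned for large $|z|$, where dedicated algorithms are needed), checked against two known special cases: $E_1(-z) = e^{-z}$ and $E_{1/2}(-z) = e^{z^2}\,\mathrm{erfc}(z)$.

```python
import math

def mittag_leffler(alpha, z, n_terms=80):
    """Truncated defining series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))
```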

Grünwald-Letnikov Discretization

Derivation

To simulate the FLIF numerically, we use the Grünwald-Letnikov (GL) discretization. Starting from the GL definition of the fractional derivative and setting $h = \Delta t$:

$${}_{\text{GL}} D_t^\alpha V(t_n) \approx \frac{1}{\Delta t^\alpha} \sum_{k=0}^{n} c_k(\alpha) \, V_{n-k}$$

where $c_k(\alpha) = (-1)^k \binom{\alpha}{k}$ are the GL coefficients. Substituting into the FLIF equation (4), evaluating the leak term explicitly at $V_{n-1}$ and truncating the history sum at length $L$ (discussed below):

$$\frac{1}{\Delta t^\alpha}\left(V_n + \sum_{k=1}^{\min(n,L)} c_k(\alpha) \, V_{n-k}\right) = -\frac{1}{\tau_m}\bigl(V_{n-1} - V_{\text{rest}}\bigr) + I_n + b$$

Rearranging to isolate $V_n$:

$$V_n = \Delta t^\alpha \left(-\frac{V_{n-1} - V_{\text{rest}}}{\tau_m} + I_n + b\right) - \sum_{k=1}^{\min(n,L)} c_k(\alpha) \, V_{n-k} \tag{6}$$

This is the FLIF-GL update rule implemented in SPIRES. At each time step, the new voltage is computed from two contributions:

  1. Instantaneous dynamics: The first term captures the leak and input at the current time step, scaled by $\Delta t^\alpha$.
  2. History correction: The summation over past voltages, weighted by the GL coefficients, encodes the fractional memory.
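A direct transcription of Equation (6) for a single neuron looks like the following sketch (plain Python with hypothetical names, not the SPIRES implementation; a real reservoir would vectorize this over neurons):

```python
def gl_coeffs(alpha, L):
    """GL coefficients c_0..c_L from the recurrence c_k = (1 - (alpha+1)/k) c_{k-1}."""
    c = [1.0]
    for k in range(1, L + 1):
        c.append((1.0 - (alpha + 1.0) / k) * c[-1])
    return c

def flif_gl_trace(alpha, I, n_steps, tau_m=20.0, V_rest=0.0, b=0.0, dt=0.1, L=500):
    """Subthreshold voltage trace of the FLIF-GL update, Eq. (6), for constant input I."""
    c = gl_coeffs(alpha, L)
    V = [V_rest]  # V_0 starts at the resting potential
    for n in range(1, n_steps + 1):
        # History correction: weighted sum over the last min(n, L) voltages.
        hist = sum(c[k] * V[n - k] for k in range(1, min(n, L) + 1))
        # Instantaneous leak + input, scaled by dt^alpha, minus the history term.
        V.append(dt ** alpha * (-(V[n - 1] - V_rest) / tau_m + I + b) - hist)
    return V
```

For $\alpha = 1$ only $c_1 = -1$ survives, and the update collapses to the forward-Euler step of the classical LIF, which makes a convenient correctness check.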

GL Coefficient Computation

The GL coefficients are defined by the generalized binomial coefficients:

$$c_k(\alpha) = (-1)^k \binom{\alpha}{k} = (-1)^k \frac{\alpha(\alpha-1)(\alpha-2)\cdots(\alpha-k+1)}{k!} \tag{7}$$

These can be computed efficiently via the recurrence:

$$c_0(\alpha) = 1, \qquad c_k(\alpha) = \left(1 - \frac{\alpha + 1}{k}\right) c_{k-1}(\alpha) \tag{8}$$

The first several coefficients for representative values of $\alpha$:

| $k$ | $c_k(0.3)$ | $c_k(0.5)$ | $c_k(0.7)$ | $c_k(1.0)$ |
|-----|-----------|-----------|-----------|-----------|
| 0   | 1.000     | 1.000     | 1.000     | 1.000     |
| 1   | -0.300    | -0.500    | -0.700    | -1.000    |
| 2   | -0.105    | -0.125    | -0.105    | 0.000     |
| 3   | -0.060    | -0.063    | -0.042    | 0.000     |
| 4   | -0.041    | -0.039    | -0.021    | 0.000     |

Note that for $\alpha = 1$, only $c_0$ and $c_1$ are nonzero, and the update rule reduces to the classical Euler forward step. For $\alpha < 1$, the coefficients are nonzero for all $k$, encoding the infinite memory of the fractional operator.
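The recurrence (8) and the product form (7) can be cross-checked in a few lines. This sketch (illustrative names, stdlib only) reproduces the table values above:

```python
import math

def gl_coeffs_recurrence(alpha, L):
    """c_0..c_L via the recurrence of Eq. (8)."""
    c = [1.0]
    for k in range(1, L + 1):
        c.append((1.0 - (alpha + 1.0) / k) * c[-1])
    return c

def gl_coeff_direct(alpha, k):
    """c_k via the product form of Eq. (7): (-1)^k * binom(alpha, k)."""
    prod = 1.0
    for j in range(k):
        prod *= (alpha - j)
    return (-1) ** k * prod / math.factorial(k)
```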

History Length $L$

In principle, the GL sum extends over the entire history ($k = 0$ to $n$). In practice, the sum is truncated at a finite history length $L$. The truncation error decreases as:

$$\epsilon_L \sim \sum_{k=L+1}^{\infty} |c_k(\alpha)| \cdot |V_{n-k}| \sim L^{-\alpha}$$

The slower the power-law decay (smaller $\alpha$), the more history must be retained for a given accuracy. In SPIRES, $L$ is a configurable parameter. Typical values range from 50 to 500, depending on the task’s temporal scale and the chosen $\alpha$.

The computational cost of the history correction is $O(NL)$ per time step for a reservoir of $N$ neurons, making $L$ the primary knob for the memory-computation trade-off.
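The $L^{-\alpha}$ scaling can be checked numerically by summing $|c_k|$ beyond the cutoff. In this brute-force sketch (illustrative only), quadrupling $L$ should roughly halve the tail for $\alpha = 0.5$, since $4^{-0.5} = 0.5$:

```python
def gl_tail(alpha, L, k_max=2_000_000):
    """Approximate sum of |c_k(alpha)| for k > L by direct summation up to k_max."""
    c, tail = 1.0, 0.0
    for k in range(1, k_max + 1):
        c *= 1.0 - (alpha + 1.0) / k   # recurrence of Eq. (8)
        if k > L:
            tail += abs(c)
    return tail
```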

Effects of $\alpha$ on Neural Dynamics

The fractional order $\alpha$ profoundly shapes the behavior of the FLIF neuron across multiple dimensions.

Membrane Potential Decay

  • $\alpha = 1.0$: Exponential decay with time constant $\tau_m$. The neuron “forgets” its state on a single timescale.
  • $\alpha < 1.0$: Mittag-Leffler decay — initially stretched-exponential, asymptotically power-law. The neuron retains a fading trace of its entire history.

Effective Rheobase Shift

The rheobase is the minimum sustained current required to bring the neuron to threshold. For the classical LIF (Equation (4) with $\alpha = 1$), the rheobase is:

$$I_{\text{rheo}}^{\text{LIF}} = \frac{V_{\text{th}} - V_{\text{rest}}}{\tau_m}$$

For the FLIF, the fractional derivative introduces an effective increase in the rheobase. Intuitively, the power-law memory kernel causes the neuron to “remember” its subthreshold state more persistently, which opposes the charging process. Lower values of $\alpha$ produce a higher effective rheobase, meaning the neuron requires stronger input to fire.

This rheobase shift has an important consequence for reservoir computing: lower $\alpha$ makes the reservoir more selective, responding only to sufficiently strong or persistent input patterns.

Input Sensitivity vs. Memory Retention

The fractional order controls a fundamental trade-off between two desirable properties:

  • Input sensitivity (high $\alpha$): The neuron responds rapidly to new inputs, making it an effective sensor of recent stimuli. However, it quickly forgets past events.
  • Memory retention (low $\alpha$): The neuron maintains long traces of past inputs, enabling it to detect slow-varying patterns and long-range dependencies. However, it is less responsive to sudden changes.

This trade-off is not merely qualitative. It can be quantified precisely using information-theoretic measures, as discussed in Memory and Information Theory.

Frequency Response

The Laplace-domain transfer function of the FLIF neuron is:

$$H(s) = \frac{1}{s^\alpha + 1/\tau_m} \tag{9}$$

This is a fractional-order low-pass filter. The roll-off rate is $-20\alpha$ dB/decade, compared to $-20$ dB/decade for the classical LIF. Lower $\alpha$ produces a shallower roll-off, meaning the FLIF neuron passes a broader range of frequencies — it is more sensitive to slow temporal components of the input.
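The roll-off can be verified by evaluating Equation (9) on the imaginary axis. A minimal sketch (illustrative names; $(j\omega)^\alpha$ uses Python's principal branch for complex powers):

```python
import math

def flif_gain_db(alpha, omega, tau_m=20.0):
    """Magnitude of H(j*omega) in dB for H(s) = 1 / (s^alpha + 1/tau_m)."""
    H = 1.0 / ((1j * omega) ** alpha + 1.0 / tau_m)
    return 20.0 * math.log10(abs(H))

# Gain drop across one high-frequency decade approaches -20*alpha dB:
slope_half = flif_gain_db(0.5, 1000.0) - flif_gain_db(0.5, 100.0)  # about -10 dB
slope_one = flif_gain_db(1.0, 1000.0) - flif_gain_db(1.0, 100.0)   # about -20 dB
```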

Spike-and-Reset Mechanism

The spike-and-reset rule is the same for classical and fractional LIF neurons:

  1. If $V_n \geq V_{\text{th}}$: record a spike at time $t_n$.
  2. Set $V_n \leftarrow V_{\text{reset}}$.
  3. Optionally, enforce a refractory period during which $V$ is held at $V_{\text{reset}}$.

An important subtlety arises with the GL discretization: when $V_n$ is reset, the history buffer retains the pre-reset voltage values. This means the fractional memory “remembers” the approach to threshold even after the reset, creating a form of spike aftereffect that influences subsequent dynamics. This is actually biologically realistic, as real neurons exhibit post-spike membrane potential trajectories that depend on the preceding interspike interval.
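Putting the update rule and the reset together, a sketch of the full loop (hypothetical names, not the SPIRES API) makes the subtlety concrete: the reset overwrites only the current entry, so earlier history entries keep their pre-reset values.

```python
def flif_gl_spiking(alpha, I, n_steps, tau_m=20.0, V_rest=0.0, V_th=1.0,
                    V_reset=0.0, b=0.0, dt=0.1, L=200):
    """FLIF-GL simulation with spike-and-reset. Returns (voltage trace, spike times)."""
    c = [1.0]
    for k in range(1, L + 1):  # GL coefficients via the recurrence (8)
        c.append((1.0 - (alpha + 1.0) / k) * c[-1])
    V, spikes = [V_rest], []
    for n in range(1, n_steps + 1):
        hist = sum(c[k] * V[n - k] for k in range(1, min(n, L) + 1))
        v = dt ** alpha * (-(V[n - 1] - V_rest) / tau_m + I + b) - hist
        if v >= V_th:
            spikes.append(n * dt)
            v = V_reset  # reset the current value; past entries of V are untouched
        V.append(v)
    return V, spikes
```

Because the pre-reset trajectory stays in the buffer, the post-spike dynamics at $\alpha < 1$ depend on the preceding trajectory, which is the aftereffect described above.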

Summary

| Property | Classical LIF ($\alpha = 1$) | Fractional LIF ($\alpha < 1$) |
|----------|------------------------------|-------------------------------|
| Decay law | Exponential $e^{-t/\tau_m}$ | Mittag-Leffler $E_\alpha(-t^\alpha/\tau_m)$ |
| Asymptotic tail | Exponential | Power-law $\sim t^{-\alpha}$ |
| Memory of past | Single timescale $\tau_m$ | Infinite hierarchy of timescales |
| Rheobase | $I_{\text{rheo}} = (V_{\text{th}} - V_{\text{rest}})/\tau_m$ | Higher than classical |
| Frequency roll-off | $-20$ dB/decade | $-20\alpha$ dB/decade |
| Update cost per neuron | $O(1)$ | $O(L)$ |
| Parameters | $\tau_m, V_{\text{th}}, V_{\text{rest}}, V_{\text{reset}}$ | Same + $\alpha, L$ |

References

  1. Teka, W. W., Marinov, T. M., & Bhatt, S. J. (2014). Fractional-order leaky integrate-and-fire model with long-term memory and power law dynamics. Computational and Mathematical Methods in Medicine, 2014.
  2. Teka, W. W., Upadhyay, R. K., & Mondal, A. (2017). Fractional-order leaky integrate-and-fire model: frequency adaptation and coincidence detection. Biosystems, 155, 32-42.
  3. Lundstrom, B. N., Higgs, M. H., Spain, W. J., & Fairhall, A. L. (2008). Fractional differentiation by neocortical pyramidal neurons. Nature Neuroscience, 11(11), 1335-1342.
  4. Podlubny, I. (1999). Fractional Differential Equations. Academic Press.
  5. Gorenflo, R., Kilbas, A. A., Mainardi, F., & Rogosin, S. V. (2014). Mittag-Leffler Functions, Related Topics and Applications. Springer.

