
Diabetes Prediction

This case study applies a Fractional-Order Reservoir (FOR) to the Pima Indians Diabetes Database, a well-known clinical prediction benchmark. Unlike the previous case studies, which involve temporal signals (speech) or sequential decisions (control), diabetes prediction tests the reservoir's ability to classify static tabular data, a domain traditionally dominated by gradient-boosted trees and multilayer perceptrons. The results demonstrate that the fractional-order reservoir achieves the highest accuracy (80.85%) and F1 score (0.6545) among all tested models, and that performance improves monotonically as $\alpha$ decreases, favoring long memory even for nominally static data.

Dataset

Pima Indians Diabetes Database

The Pima Indians Diabetes Database (Smith et al., 1988) contains clinical measurements from 768 female patients of Pima Indian heritage, aged 21 and older. Each sample consists of 8 clinical features:

| Feature | Description |
| --- | --- |
| Pregnancies | Number of times pregnant |
| Glucose | Plasma glucose concentration (2-hour oral glucose tolerance test) |
| Blood Pressure | Diastolic blood pressure (mm Hg) |
| Skin Thickness | Triceps skin fold thickness (mm) |
| Insulin | 2-hour serum insulin (mu U/ml) |
| BMI | Body mass index (kg/m$^2$) |
| Diabetes Pedigree | Diabetes pedigree function (genetic influence score) |
| Age | Age in years |

Classification Task

The original dataset defines a binary classification problem (diabetic vs. non-diabetic). In this study, the task is extended to a 3-class problem:

  1. Diabetic — positive diagnosis
  2. Non-diabetic — negative diagnosis
  3. Uncertain — borderline cases where clinical indicators are ambiguous

The 3-class formulation is clinically meaningful: rather than forcing a binary decision on ambiguous cases, the model can flag them for additional screening.

Experimental Setup

Reservoir Configuration

| Parameter | Value |
| --- | --- |
| Neuron count $N$ | 350 |
| Neuron model | FLIF-GL (Grünwald-Letnikov fractional LIF) |
| Input dimensions | 8 (clinical features) |
| Output classes | 3 (diabetic / non-diabetic / uncertain) |
| Sliding window | 50 time steps |
| Readout training | Ridge regression |
| Fractional order $\alpha$ | Swept from 0.1 to 1.0 |

Encoding Static Data as Temporal Sequences

A key design decision is the encoding of static tabular data into a temporal format suitable for a reservoir computer. Each patient’s 8-feature vector is presented to the reservoir over a sliding window of 50 time steps:

  1. The 8 clinical features are presented as input at each of the 50 time steps.
  2. The reservoir integrates the static input over the window, using its recurrent dynamics to build up a rich state representation.
  3. After 50 steps, the reservoir state $\mathbf{x}(50) \in \mathbb{R}^{350}$ is read out and classified via the ridge-regression-trained linear layer.

This temporal encoding is not redundant. The reservoir's nonlinear dynamics transform the repeated static input into a progressively richer state representation at each time step. Different $\alpha$ values produce different temporal trajectories for the same input, effectively creating different nonlinear feature expansions. The 50-step window gives the reservoir enough time to develop a discriminative state, especially at low $\alpha$, where the power-law dynamics evolve slowly.
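The encoding loop above can be sketched with a generic leaky rate-based reservoir standing in for the FLIF-GL spiking neurons. The weight scales, leak rate, and `encode` helper here are illustrative assumptions, not the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, T = 350, 8, 50                       # neurons, clinical features, window length
W_in = rng.normal(0, 0.5, size=(N, D))     # fixed random input weights
W = rng.normal(0, 1.0, size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the recurrent dynamics stable

def encode(features, leak=0.3):
    """Drive the reservoir with the SAME static feature vector for T steps
    and return the final state x(T) as the classification feature vector."""
    x = np.zeros(N)
    for _ in range(T):
        x = (1 - leak) * x + leak * np.tanh(W_in @ features + W @ x)
    return x

patient = rng.normal(size=D)               # stand-in for one standardized 8-feature row
state = encode(patient)
print(state.shape)                         # (350,)
```

In the actual FOR, the leaky-tanh update would be replaced by the fractional-order spiking dynamics, whose power-law memory makes the trajectory over the 50 steps depend on $\alpha$.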

Baselines

The FOR is compared against three alternative architectures:

| Model | Description |
| --- | --- |
| FOR | Fractional-Order Reservoir (FLIF-GL neurons, ridge regression readout) |
| ESN | Echo State Network (tanh neurons, ridge regression readout, comparable $N$) |
| SNN | Classical Spiking Neural Network ($\alpha = 1.0$ LIF, ridge regression readout) |
| MLP | Multilayer perceptron (2 hidden layers, trained with backpropagation) |

All models use the same train/test split, preprocessing, and evaluation protocol to ensure a fair comparison.

Evaluation Metrics

  • Accuracy: Fraction of correctly classified test samples (all 3 classes).
  • F1 Score: Macro-averaged F1 across all 3 classes. F1 is the harmonic mean of precision and recall: $F_1 = 2 \cdot \text{precision} \cdot \text{recall} / (\text{precision} + \text{recall})$. The macro-average weights all classes equally, which is important because the classes are imbalanced (fewer diabetic cases than non-diabetic).
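The macro-averaged F1 described above can be computed directly. This is a self-contained sketch; the `macro_f1` helper is illustrative, and in practice a library routine such as scikit-learn's `f1_score` with `average="macro"` computes the same quantity:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes=3):
    """Macro-averaged F1: per-class F1 computed one-vs-rest, then averaged
    with equal weight per class regardless of class frequency."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 1])
print(macro_f1(y_true, y_pred))  # 0.666... (class 0 perfect, classes 1 and 2 at F1 = 0.5)
```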

Results

Model Comparison

| Model | Accuracy (%) | F1 Score |
| --- | --- | --- |
| FOR | 80.85 | 0.6545 |
| ESN | 77.27 | – |
| SNN | 75.95 | – |
| MLP | 75.97 | – |

The fractional-order reservoir achieves the highest accuracy and F1 score, outperforming the ESN by 3.58 percentage points and the SNN/MLP by approximately 5 percentage points.

Performance vs. Fractional Order

A striking feature of the results is that performance decreases monotonically with increasing $\alpha$:

| Fractional order $\alpha$ | Accuracy trend | F1 trend |
| --- | --- | --- |
| Low ($\alpha \leq 0.3$) | Highest | Highest |
| Mid ($\alpha \approx 0.5$) | Moderate-high | Moderate-high |
| High ($\alpha \geq 0.8$) | Approaches SNN baseline | Approaches SNN baseline |
| $\alpha = 1.0$ | Equivalent to classical SNN | Equivalent to classical SNN |

This monotonic relationship contrasts with the spoken digit recognition case study, where an intermediate $\alpha$ was optimal. The difference is explained by the nature of the data and the task.

Why Lower $\alpha$ is Better for Diabetes Prediction

The monotonic improvement at lower $\alpha$ can be understood through several complementary perspectives:

1. Slow-varying physiological processes. The 8 clinical features reflect long-term physiological states: glucose metabolism, body composition, genetic predisposition, and reproductive history. These processes evolve on timescales of months to years. A reservoir with long power-law memory is well suited to representing features that encode slow temporal processes, even when the data is presented as a static snapshot.

2. Richer nonlinear feature expansion. At low $\alpha$, the reservoir's state at step 50 depends on the entire 50-step trajectory of internal dynamics, weighted by a broad power-law kernel. This produces a richer, higher-effective-dimensionality state representation than high $\alpha$, where the state is dominated by the most recent input and the trajectory is nearly memoryless. The richer representation gives the linear readout more features to work with for separating the three classes.

3. No rapid temporal dynamics to track. Unlike speech (which has rapid formant transitions) or cart-pole control (which has fast oscillatory dynamics), the diabetes classification task has no intrinsic fast timescale. There is no penalty for slow responsiveness. The reservoir can afford to maximize memory without sacrificing temporal resolution.

4. Implicit feature interaction. The power-law history kernel causes the reservoir state to depend on complex interactions between the input features across time steps. Because the input is static (the same 8 features at every step), these temporal interactions become nonlinear feature interactions — the reservoir effectively computes higher-order combinations of the input features, analogous to polynomial feature expansion but through the dynamics of spiking neurons.
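The power-law history kernel invoked in the points above comes from the Grünwald-Letnikov construction, whose history weights are signed binomial coefficients. A minimal sketch of just the weights (not the full FLIF-GL neuron) shows how lower $\alpha$ yields a slower-decaying, fatter-tailed kernel:

```python
import numpy as np

def gl_weights(alpha, K):
    """Grünwald-Letnikov history weights w_k = (-1)^k * binom(alpha, k),
    computed with the standard recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

for alpha in (0.2, 0.5, 0.9):
    w = np.abs(gl_weights(alpha, 51))
    # asymptotically |w_k| ~ k^-(1+alpha): lower alpha keeps more weight
    # on distant history, i.e. longer memory
    print(alpha, w[10], w[50])
```

With $\alpha = 0.2$ the weight on a 50-step-old state is more than an order of magnitude larger than with $\alpha = 0.9$, which is the quantitative face of the "long memory even for static data" effect.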

Discussion

Why a Reservoir Outperforms an MLP on Tabular Data

The FOR’s advantage over the MLP (80.85% vs. 75.97%) on tabular data is noteworthy because MLPs are a standard baseline for tabular classification. The advantage can be attributed to:

  1. Nonlinear feature expansion. The 350-neuron reservoir expands the 8-dimensional input into a 350-dimensional state space through nonlinear spiking dynamics. The ridge regression readout then performs linear classification in this high-dimensional space. This is analogous to a kernel method, where the reservoir acts as an implicit feature map.

  2. Regularization through dynamics. The reservoir’s fixed random weights impose an implicit regularization on the feature expansion. Unlike the MLP, where all weights are trained and can overfit, the reservoir’s features are determined by the physics of spiking dynamics and are not susceptible to training-set overfitting.

  3. Convex training. The ridge regression readout finds the global optimum of a convex objective, whereas the MLP’s backpropagation-trained weights may converge to a suboptimal local minimum.
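The convex readout training in point 3 is ordinary ridge regression on the matrix of final reservoir states. A minimal closed-form sketch, using random stand-in data (all shapes and the penalty value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 200 reservoir states (N = 350) with 3-class one-hot targets.
Phi = rng.normal(size=(200, 350))         # state matrix, one row per sample
Y = np.eye(3)[rng.integers(0, 3, 200)]    # one-hot labels

lam = 1e-2                                # ridge penalty (illustrative value)
# Closed-form ridge solution: W_out = (Phi^T Phi + lam I)^-1 Phi^T Y
W_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(350), Phi.T @ Y)

pred = np.argmax(Phi @ W_out, axis=1)     # class = argmax of the linear readout
print(W_out.shape)                        # (350, 3)
```

Because the objective is a regularized least-squares problem, this solution is the unique global optimum; there is no local-minimum risk of the kind point 3 attributes to the MLP.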

FOR vs. ESN

The FOR outperforms the ESN (80.85% vs. 77.27%) despite both using ridge regression readouts and comparable reservoir sizes. The advantage comes from the fractional-order dynamics:

  • The FLIF neuron’s power-law memory kernel produces a state representation that is qualitatively different from the ESN’s exponential-memory tanh neuron.
  • The slow power-law dynamics create a more diverse temporal trajectory over the 50-step window, increasing the effective rank of the state matrix Φ\Phi and improving the ridge regression solution.
  • The spiking nonlinearity (threshold + reset) provides a different class of nonlinear feature expansion than the smooth tanh activation.
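The effective-rank argument above can be made concrete with one common definition, the entropy-based effective rank of Roy and Vetterli; the matrices here are random stand-ins, not the study's actual state matrices:

```python
import numpy as np

def effective_rank(Phi):
    """Entropy-based effective rank: erank = exp(-sum p_i log p_i),
    where p_i are the singular values normalized to sum to 1."""
    s = np.linalg.svd(Phi, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
full = rng.normal(size=(200, 50))                            # diverse trajectories
low = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))   # rank-3 trajectories
print(effective_rank(full), effective_rank(low))
```

A state matrix whose rows explore many directions (as the slow power-law dynamics are argued to produce) has a high effective rank, while nearly redundant trajectories collapse it toward the true low rank, leaving the ridge readout fewer usable features.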

Clinical Implications

The 3-class formulation is clinically relevant. Rather than forcing a binary diagnosis on ambiguous cases, the model can identify patients in the “uncertain” category and flag them for:

  • Additional diagnostic tests (e.g., HbA1c, oral glucose tolerance test)
  • Follow-up screening at shorter intervals
  • Lifestyle intervention programs

The FOR’s higher F1 score (0.6545) indicates that it achieves a better balance of precision and recall across all three classes, including the difficult “uncertain” class.

Limitations

  • The Pima Indians Diabetes Database is small (768 samples) and dated. Validation on larger, more diverse clinical datasets would strengthen the conclusions.
  • The 50-step sliding window is a hyperparameter that was not systematically optimized. The optimal window length may interact with α\alpha.
  • The 3-class formulation requires a labeling scheme for the “uncertain” class, which introduces a degree of subjectivity.
  • Direct comparison with modern gradient-boosted tree methods (XGBoost, LightGBM) was not included; these often set the state of the art on tabular data.

Connection to Theory

The monotonic accuracy-$\alpha$ relationship confirms the theoretical prediction that lower $\alpha$ maximizes memory capacity. For this task, memory capacity is the dominant factor because:

  • The input is static (no temporal dynamics to track).
  • The 50-step encoding window benefits from maximal history utilization.
  • Classification accuracy depends on the richness of the state-space representation, which increases with memory.

This contrasts with the speech and control tasks, where the trade-off between memory and input sensitivity produces an optimal intermediate $\alpha$. The diabetes prediction task, by removing the input-sensitivity requirement, isolates the pure memory advantage of low $\alpha$.

