
Your First Reservoir

This tutorial builds a complete reservoir computing application step by step. Unlike the Quickstart, every configuration field is explained, both training methods are demonstrated, and you will learn how to inspect internal reservoir state.

1. Choose a Neuron Model

SPIRES ships with five neuron models. Each is selected with a spires_neuron_type enum value.

| Enum | Description | When to Use |
| --- | --- | --- |
| SPIRES_NEURON_LIF_DISCRETE | Discrete-time leaky integrate-and-fire | Fast prototyping, benchmarking |
| SPIRES_NEURON_LIF_BIO | Biophysical LIF with continuous dynamics | When biologically realistic time constants matter |
| SPIRES_NEURON_FLIF_CAPUTO | Fractional LIF (Caputo derivative) | Long-memory tasks; the Caputo definition preserves initial conditions |
| SPIRES_NEURON_FLIF_GL | Fractional LIF (Grunwald-Letnikov) | Long-memory tasks; direct discrete approximation |
| SPIRES_NEURON_FLIF_DIFFUSIVE | Fractional LIF (diffusive representation) | Numerically stable fractional dynamics for long time series |

For this tutorial we use SPIRES_NEURON_LIF_BIO to see the effect of the dt time-step parameter on continuous dynamics.

2. Choose a Network Topology

The topology controls how neurons are wired together.

| Enum | Graph Model | Properties |
| --- | --- | --- |
| SPIRES_CONN_RANDOM | Erdos-Renyi | Each pair connected with probability connectivity. Simple, well understood. |
| SPIRES_CONN_SMALL_WORLD | Watts-Strogatz | High clustering with short path lengths. Good for tasks that benefit from local structure. |
| SPIRES_CONN_SCALE_FREE | Barabasi-Albert | Power-law degree distribution. Hub neurons emerge naturally. |

We will use SPIRES_CONN_SMALL_WORLD here to show the difference from the random graph used in the Quickstart.

3. Fill in the Configuration

Every field of spires_reservoir_config is documented below.

```c
spires_reservoir_config cfg = {
    .num_neurons       = 500,
    .num_inputs        = 2,
    .num_outputs       = 1,
    .spectral_radius   = 0.9,
    .ei_ratio          = 0.8,
    .input_strength    = 0.15,
    .connectivity      = 0.05,
    .dt                = 0.5,
    .connectivity_type = SPIRES_CONN_SMALL_WORLD,
    .neuron_type       = SPIRES_NEURON_LIF_BIO,
    .neuron_params     = NULL,
};
```

Field-by-Field Explanation

num_neurons — The number of recurrent units in the reservoir. Larger reservoirs have more computational capacity but cost more memory and time. Typical values range from 100 to 5000.

num_inputs — Dimensionality of the input vector u(t) at each time step. Set this to match the width of your input data.

num_outputs — Dimensionality of the readout vector y(t). Set this to match the width of your target data.

spectral_radius — The largest absolute eigenvalue of the reservoir weight matrix after scaling. Controls the memory–stability trade-off:

  • Values close to 1.0 give longer fading memory.
  • Values above 1.0 can push dynamics toward chaos.
  • Values well below 1.0 make the reservoir forget quickly.

Tip: A good starting point is 0.9. Tune from there based on task performance.

ei_ratio — Fraction of neurons that are excitatory (the rest are inhibitory). A value of 0.8 means 80% excitatory / 20% inhibitory, which mirrors cortical proportions.

input_strength — Scaling factor applied to the input weight matrix W_in. Larger values drive the reservoir harder, which can help with low-amplitude signals but may saturate neurons.

connectivity — Connection density of the recurrent weight matrix. A value of 0.05 means each neuron pair has a 5% probability of being connected (for random graphs). Sparse reservoirs are faster and often perform comparably to dense ones.

dt — Integration time step for continuous neuron models (LIF_BIO and the fractional models). Ignored by LIF_DISCRETE. Smaller values increase temporal resolution but require more steps to cover the same real time.

connectivity_type — One of the spires_connectivity_type enum values. See section 2 above.

neuron_type — One of the spires_neuron_type enum values. See section 1 above.

neuron_params — Pointer to an array of model-specific parameters (time constants, fractional orders, etc.). Pass NULL to use the default parameters for the selected neuron type. See the individual neuron model pages for the parameter layout.

4. Create the Reservoir and Check Status

```c
#include <spires.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* ... cfg from above ... */
    spires_reservoir *r = NULL;
    spires_status s = spires_reservoir_create(&cfg, &r);
    if (s != SPIRES_OK) {
        fprintf(stderr, "spires_reservoir_create failed with status %d\n", s);
        return 1;
    }
```

spires_reservoir_create returns one of four status codes:

| Code | Meaning |
| --- | --- |
| SPIRES_OK | Success |
| SPIRES_ERR_INVALID_ARG | A configuration field is out of range or NULL where not allowed |
| SPIRES_ERR_ALLOC | Memory allocation failed |
| SPIRES_ERR_INTERNAL | An unexpected internal error |

Warning: Always check the return value. Passing a NULL config pointer or setting num_neurons to 0 will return SPIRES_ERR_INVALID_ARG.

5. Feed Input Manually with spires_step()

spires_run() drives the reservoir for an entire series in one call. Sometimes you need finer control — for example, to inspect state after every step or to mix input from different sources. Use spires_step() for this.

```c
    /* Two-dimensional input: we feed one sample at a time */
    double u[2];
    for (size_t t = 0; t < 200; t++) {
        u[0] = /* your first input channel at time t */;
        u[1] = /* your second input channel at time t */;
        s = spires_step(r, u);
        if (s != SPIRES_OK) {
            fprintf(stderr, "spires_step failed at t=%zu: %d\n", t, s);
            break;
        }
        /* Optionally read the current output */
        double y;
        spires_compute_output(r, &y);
        printf("t=%zu y=%.4f\n", t, y);
    }
```

spires_step() advances the reservoir by one time step. The input pointer u must point to an array of length num_inputs. After the call, the internal neuron states have been updated but no output is computed automatically — call spires_compute_output() if you need the readout value.

6. Train with Ridge Regression

Ridge regression (Tikhonov regularization) is the standard offline training method for reservoir computing. It solves for readout weights W_out that minimize:

‖ W_out X − Y ‖₂² + λ ‖ W_out ‖₂²

where X is the matrix of collected reservoir states and Y is the target matrix.

```c
    /* Reset state so training starts from a clean slate */
    spires_reservoir_reset(r);

    double lambda = 1e-4;
    s = spires_train_ridge(r, input_series, target_series, series_length, lambda);
    if (s != SPIRES_OK) {
        fprintf(stderr, "Ridge training failed: %d\n", s);
        spires_reservoir_destroy(r);
        return 1;
    }
```

Choosing lambda. The regularization parameter λ prevents overfitting:

  • Too small (λ → 0): weights become large and the readout overfits noise in the training data.
  • Too large: the readout underfits because it is over-constrained toward zero.
  • A good range to search is 10⁻⁸ to 10⁻², in powers of ten.

Tip: If your predictions are wildly noisy, increase λ by a factor of 10. If predictions are flat and featureless, decrease it.

7. Train Online with the Delta Rule

For streaming applications where data arrives continuously, use spires_train_online(). After each spires_step() call, pass the target value for that step:

```c
    double lr = 0.001; /* learning rate */
    for (size_t t = 0; t < series_length; t++) {
        spires_step(r, &input_series[t * cfg.num_inputs]);
        spires_train_online(r, &target_series[t * cfg.num_outputs], lr);
    }
```

This updates the readout weights incrementally using a delta rule. The learning rate lr controls the step size. Start with a small value (0.001) and adjust.

Warning: Online training and ridge training write to the same readout weight matrix. Calling one after the other will overwrite the previous weights.

8. Run Inference

Once the readout weights are trained, spires_run() processes an entire input series and returns all output values at once.

```c
    spires_reservoir_reset(r);

    double *predictions = spires_run(r, test_input, test_length);
    if (!predictions) {
        fprintf(stderr, "spires_run returned NULL\n");
        spires_reservoir_destroy(r);
        return 1;
    }
    for (size_t t = 0; t < test_length; t++) {
        printf("t=%zu predicted=%.4f\n", t, predictions[t * cfg.num_outputs]);
    }
    free(predictions); /* YOU own this memory */
```

Tip: Call spires_reservoir_reset() before inference if you want predictions to be independent of the training trajectory. If you want the reservoir to continue from its current state (e.g., in an online setting), skip the reset.

9. Inspect Internal State

SPIRES provides two ways to read the reservoir state vector (the membrane potentials or activation values of all neurons).

Copy into a New Buffer

```c
    double *state = spires_copy_reservoir_state(r);
    if (state) {
        size_t n = spires_num_neurons(r);
        for (size_t i = 0; i < n; i++) {
            printf("neuron %zu: %.4f\n", i, state[i]);
        }
        free(state); /* Caller owns this buffer */
    }
```

spires_copy_reservoir_state() allocates a new array of length num_neurons and copies the current state into it. Returns NULL on allocation failure.

Copy into Your Own Buffer

```c
    size_t n = spires_num_neurons(r);
    double *my_buffer = malloc(n * sizeof(double));
    if (my_buffer) {
        s = spires_read_reservoir_state(r, my_buffer);
        if (s == SPIRES_OK) {
            /* my_buffer now contains the state */
        }
        free(my_buffer);
    }
```

spires_read_reservoir_state() writes into a buffer you provide. The buffer must be at least spires_num_neurons(r) doubles long.

10. Clean Up

Always destroy the reservoir when you are done. This frees all internal allocations (weight matrices, state vectors, etc.).

```c
    spires_reservoir_destroy(r);
    return 0;
}
```

spires_reservoir_destroy() accepts NULL safely, so you do not need a guard check.

Error Handling Best Practices

A robust program should follow this pattern:

```c
spires_reservoir *r = NULL;
double *predictions = NULL;
int exit_code = 1;

spires_status s = spires_reservoir_create(&cfg, &r);
if (s != SPIRES_OK) goto cleanup;

s = spires_train_ridge(r, input, target, len, lambda);
if (s != SPIRES_OK) goto cleanup;

predictions = spires_run(r, test, test_len);
if (!predictions) goto cleanup;

/* ... use predictions ... */
exit_code = 0;

cleanup:
free(predictions);
spires_reservoir_destroy(r);
return exit_code;
```

Key points:

  • Initialize all pointers to NULL so that free(NULL) and spires_reservoir_destroy(NULL) are safe no-ops.
  • Use a single cleanup label at the end of the function.
  • Check every return value. Silent failures lead to hard-to-debug state corruption.

Complete Example

Putting it all together — a program that trains on a two-channel input and predicts a one-channel output.

```c
#include <spires.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_TRAIN 1000
#define N_TEST  200
#define PI      3.14159265358979323846

int main(void) {
    spires_reservoir *r = NULL;
    double *predictions = NULL;
    int exit_code = 1;

    /* Configure */
    spires_reservoir_config cfg = {
        .num_neurons       = 500,
        .num_inputs        = 2,
        .num_outputs       = 1,
        .spectral_radius   = 0.9,
        .ei_ratio          = 0.8,
        .input_strength    = 0.15,
        .connectivity      = 0.05,
        .dt                = 0.5,
        .connectivity_type = SPIRES_CONN_SMALL_WORLD,
        .neuron_type       = SPIRES_NEURON_LIF_BIO,
        .neuron_params     = NULL,
    };

    /* Create */
    spires_status s = spires_reservoir_create(&cfg, &r);
    if (s != SPIRES_OK) {
        fprintf(stderr, "Create failed: %d\n", s);
        goto cleanup;
    }

    /* Generate training data:
     *   input  = [sin(t), cos(t)]
     *   target = sin(t) * cos(t)
     */
    double *input_train  = malloc(N_TRAIN * 2 * sizeof(double));
    double *target_train = malloc(N_TRAIN * sizeof(double));
    if (!input_train || !target_train) {
        free(input_train);
        free(target_train);
        goto cleanup;
    }
    for (int i = 0; i < N_TRAIN; i++) {
        double t = 0.05 * i;
        input_train[2 * i]     = sin(t);
        input_train[2 * i + 1] = cos(t);
        target_train[i]        = sin(t) * cos(t);
    }

    /* Train */
    s = spires_train_ridge(r, input_train, target_train, N_TRAIN, 1e-4);
    free(input_train);
    free(target_train);
    if (s != SPIRES_OK) {
        fprintf(stderr, "Train failed: %d\n", s);
        goto cleanup;
    }

    /* Generate test data */
    double *input_test = malloc(N_TEST * 2 * sizeof(double));
    if (!input_test) goto cleanup;
    for (int i = 0; i < N_TEST; i++) {
        double t = 0.05 * (N_TRAIN + i);
        input_test[2 * i]     = sin(t);
        input_test[2 * i + 1] = cos(t);
    }

    /* Infer */
    spires_reservoir_reset(r);
    predictions = spires_run(r, input_test, N_TEST);
    free(input_test);
    if (!predictions) {
        fprintf(stderr, "Inference failed\n");
        goto cleanup;
    }

    /* Print results */
    printf("Step | Predicted | Expected\n");
    printf("-----+-----------+---------\n");
    for (int i = 0; i < 10; i++) {
        double t = 0.05 * (N_TRAIN + i);
        double expected = sin(t) * cos(t);
        printf("%4d | %+.5f | %+.5f\n", i, predictions[i], expected);
    }

    exit_code = 0;

cleanup:
    free(predictions);
    spires_reservoir_destroy(r);
    return exit_code;
}
```
