spires_train_ridge
Train the readout weight matrix using batch ridge regression over a complete input/target time series.
Signature
spires_status spires_train_ridge(spires_reservoir *r,
const double *input_series,
const double *target_series,
size_t series_length,
double lambda);

Parameters
| Parameter | Type | Description |
|---|---|---|
| r | spires_reservoir * | Handle to the reservoir. Must not be NULL. The reservoir’s internal state is modified during the run; after training, the state reflects the final timestep. |
| input_series | const double * | Flattened input matrix of shape [series_length x num_inputs], row-major. Must not be NULL. |
| target_series | const double * | Flattened target matrix of shape [series_length x num_outputs], row-major. Element target_series[t * num_outputs + k] is the desired output for channel k at timestep t. Must not be NULL. |
| series_length | size_t | Number of timesteps in both the input and target series. Must be greater than zero. |
| lambda | double | Ridge regularization parameter (Tikhonov factor). Must be non-negative. Larger values penalize large output weights, improving generalization at the cost of training accuracy. A value of 0 reduces to ordinary least squares. |
Returns
spires_status — SPIRES_OK on success. On failure:
- SPIRES_ERR_INVALID_ARG — a pointer is NULL, series_length is 0, or lambda is negative.
- SPIRES_ERR_ALLOC — memory allocation for internal work arrays failed.
- SPIRES_ERR_INTERNAL — the LAPACK linear solve failed (e.g., singular matrix).
Example
size_t T = 2000;

/* Train: spires_train_ridge drives the reservoir over the input
   series to collect states, then solves the ridge system for W_out. */
spires_status s = spires_train_ridge(r, input, target, T, 1e-6);
if (s != SPIRES_OK) {
    fprintf(stderr, "training failed: %d\n", s);
    return 1;
}

/* Now compute outputs on new data */
spires_reservoir_reset(r);
for (size_t t = 0; t < test_len; t++) {
    spires_step(r, &test_input[t * num_inputs]);
    double y;
    spires_compute_output(r, &y);
    printf("%.6f\n", y);
}

Notes
- Internal workflow. This function (1) resets the reservoir state, (2) drives it with input_series to collect a state matrix X of shape [series_length x num_neurons], and (3) solves W_out = (X^T X + lambda * I)^{-1} X^T Y using LAPACK. The computed W_out is stored inside the reservoir for use by spires_compute_output.
- State after training. The reservoir state after this call reflects the final timestep of input_series. Call spires_reservoir_reset before processing new sequences.
- LAPACK dependency. This function requires a LAPACK implementation at link time (e.g., OpenBLAS, Apple Accelerate, Intel MKL). If LAPACK is not available, the function will return SPIRES_ERR_INTERNAL.
- Choosing lambda. Typical values range from 1e-8 to 1e-2. Cross-validation or the spires_optimize function can be used to select an appropriate value.
- Thread safety. Do not call this function on a reservoir that is concurrently in use by another thread.
See Also
- spires_train_online — incremental weight update.
- spires_compute_output — use the trained weights.
- spires_optimize — automated hyperparameter search including lambda.
- Ridge Regression — theoretical background.