
spires_optimize

Run the AGILE (Adaptive, Greedy, Iterative, Low-cost Exploration) hyperparameter optimizer to find the best reservoir configuration and ridge parameter for a given dataset.


Signature

int spires_optimize(const spires_reservoir_config *base_config,
                    const struct spires_opt_budget *budgets,
                    int num_budgets,
                    const struct spires_opt_score *score,
                    struct spires_opt_result *out,
                    const double *input_series,
                    const double *target_series,
                    size_t series_length);

Parameters

base_config : const spires_reservoir_config *
    Starting configuration used as the center of the search space. Fields such as num_inputs and num_outputs are held fixed; tunable fields (e.g., num_neurons, spectral_radius, connectivity, ei_ratio) are varied around the values in this struct. Must not be NULL.

budgets : const struct spires_opt_budget *
    Array of budget stages, each defining the fraction of data, number of random seeds, and wall-clock time limit for one round of the AGILE search. Must not be NULL.

num_budgets : int
    Number of elements in the budgets array. Must be at least 1. The optimizer processes budget stages in order, progressively narrowing the search.

score : const struct spires_opt_score *
    Scoring configuration: which metric to optimize, and regularization terms for variance and computational cost. Must not be NULL.

out : struct spires_opt_result *
    Pointer to a result struct that will be populated with the best configuration, ridge parameter, and performance statistics on success. Must not be NULL.

input_series : const double *
    Flattened input matrix of shape [series_length x num_inputs], row-major. Must not be NULL.

target_series : const double *
    Flattened target matrix of shape [series_length x num_outputs], row-major. Must not be NULL.

series_length : size_t
    Number of timesteps in the input and target series. Must be greater than zero.

Returns

int
    0 on success, non-zero on failure. Failure may occur due to invalid arguments, allocation errors, or if all candidate configurations fail to train.


Example

/* Base configuration: the optimizer will explore around these values */
spires_reservoir_config base = {
    .num_neurons = 300,
    .num_inputs = 3,
    .num_outputs = 1,
    .spectral_radius = 0.9,
    .ei_ratio = 0.8,
    .input_strength = 1.0,
    .connectivity = 0.1,
    .dt = 0.001,
    .connectivity_type = SPIRES_CONN_RANDOM,
    .neuron_type = SPIRES_NEURON_LIF_DISCRETE,
    .neuron_params = NULL
};

/* Progressive budget: coarse search, then refinement */
struct spires_opt_budget budgets[] = {
    { .data_fraction = 0.25, .num_seeds = 20, .time_limit_sec = 30.0 },
    { .data_fraction = 0.50, .num_seeds = 10, .time_limit_sec = 60.0 },
    { .data_fraction = 1.00, .num_seeds = 5,  .time_limit_sec = 120.0 }
};

struct spires_opt_score score = {
    .lambda_var = 0.1,
    .lambda_cost = 0.01,
    .metric = SPIRES_METRIC_AUROC
};

struct spires_opt_result result;
int rc = spires_optimize(&base, budgets, 3, &score, &result,
                         input, target, series_length);
if (rc != 0) {
    fprintf(stderr, "optimization failed\n");
    return 1;
}

printf("Best score: %.4f\n", result.best_score);
printf("Best log10(lambda): %.2f\n", result.best_log10_ridge);
printf("Metric mean +/- std: %.4f +/- %.4f\n",
       result.metric_mean, result.metric_std);

/* Use the best configuration to build a final reservoir */
spires_reservoir *r = NULL;
spires_reservoir_create(&result.best_config, &r);
spires_train_ridge(r, input, target, series_length,
                   pow(10.0, result.best_log10_ridge));

Notes

  • AGILE algorithm. The optimizer implements a multi-fidelity successive halving strategy. Each budget stage evaluates candidate configurations using a fraction of the data and multiple random seeds. Low-performing candidates are eliminated before the next, more expensive stage. This amortizes the cost of evaluation and focuses computation on promising regions of the search space.
  • Tuned parameters. The optimizer searches over num_neurons, spectral_radius, ei_ratio, connectivity, input_strength, and the log10 ridge parameter. Fields like num_inputs, num_outputs, dt, connectivity_type, and neuron_type are inherited from base_config and held constant.
  • Scoring. The composite score for each candidate is: score = metric_mean - lambda_var * metric_std - lambda_cost * relative_cost, where relative_cost is proportional to num_neurons. This penalizes both high-variance and high-cost configurations.
  • Memory. The optimizer internally creates and destroys many reservoirs. Peak memory usage is proportional to base_config->num_neurons multiplied by the num_seeds of the final budget stage.
  • Thread safety. The optimizer uses internal parallelism to evaluate candidates concurrently. Do not call spires_optimize concurrently on the same input/target arrays from multiple threads.
  • Ownership. The out->best_config.neuron_params pointer in the result struct points to internally managed memory. It remains valid until the next call to spires_optimize or until the program exits. Copy the contents if you need them to persist.
