Tasks#
The tasks module provides the task-based architecture for spiking RNNs, enabling their evaluation on cognitive tasks.
Core Classes#
- class spiking.tasks.AbstractSpikingTask(settings: Dict[str, Any] | None = None)[source]#
Bases: ABC
Abstract base class for spiking neural network tasks.
This class defines the interface for evaluating spiking networks on cognitive tasks. Each task is responsible for generating stimuli, running evaluations, and analyzing performance metrics specific to spiking implementations.
- create_plots_directory(base_dir: str) str[source]#
Create directory for saving plots.
- Parameters:
base_dir (str) – Base directory path.
- Returns:
Path to plots directory.
- Return type:
str
- abstract evaluate_performance(spiking_rnn: AbstractSpikingRNN, n_trials: int = 100) Dict[str, float][source]#
Evaluate performance over multiple trials.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
n_trials (int) – Number of trials to evaluate.
- Returns:
Performance metrics.
- Return type:
Dict[str, float]
- abstract evaluate_trial(spiking_rnn: AbstractSpikingRNN, stimulus: ndarray, label: Any) Dict[str, Any][source]#
Evaluate a single trial on the spiking network.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
stimulus (np.ndarray) – Input stimulus.
label (Any) – Expected label/condition.
- Returns:
Trial evaluation results.
- Return type:
Dict[str, Any]
- abstract generate_stimulus(trial_type: str | None = None) Tuple[ndarray, Any][source]#
Generate input stimulus for the task.
- Parameters:
trial_type (Optional[str]) – Specific trial type to generate.
- Returns:
Input stimulus array and label/condition.
- Return type:
Tuple[np.ndarray, Any]
- abstract get_default_settings() Dict[str, Any][source]#
Get default settings for the task.
- Returns:
Default task settings.
- Return type:
Dict[str, Any]
- get_sample_trial_types() List[str][source]#
Get sample trial types for visualization.
This method should be overridden by concrete task classes to specify what trial types should be used for generating sample visualizations.
- Returns:
List of trial type identifiers for this task.
- Return type:
List[str]
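Taken together, these methods define the evaluation loop for any concrete task. A minimal sketch of how the per-trial interface composes, assuming task is an instance of a concrete subclass and spiking_rnn is an already-trained network:

# Sketch only: `task` is any concrete AbstractSpikingTask, `spiking_rnn` a trained network
results = []
for _ in range(10):
    stimulus, label = task.generate_stimulus()  # trial_type is optional; task picks one
    results.append(task.evaluate_trial(spiking_rnn, stimulus, label))

# Concrete tasks bundle this loop plus metric aggregation in evaluate_performance()
metrics = task.evaluate_performance(spiking_rnn, n_trials=10)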
- class spiking.tasks.GoNogoSpikingTask(settings: Dict[str, Any] | None = None)[source]#
Bases: AbstractSpikingTask
Go/NoGo impulse control task for spiking neural networks.
Evaluates the network’s ability to respond to “Go” stimuli and withhold responses to “NoGo” stimuli using spiking implementations.
- create_visualization(results: List[Dict[str, Any]], save_dir: str) None[source]#
Create visualization plots for Go/NoGo task results.
- Parameters:
results (List[Dict[str, Any]]) – List of trial results.
save_dir (str) – Directory to save plots.
- evaluate_performance(spiking_rnn: AbstractSpikingRNN, n_trials: int = 100) Dict[str, float][source]#
Evaluate performance over multiple Go/NoGo trials.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
n_trials (int) – Number of trials to evaluate.
- Returns:
Performance metrics.
- Return type:
Dict[str, float]
- evaluate_trial(spiking_rnn: AbstractSpikingRNN, stimulus: ndarray, label: str) Dict[str, Any][source]#
Evaluate a single Go/NoGo trial.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
stimulus (np.ndarray) – Input stimulus.
label (str) – ‘go’ or ‘nogo’.
- Returns:
Trial results including spikes, output, and performance.
- Return type:
Dict[str, Any]
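A short sketch of single-trial Go/NoGo evaluation followed by the built-in plotting; spiking_rnn stands in for a trained network and 'results/go_nogo' is a hypothetical output directory:

from spiking.tasks import GoNogoSpikingTask

task = GoNogoSpikingTask()

# One 'go' trial and one 'nogo' trial
results = []
for trial_type in ('go', 'nogo'):
    stimulus, label = task.generate_stimulus(trial_type)
    results.append(task.evaluate_trial(spiking_rnn, stimulus, label))

# Write plots for the collected trial results
plots_dir = task.create_plots_directory('results/go_nogo')
task.create_visualization(results, plots_dir)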
- class spiking.tasks.XORSpikingTask(settings: Dict[str, Any] | None = None)[source]#
Bases: AbstractSpikingTask
XOR temporal logic task for spiking neural networks.
Evaluates the network’s ability to perform XOR logic on temporal sequences using spiking implementations.
- evaluate_performance(spiking_rnn: AbstractSpikingRNN, n_trials: int = 1) Dict[str, float][source]#
Evaluate performance over multiple XOR trials.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
n_trials (int) – Number of trials to evaluate.
- Returns:
Performance metrics.
- Return type:
Dict[str, float]
- evaluate_trial(spiking_rnn: AbstractSpikingRNN, stimulus: ndarray, label: str) Dict[str, Any][source]#
Evaluate a single XOR trial.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
stimulus (np.ndarray) – Input stimulus.
label (str) – Expected output (‘same’ or ‘diff’).
- Returns:
Trial results.
- Return type:
Dict[str, Any]
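A sketch that avoids hard-coding XOR trial-type identifiers by iterating the task's own sample types; spiking_rnn is again a trained network:

from spiking.tasks import XORSpikingTask

task = XORSpikingTask()

# Use the task's declared sample trial types rather than guessing identifiers
for trial_type in task.get_sample_trial_types():
    stimulus, label = task.generate_stimulus(trial_type)
    trial = task.evaluate_trial(spiking_rnn, stimulus, label)  # label is 'same' or 'diff'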
- class spiking.tasks.ManteSpikingTask(settings: Dict[str, Any] | None = None)[source]#
Bases: AbstractSpikingTask
Context-dependent sensory integration task for spiking neural networks.
Evaluates the network’s ability to perform context-dependent decision making using spiking implementations.
- evaluate_performance(spiking_rnn: AbstractSpikingRNN, n_trials: int = 100) Dict[str, float][source]#
Evaluate performance over multiple Mante task trials.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
n_trials (int) – Number of trials to evaluate.
- Returns:
Performance metrics.
- Return type:
Dict[str, float]
- evaluate_trial(spiking_rnn: AbstractSpikingRNN, stimulus: ndarray, label: int) Dict[str, Any][source]#
Evaluate a single Mante task trial.
- Parameters:
spiking_rnn (AbstractSpikingRNN) – Spiking network to evaluate.
stimulus (np.ndarray) – Input stimulus.
label (int) – Expected decision (+1 or -1).
- Returns:
Trial results.
- Return type:
Dict[str, Any]
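A minimal sketch; per evaluate_trial() above, the expected decision label is +1 or -1, and spiking_rnn is a trained network:

from spiking.tasks import ManteSpikingTask

task = ManteSpikingTask()

stimulus, label = task.generate_stimulus()  # label is +1 or -1
trial = task.evaluate_trial(spiking_rnn, stimulus, label)
metrics = task.evaluate_performance(spiking_rnn, n_trials=100)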
Factory Classes#
- class spiking.tasks.SpikingTaskFactory[source]#
Bases: object
Factory class for creating spiking task instances.
- classmethod create_task(task_name: str, settings: Dict[str, Any] | None = None) AbstractSpikingTask[source]#
Create a spiking task instance by type.
- Parameters:
task_name (str) – Name of task (‘go_nogo’, ‘xor’, ‘mante’).
settings (Optional[Dict[str, Any]]) – Task settings.
- Returns:
Created task instance.
- Return type:
AbstractSpikingTask
- Raises:
ValueError – If task_name is not recognized.
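A sketch of guarded task creation; the except branch fires only for names that are not registered:

from spiking.tasks import SpikingTaskFactory

task_name = 'xor'  # any unregistered name raises ValueError
try:
    task = SpikingTaskFactory.create_task(task_name)
except ValueError as err:
    print(f'Could not create task {task_name!r}: {err}')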
- classmethod register_task(task_name: str, task_class: type) None[source]#
Register a custom task class with the factory.
- Parameters:
task_name (str) – Name to register the task under.
task_class (type) – Task class that inherits from AbstractSpikingTask.
- Raises:
ValueError – If task_class doesn’t inherit from AbstractSpikingTask.
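For example (a sketch), registering a class that does not subclass AbstractSpikingTask is rejected:

from spiking.tasks import SpikingTaskFactory

class NotATask:
    # Does not inherit from AbstractSpikingTask
    pass

SpikingTaskFactory.register_task('bad_task', NotATask)  # raises ValueError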
Overview#
The spiking tasks module provides specialized task implementations for evaluating spiking neural networks. These tasks extend the rate-based task framework with spiking-specific evaluation capabilities.
Key Features:
Spiking-Specific Interface: Designed for spiking neural network evaluation
Performance Metrics: Multi-trial evaluation with detailed performance analysis
Visualization Support: Built-in plotting and visualization capabilities
Extensible Registry: Dynamic task registration for custom implementations
Sample Trial Types: Configurable trial types for visualization and analysis
Task Evaluation Workflow:
Task Creation: Use SpikingTaskFactory.create_task() or instantiate a task class directly
Single Trial: Call evaluate_trial() for individual trial assessment
Multi-Trial: Use evaluate_performance() for comprehensive evaluation
Visualization: Generate plots with create_visualization()
Available Tasks:
Go/NoGo: Impulse control evaluation for spiking networks
XOR: Working memory assessment with temporal logic
Mante: Context-dependent decision making evaluation
Example Usage#
from spiking.tasks import SpikingTaskFactory
from spiking.eval_tasks import evaluate_task

# Create a spiking task
task = SpikingTaskFactory.create_task('go_nogo')

# Generate stimuli for specific trial types
go_stimulus, go_label = task.generate_stimulus('go')
nogo_stimulus, nogo_label = task.generate_stimulus('nogo')

# Evaluate with a trained spiking network
performance = task.evaluate_performance(spiking_rnn, n_trials=100)
print(f"Accuracy: {performance['overall_accuracy']:.2f}")

# High-level evaluation interface
performance = evaluate_task(
    task_name='go_nogo',
    model_dir='models/go-nogo/',
    n_trials=100,
)
Custom Spiking Task Creation#
import numpy as np

from spiking.tasks import AbstractSpikingTask, SpikingTaskFactory

class MyCustomSpikingTask(AbstractSpikingTask):
    def get_default_settings(self):
        return {'T': 200, 'custom_param': 1.0}

    def get_sample_trial_types(self):
        return ['type_a', 'type_b']  # For visualization

    def generate_stimulus(self, trial_type=None):
        # Stimulus generation logic (placeholder: silent one-channel input over T steps)
        stimulus = np.zeros((1, self.get_default_settings()['T']))
        label = trial_type if trial_type is not None else 'type_a'
        return stimulus, label

    def evaluate_trial(self, spiking_rnn, stimulus, label):
        # evaluate_trial is abstract, so it must be implemented too
        return {'label': label, 'correct': True}

    def evaluate_performance(self, spiking_rnn, n_trials=100):
        # Multi-trial performance evaluation
        return {'accuracy': 0.85, 'n_trials': n_trials}

# Register with factory
SpikingTaskFactory.register_task('my_custom', MyCustomSpikingTask)
# Now works with eval_tasks.py
# python -m spiking.eval_tasks --task my_custom --model_dir models/custom/
Integration with eval_tasks.py#
The tasks module is fully integrated with the evaluation system:
Dynamic Task Discovery: eval_tasks.py automatically supports all registered tasks
Generic Visualization: Uses get_sample_trial_types() for plot generation
CLI Support: Command-line interface adapts to new tasks automatically
Error Handling: Robust error handling for custom task implementations
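For example, once a custom task is registered (as in the section above), the same high-level entry point applies; the model_dir path here is hypothetical:

from spiking.eval_tasks import evaluate_task

performance = evaluate_task(
    task_name='my_custom',
    model_dir='models/custom/',
    n_trials=50,
)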