Full API summary for Rockpool

Package structure summary

rockpool

A machine learning library for training and deploying spiking neural network applications

Base classes

nn.modules.Module(*args, **kwargs)

The native Python / numpy Module base class for Rockpool

nn.modules.TimedModule(*args, **kwargs)

The Rockpool base class for all TimedModule modules

Attribute types

parameters.ParameterBase(data, family, ...)

Base class for Rockpool registered attributes

parameters.Parameter(data, family, ...)

Represent a module parameter

parameters.State(data, family, init_func, ...)

Represent a module state

parameters.SimulationParameter(data, family, ...)

Represent a module simulation parameter

parameters.Constant(obj)

Identify an initialisation argument as a constant (non-trainable) parameter

Alternative base classes

nn.modules.JaxModule(*args, **kwargs)

Base class for Module subclasses that use a Jax backend.

nn.modules.TorchModule(*args, **kwargs)

Base class for modules that are compatible with both Torch and Rockpool

Combinator modules

nn.combinators.FFwdStack(*args, **kwargs)

Assemble modules into a feed-forward stack, with linear weights in between

nn.combinators.Sequential(*args, **kwargs)

Build a sequential stack of modules by connecting them end-to-end

nn.combinators.Residual(*args, **kwargs)

Build a residual block over a sequential stack of modules
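
The combinator idea can be sketched with plain Python callables (an illustrative sketch only, not the Rockpool API itself): a sequential stack pipes each module's output into the next, and a residual block adds the stack's input back onto its output.

```python
# Minimal sketch of the Sequential / Residual combinator idea,
# using plain callables in place of Rockpool modules.

def sequential(*modules):
    """Connect modules end-to-end: each output feeds the next input."""
    def stack(x):
        for mod in modules:
            x = mod(x)
        return x
    return stack

def residual(*modules):
    """Wrap a sequential stack, adding the input back onto its output."""
    stack = sequential(*modules)
    return lambda x: x + stack(x)

double = lambda x: 2 * x
inc = lambda x: x + 1

net = sequential(double, inc)   # x -> 2x + 1
res = residual(double, inc)     # x -> x + (2x + 1)
```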

Time series classes

timeseries.TimeSeries([times, periodic, ...])

Base class to represent a continuous or event-based time series.

timeseries.TSContinuous([times, samples, ...])

Represents a continuously-sampled time series.

timeseries.TSEvent([times, channels, ...])

Represents a discrete time series, composed of binary events (present or absent).

Module subclasses

nn.modules.Rate(*args, **kwargs)

Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules

nn.modules.RateJax(*args, **kwargs)

Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules, with a Jax backend

nn.modules.RateTorch(*args, **kwargs)

Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules, with a Torch backend

nn.modules.LIF(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model

nn.modules.LIFJax(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model, with a Jax backend

nn.modules.LIFTorch(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model with a Torch backend

nn.modules.aLIFTorch(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model with adaptive hyperpolarisation, with a Torch backend

nn.modules.LIFNeuronTorch(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model
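
The LIF family above shares the same underlying dynamics: a membrane potential that leaks towards rest, integrates input, and emits a spike when it crosses threshold. A minimal discrete-time sketch of a single neuron (parameter names here are illustrative; the real modules add synaptic state, batching, and trainable parameters):

```python
def lif_step(v, i_in, alpha=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.

    The membrane potential v decays by factor alpha, integrates the
    input current, spikes on crossing the threshold, and the threshold
    is subtracted on each spike.
    """
    v = alpha * v + i_in
    spike = v >= threshold
    v = v - threshold * spike
    return v, spike

# Drive the neuron with a constant input and record its spikes
v, spikes = 0.0, []
for t in range(5):
    v, s = lif_step(v, i_in=0.4)
    spikes.append(bool(s))
```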

nn.modules.UpDownTorch(*args, **kwargs)

Feedforward layer that converts each analogue input channel to one spiking up and one spiking down channel.
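
The up/down conversion above is a form of delta modulation: an analogue channel emits an "up" event each time the signal rises by a threshold since the last event, and a "down" event each time it falls by the same amount. A plain-Python sketch of the idea (threshold and variable names are illustrative, not the module's actual parameters):

```python
def updown_encode(signal, threshold=0.5):
    """Delta-modulate one analogue channel into up/down event streams."""
    ref = signal[0]         # reference level tracked by the encoder
    up, down = [], []
    for t, x in enumerate(signal):
        while x - ref >= threshold:   # signal rose by >= threshold
            up.append(t)
            ref += threshold
        while ref - x >= threshold:   # signal fell by >= threshold
            down.append(t)
            ref -= threshold
    return up, down

up, down = updown_encode([0.0, 0.6, 1.2, 0.4, 0.0])
```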

nn.modules.Linear(*args, **kwargs)

Encapsulates a linear weight matrix

nn.modules.LinearJax(*args, **kwargs)

Encapsulates a linear weight matrix, with a Jax backend

nn.modules.LinearTorch(*args, **kwargs)

Applies a linear transformation to the incoming data: \(y = xA + b\)

nn.modules.Instant(*args, **kwargs)

Wrap a callable function as an instantaneous Rockpool module

nn.modules.InstantJax(*args, **kwargs)

Wrap a callable function as an instantaneous Rockpool module, with a Jax backend

nn.modules.InstantTorch(*args, **kwargs)

Wrap a callable function as an instantaneous Rockpool module, with a Torch backend

nn.modules.ExpSyn(*args, **kwargs)

Exponential synapse module

nn.modules.ExpSynJax(*args, **kwargs)

Exponential synapse module with a Jax backend

nn.modules.ExpSynTorch(*args, **kwargs)

Exponential synapse module with a Torch backend
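
The exponential synapse modules above low-pass filter incoming spike trains. A minimal sketch of the underlying recurrence, `i[t] = i[t-1] * exp(-dt / tau) + spikes[t]` (the actual modules vectorise over channels and expose trainable time constants):

```python
import math

def expsyn(spikes, tau=10e-3, dt=1e-3):
    """Filter a spike train with an exponential kernel."""
    decay = math.exp(-dt / tau)   # per-step decay factor
    i_syn, out = 0.0, []
    for s in spikes:
        i_syn = i_syn * decay + s
        out.append(i_syn)
    return out

trace = expsyn([1, 0, 0, 1, 0])
```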

nn.modules.SoftmaxJax(*args, **kwargs)

A Jax-backed module implementing a smoothed weighted softmax, compatible with spiking inputs

nn.modules.LogSoftmaxJax(*args, **kwargs)

A Jax-backed module implementing a smoothed weighted softmax, compatible with spiking inputs

nn.modules.ButterMelFilter(*args, **kwargs)

Define a Butterworth filter bank (mel spacing) filtering layer with continuous sampled output

nn.modules.ButterFilter(*args, **kwargs)

Define a Butterworth filter bank filtering layer with continuous output

nn.modules.LIFExodus(*args, **kwargs)

A leaky integrate-and-fire spiking neuron model, with an Exodus (CUDA) backend

nn.modules.LIFMembraneExodus(*args, **kwargs)

A module simulating the membrane dynamics of a LIF neuron, with an Exodus (CUDA) backend

nn.modules.ExpSynExodus(*args, **kwargs)

Exponential synapse module with an Exodus (CUDA) backend

Layer subclasses from Rockpool v1

These classes are deprecated, but remain usable via the high-level API until they are converted to the v2 API.

nn.layers.Layer(weights[, dt, noise_std, name])

Base class for Layers in rockpool

nn.layers.FFIAFBrian(*args, **kwargs)

A spiking feedforward layer with current inputs and spiking outputs

nn.layers.FFIAFSpkInBrian(*args, **kwargs)

Spiking feedforward layer with spiking inputs and outputs

nn.layers.RecIAFBrian(*args, **kwargs)

A spiking recurrent layer with current inputs and spiking outputs, using a Brian2 backend

nn.layers.RecIAFSpkInBrian(*args, **kwargs)

Spiking recurrent layer with spiking in- and outputs, and a Brian2 backend

nn.layers.FFExpSynBrian(*args, **kwargs)

Define an exponential synapse layer (spiking input), with a Brian2 backend

Standard networks

nn.networks.WaveSenseNet(*args, **kwargs)

Implement a WaveSense network

Conversion utilities

nn.modules.timed_module.TimedModuleWrapper(...)

Wrap a low-level Rockpool Module automatically into a TimedModule object

nn.modules.timed_module.LayerToTimedModule(...)

An adapter class to wrap a Rockpool v1 Layer object, converting the object to support the TimedModule high-level Rockpool v2 API

nn.modules.timed_module.astimedmodule([...])

Convert a Rockpool v1 class to a v2 class

Jax training utilities

training.jax_loss

Jax functions useful for training networks using Jax Modules.

training.adversarial_jax

Functions to implement adversarial training approaches using Jax

training.adversarial_jax.pga_attack(...[, ...])

Perform a PGA (projected gradient ascent) attack on the parameters of the network, given inputs

training.adversarial_jax.adversarial_loss(...)

Implement a hybrid task / adversarial robustness loss

PyTorch training utilities

training.torch_loss

Torch loss functions and regularizers useful for training networks using Torch Modules.

PyTorch transformation API (beta)

transform.torch_transform

Defines the parameter and activation transformation-in-training pipeline for TorchModule objects

Xylo hardware support and simulation

Support modules

devices.xylo.find_xylo_hdks()

Enumerate connected Xylo HDKs, and import the corresponding support module

devices.xylo

Xylo-family device simulations, deployment and HDK support

devices.xylo.syns61300

Package to support the Xylo HW SYNS61300 (Xylo™ core; "Pollen")

devices.xylo.syns61201

Package to support the Xylo HW SYNS61201 (Xylo™ Audio 2)

devices.xylo.syns65300

Package to support the Xylo HW SYNS65300 (Xylo™ Audio 1)

devices.xylo.syns63300

Package to support the Xylo HW SYNS63300 (Xylo™ IMU)

transform.quantize_methods.global_quantize(...)

Quantize a Xylo model for deployment, using global parameter scaling

transform.quantize_methods.channel_quantize(...)

Quantize a Xylo model for deployment, using per-channel parameter scaling
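
The difference between the two methods is the granularity of the scaling factor. Global quantisation, for instance, picks a single factor so the largest-magnitude weight fills the integer range, then rounds. A simplified numpy sketch of that idea (the actual functions also quantise thresholds and time constants, and operate on a full model specification):

```python
import numpy as np

def global_quantize_weights(weights, bits=8):
    """Scale all weights by one global factor so the largest magnitude
    maps to the signed-integer limit, then round to integers."""
    max_int = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits
    scale = max_int / np.max(np.abs(weights))
    return np.round(weights * scale).astype(int), scale

w = np.array([[0.5, -1.0], [0.25, 0.75]])
w_q, scale = global_quantize_weights(w)
```

Per-channel quantisation applies the same logic, but with one scaling factor per output channel, which preserves more resolution when channel magnitudes differ widely.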

Xylo Audio support

devices.xylo.syns61201.mapper(graph[, ...])

Map a computational graph onto the Xylo v2 (SYNS61201) architecture

devices.xylo.syns61201.config_from_specification(...)

Convert a full network specification to a Xylo config and validate it

devices.xylo.syns61201.load_config(filename)

Read a Xylo configuration from disk in JSON format

devices.xylo.syns61201.save_config(config, ...)

Save a Xylo configuration to disk in JSON format

devices.xylo.syns61201.cycles_model(config)

Calculate the average number of cycles required for a given network architecture

devices.xylo.syns61201.est_clock_freq(config, dt)

Estimate the required master clock frequency, to run a network in real-time

devices.xylo.syns61201.XyloSim(*args, **kwargs)

A Module simulating a digital SNN on Xylo, using XyloSim as a back-end.

devices.xylo.syns61201.XyloSamna(*args, **kwargs)

A spiking neuron Module backed by the Xylo hardware, via samna.

devices.xylo.syns61201.XyloMonitor(*args, ...)

A spiking neuron Module backed by the Xylo hardware, via samna.

devices.xylo.syns61201.AFESim(*args, **kwargs)

A Module that simulates the analog hardware for preprocessing audio and converting it into spike features

devices.xylo.syns61201.AFESamna(*args, **kwargs)

Interface to the Audio Front-End module on a Xylo-A2 HDK

devices.xylo.syns61201.DivisiveNormalisation(...)

A digital divisive normalization block

devices.xylo.syns61201.Xylo2HiddenNeurons(...)

A graph.GraphModule encapsulating Xylo v2 hidden neurons

devices.xylo.syns61201.Xylo2OutputNeurons(...)

A graph.GraphModule encapsulating Xylo v2 output neurons

Xylo IMU support

devices.xylo.syns63300.mapper(graph[, ...])

Map a computational graph onto the Xylo IMU architecture

devices.xylo.syns63300.config_from_specification(...)

Convert a full network specification to a Xylo config and validate it

devices.xylo.syns63300.load_config(filename)

Read a Xylo configuration from disk in JSON format

devices.xylo.syns63300.save_config(config, ...)

Save a Xylo configuration to disk in JSON format

devices.xylo.syns63300.cycles_model(config)

Calculate the average number of cycles required for a given network architecture

devices.xylo.syns63300.est_clock_freq(config, dt)

Estimate the required master clock frequency, to run a network in real-time

devices.xylo.syns63300.XyloSim(*args, **kwargs)

A Module simulating a digital SNN on Xylo, using XyloSim as a back-end.

devices.xylo.syns63300.XyloSamna(*args, **kwargs)

A spiking neuron Module backed by the Xylo hardware, via samna.

devices.xylo.syns63300.XyloIMUMonitor(*args, ...)

A spiking neuron Module backed by the Xylo-IMU hardware, via samna.

devices.xylo.syns63300.XyloIMUHiddenNeurons(...)

A graph.GraphModule encapsulating Xylo IMU hidden neurons

devices.xylo.syns63300.XyloIMUOutputNeurons(...)

A graph.GraphModule encapsulating Xylo IMU output neurons

IMU Preprocessing Interface

devices.xylo.syns63300.IMUIFSim(*args, **kwargs)

A Module that simulates the IMU signal preprocessing on Xylo IMU

devices.xylo.syns63300.IMUIFSamna(*args, ...)

A module wrapping the Xylo IMU IF on hardware, permitting recording

devices.xylo.syns63300.IMUData(*args, **kwargs)

Interface to the IMU sensor on a Xylo IMU HDK

devices.xylo.syns63300.imuif

IMU-IF submodules, as implemented in Xylo IMU

devices.xylo.syns63300.imuif.RotationRemoval(...)

A Rockpool module simulating the rotation estimation and removal block in the Xylo IMU interface

devices.xylo.syns63300.imuif.BandPassFilter([...])

Class that instantiates a single quantised band-pass filter, as implemented on Xylo IMU hardware

devices.xylo.syns63300.imuif.FilterBank(...)

Builds the block-diagram form of the filter bank, matching the hardware implementation exactly

devices.xylo.syns63300.imuif.ScaleSpikeEncoder(...)

Scale input signals and encode them as spikes

devices.xylo.syns63300.imuif.IAFSpikeEncoder(...)

Synchronous integrate and fire spike encoder
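
An integrate-and-fire spike encoder accumulates the input signal and emits a spike each time the accumulator reaches a threshold, subtracting the threshold per spike. A minimal sketch of the principle (threshold and names are illustrative; the hardware block operates on quantised integer signals):

```python
def iaf_encode(signal, threshold=1.0):
    """Accumulate the input; emit a spike and subtract the threshold
    each time the accumulator reaches it."""
    acc, spikes = 0.0, []
    for x in signal:
        acc += x
        s = 0
        while acc >= threshold:   # several spikes may fire per step
            s += 1
            acc -= threshold
        spikes.append(s)
    return spikes

spikes = iaf_encode([0.4, 0.4, 0.4, 1.2, 0.0])
```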

devices.xylo.syns63300.Quantizer(*args, **kwargs)

A quantizer that converts the input signal into a Python object able to simulate arbitrary register sizes in the hardware implementation

Dynap-SE2 hardware support and simulation

devices.dynapse

Dynap-SE2 Application Programming Interface (API)

Simulation

devices.dynapse.simulation

Dynap-SE2 Simulation Module

devices.dynapse.DynapSim(*args, **kwargs)

DynapSim solves dynamical chip equations for the DPI neuron and synapse models.

Mismatch

transform.mismatch_generator(prototype[, ...])

mismatch_generator returns a function which simulates the analog device mismatch effect.

devices.dynapse.frozen_mismatch_prototype(mod)

frozen_mismatch_prototype processes the module attribute tree and returns a frozen mismatch prototype indicating the values to be deviated.

devices.dynapse.dynamic_mismatch_prototype(mod)

dynamic_mismatch_prototype processes the module attribute tree and returns a dynamic mismatch prototype indicating the values to be deviated at run-time.

Device to Simulation

devices.dynapse.mapper(graph[, in_place, ...])

mapper maps a computational graph onto Dynap-SE2 architecture.

devices.dynapse.autoencoder_quantization(...)

autoencoder_quantization executes the unsupervised weight configuration learning approach rockpool.devices.dynapse.quantization.autoencoder.learn.learn_weights for each cluster separately.

devices.dynapse.config_from_specification(...)

config_from_specification gets a specification and creates a samna configuration object for Dynap-SE2 chip.

Computer Interface

devices.dynapse.find_dynapse_boards([name])

find_dynapse_boards identifies the Dynap-SE2 boards plugged in to the system.

devices.dynapse.DynapseSamna(*args, **kwargs)

DynapseSamna bridges the gap between the chip and the computer.

Simulation to Device

devices.dynapse.dynapsim_net_from_spec(...)

dynapsim_net_from_spec takes a specification and creates a sequential DynapSim network consisting of a linear layer (virtual connections) and a recurrent layer (hardware connections).

devices.dynapse.dynapsim_net_from_config(config)

dynapsim_net_from_config constructs a DynapSim network by processing a samna configuration object

More

devices.dynapse.DynapseNeurons(input_nodes, ...)

DynapseNeurons stores the core computational properties of a Dynap-SE network

devices.dynapse.DynapSimCore([Iw_0, Iw_1, ...])

DynapSimCore stores the simulation currents and manages the conversion from configuration objects.

Graph tracing and mapping

Base modules

graph.GraphModuleBase(input_nodes, ...)

Base class for graph modules

graph.GraphModule(input_nodes, output_nodes, ...)

Describe a module of computation in a graph

graph.GraphNode(source_modules, sink_modules)

Describe a node connecting GraphModule objects

graph.GraphHolder(input_nodes, output_nodes, ...)

A GraphModule that encapsulates other graphs

graph.graph_base.as_GraphHolder(g)

Encapsulate a GraphModule inside a GraphHolder

Computational graph modules

graph.LinearWeights(input_nodes, ...[, biases])

A GraphModule that encapsulates a single set of linear weights

graph.GenericNeurons(input_nodes, ...)

A GraphModule that encapsulates a set of generic neurons

graph.AliasConnection(input_nodes, ...)

A GraphModule that encapsulates a set of alias connections

graph.LIFNeuronWithSynsRealValue(...)

A GraphModule that encapsulates a set of LIF spiking neurons with synaptic and membrane dynamics, and with real-valued parameters

graph.RateNeuronWithSynsRealValue(...)

A GraphModule that encapsulates a set of rate neurons, with synapses, and with real-valued parameters

graph.utils

Utilities for generating and manipulating computational graphs

General Utilities

utilities.backend_management

Utility functionality for managing backends

utilities.tree_utils

Tree manipulation utilities with no external dependencies

utilities.jax_tree_utils

Utility functions for working with trees.

utilities.type_handling

Convenience functions for checking and converting object types

NIR import and export

rockpool.nn.modules.to_nir(module[, ...])

Convert a Rockpool module into a NIR graph for export

rockpool.nn.modules.from_nir(source)

Generate a Rockpool model from a NIR representation