dynapcnn_network#

This module documents the DynapcnnNetwork class, which is used to transform a spiking neural network into a network that is compatible with DYNAPCNN.

class DynapcnnCompatibleNetwork(snn: Union[torch.nn.modules.container.Sequential, sinabs.network.Network], input_shape: Optional[Tuple[int, int, int]] = None, dvs_input: bool = False, discretize: bool = True)[source]#

Deprecated class, use DynapcnnNetwork instead.

DynapcnnNetwork: a class turning sinabs networks into dynapcnn compatible networks, and making dynapcnn configurations.

Parameters
  • snn (sinabs.Network) – SNN that determines the structure of the DynapcnnNetwork

  • input_shape (None or tuple of ints) – Shape of the input, in the convention (features, height, width). If None, snn must contain an InputLayer.

  • dvs_input (bool) – Does dynapcnn receive input from its DVS camera?

  • discretize (bool) – If True, discretize the parameters and thresholds. This is needed for uploading weights to dynapcnn. Set to False only for testing purposes.

class DynapcnnNetwork(snn: Union[torch.nn.modules.container.Sequential, sinabs.network.Network], input_shape: Optional[Tuple[int, int, int]] = None, dvs_input: bool = False, discretize: bool = True)[source]#

Given a sinabs spiking network, prepare a dynapcnn-compatible network. This can be used to test whether the network will behave equivalently once deployed on DYNAPCNN. This class also provides utilities to generate the dynapcnn configuration and upload it to DYNAPCNN.

The following operations are performed when converting to a dynapcnn-compatible network:

  • multiple avg pooling layers in a row are consolidated into one and turned into sum pooling layers;

  • checks are performed on layer hyperparameter compatibility with dynapcnn (kernel sizes, strides, padding);

  • checks are performed on network structure compatibility with dynapcnn (certain layers can only be followed by other layers);

  • linear layers are turned into convolutional layers;

  • dropout layers are ignored;

  • weights, biases and thresholds are discretized according to dynapcnn requirements.
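One of these rewrites, turning a linear layer into a convolution, can be sketched with plain torch (an illustration of the idea, not the library's actual implementation):

```python
import torch
from torch import nn

# A Linear layer applied to a flattened (features, height, width) input is
# equivalent to a Conv2d whose kernel spans the full spatial extent.
c, h, w, out_features = 2, 4, 4, 10
lin = nn.Linear(c * h * w, out_features, bias=False)

conv = nn.Conv2d(c, out_features, kernel_size=(h, w), bias=False)
# Reshape the (out, in) weight matrix into an (out, c, h, w) kernel.
conv.weight.data = lin.weight.data.reshape(out_features, c, h, w)

x = torch.rand(1, c, h, w)
out_lin = lin(x.flatten(1))      # shape (1, 10)
out_conv = conv(x).flatten(1)    # shape (1, 10), same values
assert torch.allclose(out_lin, out_conv, atol=1e-5)
```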

DynapcnnNetwork: a class turning sinabs networks into dynapcnn compatible networks, and making dynapcnn configurations.

Parameters
  • snn (sinabs.Network) – SNN that determines the structure of the DynapcnnNetwork

  • input_shape (None or tuple of ints) – Shape of the input, in the convention (features, height, width). If None, snn must contain an InputLayer.

  • dvs_input (bool) – Does dynapcnn receive input from its DVS camera?

  • discretize (bool) – If True, discretize the parameters and thresholds. This is needed for uploading weights to dynapcnn. Set to False only for testing purposes.

find_chip_layer(layer_idx)[source]#

Given the index of a layer in the model, find the ID of the CNN core on the chip where it is placed.

> Note that the layer index does not include the DVSLayer. For instance, if your model comprises the two layers [DVSLayer, DynapcnnLayer], then the index of the DynapcnnLayer is 0, not 1.

Parameters

layer_idx (int) – Index of a layer

Returns

chip_lyr_idx (int) – Index of the layer on the chip where the model layer is placed.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
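The distinction matters in practice: calling the module runs registered hooks, while calling forward() directly skips them. A minimal demonstration with a plain torch module:

```python
import torch
from torch import nn

# Track whether a forward hook fires.
calls = []

net = nn.Linear(4, 2)
net.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.rand(1, 4)
net(x)           # calling the module runs the hook
net.forward(x)   # calling forward() directly silently skips it
assert calls == ["hook"]
```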

make_config(chip_layers_ordering: Union[Sequence[int], str] = 'auto', device='dynapcnndevkit:0', monitor_layers: Optional[Union[List, str]] = None, config_modifier=None)[source]#

Prepare and output the samna DYNAPCNN configuration for this network.

Parameters
  • chip_layers_ordering (sequence of integers or auto) – The order in which the dynapcnn layers will be used. If auto, an automated procedure finds a valid ordering. Otherwise, pass a list of cores on the device where each of the model’s DynapcnnLayers should be placed. Note: this list must have the same length as the number of dynapcnn layers in your model.

  • device (String) – dynapcnndevkit, speck2b or speck2devkit

  • monitor_layers (None/List/Str) –

    A list of all chip layers that you want to monitor. For example, to monitor the DVS layer:

    monitor_layers = ["dvs"]  # If you want to monitor the output of the pre-processing layer
    monitor_layers = ["dvs", 8] # If you want to monitor preprocessing and layer 8
    monitor_layers = "all" # If you want to monitor all the layers
    

    If this value is left as None, by default the last layer of the model is monitored.

  • config_modifier – A user-defined configuration modifier function. It can be used to make any custom changes to the configuration object.

Returns

Configuration object – Object defining the configuration for the device

Raises

ImportError – If samna is not available.
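A config_modifier is simply a callable that receives the configuration object, mutates it, and returns it. The sketch below uses a stand-in object, and the attribute names are hypothetical; consult the samna documentation for the real configuration fields:

```python
# Stand-in for the samna configuration object (the real object is produced
# by make_config); the dvs_layer / raw_monitor_enable names are hypothetical.
class FakeConfig:
    def __init__(self):
        self.dvs_layer = type("DvsLayer", (), {"raw_monitor_enable": False})()

def my_modifier(config):
    # Example custom change: flip a flag on the configuration object.
    config.dvs_layer.raw_monitor_enable = True
    return config

cfg = my_modifier(FakeConfig())
assert cfg.dvs_layer.raw_monitor_enable is True
```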

memory_summary()[source]#

Get a summary of the network’s memory requirements.

Returns

dict – A dictionary with keys kernel, neuron, and bias. Each value is a list with the corresponding number for each layer, in the same order as the model’s layers.
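As a rough illustration of what such a summary counts, the kernel entry of a convolutional layer corresponds to its number of kernel parameters (a back-of-envelope sketch with plain torch; the chip's exact memory accounting may differ):

```python
from torch import nn

layers = [nn.Conv2d(2, 8, 3), nn.Conv2d(8, 16, 3)]

# Per-layer parameter counts, in model order, mimicking the summary's layout.
summary = {
    "kernel": [layer.weight.numel() for layer in layers],
    "bias": [layer.bias.numel() for layer in layers],
}
assert summary["kernel"] == [2 * 8 * 3 * 3, 8 * 16 * 3 * 3]  # [144, 1152]
assert summary["bias"] == [8, 16]
```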

reset_states(randomize=False)[source]#

Reset the states of the network.

to(device='cpu', chip_layers_ordering='auto', monitor_layers: Optional[Union[List, str]] = None, config_modifier=None)[source]#
Parameters
  • device (String) – cpu:0, cuda:0, dynapcnndevkit, speck2devkit

  • chip_layers_ordering (sequence of integers or auto) – The order in which the dynapcnn layers will be used. If auto, an automated procedure finds a valid ordering. Otherwise, pass a list of cores on the device where each of the model’s DynapcnnLayers should be placed. Note: this list must have the same length as the number of dynapcnn layers in your model.

  • monitor_layers (None/List/Str) –

    A list of all chip layers that you want to monitor. For example, to monitor the DVS layer:

    monitor_layers = ["dvs"]  # If you want to monitor the output of the pre-processing layer
    monitor_layers = ["dvs", 8] # If you want to monitor preprocessing and layer 8
    monitor_layers = "all" # If you want to monitor all the layers
    

  • config_modifier – A user-defined configuration modifier function. It can be used to make any custom changes to the configuration object.

Note

chip_layers_ordering and monitor_layers are used only when deploying to SynSense devices. For GPU or CPU usage, these options are ignored.

zero_grad(set_to_none: bool = False) → None[source]#

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Parameters

set_to_none (bool) – Instead of setting gradients to zero, set them to None. See torch.optim.Optimizer.zero_grad() for details.
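The behavior mirrors torch's own Module.zero_grad, which can be demonstrated with a plain torch module:

```python
import torch
from torch import nn

net = nn.Linear(3, 1)
net(torch.rand(1, 3)).sum().backward()
assert net.weight.grad is not None      # gradients populated by backward()

net.zero_grad(set_to_none=True)
assert net.weight.grad is None          # grads released entirely

net(torch.rand(1, 3)).sum().backward()
net.zero_grad(set_to_none=False)
assert torch.all(net.weight.grad == 0)  # grads kept, but zero-filled
```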