This module defines the DynapcnnLayer class, which reproduces the behavior of a single layer on the DYNAP-CNN chip.

class DynapcnnLayer(conv: Conv2d, spk: IAFSqueeze, in_shape: Tuple[int, int, int], pool: SumPool2d | None = None, discretize: bool = True, rescale_weights: int = 1)[source]#

Create a DynapcnnLayer object representing a dynapcnn layer.

Requires a convolutional layer, a sinabs spiking layer, and an optional pooling value. The layers are applied in the order conv -> spike -> pool.

Parameters:

  • conv (torch.nn.Conv2d or torch.nn.Linear) – Convolutional or linear layer (linear will be converted to convolutional)

  • spk (sinabs.layers.IAFSqueeze) – Sinabs IAF layer

  • in_shape (tuple of int) – The input shape, needed to create dynapcnn configs if the network does not contain an input layer. Convention: (features, height, width)

  • pool (int or None) – Integer representing the sum pooling kernel and stride. If None, no pooling will be applied.

  • discretize (bool) – Whether to discretize parameters.

  • rescale_weights (int) – Layer weights will be divided by this value.
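Because a Linear layer is converted to a convolutional one, the equivalence can be sketched in plain torch. This is an illustrative reconstruction, not the class's actual conversion code: a fully connected layer acting on a flattened (features, height, width) input behaves like a convolution whose kernel spans the whole input plane.

```python
import torch
from torch import nn

# Input shape follows the (features, height, width) convention above.
in_shape = (2, 4, 4)
lin = nn.Linear(2 * 4 * 4, 5, bias=False)

# Equivalent convolution: the kernel covers the entire input plane,
# so each output channel corresponds to one row of the linear weight.
conv = nn.Conv2d(2, 5, kernel_size=(4, 4), bias=False)
conv.weight.data = lin.weight.data.reshape(5, 2, 4, 4)

x = torch.randn(1, *in_shape)
out_lin = lin(x.flatten(1))       # shape (1, 5)
out_conv = conv(x).flatten(1)     # shape (1, 5), numerically equal
assert torch.allclose(out_lin, out_conv, atol=1e-5)
```

The flattening order of `x.flatten(1)` (channel, then height, then width) matches the `reshape` of the weight tensor, which is why the two outputs agree.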



forward(x)[source]#

Torch forward pass.

get_neuron_shape() → Tuple[int, int, int][source]#

Return the output shape of the neuron layer.


Returns: (features, height, width)
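The returned shape follows from the convolution's geometry. A minimal sketch of that arithmetic (the function name and defaults here are illustrative, not part of the library):

```python
def conv_output_shape(in_shape, out_channels, kernel, stride=(1, 1), padding=(0, 0)):
    """Output shape of a conv layer, in the (features, height, width) convention."""
    _, h, w = in_shape
    out_h = (h + 2 * padding[0] - kernel[0]) // stride[0] + 1
    out_w = (w + 2 * padding[1] - kernel[1]) // stride[1] + 1
    return (out_channels, out_h, out_w)

# e.g. a 3x3 convolution with 8 output channels on a (2, 32, 32) input:
conv_output_shape((2, 32, 32), 8, (3, 3))  # → (8, 30, 30)
```

Since the spiking layer preserves shape, this is also the shape reported by get_neuron_shape().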


memory_summary()[source]#

Computes the amount of memory required for each of the components. Note that this is not necessarily the same as the number of parameters due to some architecture design constraints.

\[K_{MT} = c \cdot 2^{\lceil \log_2\left(k_xk_y\right) \rceil + \lceil \log_2\left(f\right) \rceil}\]
\[N_{MT} = f \cdot 2^{ \lceil \log_2\left(f_y\right) \rceil + \lceil \log_2\left(f_x\right) \rceil }\]

Returns: a dictionary with keys kernel, neuron, and bias, mapping each to the corresponding memory size.
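Reading the formulas above with c as the number of input channels, (kx, ky) the kernel size, f the number of output features, and (fx, fy) the feature-map dimensions (an interpretation inferred from the symbols, not stated in this page), the two estimates can be computed directly:

```python
import math

def ceil_log2(n):
    return math.ceil(math.log2(n))

def kernel_memory(c, f, kx, ky):
    # K_MT = c * 2^(ceil(log2(kx*ky)) + ceil(log2(f)))
    return c * 2 ** (ceil_log2(kx * ky) + ceil_log2(f))

def neuron_memory(f, fx, fy):
    # N_MT = f * 2^(ceil(log2(fy)) + ceil(log2(fx)))
    return f * 2 ** (ceil_log2(fy) + ceil_log2(fx))

# 3x3 kernel, 2 input channels, 8 output features, 30x30 feature map:
kernel_memory(2, 8, 3, 3)   # → 2 * 2^(4 + 3) = 256
neuron_memory(8, 30, 30)    # → 8 * 2^(5 + 5) = 8192
```

The ceilings explain why the reported memory can exceed the raw parameter count: dimensions are rounded up to the next power of two by the hardware layout.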

zero_grad(set_to_none: bool = False) → None[source]#

Resets gradients of all model parameters. See similar function under torch.optim.Optimizer for more context.


  • set_to_none (bool) – Instead of setting the gradients to zero, set them to None. See torch.optim.Optimizer.zero_grad() for details.
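The effect of set_to_none can be seen with any torch module; this is standard torch behaviour, not specific to DynapcnnLayer:

```python
import torch
from torch import nn

layer = nn.Conv2d(1, 1, 3)
layer(torch.randn(1, 1, 5, 5)).sum().backward()

layer.zero_grad(set_to_none=False)   # gradients become zero-filled tensors
assert torch.all(layer.weight.grad == 0)

layer(torch.randn(1, 1, 5, 5)).sum().backward()
layer.zero_grad(set_to_none=True)    # gradients become None, freeing memory
assert layer.weight.grad is None
```

Setting gradients to None lowers memory use and lets the next backward pass allocate them fresh, at the cost of code needing to handle a possible None grad.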