Akida runtime API
- akida.__version__
Returns the current version of the akida module.
Model
- class akida.Model
An Akida neural Model, represented as a hierarchy of layers.
The Model class is the main interface to Akida and allows:
to create an empty Model to which you can add layers programmatically using the sequential API,
to reload a full Model from a serialized file or a memory buffer,
to create a new Model from a list of layers taken from an existing Model.
It provides methods to instantiate, train, test and save models.
The Model input and output shapes have 4 dimensions, the first one being the number of samples.
The Model accepts only uint8 tensors as inputs, whose values are encoded using either 1, 2, 4 or 8-bit precision (i.e. whose max value is 1, 3, 15 or 255 respectively).
If the inputs are 8-bit, then the first layer of the Model must be a convolutional layer with either 1 or 3 input channels.
The Model output is an int8 or uint8 numpy array if activations are enabled for the last layer, otherwise it is an int32 numpy array.
- Parameters:
filename (str, optional) – path to the serialized Model. If None, an empty sequential model will be created, or filled with the layers in the layers parameter.
layers (list, optional) – list of layers that will be copied to the new model. If the list does not start with an input layer, it will be added automatically.
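A minimal sketch of the three construction modes (the file path and layer shapes below are illustrative, not part of the API):
>>> import akida
>>> # 1. Empty sequential model, filled programmatically.
>>> model = akida.Model()
>>> model.add(akida.InputData(input_shape=(32, 32, 1), input_bits=4))
>>> model.add(akida.FullyConnected(units=10, weights_bits=4))
>>> # 2. Reload a full model from a serialized file (hypothetical path).
>>> model = akida.Model("my_model.fbz")
>>> # 3. Build a new model from layers of an existing one.
>>> head = akida.Model(layers=model.layers[:2])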
Methods:
add(self, layer, inbound_layers) – Add a layer to the current model.
add_classes(self, num_add_classes) – Adds classes to the last layer of the model.
compile(self, optimizer) – Select and prepare the optimizer for learning of the last layer.
evaluate(self, inputs, labels[, ...]) – Returns the model class accuracy.
fit(*args, **kwargs) – Overloaded function.
forward(self, inputs[, batch_size]) – Forwards a set of inputs through the model.
from_dict(model_dict) – Instantiate a Model from a dict representation.
from_json(model_str) – Instantiate a Model from a JSON representation.
get_layer(*args, **kwargs) – Overloaded function.
get_layer_count(self) – The number of layers.
map(device[, hw_only, mode]) – Map the model to a Device using a target backend.
pop_layer(self) – Remove the last layer of the current model.
predict(self, inputs[, batch_size]) – Predicts a set of inputs through the model.
predict_classes(inputs[, num_classes, ...]) – Predicts the class labels for the specified inputs.
save(self, model_file) – Saves all the model configuration (all layers and weights) to a file on disk.
summary() – Prints a string summary of the model.
to_buffer(self) – Serializes all the model configuration (all layers and weights) to a bytes buffer.
to_dict() – Provide a dict representation of the Model.
to_json() – Provide a JSON representation of the Model.
Attributes:
input_shape – The model input dimensions.
ip_version – The IP version this model is compatible with.
layers – Get a list of layers in current model.
learning – The learning parameters set.
metrics – The model metrics.
output_shape – The model output dimensions.
power_events – Copy of power events logged after inference.
sequences – The list of layer sequences in the Model.
statistics – Get statistics by sequence for this model.
- add(self: akida.core.Model, layer: akida::Layer, inbound_layers: List[akida::Layer] = []) None
Add a layer to the current model.
A list of inbound layers can optionally be specified. These layers must already be included in the model. If no inbound layer is specified and the layer is not the first layer in the model, the last included layer will be used as the inbound layer.
- Parameters:
layer (one of the available layers) – layer instance to be added to the model
inbound_layers (a list of Layer) – an optional list of inbound layers
- add_classes(self: akida.core.Model, num_add_classes: int) None
Adds classes to the last layer of the model.
A model with a compiled last layer is ready to learn using the Akida built-in learning algorithm. This function allows to add new classes (i.e. new neurons) to the last layer, keeping the previously learned neurons.
- Parameters:
num_add_classes (int) – number of classes to add to the last layer
- Raises:
RuntimeError – if the last layer is not compiled
- compile(self: akida.core.Model, optimizer: akida::LearningParams) None
Select and prepare the optimizer for learning of the last layer.
- Parameters:
optimizer (akida.LearningParams) – the optimizer used for learning
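A sketch of preparing the last layer for Akida built-in learning, assuming a model whose last layer supports it (the parameter values are illustrative):
>>> model.compile(optimizer=akida.AkidaUnsupervised(num_weights=32, num_classes=10))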
- evaluate(self: akida.core.Model, inputs: numpy.ndarray[numpy.uint8], labels: numpy.ndarray[numpy.int32], num_classes: int = 0, batch_size: int = 0) float
Returns the model class accuracy.
Forwards an input tensor through the model and computes accuracy based on labels. If the number of output neurons is greater than the number of classes, the neurons are automatically assigned to a class by dividing their id by the number of classes.
Note that the evaluation is based on the activation values of the last layer: for most use cases, you may want to disable activations for that layer (i.e. setting activation=False) to get a better accuracy.
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
labels (numpy.ndarray) – a (n) tensor of labels for the inputs
num_classes (int, optional) – optional parameter (defaults to the number of neurons in the last layer).
batch_size (int, optional) – maximum number of inputs that should be processed at a time
- Returns:
the accuracy of the model to predict the labels based on the inputs.
- Return type:
float
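For example, with random placeholder data (shapes and class count are illustrative):
>>> import numpy as np
>>> inputs = np.random.randint(0, 16, (100, 28, 28, 1), dtype=np.uint8)
>>> labels = np.random.randint(0, 10, (100,), dtype=np.int32)
>>> accuracy = model.evaluate(inputs, labels, num_classes=10)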
- fit(*args, **kwargs)
Overloaded function.
fit(self: akida.core.Model, inputs: numpy.ndarray, input_labels: float, batch_size: int = 0) -> numpy.ndarray
Trains a set of images or events through the model.
Trains the model with the specified input tensor (numpy array).
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
input_labels (float, optional) – input label
batch_size (int, optional) – maximum number of inputs that should be processed at a time.
- Returns:
a (n, out_x, out_y, out_c) int8, uint8 or int32 tensor.
- Return type:
numpy.ndarray
- Raises:
TypeError – if the input is not a numpy.ndarray.
ValueError – if the input doesn’t match the required shape, format, etc.
fit(self: akida.core.Model, inputs: numpy.ndarray, input_labels: numpy.ndarray, batch_size: int = 0) -> numpy.ndarray
Trains a set of images or events through the model.
Trains the model with the specified input tensor (numpy array).
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
input_labels (numpy.ndarray, optional) – input labels. Must have one label per input, or a single label for all inputs. If a label exceeds the defined number of classes, the input will be discarded. (Default value = None).
batch_size (int, optional) – maximum number of inputs that should be processed at a time.
- Returns:
a (n, out_x, out_y, out_c) int8, uint8 or int32 tensor.
- Return type:
numpy.ndarray
- Raises:
TypeError – if the input is not a numpy.ndarray.
ValueError – if the input doesn’t match the required shape, format, etc.
fit(self: akida.core.Model, inputs: numpy.ndarray, input_labels: list = [], batch_size: int = 0) -> numpy.ndarray
Trains a set of images or events through the model.
Trains the model with the specified input tensor (numpy array).
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
input_labels (list(int), optional) – input labels. Must have one label per input, or a single label for all inputs. If a label exceeds the defined number of classes, the input will be discarded. (Default value = None).
batch_size (int, optional) – maximum number of inputs that should be processed at a time.
- Returns:
a (n, out_x, out_y, out_c) int8, uint8 or int32 tensor.
- Return type:
numpy.ndarray
- Raises:
TypeError – if the input is not a numpy.ndarray.
ValueError – if the input doesn’t match the required shape, format, etc.
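A sketch of incremental learning, assuming the last layer has been compiled as shown above and reusing the placeholder inputs and labels from the evaluate() example:
>>> outputs = model.fit(inputs, input_labels=labels)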
- forward(self: akida.core.Model, inputs: numpy.ndarray, batch_size: int = 0) numpy.ndarray
Forwards a set of inputs through the model.
Forwards an input tensor through the model and returns an output tensor.
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
batch_size (int, optional) – maximum number of inputs that should be processed at a time
- Returns:
a (n, out_x, out_y, out_c) uint8, int8 or int32 tensor.
- Return type:
numpy.ndarray
- Raises:
TypeError – if the input is not a numpy.ndarray.
ValueError – if the inputs don’t match the required shape, format, etc.
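For example, reusing the placeholder inputs defined above:
>>> outputs = model.forward(inputs, batch_size=32)
>>> print(outputs.shape, outputs.dtype)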
- static from_dict(model_dict)
Instantiate a Model from a dict representation
- Parameters:
model_dict (dict) – a Model dictionary.
- Returns:
a Model.
- Return type:
Model
- static from_json(model_str)
Instantiate a Model from a JSON representation
- Parameters:
model_str (str) – a JSON-formatted string corresponding to a Model.
- Returns:
a Model.
- Return type:
Model
- get_layer(*args, **kwargs)
Overloaded function.
get_layer(self: akida.core.Model, layer_name: str) -> akida::Layer
Get a reference to a specific layer.
This method allows a deeper introspection of the model by providing access to the underlying layers.
- param layer_name:
name of the layer to retrieve
- type layer_name:
str
- return:
a Layer
get_layer(self: akida.core.Model, layer_index: int) -> akida::Layer
Get a reference to a specific layer.
This method allows a deeper introspection of the model by providing access to the underlying layers.
- param layer_index:
index of the layer to retrieve
- type layer_index:
int
- return:
a Layer
- get_layer_count(self: akida.core.Model) int
The number of layers.
- property input_shape
The model input dimensions.
- property ip_version
The IP version this model is compatible with.
- property layers
Get a list of layers in current model.
- property learning
The learning parameters set.
- map(device, hw_only=False, mode=MapMode.AllNps)
Map the model to a Device using a target backend.
This method tries to map a Model to the specified Device, implicitly identifying one or more layer sequences that are mapped individually on the Device Mesh.
An optional hw_only parameter can be specified to force the mapping strategy to use only one hardware sequence, thus reducing software intervention on the inference.
Note
Default mapping gives a higher throughput, lower latency, and better NP concurrent utilization, but an optimal mapping depends on the system characteristics.
Note
To use a custom MeshMapper, use the mapping mode MapMode.Minimal.
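A sketch of mapping to the first available device (device availability is assumed):
>>> devices = akida.devices()
>>> model.map(devices[0])
>>> # Force a single hardware sequence, avoiding software intervention:
>>> model.map(devices[0], hw_only=True)
>>> # Use the Minimal mode, e.g. when providing a custom MeshMapper:
>>> model.map(devices[0], mode=akida.MapMode.Minimal)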
- property metrics
The model metrics.
- property output_shape
The model output dimensions.
- pop_layer(self: akida.core.Model) None
Remove the last layer of the current model.
- property power_events
Copy of power events logged after inference
- predict(self: akida.core.Model, inputs: numpy.ndarray, batch_size: int = 0) numpy.ndarray
Predicts a set of inputs through the model.
Forwards an input tensor through the model and returns a float array.
It applies ONLY to models without an activation on the last layer. The output values are obtained from the model discrete potentials by applying a shift and a scale.
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
batch_size (int, optional) – maximum number of inputs that should be processed at a time
- Returns:
a (n, w, h, c) float tensor.
- Return type:
numpy.ndarray
- Raises:
TypeError – if the input is not a numpy.ndarray.
RuntimeError – if the model last layer has an activation.
ValueError – if the input doesn’t match the required shape, format, or if the model only has an InputData layer.
- predict_classes(inputs, num_classes=0, batch_size=0)
Predicts the class labels for the specified inputs.
- Parameters:
inputs (numpy.ndarray) – a (n, x, y, c) uint8 tensor
num_classes (int, optional) – the number of output classes
batch_size (int, optional) – maximum number of inputs that should be processed at a time
- Returns:
an array of class labels
- Return type:
numpy.ndarray
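For example, with the placeholder data defined earlier:
>>> predictions = model.predict_classes(inputs, num_classes=10)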
- save(self: akida.core.Model, model_file: str) None
Saves all the model configuration (all layers and weights) to a file on disk.
- Parameters:
model_file (str) – full path of the serialized model (.fbz file).
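A sketch of a save and reload round trip (the path is illustrative):
>>> model.save("my_model.fbz")
>>> reloaded = akida.Model("my_model.fbz")
>>> reloaded.summary()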
- property sequences
The list of layer sequences in the Model
- property statistics
Get statistics by sequence for this model.
- Returns:
a dictionary of SequenceStatistics indexed by name.
- summary()
Prints a string summary of the model.
This method prints a summary of the model with details for every layer, grouped by sequences:
name and type in the first column
output shape
kernel shape
If there is any layer with unsupervised learning enabled, it will list them, with these details:
name of layer
number of incoming connections
number of weights per neuron
- to_buffer(self: akida.core.Model) bytes
Serializes all the model configuration (all layers and weights) to a bytes buffer.
- to_dict()
Provide a dict representation of the Model
- Returns:
a Model dictionary.
- Return type:
dict
- to_json()
Provide a JSON representation of the Model
- Returns:
a JSON-formatted string corresponding to a Model.
- Return type:
str
Layer
- class akida.Layer
Methods:
get_learning_histogram() – Returns a histogram of learning percentages.
get_variable(name) – Get the value of a layer variable.
get_variable_names() – Get the list of variable names for this layer.
set_variable(name, values) – Set the value of a layer variable.
to_dict() – Provide a dict representation of the Layer.
Attributes:
inbounds – The layer inbound layers.
input_bits – The layer input bits.
input_dims – The layer input dimensions.
mapping – The layer hardware mapping.
name – The layer name.
output_dims – The layer output dimensions.
output_signed – Whether output is signed or not.
parameters – The layer parameters set.
variables – The layer trainable variables.
- get_learning_histogram()
Returns a histogram of learning percentages.
Returns a list of learning percentages and the associated number of neurons.
- Returns:
a (n,2) numpy.ndarray containing the learning percentages and the number of neurons.
- Return type:
numpy.ndarray
- get_variable(name)
Get the value of a layer variable.
Layer variables are named entities representing the weights or thresholds used during inference:
Weights variables are typically integer arrays of shape: (x, y, features/channels, num_neurons) row-major (‘C’).
Threshold variables are typically integer or float arrays of shape: (num_neurons).
- Parameters:
name (str) – the variable name.
- Returns:
an array containing the variable.
- Return type:
numpy.ndarray
- get_variable_names()
Get the list of variable names for this layer.
- Returns:
a list of variable names.
- property inbounds
The layer inbound layers.
- property input_bits
The layer input bits.
- property input_dims
The layer input dimensions.
- property mapping
The layer hardware mapping.
- property name
The layer name.
- property output_dims
The layer output dimensions.
- property output_signed
Whether output is signed or not.
- property parameters
The layer parameters set.
- set_variable(name, values)
Set the value of a layer variable.
Layer variables are named entities representing the weights or thresholds used during inference:
Weights variables are typically integer arrays of shape:
(num_neurons, features/channels, y, x) col-major ordered (‘F’)
or equivalently:
(x, y, features/channels, num_neurons) row-major (‘C’).
Threshold variables are typically integer or float arrays of shape: (num_neurons).
- Parameters:
name (str) – the variable name.
values (numpy.ndarray) – a numpy.ndarray containing the variable values.
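A sketch of inspecting and updating a layer's variables (the layer index and the "weights" variable name are typical but assumed; the new values are purely illustrative):
>>> import numpy as np
>>> layer = model.get_layer(1)
>>> print(layer.get_variable_names())
>>> w = layer.get_variable("weights")
>>> layer.set_variable("weights", np.ones_like(w))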
- to_dict()
Provide a dict representation of the Layer
- Returns:
a Layer dictionary.
- Return type:
dict
- property variables
The layer trainable variables.
Akida layers
- class akida.InputData(input_shape, input_bits=4, name='')[source]
This layer is used to specify the Model input dimensions and bitwidth.
It specifically targets Models accepting signed or low-bitwidth inputs, or inputs whose channel number is neither 1 nor 3. For image inputs, the model must instead start with an image-specific input layer.
- Parameters:
input_shape (tuple) – the 3D input shape.
input_bits (int, optional) – input bitwidth. Defaults to 4.
name (str, optional) – name of the layer. Defaults to empty string.
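For example, a 64-channel input cannot use an image-specific input layer, so InputData is required (shapes and parameters are illustrative):
>>> model = akida.Model()
>>> model.add(akida.InputData(input_shape=(1, 1, 64), input_bits=4))
>>> model.add(akida.FullyConnected(units=16, weights_bits=2))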
Akida V1 layers
- class akida.InputConvolutional(input_shape, kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=1, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, act_bits=1, padding_value=0)[source]
This represents an image-specific input convolutional layer.
The initial convolutional layer in a network, which receives image inputs in either RGB or grayscale format, is converted into an InputConvolutional layer on Akida. This layer optionally applies pooling and a ReLU operation to the outputs of the convolution.
It is the only Akida V1 layer with 8-bit weights. It applies a ‘convolution’ (actually a cross-correlation) optionally followed by a pooling operation to the input images. It can optionally apply a step-wise ReLU activation to its outputs. The layer expects a 4D tensor whose first dimension is the sample index representing the 8-bit images as input. It returns a 4D tensor whose first dimension is the sample index and the last dimension is the number of convolution filters. The order of the input spatial dimensions is preserved, but their value may change according to the convolution and pooling parameters.
- Parameters:
input_shape (tuple) – the 3D input shape.
filters (int) – number of filters.
kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.
name (str, optional) – name of the layer. Defaults to empty string.
padding (Padding, optional) – type of convolution. Defaults to Padding.Same.
kernel_stride (tuple, optional) – tuple of integers representing the convolution stride (X, Y). Defaults to (1, 1).
weights_bits (int, optional) – number of bits used to quantize weights. Defaults to 1.
pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter). Defaults to (-1, -1).
pool_type (PoolType, optional) – pooling type (NoPooling, Max or Average). Defaults to PoolType.NoPooling.
pool_stride (list, optional) – list of 2 integers representing the stride dimensions. Defaults to (-1, -1).
activation (bool, optional) – enable or disable activation function. Defaults to True.
act_bits (int, optional) – number of bits used to quantize the neuron response. Defaults to 1.
padding_value (int, optional) – value used when padding. Defaults to 0.
- class akida.FullyConnected(units, name='', weights_bits=1, activation=True, act_bits=1)[source]
This represents a Dense or Linear neural layer.
A standard Dense layer in a network is converted to a FullyConnected layer on Akida. This layer optionally applies a ReLU operation to the outputs of the Dense layer.
The FullyConnected layer accepts 1-bit, 2-bit or 4-bit input tensors. The FullyConnected can be configured with 1-bit, 2-bit or 4-bit weights. It multiplies the inputs by its internal unit weights, returning a 4D tensor of values whose first dimension is the number of samples and the last dimension represents the number of units. It can optionally apply a step-wise ReLU activation to its outputs.
- Parameters:
units (int) – number of units.
name (str, optional) – name of the layer. Defaults to empty string.
weights_bits (int, optional) – number of bits used to quantize weights. Defaults to 1.
activation (bool, optional) – enable or disable activation function. Defaults to True.
act_bits (int, optional) – number of bits used to quantize the neuron response. Defaults to 1.
- class akida.Convolutional(kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=1, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, act_bits=1)[source]
This represents a standard Convolutional layer.
A standard convolution layer in a network, having an input with an arbitrary number of channels, is converted to a Convolutional layer on Akida. This layer optionally applies pooling and a ReLU operation to the outputs of the convolution.
The Convolutional layer accepts 1-bit, 2-bit or 4-bit 3D input tensors with an arbitrary number of channels. The Convolutional layer can be configured with 1-bit, 2-bit or 4-bit weights. It applies a convolution (not a cross-correlation) optionally followed by a pooling operation to the input tensors. It can optionally apply a step-wise ReLU activation to its outputs. The layer expects a 4D tensor whose first dimension is the sample index as input. It returns a 4D tensor whose first dimension is the sample index and the last dimension is the number of convolution filters. The order of the input spatial dimensions is preserved, but their value may change according to the convolution and pooling parameters.
- Parameters:
kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.
filters (int) – number of filters.
name (str, optional) – name of the layer. Defaults to empty string
padding (Padding, optional) – type of convolution. Defaults to Padding.Same.
kernel_stride (list, optional) – list of 2 integers representing the convolution stride (X, Y). Defaults to (1, 1).
weights_bits (int, optional) – number of bits used to quantize weights. Defaults to 1.
pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter). Defaults to (-1, -1).
pool_type (PoolType, optional) – pooling type (NoPooling, Max or Average). Defaults to PoolType.NoPooling.
pool_stride (list, optional) – list of 2 integers representing the stride dimensions. Defaults to (-1, -1).
activation (bool, optional) – enable or disable activation function. Defaults to True.
act_bits (int, optional) – number of bits used to quantize the neuron response. Defaults to 1.
- class akida.SeparableConvolutional(kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=2, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, act_bits=1)[source]
This represents a separable convolution layer.
A standard separable convolution layer in a network, having an input with an arbitrary number of channels, is converted to a SeparableConvolutional layer on Akida. This layer optionally applies pooling and a ReLU operation to the outputs of the separable convolution.
This layer accepts 1-bit, 2-bit or 4-bit 3D input tensors. It can be configured with 1-bit, 2-bit or 4-bit weights. Separable convolutions consist in first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. Note: this layer applies a real convolution, and not a cross-correlation. It can optionally apply a step-wise ReLU activation to its outputs. The layer expects a 4D tensor whose first dimension is the sample index as input. It returns a 4D tensor whose first dimension is the sample index and the last dimension is the number of convolution filters. The order of the input spatial dimensions is preserved, but their value may change according to the convolution and pooling parameters.
- Parameters:
kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.
filters (int) – number of pointwise filters.
name (str, optional) – name of the layer. Defaults to empty string.
padding (Padding, optional) – type of convolution. Defaults to Padding.Same.
kernel_stride (list, optional) – list of 2 integers representing the convolution stride (X, Y). Defaults to (1, 1).
weights_bits (int, optional) – number of bits used to quantize weights. Defaults to 2.
pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter). Defaults to (-1, -1).
pool_type (PoolType, optional) – pooling type (NoPooling, Max or Average). Defaults to PoolType.NoPooling.
pool_stride (list, optional) – list of 2 integers representing the stride dimensions. Defaults to (-1, -1).
activation (bool, optional) – enable or disable activation function. Defaults to True.
act_bits (int, optional) – number of bits used to quantize the neuron response. Defaults to 1.
Akida V2 layers
- class akida.InputConv2D(input_shape, filters, kernel_size, padding=<Padding.Same: 1>, kernel_stride=1, pool_type=<PoolType.NoPooling: 0>, pool_size=-1, pool_stride=-1, activation=True, output_bits=8, buffer_bits=32, post_op_buffer_bits=32, name='')[source]
This represents the Akida V2 InputConv2D layer.
This layer is an image-specific input layer. It only accepts images with 8-bit pixels, either grayscale or RGB. Its kernel weights should be 8-bit. It applies a convolution (actually a cross-correlation) optionally followed by a bias addition, a pooling operation and a ReLU activation. Inputs shape must be in the form (X, Y, C). Being the result of a quantized operation, it is possible to apply some shifts to adjust the output scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values. The order of the input spatial dimensions is preserved, but their values may change according to the convolution and pooling parameters.
The InputConv2D operation can be described as follows:
>>> prod = conv2d(inputs, weights)
>>> output = prod + (bias << bias_shift)  # optional
>>> output = pool(output)  # optional
>>> output = ReLU(output)  # optional
>>> output = output * output_scale >> output_shift
Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
input_shape (tuple) – the 3D input shape.
filters (int) – number of filters.
kernel_size (int) – integer value specifying the height and width of the 2D convolution window.
padding (Padding, optional) – type of convolution, either Padding.Same or Padding.Valid. Defaults to Padding.Same.
kernel_stride (int, optional) – integer representing the convolution stride across both spatial dimensions. Defaults to 1.
pool_type (PoolType, optional) – pooling type (NoPooling, Max). Defaults to PoolType.NoPooling.
pool_size (int, optional) – integer value specifying the height and width of the window over which to take the maximum or the average (depending on pool_type parameter). Defaults to -1.
pool_stride (int, optional) – integer representing the stride across both dimensions. Defaults to -1.
activation (bool, optional) – enable or disable activation function. Defaults to True.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Stem(input_shape, filters=192, kernel_size=16, output_bits=8, buffer_bits=28, post_op_buffer_bits=32, num_non_patch_tokens=0, name='')[source]
Stem layer corresponding to the Stem block of Transformer models.
It’s composed of the following layers:
The Embedding layer
The Reshape layer
The ClassToken (+ DistToken for distilled model) layer(s)
The AddPosEmbedding layer
This layer covers all the above layers operations.
Note that final output values will be saturated on the range that can be represented with output_bits.
- Parameters:
input_shape (tuple) – the spatially square 3D input shape.
filters (int, optional) – Positive integer, dimensionality of the output space. Defaults to 192.
kernel_size (int, optional) – kernel size. Defaults to 16.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
num_non_patch_tokens (int, optional) – number of non-patch tokens to concatenate with the input along its last axis. Defaults to 0.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Conv2D(filters, kernel_size, kernel_stride=1, padding=<Padding.Same: 1>, pool_type=<PoolType.NoPooling: 0>, pool_size=-1, pool_stride=-1, output_bits=8, buffer_bits=28, activation=True, post_op_buffer_bits=32, name='')[source]
This represents the Akida V2 Conv2D layer.
It applies a convolution optionally followed by a bias addition, a pooling operation and a ReLU activation. Inputs shape must be in the form (X, Y, C). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values. The order of the input spatial dimensions is preserved, but their values may change according to the convolution and pooling parameters.
The Conv2D operation can be described as follows:
>>> inputs = inputs << input_shift
>>> prod = conv2d(inputs, weights)
>>> output = prod + (bias << bias_shift)  # optional
>>> output = pool(output)  # optional
>>> output = ReLU(output)  # optional
>>> output = output * output_scale >> output_shift
Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
filters (int) – number of filters.
kernel_size (int) – integer value specifying the height and width of the 2D convolution window.
kernel_stride (int, optional) – integer representing the convolution stride across both spatial dimensions. Defaults to 1.
padding (Padding, optional) – type of convolution, either Padding.Same or Padding.Valid. Defaults to Padding.Same.
pool_type (PoolType, optional) – pooling type (NoPooling, Max or Average). Defaults to PoolType.NoPooling.
pool_size (int, optional) – integer value specifying the height and width of the window over which to take the maximum or the average (depending on pool_type parameter). Defaults to -1.
pool_stride (int, optional) – integer representing the stride across both dimensions. Defaults to -1.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
activation (bool, optional) – enable or disable activation function. Defaults to True.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
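A sketch of a small Akida V2 stack built with the sequential API (all shapes and parameters are illustrative); Dense1D is used for the head since it reshapes its input automatically:
>>> model = akida.Model()
>>> model.add(akida.InputConv2D(input_shape=(32, 32, 3), filters=16, kernel_size=3,
...                             pool_type=akida.PoolType.Max, pool_size=2))
>>> model.add(akida.Conv2D(filters=32, kernel_size=3,
...                        pool_type=akida.PoolType.Max, pool_size=2))
>>> model.add(akida.Dense1D(units=10))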
- class akida.Conv2DTranspose(filters, kernel_size, activation=True, output_bits=8, buffer_bits=28, post_op_buffer_bits=32, name='')[source]
This represents the Akida V2 Conv2DTranspose layer.
It applies a transposed convolution (also called deconvolution) optionally followed by a bias addition and a ReLU activation. Inputs shape must be in the form (X, Y, C). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values. The order of the input spatial dimensions is preserved, but their values may change according to the layer parameters. Note that the layer performs only transpose convolution with a “Same” padding and a kernel stride equal to 2.
The Conv2DTranspose operation can be described as follows:
>>> inputs = inputs << input_shift
>>> prod = conv2d_transpose(inputs, weights)
>>> output = prod + (bias << bias_shift)  # optional
>>> output = ReLU(output)  # optional
>>> output = output * output_scale >> output_shift
Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
filters (int) – number of filters.
kernel_size (int) – integer value specifying the height and width of the 2D convolution window.
activation (bool, optional) – enable or disable activation function. Defaults to True.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Dense1D(units, output_bits=8, buffer_bits=28, post_op_buffer_bits=32, activation=False, name='')[source]
Dense layer capable of working on 1D inputs.
This is a simple dot product between an input of shape (1, 1, X) and a kernel of shape (X, F) to output a tensor of shape (1, 1, F). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values.
The 1D Dense operation can be described as follows:
>>> inputs = inputs << input_shift
>>> prod = matmul(inputs, weights)
>>> output = prod + (bias << bias_shift)
>>> output = output * output_scale >> output_shift
Inputs shape must be (1, 1, X); if not, it is reshaped automatically at the beginning. Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
units (int) – Positive integer, dimensionality of the output space.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
activation (bool, optional) – apply a ReLU activation. Defaults to False.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Dense2D(units, output_bits=8, buffer_bits=32, post_op_buffer_bits=32, activation=False, name='')[source]
Dense layer capable of working on 2D inputs.
The 2D Dense operation is simply the repetition of a 1D FullyConnected/Dense operation over each input row. Inputs shape must be in the form (1, X, Y). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values.
The 2D Dense operation can be described as follows:
>>> inputs = inputs << input_shift
>>> prod = matmul(inputs, weights)
>>> output = prod + (bias << bias_shift)
>>> output = output * output_scale >> output_shift
Inputs shape must be (1, X, Y). Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
units (int) – Positive integer, dimensionality of the output space.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
activation (bool, optional) – apply a ReLU activation. Defaults to False.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.DepthwiseConv2D(kernel_size, kernel_stride=1, padding=<Padding.Same: 1>, pool_type=<PoolType.NoPooling: 0>, pool_size=-1, pool_stride=-1, output_bits=8, buffer_bits=28, post_op_buffer_bits=32, activation=True, name='')[source]
This represents a depthwise convolutional layer.
This is like a standard convolution, except it acts on each input channel separately. There is a single filter per input channel, so weights shape is (X, Y, F). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values.
Note: this layer applies a real convolution, and not a cross-correlation. It can optionally apply a step-wise ReLU activation to its outputs. The layer expects a 4D tensor whose first dimension is the sample index as input.
It returns a 4D tensor whose first dimension is the sample index and the last dimension is the number of convolution filters, which is the same as the number of input channels. The order of the input spatial dimensions is preserved, but their value may change according to the convolution and pooling parameters.
- Parameters:
kernel_size (int) – Integer representing the spatial dimensions of the depthwise kernel.
kernel_stride (int, optional) – Integer representing the spatial convolution stride. Defaults to 1.
padding (Padding, optional) – type of convolution. Defaults to Padding.Same.
pool_type (PoolType, optional) – pooling type (NoPooling, or Max). Defaults to PoolType.NoPooling.
pool_size (int, optional) – Integer representing the window size over which to take the maximum. Defaults to -1.
pool_stride (int, optional) – Integer representing the pooling stride dimensions. A value of -1 means same as pool_size. Defaults to -1.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
activation (bool, optional) – enable or disable ReLU activation function. Defaults to True.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.DepthwiseConv2DTranspose(kernel_size, activation=True, output_bits=8, buffer_bits=28, post_op_buffer_bits=32, name='')[source]
This represents the Akida V2 DepthwiseConv2DTranspose layer.
It applies a transposed depthwise convolution (also called deconvolution) optionally followed by a bias addition and a ReLU activation. This is like a standard transposed convolution, except it acts on each input channel separately. Inputs shape must be in the form (X, Y, C). Being the result of a quantized operation, it is possible to apply some shifts to adjust the inputs/outputs scales to the equivalent operation performed on floats, while maintaining a limited usage of bits and performing the operations on integer values. The order of the input spatial dimensions is preserved, but their values may change according to the layer parameters. Note that the layer performs only transpose depthwise convolution with a “Same” padding and a kernel stride equal to 2.
The DepthwiseConv2DTranspose operation can be described as follows:
>>> inputs = inputs << input_shift
>>> prod = depthwise_conv2d_transpose(inputs, weights)
>>> output = prod + (bias << bias_shift)  # optional
>>> output = ReLU(output)  # optional
>>> output = output * output_scale >> output_shift
Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
kernel_size (int) – Integer representing the spatial dimensions of the depthwise kernel.
activation (bool, optional) – enable or disable activation function. Defaults to True.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 28.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Attention(num_heads, output_bits=8, buffer_bits=32, post_op_buffer_bits=32, shiftmax_output_bits=10, name='')[source]
Multi-head attention layer.
From A. Vaswani et al., “Attention is All You Need” (arXiv:1706.03762): “Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence.”
This layer will take three inputs, Query, Key and Value, and perform these actions on each head:
Multiply Query and Key to obtain a vector of attention scores expressing how tokens/patches relate to one another.
Divide by a scale factor.
Convert the score to a probability mask using a Softmax function (replaced by a Shiftmax in our implementation).
Multiply the mask by the Values.
Note that outputs and masks will be saturated on the range that can be represented with output_bits.
- Parameters:
num_heads (int) – number of heads.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – internal bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
shiftmax_output_bits (int, optional) – output bitwidth for shiftmax, must be no more than 1/2 of buffer_bits. Defaults to 10.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.VitEncoderBlock(hidden_size=192, mlp_dim=768, num_heads=3, num_classes=0, tokens_to_extract=0, output_bits=8, buffer_bits=32, post_op_buffer_bits=32, head_bits=28, name='')[source]
Layer corresponding to a ViT encoder block.
It’s composed of the following layers:
a pre-attention MadNorm layer
Query, Key and Value Dense layers
an Attention layer and its Dense projection layer
a skip connection (Add) between the input and the output of the attention projection
a pre-MLP MadNorm layer
an MLP composed of two Dense layers
a skip connection (Add) between the MLP output and the previous Add layer
optionally, when tokens_to_extract is set to a non-zero value, a BatchNormalization layer and the given number of ExtractToken layers (1 or 2)
optionally, when num_classes is set, a classification head with one or two Dense layers depending on the number of tokens
This layer covers all the above layers operations.
Note that final output values will be saturated on the range that can be represented with output_bits.
- Parameters:
hidden_size (int, optional) – internal shape of the block. Defaults to 192.
mlp_dim (int, optional) – dimension of the first dense layer of the MLP. Defaults to 768.
num_heads (int, optional) – number of heads in the multi-head attention. Defaults to 3.
num_classes (int, optional) – number of classes to set in the classification head, if zero no classification head is added. ‘tokens_to_extract’ must be different from 0. Defaults to 0.
tokens_to_extract (int, optional) – number of non patch tokens to extract. Defaults to 0.
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
head_bits (int, optional) – similar to ‘output_bits’ but for the optional head(s). Defaults to 28.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Add(output_bits=8, buffer_bits=32, post_op_buffer_bits=32, activation=False, name='')[source]
Layer that adds two inputs from incoming layers.
It takes as input the output tensors from the input layers, all of the same shape, and returns a single tensor (also of the same shape). Add layers require incoming input layers to produce output tensors of the same type. The Add layer will create three variables: a_shift, b_shift and output_shift. The operation it performs on each pair of integer values (a, b) from the input tensors is equivalent to:
>>> a1 = a << a_shift
>>> b1 = b << b_shift
>>> intermediate_output = a1 + b1
>>> for i, shift in enumerate(output_shift):
...     if shift > 0:
...         output[i] = intermediate_output[i] << |shift|
...     else:
...         output[i] = intermediate_output[i] >> |shift|
Note that output values will be saturated on the range that can be represented with output_bits.
- Parameters:
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – internal bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
activation (bool, optional) – apply a ReLU activation. Defaults to False.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Concatenate(name='')[source]
Layer that concatenates two or more inputs from incoming layers, along the last dimensions.
The operation is equivalent to this numpy operation:
>>> # Inputs are a and b
>>> output = np.concatenate((a, b), axis=-1)
All inbound layers should have the same output dimensions on the first two axes. All inbound layers should have the same output bitwidth and output sign.
- Parameters:
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.ExtractToken(begin=0, end=None, name='')[source]
A layer capable of extracting a range from the input tensor.
This is similar to numpy.take_along_axis, where the indices are in the range [begin:end]. Note that the reduction axis will be the first axis that is not 1.
- Parameters:
begin (int, optional) – beginning of the range to take into account. Defaults to 0.
end (int, optional) – end of the range to take into account. Defaults to None.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.BatchNormalization(output_bits=8, buffer_bits=32, post_op_buffer_bits=32, activation=False, name='')[source]
Batch Normalization applied on the last axis.
The normalization is applied as:
outputs = a * x + b
- Parameters:
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – buffer bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
activation (bool, optional) – add a ReLU activation. Defaults to False.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.MadNorm(output_bits=8, buffer_bits=32, post_op_buffer_bits=32, name='')[source]
A function similar to the MAD normalization layer presented in quantizeml. (Note that the normalization is only available over the last dimension)
Instead of using the standard deviation (std) during the normalization division, the sum of absolute values is used. The normalization is performed in this way:
MadNorm(x) = x * gamma / sum(abs(x)) + beta
- Parameters:
output_bits (int, optional) – output bitwidth. Defaults to 8
buffer_bits (int, optional) – buffer bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
- class akida.Shiftmax(output_bits=8, buffer_bits=32, post_op_buffer_bits=32, name='')[source]
A function similar to the softmax.
Instead of using e as the base, it uses 2 and a shift. So we replace
\[softmax(x_i) = \frac{e^{x_i}}{\sum_k{e^{x_k}}}\]
with
\[shiftmax(x_i) = \frac{2^{x_i}}{2^{round(\log_2(\sum_k{2^{x_k}}))}}\]
This is evaluated with a shift.
- Parameters:
output_bits (int, optional) – output bitwidth. Defaults to 8.
buffer_bits (int, optional) – internal bitwidth. Defaults to 32.
post_op_buffer_bits (int, optional) – internal bitwidth for post operations. Defaults to 32.
name (str, optional) – name of the layer. Defaults to empty string.
Layer parameters
LayerType
- class akida.LayerType
The layer type
Members:
Unknown
InputData
InputConvolutional
FullyConnected
Convolutional
SeparableConvolutional
Add
Dense2D
Shiftmax
Attention
Stem
MadNorm
Concatenate
BatchNormalization
Conv2D
InputConv2D
DepthwiseConv2D
Conv2DTranspose
ExtractToken
Dequantizer
DepthwiseConv2DTranspose
BufferTempConv
DepthwiseBufferTempConv
StatefulRecurrent
VitEncoderBlock
Dense1D
Padding
- class akida.Padding
Sets the effective padding of the input for convolution, thereby determining the output dimensions. Naming conventions are the same as Keras/Tensorflow.
Members:
Valid : No padding
Same : Padded so that output size is input size divided by the stride
PoolType
- class akida.PoolType
The pooling type
Members:
NoPooling : No pooling applied
Max : Maximum pixel value is selected
Average : Average pixel value is selected
Optimizers
- class akida.core.Optimizer
Optimizer generic parameters
Methods:
get(self, key) – Retrieve a value from the LearningParams object.
- get(self: akida.core.Optimizer, key: str) float
Retrieve a value from the LearningParams object.
- Parameters:
key (str) – key of the value.
- Returns:
value associated to the key.
- Return type:
float
- Raises:
ValueError – if the value is not present in the LearningParams object.
- class akida.AkidaUnsupervised(num_weights: int, num_classes: int = 1, initial_plasticity: float = 1.0, learning_competition: float = 0.0, min_plasticity: float = 0.10000000149011612, plasticity_decay: float = 0.25)
Prepare the Akida Unsupervised optimizer for learning of the last layer.
- Parameters:
num_weights (int) – number of connections for each neuron.
num_classes (int, optional) – number of classes when running in a ‘labeled mode’.
initial_plasticity (float, optional) – defines how easily the weights will change when learning occurs.
learning_competition (float, optional) – controls competition between neurons.
min_plasticity (float, optional) – defines the minimum level to which plasticity will decay.
plasticity_decay (float, optional) – defines the decay of plasticity with each learning step.
Sequence
- class akida.Sequence
Represents a sequence of layers.
Sequences can be mapped in Software or on a Device.
Attributes:
backend – The backend type for this Sequence.
name – The name of the sequence.
passes – Get the list of passes in this sequence.
program – Get the hardware program for this sequence.
program_parts – Get the program, split in different parts.
- property backend
The backend type for this Sequence.
- property name
The name of the sequence
- property passes
Get the list of passes in this sequence.
- property program
Get the hardware program for this sequence.
Returns None if the Sequence is not compatible with the selected Device.
- Returns:
a bytes buffer or None
- property program_parts
Get the program, split in different parts
BackendType
- class akida.BackendType
Members:
Software
Hardware
Hybrid
Pass
- class akida.Pass
Represents a subset of the Sequence.
Hardware Sequences can typically be split into multiple passes on devices that support the hardware partial reconfiguration feature, reducing the intervention of the software during inference.
Attributes:
layers – Get the list of layers in this pass.
- property layers
Get the list of layers in this pass.
Device
- class akida.Device
Attributes:
desc – Returns the Device description.
mesh – The device Mesh layout.
version – The device hardware version.
- property desc
Returns the Device description
- Returns:
a string describing the Device
- property mesh
The device Mesh layout
- property version
The device hardware version.
- akida.devices() List[akida.core.HardwareDevice]
Returns the full list of available hardware devices
- Returns:
list of Device
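For example, enumerating devices and inspecting their hardware version (assumes at least one device is available):
>>> for device in akida.devices():
...     print(device.version, device.desc)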
HwVersion
- class akida.HwVersion
Attributes:
The hardware major revision
The hardware minor revision
The hardware product identifier
The hardware vendor identifier
- property major_rev
The hardware major revision
- property minor_rev
The hardware minor revision
- property product_id
The hardware product identifier
- property vendor_id
The hardware vendor identifier
HWDevice
- class akida.HardwareDevice
Methods:
fit(*args, **kwargs) – Overloaded function.
forward(self, arg0) – Processes inputs on a programmed device.
predict(self, arg0) – Processes inputs on a programmed device, returns a float array.
program_external(self, arg0, arg1) – Program a device using a serialized program info bytes object, and the address, as seen from Akida on the device, of the corresponding program data that must have been written beforehand.
reset_top_memory(self) – Reset the device memory information.
unprogram(self) – Clear current program from hardware device, restoring its initial state.
Attributes:
inference_power_events – Copy of power events logged after inference.
learn_enabled – Property that enables/disables learning on current program (if possible).
learn_mem – Property that retrieves learning layer's memory or updates a device using a serialized learning layer memory buffer.
memory – The device memory usage and top usage (in bytes).
metrics – The metrics from this device.
program – Property that retrieves current program or programs a device using a serialized program bytes object.
soc – The SocDriver interface used by the device, or None if the device is not a SoC.
- fit(*args, **kwargs)
Overloaded function.
fit(self: akida.core.HardwareDevice, inputs: numpy.ndarray[numpy.uint8], input_labels: float) -> numpy.ndarray
Learn from inputs on a programmed device.
fit(self: akida.core.HardwareDevice, inputs: numpy.ndarray[numpy.uint8], input_labels: numpy.ndarray) -> numpy.ndarray
Learn from inputs on a programmed device.
fit(self: akida.core.HardwareDevice, inputs: numpy.ndarray[numpy.uint8], input_labels: list = []) -> numpy.ndarray
Learn from inputs on a programmed device.
- forward(self: akida.core.HardwareDevice, arg0: numpy.ndarray[numpy.uint8]) numpy.ndarray
Processes inputs on a programmed device.
- Parameters:
inputs – a numpy.ndarray with shape matching the current program
- Returns:
a numpy.ndarray with outputs from the device
- property inference_power_events
Copy of power events logged after inference
- property learn_enabled
Property that enables/disables learning on current program (if possible).
- property learn_mem
Property that retrieves learning layer’s memory or updates a device using a serialized learning layer memory buffer.
- property memory
The device memory usage and top usage (in bytes)
- property metrics
The metrics from this device
- predict(self: akida.core.HardwareDevice, arg0: numpy.ndarray[numpy.uint8]) numpy.ndarray
Processes inputs on a programmed device, returns a float array.
- Parameters:
inputs – a numpy.ndarray with shape matching the current program
- Returns:
a numpy.ndarray with float outputs from the device
- property program
Property that retrieves current program or programs a device using a serialized program bytes object.
- program_external(self: akida.core.HardwareDevice, arg0: bytes, arg1: int) None
Program a device using a serialized program info bytes object and the address, as seen from Akida on the device, of the corresponding program data, which must have been written beforehand.
- reset_top_memory(self: akida.core.HardwareDevice) None
Reset the device memory information.
- property soc
The SocDriver interface used by the device, or None if the device is not a SoC
- unprogram(self: akida.core.HardwareDevice) None
Clear current program from hardware device, restoring its initial state
SocDriver
- class akida.core.SocDriver
Attributes:
clock_mode – Clock mode of the NSoC.
power_measurement_enabled – Power measurement is off by default.
power_meter – Power meter associated to the SoC.
- property clock_mode
Clock mode of the NSoC.
- property power_measurement_enabled
Power measurement is off by default. Toggle it on to get power information in the statistics or when calling PowerMeter.events().
- property power_meter
Power meter associated to the SoC.
ClockMode
- class akida.core.soc.ClockMode
Clock mode configuration
Members:
Performance
Economy
LowPower
PowerMeter
- class akida.PowerMeter
Gives access to power measurements.
When power measurements are enabled for a specific device, this object stores them as a list of PowerEvent objects. The events list cannot exceed a predefined size: when it is full, older events are replaced by newer events.
Methods:
events(self) – Retrieve all pending events.
Attributes:
floor – Get the floor power.
- events(self: akida.core.PowerMeter) List[akida.core.PowerEvent]
Retrieve all pending events
- property floor
Get the floor power
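A sketch of collecting power events on a SoC device, assuming the model has already been mapped to the device (see SocDriver above):
>>> device = akida.devices()[0]
>>> if device.soc is not None:
...     device.soc.power_measurement_enabled = True
...     model.forward(inputs)
...     for event in device.soc.power_meter.events():
...         print(event.ts, event.power)  # power in mW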
- class akida.PowerEvent
A timestamped power measurement.
Each PowerEvent contains:
a voltage value in µV (microvolt),
a current value in mA (milliampere),
the corresponding power value in mW (milliwatt).
Attributes:
current – Current value in mA
power – Power value in mW
ts – Timestamp of the event
voltage – Voltage value in µV
- property current
Current value in mA
- property power
Power value in mW
- property ts
Timestamp of the event
- property voltage
Voltage value in µV
NP
- class akida.NP.Mesh
Attributes:
dma_conf – DMA configuration endpoint
dma_event – DMA event endpoint
nps – Neural processors
skip_dmas – Skip DMAs
- property dma_conf
DMA configuration endpoint
- property dma_event
DMA event endpoint
- property nps
Neural processors
- property skip_dmas
Skip DMAs
- class akida.NP.Info
Attributes:
ident – NP identifier
types – NP supported types
- property ident
NP identifier
- property types
NP supported types
- class akida.NP.Ident
Attributes:
col – NP column number
id – NP id
row – NP row number
- property col
NP column number
- property id
NP id
- property row
NP row number
- class akida.NP.Type
Members:
HRC : High Resolution Convolution
CNP1 : Convolutional Neural Processor Type 1
CNP2 : Convolutional Neural Processor Type 2
FNP2 : FullyConnected Neural Processor (external memory)
FNP3 : FullyConnected Neural Processor (internal memory)
VIT_BLOCK : Vision Transformer Block
TNP_B : Temporal Neural Processor Buffered
TNP_R : Temporal Neural Processor Recurrent
- class akida.NP.Mapping
The mapping of a subset of a Layer on a Neural Processor
Attributes:
filters – Number of filters processed by the Neural Processor
ident – Neural Processor identifier
single_buffer – Neural Processor uses a single or dual input buffer
type – Neural Processor type
- property filters
Number of filters processed by the Neural Processor
- property ident
Neural Processor identifier
- property single_buffer
Neural Processor uses a single or dual input buffer
- property type
Neural Processor type
MapMode
- class akida.MapMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Mapping mode
Define the strategy for the hardware mapping.
Attributes:
AllNps – Maximize HW resources (number of NPs used) with minimum HW passes.
HwPr – Maximize HW resources (number of NPs used) with maximum HW passes.
Minimal – Minimize HW resources, or mode to use a custom MeshMapper.
- AllNps = 1
Maximize HW resources (number of NPs used) with minimum HW passes.
- HwPr = 2
Maximize HW resources (number of NPs used) with maximum HW passes. This mode provides the potential for higher performance.
- Minimal = 3
Minimize HW resources, or mode to use a custom MeshMapper.