Note
Go to the end to download the full example code.
Build Vision Transformers for Akida
The Vision Transformer, or ViT, is a model for image classification that employs a Transformer-like architecture over patches of the image. An image is split into fixed-size patches, each of which is then linearly embedded; position embeddings are added, and the resulting sequence of vectors is fed to a standard Transformer encoder. Please refer to https://arxiv.org/abs/2010.11929 for further details.
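The front-end described above (patch splitting, linear embedding, position embeddings) can be sketched in a few lines of plain numpy. This is only an illustration of the idea, with illustrative names and randomly initialized weights, not code from any ViT library:

```python
import numpy as np

def patchify(image, patch_size):
    """Split a (H, W, C) image into flattened fixed-size patches."""
    h, w, c = image.shape
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, patch_size * patch_size * c)

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))
patches = patchify(image, 16)            # (196, 768): 14x14 patches of 16x16x3
proj = rng.standard_normal((768, 192))   # linear embedding to hidden size 192
tokens = patches @ proj                  # (196, 192)
pos = rng.standard_normal(tokens.shape)  # position embeddings (learned in a real ViT)
sequence = tokens + pos                  # this sequence feeds the Transformer encoder
```

With a 224x224 input and 16x16 patches this yields 196 tokens, matching the 197-token sequences (196 patches plus one class token) visible in the model summary later in this tutorial.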
Akida 2.0 now supports patch and position embeddings, as well as the encoder block, in hardware. This tutorial explains how to build an optimized ViT for Akida 2.0 hardware using the Akida models python API.
1. Model selection
There are many variants of ViT. The choice of model is typically a tradeoff among architecture size, accuracy, inference speed, and training capabilities.
The following table shows a few commonly used ViT variants:
Architecture | Original accuracy | #Params | Configuration
---|---|---|---
ViT Base | 79.90% | 86M | 12 heads, 12 blocks, hidden size 768
ViT Tiny | 75.48% | 5.8M | 3 heads, 12 blocks, hidden size 192
DeiT-dist Tiny | 74.17% | 5.8M | 3 heads, 12 blocks, hidden size 192
Note
Vision Transformer support was introduced in Akida 2.0.
The Akida model zoo provides tiny ViT architectures that are optimized to run on Akida hardware:
Both architectures have been modified so that their layers can be quantized to integer-only operations.
2. Model optimization for Akida hardware
ViT contains many encoder blocks that perform self-attention to process visual data. Each encoder block consists of several different layers. Running ViT optimally at the edge with Akida requires transforming each encoder block in the following way:
replace LayerNormalization with LayerMadNormalization,
replace the last LayerNormalization preceding the classification head with a BatchNormalization,
replace the Softmax operation in Attention with a shiftmax operation.
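The common theme behind these substitutions is replacing operations that are expensive in integer hardware (square roots, exponentials) with cheaper ones. The numpy sketch below illustrates that idea only; the actual LayerMadNormalization and shiftmax implementations live in the Akida packages and may differ in their exact formulas:

```python
import numpy as np

def mad_norm(x, eps=1e-6):
    """LayerNorm-like normalization that scales by the mean absolute
    deviation instead of the standard deviation (no square root needed)."""
    mu = x.mean(axis=-1, keepdims=True)
    mad = np.abs(x - mu).mean(axis=-1, keepdims=True)
    return (x - mu) / (mad + eps)

def shiftmax_like(x):
    """Softmax-like weighting built on powers of two, which map to bit
    shifts in hardware, instead of the exponential function."""
    x = x - x.max(axis=-1, keepdims=True)  # stability shift, as in softmax
    p = np.exp2(x)                         # 2**x instead of e**x
    return p / p.sum(axis=-1, keepdims=True)

scores = np.array([1.0, 2.0, 3.0])
w = shiftmax_like(scores)  # positive weights summing to 1, largest score wins
```

Because both variants preserve the qualitative behavior of the layers they replace (zero-centered normalized activations, attention weights that form a probability distribution), the transformed model stays functionally close to the original and can recover accuracy with fine-tuning, as described below.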
Note
The sections below show different ways to obtain a ViT for Akida that uses the above transformations.
3. Model Training
Akida accelerates ViT models that include the transformations described in Section 2. A ViT that runs optimally on Akida can be obtained in the following two ways:
3.1 Option 1: Training a ViT (original) model first and then transforming each layer incrementally
First, train a ViT (original) model on a custom dataset until it reaches satisfactory accuracy. This model can then be transformed into an Akida-optimized one as per Section 2: the replacement layers listed there are functionally equivalent to the layers they substitute in the original model.
Note
To limit the accuracy drop relative to the original model when applying the Section 2 transformations, it is recommended to replace the original layers all at once and to fine-tune afterwards.
The example below shows the transformation of ViT (tiny) into an optimized model that can run on Akida hardware.
The akida_models python package provides a Command Line Interface (CLI) to transform the vit_ti16 and deit_ti16 model architectures and fine-tune them.
$ akida_models create vit_ti16 -h
usage: akida_models create vit_ti16 [-h] [-c CLASSES] [-bw BASE_WEIGHTS] [--norm {LN,GN1,BN,LMN}]
[--last_norm {LN,BN}] [--softmax {softmax,softmax2}]
[--act {GeLU,ReLU8,swish}] [-i {224,384}]
optional arguments:
-h, --help show this help message and exit
-c CLASSES, --classes CLASSES
The number of classes, by default 1000.
-bw BASE_WEIGHTS, --base_weights BASE_WEIGHTS
Optional keras weights to load in the model, by default None.
--norm {LN,GN1,BN,LMN}
Replace normalization in model with a custom function, by default LN
--last_norm {LN,BN} Replace last normalization in model with a custom function, by default LN
--softmax {softmax,softmax2}
Replace softmax operation in model with custom function, by default softmax
--act {GeLU,ReLU8,swish}
Replace activation function in model with custom function, by default GeLU
-i {224,384}, --image_size {224,384}
The square input image size
The following shows the transformation of a vit_ti16 model architecture trained on ImageNet. The same method can be applied to other datasets.
# download the pre-trained weights
wget https://data.brainchip.com/models/AkidaV2/vit/vit_ti16_224.h5
# transformations: replace layer normalization with mad norm layer, last layer normalization
# with batch normalization, GeLU layer with ReLU and softmax with shiftmax layer
akida_models create -s vit_ti16_transformed.h5 vit_ti16 --norm LMN --last_norm BN --act ReLU8 \
--softmax softmax2 -bw vit_ti16_224.h5
# fine-tuning
imagenet_train tune -m vit_ti16_transformed.h5 -e 30 --optim Adam --lr_policy cosine_decay \
-lr 6e-5 -s vit_ti16_transformed.h5
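The `--lr_policy cosine_decay` option in the fine-tuning command above refers to a learning rate that starts at the base value (`-lr 6e-5`) and follows a half cosine down towards zero over the training epochs. A minimal sketch of that schedule, assuming the standard cosine-decay formula (the exact akida_models implementation may add warmup or a non-zero floor):

```python
import math

def cosine_decay(epoch, total_epochs, base_lr=6e-5):
    """Standard cosine decay: base_lr at epoch 0, approaching 0 at the end."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

# Learning rate for each of the 30 fine-tuning epochs used above
lrs = [cosine_decay(e, 30) for e in range(30)]
```

A gently decaying rate of this kind is a common choice for fine-tuning, since large late-stage updates would undo the accuracy recovered after the layer replacements.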
The above transformation generates a ViT model that is optimized to run efficiently on Akida hardware. Similar steps can also be applied to deit_ti16. The table below highlights the accuracy of the original and transformed models.
Architecture | Original accuracy | Transformed accuracy
---|---|---
ViT | 75.48% | 74.25%
DeiT-dist | 74.17% | 75.03%
Note
The models obtained above have floating point weights and are ready to be quantized. See Section 4.
3.2 Option 2: Transfer Learning using Pre-trained transformed model
The Akida models python package provides pre-trained vit_ti16 and deit_ti16 models that can be used for Transfer Learning on a custom dataset. Since these models are already transformed, no further transformation is required.
Visit our Transfer Learning Example to learn more about Transfer Learning using the Akida models python package. The following code snippet downloads a pre-trained model that can be used for Transfer Learning.
# The following API call downloads the vit_ti16 model trained on the ImageNet dataset
from akida_models import fetch_file
from akida_models.model_io import load_model

# Retrieve the float model with pretrained weights and load it
model_file = fetch_file(
    fname="bc_vit_ti16_224.h5",
    origin="https://data.brainchip.com/models/AkidaV2/vit/bc_vit_ti16_224.h5",
    cache_subdir='models/akidanet_imagenet')
model_keras = load_model(model_file)
model_keras.summary()
Downloading data from https://data.brainchip.com/models/AkidaV2/vit/bc_vit_ti16_224.h5.
Download complete.
/usr/local/lib/python3.11/dist-packages/keras/src/initializers/initializers.py:120: UserWarning: The initializer TruncatedNormal is unseeded and being called multiple times, which will return identical values each time (even if the initializer is unseeded). Please update your code to provide a seed to the initializer, or avoid using the same initializer instance more than once.
warnings.warn(
Model: "vit-tiny"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input (InputLayer) [(None, 224, 224, 3)] 0 []
Rescale (Rescaling) (None, 224, 224, 3) 0 ['input[0][0]']
Embedding (Conv2D) (None, 14, 14, 192) 147648 ['Rescale[0][0]']
reshape (Reshape) (None, 196, 192) 0 ['Embedding[0][0]']
ClassToken (ClassToken) (None, 197, 192) 192 ['reshape[0][0]']
Transformer/PosEmbed (AddP (None, 197, 192) 37824 ['ClassToken[0][0]']
ositionEmbs)
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['Transformer/PosEmbed[0][0]']
/LayerNorm_0 (LayerMadNorm
alization)
Transformer/EncoderBlock_0 (None, 197, 192) 37056 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_0 (None, 197, 192) 37056 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_0 (None, 197, 192) 37056 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_0 ((None, 197, 192), 0 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 37056 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 0 ['dropout[0][0]',
/add_1 (Add) 'Transformer/PosEmbed[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['Transformer/EncoderBlock_0/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_0 (None, 197, 768) 148224 ['Transformer/EncoderBlock_0/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_0 (None, 197, 768) 0 ['Transformer/EncoderBlock_0/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_1 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_0/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 147648 ['dropout_1[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_2 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 0 ['Transformer/EncoderBlock_0/a
/add_2 (Add) dd_1[0][0]',
'dropout_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_0/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_1 ((None, 197, 192), 0 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_3 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['dropout_3[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_0/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_1/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_1/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 768) 0 ['Transformer/EncoderBlock_1/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_4 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_1/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 147648 ['dropout_4[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_5 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['Transformer/EncoderBlock_1/a
/add_2 (Add) dd_1[0][0]',
'dropout_5[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['Transformer/EncoderBlock_1/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_2 (None, 197, 192) 37056 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_2 (None, 197, 192) 37056 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_2 (None, 197, 192) 37056 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_2 ((None, 197, 192), 0 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 37056 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_6 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 0 ['dropout_6[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_1/a
dd_2[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['Transformer/EncoderBlock_2/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_2 (None, 197, 768) 148224 ['Transformer/EncoderBlock_2/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_2 (None, 197, 768) 0 ['Transformer/EncoderBlock_2/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_7 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_2/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 147648 ['dropout_7[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_8 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 0 ['Transformer/EncoderBlock_2/a
/add_2 (Add) dd_1[0][0]',
'dropout_8[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['Transformer/EncoderBlock_2/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_3 (None, 197, 192) 37056 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_3 (None, 197, 192) 37056 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_3 (None, 197, 192) 37056 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_3 ((None, 197, 192), 0 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 37056 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_9 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 0 ['dropout_9[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_2/a
dd_2[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['Transformer/EncoderBlock_3/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_3 (None, 197, 768) 148224 ['Transformer/EncoderBlock_3/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_3 (None, 197, 768) 0 ['Transformer/EncoderBlock_3/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_10 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_3/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 147648 ['dropout_10[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_11 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 0 ['Transformer/EncoderBlock_3/a
/add_2 (Add) dd_1[0][0]',
'dropout_11[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['Transformer/EncoderBlock_3/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_4 (None, 197, 192) 37056 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_4 (None, 197, 192) 37056 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_4 (None, 197, 192) 37056 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_4 ((None, 197, 192), 0 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 37056 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_12 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 0 ['dropout_12[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_3/a
dd_2[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['Transformer/EncoderBlock_4/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_4 (None, 197, 768) 148224 ['Transformer/EncoderBlock_4/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_4 (None, 197, 768) 0 ['Transformer/EncoderBlock_4/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_13 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_4/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 147648 ['dropout_13[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_14 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 0 ['Transformer/EncoderBlock_4/a
/add_2 (Add) dd_1[0][0]',
'dropout_14[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['Transformer/EncoderBlock_4/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_5 (None, 197, 192) 37056 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_5 (None, 197, 192) 37056 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_5 (None, 197, 192) 37056 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_5 ((None, 197, 192), 0 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 37056 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_15 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 0 ['dropout_15[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_4/a
dd_2[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['Transformer/EncoderBlock_5/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_5 (None, 197, 768) 148224 ['Transformer/EncoderBlock_5/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_5 (None, 197, 768) 0 ['Transformer/EncoderBlock_5/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_16 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_5/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 147648 ['dropout_16[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_17 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 0 ['Transformer/EncoderBlock_5/a
/add_2 (Add) dd_1[0][0]',
'dropout_17[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['Transformer/EncoderBlock_5/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_6 (None, 197, 192) 37056 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_6 (None, 197, 192) 37056 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_6 (None, 197, 192) 37056 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_6 ((None, 197, 192), 0 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 37056 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_18 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 0 ['dropout_18[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_5/a
dd_2[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['Transformer/EncoderBlock_6/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_6 (None, 197, 768) 148224 ['Transformer/EncoderBlock_6/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_6 (None, 197, 768) 0 ['Transformer/EncoderBlock_6/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_19 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_6/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 147648 ['dropout_19[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_20 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 0 ['Transformer/EncoderBlock_6/a
/add_2 (Add) dd_1[0][0]',
'dropout_20[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['Transformer/EncoderBlock_6/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_7 (None, 197, 192) 37056 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_7 (None, 197, 192) 37056 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_7 (None, 197, 192) 37056 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_7 ((None, 197, 192), 0 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 37056 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_21 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 0 ['dropout_21[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_6/a
dd_2[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['Transformer/EncoderBlock_7/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_7 (None, 197, 768) 148224 ['Transformer/EncoderBlock_7/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_7 (None, 197, 768) 0 ['Transformer/EncoderBlock_7/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_22 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_7/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 147648 ['dropout_22[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_23 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 0 ['Transformer/EncoderBlock_7/a
/add_2 (Add) dd_1[0][0]',
'dropout_23[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['Transformer/EncoderBlock_7/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_8 (None, 197, 192) 37056 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_8 (None, 197, 192) 37056 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_8 (None, 197, 192) 37056 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_8 ((None, 197, 192), 0 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 37056 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_24 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 0 ['dropout_24[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_7/a
dd_2[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['Transformer/EncoderBlock_8/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_8 (None, 197, 768) 148224 ['Transformer/EncoderBlock_8/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_8 (None, 197, 768) 0 ['Transformer/EncoderBlock_8/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_25 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_8/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 147648 ['dropout_25[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_26 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 0 ['Transformer/EncoderBlock_8/a
/add_2 (Add) dd_1[0][0]',
'dropout_26[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['Transformer/EncoderBlock_8/a
/LayerNorm_0 (LayerMadNorm dd_2[0][0]']
alization)
Transformer/EncoderBlock_9 (None, 197, 192) 37056 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (Dense)
Transformer/EncoderBlock_9 (None, 197, 192) 37056 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (Dense)
Transformer/EncoderBlock_9 (None, 197, 192) 37056 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (Dense)
Transformer/EncoderBlock_9 ((None, 197, 192), 0 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Attention query[0][0]',
) 'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 37056 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (Dense) attention[0][0]']
dropout_27 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 0 ['dropout_27[0][0]',
/add_1 (Add) 'Transformer/EncoderBlock_8/a
dd_2[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['Transformer/EncoderBlock_9/a
/LayerNorm_2 (LayerMadNorm dd_1[0][0]']
alization)
Transformer/EncoderBlock_9 (None, 197, 768) 148224 ['Transformer/EncoderBlock_9/L
/MlpBlock/Dense_0 (Dense) ayerNorm_2[0][0]']
Transformer/EncoderBlock_9 (None, 197, 768) 0 ['Transformer/EncoderBlock_9/M
/MlpBlock/activation (ReLU lpBlock/Dense_0[0][0]']
)
dropout_28 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_9/M
lpBlock/activation[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 147648 ['dropout_28[0][0]']
/MlpBlock/Dense_1 (Dense)
dropout_29 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 0 ['Transformer/EncoderBlock_9/a
/add_2 (Add) dd_1[0][0]',
'dropout_29[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_9/a
0/LayerNorm_0 (LayerMadNor dd_2[0][0]']
malization)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (Dense)
Transformer/EncoderBlock_1 ((None, 197, 192), 0 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Attentio /query[0][0]',
n) 'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (Dense) /attention[0][0]']
dropout_30 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['dropout_30[0][0]',
0/add_1 (Add) 'Transformer/EncoderBlock_9/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_10/
0/LayerNorm_2 (LayerMadNor add_1[0][0]']
malization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_10/
0/MlpBlock/Dense_0 (Dense) LayerNorm_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 768) 0 ['Transformer/EncoderBlock_10/
0/MlpBlock/activation (ReL MlpBlock/Dense_0[0][0]']
U)
dropout_31 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_10/
MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 147648 ['dropout_31[0][0]']
0/MlpBlock/Dense_1 (Dense)
dropout_32 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
0/add_2 (Add) add_1[0][0]',
'dropout_32[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_10/
1/LayerNorm_0 (LayerMadNor add_2[0][0]']
malization)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (Dense)
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (Dense)
Transformer/EncoderBlock_1 ((None, 197, 192), 0 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Attentio /query[0][0]',
n) 'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37056 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (Dense) /attention[0][0]']
dropout_33 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['dropout_33[0][0]',
1/add_1 (Add) 'Transformer/EncoderBlock_10/
add_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_11/
1/LayerNorm_2 (LayerMadNor add_1[0][0]']
malization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_11/
1/MlpBlock/Dense_0 (Dense) LayerNorm_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 768) 0 ['Transformer/EncoderBlock_11/
1/MlpBlock/activation (ReL MlpBlock/Dense_0[0][0]']
U)
dropout_34 (Dropout) (None, 197, 768) 0 ['Transformer/EncoderBlock_11/
MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 147648 ['dropout_34[0][0]']
1/MlpBlock/Dense_1 (Dense)
dropout_35 (Dropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
1/add_2 (Add) add_1[0][0]',
'dropout_35[0][0]']
Transformer/EncoderNorm (B (None, 197, 192) 768 ['Transformer/EncoderBlock_11/
atchNormalization) add_2[0][0]']
ExtractToken (ExtractToken (None, 192) 0 ['Transformer/EncoderNorm[0][0
) ]']
Head (Dense) (None, 1000) 193000 ['ExtractToken[0][0]']
==================================================================================================
Total params: 5717800 (21.81 MB)
Trainable params: 5717416 (21.81 MB)
Non-trainable params: 384 (1.50 KB)
__________________________________________________________________________________________________
Note
The models in Section 3 have floating-point weights. Once the desired accuracy is obtained, these models should go through quantization before conversion to Akida.
4. Model quantization
Akida 2.0 hardware adds efficient processing of 8-bit weights and activations for Vision Transformer models. The models from Section 3 must therefore be quantized so that both weights and activation outputs become 8-bit integers. This yields a smaller model with minimal to no drop in accuracy, and improves latency and power consumption when running on Akida hardware.
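To see what mapping weights to 8-bit integers means numerically, the sketch below applies symmetric per-tensor quantization to a small weight array. This is a simplified illustration of the arithmetic, not the QuantizeML implementation:

```python
import numpy as np

def quantize_symmetric(x, bits=8):
    """Symmetric per-tensor quantization: map floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax    # a single scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the integers and the scale."""
    return q.astype(np.float32) * scale

w = np.array([-0.52, 0.0, 0.26, 0.51], dtype=np.float32)
q, scale = quantize_symmetric(w)
w_hat = dequantize(q, scale)
# Per-element reconstruction error is bounded by scale / 2 for in-range values.
```

The whole tensor is represented by int8 values plus one float scale, which is what makes the 8-bit model smaller and cheaper to execute.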
ViT models can be quantized with the QuantizeML Python package using either Post-Training Quantization (PTQ) or Quantization-Aware Training (QAT). The following section shows an example: quantization of vit_ti16 trained on the ImageNet dataset.
4.1 Post-Training Quantization
Using the QuantizeML Python package, the ViT model can be quantized to 8-bit integers (both weights and activation outputs). PTQ requires calibration (ideally using reference data), which determines the optimal quantization ranges. To learn more about PTQ, refer to the Advanced QuantizeML tutorial.
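Conceptually, calibration runs sample data through the floating-point model and records activation statistics that are then used to pick quantization ranges. The sketch below is a simplified stand-in for this process, not the QuantizeML internals: it tracks the running maximum absolute activation over calibration batches and derives an int8 scale from it:

```python
import numpy as np

class RangeCalibrator:
    """Track the maximum absolute activation seen across calibration batches."""
    def __init__(self):
        self.max_abs = 0.0

    def update(self, activations):
        self.max_abs = max(self.max_abs, float(np.max(np.abs(activations))))

    def scale(self, bits=8):
        # Quantization scale for signed integers of the given bit width.
        return self.max_abs / (2 ** (bits - 1) - 1)

rng = np.random.default_rng(0)
cal = RangeCalibrator()
for _ in range(4):  # pretend these are calibration batches of activations
    batch = rng.normal(scale=0.1, size=(32, 192)).astype(np.float32)
    cal.update(batch)
scale = cal.scale()  # would be used to quantize activations to int8
```

Calibrating with real reference samples rather than random data generally yields tighter ranges and better quantized accuracy, which is why QuantizeML warns when random samples are used with per-axis quantization.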
# Using QuantizeML to perform quantization
from quantizeml.models import quantize, QuantizationParams
# Define the quantization parameters.
qparams = QuantizationParams(weight_bits=8, activation_bits=8)
# Quantize the model defined in Section 3.2
model_quantized = quantize(model_keras, qparams=qparams)
model_quantized.summary()
/usr/local/lib/python3.11/dist-packages/quantizeml/models/quantize.py:461: UserWarning: Quantizing per-axis with random calibration samples is not accurate. Set QuantizationParams.per_tensor_activations=True when calibrating with random samples.
warnings.warn("Quantizing per-axis with random calibration samples is not accurate.\
1024/1024 [==============================] - 17s 12ms/step
Model: "vit-tiny"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input (InputLayer) [(None, 224, 224, 3)] 0 []
Rescale (QuantizedRescalin (None, 224, 224, 3) 0 ['input[0][0]']
g)
Embedding (QuantizedConv2D (None, 14, 14, 192) 147648 ['Rescale[0][0]']
)
reshape (QuantizedReshape) (None, 196, 192) 0 ['Embedding[0][0]']
ClassToken (QuantizedClass (None, 197, 192) 192 ['reshape[0][0]']
Token)
Transformer/PosEmbed (Quan (None, 197, 192) 38208 ['ClassToken[0][0]']
tizedAddPositionEmbs)
Transformer/EncoderBlock_0 (None, 197, 192) 768 ['Transformer/PosEmbed[0][0]']
/LayerNorm_0 (QuantizedLay
erNormalization)
Transformer/EncoderBlock_0 (None, 197, 192) 37058 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_0 (None, 197, 192) 37058 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_0 (None, 197, 192) 37440 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_0 ((None, 197, 192), 384 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 37440 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout (QuantizedDropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['dropout[0][0]',
/add_1 (QuantizedAdd) 'Transformer/PosEmbed[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 768 ['Transformer/EncoderBlock_0/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_0 (None, 197, 768) 148224 ['Transformer/EncoderBlock_0/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_0 (None, 197, 768) 1536 ['Transformer/EncoderBlock_0/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_1 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_0/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 148032 ['dropout_1[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_2 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['Transformer/EncoderBlock_0/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_0/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_3 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_3[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_0/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_1/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_1/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_1/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_4 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_1/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_4[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_5 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_1/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_5[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 768 ['Transformer/EncoderBlock_1/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_2 (None, 197, 192) 37058 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_2 (None, 197, 192) 37058 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_2 (None, 197, 192) 37440 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_2 ((None, 197, 192), 384 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 37440 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_6 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['dropout_6[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_1/a
dd_2[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 768 ['Transformer/EncoderBlock_2/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_2 (None, 197, 768) 148224 ['Transformer/EncoderBlock_2/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_2 (None, 197, 768) 1536 ['Transformer/EncoderBlock_2/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_7 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_2/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 148032 ['dropout_7[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_8 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['Transformer/EncoderBlock_2/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_8[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 768 ['Transformer/EncoderBlock_2/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_3 (None, 197, 192) 37058 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_3 (None, 197, 192) 37058 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_3 (None, 197, 192) 37440 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_3 ((None, 197, 192), 384 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 37440 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_9 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['dropout_9[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_2/a
dd_2[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 768 ['Transformer/EncoderBlock_3/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_3 (None, 197, 768) 148224 ['Transformer/EncoderBlock_3/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_3 (None, 197, 768) 1536 ['Transformer/EncoderBlock_3/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_10 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_3/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 148032 ['dropout_10[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_11 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['Transformer/EncoderBlock_3/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_11[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 768 ['Transformer/EncoderBlock_3/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_4 (None, 197, 192) 37058 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_4 (None, 197, 192) 37058 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_4 (None, 197, 192) 37440 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_4 ((None, 197, 192), 384 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 37440 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_12 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['dropout_12[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_3/a
dd_2[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 768 ['Transformer/EncoderBlock_4/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_4 (None, 197, 768) 148224 ['Transformer/EncoderBlock_4/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_4 (None, 197, 768) 1536 ['Transformer/EncoderBlock_4/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_13 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_4/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 148032 ['dropout_13[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_14 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['Transformer/EncoderBlock_4/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_14[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 768 ['Transformer/EncoderBlock_4/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_5 (None, 197, 192) 37058 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_5 (None, 197, 192) 37058 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_5 (None, 197, 192) 37440 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_5 ((None, 197, 192), 384 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 37440 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_15 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['dropout_15[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_4/a
dd_2[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 768 ['Transformer/EncoderBlock_5/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_5 (None, 197, 768) 148224 ['Transformer/EncoderBlock_5/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_5 (None, 197, 768) 1536 ['Transformer/EncoderBlock_5/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_16 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_5/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 148032 ['dropout_16[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_17 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['Transformer/EncoderBlock_5/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_17[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 768 ['Transformer/EncoderBlock_5/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_6 (None, 197, 192) 37058 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_6 (None, 197, 192) 37058 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_6 (None, 197, 192) 37440 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_6 ((None, 197, 192), 384 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 37440 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_18 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['dropout_18[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_5/a
dd_2[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 768 ['Transformer/EncoderBlock_6/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_6 (None, 197, 768) 148224 ['Transformer/EncoderBlock_6/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_6 (None, 197, 768) 1536 ['Transformer/EncoderBlock_6/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_19 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_6/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 148032 ['dropout_19[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_20 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['Transformer/EncoderBlock_6/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_20[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 768 ['Transformer/EncoderBlock_6/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_7 (None, 197, 192) 37058 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_7 (None, 197, 192) 37058 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_7 (None, 197, 192) 37440 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_7 ((None, 197, 192), 384 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 37440 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_21 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['dropout_21[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_6/a
dd_2[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 768 ['Transformer/EncoderBlock_7/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_7 (None, 197, 768) 148224 ['Transformer/EncoderBlock_7/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_7 (None, 197, 768) 1536 ['Transformer/EncoderBlock_7/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_22 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_7/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 148032 ['dropout_22[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_23 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['Transformer/EncoderBlock_7/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_23[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 768 ['Transformer/EncoderBlock_7/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_8 (None, 197, 192) 37058 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_8 (None, 197, 192) 37058 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_8 (None, 197, 192) 37440 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_8 ((None, 197, 192), 384 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 37440 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_24 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['dropout_24[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_7/a
dd_2[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 768 ['Transformer/EncoderBlock_8/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_8 (None, 197, 768) 148224 ['Transformer/EncoderBlock_8/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_8 (None, 197, 768) 1536 ['Transformer/EncoderBlock_8/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_25 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_8/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 148032 ['dropout_25[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_26 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['Transformer/EncoderBlock_8/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_26[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 768 ['Transformer/EncoderBlock_8/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_9 (None, 197, 192) 37058 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_9 (None, 197, 192) 37058 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_9 (None, 197, 192) 37440 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_9 ((None, 197, 192), 384 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 37440 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_27 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['dropout_27[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_8/a
dd_2[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 768 ['Transformer/EncoderBlock_9/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_9 (None, 197, 768) 148224 ['Transformer/EncoderBlock_9/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_9 (None, 197, 768) 1536 ['Transformer/EncoderBlock_9/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_28 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_9/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 148032 ['dropout_28[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_29 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['Transformer/EncoderBlock_9/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_29[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_9/a
0/LayerNorm_0 (QuantizedLa dd_2[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (QuantizedDen
se)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (QuantizedDense
)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (QuantizedDen
se)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Quantize /query[0][0]',
dAttention) 'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (QuantizedDense /attention[0][0]']
)
dropout_30 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
ut) MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_30[0][0]',
0/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_9/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_10/
0/LayerNorm_2 (QuantizedLa add_1[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_10/
0/MlpBlock/Dense_0 (Quanti LayerNorm_2[0][0]']
zedDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_10/
0/MlpBlock/activation (Qua MlpBlock/Dense_0[0][0]']
ntizedReLU)
dropout_31 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_10/
ut) MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_31[0][0]']
0/MlpBlock/Dense_1 (Quanti
zedDense)
dropout_32 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
ut) MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_10/
0/add_2 (QuantizedAdd) add_1[0][0]',
'dropout_32[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_10/
1/LayerNorm_0 (QuantizedLa add_2[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (QuantizedDen
se)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (QuantizedDense
)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (QuantizedDen
se)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Quantize /query[0][0]',
dAttention) 'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (QuantizedDense /attention[0][0]']
)
dropout_33 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
ut) MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_33[0][0]',
1/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_10/
add_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_11/
1/LayerNorm_2 (QuantizedLa add_1[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_11/
1/MlpBlock/Dense_0 (Quanti LayerNorm_2[0][0]']
zedDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_11/
1/MlpBlock/activation (Qua MlpBlock/Dense_0[0][0]']
ntizedReLU)
dropout_34 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_11/
ut) MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_34[0][0]']
1/MlpBlock/Dense_1 (Quanti
zedDense)
dropout_35 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
ut) MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_11/
1/add_2 (QuantizedAdd) add_1[0][0]',
'dropout_35[0][0]']
Transformer/EncoderNorm (Q (None, 197, 192) 1152 ['Transformer/EncoderBlock_11/
uantizedBatchNormalization add_2[0][0]']
)
ExtractToken (QuantizedExt (None, 192) 0 ['Transformer/EncoderNorm[0][0
ractToken) ]']
Head (QuantizedDense) (None, 1000) 193000 ['ExtractToken[0][0]']
dequantizer (Dequantizer) (None, 1000) 0 ['Head[0][0]']
==================================================================================================
Total params: 5773912 (22.03 MB)
Trainable params: 5717416 (21.81 MB)
Non-trainable params: 56496 (220.69 KB)
__________________________________________________________________________________________________
The bc_vit_ti16_imagenet_pretrained helper was obtained with the same 8-bit quantization scheme but with an additional QAT step to further improve accuracy.
4.2 Quantization Aware Training (Optional)
In Section 4.1, we performed PTQ, converting the weights and activation outputs to 8-bit integers. In most cases, no accuracy drop is observed after quantization. However, when an accuracy drop does occur, the model can be further fine-tuned using QAT.
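As a side note, the 8-bit conversion mentioned above can be illustrated with a minimal, framework-free sketch of uniform symmetric quantization. This is a simplification for intuition only: QuantizeML uses its own calibration and per-tensor parameters, and the helper names below are hypothetical.

```python
def quantize_int8(values):
    """Uniform symmetric 8-bit quantization: floats -> int8 codes plus a scale."""
    max_abs = max(abs(v) for v in values)
    # Map the largest magnitude onto the int8 range [-127, 127]
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Map int8 codes back to approximate float values."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.003, 1.27]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
print(codes)  # integer codes, all within [-128, 127]
print(max(abs(a - w) for a, w in zip(approx, weights)))  # worst-case rounding error
```

The rounding error bounded by half a quantization step is what PTQ accepts up front, and what QAT compensates for by training with the quantized representation in the loop.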
The model obtained through the QuantizeML python package is a Keras model instance, which allows it to be fine-tuned on the original dataset to regain accuracy.
The Akida models python package provides pre-trained vit_ti16 and deit_ti16 models that have been trained using QAT. They can be used in the following way:
from akida_models import bc_vit_ti16_imagenet_pretrained
# Load the pre-trained quantized model
model_quantized = bc_vit_ti16_imagenet_pretrained()
model_quantized.summary()
Downloading data from https://data.brainchip.com/models/AkidaV2/vit/bc_vit_ti16_224_i8_w8_a8.h5.
24413248/24413248 [==============================] - 1s 0us/step
Download complete.
Model: "vit-tiny"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input (InputLayer) [(None, 224, 224, 3)] 0 []
Rescale (QuantizedRescalin (None, 224, 224, 3) 0 ['input[0][0]']
g)
Embedding (QuantizedConv2D (None, 14, 14, 192) 147648 ['Rescale[0][0]']
)
reshape (QuantizedReshape) (None, 196, 192) 0 ['Embedding[0][0]']
ClassToken (QuantizedClass (None, 197, 192) 192 ['reshape[0][0]']
Token)
Transformer/PosEmbed (Quan (None, 197, 192) 38208 ['ClassToken[0][0]']
tizedAddPositionEmbs)
Transformer/EncoderBlock_0 (None, 197, 192) 768 ['Transformer/PosEmbed[0][0]']
/LayerNorm_0 (QuantizedLay
erNormalization)
Transformer/EncoderBlock_0 (None, 197, 192) 37058 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_0 (None, 197, 192) 37058 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_0 (None, 197, 192) 37440 ['Transformer/EncoderBlock_0/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_0 ((None, 197, 192), 384 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 37440 ['Transformer/EncoderBlock_0/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout (QuantizedDropout) (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['dropout[0][0]',
/add_1 (QuantizedAdd) 'Transformer/PosEmbed[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 768 ['Transformer/EncoderBlock_0/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_0 (None, 197, 768) 148224 ['Transformer/EncoderBlock_0/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_0 (None, 197, 768) 1536 ['Transformer/EncoderBlock_0/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_1 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_0/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 148032 ['dropout_1[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_2 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_0/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_0 (None, 197, 192) 384 ['Transformer/EncoderBlock_0/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_0/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_1/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_1/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_1/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_3 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_3[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_0/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_1/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_1/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_1/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_4 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_1/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_4[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_5 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_1/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_1/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_5[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 768 ['Transformer/EncoderBlock_1/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_2 (None, 197, 192) 37058 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_2 (None, 197, 192) 37058 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_2 (None, 197, 192) 37440 ['Transformer/EncoderBlock_2/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_2 ((None, 197, 192), 384 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_2/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 37440 ['Transformer/EncoderBlock_2/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_6 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['dropout_6[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_1/a
dd_2[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 768 ['Transformer/EncoderBlock_2/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_2 (None, 197, 768) 148224 ['Transformer/EncoderBlock_2/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_2 (None, 197, 768) 1536 ['Transformer/EncoderBlock_2/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_7 (QuantizedDropou (None, 197, 768) 0 ['Transformer/EncoderBlock_2/M
t) lpBlock/activation[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 148032 ['dropout_7[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_8 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_2/M
t) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_2 (None, 197, 192) 384 ['Transformer/EncoderBlock_2/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_8[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 768 ['Transformer/EncoderBlock_2/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_3 (None, 197, 192) 37058 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_3 (None, 197, 192) 37058 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_3 (None, 197, 192) 37440 ['Transformer/EncoderBlock_3/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_3 ((None, 197, 192), 384 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_3/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 37440 ['Transformer/EncoderBlock_3/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_9 (QuantizedDropou (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
t) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['dropout_9[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_2/a
dd_2[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 768 ['Transformer/EncoderBlock_3/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_3 (None, 197, 768) 148224 ['Transformer/EncoderBlock_3/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_3 (None, 197, 768) 1536 ['Transformer/EncoderBlock_3/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_10 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_3/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 148032 ['dropout_10[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_11 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_3/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_3 (None, 197, 192) 384 ['Transformer/EncoderBlock_3/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_11[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 768 ['Transformer/EncoderBlock_3/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_4 (None, 197, 192) 37058 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_4 (None, 197, 192) 37058 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_4 (None, 197, 192) 37440 ['Transformer/EncoderBlock_4/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_4 ((None, 197, 192), 384 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_4/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 37440 ['Transformer/EncoderBlock_4/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_12 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['dropout_12[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_3/a
dd_2[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 768 ['Transformer/EncoderBlock_4/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_4 (None, 197, 768) 148224 ['Transformer/EncoderBlock_4/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_4 (None, 197, 768) 1536 ['Transformer/EncoderBlock_4/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_13 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_4/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 148032 ['dropout_13[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_14 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_4/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_4 (None, 197, 192) 384 ['Transformer/EncoderBlock_4/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_14[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 768 ['Transformer/EncoderBlock_4/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_5 (None, 197, 192) 37058 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_5 (None, 197, 192) 37058 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_5 (None, 197, 192) 37440 ['Transformer/EncoderBlock_5/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_5 ((None, 197, 192), 384 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_5/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 37440 ['Transformer/EncoderBlock_5/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_15 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['dropout_15[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_4/a
dd_2[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 768 ['Transformer/EncoderBlock_5/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_5 (None, 197, 768) 148224 ['Transformer/EncoderBlock_5/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_5 (None, 197, 768) 1536 ['Transformer/EncoderBlock_5/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_16 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_5/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 148032 ['dropout_16[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_17 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_5/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_5 (None, 197, 192) 384 ['Transformer/EncoderBlock_5/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_17[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 768 ['Transformer/EncoderBlock_5/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_6 (None, 197, 192) 37058 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_6 (None, 197, 192) 37058 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_6 (None, 197, 192) 37440 ['Transformer/EncoderBlock_6/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_6 ((None, 197, 192), 384 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_6/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 37440 ['Transformer/EncoderBlock_6/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_18 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['dropout_18[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_5/a
dd_2[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 768 ['Transformer/EncoderBlock_6/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_6 (None, 197, 768) 148224 ['Transformer/EncoderBlock_6/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_6 (None, 197, 768) 1536 ['Transformer/EncoderBlock_6/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_19 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_6/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 148032 ['dropout_19[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_20 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_6/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_6 (None, 197, 192) 384 ['Transformer/EncoderBlock_6/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_20[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 768 ['Transformer/EncoderBlock_6/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_7 (None, 197, 192) 37058 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_7 (None, 197, 192) 37058 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_7 (None, 197, 192) 37440 ['Transformer/EncoderBlock_7/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_7 ((None, 197, 192), 384 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_7/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 37440 ['Transformer/EncoderBlock_7/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_21 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['dropout_21[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_6/a
dd_2[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 768 ['Transformer/EncoderBlock_7/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_7 (None, 197, 768) 148224 ['Transformer/EncoderBlock_7/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_7 (None, 197, 768) 1536 ['Transformer/EncoderBlock_7/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_22 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_7/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 148032 ['dropout_22[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_23 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_7/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_7 (None, 197, 192) 384 ['Transformer/EncoderBlock_7/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_23[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 768 ['Transformer/EncoderBlock_7/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_8 (None, 197, 192) 37058 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_8 (None, 197, 192) 37058 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_8 (None, 197, 192) 37440 ['Transformer/EncoderBlock_8/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_8 ((None, 197, 192), 384 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_8/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 37440 ['Transformer/EncoderBlock_8/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_24 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['dropout_24[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_7/a
dd_2[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 768 ['Transformer/EncoderBlock_8/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_8 (None, 197, 768) 148224 ['Transformer/EncoderBlock_8/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_8 (None, 197, 768) 1536 ['Transformer/EncoderBlock_8/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_25 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_8/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 148032 ['dropout_25[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_26 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_8/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_8 (None, 197, 192) 384 ['Transformer/EncoderBlock_8/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_26[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 768 ['Transformer/EncoderBlock_8/a
/LayerNorm_0 (QuantizedLay dd_2[0][0]']
erNormalization)
Transformer/EncoderBlock_9 (None, 197, 192) 37058 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/query (QuantizedDens
e)
Transformer/EncoderBlock_9 (None, 197, 192) 37058 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/key (QuantizedDense)
Transformer/EncoderBlock_9 (None, 197, 192) 37440 ['Transformer/EncoderBlock_9/L
/MultiHeadDotProductAttent ayerNorm_0[0][0]']
ion_1/value (QuantizedDens
e)
Transformer/EncoderBlock_9 ((None, 197, 192), 384 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent (None, 3, 197, 197)) ultiHeadDotProductAttention_1/
ion_1/attention (Quantized query[0][0]',
Attention) 'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
key[0][0]',
'Transformer/EncoderBlock_9/M
ultiHeadDotProductAttention_1/
value[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 37440 ['Transformer/EncoderBlock_9/M
/MultiHeadDotProductAttent ultiHeadDotProductAttention_1/
ion_1/out (QuantizedDense) attention[0][0]']
dropout_27 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
ut) ultiHeadDotProductAttention_1/
out[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['dropout_27[0][0]',
/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_8/a
dd_2[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 768 ['Transformer/EncoderBlock_9/a
/LayerNorm_2 (QuantizedLay dd_1[0][0]']
erNormalization)
Transformer/EncoderBlock_9 (None, 197, 768) 148224 ['Transformer/EncoderBlock_9/L
/MlpBlock/Dense_0 (Quantiz ayerNorm_2[0][0]']
edDense)
Transformer/EncoderBlock_9 (None, 197, 768) 1536 ['Transformer/EncoderBlock_9/M
/MlpBlock/activation (Quan lpBlock/Dense_0[0][0]']
tizedReLU)
dropout_28 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_9/M
ut) lpBlock/activation[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 148032 ['dropout_28[0][0]']
/MlpBlock/Dense_1 (Quantiz
edDense)
dropout_29 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_9/M
ut) lpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_9 (None, 197, 192) 384 ['Transformer/EncoderBlock_9/a
/add_2 (QuantizedAdd) dd_1[0][0]',
'dropout_29[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_9/a
0/LayerNorm_0 (QuantizedLa dd_2[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (QuantizedDen
se)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (QuantizedDense
)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (QuantizedDen
se)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Quantize /query[0][0]',
dAttention) 'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_10/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_10/
0/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (QuantizedDense /attention[0][0]']
)
dropout_30 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
ut) MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_30[0][0]',
0/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_9/a
dd_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_10/
0/LayerNorm_2 (QuantizedLa add_1[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_10/
0/MlpBlock/Dense_0 (Quanti LayerNorm_2[0][0]']
zedDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_10/
0/MlpBlock/activation (Qua MlpBlock/Dense_0[0][0]']
ntizedReLU)
dropout_31 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_10/
ut) MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_31[0][0]']
0/MlpBlock/Dense_1 (Quanti
zedDense)
dropout_32 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_10/
ut) MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_10/
0/add_2 (QuantizedAdd) add_1[0][0]',
'dropout_32[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_10/
1/LayerNorm_0 (QuantizedLa add_2[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/query (QuantizedDen
se)
Transformer/EncoderBlock_1 (None, 197, 192) 37058 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/key (QuantizedDense
)
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten LayerNorm_0[0][0]']
tion_1/value (QuantizedDen
se)
Transformer/EncoderBlock_1 ((None, 197, 192), 384 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten (None, 3, 197, 197)) MultiHeadDotProductAttention_1
tion_1/attention (Quantize /query[0][0]',
dAttention) 'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/key[0][0]',
'Transformer/EncoderBlock_11/
MultiHeadDotProductAttention_1
/value[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 37440 ['Transformer/EncoderBlock_11/
1/MultiHeadDotProductAtten MultiHeadDotProductAttention_1
tion_1/out (QuantizedDense /attention[0][0]']
)
dropout_33 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
ut) MultiHeadDotProductAttention_1
/out[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['dropout_33[0][0]',
1/add_1 (QuantizedAdd) 'Transformer/EncoderBlock_10/
add_2[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 768 ['Transformer/EncoderBlock_11/
1/LayerNorm_2 (QuantizedLa add_1[0][0]']
yerNormalization)
Transformer/EncoderBlock_1 (None, 197, 768) 148224 ['Transformer/EncoderBlock_11/
1/MlpBlock/Dense_0 (Quanti LayerNorm_2[0][0]']
zedDense)
Transformer/EncoderBlock_1 (None, 197, 768) 1536 ['Transformer/EncoderBlock_11/
1/MlpBlock/activation (Qua MlpBlock/Dense_0[0][0]']
ntizedReLU)
dropout_34 (QuantizedDropo (None, 197, 768) 0 ['Transformer/EncoderBlock_11/
ut) MlpBlock/activation[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 148032 ['dropout_34[0][0]']
1/MlpBlock/Dense_1 (Quanti
zedDense)
dropout_35 (QuantizedDropo (None, 197, 192) 0 ['Transformer/EncoderBlock_11/
ut) MlpBlock/Dense_1[0][0]']
Transformer/EncoderBlock_1 (None, 197, 192) 384 ['Transformer/EncoderBlock_11/
1/add_2 (QuantizedAdd) add_1[0][0]',
'dropout_35[0][0]']
Transformer/EncoderNorm (Q (None, 197, 192) 1152 ['Transformer/EncoderBlock_11/
uantizedBatchNormalization add_2[0][0]']
)
ExtractToken (QuantizedExt (None, 192) 0 ['Transformer/EncoderNorm[0][0
ractToken) ]']
Head (QuantizedDense) (None, 1000) 193000 ['ExtractToken[0][0]']
dequantizer (Dequantizer) (None, 1000) 0 ['Head[0][0]']
==================================================================================================
Total params: 5773912 (22.03 MB)
Trainable params: 5717416 (21.81 MB)
Non-trainable params: 56496 (220.69 KB)
__________________________________________________________________________________________________
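As a quick sanity check, some of the parameter counts in the summary above can be reproduced from the layer shapes (hidden size 192, MLP dimension 768). The sketch below is plain arithmetic with no Akida dependencies; note that the quantized layers appear to carry a few extra variables beyond weights and biases (e.g. `Dense_1` reports 384 more parameters than the bare count), which we assume come from quantization, though the summary does not say so explicitly.

```python
# Reproduce parameter counts from the model summary (hidden size 192, MLP dim 768)
hidden, mlp_dim = 192, 768

# MlpBlock/Dense_0: weight matrix plus bias, matching the summary exactly
dense0 = hidden * mlp_dim + mlp_dim
print(dense0)  # 148224

# MlpBlock/Dense_1: weights plus bias; the summary reports 384 more
# (presumably additional per-channel quantization variables)
dense1 = mlp_dim * hidden + hidden
print(dense1, 148032 - dense1)  # 147648 384
```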
5. Conversion to Akida
A model quantized through the QuantizeML Python package is ready to be converted to Akida. Once the quantized model reaches the desired accuracy, the CNN2SNN toolkit is used for the conversion. No further optimization is required, and equivalent accuracy is observed once the model is converted to Akida.
from cnn2snn import convert
# Convert the model
model_akida = convert(model_quantized)
model_akida.summary()
Model Summary
________________________________________________
Input shape Output shape Sequences Layers
================================================
[224, 224, 3] [1, 1, 1000] 1 14
________________________________________________
_______________________________________________________________________
Layer (type) Output shape Kernel shape
================= SW/Embedding-dequantizer (Software) =================
Embedding (Stem) [1, 197, 192] (16, 16, 3, 192)
_______________________________________________________________________
VitEncoderBlock_2 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_3 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_4 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_5 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_6 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_7 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_8 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_9 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_10 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_11 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_12 (VitEncoderBlock) [1, 197, 192] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
_______________________________________________________________________
VitEncoderBlock_13 (VitEncoderBlock) [1, 1, 1000] N/A
norm_mha N/A
query (192, 192)
key (192, 192)
value (192, 192)
attention N/A
attention_projection (192, 192)
skip_connection_1 N/A
norm_mlp N/A
mlp_1 (192, 768)
mlp_2 (768, 192)
skip_connection_2 N/A
batch_norm N/A
extract_token N/A
head (192, 1000)
_______________________________________________________________________
dequantizer (Dequantizer) [1, 1, 1000] N/A
_______________________________________________________________________
6. Displaying results: attention maps
Instead of showing predictions, we display attention maps on an image. This is derived from the attention rollout method of Abnar et al., as shown in the corresponding Keras tutorial, and aims to highlight the model's ability to focus on the relevant parts of the input image.
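The rollout computation itself can be sketched on toy matrices, independently of any model. This is a minimal NumPy illustration using the common row-normalization variant (the function below normalizes slightly differently, over whole matrices); all shapes and values here are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, num_tokens = 3, 5

# Toy head-averaged attention matrices: each row is a distribution over tokens
attn = rng.random((num_layers, num_tokens, num_tokens))
attn /= attn.sum(axis=-1, keepdims=True)

# Add the identity to account for residual connections, then re-normalize
aug = attn + np.eye(num_tokens)
aug /= aug.sum(axis=-1, keepdims=True)

# Rollout: chain-multiply the matrices from the last layer down to the first
rollout = aug[-1]
for layer in range(num_layers - 2, -1, -1):
    rollout = rollout @ aug[layer]

# Rows remain probability distributions; row 0 gives the first token's
# attention over the input tokens after traversing all layers
assert np.allclose(rollout.sum(axis=-1), 1.0)
```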
As in the AkidaNet example, since ImageNet images are not publicly available, this example uses a set of 10 copyright-free images found on Google using ImageNet class names.
Get the preprocessed sample images:
import numpy as np
from akida_models.imagenet import get_preprocessed_samples
# Model specification and hyperparameters
NUM_CHANNELS = 3
IMAGE_SIZE = 224
NUM_IMAGES = 10
# Load the preprocessed images
x_test, _ = get_preprocessed_samples(IMAGE_SIZE, NUM_CHANNELS)
print(f'{NUM_IMAGES} images loaded and preprocessed.')
10 images loaded and preprocessed.
Build and display the attention map for one selected sample:
import cv2
import matplotlib.pyplot as plt
from keras import Model
from quantizeml.layers import ClassToken, Attention
from quantizeml.tensors import FixedPoint
from quantizeml.models.transforms.transforms_utils import get_layers_by_type
def build_attention_map(model, image):
    # Get the Attention layers list
    attentions = get_layers_by_type(model, Attention)

    # Calculate the number of tokens and deduce the grid size
    num_tokens = sum(isinstance(ly, ClassToken) for ly in model.layers)
    grid_size = int(np.sqrt(attentions[0].output_shape[0][-2] - num_tokens))

    # Get the attention weights from each transformer block
    outputs = [la.output[1] for la in attentions]
    weights = Model(inputs=model.inputs, outputs=outputs).predict(np.expand_dims(image, 0))

    # Convert to float if needed
    weights = [w.to_float() if isinstance(w, FixedPoint) else w for w in weights]
    weights = np.array(weights)

    # Number of heads and layers
    num_heads = weights.shape[2]
    num_layers = weights.shape[0]
    reshaped = weights.reshape((num_layers, num_heads, grid_size**2 + 1, grid_size**2 + 1))

    # Average the attention weights across all heads
    reshaped = reshaped.mean(axis=1)

    # To account for residual connections, add an identity matrix to the
    # attention matrix and re-normalize the weights
    reshaped = reshaped + np.eye(reshaped.shape[1])
    reshaped = reshaped / reshaped.sum(axis=(1, 2))[:, np.newaxis, np.newaxis]

    # Recursively multiply the weight matrices, from the last layer to the first
    v = reshaped[-1]
    for n in range(1, len(reshaped)):
        v = np.matmul(v, reshaped[-1 - n])

    # Attention from the output (class) token to the input space
    mask = v[0, 1:].reshape(grid_size, grid_size)
    mask = cv2.resize(mask / mask.max(), (image.shape[1], image.shape[0]))[..., np.newaxis]
    return (mask * image).astype("uint8")
# Using a specific image for which attention map is easier to observe
image = x_test[8]
# Compute the attention map
attention_float = build_attention_map(model_keras, image)
attention_quantized = build_attention_map(model_quantized, image)
# Display the attention map
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3)
ax1.axis('off')
ax1.set_title('Original')
ax1.imshow(image)
ax2.axis('off')
ax2.set_title('Float')
ax2.imshow(attention_float)
ax3.axis('off')
ax3.set_title('Quantized')
ax3.imshow(attention_quantized)
fig.suptitle('Attention masks', fontsize=10)
plt.show()
1/1 [==============================] - 7s 7s/step
1/1 [==============================] - 51s 51s/step
Total running time of the script: (4 minutes 4.565 seconds)