My Tensorflow collection

My collection, from left to right: RISC-V, ARM, and x86
  • A not-too-recent x86 gaming PC, featuring a 6th-generation Intel Core (i7-6700HQ).
  • Two ARM SBCs: a Raspberry Pi 4 (64-bit quad-core Cortex-A72) and an Nvidia Jetson Nano (64-bit quad-core Cortex-A57).
  • Several RISC-V boards, based on the Kendryte K210 processor (64-bit, dual-core).
  • A gaming PC is in the 1000-2000 € price range. RAM is typically 8 to 32 GB. It always runs a full operating system (Windows, Linux, macOS).
  • An ARM SBC is in the 50-100 € price range. RAM is typically 256 MB to a few GB. It typically runs some flavor of Linux.
  • A RISC-V development board can cost as little as 30 €. RAM is measured in megabytes (mine has 8 MB). It is not powerful enough to run a full OS, but it can run an embedded MicroPython interpreter.
Edge TPU USB accelerator
The K210 RISC-V processor includes a Neural Network accelerator (the KPU).
The image classification model running on RISC-V. The camera looks at a smartphone displaying the toddler’s picture; she is recognized with an 89% probability.

Standing on the shoulders of giants: Transfer learning

To build our image classification model, there is the “let’s start from scratch” way: learn how to design a CNN (Convolutional Neural Network — a typical network architecture for image classification), then spend time training the model from scratch. Here, time also means electricity, as training a model from scratch can be very compute intensive. The alternative is transfer learning: reuse a network already trained on a large dataset, and only train a small classifier on top of it.

Synthetic data

As for any deep learning application, we start by gathering training data, i.e. a set of images for the 4 classes at hand. And, as for any deep learning application, the more training images, the better.

An example of data augmentation
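Data augmentation turns a small set of originals into many more training images by applying label-preserving transformations. A minimal NumPy sketch of the idea, for illustration only (a Keras ImageDataGenerator would be the idiomatic way to do this in a real training pipeline):

```python
import numpy as np

def augment(image):
    """Generate simple variants of one training image.

    Returns the original plus a horizontal flip and a 90-degree
    rotation: three training images from a single original.
    """
    flipped = np.flip(image, axis=1)   # mirror left-right
    rotated = np.rot90(image, k=1)     # rotate 90 degrees
    return [image, flipped, rotated]

# A fake 96x96 RGB image, matching the model's input size
img = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)
variants = augment(img)
print(len(variants))  # 3
```

Real pipelines also use random zooms, shifts, and brightness changes, applied on the fly at each training epoch so the model never sees exactly the same image twice.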



  • a convolutional base, whose purpose in life is to learn image features. The first layers learn to recognize basic shapes (edges, lines, ovals), whereas the last layers learn to recognize ‘higher-level’ elements, such as eyes, ears ..
  • a classifier, whose purpose is to generate a prediction, e.g. is the image a lizard or a chameleon.
From the excellent book ‘Deep Learning with Python’ by François Chollet, Keras’s creator
  • Load from the internet a trained MobileNet model, without its classifier (as indicated by the include_top=False argument):
base_mobilenet = tf.keras.applications.mobilenet_v2.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')
  • Create a new model which combines MobileNet and our own classifier
base_mobilenet.trainable = False
inputs = tf.keras.Input(shape=(96,96,3))
# combine Mobilenet ...
x = base_mobilenet(inputs, training=False)
# with our own classifier ...
x = tf.keras.layers.GlobalAveragePooling2D() (x)
outputs = tf.keras.layers.Dense(4, activation = 'softmax') (x)
# to create a new model, to be trained
new_model = tf.keras.Model(inputs, outputs)
Layer (type) Output Shape Param #
input_2 (InputLayer) [(None, 96, 96, 3)] 0
mobilenetv2_1.00_96 (None, 3, 3, 1280) 2257984
global_average_pooling2d_1 (None, 1280) 0
dense_1 (Dense) (None, 4) 5124
=================================================================
Total params: 2,263,108
Trainable params: 5,124
Non-trainable params: 2,257,984
  • model’s inputs are images 96x96 pixels, with 3 colors
  • the trained MobileNet is included, with 2.2 million parameters (aka weights)
  • the model’s output is 4 classes
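The article does not show the training step itself; the sketch below shows how such a model would typically be compiled and trained in Keras. To keep it self-contained and avoid downloading MobileNet weights, a tiny stand-in convolutional base replaces MobileNet, and random arrays stand in for the real training images — the structure (frozen base, trainable classifier) is the same:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the frozen MobileNet base (assumption, for
# illustration only -- the real model uses MobileNetV2)
base = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
])
base.trainable = False  # freeze the base, as in the article

inputs = tf.keras.Input(shape=(96, 96, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(4, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# Only the classifier's weights are updated during training
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Placeholder arrays standing in for the real labeled images
x_train = np.random.rand(16, 96, 96, 3).astype('float32')
y_train = np.random.randint(0, 4, size=(16,))
model.fit(x_train, y_train, epochs=1, verbose=0)
```

Because the base is frozen, each training step only adjusts the few thousand classifier weights, which is why transfer learning trains in minutes rather than days.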
Flattening: The orange boxes represent the output of MobileNet without its classifier (3x3x1280). The diagram shows 7 neurons and 3 classes, whereas our example has 1280 neurons and 4 classes, but you get the point.
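The GlobalAveragePooling2D layer collapses each 3x3 feature map into a single number by averaging it, leaving one value per channel. A minimal NumPy sketch of the same operation:

```python
import numpy as np

# The convolutional base outputs 1280 feature maps of size 3x3
features = np.random.rand(3, 3, 1280)

# Global average pooling: average each 3x3 map down to one value,
# leaving a 1280-element vector for the Dense classifier to consume
pooled = features.mean(axis=(0, 1))
print(pooled.shape)  # (1280,)
```

This is why the summary above shows the pooling layer turning (None, 3, 3, 1280) into (None, 1280) with zero parameters: averaging has nothing to learn.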



pascal boudalier


Tinkering with Raspberry PI, ESP32, Solar, LifePo4, mppt, IoT, Zwave, energy harvesting, Python, MicroPython, Keras, Tensorflow, tflite, TPU. Ex Intel and HP