TensorFlow Easy
Identify objects using a TensorFlow deep neural network.
By Laurent Itti | itti@usc.edu | http://jevois.org | GPL v3
Language: C++ | Supports mappings with USB output: Yes | Supports mappings with NO USB output: Yes
 Video Mapping:   NONE 0 0 0.0 YUYV 320 240 60.0 JeVois TensorFlowEasy
 Video Mapping:   YUYV 320 308 30.0 YUYV 320 240 30.0 JeVois TensorFlowEasy
 Video Mapping:   YUYV 640 548 30.0 YUYV 640 480 30.0 JeVois TensorFlowEasy
 Video Mapping:   YUYV 1280 1092 7.0 YUYV 1280 1024 7.0 JeVois TensorFlowEasy

Module Documentation

TensorFlow is a popular neural network framework. This module identifies the object in a square region in the center of the camera field of view using a deep convolutional neural network.

The deep network analyzes the image by filtering it using many different filter kernels, and several stacked passes (network layers). This essentially amounts to detecting the presence of both simple and complex parts of known objects in the image (e.g., from detecting edges in lower layers of the network to detecting car wheels or even whole cars in higher layers). The last layer of the network is reduced to a vector with one entry per known kind of object (object class). This module returns the class names of the top scoring candidates in the output vector, if any have scored above a minimum confidence threshold. When nothing is recognized with sufficiently high confidence, there is no output.
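As a rough illustration of this last step (a hedged sketch, not the module's actual code; all names here are hypothetical), picking the reported candidates from the final score vector amounts to:

    // Sketch: keep up to 'top' classes scoring above 'thresh', best first.
    // All names are hypothetical; this is not the module's actual code.
    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::pair<std::string, float>>
    topScoring(std::vector<float> const & scores,      // one entry per class, in percent
               std::vector<std::string> const & names, // class names, e.g., from labels.txt
               float thresh, std::size_t top)
    {
      std::vector<std::pair<std::string, float>> result;
      for (std::size_t i = 0; i < scores.size(); ++i)
        if (scores[i] >= thresh) result.emplace_back(names[i], scores[i]);

      // Sort by decreasing score and truncate to the top entries:
      std::sort(result.begin(), result.end(),
                [](auto const & a, auto const & b) { return a.second > b.second; });
      if (result.size() > top) result.resize(top);

      return result; // empty when nothing scored above thresh
    }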

This module runs a TensorFlow network and shows the top-scoring results. In this module, we run the deep network on every video frame, so framerate will vary depending on network complexity (see below). Point your camera towards some interesting object, make the object fit within the grey box shown in the video (which will be fed to the neural network), keep it stable, and TensorFlow will tell you what it thinks this object is.

Note that by default this module runs different flavors of MobileNets trained on the ImageNet dataset. There are 1000 different kinds of objects (object classes) that these networks can recognize (too long to list here). The input layer of these networks is 299x299, 224x224, 192x192, 160x160, or 128x128 pixels by default, depending on the network used. The networks provided on the JeVois microSD image have been trained on large clusters of GPUs, using 1.2 million training images from the ImageNet dataset.

For more information about MobileNets, see https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

For more information about the ImageNet dataset used for training, see http://www.image-net.org/challenges/LSVRC/2012/

Sometimes this module will make mistakes! The performance of MobileNets is about 40% to 70% correct (mean average precision) on the test set, depending on network size (bigger networks are more accurate but slower).

Neural network size and speed

This module takes a central image region of size given by the foa parameter. If necessary, this image region is then rescaled to match the deep network's expected input size. The network input size varies depending on which network is used; for example, mobilenet_v1_0.25_128_quant expects 128x128 input images, while mobilenet_v1_1.0_224 expects 224x224. Note that there is a CPU cost to rescaling, so, for best performance, you should match the foa size to the network's input size (this crop-and-rescale step is sketched in code after the list below).

For example:

  • mobilenet_v1_0.25_128_quant (network size 128x128), runs at about 8ms/prediction (125 frames/s).
  • mobilenet_v1_0.5_128_quant (network size 128x128), runs at about 18ms/prediction (55 frames/s).
  • mobilenet_v1_0.25_224_quant (network size 224x224), runs at about 24ms/prediction (41 frames/s).
  • mobilenet_v1_1.0_224_quant (network size 224x224), runs at about 139ms/prediction (7 frames/s).
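For reference, the central crop and optional rescale described above can be sketched with OpenCV as follows (a simplified illustration, not the actual module code):

    // Simplified sketch of the central crop + rescale (not the actual module code).
    #include <algorithm>
    #include <opencv2/imgproc.hpp>

    cv::Mat centralCrop(cv::Mat const & frame, cv::Size foa, cv::Size netin)
    {
      // Shrink the foa region to fit within the camera frame if needed:
      int const w = std::min(foa.width, frame.cols);
      int const h = std::min(foa.height, frame.rows);

      // Take the central region of the frame:
      cv::Rect const roi((frame.cols - w) / 2, (frame.rows - h) / 2, w, h);
      cv::Mat crop = frame(roi);

      // Rescaling costs CPU; it is skipped when foa already matches the network input:
      if (crop.size() != netin) cv::resize(crop, crop, netin);
      return crop;
    }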

To easily select one of the available networks, see JEVOIS:/modules/JeVois/TensorFlowEasy/params.cfg on the microSD card of your JeVois camera.

Serial messages

When detections are found with confidence scores above thresh, a message containing up to top category:score pairs will be sent per video frame. Exact message format depends on the current serstyle setting and is described in Standardized serial messages formatting. For example, when serstyle is Detail, this module sends:

DO category:score category:score ... category:score

where category is a category name (from namefile) and score is the confidence score from 0.0 to 100.0 that this category was recognized. The pairs are in order of decreasing score.
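On the receiving side (for example, on a host computer or Arduino listening to the serial port), such a message can be parsed with a few lines of C++. The following is a hedged sketch, not official JeVois sample code:

    // Hedged sketch: parse one "DO category:score category:score ..." line.
    #include <iostream>
    #include <sstream>
    #include <string>

    void parseDetectionLine(std::string const & line)
    {
      std::istringstream ss(line);
      std::string token;
      if (!(ss >> token) || token != "DO") return; // not a detection message

      while (ss >> token)
      {
        auto const colon = token.rfind(':');
        if (colon == std::string::npos) continue;  // malformed pair, skip it
        std::string const category = token.substr(0, colon);
        float const score = std::stof(token.substr(colon + 1)); // 0.0 to 100.0
        std::cout << category << " recognized with " << score << "% confidence\n";
      }
    }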

See Standardized serial messages formatting for more on standardized serial messages, and Helper functions to convert coordinates from camera resolution to standardized for more info on standardized coordinates.

More networks

Search the web for models in TFLite format from the TensorFlow 1.x series. For example, see https://tfhub.dev/s?module-type=image-classification

To add a new model to your microSD card:

  • create a directory for it under JEVOIS:/share/tensorflow
  • put your .tflite in there as model.tflite
  • put a list of labels as a plain text file, one label per line, in your directory as labels.txt
  • edit params.cfg for this module (best done in JeVois Inventor) to add a new entry for your network, and to comment out the default entry.
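As an illustration, an entry for a hypothetical network stored in JEVOIS:/share/tensorflow/my_network/ (placeholder name) that expects 224x224 inputs would look like:

    # Hypothetical custom network (directory name and input size are examples only):
    netdir=my_network
    foa=224 224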

Using your own network

For a step-by-step tutorial, see Training custom TensorFlow networks for JeVois.

This module supports RGB or grayscale inputs, byte or float32. You should create and train your network using fast GPUs, and then follow the instructions here to convert your trained network to TFLite format:

https://www.tensorflow.org/lite/

Then you just need to create a directory under JEVOIS:/share/tensorflow/ with the name of your network, and, in there, two files, labels.txt with the category labels, and model.tflite with your model converted to TensorFlow Lite (flatbuffer format). Finally, edit JEVOIS:/modules/JeVois/TensorFlowEasy/params.cfg to select your new network when the module is launched.
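For instance, with a hypothetical network named my_network, the microSD card would end up containing:

    JEVOIS:/share/tensorflow/my_network/
        labels.txt       (one category name per line)
        model.tflite     (your model in TensorFlow Lite flatbuffer format)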

Parameters

  • foa (TensorFlowEasy) [cv::Size, default: cv::Size(128, 128)]: Width and height (in pixels) of the fixed, central focus of attention. This is the size of the central image crop that is taken in each frame and fed to the deep neural network. If the foa size does not fit within the camera input frame size, it will be shrunk to fit. To avoid spending CPU resources on rescaling the selected image region, it is best to use here the size that the deep network expects as input.
  • netdir (TensorFlow) [std::string, default: mobilenet_v1_224_android_quant_2017_11_08]: Network to load. This should be the name of a directory within JEVOIS:/share/tensorflow/ which should contain two files: model.tflite and labels.txt.
  • dataroot (TensorFlow) [std::string, default: JEVOIS_SHARE_PATH/tensorflow]: Root path for data, config, and weight files. If empty, use the module's path.
  • top (TensorFlow) [unsigned int, default: 5]: Max number of top-scoring predictions that score above thresh to return.
  • thresh (TensorFlow) [float, default: 20.0F, valid: jevois::Range<float>(0.0F, 100.0F)]: Threshold (in percent confidence) above which predictions will be reported.
  • threads (TensorFlow) [int, default: 4, valid: jevois::Range<int>(0, 1024)]: Number of parallel computation threads, or 0 for auto.
  • scorescale (TensorFlow) [float, default: 1.0F]: Scaling factor applied to recognition scores, useful for InceptionV3.
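These parameters can also be changed at runtime over the JeVois command-line interface; assuming the standard JeVois setpar command, one could, for example, require higher confidence and fewer reported categories:

    setpar thresh 75.0
    setpar top 2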
params.cfg file
# Config for TensorFlowEasy. Just uncomment the network you want to use:

###########################################################################
# The default network provided with TensorFlow Lite:

#netdir=mobilenet_v1_224_android_quant_2017_11_08
#foa=224 224

###########################################################################
# Quite slow but accurate, about 4s/prediction, and scores seem out of scale:
#netdir=inception_v3_slim_2016_android_2017_11_10
#scorescale=0.07843
#foa=299 299

# Here is a quantized Inception v1:
#netdir=inception_v1_224_quant_20181026
#foa=224 224

###########################################################################
# All mobilenets V2 with different input sizes, compression levels, and
# quantization. See this link for some info on how to pick one:
# https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet

#netdir=mobilenet_v2_1.0_224_quantized_1_default_1
#foa=224 224

#netdir=mobilenet_v2_0.35_96
#foa=96 96

#netdir=mobilenet_v2_0.35_128
#foa=128 128

#netdir=mobilenet_v2_0.35_160
#foa=160 160

#netdir=mobilenet_v2_0.35_192
#foa=192 192

#netdir=mobilenet_v2_0.35_224
#foa=224 224

#netdir=mobilenet_v2_0.5_96
#foa=96 96

#netdir=mobilenet_v2_0.5_128
#foa=128 128

#netdir=mobilenet_v2_0.5_160
#foa=160 160

#netdir=mobilenet_v2_0.5_192
#foa=192 192

#netdir=mobilenet_v2_0.5_224
#foa=224 224

#netdir=mobilenet_v2_0.75_96
#foa=96 96

#netdir=mobilenet_v2_0.75_128
#foa=128 128

#netdir=mobilenet_v2_0.75_160
#foa=160 160

#netdir=mobilenet_v2_0.75_192
#foa=192 192

#netdir=mobilenet_v2_0.75_224
#foa=224 224

#netdir=mobilenet_v2_1.0_96
#foa=96 96

#netdir=mobilenet_v2_1.0_128
#foa=128 128

#netdir=mobilenet_v2_1.0_160
#foa=160 160

#netdir=mobilenet_v2_1.0_192
#foa=192 192

#netdir=mobilenet_v2_1.0_224
#foa=224 224

#netdir=mobilenet_v2_1.0_224_quant
#foa=224 224

#netdir=mobilenet_v2_1.3_224
#foa=224 224

#netdir=mobilenet_v2_1.4_224
#foa=224 224


###########################################################################
# All mobilenets V1 with different input sizes, compression levels, and
# quantization. See this link for some info on how to pick one:
# https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

#netdir=mobilenet_v1_0.25_128
#foa=128 128

#netdir=mobilenet_v1_0.25_128_quant
#foa=128 128

#netdir=mobilenet_v1_0.25_160
#foa=160 160

#netdir=mobilenet_v1_0.25_160_quant
#foa=160 160

#netdir=mobilenet_v1_0.25_192
#foa=192 192

#netdir=mobilenet_v1_0.25_192_quant
#foa=192 192

#netdir=mobilenet_v1_0.25_224
#foa=224 224

#netdir=mobilenet_v1_0.25_224_quant
#foa=224 224

#netdir=mobilenet_v1_0.5_128
#foa=128 128

#netdir=mobilenet_v1_0.5_128_quant
#foa=128 128

#netdir=mobilenet_v1_0.5_160
#foa=160 160

#### DEFAULT: runs at about 30 frames/s
netdir=mobilenet_v1_0.5_160_quant
foa=160 160

#netdir=mobilenet_v1_0.5_192
#foa=192 192

#netdir=mobilenet_v1_0.5_192_quant
#foa=192 192

#netdir=mobilenet_v1_0.5_224
#foa=224 224

#netdir=mobilenet_v1_0.5_224_quant
#foa=224 224

#netdir=mobilenet_v1_0.75_128
#foa=128 128

#netdir=mobilenet_v1_0.75_128_quant
#foa=128 128

#netdir=mobilenet_v1_0.75_160
#foa=160 160

#netdir=mobilenet_v1_0.75_160_quant
#foa=160 160

#netdir=mobilenet_v1_0.75_192
#foa=192 192

#netdir=mobilenet_v1_0.75_192_quant
#foa=192 192

#netdir=mobilenet_v1_0.75_224
#foa=224 224

#netdir=mobilenet_v1_0.75_224_quant
#foa=224 224

#netdir=mobilenet_v1_1.0_128
#foa=128 128

#netdir=mobilenet_v1_1.0_128_quant
#foa=128 128

#netdir=mobilenet_v1_1.0_160
#foa=160 160

#netdir=mobilenet_v1_1.0_160_quant
#foa=160 160

#netdir=mobilenet_v1_1.0_192
#foa=192 192

#netdir=mobilenet_v1_1.0_192_quant
#foa=192 192

#netdir=mobilenet_v1_1.0_224
#foa=224 224

#netdir=mobilenet_v1_1.0_224_quant
#foa=224 224

###########################################################################
# AutoML mobile-optimized models: MnasNet flavors
# note: the scores are on a different scale, and generally seem squished upwards.
# You may want to set scorescale to 0.095 and increase thresh to 70 or so.

#netdir=mnasnet_1.3_224
#foa=224 224
#scorescale=0.1

#netdir=mnasnet_1.0_224
#foa=224 224
#scorescale=0.1

#netdir=mnasnet_1.0_192
#foa=192 192
#scorescale=0.1

#netdir=mnasnet_1.0_160
#foa=160 160
#scorescale=0.1

#netdir=mnasnet_1.0_128
#foa=128 128
#scorescale=0.1

#netdir=mnasnet_1.0_96
#foa=96 96
#scorescale=0.1

#netdir=mnasnet_0.75_224
#foa=224 224
#scorescale=0.1

#netdir=mnasnet_0.5_224
#foa=224 224
#scorescale=0.1
Detailed docs: TensorFlowEasy
Copyright: Copyright (C) 2018 by Laurent Itti, iLab and the University of Southern California
License: GPL v3
Distribution: Unrestricted
Restrictions: None
Support URL: http://jevois.org/doc
Other URL: http://iLab.usc.edu
Address: University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA