The JeVois DNN module provides a generic engine to run neural network inference on JeVois-A33 and JeVois-Pro. Note that no neural network training is available on JeVois, as training usually requires large servers with big GPUs. Thus, we assume here that you have an already-trained model which you want to use on your JeVois camera for runtime inference on live video streams.
While we focus here on the JeVois DNN module, several older modules also provide DNN functionality:
- TensorFlowEasy: TensorFlow-Lite object classification on CPU using TensorFlow API
- TensorFlowSaliency: Itti et al. (1998) saliency model + TensorFlow-Lite object classification on CPU using TensorFlow API
- TensorFlowSingle: TensorFlow-Lite object classification on CPU using TensorFlow API
- DarknetSingle: Darknet object recognition on CPU, using Darknet API
- DarknetSaliency: Itti et al. (1998) saliency model + Darknet object recognition on CPU, Darknet API
- DarknetYOLO: Darknet YOLO object detection on CPU, Darknet API
- DetectionDNN: Object detection using OpenCV on CPU
- PyDetectionDNN: Object detection using OpenCV on CPU, Python version
- PyClassificationDNN: Object classification using OpenCV on CPU, Python version
- PyEmotion: Facial emotion recognition network, in Python
- JeVois-Pro only: PyFaceMesh: Facial landmarks using MediaPipe
- JeVois-Pro only: PyHandDetector: Hand landmarks using MediaPipe
- JeVois-Pro only: PyPoseDetector: Body pose landmarks using MediaPipe
- JeVois-Pro only: MultiDNN: Run multiple neural networks in parallel, display in quadrants
- JeVois-Pro only: MultiDNN2: Run multiple neural networks in parallel, overlapped displays
- JeVois-Pro only: PyCoralClassify: Run classification models on optional Coral TPU, using Coral Python API
- JeVois-Pro only: PyCoralDetect: Run detection models on optional Coral TPU, using Coral Python API
- JeVois-Pro only: PyCoralSegment: Run segmentation models on optional Coral TPU, using Coral Python API
Note: on JeVois-Pro, some of these modules appear under the Legacy list of modules in the graphical interface.
JeVois-Pro DNN Benchmarks with various hardware accelerators
See JeVois-Pro Deep Neural Network Benchmarks
JeVois DNN framework overview
The DNN module implements a Pipeline component, which serves as the overall inference orchestrator, as well as a factory for three sub-components:
- PreProcessor: receives an image from the camera sensor and prepares it for network inference (e.g., resize, swap RGB to BGR, quantize, etc.).
- Network: receives a pre-processed image and runs neural network inference, producing some outputs.
- PostProcessor: receives the raw network outputs and presents them in a human-friendly way. For example, draw boxes on the live camera video after running an object detection network.
The parameters of a Pipeline are specified in a YAML file that describes which pre-processor to use, which network type, which post-processor, and various parameters for these, as well as where the trained weights are stored on microSD. These YAML files are stored in JEVOIS[PRO]:/share/dnn/ and are available on JeVois-Pro through the Config tab of the user interface.
A given network is selected in the DNN module via the pipe parameter of the Pipeline component. Available pipes are described in that parameter as:
<ACCEL>:<TYPE>:<NAME>
where ACCEL is one of (OpenCV, ORT, NPU, SPU, TPU, VPU, NPUX, VPUX, Python), and TYPE is one of (Stub, Classify, Detect, Segment, YuNet, Python, Custom).
The following keys are used in the JeVois-Pro GUI (pipe parameter of the Pipeline component):
- OpenCV: network loaded by OpenCV DNN framework and running on CPU.
- ORT: network loaded by ONNX Runtime framework and running on CPU.
- NPU: network running native on the JeVois-Pro integrated 5-TOPS NPU (neural processing unit).
- TPU: network running on the optional 4-TOPS Google Coral TPU accelerator (tensor processing unit).
- SPU: network running on the optional 26-TOPS Hailo8 SPU accelerator (stream processing unit).
- VPU: network running on the optional 1-TOPS MyriadX VPU accelerator (vector processing unit).
- NPUX: network loaded by OpenCV and running on the NPU via the TIM-VX OpenCV extension. To run efficiently, the network should have been quantized to int8; otherwise, some slow CPU-based emulation will occur.
- VPUX: network optimized for VPU but running on CPU when no VPU is available. VPUX entries are created automatically by scanning all VPU entries and changing their target from Myriad to CPU if no VPU accelerator is detected; if a VPU is detected, the VPU models are listed and the VPUX ones are not. VPUX emulation runs on the JeVois-Pro CPU using the Arm Compute Library, which provides efficient implementations of various network layers and operations (see the VPU-style sketch after the example below).
For example:
%YAML 1.0
---
SqueezeNet:
  preproc: Blob
  nettype: OpenCV
  postproc: Classify
  model: "opencv-dnn/classification/squeezenet_v1.1.caffemodel"
  config: "opencv-dnn/classification/squeezenet_v1.1.prototxt"
  intensors: "NCHW:32F:1x3x227x227"
  mean: "0 0 0"
  scale: 1.0
  rgb: false
  classes: "classification/imagenet_labels.txt"
  classoffset: 1
This network will then be available in the DNN module via the pipe parameter of Pipeline as OpenCV:Classify:SqueezeNet.
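Similarly, here is a minimal sketch of what a VPU entry could look like. The entry name and file paths are hypothetical, and we assume here that, as for other OpenCV-loaded networks, backend and target keys select OpenCV's InferenceEngine backend and its Myriad target; check the parameter definitions below for the exact key names:

%YAML 1.0
---
MyVpuNet:                        # hypothetical entry name
  preproc: Blob
  nettype: OpenCV
  backend: InferenceEngine       # assumed key: load through OpenVINO's inference engine
  target: Myriad                 # assumed key: run on the MyriadX VPU
  postproc: Classify
  model: "custom/myvpunet.bin"   # hypothetical OpenVINO IR weights
  config: "custom/myvpunet.xml"  # hypothetical OpenVINO IR network description
  intensors: "NCHW:32F:1x3x224x224"
  classes: "classification/imagenet_labels.txt"

If no VPU accelerator is detected at runtime, such an entry would automatically be re-listed as a VPUX pipe with its target changed from Myriad to CPU, as described above.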
For an up-to-date list of supported keys in the YAML file, see all the parameters defined (using JEVOIS_DECLARE_PARAMETER(...)) in the source code of the Pipeline, PreProcessor, Network, and PostProcessor components.
Procedure to add a new network
- Everything you need at runtime (full OpenCV with all available backends and targets, OpenVINO, Coral EdgeTPU libraries, Hailo libraries, NPU libraries, etc.) is pre-installed on JeVois, so you do not need to install any additional software on the camera to run your custom networks with these frameworks.
- Obtain a model: train your own, or download a pretrained model.
- Obtain some parameters about the model (e.g., pre-processing mean, stdev, scale, expected input image size, RGB or BGR, packed (NHWC) or planar (NCHW) pixels, names of the input and output layers, etc.).
- For running on JeVois-Pro, convert/quantize the model on your desktop Linux computer so that it is optimized to run on one of the available neural accelerators (integrated NPU, Hailo8, Coral TPU, etc.).
- This will require installing a vendor-provided SDK for each target accelerator (e.g., Amlogic NPU SDK, OpenVINO SDK, Hailo SDK, Coral EdgeTPU compiler) on a fast Linux desktop with plenty of RAM, disk space, and possibly a big NVIDIA GPU.
- For quantization, you will also need a representative sample dataset, usually about 100 images from the validation set used for your model. The goal is to run this dataset through the original network (forward inference only) and record the range of values encountered in every layer. These value ranges are then used to quantize each layer with the best possible accuracy.
- Using the vendor SDK for the accelerator of your choice, convert and quantize the model on your fast Linux desktop.
- Copy the model to the JeVois microSD card under JEVOIS[PRO]:/share/dnn/custom/
- Create a JeVois model zoo entry for your model, specifying the model parameters and the location where you copied your model files. Typically this is a YAML file under JEVOIS[PRO]:/share/dnn/custom/ (see the sketch after this list).
- On the camera, launch the JeVois DNN module. It will scan the custom directory for any valid YAML file and make your model available through the pipe parameter of the DNN module's Pipeline component. Select that pipe to run your model.
- You can adjust many parameters while the model is running (e.g., confidence threshold, pre-processing mean and scale, swap RGB/BGR), while others are frozen at runtime (e.g., input tensor dimensions, post-processor type). Once you determine good values for the online-tunable parameters, you can copy those values into your YAML file. Frozen parameters can only be changed in the YAML file.
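As a sketch of the last two steps, here is what a custom zoo entry could look like, modeled on the SqueezeNet example above. The entry name, file names, and parameter values are hypothetical; replace them with those of your actual model:

%YAML 1.0
---
MyCustomNet:                     # hypothetical entry name
  preproc: Blob
  nettype: OpenCV
  postproc: Classify
  model: "custom/mycustomnet.onnx"          # hypothetical model copied to JEVOIS[PRO]:/share/dnn/custom/
  intensors: "NCHW:32F:1x3x224x224"
  mean: "123.675 116.28 103.53"             # tuned values found at runtime can be copied back here
  scale: 0.017
  rgb: true
  classes: "custom/mycustomnet_labels.txt"  # hypothetical labels file

Saved, for example, as JEVOIS[PRO]:/share/dnn/custom/mycustomnet.yml, this entry should then appear as OpenCV:Classify:MyCustomNet in the pipe parameter of the DNN module.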
Details for the available frameworks