Custom YOLOv8 / YOLOv9 / YOLOv10 / YOLO11 for JeVois-Pro NPU and Hailo-8
This tutorial will help you convert retrained YOLO networks for JeVois-Pro, focusing on the built-in NPU and on the optional Hailo-8 neural accelerator.
1. Read the basic docs
To understand the conversion and quantization process, read:
You will get runs/detect/predict/bus.jpg with a bunch of detections, confirming that the model works.
3. Get a dataset for training and quantization
Roboflow is a great resource to build and annotate custom datasets, or to download datasets contributed by others. It can also train models in the cloud but, from what we have tried, it will not let you export the retrained model to ONNX, which is what we need; instead, Roboflow wants you to use its own (paid) inference service. See https://discuss.roboflow.com/t/export-model-into-onnx/1002
To create a new annotated dataset, create an account on https://roboflow.com which will allow you to upload images and annotate them.
The images are in train/images/, valid/images/, and test/images/, and the labels (ground-truth bounding boxes and object identities) are in train/labels/, valid/labels/, and test/labels/.
We also get data.yaml which we will use to retrain our model.
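Before training, it is worth checking that the downloaded dataset actually follows this layout. A minimal stdlib-only sketch (the dataset root path is whatever you downloaded to):

```python
from pathlib import Path

def check_yolo_layout(root):
    """Verify the train/valid/test images+labels layout of a YOLO dataset.

    Returns a list of missing sub-directories (empty if the layout is complete).
    """
    root = Path(root)
    missing = []
    for split in ("train", "valid", "test"):
        for kind in ("images", "labels"):
            if not (root / split / kind).is_dir():
                missing.append(f"{split}/{kind}")
    return missing
```

For example, `check_yolo_layout("rockpaperscissors")` should return an empty list if the dataset unpacked correctly.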
4. Retrain (fine-tune) your model
We retrain the model by following the training instructions at https://docs.ultralytics.com/modes/train/ (note: we will use a different, non-square resolution below for export, but training only accepts a single square image size):
If you have multiple GPUs on your machine, you can use all of them by adding a device argument. For example, device=0,1 for 2 GPUs.
If training fails to find the dataset, move your dataset to where Ultralytics expects it and try again. We also had to edit data.yaml to change the paths for train, val, and test to make this work; just follow the error messages until it runs. We ended up moving our rockpaperscissors/ folder to the location that Ultralytics wanted, then editing data.yaml as follows:
train: rockpaperscissors/train/images
val: rockpaperscissors/valid/images
test: rockpaperscissors/test/images
Your final model will be runs/detect/trainXX/weights/best.pt (replace XX by a number that depends on how many other training runs you have made; look at the file dates to make sure you are using the correct one).
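Since the trainXX number changes with every run, you can also let the file dates pick the newest weights for you, instead of inspecting them by hand. A stdlib-only sketch (the runs/detect layout is the Ultralytics default mentioned above):

```python
from pathlib import Path

def latest_best(runs_dir="runs/detect"):
    """Return the most recently modified runs/detect/train*/weights/best.pt,
    or None if no training run has produced weights yet."""
    candidates = Path(runs_dir).glob("train*/weights/best.pt")
    return max(candidates, key=lambda p: p.stat().st_mtime, default=None)
```

Calling `latest_best()` from the directory where you launched training returns the path of the best weights from your most recent run.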
5. Export trained model to ONNX
For the export, we use resolution 1024x576, which works well with JeVois-Pro (no distortion, given the camera's 16:9 aspect ratio):
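One reason 1024x576 is a good choice: YOLO export sizes must be multiples of the network's maximum stride (32 for these models), and Ultralytics will adjust non-multiple sizes to the nearest valid ones. A quick check:

```python
def valid_export_size(width, height, stride=32):
    """YOLO ONNX export sizes must be multiples of the network stride (32);
    Ultralytics adjusts non-multiple sizes to the nearest valid ones."""
    return width % stride == 0 and height % stride == 0

# 1024x576 is stride-safe and matches the camera's 16:9 aspect ratio:
assert valid_export_size(1024, 576)
```

With the Ultralytics Python API, the export itself is along the lines of `YOLO("runs/detect/trainXX/weights/best.pt").export(format="onnx", imgsz=(576, 1024))` (the height-before-width argument order is our reading of the Ultralytics export docs; double-check it for your version).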
We need a representative sample dataset to quantize the model, so that we can estimate the range of activation values encountered at each layer of the network during inference. These ranges are used to set the quantization parameters. Hence, it is essential that the sample dataset contains images with the desired targets, as well as images of other things the camera will be exposed to but that we do not want to falsely detect.
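For intuition, here is how a generic affine (asymmetric) uint8 quantizer would derive its scale and zero-point from the min/max observed on the sample dataset. This is a simplified sketch of the principle, not the exact algorithm the NPU or Hailo tools use:

```python
def affine_uint8_params(vmin, vmax):
    """Map the observed float range [vmin, vmax] onto uint8 [0, 255]."""
    vmin, vmax = min(vmin, 0.0), max(vmax, 0.0)  # range must contain 0.0
    scale = (vmax - vmin) / 255.0 or 1.0         # guard against a zero range
    zero_point = round(-vmin / scale)            # uint8 code representing 0.0
    return scale, zero_point

def quantize(x, scale, zero_point):
    """Quantize one float value to a uint8 code, clamping to [0, 255]."""
    return max(0, min(255, round(x / scale) + zero_point))
```

Values outside the calibrated range get clamped, which is exactly why the sample images must cover everything the camera will actually see.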
Here we will use images from our validation dataset.
ls /absolute/path/to/rockpaperscissors/valid/images/*.jpg | shuf -n 1000 > dataset-rockpaperscissors.txt
Note
We store the full absolute file paths into the text file as we will run the NPU conversion from a different directory, inside the NPU SDK.
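If you prefer not to rely on ls and shuf, the same sampling can be done in Python (stdlib only; the image directory, output file name, and count of 1000 are the same assumptions as in the shell command above):

```python
import random
from pathlib import Path

def write_sample_list(image_dir, out_file, n=1000):
    """Write up to n randomly sampled absolute image paths, one per line."""
    images = sorted(Path(image_dir).resolve().glob("*.jpg"))
    sample = random.sample(images, min(n, len(images)))
    Path(out_file).write_text("\n".join(str(p) for p in sample) + "\n")
    return len(sample)
```

For example: `write_sample_list("/absolute/path/to/rockpaperscissors/valid/images", "dataset-rockpaperscissors.txt")`.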
6.2. For Hailo-8, make a numpy archive
Like in Converting and running neural networks for Hailo-8 SPU, we write a small Python script numpy_dataset.py to create the sample dataset as a big numpy array. Change dir, width, height, and numimages below to fit your needs:
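The script itself is not reproduced here; a minimal sketch of what such a script does, assuming numpy and Pillow are installed (dir, width, height, and numimages are the variables mentioned above; the (N, height, width, 3) uint8 layout is our assumption about what the Hailo calibration step expects, so check it against the Hailo docs):

```python
from pathlib import Path

import numpy as np
from PIL import Image

def make_numpy_dataset(dir, width, height, numimages, out="dataset.npy"):
    """Resize the first numimages JPEGs in dir to width x height and stack
    them into one (N, height, width, 3) uint8 array saved as a .npy file."""
    files = sorted(Path(dir).glob("*.jpg"))[:numimages]
    arr = np.stack([
        np.asarray(Image.open(f).convert("RGB").resize((width, height)))
        for f in files
    ]).astype(np.uint8)
    np.save(out, arr)
    return arr.shape
```

For our model this would be called with width=1024, height=576 to match the export resolution.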
We get 3 results in outputs/yolov8n-1024x576-rockpaperscissors/
libnn_yolov8n-1024x576-rockpaperscissors.so # runtime library for JeVois-Pro that will load the model
yolov8n-1024x576-rockpaperscissors.nb # model weights
yolov8n-1024x576-rockpaperscissors.yml # JeVois model zoo file
Copy all 3 files to your microSD into JEVOISPRO:/share/dnn/custom/
7.2. For Hailo-8, convert in the Hailo docker
We do not yet have a script for Hailo. Here are the output node names you should use, for each type of YOLO model that we could obtain directly from Ultralytics (since detection is the default task, -det may not be present in your model name):
We get into the Hailo container and copy yolov8n-1024x576-rockpaperscissors.onnx and dataset-rockpaperscissors.npy via the shared_with_docker/ folder, as explained in Converting and running neural networks for Hailo-8 SPU, then:
If you get an error onnx.onnx_cpp2py_export.checker.ValidationError: Your model ir_version 10 is higher than the checker's (9)., it means the model was exported with a newer version of ONNX than the Hailo tools support, and you need to use an older ONNX version during the export in step 5. To avoid this problem, we recommend running pip install ultralytics inside the Hailo container, copying the model's .pt file into the container, and re-running the ONNX export of step 5 there.
We eventually obtain yolov8n-1024x576-rockpaperscissors.hef
Get it out of the container via shared_with_docker:
To run this model on JeVois-Pro, we need a small YAML file that will instruct the camera on how to run this model. Create yolov8n-1024x576-rockpaperscissors-spu.yml with these contents:
Copy both yolov8n-1024x576-rockpaperscissors.hef and yolov8n-1024x576-rockpaperscissors-spu.yml to microSD into directory JEVOISPRO:/share/dnn/custom/
We will create the rockpaperscissors.txt mentioned above in the next step.
8. Create a text file with your custom class names
In the data.yaml of our dataset, we see:
names: ['Paper', 'Rock', 'Scissors']
So we create a corresponding text file, rockpaperscissors.txt, so that JeVois-Pro knows about these names:
Paper
Rock
Scissors
Copy this file to microSD in JEVOISPRO:/share/dnn/labels/
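Writing the labels file by hand is easy for three classes, but with many classes you can generate it from data.yaml instead. A sketch, assuming PyYAML is installed and that names is a plain list as shown above (some dataset formats use a dict instead):

```python
import yaml

def write_labels(data_yaml, out_txt):
    """Extract the names list from data.yaml and write one class per line."""
    with open(data_yaml) as f:
        names = yaml.safe_load(f)["names"]
    with open(out_txt, "w") as f:
        f.write("\n".join(names) + "\n")
    return names
```

For example: `write_labels("rockpaperscissors/data.yaml", "rockpaperscissors.txt")`.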
Then, if you converted for NPU, adjust the classes entry in file JEVOISPRO:/share/dnn/custom/yolov8n-1024x576-rockpaperscissors.yml as follows (for Hailo SPU, we already put the correct classes in step 7.2):