The parameters accepted by each component are listed in the JEVOIS_DECLARE_PARAMETER(...) directives at the beginning of each header file:
- rgb is a pre-processor parameter; it should be specified in the YAML file after preproc has been set (see the sketch after this list).
- pypre: to select which python file to load.
- scale and mean … (… rgb is true). This will make your YAML file more concise.
- mean and stdev values should be in the same order as your model's input images (RGB or BGR, as specified by the rgb parameter).
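For instance, a hypothetical zoo entry fragment (the model name and parameter values here are just placeholders) could order these parameters as follows:

```yaml
# Hypothetical sketch of a zoo entry: preproc is set first, then its parameters.
MyClassifier:                    # placeholder model name
  preproc: Blob                  # select the pre-processor before setting its parameters
  rgb: true                      # rgb is a pre-processor parameter, so it comes after preproc
  mean: "123.675 116.28 103.53"  # placeholder values, given in RGB order since rgb is true
  stdev: "58.395 57.12 57.375"   # same channel order as mean
  scale: 0.017                   # placeholder scale factor
```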
If your network produces outputs that are close to what the post-processor wants, but not quite, you can try to apply transforms to the outputs. Supported transforms are:

Output transforms are specified in the YAML zoo file using outransforms: and a sequence of transforms; for example:
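The sketch below uses a placeholder transform name and arguments; refer to the list of supported transforms above for the exact syntax:

```yaml
# Hypothetical sketch: reshape the network outputs before they reach the post-processor.
# The transform shown is only a placeholder.
outransforms: "transpose(0,2,1)"
```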
classes is optional. If you just want to check how fast a model will run on your JeVois camera but don't have the class list handy, just remove the classes parameter from your YAML. JeVois will just display class numbers instead of class names.

classoffset: use this to shift class numbers if the reported class names look off by a constant amount. Most likely you have a computer keyboard around, and those tend to be easily recognized by models trained on ImageNet. So just point your camera to the keyboard and play with classoffset in the JeVois GUI until you get "computer keyboard" as output.
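A hypothetical post-processor fragment using these two parameters (the label file path and offset value are placeholders) could look like this:

```yaml
# Hypothetical sketch of post-processor parameters in a zoo entry.
postproc: Classify
classes: "dnn/labels/mylabels.txt"  # optional; remove it to display class numbers only
classoffset: 1                      # placeholder; adjust until the reported names look correct
```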
With quantized models, we prefer to use split outputs as they will give better quantized accuracy, with one output tensor for each type and scale (stride) of outputs. Use Netron to find their names. For example, for YOLOv8 / YOLOv9 / YOLOv10 / YOLO11:
Check out the many models that ship with JeVois-Pro, and also our script jevoispro-npu-convert.sh.
YOLOv8 post-processors have two versions: normal and transposed. This is so that we do not waste time at runtime trying to figure out which one to use. Usually, networks converted for NPU will use the normal version (YOLOv8, for detection, pose, OBB, etc.), and networks converted for Hailo-8 will use the transposed version (YOLOv8t). If one does not work for your model, try the other. Also see the following files for exactly what tensor shapes a given post-processor expects:
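For instance, switching between the two versions may just be a matter of changing the detection type in the zoo entry; the detecttype parameter name used below is an assumption for this sketch:

```yaml
# Hypothetical sketch: select the normal or transposed YOLOv8 decoder.
postproc: Detect
detecttype: YOLOv8    # normal version, typical for NPU conversions
#detecttype: YOLOv8t  # transposed version, typical for Hailo-8; try it if the normal one fails
```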
Have a look at the models in JeVois-Pro Deep Neural Network Benchmarks for many examples of the kinds of outputs we extract from various models to work with our C++ or Python post-processors.
ONNX can also presumably convert ir_version, but we have had no success running this to convert from ir_version 9 to 8:
We always get an error.