Welcome to JeVois Tech Zone, where you can ask questions and receive answers from other members of the community.

Custom train SSD-Mobilenet object detection framework with own dataset

0 votes

Hi, I'm looking to crowd-source some ideas here from people who have perhaps managed to do this. Taking guidance from http://jevois.org/tutorials/UserTensorFlowTraining.html on transfer learning with a custom dataset, I'm looking to extend recognition to include object detection as well. I've chosen SSD-MobileNet v2 as the baseline framework and hope to follow the steps using the TensorFlow Object Detection API with a baseline model (ssdlite_mobilenet_v2_coco) to do transfer learning, followed by inference optimization and conversion to TFLite to run on the JeVois camera.

I'm getting some direction and inspiration from the following blogs: Dat Tran's Custom Framework (https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9) and David Salek's Transfer Learning Steps (https://github.com/salekd/santa/wiki/Transfer-learning-with-SSD-MobileNet-v1). Kumar's Custom Food Detector (https://github.com/kumarkan/Food_Detection) also has some good insights on retraining the model, but I would prefer to explore transfer learning to cut down on training time.

Thinking out loud: after retrieving the retrained model and converting it to TF Lite, I can follow the model-deployment steps from the script provided in the flowers tutorial:

# Check that the card was properly detected:
ls /media/${USER}/JEVOIS/share/tensorflow
# You should see a bunch of directories and get no error; otherwise, check the
# path by which you can access your microSD card.

# Create a directory for our new model and copy the model and labels files to it:
mkdir /media/${USER}/JEVOIS/share/tensorflow/flowers
cp tf_files/jevois_model.tflite /media/${USER}/JEVOIS/share/tensorflow/flowers/model.tflite
cp tf_files/retrained_labels.txt /media/${USER}/JEVOIS/share/tensorflow/flowers/labels.txt
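As a quick sanity check after copying, the files on the card can be verified programmatically. This is just a sketch: the directory path follows the tutorial's mount-point convention, and the function name is mine, not part of any JeVois tooling.

```python
import os

def check_deployment(model_dir):
    """Verify that the model and labels files landed on the card (sketch only)."""
    model = os.path.join(model_dir, "model.tflite")
    labels = os.path.join(model_dir, "labels.txt")
    assert os.path.isfile(model), "model.tflite missing"
    assert os.path.isfile(labels), "labels.txt missing"
    with open(labels) as f:
        # one entry per class the network was trained on
        names = [line.strip() for line in f if line.strip()]
    return names

# Example (path assumes the tutorial's mount point):
# names = check_deployment("/media/%s/JEVOIS/share/tensorflow/flowers" % os.environ["USER"])
```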

Is that all there is to it? Am I missing any steps?

If anyone who has successfully deployed a custom SSD+MobileNet object detector on the JeVois could give some advice, that would be great!

Thanks very much.

asked Mar 5 in Programmer Questions by adrielkuek (210 points)

1 Answer

0 votes

You would actually run a different module on JeVois: TensorFlowEasy and that flowers tutorial are for object recognition only, but you want detection plus recognition. So you need a module that will produce bounding boxes and then labels for those boxes. We have two types: YOLO and OpenCV DNN. Since your custom model is not derived from YOLO, you would use the OpenCV DNN one: DetectionDNN (or its Python version, PyDetectionDNN).
The steps to copy the model to JeVois are then similar: after you have your trained model, copy 3 files to your microSD into /jevois/share/opencv-dnn/ (the class names, the model (.pb file), and the model structure (.pbtxt file)), then create the appropriate entries in the module's params.cfg. See the current contents of params.cfg towards the bottom of the DetectionDNN page. You would end up with some new entries in params.cfg:

# MobileNet + SSD trained on Custom (XX object classes), TensorFlow model
classnames = /jevois/share/opencv-dnn/detection/custom.names
modelname = /jevois/share/opencv-dnn/detection/ssd_mobilenet_custom.pb
configname = /jevois/share/opencv-dnn/detection/ssd_mobilenet_custom.pbtxt
rgb = false # maybe? see other parameters available for this module
nms = 10.0 # maybe?
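For reference, these entries are plain `key = value` lines with optional `#` comments. A minimal parser sketch to illustrate the format only; the real JeVois module does its own parsing and handles more cases:

```python
def parse_params_cfg(text):
    """Parse 'key = value  # comment' lines into a dict (format sketch only)."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # skip blank / comment-only lines
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

cfg = """
classnames = /jevois/share/opencv-dnn/detection/custom.names
modelname = /jevois/share/opencv-dnn/detection/ssd_mobilenet_custom.pb
rgb = false  # maybe?
nms = 10.0   # maybe?
"""
print(parse_params_cfg(cfg)["nms"])  # → 10.0
```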

For PyDetectionDNN, you would instead edit the python code and load your model there.
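On the post-processing side, OpenCV's DNN module returns SSD detections as rows of [batch_id, class_id, confidence, left, top, right, bottom] with box coordinates normalized to [0, 1]. A sketch of the filtering step a PyDetectionDNN-style module performs after the forward pass; the threshold, image size, and class list here are made-up examples, and whether class 0 is a background class depends on your model:

```python
def filter_detections(rows, class_names, conf_thresh=0.5, width=320, height=240):
    """Keep confident SSD detections and scale boxes to pixel coordinates."""
    boxes = []
    for _batch, class_id, conf, left, top, right, bottom in rows:
        if conf < conf_thresh:
            continue  # drop low-confidence detections
        boxes.append((class_names[int(class_id)],
                      conf,
                      (int(left * width), int(top * height),
                       int(right * width), int(bottom * height))))
    return boxes

# Fake network output for illustration: one confident detection, one weak one
rows = [(0, 1, 0.9, 0.1, 0.1, 0.5, 0.8),
        (0, 2, 0.2, 0.0, 0.0, 1.0, 1.0)]
print(filter_detections(rows, ["background", "person", "car"]))
# → [('person', 0.9, (32, 24, 160, 192))]
```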

answered Mar 6 by JeVois (38,060 points)
You could train your own model on YOLO too. See the write-ups here: https://pjreddie.com/darknet/yolo/ and the README here: https://github.com/AlexeyAB/darknet
JeVois uses a different implementation of YOLO than either of these: it uses NNPACK Darknet to take advantage of CPU optimization. No matter; if you train it using AlexeyAB's fork, the weights will work on JeVois. You'll want a GPU on the box you use for training, otherwise it's dreadfully slow.
Hi Peter, thanks for the recommendation. Just to check: does JeVois support an implementation of Yolo2-light? It's the v2 implementation with quantization, giving about a 30% speedup at −1% mAP. It uses the BIT1-XNOR inference framework, though.