Saliency SURF
Simple salient region detection and identification using keypoint matching.
By Laurent Itti, itti@usc.edu, http://jevois.org, GPL v3
Video Mapping:   YUYV 320 288 30.0 YUYV 320 240 30.0 JeVois SaliencySURF

Module Documentation

This module finds objects by matching keypoint descriptors between a current set of salient regions and a set of training images. Here we use SURF keypoints and descriptors as provided by OpenCV. The algorithm is quite slow and consists of 3 phases: detect keypoint locations, compute keypoint descriptors, and match descriptors from current image to training image descriptors. Here, we alternate between computing keypoints and descriptors on one frame (or more, depending on how slow that gets), and doing the matching on the next frame. This module also provides an example of letting some computation happen even after we exit the process() function. Here, we keep detecting keypoints and computing descriptors even outside process(). The itsKPfut future is our handle to that thread, and we also use it to alternate between detection and matching on alternating frames.
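The core of the matching phase described above is a nearest-neighbor search over descriptor vectors with a distance cutoff. The sketch below is a minimal, illustrative brute-force version in plain Python; the actual module uses OpenCV's SURF descriptors and matcher, and the `distthresh` parameter name is taken from the parameter table further down.

```python
import math

def match_descriptors(query, train, distthresh=0.2):
    """Brute-force nearest-neighbor matching of descriptor vectors.

    A match is kept ("good") only when the L2 distance to the nearest
    training descriptor is below distthresh, mirroring the module's
    distthresh parameter. Illustrative sketch only, not the module's
    actual OpenCV-based code.
    """
    good = []
    for qi, q in enumerate(query):
        best_ti, best_d = None, float('inf')
        for ti, t in enumerate(train):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, t)))
            if d < best_d:
                best_ti, best_d = ti, d
        if best_ti is not None and best_d < distthresh:
            good.append((qi, best_ti, best_d))
    return good
```

In the real module this search runs over 64- or 128-dimensional SURF descriptors and is the slowest of the three phases, which is why detection/description and matching are spread over alternating frames.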

Also see the ObjectDetect module for a related algorithm (without attention).

Training: Simply add images of the objects you want to detect in JEVOIS:/modules/JeVois/SaliencySURF/images/ on your JeVois microSD card. Those will be processed when the module starts. The names of recognized objects returned by this module are simply the file names of the pictures you have added in that directory. No additional training procedure is needed. Beware that the more images you add, the slower the algorithm will run, and the higher your chances of confusion among several of your objects.

Parameter | Type | Description | Default | Valid Values
----------|------|-------------|---------|-------------
(SaliencySURF) inhsigma | float | Sigma (pixels) used for inhibition of return | 32.0F | -
(SaliencySURF) regions | size_t | Number of salient regions | 2 | -
(SaliencySURF) rsiz | size_t | Width and height (pixels) of salient regions | 64 | -
(SaliencySURF) save | bool | Save regions when true, useful to create a training set. They will be saved to /jevois/data/saliencysurf/ | false | -
(Saliency) cweight | byte | Color channel weight | 255 | -
(Saliency) iweight | byte | Intensity channel weight | 255 | -
(Saliency) oweight | byte | Orientation channel weight | 255 | -
(Saliency) fweight | byte | Flicker channel weight | 255 | -
(Saliency) mweight | byte | Motion channel weight | 255 | -
(Saliency) centermin | size_t | Lowest (finest) of the 3 center scales | 2 | -
(Saliency) deltamin | size_t | Lowest (finest) of the 2 center-surround delta scales | 3 | -
(Saliency) smscale | size_t | Scale of the saliency map | 4 | -
(Saliency) mthresh | byte | Motion threshold | 0 | -
(Saliency) fthresh | byte | Flicker threshold | 0 | -
(Saliency) msflick | bool | Use multiscale flicker computation | false | -
(ObjectMatcher) hessian | double | Hessian threshold | 800.0 | -
(ObjectMatcher) traindir | std::string | Directory where training images are | images | -
(ObjectMatcher) goodpts | jevois::Range<size_t> | Number range of good matches considered | jevois::Range<size_t>(15, 100) | -
(ObjectMatcher) distthresh | double | Maximum distance for a match to be considered good | 0.2 | -
params.cfg file
# Default parameters that are set upon loading the module
goodpts = 4...15
distthresh = 0.4
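Putting the goodpts and distthresh parameters together: a detection is reported only when the count of good matches (those passing the distthresh cutoff) falls within the goodpts range. The sketch below is an assumption about how that decision combines, shown here for illustration with the params.cfg values above; it is not the module's actual code.

```python
def is_recognized(n_good, goodpts=(4, 15)):
    """Report an object only when the number of good keypoint matches
    falls within the goodpts range (inclusive). Illustrative sketch of
    the decision rule only; defaults taken from params.cfg above."""
    lo, hi = goodpts
    return lo <= n_good <= hi
```

Lowering goodpts and raising distthresh, as params.cfg does relative to the compiled-in defaults, makes recognition more permissive.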
Detailed docs: SaliencySURF, Saliency, ObjectMatcher
Copyright: Copyright (C) 2016 by Laurent Itti, iLab and the University of Southern California
License: GPL v3
Distribution: Unrestricted
Restrictions: None
Support URL: http://jevois.org/doc
Other URL: http://iLab.usc.edu
Address: University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA