JeVois Tutorials
1.22
JeVois Smart Embedded Machine Vision Tutorials
JeVois can directly process video input from its camera sensor, and record results to microSD. Here we will develop a new module, which records a short video clip each time a surprising event is detected in the video stream, so that the event is captured to microSD. This is an enhanced version of standard motion-based video surveillance systems, which only record video when things are moving. One drawback of such systems is that repetitive motions, such as foliage moving in the wind, easily trip the motion detection algorithm and could potentially trigger recording of a lot of data.
This is a fairly long and detailed tutorial. To get you excited, let us look at the payoff upfront: Here is an hour-long surveillance video. It is very boring overall, except that a few brief surprising things occur (a few seconds each). Can you find them?
Here is what the SurpriseRecorder module we develop in this tutorial found (4 true events plus 2 false alarms):
That is, it summarized 1 hour of video into 6 snippets of about 12 seconds each (50x reduction: you just watch slightly over a minute of surprising snippets instead of 1 hour of mostly boring footage). Upon closer inspection after this tutorial's results video was made, the last detected event actually appears to be a bird flying very quickly across the video frames. So, as far as surprise is concerned, this is actually a hit rather than the false alarm noted in the results video (i.e., it was a surprising event, although possibly not relevant to a surveillance goal - see here for recent highly related work on relevance). So we got 5 hits and 1 false alarm. No misses as far as we can tell from watching the full hour-long video, i.e., our module did detect all the boats (and birds) that passed by. Not bad at all!
We will use Itti & Baldi's theory of surprise to detect surprising events in video.
They defined surprise in a formal, quantitative manner (for the first time!), as follows: An observation is surprising if it significantly affects the internal (subjective) beliefs of an observer. For example, if I believe that there is a 10% chance of rain today (my prior belief), and then I look outside and I see only a few small scattered clouds, then I may still believe in that same 10% chance of rain (posterior belief after the observation). My observation was not surprising, and Itti & Baldi say that this is because it did not affect my beliefs. Formally, when my posterior beliefs after an observation are very similar to what my prior beliefs were before the observation, the observation carries no surprise. In contrast, if I see a sky covered with menacing dark clouds all over, I may revise my belief to an 80% chance of rain today. Because my posterior beliefs are now very different from my prior beliefs (80% vs 10% chance of rain), the observation of clouds is said to carry a high surprise. Itti & Baldi further specify how to compute surprise by using Bayes' theorem to compute posterior beliefs in a principled way, and by using the Kullback-Leibler (KL) divergence to measure the difference between posterior and prior distributions of beliefs. This gives rise to a new quantitative measure of surprise, with a new unit, the wow (one wow of surprise is experienced when your belief in something doubles).
For more information, check out L. Itti, P. F. Baldi, Bayesian Surprise Attracts Human Attention, Vision Research, Vol. 49, No. 10, pp. 1295-1306, May 2009
Here, we will:
This approach is related to [R. C. Voorhies, L. Elazary, L. Itti, Neuromorphic Bayesian Surprise for Far Range Event Detection, In Proc 9th IEEE AVSS, Beijing, China, Sep 2012](http://ilab.usc.edu/publications/doc/Voorhies_etal12avss.pdf)
We first create a new component and name it Surprise. Because it is closely related to Saliency, we will just place it in jevoisbase/include/jevoisbase/Components/Saliency/ (for the .H file) and jevoisbase/src/Components/Saliency/ (for the .C file), together with the Saliency component. We need Surprise.H and Surprise.C here so that users of our component will just include Surprise.H in their own components or modules. By placing the source under jevoisbase/src/Components/ we ensure that the jevoisbase build rules (CMake) will automatically detect it and compile it into Surprise.o, which will also automatically be linked into the libjevoisbase.so library that contains all the components of jevoisbase. This is the result of the following line in the CMakeLists.txt of jevoisbase:
jevois_setup_library(src/Components jevoisbase 1.0)
which instructs CMake to build a new library called libjevoisbase.so from all the source files under src/Components/ (recursively).
We start Surprise.H with two statements:
The first one tells the compiler to not include this file again if it was already included, thereby simplifying how people will use our component (they will not get errors about duplicate definitions if somehow Surprise.H ends up being included twice, maybe because it is used by two higher-level components that are both used in an even higher-level component). The second one will allow us to use the Saliency component in our component. Because Saliency already is a Component, including Saliency.H will automatically pull in the whole Component, Parameter, etc machinery of JeVois and hence no further include statement is needed here for now.
We then declare a new class Surprise that derives from jevois::Component. We do not place it into any namespace since all of jevoisbase is just in the global namespace (it is end-user code). In the documentation for the new component, we make sure to include a statement:
\ingroup components
so that our component gets listed under all the components of jevoisbase in the online documentation of jevoisbase.
We thus start like this:
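A minimal sketch of such a starting point, assuming the jevois::Component API as described in the text (not the exact jevoisbase code):

```cpp
// Surprise.H
#pragma once

#include <jevoisbase/Components/Saliency/Saliency.H>

//! Compute Itti & Baldi surprise over video frames
/*! \ingroup components */
class Surprise : public jevois::Component
{
  public:
    //! Constructor
    Surprise(std::string const & instance);

    //! Destructor
    virtual ~Surprise();
};
```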
Note how our new component Surprise does not derive from Saliency, but rather derives directly from jevois::Component. Indeed, in the JeVois framework, we use composition to create hierarchies of components as opposed to using inheritance. This is to avoid possibly ambiguous inheritance chains with respect to parameters (remember that JeVois components inherit from all of their parameters, as explained in Parameter-related classes and functions). Thus, Surprise will instead contain a Saliency sub-component.
Although lookup functions are provided by jevois::Component to find sub-components by name, it is usually a good idea to maintain a redundant but more direct handle to the sub-components which we will often use. Hence, we add the following to our class:
and we initialize it in our constructor, while the destructor does nothing for now. Thus Surprise.C looks like this now:
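The member declaration and constructor might look roughly like this (a sketch assuming jevois::Component::addSubComponent() as described in the JeVois docs):

```cpp
// In the protected section of our class in Surprise.H, a direct
// handle to the sub-component:
std::shared_ptr<Saliency> itsSaliency;

// In Surprise.C, the constructor attaches the Saliency sub-component:
Surprise::Surprise(std::string const & instance) :
    jevois::Component(instance)
{
  itsSaliency = addSubComponent<Saliency>("saliency");
}

Surprise::~Surprise()
{ }
```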
Note how all the parameters of Saliency (for scales, weights of the different features, etc) are now exposed and accessible to users of Surprise, with no work required from us thanks to the JeVois Component and Parameter framework. This is a big leap forward compared to using functions with many parameters as done, e.g., in OpenCV, which would require us here to manually expose and forward all these parameters. See the doc of jevois::Component and Parameter in JeVois for more explanations.
Let us run a
./rebuild-host.sh
now from inside ~/jevoisbase to make sure that CMake detects our new files src/Components/Saliency/Surprise.H and src/Components/Saliency/Surprise.C and compiles them.
We only need to do this full rebuild once to update the CMake cache. Later, as we continue to develop our component, we can just change directory to hbuild and simply type make
to re-compile only what we have changed and not the whole jevoisbase.
We will first fully develop and build our new component for host, which is easier and faster than cross-compiling for the JeVois ARM processor. Once everything works well on the host computer, we will move on to cross-compiling for platform hardware.
Now for the actual work: Here we want a single function that will take a video frame in and will return a surprise value, which will be the value of the most surprising location in the video frame. It will first compute saliency and feature maps using our Saliency sub-component, and will then compute surprise on those maps. We model our processing function on one of the functions in Saliency, since we will directly pass our received video frame to Saliency. Thus, we declare a new member function in Surprise.H:
and we implement it in Surprise.C:
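A sketch of what this pair could look like, assuming a Saliency::process() overload taking the raw frame and a gist flag as suggested by the text (signatures may differ from the actual jevoisbase code):

```cpp
// In Surprise.H:
//! Compute surprise over a video frame; returns the max surprise value
double process(jevois::RawImage const & img);

// In Surprise.C:
double Surprise::process(jevois::RawImage const & img)
{
  // Compute saliency and feature maps; this may throw, but we hold
  // no resources that could leak if we exit through an exception:
  itsSaliency->process(img, true);

  double surprise = 0.0;
  // ... surprise computation over the maps will go here ...
  return surprise;
}
```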
Note how Saliency::process() could throw if something goes wrong. Here, we do not worry about it since we have not allocated any resource that could be leaked if we were to exit through an exception. See Tips for writing machine vision modules for more info.
Typing make
in ~/jevoisbase/hbuild/ still compiles fine at this point.
Following Itti & Baldi, 2009, here we will assume Poisson data (spikes from neurons representing the feature values) and we will hence naturally use a Gamma conjugate prior. Under these conditions, surprise can be computed in closed form (see Itti & Baldi, 2009). The Gamma distribution has two parameters, alpha and beta - these will basically capture, at each pixel in each feature map, our beliefs about what this feature pixel usually looks like. We will update this belief over time, each time a new video frame is received. If a given video frame yields a big update, we will conclude that something surprising just happened.
To compute surprise over the salmap, intens, color, ori, flicker, and motion maps that are inside the Saliency component, we will need to store two corresponding arrays: one for the prior alpha, and one for the prior beta for each pixel of each map. We will then update those arrays, pixel-by-pixel, treating the maps from Saliency as new data observations, and assuming that the single value received from the Saliency component represents the mean of a Poisson-distributed spike train. Our prior alpha and beta arrays will then become posterior alpha and beta arrays through Bayesian update using the data, and we will compute surprise from the KL divergence between prior and posterior. Note (see paper for details) that our choice of a Gamma prior guarantees that, after an update using Poisson data, the posterior is also Gamma (i.e., Gamma is the conjugate prior for Poisson). We will then transfer the posterior arrays for the current frame into the prior arrays stored in our class for the next frame.
Here we will just compute surprise independently for each pixel in each feature map. Thus, the 2D structure of the saliency feature maps will not be exploited. We need float or double values for alpha and beta to accurately compute surprise, let us use float here. Thus, we could possibly just use std::array<float> to store alpha and beta maps as 1D arrays, except for one detail: we do not know the feature map sizes until the first video frame has been processed through saliency.
So we need dynamically-sized arrays, and we can just use std::vector<float>. This is preferred to a raw C array because the memory will be automatically freed by the vector destructor when our component is destroyed; thus, by using std::vector<float> as opposed to raw float *, new, and delete, we do not have to worry about freeing it, or about exceptions, memory leaks, etc.
One final thing to note: the map of origin will not be important for surprise computation here; we will just compute surprise independently for every pixel of every map. So we can just concatenate all the pixels of all maps into a single vector of data, and likewise for alpha and beta. We hence add the following data members to our class, under the protected section:
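These members could simply be (names as used later in the text):

```cpp
// Concatenated Gamma priors for all pixels of all maps
// (and for the gist vector, if enabled):
std::vector<float> itsAlpha; // prior alpha for each data element
std::vector<float> itsBeta;  // prior beta for each data element
```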
Yes, we could have used a vector of pairs, or created an auxiliary (alpha, beta) struct as well, but access is probably faster with our 2 vectors. We start fleshing out our process() function as follows:
At this point we realize two things: 1) the update factor updatefac could be a Parameter of our component; 2) how about also computing surprise on the gist vector, while we are at it, if the users want it? Let us add these two features. First a parameter for updatefac:
In Surprise.H, we create a namespace surprise that we will use for our parameters. This is to avoid clashes among different components that could be used jointly in a given higher-level component (some other component may also have a parameter named updatefac). The convention here is to use the lowercase version of the class name as the namespace name for its parameters. We create a parameter category so all our parameters will be grouped together when users type help
, and note the \relates directive to instruct doxygen to relate the parameter to our class in the online documentation:
Remember that the format of JEVOIS_DECLARE_PARAMETER()
is:
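The general format, and a sketch of how updatefac could be declared (the description and default value below are illustrative, not necessarily the actual jevoisbase values):

```cpp
// General format (a variant also accepts a valid-values specification
// before the category):
// JEVOIS_DECLARE_PARAMETER(ParamName, ParamType, "description",
//                          defaultValue, ParameterCategory);

namespace surprise
{
  static jevois::ParameterCategory const ParamCateg("Surprise Options");

  //! Parameter \relates Surprise
  JEVOIS_DECLARE_PARAMETER(updatefac, float,
                           "Surprise update factor on every video frame",
                           0.95F, ParamCateg);
}
```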
We then add these two parameters to our Component:
and we use them in our process function:
That's it. These parameters will now appear in the help
message, users will be able to set and get their values, and every component or module that includes our Surprise component as a sub-component will also expose them!
Note, however, that doing a get()
on a parameter is slightly heavier than just getting the value of a variable, because parameters are thread-safe, and hence a mutex must be locked during the get(). Hence, instead of calling updatefac::get()
for each data element, we can here pre-compute our divider beforehand:
We update our function to concatenate the gist vector to the data:
A quick make
and all still compiles fine. No need to mess around with cross-compilers, microSD cards, etc at this point thanks to the simultaneous host/platform development environment of JeVois.
All right, so we just need to compute surprise. We do so here as in the SurpriseModelSP of the iLab Neuromorphic C++ Vision Toolkit, from which our implementation here is derived. Also see Itti & Baldi, Vis Res, 2009, for details:
We just need to implement the helper function KLgamma()
and we are done. This is the closed-form solution for surprise on a Gamma prior. It ends up using hundreds of lines of code of no interest here. We just rip that code from the Itti & Baldi implementation.
One last improvement before we are done with our component: instead of just a boolean dogist
, how about allowing users to choose which channels or combinations they like. We replace dogist
by:
The regex specification of valid values specifies that we want a string composed exclusively of the allowed letters, and with at least one letter. We modify the class parameter list to use this parameter instead of dogist
, and we modify the process()
function:
We are done with the Surprise component. Final code is in Surprise.H and Surprise.C of jevoisbase and should be pretty close to what we have developed above, except for small optimizations introduced after this tutorial was written. The details of KLgamma()
are also in there.
We are now ready to develop a new module, which we will call SurpriseRecorder. It will compute surprise and record to microSD small video snippets around each detected surprising event.
To get started, we:
Create a directory jevoisbase/src/Modules/SurpriseRecorder and start a file jevoisbase/src/Modules/SurpriseRecorder/SurpriseRecorder.C
Since modules are terminal entities (nobody will use them as sub-components), we usually develop them in Java-style, i.e., in a single .C file that contains declarations and implementation.
Our module will use the Surprise component, so we pull it in. We also include Module.H from jevois:
We start our class as a JeVois Module. We fill in custom doxygen tags that will be used to generate the module's documentation page (see Programmer SDK and writing new modules).
Note how for now we are aiming to only support modes with no video out over USB. That is, this module is mainly geared towards standalone operation: one lets it run for a day, then one checks out the video snippets that were recorded on the microSD. No live output is needed. Thus, we will override the process()
function of jevois::Module that only takes an input frame in (and no output frame). The jevois::Engine will call this function for each new video frame from the camera sensor.
At some point, also remember to add an icon file, maybe some screenshots, etc to your module, as explained in Programmer SDK and writing new modules
We get going with constructor and destructor and a skeleton for the process()
function. Our module will use a Surprise sub-component. We do not forget to register the module with the JeVois framework so it can be loaded at runtime into the JeVois engine (see last line and the JEVOIS_REGISTER_MODULE() macro):
Let's try to compile. Like with our Component before, the first time we add the new file, we need to re-run CMake, by simply running a full:
./rebuild-host.sh
The rebuild script will detect the new module under jevoisbase/src/Modules/ and will add it to the list of modules to build. Later, as we keep editing our module file, we can just type make
from within the hbuild directory.
Let's add a few parameters. Because modules are terminal entities, we do not need to place them in a namespace. We take inspiration from the SaveVideo module:
and we add them to our module class:
Now let's see how we will record the frames. We will use an approach similar to what was done in SaveVideo. But, because of our desire to also save context frames before and after each surprising event, the overall logic here will be sufficiently different. Thus, while one could at first think of splitting off the video saving aspect of the SaveVideo module into a Component that could then be shared, here we will not attempt this because of the differences introduced by the context. Recording context is important for humans reviewing the surprising events, as some events may be surprising only for a few frames and would thus be very difficult to watch without a few extra seconds of context before and after the event.
We will use the following basic plan of attack:

- In process(), we will compute surprise and decide whether the current frame should be saved.
- We will run video compression and saving in a separate thread, passing frames from process() running in the main thread to that thread. We will use jevois::BoundedBuffer<T>, which was developed for this kind of purpose. BoundedBuffer is a thread-safe producer-consumer queue. Our process() function will push frames into it, and our video writing thread will pop frames from it.
- We will keep the last ctxframes frames in another, non-thread-safe, queue to be used by the main thread only, so that they are ready to provide the context before a surprising event whenever that event is encountered. When a surprising event starts, we will transfer all these frames to our BoundedBuffer, unless we are already saving a previous event. An std::deque<cv::Mat> should work great for that; we will call it itsCtxBuf. Note that, because cv::Mat is only a thin wrapper around pixel data, pushing a cv::Mat into a queue or transferring a bunch of cv::Mat from one queue to another will not copy the pixel data, just the shared pointers to it that are in cv::Mat. So it is cheap to move these images around.

Let's start with starting and stopping our thread. We override jevois::Component::postInit() to start it, because we want to access the parameter that contains the number of context frames as we start. This parameter will not be set yet at construction time, but will be ready by the time postInit() is run. Thus, likewise, we override jevois::Component::postUninit() to stop it.
We also initialize some of our synchronization variables in the constructor:
and we need to know about BoundedBuffer and a bunch of other things we will use in run()
and process()
:
Typing a quick make
in hbuild compiles with no errors or warnings.
Let's now flesh out our process()
function: We just convert frames to cv::Mat and push them into our buffer, and we also compute surprise. We will do both in parallel by using an async thread for the surprise computation:
And finally our run()
thread. As it turns out, we end up using it unmodified from that of SaveVideo. So, actually, we could benefit from splitting off this video saving machinery into a Component that would be used both by SaveVideo and SurpriseRecorder. We will leave that for future work:
One last thing: because we so far have no idea of what surprise values to expect, let us show them on each frame. We add a simple LINFO()
message as follows:
A quick make
and all compiles. Type a sudo make install
to install your compiled module into the /jevois/ directory on your host computer. Time to test!
Let us first try our new module on the host computer. We will use YUYV 640x480 @ 15 fps.
To provide video input, let us use our JeVois camera, configured in "dumb camera mode": We add a video mapping on our microSD that allows it to just output YUYV 640x480 @ 15 fps using the PassThrough module (no processing on JeVois). Then we will run jevois-daemon
on our host, grab that format of video, and process it on the host:
Add this line to JEVOIS:/config/videomappings.cfg on the microSD of your JeVois camera:
YUYV 640 480 15.0 YUYV 640 480 15.0 JeVois PassThrough
Your JeVois camera can now operate as a dumb camera in that mode. Alternatively, you could also here use a regular webcam to provide inputs, as long as it supports this format.
Connect your JeVois camera to your host computer and allow it to boot up.
On the host, make sure that you have write permission to /jevois/data/, then type
jevois-daemon
to run the JeVois processing on your host processor. It will likely start DemoSaliency as default processing module.
To switch to our new module, type this into the terminal in which you launched jevois-daemon
:
streamoff
setmapping2 YUYV 640 480 15.0 JeVois SurpriseRecorder
setpar serout All
streamon
Your terminal should show these lines:
streamoff
OK
setmapping2 YUYV 640 480 15.0 JeVois SurpriseRecorder
INF Engine::setFormatInternal: OUT: NONE 0x0 @ 0fps CAM: YUYV 640x480 @ 15fps MOD: JeVois:SurpriseRecorder
INF Camera::setFormat: Camera set video format to 640x480 YUYV
INF Engine::setFormatInternal: Instantiating dynamic loader for /jevois/modules/JeVois/SurpriseRecorder/SurpriseRecorder.so
OK
INF Engine::setFormatInternal: Module [SurpriseRecorder] loaded, initialized, and ready.
streamon
INF Camera::streamOn: 6 buffers of 614400 bytes allocated
OK
INF SurpriseRecorder::process: surprise = 0.00094728 itsToSave = 0
INF SurpriseRecorder::process: surprise = 1.44831e+07 itsToSave = 0
SAVETO /jevois/data/surpriserecorder/video000000.avi
INF SurpriseRecorder::process: surprise = 7.42191e+06 itsToSave = 150
INF SurpriseRecorder::process: surprise = 6.70748e+06 itsToSave = 149
INF SurpriseRecorder::process: surprise = 3.98372e+06 itsToSave = 148
INF SurpriseRecorder::process: surprise = 1.16248e+07 itsToSave = 147
INF SurpriseRecorder::process: surprise = 4.28625e+06 itsToSave = 150
INF SurpriseRecorder::process: surprise = 3.55222e+06 itsToSave = 149
INF SurpriseRecorder::process: surprise = 2.4415e+06 itsToSave = 148
INF SurpriseRecorder::process: surprise = 1.08243e+07 itsToSave = 147
INF SurpriseRecorder::process: surprise = 3.69354e+06 itsToSave = 150
INF SurpriseRecorder::process: surprise = 3.03062e+06 itsToSave = 149
INF SurpriseRecorder::process: surprise = 1.00832e+07 itsToSave = 148
INF SurpriseRecorder::process: surprise = 5.33606e+06 itsToSave = 150
[...]
Those values are very large. First, wave your hand in front of the camera and check out the values. We get values above 1e7 wows (10 megawows). So let us set the threshold there:
setpar thresh 1e7
Now, after 10 seconds, you should see messages about the video file being closed. Wave your hand and a new file will open. That LINFO()
is getting annoying now, so remove it from the module, re-compile, and run again; type this after you launch jevois-daemon
:
streamoff
setmapping2 YUYV 640 480 15.0 JeVois SurpriseRecorder
setpar serout All
setpar thresh 1e7
setpar channels S
streamon
Now, keep your camera still and looking at nothing that moves. Note how one event may get recorded at the start; this is because the first few frames are surprising (we believe due to some initial gain control on the camera as capture starts), after which things rapidly get boring and recording stops.
Each time you wave your hand in front of the camera, you should see it save to a new file. 10 seconds after you stop waving, saving stops.
Use mplayer
or similar to play the videos that are getting written to /jevois/data/surpriserecorder/video*. Each one should start with 10 seconds of nothing, then your waving hand, then end with 10 seconds of nothing.
The JeVois framework provides convenient jevois::Timer and jevois::Profiler classes to help you measure how much time it takes to do things on each frame. This will help us decide what standard videomapping we should suggest for our surprise recorder. Both classes operate in the same way:
- call start() on the timer or profiler object to indicate the start of a frame (start of your process() function);
- (profiler only) issue checkpoint() commands at various checkpoints in your process() function;
- indicate the end of the frame with a stop() command.
commandThe timer and profiler classes will accumulate average statistics over 100 frames and will display those once in a while. We do not display on every frame as this could slow us down too much, especially if sending those reports over serial port.
Let us first include the profiler declarations so we can use it:
Let us instrument our process()
function with a jevois::Profiler as follows. The new lines below all have the word prof in them, look for it, and we also added some ////////////////////////////////////////////////////////
markers to help:
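The instrumentation could look roughly like this (a sketch; the checkpoint names match the reports shown below, but the exact jevois::Profiler signatures may differ):

```cpp
static jevois::Profiler prof("surpriserecorder");

void process(jevois::InputFrame && inframe)
{
  prof.start();                         // start of frame
  // ... launch the async surprise computation ...
  prof.checkpoint("surprise launched");
  // ... convert the frame and push it into the buffer ...
  prof.checkpoint("image pushed");
  // ... wait for the surprise result and act on it ...
  prof.checkpoint("surprise done");
  prof.stop();                          // end of frame
}
```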
Now, every 100 frames, you will see something like this:
INF Profiler::stop: surpriserecorder overall average (100) duration 15.4445ms [11.2414ms .. 22.1041ms] (64.7478 fps)
INF Profiler::stop: surpriserecorder - surprise launched average (100) delta duration 43.7507us [27.532us .. 77.293us] (22856.8 fps)
INF Profiler::stop: surpriserecorder - image pushed average (100) delta duration 950.279us [501.272us .. 1.96373ms] (1052.32 fps)
INF Profiler::stop: surpriserecorder - surprise done average (100) delta duration 14.4426ms [10.6092ms .. 20.9499ms] (69.2396 fps)
The overall average is the time from start()
to stop()
. The others are for checkpoints and they report the time between start to first checkpoint, then from first to second checkpoint, etc. Durations displayed will depend on how fast your host computer is.
On the host this is not very useful, so let us run this puppy on the JeVois camera now that everything seems to be working well.
We basically follow the standard compilation instructions (see Flashing to microSD card).
Here we will first do a full cross-recompilation of everything for the JeVois platform hardware, but that may not always be necessary.
cd ~/jevois && ./rebuild-platform.sh
cd ~/jevoisbase && ./rebuild-platform.sh --microsd
sudo jevois-flash-card -y /dev/sdX
Make sure that you replace sdX above by the device of your microSD.
Once the microSD card is written, insert it into your JeVois camera, connect the camera to a host computer, and let it boot. Do not start a video capture software.
Connect to JeVois through a serial terminal (see Command-line interface user guide) using the serial-over-USB connection to JeVois. Then issue the following commands:
help
info
setpar serlog USB
setpar serout USB
setmapping2 YUYV 640 480 15.0 JeVois SurpriseRecorder
setpar thresh 1e7
setpar channels S
streamon
You should see these messages every few seconds:
INF Profiler::stop: surpriserecorder overall average (100) duration 68.4739ms [63.6285ms .. 83.3681ms] (14.6041 fps)
INF Profiler::stop: surpriserecorder - surprise launched average (100) delta duration 114.16us [102.125us .. 200.25us] (8759.64 fps)
INF Profiler::stop: surpriserecorder - image pushed average (100) delta duration 3.93424ms [2.97025ms .. 6.0565ms] (254.179 fps)
INF Profiler::stop: surpriserecorder - surprise done average (100) delta duration 64.4076ms [59.4994ms .. 79.3229ms] (15.5261 fps)
Now wave in front of the camera and you should get some
SAVETO /jevois/data/surpriserecorder/video000000.avi
SAVEDNUM 100
SAVEDNUM 200
[...]
SAVEDONE /jevois/data/surpriserecorder/video000000.avi
After you are done recording a bunch of events, unplug JeVois, get the microSD out, connect it to your host, and check out the videos that were saved to it!
From the profiler, it looks like our guess that we would be able to do 15fps at 640x480 was pretty good (see the overall average reports).
Sometimes, it is useful to be able to run an algorithm on a pre-recorded video sequence to fine-tune it. Here, for example, we might want to tune the threshold, update factor, channels, of the algorithm in a systematic manner using always the same data. The JeVois framework allows for this, simply by specifying a video file as cameradev
when starting jevois-daemon
(see The jevois-daemon executable).
Here, we will use an hour-long 320x240 video that was posted live on the web several years ago as part of the now defunct blueservo project. These cameras recorded live outdoor video near the border between Texas and Mexico, and citizens were asked to watch them and to call the sheriff if they saw anything suspicious.
We will run our tests on the host. The same would also work on JeVois.
First, download the test video from http://jevois.org/data/blueservo23_66.asf
Check out this video and see how it is non-trivial to process, due to:
Yes, this video is very boring for a whole hour! But it does contain a few short interesting events as we will see below.
Then, create a new entry in your host's /jevois/config/videomappings.cfg so we can start with our surprise recorder module right away:
NONE 0 0 0.0 YUYV 320 240 70.0 JeVois SurpriseRecorder
Or, with JeVois v1.3 and later, you can just type, in a Linux terminal:
sudo jevois-add-videomapping NONE 0 0 0.0 YUYV 320 240 70.0 JeVois SurpriseRecorder
Here we assume that your host is fast enough to run our surprise module at 70fps at 320x240.
Run jevois-daemon
on your host (with a camera connected) and type listmappings
and note the mapping number assigned to the one you just created. For us, the number was 0.
Now edit /jevois/config/initscript.cfg:
setmapping 0
setpar serout All
setpar thresh 2.0e7
setpar ctxframes 75
setpar channels S
Note how we are using a high threshold so we get very few events. We also decreased the context to +/- 5 seconds (75 frames). For now, we also only compute surprise over the saliency map, and not over the gist or the other feature maps. This is subject to more experimentation.
Finally, run
jevois-daemon --cameradev=blueservo23_66.asf
and let it go through it. Watch the events that were extracted from this hour-long video.
jevois-daemon for now. This could be changed in the JeVois core some day.

Here are the results:
You could add the following to this module:

- a version of the process() function with video output, maybe showing the most surprising location and a bar-graph plotting its surprise and the current surprise threshold.