JeVoisBase  1.3
JeVois Smart Embedded Machine Vision Toolkit Base Modules
JeVoisIntro.C
1 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
2 //
3 // JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2016 by Laurent Itti, the University of Southern
4 // California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
5 //
6 // This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
7 // redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
8 // Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
9 // without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
10 // License for more details. You should have received a copy of the GNU General Public License along with this program;
11 // if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
12 //
13 // Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
14 // Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
15 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
16 /*! \file */
17 
18 #include <jevois/Core/Module.H>
19 
20 #include <jevois/Debug/Log.H>
21 #include <jevois/Debug/Timer.H>
28 
29 #include <opencv2/core/core.hpp>
30 #include <opencv2/imgproc/imgproc.hpp>
31 #include <linux/videodev2.h> // for v4l2 pixel types
32 //#include <opencv2/highgui/highgui.hpp> // used for debugging only, see imshow below
33 
34 // icon by Freepik in interface at flaticon
35 
36 struct ScriptItem { char const * msg; int blinkx, blinky; };
37 static ScriptItem const TheScript[] = {
38  { "Hello! Welcome to this simple demonstration of JeVois", 0, 0 },
39  { "JeVois = camera sensor + quad-core processor + USB video output", 0, 0 },
40  { "This demo is running on the small processor inside JeVois", 0, 0 },
41  { "Neat, isn't it?", 0, 0 },
42  { "", 0, 0 },
43  { "We will help you discover what you see on this screen", 0, 0 },
44  { "We will use this blinking marker to point at things:", 600, 335 },
45  { "", 0, 0 },
46  { "Now a brief tutorial...", 0, 0 },
47  { "", 0, 0 },
48  { "This demo: Attention + Gist + Faces + Objects", 0, 0 },
49  { "Attention: detect things that catch the human eye", 0, 0 },
50  { "Pink square in video above: most interesting (salient) location", 0, 0 },
51  { "Green circle in video above: smoothed attention trajectory", 0, 0 },
52  { "Try it: wave at JeVois, show it some objects, move it around", 0, 0 },
53  { "", 0, 0 },
54  { "Did you catch the attention of JeVois?", 0, 0 },
55  { "", 0, 0 },
56  { "Attention is guided by color contrast, ...", 40, 270 },
57  { "by luminance (intensity) contrast, ...", 120, 270 },
58  { "by oriented edges, ...", 200, 270 },
59  { "by flickering or blinking lights, ...", 280, 270 },
60  { "and by moving objects.", 360, 270 },
61  { "All these visual cues combine into a measure of saliency", 480, 120 },
62  { "or visual interest for every location in view.", 480, 120 },
63  { "", 0, 0 },
64  { "", 0, 0 },
65  { "Gist: statistical summary of a scene, also based on ...", 440, 270 },
66  { "color, intensity, orientation, flicker and motion features.", 0, 0 },
67  { "Gist can be used to recognize places, such as a kitchen or ...", 0, 0 },
68  { "a bathroom, or a road turning left versus turning right.", 0, 0 },
69  { "Try it: point JeVois to different things and see gist change", 440, 270 },
70  { "", 0, 0 },
71  { "", 0, 0 },
72  { "Face detection finds human faces in the camera's view", 612, 316 },
73  { "Try it: point JeVois towards a face. Adjust distance until ...", 612, 316 },
74  { "the face fits inside the attention pink square. When a face ...", 0, 0 },
75  { "is detected, it will appear in the bottom-right corner.", 612, 316 },
76  { "", 0, 0 },
77  { "", 0, 0 },
78  { "Objects: Here we recognize handwritten digits using ...", 525, 316 },
79  { "deep neural networks. Try it! Draw a number on paper ...", 0, 0 },
80  { "and point JeVois towards it. Adjust distance until the", 0, 0 },
81  { "number fits in the attention pink square.", 0, 0 },
82  { "Recognized digits are shown near the detected faces.", 525, 316 },
83  { "", 0, 0 },
84  { "Recognition scores for digits 0 to 9 are shown above", 464, 310 },
85  { "Sometimes the neural network makes mistakes and thinks it ...", 0, 0 },
86  { "found a digit when actually it is looking at something else.", 0, 0 },
87  { "This is still a research issue", 0, 0 },
88  { "but machine vision is improving fast, so stay tuned!", 0, 0 },
89  { "", 0, 0 },
90  { "With JeVois the future of machine vision is in your hands.", 0, 0 },
91  { "", 0, 0 },
92  { "", 0, 0 },
93  { "", 0, 0 },
94  { "This tutorial is now complete. It will restart.", 0, 0 },
95  { "", 0, 0 },
96  { nullptr, 0, 0 }
97 };
98 
99 //! Simple introduction to JeVois and demo that combines saliency, gist, face detection, and object recognition
100 /*! This module plays an introduction movie, and then launches the equivalent of DemoSalGistFaceObj, but with some added
101  text messages that explain what is going on on the screen.
102 
103  Try it and follow the instructions on screen!
104 
105  @author Laurent Itti
106 
107  @displayname JeVois Intro
108  @videomapping YUYV 640 360 50.0 YUYV 320 240 50.0 JeVois JeVoisIntro
109  @videomapping YUYV 640 480 50.0 YUYV 320 240 50.0 JeVois JeVoisIntro
110  @email itti\@usc.edu
111  @address University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA
112  @copyright Copyright (C) 2016 by Laurent Itti, iLab and the University of Southern California
113  @mainurl http://jevois.org
114  @supporturl http://jevois.org/doc
115  @otherurl http://iLab.usc.edu
116  @license GPL v3
117  @distribution Unrestricted
118  @restrictions None
119  \ingroup modules */
120 class JeVoisIntro : public jevois::StdModule
121 {
122  public:
123  //! Constructor
124  JeVoisIntro(std::string const & instance) : jevois::StdModule(instance), itsScoresStr(" ")
125  {
126  itsSaliency = addSubComponent<Saliency>("saliency");
127  itsFaceDetector = addSubComponent<FaceDetector>("facedetect");
128  itsObjectRecognition = addSubComponent<ObjectRecognitionMNIST>("MNIST");
129  itsKF = addSubComponent<Kalman2D>("kalman");
130  itsVideo = addSubComponent<BufferedVideoReader>("intromovie");
131  }
132 
133  //! Virtual destructor for safe inheritance
134  virtual ~JeVoisIntro() { }
135 
136  //! Initialization once parameters are set:
137  virtual void postInit() override
138  {
139  // Read the banner image and convert to YUYV RawImage:
140  cv::Mat banner_bgr = cv::imread(absolutePath("jevois-banner-notext.png"));
141  itsBanner.width = banner_bgr.cols;
142  itsBanner.height = banner_bgr.rows;
143  itsBanner.fmt = V4L2_PIX_FMT_YUYV;
144  itsBanner.bufindex = 0;
145  itsBanner.buf.reset(new jevois::VideoBuf(-1, itsBanner.bytesize(), 0));
147 
148  // Allow our movie to load a bit:
149  std::this_thread::sleep_for(std::chrono::milliseconds(750));
150  }
151 
152  //! Processing function
153  virtual void process(jevois::InputFrame && inframe, jevois::OutputFrame && outframe) override
154  {
155  static jevois::Timer itsProcessingTimer("Processing");
156  static cv::Mat itsLastFace(60, 60, CV_8UC2, 0x80aa) ; // Note that this one will contain raw YUV pixels
157  static cv::Mat itsLastObject(60, 60, CV_8UC2, 0x80aa) ; // Note that this one will contain raw YUV pixels
158  static std::string itsLastObjectCateg;
159  static bool doobject = false; // alternate between object and face recognition
160  static bool intromode = false; // intro mode plays a video at the beginning, then shows some info messages
161  static bool intromoviedone = false; // turns true when intro movie complete
162  static ScriptItem const * scriptitem = &TheScript[0];
163  static int scriptframe = 0;
164 
165  // Wait for next available camera image:
166  jevois::RawImage inimg = inframe.get();
167 
168  // We only handle one specific input format in this demo:
169  inimg.require("input", 320, 240, V4L2_PIX_FMT_YUYV);
170 
171  itsProcessingTimer.start();
172  int const roihw = 32; // face & object roi half width and height
173 
174  // Compute saliency, in a thread:
175  auto sal_fut = std::async(std::launch::async, [&](){ itsSaliency->process(inimg, true); });
176 
177  // While computing, wait for an image from our gadget driver into which we will put our results:
178  jevois::RawImage outimg = outframe.get();
179  outimg.require("output", 640, outimg.height, V4L2_PIX_FMT_YUYV);
180  switch (outimg.height)
181  {
182  case 312: break; // normal mode
183  case 360:
184  case 480: intromode = true; break; // intro mode
185  default: LFATAL("Incorrect output height: should be 312, 360 or 480");
186  }
187 
188  // Play the intro movie first if requested:
189  if (intromode && intromoviedone == false)
190  {
191  cv::Mat m = itsVideo->get();
192 
193  if (m.empty()) intromoviedone = true;
194  else
195  {
197 
198  // Handle bottom of the frame: blank or banner
199  if (outimg.height == 480)
200  jevois::rawimage::paste(itsBanner, outimg, 0, 360);
201  else if (outimg.height > 360)
202  jevois::rawimage::drawFilledRect(outimg, 0, 360, outimg.width, outimg.height - 360, 0x8000);
203 
204  // If on a Mac with height = 480, we need to flip horizontally for Photo Booth to work (it will flip again):
205  if (outimg.height == 480) jevois::rawimage::hFlipYUYV(outimg);
206 
207  sal_fut.get(); // yes, we are wasting CPU here, just to keep code more readable with the intro stuff added
208  inframe.done();
209  outframe.send();
210  return;
211  }
212  }
213 
214  // Paste the original image to the top-left corner of the display:
215  unsigned short const txtcol = jevois::yuyv::White;
216  jevois::rawimage::paste(inimg, outimg, 0, 0);
217  jevois::rawimage::writeText(outimg, "JeVois Saliency + Gist + Faces + Objects", 3, 3, txtcol);
218 
219  // Wait until saliency computation is complete:
220  sal_fut.get();
221 
222  // find most salient point:
223  int mx, my; intg32 msal;
224  itsSaliency->getSaliencyMax(mx, my, msal);
225 
226  // Scale back to original image coordinates:
227  int const smlev = itsSaliency->smscale::get();
228  int const smadj = smlev > 0 ? (1 << (smlev-1)) : 0; // half a saliency map pixel adjustment
229  int const dmx = (mx << smlev) + smadj;
230  int const dmy = (my << smlev) + smadj;
231 
232  // Compute instantaneous attended ROI (note: coords must be even to avoid flipping U/V when we later paste):
233  int const rx = std::min(int(inimg.width) - roihw, std::max(roihw, dmx));
234  int const ry = std::min(int(inimg.height) - roihw, std::max(roihw, dmy));
235 
236  // Asynchronously launch a bunch of saliency drawings and filter the attended locations
237  auto draw_fut =
238  std::async(std::launch::async, [&]() {
239  // Paste the various saliency results:
240  drawMap(outimg, &itsSaliency->salmap, 320, 0, 16, 20);
241  jevois::rawimage::writeText(outimg, "Saliency Map", 640 - 12*6-4, 3, txtcol);
242 
243  drawMap(outimg, &itsSaliency->color, 0, 240, 4, 18);
244  jevois::rawimage::writeText(outimg, "Color", 3, 243, txtcol);
245 
246  drawMap(outimg, &itsSaliency->intens, 80, 240, 4, 18);
247  jevois::rawimage::writeText(outimg, "Intensity", 83, 243, txtcol);
248 
249  drawMap(outimg, &itsSaliency->ori, 160, 240, 4, 18);
250  jevois::rawimage::writeText(outimg, "Orientation", 163, 243, txtcol);
251 
252  drawMap(outimg, &itsSaliency->flicker, 240, 240, 4, 18);
253  jevois::rawimage::writeText(outimg, "Flicker", 243, 243, txtcol);
254 
255  drawMap(outimg, &itsSaliency->motion, 320, 240, 4, 18);
256  jevois::rawimage::writeText(outimg, "Motion", 323, 243, txtcol);
257 
258  // Draw the gist vector:
259  drawGist(outimg, itsSaliency->gist, itsSaliency->gist_size, 400, 242, 40, 2);
260 
261  // Draw a small square at most salient location in image and in saliency map:
262  jevois::rawimage::drawFilledRect(outimg, mx * 16 + 5, my * 16 + 5, 8, 8, 0xffff);
263  jevois::rawimage::drawFilledRect(outimg, 320 + mx * 16 + 5, my * 16 + 5, 8, 8, 0xffff);
264  jevois::rawimage::drawRect(outimg, rx - roihw, ry - roihw, roihw*2, roihw*2, 0xf0f0);
265  jevois::rawimage::drawRect(outimg, rx - roihw+1, ry - roihw+1, roihw*2-2, roihw*2-2, 0xf0f0);
266 
267  // Blank out free space from 480 to 519 at the bottom, and small space above and below gist vector:
268  jevois::rawimage::drawFilledRect(outimg, 480, 240, 40, 60, 0x8000);
269  jevois::rawimage::drawRect(outimg, 400, 240, 80, 2, 0x80a0);
270  jevois::rawimage::drawRect(outimg, 400, 298, 80, 2, 0x80a0);
271  jevois::rawimage::drawFilledRect(outimg, 0, 300, 640, 12, jevois::yuyv::Black);
272 
273  // If intro mode, blank out rows 312 to bottom:
274  if (outimg.height == 480)
275  {
276  jevois::rawimage::drawFilledRect(outimg, 0, 312, outimg.width, 48, 0x8000);
277  jevois::rawimage::paste(itsBanner, outimg, 0, 360);
278  }
279  else if (outimg.height > 312)
280  jevois::rawimage::drawFilledRect(outimg, 0, 312, outimg.width, outimg.height - 312, 0x8000);
281 
282  // Filter the attended locations:
283  itsKF->set(dmx, dmy, inimg.width, inimg.height);
284  float kfxraw, kfyraw, kfximg, kfyimg;
285  itsKF->get(kfxraw, kfyraw, kfximg, kfyimg, inimg.width, inimg.height, 1.0F, 1.0F);
286 
287  // Draw a circle around the kalman-filtered attended location:
288  jevois::rawimage::drawCircle(outimg, int(kfximg), int(kfyimg), 20, 1, jevois::yuyv::LightGreen);
289 
290  // Send saliency info to serial port (for arduino, etc):
291  sendSerialImg2D(inimg.width, inimg.height, kfximg, kfyimg, roihw * 2, roihw * 2, "salient");
292 
293  // If intro mode, draw some text messages according to our script:
294  if (intromode && intromoviedone)
295  {
296  // Compute fade: we do 1s fade in, 2s full luminance, 1s fade out:
297  int lum = 255;
298  if (scriptframe < 32) lum = scriptframe * 8;
299  else if (scriptframe > 4*30 - 32) lum = std::max(0, (4*30 - scriptframe) * 8);
300 
301  // Display the text with the proper fade:
302  int x = (640 - 10 * strlen(scriptitem->msg)) / 2;
303  jevois::rawimage::writeText(outimg, scriptitem->msg, x, 325, 0x7700 | lum, jevois::rawimage::Font10x20);
304 
305  // Add a blinking marker if specified in the script:
306  if (scriptitem->blinkx)
307  {
308  int phase = scriptframe / 10;
309  if ((phase % 2) == 0) jevois::rawimage::drawDisk(outimg, scriptitem->blinkx, scriptitem->blinky,
310  10, jevois::yuyv::LightTeal);
311  }
312 
313  // Move to next video frame and possibly next script item or loop the script:
314  if (++scriptframe >= 140)
315  {
316  scriptframe = 0; ++scriptitem;
317  if (scriptitem->msg == nullptr) scriptitem = &TheScript[0];
318  }
319  }
320  });
321 
322  // Extract a raw YUYV ROI around attended point:
323  cv::Mat rawimgcv = jevois::rawimage::cvImage(inimg);
324  cv::Mat rawroi = rawimgcv(cv::Rect(rx - roihw, ry - roihw, roihw * 2, roihw * 2));
325 
326  if (doobject)
327  {
328  // #################### Object recognition:
329 
330  // Prepare a color or grayscale ROI for the object recognition module:
331  auto objsz = itsObjectRecognition->insize();
332  cv::Mat objroi;
333  switch (objsz.depth_)
334  {
335  case 1: // grayscale input
336  {
337  // MNIST uses white digits on a black background, but here we assume black digits drawn on white paper, so we
338  // invert the image before sending it for recognition. We also need to provide a clean crop around the digit for
339  // the deep network to work well:
340  cv::cvtColor(rawroi, objroi, CV_YUV2GRAY_YUYV);
341 
342  // Find the 10th percentile gray value:
343  size_t const elem = (objroi.cols * objroi.rows * 10) / 100;
344  std::vector<unsigned char> v; v.assign(objroi.datastart, objroi.dataend);
345  std::nth_element(v.begin(), v.begin() + elem, v.end());
346  unsigned char const thresh = std::min((unsigned char)(100), std::max((unsigned char)(30), v[elem]));
347 
348  // Threshold and invert the image:
349  cv::threshold(objroi, objroi, thresh, 255, cv::THRESH_BINARY_INV);
350 
351  // Find the digit and center and crop it:
352  cv::Mat pts; cv::findNonZero(objroi, pts);
353  cv::Rect r = cv::boundingRect(pts);
354  int const cx = r.x + r.width / 2;
355  int const cy = r.y + r.height / 2;
356  int const siz = std::min(roihw * 2, std::max(16, 8 + std::max(r.width, r.height))); // margin of 4 pix
357  int const tlx = std::max(0, std::min(roihw*2 - siz, cx - siz/2));
358  int const tly = std::max(0, std::min(roihw*2 - siz, cy - siz/2));
359  cv::Rect ar(tlx, tly, siz, siz);
360  cv::resize(objroi(ar), objroi, cv::Size(objsz.width_, objsz.height_), 0, 0, cv::INTER_AREA);
361  //cv::imshow("cropped roi", objroi);cv::waitKey(1);
362  }
363  break;
364 
365  case 3: // color input
366  cv::cvtColor(rawroi, objroi, CV_YUV2RGB_YUYV);
367  cv::resize(objroi, objroi, cv::Size(objsz.width_, objsz.height_), 0, 0, cv::INTER_AREA);
368  break;
369 
370  default:
371  LFATAL("Unsupported object detection input depth " << objsz.depth_);
372  }
373 
374  // Launch object recognition on the ROI and get the recognition scores:
375  auto scores = itsObjectRecognition->process(objroi);
376 
377  // Create a string to show all scores:
378  std::ostringstream oss;
379  for (size_t i = 0; i < scores.size(); ++i)
380  oss << itsObjectRecognition->category(i) << ':' << std::fixed << std::setprecision(2) << scores[i] << ' ';
381  itsScoresStr = oss.str();
382 
383  // Check whether the highest score is very high and significantly higher than the second best:
384  float best1 = scores[0], best2 = scores[0]; size_t idx1 = 0, idx2 = 0;
385  for (size_t i = 1; i < scores.size(); ++i)
386  {
387  if (scores[i] > best1) { best2 = best1; idx2 = idx1; best1 = scores[i]; idx1 = i; }
388  else if (scores[i] > best2) { best2 = scores[i]; idx2 = i; }
389  }
390 
391  // Update our display upon each "clean" recognition:
392  if (best1 > 90.0F && best2 < 20.0F)
393  {
394  // Remember this recognized object for future displays:
395  itsLastObjectCateg = itsObjectRecognition->category(idx1);
396  itsLastObject = rawimgcv(cv::Rect(rx - 30, ry - 30, 60, 60)).clone(); // make a deep copy
397 
398  LINFO("Object recognition: best: " << itsLastObjectCateg <<" (" << best1 <<
399  "), second best: " << itsObjectRecognition->category(idx2) << " (" << best2 << ')');
400  }
401  }
402  else
403  {
404  // #################### Face detection:
405 
406  // Prepare a grey ROI from our raw YUYV roi:
407  cv::Mat grayroi; cv::cvtColor(rawroi, grayroi, CV_YUV2GRAY_YUYV);
408  cv::equalizeHist(grayroi, grayroi);
409 
410  // Launch the face detector:
411  std::vector<cv::Rect> faces; std::vector<std::vector<cv::Rect> > eyes;
412  itsFaceDetector->process(grayroi, faces, eyes, false);
413 
414  // Draw the faces and eyes, if any:
415  if (faces.size())
416  {
417  LINFO("detected " << faces.size() << " faces");
418  // Store the attended ROI into our last ROI, fixed size 60x60 for our display:
419  itsLastFace = rawimgcv(cv::Rect(rx - 30, ry - 30, 60, 60)).clone(); // make a deep copy
420  }
421 
422  for (size_t i = 0; i < faces.size(); ++i)
423  {
424  // Draw one face:
425  cv::Rect const & f = faces[i];
426  jevois::rawimage::drawRect(outimg, f.x + rx - roihw, f.y + ry - roihw, f.width, f.height, 0xc0ff);
427 
428  // Draw the corresponding eyes:
429  for (auto const & e : eyes[i])
430  jevois::rawimage::drawRect(outimg, e.x + rx - roihw, e.y + ry - roihw, e.width, e.height, 0x40ff);
431  }
432  }
433 
434  // Let camera know we are done processing the raw YUV input image. NOTE: rawroi is now invalid:
435  inframe.done();
436 
437  // Paste our last attended and recognized face and object (or empty pics):
438  cv::Mat outimgcv(outimg.height, outimg.width, CV_8UC2, outimg.buf->data());
439  itsLastObject.copyTo(outimgcv(cv::Rect(520, 240, 60, 60)));
440  itsLastFace.copyTo(outimgcv(cv::Rect(580, 240, 60, 60)));
441 
442  // Wait until all saliency drawings are complete (since they blank out our object label area):
443  draw_fut.get();
444 
445  // Print all object scores:
446  jevois::rawimage::writeText(outimg, itsScoresStr, 2, 301, txtcol);
447 
448  // Write any positively recognized object category:
449  jevois::rawimage::writeText(outimg, itsLastObjectCateg.c_str(), 517-6*itsLastObjectCateg.length(), 263, txtcol);
450 
451  // FIXME do svm on gist and write results here
452 
453  // Show processing fps:
454  std::string const & fpscpu = itsProcessingTimer.stop();
455  jevois::rawimage::writeText(outimg, fpscpu, 3, 240 - 13, jevois::yuyv::White);
456 
457  // If on a Mac with height = 480, we need to flip horizontally for Photo Booth to work (it will flip again):
458  if (outimg.height == 480) jevois::rawimage::hFlipYUYV(outimg);
459 
460  // Send the output image with our processing results to the host over USB:
461  outframe.send();
462 
463  // Alternate between face and object recognition:
464  doobject = ! doobject;
465  }
466 
467  protected:
468  std::shared_ptr<Saliency> itsSaliency;
469  std::shared_ptr<FaceDetector> itsFaceDetector;
470  std::shared_ptr<ObjectRecognitionBase> itsObjectRecognition;
471  std::shared_ptr<Kalman2D> itsKF;
472  std::shared_ptr<BufferedVideoReader> itsVideo;
473  jevois::RawImage itsBanner;
474  std::string itsScoresStr;
475 };
476 
477 // Allow the module to be loaded as a shared object (.so) file:
478 JEVOIS_REGISTER_MODULE(JeVoisIntro);
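Note on the @videomapping tags in the module documentation above: each one corresponds to an entry in the JeVois videomappings.cfg file, whose fields are USB output format, width, height, and fps, followed by camera sensor format, width, height, and fps, then vendor and module name. As a sketch (assuming the stock configuration layout, where videomappings.cfg lives in the config directory of the MicroSD card), the 640x480 mapping for this module would be declared as:

YUYV 640 480 50.0 YUYV 320 240 50.0 JeVois JeVoisIntro

With such an entry present, a host computer selects and runs JeVoisIntro simply by requesting 640x480 YUYV video at 50 frames/s from the JeVois camera in any USB video capture application.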