JeVois  1.8
JeVois Smart Embedded Machine Vision Toolkit
Module.H
1 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
2 //
3 // JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2016 by Laurent Itti, the University of Southern
4 // California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
5 //
6 // This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
7 // redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
8 // Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
9 // without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
10 // License for more details. You should have received a copy of the GNU General Public License along with this program;
11 // if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
12 //
13 // Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
14 // Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
15 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
16 /*! \file */
17 
18 #pragma once
19 
20 #include <memory>
21 #include <jevois/Image/RawImage.H>
22 #include <jevois/Core/VideoBuf.H>
23 #include <jevois/Core/UserInterface.H> // not strictly needed but derived classes will want to use it
24 #include <jevois/Component/Component.H>
25 #include <jevois/Types/Enum.H>
26 #include <opencv2/core/core.hpp>
27 #include <ostream>
28 
29 namespace jevois
30 {
31  class VideoInput;
32  class VideoOutput;
33  class Engine;
34 
35  //! Exception-safe wrapper around a raw camera input frame
36  /*! This wrapper operates much like std::future in standard C++11. Users can get the next image captured by the camera
37  by calling get(), which may block if the capture is not complete yet, or may throw if the capture fails for some
38  reason (e.g., the camera is not streaming). The image size and pixel type are as defined by the current
39  VideoMapping, camera section. In addition, a done() function is provided which users may use as soon as they are
40  finished with the pixel data in the image obtained via get(), to allow the camera driver to set up the underlying
41  memory buffer again for capture. If done() has not been called by the time the InputFrame is destroyed, it will be
42  called automatically, if get() had been called. It may in some cases improve your frame rate to call done()
43  manually as early as possible instead of letting the InputFrame destructor do it.
44 
45  InputFrame implements zero-copy, zero-wait access to input video frames, that is:
46 
47  1. the pixel data of the image you obtain via get() is directly the memory-mapped pixel buffer that the silicon
48  hardware on the JeVois chip uses via direct-memory-access (DMA) to stream the pixel data from the camera chip
49  to processor memory;
50  2. as soon as an image is captured by the camera hardware, get() unblocks and returns it (as opposed to having a
51  fixed, regular interval at which images may be available). Camera has several image buffers, allowing one to be
52  captured while another is being handed over for processing via get(). These buffers are recycled, i.e., once
53  done() is called, the underlying buffer is sent back to the camera hardware for future capture.
54 
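     A minimal usage sketch from within a module's process() function (illustrative only; see Module below):

     \code
     jevois::RawImage const & inimg = inframe.get(); // may block until the next camera frame is captured
     // ... access the pixel data of inimg ...
     inframe.done();                                 // hand the buffer back to the camera driver for re-use
     \endcode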
55  \ingroup core */
56  class InputFrame
57  {
58  public:
59  //! Move constructor
60  InputFrame(InputFrame && other) = default;
61 
62  //! Get the next captured camera image
63  /*! Throws if the camera is not streaming, or blocks until an image is available (has been captured). */
64  RawImage const & get(bool casync = false) const;
65 
66  //! Indicate that user processing is done with the image previously obtained via get()
67  /*! You should call this as soon after get() as possible, once you are finished with the RawImage data so that it
68  can be recycled and sent back to the camera driver for video capture. */
69  void done() const;
70 
71  //! Shorthand to get the input image as a GRAY cv::Mat and release the raw buffer
72  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
73  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
74  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
75  be released by calling done(), so users should not call done() after using this function. This function is
76  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
77  cv::Mat getCvGRAY(bool casync = false) const;
78 
79  //! Shorthand to get the input image as a BGR cv::Mat and release the raw buffer
80  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
81  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
82  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
83  be released by calling done(), so users should not call done() after using this function. This function is
84  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
85  cv::Mat getCvBGR(bool casync = false) const;
86 
87  //! Shorthand to get the input image as an RGB cv::Mat and release the raw buffer
88  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
89  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
90  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
91  be released by calling done(), so users should not call done() after using this function. This function is
92  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
93  cv::Mat getCvRGB(bool casync = false) const;
94 
95  //! Shorthand to get the input image as an RGBA cv::Mat and release the raw buffer
96  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
97  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
98  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
99  be released by calling done(), so users should not call done() after using this function. This function is
100  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
101  cv::Mat getCvRGBA(bool casync = false) const;
102 
103  //! Destructor, returns the buffers to the driver as needed
104  ~InputFrame();
105 
106  private:
107  InputFrame() = delete;
108  InputFrame(InputFrame const & other) = delete;
109  InputFrame & operator=(InputFrame const & other) = delete;
110 
111  friend class Engine;
112  InputFrame(std::shared_ptr<VideoInput> const & cam, bool turbo); // Only our friends can construct us
113 
114  std::shared_ptr<VideoInput> itsCamera;
115  mutable bool itsDidGet;
116  mutable bool itsDidDone;
117  mutable RawImage itsImage;
118  bool const itsTurbo;
119  };
120 
121  //! Exception-safe wrapper around a raw image to be sent over USB
122  /*! This wrapper operates much like std::future in standard C++11. Users can get the next memory-allocated but blank
123  image to be sent over USB by calling get(), which may block if all buffers are still being sent over USB by Gadget
124  and no blank one is available, or may throw if getting that buffer fails for some reason (e.g., USB disconnect,
125  user just changed video mode in their webcam software or closed it). The allocated image size and pixel type are as
126  defined by the current VideoMapping, USB section, i.e., it is the USB video mode currently selected by the
127  user. To save time, image buffers are not zeroed out, so you should not assume that the image is filled with black
128  pixels; it could contain random pixels or previous output frames. In addition, a send() function is provided
129  which users may use as soon as they are finished with writing the pixel data into the image obtained via get(), to
130  allow the USB driver to send that image to the connected host computer. If send() has not been called by the time
131  the OutputFrame is destroyed, it will be called automatically, if get() had been called.
132 
133  OutputFrame implements zero-copy, zero-wait access to output video frames, that is:
134 
135  1. the pixel data of the image you obtain via get() is directly the memory-mapped pixel buffer that the silicon
136  hardware on the JeVois chip will use via direct-memory-access (DMA) to stream the data out over the USB link;
137  2. as soon as you call send() that buffer will be queued for sending over USB (as opposed to having a fixed,
138  regular interval at which images may be streamed out). Gadget has several image buffers, allowing one to be
139  streamed out over USB while another is being handed over for filling by your application via get(). These
140  buffers are recycled, i.e., once send() is called, the underlying buffer is streamed over USB and then sent
141  back to the Gadget for future access by your code.
142 
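     A minimal usage sketch from within a module's process() function (illustrative only; see Module below):

     \code
     jevois::RawImage outimg = outframe.get(); // may block until a blank buffer is available
     // ... write results into the pixel buffer of outimg (do not replace or reallocate it) ...
     outframe.send();                          // queue the filled buffer for streaming out over USB
     \endcode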
143  \ingroup core */
144  class OutputFrame
145  {
146  public:
147  //! Move constructor
148  OutputFrame(OutputFrame && other) = default;
149 
150  //! Get a pre-allocated image so that we can fill the pixel data and later send out over USB using send()
151  /*! May throw if no buffer is available, i.e., all have been queued to send to the host but have not yet been
152  sent. Application code must balance exactly one send() for each get(). */
153  RawImage const & get() const;
154 
155  //! Send an image out over USB to the host computer
156  /*! May throw if the format is incorrect, or throw std::overflow_error if the previous image has not yet been consumed. */
157  void send() const;
158 
159  //! Shorthand to send a cv::Mat after converting / scaling it to the current output format
160  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
161  processing. The cv::Mat will be rescaled to the same dims as the output frame.
162 
163  The pixel format of the given cv::Mat is guessed as follows:
164  - if img.type() == CV_8UC3, assume BGR pixels
165  - if img.type() == CV_8UC1, assume GRAY pixels
166  - if img.type() == CV_8UC4, assume RGBA pixels
167 
168  C++ module writers should stick to the get()/send() pair as this provides better fine-grained control. Note
169  that the cv::Mat image will always be copied or converted to the destination RawImage and will then be sent
170  out immediately by calling send(), so users should not call send() after using this function. This function is
171  basically equivalent to calling get(), converting the given cv::Mat to the proper output format, and calling
172  send(). quality is used only if the output format is MJPEG and should be between 1 and 100.*/
173  void sendCv(cv::Mat const & img, int quality = 75) const;
174 
175  //! Shorthand to send a GRAY cv::Mat after converting it to the current output format
176  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
177  processing. The cv::Mat must have same dims as the output frame. C++ module writers should stick to the
178  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
179  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
180  users should not call send() after using this function. This function is basically equivalent to calling
181  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
182  the output format is MJPEG and should be between 1 and 100. */
183  void sendCvGRAY(cv::Mat const & img, int quality = 75) const;
184 
185  //! Shorthand to send a BGR cv::Mat after converting it to the current output format
186  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
187  processing. The cv::Mat must have same dims as the output frame. C++ module writers should stick to the
188  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
189  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
190  users should not call send() after using this function. This function is basically equivalent to calling
191  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
192  the output format is MJPEG and should be between 1 and 100.*/
193  void sendCvBGR(cv::Mat const & img, int quality = 75) const;
194 
195  //! Shorthand to send an RGB cv::Mat after converting it to the current output format
196  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
197  processing. The cv::Mat must have same dims as the output frame. C++ module writers should stick to the
198  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
199  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
200  users should not call send() after using this function. This function is basically equivalent to calling
201  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
202  the output format is MJPEG and should be between 1 and 100.*/
203  void sendCvRGB(cv::Mat const & img, int quality = 75) const;
204 
205  //! Shorthand to send an RGBA cv::Mat after converting it to the current output format
206  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
207  processing. The cv::Mat must have same dims as the output frame. C++ module writers should stick to the
208  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
209  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
210  users should not call send() after using this function. This function is basically equivalent to calling
211  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
212  the output format is MJPEG and should be between 1 and 100.*/
213  void sendCvRGBA(cv::Mat const & img, int quality = 75) const;
214 
215  //! Shorthand to send a GRAY cv::Mat after rescaling and converting it to the current output format
216  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
217  processing. The cv::Mat will be rescaled to the same dims as the output frame. C++ module writers should stick
218  to the get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always
219  be copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
220  users should not call send() after using this function. This function is basically equivalent to calling
221  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
222  the output format is MJPEG and should be between 1 and 100. */
223  void sendScaledCvGRAY(cv::Mat const & img, int quality = 75) const;
224 
225  //! Shorthand to send a BGR cv::Mat after rescaling and converting it to the current output format
226  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
227  processing. The cv::Mat will be rescaled to the same dims as the output frame. C++ module writers should stick
228  to the get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always
229  be copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
230  users should not call send() after using this function. This function is basically equivalent to calling
231  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
232  the output format is MJPEG and should be between 1 and 100.*/
233  void sendScaledCvBGR(cv::Mat const & img, int quality = 75) const;
234 
235  //! Shorthand to send an RGB cv::Mat after rescaling and converting it to the current output format
236  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
237  processing. The cv::Mat will be rescaled to the same dims as the output frame. C++ module writers should stick
238  to the get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always
239  be copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
240  users should not call send() after using this function. This function is basically equivalent to calling
241  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
242  the output format is MJPEG and should be between 1 and 100.*/
243  void sendScaledCvRGB(cv::Mat const & img, int quality = 75) const;
244 
245  //! Shorthand to send an RGBA cv::Mat after rescaling and converting it to the current output format
246  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
247  processing. The cv::Mat will be rescaled to the same dims as the output frame. C++ module writers should stick
248  to the get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always
249  be copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
250  users should not call send() after using this function. This function is basically equivalent to calling
251  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
252  the output format is MJPEG and should be between 1 and 100.*/
253  void sendScaledCvRGBA(cv::Mat const & img, int quality = 75) const;
254 
255  //! Destructor, returns the buffers to the driver as needed
256  ~OutputFrame();
257 
258  private:
259  OutputFrame() = delete;
260  OutputFrame(OutputFrame const & other) = delete;
261  OutputFrame & operator=(OutputFrame const & other) = delete;
262 
263  // Only our friends can construct us:
264  friend class Engine;
265  OutputFrame(std::shared_ptr<VideoOutput> const & gad, RawImage * excimg = nullptr);
266 
267  std::shared_ptr<VideoOutput> itsGadget;
268  mutable bool itsDidGet;
269  mutable bool itsDidSend;
270  mutable RawImage itsImage;
271  jevois::RawImage * itsImagePtrForException;
272  };
273 
274  //! Virtual base class for a vision processing module
275  /*! Module is the base class to implement camera-to-USB frame-by-frame video processing. The Engine instantiates one
276  class derived from Module, according to the current VideoMapping selected by the end user (e.g., current image
277  resolution, format, and frame rate selected by a webcam viewing program on a host computer). The Module is loaded
278  as a shared object (.so) file according to the VideoMapping definitions in videomappings.cfg and the current
279  VideoMapping selected by the user.
280 
281  Module derives from Component and as such can contain:
282 
283  - any number of Parameter settings
284 
285  - any arbitrarily complex sub-hierarchy of Component objects to implement various functionality. Parameter
286  settings from all the sub-components are available to the Module and to users connected over Serial ports of the
287  Engine.
288 
289  This allows one to implement complex vision processing pipelines efficiently and with substantial code re-use. For
290  example, one may first want to implement an EdgeDetector or Saliency component, with Parameter settings for
291  various thresholds, features, etc. One can then create any number of top-level objects that derive from Module and
292  that may contain one or more EdgeDetector, Saliency, etc components in their hierarchy of sub-components, with the
293  implementation in the module simply routing images from one Component to another to create a processing pipeline.
294 
295  Classes that derive from Module should implement up to four functions:
296 
297  - process(InputFrame && inframe, OutputFrame && outframe) is called once per iteration of the Engine main loop
298  when the current VideoMapping specifies both a particular Camera resolution and format, and a USB resolution and
299  format. This function should process the received input frame and fill the pixel buffer of the output frame with
300  results. Memory has already been allocated for both the input and output images before process() is
301  called. Because the output image is actually allocated by the USB Gadget driver (and, ultimately, by the Linux
302  kernel), its pixel memory location cannot be moved (hence, do not attempt to copy the output image or replace it
303  by another image, etc; just write pixel data into the output image's pixel array). There is no restriction on
304  video modes or frame rates, except as supported by the Camera hardware, and as limited by USB bandwidth. For most
305  implementations, matching the input and output frame rate is easiest, and means that each invocation of
306  process() would access and use both of the provided InputFrame and OutputFrame (one-input-to-one-output
307  processing pipeline). But this is not mandatory. For example, a motion flow computation Module for use in a
308  flying drone may have 320x240 YUYV 53.0fps inputs and 100x142 GREY 10.6fps output (since output frame rate is 5x
309  lower than input in this example, the process() function would here get, fill, and send the OutputFrame only
310  once every 5 times it is called; implementation of the process() function should keep track of that, e.g.,
311  through a member variable that gets incremented each time process() is called). In addition to filling the pixel
312  data of the OutputFrame, process() may also send results over the serial ports (e.g., for use by an Arduino
313  connected to the JeVois platform hardware) using sendSerial().
314 
315  - process(InputFrame && inframe) is called once per Camera frame when the current VideoMapping specifies a
316  particular Camera resolution and format, and NONE as USB output format. This function should process the
317  received input frame and would typically then send results to serial ports (e.g., for use by an Arduino connected
318  to the JeVois platform hardware) using sendSerial(). There is no restriction on video modes or frame rates,
319  except as supported by the Camera hardware.
320 
321  - parseSerial(std::string const & str, std::shared_ptr<UserInterface> s) allows the Module to support custom user
322  commands. Engine will forward to this function any command received over Serial or other UserInterface that it
323  does not understand. You should use this for things that go beyond Parameter settings (which are already natively
324  supported by Engine) or built-in commands of Engine (see \ref UserCli). For example, one could implement here a
325  command called "start" to allow users to start some specific thing.
326 
327  - supportedCommands(std::ostream & os) should stream out a human-readable description of any custom commands
328  supported by parseSerial(). These will be shown to users when they type "help" over a Serial port.
329 
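     As an illustration of the frame-rate decimation mentioned for process() above, a module could keep a simple
     counter (a sketch only; itsFrameCount is an assumed member variable, and the 5x ratio matches the example above):

     \code
     void process(jevois::InputFrame && inframe, jevois::OutputFrame && outframe) override
     {
       jevois::RawImage const & inimg = inframe.get();
       // ... always use the input frame, e.g., to accumulate motion flow ...
       inframe.done();

       if ((itsFrameCount++ % 5) == 0) // fill and send an output frame only once every 5 input frames
       {
         jevois::RawImage outimg = outframe.get();
         // ... fill outimg with the latest results ...
         outframe.send();
       }
     }
     \endcode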
330  \note Every module implementation file should contain a call to #JEVOIS_REGISTER_MODULE(MODULENAME) for the
331  module's class. This creates some plain-C entry points that will be used when the module is loaded from a dynamic
332  library (.so) file to instantiate the module. See \ref ModuleTutorial for examples.
333 
334  \ingroup core */
335  class Module : public Component
336  {
337  public:
338  //! Constructor
339  /*! the instance is a user-defined string that may be used to differentiate between several instances of the
340  same module. */
341  Module(std::string const & instance);
342 
343  //! Virtual destructor for safe inheritance
344  virtual ~Module();
345 
346  //! Processing function, version that receives a frame from camera and sends a frame out over USB
347  /*! This function is called once for each grabbed video frame from the camera, and it should complete within the
348  camera's frame period in order to avoid dropping frames. The InputFrame and OutputFrame objects are simple
349  wrappers to ensure that the low-level video buffers will always be returned to the low-level camera and USB
350  drivers even if the process function throws at any point during the processing. If any error occurs, it is
351  hence ok to throw from within process() at any time, just make sure your locally allocated resources will be
352  freed, which is usually best achieved by using shared_ptr and similar wrappers around them. The Engine (which
353  calls process() on your module for every frame) will catch any exception and proceed to the next frame.
354 
355  Default implementation in the base class just throws. Derived classes should override it. */
356  virtual void process(InputFrame && inframe, OutputFrame && outframe);
357 
358  //! Processing function, version that receives a frame from camera and does not use USB
359  /*! This function is called once for each grabbed video frame from the camera, and it should complete within the
360  camera's frame period in order to avoid dropping frames. The InputFrame object is a simple wrapper to ensure
361  that the low-level video buffers will always be returned to the low-level camera driver even if the process
362  function throws at any point during the processing. If any error occurs, it is hence ok to throw from within
363  process() at any time, just make sure your locally allocated resources will be freed, which is usually best
364  achieved by using shared_ptr and similar wrappers around them. The Engine (which calls process() on your
365  module for every frame) will catch any exception and proceed to the next frame.
366 
367  Default implementation in the base class just throws. Derived classes should override it. */
368  virtual void process(InputFrame && inframe);
369 
370  //! Send a string over the 'serout' serial port
371  /*! The default implementation just sends the string to the serial port specified by the 'serout' Parameter in
372  Engine (which could be the hardware serial port, the serial-over-USB port, both, or none; see \ref UserCli for
373  information about \c serout). No need to override in most cases. Typically, you would use this function from
374  within process() to send out some results of your processing.
375 
376  Note that the default 'serout' Parameter setting in Engine is None. This is to allow users to configure
377  parameters, get parameter values, possibly read the help message, etc before the flow of serial outputs from
378  vision processing starts. Once ready to receive serial outputs, one would typically issue a command 'setpar
379  serout Hard' over the JeVois command line to enable serial outputs to the hardware serial port. An Arduino
380  would issue that setpar command when it is ready to work. See ArduinoTutorial for an example. */
381  virtual void sendSerial(std::string const & str);
382 
383  //! Receive a string from a serial port which contains a user command
384  /*! This function may be called in between calls to process() with any received string from any of the serial
385  ports. Some commands are parsed upstream already (like "help", set param value, set camera control, etc; see
386  the Engine class) and will not be received here. Only the ones not recognized by the Engine will be received
387  (i.e., custom commands specific to your module).
388 
389  The default implementation just throws std::runtime_error("Unsupported command"), but some modules may want to
390  override this function to handle custom commands. If you successfully process the command, just return;
391  otherwise, throw, and if your exception derives from std::exception, the Engine will append its what() to the
392  error message issued to the user. When you support commands here, you should update the implementation of
393  supportedCommands() to provide some description of those commands to the users.
394 
395  The \c s parameter is the serial port that received the command. You can send any results back to that port
396  using writeString() on it. Note that the Engine will automatically add the 'OK' message upon success, so you
397  do not have to send that here. */
398  virtual void parseSerial(std::string const & str, std::shared_ptr<UserInterface> s);
399 
400  //! Human-readable description of this Module's supported custom commands
401  /*! The format here is free. Just use std::endl to demarcate lines; these will be converted to the appropriate
402  line endings by the serial ports. Default implementation writes "None" to os. */
403  virtual void supportedCommands(std::ostream & os);
404 
405  //! Get frame number from the Engine
406  /*! The Engine maintains a master frame counter that is incremented on each call to process(), whether or not the
407  call succeeds. The counter is not incremented when a module has not been loaded (e.g., failed to load). The
408  counter is reset to zero each time a new module is loaded. */
409  size_t frameNum() const;
410  };
411 
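 To make the above concrete, here is a hypothetical minimal module (a sketch only: the EdgeDemo name, its custom command, and the assumption that camera and USB image sizes match are illustrative and not part of the JeVois distribution):

 #include <jevois/Core/Module.H>
 #include <opencv2/imgproc/imgproc.hpp>
 #include <stdexcept>

 class EdgeDemo : public jevois::Module
 {
   public:
     //! Constructor
     EdgeDemo(std::string const & instance) : jevois::Module(instance) { }

     //! Virtual destructor for safe inheritance
     virtual ~EdgeDemo() { }

     //! Camera-to-USB processing; assumes a videomapping with identical camera and USB image sizes
     virtual void process(jevois::InputFrame && inframe, jevois::OutputFrame && outframe) override
     {
       cv::Mat gray = inframe.getCvGRAY();                   // grab + convert; the raw camera buffer is released
       cv::Mat edges; cv::Canny(gray, edges, 50.0, 150.0);   // any OpenCV processing
       outframe.sendCvGRAY(edges);                           // convert to the current USB output format and send
       if (itsAnnounce) sendSerial("edges computed");        // optional message over the 'serout' serial port
     }

     //! Custom command "announce on|off" toggles the serial message sent by process()
     virtual void parseSerial(std::string const & str, std::shared_ptr<jevois::UserInterface> s) override
     {
       if (str == "announce on") { itsAnnounce = true; return; }
       if (str == "announce off") { itsAnnounce = false; return; }
       throw std::runtime_error("Unsupported command");
     }

     //! Describe the custom command to users who type "help"
     virtual void supportedCommands(std::ostream & os) override
     { os << "announce on|off - toggle a serial message on each processed frame" << std::endl; }

   private:
     bool itsAnnounce = false;
 };

 // Create the plain-C entry points used by the Engine to load the module from its .so file
 JEVOIS_REGISTER_MODULE(EdgeDemo);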
412  namespace module
413  {
414  static ParameterCategory const ParamCateg("Module Serial Message Options");
415 
416  //! Enum for Parameter \relates jevois::StdModule
417  JEVOIS_DEFINE_ENUM_CLASS(SerStyle, (Terse) (Normal) (Detail) (Fine) )
418 
419  //! Parameter \relates jevois::StdModule
420  JEVOIS_DECLARE_PARAMETER(serstyle, SerStyle, "Style for standardized serial messages as defined in "
421  "http://jevois.org/doc/UserSerialStyle.html",
422  SerStyle::Terse, SerStyle_Values, ParamCateg);
423 
424  //! Parameter \relates jevois::StdModule
425  JEVOIS_DECLARE_PARAMETER(serprec, unsigned int, "Number of decimal points in standardized serial messages as "
426  "defined in http://jevois.org/doc/UserSerialStyle.html",
427  0U, jevois::Range<unsigned int>(0U, 10U), ParamCateg);
428 
429  //! Enum for Parameter \relates jevois::StdModule
430  JEVOIS_DEFINE_ENUM_CLASS(SerStamp, (None) (Frame) (Time) (FrameTime) (FrameDateTime) )
431 
432  //! Parameter \relates jevois::StdModule
433  JEVOIS_DECLARE_PARAMETER(serstamp, SerStamp, "Prepend standardized serial messages with a frame number, "
434  "time, frame+time, or frame+date+time. See details in "
435  "http://jevois.org/doc/UserSerialStyle.html",
436  SerStamp::None, SerStamp_Values, ParamCateg);
437  }
438 
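 These parameters can be changed at runtime like any other JeVois parameter, for example over the command-line interface (a usage sketch; the values shown are among those declared above):

 setpar serstyle Detail
 setpar serstamp FrameTime
 setpar serprec 2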
439  //! Base class for a module that supports standardized serial messages
440  /*! Modules that can output standardized serial messages should derive from StdModule instead of Module. StdModule
441  brings in extra parameters to set serial message style and precision, and extra member functions to assemble,
442  format, and send standardized serial messages. The process(), sendSerial(), parseSerial(), supportedCommands(),
443  etc. functions of StdModule are directly inherited from Module. See \ref UserSerialStyle for standardized serial
444  messages. \ingroup core */
445  class StdModule : public Module, public Parameter<module::serprec, module::serstyle, module::serstamp>
446  {
447  public:
448  //! Constructor
449  /*! the instance is a user-defined string that may be used to differentiate between several instances of the
450  same module. */
451  StdModule(std::string const & instance);
452 
453  //! Virtual destructor for safe inheritance
454  virtual ~StdModule();
455 
456  //! Send standardized 1D message for an X image coordinate
457  /*! See \ref UserSerialStyle for more info. Coordinates should be in camera image pixels, this function will
458  convert them to standardized coordinates as per \ref coordhelpers. */
459  void sendSerialImg1Dx(unsigned int camw, float x, float size = 0.0F, std::string const & id = "",
460  std::string const & extra = "");
461 
462  //! Send standardized 1D message for a standardized X coordinate
463  /*! See \ref UserSerialStyle for more info. The x coordinate should already be standardized as per
464  \ref coordhelpers; this function does not convert from image pixels. */
465  void sendSerialStd1Dx(float x, float size = 0.0F, std::string const & id = "", std::string const & extra = "");
466 
467  //! Send standardized 1D message for an Y image coordinate
468  /*! See \ref UserSerialStyle for more info. Coordinates should be in camera image pixels, this function will
469  convert them to standardized coordinates as per \ref coordhelpers. */
470  void sendSerialImg1Dy(unsigned int camh, float y, float size = 0.0F, std::string const & id = "",
471  std::string const & extra = "");
472 
473  //! Send standardized 1D message for a standardized Y coordinate
474  /*! See \ref UserSerialStyle for more info. The y coordinate should already be standardized as per
475  \ref coordhelpers; this function does not convert from image pixels. */
476  void sendSerialStd1Dy(float y, float size = 0.0F, std::string const & id = "", std::string const & extra = "");
477 
478  //! Send standardized 2D message for image coordinates
479  /*! Use this function if you only know location and optionally size. Use the other variants if you have the
480  corners. An upright rectangular shape will be assumed here. See \ref UserSerialStyle for more
481  info. Coordinates should be in camera image pixels, this function will convert them to standardized
482  coordinates as per \ref coordhelpers. */
483  void sendSerialImg2D(unsigned int camw, unsigned int camh, float x, float y, float w = 0.0F, float h = 0.0F,
484  std::string const & id = "", std::string const & extra = "");
485 
486  //! Send standardized 2D message for standardized coordinates
487  /*! Use this function if you only know location and optionally size. Use the other variants if you have the
488  corners. An upright rectangular shape will be assumed here. See \ref UserSerialStyle for more
489  info. Coordinates should already be standardized as per \ref coordhelpers; this function does not
490  convert from image pixels. */
491  void sendSerialStd2D(float x, float y, float w = 0.0F, float h = 0.0F,
492  std::string const & id = "", std::string const & extra = "");
493 
494  //! Send standardized 2D message for polygons in image coordinates
495  /*! Use this function if you have a polygon around your object, for example, one of the contours found with
496  cv::findContours(), or if you have the 4 corners of a rectangular object. See \ref UserSerialStyle for more
497  info. Coordinates should be in camera image pixels, this function will convert them to standardized
498  coordinates as per \ref coordhelpers. For \b Terse serial style, the center of gravity of the points will be
499  computed and output; for \b Normal, an upright bounding rectangle will be computed and output; for \b
500  Detail, a rotated bounding rectangle will be computed and output; for \b Fine, all the given points will
501  be output. Make sure you try to reduce the number of points so the message is not too long; for example see
502  OpenCV approxPolyDP() or similar. */
503  template <typename T = int>
504  void sendSerialContour2D(unsigned int camw, unsigned int camh, std::vector<cv::Point_<T> > points,
505  std::string const & id = "", std::string const & extra = "");
506 
507  //! Send standardized 3D message
508  /*! Use this function if you only know location and optionally size and an orientation quaternion. Use the other
509  variants if you have a bunch of vertices. See \ref UserSerialStyle for more info. Coordinates should be in
510  millimeters. */
511  void sendSerialStd3D(float x, float y, float z, float w = 0.0F, float h = 0.0F, float d = 0.0F,
512  float q1 = 0.0F, float q2 = 0.0F, float q3 = 0.0F, float q4 = 0.0F,
513  std::string const & id = "", std::string const & extra = "");
514 
515  //! Send standardized 3D message
516  /*! Use this function if you only know location and optionally size and an orientation quaternion. Use the other
517  variants if you have a bunch of vertices. See \ref UserSerialStyle for more info. Coordinates should be in
518  millimeters. */
519  void sendSerialStd3D(std::vector<cv::Point3f> points, std::string const & id = "",
520  std::string const & extra = "");
521 
522  protected:
523  //! Get a string with the frame/date/time stamp in it, depending on serstamp parameter
524  std::string getStamp() const;
525  };
526 }
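A usage sketch for the standardized message helpers above (illustrative only; inimg is assumed to be the RawImage obtained from inframe.get(), and r a cv::Rect detection in camera image coordinates):

 // from within a StdModule-derived module's process():
 sendSerialImg2D(inimg.width, inimg.height,                    // camera image dimensions
                 r.x + r.width * 0.5F, r.y + r.height * 0.5F,  // box center, in image pixels
                 r.width, r.height, "target");                 // box size and an identifier string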
527 
528 //! Register a module, allowing it to be dynamically loaded from a .so file
529 /*! \def JEVOIS_REGISTER_MODULE(MODULENAME)
530  \hideinitializer
531 
532  Every module implementation file should contain a call to JEVOIS_REGISTER_MODULE for the module's class. This creates
533  some plain-C entry points that will be used when the module is loaded from a dynamic library (.so) file to
534  instantiate the module. \relates Module */
535 #define JEVOIS_REGISTER_MODULE(MODULENAME) \
536  extern "C" std::shared_ptr<jevois::Module> MODULENAME##_create(std::string const & instanceid) \
537  { return std::shared_ptr<jevois::Module>(new MODULENAME(instanceid)); } \
538  extern "C" int MODULENAME##_version_major() { return JEVOIS_VERSION_MAJOR; } \
539  extern "C" int MODULENAME##_version_minor() { return JEVOIS_VERSION_MINOR; } \
540 
541 
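A module only runs when a videomapping selects it. Each videomappings.cfg entry lists the USB output format, size, and fps, then the camera format, size, and fps, then the vendor and module name. An illustrative line for the hypothetical EdgeDemo module sketched above (check the videomappings documentation for the exact syntax):

 GREY 640 480 30.0 YUYV 640 480 30.0 JeVois EdgeDemo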