JeVois  1.8
JeVois Smart Embedded Machine Vision Toolkit
Module.H
1 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
2 //
3 // JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2016 by Laurent Itti, the University of Southern
4 // California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
5 //
6 // This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
7 // redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
8 // Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
9 // without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
10 // License for more details. You should have received a copy of the GNU General Public License along with this program;
11 // if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
12 //
13 // Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
14 // Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
15 // ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
16 /*! \file */
17 
18 #pragma once
19 
20 #include <memory>
21 #include <jevois/Image/RawImage.H>
22 #include <jevois/Core/VideoBuf.H>
23 #include <jevois/Core/UserInterface.H> // not strictly needed but derived classes will want to use it
24 #include <jevois/Component/Component.H> // for the Component base class of Module below
25 #include <jevois/Types/Enum.H>
26 #include <opencv2/core/core.hpp>
27 #include <ostream>
28 
29 namespace jevois
30 {
31  class VideoInput;
32  class VideoOutput;
33  class Engine;
34 
35  //! Exception-safe wrapper around a raw camera input frame
 36  /*! This wrapper operates much like std::future in standard C++11. Users can get the next image captured by the camera
37  by calling get(), which may block if the capture is not complete yet, or may throw if the capture fails for some
38  reason (e.g., the camera is not streaming). The image size and pixel type are as defined by the current
39  VideoMapping, camera section. In addition, a done() function is provided which users may use as soon as they are
 40  finished with the pixel data in the image obtained via get(), to allow the camera driver to set up the underlying
41  memory buffer again for capture. If done() has not been called by the time the InputFrame is destroyed, it will be
42  called automatically, if get() had been called. It may in some cases improve your frame rate to call done()
43  manually as early as possible instead of letting the InputFrame destructor do it.
44 
45  InputFrame implements a zero-copy, zero-wait access to input video frames, that is:
46 
47  1. the pixel data of the image you obtain via get() is directly the memory-mapped pixel buffer that the silicon
48  hardware on the JeVois chip uses via direct-memory-access (DMA) to stream the pixel data from the camera chip
49  to processor memory;
50  2. as soon as an image is captured by the camera hardware, get() unblocks and returns it (as opposed to having a
51  fixed, regular interval at which images may be available). Camera has several image buffers, allowing one to be
52  captured while another is being handed over for processing via get(). These buffers are recycled, i.e., once
53  done() is called, the underlying buffer is sent back to the camera hardware for future capture.
54 
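 
      A minimal usage sketch (illustrative only, not part of this header; inframe is the InputFrame passed by the
      Engine to a module's process() function):
 
      \code
      jevois::RawImage const & inimg = inframe.get(); // may block until the next frame has been captured
      // ... read and process the pixel data of inimg ...
      inframe.done(); // hand the buffer back to the camera driver as early as possible
      \endcode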
55  \ingroup core */
56  class InputFrame
57  {
58  public:
59  //! Move constructor
60  InputFrame(InputFrame && other) = default;
61 
62  //! Get the next captured camera image
 63  /*! Throws if the camera is not streaming; otherwise blocks until an image is available (has been captured). */
64  RawImage const & get(bool casync = false) const;
65 
66  //! Indicate that user processing is done with the image previously obtained via get()
67  /*! You should call this as soon after get() as possible, once you are finished with the RawImage data so that it
68  can be recycled and sent back to the camera driver for video capture. */
69  void done() const;
70 
71  //! Shorthand to get the input image as a GRAY cv::Mat and release the raw buffer
72  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
73  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
74  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
75  be released by calling done(), so users should not call done() after using this function. This function is
76  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
77  cv::Mat getCvGRAY(bool casync = false) const;
78 
79  //! Shorthand to get the input image as a BGR cv::Mat and release the raw buffer
80  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
81  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
82  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
83  be released by calling done(), so users should not call done() after using this function. This function is
84  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
85  cv::Mat getCvBGR(bool casync = false) const;
86 
87  //! Shorthand to get the input image as a RGB cv::Mat and release the raw buffer
88  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
89  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
90  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
91  be released by calling done(), so users should not call done() after using this function. This function is
92  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
93  cv::Mat getCvRGB(bool casync = false) const;
94 
95  //! Shorthand to get the input image as a RGBA cv::Mat and release the raw buffer
96  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
97  processing. C++ module writers should stick to the get()/done() pair as this provides better fine-grained
98  control. Note that the raw image from the camera will always be copied or converted to cv::Mat and will then
99  be released by calling done(), so users should not call done() after using this function. This function is
100  basically equivalent to calling get(), converting to cv::Mat, and calling done(). */
101  cv::Mat getCvRGBA(bool casync = false) const;
102 
103  //! Destructor, returns the buffers to the driver as needed
104  ~InputFrame();
105 
106  private:
107  InputFrame() = delete;
108  InputFrame(InputFrame const & other) = delete;
109  InputFrame & operator=(InputFrame const & other) = delete;
110 
111  friend class Engine;
112  InputFrame(std::shared_ptr<VideoInput> const & cam, bool turbo); // Only our friends can construct us
113 
114  std::shared_ptr<VideoInput> itsCamera;
115  mutable bool itsDidGet;
116  mutable bool itsDidDone;
117  mutable RawImage itsImage;
118  bool const itsTurbo;
119  };
120 
121  //! Exception-safe wrapper around a raw image to be sent over USB
 122 /*! This wrapper operates much like std::future in standard C++11. Users can get the next memory-allocated but blank
123  image to be sent over USB by calling get(), which may block if all buffers are still being sent over USB by Gadget
 124  and no blank one is available, or may throw if getting that buffer fails for some reason (e.g., USB disconnect,
 125  user just changed video mode in their webcam software or closed it). The allocated image size and pixel type are as
 126  defined by the current VideoMapping, USB section, i.e., it is the USB video mode currently selected by the
 127  user. To save time, image buffers are not zeroed out, so you should not assume that the image is filled with black
 128  pixels; it could contain random pixels or previous output frames. In addition, a send() function is provided
129  which users may use as soon as they are finished with writing the pixel data into the image obtained via get(), to
130  allow the USB driver to send that image to the connected host computer. If send() has not been called by the time
131  the OutputFrame is destroyed, it will be called automatically, if get() had been called.
132 
133  OutputFrame implements a zero-copy, zero-wait access to output video frames, that is:
134 
135  1. the pixel data of the image you obtain via get() is directly the memory-mapped pixel buffer that the silicon
136  hardware on the JeVois chip will use via direct-memory-access (DMA) to stream the data out over the USB link;
137  2. as soon as you call send() that buffer will be queued for sending over USB (as opposed to having a fixed,
138  regular interval at which images may be streamed out). Gadget has several image buffers, allowing one to be
139  streamed out over USB while another is being handed over for filling by your application via get(). These
140  buffers are recycled, i.e., once send() is called, the underlying buffer is streamed over USB and then sent
141  back to the Gadget for future access by your code.
142 
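 
      A minimal usage sketch (illustrative only, not part of this header; outframe is the OutputFrame passed by the
      Engine to a module's process() function):
 
      \code
      jevois::RawImage outimg = outframe.get(); // may block until a blank buffer is available
      // ... fill the pixel array of outimg (remember it is not zeroed out beforehand) ...
      outframe.send(); // queue the buffer for streaming out over USB
      \endcode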
143  \ingroup core */
 144  class OutputFrame
 145  {
146  public:
147  //! Move constructor
148  OutputFrame(OutputFrame && other) = default;
149 
150  //! Get a pre-allocated image so that we can fill the pixel data and later send out over USB using send()
 151  /*! May throw if no buffer is available, i.e., all have been queued for sending to the host but have not yet been
152  sent. Application code must balance exactly one send() for each get(). */
153  RawImage const & get() const;
154 
155  //! Send an image out over USB to the host computer
 156  /*! May throw if the format is incorrect, or std::overflow_error if the previous image has not yet been consumed. */
157  void send() const;
158 
159  //! Shorthand to send a GRAY cv::Mat after converting it to the current output format
160  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
 161  processing. The cv::Mat must have the same dims as the output frame. C++ module writers should stick to the
162  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
163  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
164  users should not call send() after using this function. This function is basically equivalent to calling
165  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
166  the output format is MJPEG and should be between 1 and 100. */
167  void sendCvGRAY(cv::Mat const & img, int quality = 75) const;
168 
169  //! Shorthand to send a BGR cv::Mat after converting it to the current output format
170  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
 171  processing. The cv::Mat must have the same dims as the output frame. C++ module writers should stick to the
172  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
173  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
174  users should not call send() after using this function. This function is basically equivalent to calling
175  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
176  the output format is MJPEG and should be between 1 and 100.*/
177  void sendCvBGR(cv::Mat const & img, int quality = 75) const;
178 
179  //! Shorthand to send a RGB cv::Mat after converting it to the current output format
180  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
 181  processing. The cv::Mat must have the same dims as the output frame. C++ module writers should stick to the
182  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
183  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
184  users should not call send() after using this function. This function is basically equivalent to calling
185  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
186  the output format is MJPEG and should be between 1 and 100.*/
187  void sendCvRGB(cv::Mat const & img, int quality = 75) const;
188 
189  //! Shorthand to send a RGBA cv::Mat after converting it to the current output format
190  /*! This is mostly intended for Python module writers, as they will likely use OpenCV for all their image
 191  processing. The cv::Mat must have the same dims as the output frame. C++ module writers should stick to the
192  get()/send() pair as this provides better fine-grained control. Note that the cv::Mat image will always be
193  copied or converted to the destination RawImage and will then be sent out immediately by calling send(), so
194  users should not call send() after using this function. This function is basically equivalent to calling
195  get(), converting the given cv::Mat to the proper output format, and calling send(). quality is used only if
196  the output format is MJPEG and should be between 1 and 100.*/
197  void sendCvRGBA(cv::Mat const & img, int quality = 75) const;
198 
199  //! Destructor, returns the buffers to the driver as needed
200  ~OutputFrame();
201 
202  private:
203  OutputFrame() = delete;
204  OutputFrame(OutputFrame const & other) = delete;
205  OutputFrame & operator=(OutputFrame const & other) = delete;
206 
207  // Only our friends can construct us:
208  friend class Engine;
209  OutputFrame(std::shared_ptr<VideoOutput> const & gad, RawImage * excimg = nullptr);
210 
211  std::shared_ptr<VideoOutput> itsGadget;
212  mutable bool itsDidGet;
213  mutable bool itsDidSend;
214  mutable RawImage itsImage;
215  jevois::RawImage * itsImagePtrForException;
216  };
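 
  /* Illustrative sketch (not part of this header): for OpenCV-centric modules, the cv::Mat shorthands of
     InputFrame and OutputFrame can replace the explicit get()/done() and get()/send() pairs. BGR is assumed
     here, MyModule is a hypothetical class deriving from Module, the cv::Mat dims must match the output
     VideoMapping, and cv::blur() additionally requires <opencv2/imgproc/imgproc.hpp>:
 
       void MyModule::process(jevois::InputFrame && inframe, jevois::OutputFrame && outframe)
       {
         cv::Mat img = inframe.getCvBGR();   // convert the camera frame and release its raw buffer
         cv::blur(img, img, cv::Size(5, 5)); // any OpenCV processing
         outframe.sendCvBGR(img);            // convert to the current output format and send over USB
       }
 
     Do not also call done() or send() after using these shorthands. */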
217 
218  //! Virtual base class for a vision processing module
219  /*! Module is the base class to implement camera-to-USB frame-by-frame video processing. The Engine instantiates one
220  class derived from Module, according to the current VideoMapping selected by the end user (e.g., current image
 221  resolution, format, and frame rate selected by a webcam viewing program on a host computer). The Module is loaded
 222  as a shared object (.so) file according to the VideoMapping definitions in videomappings.cfg and the current
223  VideoMapping selected by the user.
224 
225  Module derives from Component and as such can contain:
226 
227  - any number of Parameter settings
228 
229  - any arbitrarily complex sub-hierarchy of Component objects to implement various functionality. Parameter
230  settings from all the sub-components are available to the Module and to users connected over Serial ports of the
231  Engine.
232 
233  This allows one to implement complex vision processing pipelines efficiently and with substantial code re-use. For
234  example, one may first want to implement an EdgeDetector or Saliency component, with Parameter settings for
235  various thresholds, features, etc. One can then create any number of top-level objects that derive from Module and
236  that may contain one or more EdgeDetector, Saliency, etc components in their hierarchy of sub-components, with the
237  implementation in the module simply routing images from one Component to another to create a processing pipeline.
238 
239  Classes that derive from Module should implement up to four functions:
240 
241  - process(InputFrame && inframe, OutputFrame && outframe) is called once per iteration of the Engine main loop
242  when the current VideoMapping specifies both a particular Camera resolution and format, and a USB resolution and
243  format. This function should process the received input frame and fill the pixel buffer of the output frame with
244  results. Memory has already been allocated for both the input and output images before process() is
245  called. Because the output image is actually allocated by the USB Gadget driver (and, ultimately, by the Linux
246  kernel), its pixel memory location cannot be moved (hence, do not attempt to copy the output image or replace it
247  by another image, etc; just write pixel data into the output image's pixel array). There is no restriction on
 248  video modes or frame rates, except as supported by the Camera hardware, and as limited by USB bandwidth. For most
249  implementations, matching the input and output frame rate is easiest, and means that each invocation of
250  process() would access and use both of the provided InputFrame and OutputFrame (one-input-to-one-output
251  processing pipeline). But this is not mandatory. For example, a motion flow computation Module for use in a
252  flying drone may have 320x240 YUYV 53.0fps inputs and 100x142 GREY 10.6fps output (since output frame rate is 5x
253  lower than input in this example, the process() function would here get, fill, and send the OutputFrame only
254  once every 5 times it is called; implementation of the process() function should keep track of that, e.g.,
 255  through a member variable that gets incremented each time process() is called; see the code sketch at the end of this comment). In addition to filling the pixel
256  data of the OutputFrame, process() may also send results over the serial ports (e.g., for use by an Arduino
257  connected to the JeVois platform hardware) using sendSerial().
258 
259  - process(InputFrame && inframe) is called once per Camera frame when the current VideoMapping specifies a
260  particular Camera resolution and format, and NONE as USB output format. This function should process the
 261  received input frame and would typically then send results to serial ports (e.g., for use by an Arduino connected
262  to the JeVois platform hardware) using sendSerial(). There is no restriction on video modes or frame rates,
 263  except as supported by the Camera hardware.
264 
265  - parseSerial(std::string const & str, std::shared_ptr<UserInterface> s) allows the Module to support custom user
266  commands. Engine will forward to this function any command received over Serial or other UserInterface that it
 267  does not understand. You should use this for things that go beyond Parameter settings (which are already natively
268  supported by Engine) or built-in commands of Engine (see \ref UserCli). For example, one could implement here a
269  command called "start" to allow users to start some specific thing.
270 
271  - supportedCommands(std::ostream & os) should stream out a human-readable description of any custom commands
272  supported by parseSerial(). These will be shown to users when they type "help" over a Serial port.
273 
274  \note Every module implementation file should contain a call to #JEVOIS_REGISTER_MODULE(MODULENAME) for the
275  module's class. This creates some plain-C entry points that will be used when the module is loaded from a dynamic
276  library (.so) file to instantiate the module. See \ref ModuleTutorial for examples.
277 
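 
  A minimal skeleton (illustrative sketch, not part of this header; MyFlow is a hypothetical module whose USB
  output frame rate is 5x lower than its camera frame rate, as in the drone example above):
 
  \code
  class MyFlow : public jevois::Module
  {
    public:
      MyFlow(std::string const & instance) : jevois::Module(instance), itsCount(0) { }
      virtual ~MyFlow() { }
 
      virtual void process(jevois::InputFrame && inframe, jevois::OutputFrame && outframe) override
      {
        jevois::RawImage const & inimg = inframe.get();
        // ... accumulate results (e.g., motion flow) from the pixels of inimg ...
        inframe.done();
 
        if (++itsCount % 5 == 0) // fill and send an output frame only once every 5 camera frames
        {
          jevois::RawImage outimg = outframe.get();
          // ... write the accumulated results into the pixel array of outimg ...
          outframe.send();
        }
      }
 
    private:
      unsigned int itsCount;
  };
  JEVOIS_REGISTER_MODULE(MyFlow);
  \endcode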
278  \ingroup core */
279  class Module : public Component
280  {
281  public:
282  //! Constructor
283  /*! the instance is a user-defined string that may be used to differentiate between several instances of the
284  same module. */
285  Module(std::string const & instance);
286 
287  //! Virtual destructor for safe inheritance
288  virtual ~Module();
289 
290  //! Processing function, version that receives a frame from camera and sends a frame out over USB
291  /*! This function is called once for each grabbed video frame from the camera, and it should complete within the
292  camera's frame period in order to avoid dropping frames. The InputFrame and OutputFrame objects are simple
293  wrappers to ensure that the low-level video buffers will always be returned to the low-level camera and USB
294  drivers even if the process function throws at any point during the processing. If any error occurs, it is
295  hence ok to throw from within process() at any time, just make sure your locally allocated resources will be
296  freed, which is usually best achieved by using shared_ptr and similar wrappers around them. The Engine (which
 297  calls process() on your module for every frame) will catch any exception and proceed to the next frame.
298 
299  Default implementation in the base class just throws. Derived classes should override it. */
300  virtual void process(InputFrame && inframe, OutputFrame && outframe);
301 
302  //! Processing function, version that receives a frame from camera and does not use USB
303  /*! This function is called once for each grabbed video frame from the camera, and it should complete within the
304  camera's frame period in order to avoid dropping frames. The InputFrame object is a simple wrapper to ensure
305  that the low-level video buffers will always be returned to the low-level camera driver even if the process
306  function throws at any point during the processing. If any error occurs, it is hence ok to throw from within
307  process() at any time, just make sure your locally allocated resources will be freed, which is usually best
308  achieved by using shared_ptr and similar wrappers around them. The Engine (which calls process() on your
 309  module for every frame) will catch any exception and proceed to the next frame.
310 
311  Default implementation in the base class just throws. Derived classes should override it. */
312  virtual void process(InputFrame && inframe);
313 
314  //! Send a string over the 'serout' serial port
315  /*! The default implementation just sends the string to the serial port specified by the 'serout' Parameter in
316  Engine (which could be the hardware serial port, the serial-over-USB port, both, or none; see \ref UserCli for
317  information about \c serout). No need to override in most cases. Typically, you would use this function from
318  within process() to send out some results of your processing.
319 
320  Note that the default 'serout' Parameter setting in Engine is None. This is to allow users to configure
321  parameters, get parameter values, possibly read the help message, etc before the flow of serial outputs from
322  vision processing starts. Once ready to receive serial outputs, one would typically issue a command 'setpar
323  serout Hard' over the JeVois command line to enable serial outputs to the hardware serial port. An Arduino
 324  would issue that setpar command when it is ready to work. See ArduinoTutorial for an example. */
325  virtual void sendSerial(std::string const & str);
326 
327  //! Receive a string from a serial port which contains a user command
328  /*! This function may be called in between calls to process() with any received string from any of the serial
329  ports. Some commands are parsed upstream already (like "help", set param value, set camera control, etc; see
330  the Engine class) and will not be received here. Only the ones not recognized by the Engine will be received
331  (i.e., custom commands specific to your module).
332 
333  The default implementation just throws std::runtime_error("Unsupported command"), but some modules may want to
334  override this function to handle custom commands. If you successfully process the command, just return;
335  otherwise, throw, and if your exception derives from std::exception, the Engine will append its what() to the
336  error message issued to the user. When you support commands here, you should update the implementation of
337  supportedCommands() to provide some description of those commands to the users.
338 
339  The \c s parameter is the serial port that received the command. You can send any results back to that port
340  using writeString() on it. Note that the Engine will automatically add the 'OK' message upon success, so you
341  do not have to send that here. */
342  virtual void parseSerial(std::string const & str, std::shared_ptr<UserInterface> s);
343 
344  //! Human-readable description of this Module's supported custom commands
 345  /*! The format here is free. Just use std::endl to demarcate lines; these will be converted to the appropriate
346  line endings by the serial ports. Default implementation writes "None" to os. */
347  virtual void supportedCommands(std::ostream & os);
348 
349  //! Get frame number from the Engine
350  /*! The Engine maintains a master frame counter that is incremented on each call to process(), whether or not the
351  call succeeds. The counter is not incremented when a module has not been loaded (e.g., failed to load). The
352  counter is reset to zero each time a new module is loaded. */
353  size_t frameNum() const;
354  };
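 
  /* Illustrative sketch (not part of this header): supporting a custom "start" command as described above.
     MyModule and itsRunning are hypothetical; these are out-of-class definitions for a class deriving from
     jevois::Module:
 
       void MyModule::parseSerial(std::string const & str, std::shared_ptr<jevois::UserInterface> s)
       {
         if (str == "start") { itsRunning = true; return; } // Engine will send 'OK' for us on success
         jevois::Module::parseSerial(str, s); // let the base class throw "Unsupported command"
       }
 
       void MyModule::supportedCommands(std::ostream & os)
       {
         os << "start - begin processing" << std::endl;
       }
  */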
355 
356  namespace module
357  {
358  static ParameterCategory const ParamCateg("Module Serial Message Options");
359 
360  //! Enum for Parameter \relates jevois::StdModule
361  JEVOIS_DEFINE_ENUM_CLASS(SerStyle, (Terse) (Normal) (Detail) (Fine) )
362 
363  //! Parameter \relates jevois::StdModule
364  JEVOIS_DECLARE_PARAMETER(serstyle, SerStyle, "Style for standardized serial messages as defined in "
365  "http://jevois.org/doc/UserSerialStyle.html",
366  SerStyle::Terse, SerStyle_Values, ParamCateg);
367 
368  //! Parameter \relates jevois::StdModule
369  JEVOIS_DECLARE_PARAMETER(serprec, unsigned int, "Number of decimal points in standardized serial messages as "
370  "defined in http://jevois.org/doc/UserSerialStyle.html",
371  0U, ParamCateg);
372 
373  //! Enum for Parameter \relates jevois::StdModule
374  JEVOIS_DEFINE_ENUM_CLASS(SerStamp, (None) (Frame) (Time) (FrameTime) (FrameDateTime) )
375 
376  //! Parameter \relates jevois::StdModule
377  JEVOIS_DECLARE_PARAMETER(serstamp, SerStamp, "Prepend standardized serial messages with a frame number, "
378  "time, frame+time, or frame+date+time. See details in "
379  "http://jevois.org/doc/UserSerialStyle.html",
380  SerStamp::None, ParamCateg);
381  }
382 
383  //! Base class for a module that supports standardized serial messages
384  /*! Modules that can output standardized serial messages should derive from StdModule instead of Module. StdModule
385  brings in extra parameters to set serial message style and precision, and extra member functions to assemble,
386  format, and send standardized serial messages. The process(), sendSerial(), parseSerial(), supportedCommands(),
 387  etc. functions of StdModule are directly inherited from Module. See \ref UserSerialStyle for standardized serial
 388  messages. \ingroup core */
389  class StdModule : public Module, public Parameter<module::serprec, module::serstyle, module::serstamp>
390  {
391  public:
392  //! Constructor
393  /*! the instance is a user-defined string that may be used to differentiate between several instances of the
394  same module. */
395  StdModule(std::string const & instance);
396 
397  //! Virtual destructor for safe inheritance
398  virtual ~StdModule();
399 
400  //! Send standardized 1D message for an X image coordinate
401  /*! See \ref UserSerialStyle for more info. Coordinates should be in camera image pixels, this function will
402  convert them to standardized coordinates as per \ref coordhelpers. */
403  void sendSerialImg1Dx(unsigned int camw, float x, float size = 0.0F, std::string const & id = "",
404  std::string const & extra = "");
405 
406  //! Send standardized 1D message for a standardized X coordinate
 407  /*! See \ref UserSerialStyle for more info. The x coordinate should already be standardized as per
 408  \ref coordhelpers; it will be sent as is. */
409  void sendSerialStd1Dx(float x, float size = 0.0F, std::string const & id = "", std::string const & extra = "");
410 
411  //! Send standardized 1D message for an Y image coordinate
412  /*! See \ref UserSerialStyle for more info. Coordinates should be in camera image pixels, this function will
413  convert them to standardized coordinates as per \ref coordhelpers. */
414  void sendSerialImg1Dy(unsigned int camh, float y, float size = 0.0F, std::string const & id = "",
415  std::string const & extra = "");
416 
417  //! Send standardized 1D message for a standardized Y coordinate
 418  /*! See \ref UserSerialStyle for more info. The y coordinate should already be standardized as per
 419  \ref coordhelpers; it will be sent as is. */
420  void sendSerialStd1Dy(float y, float size = 0.0F, std::string const & id = "", std::string const & extra = "");
421 
422  //! Send standardized 2D message for image coordinates
423  /*! Use this function if you only know location and optionally size. Use the other variants if you have the
424  corners. An upright rectangular shape will be assumed here. See \ref UserSerialStyle for more
425  info. Coordinates should be in camera image pixels, this function will convert them to standardized
426  coordinates as per \ref coordhelpers. */
427  void sendSerialImg2D(unsigned int camw, unsigned int camh, float x, float y, float w = 0.0F, float h = 0.0F,
428  std::string const & id = "", std::string const & extra = "");
429 
430  //! Send standardized 2D message for standardized coordinates
431  /*! Use this function if you only know location and optionally size. Use the other variants if you have the
432  corners. An upright rectangular shape will be assumed here. See \ref UserSerialStyle for more
 433  info. Coordinates should already be standardized as per \ref coordhelpers; they will be sent
 434  as is. */
435  void sendSerialStd2D(float x, float y, float w = 0.0F, float h = 0.0F,
436  std::string const & id = "", std::string const & extra = "");
437 
438  //! Send standardized 2D message for polygons in image coordinates
439  /*! Use this function if you have a polygon around your object, for example, one of the contours found with
440  cv::findContours(), or if you have the 4 corners of a rectangular object. See \ref UserSerialStyle for more
441  info. Coordinates should be in camera image pixels, this function will convert them to standardized
442  coordinates as per \ref coordhelpers. For \b Terse serial style, the center of gravity of the points will be
 443  Detail, a rotated bounding rectangle will be computed and output; for \b Fine, all the given points will
444  Detailed, a rotated bounding rectangle will be computed and output; for \b Fine, all the given points will
445  be output. Make sure you try to reduce the number of points so the message is not too long; for example see
446  OpenCV approxPolyDP() or similar. */
447  template <typename T = int>
448  void sendSerialContour2D(unsigned int camw, unsigned int camh, std::vector<cv::Point_<T> > points,
449  std::string const & id = "", std::string const & extra = "");
450 
451  //! Send standardized 3D message
452  /*! Use this function if you only know location and optionally size and an orientation quaternion. Use the other
453  variants if you have a bunch of vertices. See \ref UserSerialStyle for more info. Coordinates should be in
454  millimeters. */
455  void sendSerialStd3D(float x, float y, float z, float w = 0.0F, float h = 0.0F, float d = 0.0F,
456  float q1 = 0.0F, float q2 = 0.0F, float q3 = 0.0f, float q4 = 0.0F,
457  std::string const & id = "", std::string const & extra = "");
458 
459  //! Send standardized 3D message
 460  /*! Use this function if you have a bunch of vertices (e.g., the corners of a 3D object). Use the other
 461  variant if you only know location, size, and an orientation quaternion. See \ref UserSerialStyle for more info. Coordinates should be in
462  millimeters. */
463  void sendSerialStd3D(std::vector<cv::Point3f> points, std::string const & id = "",
464  std::string const & extra = "");
465  protected:
466  //! Get a string with the frame/date/time stamp in it, depending on serstamp parameter
467  std::string getStamp() const;
468  };
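 
  /* Illustrative sketch (not part of this header): a module deriving from jevois::StdModule reports one
     detection per frame as a standardized 2D message. MyDetector is hypothetical and the box values are
     placeholders, in camera image pixels:
 
       void MyDetector::process(jevois::InputFrame && inframe)
       {
         jevois::RawImage const & inimg = inframe.get();
         float x = 160.0F, y = 120.0F, w = 40.0F, h = 30.0F; // ... real detection results would go here ...
         sendSerialImg2D(inimg.width, inimg.height, x, y, w, h, "target");
         inframe.done();
       }
 
     Message style, precision, and stamping are controlled by the serstyle, serprec, and serstamp parameters
     declared above. */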
469 }
470 
471 //! Register a module, allowing it to be dynamically loaded from a .so file
 472 /*! \def JEVOIS_REGISTER_MODULE(MODULENAME)
473  \hideinitializer
474 
475  Every module implementation file should contain a call to JEVOIS_REGISTER_MODULE for the module's class. This creates
476  some plain-C entry points that will be used when the module is loaded from a dynamic library (.so) file to
477  instantiate the module. \relates Module */
478 #define JEVOIS_REGISTER_MODULE(MODULENAME) \
479  extern "C" std::shared_ptr<jevois::Module> MODULENAME##_create(std::string const & instanceid) \
480  { return std::shared_ptr<jevois::Module>(new MODULENAME(instanceid)); } \
481  extern "C" int MODULENAME##_version_major() { return JEVOIS_VERSION_MAJOR; } \
482  extern "C" int MODULENAME##_version_minor() { return JEVOIS_VERSION_MINOR; } \
483 
484 