JeVois 1.20
JeVois Smart Embedded Machine Vision Toolkit
PyModule.py
import pyjevois
if pyjevois.pro: import libjevoispro as jevois
else: import libjevois as jevois
import cv2
import numpy as np

## __SYNOPSIS__
#
# This module is here for you to experiment with Python OpenCV on JeVois and JeVois-Pro.
#
# By default, we get the next video frame from the camera as an OpenCV BGR (color) image named 'inimg'.
# We then apply some image processing to it to create an overlay in Pro/GUI mode, an output BGR image named
# 'outimg' in Legacy mode, or no image in Headless mode.
#
# - In Legacy mode (JeVois-A33 or JeVois-Pro acts as a webcam connected to a host): process() is called on every
#   frame. A video frame from the camera sensor is given in 'inframe' and the process() function creates an output
#   frame that is sent over USB to the host computer (JeVois-A33) or displayed (JeVois-Pro).
#
# - In Pro/GUI mode (JeVois-Pro is connected to an HDMI display): processGUI() is called on every frame. A video
#   frame from the camera is given, as well as a GUI helper that can be used to create overlay drawings.
#
# - In Headless mode (JeVois-A33 or JeVois-Pro only produces text messages over serial port, no video output):
#   processNoUSB() is called on every frame. A video frame from the camera is given, and the module sends messages
#   over serial to report what it sees.
#
# Which mode is activated depends on which VideoMapping was selected by the user. The VideoMapping specifies the
# camera format and framerate, and what kind of mode and output format to use.
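#
# As a hypothetical illustration only (the real entry comes from the __VIDEOMAPPING__ value substituted when the
# module is installed; 'VendorName' and the exact resolutions/framerates below are made up), a Legacy-mode mapping
# that streams 640x480 YUYV over USB at 30fps, captured as 640x480 YUYV from the sensor, could look like:
#
#   YUYV 640 480 30.0 YUYV 640 480 30.0 VendorName __MODULE__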
#
# See http://jevois.org/tutorials for tutorials on getting started with programming JeVois in Python without having
# to install any development software on your host computer.
#
# @author __AUTHOR__
#
# @videomapping __VIDEOMAPPING__
# @email __EMAIL__
# @address fixme
# @copyright Copyright (C) 2021 by __AUTHOR__
# @mainurl __WEBSITE__
# @supporturl
# @otherurl
# @license __LICENSE__
# @distribution Unrestricted
# @restrictions None
# @ingroup modules
class __MODULE__:
    # ###################################################################################################
    ## Constructor
    def __init__(self):
        # Instantiate a JeVois Timer to measure our processing framerate:
        self.timer = jevois.Timer("timer", 100, jevois.LOG_INFO)

        # Create an ArUco marker detector:
        self.dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
        self.params = cv2.aruco.DetectorParameters_create()
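        # Note: the two calls above use the pre-4.7 OpenCV ArUco API. If the OpenCV version shipped with your
        # JeVois software is 4.7 or newer, a rough equivalent (a sketch only, not used in this module) would be:
        # self.dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        # self.params = cv2.aruco.DetectorParameters()
        # self.detector = cv2.aruco.ArucoDetector(self.dict, self.params)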

    # ###################################################################################################
    ## Process function with USB output (Legacy mode):
    def process(self, inframe, outframe):
        # Get the next camera image for processing (may block until it is captured) and here convert it to OpenCV
        # BGR by default. If you need a grayscale image instead, just use getCvGRAYp() instead of getCvBGRp(). Also
        # supported are getCvRGBp() and getCvRGBAp():
        inimg = inframe.getCvBGRp()
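        # For example (an alternative, not used here), to grab the frame as greyscale instead:
        # inimg = inframe.getCvGRAYp()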

        # Start measuring image processing time (NOTE: does not account for input conversion time):
        self.timer.start()

        # Detect edges using the Laplacian algorithm from OpenCV:
        #
        # Replace the line below by your own code! See for example
        # - http://docs.opencv.org/trunk/d4/d13/tutorial_py_filtering.html
        # - http://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html
        # - http://docs.opencv.org/trunk/d5/d0f/tutorial_py_gradients.html
        # - http://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html
        #
        # and so on. When they do "img = cv2.imread('name.jpg', 0)" in these tutorials, the last 0 means they want a
        # gray image, so you should use getCvGRAYp() above in these cases. When they do not specify a final 0 in
        # imread() then usually they assume color and you should use getCvBGRp() here.
        #
        # The simplest you could try is:
        #   outimg = inimg
        # which will make a simple copy of the input image to output.
        outimg = cv2.Laplacian(inimg, -1, ksize=5, scale=0.25, delta=127)
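        # For example (an alternative you could try instead, following the filtering tutorial linked above; the
        # 7x7 kernel size here is just an arbitrary illustration):
        # outimg = cv2.GaussianBlur(inimg, (7, 7), 0)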

        # Also detect and draw ArUco markers:
        grayimg = cv2.cvtColor(inimg, cv2.COLOR_BGR2GRAY)
        corners, ids, rej = cv2.aruco.detectMarkers(grayimg, self.dict, parameters = self.params)
        outimg = cv2.aruco.drawDetectedMarkers(outimg, corners, ids)

        # Write a title:
        cv2.putText(outimg, "JeVois Python Sandbox", (3, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255))

        # Write frames/s info from our timer into the edge map (NOTE: does not account for output conversion time):
        fps = self.timer.stop()
        outheight = outimg.shape[0]
        outwidth = outimg.shape[1]
        cv2.putText(outimg, fps, (3, outheight - 6), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255))

        # Convert our OpenCV output image to video output format and send to host over USB:
        outframe.sendCv(outimg)
    # ###################################################################################################
    ## Process function with GUI output (JeVois-Pro mode):
    def processGUI(self, inframe, helper):
        # Start a new display frame, get its size, and also whether mouse/keyboard are idle:
        idle, winw, winh = helper.startFrame()

        # Draw the full-resolution color input frame from the camera. It will automatically be centered and scaled
        # to fill the display without stretching it. The position and size are returned, but often they are not
        # needed, as the JeVois drawing functions also automatically scale and center. So, when drawing overlays,
        # just use image coordinates and JeVois will convert them to display coordinates automatically:
        x, y, iw, ih = helper.drawInputFrame("c", inframe, False, False)
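        # For example (illustration only, not needed for this module), text drawn at image coordinates (10, 10)
        # would land near the top-left corner of the camera image, wherever that image ends up on the display:
        # helper.drawText(10, 10, "hello", 0xffffffff)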

        # Get the next camera image for processing (may block until it is captured), as greyscale:
        inimg = inframe.getCvGRAYp()

        # Start measuring image processing time (NOTE: does not account for input conversion time):
        self.timer.start()

        # Detect edges using the Canny algorithm from OpenCV:
        #
        # Replace the line below by your own code! See for example
        # - https://docs.opencv.org/master/da/d22/tutorial_py_canny.html
        # - http://docs.opencv.org/trunk/d4/d13/tutorial_py_filtering.html
        # - http://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html
        # - http://docs.opencv.org/trunk/d5/d0f/tutorial_py_gradients.html
        # - http://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html
        #
        # and so on. When they do "img = cv2.imread('name.jpg', 0)" in these tutorials, the last 0 means they want a
        # gray image, so you should use getCvGRAYp() above in these cases. When they do not specify a final 0 in
        # imread() then usually they assume BGR color and you should use getCvBGRp() here.
        edges = cv2.Canny(inimg, 100, 200)

        # 'edges' is a greyscale image. To display it as an overlay, we convert it to RGBA, with zero alpha
        # (transparent) in the background and full alpha on the edges. We just duplicate our edge map 4 times, once
        # per channel:
        mask = cv2.merge([edges, edges, edges, edges])

        # Draw the edges as an overlay on top of the full-resolution camera input frame. It will automatically be
        # re-scaled and centered to match the last-drawn full-resolution frame.
        # Flags here are: rgb = True, noalias = False, isoverlay = True
        helper.drawImage("edges", mask, True, False, True)

        # Examples of some GUI overlay drawings. Colors are given in hex as 0xAABBGGRR, where AA is the alpha
        # (typically keep it at ff). The last 'True' parameter draws a semi-transparent filled shape.
        helper.drawCircle(50, 50, 20, 0xff80ffff, True)
        helper.drawRect(100, 100, 300, 200, 0xffff80ff, True)

        # Also detect and draw ArUco markers:
        corners, ids, rej = cv2.aruco.detectMarkers(inimg, self.dict, parameters = self.params)
        if len(corners) > 0:
            for (marker, id) in zip(corners, ids):
                helper.drawPoly(marker, 0xffff0000, True)
                helper.drawText(float(marker[0][0][0]), float(marker[0][0][1]), "id={}".format(id), 0xffff0000)

        # Write frames/s info from our timer:
        fps = self.timer.stop()
        helper.iinfo(inframe, fps, winw, winh)

        # End of frame:
        helper.endFrame()
    # ###################################################################################################
    ## Process function with no USB output (Headless mode):
    def processNoUSB(self, inframe):
        # Get the next camera image at the processing resolution (may block until it is captured) and here convert
        # it to OpenCV GRAY by default. Also supported are getCvRGBp(), getCvBGRp(), and getCvRGBAp():
        inimg = inframe.getCvGRAYp()

        # Detect ArUco markers:
        corners, ids, rej = cv2.aruco.detectMarkers(inimg, self.dict, parameters = self.params)

        # Nothing to display in headless mode. Instead, just send some data over the serial port:
        if len(corners) > 0:
            for (marker, id) in zip(corners, ids):
                jevois.sendSerial("Detected ArUco ID={}".format(id))
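                # For example (an optional extension, not part of this module), you could also report the marker's
                # center point, computed as the mean of its four corners:
                # cx, cy = marker[0].mean(axis=0)
                # jevois.sendSerial("ArUco id={} center=({:.1f},{:.1f})".format(id, cx, cy))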