JeVoisBase  1.20
JeVois Smart Embedded Machine Vision Toolkit Base Modules
FirstPython.py
######################################################################################################################
#
# JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2017 by Laurent Itti, the University of Southern
# California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
#
# This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
# redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
# Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General Public License along with this program;
# if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
# Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
######################################################################################################################

import pyjevois
if pyjevois.pro: import libjevoispro as jevois
else: import libjevois as jevois
import cv2
import numpy as np
import math # for cos, sin, etc

## Simple example of FIRST Robotics image processing pipeline using OpenCV in Python on JeVois
#
# This module is a simplified version of the C++ module \jvmod{FirstVision}. It is available with \jvversion{1.6.2} or
# later.
#
# This module implements a simple color-based object detector using OpenCV in Python. Its main goal is to also
# demonstrate full 6D pose recovery of the detected object, in Python.
#
# This module isolates pixels within a given HSV range (hue, saturation, and value of color pixels), does some cleanups,
# and extracts object contours. It looks for a rectangular U shape of a specific size (set by parameters \p owm and
# \p ohm for object width and height in meters). See screenshots for an example of shape. It sends information about
# detected objects over serial.
#
# This module usually works best with the camera sensor set to manual exposure, manual gain, manual color balance, etc.,
# so that HSV color values are reliable. See the \b script.cfg file in this module's directory for an example of
# how to set the camera settings each time this module is loaded.
#
# This module is provided for inspiration. It has no pretension of actually solving the FIRST Robotics vision problem
# in a complete and reliable way. It is released in the hope that FRC teams will try it out and get inspired to
# develop something much better for their own robot.
#
# Using this module
# -----------------
#
# Check out [this tutorial](http://jevois.org/tutorials/UserFirstVision.html) first, for the \jvmod{FirstVision} module
# written in C++, and also check out the doc for \jvmod{FirstVision}. Then you can just dive in and start editing the
# Python code of \jvmod{FirstPython}.
#
# See http://jevois.org/tutorials for tutorials on getting started with programming JeVois in Python without having
# to install any development software on your host computer.
#
# Trying it out
# -------------
#
# Edit the module's file at JEVOIS:/modules/JeVois/FirstPython/FirstPython.py and set the parameters \p self.owm and \p
# self.ohm to the physical width and height of your U-shaped object in meters. You should also review and edit the other
# parameters in the module's constructor, such as the range of HSV colors.
#
# @author Laurent Itti
#
# @displayname FIRST Python
# @videomapping YUYV 640 252 60.0 YUYV 320 240 60.0 JeVois FirstPython
# @videomapping YUYV 320 252 60.0 YUYV 320 240 60.0 JeVois FirstPython
# @email itti\@usc.edu
# @address University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA
# @copyright Copyright (C) 2018 by Laurent Itti, iLab and the University of Southern California
# @mainurl http://jevois.org
# @supporturl http://jevois.org/doc
# @otherurl http://iLab.usc.edu
# @license GPL v3
# @distribution Unrestricted
# @restrictions None
# @ingroup modules
class FirstPython:
    # ###################################################################################################
    ## Constructor
    def __init__(self):
        # HSV color range to use:
        #
        # H: 0=red/do not use because of wraparound, 30=yellow, 45=light green, 60=green, 75=green cyan, 90=cyan,
        #    105=light blue, 120=blue, 135=purple, 150=pink
        # S: 0 for unsaturated (whitish discolored object) to 255 for fully saturated (solid color)
        # V: 0 for dark to 255 for maximally bright
        self.HSVmin = np.array([ 20,  50, 180], dtype=np.uint8)
        self.HSVmax = np.array([ 80, 255, 255], dtype=np.uint8)

        # Measure your U-shaped object (in meters) and set its size here:
        self.owm = 0.280 # width in meters
        self.ohm = 0.175 # height in meters

        # Other processing parameters:
        self.epsilon = 0.015               # Shape smoothing factor (higher for smoother)
        self.hullarea = ( 20*20, 300*300 ) # Range of object area (in pixels) to track
        self.hullfill = 50                 # Max fill ratio of the convex hull (percent)
        self.ethresh = 900                 # Shape error threshold (lower is stricter for exact shape)
        self.margin = 5                    # Margin from frame borders (pixels)

        # Instantiate a JeVois Timer to measure our processing framerate:
        self.timer = jevois.Timer("FirstPython", 100, jevois.LOG_INFO)

        # CAUTION: The constructor is a time-critical code section. Taking too long here could upset USB timings and/or
        # video capture software running on the host computer. Only init the strict minimum here, and do not use OpenCV,
        # read files, etc.

    # ###################################################################################################
    ## Load camera calibration from JeVois share directory
    def loadCameraCalibration(self, w, h):
        cpf = pyjevois.share + "/camera/calibration{}x{}.yaml".format(w, h)
        fs = cv2.FileStorage(cpf, cv2.FILE_STORAGE_READ)
        if (fs.isOpened()):
            self.camMatrix = fs.getNode("camera_matrix").mat()
            self.distCoeffs = fs.getNode("distortion_coefficients").mat()
            jevois.LINFO("Loaded camera calibration from {}".format(cpf))
        else:
            jevois.LERROR("Failed to read camera parameters from file [{}] -- IGNORED".format(cpf))
            self.camMatrix = np.eye(3, 3, dtype=np.double)
            self.distCoeffs = np.zeros((5, 1), dtype=np.double)

    # ###################################################################################################
    ## Detect objects within our HSV range
    def detect(self, imgbgr, outimg = None):
        maxn = 5 # max number of objects we will consider
        h, w, chans = imgbgr.shape

        # Convert input image to HSV:
        imghsv = cv2.cvtColor(imgbgr, cv2.COLOR_BGR2HSV)

        # Isolate pixels inside our desired HSV range:
        imgth = cv2.inRange(imghsv, self.HSVmin, self.HSVmax)
        str = "H={}-{} S={}-{} V={}-{} ".format(self.HSVmin[0], self.HSVmax[0], self.HSVmin[1],
                                                self.HSVmax[1], self.HSVmin[2], self.HSVmax[2])

        # Create structuring elements for morpho maths:
        if not hasattr(self, 'erodeElement'):
            self.erodeElement = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
            self.dilateElement = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))

        # Apply morphological operations to cleanup the image noise:
        imgth = cv2.erode(imgth, self.erodeElement)
        imgth = cv2.dilate(imgth, self.dilateElement)

        # Detect objects by finding contours:
        contours, hierarchy = cv2.findContours(imgth, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        str += "N={} ".format(len(contours))

        # Only consider the 5 biggest objects by area:
        contours = sorted(contours, key = cv2.contourArea, reverse = True)[:maxn]
        hlist = [ ] # list of hulls of good objects, which we will return
        str2 = ""
        beststr2 = ""

        # Identify the "good" objects:
        for c in contours:
            # Keep track of our best detection so far:
            if len(str2) > len(beststr2): beststr2 = str2
            str2 = ""

            # Compute contour area:
            area = cv2.contourArea(c, oriented = False)

            # Compute convex hull:
            rawhull = cv2.convexHull(c, clockwise = True)
            rawhullperi = cv2.arcLength(rawhull, closed = True)
            hull = cv2.approxPolyDP(rawhull, epsilon = self.epsilon * rawhullperi * 3.0, closed = True)

            # Is it the right shape?
            if (hull.shape != (4,1,2)): continue # 4 vertices for the rectangular convex outline (shows as a trapezoid)
            str2 += "H" # Hull is quadrilateral

            huarea = cv2.contourArea(hull, oriented = False)
            if huarea < self.hullarea[0] or huarea > self.hullarea[1]: continue
            str2 += "A" # Hull area ok

            hufill = area / huarea * 100.0
            if hufill > self.hullfill: continue
            str2 += "F" # Fill is ok

            # Check object shape:
            peri = cv2.arcLength(c, closed = True)
            approx = cv2.approxPolyDP(c, epsilon = self.epsilon * peri, closed = True)
            if len(approx) < 7 or len(approx) > 9: continue # 8 vertices for a U shape
            str2 += "S" # Shape is ok

            # Compute contour serr:
            serr = 100.0 * cv2.matchShapes(c, approx, cv2.CONTOURS_MATCH_I1, 0.0)
            if serr > self.ethresh: continue
            str2 += "E" # Shape error is ok

            # Reject the shape if any of its vertices gets within the margin of the image bounds. This is to avoid
            # getting grossly incorrect 6D pose estimates as the shape starts getting truncated as it partially exits
            # the camera field of view:
            reject = 0
            for v in c:
                if v[0,0] < self.margin or v[0,0] >= w-self.margin or v[0,1] < self.margin or v[0,1] >= h-self.margin:
                    reject = 1
                    break

            if reject == 1: continue
            str2 += "M" # Margin ok

            # Re-order the 4 points in the hull if needed: In the pose estimation code, we will assume vertices ordered
            # as follows:
            #
            #    0|          |3
            #     |          |
            #     |          |
            #    1------------2

            # v10+v23 should be pointing outward the U more than v03+v12 is:
            v10p23 = complex(hull[0][0,0] - hull[1][0,0] + hull[3][0,0] - hull[2][0,0],
                             hull[0][0,1] - hull[1][0,1] + hull[3][0,1] - hull[2][0,1])
            len10p23 = abs(v10p23)
            v03p12 = complex(hull[3][0,0] - hull[0][0,0] + hull[2][0,0] - hull[1][0,0],
                             hull[3][0,1] - hull[0][0,1] + hull[2][0,1] - hull[1][0,1])
            len03p12 = abs(v03p12)

            # Vector from centroid of U shape to centroid of its hull should also point outward of the U:
            momC = cv2.moments(c)
            momH = cv2.moments(hull)
            vCH = complex(momH['m10'] / momH['m00'] - momC['m10'] / momC['m00'],
                          momH['m01'] / momH['m00'] - momC['m01'] / momC['m00'])
            lenCH = abs(vCH)

            if len10p23 < 0.1 or len03p12 < 0.1 or lenCH < 0.1: continue
            str2 += "V" # Shape vectors ok

            good = (v10p23.real * vCH.real + v10p23.imag * vCH.imag) / (len10p23 * lenCH)
            bad = (v03p12.real * vCH.real + v03p12.imag * vCH.imag) / (len03p12 * lenCH)

            # We reject upside-down detections as those are likely to be spurious:
            if vCH.imag >= -2.0: continue
            str2 += "U" # U shape is upright

            # Fixup the ordering of the vertices if needed:
            if bad > good: hull = np.roll(hull, shift = 1, axis = 0)

            # This detection is a keeper:
            str2 += " OK"
            hlist.append(hull)

        if len(str2) > len(beststr2): beststr2 = str2

        # Display any results requested by the users:
        if outimg is not None and outimg.valid():
            if (outimg.width == w * 2): jevois.pasteGreyToYUYV(imgth, outimg, w, 0)
            jevois.writeText(outimg, str + beststr2, 3, h+1, jevois.YUYV.White, jevois.Font.Font6x10)

        return hlist

    # ###################################################################################################
    ## Estimate 6D pose of each of the quadrilateral objects in hlist:
    def estimatePose(self, hlist):
        rvecs = []
        tvecs = []

        # set coordinate system in the middle of the object, with Z pointing out
        objPoints = np.array([ ( -self.owm * 0.5, -self.ohm * 0.5, 0 ),
                               ( -self.owm * 0.5,  self.ohm * 0.5, 0 ),
                               (  self.owm * 0.5,  self.ohm * 0.5, 0 ),
                               (  self.owm * 0.5, -self.ohm * 0.5, 0 ) ])

        for detection in hlist:
            det = np.array(detection, dtype=np.float64).reshape(4,2,1)
            (ok, rv, tv) = cv2.solvePnP(objPoints, det, self.camMatrix, self.distCoeffs)
            if ok:
                rvecs.append(rv)
                tvecs.append(tv)
            else:
                rvecs.append(np.array([ 0.0, 0.0, 0.0 ]))
                tvecs.append(np.array([ 0.0, 0.0, 0.0 ]))

        return (rvecs, tvecs)

    # ###################################################################################################
    ## Send serial messages, one per object
    def sendAllSerial(self, w, h, hlist, rvecs, tvecs):
        idx = 0
        for c in hlist:
            # Compute quaternion: FIXME need to check!
            tv = tvecs[idx]
            axis = rvecs[idx]
            angle = (axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]) ** 0.5

            # This code lifted from pyquaternion from_axis_angle:
            mag_sq = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]
            if (abs(1.0 - mag_sq) > 1e-12): axis = axis / (mag_sq ** 0.5)
            theta = angle / 2.0
            r = math.cos(theta)
            i = axis * math.sin(theta)
            q = (r, i[0], i[1], i[2])

            # Note: np.asscalar() was removed in recent NumPy; .item() is the equivalent:
            jevois.sendSerial("D3 {} {} {} {} {} {} {} {} {} {} FIRST".
                              format(tv[0].item(), tv[1].item(), tv[2].item(),  # position
                                     self.owm, self.ohm, 1.0,                   # size
                                     r, i[0].item(), i[1].item(), i[2].item())) # pose
            idx += 1

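The axis-angle to quaternion conversion above (flagged `FIXME need to check!` in the original) can be verified standalone against a known case: an OpenCV-style rotation vector of magnitude pi about +Z should give quaternion (r, x, y, z) = (0, 0, 0, 1):

```python
import math
import numpy as np

axis = np.array([0.0, 0.0, math.pi])  # rotation vector: pi radians about +Z
angle = math.sqrt(float(np.dot(axis, axis)))  # rotation angle = pi

# Same conversion steps as in sendAllSerial():
mag_sq = float(np.dot(axis, axis))
if abs(1.0 - mag_sq) > 1e-12: axis = axis / math.sqrt(mag_sq)
theta = angle / 2.0
r = math.cos(theta)           # scalar part
i = axis * math.sin(theta)    # vector part

q = tuple(round(float(x), 6) for x in (r, i[0], i[1], i[2]))
print(q)  # (0.0, 0.0, 0.0, 1.0)
```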
    # ###################################################################################################
    ## Draw all detected objects in 3D
    def drawDetections(self, outimg, hlist, rvecs = None, tvecs = None):
        # Show trihedron and parallelepiped centered on object:
        hw = self.owm * 0.5
        hh = self.ohm * 0.5
        dd = -max(hw, hh)
        i = 0
        empty = np.array([ 0.0, 0.0, 0.0 ])

        for obj in hlist:
            # skip those for which solvePnP failed:
            if np.array_equal(rvecs[i], empty):
                i += 1
                continue

            # Project axis points:
            axisPoints = np.array([ (0.0, 0.0, 0.0), (hw, 0.0, 0.0), (0.0, hh, 0.0), (0.0, 0.0, dd) ])
            imagePoints, jac = cv2.projectPoints(axisPoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

            # Draw axis lines:
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[1][0,0] + 0.5), int(imagePoints[1][0,1] + 0.5),
                            2, jevois.YUYV.MedPurple)
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[2][0,0] + 0.5), int(imagePoints[2][0,1] + 0.5),
                            2, jevois.YUYV.MedGreen)
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[3][0,0] + 0.5), int(imagePoints[3][0,1] + 0.5),
                            2, jevois.YUYV.MedGrey)

            # Also draw a parallelepiped:
            cubePoints = np.array([ (-hw, -hh, 0.0), (hw, -hh, 0.0), (hw, hh, 0.0), (-hw, hh, 0.0),
                                    (-hw, -hh, dd), (hw, -hh, dd), (hw, hh, dd), (-hw, hh, dd) ])
            cu, jac2 = cv2.projectPoints(cubePoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

            # Round all the coordinates and cast to int for drawing:
            cu = np.rint(cu)

            # Draw parallelepiped lines:
            jevois.drawLine(outimg, int(cu[0][0,0]), int(cu[0][0,1]), int(cu[1][0,0]), int(cu[1][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[1][0,0]), int(cu[1][0,1]), int(cu[2][0,0]), int(cu[2][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[2][0,0]), int(cu[2][0,1]), int(cu[3][0,0]), int(cu[3][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[3][0,0]), int(cu[3][0,1]), int(cu[0][0,0]), int(cu[0][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[4][0,0]), int(cu[4][0,1]), int(cu[5][0,0]), int(cu[5][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[5][0,0]), int(cu[5][0,1]), int(cu[6][0,0]), int(cu[6][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[6][0,0]), int(cu[6][0,1]), int(cu[7][0,0]), int(cu[7][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[7][0,0]), int(cu[7][0,1]), int(cu[4][0,0]), int(cu[4][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[0][0,0]), int(cu[0][0,1]), int(cu[4][0,0]), int(cu[4][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[1][0,0]), int(cu[1][0,1]), int(cu[5][0,0]), int(cu[5][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[2][0,0]), int(cu[2][0,1]), int(cu[6][0,0]), int(cu[6][0,1]),
                            1, jevois.YUYV.LightGreen)
            jevois.drawLine(outimg, int(cu[3][0,0]), int(cu[3][0,1]), int(cu[7][0,0]), int(cu[7][0,1]),
                            1, jevois.YUYV.LightGreen)

            i += 1

    # ###################################################################################################
    ## Process function with no USB output
    def processNoUSB(self, inframe):
        # Get the next camera image (may block until it is captured) as OpenCV BGR:
        imgbgr = inframe.getCvBGR()
        h, w, chans = imgbgr.shape

        # Start measuring image processing time:
        self.timer.start()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imgbgr)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Log frames/s info (will go to serlog serial port, default is None):
        self.timer.stop()

    # ###################################################################################################
    ## Process function with USB output
    def process(self, inframe, outframe):
        # Get the next camera image (may block until it is captured). To avoid wasting much time assembling a composite
        # output image with multiple panels by concatenating numpy arrays, in this module we use raw YUYV images and
        # fast paste and draw operations provided by JeVois on those images:
        inimg = inframe.get()

        # Start measuring image processing time:
        self.timer.start()

        # Convert input image to BGR24:
        imgbgr = jevois.convertToCvBGR(inimg)
        h, w, chans = imgbgr.shape

        # Get pre-allocated but blank output image which we will send over USB:
        outimg = outframe.get()
        outimg.require("output", w * 2, h + 12, jevois.V4L2_PIX_FMT_YUYV)
        jevois.paste(inimg, outimg, 0, 0)
        jevois.drawFilledRect(outimg, 0, h, outimg.width, outimg.height-h, jevois.YUYV.Black)

        # Let camera know we are done using the input image:
        inframe.done()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imgbgr, outimg)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Draw all detections in 3D:
        self.drawDetections(outimg, hlist, rvecs, tvecs)

        # Write frames/s info from our timer into the edge map (NOTE: does not account for output conversion time):
        fps = self.timer.stop()
        jevois.writeText(outimg, fps, 3, h-10, jevois.YUYV.White, jevois.Font.Font6x10)

        # We are done with the output, ready to send it to host over USB:
        outframe.send()