JeVoisBase  1.20
JeVois Smart Embedded Machine Vision Toolkit Base Modules
PythonObject6D.py
######################################################################################################################
#
# JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2018 by Laurent Itti, the University of Southern
# California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
#
# This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
# redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
# Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General Public License along with this program;
# if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
# Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
######################################################################################################################

import pyjevois
if pyjevois.pro: import libjevoispro as jevois
else: import libjevois as jevois
import cv2
import numpy as np
import math # for cos, sin, etc

## Simple example of object detection using ORB keypoints followed by 6D pose estimation in Python
#
# This module implements an object detector using ORB keypoints using OpenCV in Python. Its main goal is to also
# demonstrate full 6D pose recovery of the detected object, in Python, as well as locating in 3D a sub-element of the
# detected object (here, a window within a larger textured wall). See \jvmod{ObjectDetect} for more info about object
# detection using keypoints. This module is available with \jvversion{1.6.3} and later.
#
# The algorithm consists of 5 phases:
# - detect keypoint locations, typically corners or other distinctive texture elements or markings;
# - compute keypoint descriptors, which are summary representations of the image neighborhood around each keypoint;
# - match descriptors from current image to descriptors previously extracted from training images;
# - if enough matches are found between the current image and a given training image, and they are of good enough
#   quality, compute the homography (geometric transformation) between keypoint locations in that training image and
#   locations of the matching keypoints in the current image. If it is well conditioned (i.e., a 3D viewpoint change
#   could well explain how the keypoints moved between the training and current images), declare that a match was
#   found, and draw a pink rectangle around the detected whole object.
# - finally perform 6D pose estimation (3D translation + 3D rotation), here for a window located at a specific position
#   within the whole object, given the known physical sizes of both the whole object and the window within. A green
#   parallelepiped is drawn at that window's location, sinking into the whole object (as it is representing a tunnel
#   or port into the object).
#
# For more information about ORB keypoint detection and matching in OpenCV, see, e.g.,
# https://docs.opencv.org/3.4.0/d1/d89/tutorial_py_orb.html
#
# This module is provided for inspiration. It has no pretension of actually solving the FIRST Robotics Power Up (sm)
# vision problem in a complete and reliable way. It is released in the hope that FRC teams will try it out and get
# inspired to develop something much better for their own robot.
#
# Note how, contrary to \jvmod{FirstVision}, \jvmod{DemoArUco}, etc, the green parallelepiped is drawn going into the
# object instead of sticking out of it, as it is depicting a tunnel at the window location.
#
# Using this module
# -----------------
#
# This module is for now specific to the "exchange" of the FIRST Robotics 2018 Power Up (sm) challenge. See
# https://www.firstinspires.org/resource-library/frc/competition-manual-qa-system
#
# The exchange is a large textured structure with a window at the bottom into which robots should deliver foam cubes.
#
# A reference picture of the whole exchange (taken from the official rules) is in
# <b>JEVOIS:/modules/JeVois/PythonObject6D/images/reference.png</b> on your JeVois microSD card. It will be processed
# when the module starts. No additional training procedure is needed.
#
# If you change the reference image, you should also edit:
# - values of \p self.owm and \p self.ohm to the width and height, in meters, of the actual physical object in your
#   picture. Square pixels are assumed, so make sure the aspect ratio of your PNG image matches the aspect ratio in
#   meters given by variables \p self.owm and \p self.ohm in the code.
# - values of \p self.wintop, \p self.winleft, \p self.winw, \p self.winh to the location of the top-left corner, in
#   meters and relative to the top-left corner of the whole reference object, of a window of interest (the tunnel into
#   which the cubes should be delivered), and width and height, in meters, of the window.
#
# \b TODO: Add support for multiple images and online training as in \jvmod{ObjectDetect}
#
# Things to tinker with
# ---------------------
#
# There are a number of limitations and caveats to this module:
#
# - It does not use color; the input image is converted to grayscale before processing. One could use a different
#   approach to object detection that would make use of color.
# - Results are often quite noisy. Maybe using another detector, like SIFT which provides subpixel accuracy, and better
#   pruning of false matches (e.g., David Lowe's ratio of the best to second-best match scores) would help.
# - This algorithm is slow in this single-threaded Python example, and frame rate depends on image complexity (it gets
#   slower when more keypoints are detected). One should explore parallelization, as was done in C++ for the
#   \jvmod{ObjectDetect} module. One could also alternate between full detection using this algorithm once in a while,
#   and much faster tracking of previous detections at a higher framerate (e.g., using the very robust TLD tracker
#   (track-learn-detect), also supported in OpenCV).
# - If you want to detect smaller objects or pieces of objects, and you do not need 6D pose, you may want to use modules
#   \jvmod{ObjectDetect} or \jvmod{SaliencySURF} as done, for example, by JeVois user Bill Kendall at
#   https://www.youtube.com/watch?v=8wYhOnsNZcc
#
#
# @author Laurent Itti
#
# @displayname Python Object 6D
# @videomapping YUYV 320 262 15.0 YUYV 320 240 15.0 JeVois PythonObject6D
# @email itti\@usc.edu
# @address University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA
# @copyright Copyright (C) 2018 by Laurent Itti, iLab and the University of Southern California
# @mainurl http://jevois.org
# @supporturl http://jevois.org/doc
# @otherurl http://iLab.usc.edu
# @license GPL v3
# @distribution Unrestricted
# @restrictions None
# @ingroup modules
class PythonObject6D:
    # ###################################################################################################
    ## Constructor
    def __init__(self):
        # Full file name of the training image:
        self.fname = "/jevois/modules/JeVois/PythonObject6D/images/reference.png"

        # Measure your object (in meters) and set its size here:
        self.owm = 48 * 0.0254    # width in meters (specs call for 48 inches)
        self.ohm = 77.75 * 0.0254 # height in meters (specs call for 77.75 inches)

        # Window within the object for which we will compute 3D pose: top-left corner in meters relative to the top-left
        # corner of the full reference object, and window width and height in meters:
        self.wintop = (77.75 - 18) * 0.0254 # top of exchange window is 18in from ground
        self.winleft = 6.88 * 0.0254        # left of exchange window is 6.88in from left edge
        self.winw = (12 + 9) * 0.0254       # exchange window is 1ft 9in wide
        self.winh = (12 + 4.25) * 0.0254    # exchange window is 1ft 4-1/4in tall

        # Other parameters:
        self.distth = 50.0 # Descriptor distance threshold (lower is stricter for exact matches)

        # Instantiate a JeVois Timer to measure our processing framerate:
        self.timer = jevois.Timer("PythonObject6D", 100, jevois.LOG_INFO)
    # ###################################################################################################
    ## Load camera calibration from JeVois share directory
    def loadCameraCalibration(self, w, h):
        cpf = pyjevois.share + "/camera/calibration{}x{}.yaml".format(w, h)
        fs = cv2.FileStorage(cpf, cv2.FILE_STORAGE_READ)
        if fs.isOpened():
            self.camMatrix = fs.getNode("camera_matrix").mat()
            self.distCoeffs = fs.getNode("distortion_coefficients").mat()
            jevois.LINFO("Loaded camera calibration from {}".format(cpf))
        else:
            jevois.LERROR("Failed to read camera parameters from file [{}] -- IGNORED".format(cpf))
            self.camMatrix = np.eye(3, 3, dtype=np.double)
            self.distCoeffs = np.zeros((5, 1), dtype=np.double)
    # ###################################################################################################
    ## Detect objects using keypoints
    def detect(self, imggray, outimg = None):
        h, w = imggray.shape
        hlist = []

        # Create a keypoint detector if needed:
        if not hasattr(self, 'detector'):
            self.detector = cv2.ORB_create()

        # Load training image and detect keypoints on it if needed:
        if not hasattr(self, 'refkp'):
            refimg = cv2.imread(self.fname, 0)
            self.refkp, self.refdes = self.detector.detectAndCompute(refimg, None)

            # Also store corners of reference image and of window for homography mapping:
            refh, refw = refimg.shape
            self.refcorners = np.float32([ [ 0.0, 0.0 ], [ 0.0, refh ], [ refw, refh ], [ refw, 0.0 ] ]).reshape(-1,1,2)
            self.wincorners = np.float32([
                [ self.winleft * refw / self.owm, self.wintop * refh / self.ohm ],
                [ self.winleft * refw / self.owm, (self.wintop + self.winh) * refh / self.ohm ],
                [ (self.winleft + self.winw) * refw / self.owm, (self.wintop + self.winh) * refh / self.ohm ],
                [ (self.winleft + self.winw) * refw / self.owm, self.wintop * refh / self.ohm ] ]).reshape(-1,1,2)
            jevois.LINFO("Extracted {} keypoints and descriptors from {}".format(len(self.refkp), self.fname))

        # Compute keypoints and descriptors:
        kp, des = self.detector.detectAndCompute(imggray, None)
        msg = "{} keypoints".format(len(kp))

        # Create a matcher if needed:
        if not hasattr(self, 'matcher'):
            self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True)

        # Compute matches between reference image and camera image, then sort them by distance:
        matches = self.matcher.match(des, self.refdes)
        matches = sorted(matches, key = lambda x: x.distance)
        msg += ", {} matches".format(len(matches))

        # Keep only good matches:
        lastidx = 0
        for m in matches:
            if m.distance < self.distth: lastidx += 1
            else: break
        matches = matches[0:lastidx]
        msg += ", {} good".format(len(matches))

        # If we have enough matches, compute homography:
        corners = []
        wincorners = []
        if len(matches) >= 10:
            obj = []
            scene = []

            # Localize the object (see JeVois C++ class ObjectMatcher for details):
            for m in matches:
                obj.append(self.refkp[m.trainIdx].pt)
                scene.append(kp[m.queryIdx].pt)

            # Compute the homography:
            hmg, mask = cv2.findHomography(np.array(obj), np.array(scene), cv2.RANSAC, 5.0)

            # Check homography conditioning using SVD:
            u, s, v = np.linalg.svd(hmg, full_matrices = False)

            # We need the smallest singular value to not be too small, and the ratio of largest to smallest singular
            # value to be quite large for our homography to be declared good here. Note that linalg.svd returns the
            # singular values in descending order already:
            if s[-1] > 0.001 and s[0] / s[-1] > 100:
                # Project the reference image corners to the camera image:
                corners = cv2.perspectiveTransform(self.refcorners, hmg)
                wincorners = cv2.perspectiveTransform(self.wincorners, hmg)

        # Display any results requested by the users:
        if outimg is not None and outimg.valid():
            if len(corners) == 4:
                jevois.drawLine(outimg, int(corners[0][0,0] + 0.5), int(corners[0][0,1] + 0.5),
                                int(corners[1][0,0] + 0.5), int(corners[1][0,1] + 0.5),
                                2, jevois.YUYV.LightPink)
                jevois.drawLine(outimg, int(corners[1][0,0] + 0.5), int(corners[1][0,1] + 0.5),
                                int(corners[2][0,0] + 0.5), int(corners[2][0,1] + 0.5),
                                2, jevois.YUYV.LightPink)
                jevois.drawLine(outimg, int(corners[2][0,0] + 0.5), int(corners[2][0,1] + 0.5),
                                int(corners[3][0,0] + 0.5), int(corners[3][0,1] + 0.5),
                                2, jevois.YUYV.LightPink)
                jevois.drawLine(outimg, int(corners[3][0,0] + 0.5), int(corners[3][0,1] + 0.5),
                                int(corners[0][0,0] + 0.5), int(corners[0][0,1] + 0.5),
                                2, jevois.YUYV.LightPink)
            jevois.writeText(outimg, msg, 3, h+4, jevois.YUYV.White, jevois.Font.Font6x10)

        # Return window corners if we did indeed detect the object:
        hlist = []
        if len(wincorners) == 4: hlist.append(wincorners)

        return hlist
    # ###################################################################################################
    ## Estimate 6D pose of each of the quadrilateral objects in hlist:
    def estimatePose(self, hlist):
        rvecs = []
        tvecs = []

        # Set coordinate system in the middle of the window, with Z pointing out:
        objPoints = np.array([ ( -self.winw * 0.5, -self.winh * 0.5, 0 ),
                               ( -self.winw * 0.5,  self.winh * 0.5, 0 ),
                               (  self.winw * 0.5,  self.winh * 0.5, 0 ),
                               (  self.winw * 0.5, -self.winh * 0.5, 0 ) ])

        for detection in hlist:
            det = np.array(detection, dtype=np.float64).reshape(4,2,1)
            (ok, rv, tv) = cv2.solvePnP(objPoints, det, self.camMatrix, self.distCoeffs)
            if ok:
                rvecs.append(rv)
                tvecs.append(tv)
            else:
                rvecs.append(np.array([ 0.0, 0.0, 0.0 ]))
                tvecs.append(np.array([ 0.0, 0.0, 0.0 ]))

        return (rvecs, tvecs)
266 
267  # ###################################################################################################
268  ## Send serial messages, one per object
269  def sendAllSerial(self, w, h, hlist, rvecs, tvecs):
270  idx = 0
271  for c in hlist:
272  # Compute quaternion: FIXME need to check!
273  tv = tvecs[idx]
274  axis = rvecs[idx]
275  angle = (axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]) ** 0.5
276 
277  # This code lifted from pyquaternion from_axis_angle:
278  mag_sq = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]
279  if (abs(1.0 - mag_sq) > 1e-12): axis = axis / (mag_sq ** 0.5)
280  theta = angle / 2.0
281  r = math.cos(theta)
282  i = axis * math.sin(theta)
283  q = (r, i[0], i[1], i[2])
284 
285  jevois.sendSerial("D3 {} {} {} {} {} {} {} {} {} {} OBJ6D".
286  format(np.asscalar(tv[0]), np.asscalar(tv[1]), np.asscalar(tv[2]), # position
287  self.owm, self.ohm, 1.0, # size
288  r, np.asscalar(i[0]), np.asscalar(i[1]), np.asscalar(i[2]))) # pose
289  idx += 1
290 
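The axis-angle-to-quaternion snippet in sendAllSerial() can be factored into a standalone helper, which also makes it easy to test: a Rodrigues rotation vector encodes axis times angle, so the angle is its norm and the quaternion is (cos(a/2), axis*sin(a/2)). This is a sketch; `quat_from_rvec` is a hypothetical name, not part of the module:

```python
import math
import numpy as np

def quat_from_rvec(rvec):
    """Convert a Rodrigues rotation vector to a unit quaternion (w, x, y, z)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    angle = np.linalg.norm(rvec)          # rotation angle is the vector norm
    if angle < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)       # identity rotation
    axis = rvec / angle                   # unit rotation axis
    half = angle / 2.0
    r = math.cos(half)
    i = axis * math.sin(half)
    return (r, i[0], i[1], i[2])
```

A quick sanity check: a rotation of pi around X gives (0, 1, 0, 0), and the result is always unit-norm, which is easy to verify before trusting the serial output downstream.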
    # ###################################################################################################
    ## Draw all detected objects in 3D
    def drawDetections(self, outimg, hlist, rvecs = None, tvecs = None):
        # Show trihedron and parallelepiped centered on object:
        hw = self.winw * 0.5
        hh = self.winh * 0.5
        dd = -max(hw, hh)
        i = 0
        empty = np.array([ 0.0, 0.0, 0.0 ])

        # NOTE: this code is similar to FirstVision, but in the present module we only have at most one object in the
        # list (the window, if detected):
        for obj in hlist:
            # Skip those for which solvePnP failed:
            if np.array_equal(rvecs[i], empty):
                i += 1
                continue

            # This could throw some overflow errors as we convert the coordinates to int, if the projection gets
            # singular because of noisy detection:
            try:
                # Project axis points:
                axisPoints = np.array([ (0.0, 0.0, 0.0), (hw, 0.0, 0.0), (0.0, hh, 0.0), (0.0, 0.0, dd) ])
                imagePoints, jac = cv2.projectPoints(axisPoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

                # Draw axis lines:
                jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                                int(imagePoints[1][0,0] + 0.5), int(imagePoints[1][0,1] + 0.5),
                                2, jevois.YUYV.MedPurple)
                jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                                int(imagePoints[2][0,0] + 0.5), int(imagePoints[2][0,1] + 0.5),
                                2, jevois.YUYV.MedGreen)
                jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                                int(imagePoints[3][0,0] + 0.5), int(imagePoints[3][0,1] + 0.5),
                                2, jevois.YUYV.MedGrey)

                # Also draw a parallelepiped. NOTE: contrary to FirstVision, here we draw it going into the object, as
                # opposed to sticking out of it (we just negate Z for that):
                cubePoints = np.array([ (-hw, -hh, 0.0), (hw, -hh, 0.0), (hw, hh, 0.0), (-hw, hh, 0.0),
                                        (-hw, -hh, -dd), (hw, -hh, -dd), (hw, hh, -dd), (-hw, hh, -dd) ])
                cu, jac2 = cv2.projectPoints(cubePoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

                # Round all the coordinates and cast to int for drawing:
                cu = np.rint(cu)

                # Draw parallelepiped lines:
                jevois.drawLine(outimg, int(cu[0][0,0]), int(cu[0][0,1]), int(cu[1][0,0]), int(cu[1][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[1][0,0]), int(cu[1][0,1]), int(cu[2][0,0]), int(cu[2][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[2][0,0]), int(cu[2][0,1]), int(cu[3][0,0]), int(cu[3][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[3][0,0]), int(cu[3][0,1]), int(cu[0][0,0]), int(cu[0][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[4][0,0]), int(cu[4][0,1]), int(cu[5][0,0]), int(cu[5][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[5][0,0]), int(cu[5][0,1]), int(cu[6][0,0]), int(cu[6][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[6][0,0]), int(cu[6][0,1]), int(cu[7][0,0]), int(cu[7][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[7][0,0]), int(cu[7][0,1]), int(cu[4][0,0]), int(cu[4][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[0][0,0]), int(cu[0][0,1]), int(cu[4][0,0]), int(cu[4][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[1][0,0]), int(cu[1][0,1]), int(cu[5][0,0]), int(cu[5][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[2][0,0]), int(cu[2][0,1]), int(cu[6][0,0]), int(cu[6][0,1]),
                                1, jevois.YUYV.LightGreen)
                jevois.drawLine(outimg, int(cu[3][0,0]), int(cu[3][0,1]), int(cu[7][0,0]), int(cu[7][0,1]),
                                1, jevois.YUYV.LightGreen)
            except Exception:
                pass

            i += 1
    # ###################################################################################################
    ## Process function with no USB output
    def processNoUSB(self, inframe):
        # Get the next camera image (may block until it is captured) as OpenCV GRAY:
        imggray = inframe.getCvGRAY()
        h, w = imggray.shape

        # Start measuring image processing time:
        self.timer.start()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imggray)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Log frames/s info (will go to serlog serial port, default is None):
        self.timer.stop()
    # ###################################################################################################
    ## Process function with USB output
    def process(self, inframe, outframe):
        # Get the next camera image (may block until it is captured). To avoid wasting much time assembling a composite
        # output image with multiple panels by concatenating numpy arrays, in this module we use raw YUYV images and
        # fast paste and draw operations provided by JeVois on those images:
        inimg = inframe.get()

        # Start measuring image processing time:
        self.timer.start()

        # Convert input image to GRAY:
        imggray = jevois.convertToCvGray(inimg)
        h, w = imggray.shape

        # Get pre-allocated but blank output image which we will send over USB:
        outimg = outframe.get()
        outimg.require("output", w, h + 22, jevois.V4L2_PIX_FMT_YUYV)
        jevois.paste(inimg, outimg, 0, 0)
        jevois.drawFilledRect(outimg, 0, h, outimg.width, outimg.height - h, jevois.YUYV.Black)

        # Let camera know we are done using the input image:
        inframe.done()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imggray, outimg)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Draw all detections in 3D:
        self.drawDetections(outimg, hlist, rvecs, tvecs)

        # Write frames/s info from our timer into the edge map (NOTE: does not account for output conversion time):
        fps = self.timer.stop()
        jevois.writeText(outimg, fps, 3, h - 10, jevois.YUYV.White, jevois.Font.Font6x10)

        # We are done with the output, ready to send it to host over USB:
        outframe.send()