Welcome to JeVois Tech Zone, where you can ask questions and receive answers from other members of the community.

fps and precision of aruco pose estimates in 1280x1024 resolution

0 votes

Hello,

I am trying to evaluate the following with the DemoArUco vision module:
 * precision of pose estimates
 * number of pose estimates per second

http://jevois.org/doc/VideoMapping.html suggests that the highest resolution is SXGA (1280 x 1024) at up to 15 fps

While exploring the influence of input image resolution, I eventually managed to create and use the following video mapping:
YUYV 1280 1044 15.0 YUYV 1280 1024 15.0 JeVois DemoArUco

(thanks to http://jevois.org/qa/index.php?qa=216&qa_1=added-demoaruco-video-mappings-not-working&show=216#q216 for the 20 additional pixels)

However, I noticed that the fps dropped to about 7-9 fps. JeVois is connected to a USB 3 port on a Windows 7 host. Should I still expect the documented ~9.3 fps on USB 3?
I can happily live with a lower-resolution preview as long as I am sure the image actually used for pose estimation is at the higher resolution (after all, if the final system works, the image data will never be transmitted, only the pose estimates).

I tried
YUYV 640 500 15.0 YUYV 1280 1024 15.0 JeVois DemoArUco
but I get an error message about a wrong image size. Most likely I need to adjust the module source to downsize the image after the pose estimation, prior to sending it over USB. However, I am unable to modify the C++ source code, jevois-inventor suggests that C++ modification may not be possible, and I am afraid that using Python may degrade performance.

How can I change the source of the C++ DemoArUco module (maybe outside jevois-inventor)?
How can I observe the performance of ArUco pose estimation at resolutions higher than 640x480 without being limited by the USB transfer rate (using grayscale, maybe)?

Also, the documentation states the following

If your algorithm really runs at 26.3fps but you specify 30.0fps camera frame rate, then the frames will actually end up being pumped to USB at only 15.0fps (i.e., by the time you finish processing the current frame, you have missed the next one from the camera, and you need to wait for the following one).

This is confusing to me; it sounds like a chicken-and-egg problem: I am trying to find out the fps of ArUco detection depending on marker properties, but I am supposed to already know it so that the fps does not drop...?
=> Can someone please elaborate? How can I set up the videomapping so that I really observe the algorithm's performance?

asked Oct 25 in Programmer Questions by fourchette (350 points)

1 Answer

+1 vote
 
Best answer

Great questions. 9.3 fps is the theoretical maximum at 1280x1024 YUYV, based on image size and USB isochronous bandwidth (24 MBytes/s). If you are seeing less than that, it is likely because your vision algorithm runs slower.
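That theoretical maximum is simple back-of-the-envelope arithmetic, which you can check in a few lines (a sketch; the exact figure depends on how the ~24 MBytes/s isochronous budget is counted):

```python
# Theoretical max frame rate for streaming YUYV 1280x1024 over a
# ~24 MByte/s USB isochronous pipe. YUYV uses 2 bytes per pixel.
bytes_per_frame = 1280 * 1024 * 2        # 2,621,440 bytes per frame
usb_budget = 24_000_000                  # ~24 MBytes/s isochronous bandwidth

max_fps = usb_budget / bytes_per_frame
print(f"{max_fps:.1f} fps")              # around 9 fps, close to the quoted ~9.3
```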

The jevois::Timer class was created specifically for your needs: it measures the actual processing speed of your algorithm. In many modules, we start the timer measurement period just after we receive the video frame from the sensor, and we end it as we send the results frame off to USB. The time this takes (converted to fps) is what you see at the bottom of the video for many modules, next to the CPU load, temperature, and MHz. Each time you see that info at the bottom left of the video, it comes from a Timer that measures algorithm performance (not camera sensor frames/s).

So, if you see that DemoArUco runs at less than 9.3fps according to what is displayed at the bottom left of the video, then your speed is limited by the algo (at that resolution).
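The idea behind that measurement can be approximated in plain Python (a sketch of the concept only, not the actual jevois::Timer API): time just the processing span, frame by frame, and convert the average to fps.

```python
import time

def measure_algo_fps(process, frames):
    """Average processing speed, analogous to what jevois::Timer reports:
    the clock runs from frame received to result sent, so it reflects the
    algorithm's own speed, not the camera sensor's frame rate."""
    total = 0.0
    for frame in frames:
        start = time.perf_counter()           # frame received from sensor
        process(frame)                        # the vision algorithm runs here
        total += time.perf_counter() - start  # results frame sent to USB
    return len(frames) / total

# Toy stand-in for a vision algorithm: ~5 ms per frame, so the reported
# fps should come out somewhere below 200.
fps = measure_algo_fps(lambda f: time.sleep(0.005), range(20))
print(f"{fps:.0f} fps")
```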

The fps reported at the bottom left is there precisely to help you decide how fast your algorithm can run at a given resolution, and how to optimize your video mapping. For example, say you run DemoArUco with a 320x240@30fps setting and you see a report of 48fps at the bottom left of the video: that means your algorithm could run at up to 48fps. Because there is a bit of overhead in grabbing video and sending it over USB, you might then adjust your video mapping to 320x240@45fps or so. Hence there is no chicken-and-egg problem with this method. Note that Python algorithms will require more margin (maybe from 48fps down to 40fps), as the timer is started after the camera frame is converted to numpy format and stopped before it is converted back from numpy to raw data for USB, and those two conversions take a bit of time.
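So the tuning recipe above boils down to: measure, then subtract a safety margin. A tiny sketch (the 48fps figure is from the example above; the margin factors are rough guesses, not JeVois constants):

```python
def suggested_mapping_fps(measured_fps, python_module=False):
    # Leave headroom for grab/send overhead; python modules pay extra for
    # the numpy conversions, so give them a larger margin.
    margin = 0.85 if python_module else 0.95
    return measured_fps * margin

print(suggested_mapping_fps(48.0))                      # ~45 fps for a C++ module
print(suggested_mapping_fps(48.0, python_module=True))  # ~40 fps for a python module
```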

To experiment with different output sizes with rescaling on the fly, try the python version of the aruco module, as described here: http://jevois.org/tutorials/ProgrammerInvHello.html towards the end of the tutorial. It is the same core algorithm (C++ aruco implementation of opencv). In there, out.sendCv() will take care of any rescaling and pixel type conversion, so you can experiment with a bunch of different mappings using that code.
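For instance, assuming your python module is installed under a name like PythonArUco (a hypothetical name here; substitute whatever your module is actually called), mappings along these lines would let sendCv() rescale a full-resolution result down to a smaller USB preview:

```
# output over USB first, then camera sensor format:
# process at 1280x1024, stream a half-size preview
YUYV 640 512 15.0 YUYV 1280 1024 15.0 JeVois PythonArUco
# or an even smaller preview at a lower rate
YUYV 320 256 7.5 YUYV 1280 1024 7.5 JeVois PythonArUco
```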

To change the C++ code, you need to install the jevois SDK, see here: http://jevois.org/tutorials/ProgrammerSetup.html

Finally, regarding the 26.3fps vs 30fps, this is a simple estimate that assumes no buffering (while we actually do allow for some buffering in JeVois now), which goes like this:

If your sensor runs at 30fps, you get a frame every 33.33ms.

If your algo runs at 26.3fps, you need 38ms to process one frame. 

The camera sensor has a fixed clock and it just outputs one frame after the other with no delay or pause. So, you end up with this (again, assuming no buffering):

at t=0, frame 1 is done capturing and is sent to processing.

at t=33.33ms frame 2 is done capturing but processing of frame 1 takes 38ms so it still is ongoing, hence we miss frame 2 and the camera sensor starts capturing frame 3.

at t=38ms, processing is done, but frame 2 was dropped and frame 3 is not ready yet; we have to wait until frame 3 is completely captured, which will be at t=66.66ms.

and so on: we basically end up dropping every other frame and processing at 15fps, with lots of idle CPU time as we wait for the next frame after barely missing the previous one. But if you program the camera sensor for slightly below 26.3fps, you should be able to finish processing one frame just before the next frame arrives, and you will not drop any frames.
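Under those same no-buffering assumptions, the drop-every-other-frame behavior can be simulated directly (a sketch; real JeVois firmware now allows some buffering, which softens this effect):

```python
# Simulate a sensor delivering a frame every 1/30 s and an algorithm that
# needs 1/26.3 s per frame, with no buffering: a frame can only be grabbed
# at the instant its capture completes, and only if the algorithm is idle.
sensor_period = 1 / 30.0     # 33.33 ms between captured frames
algo_time = 1 / 26.3         # 38 ms to process one frame

busy_until = 0.0             # time at which the algorithm becomes free
processed = 0
n_frames = 3000
for i in range(n_frames):
    capture_done = (i + 1) * sensor_period
    if capture_done >= busy_until:        # algorithm idle: grab this frame
        processed += 1
        busy_until = capture_done + algo_time
    # otherwise the frame is dropped, as in the walkthrough above

duration = n_frames * sensor_period
print(f"{processed / duration:.1f} fps")  # 15.0 fps: every other frame dropped
```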

Finally, note that at 1280x1024 with USB output, significant time is spent just creating the video output and sending it over USB. So you may also want to try running the algorithm with no USB output (serial messages only). The C++ version can already do this. See this new tutorial for adding that capability to the Python version: http://jevois.org/tutorials/UserHeadless.html; first you need to implement processNoUSB() in your python module, and see http://jevois.org/doc/ProgrammerPython.html and http://jevois.org/doc/ModulePythonTutorial.html for more about processNoUSB().

answered Oct 26 by JeVois (32,640 points)
selected Oct 28 by fourchette
wow fantastic and very detailed answer! thanks

Not only does the JeVois cam fit a perfect spot in the maker universe for a very decent price, but its documentation is very detailed (I love the videos, by the way; I wouldn't really dare to go C++ otherwise) and the support quality is really impressive!

hats off!

just an annoying detail for the admins: when I post a question, it goes into the "awaiting review" state, which is fine. However, if I accidentally close the tab and didn't copy/paste its URL somewhere, or log in to the forum from another PC, I cannot see my "questions awaiting review" in my user account (only already-approved questions). Therefore, since it takes about two days to get a question approved, I'm always wondering: "did I push that submit button?"
=> how can I know if I did?
(again, it's really a detail, but I can feel that the more I use JeVois, the more I'll be facing that situation :p)
You are most welcome. Yes, the moderation is annoying, we need it to block fake spam postings. However, moderation stops after you have 2 approved posts (which is your case now, so no more moderation for you!)
...