It depends on what you are trying to achieve. TensorFlow models are usually trained for a specific input resolution, e.g., 224x224 or 128x128.
In the TensorFlowEasy module http://jevois.org/moddoc/TensorFlowEasy/modinfo.html we first take a central crop of the input video (the grey square shown in the screenshots), resize it, if needed, to the network's expected input size, and then process it through the network. The crop size is set by the parameter "foa", documented as follows:
Width and height (in pixels) of the fixed, central focus of attention. This is the size of the central image crop that is taken in each frame and fed to the deep neural network. If the foa size does not fit within the camera input frame size, it will be shrunk to fit. To avoid spending CPU resources on rescaling the selected image region, it is best to use here the size that the deep network expects as input.
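The crop-then-resize behavior described above can be sketched roughly like this (an illustrative NumPy sketch of the idea, not the actual JeVois implementation; function and variable names are made up):

```python
import numpy as np

def central_crop_resize(frame, foa_w, foa_h, net_w, net_h):
    """Take a central crop of size (foa_w, foa_h) from frame, then resize
    it (nearest-neighbor here, for simplicity) to the network input size."""
    h, w = frame.shape[:2]
    # Shrink the crop to fit the frame if needed, as the foa doc describes:
    cw, ch = min(foa_w, w), min(foa_h, h)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    if (cw, ch) == (net_w, net_h):
        return crop  # no CPU spent on rescaling when foa matches the network
    # Nearest-neighbor resize with plain NumPy indexing:
    ys = np.arange(net_h) * ch // net_h
    xs = np.arange(net_w) * cw // net_w
    return crop[ys][:, xs]

frame = np.zeros((600, 800, 3), dtype=np.uint8)  # 800x600 camera frame
out = central_crop_resize(frame, 500, 500, 224, 224)
# out.shape == (224, 224, 3)
```

Note how the `if` branch captures the doc's advice: setting foa to the network's own input size skips the resize step entirely.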
So if you run at 800x600, you could set foa to 500 500 or whatever else you like. This is best done in params.cfg (see the link above).
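For example, a params.cfg entry for that setting might look like this (a sketch only; check the module's own params.cfg for the exact syntax):

```
# Central focus-of-attention crop, width height in pixels:
foa = 500 500
```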
If you have a trained network that expects 800x600 inputs, then set foa to 800 600 as well. The only caveat is that such a network will likely run very slowly on JeVois, and you might run out of memory depending on how complex it is.