Hi, I've started playing with Kinect v2 and have some questions on how it can operate with Vuo.
Let's start with the data I'm able to read from NiMate. On the left side is a "Depth" image encoded as grayscale (0-255); on the right is an "Encoded Depth" image using the REP 118 protocol, with the R and G channels encoding depth (0-65025).
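To illustrate the difference between the two streams, here's a minimal sketch of decoding the two-channel image back into raw depth values. It assumes the convention depth = R*255 + G (which matches the 0-65025 range above, i.e. 255*255); the actual NiMate encoding may differ, so check the Delicode docs before relying on this.

```python
import numpy as np

def decode_depth(rg_image):
    """Decode an RG-encoded depth image into raw depth values.

    Assumes depth = R*255 + G (an assumption based on the 0-65025
    range; verify against the REP 118 / NiMate documentation).
    rg_image: uint8 array of shape (H, W, channels>=2).
    Returns a uint32 array of shape (H, W).
    """
    # Widen to uint32 first so the multiplication doesn't overflow
    r = rg_image[..., 0].astype(np.uint32)
    g = rg_image[..., 1].astype(np.uint32)
    return r * 255 + g

# Example: a single pixel with R=255, G=0 decodes to the maximum 65025,
# versus only 256 distinct levels in the plain grayscale "Depth" image.
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(decode_depth(pixel)[0, 0])
```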
I'm able to receive the simpler "Depth" image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low, since 8-bit grayscale only provides 256 depth levels.
1) When operating with Kinect v1, does the Vuo.Kinect node decode more detail than I'm currently able to read from the grayscale image?
2.a) I'm thinking about adding support for Kinect v2 and REP 118 encoded depth. I think it should be easy to write a node that decodes the RG Syphon images onto a 3D plane, but won't performance degrade because of the Syphon protocol? Should it instead use system drivers to fetch data directly from the Kinect v2 over the USB bus? (That would be beyond my capabilities.)
2.b) Would Vuo be able to process 30 frames per second with this amount of 3D data, or should I rather think about using a lower-level tool like openFrameworks? Take a look at this image to see how detailed the mesh is: