As one of Vuo's developers, I work on Vuo's engine (the thing that makes compositions run), work on nodes, and write documentation. You'll see me on the forums answering people's questions about Vuo.
I've been developing apps and frameworks for several years (since I was in college). Pre-Vuo projects include Kineme Quartz Composer plugins, iOS apps for education, an app that analyzes photographs of tomato slices, and software to help people with disabilities use talking keyboards.
I enjoy using Vuo to make live music visuals. My hope for Vuo is that it will grow into a community of people of diverse backgrounds and identities making lots of different artistic, useful, unique, goofy, beautiful, crafty, wonderful compositions.
I'm able to receive the simpler "Depth" image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low.
That would be because Syphon is limited to 8bpc, whereas the Kinect is providing higher bit depth images.
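To put rough numbers on that (the range below is just an illustrative assumption, not an exact Kinect spec): squeezing a usable depth span of a few meters into 256 gray levels gives centimeter-scale steps, whereas 16 bits gives sub-millimeter steps.

```c
// Illustrative only: assumes a ~0.5–4.5 m usable depth span mapped linearly
// onto the image's gray levels. Real Kinect depth isn't exactly linear, but
// the quantization comparison still holds.
#include <stdio.h>

int main(void)
{
    double rangeMeters = 4.5 - 0.5;              // assumed usable depth span
    double step8bpc  = rangeMeters / 255.0;      // one gray level at 8 bits per channel
    double step16bpc = rangeMeters / 65535.0;    // one gray level at 16 bits per channel
    printf("8bpc step:  %.1f mm\n", step8bpc * 1000);   // ≈ 15.7 mm
    printf("16bpc step: %.3f mm\n", step16bpc * 1000);  // ≈ 0.061 mm
    return 0;
}
```

That centimeter-scale quantization would explain why the displacement from the 8bpc image looks so coarse.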
1) When operating with the Kinect v1, does the Vuo.Kinect node decode more detail than I'm currently able to read from the greyscale image?
The node receives 16bpc images from the Kinect v1. However, since the sensor hardware is better on the Kinect v2 than the Kinect v1, I'm not sure how Kinect v1 at 16bpc would compare to what you're getting now with Kinect v2 at 8bpc.
2.a) I'm thinking about adding support for Kinect v2 and ROS118-encoded depth. I think it should be easy to write a node decoding RGB Syphon images to a 3D plane, but won't the performance degrade because of the Syphon protocol? Shouldn't it rather use system drivers to fetch data directly from the Kinect v2 via the USB bus? (That would be beyond my capabilities.)
2.b) Would Vuo be able to process 30 frames per second with this amount of 3D data, or should I rather think about using a lower-level tool like openFrameworks?
Since Vuo does mesh deformations on the GPU, it should be able to handle large meshes like that quickly. Though in general performance is hard to predict because it depends on a number of factors, including your computer. You're using Displace 3D Object with Image now? You could test by feeding it a series of images at the size and level of detail that you would expect from the Kinect v2.
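If you do end up writing a node to decode RGB-encoded depth, the core of it could be as small as the sketch below. This is only a guess at the encoding (high byte in the red channel, low byte in the green channel, linearly mapped to meters); `depthFromRGB` and `depthToMeters` are hypothetical helpers, and the layout your encoder actually uses may differ, so check its documentation.

```c
#include <stdint.h>
#include <stdio.h>

// Reconstruct a 16-bit depth value from one pixel's 8-bit red and green channels,
// assuming the encoder packs the high byte in red and the low byte in green.
static uint16_t depthFromRGB(uint8_t r, uint8_t g)
{
    return (uint16_t)((r << 8) | g);
}

// Convert the raw 16-bit value to meters, assuming the encoder maps
// [0, 65535] linearly onto [minDepthMeters, maxDepthMeters].
static double depthToMeters(uint16_t raw, double minDepthMeters, double maxDepthMeters)
{
    return minDepthMeters + (maxDepthMeters - minDepthMeters) * ((double)raw / 65535.0);
}

int main(void)
{
    // Example: a pixel whose red channel is 0x12 and green channel is 0x34.
    uint16_t raw = depthFromRGB(0x12, 0x34);            // 0x1234 = 4660
    printf("%.3f m\n", depthToMeters(raw, 0.5, 4.5));   // ≈ 0.784 m with an assumed 0.5–4.5 m range
    return 0;
}
```

In practice you'd do this for every pixel to rebuild a higher-precision depth image (or do the equivalent lookup in a shader) before using it for displacement, rather than displacing from the 8-bit grayscale directly.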
Interesting idea. This would make a straighter cut than Trim 3D Object, since it would be at the pixel level instead of the triangle level.
Since clipping is per-camera, one possibility would be to render the object you want to clip with one Render Scene to Image node and a camera with your chosen clipping distances, render the rest of the scene with another Render Scene to Image node and a camera with the default clipping distances, and then combine the two with Blend Images.