@jstrecker commented on @Kewl's Feature Request, “Merge WXYZ Lists”

@Kewl, would Lists within lists do what you need? If not, please create a feature request, ideally with some info about how you plan to use it.

@jstrecker commented on @bLackburst's Bug Report, “Auto type-convertor bug”

@Bodysoulspirit's explanation is correct. Still, as @bLackburst pointed out, the fact that the Transform -> Rotation type converter is automatically chosen there can easily lead to mistakes when building a composition, so it could be considered a usability bug. Scheduled for work.


We're planning to provide both a Get 3D Mesh Object Info node that outputs the shader (among other things) and a Get Mesh Info node that outputs the full mesh data (vertices, etc.). Get Mesh Info would take more time to execute, since it would have to download the vertex data from the GPU and process it. But you wouldn't need to do that if you were just grabbing the shader; you'd use Get 3D Mesh Object Info instead.
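For context on why the readback is the expensive part (this is just a sketch of the underlying OpenGL idea, not Vuo's actual implementation; the buffer handle and vertex layout are hypothetical):

```c
// Sketch: reading a mesh's vertex data back from the GPU with OpenGL.
// glGetBufferSubData stalls until the GPU finishes pending work on the
// buffer, then copies the data over the bus to system memory. That's
// why a node returning full mesh data would cost more than one that
// only hands back the shader reference.
#include <OpenGL/gl3.h>
#include <stdlib.h>

typedef struct { float x, y, z; } Vertex;

Vertex *downloadVertices(GLuint vbo, size_t vertexCount)
{
    Vertex *vertices = malloc(vertexCount * sizeof(Vertex));
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Blocking transfer: GPU -> CPU copy of the whole vertex buffer.
    glGetBufferSubData(GL_ARRAY_BUFFER, 0, vertexCount * sizeof(Vertex), vertices);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vertices;
}
```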


I'm able to receive the simpler "Depth" image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low.

That would be because Syphon is limited to 8bpc, whereas the Kinect is providing higher-bit-depth images.
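Some rough numbers on why the 8bpc image looks coarse, assuming a ~4 m working range and the Kinect v1's roughly 11-bit depth (both assumptions for the arithmetic):

```c
// An 8bpc grayscale image can only represent 256 distinct depth
// levels, while the Kinect v1 reports roughly 11-bit (2048-level)
// depth. Mapping an assumed ~4 m working range linearly onto those
// levels gives the per-level step size:
#include <stdio.h>

int main(void)
{
    double rangeMeters = 4.0;                // assumed working range
    double step8bpc  = rangeMeters / 256.0;  // ~15.6 mm per gray level
    double step11bit = rangeMeters / 2048.0; // ~2.0 mm per depth level
    printf("8bpc step:   %.1f mm\n", step8bpc * 1000);
    printf("11-bit step: %.1f mm\n", step11bit * 1000);
    return 0;
}
```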

1) When operating with Kinect v1, does the Vuo.Kinect node decode more detail than I'm currently able to read from the greyscale image?

The node receives 16bpc images from the Kinect v1. However, since the sensor hardware is better on the Kinect v2 than on the Kinect v1, I'm not sure how Kinect v1 at 16bpc would compare to what you're getting now with Kinect v2 at 8bpc.

2.a) Thinking about adding support for Kinect v2 and ROS118-encoded depth: I think it should be easy to write a node decoding RGB Syphon images onto a 3D plane, but won't performance degrade because of the Syphon protocol? Shouldn't it rather use system drivers to fetch data directly from the Kinect v2 over the USB bus? (That would be beyond my capabilities.)

Yes, native Kinect support in Vuo should use fewer system resources and provide better-quality images than importing from NI mate + Syphon. There's an open feature request, Add support for Xbox One Kinect (Kinect V2). I expect we'd use https://github.com/OpenKinect/libfreenect2.
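If you did go the decoding route in the meantime, the per-pixel work is simple. A sketch, assuming a hypothetical packing where the high byte of the 16-bit depth sits in the red channel and the low byte in the green channel (NI mate's actual encoding may differ):

```c
// Sketch of unpacking 16-bit depth from an 8bpc RGB Syphon image.
// The packing scheme (high byte in red, low byte in green) is an
// assumption for illustration only.
#include <stdint.h>
#include <stddef.h>

void decodeDepth(const uint8_t *rgb, uint16_t *depth, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        uint8_t hi = rgb[i * 3 + 0];  // red channel: high byte
        uint8_t lo = rgb[i * 3 + 1];  // green channel: low byte
        depth[i] = (uint16_t)((hi << 8) | lo);
    }
}
```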

2.b) Would Vuo be able to process 30 frames per second with this amount of 3D data, or should I rather think about using a lower-level tool like openFrameworks?

Since Vuo does mesh deformations on the GPU, it should be able to handle large meshes like that quickly. In general, though, performance is hard to predict because it depends on a number of factors, including your computer. You're using Displace 3D Object with Image now? You could test by feeding it a series of images at the size and level of detail that you'd expect from the Kinect v2.
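For a sense of the scale involved, a back-of-the-envelope estimate; the Kinect v2 depth frame is 512x424, and treating each pixel as one grid-mesh vertex with xyz stored as 3 floats is just an assumption for the math:

```c
// Rough data rate if every Kinect v2 depth pixel becomes a vertex.
#include <stdio.h>

int main(void)
{
    int w = 512, h = 424, fps = 30;
    long vertices  = (long)w * h;               // 217,088 vertices
    long triangles = 2L * (w - 1) * (h - 1);    // ~432,000 triangles
    // xyz position as 3 floats = 12 bytes per vertex
    double mbPerSec = vertices * 12.0 * fps / 1e6;  // ~78 MB/s of positions
    printf("%ld vertices, %ld triangles, %.0f MB/s at %d fps\n",
           vertices, triangles, mbPerSec, fps);
    return 0;
}
```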


Interesting idea. This would make a straighter cut than Trim 3D Object, since it would be at the pixel level instead of the triangle level.

Since clipping is per-camera, one possibility would be to render the object you want to clip with one Render Scene to Image node (using your chosen clipping distances), render the rest of the scene with another Render Scene to Image node (using the default clipping distances), and combine the two with Blend Images.
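For reference, the combining step amounts to an ordinary source-over composite. This is just the standard straight-alpha formula, not the Blend Images node's internal code:

```c
// Source-over composite: the clipped object's render (fg) goes over
// the rest-of-scene render (bg) wherever fg's alpha is nonzero.
// Straight (non-premultiplied) RGBA8 assumed; bg treated as opaque.
#include <stdint.h>
#include <stddef.h>

void compositeOver(const uint8_t *fg, const uint8_t *bg, uint8_t *out, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        double a = fg[i * 4 + 3] / 255.0;
        for (int c = 0; c < 3; ++c)
            out[i * 4 + c] = (uint8_t)(fg[i * 4 + c] * a + bg[i * 4 + c] * (1.0 - a));
        out[i * 4 + 3] = 255;
    }
}
```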
