Can't test now; I left the project on the iMac in the studio. Anyway, I just downloaded the 1.2.6 alpha and immediately tested Cmd+F for the node search function. That's cool!!
I will try the project on my Mac Pro ASAP.
bLackburst, that's a good question, and unfortunately we don't know the answer yet. It depends on the library / 3rd-party code we would use to implement the nodes. Ideally we'd be able to find a good library that takes depth images as input, so with the Kinect v2 feature request you would automatically get Kinect v2 support in the skeletal tracking nodes as well. However, if the only adequate libraries available grab the Kinect depth images themselves instead of allowing them to be input by the user, it would depend on whether those libraries support Kinect v2. We're still researching libraries and are open to suggestions.
Yep, those are Vuo with point meshes. The reference image is from Reaktor, which generates the sine waves. To get it, I split the Receive Live Audio output into First/Last in List, enqueued the values, and merged them into an x/y list. I then triggered the Enqueue node from a 'Fire Periodically' node that probably fires too fast for stable/efficient use.
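The enqueue-and-merge idea above can be sketched outside Vuo, too. Here's a rough Python equivalent of that node graph (names and the queue size are illustrative assumptions, not Vuo's API): two fixed-length queues stand in for the Enqueue nodes, a callback stands in for the 'Fire Periodically' trigger, and a merge step zips them into (x, y) points for a point mesh.

```python
from collections import deque

QUEUE_SIZE = 512  # assumed capacity, like an Enqueue node's list length limit

x_queue = deque(maxlen=QUEUE_SIZE)  # samples from the first audio channel
y_queue = deque(maxlen=QUEUE_SIZE)  # samples from the last audio channel

def on_trigger(first_channel_sample, last_channel_sample):
    """Called each time the periodic trigger fires (like 'Fire Periodically')."""
    x_queue.append(first_channel_sample)
    y_queue.append(last_channel_sample)

def merged_points():
    """Merge the two queues into one list of (x, y) points for a point mesh."""
    return list(zip(x_queue, y_queue))

# Feed in a few fake audio samples to show the shape of the output.
for i in range(4):
    on_trigger(i * 0.1, i * -0.1)

print(merged_points())
```

Because the deques have a fixed `maxlen`, old samples fall off the back automatically once the queues fill, which is roughly what keeps the scope-style trace a constant length on screen.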
Shaders shouldn't be too big of a problem in Vuo; they're quite easy to set up once you understand where stuff is supposed to go (though it probably helps to have someone who knows what they're doing, which I don't). The SDK and the source are great resources, as are Shadertoy, Paul Bourke's image filters, and a bunch of other websites I don't remember.
As for shaders, it sounds like you're referring to coding a custom shader for Vuo? That's probably beyond our capabilities at the moment, but sometimes we're fortunate to have talented volunteer programmers working with us. What are the main tips/resources we can look at to get programmers started in the right direction, e.g. the Vuo SDK?