bLackburst, that's a good question, and unfortunately we don't know the answer yet. It depends on the library / third-party code we would use to implement the nodes. Ideally we'd find a good library that takes depth images as input, so with the Kinect v2 feature request you would automatically get Kinect v2 support in the skeletal tracking nodes as well. However, if the only adequate libraries available grab the Kinect depth images themselves instead of letting the user supply them, it would depend on whether those libraries support Kinect v2. We're still researching libraries and are open to suggestions.
Yep, those are Vuo with point meshes. The reference image is from Reaktor, which generates the sine waves. To get it, I split the 'Receive Live Audio' output into first/last items in the list, enqueued the values, and merged them into an x/y list. I then triggered the 'Enqueue' node from a 'Fire Periodically' node that probably fires too fast for stable/efficient use.
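The enqueue-and-merge idea above can be sketched in plain Python (not Vuo code; the queue length and function names are made up for illustration): keep a rolling queue of stereo sample pairs and treat them as x/y points for a point mesh.

```python
# Rolling queue of (left, right) sample pairs, interpreted as x/y points.
# QUEUE_LENGTH and the callback shape are assumptions, not Vuo APIs.
from collections import deque

QUEUE_LENGTH = 512  # assumed history length; old pairs fall off automatically

points = deque(maxlen=QUEUE_LENGTH)

def on_audio_buffer(left_samples, right_samples):
    """Called for each incoming audio buffer (first/last channel)."""
    # Take one representative value per buffer, much as a periodic
    # trigger would sample whatever value is current when it fires.
    points.append((left_samples[0], right_samples[0]))

def xy_list():
    """Merge the queued values into an x/y list for rendering."""
    return list(points)
```

Because the deque has a fixed `maxlen`, the oldest points drop off on their own, so the rendered trace stays a fixed length without any manual dequeueing.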
Shaders shouldn't be too big of a problem in Vuo; they are quite easy to set up once you understand where everything is supposed to go (though it probably helps to have someone around who knows what they're doing; I'm not that person). The SDK and the source are great resources, as are Shadertoy, Paul Bourke's image filters, and a bunch of other websites I don't remember.
As for shaders, it sounds like you're referring to coding a custom shader for Vuo? That's probably beyond our capabilities at the moment, but sometimes we are fortunate to have talented programmer volunteers working with us. What are the main tips/resources we can look at to get programmers started in the right direction? The Vuo SDK?
The shader approach is there to give you more control over the individual pixels, and maybe a different approach to the audio. As it stands now, I'm not sure how phase is handled between the L/R channels.
Here I'm using Jerobeam Fenderson's oscilloscope music (no audio, sorry; check out http://oscilloscopemusic.com for some absolutely fantastic visuals), which is tailored for this sort of thing. Although I can see some definite shapes in there, it doesn't seem to line up with what I get from Vuo.
However, when sending two sine waves and adjusting the sample delay for one of them, I can do this:
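Numerically, the two-sine-wave trick amounts to plotting one sine against a delayed copy of itself, which traces a Lissajous figure whose shape is set by the sample delay. A quick Python sketch (not Vuo code; the sample rate, frequency, and delay are made-up test values):

```python
# Plot one sine against a delayed copy of itself: the sample delay
# sets the phase offset between the x and y channels.
import math

SAMPLE_RATE = 48000   # assumed
FREQ = 440.0          # assumed test tone
DELAY_SAMPLES = 27    # roughly a quarter period at 440 Hz / 48 kHz

def sine(n):
    return math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)

def xy_points(num_samples):
    # x = the original wave, y = the same wave delayed by DELAY_SAMPLES
    return [(sine(n), sine(n - DELAY_SAMPLES)) for n in range(num_samples)]
```

With a quarter-period delay (90° phase offset) the trace is close to a circle; zero delay collapses it to a diagonal line, which is one way to see whether the L/R phase relationship is being preserved.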
Very cool project, by the way! You don't have to use several process lists; you can easily have several 'Is Within Box' nodes within the one 'Build List'. All it takes is a bit of nesting of the logic.
Here you have three different sources for the 'box'. If the first 'Select Input' is true, cyan is passed along as the false color for the second box. If the second 'Select Input' is false, cyan passes on to the third box as its false color, and if the third 'Select Input' is also false, cyan is passed along as the final color. If all are false, blue is passed along; if any of the later ones in the chain are true, the output settles at the last 'true' color. This way, you only need to convert the one color list to Art-Net. I think the color list would be by far the simplest option to deal with, especially for LED strips.
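The cascade above can be sketched in plain Python (not Vuo; the box bounds, colors, and function names are all illustrative). Each stage outputs its own color when the point is inside its box, and otherwise passes through whatever the previous stage produced, so the last 'true' box wins and the default color survives only when every test is false.

```python
# Nested 'Is Within Box' / 'Select Input' cascade, sketched in Python.
# Box corners and colors are made-up example values.

def is_within_box(point, lo, hi):
    """Axis-aligned box test for a 3D point."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

# (lower corner, upper corner, color when the point is inside)
BOXES = [
    ((0, 0, 0), (1, 1, 1), "cyan"),
    ((2, 0, 0), (3, 1, 1), "magenta"),
    ((4, 0, 0), (5, 1, 1), "yellow"),
]
DEFAULT = "blue"  # passed along when every test is false

def color_for(point):
    color = DEFAULT
    for lo, hi, c in BOXES:
        # 'Select Input': the true side is this box's color, the false
        # side is whatever color was passed along so far.
        color = c if is_within_box(point, lo, hi) else color
    return color
```

Running `color_for` over each LED position then yields the single color list to hand off to the Art-Net conversion.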