As one of Vuo's developers, I work on Vuo's engine (the part that makes compositions run), develop nodes, and write documentation. You'll see me on the forums answering people's questions about Vuo.
I've been developing apps and frameworks for several years (since I was in college). Pre-Vuo projects include Kineme Quartz Composer plugins, iOS apps for education, an app that analyzes photographs of tomato slices, and software to help people with disabilities use talking keyboards.
I enjoy using Vuo to make live music visuals. My hope for Vuo is that it will grow into a community of people of diverse backgrounds and identities making lots of different artistic, useful, unique, goofy, beautiful, crafty, wonderful compositions.
@bLackburst, that's a good question, and unfortunately we don't know the answer yet. It depends on the library / third-party code we'd use to implement the nodes. Ideally we'd find a good library that takes depth images as input; then, with the Kinect v2 feature request, you'd automatically get Kinect v2 support in the skeletal tracking nodes as well. However, if the only adequate libraries available grab the Kinect depth images themselves rather than accepting them from the caller, Kinect v2 support would depend on whether those libraries support that device. We're still researching libraries and are open to suggestions.
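To make the distinction concrete, here's a hypothetical C++ sketch contrasting the two integration styles. None of these types or classes are Vuo's API or any real tracking library's; they're just illustrations of the design difference, with stub bodies standing in for the actual tracking.

```cpp
// Hypothetical sketch only — not Vuo's API or any real library's.
#include <cstdint>
#include <iostream>
#include <vector>

// A camera-agnostic depth frame that any device could produce.
struct DepthFrame {
    int width = 0, height = 0;
    std::vector<uint16_t> depthMillimeters;  // row-major depth samples
};

struct Joint { float x = 0, y = 0, z = 0; };
struct Skeleton { std::vector<Joint> joints; };

// Style 1: the tracker accepts caller-supplied frames. Kinect v2 support
// carries over automatically — Vuo's own Kinect v2 nodes would supply the
// frames, and the tracker never touches the hardware.
struct DepthInputTracker {
    Skeleton track(const DepthFrame &frame) {
        (void)frame;
        return Skeleton{};  // stub: a real library would do the tracking here
    }
};

// Style 2: the tracker opens the camera itself. Kinect v2 works only if
// the library's internal capture code supports that device.
struct DeviceOwningTracker {
    bool openDevice() { return false; }  // stub
    Skeleton trackNextFrame() { return Skeleton{}; }
};

int main() {
    // 512x424 is the Kinect v2 depth resolution.
    DepthFrame frame{512, 424, std::vector<uint16_t>(512 * 424, 0)};
    DepthInputTracker tracker;
    Skeleton s = tracker.track(frame);
    std::cout << "joints tracked: " << s.joints.size() << "\n";
}
```

In the first style, the capture and tracking concerns are decoupled, so any device whose depth frames Vuo can grab would work; in the second, device support is locked to whatever the library's capture code handles.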