Shared on 2019.05.03 17:57

In previous community spotlights, we've talked about folks using Vuo to create some extraordinary artwork and impressive visuals. But Vuo can also be useful behind the scenes. This spotlight is about how Martinus Magneson Larsen (Magneson) uses Vuo in his everyday "bread-and-butter" tasks in [job field/industry].

Magneson is a [job title] at [company]. You may recognize him as one of the Vuo community's most prolific answerers of questions and creators of nodes and tutorials. In his [years] in [industry], Magneson has worked behind the scenes to [do what] for artists including [some names].

Magneson told us about three projects in which he's used Vuo, along with some tips for learning Vuo.

Project: Programming simple user interfaces

For a couple of clients, we have provided a simple playback solution consisting of QLab running videos with different audio tracks (languages). As these systems are meant for users who aren't expected to know the details of AV equipment and playback solutions, Vuo was an easy way to achieve a simple interface. The interfaces can easily be branded and given backgrounds relevant to the company we provide the solution to. The user is presented with the languages and gets a one-click choice, free of clutter and confusing UI elements.

Technically, Vuo just spits out MIDI data triggered by an on-screen button, corresponding to whatever video (cue) the user wants. It's quick and easy to set up, and because the composition can be exported to an app, the systems can be automated further to account for restarts and power loss should they occur.
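
The composition itself is visual rather than text code, but the idea is simple enough to sketch in a few lines of Python. This is only an illustration, assuming the mido library and a hypothetical port name and note mapping that would have to match QLab's MIDI trigger settings:

```python
# Minimal sketch of the concept (not the actual Vuo composition):
# each on-screen language button sends one MIDI note, and QLab is
# configured to trigger the matching cue when it receives that note.
# Assumes the mido library (with a python-rtmidi backend) is installed.
import mido

LANGUAGE_NOTES = {"English": 60, "Norwegian": 61, "German": 62}  # hypothetical mapping

def play_language(language, port_name="QLab MIDI Input"):  # port name is an assumption
    """Send the MIDI note that QLab maps to the chosen language's cue."""
    note = LANGUAGE_NOTES[language]
    with mido.open_output(port_name) as port:
        port.send(mido.Message("note_on", note=note, velocity=127))
        port.send(mido.Message("note_off", note=note, velocity=0))

play_language("English")
```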

[Diagram of the system]

What did your clients use the apps for?

The apps I exported from Vuo have been used as simple menu interfaces for less technically inclined operators. A single button press takes care of most of the setup and sorting of options. In the case of QLab operation, the interface and options can seem a bit overwhelming and stressful when your main job is ticketing and customer service. A screen popping up where your only choice is which language to play is a lot more convenient and less scary. It's not super pretty, but it's functional and to the point.

I used QLab in these instances rather than a pure Vuo composition since it is a widely known and accepted program among most of the other techs I work with. Because of that, service is still possible if I'm unavailable for some reason. Troubleshooting, updating videos, audio adjustments, and other issues that might show up are then not dependent on me being around.

Project: Low-latency iMag VFX

We were faced with artists on stage who wanted a bit more control over their iMag (image magnification) appearance, meaning a mostly desaturated image, and that wasn't available as a low-latency option from the camera-production provider. Complicating things further, the scaling chain on our end added unacceptable latency if we inserted processing between the camera feed and the scaler.

Since we had a decent amount of time to try out a few things, we toyed with popping in Vuo instead of the effects processor just to see if that made a difference in latency. It did! In addition, removing the scaler from the chain and doing the scaling in Vuo as well eliminated a conversion point, making for even less latency.

This meant we went from about 200-300 ms of latency to less than 50 ms (roughly 3 frames), rock solid. That is good enough for the latency to be unnoticeable to the untrained eye at any distance where the screens make sense. I would never have thought it possible to reliably scale and apply color adjustments through a computer faster than dedicated hardware, so good job indeed! Also, compared to other solutions like MadMapper or Resolume it is a world of difference, which I'd guess is because Vuo lacks the overhead of all the other stuff always running in those applications.

The solution, then, was to bypass the scaler altogether, scale and desaturate the signal in Vuo, and feed it directly to the LED walls (with a fallback to the original solution, of course). A shorter chain meant less latency, and an on-screen result the artist was happy with.
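
The desaturation step itself is simple to describe in code. As a rough illustration only (the real processing ran as a Vuo composition on the GPU, not like this), here is what blending a frame toward its luminance looks like, assuming the frame is a float RGB numpy array:

```python
# Rough illustration of desaturation (not the actual Vuo node chain).
# Assumes an RGB frame as a float numpy array with values in [0, 1].
import numpy as np

def desaturate(frame, amount=0.8):
    """Blend each pixel toward its Rec. 709 luminance; amount=1.0 is fully grey."""
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])    # per-pixel luminance, shape (H, W)
    grey = np.repeat(luma[..., np.newaxis], 3, axis=-1)  # back to three channels
    return (1.0 - amount) * frame + amount * grey
```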

Project: Easy nominee presentation

For a conference with an award ceremony involving a lot of nominees, we needed an easy and flexible way to present them to the audience. The solution was to exploit Vuo's file handling along with some list coordination and manipulation, then feed the result to Resolume over a network connection.

To achieve this, we pre-scaled and cropped all images to a set size and labeled the files according to their position in the nominee list (01_name1, 02_name2, etc.). This allowed us to set up different lists of images and texts and just replace the source for the image and text layers at the press of a button. During the show, it was a two-press operation: selecting the category, then scrolling through the nominees in coordination with the presenter on stage.

Did you use your custom list nodes?

I only used a grid node for convenience. For the meat of the composition, I opted to exploit how files are sorted in the first place, so I could construct the composition in a short amount of time. A "List Files" node combined with a naming scheme of 01_filename, 02_filename, etc. made it simple to use a count node to combine a text list with the images from the folder. I also edited the images to have the same size to make it easier. This could perhaps have been improved by pulling the nominee names from the image filenames, and cropping and scaling the images inside Vuo. That would, however, mean over-complicating an easy task for no other reason than satisfying my own desire to have it neat, so those ideas were dropped fast.
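
Purely as an illustration of that naming-scheme trick (the folder name and file format here are hypothetical), the text-code equivalent is just a sorted directory listing driven by an index:

```python
# Sketch of the trick described above: filenames like "01_name1.png",
# "02_name2.png" sort lexicographically into presentation order, so an
# index that counts up steps through the nominees in the right order.
from pathlib import Path

NOMINEE_DIR = Path("nominees/best_visuals")        # hypothetical category folder

images = sorted(NOMINEE_DIR.glob("*.png"))         # 01_..., 02_..., 03_... sort correctly
names = [p.stem.split("_", 1)[1] for p in images]  # the "pull names from filenames" idea

index = 0  # advanced by a button press in the real setup
print(names[index], images[index])
```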

How did Vuo communicate with Resolume?

As it wasn’t a particularly time-sensitive task, and we had a canvas of somewhere around 3600 × 1080 px, the composition output was piped via Syphon to NDI, which Resolume picked up natively over the network (nudge-nudge, wink-wink). This allowed me to send only the actual canvas size, with transparency. Compared to sending a 4K signal over SDI and then cropping, masking, and placing the content, this saved resources on both the Mac constructing the composition and the PC receiving it and outputting to the projectors. I also think the timing savings of SDI over NDI wouldn’t have been very large in this scenario anyway.
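
For a back-of-the-envelope sense of the savings, assuming "4K" here means UHD (3840 × 2160), the 3600 × 1080 canvas is under half the pixels of a full 4K frame:

```python
# Rough pixel-count comparison, assuming "4K" means UHD (3840 x 2160).
canvas = 3600 * 1080   # 3,888,000 pixels actually needed
uhd_4k = 3840 * 2160   # 8,294,400 pixels in a full 4K frame
print(f"{canvas / uhd_4k:.0%} of a 4K frame")  # roughly 47%
```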

Editor's note: I think Magneson would like you to vote for feature request Add support for NewTek NDI ;)

Do you have any tips for learning Vuo?

I think copying concepts is a great way to learn. In that regard, Daniel Shiffman/The Coding Train is a huge resource for learning to code, or for discovering concepts that can be implemented. Although he operates mostly in Processing, the ideas presented there can usually be translated to a Vuo composition. Figuring out how is learning turned up to 11. The mere realisation that you can use Build List nodes as iterators and lists as arrays was a huge step for me in breaking down how Vuo could work for stuff like that.
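
For anyone coming from text-based coding, the analogy runs roughly like this (a hypothetical sketch, not Vuo syntax): a Build List node is an index-driven loop that collects its per-item results into a list, which you can then treat like an array:

```python
# Rough text-code analogue of the Build List idea: the node fires an index
# for each item, something is computed per index, and the results come out
# as a list that can be indexed into like an array.
item_count = 8

def build_item(index):
    # stand-in for whatever the per-item part of the composition would compute
    return index / (item_count - 1)

built_list = [build_item(i) for i in range(item_count)]  # "Build List"
print(built_list[3])                                     # array-style access
```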

In the same vein, and if you’re not afraid of text-based coding, diving into the API, SDK, and the node sources worked great for learning exactly how a node works and how it can be exploited.

I also love the cheesy bits of VJing, and usually have a greater problem with how a particular technique gets applied than with the technique itself. Going back to the “simple” effects that it’s a sport to hate in the VJ forums, I find it a lot more interesting to figure out whether they can be used better! I’m of course talking about kaleidoscopes, tiling, and feedback here, if that is unclear. Although, depending on context, simple applications of these can look great, there is also a universe of options that only costs a bit more prep than lazily applying them to whatever. Once such an effect is ready, it can still be lazily applied later, but look a lot better than the stock option. Even feeding back a masked version of a kaleidoscopic image to itself is infinitely more interesting to me than a pure kaleidoscope. Unless, of course, the content is made for the kaleidoscope ;). So playing around with already known, and perhaps boring, concepts to see if they can be made more interesting is also a great source of learning.

Editor's note: See Magneson's tutorial Non-stinking kaleidoscopes.

What are your plans for using Vuo in the future?

Looking at the pipeline, there is a lot to be excited about. I’m really looking forward to the ability to export to FFGL (Resolume). As Resolume is both our main VJ tool and a widely used tool in the industry, this will mean new possibilities for using Vuo and Resolume in tandem, and open Vuo up to a wider user base.

In other ways, Vuo continues to prove itself as a Swiss Army knife for concerts, events, and whatever else I might hit my head against on a daily basis. While I don’t have any concrete projects planned for the time being, there are a few ideas I’m working on that might come to fruition in the coming year.
