Magneson (@MartinusMagneson)

    Magneson commented on ooftv's Discussion, “Optimizing for video”

    The performance impact of effects is hard to guess, as it depends heavily on the shader code. Generally, convolution-based effects (blur, edge detection, etc.) are heavy because they do a lot of calculations per pixel based on the neighboring pixels. While a 3x3 px convolution matrix isn't too heavy (8 calculations for the neighboring pixels), a 9x9 px convolution matrix is far more resource-intensive (80 calculations for the neighboring pixels), since the cost grows with the square of the kernel width. Contrast this with blending or color changes, which are just a couple of simple math operations on the pixel itself.
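
    A quick back-of-envelope sketch of that cost (my own illustration, not Vuo or shader internals): a k×k convolution has to read k*k - 1 neighbors for every output pixel, so widening the kernel blows up the per-pixel work quadratically.

```python
# Toy cost model: neighbor reads per output pixel for a k x k kernel.
# This is only the per-pixel term; total cost is this times the pixel count.
def neighbor_reads(k):
    """Neighboring-pixel reads per output pixel for a k x k convolution."""
    return k * k - 1

for k in (3, 9):
    print(f"{k}x{k} kernel: {neighbor_reads(k)} neighbor reads per pixel")
# 3x3 -> 8 reads, 9x9 -> 80 reads: ten times the work per pixel.
```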

    If sync is of critical importance, there are a few options for keeping it so for an extended period of time. One approach is to use a media server that is designed for the purpose (timeline based). This can get pretty expensive fast, but check with your local rental company for solutions if you can go this route.

    A second approach is to play the videos on a dedicated external device that can handle two simultaneous videos/outputs, then use a capture card with at least two inputs to process the live video inside Vuo. Note that you should not use two separate/different capture cards for this, as they can have different latency on their inputs.

    A third option is to render the two videos together as one wide image. You can then crop the main video into two video streams in Vuo, ensuring it cannot go out of sync at the source. This may be preferred, as it is a relatively cheap way to ensure sync. It can of course be paired with the previous solution and one capture card (which has to support wide resolutions) to remove the overhead of video playback from the effects machine.
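
    The core of the "one wide image" trick can be sketched like this (a toy version with plain lists standing in for frames; in Vuo you'd use crop nodes instead): both halves come from the same decoded frame, so they can never drift apart in time.

```python
def split_wide_frame(frame):
    """Split a frame (a list of pixel rows) down the middle into two halves.

    Because both halves are cut from the same source frame, the two
    resulting streams are synchronized by construction.
    """
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# Toy 2x8 "frame": pixel values 0..7 per row.
frame = [list(range(8)), list(range(8))]
left, right = split_wide_frame(frame)
print(left)   # [[0, 1, 2, 3], [0, 1, 2, 3]]
print(right)  # [[4, 5, 6, 7], [4, 5, 6, 7]]
```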

    To make sure the effect chain doesn't induce a timing mismatch between the sources, the solution is to run all the effects all the time. This ensures a constant load, but it needs good enough hardware to run effectively. To do so, you set up a blend-image ladder where the blended output from one effect goes into the next, and the audience controls the blend value.
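
    One rung of that ladder looks roughly like this (my own sketch with stand-in effects, not actual Vuo nodes): every effect is computed on every frame regardless of the blend value, so the load stays flat and the timing stays identical whatever the audience dials in.

```python
# Stand-in "effects" operating on a single brightness value in 0..1.
def blur(px):
    return px * 0.5      # pretend-heavy effect A

def invert(px):
    return 1.0 - px      # effect B

def blend_ladder(px, t):
    """Run ALL effects unconditionally, then crossfade with t in [0, 1].

    The blend only chooses what is shown; the per-frame work is constant.
    """
    a = blur(px)     # always computed, even at t == 1
    b = invert(px)   # always computed, even at t == 0
    return (1.0 - t) * a + t * b

print(blend_ladder(0.5, 0.0))  # 0.25 (pure "blur")
print(blend_ladder(0.5, 1.0))  # 0.5  (pure "invert")
```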

    Magneson commented on Doggo's Discussion, “Control a light grid with the mouse”

    This should be a starting point. It will hard-cut between intensity changes, and as it stands it should go to black between the boxes. To change this, set the "Is Within Box" width and height values to 0.25. You can get the "Make 2d Point Grid" node here:
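
    The idea behind that patch can be sketched like this (my own toy reading of it, not the nodes' actual code): test the mouse position against a box around each grid point, and light only the box the cursor is inside. With a box width/height equal to the grid spacing, the cells tile edge to edge and the black gaps disappear.

```python
def is_within_box(point, center, width, height):
    """True when point lies inside the axis-aligned box around center."""
    return (abs(point[0] - center[0]) <= width / 2
            and abs(point[1] - center[1]) <= height / 2)

# A 4x4 point grid with 0.25 spacing, like a "Make 2d Point Grid" output.
# Boxes 0.25 wide/high tile the plane with no gap between cells.
grid = [(0.25 * i, 0.25 * j) for j in range(4) for i in range(4)]

mouse = (0.30, 0.40)
lit = [c for c in grid if is_within_box(mouse, c, 0.25, 0.25)]
print(lit)  # only the cell under the cursor lights up
```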

    Magneson commented on Jaymie's Discussion, “Plan for next releases”

    Alastair, I'm at whatever floats my goat as fast as possible. For some things it's Vuo; for other things it's whatever would be the appropriate tool, or usually a combination of several. Regarding the time outputs, they have changed. I don't remember which version that happened in, but to avoid cable clutter and increase clarity you now use a fire node (or several) and get time from that instead. The "Updated Window" port only reflects changes to the window (resizing etc.). For timing you can now use, for instance, a "Fire on Display Refresh" node. This way the graph flows more intuitively from left to right. To ease you back into Vuo, maybe my compositing tutorial found here: could help? It should at least list a few pitfalls that are good to know about.

    When I got into QC there were quite a few addons from the community that were widely used, but perhaps more specific in nature than is the case for Vuo's nodes. Some also had more interesting and somewhat self-explanatory names, like the Rutt Etra node from Vade. That one actually has an equivalent in Vuo named "Displace 3D Object with Image". Search for "Rutt Etra" on Google, and it is pretty telling what the idea is. Search for "Displace 3D Object with Image" and you'll get GitHub or some smelly 3D-software forum. There are also more steps in the Vuo approach to get to a Rutt Etra-like composition. With the Rutt Etra node in QC, you could pop in an image and you were ready to go. With the "Displace..." node you'll have to feed it an image and a 3D object made up of lines. To feed it that object, you'll have to make it first and shade it the way you like. The Vuo way is infinitely more modifiable in itself, but also a lot more fiddly to get going quickly if you're not used to the tasks. I think the idea is that someone should make a Rutt Etra subcomp, share it, and then it would be easier for others to look inside and tweak. But nobody does.
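
    Conceptually, the displacement step is simple; here is my own loose toy sketch of the idea (not the actual node's code): take horizontal scanlines and push each vertex up by the image brightness underneath it, which is what produces the classic Rutt Etra look.

```python
def displace_scanlines(image, amount=1.0):
    """Toy Rutt Etra-style displacement.

    image:  rows of brightness values in 0..1 (one row per scanline).
    Returns one polyline per row, each vertex lifted by brightness * amount.
    """
    lines = []
    for y, row in enumerate(image):
        lines.append([(x, y + b * amount) for x, b in enumerate(row)])
    return lines

# A bright spot in the first row bumps that vertex upward.
img = [[0.0, 1.0, 0.0],
       [0.5, 0.5, 0.5]]
for line in displace_scanlines(img):
    print(line)
```

In Vuo terms, the precomputed line object corresponds to the 3D object of lines you feed "Displace 3D Object with Image", and `amount` is the displacement strength.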

    I'm also not sure how many have actually looked around in the node gallery and tried out some of the custom nodes. I have at least 3 votes on my list tools, for instance, but I have no idea how many times they have been downloaded or used. I make these nodes for myself, so in isolation I don't really care, apart from them perhaps being a starting point for more people to create their own custom node sets (which you should; my source is in the sidebar if that helps, and the API is neat!). When good ones get shared, that in turn benefits me. I think there are several nodes, both from myself and others, that are convenient and expand the possibilities of Vuo, and that should be checked out by more people if the votes are anything to go by.

    For neatening, I think sub-comps (as macros) should be the way to go for the most part. That said, I find sub-compositions somewhat sluggish to work with in their current iteration (could be due to my old Mac; I've got a new one waiting for me at the office). I was working on a huge tutorial about UIs in Vuo that includes how to pack things into subcomps for clarity and sanity in a build. However, I hit a wall with a bug somewhere, it fell onto the back burner, and then time happened. Conceptually, sub-compositions offer a great way to both stow away and ease complicated and repetitive build tasks, but a more cohesive and quick workflow surrounding them would be nice.