When rendering images/textures at small sizes, a well-known problem in computer graphics is that they may not look right. While our eyes and brain perceive colors blending together as objects recede into the distance, an image run through a typical scaling algorithm can end up looking grainy or shimmery, because only a few of the source pixels contribute to each output pixel.
There are various techniques (including mipmapping) to address this problem, with various tradeoffs in CPU, GPU, and memory usage. For this feature request, we would select a technique whose tradeoffs work well with Vuo's realtime image processing.
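To illustrate the graininess (this is a minimal sketch of the general idea, not Vuo's implementation): naive downscaling samples only a few source pixels, while averaging each neighborhood — the operation behind each mipmap level — lets every source pixel contribute, which matches how our eyes blend distant detail. Here a 1D "image" of a fine black/white checker pattern is shrunk by a factor of 4:

```python
# A 16-pixel 1D "image": a fine alternating black/white checker pattern.
src = [0, 255] * 8

# Nearest-neighbor: keep every 4th pixel. The result depends entirely on
# which pixels the sample grid happens to land on.
nearest = [src[i * 4] for i in range(len(src) // 4)]

# Box filter (the averaging step used to build mipmap levels): average each
# block of 4 pixels, so every source pixel contributes to the output.
box = [sum(src[i * 4:(i + 1) * 4]) // 4 for i in range(len(src) // 4)]

print(nearest)  # [0, 0, 0, 0] -- the white pixels vanished entirely
print(box)      # [127, 127, 127, 127] -- mid-gray, as the eye would perceive it
```

The nearest-neighbor result is pure black even though half the source pixels are white; on a moving or animated image, that sampling grid shifts each frame, which is exactly the grainy shimmer described above.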
Is there an accepted way to apply a mask from one layer to another, like the Apply Mask node does for images? It looks like I currently have to convert the layer to an image, apply the mask, and then convert back to a layer to stay within my workflow.
@Bodysoulspirit provided a clear description of the issue here, along with a composition demonstrating it. Basically, with events continually going into the Playback Rate port, the composition gets stuck.