Fog

This emulates the Quartz Composer fog effect, where 3D objects fade to a set color the farther away they are from the camera. Thanks to @George_Toledo for suggesting the solution of using the depth image from Render Scene to Image.

Edit: @Scratchpole raised an important point – be mindful of event flow into the Fog subcomp. The cubes comp is animated, so the Fog subcomp receives events at the display refresh rate, which makes it seem like the effect should work everywhere all the time. That may not be true without some attention to event flow: for example, if the Objects port only receives one event at the start, the output may not update when you change a port setting.
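For reference, the core operation in shader terms is just a depth-weighted mix toward the fog color. Here’s a minimal ISF-style sketch – illustrative only, not the subcomp’s actual internals – assuming a grayscale depth image where brighter means farther from the camera:

```glsl
/*{
  "DESCRIPTION": "Depth-based fog (illustrative sketch)",
  "INPUTS": [
    { "NAME": "inputImage", "TYPE": "image" },
    { "NAME": "depthImage", "TYPE": "image" },
    { "NAME": "fogColor",   "TYPE": "color", "DEFAULT": [0.7, 0.7, 0.75, 1.0] },
    { "NAME": "density",    "TYPE": "float", "DEFAULT": 1.0, "MIN": 0.0, "MAX": 2.0 }
  ]
}*/

void main()
{
    vec4 scene = IMG_NORM_PIXEL(inputImage, isf_FragNormCoord);
    // Assumed convention: brighter depth pixels = farther from the camera.
    float depth = IMG_NORM_PIXEL(depthImage, isf_FragNormCoord).r;
    // Fade toward the fog color as depth increases.
    gl_FragColor = mix(scene, fogColor, clamp(depth * density, 0.0, 1.0));
}
```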

FogCubes.zip (6.33 KB)

Screen Shot 2021-10-20 at 11.51.28 PM.jpg

Screen Shot 2021-10-20 at 11.46.38 PM.jpg

3 Likes

That’s doing some slightly worrying glitchy things when I run it and change parameters.

No issue here (M1 Mac mini). There are a lot of cubes – maybe reduce the number?  

Can’t think what the glitching could be – the comp is running ~60fps at 1024x768 here. In this version the Fog precomp uses 6 image blends, grayscale color controls, and a Mask Image by Brightness. Perhaps try some reconfiguring for optimization? It’s all about the depth image…

A couple of unrelated things:

I noticed some haloing with multisampling set to 4x or 8x. Keeping it at 2x or none reduces this.

A couple of questions regarding the output from Render Scene to Image:

I have color depth set to 8bpc, but it reports 16bpc. Why?

I have multisampling set to 2x, but it says @1x. Why?

Screen Shot 2021-10-21 at 12.55.11 PM.png

Then one more question – can I edit the precomp module in a text editor, without breaking anything, to reduce the number of blend modes in the pulldown of Blend Images? That is, only a handful of blend modes do well for this, and I’d like to clean up the module.  

Is anti-aliasing possible?

I found that switching Render Scene to Image to 16bpc stopped the glitching.
Your composition was running very slowly for me all the same (nothing to do with the number of cubes), so I reconstructed a simpler version – see attached. I was getting glitching with this simpler version too, until I switched Render Scene to Image to 16bpc. Maybe there’s a bug with using the depth image output?

Edit: I spoke too soon – I went back to look at my simpler version again after this and it started flickering. :(  

FogCubes JC.vuo (7.68 KB)

Too bad about the flickering – I wish I knew how to advise you…

I also cleaned up my version, reducing it to the core concept, and replaced the original upload.

@Kewl, re antialiasing: I haven’t tested beyond this cube comp, but based on the haloing at the higher multisampling settings, I’m wondering if the depth image gets generated with different multisampling, or without it altogether. The output is aliased right now with no multisampling (btw, the pic of the fog output above is low res). Going from none to 2x helps the foreground, but some jitter and haloing start to creep into the middle ground. It might also help to add a little post-blur, since this is all image/pixel processing.

I’m thinking now about things to try using the depth image to isolate the foreground/middle ground/background of the scene. I ripped out the Mask Image by Brightness from the Fog subcomp, but that node can mask and pass alpha within a range. Possibilities there – depth of field, etc…  
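In shader terms, the band isolation could look something like this – a hypothetical ISF sketch, with made-up bandCenter/bandWidth/feather parameters:

```glsl
/*{
  "DESCRIPTION": "Depth-band mask (illustrative sketch)",
  "INPUTS": [
    { "NAME": "inputImage", "TYPE": "image" },
    { "NAME": "depthImage", "TYPE": "image" },
    { "NAME": "bandCenter", "TYPE": "float", "DEFAULT": 0.5,  "MIN": 0.0,   "MAX": 1.0 },
    { "NAME": "bandWidth",  "TYPE": "float", "DEFAULT": 0.2,  "MIN": 0.0,   "MAX": 1.0 },
    { "NAME": "feather",    "TYPE": "float", "DEFAULT": 0.05, "MIN": 0.001, "MAX": 0.5 }
  ]
}*/

void main()
{
    vec4 scene = IMG_NORM_PIXEL(inputImage, isf_FragNormCoord);
    float depth = IMG_NORM_PIXEL(depthImage, isf_FragNormCoord).r;
    float lo = bandCenter - bandWidth * 0.5;
    float hi = bandCenter + bandWidth * 0.5;
    // 1 inside the depth band, feathering to 0 outside it.
    float mask = smoothstep(lo - feather, lo, depth)
               * (1.0 - smoothstep(hi, hi + feather, depth));
    // Pass the masked region through; everything else goes transparent.
    gl_FragColor = vec4(scene.rgb, scene.a * mask);
}
```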

Just tested with your new version – it works a little bit smoother.
I think I introduced the flicker to yours by adding a connection from Fire on Display Refresh into the Fog patch, so that I could see the changes I was making without having to fire the node manually. It’s the same with my version too.

1 Like

Ah, event flow strikes again… In my comp, display refresh driving the animation for the perspective camera was my anchor for forcing things to evaluate in real time (with the Objects port on the Fog subcomp then driving event flow inside).  

@Kewl – re antialiasing, there’s also mipmapping using Improve Downscaling Quality. (Check the notes for Render Scene to Image.) It’s supposed to help reduce aliasing on farther-away objects (though it’s more expensive).

More testing – multisampling here is a trade-off: either no haloing but more aliasing, or haloing but less aliasing. I think Improve Downscaling Quality helps a little? Not sure what else to do at the moment. (I also tried a 1px and 2px blur on the depth image – meh. Maybe I’ll come back to that.)  

OK, doubling the Fog dimensions does some good as well. (You can use Resize Image afterward to bring it back to the original dimensions, i.e., upsample → process → downsample.)  

Just to be sure we’re talking about the same thing: I’m not talking about banding (that could possibly be reduced with some dithering), but rather the “steps” that are visible on the cubes’ edges at certain angles (usually close to 0 and 90 degrees), in the foreground.  

That’s right – stairstepping along edges. And the haloing is the light-ish flickering along said edges, caused by the image and the depth image not quite matching up when composited. The haloing here makes the aliasing stand out; solving the aliasing diminishes the haloing.

The best settings so far for me are multisampling off or at 2x for the Render Scene to Image in Fog, then the “uprez, process, downrez” approach (2x pixel resolution for Fog, then back to 1x post-Fog using Resize Image).

(Regarding banding: add some noise. Vuo has a nice approach for that with its gradients.)  

1 Like

I haven’t looked yet, but there’s probably no multisampling on the depth image. There tends not to be – I think it might skew the depth readings in some cases. The lack of MSAA would, of course, cause a discrepancy along geometry edges.

FXAA looks pretty good as an alternative – maybe better in some cases, because it also handles some texture moiré. I notice there isn’t a version in the node library, but it shouldn’t be very hard to get going.  

2 Likes

Makes sense from the haloing that there is no multisampling on the depth image.  

I have color depth set to 8bpc, but it reports 16bpc. Why?

Yeah, I see how that would be unclear; I’ve made a note to improve the node documentation. What’s happening is that Render Scene to Image outputs an 8bpc image as requested for the color image, and a 16bpc image for the depth image (since that extra precision is typically necessary when working with depth buffers). When you then combine the images in Blend Images, it takes the higher of the two bit depths so as not to lose precision.

I have multisampling set to 2x, but it says @1x. Why?

The @1x in the port popover refers to the image scaling factor (e.g. @2x for Retina devices).

can I edit the precomp module in a text editor, without breaking anything, to reduce the number of blend modes in the pulldown of Blend Images?

Well, sort of. This is about the best you can do with current functionality:

In summary:

  • Put the blend modes you want in the Get Item from List input list.
  • Right-click on the BlendMode published input port and select Edit Details…. Set the Suggested Max to the number of blend modes in the list.
  • The subcomposition node then gets a slider for its Blend Mode input port (an integer).

SelectBlendModes.zip (4.33 KB)

1 Like

Thanks so much for these clarifications and the example, @jstrecker. All makes sense.

I was wondering one more thing about blend modes – this seems like a good time to ask, given the specifics of the depth image and how much the fog setup relies on blending. Would there be any reason not to use a GLSL shader image filter to set up the blend modes for this – any performance issues to consider, or whatnot? (For one, it would allow having a limited set with names in the port menu. Maybe a couple of custom blends, too. Vuo’s image blends are pixel shaders after all, no?)  
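To make it concrete, here’s roughly what I have in mind – a hypothetical ISF sketch with a limited, named mode set (the modes and names are just examples, not Vuo’s actual Blend Images source):

```glsl
/*{
  "DESCRIPTION": "Two-input blend with a small named mode set (illustrative sketch)",
  "INPUTS": [
    { "NAME": "background", "TYPE": "image" },
    { "NAME": "foreground", "TYPE": "image" },
    { "NAME": "mode", "TYPE": "long",
      "VALUES": [0, 1, 2],
      "LABELS": ["Normal", "Multiply", "Screen"],
      "DEFAULT": 0 },
    { "NAME": "opacity", "TYPE": "float", "DEFAULT": 1.0, "MIN": 0.0, "MAX": 1.0 }
  ]
}*/

void main()
{
    vec4 bg = IMG_NORM_PIXEL(background, isf_FragNormCoord);
    vec4 fg = IMG_NORM_PIXEL(foreground, isf_FragNormCoord);
    vec3 blended = fg.rgb;                                    // Normal
    if (mode == 1)
        blended = fg.rgb * bg.rgb;                            // Multiply
    else if (mode == 2)
        blended = 1.0 - (1.0 - fg.rgb) * (1.0 - bg.rgb);      // Screen
    // Composite the blended foreground over the background by fg alpha * opacity.
    gl_FragColor = vec4(mix(bg.rgb, blended, fg.a * opacity), max(bg.a, fg.a));
}
```

(The “long” input with VALUES/LABELS is what would give a named pulldown instead of a bare integer.)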

Another shader question, related to the depth-image haloing from the lack of multisampling. Regarding @George_Toledo’s suggestion of FXAA antialiasing, what do I need to change in this to translate it to the ISF format for a shader image filter?

```glsl
// FXAA
[Vertex_Shader]
varying vec4 posPos;
uniform float FXAA_SUBPIX_SHIFT = 1.0/4.0;
uniform float rt_w; // GeeXLab built-in
uniform float rt_h; // GeeXLab built-in

void main(void)
{
  gl_Position = ftransform();
  gl_TexCoord[0] = gl_MultiTexCoord0;
  vec2 rcpFrame = vec2(1.0/rt_w, 1.0/rt_h);
  posPos.xy = gl_MultiTexCoord0.xy;
  posPos.zw = gl_MultiTexCoord0.xy - 
                  (rcpFrame * (0.5 + FXAA_SUBPIX_SHIFT));
}

[Pixel_Shader]
#version 120
uniform sampler2D tex0; // 0
uniform float vx_offset;
uniform float rt_w; // GeeXLab built-in
uniform float rt_h; // GeeXLab built-in
uniform float FXAA_SPAN_MAX = 8.0;
uniform float FXAA_REDUCE_MUL = 1.0/8.0;
varying vec4 posPos;

#define FxaaInt2 ivec2
#define FxaaFloat2 vec2
#define FxaaTexLod0(t, p) texture2DLod(t, p, 0.0)
#define FxaaTexOff(t, p, o, r) texture2DLodOffset(t, p, 0.0, o)

vec3 FxaaPixelShader( 
  vec4 posPos, // Output of FxaaVertexShader interpolated across screen.
  sampler2D tex, // Input texture.
  vec2 rcpFrame) // Constant {1.0/frameWidth, 1.0/frameHeight}.
{   
/*---------------------------------------------------------*/
    #define FXAA_REDUCE_MIN   (1.0/128.0)
    //#define FXAA_REDUCE_MUL   (1.0/8.0)
    //#define FXAA_SPAN_MAX     8.0
/*---------------------------------------------------------*/
    vec3 rgbNW = FxaaTexLod0(tex, posPos.zw).xyz;
    vec3 rgbNE = FxaaTexOff(tex, posPos.zw, FxaaInt2(1,0), rcpFrame.xy).xyz;
    vec3 rgbSW = FxaaTexOff(tex, posPos.zw, FxaaInt2(0,1), rcpFrame.xy).xyz;
    vec3 rgbSE = FxaaTexOff(tex, posPos.zw, FxaaInt2(1,1), rcpFrame.xy).xyz;
    vec3 rgbM  = FxaaTexLod0(tex, posPos.xy).xyz;
/*---------------------------------------------------------*/
    vec3 luma = vec3(0.299, 0.587, 0.114);
    float lumaNW = dot(rgbNW, luma);
    float lumaNE = dot(rgbNE, luma);
    float lumaSW = dot(rgbSW, luma);
    float lumaSE = dot(rgbSE, luma);
    float lumaM  = dot(rgbM,  luma);
/*---------------------------------------------------------*/
    float lumaMin = min(lumaM, min(min(lumaNW, lumaNE), min(lumaSW, lumaSE)));
    float lumaMax = max(lumaM, max(max(lumaNW, lumaNE), max(lumaSW, lumaSE)));
/*---------------------------------------------------------*/
    vec2 dir; 
    dir.x = -((lumaNW + lumaNE) - (lumaSW + lumaSE));
    dir.y =  ((lumaNW + lumaSW) - (lumaNE + lumaSE));
/*---------------------------------------------------------*/
    float dirReduce = max(
        (lumaNW + lumaNE + lumaSW + lumaSE) * (0.25 * FXAA_REDUCE_MUL),
        FXAA_REDUCE_MIN);
    float rcpDirMin = 1.0/(min(abs(dir.x), abs(dir.y)) + dirReduce);
    dir = min(FxaaFloat2( FXAA_SPAN_MAX,  FXAA_SPAN_MAX), 
          max(FxaaFloat2(-FXAA_SPAN_MAX, -FXAA_SPAN_MAX), 
          dir * rcpDirMin)) * rcpFrame.xy;
/*--------------------------------------------------------*/
    vec3 rgbA = (1.0/2.0) * (
        FxaaTexLod0(tex, posPos.xy + dir * (1.0/3.0 - 0.5)).xyz +
        FxaaTexLod0(tex, posPos.xy + dir * (2.0/3.0 - 0.5)).xyz);
    vec3 rgbB = rgbA * (1.0/2.0) + (1.0/4.0) * (
        FxaaTexLod0(tex, posPos.xy + dir * (0.0/3.0 - 0.5)).xyz +
        FxaaTexLod0(tex, posPos.xy + dir * (3.0/3.0 - 0.5)).xyz);
    float lumaB = dot(rgbB, luma);
    if((lumaB < lumaMin) || (lumaB > lumaMax)) return rgbA;
    return rgbB; }

vec4 PostFX(sampler2D tex, vec2 uv, float time)
{
  vec4 c = vec4(0.0);
  vec2 rcpFrame = vec2(1.0/rt_w, 1.0/rt_h);
  c.rgb = FxaaPixelShader(posPos, tex, rcpFrame);
  //c.rgb = 1.0 - texture2D(tex, posPos.xy).rgb;
  c.a = 1.0;
  return c;
}
    
void main() 
{ 
  vec2 uv = gl_TexCoord[0].st;
  gl_FragColor = PostFX(tex0, uv, 0.0);
}
```
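My rough guess at the translation, in case it helps frame the question – an untested sketch assuming ISF’s built-in RENDERSIZE, isf_FragNormCoord, and IMG_NORM_PIXEL() (and linear filtering on the input image), with the vertex shader’s work folded into main() and the texture2DLodOffset() calls replaced by explicit coordinate offsets:

```glsl
/*{
  "DESCRIPTION": "FXAA image filter (sketch of an ISF translation)",
  "INPUTS": [ { "NAME": "inputImage", "TYPE": "image" } ]
}*/

// Tunables from the original shader, folded in as constants.
const float FXAA_SUBPIX_SHIFT = 1.0/4.0;
const float FXAA_SPAN_MAX     = 8.0;
const float FXAA_REDUCE_MUL   = 1.0/8.0;
const float FXAA_REDUCE_MIN   = 1.0/128.0;

void main()
{
    vec2 rcpFrame = 1.0 / RENDERSIZE;    // replaces rt_w/rt_h
    vec2 uv = isf_FragNormCoord;         // replaces gl_TexCoord[0].st
    // The posPos computation from the GeeXLab vertex shader, done here instead.
    vec2 posShift = uv - rcpFrame * (0.5 + FXAA_SUBPIX_SHIFT);

    // texture2DLodOffset() isn't available, so the offsets are applied
    // to the normalized coordinate directly.
    vec3 rgbNW = IMG_NORM_PIXEL(inputImage, posShift).rgb;
    vec3 rgbNE = IMG_NORM_PIXEL(inputImage, posShift + vec2(1.0, 0.0) * rcpFrame).rgb;
    vec3 rgbSW = IMG_NORM_PIXEL(inputImage, posShift + vec2(0.0, 1.0) * rcpFrame).rgb;
    vec3 rgbSE = IMG_NORM_PIXEL(inputImage, posShift + vec2(1.0, 1.0) * rcpFrame).rgb;
    vec3 rgbM  = IMG_NORM_PIXEL(inputImage, uv).rgb;

    vec3 luma = vec3(0.299, 0.587, 0.114);
    float lumaNW = dot(rgbNW, luma);
    float lumaNE = dot(rgbNE, luma);
    float lumaSW = dot(rgbSW, luma);
    float lumaSE = dot(rgbSE, luma);
    float lumaM  = dot(rgbM,  luma);

    float lumaMin = min(lumaM, min(min(lumaNW, lumaNE), min(lumaSW, lumaSE)));
    float lumaMax = max(lumaM, max(max(lumaNW, lumaNE), max(lumaSW, lumaSE)));

    vec2 dir;
    dir.x = -((lumaNW + lumaNE) - (lumaSW + lumaSE));
    dir.y =  ((lumaNW + lumaSW) - (lumaNE + lumaSE));

    float dirReduce = max((lumaNW + lumaNE + lumaSW + lumaSE) * (0.25 * FXAA_REDUCE_MUL),
                          FXAA_REDUCE_MIN);
    float rcpDirMin = 1.0 / (min(abs(dir.x), abs(dir.y)) + dirReduce);
    dir = min(vec2(FXAA_SPAN_MAX), max(vec2(-FXAA_SPAN_MAX), dir * rcpDirMin)) * rcpFrame;

    vec3 rgbA = 0.5 * (IMG_NORM_PIXEL(inputImage, uv + dir * (1.0/3.0 - 0.5)).rgb +
                       IMG_NORM_PIXEL(inputImage, uv + dir * (2.0/3.0 - 0.5)).rgb);
    vec3 rgbB = rgbA * 0.5 + 0.25 * (IMG_NORM_PIXEL(inputImage, uv + dir * (0.0/3.0 - 0.5)).rgb +
                                     IMG_NORM_PIXEL(inputImage, uv + dir * (3.0/3.0 - 0.5)).rgb);
    float lumaB = dot(rgbB, luma);
    // Fall back to the narrower-tap result if the wide taps went out of range.
    gl_FragColor = vec4(((lumaB < lumaMin) || (lumaB > lumaMax)) ? rgbA : rgbB, 1.0);
}
```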

I’ve made a note to improve the node documentation.

Clarified documentation on color depth in Vuo 2.4.0.

1 Like

Would there be any reason not to use a GLSL shader image filter to set up the blend modes for this

Not that I know of. Vuo’s Blend Images node is implemented with a GLSL fragment shader along with a trivial vertex shader and mesh.

I’m wondering if the depth image gets generated with different multisampling, or without it altogether

Correct, the depth image is generated without multisampling. We clarified that in the node documentation in Vuo 2.4.0.

Here’s a reference on anti-aliasing techniques in case it’s helpful. @George_Toledo’s suggestion of FXAA could most likely help. Your approach of “upsample → process → downsample” is another one of the techniques in that reference (SSAA).

1 Like