Don't really know how these should be implemented. As options on some 3D camera objects? How GPU-heavy would this be for non-offline (real-time) compositions?
When we (Team Vuo) plan each release, we try to implement as many of the community's top-voted feature requests as we have time for.
If anyone would like to help this happen sooner, we're also accepting commissioned work.
Read more about how Vuo feature requests work.
Comments
Opened for voting.
This can be achieved in GLSL using the available depth map.
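For anyone wanting to experiment, here's a minimal GLSL fragment-shader sketch of depth-based blur. The uniform names (`colorTexture`, `depthTexture`, `focusDepth`, `focusRange`, `texelSize`) are illustrative assumptions, not Vuo's actual shader interface:

```glsl
// Depth-of-field sketch: blur radius grows with distance from the focal plane.
// Uniform/varying names are hypothetical, not Vuo's actual API.
uniform sampler2D colorTexture;  // rendered scene color
uniform sampler2D depthTexture;  // scene depth map (0..1)
uniform float focusDepth;        // depth value that should stay sharp
uniform float focusRange;        // how quickly blur ramps up away from focus
uniform vec2 texelSize;          // 1.0 / image resolution
varying vec2 fragTexCoord;

void main()
{
    float depth = texture2D(depthTexture, fragTexCoord).r;

    // 0 at the focal plane, 1 when fully out of focus.
    float blurAmount = clamp(abs(depth - focusDepth) / focusRange, 0.0, 1.0);
    float radius = blurAmount * 4.0;  // max blur radius in texels

    // Simple 5x5 box blur; a real implementation would use a disc-shaped
    // ("bokeh") kernel for the lens-style look, rather than a box/Gaussian.
    vec4 sum = vec4(0.0);
    for (int x = -2; x <= 2; ++x)
        for (int y = -2; y <= 2; ++y)
            sum += texture2D(colorTexture,
                fragTexCoord + vec2(float(x), float(y)) * texelSize * radius * 0.5);

    gl_FragColor = sum / 25.0;
}
```

A production version would also need to handle depth-buffer nonlinearity and avoid blurring sharp foreground pixels into the in-focus region, but this shows the basic idea of driving blur from the depth map.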
Maybe this would be a good way to implement it, to keep it as modular as possible (as opposed to adding a setting to a camera, for example).
I saw an example where a developer made lens-style depth blurring, as opposed to smooth Gaussian blurring. Very nice!
@alexmitchellmus, we're considering various ways of implementing depth of field:
(a) A node that inputs the Depth Image output from Render Scene to Image. As you said, this has the advantage of being modular. A possible disadvantage is that you couldn't use it with Render Scene to Window; you'd have to refactor your composition to use Render Scene to Image.

(b) A setting on camera nodes, with the option to disable it.

(c) A node that, like the camera and lighting nodes, you connect to Render Scene to Image/Window to enable depth of field.