I don’t really know how these should be implemented. As options on some 3D camera objects? How GPU-heavy would this be for non-offline (real-time) compositions?
Opened for voting.
This can be achieved in GLSL using the available depth map.

Maybe implementing it this way would be a good approach, since it keeps the feature as modular as possible (as opposed to adding a setting to a camera, for example).

I saw an example where a developer made lens-style depth blurring, as opposed to smooth Gaussian blurring. Very nice!
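As a rough illustration of the GLSL approach mentioned above (not anyone’s actual implementation), a fragment shader could vary the blur radius per pixel based on how far that pixel’s depth is from a chosen focal plane. The uniform names, focus parameters, and the simple box kernel here are all assumptions for the sketch:

```glsl
// Hypothetical fragment shader: depth-driven variable blur.
// All uniform names and parameters are illustrative, not from any real node.
uniform sampler2D colorImage;   // the rendered scene
uniform sampler2D depthImage;   // the scene's depth map
uniform float focusDepth;       // depth value that should stay sharp (0..1)
uniform float blurScale;        // how quickly blur grows away from focus
varying vec2 uv;

void main()
{
    float depth = texture2D(depthImage, uv).r;

    // Blur radius grows with distance from the focal plane.
    float radius = abs(depth - focusDepth) * blurScale;

    // Simple 5x5 box gather; a lens-style blur would instead use a
    // circular (bokeh-shaped) kernel, as in the example mentioned above.
    vec4 sum = vec4(0.0);
    float count = 0.0;
    for (int x = -2; x <= 2; ++x)
        for (int y = -2; y <= 2; ++y)
        {
            vec2 offset = vec2(float(x), float(y)) * radius / 512.0;
            sum += texture2D(colorImage, uv + offset);
            count += 1.0;
        }
    gl_FragColor = sum / count;
}
```

A real implementation would also need to handle edge cases this sketch ignores, such as sharp foreground objects bleeding into blurred backgrounds.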
@alexmitchellmus, we’re considering various ways of implementing depth of field:

(a) A node that inputs the Depth Image output from Render Scene to Image. As you said, this has the advantage of being modular. A possible disadvantage is that you couldn’t use it with Render Scene to Window; you’d have to refactor your composition to use Render Scene to Image.

(b) A setting on camera nodes, with the option to disable it.

(c) A node that, like the camera and lighting nodes, you connect to Render Scene to Image/Window to enable depth of field.