[Original title: Intelligent motion blur correction in-node]
Currently, many nodes are affected by motion blur settings when rendering out. One such node is the Blend Image with Feedback node. When motion blur is activated, the feedback effect is weakened, because many more frames are rendered than when previewing in realtime. This makes it hit-or-miss to render out the final video in high quality, since you need to increase the feedback amount to recreate the same look.
Would it be possible to have some nodes (not all) respond to motion blur settings, and alter their values (or most probably scale their values) based on motion blur amount?
This would make it easier to render out video that is consistent from realtime preview to final render.
It may also be useful to allow the user to turn off this feature, if, for example, it was working incorrectly for a composition.
It would be possible for Blend Image with Feedback to behave differently (increase opacity, reduce transform amount) to compensate for the additional frames rendered with motion blur.
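To make that compensation concrete, here's a minimal sketch (plain Python, not Vuo code). It assumes the feedback opacity acts as a per-frame retention factor, so a trail decays by opacity to the power of the number of frames; the function name and that decay model are illustrative assumptions, not part of the proposal.

```python
# Sketch: scale feedback opacity for the extra frames motion blur renders.
# Assumption: "opacity" is a per-frame retention factor, so after N frames
# a feedback trail has decayed by opacity ** N.

def corrected_opacity(realtime_opacity: float, subframes_per_frame: int) -> float:
    """Return the opacity that, applied once per motion-blur subframe,
    reproduces the decay of a single realtime frame."""
    # We want corrected ** N == realtime_opacity, so corrected = realtime_opacity ** (1/N).
    return realtime_opacity ** (1.0 / subframes_per_frame)

# A 0.9 retention at 1x must rise toward 1.0 when 10 subframes are rendered:
print(round(corrected_opacity(0.9, 10), 4))  # → 0.9895
```

The key point is that the correction is an exponent, not a linear scale: simply multiplying the opacity by the subframe count would overshoot past 1.0.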
To be clear about the limitations — this could correct for motion blur, but not for other movie export settings that render extra frames or out-of-order frames. When the shutter angle is set to less than 360º, a cluster of frames is rendered around each output frame time and the results are averaged to get the output frame. This would put gaps in a feedback trail. With the FR Distributed offline renderer, each computer would render a series of frames. It would take more than the compensation proposed above to avoid gaps in the feedback. (On each computer you'd have to render N frames leading up to the first frame.)
As a workaround for Blend Image with Feedback, you can currently use the offlineRender published input port to detect if a movie is being exported, and compensate within the composition.
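For illustration, the workaround amounts to branching on the offlineRender flag inside the composition. The sketch below is plain Python, not Vuo code, and the two feedback values are hypothetical hand-tuned settings, not values from the source:

```python
# Illustration of the workaround (not Vuo code): read the published
# offlineRender flag and switch to a hand-tuned feedback value when exporting.

def feedback_value(offline_render: bool) -> float:
    realtime_feedback = 0.90  # hypothetical value tuned while previewing
    export_feedback = 0.99    # hypothetical value tuned so the export matches
    return export_feedback if offline_render else realtime_feedback

print(feedback_value(False), feedback_value(True))  # → 0.9 0.99
```

In a Vuo composition this branch would be built from nodes (e.g. a selection on the offlineRender input), but the logic is the same: one value for preview, another for export.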
Besides Blend Image with Feedback, what other nodes do you have in mind?
Hey @jaymie, maybe it would be good to have the motion blur value also published, like offlineRender?
Then maybe a few nodes could be added to calculate correct settings for other nodes? For example, an offlineFeedbackCalculate node: you set the feedback value, and it dynamically adjusts it based on the frame time and motion blur amount. Also, an event trigger that only responds to new frames, not new time events (it would take in time and framerate), to fire events at specific frames (a movie-based Fire Periodically, for example).
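The frame-based trigger idea can be sketched in a few lines (plain Python, not Vuo code; the function name and generator shape are illustrative assumptions). Given a stream of time values and a framerate, it emits an event only when the integer frame index advances:

```python
import math

# Sketch of a frame-based trigger (a "movie-based Fire Periodically"):
# emit an event only when time crosses into a new frame at the given framerate.

def frame_events(times, framerate):
    last_frame = None
    for t in times:
        frame = math.floor(t * framerate)
        if frame != last_frame:  # only fire when the frame index advances
            last_frame = frame
            yield frame

# At 2 fps, these times cross frames 0, 1, and 2 exactly once each:
print(list(frame_events([0.0, 0.2, 0.5, 0.6, 1.0, 1.4], 2)))  # → [0, 1, 2]
```

Unlike a time-based Fire Periodically, repeated time events within the same frame produce no extra output, which is what a feedback node would need during a motion-blurred export.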
Yes, that sounds good. So, in addition to the existing offlineRender and offlineFramerate inputs, the image generator protocol would have an optional input, offlineMotionBlur.
Yes, we could add a node that inputs the offlineMotionBlur amount and desired opacity, and outputs the corrected opacity to feed to Blend Image with Feedback.
I've opened this FR for voting.
You mean to compensate for shutter angle < 360º? If so, it seems like a separate topic (feel free to create FR/discussion).
Added in Vuo 1.2.4.