[Original title: Intelligent motion blur correction in-node]
Currently, many nodes are affected by motion blur settings when rendering out. One such node is the
Blend Image with Feedback node. When motion blur is activated, the feedback effect is weakened because many more frames are rendered than when previewing in realtime. This makes rendering the final video in high quality hit-or-miss, since you have to dial in extra feedback to recreate the same look.
Would it be possible to have some nodes (not all) respond to motion blur settings and adjust their values (or, more likely, scale their values) based on the motion blur amount?
This would make it easier to render out video that is consistent from realtime preview to final render.
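As a rough illustration of the kind of scaling meant here (this is a sketch, not Vuo's actual feedback model): if feedback decays exponentially per rendered frame, then with N motion-blur sub-frames per output frame, the per-sub-frame feedback would need to be the N-th root of the realtime value to preserve the same look. The function name and the exponential assumption are hypothetical:

```python
def scaled_feedback(feedback: float, subframes: int) -> float:
    """Per-sub-frame feedback that reproduces the realtime decay.

    Assumes the feedback trail decays exponentially, so rendering N
    sub-frames per output frame multiplies the decay N times.
    """
    return feedback ** (1.0 / subframes)

realtime = 0.90   # feedback amount tuned while previewing in realtime
n = 8             # hypothetical sub-frame count with motion blur on
adjusted = scaled_feedback(realtime, n)
# Over n sub-frames the accumulated decay matches the realtime setting,
# i.e. adjusted ** n equals the original feedback value.
```

Whether a simple root like this is the right compensation would depend on how the node actually accumulates feedback, but something along these lines could be applied automatically when motion blur is enabled.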
It may also be useful to let the user turn this feature off, in case it behaves incorrectly for a particular composition.