Position/orient 3D objects by using the mouse on the rendered output

A Vuo Editor mode that displays small handles on the 3D objects in the rendered composition, which you can drag to transform an object. These transformations would be saved into the Transform input port (if it holds a constant value) on the node that produces the object (Make Cube, Make Sphere, etc.).

Opened for voting.

So is this a request for an equivalent to the Interaction patch (to use a QC concept some of us might be familiar with)?

I’ve had situations where simply being able to select an object with the mouse (say, a 3D point object) would have been incredibly useful, so I could interact with just that isolated object using the input sliders or on-screen buttons I created inside a 3D scene. Please consider separating the selection of objects from the positioning interaction with them, too, so the composition can be notified which object has been selected.

Alastair, I just added some details to this feature request. It wouldn’t really be the same as QC’s Interaction patch. The Interaction patch is aimed at end-user input (similar to Vuo’s Receive Mouse Drags on Layer). This FR is aimed at the person editing the composition, as an alternative to filling in numbers in the Transform input editor.

Your idea about being able to click on a rendered 3D object in a scene (hit testing) sounds like a separate FR. Would be nice to have.
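For anyone curious what that hit testing would involve, here's a minimal sketch in plain C, assuming the usual approach: unproject the mouse position into a ray from the camera, then test that ray against each object's bounding volume (a sphere here). All names and structures are illustrative; this is not Vuo's API, just the underlying math.

```c
// Illustrative hit testing: does a ray cast from the camera through the mouse
// position intersect a sphere? (Names and types are hypothetical, not Vuo's API.)
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true if the ray (origin + t*direction, t >= 0) hits the sphere.
static bool rayHitsSphere(Vec3 origin, Vec3 direction, Vec3 center, double radius)
{
    Vec3 oc = sub(origin, center);
    double a = dot(direction, direction);
    double b = 2.0 * dot(oc, direction);
    double c = dot(oc, oc) - radius * radius;
    double discriminant = b*b - 4*a*c;
    if (discriminant < 0)
        return false;                        // Ray misses the sphere entirely.
    double t = (-b - sqrt(discriminant)) / (2*a);
    return t >= 0;                           // Only count hits in front of the camera.
}

int main(void)
{
    // Camera at the origin looking down -Z; a unit sphere sits 5 units away.
    Vec3 rayOrigin    = {0, 0, 0};
    Vec3 rayDirection = {0, 0, -1};          // In practice, unprojected from the mouse position.
    Vec3 sphereCenter = {0, 0, -5};
    printf("hit: %s\n", rayHitsSphere(rayOrigin, rayDirection, sphereCenter, 1.0) ? "yes" : "no");
    return 0;
}
```

In an editor or composition, you'd run this test against each selectable object's bounding volume and report the nearest hit as the selected object.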