The shader doesn't actually do anything until the scene is rendered. If you have Shade with Unlit Image going into Make Tube, the image does not get applied the moment the shader hits the tube. Rather, the shader (a set of instructions) and the tube (a set of vertices plus other information) are carried along until they reach the Render Scene to Window node. On the next display refresh after that, the computer actually does the work of rendering.
At that point (render time), the GPU applies the shader to the tube. The shader created by Shade with Unlit Image is a program that takes a position on the tube (a texture coordinate) as input and outputs the color the tube should be at that position, namely the color of the corresponding point in the image. The GPU executes this program many times in parallel to color all the pixels of the rendered image.
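To make that concrete, here's a rough sketch in Python (not actual GPU code, and not how Vuo implements it) of what an unlit-image fragment shader does: for each pixel being drawn, look up the image's color at that pixel's texture coordinate. The 2x2 "image" and the coordinates are made up for illustration.

```python
# A tiny stand-in for the image being shaded onto the tube.
image = [
    ["red",  "green"],   # row 0
    ["blue", "white"],   # row 1
]

def fragment_shader(u, v):
    """Given a texture coordinate (u, v) in [0, 1), return the image color there."""
    width, height = len(image[0]), len(image)
    x = int(u * width)   # nearest-texel lookup; real GPUs can also interpolate
    y = int(v * height)
    return image[y][x]

# The GPU runs this function once per pixel, all in parallel;
# here we just loop over a few sample coordinates.
colors = [fragment_shader(u, v) for (u, v) in [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9)]]
print(colors)  # ['red', 'green', 'blue']
```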
So, to answer your questions...
Is the image going to be applied to the tube before scaling (1x1x1 tube, so the image is stretched to 3.14:1) and then re-stretched after the tube is scaled to 1x2x1? Or is the tube "constructed" first (with scaling) and then the image applied to it?
The image isn't mapped to / painted onto the tube until the tube is rendered. No to the first part, yes to the second.
If I create a very detailed tube with > 100 lines and apply a live image to it, is it better to reshade the tube lower in the node pipeline with Change All Shaders, or can I apply it directly on the tube node so that only the shader part of the node is recreated (and not the whole tube)?
With a frequently changing image, using Change All Shaders would probably be faster; not because of the shader itself, but because it saves Make Tube from having to redo its work every time the image changes.
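Here's an illustrative sketch in Python (hypothetical function names, not Vuo's actual implementation) of why that matters: building geometry per frame is the expensive part, while swapping which image the existing geometry is drawn with is cheap.

```python
call_counts = {"make_tube": 0, "apply_shader": 0}

def make_tube(segments):
    """Expensive: builds the tube's vertices. Hypothetical stand-in for Make Tube."""
    call_counts["make_tube"] += 1
    return {"vertices": segments * 100, "shader": None}

def apply_shader(obj, image):
    """Cheap: just changes which image the existing geometry is drawn with."""
    call_counts["apply_shader"] += 1
    return {**obj, "shader": image}

# Naive: rebuild the tube every time the live image changes.
for frame in range(60):
    tube = make_tube(segments=100)
    scene = apply_shader(tube, image=f"frame-{frame}")

# Better: build the geometry once, then only swap the shader per frame
# (the role Change All Shaders plays downstream of Make Tube).
tube = make_tube(segments=100)
for frame in range(60):
    scene = apply_shader(tube, image=f"frame-{frame}")

print(call_counts)  # {'make_tube': 61, 'apply_shader': 120}
```

The second loop does 1 expensive build instead of 60, which is the saving described above.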
If you're interested in learning more about shaders and the graphics pipeline, see An intro to modern OpenGL.
Scheduled for work. Demonstration of the problem: noise-comparison.vuo
As a workaround, you can use Make Noise Image with Crop Image, as in noise1px.vuo.
@Bodysoulspirit, I meant that the tentative plan for this feature request was to provide just 1 way of disabling a node (the "black hole" behavior). So a node could either be enabled or disabled.
The issue with "what state would those nodes be in when I restart the application" is that when you restart (that is, hit the Stop button and then the Run button), the composition starts fresh; it doesn't carry over data from the previous run. (The ability to save state and resume later is covered by FR A UI for storing and loading the state of running compositions.) If nodes had the ability to keep outputting their current data when disabled, things would work OK if you ran the composition and then disabled the node, but the next time you ran the composition the node wouldn't have any meaningful data to output.
I guess it would be possible, though probably beyond the scope of this feature request, for certain nodes, when disabled, to optionally pass their input data through to the output without processing it. For select groups of nodes (image filters like Blur Image and Ripple Image, for example) the expected behavior seems pretty obvious: don't filter the image. For a lot of other nodes, it's either not so obvious (like Blend Image, as pointed out above) or, even for many 1-input/1-output nodes (such as Convert Text to Image), not feasible.
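The feasibility question boils down to types. A minimal sketch in Python (hypothetical, not Vuo code) of the rule:

```python
def passthrough_ok(input_type, output_type):
    """A disabled node can forward its input unchanged only if the types line up."""
    return input_type == output_type

# Image filters: image in, image out; pass-through is well-defined
# (e.g. Blur Image, Ripple Image just stop filtering).
assert passthrough_ok("image", "image")

# Convert Text to Image: text in, image out. There's no image to forward,
# so pass-through isn't feasible even though the node is 1-input/1-output.
assert not passthrough_ok("text", "image")
```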