Kewl:

Let's say that Vuo makes a tube and that the tube is scaled to 1x2x1. The tube's outside material is an image that already has the correct ratio for that scaled tube, so 1.57:1 rather than 3.14:1.
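(Where those numbers come from: a diameter-1 tube has a circumference of pi ≈ 3.14 Vuo units, and the scaled height is 2, so the unwrapped outside surface is about 3.14:2 ≈ 1.57:1.)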

Is the image going to be applied to the tube before scaling (1x1x1 tube, so image stretched to 3.14:1) and then re-stretched after the tube is scaled to 1x2x1? Or is the tube "constructed" first (with scaling) and then the image applied to it?

Comments

Bodysoulspirit:

Perhaps it depends on how you build the composition.

For example, whether you connect Fire on Start to a Shade with Image node that feeds the Make Tube node's material port, or whether you connect Fire on Start to both the Make Tube and the Fetch Image nodes.

Do you mean because you see the tube first appear at some size before being resized, or something else?

Bodysoulspirit:

I may have a very dumb question here, but how would you want to prepare the image differently than with the correct ratio? If you input the wrong one, wouldn't it look off?

Either way you'd need to give it the correct ratio, no?

Kewl:

Yes, you're right. But... for me it changes the strategy for rendering the image from layers.

Let's say I want a 1x3x1 tube scaling. A good image for that would be 3140x3000 pixels. But if the image is applied to the tube before the tube scaling, the image will be squeezed to 3140x1000 pixels (loss of data) and then re-stretched to 3140x3000 (interpolation of data). I would have wasted CPU cycles rendering an image bigger than necessary and then squeezing it.

If I know that the image is applied before the tube scaling, I can distort the layers (taking into account the tube scaling) before rendering to an image, and render the image at 3140x1000. Once applied and scaled on the tube, the image will have the right proportions, and I will potentially have saved a few CPU cycles by avoiding the generation of "useless" pixels that get lost in the squeezing.
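(In round numbers that's 3140 x 3000 ≈ 9.4 million pixels rendered versus 3140 x 1000 ≈ 3.1 million, so roughly a third of the pixels, assuming the pre-squeezed render really is all the tube ends up needing.)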

Or maybe not... Is Vuo vectorizing matrix data for processing?

Bodysoulspirit:

Yes, but correct me if I'm wrong: if you render the layers at 3000x1000 and stretch that image onto the tube, it will have the correct proportions and ratio, but wouldn't there be a huge loss of quality?

For example, if I take an image that is 1000 px wide, export it stretched to 300 px wide (with the same original height), then reload it and stretch it back to 1000 px, the quality drops a lot.
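(In numbers: squeezing 1000 px of detail into 300 px keeps at most 300 distinct columns of pixels, and stretching back to 1000 px can only interpolate between those 300, so roughly 70% of the original horizontal detail never comes back.)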

So your question still stands (what is rendered in what order), but maybe even if the image is briefly stretched and applied to a 1x1x1 tube, it still has access to the original image, so you lose some extra hardware work but at least your image keeps better quality?

Don't know, quite technical for me ;)

Bodysoulspirit:

Even if the image were stretched during the tube processing, I thought about adding the image later, with Change All Shaders, but that would perhaps make the shader appear later. I thought it would have shaded the top and bottom as well, but with a thickness of 0 it seems not (or it's invisible).

What I still wonder (there may be some info about that that I have missed on the website): if I create a very detailed tube with > 100 lines and apply a live image to it, is it better to reshade the tube lower in the node pipeline with Change All Shaders, or can I apply the image directly on the tube node so that only the shader part of the node is recreated (and not the whole tube)?

jstrecker:

The shader doesn't actually do anything until the scene is rendered. If you have Shade with Unlit Image going into Make Tube, the image does not get applied the moment the shader hits the tube. Rather, the shader (a set of instructions) and the tube (a set of vertices + other information) are carried along until they reach the Render Scene to Window node. On the next display refresh after that, the computer actually does the work of rendering.

At that point (render time), the GPU applies the shader to the tube. The shader created by Shade with Unlit Image is a program that inputs a position on the tube (texture coordinate) and outputs the color that the tube should be at that position — that is, the color of the corresponding point in the image. The GPU executes this program a bunch of times in parallel to color all the pixels of the rendered image.
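As a rough sketch (illustrative only, not Vuo's actual shader source, and the names are made up for the example), an unlit image shader boils down to a fragment program like this:

    uniform sampler2D image;                  // the texture coming from Fetch Image
    varying vec2 fragmentTextureCoordinate;   // where on the tube's surface this pixel lies, interpolated from the mesh's texture coordinates

    void main()
    {
        // Output the image's color at this surface position, with no lighting applied.
        gl_FragColor = texture2D(image, fragmentTextureCoordinate);
    }

The texture coordinates are defined on the mesh itself, so the tube's transform (including your 1x2x1 scale) only changes where those pixels land on screen, not which part of the image they sample.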

So, to answer your questions...

Is the image going to be applied to the tube before scaling (1x1x1 tube, so image stretched to 3.14:1) and then re-stretched after the tube is scaled to 1x2x1? Or is the tube "constructed" first (with scaling) and then the image applied to it?

The image isn't mapped to / painted onto the tube until the tube is rendered. No to the first part, yes to the second.

if I create a very detailed tube with > 100 lines and apply a live image to it, is it better to reshade the tube lower in the node pipeline with Change All Shaders, or can I apply the image directly on the tube node so that only the shader part of the node is recreated (and not the whole tube)?

With a frequently changing image, using Change All Shaders would probably be faster — not because of the shader, but because it saves Make Tube from having to do its work every time the image changes.
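To put rough, purely hypothetical numbers on it: a tube with, say, 128 columns and 32 rows of segments means recomputing a few thousand vertex positions, normals, and texture coordinates every time Make Tube executes, whereas Change All Shaders only swaps the material on the mesh that already exists.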

If you're interested in learning more about shaders and the graphics pipeline, see An intro to modern OpenGL.

Kewl:

Is the image going to be applied to the tube before scaling (1x1x1 tube, so image stretched to 3.14:1) and then re-stretched after the tube is scaled to 1x2x1? Or is the tube "constructed" first (with scaling) and then the image applied to it?

The image isn't mapped to / painted onto the tube until the tube is rendered. No to the first part, yes to the second.

Thanks!