I want to load this image onto the layer and have the layer be a scaled image size.
All of the get dimensions nodes seem to give pixel dimensions, not units.
The layer seems to take units, not pixels.
Also, the layer that can be scaled does not seem to have separate height and width, just a single scale value covering both x and y, as if the layer were always a square.
Do I have to use Get Window Dimensions as well as math objects, to derive the unit size of an image? How do I control width and height individually?
(ps - how to change color for each layer instance?)
What do you mean by "have the layer be a scaled image size"? What size do you want it to be?
It is true that most "Get" nodes output pixels, which led me to make custom nodes, uploaded in the gallery: "Get Composition Height in Vuo Coordinates" and "Get Image Height in Vuo Coordinates". In Vuo coordinates, images seem to have a width of 2, and the height is calculated accordingly from the aspect ratio.
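That pixel-to-Vuo-coordinate conversion is just arithmetic. A minimal sketch, assuming the width-of-2 convention described above (taken from this thread, not from any official Vuo API):

```python
def image_size_in_vuo_coords(px_width, px_height):
    """Convert pixel dimensions to Vuo coordinates, assuming the
    convention above: an image spans 2 Vuo units in width, and the
    height follows from its aspect ratio."""
    vuo_width = 2.0
    vuo_height = vuo_width * px_height / px_width
    return vuo_width, vuo_height

# A 1920x1080 image: width 2.0, height 2.0 * 1080/1920 = 1.125
print(image_size_in_vuo_coords(1920, 1080))  # (2.0, 1.125)
```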
It's not supposed to be a square if your image isn't a square. That node is meant to let you set one axis, x or y, and adapt the other one automatically based on the image's aspect ratio.
If you still need to set width & height manually, there is another image layer node: "Make Image Layer (Stretched) 2".
Then there is also the
What do you mean by "how to change color for each layer instance"?
Thanks for the notes.
What I mean by "change color" is having a color multiply accessible. I had been using another Make Layer node that exposed a color value as well, then replaced it with the node above, which led to the question.
I think there should be a stock node that outputs Vuo coordinate values, since that is Vuo's position system. It is more efficient for this to happen at the node level than by adding extra nodes to a graph. I will take a look at your node soon; I appreciate it.
Could be that some older Vuo Image Layer node had some color port too, I don't remember if that's what you mean.
If yes, I guess one should either multiply the image beforehand, or add a second Color Layer with its blending mode set to multiply?
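For reference, a "multiply" tint is just a per-channel product of normalized color values; a minimal sketch of the blend math (nothing Vuo-specific):

```python
def multiply_blend(base, tint):
    """Per-channel multiply of two RGBA colors given as 0.0-1.0 floats.
    A white tint (1, 1, 1, 1) leaves the base color unchanged."""
    return tuple(b * t for b, t in zip(base, tint))

pixel = (0.8, 0.8, 0.8, 1.0)       # light gray
red_tint = (1.0, 0.25, 0.25, 1.0)  # strong red tint
print(multiply_blend(pixel, red_tint))  # (0.8, 0.2, 0.2, 1.0)
```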
Well, it is much, much less efficient to multiply large textures together at the pixel level than to multiply the texture by a single vec4 overall. Rendering two layers instead of one isn't that great either.
Certainly true, your coding skills are over 1000x mine, but how often will people use a color multiply on an image vs. those who won't, and how does the extra code on that node affect efficiency for all the users who won't use it? 😜
I think that if there is no data at a port, that part has no cost. I guess there is potentially one extra multiply instruction in the case where nothing is hooked to the color port.
Whereas multiplying one texture by another means a distinct multiply value for each and every pixel, which immediately grows to an unwieldy amount of processing if you are talking about even a dozen or so layers sharing a single image, but with each layer tinted a different color.
It also takes up more of the GPU's limited texture memory for each and every different color-multiply texture one would have to use.
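To put rough numbers on the memory argument above, here is a back-of-the-envelope sketch (the scenario of a dozen tinted layers comes from this thread; the byte counts are illustrative, not measured from Vuo):

```python
def extra_memory_per_pixel_tints(width, height, layers, bytes_per_pixel=4):
    """Extra texture memory if each layer carries its own full-size
    RGBA multiply texture."""
    return width * height * bytes_per_pixel * layers

def extra_memory_uniform_tints(layers, bytes_per_color=16):
    """Extra memory if each layer just stores one RGBA float color
    (a vec4: 4 floats x 4 bytes)."""
    return bytes_per_color * layers

# A dozen 1920x1080 layers, each tinted a different color:
print(extra_memory_per_pixel_tints(1920, 1080, 12))  # 99532800 bytes (~95 MB)
print(extra_memory_uniform_tints(12))                # 192 bytes
```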
“coding skills are over 1000x mine”
I appreciate that, but I also don’t personally believe that for a second! Don’t sell yourself short either!!!
Persistence can also be a somewhat functional substitute for skill :-)
I appreciate that too, but to be realistic, I don't even know what a vec2 or a UV is :)
Yes, but a color port always has a default color, does it not? Then it would need an extra boolean port?
Oh, no… if white is at the color port, everything stays the same. Like objects in QC, or many others in Vuo for that matter.
It may be that this layer node group uses some Mac framework that doesn't allow this… maybe it is Core Graphics backed, and in that case I am unsure whether Core Graphics exposes that. But why would the layer engine be CG backed? So that may be completely wrong; it just came to mind while thinking of possible reasons.