In Vuo, images and layers have quite different roles. Images are grids of pixels that you can manipulate: you can resize them (Resize Image, Resize Image if Larger), crop them (Crop Image, Crop Image Pixels), or apply any of a number of image filters.
Layers are designed to be stacked on top of each other, a bit like layers in an app like Photoshop. You can create layers in Vuo, either from shapes or from an image, but you can't load layers from a file. Layers are sized based on the Vuo coordinate system.
As you noted, images can be made into 3D objects.
Making a layer or 3D object from an image (with nodes like Make Image Layer, or Make Lit Image Shader -> Make Sphere) is computationally fairly efficient, since layers and 3D objects just keep a reference to the image you feed into them.
It gets more computationally expensive when you start to create new images. One example is creating an image from a group of layers or 3D objects, as Render Layers/Scene to Image does.
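To make that cost difference concrete, here's a toy sketch in plain Python (not Vuo's actual implementation; the class and function names are made up) of reference-keeping layers versus rendering to a fresh image:

```python
class Image:
    def __init__(self, pixels):
        self.pixels = pixels      # the actual pixel buffer

class Layer:
    def __init__(self, image):
        self.image = image        # just a reference; no pixels are copied

def render_to_image(layers):
    """Compositing allocates and fills a brand-new pixel buffer --
    that allocation and per-pixel work is where the cost comes in."""
    out = [0.0] * len(layers[0].image.pixels)
    for layer in layers:
        out = [max(a, b) for a, b in zip(out, layer.image.pixels)]  # lighten blend
    return Image(out)

cheap = Layer(Image([0.1, 0.9, 0.5]))                              # O(1)
costly = render_to_image([cheap, Layer(Image([0.4, 0.2, 0.8]))])   # O(pixels)
```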
There is a tabled feature request, Execution time, which includes a composition (packageable as a subcomposition) for testing the execution time of a node or group of nodes. This might be helpful as you examine your own compositions.
Magneson, you make some good points.
videopiglet, I guess what we'll do is open up this feature request for voting, and limit the scope to the simple keystone node.
For diagonal cropping/masking, I'd echo Magneson's suggestion. Possibly we could create additional nodes related to Apply Mask that would be more convenient for specific situations. If you know of any that would be useful, please create a separate feature request.
Here's a rudimentary solution using stock nodes that reports the first mismatch.
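(If it helps to see the logic in text form: here's a rough Python equivalent of what the stock nodes are doing, assuming two lists are being compared item by item -- the function name and list framing are my guesses, not part of the composition.)

```python
def first_mismatch(a, b):
    """Return the index of the first position where lists a and b differ,
    or where the shorter one ends; None if they match exactly."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))
```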
There are different ways to approach things in Vuo, depending on the final result you want. When I thought about this, I pictured sequential images with random input, but with input values different enough that it was easy to see the change between them. I also thought it would be nice to have separate control over how fast one image merges into the next. So here is my composition. You can change the Threshold and Sharpness values to suit your needs, while the Period input changes how fast the images transition.
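(If you're curious what Threshold and Sharpness amount to numerically, here's a rough Python sketch -- my own guess at the math, not the composition's exact nodes -- of a threshold-based crossfade weight with a sharpness-controlled soft edge:)

```python
def crossfade_weight(mask_value, threshold, sharpness):
    """Weight of the incoming image for a 0-1 mask value: sharpness near 1
    gives a hard cut at the threshold; lower values widen the soft edge."""
    half_edge = max(1e-6, (1.0 - sharpness) / 2.0)   # avoid divide-by-zero
    x = (mask_value - (threshold - half_edge)) / (2.0 * half_edge)
    x = min(max(x, 0.0), 1.0)
    return x * x * (3.0 - 2.0 * x)                   # smoothstep easing
```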
Ok -- success if I convert the 32-bit float .wav to 16-bit integer. Attached zip file includes comp, updated subcomps and .wav files.
Here's what I did to get things working. First thing was to set up some fresh analysis using Terminal:
https://wiki.lazarus.freepascal.org/macOS_Sound_Utilities
Used afinfo on my 32-bit float file:
File type ID: WAVE
Num Tracks: 1
----
Data format: 1 ch, 44100 Hz, 'lpcm' (0x00000009) 32-bit little-endian float
no channel layout.
estimated duration: 0.743039 sec
audio bytes: 131072
audio packets: 32768
bit rate: 1411200 bits per second
packet size upper bound: 4
maximum packet size: 4
audio data file offset: 80
optimized
source bit depth: F32
----
Then used Terminal's afconvert -f WAVE -d LEI16 to convert it to a 16-bit .wav file. (Just because I was already in Terminal and learning these new tools -- I assumed I could have used any audio software to convert. EDIT: WRONG assumption, see below -- Audacity's export is different.)
New file info:
File type ID: WAVE
Num Tracks: 1
----
Data format: 1 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer
no channel layout.
estimated duration: 0.743039 sec
audio bytes: 65536
audio packets: 32768
bit rate: 705600 bits per second
packet size upper bound: 2
maximum packet size: 2
audio data file offset: 4096
optimized
source bit depth: I16
----
Important clue:
audio data file offset: 4096
Essentially this offset value helped solve the issue. I wish I could extract it in Vuo. The number 4096 here apparently relates to the Apple 'FLLR' subchunk -- listed in the Read Wave Header subcomp's "File info" readout -- which (IIUC) means there are 4K-plus bytes before the audio data "payload" starts.
So I tried offsetting the data start byte to 4097 -- works. I also noticed in the Read Wave Header "File info" readout that the "Sub-chunk 2 size" reads 4044 -- related, but 52 bytes off -- 44-byte header + 8 bytes? So I tried setting the data start byte to 4045, and that also works. I'm a little confused about why both work; there's still a lot I'm not getting about the numbers. (My guess: the FLLR padding bytes are zeros, so starting 52 bytes early just reads in a short stretch of silence before the real samples.) (Btw, I also had to rejigger the Read Wave Header subcomp to properly calculate the data section size, which took a minute to sort out.)
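(For anyone following along outside Vuo: the reliable way to get that offset is to walk the RIFF subchunks until you hit 'data'. A minimal Python sketch -- file name hypothetical, assumes a well-formed RIFF file:)

```python
import struct

def find_data_chunk(path):
    """Walk the RIFF subchunks of a .wav file and return the byte offset
    and size of the 'data' chunk's payload (samples start at that offset)."""
    with open(path, 'rb') as f:
        riff, _size, wave = struct.unpack('<4sI4s', f.read(12))
        if riff != b'RIFF' or wave != b'WAVE':
            raise ValueError('not a RIFF/WAVE file')
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no 'data' chunk found")
            chunk_id, chunk_size = struct.unpack('<4sI', header)
            if chunk_id == b'data':
                return f.tell(), chunk_size
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

print(find_data_chunk('converted.wav'))  # e.g. (4096, 65536) for the afconvert file
```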
Finally -- success!!!
Part 2: tried a simple conversion from Audacity, since its .wav export only exports 16-bit. (I set up a macro -- now I can easily batch-convert my 32-bit files to 16-bit.) Terminal afinfo shows that it differs from the .wav made with Apple's afconvert; the Audacity file is presumably more "universal" (i.e., Audacity does not add the "FLLR" chunk and the 4K-plus bytes of padding -- why oh why would Apple do that...).
And using the data file offset to set the data start byte (to 45) works:
File type ID: WAVE
Num Tracks: 1
----
Data format: 1 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer
no channel layout.
estimated duration: 0.743039 sec
audio bytes: 65536
audio packets: 32768
bit rate: 705600 bits per second
packet size upper bound: 2
maximum packet size: 2
audio data file offset: 44
optimized
source bit depth: I16
----
Yet another doc has proved helpful in all this RIFF stuff: https://code.google.com/archive/p/opentx/issues/192 (which originated here: https://stackoverflow.com/questions/6284651/avaudiorecorder-doesnt-write... ).
Notable:
Reading WAVE files properly must really begin as an exercise in locating and identifying RIFF subchunks.
And:
It is allowable to insert subchunks after the data payload.
Which gets back to Steve's point not to trust where chunks are. Case in point: I learned today that "acidized" .wav files -- a common format for adding loop metadata readable by audio sampler synths -- put their loop metadata after the audio data "payload".
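(To see that in practice, the walker above extends to a chunk lister -- again Python, file name hypothetical -- that prints every subchunk, including any placed after 'data':)

```python
import struct

def list_chunks(path):
    """Print every top-level RIFF subchunk's id, size, and payload offset,
    including chunks (like acid loop metadata) that come after 'data'."""
    with open(path, 'rb') as f:
        f.seek(12)  # skip 'RIFF' <size> 'WAVE'
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack('<4sI', header)
            print(chunk_id.decode('ascii', 'replace'), chunk_size, 'at', f.tell())
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

list_chunks('acidized_loop.wav')  # hypothetical file
```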
Finally...
Magneson wrote:
If you're not scared of some heavy nerding, you can also just get the bytes from the wav files via the Data nodes and convert the sample range from the file to the Y-values you need. That way you get straight to the data you want ....
::ROFL:: Well, I guess I'm learning a few things. :-0
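(For the record, in text-code terms that conversion is short -- a Python sketch, function name mine, that turns raw 16-bit little-endian PCM bytes into Y values in -1..1:)

```python
import struct

def pcm16_to_y(raw):
    """Convert raw 16-bit little-endian signed PCM bytes to floats in [-1, 1]."""
    count = len(raw) // 2
    samples = struct.unpack('<%dh' % count, raw[:count * 2])
    return [s / 32768.0 for s in samples]
```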
(P.S. Is drag/drop broken for adding new files to posts?)
Still haven't had time to dig in on this, but I can at least report that the sine wave file you posted works fine, as does any other 16-bit single-cycle file I have on hand. Apparently something about the 32-bit float type needs to be sorted out. Byte order? Conversion calculation?
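(One possible culprit, sketched in Python under the assumption that the file really is IEEE-float PCM as afinfo reports: 32-bit float samples aren't integers at all, so there's no 32768 scaling step -- they're little-endian floats already nominally in -1..1.)

```python
import struct

def float32_to_y(raw):
    """Interpret raw bytes as 32-bit little-endian IEEE-float samples.
    No integer scaling needed -- float WAV samples are already ~[-1, 1]."""
    count = len(raw) // 4
    return list(struct.unpack('<%df' % count, raw[:count * 4]))
```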