The "Sample Color from Image" node needs a hold node in front of it (between it and the resize image node), and both hold nodes need to be triggered from the "process item" port, not the "showed window" one.
From an audio engineering perspective, I can't really say I like the idea of ms as a measure for samples. How about the default node using samples, and then displaying the ms in the output info?
Samples for buffer size is pretty much the convention, and for good reasons. The latency you get from 512 samples is also variable, depending on the sample rate of the audio interface: if you increase your interface's sample rate to 96 kHz, you'll cut the latency roughly in half as well. Provided you use the conventional power-of-two buffer sizes, the step from 128 to 256 samples at 44.1 kHz takes you from about 2.9 ms to about 5.8 ms, and the next step up (512) lands at about 11.6 ms. A ms-based control would therefore have to round up or down, and as a user/programmer you wouldn't realistically know which buffer size you ended up with if you set a value somewhere in between; in reality you'd get nothing but a rounding to one of the buffer sizes. I'd think it would be simpler to just select the desired buffer size and then see the resulting latency.
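The buffer-size-to-latency relationship above is just buffer ÷ sample rate. A minimal sketch (plain Python, names are mine, not Vuo nodes):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency of one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# Conventional power-of-two buffer sizes at 44.1 kHz:
for size in (128, 256, 512):
    print(f"{size:4d} samples @ 44.1 kHz -> {buffer_latency_ms(size, 44100):.1f} ms")

# Doubling the sample rate roughly halves the latency for the same buffer:
print(f" 512 samples @ 96 kHz   -> {buffer_latency_ms(512, 96000):.1f} ms")
```

This reproduces the figures quoted above: ~2.9 ms, ~5.8 ms, and ~11.6 ms at 44.1 kHz, and ~5.3 ms for 512 samples at 96 kHz.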
Furthermore, when you get to the nitty-gritty audio programming, if/when that time comes, a ms base/convention isn't easy or accurate to work with. FFT and the like, where you need to know where you are on a per-sample basis, would become near impossible, since 1 ms corresponds to a bunch of samples even at 22100. Real -> int conversion and indexing also become an issue.
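To make the indexing point concrete: 1 ms rarely lands on a whole sample, so millisecond positions generally force a real -> int rounding step before they can be used as sample indices (a small illustrative sketch, not Vuo code):

```python
# Samples per millisecond at common sample rates. The result is usually
# not an integer, so ms-based positions don't map cleanly onto indices.
for rate_hz in (22050, 44100, 48000, 96000):
    samples_per_ms = rate_hz / 1000
    print(f"1 ms @ {rate_hz} Hz = {samples_per_ms} samples "
          f"(whole sample: {samples_per_ms.is_integer()})")
```

Only the 48 kHz family divides evenly; the 44.1 kHz family (and its halves) never does.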
That said, 512 samples is a huge buffer in itself, not to mention 4096 - at least when dealing with fewer than 16 channels and little processing.
Thanks for the input, Jaymie (@jstrecker)! Brilliantly simple solution with the enqueue lists!
I found the auto-add/-remove thingy so intriguing that I didn't even consider the option of just letting all the items exist all the time. I also tried a simpler solution where I just used a hold-list node to add the timebase to. This didn't work: the hold list, when getting a new item, would inherit the value from the first item (the current time) instead of starting at the initial value (0). I might have made some triggering errors there - or it's just the way the hold list node works. I couldn't figure it out, at least.
It seems like my calculation of the timebase should produce about the same output, but the other way around. The way I did it, I get the time between frames by subtracting the previous frame time from the current frame time. That delta is then accumulated in a feedback loop, producing an increasing value that follows changes in framerate while keeping the time scale.
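The frame-delta accumulation described above might be sketched like this (illustrative Python, not Vuo nodes; names are mine):

```python
class Timebase:
    """Accumulates frame-to-frame deltas into a running timebase.

    Starts at an initial value (0) rather than inheriting the current
    frame time, mirroring the feedback-loop approach described above.
    """
    def __init__(self):
        self.previous_frame_time = None
        self.elapsed = 0.0

    def update(self, current_frame_time: float) -> float:
        if self.previous_frame_time is not None:
            # Delta between frames, accumulated in a feedback loop:
            self.elapsed += current_frame_time - self.previous_frame_time
        self.previous_frame_time = current_frame_time
        return self.elapsed

tb = Timebase()
print(tb.update(10.0))   # first frame: no delta yet
print(tb.update(10.5))   # half a second later
print(tb.update(11.25))  # framerate changed; timebase still consistent
```

Because only deltas are accumulated, the timebase tracks framerate changes while keeping a consistent time scale, which is the point of the feedback loop.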
If this gets hard to follow for anyone else reading: the difference between our approaches is that Jaymie's way creates an enqueued list of n items, and the animation is a result of the different items in the list, whereas in my way the animation is a result of the item itself changing. This means that when a circle grows and fades in Jaymie's composition, it is actually the next item (circle) in the list that gets displayed with those values (think stop-motion); in my example, the item (circle) itself gets updated with a different value (think melting ice). Jaymie's way is probably a lot easier to get your head around, and probably way easier to work with, but the time scale and step size between the items (circles) will depend on the number of items in the list.