Pure data uses two audio buffer settings:
- audiobuffer (set in milliseconds, but Pd converts that value to the nearest multiple of the blocksize)
- blocksize (typically set to 64 samples)
The audio buffer is a multiple of the blocksize buffer, sized to accommodate system I/O; the blocksize is the internal audio buffer size, typically 64 samples.
http://booki.flossmanuals.net/pure-data/
This allows greater accuracy when control-rate operations interact with audio nodes, such as controlling volume from a MIDI slider. Currently Vuo's audio buffer runs at 512 samples, which only allows a maximum accuracy of 93.75 control events per second (48,000 samples per second / 512). It would be fantastic for Vuo to use this dual-buffer approach to improve audio control accuracy. Buffer settings could also be exposed in the preferences menu.
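As a quick illustration of the rounding described above, here is a minimal C sketch that converts a millisecond setting to the nearest whole number of 64-sample blocks. The function name ms_to_blocks and the 48 kHz sample rate are assumptions for illustration, not Pd's actual internals.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of the rounding implied above: a millisecond audio-buffer value
 * becomes the nearest whole number of blocksize-sample blocks.
 * (Illustrative only; not taken from Pd's source.) */
static int ms_to_blocks(double ms, double sample_rate, int blocksize)
{
    double frames = ms * sample_rate / 1000.0;      /* requested sample frames */
    int blocks = (int)lround(frames / blocksize);   /* nearest block count     */
    return blocks < 1 ? 1 : blocks;                 /* keep at least one block */
}

int main(void)
{
    int blocksize = 64;        /* Pd's typical internal block size */
    double ms = 10.0;          /* requested audio buffer in milliseconds */
    int blocks = ms_to_blocks(ms, 48000.0, blocksize);
    printf("%.1f ms -> %d blocks = %d frames (%.2f ms actual)\n",
           ms, blocks, blocks * blocksize,
           blocks * blocksize * 1000.0 / 48000.0);
    return 0;
}
```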
EDIT: fixed the explanation of blocksize vs. buffers to match the correct explanation Steve gives below; it was otherwise confusing.
Comments
If I understand correctly, Pd's -blocksize parameter specifies the number of samples per buffer, and Pd's -audiobuf specifies the number of buffers (it converts milliseconds to a power-of-2 quantity of buffers).
In Vuo:
…so I'll interpret this feature request as allowing the composition to change the number of output buffers from its default of 8. I updated the title and opened it for community voting. I envision this being a port on nodes that output audio (currently just Send Audio Data). I'm unsure whether in Vuo the port's value should be specified in milliseconds (which would be easier to understand from a latency perspective, but is imprecise since it requires rounding to a multiple of the buffer size), or an integer number of buffers (which directly specifies what Vuo should do, but may be more difficult to understand).
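To show why a millisecond value is imprecise here, the sketch below (in C, purely illustrative) converts a requested latency into a power-of-2 count of 512-frame buffers; the function name and round-up policy are assumptions, not Vuo's actual behaviour.

```c
#include <stdio.h>

/* Illustrative conversion of a millisecond request into a power-of-2
 * number of fixed-size buffers. Not Vuo API code. */
static int ms_to_buffer_count(double ms, double sample_rate, int buffer_frames)
{
    double buffers_wanted = ms * sample_rate / (1000.0 * buffer_frames);
    int count = 1;
    while (count < buffers_wanted)   /* round up to the next power of 2 */
        count *= 2;
    return count;
}

int main(void)
{
    double requested_ms = 50.0;
    int count = ms_to_buffer_count(requested_ms, 44100.0, 512);
    printf("requested %.0f ms -> %d buffers = %.1f ms actual\n",
           requested_ms, count, count * 512 * 1000.0 / 44100.0);
    return 0;
}
```

With 512-frame buffers at 44.1 kHz, a 50 ms request already rounds up to 8 buffers, i.e. roughly 93 ms, which is the kind of surprise a raw buffer count would avoid.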
Just found this info for PD (since we are using that as an example of a visual audio environment):
https://puredata.info/docs/developer/PdMemoryModel/
It states that within Pd you can have per-sample-accurate nodes mix with other buffers simply by increasing or decreasing -audiobuf.
Thanks for clarifying that, @Steve. So that means the I/O audio buffer for Vuo is currently 4096 samples?
I am interested in being able to change both; however, right now, being able to change the buffer would hugely improve MIDI timing, etc. (for example, using an ADSR node we currently only get ~100 events a second; we really need more, as the data only enters the node once per event, unless it's buffered).
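For reference, the "~100 events a second" figure falls directly out of sample rate divided by buffer size; this small C snippet just prints that arithmetic for a few common rates (plain arithmetic, not Vuo API code).

```c
#include <stdio.h>

/* A node that only sees fresh control data once per audio buffer can
 * react at most sample_rate / buffer_size times per second. */
int main(void)
{
    double sample_rates[] = { 44100.0, 48000.0, 96000.0 };
    int buffer_frames = 512;   /* Vuo's current audio buffer size */

    for (int i = 0; i < 3; i++)
        printf("%.0f Hz / %d frames = %.2f control events per second\n",
               sample_rates[i], buffer_frames,
               sample_rates[i] / buffer_frames);
    return 0;
}
```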
> I'm unsure whether in Vuo the port's value should be specified in milliseconds, or an integer number of buffers…
I think keep it in milliseconds; that way there will never be any confusion. If you are technically minded, you'll know how to do the conversion.
Thought: Make the milliseconds setting a multiple of the buffer value automatically.
> So that means the I/O audio buffer for Vuo is currently 4096 samples?
Yes; 8 buffers, each with 512 sample frames.
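A quick worked example of what those numbers add up to (plain arithmetic; the latency figures assume the 8-buffer output queue is kept full):

```c
#include <stdio.h>

int main(void)
{
    int buffers = 8, frames_per_buffer = 512;
    int total_frames = buffers * frames_per_buffer;   /* 4096 frames */

    printf("total queued frames: %d\n", total_frames);
    printf("at 44.1 kHz: %.1f ms, at 48 kHz: %.1f ms\n",
           total_frames * 1000.0 / 44100.0,
           total_frames * 1000.0 / 48000.0);
    return 0;
}
```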
From an audio engineering perspective, I can't really say I like the idea of ms as a measure for samples. How about the default node using samples, and then display the ms in the output info?
Samples for buffer size is pretty much the convention, for good reasons. The latency you get from 512 samples also varies with the sample rate of the audio interface: if you increase the interface's sample rate to 96 kHz, you cut the latency roughly in half. Provided you use the conventional power-of-2 buffer sizes, 128 and 256 samples at 44.1 kHz work out to about 2.9 ms and 5.8 ms respectively, and the next step up (512) is about 11.6 ms. A millisecond setting would therefore have to be rounded up or down, and as a user/programmer you wouldn't realistically know which buffer size you ended up with if you set a value in between; in practice you'd get nothing but a rounding to one of those sizes. I think it would be simpler to select the desired buffer size in samples and then see the resulting latency.
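The latency figures quoted above are easy to verify; this C snippet is pure arithmetic, not a proposal for how Vuo should expose the setting.

```c
#include <stdio.h>

/* Latency of the conventional power-of-2 buffer sizes at two sample rates. */
int main(void)
{
    int sizes[] = { 128, 256, 512 };

    for (int i = 0; i < 3; i++)
        printf("%4d frames: %6.3f ms at 44.1 kHz, %6.3f ms at 96 kHz\n",
               sizes[i],
               sizes[i] * 1000.0 / 44100.0,
               sizes[i] * 1000.0 / 96000.0);
    return 0;
}
```

Going from 44.1 kHz to 96 kHz brings each figure to a bit under half, which matches the "about half" above.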
Furthermore, when it comes to nitty-gritty audio programming, if/when that time comes, a millisecond base/convention isn't easy or accurate to work with. FFTs and the like, where you need to know where you are on a per-sample basis, would become near impossible, since 1 ms already spans a batch of samples even at 22.1 kHz. Real-to-int conversion and indexing also become an issue.
That said, 512 samples is a huge buffer in itself, not to mention 4096, at least when dealing with fewer than 16 channels with little processing.