My current understanding is that a node accepts data when it is fired. When an audio node fires, it can input and output an audio buffer. However, when MIDI data arrives at a node, it has to wait until the node's next execution to enter it. My understanding is that other software employs a MIDI buffer to let the node access this data between fires, allowing real-time audio/MIDI performance (MIDI instruments).

Possibly Vuo could implement this in a new and exciting way. There are many possibilities; one is to time-stamp each event (MIDI and otherwise) to allow nodes to access sub-buffer timing if needed. This extra piece of data could be added to events at the port-type level. That way, if a node wants sample-accurate timing, it would be possible.
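As a rough sketch of the port-type-level idea (hypothetical names and structure, not an actual Vuo API): an event could carry a timestamp, and a node could convert that timestamp into a sample offset within the audio buffer it is currently filling.

```cpp
#include <cstdint>

// Hypothetical sketch: an event wrapper carrying a timestamp,
// so a node can place the event at the right sample within a buffer.
struct TimestampedEvent {
    double time;   // seconds, on some shared clock (assumption)
    int    value;  // event payload, e.g. a MIDI note number
};

// Convert an event's timestamp to a sample offset within a buffer
// that starts at bufferStartTime, at the given sample rate.
inline long sampleOffset(const TimestampedEvent &e,
                         double bufferStartTime, double sampleRate) {
    return static_cast<long>((e.time - bufferStartTime) * sampleRate);
}
```

For example, an event 0.25 s into a buffer at 44100 Hz would land at sample 11025.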

More info: http://expressiveness.org/2012/12/04/midi-jitter

https://www.ableton.com/en/manual/midi-fact-sheet/

http://www.juce.com/doc/classMidiBuffer#details

Component: 

Notes from Team Vuo

Vuo Pro: 

No — available with both Vuo and Vuo Pro licenses

Complexity: 

●●○○ — A few weeks of work

Potential: 

●○○ — Appeals to current community

Comments

Submitted by alexmitchellmus

Otherwise the node would disregard the time stamp completely.

So, for example, if a node accepts timestamped data, it would receive a list of the values and their time stamps. The node can then use the time-stamp data however it wants, distributing the events throughout the buffer by comparing time stamps.

Events would still have to wait for the next execution of the node, but there would now be a small structure of events, all time-stamped, and the node could then use that small list within the buffer loop however it wanted.

Obviously it could also simply take the first item from the small list and disregard the time stamp.
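One way the "use the small list within the buffer loop" idea could look (a sketch with hypothetical names, not an actual Vuo API): walk the buffer sample by sample, and let each timestamped event take effect at its own sample offset.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch: apply a list of timestamped frequency changes
// (sorted by sample offset) while filling one buffer, so each event
// takes effect mid-buffer rather than only at the buffer boundary.
struct Event { std::size_t sampleOffset; double frequency; };

std::vector<double> frequencyPerSample(std::size_t bufferSize,
                                       double initialFreq,
                                       const std::vector<Event> &events) {
    std::vector<double> freq(bufferSize, initialFreq);
    double current = initialFreq;
    std::size_t next = 0;
    for (std::size_t i = 0; i < bufferSize; ++i) {
        while (next < events.size() && events[next].sampleOffset == i) {
            current = events[next].frequency;  // event takes effect here
            ++next;
        }
        freq[i] = current;
    }
    return freq;
}
```

A node that wanted to ignore timestamps could, as noted above, just take the first event and apply it to the whole buffer.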

I am unsure whether it would be a good idea to make this a universal style of event firing:

  • drop
  • enqueue
  • buffer (time-stamp)

Submitted by alexmitchellmus

Having thought about this a bit, I think an enqueue node with time stamps would be great!

However, it would have to be worked out how to process the time-stamped list. The list could be processed in three ways:

  • timed from the first event and distributed throughout the buffer based on timing differences, with each new buffer aligned to its first event
  • events follow a delay setting so no jitter is generated
  • hybrid queue: the first event sets the time, all other events add onto that time dynamically, time-code events are remembered, and the system tries to minimise jitter
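The first strategy could be sketched like this (my interpretation, with hypothetical names): the earliest queued event is aligned to the start of the buffer, and later events are placed by their timestamp difference from that first event.

```cpp
#include <vector>

// Sketch of the first strategy: align the first queued event to sample 0
// of the buffer, and place later events by their delta from that event.
std::vector<long> placeInBuffer(const std::vector<double> &timestamps,
                                double sampleRate) {
    std::vector<long> offsets;
    if (timestamps.empty()) return offsets;
    double first = timestamps.front();
    for (double t : timestamps)
        offsets.push_back(static_cast<long>((t - first) * sampleRate));
    return offsets;
}
```

So events at 1.0 s, 1.25 s, and 1.5 s (at 44100 Hz) would land at samples 0, 11025, and 22050 of the buffer; inter-event spacing is preserved even though the absolute times shift.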

Submitted by jstrecker

@alexmitchellmus, thanks for the links. The first article was especially helpful toward understanding the issue.

Attached are some compositions to illustrate the issue (and make sure we're on the same page). "MIDI Beats.vuo" sends MIDI events to play a note repeatedly at 180 bpm. "MIDI Instrument.vuo" receives the MIDI events and converts them to audio with Make Audio Wave.

The Make Audio Wave node calculates and outputs a buffer of audio samples each time its refresh port gets an event. The audio samples are calculated based on the Frequency and Wave port values at the moment when the event hits the refresh port. If the Frequency port gets an event fired from Receive MIDI Events, the changed frequency doesn't take effect until the next time Make Audio Wave gets an event into its refresh port. Same problem for the Adjust Loudness node.

I did a quick test to verify that this actually results in audio jitter (as it theoretically should, given the above explanation of how these nodes work). I ran the compositions and recorded the audio into Audacity. Then I generated a click track (lower track) for comparison. There is some jitter of ~10 ms, as predicted by the article you linked above for an audio buffer size of 512 samples at a sample rate of 44100 samples per second.
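The ~10 ms figure follows directly from the numbers: if parameter changes only take effect on buffer boundaries, the worst-case timing error is one buffer period. A quick calculation (illustrative only):

```cpp
// Worst-case jitter when parameter changes only take effect on buffer
// boundaries is one buffer period: bufferSize / sampleRate.
double worstCaseJitterMs(int bufferSize, double sampleRate) {
    return bufferSize / sampleRate * 1000.0;
}
```

For 512 samples at 44100 Hz this gives about 11.6 ms, consistent with the measured ~10 ms; a 256-sample buffer would only halve it to ~5.8 ms.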

The feature request Ability to change audio buffer size could mitigate the problem, although it's not an ideal solution. You'd have to know to pick a small buffer size to avoid latency, and a small buffer size would not be as efficient computationally.

@alexmitchellmus, you had suggested a solution involving timestamps throughout Vuo. I wonder if a simpler / more efficient solution would be to enable audio nodes like Make Audio Wave and Adjust Loudness to react to changes in Frequency/Loudness received in between audio events. For example, if Make Audio Wave receives a Frequency event halfway through an audio cycle, then on the next audio (refresh port) event it should output a sample buffer that's half at the old frequency and half at the new frequency. This would be essentially like using timestamps, but would be restricted to the audio nodes that actually need them.
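The half-old/half-new idea could be sketched like this (hypothetical code, not Vuo's actual implementation): render one buffer of a sine wave, switching frequency at the event's sample offset while keeping the phase continuous so there is no click.

```cpp
#include <cmath>
#include <vector>
#include <cstddef>

const double kTwoPi = 6.28318530717958647692;

// Sketch: fill one buffer, switching from oldFreq to newFreq at
// switchOffset. Phase is carried across calls so the wave stays
// continuous at both the switch point and the buffer boundary.
std::vector<float> renderBuffer(std::size_t bufferSize, double sampleRate,
                                double oldFreq, double newFreq,
                                std::size_t switchOffset, double &phase) {
    std::vector<float> out(bufferSize);
    for (std::size_t i = 0; i < bufferSize; ++i) {
        double freq = (i < switchOffset) ? oldFreq : newFreq;
        out[i] = static_cast<float>(std::sin(phase));
        phase += kTwoPi * freq / sampleRate;  // advance phase per sample
    }
    return out;
}
```

With a Frequency event halfway through, switchOffset would be bufferSize / 2, giving a buffer that is half at the old frequency and half at the new one.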

Submitted by alexmitchellmus

Thanks Jaymie,

Yes, this is exactly what I am talking about. Thanks for following up and confirming this area for improvement.

Although I am no software expert, I think that changing the audio buffer size dynamically may not be the best idea. When it comes to MIDI and musical-instrument timing, the greater the resolution the better. Halving the buffer only gets you to 256 samples, which at 44100 samples per second is only ~170 buffers per second — still not very high accuracy.

A MIDI buffer is really the only thing that can work. Possibly time-stamping events (MIDI or otherwise) from an enqueue node, with its refresh timed to the audio rate, could allow events to enqueue ahead of each buffer and then be cleared between buffers.

That way events (not just MIDI, but other events too) could be used within audio nodes, at the cost of slightly increased perceived latency when sending data to audio nodes.

Possibly there should be a review of current best practice for audio event triggering.

Also, I think RtMidi already provides time stamps for incoming MIDI, so if the data is there it may be good to use it?

Submitted by alexmitchellmus

Jaymie (@jstrecker), here is a copy of RtMidi's timestamp documentation:

MIDI input and output functionality are separated into two classes, RtMidiIn and RtMidiOut. Each class instance supports only a single MIDI connection. RtMidi does not provide timing functionality (i.e., output messages are sent immediately). Input messages are timestamped with delta times in seconds (via a double floating point type). MIDI data is passed to the user as raw bytes using an std::vector.

From: https://www.music.mcgill.ca/~gary/rtmidi/
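Since RtMidi reports each message's timestamp as a delta in seconds relative to the previous message (per the documentation above), turning those deltas into absolute times on the receiver's clock is just an accumulation. A sketch, assuming a known start time:

```cpp
#include <vector>

// Accumulate RtMidi-style delta timestamps (seconds between messages)
// into absolute times, given the time the first delta is relative to.
std::vector<double> accumulateDeltas(double startTime,
                                     const std::vector<double> &deltas) {
    std::vector<double> absolute;
    double t = startTime;
    for (double d : deltas) {
        t += d;
        absolute.push_back(t);
    }
    return absolute;
}
```

Those absolute times are what a timestamped enqueue node would need in order to place events at sample offsets within an audio buffer.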

Feature status

When we (Team Vuo) plan each release, we try to implement as many of the community's top-voted feature requests as we have time for. Vote your favorite features to the top! (How do Vuo feature requests work?)

  • Submitted to vuo.org
  • Reviewed by Team Vuo
  • Open for community voting
  • Chosen to be implemented
  • Released