The human ear is said to hear frequencies between 20 Hz and 20,000 Hz.
This app and composition let you play with these frequencies and run the test for yourself.
I can hear frequencies up to 17,500 Hz when the volume is set quite loud.
By default I set the loudness quite low to avoid a harsh blast of high frequencies in your ears when you start the app, but I still RECOMMEND SETTING YOUR SPEAKER VOLUME QUITE LOW ON START and turning it up slowly ;)
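For reference, the underlying technique is just generating a pure sine tone at a chosen frequency and amplitude. Here is a minimal sketch in C; the low amplitude and the sample rate are illustrative assumptions, not values taken from the composition:

```c
// Minimal sketch: fill a buffer with a sine tone at a given frequency.
// Keeping the amplitude low (e.g. 0.1) is what the "quiet by default"
// setting above corresponds to.
#include <math.h>
#include <stddef.h>

void fillSineTone(float *buffer, size_t sampleCount, double frequency,
                  double amplitude, double sampleRate, double *phase)
{
    double phaseIncrement = 2.0 * M_PI * frequency / sampleRate;
    for (size_t i = 0; i < sampleCount; ++i)
    {
        buffer[i] = (float)(amplitude * sin(*phase));
        *phase += phaseIncrement;
        if (*phase > 2.0 * M_PI)
            *phase -= 2.0 * M_PI;   // keep the phase bounded
    }
}
```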
It'd be great if Vuo could simplify the process of converting FFT data to MIDI notes. The suggested node would have monophonic (melodies) and polyphonic (chords) detection modes. There might be some challenges with requiring a large sample buffer size...
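As a rough illustration of the monophonic case, the core of such a node could pick the strongest FFT bin and convert its frequency to the nearest MIDI note with note = 69 + 12·log2(f / 440 Hz). The sketch below is hypothetical C, not an existing Vuo API:

```c
// Hypothetical monophonic FFT-to-MIDI sketch (not a Vuo node).
// Finds the loudest magnitude bin and maps its centre frequency
// to the nearest MIDI note number.
#include <math.h>
#include <stddef.h>

int strongestBinToMidiNote(const float *magnitudes, size_t binCount,
                           double sampleRate, size_t fftSize)
{
    size_t peakBin = 0;
    for (size_t i = 1; i < binCount; ++i)
        if (magnitudes[i] > magnitudes[peakBin])
            peakBin = i;

    double frequency = (double)peakBin * sampleRate / (double)fftSize;
    if (frequency <= 0.0)
        return -1;  // DC bin won: no usable pitch

    return (int)lround(69.0 + 12.0 * log2(frequency / 440.0));
}
```

Polyphonic detection would instead pick several spectral peaks, which is where the large buffer requirement comes in: low notes sit only a few Hz apart, so the FFT needs fine frequency resolution to separate them.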
A variation on the PlayBluesOrgan example composition. This version demonstrates the Track Single Note node, which keeps track of which keys are pressed and selects one note to play based on its Note Priority input. By using the "Last Note" priority, you can hold one key while tapping another, and when you release the second key the tone switches back to the first key, allowing you to perform trills.
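For anyone curious how "Last Note" priority behaves, here is an illustrative reimplementation of the idea in C (not Vuo's actual node code): held notes are kept in press order, and the note to sound is always the most recently pressed one still held.

```c
// Illustrative last-note-priority tracker (not Vuo's implementation).
// Releasing the newest key falls back to the key still held underneath it,
// which is what makes trills work.
#include <stddef.h>

#define MAX_HELD 16

typedef struct { int notes[MAX_HELD]; size_t count; } HeldNotes;

void notePressed(HeldNotes *h, int note)
{
    if (h->count < MAX_HELD)
        h->notes[h->count++] = note;
}

void noteReleased(HeldNotes *h, int note)
{
    for (size_t i = 0; i < h->count; ++i)
        if (h->notes[i] == note)
        {
            for (size_t j = i; j + 1 < h->count; ++j)
                h->notes[j] = h->notes[j + 1];   // close the gap
            h->count--;
            break;
        }
}

// The note to play under last-note priority, or -1 if nothing is held.
int currentNote(const HeldNotes *h)
{
    return h->count ? h->notes[h->count - 1] : -1;
}
```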
Currently we have the audio-samples port type for audio. This port type is a list of samples that make up the sample buffer.
This means every node that processes audio needs to read and re-render an audio buffer.
This feature request is for an audio object similar to 3D objects. That is to say, there are no samples within the audio object, only audio DSP code that is "inserted" into the audio renderer, where there is a single audio buffer.
This would be similar to gen~ in Max/MSP and other software that allows deep access to the audio buffer.
This would also allow a user to make an advanced synth or sound-design generator out of a combination of very simple nodes.
Also, for audio effects that do need an audio buffer (for example a delay effect), we could still use the output of the audio object's renderer with other audio-buffer nodes, in a similar way that rendering layers to an image allows layers to be rasterized.
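To make the idea concrete, here is a hedged sketch of what an audio object could look like: each node contributes a small DSP callback plus its own state, and the renderer runs the chain of callbacks over one shared buffer. The struct and function names are invented for illustration, not a proposed Vuo API.

```c
// Illustrative only: an "audio object" carries DSP code and state rather
// than samples. The renderer chains the callbacks over a single buffer,
// so intermediate nodes never allocate or copy sample buffers of their own.
#include <stddef.h>

typedef struct AudioObject AudioObject;
struct AudioObject
{
    // Processes sampleCount samples in place inside the renderer's buffer.
    void (*process)(AudioObject *self, float *buffer, size_t sampleCount);
    void *state;          // e.g. oscillator phase, filter memory
    AudioObject *next;    // next object in the chain
};

// The one place a real sample buffer exists: the audio renderer.
void renderAudioObjects(AudioObject *chain, float *buffer, size_t sampleCount)
{
    for (AudioObject *obj = chain; obj != NULL; obj = obj->next)
        obj->process(obj, buffer, sampleCount);
}
```

A buffer-based effect like a delay would then sit downstream of the renderer, consuming ordinary audio-samples output, much as a rasterized image sits downstream of layer rendering.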
Currently, it is my understanding that a node accepts data when it fires. When an audio node fires, it can currently input and output an audio buffer. However, when MIDI data arrives at the node, it has to wait until the next node execution to enter the node. It is my understanding that other software employs a MIDI buffer to let the node access this data between fires, so as to allow real-time audio/MIDI performance (MIDI instruments).
Possibly Vuo could implement this in a new and exciting way. There are many possibilities; one is to timestamp each event (MIDI and otherwise) to allow nodes to access sub-buffer timing if needed. This extra piece of data could be added to events at the port-type level. That way, if a node wants sample-accurate timing, it would be possible.
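A hedged sketch of the timestamping idea: each event carries a sample offset into the current audio buffer, so a synth node can render up to the event with its old state and the rest of the buffer with the new note. The types and callbacks below are hypothetical, not part of Vuo's port-type system.

```c
// Hypothetical timestamped MIDI event (not an existing Vuo port type).
// sampleOffset locates the event inside the current audio buffer,
// enabling sample-accurate note starts instead of buffer-boundary starts.
#include <stddef.h>
#include <stdint.h>

typedef struct
{
    uint8_t  status;        // e.g. 0x90 = note on, channel 1
    uint8_t  data1;         // note number
    uint8_t  data2;         // velocity
    uint32_t sampleOffset;  // position of the event within the buffer
} TimestampedMidiEvent;

// Render a buffer in slices, applying each event at its exact offset.
// Assumes events are sorted by sampleOffset.
void renderWithEvents(float *buffer, size_t sampleCount,
                      const TimestampedMidiEvent *events, size_t eventCount,
                      void (*applyEvent)(const TimestampedMidiEvent *),
                      void (*renderRange)(float *, size_t))
{
    size_t pos = 0;
    for (size_t e = 0; e < eventCount; ++e)
    {
        size_t offset = events[e].sampleOffset;
        if (offset < pos)
            offset = pos;
        if (offset > sampleCount)
            offset = sampleCount;
        renderRange(buffer + pos, offset - pos);  // audio up to the event
        applyEvent(&events[e]);                   // then apply the event
        pos = offset;
    }
    renderRange(buffer + pos, sampleCount - pos); // remainder of the buffer
}
```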