I'm starting to get a better understanding of how to manipulate lists, but I'm looking for tips on how to convert a real list of frequencies and their amplitudes into a list of 3D points, where each point's coordinates or shaded color values are affected by its originating frequency and amplitude data.

For example, I've tried feeding the "Calculate amplitude for frequencies" real list output into individual 2D points, but they all stack on top of each other. So I tried creating a parametric mesh of points to get an overall starting shape as a guideline for spacing out a list of 3D points, but I'm not sure how to manipulate individual points in the mesh. Is that doable?

### I've tried assigning the

I've tried assigning the "Calculate amplitude for frequencies" real list output to create individual 2D points but they all stack on top of each other

I'm not sure I understand what you mean.

### Sorry I'm not being clear.

Sorry I'm not being clear.

The goal is to create a list of 3D points from a list of amplitudes across all the audio frequencies and "map" these 3D points spatially (distributed around a sphere for example). Each 3D point placement and possibly individual shader values would be affected by its own amplitude data.

### stack on top of each other

stack on top of each other

That part I don't understand.

### Ah I see. I was able to

Ah, I see. I was able to generate a list of 2D or 3D points from the list of amplitudes across frequencies, but I'm struggling to assign coordinates to each point based on its amplitude or frequency so that the points are spaced out along the XYZ axes. The Show Frequencies example is informative, but the placement along the X axis is just a linear curve, not based on a dynamic variable from the sound itself. Is that any clearer?

### Maybe you could generate a

Maybe you could generate a list of points in the exact shape of a sphere (or whatever), then use the result of `Calculate Amplitude for Frequencies` to perturb each point.

There's probably some formula you could plug into `Calculate List` or `Make Parametric Points` to generate points along the surface of a sphere. `Make Parametric Grid Points` could also be useful.

Once you generate the list of points, you could feed that into an `Add Lists` node and have the other input come from `Calculate Amplitude for Frequencies`.
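Outside of Vuo, the math behind that generate-then-perturb idea can be sketched in Python. This is purely illustrative: the point distribution (a Fibonacci lattice) and the amplitude values are assumptions, standing in for whatever `Make Parametric Points` and `Calculate Amplitude for Frequencies` would actually output.

```python
import math

def sphere_points(n):
    """Distribute n points roughly evenly on a unit sphere (Fibonacci lattice)."""
    golden = math.pi * (3 - math.sqrt(5))  # golden angle between successive points
    pts = []
    for i in range(n):
        y = 1 - 2 * (i + 0.5) / n          # y descends from ~1 to ~-1
        r = math.sqrt(1 - y * y)           # radius of the horizontal circle at y
        theta = golden * i
        pts.append((r * math.cos(theta), y, r * math.sin(theta)))
    return pts

def perturb(points, amplitudes, strength=0.5):
    """Push each point outward along its own direction by its band's amplitude."""
    out = []
    for (x, y, z), a in zip(points, amplitudes):
        s = 1 + strength * a  # amplitude 0 leaves the point on the sphere
        out.append((x * s, y * s, z * s))
    return out
```

Adding an amplitude-scaled copy of each point's direction vector is what the `Add Lists` wiring above amounts to.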

Then you could feed the resulting list into the Transforms port of `Copy 3D Object (TRS + Material)`, and feed a list of colors into the Materials port that is also controlled by the output of `Calculate Amplitude for Frequencies`.

Or maybe another way to approach the problem would be to use `Displace 3D Object with Image`. You could use the result of `Calculate Amplitude for Frequencies` to draw points in an image (appropriately distorted to map onto a sphere), and use that to distort your sphere. Similarly, you could draw points whose color is controlled by the result of `Calculate Amplitude for Frequencies`, and use `Shade with Unlit Image` to color points or areas on the sphere.
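As a rough sketch of the image route: the amplitude list has to become pixels before it can displace anything. Here's one hypothetical way to resample a 0-1 amplitude list into a single grayscale pixel row (Python; the resampling strategy and width are assumptions, not what any Vuo node does internally):

```python
def amplitudes_to_row(amplitudes, width):
    """Resample a list of 0-1 amplitudes into one row of 0-255 grayscale pixels."""
    n = len(amplitudes)
    row = []
    for x in range(width):
        a = amplitudes[min(x * n // width, n - 1)]  # nearest-neighbour resample
        row.append(int(round(a * 255)))
    return row
```

Repeating or interpolating that row vertically would give a full displacement map.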

### What are you planning to make

What are you planning to make? Cymatics and oscilloscopes are perhaps better handled in a shader of some sort. At least I haven't found a satisfactory way to emulate them in Vuo (which doesn't necessarily mean one doesn't exist). From the way you pose the question, though, it seems you might be attacking the problem from the wrong angle. Since audio input is a one-dimensional value from 0 to 1, it doesn't lend itself well to XY(Z) manipulation in its raw form. What it does do well, however, is scaling. So if you start by making a circle/sphere, you can apply the audio values as scaling for those points, either by adding/multiplying lists, or by scaling the points that go into them.

I'm attaching two compositions I whipped up quickly to demonstrate two ways of doing it, although there are a lot more. I suspect that if you go into 3D space with them, the z-axis (provided the object has its origin at 0,0,0) will be the best one to scale.
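The scale-the-circle idea above, in plain Python (a sketch under made-up amplitude values; the `base` offset is an assumption so that silence still draws the circle rather than collapsing it to the origin):

```python
import math

def circle_points(n, radius=1.0):
    """n evenly spaced points around a circle in the XY plane."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def scale_by_amplitude(points, amplitudes, base=1.0):
    """Scale each point outward by its 0-1 amplitude, like multiplying lists."""
    return [(x * (base + a), y * (base + a))
            for (x, y), a in zip(points, amplitudes)]
```

This is the list-multiplication version; the list-addition version would offset each point along its radial direction instead.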

### Nice contributions @Magneson.

Nice contributions, Magneson. A virtual Cymascope is definitely on our to-do list! What are you referring to by using a shader?

I have been making good progress with manipulating the nodes of a 3D sphere via scaling (as you emphasized) and will share results soon. However, modeling sound wave phenomena would definitely be the holy grail for digital cymatics!

### The shader approach is so

The shader approach gives you more control over the individual pixels, and maybe a different approach to audio. As it stands now, I'm not sure how phase is handled between the L/R channels.

OscTest.mov

Here I'm using Jerobeam Fenderson's oscilloscope music (no audio, sorry; check out http://oscilloscopemusic.com for some absolutely fantastic visuals), which is tailored for this sort of thing. Although I can see some definite shapes in there, it doesn't seem to line up with what I get from Vuo.

However, when sending two sine waves and adjusting the sample delay for one of them, I can do this:

OscTest2Sine.mov

But again the phase seems off:

So Vuo might be capable of an approximation, initially at least. It would be nice to have a comment from Jaymie about the phasing; maybe there is something to clear up :)

### Very nice @Magneson. Not

Very nice, Magneson. I'm not totally clear on how you're generating those movie tests. Are those in Vuo? Or is it oscilloscope music software (I'm not seeing which one it is on http://oscilloscopemusic.com/software.php)?

As for shaders, it sounds like you're referring to coding a custom shader for Vuo? That's probably beyond our capabilities at the moment, but sometimes we are fortunate to have talented programmer volunteers working with us. What are the main tips/resources we can look at to get programmers started in the right direction; the Vuo SDK?

### Yep, those are Vuo with point

Yep, those are Vuo with point meshes. The reference image is from Reaktor, which generates the sine waves. To get it, I split the `Receive Live Audio` output into first/last in list, enqueued the values, and merged them into an X/Y list. I then triggered the `Enqueue` node from a `Fire Periodically` node that probably fires too fast for stable/efficient usage.
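For reference, the first/last-to-X/Y pairing described above, with the sample delay from the earlier post, amounts to something like this Python sketch (the sample counts and quarter-period delay are made-up illustration values):

```python
import math

def xy_points(left, right, delay=0):
    """Pair each left-channel sample (X) with a delayed right-channel sample (Y)."""
    return [(left[i], right[i + delay]) for i in range(len(left) - delay)]

# Feeding the same sine wave to both channels: with zero delay the trace is a
# diagonal line; delaying one channel a quarter period (a 90-degree phase
# shift) turns it into a circle, which is why the phasing matters so much.
n = 64  # samples per period
sine = [math.sin(2 * math.pi * i / n) for i in range(4 * n)]
line = xy_points(sine, sine)            # x == y everywhere
circle = xy_points(sine, sine, n // 4)  # x^2 + y^2 == 1 everywhere
```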

Shaders shouldn't be too big of a problem in Vuo; they're quite easy to set up once you understand where everything is supposed to go (though it probably helps to have someone who knows what they're doing, which I don't). The SDK and the source are great resources, as are Shadertoy, Paul Bourke's image filters, and a bunch of other websites I don't remember.

### @Magneson, how does this

Magneson, how does this composition compare to your test?

### The goal is to create a list

The goal is to create a list of 3D points from a list of amplitudes across all the audio frequencies and "map" these 3D points spatially (distributed around a sphere for example). Each 3D point placement and possibly individual shader values would be affected by its own amplitude data.

You could accomplish this by first mapping the frequencies to an image, then projecting that onto a mesh. Here's an example I made up (it uses a custom node `Make Image From Pixels`, available here).

The interesting bits are tinted in cyan.

### @Jaymie, I thought I had

Jaymie, I thought I had saved it, but apparently not. The general concept was the same, I think. I at least used the first/last approach on the `Receive Live Audio` list. For the oscilloscope shape, I put the `Enqueue` node into overdrive with `Fire Periodically` set to fire as fast as possible.