The operating system imposes a limit on the number of "files" that a process can open. "Files" includes not only regular files but other things like network sockets, pipes, and the "kqueue" mentioned in the error message.
Each NDI send/receive node adds to the number of open "files" — because of course it needs those to be able to communicate with other nodes or devices.
The reason your composition doesn't crash when run from the Vuo editor is that we happen to have already increased the number of open files it is allowed (to fix a different problem). To resolve this bug report, we'll similarly increase the number of open files allowed for exported apps. That will fix your crash.
As a workaround until then, you can run this command in Terminal to increase the number of open files allowed:
sudo launchctl limit maxfiles 1024 unlimited
(This is a system-wide setting, affecting all processes and not just your NDI app. It will reset after you reboot, unless you do more work to make it stick.)
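For what it's worth, a process can also raise its own soft open-file limit at startup, without sudo and without affecting the rest of the system — the soft limit can be raised up to the hard limit by any process. This is only a sketch of that idea (not Vuo's actual implementation), shown here with Python's standard `resource` module; the helper name `raise_nofile_limit` is made up for illustration:

```python
import resource

def raise_nofile_limit(target):
    """Raise this process's soft open-file limit toward `target`.

    The soft limit can be raised freely up to the hard limit;
    going past the hard limit would require privileges, so we cap there.
    Returns the soft limit in effect afterwards.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        target = min(target, hard)
    if target > soft:  # only ever raise, never lower
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
```

An exported app doing something like this at launch sidesteps the system-wide `launchctl` change, at the cost of only helping that one process.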
Magneson hey, not rude; we're just coming from different directions. I had thought about locking the movement to the layer bounds but hadn't managed to get it to work yet. Your example comp is exactly what I was thinking. Thanks for taking the time to share the method, cheers.
Not to be rude (is saying this in itself making this seem rude?), but I'm not sure I quite understand what you want to achieve. If it is a distance thing inside a layer, it is pretty easy to get with the calculate nodes. If it's about not changing values outside a layer (an XY field, for instance), you can lock it with a Select Input and a Hold node.
You can do this without the calculate node as well, but it does make it a bit more tidy. The smoothstep() function is really neat for scaling values and removes any necessity for 2D translations and such when moving around the objects/layers. Check the attached comp for the details.
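For anyone unfamiliar with it, smoothstep() is the standard Hermite easing curve (the same one GLSL provides): it clamps a value to a range and remaps it to 0–1 along a smooth S-curve, which is why it's handy for scaling positions inside a layer's bounds. A quick Python sketch of the same math, just for illustration:

```python
def smoothstep(edge0, edge1, x):
    """Map x from the range [edge0, edge1] to [0, 1] along a smooth S-curve.

    Values outside the range are clamped, so the output never over- or
    undershoots -- handy for fading or scaling based on distance.
    """
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

# e.g. fade a layer in as the mouse moves from x=0.2 to x=0.8:
# opacity = smoothstep(0.2, 0.8, mouse_x)
```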
The thing about UI elements/nodes is that a lot of the stuff currently in the feature requests can already be done with the existing nodes as-is. Adding features/nodes will often clutter the node library, and be for very specific scenarios. This means that adding a specific function will work for a user wanting X, but not for a user wanting X.b without adding yet another port/function to an existing node — or building the function with other nodes. In the case of the latter solution, X could have been achieved from the start by using a sub-composition tailored to that user's wishes, which could then be shared and modified by user B. Then team Vuo could focus on stability improvements for existing nodes, and on adding more spicy base features for the things that are hard to do instead. :)
Magneson not sure; in your comp the mapping is taking place inside a separate scaled window from the output. In my case I only have one window, with a target layer for mouse interaction. I had a quick play and can see it might be a workaround, but it would be more elegant to have the 'restrict to layer' option duplicated across all mouse interactions, to reduce node clutter and to make things less complicated. Nice layout for the mapping comp BTW.
It does help to have some music/production theory & practice under the belt 😅! I have used NI's Reaktor for quite a few years for custom stuff, so I've never gotten into SuperCollider (although I've heard it mentioned a few times). I find the Reaktor "core" approach to be pretty cool. You get a sample-rate clock, a few basic math operators, some basic memory modules, and off you go. All of the higher-level stuff is made with the most basic blocks/nodes in nested macros. This way you can really dig down the rabbit hole in the same framework if you wonder how something works and/or you want to modify or build on it. I haven't looked at the audio-generating capabilities of Vuo, but I assume all standard additive/subtractive synthesis options should be viable.
Magneson for the visual people this is where it gets hard; I have little if any music theory but have worked with beats as a VJ for 30+ years (not quite retired yet). Cut me in half and you will find a 120bpm tattoo in there somewhere :-) I guess the natural framework for this would be something like SuperCollider, but I like the idea and challenge of using Vuo's limited audio synthesis. Joëlle's and my projects will have the same starting point but will undoubtedly end up being unique.
@Joëlle One thing I might suggest is narrowing the sample area or at least making this flexible. It could help find a sweet spot for analysis instead of averaging out the full frame. I've used this method for analysing live video feeds and by having a small sample area you can cut out a lot of noise and create a smoother data stream. This could be a mouse click to select the sample area and sliders to adjust the height and width.
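To illustrate the sample-area idea outside Vuo: assuming each frame arrives as a NumPy array, a hypothetical helper (all names here are made up for illustration) could average only a small rectangle around a chosen point instead of the whole frame, which is exactly what cuts the noise:

```python
import numpy as np

def sample_region_mean(frame, cx, cy, width, height):
    """Average the pixels inside a rectangle centred on (cx, cy).

    `frame` is a 2-D (grayscale) or 3-D (H x W x channels) array.
    The rectangle is clipped to the frame edges, so a huge width/height
    degrades gracefully into a full-frame average.
    """
    h, w = frame.shape[:2]
    x0 = max(0, cx - width // 2)
    x1 = min(w, cx + width // 2 + 1)
    y0 = max(0, cy - height // 2)
    y1 = min(h, cy + height // 2 + 1)
    return float(frame[y0:y1, x0:x1].mean())
```

The mouse click would supply (cx, cy) and the two sliders would drive width and height, exactly as described above.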
This noisy example (in the experimental music domain) is based on a simple frequency-from-image method with the visual created in VDMX, syphoned to Vuo to generate the frequency, then the waveform layer is syphoned back to VDMX, the frequency-from-image audio is used to animate the objects in VDMX (using the LoopBack app for audio routing)... and around it goes. Not sure it's a nice noise but I can see where I'm going now :-)