keithlang True! But then again, you are de facto quantized in any digital application. I might have a strict definition of what a delay is (a repetition of a previous event), but you can't delay an event that never happened, and you can't play it back at a point in time where no frame is rendered.
This means that if you delay something by, say, 0.95 seconds in an application running at a 30 Hz refresh rate, it will visually play back after either about 0.93 or 0.97 seconds, the nearest frame boundaries.
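A minimal sketch of that quantisation (the snap-to-frame behaviour here is an illustrative assumption, not Vuo's actual scheduler):

```python
# Which frame boundaries bracket a requested 0.95 s delay at 30 Hz?
import math

def quantised_delay(delay_s, refresh_hz):
    """Return the two frame times surrounding the requested delay."""
    frame = 1.0 / refresh_hz
    earlier = math.floor(delay_s / frame) * frame
    later = math.ceil(delay_s / frame) * frame
    return earlier, later

earlier, later = quantised_delay(0.95, 30)
print(round(earlier, 4), round(later, 4))  # 0.9333 0.9667
```

So the delayed event can only appear at frame 28 (≈0.933 s) or frame 29 (≈0.967 s).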
From a UX perspective, a frame-independent presentation of a delay would certainly be neater, but I think you can still visually achieve the same results.
If the hypothetical action at 0.95 seconds were really important to get right, another option is to subtract your delay from the source time and feed that to a duplicate of the node that generates the control data. As far as I know, noise nodes in the same composition are not random relative to each other; given the same time/position input, they produce the same result. This means you can visually achieve a delay-like effect with data that isn't present in the original flow.
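The trick relies on noise being a pure function of its input. A toy Python sketch (the hash-style formula is purely illustrative, not Vuo's actual noise):

```python
# A duplicate "node" fed (time - delay) regenerates past output with no buffering.
import math

def noise(t):
    """Toy stand-in for a noise node: a pure, deterministic function of time.
    Input is rounded to 1 ms so floating-point drift can't break equality."""
    ms = round(t * 1000)
    x = math.sin(ms * 0.0129898) * 43758.5453
    return x - math.floor(x)  # fractional part, in [0, 1)

# What the "original" node output at t = 2.05 s...
then_value = noise(2.05)

# ...is reproduced 0.95 s later by a duplicate node fed (time - 0.95):
assert noise(3.0 - 0.95) == then_value
```

Because the same input always yields the same output, the offset copy effectively "replays" values the original node produced in the past.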
jersmi Yup, I think the enqueue node will enqueue channels of 512 samples each, if I remember right. I'm not sure a good solution for delaying audio exists yet either. I think the "easiest" approach would be to write the raw data to a data buffer of some sort, and then play it back from a set position in that buffer. Or make a feature request for audio delay :)
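A minimal sketch of that buffer idea, assuming 512-sample blocks and a 48 kHz sample rate (both assumptions here; this isn't Vuo's audio API):

```python
# Delay audio by buffering incoming 512-sample blocks and reading them
# back later, quantised to a whole number of blocks.
from collections import deque

BLOCK = 512          # samples per channel block (assumed)
SAMPLE_RATE = 48000  # Hz (assumed)

class BlockDelay:
    """Delays audio by roughly delay_seconds, in whole 512-sample blocks."""
    def __init__(self, delay_seconds):
        self.delay_blocks = round(delay_seconds * SAMPLE_RATE / BLOCK)
        # Pre-fill with silence so early reads return zeros until the delay elapses.
        self.buffer = deque([0.0] * BLOCK for _ in range(self.delay_blocks))

    def process(self, block):
        """Push one incoming block; return the block from delay_seconds ago."""
        self.buffer.append(block)
        return self.buffer.popleft()
```

Note the delay is quantised to block boundaries (~10.7 ms steps at 48 kHz), which echoes the frame-quantisation discussion above.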
Understood. My main thought here would really be for a viewer app, so that iOS devices could be used for deployment, such as in an art installation, but I’m not sure that Apple’s guidelines around “running code” could be adhered to. Agreed that distribution of apps is a pain.
I realize I would need to join the Apple Developer Program to distribute this, at a cost of US$99/year. Would there be any other expenses and fees I'm not aware of?
If selling your app, an additional cost would be the 30% cut that Apple takes (or 15% for approved small businesses). Besides that, there's sales tax / VAT, business registration fees, accountant fees, etc.
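For a rough sense of the numbers, here's the commission arithmetic alone; taxes, registration, and accountant fees vary by region and are ignored (illustration only):

```python
# Per-sale proceeds after Apple's commission (30%, or 15% for approved
# small businesses). Everything else -- VAT, fees -- is out of scope here.
def net_after_apple(price, small_business=False):
    """Return proceeds per sale after Apple's cut."""
    rate = 0.15 if small_business else 0.30
    return price * (1 - rate)

print(net_after_apple(9.99))                       # standard: ~6.99
print(net_after_apple(9.99, small_business=True))  # small business: ~8.49
```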
To get through the app submission process, you'd either need to put in significant time toward understanding it and going through it with command-line tools, or you'd need to hire a developer (such as Team Vuo) to walk you through it.
There would also be the cost of your own time for getting through Apple's review process and updating the app from time to time as required by Apple.
The message I get when exporting an app says that it "requires macOS on an Intel (X86-64) CPU".
With the latest version of Vuo, you should only see that message if your composition contains nodes that aren't compatible with Apple Silicon, specifically Leap or NDI nodes. If you see it in any other case, I'd appreciate it if you could create a bug report so we can figure out what's going on. Normally, the apps exported by Vuo 2.3.0+ are "universal" in Apple's parlance, meaning they contain both Intel and ARM binaries and can run on either.
funwithstuff, thanks for pointing out iOS, since I hadn't mentioned it. Here's our team's thinking on that…
Our ballpark estimate is that the time needed to implement deploy-to-iOS would lie somewhere between deploy-to-Linux and deploy-to-Windows. iOS is a close relative to macOS in some respects (Apple frameworks for things like video and multithreading), while Linux is closer to macOS in others (user interface paradigms, device support).
Besides that, the other main difference is that iOS has stricter requirements than macOS for the apps that it allows to run, and what those apps are allowed to do. In order to distribute any iOS app, you have to purchase a $99 USD/year Apple Developer membership. The process of getting an app to run on a device — whether you're distributing it through the App Store or running it on a predefined set of devices — is quite complicated even for seasoned developers. We could potentially automate parts of it and provide instructions for the rest, with the understanding that it would require continual support and maintenance, since sometimes Apple makes changes to the process and sometimes there are bugs you have to work around.
We're still thinking it would be best to implement deploy-to-Linux first, not least because it would make it possible to run Vuo compositions on hardware other than Apple's. When we reassess after deploy-to-Linux is complete, we'll take another look at deploy-to-iOS and see how it compares to the other possible next steps at that point (deploy compositions to Windows, edit compositions on Linux, …).
Magneson This assumes you don't mind having your events time-quantised. Conceptually, I think Chris is looking for something where the timecode of the incoming event is preserved when it comes out the other end of the pipe.
We've accepted this. It seems that the pixelbuffer gets distorted when the width is greater than 1024 and not a multiple of 32. As a temporary workaround, you could use a width that's a multiple of 32 (then Resize Image or Crop Image if needed).
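A quick sketch of the workaround arithmetic (the helper names are hypothetical, not part of Vuo):

```python
# Round a desired width up to the next multiple of 32, then Resize Image
# or Crop Image can bring it back down afterwards.
def needs_workaround(width):
    """Per the bug above: distortion occurs when width > 1024 and
    width is not a multiple of 32."""
    return width > 1024 and width % 32 != 0

def padded_width(width):
    """Smallest multiple of 32 that is >= width."""
    return ((width + 31) // 32) * 32

print(needs_workaround(1080))  # True
print(padded_width(1080))      # 1088
```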