I am specifically interested in hand tracking, i.e. imagine a node similar to the Find Faces in Image node. 'Hand and Body Pose detection' came as part of Big Sur.
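For reference, here's a minimal sketch of what such a node could call under the hood, using Apple's Vision framework API from Big Sur (`VNDetectHumanHandPoseRequest`). The blank 64x64 input image is a stand-in assumption; a real node would feed in the incoming frame.

```swift
import CoreGraphics
import Vision  // hand pose detection requires macOS 11 (Big Sur) or later

// One request per frame; unlike the one-hand workaround, Vision can track several hands.
let request = VNDetectHumanHandPoseRequest()
request.maximumHandCount = 2

// Stand-in input: a blank 64x64 RGBA image (a real node would pass the actual frame).
let ctx = CGContext(data: nil, width: 64, height: 64, bitsPerComponent: 8,
                    bytesPerRow: 0, space: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
let image = ctx.makeImage()!

// Single-frame recognition: hand the image to Vision and run the request synchronously.
let handler = VNImageRequestHandler(cgImage: image, options: [:])
try handler.perform([request])

// Each observation is one detected hand; joints come back as named, normalized points.
for hand in request.results ?? [] {
    let joints = try hand.recognizedPoints(.all)
    if let tip = joints[.indexTip], tip.confidence > 0.3 {
        print("index fingertip:", tip.location)
    }
}
print("hands detected:", request.results?.count ?? 0)
```

Because the request works on single images, it would fit a per-frame node model (like Find Faces in Image) rather than requiring a continuously running companion app.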
One current workaround, running the HandPose OSC standalone app, is OK, but it is limited to one hand and doesn't have the option of sending single frames for recognition. It also has the downside of not being able to be included in an app export, and I assume it has some performance tradeoffs as a separate app.
In our testing so far, Vuo 2.2.0 works fine on macOS 11 and mostly works on ARM/M1 Macs under Rosetta. We aren't aware of any new issues specific to macOS 11. On ARM/M1, NDI doesn't work yet — NewTek hasn't yet released an ARM-compatible version of their SDK, and it doesn't work under Rosetta. We haven't tested everything though (e.g. FxPlug), so there may still be things that don't work on those systems.
We appreciate you sending the crash report, but we weren't able to diagnose this from the report. Is this happening reliably for you? If so, can you send us the composition and the nodes you were trying to package when this happens?
To begin with, Core ML currently has six very different model types. Our goal with Vuo is to keep nodes understandable and task-focused, and we think this is too broad for a single Vuo feature request. Can you submit requests for the specific things you'd like to be able to do using ML? We already have several feature requests for tracking in images (Camera tracking, Tracking blobs) that seem similar to the kinds of things you might want to do with Core ML.
We're currently working on making Vuo run natively on ARM-based Macs, but haven't yet gotten to the point of testing FxPlugs. Apple hasn't mentioned FxPlug's ARM status in their documentation or made a public statement about it. We'll know more after further testing.