Thanks Jaymie - I'll give it a try next time. I was scrambling at the last minute so ended up not using those compositions and the stream went well in the end. More exasperated with their silence than anything else, but thanks for responding so quickly 😊
No reply :( I'm quite concerned if I'm honest. There's no communication from VDMX at all, and using Vuo with VDMX is a pretty risky option at the moment, which is worrying. Some files randomly exhibit strange behaviour like cropping the output, which then breaks other files from then onwards, almost like something gets put into memory, and even restarting VDMX doesn't help - but it's completely inconsistent, as I've used the same files for two streams before. Not sure what to do at this stage if I'm honest; it's quite a bummer. When I've got a bit of time I can try to capture some of the behaviours, but my main concern is no reply for months now from them :( I've pasted the thread below, this is so frustrating :/
👨 Face Landmark Detection
We already have "Find Faces" with eye recognition, but what about a 3D mesh of the face, like TensorFlow's "Face Landmark Detection"? Would that be useful in Vuo? I don't know ;) Would we want to create some AR masks with Vuo? ;)
Can I create some feature requests for those? Or have I missed some that already exist?
A general question about those, from my limited understanding of the topic: do those TensorFlow models, for example, come pre-trained? Or do you have to train them yourself? Or is training them yourself also a possibility if you wanted to? I see on the Apple CoreML page that you can load trained models, but also create some with CreateML?
Regarding Apple's models, or libraries like TensorFlow's models, I guess it's again kind of the same question as Vulkan/OpenGL vs Metal? If a Windows version is still in the pipeline, the Vuo team would have to implement CoreML for Mac users and take on extra workload to implement different techniques for other platforms, so open-source multi-platform tools would require less effort?
Of course, some Apple tools are really optimised for Apple products, so I guess, as Jean Marie seems to say, it's about testing out performance and possibilities and finding the right balance? How does Apple's new Skeletal Tracking perform vs TensorFlow's, for example?
I've stumbled across what seem to be some very efficient technologies, like the Banuba ones, which of course come with a paid license; maybe those are the technologies used by Zoom for background removal (I thought Snapchat would use it too, but I see they acquired another startup in this domain for their technology).
I say this because sometimes the open-source and free libraries seem less performant, and I guess it's up to the team to find the ones that work best, but the TensorFlow ones seem to work pretty great!?
Of course I'd love to be able to implement such libraries and models myself into Vuo, but I can barely create stateless nodes using the given API functions; I can't even create stateful nodes or custom functions ;)
I remember Martinus asked some questions about implementing stuff, can't wait to see what he's coming up with ;)
And also there is a feature request about a tutorial to implement libraries. Can't wait ;)
I was able to mute or hear audio with a mouse click over the volume icon, but I wasn't able to control the volume. We suspect that YouTube requires mouse hover events in order to show the volume control and other settings. We're using Apple's underlying implementation, and that doesn't allow Vuo to send hover events to the web view.
Vuo captures images of the webpage using the most efficient method Apple provides.