Render Image to Window uses the refresh rate of the monitor you are rendering to. (I think.)
The Blackmagic inputs work at whatever framerate you have coming in from the camera.
As your cameras are running at different framerates, there is a conflict. (I'm pretty damn sure that's where the issue begins.)
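To make the conflict concrete: when the camera rate doesn't divide evenly into the monitor's refresh rate, some frames stay on screen longer than others, which reads as judder. Here's a rough sketch of the arithmetic (plain Python, nothing Vuo-specific; the function name is mine):

```python
# Count how many monitor refreshes each camera frame occupies.
# A 30 fps camera on a 60 Hz display gets a steady 2 refreshes per
# frame; a 25 fps camera alternates between 2 and 3, hence the stutter.
def refreshes_per_frame(camera_fps, monitor_hz, n_frames):
    counts = []
    for f in range(n_frames):
        start = f * monitor_hz // camera_fps        # first refresh showing frame f
        end = (f + 1) * monitor_hz // camera_fps    # first refresh showing frame f+1
        counts.append(end - start)
    return counts

print(refreshes_per_frame(30, 60, 5))  # [2, 2, 2, 2, 2] — smooth
print(refreshes_per_frame(25, 60, 5))  # [2, 2, 3, 2, 3] — uneven cadence
```

Same idea applies with two cameras feeding one render loop: whichever one doesn't line up with the clock gets uneven pacing.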
Are you rendering both windows to the same monitor? Have you tried it with two monitors?
Can you tell the windows how often to render with Fire Periodically, and make those rates match both the cameras' framerates and the monitors' refresh rates?
Maybe add a Hold Value with a Fire Periodically on the input, so that you are telling it to drop frames on the higher-FPS camera? Maybe take them both down to 25 fps.
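Conceptually, the Hold Value + Fire Periodically combination is "always keep the newest frame, and sample it on a slower, fixed clock." A minimal sketch of that sampling logic in plain Python (the function name is mine, not a Vuo API; integer math keeps it exact):

```python
# For each tick of a target_fps output clock, pick the index of the
# newest frame a source_fps camera has delivered by that time.
# Frames the clock never lands on are simply dropped (Hold Value keeps
# only the latest one anyway).
def resample_frames(source_fps, target_fps, n_outputs):
    return [(i * source_fps) // target_fps for i in range(n_outputs)]

# One second of a 30 fps camera sampled at 25 fps: 25 outputs,
# 5 source frames dropped along the way.
print(resample_frames(30, 25, 25))   # frame 5 is skipped: ... 4, 6, 7 ...

# Sampling faster than the source just repeats frames (a hold):
print(resample_frames(30, 60, 4))    # [0, 0, 1, 1]
```

Dropping down to a common rate like 25 fps means both cameras get this same steady treatment instead of fighting the render clock.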
Do you use Resolume? I believe you could easily set up both inputs as sources in the demo version to see if it works in that software.
I'll borrow the V2 again and try to answer your question about random and missing points this week sometime.
Although NI mate is good at what it does, it's just a little bit flaky; randomly shuffling its Syphon outputs is one such annoyance. I'm guessing what you see in your OSC monitor is another example of the flakiness. It would be so much better to have the skeleton tracking done within Vuo.
If I understand correctly from your other posts on the Skeleton Tracking feature request, you will likely not be implementing skeleton tracking from the Kinect, but from regular video cameras using one of the new ML libraries. That'll be great; other cameras are so much more flexible. If it's at least as reliable and accurate as what is currently available, it will be a boon! I guess the ML is only going to get better as it is used more and more.
Had a fault when trying to upload the attached file initially, as I'd moved it before uploading, and then the website would not allow me to link it a second time.
I think I've seen reports of the original bug previously, but searching for it returns pages and pages of links to the manual.
I also find searching through the bug reports to be overly laborious, as they are split into sections and I'd need to search through every section.
The width, height and time inputs are not in the same order as they are in the subcomp file.
This may be unrelated, but I made a more complicated subcomp with the Kinect creating a mask image and also outputting the RGB and depth images. When I run this into point grid/line grid and displace scene with image, it loses sync: the RGB image is way behind the other images. (Edit: this seems to happen only when I increase the number of points in my grid to over 200 × 150, so it's probably just my GPU struggling to keep up.)