eenixon's picture

OK. Just to get me organized here. Let's focus on the composition that Jaymie put together in the previous thread here:

I'll forgo the reactive stuff for the moment as irrelevant to my current problem. I'm still a newbie, so I have only a high-level understanding of Jaymie's code. One thing that's foggy is where and how the length or duration of the composition is determined. There is a Calculate node, but I don't see its relationship to duration, e.g., the duration of the audio file.

What would be most useful in my case would be a process that ends in one of two situations: (a) when the audio file being played finishes, or, alternatively, (b) when the video file being played finishes. I guess that means that "finished playback" would be used in relation to, or in place of, "Stop Composition" in Jaymie's example. Are there subtleties I'm not getting here?

Now the "Make Audio/Video Frame" nodes are putting timestamps on each frame; this looks good. But is it really necessary in this use case, given that the video and audio are being merged, or synched, in the "Save Frames to Movie" node? On the other hand, if for some reason you wanted to create two outputs -- one audio, one video -- perhaps the timestamp is essential (if you actually want to reunite the two in some later process, e.g., an NLE edit).
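For what it's worth, here is how I picture those per-frame timestamps working: each video frame and audio buffer is tagged with its elapsed time in seconds since the start of the movie. This is a minimal Python sketch of that idea, assuming a fixed frame rate and buffer size; the function names are my own, not Vuo's.

```python
# Sketch (assumption): a timestamp is just elapsed seconds since the
# start of the movie, derived from the frame index and frame rate.

def video_frame_timestamps(frame_count, fps):
    """Timestamp (in seconds) for each video frame at a fixed frame rate."""
    return [i / fps for i in range(frame_count)]

def audio_buffer_timestamps(buffer_count, samples_per_buffer, sample_rate):
    """Timestamp (in seconds) for each audio buffer of a fixed size."""
    return [i * samples_per_buffer / sample_rate for i in range(buffer_count)]
```

If both streams are stamped from the same clock like this, whatever node merges them only has to match nearest timestamps, which would explain why a single merged output may not strictly need them but two separate outputs would.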

So I have some questions:

  • Is a Vuo timestamp like, or even conformant with, timecode as described here: ?
  • Is there a way of timestamping a movie that is being exported in non-real-time?
  • Is there a way of writing out an audio file containing timestamps? Presumably the same timestamps as those on an exported video.
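On the first question: as far as I can tell, a Vuo timestamp is just a count of seconds from the start of the composition, whereas SMPTE-style timecode is an HH:MM:SS:FF frame count. If that's right, converting between them at an integer frame rate is mechanical (ignoring drop-frame rates like 29.97). A hedged sketch:

```python
# Sketch (assumption): Vuo timestamps are plain seconds; timecode is
# non-drop-frame HH:MM:SS:FF at an integer frame rate.

def seconds_to_timecode(seconds, fps):
    """Convert a timestamp in seconds to non-drop-frame HH:MM:SS:FF."""
    total_frames = round(seconds * fps)
    ff = total_frames % fps
    total_seconds = total_frames // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_seconds(tc, fps):
    """Inverse conversion: HH:MM:SS:FF back to seconds."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 3600 + mm * 60 + ss) * fps + ff) / fps
```

So a timestamp of 3723.5 seconds at 24 fps would read as 01:02:03:12. Whether Vuo actually writes anything like this into exported files is exactly what I'm asking.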

Thanks again for your patience as I do some noisy gear/paradigm shifting. ...edN

eenixon's picture

Thanks for hanging in on this; I appreciate it. This is getting a bit long in the tooth and I'm going to have to go back and review the project and try again to understand some of the concepts.

Give me a few days and I'll try to get something back here that, at best, is a success or, alternatively, clarifies the issue I think I'm having.

Thanks again. ...edN

eenixon's picture

I'm grateful for your input and suggestions.

To answer your second question: I want timecode because it is offered as a feature in this software, because it promises a possible enhancement to my workflow, and, finally, as I hope I've mentioned earlier, because I don't understand how it works based on my reading of the documentation and in relation to my previous experience shooting and editing video.

If you have ever done video work using a dual-system approach, i.e., video to camera, audio to external recorder, you know that synching audio to video is a rudimentary first step in the editing process. By convention, there are two methods: (a) comparing the audio waveforms of the scratch track on the video to the high-quality audio from the external recorder, or (b) imprinting matching timecode on both video and audio and then merging the two in the editor. Lining up video visually to an audio file, if that is what you're proposing, is fraught -- particularly in a scenario where you might want to modify frame rates and/or use a different piece of video as part of the edit.
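Just to make method (a) concrete: waveform sync amounts to finding the lag at which the scratch track and the clean recording correlate best. A toy pure-Python sketch (real tools like an NLE's auto-sync use FFT-based correlation on full-rate audio, not this brute-force loop):

```python
def best_lag(scratch, clean, max_lag):
    """Return the lag (in samples) of `clean` relative to `scratch`
    that maximizes their cross-correlation. Toy brute-force version."""
    def corr_at(lag):
        total = 0.0
        for i, s in enumerate(scratch):
            j = i + lag
            if 0 <= j < len(clean):
                total += s * clean[j]
        return total
    return max(range(-max_lag, max_lag + 1), key=corr_at)
```

For example, if the clean recording is the same waveform delayed by three samples, `best_lag` recovers an offset of 3. This works only because the scratch track exists; as I note below, Vuo's exported video has no scratch audio, which is why I keep coming back to timecode.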

All that aside, my reason for posting here is not to critique my way of working or my creative goals. It is to try to get some clarity around how some of the features of Vuo, a truly estimable piece of software, work. I can read what the documentation says the features do but the docs, probably because of my ignorance of the product, do not tell me what the features mean or imply in terms of the usage scenarios I've been describing.

Am I being too obtuse? Nit-picky? My apologies.

eenixon's picture

Yes, thanks. Synching audio to video in post is the only current alternative, I think. The question for me is about timecode. I don't understand the what and how of timecode in Vuo. It appears to be the only synch choice, given that there is no scratch audio in the video output against which a waveform synch might operate.

So is this the procedure?

  • export the video NRT with timecode attached -- somehow
  • write out the audio with timecode in real time or, alternatively, as a video-less movie in NRT -- at the same time as the video...
  • finally import to NLE and synch using timecode.
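Assuming both files did carry timecode from a shared clock, the final NLE step would reduce to computing the offset between their start timecodes, something like this sketch (non-drop-frame, integer fps assumed):

```python
# Sketch (assumption): both files carry non-drop-frame timecode from
# the same clock; the NLE just shifts one by the start-time difference.

def tc_to_frames(tc, fps):
    """Total frame count represented by an HH:MM:SS:FF timecode."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def sync_offset_frames(video_start_tc, audio_start_tc, fps):
    """How many frames the audio must be shifted to line up with the video."""
    return tc_to_frames(video_start_tc, fps) - tc_to_frames(audio_start_tc, fps)
```

E.g., video starting at 01:00:00:12 against audio starting at 01:00:00:00 at 24 fps means sliding the audio 12 frames later. Whether Vuo can actually write that shared start timecode is the open question.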

I haven't been able to find anything in the docs about what type of timecode is written to these files; whether it consumes one of the audio tracks or is put into metadata; how it is 'jammed' to the output files; etc. So I'm feeling like I've been flying blind on this and have lost the appetite for just banging around until something happens.

eenixon's picture

Thanks, Jaymie. I'm glad to see the feature request is ongoing although I don't yet understand the voting process. I'll look at all that more closely.

I tried the Frames to Movie and Make Audio/Video Frame route yesterday. But I don't have a handle on how the timestamp works in detail.

I'll look at it again. I assume in using Export that I'll end up with two files: one with video only and the other with audio only?

Can you help me understand what the shutter angle slider is used for? In other contexts, i.e., on DSLR or mirrorless cameras, shutter angle is an analog for shutter speed. If memory serves, the shutter-speed denominator is roughly twice the frame rate at a 180° shutter. But I don't see how it plays into the Export process.
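My working assumption, which I'd like confirmed: in an offline export there is no physical shutter, so the shutter angle presumably just controls how much motion blur is simulated per frame, via the implied exposure time. The arithmetic I have in mind:

```python
def exposure_time(shutter_angle_degrees, fps):
    """Exposure time per frame implied by a shutter angle.
    E.g., the classic 180-degree shutter at 24 fps implies 1/48 s."""
    return (shutter_angle_degrees / 360.0) / fps
```

So a wider angle would mean a longer simulated exposure and more motion blur, and a narrower angle a crisper, more staccato look; that's my guess at what the slider is doing, not a statement of how Vuo implements it.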

Thanks for your help. ...edN