I had a curious thing happen when using Final Cut and Vegas to edit a project and I wanted to see if someone could maybe explain what happened.
I captured some old VHS through my Canopus ADVC110 converter to Final Cut. I edited, and then printed to Digital8 tape. I then captured the D8 tape in Vegas to make a DVD. During print to tape, some glitches appeared, so I used the Final Cut timeline and just rendered those segments to a QuickTime DV codec file.
The video I captured from tape in Vegas had the expected 100 IRE whites and 0 IRE blacks for DV. But the QT DV file, when imported, came in with brighter whites and lower blacks. I was always under the impression that the print to tape would be identical to the render to file, since I was not changing the file format of the video itself.
Anyone know why the levels between the tape and the rendered QT would be different? The rendered QT matched the original video on the timeline when re-imported to Final Cut. Is it maybe because of the way Windows/Vegas handles the QT codec?
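For what it's worth, the symptom (brighter whites and lower blacks) is exactly what you'd see if one decode path expanded studio-range luma (16-235) to full range (0-255) while the other left it alone. This is only a guess at what the QuickTime DV decoder on Windows might be doing, not documented Vegas/QuickTime behavior, but the arithmetic looks like this:

```python
# Hypothetical studio-range -> full-range luma expansion.
# This sketches the suspected level shift; it is an assumption about
# the decoder's behavior, not confirmed Vegas/QuickTime internals.

def expand_studio_to_full(y: int) -> int:
    """Map 8-bit studio-range luma (16-235) to full range (0-255)."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

# Nominal DV black (16) and white (235) after expansion:
print(expand_studio_to_full(16))   # black gets pushed lower
print(expand_studio_to_full(235))  # white gets pushed brighter
```

If one of your two import paths applies this mapping and the other doesn't, the same DV stream would read as 0/100 IRE in one app and as stretched levels in the other, even though the file format never changed.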
Thanks,
Dave T2