Recapture in M2t

Comments

bruceo wrote on 9/19/2007, 10:29 AM
With HDV there is no reliable way except to keep a 2nd copy of your project and media on a RAID 5 or better.

Another thing I found that sucks with the DR60 is the FAT32 4 GB limit. Shoot a ceremony and Murphy's law says the important parts will land right where the DR60 splits the footage at the 4 GB mark. When you take those segments and line them up, almost every split loses a few frames, so if your primary audio is on this feed you're out of luck.
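If you want to rejoin the DR60's 4 GB segments before bringing them into the editor, a plain binary concatenation is usually enough for MPEG-2 transport streams, though it cannot bring back any frames already dropped at the split itself. A rough sketch, with hypothetical file names:

```python
# Sketch: rejoin 4 GB FAT32 segments from the DR60 into one .m2t file.
# Assumes the segments are plain MPEG-2 transport streams that tolerate
# binary concatenation; it will NOT recover frames lost at the split.
import glob

segments = sorted(glob.glob("ceremony_part*.m2t"))  # hypothetical names

with open("ceremony_joined.m2t", "wb") as out:
    for seg in segments:
        with open(seg, "rb") as f:
            while True:
                chunk = f.read(16 * 1024 * 1024)  # copy in 16 MB chunks
                if not chunk:
                    break
                out.write(chunk)
```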
megabit wrote on 9/19/2007, 11:40 AM
Agreed - a compromise enforced by FCP.


riredale wrote on 9/19/2007, 3:24 PM
I don't know how to say this except by example.

I take an HDV tape, used in a Sony FX1 camera last summer in France.

I capture it onto my PC using HDVSplit 0.77b. I tell HDVSplit to split the capture as it goes, and give the clips names of the form "test-xxxxxxxx-xxxxxx", where the x's are the shooting date and time. I capture the first 15 minutes of HDV video, which results in 18 separate clips. I capture to a folder on my desktop called Folder1.

I repeat the capture after rewinding the tape to the beginning, but this time I put the clips into a new folder called Folder2.

I repeat, putting the captured clips into Folder3.

I open Vegas and pull all the clips from Folder1 onto the timeline.

I pull in the clips from Folders 2 & 3, putting them under the Folder1 clips. All three batches are butted up against the left edge of the timeline.

Here is a photo of my screen at a part of the zoomed-in timeline close to the end of the 15 minutes of clips. The timeline has been expanded to single-frame resolution. Note the audio pulse at the left exactly matches for the three separate captures. Note, too, that the file name for the clip starting in the middle is exactly the same for all three captures (test-20070630-145331, showing that this clip was shot at 2:53pm on June 30, 2007). What you can't see in the photo is that any given frame is identical when A/B'd with the ones directly above or below.

So this tells me that the captures are indeed identical. I could build a veg file, throw away the original clips, then recapture a new batch of clips and the veg file would be perfectly happy with them.
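If the three captures are in fact byte-identical (a stronger claim than the frame-level A/B above), a quick checksum pass over the three folders will confirm it. A minimal sketch, assuming the folder layout and clip naming described above and an .m2t extension:

```python
# Sketch: compare same-named clips across Folder1/Folder2/Folder3 by checksum.
# If HDVSplit really produces identical captures, every clip should MATCH.
import hashlib
from pathlib import Path

folders = [Path("Folder1"), Path("Folder2"), Path("Folder3")]

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for clip in sorted(p.name for p in folders[0].glob("test-*.m2t")):
    digests = [sha256(f / clip) for f in folders if (f / clip).exists()]
    ok = len(digests) == 3 and len(set(digests)) == 1
    print(f"{clip}: {'MATCH' if ok else 'DIFFERS'}")
```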

EDIT:
The DV proxies built from the m2t clips with GearShift all line up, too. Really, I have no qualms about throwing away my m2t clips, knowing I could rebuild them with the process described above.
bruceo wrote on 9/20/2007, 7:41 AM
riredale, interesting. I would hope it would work with HDVSplit and V8 so I could reduce my backup overhead, but from my extensive testing, and given the lack of frame accuracy of HDV even on all of the Sony decks, I am skeptical.

So all of your HDVSplit m2ts are identified by Vegas properly and play at full speed? What does Vegas identify as the plug-in format for the HDVSplit m2ts when you look at the properties in Vegas Explorer? Sony M2TS or MainConcept?

I saw less than half of my HDVSplit captures identified properly, and once captured I could find no way to change this info so Vegas would use the right plug-in. I wonder if this was corrected in V8?
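I can't say why Vegas picks one plug-in over the other, but one quick sanity check on a capture is whether it is a clean 188-byte-packet transport stream with a 0x47 sync byte at every packet boundary, which an HDV .m2t should be. A rough sketch, using the clip name from the example above:

```python
# Sketch: sanity-check that an .m2t capture looks like a well-formed MPEG-2
# transport stream: 188-byte packets, each starting with sync byte 0x47.
# This only checks the capture itself; it does not tell you which plug-in
# Vegas will choose for the file.
PACKET = 188
SYNC = 0x47

def count_sync_errors(path, packets_to_check=10000):
    bad = 0
    with open(path, "rb") as f:
        for _ in range(packets_to_check):
            pkt = f.read(PACKET)
            if len(pkt) < PACKET:
                break
            if pkt[0] != SYNC:
                bad += 1
    return bad

print("sync errors:", count_sync_errors("test-20070630-145331.m2t"))
```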

Who else is successfully using HDVSplit with native full-speed editing in Vegas?
riredale wrote on 9/20/2007, 1:37 PM
Bruceo:

The properties page shows Sony M2TS.

I noted in an earlier post that the results would vary depending on the capture utility I used. An m2t clip captured with the Vegas utility would not match up with one from the HDVSplit utility. However, as the photo above illustrates, HDVSplit is completely consistent from capture to capture (in my experience, anyway). As mentioned earlier, I'm using version 0.77b, not 0.75.
bruceo wrote on 9/21/2007, 2:37 PM
So every one of your HDVSplit m2ts is identified as Sony M2T? If so, I will try my next batch this way. I am still skeptical, though, because I am reminded of HDV's lack of frame accuracy: when I notice an error or bad edit in a render, I drop the rendered m2t back onto the original source timeline, cut out the bad section for re-render, and drop it into the exact render loop region. Every time I have to lower the opacity and slide the footage at least 2 frames to the right before it matches up frame for frame.

As a side note, smart render really helps with the speed and quality of the re-render, but the garbage problem still randomly occurs in the re-rendered portions....