Scene Detection Fails with new HDR-HC1 HDV

Wes C. Attle wrote on 7/8/2005, 10:47 AM
Just got my HDV HDR-HC1 this week. Every time I capture with Vegas 6 or CineForm HD Connect, the scenes all get captured into one giant clip. I generally record many 10- to 30-second clips on each tape with my cams, so I need scene detection during capture. I never have this problem with my DV cam, only with this new HDV cam.

I noticed that the only separate clips I do get during capture correlate to each time I turned the camera off between recordings. Simply pressing the Record button to stop and start recording does not register separate scenes in the HDV media that Vegas can read.

Just to be clear: when I capture 20 clips from my new HDV cam, the Vegas and CineForm capture utilities do not produce 20 clips. Instead, they capture one big clip made up of all 20.

Have I done something wrong, or missed a setting change in Vegas (or the CineForm standalone HD Link utility)? I noticed other users of the FX1 and other HDV cams reported poor scene detection; in my case it is 100% no scene detection, unless the cam was shut off between scenes.

The giant captured clips are showing the correct format and frame rate, so the camera settings do not appear to be the issue.

Any ideas?

Comments

Wolfgang S. wrote on 7/8/2005, 11:38 AM
Peter,

Vegas 6 does not support automatic scene detection when you capture m2t files, and as far as I know, Connect HD does not either.

What's more, you should *not* capture into many small m2t files - since we now know that Vegas 6 crashes when you try to import more than 60 m2t files. That is an unsolved bug that Sony will have to work on.

What you can do is the following:

- capture with Vegas to 20- to 30-minute m2t files; take care that you do not end up with more than 20 or 30 m2t files (which should be easy).

workflow A:
- convert these large files to proxy files using Gearshift; mjpeg-avi proxies are ideal, but even DV-avi widescreen proxies will work. Run an automatic scene detection based on AV-Cutty. Export the EDL file list from AV-Cutty with the Vegas 5 template and import the EDL file into Vegas. Run the script to combine the audio and video parts, as developed by Johnny (download also here:
http://videotreffpunkt.com/thread.php?threadid=1172&boardid=36&styleid=6

group_aud_video.zip)

Cut and edit those files - and finally use Gearshift to swap them back to the original m2t files to render out the final video.

workflow B:
- convert the m2t files to CineForm intermediates (this can be done with or without Gearshift). Again apply AV-Cutty, which supports CineForm intermediates, but this time render each scene to a separate CineForm intermediate file on the hard disc. Import the individual intermediate files back into Vegas, cut and edit the intermediates, and render your final video directly from the CineForm intermediates.

Neither workflow is perfect: both take some time for additional rendering and some additional hard-disc space (CineForm intermediates much more than proxy files). But that is what we have now with Vegas 6.
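As an aside, the frame-difference scene detection that tools like AV-Cutty perform on a long file can be sketched in a few lines. This is only a toy illustration: reducing each frame to its mean luma and the threshold of 30 are hypothetical simplifications for the example, not AV-Cutty's actual algorithm or parameters.

```python
# Toy sketch of frame-difference scene detection, the kind of analysis
# a scene-splitting tool runs over a long capture. Real tools compare
# whole frames; here each frame is reduced to its mean luma, and the
# threshold of 30 is an illustrative value only.

def detect_scene_breaks(mean_lumas, threshold=30.0):
    """Return indices of frames whose mean luma jumps by more than
    `threshold` relative to the previous frame (likely hard cuts)."""
    return [i for i in range(1, len(mean_lumas))
            if abs(mean_lumas[i] - mean_lumas[i - 1]) > threshold]

# Two steady shots with a hard cut at frame 4:
print(detect_scene_breaks([100, 101, 99, 100, 180, 181, 179, 180]))  # [4]
```

Each detected index would become a split point, turning one giant clip into per-scene files.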

Kind regards from Austria,
Wolfgang

Desktop: PC AMD 3960X, 24x3.8 GHz * RTX 3080 Ti (12 GB)* Blackmagic Extreme 4K 12G * QNAP Max8 10 Gb Lan * Resolve Studio 18 * Edius X* Blackmagic Pocket 6K/6K Pro, EVA1, FS7

Laptop: ProArt Studiobook 16 OLED * internal HDR preview * i9 12900H with i-GPU Iris XE * 32 GB RAM * Geforce RTX 3070 TI 8GB * internal HDR preview on the laptop monitor * Blackmagic Ultrastudio 4K mini

HDR monitor: ProArt Monitor PA32 UCG-K 1600 nits, Atomos Sumo

Others: Edius NX (Canopus NX)-card in an old XP-System. Edius 4.6 and other systems

Quryous wrote on 7/8/2005, 2:37 PM
VERY interesting, Wolfgang, and Danke!
Serena wrote on 7/8/2005, 7:41 PM
Cineform HD will cut the clips while generating the intermediates, but I find (v1.6) that the process isn't entirely reliable and therefore it's less work if the option is disabled. I suspect this is related to computer power.
Initially I was displeased about this, but I have found that cutting out the clips in Vegas is a better approach, because in the process I review the material in shot context and can give the clips meaningful identifiers (CFHD0002.xx doesn't convey a lot to me). I think you can't start editing until you're familiar with the material, so this is just the foundation of that process.
MH_Stevens wrote on 7/8/2005, 8:53 PM
I do exactly as Serena does. Capture using Vegas 6 to m2t. Play the file and, as you watch, hit the "s" (split) key at every potential edit point. Then start again from the beginning and the edit just motors on. I am completely accustomed to the lack of scene detection now, with no downside.

ALSO, I have migrated to editing in HDV - no intermediary. Now that I am experienced with the long GOPs and have got used to working with an imprecise edit line, I find the time saved by not having to render has helped me work slower and more accurately, and I am very happy with my results. And I don't have to waste all that HD space on those massive avi files.

Mike S
Wolfgang S. wrote on 7/8/2005, 11:48 PM
I agree that there are different workflows - and one possibility is to work with m2t files in the timeline directly. However, given the limitations of today's PC power, I tend to work with proxies. Based on a 3.2 GHz P4 with 1 GB RAM, I have run some tests to evaluate both hard-disc space and render time.

An mjpeg-avi proxy (PIC codec) renders in 1.31x the runtime of the video. The proxy has been optimized for its purpose, so it was rendered in draft quality only, PIC quality level 9, and sized to 940x540 (square pixel). The file size of such proxies is only 33% of the original m2t footage, so much smaller than CineForm intermediates. The nice thing is that Gearshift does this for you - it is not necessary to stay at the PC while Gearshift renders the proxies.
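To put those figures in perspective, here is a back-of-envelope sketch. The 60-minute tape length is a hypothetical example; the 25 Mbit/s figure is HDV's nominal transport-stream rate, and the 33% size and 1.31x render-time ratios are the ones quoted above.

```python
# Rough disk-space and render-time estimates for the proxy workflow.
# Assumes a hypothetical 60-minute tape at HDV's nominal 25 Mbit/s;
# the 0.33 and 1.31 factors come from the measurements quoted above.

tape_minutes = 60
m2t_gb = 25e6 / 8 * tape_minutes * 60 / 1e9   # bytes/s * seconds -> GB
proxy_gb = 0.33 * m2t_gb                      # mjpeg proxy at 33% of m2t
render_minutes = 1.31 * tape_minutes          # time to render the proxies

print(f"m2t: {m2t_gb:.2f} GB, proxy: {proxy_gb:.2f} GB, "
      f"proxy render: {render_minutes:.0f} min")
```

So a one-hour tape costs roughly 11 GB of m2t plus under 4 GB of proxies, at the price of about 79 minutes of unattended rendering.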

Yes, it is additional effort to generate proxies and work with them. But that is what it takes to achieve good real-time preview quality on an average PC today, which in my opinion is required to cut and edit the video.

And yes, I have also tried cutting m2t footage natively in the timeline - with a preview quality of 10 to 15 fps for a single m2t stream, it seems possible to do so. However, when you apply transitions, PiP or other effects, all you can do is use RAM preview to evaluate a sequence. That takes time, too.

Serena wrote on 7/9/2005, 1:12 AM
Agree with Wolfgang there. I want realtime replay when I'm editing. Timing is the essence of cutting and it would become a bit tricky having to scale time in my mind when deciding on length of a shot. Probably I expressed myself poorly -- I work with Cineform intermediates and just disable the clipping when I capture. In fact I capture outside of Vegas 6, using Cineform HD to capture the m2t files and then generate the intermediates. If something goes wrong in the second phase I don't have to recapture, but it never happens. I started doing this because the simultaneous capture/convert/clip process often left me with gaps in the captured media and extra work to identify, recapture and so on. So I went belt + braces, but my experience is that retaining the m2t files is unnecessary. The time required by Cineform to generate the intermediates isn't great -- a recent job involved 4 hours of tape -- and I have plenty else to do while the computer works away! In that case I started editing tape 1 while Cineform worked in the background on the remainder.

I have enough power, using Cineform intermediates, to run real time with several video tracks with effects plus audio tracks, so I haven't seen advantages to Gearshift (although it is installed) --- just haven't put in the effort needed to make a comparative evaluation.
Wolfgang S. wrote on 7/9/2005, 1:54 AM
One significant difference between my workflow A and workflow B is that with A you render your final video directly from the original m2t footage, while with B you render the final video from the CineForm intermediates.

As far as I know, there is no independent evaluation yet of how much quality is lost when you compare these two workflows. Maybe I will start some evaluations this weekend. There are ongoing discussions about this in our German discussion groups as well, but no sound tests have been published yet (as far as I know).

Wes C. Attle wrote on 7/9/2005, 2:24 AM
Wow, thanks Wolfgang and everyone.

I will go with Serena's workflow for now for ease of use. Figure out the other stuff later when I have time. Timeline splicing is what I have been doing so far and it is "satisfactory" but having all the clips broken out at capture is really the most efficient. You can delete what you do not want and more easily save and archive that way.

I also need to experiment with the intermediate format workflow. My dual Opterons should handle it well.

Does anyone know if this is a temporary limitation in Vegas, Cineform and other NLE's? Still catching up to new technology?
Wolfgang S. wrote on 7/9/2005, 3:10 AM
What do you mean by "temporary limitations"?

With a dual Opteron, you could also try a workflow where you edit the original m2t footage in the timeline. However, even here other companies like Canopus tend to recommend working with intermediates - you will definitely see more real-time preview performance if you work with CineForm intermediates. Try it and let us know your impressions.

We will test the quality loss, maybe this weekend, based on the JND method. The tests should come up with answers on
a) how the render times compare;
b) whether the quality losses differ significantly between workflows A and B.

farss wrote on 7/9/2005, 3:19 AM
If you're going straight from the captured m2t file to, say, WMV9, then I don't think you'll see any loss if you convert to CF DIs first. Where you will see a difference is in generational loss: the HDV codec is quite lossy, so going down generations could get ugly real quick, whereas the CF DI is claimed to hold up very well even after 10 generations.
It seems the FCP camp are not too happy with being forced to use intermediates, as they take forever to conform, and from what I hear they're not as good as the CF codec either, so I guess we shouldn't complain too much.
Bob.
Wolfgang S. wrote on 7/9/2005, 3:42 AM
Bob,

quality is relative - there are as yet no independent quality measurements evaluating the different workflows. So a first step should be to measure some workflows within Vegas, to be able to evaluate quality losses with Vegas.

A second step could be to compare that with other products - like Canopus Edius 3.3, which has the Canopus HQ codec on board. I have ordered Edius 3.3 to be able to do that as a next step.

At the moment I am happy if I am able to perform the first test sequence.

@ wmv9: which template do you tend to use for highest quality?

Serena wrote on 7/9/2005, 4:00 AM
Wolfgang, I'll be most interested in your comparisons. I too am interested in getting maximum quality and would like to have the parameters for doing that set out in one place. There is a lot of information about related matters scattered through this forum, but it is always hard to find again and is generally qualitative. It's difficult to be sure that people are speaking around the same standard when they assert things about image quality. How do you propose to evaluate differences resulting from various work flows?
Wolfgang S. wrote on 7/9/2005, 5:40 AM
I am thinking about that - there was a proposal in the past to apply a parent-child relationship, subtract the test clip from the original, increase the intensity with filter levels, and invert the result. That gives you a visual impression of what differs between the original and the rendered material.

There is also the possibility to quantify that by using a histogram, mean and standard deviation. However, I am not sure if that is a good idea; I tend to prefer the visual difference picture.
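A minimal sketch of such a difference picture plus summary statistics. The frames here are synthetic 8-bit luma arrays standing in for real captures, and the gain factor of 8 is an arbitrary value chosen to make small codec errors visible.

```python
import numpy as np

# Sketch of the difference-picture test: subtract the rendered frame
# from the original, boost the result so small codec errors become
# visible, and summarise with mean/std. Synthetic frames, not captures.

def difference_picture(original, rendered, gain=8):
    """Return a boosted absolute-difference image plus mean and std
    of the raw per-pixel error."""
    diff = np.abs(original.astype(np.int16) - rendered.astype(np.int16))
    boosted = np.clip(diff * gain, 0, 255).astype(np.uint8)
    return boosted, float(diff.mean()), float(diff.std())

# Fake "original" frame and a "rendered" copy with small random errors:
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(540, 960), dtype=np.uint8)
noise = rng.integers(-2, 3, size=orig.shape)
rend = np.clip(orig.astype(np.int16) + noise, 0, 255).astype(np.uint8)

pic, mean_err, std_err = difference_picture(orig, rend)
print(f"mean error: {mean_err:.2f}, std: {std_err:.2f}")
```

The boosted image gives the visual impression Wolfgang describes, while the mean and standard deviation provide the numbers for comparing workflows.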

epirb wrote on 7/9/2005, 6:21 AM
If you look on the CineForm site - and I think VASST has it somewhere too - there are comparisons of generation loss between the two, using the Difference compositing mode.
My method has gone in a somewhat different direction.
Capture m2ts, some small, some large, and convert with GearShift to DV widescreen.
I then use GearShift to create CineForm avi's of all the small clips, but leave the 2 or 4 longer m2ts alone. When I convert back to the HQ media, it uses the avi's for the many small clips and the m2ts for the larger ones, thus not creating a huge avi file on my hard drive for the long m2ts.
More detail here if it helps.
http://www.sonymediasoftware.com/forums/ShowMessage.asp?Forum=4&MessageID=401233
Serena wrote on 7/9/2005, 7:11 AM
Are there already accepted testing methods that can be used? I would have expected something like the Modulation Transfer Function to directly quantify any degradation from the original image.
Wolfgang S. wrote on 7/9/2005, 9:10 AM
I am aware of those tests on the CineForm side. There are some major drawbacks there:

- they are based on an unrealistic workflow - e.g., there is no comparison with the original footage, and the whole workflow is not evaluated.

- this is not really an independent test, is it?

David Newman wrote on 7/9/2005, 2:31 PM
On original topic :

CineForm's Connect HD does have a scene detection option in the preferences panel in HDLink. However, if that option is not working with the Sony HDR-HC1, we would like to get it fixed. Note: make sure you are using at least Connect HD 1.7 for your tests. If there is a scene detection issue with the HC1, we unfortunately don't have one of those cameras (Sony, please help here), so the quickest solution is for someone to upload a short M2T file with a scene break in the middle. Anyone open to volunteering this data from an HC1? If you have no means to upload the data, emailing a tiny 5-6 MByte M2T (2 seconds with a scene break in the middle) may work.

David Newman
CTO, CineForm
Wes C. Attle wrote on 7/9/2005, 7:21 PM
_dan, thanks for the reply. I did have "Split file on scene changes" selected during the capture in HDLink. I will try again with new footage tonight.

I have a sample file posted which contains four scene changes. The file was renamed to .mpg, but is a raw unchanged .m2t file.

Please try downloading the "Lavender more (62 mb)" file - file name "lavender5.mpg" - at this link: http://202.213.136.131/hc1/.

Site is bandwidth throttled and limited to three concurrent downloads. Ping me if you have a problem and we can find another way to get you some HC1 samples.

The guys at SonyHDInfo.com also set up BitTorrent for these files. (See the info on the page; you would have to download all 27 files as a single zipped file via BitTorrent.)
David Newman wrote on 7/9/2005, 10:05 PM
That is working here - are you using Connect HD 1.7?

David Newman
CTO, CineForm
Wolfgang S. wrote on 7/10/2005, 2:19 AM
David,

thanks for the correction that Connect HD includes scene detection - I had overlooked that on your homepage http://www.cineform.com/products/ConnectHD.htm

By the way, is there still a reduced-price product available for Vegas 6 users? There was something here on the Sony page, but it seems to be gone now.

Wes C. Attle wrote on 7/10/2005, 5:30 AM
Ok, the Vegas 6 manual is vague, and I am guilty of not reading the HDLink ReadMe. I had "capture only to .m2t" checked in CF HDLink 1.7 last time. I'm capturing and converting to intermediate with scene detection just fine now. I was confused about the process.

The Vegas manual and online help are very vague in their summary of the HDV capture process. So if I want to capture in Vegas 6... I capture to .m2t within Vegas, then drop the huge file on the timeline and render out to CF Intermediate? If I understand correctly, there is no automatic scene detection when rendering my .m2t to YUV or another intermediate format within Vegas 6? I need to buy CF HDLink if I want scene detection? I believe that's the bottom line?

And yes, why should Vegas 6.0 users have to pay full price for CineForm when we already paid for the codec as part of Vegas 6? I saved $500 on Boris Red because Vegas already had Boris Graffiti Ltd. CineForm would make a lot of money by giving veggies a 50% discount. Boris did!

And if I look on page 231 of the Vegas 6 manual, there is no mention of an HDV cam in the "Destination - Intermediate Format" table on that page. It refers to CineForm's codec, but does not tell me how to render out to it. (I know how, but the manual is really incomplete in its HDV workflow details.)
Wolfgang S. wrote on 7/10/2005, 6:14 AM
Peter,

what you can do - at least - is render the huge m2t file to the CineForm codec from the timeline.

Then run an automatic scene detection in AV-Cutty, which will deliver a CineForm file for every scene. Import those back into Vegas, and continue editing the CineForm scene files.

I agree that the workflow is vague at the moment - but HDV editing as a whole is quite new, and workflows are still unclear. That is why more tests are required.

And I would like to know if there is still a cheaper way for Vegas users to buy Connect HD - or if that really is gone.

PeterWright wrote on 7/10/2005, 7:01 AM
Related to this - the Maccites on various forums are going on about how FCP edits native HDV without the need for intermediate codecs. How true is this, and if so what equivalent PC would be able to perform this way?

Ironically, some are also saying they can only capture with scene detection, whilst folks here seem to be dealing with the opposite ....?
Xander wrote on 7/10/2005, 7:48 AM
I use the same workflow as Wolfgang S: m2t -> CineForm -> AV-Cutty. The advantage of AV-Cutty is that it does scene detection but also lets you trim before creating the final files to work on.