Still a Problem in V17 when using GPU Acceleration for Video Processing

Comments

Former user wrote on 8/15/2019, 1:40 PM

@j-v Yes, I remember, fully awake now :D.

What I meant was something more scientific, say a script that would measure the average FPS while playing back part or all of a project.
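Purely as an illustration of the kind of calculation such a script would do (not anything VEGAS ships), here is a minimal Python sketch, assuming you could log a timestamp each time a frame is displayed during playback; the timestamps below are made up:

```python
# Hypothetical sketch: given timestamps (in seconds) logged each time a frame
# is displayed during playback, compute the average FPS over that interval.
def average_fps(frame_timestamps):
    if len(frame_timestamps) < 2:
        return 0.0
    elapsed = frame_timestamps[-1] - frame_timestamps[0]
    frames_shown = len(frame_timestamps) - 1
    return frames_shown / elapsed if elapsed > 0 else 0.0

# Made-up example: 25 frames displayed 50 ms apart -> 20.0 fps
timestamps = [i * 0.05 for i in range(25)]
print(f"average fps: {average_fps(timestamps):.1f}")
```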

Anyway, great improvements, can't wait for the first update to get all of the bugs sorted.

j-v wrote on 8/15/2019, 2:23 PM

What I meant was something more scientific, say a script that would measure the average FPS while playing back part or all of a project.

Nothing scientific is possible; the only important thing for me is how things go on my hardware, with my settings, my programs and my experience.
No computer is equal anymore after a user has used it and made his own settings.
And because, luckily, no human is equal either, a scientific approach and comparisons like benchmarks are useless to users imho.

With kind regards,
Marten

Camera: Pan X900, GoPro Hero7 Black, DJI Osmo Pocket, Samsung Galaxy A8
Desktop: MB Gigabyte Z390M, W11 home version 24H2, i7 9700 4.7 GHz, 16 GB DDR4 RAM, GeForce GTX 1660 Ti with Studio driver 566.14 and Intel HD Graphics 630 with driver 31.0.101.2130
Laptop: Asus ROG Strix G712L, W11 home version 23H2, CPU i7-10875H, 16 GB RAM, NVIDIA GeForce RTX 2070 with Studio driver 576.02 and Intel UHD Graphics 630 with driver 31.0.101.2130
Vegas software: VP 10 to 22 and VMS(pl) 10, 12 to 17.
TV: LG 4K 55EG960V

My slogan is: BE OR BECOME A STEM CELL DONOR!!! (because it saved my life in 2016)

 

Former user wrote on 8/15/2019, 3:02 PM

@j-v "Nothing scientific possible"

The following extract is from the second page of the PDF for the original Red Car test. It looks to me like there was indeed a more scientific tool available; I've highlighted some items.

"In the first test, the observed fps range was recorded for each region of the timeline as indicated in the Video Preview window.  Note that this value is more subjective and depends on observations of the fps at the correct instance during playback. 
A second pass was performed using an internal profile tool that returned the calculated fps value programmatically across each selected region. 
A third pass was done with the profile tool to calculate the internal fps averaged across the entire project timeline.
"

I do agree with your overall sentiments. I'm only here talking about benchmarking and comparing systems, not the human creative aspects of using, say, VP.
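Just to make the arithmetic behind those second and third passes concrete (the region names and numbers below are invented; this is not the internal profile tool itself), per-region fps is frames divided by elapsed time, and the project average pools both across all regions:

```python
# Hypothetical region measurements: region name -> (frames played, seconds elapsed)
regions = {
    "region 1": (250, 11.8),
    "region 2": (250,  9.6),
    "region 3": (250, 14.3),
}

total_frames = 0
total_seconds = 0.0
for name, (frames, seconds) in regions.items():
    print(f"{name:12s} {frames / seconds:5.1f} fps")   # per-region fps (second pass)
    total_frames += frames
    total_seconds += seconds

# fps averaged across the entire selection (third pass)
print(f"{'project avg':12s} {total_frames / total_seconds:5.1f} fps")
```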

AVsupport wrote on 8/15/2019, 5:55 PM

I have noticed that Dynamic RAM Preview can cause issues with Nvidia-accelerated playback and rendering. Best set to 0.

my current Win10/64 system (latest drivers, water cooled):

Intel Coffee Lake i5 Hexacore (unlocked, but not overclocked) 4.0 GHz on Z370 chipset board,

32GB (4x8GB Corsair Dual Channel DDR4-2133) XMP-3000 RAM,

Intel 600series 512GB M.2 SSD system drive running Win10/64 home automatic driver updates,

Crucial BX500 1TB EDIT 3D NAND SATA 2.5-inch SSD

2x 4TB 7200RPM HGST NAS data drives,

Intel HD630 iGPU - currently disabled in BIOS,

nVidia GTX1060 6GB, always on latest [creator] drivers. nVidia HW acceleration enabled.

main screen 4K/50p 1ms scaled @175%, second screen 1920x1080/50p 1ms.

Howard-Vigorita wrote on 8/15/2019, 11:27 PM

@Grazie Yes, it was a 0% render-time change using Intel QSV decoding in VP17 vs simply using VP16.

19s duration clip length x 3 = test project duration of 57s, UHD 25fps.

HW Acc. = Nvidia for all.

VP17 .. UHD to FHD using Nvidia in file I/O, render time = 15s ... 21% faster render.

VP17 .. UHD to FHD using Intel QSV in file I/O, render time = 19s ... 0% faster render.

VP17 .. UHD to FHD using no file I/O decoding, render time = 19s.

VP16 .. UHD to FHD, render time = 19s.
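In case the percentages look arbitrary, they are just the relative change against the 19 s baseline (VP16, or VP17 with no file-I/O decoding); a quick check:

```python
# Speedup relative to the 19 s baseline: (baseline - test) / baseline
baseline = 19  # seconds: VP16, or VP17 with no file-I/O decoding
for label, seconds in [("Nvidia in file I/O", 15), ("Intel QSV in file I/O", 19)]:
    print(f"{label}: {(baseline - seconds) / baseline:.0%} faster")
# -> Nvidia in file I/O: 21% faster
#    Intel QSV in file I/O: 0% faster
```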

@Former user I couldn't test w/Nvidia in file i/o, but when I tried a straight-up transcode from UHD to FHD I got no change with QSV enabled or disabled either. But then it occurred to me that a transcode might not really be representative of a project render. So I tried it on the 4K Red Car project and got quite different results rendering to FHD:

w/clip 1&2 = Mainconcept AVC 4k/UHD mp4
GPU = AMD Radeon VII; iGPU= Intel UHD 630    

   file i/o qsv enable  =  checked ... unchecked

Encode Mode:
Mainconcept                  0:54          1:11
AMD VCE                      0:43          0:55
Intel QSV                    0:31          0:40
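Converting those m:ss timings to seconds, enabling file-i/o QSV decoding shaves roughly 22-24% off each render; a small sketch of the conversion (times copied from the table above):

```python
# Convert m:ss timings to seconds and compute how much shorter each render is
# with file-i/o QSV decoding checked vs unchecked.
def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

results = {  # encoder -> (checked, unchecked)
    "Mainconcept": ("0:54", "1:11"),
    "AMD VCE":     ("0:43", "0:55"),
    "Intel QSV":   ("0:31", "0:40"),
}

for encoder, (checked, unchecked) in results.items():
    c, u = to_seconds(checked), to_seconds(unchecked)
    print(f"{encoder:12s} {(u - c) / u:.0%} shorter with decoding enabled")
```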

With v16 I got almost identical results checking and unchecking the QSV option in general settings ... 1 sec slower for MC but 1 sec faster for the rest. But I expect Nvidia decode would show a bigger delta than Intel in v17.

Howard-Vigorita wrote on 8/16/2019, 12:07 AM
Nothing scientific is possible; the only important thing for me is how things go on my hardware, with my settings, my programs and my experience.

No computer is equal anymore after a user has used it and made his own settings.
And because, luckily, no human is equal either, a scientific approach and comparisons like benchmarks are useless to users imho.


@j-v Agree with the first part, particularly with regard to display performance while editing. Which is why I think we need to quantify 4K performance the way we actually use Vegas... with proxies enabled and manipulating the preview-to-best settings. Disagree with the second part, however... benchmarks yield valuable information on how best to use Vegas and how to assemble a platform for it, which can be invaluable to users. And to the development team too, which in turn ends up benefiting users with a better product to work with.