AMD Radeon VII or Nvidia 2070 Super?

Comments

fr0sty wrote on 1/16/2020, 11:52 AM

I can comment that timeline decoding is not supported on AMD cards, so you might as well just be using the 9900k, as that is all Vegas will be using to do the decoding.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Geforce RTX 3090

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

tripleflip18 wrote on 1/16/2020, 1:47 PM

Here is what's going on on my end: I've got a 7980XE OC'd to 4.5 GHz, a Radeon VII, and a GTX 1070, and this is only about iPhone 11 HEVC 4K 60p footage.

iPhone HEVC files with ONLY the Radeon = totally slow, like 0.5 fps; with only the GTX 1070, 45-55 fps; with both the Radeon VII and the GTX 1070, the full 60 fps.

Now I need to use the timeline at 2x and higher speeds to get through footage faster, and I'm only able to do that with both the 1070 and the R7. The 1070 alone at 2x is like 11 fps, but with both video cards 2x speed is around 30 fps.

The issue is I get black frames if GPU decoding is on for the 1070 in File I/O, in both timeline playback and the final render, so I have to keep switching it back to OFF when I do the final render, which is a damn pain. I don't know of any other software that can handle iPhone HEVC files; FCPX does it with ease but its exported H.264 quality sucks, and DaVinci Resolve and Premiere Pro can't handle these files. So damn annoying!
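For anyone hitting the same wall, one rough workaround is to batch-transcode the iPhone HEVC clips to an edit-friendly intermediate before they ever reach the timeline, so nothing depends on HEVC decode at all. A minimal sketch (assumes ffmpeg is installed and on PATH; the folder names and the DNxHR HQ choice are just illustrative):

```python
# Rough workaround sketch: batch-transcode iPhone HEVC clips to DNxHR HQ
# so the timeline no longer depends on HEVC decode at all.
# Assumes ffmpeg is on PATH; folder and file names are hypothetical.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("iphone_clips")   # hypothetical folder of HEVC .mov clips
OUTPUT_DIR = Path("intermediates")
OUTPUT_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    out = OUTPUT_DIR / f"{clip.stem}_dnxhr.mov"
    subprocess.run([
        "ffmpeg", "-y",
        "-i", str(clip),
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR HQ intermediate
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",                        # uncompressed audio
        str(out),
    ], check=True)
```

The files get much bigger, but an intra-frame intermediate like this plays back smoothly even without any GPU decode help.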

fred-w wrote on 1/16/2020, 2:50 PM

I can comment that timeline decoding is not supported on AMD cards, so you might as well just be using the 9900k, as that is all Vegas will be using to do the decoding.

@fr0sty So why do the tests on Techgage show the AMD cards besting the others on timeline playback? Vega 64 and Radeon VII in particular.

And SIGNIFICANTLY so.

Playback issues ONLY seem to matter when there is some sort of heavier-load FX (as above) on the tracks. This is where AMD has the advantage. With just a LUT applied (a lighter load) there is no real difference between the prosumer cards, even the lower-range ones, and we can conclude it would be the same with ZERO effects applied to the tracks. So exactly WHAT are we discussing with this decoding-tracks/timeline idea?

fr0sty wrote on 1/16/2020, 3:23 PM

Because there are FX applied. Both Nvidia and AMD GPUs support accelerating effects on the timeline, but only Nvidia supports hardware decoding as well. On AMD cards, the CPU decodes the video; then, assuming a GPU-accelerated effect is used on the timeline, the AMD card takes over accelerating the effect.

This is why tripleflip sees such a jump in performance with their HEVC files: CPUs struggle to decode HEVC, but the hardware decoder on that 1070 does it with ease.

 

Tripleflip, see if updating that Nvidia card to the latest Studio driver helps with your black frames.
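A quick way to sanity-check that split outside of Vegas is to push the same clip through ffmpeg's NVDEC path and see whether the GPU really is doing the HEVC decode. A minimal sketch (assumes an ffmpeg build with NVDEC/CUDA support is on PATH; the clip name is hypothetical):

```python
# Quick check, outside of Vegas, that the NVIDIA card can hardware-decode
# a given HEVC clip. Assumes ffmpeg with NVDEC support is on PATH.
import subprocess

CLIP = "iphone_4k60.mov"  # hypothetical test clip

# 1) List the hardware acceleration methods this ffmpeg build knows about.
subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"], check=True)

# 2) Try to decode the clip on the GPU and throw the frames away.
#    If this runs at or above real time, NVDEC is handling the HEVC decode;
#    if it errors out or crawls, the decode is happening on the CPU.
subprocess.run([
    "ffmpeg", "-hide_banner",
    "-hwaccel", "cuda",   # use NVDEC via the CUDA hwaccel
    "-i", CLIP,
    "-f", "null", "-",    # decode only, no output file
], check=True)
```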

Last changed by fr0sty on 1/16/2020, 3:30 PM, changed a total of 3 times.


fred-w wrote on 1/16/2020, 5:11 PM

Where there are no effects, or only a very light effect (here they have a couple of LUTs), all the cards play back at about the same FPS. Without the two LUTs, we can safely assume the playback rate would be even higher; IOW, no dropped frames on 60 fps video files.

Per test parameters:
"Our playback testing is performed using 4K/60 AVC MP4 source footage, using the LUT and Median FX filters (Median FX: this would represent a heavier load, comment mine). For the LUT project, two different LUTs are used across light and dark scenes, with the highest quality setting chosen." 

More cogently, I have no playback issues UNTIL I'm applying FX or layering, compositing, pan/crop, and that sort of thing, so it seems to me that this sort of "distinction" per this "capability" falls flat unless, as you seem to be suggesting, one is using one of those particular file types (HEVC). IOW, unless one is using HEVC there is no difference, or none to speak of, re: playback of a timeline with no effects. Is that a fair summation?

fred-w wrote on 1/16/2020, 5:16 PM

In light of these tests, don't you think you should temper your "absolute" recommendation of NVIDIA? If one is NOT using the HEVC format there is no difference on playback, and AMD has SIGNIFICANT advantages on playback once effects are added. The cards are also cheaper, more bang for the buck, and the "head to head" is really the Radeon VII vs. the 2080 Ti: $500 vs. $900-1,000+. Is that FAIR to say?

fr0sty wrote on 1/16/2020, 7:41 PM

HEVC and H.264, the most common capture codecs on the market below the truly pro (not just prosumer) level, are the formats it supports decoding, so I'd say that is a pretty solid reason, along with the better driver reliability/configurability and 10-bit support on all screens, even on consumer cards.

Also, your tests do not seem to specify whether or not hardware decoding was enabled for the Nvidia tests.
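It is also worth confirming exactly what the test clips are before drawing conclusions, since codec, profile, and bit depth decide whether hardware decode even applies. A minimal probe sketch (assumes ffprobe is on PATH; the clip name is hypothetical):

```python
# Check exactly what a clip is (codec, profile, bit depth / pixel format,
# frame rate) before arguing over decode support. Assumes ffprobe is on PATH.
import json
import subprocess

CLIP = "camera_clip.mp4"  # hypothetical clip

result = subprocess.run([
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",
    "-show_entries", "stream=codec_name,profile,pix_fmt,width,height,avg_frame_rate",
    "-of", "json",
    CLIP,
], capture_output=True, text=True, check=True)

stream = json.loads(result.stdout)["streams"][0]
print(stream)
# e.g. a 10-bit HEVC clip typically reports codec_name "hevc" with
# pix_fmt "yuv420p10le", which is exactly where hardware decode matters most.
```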

Last changed by fr0sty on 1/16/2020, 7:44 PM, changed a total of 3 times.


tripleflip18 wrote on 1/16/2020, 9:22 PM

Because there are FX applied. Both Nvidia and AMD GPUs support accelerating effects on the timeline, but only Nvidia supports hardware decoding as well. On AMD cards, the CPU decodes the video; then, assuming a GPU-accelerated effect is used on the timeline, the AMD card takes over accelerating the effect.

This is why tripleflip sees such a jump in performance with their HEVC files: CPUs struggle to decode HEVC, but the hardware decoder on that 1070 does it with ease.

 

Tripleflip, see if updating that Nvidia card to the latest Studio driver helps with your black frames.

I update the drivers every time they come out hoping it will fix it, but I'm stuck with black frames no matter the driver version.

Former user wrote on 1/16/2020, 10:06 PM

In light of these tests, don't you think you should temper your "absolute" recommendation of NVIDIA? If one is NOT using the HEVC format there is no difference on playback, and AMD has SIGNIFICANT advantages on playback once effects are added.

60p at 1440p or 4K uses a lot of CPU to decode; it's much better to have a dedicated chip do that instead, and the R7 can't help there. Add enough high-fps videos and the CPU will overload, while the same CPU with the 2070 Super won't. The extra power of the R7 would benefit timeline playback only in very limited circumstances, and in those cases you could pre-render that effect with the 2070 Super.

The 2070 Super is a smart, efficient, turbocharged, fuel-injected 4/6-cylinder. The R7 is a big, dumb, fuel-guzzling V8 with a carburettor.
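To put rough numbers on that, you can time a pure decode of the same 4K/60 clip on the CPU and through NVDEC and compare the effective frame rates. A minimal sketch (assumes an ffmpeg build with NVDEC support is on PATH; the clip name and frame count are hypothetical):

```python
# Rough illustration: time a pure decode of the same 4K/60 clip on the CPU
# and through NVDEC, and compare effective frames per second.
# Assumes ffmpeg with NVDEC support is on PATH; clip name/frame count are hypothetical.
import subprocess
import time

CLIP = "4k60_sample.mp4"
FRAMES = 3600  # e.g. a 60-second 60p clip

def decode_seconds(extra_args):
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", *extra_args,
         "-i", CLIP, "-f", "null", "-"],  # decode as fast as possible, discard frames
        check=True,
    )
    return time.perf_counter() - start

cpu = decode_seconds([])                    # software decode on the CPU
gpu = decode_seconds(["-hwaccel", "cuda"])  # hardware decode via NVDEC

print(f"CPU decode: {FRAMES / cpu:.0f} fps, NVDEC decode: {FRAMES / gpu:.0f} fps")
```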

TheRhino wrote on 1/16/2020, 10:09 PM

I update the drivers every time they come out hoping it will fix it, but I'm stuck with black frames no matter the driver version.

If I understand correctly, you have a Radeon VII and a GTX 1070 in the system. Do you get black frames with just one of them installed? IMO the AMD and Nvidia drivers are not playing nicely together... This is from past experience building editing workstations...

 

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic Decklink PCie card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Kinvermark wrote on 1/16/2020, 10:16 PM

+1. This seems like a recipe for unforeseen problems.

Current prices in Canada: Zotac 2070 super $639 CAD. XFX Radeon VII $800 CAD. At these prices the Nvidia card is by far the better value. Occasionally the Radeon gets offered at approx. $640 CAD. At that price it may be worth a look for some people, not all.


fred-w wrote on 1/17/2020, 1:56 AM

In light of these tests, don't you think you should temper your "absolute" recommendation of NVIDIA? If one is NOT using the HEVC format there is no difference on playback, and AMD has SIGNIFICANT advantages on playback once effects are added.

60p at 1440p or 4K uses a lot of CPU to decode; it's much better to have a dedicated chip do that instead, and the R7 can't help there. Add enough high-fps videos and the CPU will overload, while the same CPU with the 2070 Super won't. The extra power of the R7 would benefit timeline playback only in very limited circumstances, and in those cases you could pre-render that effect with the 2070 Super.

The 2070 Super is a smart, efficient, turbocharged, fuel-injected 4/6-cylinder. The R7 is a big, dumb, fuel-guzzling V8 with a carburettor.

I'm not just an AMD fanboy, hardly: I have an Nvidia RTX 2060, a Quadro card, a couple of Blackmagic cards as well, and an older AMD card. The 2060 does not knock me out; I mean, it's OK, and I'm sure the 2070 Super is a bit better, but c'mon. I would take the Radeon VII all day long. They were more expensive when I bought my 2060, and I think some are overestimating that power. Now, if you're talking the 2080 Ti, that is a different animal.

Former user wrote on 1/17/2020, 2:25 AM

If every video produced required extensive noise reduction and film grain, I'd be getting that R7 for sure. If VP18 adds AMD GPU video decode, it becomes even more powerful again.

fr0sty wrote on 1/17/2020, 8:34 AM

^It would still lack 10-bit support unlocked on all cards the way the Nvidia Studio drivers provide it. As for hard numbers, here are the benchmarks from these forums.

I'm not just an AMD fanboy, hardly: I have an Nvidia RTX 2060, a Quadro card, a couple of Blackmagic cards as well, and an older AMD card. The 2060 does not knock me out; I mean, it's OK, and I'm sure the 2070 Super is a bit better, but c'mon. I would take the Radeon VII all day long. They were more expensive when I bought my 2060, and I think some are overestimating that power. Now, if you're talking the 2080 Ti, that is a different animal.

I am a current Radeon 7 user, and yes, it's very powerful when it works, but it is not reliable at all. I often get blackouts while editing that require a reboot; sometimes external monitors randomly drop signal; I have to go through a process of resetting settings on my main monitor to make it stop overscanning; my OLED never gets a signal the first time I turn it on; and there are a lot of apps where GPU acceleration simply does not work because I'm using this card (it took Neat Video months to support it, for instance, and Vegas still doesn't fully support it). It's far from perfect. I would never use this card in a performance environment; I actually put my old GTX 970 back in my system when I'm doing video projection mapping performances, where a secondary (or third, or fourth) monitor blackout could ruin the gig. I can't go into settings and change things like the bit depth of the image or the refresh rate, my resolution options are very limited, etc.

There are a lot of caveats that come with that ridiculously high amount of extremely fast RAM and 12 teraflops of performance that enticed me to buy it to begin with. Before I could ever recommend this card to someone else, I feel it necessary to make sure they know of these things, as for some people they could be deal breakers.

Even though I'd take a performance hit by doing so, I'd opt for the 2080 if I could go back and make that purchase again.

Last changed by fr0sty on 1/17/2020, 8:43 AM, changed a total of 3 times.


Howard-Vigorita wrote on 1/17/2020, 10:37 AM

^It would still lack 10-bit support unlocked on all cards the way the Nvidia Studio drivers provide it. As for hard numbers, here are the benchmarks from these forums.

I'm not just an AMD fanboy, hardly: I have an Nvidia RTX 2060, a Quadro card, a couple of Blackmagic cards as well, and an older AMD card. The 2060 does not knock me out; I mean, it's OK, and I'm sure the 2070 Super is a bit better, but c'mon. I would take the Radeon VII all day long. They were more expensive when I bought my 2060, and I think some are overestimating that power. Now, if you're talking the 2080 Ti, that is a different animal.

I got the exact same numbers as the 2080 Ti on the exact same Sample Project benchmark on my Radeon VII/9900K system, which is why I am not considering the lesser 2070. But keep in mind that the Sample Project benchmark doesn't really benchmark video camera footage so much as Vegas video-generator output, which is why I also benchmarked alongside the original Red Car as well as 2 more intensive versions: one with multiple-format 4K transcodes and another with 3 high-bit-rate ProRes transcodes. In the Proxy-Play column, a rating of 16 means I was able to play the 4K Red Car versions with proxies enabled at Best (Full) with no slowdown of the frames-per-second rate shown in the preview window.

http://www.rtpress.com/roundup.htm
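For anyone who wants to try a similar proxy-assisted playback comparison with their own footage: Vegas can build its own proxies, but a rough external equivalent is to downscale the sources to lightweight edit copies first. A minimal sketch (assumes ffmpeg is on PATH; folder and file names are hypothetical):

```python
# Rough external-proxy sketch: downscale 4K sources to lightweight AVC copies
# for smoother playback testing. Assumes ffmpeg is on PATH; names are hypothetical.
import subprocess
from pathlib import Path

for clip in Path("source_4k").glob("*.mp4"):
    proxy = clip.with_name(f"{clip.stem}_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", "scale=1280:-2",           # quarter-ish resolution proxy
        "-c:v", "libx264", "-crf", "23",  # lightweight AVC encode
        "-c:a", "copy",
        str(proxy),
    ], check=True)
```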

fred-w wrote on 1/17/2020, 11:21 AM

 

Even though I'd take a performance hit by doing so, I'd opt for the 2080 if I could go back and make that purchase again.

@fr0sty OK, thanks, good input. Vegas has always been laggy, and that is why, generation to generation, the fastest, best bang-for-the-buck card is sought out; the Vega 64 (water cooled) and the VII (once the price came down) seemed to be exactly that. For some they really do work, and for others, probably not. Heat and power consumption are issues, as is spotty performance (even in games, some are great, some not). I almost went with the Vega 64 (water cooled) myself, but I decided the additional power draw, and upping the power supply on my rig, was not worth it, especially as my HP Z workstation is set up more for Nvidia.

They say the Radeon VII actually cost more to manufacture than the price they ended up charging ($700-750, now reduced to sub-$500), so it really does represent beastly power and bang for the buck. I'd say if you water-cool that bad boy, or get the somewhat cheaper Vega 64, you'd be doing pretty well, as testified to here by @TheRhino.

That said, the ONLY real number (and thanks for the charts) that I'm concerned with is FPS timeline playback, and on that chart someone (J-V) is getting 25 FPS (while other top performers are getting 14) with a 1660 Ti and an Intel i7-9700, and that piques my attention.

JN- wrote on 1/17/2020, 1:43 PM

@fred-w I'm sure that result of J-V's has piqued others' attention as well. Check out the very last post in that Benchmarking thread, or the very first post, where a screenshot is given of all of the requirements needed to produce an FPS result. I've added it below now.

The FPS value is not a machine-derived value like the render-time result. It depends on users following the suggestions in the screenshot and eyeballing the FPS as best they can. I'm in no doubt that all the users who supplied results did their best with the FPS value, but it needs to be done correctly.

https://www.vegascreativesoftware.info/us/forum/benchmarking--116285/

https://www.vegascreativesoftware.info/us/forum/benchmarking--116285/?page=10

 

Last changed by JN- on 1/18/2020, 2:26 PM, changed a total of 3 times.

---------------------------------------------

VFR2CFR, Variable frame rate to Constant frame rate link to zip here.

Copies Video Converts Audio to AAC, link to zip here.

Convert 2 Lossless, link to ZIP here.

Convert Odd 2 Even (frame size), link to ZIP here

Benchmarking Continued thread + link to zip here

Codec Render Quality tables zip

---------------------------------------------

PC ... Corsair case, own build ...

CPU .. i9 9900K, iGpu UHD 630

Memory .. 32GB DDR4

Graphics card .. MSI RTX 2080 ti

Graphics driver .. latest studio

PSU .. Corsair 850i

Mboard .. Asus Z390 Code

 

Laptop… XMG

i9-11900k, iGpu n/a

Memory 64GB DDR4

Graphics card … Laptop RTX 3080

TheRhino wrote on 1/18/2020, 9:47 AM

My son's laptop, with the same 6-core i7-9750H CPU as mine, has a GTX 1660 Ti while mine has an RTX 2060. The 2060 outperforms the 1660 Ti in Vegas preview FPS, so I'm not sure how J-V got his numbers... IMO getting a $350 GPU is a good placeholder until the next batch of high-end GPUs is introduced later this year. 2019 was the year of mid-range GPU releases, which were optimized for games but not content creation...

Last changed by TheRhino on 1/18/2020, 4:21 PM, changed a total of 1 times.


JN- wrote on 1/18/2020, 1:42 PM

@TheRhino +1. If any users wish to add new data or update their existing entries, please post it in the newer "Benchmarking Continued" thread, not in the older "Benchmarking" thread, thanks.

Last changed by JN- on 1/18/2020, 4:40 PM, changed a total of 3 times.


fred-w wrote on 1/18/2020, 2:17 PM

My son's laptop with same 6-core i7-9750H CPU as mine has a GTX 1660 while mine has a 2060. The 2060 outperforms the 1660 in Vegas preview FPS so I'm not sure how J-V got his numbers...

The card in question is the 1660 Ti, which always seems to make a significant difference, AFAICT. Maybe not enough to erase legit skepticism here, but some difference.

TheRhino wrote on 1/18/2020, 4:22 PM

The laptop has a 1660 Ti, but of course it is the laptop version, which does not quite match the desktop version... Same with the 2060...


JN- wrote on 1/19/2020, 6:16 AM

In the grand scheme of things, this is where it's at: screenshot of a GPU shootout from Guru3D.

Perhaps J-V overlooked something in his testing, or perhaps there is something really special about his GPU/hardware combination within VP. Maybe he could kindly re-check the FPS test?

If not, then until someone else posts figures with the same GPU and at least a comparable CPU/system to compare with, I'll leave J-V's result in place as is.

Last changed by JN- on 1/19/2020, 6:20 AM, changed a total of 1 times.


Former user wrote on 1/19/2020, 6:37 AM

DX12 isn't a good measure of compute, as Nvidia mostly wins there even against more powerful AMD cards. Also, raw card power is to a degree irrelevant: for most people it only matters how well the card helps with timeline playback. How much faster it will render compared to another GPU is of interest, but less of a concern.

JN- wrote on 1/19/2020, 7:03 AM

As I said, in the grand scheme of things it gives us some idea of where it sits. I would be surprised to find that, in a dedicated timeline-playback test of all of the above-named cards, it could jump from the lower part of the table to the very top, yet that benchmark result of 25 fps is at the top of our results; the nearest is 16.5 fps.

Last changed by JN- on 1/19/2020, 7:07 AM, changed a total of 1 times.
