Comments

RogerS wrote on 6/1/2021, 8:55 AM

Vegas supports the widest range of file types through Intel decoding, so for that reason alone there is an advantage to having a processor with the UHD 630 GPU on it.

Some relevant benchmarks here:
https://techgage.com/article/best-cpu-for-rendering-video-encoding-spring-2021/3/

Howard-Vigorita wrote on 6/1/2021, 12:22 PM

The i9-10900K has the same UHD 630 iGPU as the 9900K that I have, which will give you a decoding advantage over a 5900X if the video on your timeline is hardware-decoder friendly and decoding intensive. It makes the biggest difference working with HEVC 10-bit 4:2:0 MP4 and MOV clips. Render times on the Vegas benchmarks (4K Sample Project and Red Car) that I get on my 9900K seem to be right up there with those reported on Ryzens. But a lot also depends on which GPU is used.

But personally, if I were building a new desktop system right now, my CPU pick would be the 11th-gen i9-11900K. It only has 8 cores, but I think that has less impact on Vegas than having an onboard iGPU does. The big difference from 10th gen is that it's got the UHD 750, which shares capabilities with the Iris Xe line: render output units go from 3 to 8, execution units from 24 to 32, and the iGPU clock rate bumps up to 1.3 GHz. So it should perform decoding operations faster than a 630. And it supports decoding some formats no other GPU can, like 4:2:2 HEVC, VP9, and AV1. Surprisingly, the latest Intel drivers have been delivering pretty darn decent quality QSV renders in my tests, which include UHD 630s... so I expect the UHD 750 will do at least as well.
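If you want to see what the Intel iGPU decode path on a given machine actually exposes (outside of Vegas), one rough check is to list the Quick Sync (QSV) decoders in an ffmpeg build. A sketch only — it assumes ffmpeg is installed and built with QSV support, and it shows what the build exposes, not necessarily what your silicon can run:

```shell
# Read-only query: list the QSV decoders this ffmpeg build knows about.
# On an 11th-gen iGPU you'd hope to see entries like hevc_qsv, vp9_qsv, av1_qsv.
if command -v ffmpeg >/dev/null 2>&1; then
  QSV_DECODERS=$(ffmpeg -hide_banner -decoders 2>/dev/null | grep -i qsv || true)
else
  QSV_DECODERS=""   # ffmpeg not installed; nothing to report
fi
printf '%s\n' "$QSV_DECODERS"
```

Whether Vegas itself uses a given decoder is a separate question, but this at least tells you what the driver stack offers.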

Hulk wrote on 6/1/2021, 9:10 PM

5900X all day, every time for me. It smokes Rocket Lake, Comet Lake, Tiger Lake, everything Intel has in single-thread and multi-thread in nearly every benchmark. Plus it's much more power efficient. Go look at some benchmarks, especially encoding like Handbrake, which are quite representative of the Vegas workload. Even a low-end discrete GPU will be faster than the iGPU in Rocket Lake (11900K). The Rocket Lake iGPU only has 32 EUs compared to the 96 in Tiger Lake.

BTW, I'm an Intel fan and have never built an AMD rig, but there is no denying they are killing Intel currently, and my next build will be AMD unless Alder Lake is a real groundbreaking performer.

Also, Vegas doesn't like frequency so much as compute. Zen 3 has higher IPC than Rocket Lake as well as 4 more cores, meaning it has more compute.

https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-review-11900k-11700k-11600k/9

Also, for my money I always frameserve Vegas to Handbrake. Compared to the built-in codecs in Vegas or hardware encoding, Handbrake creates much better video quality at the same file size.
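As a concrete sketch of that hand-off: Vegas frameserves through a tool like DebugMode FrameServer, which exposes the timeline as a signpost AVI that HandBrake then encodes. The file names, CRF value, and preset below are placeholder assumptions, not the poster's actual settings:

```shell
# Encode a frameserved timeline with HandBrake's x264 encoder.
# "frameserved.avi" is the signpost file the frameserver creates
# (path is an assumption); CRF 18 / slow preset are sample values.
SRC="frameserved.avi"
OUT="final.mp4"
if command -v HandBrakeCLI >/dev/null 2>&1 && [ -f "$SRC" ]; then
  HandBrakeCLI -i "$SRC" -o "$OUT" \
    -e x264 -q 18 --encoder-preset slow
fi
```

Because the encode is CRF-based rather than bitrate-based, quality per megabyte is what x264's rate control optimizes, which is the point of the workflow described above.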

RogerS wrote on 6/1/2021, 9:21 PM

Just as an aside, decoding capability is distinct from the cards' 3D and other capabilities. So your lowly iGPU can be more compatible with different types of files and even do hardware decoding of 4:2:2 where high-end AMD and NVIDIA cards can't despite their overall greater power (which nobody disputes). You'll also see QSV renders go head-to-head with NVENC and VCE for speed (search for "benchmarking results continued.")

Hulk wrote on 6/2/2021, 9:04 AM

There are of course a lot of varied opinions on this subject and none are right or wrong. Everybody has different priorities.

For me, I want good preview performance while editing so that I can edit quickly and creatively. For Vegas I have found that means a lot of compute from the CPU and good OpenCL performance from the GPU. I agree with RogerS in that I have found Intel iGPUs to be very compatible with Vegas, meaning they work without stability issues. Even the 630 iGPU found in Skylake-era chips does a pretty decent job of accelerating preview performance when paired with a good CPU. That being said, the 32-EU iGPU in Rocket Lake will do even better, and the 96-EU one in Tiger Lake better yet, and they should have great stability.

I think the "top dog" Vegas system today would be a 5950X and a fast OpenCL graphics card, which because of the miners are in short supply.

If I were to build a Rocket Lake system I would skip the tiny performance increase of the 11900K over the 11700K and use the 11700K. I would also consider the 10850K as it gives up a little IPC and iGPU performance but adds two extra cores.

Also keep in mind that AMD announced just two days ago at Computex that they will be releasing APUs, iGPU included, on Aug. 5. The 8-core 5700G with its Vega iGPU could be a real Vegas workhorse. We'll have to wait and see, of course, but on paper it looks to be really great for the money.

The other part of my Vegas workflow is rendering out the project. As I wrote above I always frameserve to Handbrake as I haven't found anything "inside" of Vegas that can produce as good a result with anywhere near equal the data rate. This is where a fast CPU with lots of cores comes in handy as both Vegas and Handbrake will need the compute during the encoding process.

If I had to build a new rig for Vegas today I'd go with a 5900X and a lower end AMD graphics card since preview (OpenCL) performance is more important to me than video card accelerated rendering.

Alternatively, I don't think the 11700K would be a bad choice; it's not too expensive and includes a decent iGPU. But I would not consider the 11900K due to its high price and minuscule performance improvement over the 11700K. If I were to spend that amount of money I'd definitely go with the 5900X and a discrete graphics card. Plus you have an upgrade path to the 5950X in the future for a whopping 16 cores of Vegas compute madness!

For what they are worth those are my thoughts.

JN- wrote on 6/2/2021, 12:20 PM

While CPU, GPU, render times, and playback are key considerations, good connectivity can also be something to consider. If it's a laptop: how many new-gen USB 3 and Thunderbolt connections, etc. These can vary between AMD and Intel systems for obvious reasons.

---------------------------------------------

VFR2CFR, Variable frame rate to Constant frame rate link to zip here.

Copies Video Converts Audio to AAC, link to zip here.

Convert 2 Lossless, link to ZIP here.

Convert Odd 2 Even (frame size), link to ZIP here

Benchmarking Continued thread + link to zip here

Codec Render Quality tables zip

---------------------------------------------

PC ... Corsair case, own build ...

CPU .. i9 9900K, iGpu UHD 630

Memory .. 32GB DDR4

Graphics card .. MSI RTX 2080 ti

Graphics driver .. latest studio

PSU .. Corsair 850i

Mboard .. Asus Z390 Code

 

Laptop… XMG

i9-11900k, iGpu n/a

Memory 64GB DDR4

Graphics card … Laptop RTX 3080

miroslav-h wrote on 6/2/2021, 1:39 PM

For me the most important thing is preview, as I need it for smooth editing and grading to match shots on the timeline. I do not care about rendering times; they are not really important to me.

I know the 5900X is superb, but according to this Vegas benchmark it seems that the i9-10900K is better at encoding than the 5900X.

https://techgage.com/article/best-cpu-for-rendering-video-encoding-spring-2021/3/

 

Now I am about to decide what to buy, so what do you think?

Yelandkeil wrote on 6/2/2021, 3:27 PM

You're contradicting yourself 😇.

As long as the AMD decodes smoothly enough for editing, who cares how much better the Intel encodes?

I'm going to order a 5950X tonight (to save some postage) and will report back, perhaps on the weekend or next Monday.

-- Hard&Software for 5.1RealHDR10 --

ASUS TUF Gaming B550plus BIOS3202: 
*Thermaltake TOUGHPOWER GF1 850W 
*ADATA XPG GAMMIX S11PRO; 512GB/sys, 2TB/data 
*G.SKILL F4-3200C16Q-64GFX 
*AMD Ryzen9 5950x + LiquidFreezer II-240 
*XFX Speedster-MERC319-RX6900XT <-AdrenalinEdition 24.12.1
Windows11Pro: 24H2-26100.3915; Direct3D: 9.17.11.0272

Samsung 2xLU28R55 HDR10 (300CD/m², 1499Nits/peak) ->2xDPort
ROCCAT Kave 5.1Headset/Mic ->Analog (AAFOptimusPack 6.0.9403.1)
LG DSP7 Surround 5.1Soundbar ->TOSLINK

DC-GH6/H-FS12060E_HLG4k120p: WB=manual, Shutter=125, ISO=auto/manual
HERO5_ProtuneFlat2.7k60pLinear: WB=4800K, Shutter=auto, ISO=800

VEGASPro22 + XMediaRecode/Handbrake + DVDArchi7 
AcidPro10 + SoundForgePro14.0.065 + SpectraLayersPro7 
K-LitecodecPack17.8.0 (MPC Video Renderer for HDR10-Videoplayback on PC) 

JN- wrote on 6/2/2021, 6:09 PM

@Yelandkeil If you have the time, consider adding your 5950X to the Benchmarking results; there's a link to it with instructions via my signature.


Yelandkeil wrote on 6/2/2021, 7:06 PM

I'll do it.


JN- wrote on 6/2/2021, 7:16 PM

@Yelandkeil That should be interesting.


Howard-Vigorita wrote on 6/2/2021, 8:20 PM

@miroslav-h I find using hardware decoding enhances Vegas editing smoothness, especially if it's done asynchronously from both CPU operations and main GPU chores like timeline, FX, render, and display optimization. Three heads are better than one or two. That can be accomplished with one Intel CPU + one video board, or one AMD CPU + two video boards. But the 5700G might even things up a bit... if it can do better than 8-bit rendering.

@Hulk wrote

The other part of my Vegas workflow is rendering out the project. As I wrote above I always frameserve to Handbrake as I haven't found anything "inside" of Vegas that can produce as good a result with anywhere near equal the data rate. This is where a fast CPU with lots of cores comes in handy as both Vegas and Handbrake will need the compute during the encoding process.

For rendering inside Vegas with identical x264 and x265 libs, try Voukoder. That would give you the benefit of Vegas doing the decoding concurrently via an Intel iGPU or other GPU while rendering with these CPU-only libs, and save you the expense and complication of buying and running two computers. Btw, both Voukoder and Handbrake can also be configured to use accelerated libs that call on Intel, AMD, or Nvidia hardware for rendering. But I think Handbrake only supports an Intel iGPU for accelerated decoding. In any case it'll blow the doors off using their CPU-only libs, as was done in that benchmark review posted earlier.

Hulk wrote on 6/2/2021, 9:06 PM

@miroslav-h wrote

For me the most important thing is preview, as I need it for smooth editing and grading to match shots on the timeline. I do not care about rendering times; they are not really important to me.

I know the 5900X is superb, but according to this Vegas benchmark it seems that the i9-10900K is better at encoding than the 5900X.

https://techgage.com/article/best-cpu-for-rendering-video-encoding-spring-2021/3/

 

Now I am about to decide what to buy, so what do you think?

Those tests that put the 10900K above the 5900X are suspect in my opinion. Look at every other test and the 5900X is faster. They found some strange combinations that favor the 10900K. It's a well-known fact that the 5900X is much faster than either the 10900K or 11900K in 99% of applications. All of the Adobe tests show the 5900X and even the 5800X faster than the 10900K, which is generally what happens when those two are compared.

Hulk wrote on 6/2/2021, 9:10 PM

@Howard-Vigorita wrote

@miroslav-h I find using hardware decoding enhances Vegas editing smoothness, especially if it's done asynchronously from both CPU operations and main GPU chores like timeline, FX, render, and display optimization. Three heads are better than one or two. That can be accomplished with one Intel CPU + one video board, or one AMD CPU + two video boards. But the 5700G might even things up a bit... if it can do better than 8-bit rendering.

@Hulk wrote

The other part of my Vegas workflow is rendering out the project. As I wrote above I always frameserve to Handbrake as I haven't found anything "inside" of Vegas that can produce as good a result with anywhere near equal the data rate. This is where a fast CPU with lots of cores comes in handy as both Vegas and Handbrake will need the compute during the encoding process.

For rendering inside Vegas with identical x264 and x265 libs, try Voukoder. That would give you the benefit of Vegas doing the decoding concurrently via an Intel iGPU or other GPU while rendering with these CPU-only libs, and save you the expense and complication of buying and running two computers. Btw, both Voukoder and Handbrake can also be configured to use accelerated libs that call on Intel, AMD, or Nvidia hardware for rendering. But I think Handbrake only supports an Intel iGPU for accelerated decoding. In any case it'll blow the doors off using their CPU-only libs, as was done in that benchmark review posted earlier.

Good info. Thanks. I'm going to give it a go now...

RogerS wrote on 6/2/2021, 10:18 PM

@miroslav-h wrote

For me the most important thing is preview, as I need it for smooth editing and grading to match shots on the timeline. I do not care about rendering times; they are not really important to me.

I know the 5900X is superb, but according to this Vegas benchmark it seems that the i9-10900K is better at encoding than the 5900X.

https://techgage.com/article/best-cpu-for-rendering-video-encoding-spring-2021/3/

 

Now I am about to decide what to buy, so what do you think?

@Hulk wrote

Those tests that put the 10900K above the 5900X are suspect in my opinion. Look at every other test and the 5900X is faster. They found some strange combinations that favor the 10900K. It's a well-known fact that the 5900X is much faster than either the 10900K or 11900K in 99% of applications. All of the Adobe tests show the 5900X and even the 5800X faster than the 10900K, which is generally what happens when those two are compared.

I think a better way to read these tests is that with GPU enabled all these CPUs perform similarly in Vegas for rendering (which is a limitation of Vegas, not the CPU). So one should look at the entire system not just the processor.

With CPU-only renders the AMD has an advantage.

Want more benchmarking results? This is about the only other place to find them for Vegas:
https://www.vegascreativesoftware.info/us/forum/benchmarking-results-continued--118503/

Howard-Vigorita wrote on 6/3/2021, 3:19 AM

I feel slighted. You can get additional results here... the "Sample Project" column is the same bench referenced above, which I've run on my own systems, all of which are Intel CPU-based.

RogerS wrote on 6/3/2021, 8:11 AM

I didn't mean to slight anyone. I didn't understand this was a different benchmark.

Howard-Vigorita wrote on 6/3/2021, 8:00 PM

Ha, ha, @RogerS I was ribbing you. I started accumulating my Red Car benches with 4K extensions a number of years ago to help me determine what computer and camera I could build into a practical 4K workflow, back when my main computer was a pretty big Xeon, which has no iGPU, much like a typical Ryzen. The die was cast in the direction of Intel when I built a little 4-core NUC for the road and found that, with both AMD and Intel iGPUs, it was outperforming my big boy. I added "Sample Project" to my charts when it came out and built my 9900K system around the same time. My charts have been instrumental in choosing cameras and computers, not to mention evaluating updates and tuning Vegas itself for optimal performance. And I've designed an online HTML approach to them for easy access to all. Though I don't have any Ryzen systems myself, it's easy enough to square the numbers I get on my Intel systems with those in @JN-'s Benchmarking thread and see that much of the common wisdom contrasting Vegas performance on AMD and Intel CPUs ain't necessarily so.

For the last week or two I've been taking a similar approach to quality analysis of different codecs with ffmetrics (great utility, btw) and am close to having measured all the codecs and render formats I can think of. At some point I'll probably unify my Vegas charts for both performance and quality. Links in my sig.
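For reference, FFMetrics is a front-end over ffmpeg's quality filters, so the same kind of VMAF comparison can be run straight from the command line. A sketch only — it assumes an ffmpeg build with libvmaf, and the file names are placeholders:

```shell
# Score a render against its source with VMAF (0-100, higher is better).
# ffmpeg's libvmaf filter takes the distorted file first, the reference second.
REF="source.mov"      # placeholder: the original clip
DIST="render.mp4"     # placeholder: the encoded output being judged
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$REF" ] && [ -f "$DIST" ]; then
  ffmpeg -hide_banner -i "$DIST" -i "$REF" -lavfi libvmaf -f null -
fi
```

The `-f null -` output sink means nothing is written to disk; the VMAF score is printed in the filter's log output at the end of the run.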

RogerS wrote on 6/3/2021, 8:31 PM

@Howard-Vigorita That's good, upon rereading it I couldn't be sure and didn't want to make the world any more toxic than it already is.

I wasn't quite sure how you gathered these benchmarks. Who is contributing these Red Car projects? I count 5 CPUs and way more GPUs; are these all your own systems?

For quality comparisons I don't really understand the different methodologies. I remember this thread concluded that QSV quality hadn't really changed with the newer processors. Do you know why there is a difference?
 

JN- wrote on 6/4/2021, 1:49 AM

OK Howard, since you tagged my name here, and speaking of toxic: in the interest of a fuller understanding of how we got here with the Benchmarking Continued thread, it might be in @RogerS's interest to have the background (assuming he doesn't already), as of course it came from the original Benchmarking (not Continued) thread.

Well worth a look, especially from page 5 onwards ... https://www.vegascreativesoftware.info/us/forum/benchmarking--116285/?page=5


Howard-Vigorita wrote on 6/4/2021, 4:18 AM

I wasn't quite sure how you gathered these benchmarks. Who is contributing these Red Car projects? I count 5 CPUs and way more GPUs; are these all your own systems?

For quality comparisons I don't really understand the different methodologies. I remember this thread concluded that QSV quality hadn't really changed with the newer processors. Do you know why there is a difference?

@RogerS The docs are all in the Google cloud drive link in my sig, where you'll also find the zips with the projects. My systems are also in my sig, and all my charts are from running benches on those systems. The Red Car is the original Sony benchmark, unchanged. I designed the rest just by swapping out the media files with 4K transcodes to match clip media parameters from my 4K camera... AVC, HEVC, and recently ProRes. I don't fully understand the nuts and bolts of ffmetrics myself and have only started looking at it in the last week. The thread I linked and its author are the best source of understanding on that. I do know that Netflix has been a big motivator in the concept of assessing the quality of its transmission compression; I believe they've used all three metrics but have settled on VMAF as the most accurate quality metric. In rendering and looking at all the render formats myself, I agree with that.

Regarding qsv, I'm scratching my head on that too. I stopped using it for deliverables back with Vegas v16 because I didn't think it looked that great. VMAF tells me it's pretty decent now. Maybe the drivers got better?

@JN- I don't understand??? If I've offended you in some way, that was not my intent. I pointed out the technical limitations of that bench to its author, which I thought were understood and accepted. In any event, I prefer to present my results with more accurate labeling of my run parameters. It's a researcher thing. And in context with other benches like Red Car, which I believe is fairer, more balanced, and more informative. I think it's great that you've coordinated the community to run it in a standardized way and have assembled the results from so many systems.

RogerS wrote on 6/4/2021, 7:24 AM

Thanks for the clarification, and hopefully we can put what looks like a pretty modest disagreement in the past.

TheRhino wrote on 6/5/2021, 9:11 AM

As I noted in THIS thread, I think the $370 (USD) Intel 11700K & $200 ASRock W480 motherboard make the best bang-for-the-buck Vegas workstation. Mine overclocks well, so there is not much difference between my 11700K & the pricier 11900K. The W480 comes with onboard Thunderbolt 3 & 10G networking, saving PCIe slots for my LSI RAID card, BM Decklink, etc. I'm using my used $200 VEGA 56 until new GPU prices & stock stabilize...

Note that MANY benchmarks on other websites DISABLE GPU rendering when comparing CPUs, etc. so you have to compare AMD & INTEL CPUs using the SAME GPU, which Techgage did HERE. In this benchmark, CPU clock speed rules alongside a decent GPU. With Intel, you get the CPU, iGPU, and PCIe GPU all working together.

For my daily PAID work, I currently use (2) best bang/buck workstations (9900K & 11700K) to get more work done than one single top-of-the-line workstation with pricier components. I edit using the full resources of one workstation while the second renders-out all of the required file types from my backup copy of the source video & VEG files.

With the Intel CPUs and AMD VEGAs, I can render-out 6+ different projects at once on the same system, all batch rendering to different file formats as requested by my clients. In comparison, Nvidia GPUs only let me render (3) MP4s at once.

Last changed by TheRhino on 6/5/2021, 9:36 AM, changed a total of 1 times.

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic Decklink PCie card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Howard-Vigorita wrote on 6/5/2021, 9:20 PM

With the Intel CPUs and AMD VEGAs, I can render-out 6+ different projects at once on the same system, all batch rendering to different file formats as requested by my clients. In comparison, Nvidia GPUs only let me render (3) MP4s at once.

My sense has always been that the high-end AMD video boards have higher throughput than Nvidia, and comparable to Intel. Unfortunately, all the AMD boards I have can only render 8-bit color to an output file, and none of them even pretend to render lossless. The Nvidia and Intel GPUs can do 10-bit, and Nvidia presets have "lossless" selections which yield high-quality (but far from lossless) output.
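One way to check which bit depths a given hardware encoder path accepts (outside of Vegas) is to ask ffmpeg for an encoder's supported pixel formats. A sketch, assuming an ffmpeg build with NVENC support; the same query works for other vendors' encoders:

```shell
# Show hevc_nvenc's supported pixel formats; p010le in the list indicates
# a 10-bit input path. Swap in hevc_qsv or hevc_amf to compare vendors.
if command -v ffmpeg >/dev/null 2>&1; then
  NVENC_INFO=$(ffmpeg -hide_banner -h encoder=hevc_nvenc 2>/dev/null || true)
  printf '%s\n' "$NVENC_INFO" | grep -i 'pixel formats' || true
else
  NVENC_INFO=""   # ffmpeg not installed; nothing to report
fi
```

This reflects what the ffmpeg wrapper exposes, which may differ from what a particular NLE's render path uses, so treat it as a hint rather than a verdict.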