10-Bit Projects in VP?

AVsupport wrote on 9/8/2018, 3:36 AM

Seeing more and more ILCE mirrorless cameras capable of 10-bit video (either internally or via HDMI) being released at this Photokina, enabling meaningful LOG and HDR usage, makes me want the ability to work natively with 10-bit projects in VP as well.

32-bit floating point, being 4x the data of 8-bit, seems like a waste of data and processing power in some instances (since GPU performance with contemporary cards is not yet fully optimized), and sometimes 10-bit (only 25% more bits per channel) could well be enough, yet yield much better results and provide more meaningful capabilities at a better speed/performance balance.
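As a rough back-of-the-envelope illustration of the data volumes I mean (a minimal sketch; the UHD frame size, RGBA layout and uncompressed buffers are my own assumptions, not how VP necessarily stores frames internally):

```python
# Rough uncompressed frame sizes per bit depth (hypothetical UHD RGBA buffers).
# Only meant to illustrate relative data volume, not VP's real internals.

WIDTH, HEIGHT, CHANNELS = 3840, 2160, 4  # UHD RGBA, assumed layout

def frame_mb(bits_per_channel: int) -> float:
    """Megabytes per uncompressed frame at the given bit depth."""
    return WIDTH * HEIGHT * CHANNELS * (bits_per_channel / 8) / 1e6

for label, bits in [("8-bit int", 8), ("10-bit (packed)", 10),
                    ("16-bit int", 16), ("32-bit float", 32)]:
    print(f"{label:15s} ~{frame_mb(bits):6.1f} MB/frame")

# 8-bit int       ~  33.2 MB/frame
# 10-bit (packed) ~  41.5 MB/frame   (the "only 25% more" case)
# 16-bit int      ~  66.4 MB/frame
# 32-bit float    ~ 132.7 MB/frame   (4x the 8-bit buffer)
```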

I believe Magix's sister product ProX already has this feature.

Please consider implementing this feature.

my current Win10/64 system (latest drivers, water cooled) :

Intel Coffee Lake i5 Hexacore (unlocked, but not overclocked) 4.0 GHz on Z370 chipset board,

32GB (4x8GB Corsair Dual Channel DDR4-2133) XMP-3000 RAM,

Intel 600series 512GB M.2 SSD system drive running Win10/64 home automatic driver updates,

Crucial BX500 1TB EDIT 3D NAND SATA 2.5-inch SSD

2x 4TB 7200RPM NAS HGST data drive,

Intel HD630 iGPU - currently disabled in Bios,

nVidia GTX1060 6GB, always on latest [creator] drivers. nVidia HW acceleration enabled.

main screen 4K/50p 1ms scaled @175%, second screen 1920x1080/50p 1ms.

Comments

Wolfgang S. wrote on 9/8/2018, 4:59 AM

In my opinion, you will not see Vegas implement an additional 10-bit project setting like the one you find in other applications such as Edius. The route they seem to be taking is the 32-bit floating point mode; there you have ACES 1 and the transforms required for HDR. In my opinion the target should be to make the 32-bit floating point mode, especially ACES, perform much better (that this is possible can be seen by looking at Resolve).

Desktop: PC AMD 3960X, 24 x 3.8 GHz * RTX 3080 Ti (12 GB) * Blackmagic Extreme 4K 12G * QNAP Max8 10 Gb LAN * Resolve Studio 18 * Edius X * Blackmagic Pocket 6K/6K Pro, EVA1, FS7

Laptop: ProArt Studiobook 16 OLED * internal HDR preview on the laptop monitor * i9 12900H with iGPU Iris Xe * 32 GB RAM * GeForce RTX 3070 Ti 8GB * Blackmagic Ultrastudio 4K Mini

HDR monitor: ProArt Monitor PA32 UCG-K 1600 nits, Atomos Sumo

Others: Edius NX (Canopus NX)-card in an old XP-System. Edius 4.6 and other systems

Musicvid wrote on 9/8/2018, 8:58 AM

32-bit floating point, being 4x the data of 8-bit, seems like a waste of data and processing power in some instances

That reflects only a lack of understanding of the basics. Float math has no endpoints, so it has no predefined range. Not 32 bits, not 2 bits, not nuthin'. Nor is there any waste of data, which may be far less than 8 bits per pixel in the uncompressed pipeline.

Would thinking of it as "up to" 32 bits of data help?
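A tiny illustration of the "no endpoints" point (my own sketch, nothing to do with VP's internals): push a pixel value above nominal white and bring it back, and a float pipeline still has the detail, while an 8-bit integer pipeline has already clipped it away.

```python
import numpy as np

# One over-bright pixel, nominally white = 1.0 (float) or 255 (8-bit).
# Gain it up 2x, then back down 0.5x, as two chained "FX".
gain_up, gain_down = 2.0, 0.5

# 32-bit float pipeline: no fixed endpoints, so 1.6 survives the round trip.
x_float = np.float32(0.8)
print((x_float * gain_up) * gain_down)           # 0.8 -> 1.6 -> 0.8

# 8-bit integer pipeline: clipped to 0..255 after each step.
x_int = np.uint8(204)                            # ~0.8 of full scale
stepped = np.clip(int(x_int) * gain_up, 0, 255)  # 408 -> clipped to 255
print(np.uint8(stepped * gain_down))             # 127, the detail is gone
```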

fr0sty wrote on 9/8/2018, 9:03 AM

Exactly, it also refers to floating point precision, not the number of bits defining each pixel's color. I've been editing 10-bit in Vegas 16 for weeks now and it works great. The scopes are very helpful; just be sure to set your view transform to something that most closely matches your TV. I also use full range when working with 10-bit (for HDR), as suggested by Magix.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000MHz DDR4

Geforce RTX 3090

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

Musicvid wrote on 9/8/2018, 9:05 AM

I agree, "precision" may be a better word.

AVsupport wrote on 9/8/2018, 9:21 AM

Thanks for your input gentlemen, looks like I might have a bit of light reading to do ;-)

Wolfgang S. wrote on 9/8/2018, 11:50 AM

Most of us have 10-bit footage at best, maybe 12-bit in some cases. The 32-bit floating point mode is fine for that. For HDR I will be happy once we get a waveform monitor with a nit scale. And the beloved ACES should become better in terms of performance.

Tim L wrote on 9/8/2018, 12:14 PM

"Exactly, it also refers to floating point precision, not the number of bits defining each pixel's color."

I think it does describe the number of bits (per channel) defining each pixel's color, and I think the question in the original post comes down to this: "Is 32-bit float overkill for video representation, given the disadvantages of slower calculations and using more memory?"

I'm not going to second-guess Sony's decision to go to 32-bit float, and I don't have any real insight into the inner workings of Vegas, but I'll add some "light reading" to the discussion.

A good visualization of "floating point" is a digital multi-meter: you have 4 digits to display -- i.e. 4 decimal digits of precision -- but you can select different ranges: Ohms, Kilohms, Megohms, etc.  Or the meter might autorange and just move the decimal point left and right as needed, but you still have only 4 digits of precision -- i.e. levels of detail -- regardless of the selected range.

A 32-bit floating point number essentially has a 24-bit significand (mantissa), coupled with a binary exponent from -126 to +127.  So the effective precision is the same as a 24 bit integer: 1 part in ~16.8 million of whatever range (exponent) you are on. So if you decide, arbitrarily, that 0.00000000 is pure black and 1.00000000 is pure white, you can represent about 16.8 million shades of gray in between (for each channel).

  • An 8-bit unsigned integer occupies 1 byte and can range from 0 to 255, so it has a precision of 1/256 of full scale, essentially a "level of detail" of representing some value to the nearest 0.4% of the maximum possible value.
  • A 16-bit unsigned integer occupies 2 bytes and can range from 0 to 65535, so it has a precision of 1/65536 of full scale, or a detail level of 0.0015% of the maximum possible value.
  • A 32-bit float occupies 4 bytes and has a precision of 1/16777216, or about 0.000006% of the established maximum working value (whatever you decide that to be).

I think AV's comment about "waste" is that Vegas' 32-bit float uses 4 bytes per channel per pixel, which is double what a 16-bit integer would use.  A 1920x1080 RGBA 32-bit "canvas" requires 33.2 MB, compared to 16.6 MB for the same in 16-bit format.  That's a lot of data to move around, and float calculations are generally slower than integer calculations.  But I don't know the inner workings of Vegas, and it could be that the 32-bit float advantages outweigh the disadvantages. Again, I'm not trying to second guess their decision.
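The step sizes in the list above, and the frame sizes mentioned here, are easy to verify (a quick sketch using NumPy purely for illustration; the 0.0..1.0 full-scale range is an assumption, as above):

```python
import numpy as np

# Quantization step ("level of detail") of each representation, relative
# to an assumed 0.0..1.0 full-scale range.
steps = {
    "8-bit int":    1 / 256,
    "16-bit int":   1 / 65536,
    "32-bit float": np.finfo(np.float32).eps / 2,  # 2**-24, 24-bit significand
}
for name, step in steps.items():
    print(f"{name:12s} smallest step near full scale: {step:.2e} ({step * 100:.7f}%)")

# And the per-frame cost: a 1920x1080 RGBA canvas.
samples = 1920 * 1080 * 4                          # channel samples per frame
print(f"16-bit canvas: {samples * 2 / 1e6:.1f} MB, "
      f"32-bit float canvas: {samples * 4 / 1e6:.1f} MB")
# -> 16-bit canvas: 16.6 MB, 32-bit float canvas: 33.2 MB
```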
 

Kinvermark wrote on 9/8/2018, 12:40 PM

Well, given the pace of Vegas' development vs. new hardware, video formats, etc., 32-bit float seems like a good choice. Resolve and Premiere are 32-bit internally.

AVsupport wrote on 9/8/2018, 4:28 PM

 

I think AV's comment about "waste" is that Vegas' 32-bit float uses 4 bytes per channel per pixel, which is double what a 16-bit integer would use.  
 

Well, I guess if that is true, it is exactly what I was worried about: data rate to chew through. A move from 8-bit to 10-bit would offer significant improvements without the bottlenecks, a little like 2.5K being a viable acquisition format if you want to deliver 1080 without the 4K problems. I would love to see a fast, agile Vegas kick ass against some of the big NLEs in terms of performance, but I'm not seeing a future-proof vision for the product just yet.

Trensharo wrote on 9/8/2018, 8:58 PM

Well, given the pace of Vegas' development vs. new hardware, video formats, etc., 32-bit float seems like a good choice. Resolve and Premiere are 32-bit internally.

Premiere Pro CC and Resolve have proper (and extensive, esp. w/ Resolve) GPGPU support. They are handing off a ton of stuff to GPU Compute Units for computation, which do those types of computations faster and more efficiently than a CPU - it's called Hardware Acceleration for a reason.

Remember when PC microprocessors had an FPU as an add-on? Well, we're back to those days. The GPU is the new FPU for computationally-intensive applications.

Vegas is like a spreadsheet that doesn't support the FPU in a computer. They need the next release to reintroduce proper GPGPU support in the NLE, and for more than just encoding footage. Not having this puts all of their users behind the 8-ball.

 

I think AV's comment about "waste" is that Vegas' 32-bit float uses 4 bytes per channel per pixel, which is double what a 16-bit integer would use.  
 

Well, I guess if that is true, it is exactly what I was worried about: data rate to chew through. A move from 8-bit to 10-bit would offer significant improvements without the bottlenecks, a little like 2.5K being a viable acquisition format if you want to deliver 1080 without the 4K problems. I would love to see a fast, agile Vegas kick ass against some of the big NLEs in terms of performance, but I'm not seeing a future-proof vision for the product just yet.

That exists already. You have two choices: Edius Pro or Resolve.

It's really about what you need. If you need better performance and support for these workflows, you will get it. Otherwise, it's really nothing more than a consideration.

There can be no future-proof vision for a product that is years behind the competition due to slow, laggard development. VEGAS isn't setting the standard; the competition is. It has yet to catch up, so how can it present a vision that is future-proof? They're scrambling to maintain whatever semblance of competitiveness remains, and frankly, it isn't competitive with what's out there yet.

You simply have to give them a couple more years and see what they can deliver. 2020, IMO, is the year to look out for.

Until then, I'm enjoying Premiere Pro CC and Resolve, whilst using my Version of VEGAS just to render ProRes out on Windows.

Wolfgang S. wrote on 9/9/2018, 1:06 AM

Trensharo keeps posting incorrect points here. There IS GPU acceleration in Vegas, and it works.

zdogg wrote on 9/9/2018, 1:32 AM

I think Trensharo is correct, and Magix, the new "rich uncle", had better step up its game here. Vegas is still a monster program waiting to be unleashed, but they don't seem to get its potential; they keep nickel-and-diming this, and IF they keep doing so (let's give them one more year) like Sony did, it really will be over. And I am the Vegas fan of Vegas fans, but it's just too glitchy (and I normally cheerlead here, as some know, too well), and it really should work on streamlining some of its slick compositing potential, some routines and quick scripting or key commands, because what it does, it does well, when it's not glitching and tripping over itself with its wonky GUI routines (I could help you guys, seriously, I KNOW what to do)... which is WAY TOO OFTEN... sorry Gary and Co. Maybe they need to let some other people have a crack at it; after years they have become a bit too myopic and don't really see their own masterwork objectively, sadly.

Mindmatter wrote on 9/9/2018, 3:22 AM

Do I understand this right, that 10-bit 4:2:2 footage can't be properly edited in Vegas?

Also,

32-bit floating point, being 4x the data of 8-bit, seems like a waste of data and processing power in some instances (since GPU performance with contemporary cards is not yet fully optimized), and sometimes 10-bit (only 25% more bits per channel) could well be enough, yet yield much better results and provide more meaningful capabilities at a better speed/performance balance.

My understanding was that 8-bit video = 256 x 256 x 256 = 16.7 million colors, and 10-bit video = 1024 x 1024 x 1024 = 1.07 billion colors.

So it's not a mere 25% more information. Or did I misunderstand your post?
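Both figures are right, they just measure different things: 10-bit stores 25% more bits per channel, but gives four times as many levels per channel. A quick sketch of the arithmetic (my own numbers, only to reconcile the two views):

```python
# Reconciling "25% more data" (bits stored) with "64x the colors" (levels).
for bits in (8, 10):
    levels = 2 ** bits          # levels per channel
    colors = levels ** 3        # RGB combinations
    print(f"{bits}-bit: {levels} levels/channel, {colors:,} colors, "
          f"{bits * 3} bits/pixel")

# 8-bit:  256 levels/channel,    16,777,216 colors, 24 bits/pixel
# 10-bit: 1024 levels/channel, 1,073,741,824 colors, 30 bits/pixel
# -> storage grows by 30/24 = 1.25x, but levels per channel grow 4x.
```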

 

AMD Ryzen 9 5900X, 12x 3.7 GHz
32 GB DDR4-3200 MHz (2x16GB), Dual-Channel
NVIDIA GeForce RTX 3070, 8GB GDDR6, HDMI, DP, studio drivers
ASUS PRIME B550M-K, AMD B550, AM4, mATX
7.1 (8-channel) Surround Sound, Digital Audio, onboard
Samsung 970 EVO Plus 250GB, NVMe M.2 PCIe x4 SSD
be quiet! System Power 9 700W CM, 80+ Bronze, modular
2x WD red 6TB
2x Samsung 2TB SSD

OldSmoke wrote on 9/9/2018, 3:32 AM

Do I understand this right, that 10-bit 4:2:2 footage can't be properly edited in Vegas?

That depends very much on your hardware.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

Mindmatter wrote on 9/9/2018, 3:46 AM

OK, I think I understand that I can only work with 10-bit footage in Vegas' 32-bit mode, which indeed makes my PC / preview choke.

I guess I could edit in the 8 bit mode and switch to 32 just for the final grading / export?

Wolfgang S. wrote on 9/9/2018, 4:58 AM

I edit UHD 10-bit footage in the 32-bit floating point mode, with 8 cores and an R9 390X card. So yes, that is possible. However, ACES for HDR is a tough exercise.

Wolfgang S. wrote on 9/9/2018, 5:07 AM

(I could help you guys, seriously, I KNOW what to do)... which is WAY TOO OFTEN... sorry Gary and Co. Maybe they need to let some other people have a crack at it; after years they have become a bit too myopic and don't really see their own masterwork objectively, sadly.

Without knowing the source code and the issues they may be facing, it is not very credible to claim that you could do it better.

AVsupport wrote on 9/9/2018, 6:58 AM

There IS GPU acceleration in Vegas, and it works.

Well, if I'm seeing dropped frames on playback and my GTX 1060 6GB is only 13% utilized, then I'm not completely convinced by that statement. My hardware should allow for better performance than what I'm seeing, and that has to come down to the code base and hardware acceleration. I like VP and will continue to support it.

My understanding is that 8 bits delivers roughly 256 code values across a Rec.709 signal (a little fewer once you allow for the caps at top and bottom), and 10 bits delivers 1024, hence there's more data left in the highlights and shadows when squeezing a 14+ stop LOG profile into that space; more data exactly where you need it. And I was hoping this could prove a viable and efficient alternative until 32-bit performance is sorted out.
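A rough sketch of why those extra two bits matter for log footage (this assumes, purely for illustration, a simplified encoding that spreads 14 stops evenly in log2 space across the full code range; real S-Log/V-Log curves allocate codes unevenly, but the ratio is the point):

```python
# Hypothetical log curve: 14 stops spread evenly across the full code range.
# Count how many code values land in each stop at 8-bit vs 10-bit.
STOPS = 14

def codes_per_stop(bits: int) -> float:
    total_codes = 2 ** bits     # 256 for 8-bit, 1024 for 10-bit
    return total_codes / STOPS  # even spread across stops, by assumption

for bits in (8, 10):
    print(f"{bits}-bit: ~{codes_per_stop(bits):.0f} code values per stop")

# 8-bit:  ~18 code values per stop -> falls apart quickly once you grade
# 10-bit: ~73 code values per stop -> much more room before banding shows
```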

Marco. wrote on 9/9/2018, 7:14 AM

"I guess I could edit in the 8 bit mode and switch to 32 just for the final grading / export?"

Yes, this works pretty well.

Wolfgang S. wrote on 9/9/2018, 8:13 AM

You are right. In Vegas it is possible to use an 8-bit workflow with LUTs to grade log and even HDR footage, and then switch to 32-bit project settings for the final render. However, there are some constraints in such a workflow; it depends on the details.

Mindmatter wrote on 9/9/2018, 8:20 AM

Hmmm... looks like a serious hardware upgrade just moved a few steps up the wishlist, as a Sony FS7 is due soon for 10-bit 4:2:2 footage. It's mainly things like banding that bug me most in 8-bit.
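Banding is easy to reproduce numerically (a toy example, not tied to any particular camera or to VP's pipeline): quantize a shallow gradient at 8-bit and 10-bit and count how many distinct levels survive a modest 4x contrast stretch.

```python
import numpy as np

# A shallow gradient occupying 25% of the signal range (think: a dim sky),
# recorded at each bit depth, then stretched 4x in the grade.
gradient = np.linspace(0.0, 0.25, 10_000)

for bits in (8, 10):
    levels = 2 ** bits - 1
    recorded = np.round(gradient * levels) / levels  # camera quantization
    graded = np.clip(recorded * 4.0, 0.0, 1.0)       # 4x contrast stretch
    print(f"{bits}-bit source: {len(np.unique(graded))} distinct levels "
          f"left after the stretch")

# 8-bit source:  65 distinct levels  -> visible banding
# 10-bit source: 257 distinct levels -> far smoother under the same grade
```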

AVsupport wrote on 9/9/2018, 8:25 AM

Exactly what I'm talking about, @Mindmatter [edit: plus you can't effectively grade 8-bit LOG source without it falling apart]. As I am sitting here waiting and hoping for a meaningful [10-bit] A7SIII surprise, I want to have my options sorted.

Mindmatter wrote on 9/9/2018, 8:57 AM

Exactly what I'm talking about, @Mindmatter. As I am sitting here waiting and hoping for a meaningful [10-bit] A7SIII surprise, I want to have my options sorted.


AVsupport, I'm mostly still using my A7S I, and I doubt the A7SIII will have 10-bit 4:2:2 internally. They just won't reach into the specs of their higher-end cameras.

AVsupport wrote on 9/9/2018, 9:05 AM

If Sony were smart, they would provide a bridging product as a pathway to working in conjunction with their higher-end cameras. These days there can't really be any reason not to do it other than a political decision or a fight between their divisions. I'd be happy to have 1080/50 10-bit over 4K/25 8-bit. But that's another conversation for another forum.
