Comments

Former user wrote on 9/16/2017, 6:15 PM

John Rofrano, a good while ago, made up a .veg file, no external media required. He gave results for his system. This is the download link he gave ...

https://f1.creativecow.net/file.php?id=7248&folder=vegas-pro-120-gpu-render-test-project

I did my own tests with his file; CUDA is definitely used with this test. Below is an extract from my tests ...

 

CPU 4790K GTX 580 16gb ram

MainConcept AVC (Cuda) . . . . . 0:18

 

CPU 4790K GTX 1080 16gb ram

Magix MC Nvenc, default .......... 0:08

Former user wrote on 9/17/2017, 6:44 AM

OldSmoke, I changed out the GTX 1080 for the GTX 580 and did some tests; I'll put them all here later.

I used VP15 for testing. VP13 had error messages, so I couldn't replicate my previous tests exactly.

When I tried enabling Legacy GPU in VP14 it hung indefinitely, some issue, and I had to use Task Manager to close it.

The important one is that with the GTX 580 enabled for HW acceleration and using QSV for render assistance the time to render the "Legacy" Red Car test is 0:27s.

That was using the MC QSV default of 4 in the quality setting, data rate 24/12, 29.97fps.

 

Red Car test, 2K, ...Using VP15 ...

MC 29.97 fps, data rate 24/12 mbps, render templates default to NV and QSV.

CPU 4790k, 16gb ram

 

Using GTX 580 ...

With GTX 580 HW acceleration, render using CPU only 2:00s

With GTX 580 HW acceleration, render using Intel QSV 0:27s

 

Without GTX 580 HW acceleration, render using CPU only 2:48s

Without GTX 580 HW acceleration, render using Intel QSV 1:27s

 

With 4600 graphics HW acceleration, render using CPU only 1:59s

With 4600 graphics HW acceleration, render using Intel QSV 0:50s

 

Using GTX 1080 ...

With GTX 1080 HW acceleration, render using Nvenc 0:28s

With GTX 1080 HW acceleration, render using Intel QSV 0:24s

 

With Intel QSV HW acceleration, render using Nvenc 0:47s

With Intel QSV HW acceleration, render using Intel QSV 0:48s

 

It's fairly obvious that if you return the Ti card you'll be making a big mistake ...

Looking at the figures above, the GTX 1080 is a full 3 seconds! faster in render😂
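For anyone tabulating results like these, a quick sketch of normalizing the m:ss timings into seconds and computing speedups. The times in the dictionary are the GTX 580 figures quoted above; the helper itself is just an illustration, not anything Vegas-specific.

```python
# Sketch: normalize the m:ss render times quoted above and compare them.
# The timings are the ones posted in this thread; the helper is illustrative.

def to_seconds(t: str) -> int:
    """Convert an 'm:ss' string like '2:48' or '0:27' (or plain seconds) to seconds."""
    if ":" in t:
        m, s = t.split(":")
        return int(m) * 60 + int(s)
    return int(t)

times = {
    "GTX 580 HW accel, CPU render": "2:00",
    "GTX 580 HW accel, QSV render": "0:27",
    "no HW accel, CPU render": "2:48",
    "no HW accel, QSV render": "1:27",
}

baseline = to_seconds(times["no HW accel, CPU render"])
for name, t in times.items():
    secs = to_seconds(t)
    print(f"{name}: {secs}s ({baseline / secs:.1f}x vs CPU-only software render)")
```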

OldSmoke wrote on 9/17/2017, 10:29 AM

It seems you have not used the GTX 580 with CUDA in VP15, which is what I did. On your system that would mean using the Intel GPU for general timeline performance and the GTX 580 with MC AVC and CUDA; you would have to enable it first.

Aside from the GTX 580, I use my Fury X for timeline acceleration; my 3930K doesn't have a GPU. The combination of Fury X and GTX 580 with CUDA is as fast as Fury X and the 1080Ti with NVENC. The 1080Ti however isn't as good as the Fury X for timeline acceleration, just a tiny bit better than the GTX 580. Those are the facts based on my system and the basis for my decision to return the 1080Ti. I'd rather keep my money and save it towards a system update: X299 with a 7820X and DDR4 RAM.

I also hope that MAGIX will eventually support AMD's VCE engine.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

Former user wrote on 9/17/2017, 12:15 PM

"I also hope that MAGIX will eventually support AMD's VCE engine."

I'd say that's definitely on the developers' agenda.

As you can see from my post, I had major problems with VP 13 and VP 14. VP13 simply wouldn't render at all, and VP 14 went AWOL when I attempted to use Legacy GPU, so that left VP 15 for testing the GTX 580. After the problems with VP 14 I didn't attempt to use Legacy GPU in VP 15 to test CUDA. It's probably a driver issue; I'm using the latest Nvidia drivers and it may not suit the GTX 580.

Anyway, I've now removed the GTX 580 and put back the GTX 1080. The tests I did do hopefully should be useful to others contemplating a video card upgrade.


OldSmoke wrote on 9/17/2017, 12:32 PM

I'm using the latest Nvidia drivers and it may not suit the GTX 580.

I believe there are issues with the latest drivers and older cards. I am on 382.05 and that works well in all Vegas and Sony Vegas versions.


bitman wrote on 9/18/2017, 1:33 PM

John Rofrano, a good while ago, made up a .veg file, no external media required. He gave results for his system. This is the download link he gave ...

https://f1.creativecow.net/file.php?id=7248&folder=vegas-pro-120-gpu-render-test-project

I did my own tests with his file; CUDA is definitely used with this test. Below is an extract from my tests ...

 

CPU 4790K GTX 580 16gb ram

MainConcept AVC (Cuda) . . . . . 0:18

 

CPU 4790K GTX 1080 16gb ram

Magix MC Nvenc, default .......... 0:08

If there's any problem with his link, I've temporarily taken the liberty of putting it in my Dropbox for download ... the name begins with 7248 ...

https://www.dropbox.com/sh/j6p4adbm983p435/AABq-0tGLVRzqRDHnAufINjIa?dl=0

CPU 4790K, Titan-X (Maxwell), 16gb ram - Nvidia set as preference for all tests (even with the Intel render template)

Nvidia driver 385.41

Magix Render, default Internet HD 1080p 59.94 fps (NVidia NVENC) -> 22s

Magix Render, default Internet HD 1080p 59.94 fps (Intel QSV) -> 19s

Magix Render, default Internet HD 1080p 50 fps (NVidia NVENC) -> 18s

Magix Render, default Internet HD 1080p 50 fps (Intel QSV) -> 16s

Magix Render, default Internet HD 1080p 29.97 fps (NVidia NVENC) -> 11s

Magix Render, default Internet HD 1080p 29.97 fps (Intel QSV) -> 9s

Magix Render, default Internet HD 1080p 25 fps (NVidia NVENC) -> 9s

Magix Render, default Internet HD 1080p 25 fps (Intel QSV) -> 8s
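One pattern worth noting in the figures above: each NVENC time scales almost linearly with the project frame rate, which suggests the render is dominated by per-frame encode work. A quick sanity check (the timings are the ones quoted above; the seconds-per-fps ratio is derived for illustration):

```python
# Sketch: the NVENC render times above roughly track the number of frames
# rendered. Timings are the ones quoted in this post.

nvenc_times = {59.94: 22, 50.0: 18, 29.97: 11, 25.0: 9}  # project fps -> seconds

# For a project of fixed length, seconds-per-fps should be roughly constant
# when per-frame cost dominates (twice the frames ~ twice the time).
for fps, secs in sorted(nvenc_times.items()):
    print(f"{fps:6.2f} fps: {secs:2d}s  ({secs / fps:.3f} s per unit of fps)")
```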

 

 

Last changed by bitman on 9/18/2017, 1:39 PM, changed a total of 3 times.

Current system: VP 18 (build 284), VP 17 (build 452), (uninstalled VP 12,13,14,15, 16 Suite), Vegasaur, Magix Video Pro X (VPX11), Corel VS ultimate 2019, a lot of NEWBLUE plugins, Titler Pro 6, Mercalli 4.0, Respeedr, Vasco Da Gamma 12, VASST stuff, Production Assistent pro3, Boris Continuum 2020, Davinci Resolve Studio 16,...

  • OS: Windows 10 Pro 64, version 2004
  • CPU: i9900K stepping R0 (since October 2019), previously, der8auer i7-8700K (advanced edition), default speed (no overclock), Cooler: Noctua NH-D15s
  • RAM: G.Skill Trident Z 3200C14 DDR4 64GB, XMP set to profile 1 in BIOS
  • Videocard: NVIDEA RTX 2080Ti (Founders edition), NVIDEA studio drivers
  • Monitor: LG 38 inch ultra-wide (21x9) - Resolution: 3840x1600
  • C-drive (games & APPS): Samsung NVMe SSD 2TB 960 pro

  • Current Video source work drive: Samsung NVMe SSD 2T 970 EVO plus

  • Mass Data storage & Backup: WD gold 6TB + WD Yellow 4TB

  • MOBO: Gigabyte Z370 Aorus Gaming 7, BIOS F14
  • PS: Corsair HX1200i, Case: Silverstone fortress 2,
  • Misc: Logitech G915 (replaced G910), Evoluent Vertical Mouse, shuttlePROv2

 

Former user wrote on 9/18/2017, 2:39 PM

Nice, Bitman. There's a certain similarity: the lowest time is 8 seconds. The new HW render capabilities are really great.

OldSmoke wrote on 9/18/2017, 10:14 PM

VP15 with Fury X for timeline and GTX580 CUDA rendered to:

MC AVC 1080 29.97p -> 6 sec

MC AVC 1080 59.94p -> 6 sec

MC AVC 1080 50.00p -> 6 sec

MC AVC 1080 25.00p -> 4 sec

VP15 with Fury X for timeline:

XDCAM EX 1080 29.97p took 8 seconds

I think that underlines my findings. The old Fermi architecture with CUDA is still as fast or faster than today's NVENC with GPUs almost 6x the power. However, Intel's QSV is catching up but not available on HEDT CPUs.


fr0sty wrote on 9/18/2017, 10:40 PM

My times with the above project file (the one that uses generated media):

MCAVC 4K 59.94 = 2:08

MCAVC 4K 59.94 NVENC = 0:58

MCAVC 1080p 59.94 = 0:33

MCAVC 1080p 59.94 NVENC = 0:27

XDCAM EX 35mbps VBR HQ 29.97 = 0:13

System: Ryzen 7 1800x

Nvidia GTX 970 for both render and timeline

32GB DDR4 3000MHz

Windows 10

All stock speeds.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Radeon VII

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

X3Inspire wrote on 10/23/2017, 10:24 AM

Guys, I need some help and advice. This is my PC spec:

[CPU]
cpu="Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz"

[Memory]
memory="Available: 33,473,184 bytes, Free: 26,801,672 bytes"

[Graphics]
g1="NVIDIA GeForce GTX 1080 1920x1080"

I'm using Sony Vegas Pro 14 Suite, but it renders using my CPU. I already set allow GPU to TRUE, and GPU acceleration is set to my NVIDIA GTX 1080 after installing CUDA. Will this problem be solved if I upgrade to version 15?

liork wrote on 10/23/2017, 3:11 PM

Yes, only VP15 will support GPU rendering with the added Nvidia NVENC support.

Cornico wrote on 10/23/2017, 4:51 PM

Not only NVENC encoding, but when you allow your Intel GPU to show up, you also have the possibility of QSV.

That one goes a little bit faster than NVENC.

Former user wrote on 10/23/2017, 5:54 PM

So you're saying people with modern Intel processors don't need to buy Nvidia graphics cards for Vegas encoding or timeline playback? There's no advantage, just disadvantage?

Former user wrote on 10/23/2017, 6:21 PM

We agree. There appears to be no point to GPU acceleration if you have an Intel CPU with QSV for encoding. However, I'm not sure about timeline playback: I'm not sure if QSV can do that or if it needs the GPU. Do you know?

It's a lot of money to save if you're not a gamer and otherwise had no need for a modern GPU.

NickHope wrote on 10/23/2017, 9:36 PM

We agree. There appears to be no point to GPU acceleration if you have an Intel CPU with QSV for encoding. However, I'm not sure about timeline playback: I'm not sure if QSV can do that or if it needs the GPU. Do you know?

It's a lot of money to save if you're not a gamer and otherwise had no need for a modern GPU.

If you have a CPU with integrated Intel graphics then that becomes available as an option in "GPU acceleration of video processing". I don't know how the performance compares to NVIDIA or AMD GPUs, and of course it all depends on the spec. *If* Vegas gets its NVIDIA/AMD acceleration right, then in theory the high spec discrete GPUs should smoke Intel graphics. But as of today, in VP15 build 216, Intel may be a good option.

Not all Intel CPUs have integrated graphics, especially the "top end" ones, which max out on cores instead. It's kind of "expected" that those would be used with a powerful discrete GPU and so the integrated graphics are not necessary.

X3Inspire wrote on 10/23/2017, 10:24 PM

Nvenc testing in VP15 ... Red Car test ...

I left the RC value at the one auto-selected by choosing the Preset first.

In VP15, using the Magix 1920x1080 template that matches 29.97fps and a data rate of 24/12:

 

#   NVENC used   Preset                        Rate Control        Time
0   no           N/A                           N/A                 1:59
1   yes          Default                       VBR                 0:27
2   yes          High Performance              CBR                 0:26
3   yes          High Quality                  VBR                 0:27
4   yes          Low Latency - High Quality    Low Delay, CBR-HQ   0:27
5   yes          Low Latency - High Perf.      Low Delay, CBR-HQ   0:27
6   yes          Low Latency - Default         Low Delay, CBR-HQ   0:30

 

link to nvenc output files ... https://www.dropbox.com/sh/j6p4adbm983p435/AABq-0tGLVRzqRDHnAufINjIa?dl=0

I have updated the dropbox files with Intel QSV samples. In another thread I updated the results ... https://www.vegascreativesoftware.info/us/forum/someone-with-threadripper-1950x-or-skylake-x-7900x-please-test-this--108381/#ca668333

If you wish to check out the internals of the output files using mediainfo you'll find them in dropbox link under NVENC folder.

DESKTOP PC ...

Motherboard .. MAXIMUS VII RANGER. Z97 Express chipset, 4th and 5th generation Intel. Supports 6 x 6Gb/s SATA ports.

CPU .. Haswell Core i7-4790K (Intel® HD Graphics 4600), socket FCLGA1150

Memory .. Corsair DDR3 16GB.  Vengeance Pro Black DDR3 1866MHz CL9.

MSI GTX 1080 Z

CPU cooling .. Corsair H80i.

Case .. Corsair Graphite 760T, white.

Power supply .. Corsair HX750i. 750 watt.

Samsung SSD 840 series, 250GB.

Samsung SSD 850 series, 500GB.

Seagate HD ST3000DM001-1CH166 3 TB.

Seagate HD ST8000AS0002-1NA17Z 8TB.

This is my result;

Internet 4K 2160p 59.94 fps (NVidia NVENC) High Performance    47sec
Internet HD 1080p 59.94 fps (NVidia NVENC) High Performance   17sec
Internet HD 1080p 29.97 fps (NVidia NVENC) High Performance     7sec

 

So finally Vegas 15 is working with Nvidia, hoorayyy!

Former user wrote on 10/23/2017, 11:15 PM

If you have a CPU with integrated Intel graphics then that becomes available as an option in "GPU acceleration of video processing". I don't know how the performance compares to NVIDIA or AMD GPUs, and of course it all depends on the spec. *If* Vegas gets its NVIDIA/AMD acceleration right, then in theory the high spec discrete GPUs should smoke Intel graphics. But as of today, in VP15 build 216, Intel may be a good option.

Not all Intel CPUs have integrated graphics, especially the "top end" ones, which max out on cores instead. It's kind of "expected" that those would be used with a powerful discrete GPU and so the integrated graphics are not necessary.

The thing is, NVIDIA GPU encoding gives the lowest quality encodes of all, and is still slower than Intel Quick Sync, so it makes no sense to use the NVIDIA GPU. AMD GPUs are more powerful when it comes to encoding, so it will be interesting to see how modern AMD GPUs work in the future.

I hope, though I've not tested this (maybe others have), that if you, for example, double the bit rate of an NVIDIA encode, or increase the Intel QSV bit rate by a third, the hardware encodes can equal the quality of a software AVC encode. For me file size is not a problem: I have fast internet and I'm not making videos hours in length. But maybe, no matter what the bitrate, there will be artifacts introduced by the hardware encode. I'd hope that isn't true, and that the real problem is lack of efficiency compared to a software encode, so one just needs to bump up the bitrate.
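The file-size cost of that bitrate bump is easy to estimate with size ≈ bitrate × duration / 8. A small sketch: the 24 Mbps base matches the render template mentioned earlier in this thread, but the 10-minute duration is purely illustrative.

```python
# Sketch: estimate the file-size cost of raising a hardware encode's bitrate
# to compensate for lower per-bit quality. The 24 Mbps base matches the render
# template discussed in this thread; the 10-minute duration is illustrative.

def file_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate video stream size in MB (ignores audio and container overhead)."""
    return bitrate_mbps * duration_s / 8

duration = 10 * 60  # a hypothetical 10-minute video

software = file_size_mb(24, duration)          # software AVC at template bitrate
nvenc_2x = file_size_mb(24 * 2, duration)      # NVENC with doubled bitrate
qsv_4_3x = file_size_mb(24 * 4 / 3, duration)  # QSV raised by a third

print(f"software 24 Mbps: {software:.0f} MB")
print(f"NVENC    48 Mbps: {nvenc_2x:.0f} MB")
print(f"QSV      32 Mbps: {qsv_4_3x:.0f} MB")
```

So for a short project the bitrate bump costs well under a gigabyte, which is why it can be a reasonable trade if the quality really does catch up.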

AVsupport wrote on 10/23/2017, 11:40 PM

I've done a little Wiki digging (in my quest to find my new hardware) and thought I'd share:

New processor: Coffee Lake hexacore with UHD Graphics 630 (GT2). This is pretty much the same as Kaby Lake under a new name [QSV, GVT, OpenCL 2.2 (1.2 up to and including Haswell), OpenGL 4.6, Vulkan 1.0 for Windows are new]. It will do H.264 encode, 4K HEVC, 4K MPEG-4 AVC and FHD MPEG-2, VP9 8-bit only (BT.2020 pre/post processing), via the Z370 chipset, which supports DisplayPort + HDMI 1.4 [HDMI 2.0a is required for HDR support, 2.0b for HLG, 2.0 for BT.2020].

GFX card: an Nvidia GTX 10xx will do OpenGL 4.5, OpenCL 1.2 (all 10xx cards will), Vulkan 1.0 on a GDDR5 bus (256-bit for 1070/1080), and supports DP 1.4 (which supports HDR10/BT.2100) and HDMI 2.0b.

So, in short, it looks like if timeline playback uses an OpenCL version higher than 1.2 you'd be better off with the built-in Intel GPU, but if you want HDR output etc. and faster rendering, it'll be Nvidia?

Does anyone know which OpenCL version VP15 actually uses, and does it matter? @VEGAS_EricD?

Last changed by AVsupport on 10/23/2017, 11:42 PM, changed a total of 1 times.

my current Win10/64 system (latest drivers, water cooled) :

Intel Coffee Lake i5 Hexacore (unlocked, but not overclocked) 4.0 GHz on Z370 chipset board,

16GB (2x8GB Corsair Dual Channel DDR4-2133) XMP-3000 RAM,

Intel 600series 512GB M.2 SSD system drive running Win10/64 home automatic driver updates,

4TB 7200RPM NAS HGST data drive,

Intel HD630 iGPU - currently disabled in Bios,

nVidia GTX1060 6GB, always on latest [creator] drivers. nVidia HW acceleration enabled.

main screen 4K/50p 1ms scaled @175%, second screen 1920x1080/50p 1ms.

NickHope wrote on 10/24/2017, 12:46 AM
The thing is, NVIDIA GPU encoding gives the lowest quality encodes of all, and is still slower than Intel Quick Sync, so it makes no sense to use the NVIDIA GPU...

It might do if Vegas gets its support right. Maybe some Nvidia cards have already caught up or passed AMD *in VP15*, in terms of GPU acceleration of video processing. With so many random reports on this forum, I've lost track. A properly set up benchmark test will clarify things, but I don't think it makes sense to start that until we've had another update.

Don't forget, GPU acceleration of video processing (set in video preferences) and NVENC/QSV/OpenCL/CUDA rendering (set in rendering Encode Mode) are 2 completely different things.

Former user wrote on 10/24/2017, 2:01 AM
It might do if Vegas gets its support right. Maybe some Nvidia cards have already caught up or passed AMD *in VP15*, in terms of GPU acceleration of video processing. With so many random reports on this forum, I've lost track. A properly set up benchmark test will clarify things, but I don't think it makes sense to start that until we've had another update.

With my GTX 1070, using only GPU-accelerated filters, I find playback very laggy. The usual suspect filters, which tutorials for older versions of Vegas say need pre-rendering for good playback speed, still play back terribly (and still need pre-rendering). GPU-accelerated video processing doesn't seem very GPU-heavy either, at about 37% utilization.

I ran the benchmark/optimisation for the Neat Video filter and, before it crashes (it did in Vegas 14 too), you can see how terrible the GPU performance is relative to my ageing i7 CPU.

matthias-krutz wrote on 10/25/2017, 1:55 AM

I ran the benchmark/optimisation for the Neat Video filter and, before it crashes (it did in Vegas 14 too), you can see how terrible the GPU performance is relative to my ageing i7 CPU.

The test shows only the Neat Video performance.

Here are my results with an old AMD FX-6100 Six-Core and Neat Video 4.6.0.

Desktop: Ryzen R7 2700, RAM 2 x Ballistix DIMM 16 GB DDR4-2666, X470 Aorus Ultra Gaming, Radeon R9 380 4GB, Win10

Laptop: T420, W7 SP1, i5-2520M 4GB, SSD, HD Graphics 3000

VEGAS Pro 14-17, Movie Studio 12 Platinum, Vegasaur, HitfilmPro

Former user wrote on 10/25/2017, 8:02 AM

That's certainly a lot more impressive. That's what it should look like, reasonably priced also.

snacky wrote on 11/4/2017, 10:02 PM

VP15 with Fury X for timeline and GTX580 CUDA rendered to:

MC AVC 1080 29.97p -> 6 sec

MC AVC 1080 59.94p -> 6 sec

MC AVC 1080 50.00p -> 6 sec

MC AVC 1080 25.00p -> 4 sec

VP15 with Fury X for timeline:

XDCAM EX 1080 29.97p took 8 seconds

I think that underlines my findings. The old Fermi architecture with CUDA is still as fast or faster than today's NVENC with GPUs almost 6x the power. However, Intel's QSV is catching up but not available on HEDT CPUs.

Hi, how did you manage to enable MC AVC in V15 for the GTX 580?

I see only Magix AVC, which doesn't allow use of the GTX 580,

and Sony AVC, which supports CUDA for FHD only; if I select UHD, CUDA is not available.

Desktop: MB:x58a-ud3r, CPU: i7-930, 6GB RAM (OCZ,triple channel) GPU: GTX580, ATI 5830

Laptop #1 Lenovo W541, i7-4810Mq 2.8Ghz, 16GB RAM, GPU: K1100M 2GB

Laptop #2 Lenovo P50, i7-6820HQ 2.7GHz, Intel HD Graphics 530, 32 GB DDR4, GPU Quadro  M1000M 4GB

 

bitman wrote on 11/5/2017, 2:03 AM

@snacky I would not bother with CUDA anymore. The experiments I did with CUDA on my Titan X (Maxwell) were indeed fast, but I was not impressed by the quality; I would never use it for final results. I am currently building a complete new system from the ground up with the new Intel 6-core 8700K, a new mobo, a new Nvidia 1080 Ti video card, 32GB etc. ... I have all the components except the processor 😟. That is sold out everywhere.

Just a question to the forum members: I am still hesitating over whether to go for 64GB of DDR4 RAM or leave it at the (also new) 32GB DDR4 I have already bought for the new Z370 motherboard. I know more memory is always better, but I wonder if this is overkill for Vegas 15 (with the 6-core 8700K).

 
