VEGAS Pro 23 Performance Bug with H.265 10-bit 4:2:2 in 32-bit Float

Abdali wrote on 10/15/2025, 12:28 AM

Hello everyone,

I've discovered a serious performance issue in VEGAS Pro 23 when working with high-quality H.265 10-bit 4:2:2 media in 32-bit floating point (full range) mode.

The core issue is that playback becomes jerky and stuttery when the project's Pixel Format is set to 32-bit floating point (full range) in Vegas Pro 23. This issue does not occur in VEGAS Pro 22 with the same media and project settings.

Media Used: H.265 10-bit 4:2:0 and H.265 10-bit 4:2:2 footage.

Project Setting: Set Pixel Format to 32-bit floating point (full range).

Playback of H.265 10-bit 4:2:2 media on the timeline is extremely jerky and stuttery, making editing or previewing impossible, even without any effects applied.

VEGAS Pro 23 only plays back smoothly when the Pixel Format is set to 8-bit (full range). But the same project runs fine when it's opened in VEGAS Pro 22 in both 8-bit (full range) and 32-bit floating point (full range).
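To illustrate why 32-bit float mode is so much heavier than 8-bit in the first place, here is some back-of-the-envelope arithmetic (a rough sketch only; VEGAS's actual internal buffer formats and overhead will differ). Each channel goes from 1 byte to 4 bytes, so every uncompressed frame the preview engine pushes around is 4x larger:

```python
# Rough per-frame memory estimate for a UHD (3840x2160) frame in the
# preview pipeline. Assumes 4 channels (RGBA); real internal formats differ.
WIDTH, HEIGHT, CHANNELS = 3840, 2160, 4

def frame_bytes(bytes_per_channel: int) -> int:
    """Uncompressed size in bytes of one frame at the given channel depth."""
    return WIDTH * HEIGHT * CHANNELS * bytes_per_channel

eight_bit = frame_bytes(1)  # 8-bit integer per channel
float32 = frame_bytes(4)    # 32-bit float per channel

print(f"8-bit frame:  {eight_bit / 2**20:.1f} MiB")   # ~31.6 MiB
print(f"32-bit frame: {float32 / 2**20:.1f} MiB")     # ~126.6 MiB
print(f"Bandwidth at 30 fps (32-bit): {float32 * 30 / 2**30:.2f} GiB/s")
```

The 4x blow-up alone doesn't explain the stuttering (VP22 handles the same load fine), but it shows why any inefficiency introduced in the 32-bit path gets amplified so dramatically.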

During normal playback of various codecs, VEGAS Pro utilizes the dedicated hardware decoder of my NVIDIA RTX 3060 for GPU decoding. However, the moment the playhead hits H.265 10-bit 4:2:2 footage (in 32-bit float mode), both VEGAS Pro 22 and 23 switch the decoding task to the Intel UHD Graphics 770 iGPU of my i9-14900K. The immediate performance degradation that follows occurs only in VEGAS Pro 23; it did not occur in VEGAS Pro 22.

Has anyone else noticed this specific decoding behavior or performance drop with H.265 10-bit 4:2:2 in 32-bit float mode in VEGAS Pro 23?

Intel Core i9 14900K
Nvidia RTX 3060 / Intel Arc A380
XPG LANCER BLADE RGB 64GB DDR5 6400MT/s
ASUS ProArt Z690 Creative Wifi Motherboard
2 Samsung NVME SSD
Corsair iCUE H100i ELITE CAPELLIX Liquid CPU Cooler
Corsair RM850X Power Supply

Intel Core i5 13600K
Nvidia RTX 3090
Corsair Vengeance DDR5 5200MHz 32 GB
ASUS ProArt Z690 Creative Wifi Motherboard
2 Samsung NVME SSD
Corsair iCUE H100i ELITE CAPELLIX Liquid CPU Cooler
Corsair AX1600i Digital ATX Power Supply 

Comments

RogerS wrote on 10/15/2025, 12:51 AM

NVIDIA GPUs older than the 50XX series never supported 4:2:2 decoding so it's expected that VEGAS switches to the iGPU for these. Here I'm getting decent performance with my iGPU but it depends on the source footage.

One thing that did change in VP 23 is ACES: is view transform off (ACES disabled) in the project properties in both 22 and 23?

Abdali wrote on 10/15/2025, 9:32 PM

NVIDIA GPUs older than the 50XX series never supported 4:2:2 decoding so it's expected that VEGAS switches to the iGPU for these. Here I'm getting decent performance with my iGPU but it depends on the source footage.

One thing that did change in VP 23 is ACES: is view transform off (ACES disabled) in the project properties in both 22 and 23?


You are correct that older NVIDIA cards (like RTX 30xx or RTX 40xx) do not support 4:2:2 decoding, so the switch to the iGPU is expected. However, I want to clarify my main point:

Automatic Switching: I am aware that prior to VP22, we had to manually select the Intel iGPU for dedicated hardware decoding. The automatic switching between NVIDIA and Intel GPUs that was introduced in VP22 is actually a fantastic feature. It allows us to get proper 4:2:2 decoding while managing memory efficiently, avoiding both the large amount of shared iGPU memory and excessive VRAM usage, and it's not the source of the problem.

For example, when using a modern discrete card (like an RTX 30XX or 40XX, or current AMD cards) in combination with an 11th Gen or newer Intel CPU (like my i9-14900K), VEGAS Pro performs intelligent decoding.

Specifically, when the timeline contains both H.265 10-bit 4:2:0 and H.265 10-bit 4:2:2 media, VEGAS Pro intelligently manages the workload: it uses the Intel iGPU exclusively for the 4:2:2 decoding. Critically, it does this without the excessive VRAM usage on the RTX 3060 that would otherwise be dedicated to the 4:2:0 files, and without the large amount of shared iGPU memory that is consumed if the iGPU is manually set as the sole decoder in the File I/O preferences.
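The per-clip switching behavior described above can be sketched as a simple decoder-selection rule. This is purely illustrative pseudologic, not VEGAS's actual implementation; the capability table only reflects what's discussed in this thread (pre-50XX NVDEC has no HEVC 4:2:2 decode, while 11th-gen-or-newer Intel iGPUs do):

```python
# Illustrative sketch of a per-clip hardware-decoder selection rule.
# Decoder names and capabilities are simplified to the cases in this thread.
DECODER_SUPPORT = {
    # decoder: set of HEVC 10-bit chroma subsamplings it can decode
    "nvidia_nvdec_pre50xx": {"4:2:0"},           # RTX 30XX/40XX: no 4:2:2
    "intel_igpu_11th_gen":  {"4:2:0", "4:2:2"},  # e.g. UHD 770 / Xe
}

def choose_decoder(chroma: str, preferred: str = "nvidia_nvdec_pre50xx") -> str:
    """Prefer the discrete GPU's decoder; fall back to any decoder
    that supports the clip's chroma subsampling."""
    if chroma in DECODER_SUPPORT[preferred]:
        return preferred
    for decoder, formats in DECODER_SUPPORT.items():
        if chroma in formats:
            return decoder
    return "cpu_software"  # last resort: software decoding

print(choose_decoder("4:2:0"))  # stays on the discrete NVIDIA decoder
print(choose_decoder("4:2:2"))  # falls back to the Intel iGPU
```

Under this rule, 4:2:0 clips stay on the discrete GPU and only 4:2:2 clips spill over to the iGPU, which matches the memory-friendly behavior observed in VP22.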

In my opinion, this intelligent Automatic Switching leaves significantly more RAM and VRAM free to be utilized by other demanding elements in VEGAS Pro.

The Actual Issue: The true regression is that this automatic iGPU decoding (which works fine in VP22) fails specifically in VP23 when the Pixel Format is set to 32-bit floating point (full range). It functioned perfectly in 32-bit float in both VP21 and VP22.

This is simply a bug I wanted to highlight for the developers. Thank you for asking about ACES.

I've uploaded the project settings screenshot for reference.


VEGASFlorian wrote on 10/16/2025, 2:06 AM

@Abdali: Please try to switch to ACES 1.3 and see if it improves the performance. This version supports DirectX and connects better to the new engine components.

RogerS wrote on 10/16/2025, 2:13 AM

View transform is off so it doesn't seem any ACES is in use.

RogerS wrote on 10/16/2025, 4:46 AM

On my laptop (Intel Xe + RTX 4060) I tested a few 10-bit 4:2:2 media. Only one ever worked with hardware decoding, and that was when I manually set the Xe in File I/O. So the switchover isn't working properly here. 32-bit is slower than 8-bit, though that's expected.

I can't test on my desktop as 422 media isn't decoding at all with 50XX GPUs.

Abdali wrote on 10/19/2025, 8:40 PM

@Abdali: Please try to switch to ACES 1.3 and see if it improves the performance. This version supports DirectX and connects better to the new engine components.

I have already tried switching the project to ACES 1.3, but unfortunately, this did not improve the performance or resolve the stuttering issue either.


Abdali wrote on 10/19/2025, 8:45 PM

On my laptop (Intel Xe + RTX 4060) I tested a few 10-bit 4:2:2 media. Only one ever worked with hardware decoding, and that was when I manually set the Xe in File I/O. So the switchover isn't working properly here. 32-bit is slower than 8-bit, though that's expected.

I can't test on my desktop as 422 media isn't decoding at all with 50XX GPUs.

It's strange that the switchover isn't working for you, because I've successfully tested the automatic switching feature on both of my systems, and it works great at correctly delegating the decoding of H.265 10-bit 4:2:2 files to the iGPU.

To be clear about my core issue: I'm not just saying that performance is slower in 32-bit float, which is expected. I'm saying the playback stutters severely (is jerky). This stuttering is a noticeable bug that did not exist in previous versions (VP21/22), and that's the real problem I'm highlighting.


RogerS wrote on 10/19/2025, 10:31 PM

@Abdali For the stuttering issue I'd suggest contacting support so they can work with you directly.

A 5080 user had this diagnosis but I have no way of knowing if that's relevant to a 30XX GPU. My own GPUs are of a different generation and aren't exhibiting the issue you noted.