You won't notice any difference over your SSD. Usually you use M.2 for your operating system. I have an M.2, an SSD and a mechanical drive. I have noticed no throughput difference between the M.2 and SATA in anything other than synthetic benchmarks.
There are plenty of reviews which confirm this.
I use my M.2 for Windows and key apps, the SSD for cache and temp folders, and everything else is done from either the network attached storage or the local mechanical drive, as often the bottleneck is Vegas rather than any drive throughput. In my playback and rendering testing the difference between network, SSD, M.2 and mechanical is negligible.
It depends on your project. If you do 4K or even 1080 60p multicam editing with 4 or more camera angles, a SATA-III drive will be your limiting factor.
M.2 is a physical-layer specification that allows different interfaces. The fastest interface, PCI Express 3.0, will help make some processes run faster (multicam as OldSmoke mentioned, uncompressed or high-bit-rate intermediates), but some processes won't benefit at all.
Fast SOURCE drives are important for editing multi-cam 4K. The target drive does not matter as much unless you are saving to uncompressed… My new 9900K system has (2) 2TB Intel 660P M.2 for SOURCE video set up in RAID0 on the motherboard. The 2TB 660P's are $180 USD and have fast read speeds, and fast write speeds until they are half-full... Connected to the motherboard's fast U.2 connector via an adapter & cable is another 2TB M.2 for TARGET video. A 4th 2TB M.2 is on a PCIe adapter card where the 1st partition is the OS & the 2nd partition is background music & commonly used video files for effects, etc. For many years I had a 6-drive SATA RAID0 as SOURCE and a 6-drive SATA RAID10 as the TARGET, so file transfer speeds were around 400 MB/s, but that setup was hot & power-hungry...
@TheRhino your 2x 660P's in RAID0 can have, I suppose, a sequential read of 2 x 1.8 = 3.6 GB/s of bandwidth, which almost saturates the DMI link at 3.93 GB/s between the CPU and chipset. Then there are 2 more M.2's connected via PCIe adapters, either adding to the DMI bottleneck or taking PCIe lanes from the CPU itself if connected alongside the video card (in the latter case it will not add to the DMI bottleneck, but it will drop the number of PCIe lanes available for the video card from 16 to 8, or 4 depending, effectively reducing the bandwidth of the video card). A difficult choice, right? Or am I wrong?
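For anyone who wants to sanity-check that bottleneck math, here is a minimal sketch. The 1.8 GB/s per-drive figure is an assumed sequential read for a 660P, and the 3.93 GB/s DMI ceiling is the figure from the post above, not measurements from TheRhino's system:

```python
# Back-of-the-envelope check: does a 2-drive NVMe RAID0 behind the
# chipset saturate the DMI 3.0 link to the CPU?

PER_DRIVE_READ_GBPS = 1.8   # assumed sequential read of one Intel 660P, GB/s
NUM_DRIVES = 2
DMI3_LIMIT_GBPS = 3.93      # DMI 3.0 is electrically a PCIe 3.0 x4 link

raid0_read = PER_DRIVE_READ_GBPS * NUM_DRIVES
print(f"RAID0 aggregate read: {raid0_read:.2f} GB/s")
print(f"DMI 3.0 headroom:     {DMI3_LIMIT_GBPS - raid0_read:.2f} GB/s")
# RAID0 aggregate read: 3.60 GB/s
# DMI 3.0 headroom:     0.33 GB/s  -> the link is nearly saturated
```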
The bandwidth requirement for uncompressed 4K is about 12 Gbit/s, so 1.5 GB/s. For any compressed stream it is lower (MagicYUV can do 4K at 90 fps, and with a compression ratio of 1.5:1 or better that's roughly 1 GB/s required per stream; this is CPU dependent).
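That 12 Gbit/s figure checks out if you assume 8-bit RGB UHD at 60 fps; the frame format here is my assumption for illustration, since the post doesn't spell it out:

```python
# Uncompressed video bandwidth: width * height * bytes_per_pixel * fps

width, height = 3840, 2160   # UHD
bytes_per_pixel = 3          # 8-bit RGB as an illustrative case
fps = 60

bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"{bytes_per_sec / 1e9:.2f} GB/s")        # ~1.49 GB/s
print(f"{bytes_per_sec * 8 / 1e9:.1f} Gbit/s")  # ~11.9 Gbit/s
```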
I'd say that SSDs in RAID0 are only needed if you are often editing with more than one uncompressed 4K+ stream on more than one track. This also only makes sense if every other component of the system can process the streams in real time.
So for me it is an easy choice: I would not set up the RAID0.
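That conclusion can be put in numbers: divide a drive's sustained read speed by the per-stream requirement from the estimate above to see how many uncompressed 4K streams it can feed. The 1.8 GB/s single-drive figure is again an assumption:

```python
STREAM_GBPS = 1.5            # uncompressed 4K 60p, per the estimate above
SINGLE_NVME_GBPS = 1.8       # assumed sustained sequential read of one NVMe
RAID0_GBPS = SINGLE_NVME_GBPS * 2

print(f"Single NVMe: {SINGLE_NVME_GBPS / STREAM_GBPS:.1f} streams")  # ~1.2
print(f"2x RAID0:    {RAID0_GBPS / STREAM_GBPS:.1f} streams")        # ~2.4
# One drive already feeds a single uncompressed stream; RAID0 only
# pays off with two or more simultaneous uncompressed tracks.
```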
I placed my 2TB M.2's in RAID0 more for the 4TB of combined space than for the faster speeds... The ASUS Z390 WS has a PLX chip to manage PCIe throughput, so real-world performance has been excellent. This would not be the case with a lesser motherboard... My Vega64 is set up to use 8 PCIe lanes & it seems to run fine on 8, because my system completes the Red Car Test in 14s, which is the same as a Threadripper 1950 with all of its PCIe lanes available... The liquid-cooled 9900K, liquid-cooled Vega64, 32GB of DDR4 & ASUS Z390 WS only cost $1350 to place into my existing Xeon ATX case. I'm getting 2X the render speeds on every project vs. my former/aging 4.0GHz Xeon 5660's on X58 motherboards... Real-world performance is what matters!