Here is a way they could speed up renders and make better use of multiple cores when you need to render to several formats.
Presently, when you render to multiple formats, you either have to use the antiquated script-driven batch renderer or push each one through by hand. Every time you render to a format, the renderer has to process each frame and then pass it to the codec to encode in your desired format.
So why not have a renderer that produces each frame, then passes it to four codecs? Or seven codecs? Why not run each codec on its own core? That way, instead of wasting time rendering the same material over and over again just so it can pass the same frames to different codecs, each frame would be rendered once and encoded many times.
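Vegas's internals aren't public, so this is only a minimal sketch of the fan-out idea in Python: one producer renders each frame once and hands a copy to every codec's queue, while each codec drains its queue on its own thread. The codec names, frame count, and "encoding" step are all made up for illustration.

```python
import queue
import threading

def render_frames(num_frames, frame_queues):
    # Render each frame once, then hand a copy to every codec's queue.
    for i in range(num_frames):
        frame = f"frame-{i}"          # stand-in for real pixel data
        for q in frame_queues:
            q.put(frame)
    for q in frame_queues:
        q.put(None)                   # sentinel: no more frames coming

def encode(name, q, results):
    # Each codec consumes frames independently, free to run on its own core.
    count = 0
    while q.get() is not None:
        count += 1                    # stand-in for the actual encoding work
    results[name] = count

codecs = ["mpeg2", "wmv", "h264"]     # hypothetical target formats
queues = [queue.Queue() for _ in codecs]
results = {}
workers = [threading.Thread(target=encode, args=(n, q, results))
           for n, q in zip(codecs, queues)]
for w in workers:
    w.start()
render_frames(10, queues)
for w in workers:
    w.join()
print(results)                        # every codec saw all ten frames
```

The point of the design is that the expensive step (producing the frame) happens once, while the cheap duplication (putting a reference on each queue) is what gets multiplied per format.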
Actually I think the Vegas architecture is not quite as simple as that. I think the way it renders frames is determined by settings in the codec, for example with codec conversions. However you look at it, though, there is a certain stage where identical work is being done over and over again in multiple renders to different formats. With the huge amounts of idle processor time some people are experiencing on multi core systems (I have a dual core here that never seems to run above 70%), using that spare time to reuse the rendered frames for other codecs seems like a potentially massive benefit.
Of course, it would not be useful for mixing multiple renders to Studio RGB and Computer RGB formats, but you can't do that with the clunky old rendering script anyway. Where it would be very useful is for, say, two pass renders. I've just completed a one pass render and straight away started a two pass render of the same thing. Common sense would mean that two codec tasks are called, one producing the one pass render, while the other task does the first pass of the two pass render. We check the one pass version, and if we find an error, we can stop the second pass in its tracks. Otherwise we have saved over a day on the final render.
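The two-pass scheme above can be sketched the same way: run the quick one-pass render and the first pass of the two-pass render concurrently, and let the quick render's error check abort the longer job early. This is only an illustrative sketch; the render functions, segment count, and error check are placeholders, not anything from Vegas.

```python
import concurrent.futures
import threading

stop = threading.Event()              # signals the two-pass job to abort

def one_pass_render():
    # Produce the quick one-pass version and check it for errors.
    ok = True                         # placeholder: pretend the check passed
    if not ok:
        stop.set()                    # an error here kills the second job early
    return ok

def two_pass_render():
    # First pass of the slow render runs concurrently with the check above.
    for segment in range(5):          # placeholder for per-segment work
        if stop.is_set():
            return "aborted"
    return "finished"

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    quick = pool.submit(one_pass_render)
    final = pool.submit(two_pass_render)
print(quick.result(), final.result())
```

Because the one-pass version finishes much sooner, a failed check costs only the time already spent on the first pass rather than the full two-pass render.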
I have a render going on a dual core at the moment that will not finish until Sunday afternoon. Even if we had a similarly clocked quad core processor in that machine, it would most likely not give us much of a speed improvement, as the render is barely using 58% of the available processor time on the dual core. I have criticised Sony for not moving on the multi threaded code, and this is one example of where they need to think in new ways. It is quite something when computer architecture moves beyond the ability of the developers to actually exploit it. Poor coding used to mean too much processor overhead. Now you get less overhead and more idle capacity, and things just happen slower.