It seems to me that all these supposedly AI-assisted masking FX still have quite a way to go. The contours of moving objects are still messy and jittery. At least to my eyes, it's far from good enough.
No AI masking is going to get it perfect, but it will get you a lot closer than doing it all manually. That's the main draw at this point: it shaves time off the process.
That's how we've been approaching it behind the scenes, advising them to make it as easy as possible, and some of those fixes have already been implemented in the Smart Mask tool. For instance, it now generates fewer points when creating a mask, so there aren't hundreds of little points you have to zoom into and correct. More progress is on the way as they continue to develop these tools.
So in layman's terms for me... what exactly is the AI part in this? If it's about a more "clever" or accurate calculation/algorithm for distinguishing a pixel of one color or luma intensity from another, I'd say, rather ignorantly of course, that I wonder what's so difficult about that. I tried the masking even with a dark moving shape in front of a completely uniformly colored background, and the contours were still messy. I just can't seem to understand where the difficulty lies. If we can do reasonably good greenscreen, even on TV, why aren't we more advanced on this, given today's processing power? I really wouldn't mind waiting for a clip to process, like with motion tracking or stabilization, if the result were then accurate.
Machine learning is all about pattern recognition, and that can help determine what an object is and where it lies in 3D space, which may improve the separation even when edges are ambiguous.
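To make the edge-ambiguity point concrete, here's a toy sketch (my own illustration, not how any particular masking tool works) of why a simple luma threshold gives messy contours even on a uniform background. The values and the 1x5 "scanline" are made up for the example: the pixel where the object's boundary falls is a mix of object and background, so a hard threshold has to put it entirely on one side or the other, and which side it lands on flips with the threshold (and, on a moving object, from frame to frame).

```python
import numpy as np

# Toy 1x5 grayscale scanline: a dark object (0.1) on a light
# background (0.9), with one mixed edge pixel (0.5) where the
# object boundary falls between pixel centers (anti-aliasing).
scanline = np.array([0.1, 0.1, 0.5, 0.9, 0.9])

# A hard luma threshold forces every pixel to be fully object
# or fully background -- there is no in-between.
mask_lo = scanline < 0.45   # edge pixel counted as background
mask_hi = scanline < 0.55   # edge pixel counted as object

print(mask_lo.tolist())  # [True, True, False, False, False]
print(mask_hi.tolist())  # [True, True, True, False, False]

# The edge pixel flips sides depending on the threshold; on a
# moving edge that flip happens frame to frame, which is one
# reason hard-threshold contours look jittery. A soft (alpha)
# matte instead keeps the mixing fraction per pixel:
alpha = np.clip((0.9 - scanline) / (0.9 - 0.1), 0.0, 1.0)
print(np.round(alpha, 2))  # roughly [1.0, 1.0, 0.5, 0.0, 0.0]
```

Good greenscreen keyers estimate exactly this kind of soft alpha from a known, controlled background color; AI masking has to infer object-versus-background from content alone, which is a much harder problem at the edges.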