Compression Settings... "Accepted colorspace:"
Of course, if set up correctly. If the codec receives YUV 4:2:0 and compresses as YUV 4:2:0, there is no loss 🙂
The only mistake that can happen is if VirtualDub actually does this: YUV420 (input) -> RGB -> YUV420 -> codec, or some similar variation, like decoding as YUV 4:2:2, etc. This requires careful settings in VirtualDub to make sure it decodes as YUV420, does no conversion internally, and sends YUV420 untouched to the codec.
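To see why that detour is lossy, here's a minimal per-pixel sketch, assuming the common BT.601 studio-range coefficients (the actual matrices depend on how the codec is configured). A legal YUV triple whose RGB image falls outside the [0, 255] cube gets clipped on the way to RGB, so the values don't survive the round trip. On top of that, a real pipeline also resamples the 4:2:0 chroma planes, which adds its own loss this sketch doesn't show:

```python
# Assumed BT.601 studio-range coefficients, for illustration only.

def yuv_to_rgb(y, u, v):
    """Convert one studio-range YUV pixel to 8-bit RGB, clipping to [0, 255]."""
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.392 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.017 * (u - 128)
    clip = lambda x: max(0, min(255, round(x)))
    return clip(r), clip(g), clip(b)

def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel back to studio-range YUV."""
    y = 16 + 0.257 * r + 0.504 * g + 0.098 * b
    u = 128 - 0.148 * r - 0.291 * g + 0.439 * b
    v = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    clip = lambda x: max(0, min(255, round(x)))
    return clip(y), clip(u), clip(v)

original = (16, 240, 240)   # legal YUV, but maps outside the RGB cube
round_trip = rgb_to_yuv(*yuv_to_rgb(*original))
print(original, '->', round_trip)  # the values do not survive the detour
```

A neutral gray like (128, 128, 128) happens to survive the round trip, but values near the gamut edges do not, which is why the straight YUV420 -> codec path matters.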
Thank you Balázs, I'm totally clear on that now. Really appreciate your knowledge.
I will do my own research into the internal colour spaces used by the various applications in our workflow. I mentioned earlier that Vegas uses sRGB, but thinking about it, I'm sure it's more likely to be linear RGB (not sRGB) for internal processing, which would be much better for processing filters and FX. But can I confirm that this would make no difference as far as the MYUV codec is concerned, as it would "know" anyway what was being requested by the application, either linear RGB or sRGB, and make the appropriate conversion on ingest? Is that correct? And the same when writing to the MYUV codec, I presume?
Well, there are more things here, I'm not claiming to be an expert on everything about color spaces, but I'll try to give an opinion here.
How Vegas (or any other app) does its internal processing doesn't really concern the codec. The only thing an app MUST know is the colorspace of the input it's using. 8-bit RGB is normally sRGB; at least, that's the assumption. When the codec is doing NO conversion, it doesn't matter: because the codec is mathematically lossless, it gets numbers and gives back the same numbers, be it sRGB or any other gamut RGB.
When doing conversions, like RGB<->YUV, it does matter: the codec assumes sRGB, and for YUV either Rec.601 or Rec.709 parameters, depending on how it's set up.
Even if Vegas does its internal processing in linear RGB (which I believe it does when switched to float mode), the codec doesn't know about it, because what it receives has already been converted to 8-bit sRGB (likewise when decoding: the app knows it gets sRGB from the codec, so it converts to linear RGB for internal processing appropriately).
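That sRGB <-> linear conversion on the app's side can be sketched with the standard sRGB transfer function (function names are mine; a real app would do this in float across whole frames):

```python
def srgb_to_linear(v8):
    """Decode one 8-bit sRGB code value to linear light in [0, 1]."""
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    """Encode linear light in [0, 1] back to an 8-bit sRGB code value."""
    c = lin * 12.92 if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

print(srgb_to_linear(128))  # ≈ 0.216: perceptual mid-grey is ~22% linear light
```

Every 8-bit code value survives this round trip, so the app working internally in linear float adds no loss at the codec boundary as long as nothing in between actually changes the pixels.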
So the short answer is that the app-codec boundary for 8-bit uses sRGB. If you work with the RGB codec variant and hence have no conversions, it could be another gamut too, but the codec doesn't know about that; it's the app's job to store in metadata which RGB color space the codec encoded.
OK, I *think* I get it!!!
So in our example workflow-
An 8-bit YUV420 H264 MP4 camera file enters Vegas; the H264 codec "knows" that Vegas is requesting an RGB colour space, so it decodes the camera input to 8-bit sRGB, and Vegas then converts this into 32-bit floating-point linear RGB (its internal processing colour space).
When then exporting from Vegas, the MYUV codec "tells" Vegas it wants an 8-bit sRGB colour space, so Vegas encodes its linear 32-bit RGB to 8-bit sRGB and passes this to the codec, which writes the AVI as 8-bit sRGB.
Did I understand that correctly?
That's a plausible explanation, and I believe it probably works that way.
The only minor correction would be that sRGB is implied: Vegas only requests an "RGB24" or "RGB32" pixel format, but cannot say anything about gamut, so sRGB is assumed.
Yes, I think you're correct, from some other stuff I've read. I believe it's now just 'accepted as standard', 'implicitly understood', that ingested RGB will be sRGB and exported RGB will be sRGB.