
ICM_COMPRESS_QUERY rejected due to bad header?


Topic starter


I'm working on a personal CINE project and was looking to use this codec in my capture rig. The issue is that when I query the codec for its attributes (input RGB32 160x120, output YUV422 160x120), your codec responds with an invalid header (size: 0).

This is on a Windows 10 machine, with a 32-bit build of a custom C# app. Do you have any documentation on what you're expecting from these queries? The VFW codec appears to work in VirtualDub, but not in my app. My code works correctly with Lagarith and Xvid.


[2020-05-12 0805] Log opened - MagicYUV lossless video codec 2.2.0 (Ultimate) (trial) (32 bit)
[2020-05-12 0805] Codec variant: MagicYUV - YUV 422
[2020-05-12 0805] Application: C:\Users\TK2\source\toupcam_new\SharpAvi-master\Sample\bin\Debug\Sample.exe
[2020-05-12 0805] --------------------------
[2020-05-12 0805] Setting codec preferences (init from global config):
[2020-05-12 0805] Encode:
[2020-05-12 0805] Auto threads: YES
[2020-05-12 0805] Threads: 4
[2020-05-12 0805] Color space: Rec.601
[2020-05-12 0805] Full range YUV: NO
[2020-05-12 0805] Assume interlaced: NO
[2020-05-12 0805] Interpolate downsampling: YES
[2020-05-12 0805] Compression method: Dynamic
[2020-05-12 0805] Decode:
[2020-05-12 0805] Auto threads: YES
[2020-05-12 0805] Threads: 4
[2020-05-12 0805] Allow YUV 444 conv. to: RGB
[2020-05-12 0805] Allow YUV 422 conv. to: RGB, YUV 444
[2020-05-12 0805] Allow YUV 420 conv. to: RGB, YUV 444, YUV 422
[2020-05-12 0805] Allow YUV 400 conv. to: RGB, YUV 444, YUV 422, YUV 420
[2020-05-12 0805] Suggest RGB first: YES
[2020-05-12 0805] --------------------------
[2020-05-12 0805] Compress query: RGB32 (160 x 120) -> M8Y2 [INVALID HEADER (size: 0)] (160 x 120): NOT SUPPORTED
[2020-05-12 0805] --------------------------
[2020-05-12 0805] Log closed





1 Answer

I treat lpbiOutput of ICM_COMPRESS_QUERY as a full description of a compressed MagicYUV format, i.e. a BITMAPINFOHEADER plus additional codec data appended after it, so it is not enough to specify only the fourcc (lpbiOutput->biCompression) for that message. The fourcc alone does not fully describe the compressed format; the interlaced setting, YUV matrix, etc. are carried in the additional data.

You first have to send ICCompressGetFormatSize() (i.e. ICM_COMPRESS_GET_FORMAT with lpbiOutput set to NULL), which returns the compressed format size (sizeof(BITMAPINFOHEADER) plus the additional codec header size), then send ICM_COMPRESS_GET_FORMAT again with a buffer of that size to get the actual format.

Or, if you are simply interested in whether the given codec variant can compress a given input format at all, leave lpbiOutput = NULL for ICM_COMPRESS_QUERY. That is a general question, "Can you compress this input at all?", whereas specifying lpbiOutput makes it a direct question, "Can you compress this input into this specific output (compressed) format?", in which case the compressed format must be described fully and not just with the codec fourcc.


Thanks much, I'll look into modifying my ICM_COMPRESS_QUERY call.