We are trying to record raw video with existing HDMI recorders (that don't know anything about recording raw).

Unfortunately, it seems that some of these recorders apply some processing to the image, like sharpening or blurring. Therefore, it may be a good idea to attempt to undo some of these filters applied to the image without our permission :P

Note: uncompressed versions of all of the images from this page can be found at http://files.apertus.org/AXIOM-Beta/snapshots/reversing-digital-filters/


Source image (R, G, B):

red-input-raw-encoded.jpg green-input-raw-encoded.jpg blue-input-raw-encoded.jpg

Recorded image (Atomos Shogun, ProRes 422):

red-hdmi-from-tif.jpg green-hdmi-from-tif.jpg blue-hdmi-from-tif.jpg

The compression looks pretty strong; according to simulation (compressing the source image with ProRes in ffmpeg), I would expect the recorded image to look like this:

red-422-prores-ffmpeg.jpg green-422-prores-ffmpeg.jpg blue-422-prores-ffmpeg.jpg

Color versions (source, image from shogun, prores ffmpeg profile 2):

rgb-input-raw-encoded.jpg rgb-hdmi-from-tif.jpg rgb-422-prores-ffmpeg.jpg

My guess: the HDMI recorder appears to sharpen the image before compressing, causing the ProRes codec to struggle.

So... good luck recovering the raw image from this!


General idea: feed some test images to the HDMI output, compare them with the image captured by the recorder, and attempt to undo the transformations in order to recover the original image.

We will start by experimenting with simple linear filters on grayscale images, as they are easiest to work with.

Grayscale images

Linear filters on grayscale images


We start from two images:

  • the source image
  • the altered image (processed with an unknown filter)

To find a digital linear filter that would undo the alteration on our test image, we may solve a linear system: each pixel of the altered image can be expressed as a linear combination of the neighbouring pixels in the source image. For an MxN image and a PxP filter, we will have P*P unknowns and M*N equations.

To simplify things, we'll consider odd-sized filters, so the filter size would be PxP = (2*n+1) x (2*n+1).

If we assume our filter is horizontally and vertically symmetrical, the number of unknowns decreases to (n+1) * (n+1).

If we assume our filter is also diagonally symmetrical, the number of unknowns becomes (n+1) * (n+2) / 2.
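The linear system above can be set up and solved in a few lines. Here is a pure-Python sketch (the page's own snippets are Octave; in practice you would build the same matrix and use Octave's `\` operator or any least-squares solver rather than hand-rolled elimination). The function names here are mine, not part of the experiment:

```python
def conv2_same(img, k):
    """2-D correlation with zero padding, same size as the input
    (matches imfilter's default behaviour for symmetric kernels)."""
    h, wid = len(img), len(img[0])
    n = len(k) // 2
    out = [[0.0] * wid for _ in range(h)]
    for i in range(h):
        for j in range(wid):
            s = 0.0
            for di in range(-n, n + 1):
                for dj in range(-n, n + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < wid:
                        s += k[di + n][dj + n] * img[ii][jj]
            out[i][j] = s
    return out

def estimate_kernel(x, y, n=1):
    """Least-squares estimate of the (2n+1)x(2n+1) filter mapping x to y.
    Each interior pixel of y gives one equation; the filter taps are the
    unknowns. Solves the normal equations A'Aw = A'b by Gaussian elimination."""
    size = 2 * n + 1
    P = size * size
    ata = [[0.0] * P for _ in range(P)]
    atb = [0.0] * P
    h, wid = len(x), len(x[0])
    for i in range(n, h - n):          # only pixels with a full neighbourhood
        for j in range(n, wid - n):
            row = [x[i + di][j + dj]
                   for di in range(-n, n + 1) for dj in range(-n, n + 1)]
            for a in range(P):
                atb[a] += row[a] * y[i][j]
                for b in range(P):
                    ata[a][b] += row[a] * row[b]
    for c in range(P):                  # forward elimination, partial pivoting
        p = max(range(c, P), key=lambda r: abs(ata[r][c]))
        ata[c], ata[p] = ata[p], ata[c]
        atb[c], atb[p] = atb[p], atb[c]
        for r in range(c + 1, P):
            f = ata[r][c] / ata[c][c]
            for cc in range(c, P):
                ata[r][cc] -= f * ata[c][cc]
            atb[r] -= f * atb[c]
    sol = [0.0] * P
    for r in range(P - 1, -1, -1):      # back substitution
        s = atb[r] - sum(ata[r][cc] * sol[cc] for cc in range(r + 1, P))
        sol[r] = s / ata[r][r]
    return [sol[r * size:(r + 1) * size] for r in range(size)]
```

Filtering a test image with a known 3x3 kernel and then calling `estimate_kernel` on the input/output pair recovers that kernel; estimating the *inverse* filter is the same procedure with the roles of x and y swapped.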

Let's try some examples.

We will use a training data set (a sample image used to compute the filter), and a validation data set (a different image, to check how well the filter does in other situations, not just on one particular test image). This is a simple strategy to avoid overfitting [1][2][3]. Maybe not the best one [4], but for a quick experiment, it should do the trick.

Training and validation images:

training.jpg validation.jpg

Disk blur (radius 1)
f1 = @(x) imfilter(x, fspecial('disk', 1));
  0.025079   0.145344   0.025079
  0.145344   0.318310   0.145344
  0.025079   0.145344   0.025079


Left: altered image (in this case, blurred). Middle: recovered image (by undoing the alteration). Right: largest filter identified.

Tip: for pixel peeping, open the above image in a new tab in Firefox, then open the validation image in another tab. They will align perfectly, so you can now switch between them, back and forth.

Standard deviations of residuals:


Note: when filter size = 0, the filter becomes simply a scaling factor, so the largest value shows the mismatch between the two images (original vs filtered) after scaling them to look equally bright.
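For reference, the residual standard deviation used above can be computed like this (a minimal pure-Python sketch; `residual_std` is my name for it, not something from the page):

```python
import math

def residual_std(a, b):
    """Standard deviation of the per-pixel residuals between two images
    (given as lists of rows); lower means a better match."""
    diffs = [pa - pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
```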

3x3 averaging blur
f2 = @(x) imfilter(x, fspecial('average', 3));
  0.11111   0.11111   0.11111
  0.11111   0.11111   0.11111
  0.11111   0.11111   0.11111



Sharpening (unsharp mask)
f3 = @(x) imfilter(x, fspecial('unsharp'));
 -0.16667  -0.66667  -0.16667
 -0.66667   4.33333  -0.66667
 -0.16667  -0.66667  -0.16667



This one was reversed completely. Not bad!

So, can we really undo sharpening completely? Sharpening usually pushes some values outside the black...white range (0-255 in this experiment), and in practice these values get clipped. Let's add clipping to our filter:

f3c = @(x) max(min(imfilter(x, fspecial('unsharp')),255),0);



Not so good this time...
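Clipping is what breaks invertibility here: it is a many-to-one mapping, so once two different sharpened values collapse to the same clipped code, no filter can tell them apart afterwards. A tiny illustration (`clip8` is a hypothetical helper, not from the page):

```python
def clip8(v, lo=0.0, hi=255.0):
    # values pushed outside the 8-bit range by sharpening are clamped,
    # which irreversibly discards how far outside they were
    return max(lo, min(hi, v))
```

Both 270 and 300 clip to 255, so the information needed to undo the sharpening at those pixels is simply gone.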

Blur followed by sharpen
f4 = @(x) imfilter(imfilter(x, fspecial('disk', 1)), fspecial('unsharp'));
 -0.0041798  -0.0409430  -0.1052555  -0.0409430  -0.0041798
 -0.0409430  -0.1381697   0.3357311  -0.1381697  -0.0409430
 -0.1052555   0.3357311   0.9750399   0.3357311  -0.1052555
 -0.0409430  -0.1381697   0.3357311  -0.1381697  -0.0409430
 -0.0041798  -0.0409430  -0.1052555  -0.0409430  -0.0041798



Laplacian (offset by 128 so the result stays visible)
f5 = @(x) imfilter(x, fspecial('laplacian')) + 128;
  0.16667   0.66667   0.16667
  0.66667  -3.33333   0.66667
  0.16667   0.66667   0.16667



This one is a little harder :)

Laplacian plus the original image
f6 = @(x) imfilter(x, fspecial('laplacian')) + x;
  0.16667   0.66667   0.16667
  0.66667  -2.33333   0.66667
  0.16667   0.66667   0.16667



So, yeah, this algorithm is not exactly magic.

Nonlinear filters

Median 3x3
h1 = @(x) medfilt2(x, [3 3]);



The median filter seems pretty hard to undo. Compare it to the 3x3 averaging filter (f2) from above.
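One reason it is so hard: the median is not a linear operation, so no linear combination of neighbouring pixels can reproduce (or undo) it exactly. A quick pure-Python check on a 1x3 median (my own toy helper, with border samples left untouched):

```python
import statistics

def med3(row):
    # 1x3 median filter on a single row; the two border samples are kept as-is
    return (row[:1]
            + [statistics.median(row[j - 1:j + 2])
               for j in range(1, len(row) - 1)]
            + row[-1:])
```

Applying `med3` to the sum of two rows generally differs from summing the filtered rows, which is exactly what linearity would require.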

Median 1x3
h2 = @(x) medfilt2(x, [1 3]);



Added noise

n1 = @(x) x + randn(size(x)) * 20;



Not exactly the best denoising filter, but it seems to do something :)

Column noise
n2 = @(x) x + ones(size(x,1),1) * randn(1,size(x,2)) * 10;



This one seems a little better, but still doesn't beat --fixpn from raw2dng :)
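Column noise is more tractable than white noise because there is only one unknown offset per column, so it can be estimated directly with a robust per-column statistic instead of a spatial filter. A toy sketch of that idea (this is not how raw2dng's --fixpn works; the function name and approach are my own simplification):

```python
import statistics

def remove_column_offsets(y):
    """Estimate each column's offset as the deviation of its median from the
    median of all column medians, then subtract it."""
    h, wid = len(y), len(y[0])
    col_med = [statistics.median(y[i][j] for i in range(h)) for j in range(wid)]
    ref = statistics.median(col_med)
    return [[y[i][j] - (col_med[j] - ref) for j in range(wid)] for i in range(h)]
```

On real images this would also flatten genuine vertical detail, which is why practical fixed-pattern-noise removal works on deviations from a smoothed reference image instead.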

Different filters on odd/even columns

Average odd/even columns

(columns 1 and 2, 3 and 4, and so on; similar to YUV 4:2:2 chroma subsampling)

g1 = @(x) imresize((x(:,1:2:end) + x(:,2:2:end))/2, size(x), 'nearest');
  0     0.5   0.5    on columns 1:2:N
  0.5   0.5   0      on columns 2:2:N
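For clarity, here is the same g1 operation written out in pure Python (0-based indexing; assumes an even number of columns):

```python
def g1(x):
    # average each pair of columns, then duplicate the averaged value to
    # restore the original width (nearest-neighbour upscale), mimicking
    # 4:2:2-style horizontal subsampling
    out = []
    for row in x:
        r = []
        for j in range(0, len(row) - 1, 2):
            m = (row[j] + row[j + 1]) / 2
            r += [m, m]
        out.append(r)
    return out
```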

Trying to recover it with a simple linear filter:



Do we have better luck with two linear filters?



Um... nope. Exercise: figure out why.

Blur on odd columns, sharpen on even columns
function y = g2aux(x,fa,fb)
   y = x;
   y(:,1:2:end) = fa(x)(:,1:2:end);
   y(:,2:2:end) = fb(x)(:,2:2:end);
end
g2 = @(x) g2aux(x,f1,f3);
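The column-interleaving part of g2aux, translated to 0-based Python (even indices 0, 2, ... correspond to Octave's odd columns 1:2:end; `mix_columns` is my name for the helper):

```python
def mix_columns(a, b):
    # take even-indexed columns from image a and odd-indexed columns from b;
    # g2 would call this with a blurred copy and a sharpened copy of the frame
    return [[ra[j] if j % 2 == 0 else rb[j] for j in range(len(ra))]
            for ra, rb in zip(a, b)]
```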

Assuming the image can be recovered with a single linear filter gives this: images-g2.jpg


Any luck with two linear filters?



Looks like it worked this time!

Green on odd columns, red on even columns, attempt to recover green (similar to the debayering problem)
function y = g3aux(g,r)
   y = g;
   y(:,2:2:end) = r(:,2:2:end);
end
g3 = @(g) g3aux(g,r);

Recovery with one filter:



Recovery with two filters:



RGB images

WIP, just a sneak preview for now.

HDMI recovery experiment

HDMI image (crop from 1080p, built by mapping the raw channels to sRGB with gamma 2: [R, 0.9*(G1+G2)/2, B]^2):


3.8K render from the corresponding raw12 image that was sent to HDMI (AMaZE, RawTherapee, then adjusted with gamma 2):


HDMI frame, white-balanced and brightness-adjusted (simple scaling on R,G,B):


HDMI frame, resized 200% (convert -resize 200%):


3.8K image recovered by filtering the HDMI image (first frame):


While it's far from the ideal rendering of the raw image, the result looks a little better than simply scaling to 200%, right?

With a recorder that doesn't compress like crazy, and by sending alternate green channels (G1 on even frames and G2 on odd frames), this method might have a chance to recover some more detail.

Recovered image downsized to 1080p (convert -resize 50%):


Ideal 1080p (convert -resize 50%):


Will it work on real-world images, or was it all overfitting? We'll have to try and see.

YCbCr images