Reversing digital filters - apertus° wiki

[WIP]

Motivation

We are trying to record raw video with existing HDMI recorders (that don't know anything about recording raw).

Unfortunately, it seems that some of these recorders apply processing to the image, such as sharpening or blurring. Therefore, it may be a good idea to attempt to undo these filters, which were applied to the image without our permission :P

Note: uncompressed versions of all of the images from this page can be found at http://files.apertus.org/AXIOM-Beta/snapshots/reversing-digital-filters/

Example:

Source image (R, G, B):

red-input-raw-encoded.jpg green-input-raw-encoded.jpg blue-input-raw-encoded.jpg

Recorded image (Atomos Shogun, ProRes 422):

red-hdmi-from-tif.jpg green-hdmi-from-tif.jpg blue-hdmi-from-tif.jpg

The compression looks pretty strong; according to a simulation (compressing the source image with ProRes in ffmpeg), I would expect the recorded image to look like this:

red-422-prores-ffmpeg.jpg green-422-prores-ffmpeg.jpg blue-422-prores-ffmpeg.jpg

Color versions (source, image from shogun, prores ffmpeg profile 2):

rgb-input-raw-encoded.jpg rgb-hdmi-from-tif.jpg rgb-422-prores-ffmpeg.jpg

My guess: the HDMI recorder sharpens the image before compressing it, causing the ProRes codec to struggle.

So... good luck recovering the raw image from this!

Intro

General idea: feed some test images over HDMI, compare them with the output image from the recorder, and attempt to undo the transformations in order to recover the original image.

We will start by experimenting with simple linear filters on grayscale images, as they are easiest to work with.

Grayscale images

Linear filters on grayscale images

Input:

  • source image
  • altered image (with an unknown filter)

To find a digital linear filter that would undo the alteration on our test image, we can solve a linear system: each output pixel can be expressed as a linear combination of its neighbouring input pixels. For an MxN image and a PxP filter, we get P*P unknowns and M*N equations.

To simplify things, we'll consider filters with odd sizes, so the filter size is PxP = (2*n+1) x (2*n+1).

If we assume our filter is horizontally and vertically symmetrical, the number of unknowns decreases to (n+1) * (n+1).

If we assume our filter is also diagonally symmetrical, the number of unknowns becomes (n+1) * (n+2) / 2. For example, with n = 1, a 3x3 filter has 9 unknowns in general, 4 with horizontal/vertical symmetry, and only 3 (center, edge, corner) with diagonal symmetry as well.
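As a concrete sketch, the least-squares setup above could look like this (written in Python/numpy rather than Octave, since the scripts behind this page aren't published; the function and variable names are mine):

```python
import numpy as np

def estimate_kernel(src, dst, n=1):
    """Estimate a (2*n+1) x (2*n+1) linear filter k such that
    correlating src with k best matches dst, in the least-squares
    sense. Border pixels are skipped to avoid edge effects."""
    P = 2 * n + 1
    H, W = src.shape
    # Each column of the design matrix is src shifted by one (dy, dx)
    # offset; each row is one equation (one interior pixel of dst).
    cols = []
    for dy in range(-n, n + 1):
        for dx in range(-n, n + 1):
            cols.append(src[n + dy:H - n + dy, n + dx:W - n + dx].ravel())
    A = np.stack(cols, axis=1)        # (M*N equations) x (P*P unknowns)
    b = dst[n:H - n, n:W - n].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(P, P)
```

The symmetry assumptions from above would be imposed by summing the design-matrix columns that share a coefficient, shrinking the number of unknowns accordingly.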

Let's try some examples.

We will use a training data set (a sample image used to compute the filter), and a validation data set (a different image, to check how well the filter does in other situations, not just on one particular test image). This is a simple strategy to avoid overfitting [1][2][3]. Maybe not the best one [4], but for a quick experiment, it should do the trick.

Training and validation images:

training.jpg validation.jpg

Blur
f1 = @(x) imfilter(x, fspecial('disk', 1));
  0.025079   0.145344   0.025079
  0.145344   0.318310   0.145344
  0.025079   0.145344   0.025079

images-f1.jpg

Left: the altered image (in this case, blurred). Middle: the recovered image (after undoing the alteration). Right: the largest filter identified.

Tip: for pixel peeping, open the above image in a new tab in Firefox, then open the validation image in another tab. They will align perfectly, so you can switch back and forth between them.

Standard deviations of residuals:

resid-f1.png

Note: when the filter size is 0, the filter is simply a scaling factor, so the largest value shows the mismatch between the two images (original vs. filtered) after scaling them to look equally bright.
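That size-0 baseline amounts to fitting a single scale factor by least squares and measuring what's left over; a minimal numpy sketch (assuming this is what the residual plots measure; the function name is mine):

```python
import numpy as np

def residual_std(src, dst):
    """Standard deviation of the residual after matching brightness:
    fit the scalar a that minimizes ||a*src - dst||^2, then measure
    the spread of the remaining error."""
    s, d = src.ravel(), dst.ravel()
    a = np.dot(s, d) / np.dot(s, s)   # closed-form least-squares scale
    return np.std(a * s - d)
```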

3x3 averaging blur
f2 = @(x) imfilter(x, fspecial('average', 3));
  0.11111   0.11111   0.11111
  0.11111   0.11111   0.11111
  0.11111   0.11111   0.11111

images-f2.jpg

resid-f2.png

Sharpen
f3 = @(x) imfilter(x, fspecial('unsharp'));
 -0.16667  -0.66667  -0.16667
 -0.66667   4.33333  -0.66667
 -0.16667  -0.66667  -0.16667

images-f3.jpg

resid-f3.png

This one was reversed completely. Not bad!

So, can we really undo sharpening artifacts completely? Sharpening typically produces values outside the usual black-to-white range (0-255 in this experiment), and in practice, these values get clipped. Let's add clipping to our filter:

f3c = @(x) max(min(imfilter(x, fspecial('unsharp')),255),0);

images-f3c.jpg

resid-f3c.png

Not so good this time...
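One way to see why: clipping is many-to-one, so the overshoot is simply lost, and no filter (linear or otherwise) can recover it exactly. A tiny numpy illustration (my own example values):

```python
import numpy as np

# Two different sharpened values both saturate at the white level,
# so the inverse mapping is ambiguous.
x = np.array([250.0, 260.0, 300.0])
clipped = np.clip(x, 0, 255)
print(clipped)   # [250. 255. 255.] -- 260 and 300 collapse to 255
```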

Blur followed by sharpen
f4 = @(x) imfilter(imfilter(x, fspecial('disk', 1)), fspecial('unsharp'));
 -0.0041798  -0.0409430  -0.1052555  -0.0409430  -0.0041798
 -0.0409430  -0.1381697   0.3357311  -0.1381697  -0.0409430
 -0.1052555   0.3357311   0.9750399   0.3357311  -0.1052555
 -0.0409430  -0.1381697   0.3357311  -0.1381697  -0.0409430
 -0.0041798  -0.0409430  -0.1052555  -0.0409430  -0.0041798

images-f4.jpg

resid-f4.png

Laplacian
f5 = @(x) imfilter(x, fspecial('laplacian')) + 128;
  0.16667   0.66667   0.16667
  0.66667  -3.33333   0.66667
  0.16667   0.66667   0.16667

images-f5.jpg

resid-f5.png

This one is a little harder :)

Laplacian plus the original image
f6 = @(x) imfilter(x, fspecial('laplacian')) + x;
  0.16667   0.66667   0.16667
  0.66667  -2.33333   0.66667
  0.16667   0.66667   0.16667

images-f6.jpg

resid-f6.png

So, yeah, this algorithm is not exactly magic.

Nonlinear filters

Median 3x3
h1 = @(x) medfilt2(x, [3 3]);

images-h1.jpg

resid-h1.png

The median filter seems pretty hard to undo. Compare it with the 3x3 averaging filter (f2) from above.
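This is expected: the median is nonlinear, so no single convolution kernel can reproduce it, let alone undo it. A quick numpy check of the nonlinearity (my own example values):

```python
import numpy as np

x = np.array([0.0, 0.0, 10.0])
y = np.array([0.0, 10.0, 0.0])
# A linear operator L satisfies L(x + y) = L(x) + L(y); the median doesn't:
print(np.median(x) + np.median(y))  # 0.0
print(np.median(x + y))             # 10.0
```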

Median 1x3
h2 = @(x) medfilt2(x, [1 3]);

images-h2.jpg

resid-h2.png

Added noise

Gaussian
n1 = @(x) x + randn(size(x)) * 20;

images-n1.jpg

resid-n1.png

Not exactly the best denoising filter, but it seems to do something :)

Column noise
n2 = @(x) x + ones(size(x,1),1) * randn(1,size(x,2)) * 10;

images-n2.jpg

resid-n2.png

This one seems a little better, but still doesn't beat --fixpn from raw2dng :)

Different filters on odd/even columns

Average odd/even columns

(averaging columns 1 and 2, 3 and 4, and so on, similar to YUV 4:2:2 chroma subsampling)

g1 = @(x) imresize((x(:,1:2:end) + x(:,2:2:end))/2, size(x), 'nearest');
  0     0.5   0.5     (on columns 1:2:N)
  0.5   0.5   0       (on columns 2:2:N)
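The same column-pair averaging can be sketched in Python/numpy (assuming an even number of columns; the nearest-neighbour upsizing simply replicates each averaged column twice):

```python
import numpy as np

def g1(x):
    """Average each pair of columns (1&2, 3&4, ...) and replicate the
    result back to full width, mimicking the Octave g1 above."""
    avg = (x[:, 0::2] + x[:, 1::2]) / 2
    return np.repeat(avg, 2, axis=1)
```

For example, a row [1, 3, 5, 7] becomes [2, 2, 6, 6].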

Trying to recover it with a simple linear filter:

images-g1.jpg

resid-g1.png

Do we have better luck with two linear filters?

images-g1h.jpg

resid-g1h.png

Um... nope. (TODO: figure out why.)
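The "two linear filters" attempt above can be sketched by fitting a separate kernel for each column parity, again by least squares (a Python/numpy sketch; the names are mine):

```python
import numpy as np

def estimate_two_kernels(src, dst, n=1):
    """Fit one (2*n+1) x (2*n+1) kernel for even output columns and
    another for odd ones, each by least squares over the interior."""
    P = 2 * n + 1
    H, W = src.shape
    cols = []
    for dy in range(-n, n + 1):
        for dx in range(-n, n + 1):
            cols.append(src[n + dy:H - n + dy, n + dx:W - n + dx].ravel())
    A = np.stack(cols, axis=1)
    b = dst[n:H - n, n:W - n].ravel()
    # Column parity of each interior pixel, in row-major (raveled) order.
    parity = np.tile(np.arange(n, W - n) % 2, H - 2 * n)
    return [np.linalg.lstsq(A[parity == p], b[parity == p],
                            rcond=None)[0].reshape(P, P) for p in (0, 1)]
```

Each parity keeps half of the equations but still far more than the P*P unknowns, so the fit stays overdetermined.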

Blur on odd columns, sharpen on even columns
function y = g2aux(x,fa,fb)
   % apply filter fa on odd columns and filter fb on even columns
   y = x;
   y(:,1:2:end) = fa(x)(:,1:2:end);
   y(:,2:2:end) = fb(x)(:,2:2:end);
end
g2 = @(x) g2aux(x,f1,f3);

Assuming the image can be recovered with a single linear filter gives this:

images-g2.jpg

resid-g2.png

Any luck with two linear filters?

images-g2h.jpg

resid-g2h.png

Looks like it worked this time!

Green on odd columns, red on even columns, attempt to recover green (similar to the debayering problem)
function y = g3aux(g,r)
   % keep the green channel on odd columns; replace even columns with red
   y = g;
   y(:,2:2:end) = r(:,2:2:end);
end
g3 = @(g) g3aux(g,r);

Recovery with one filter:

images-g3.jpg

resid-g3.png

Recovery with two filters:

images-g3h.jpg

resid-g3h.png

RGB images

WIP, just a sneak preview for now.

HDMI recovery experiment

HDMI image (crop from 1080p, composed by dumping the raw channels to sRGB, with gamma 2: [R, 0.9*(G1+G2)/2, B]^2):

hdmi.jpg


3.8K render from the corresponding raw12 image that was sent over HDMI (AMaZE demosaicing in RawTherapee, then adjusted with gamma 2):

ideal4k.jpg


HDMI frame, white-balanced and brightness-adjusted (simple scaling on R,G,B):

hdmi1x-wbs.jpg


HDMI frame, resized 200% (convert -resize 200%):

hdmi2x.jpg


3.8K image recovered by filtering the HDMI image (first frame):

recovered4k.jpg

While it's far from the ideal rendering from the raw image, the result looks a little better than simply scaling to 200%, right?

With a recorder that doesn't compress like crazy, and by sending alternate green channels (G1 on even frames and G2 on odd frames), this method might have a chance to recover some more detail.


Recovered image downsized to 1080p (convert -resize 50%):

recovered2xto1x.jpg


Ideal 1080p (convert -resize 50%):

ideal1x.jpg


Will it work on real-world images, or was it all overfitting? We'll have to try and see.

YCbCr images

(todo)

© 2017 apertus° Community