Response Curves

Identifying response curves on the CMV12000 sensor

1 Intro

We would like the data coming from the sensor to be linear (that is, proportional to the number of photons received), plus some constant offset.

Is it already linear, or do we have to adjust it?

This sensor has a high dynamic range mode called PLR (piecewise linear response). In this mode, the sensor response is highly nonlinear (it is configurable, but the output is not exactly piecewise linear, so we still have to identify the curves). Details about this mode are on the PLR page.

Here we'll focus on identifying the response curves.

2 Naive method

One may simply assume there is some input range where the sensor response is linear, and use that range to correct nonlinearities that occur outside this range (for example, with underexposure, overexposure, or by comparing a PLR mode with a regular exposure mode). This can be good enough for initial tests, but it has a major problem.

Suppose we have a sensor with this response: y = (t*x)^2 (where t is the exposure time, x is the scene radiance, x*t is the number of photons captured, and y is the sensor output value, all in arbitrary units). Let's "expose" two synthetic images with it, in Octave:

x = linspace(0, 1, 1000);             % synthetic "image" with range from 0 to 1 (black to white)
a = x.^2;                             % "expose" the image for 1 time unit
b = (3*x).^2;                         % "expose" the image for 3 time units
subplot(131), plot(x, a, x, b)        % plot the sensor responses vs input signal (radiance)
subplot(132), plot(a, b);             % plot the second image against the first (is their relation linear?)
subplot(133), plot(log2(a), log2(b)); % also try a logarithmic plot
hold on, plot(log2(a),log2(3*a),'k'); % expected response for the logarithmic plot

[Image: naive.png]

Did you expect that? Our simulated sensor has an obviously nonlinear response, yet comparing the two images suggests its output might actually be linear! Indeed, b = (3*x)^2 = 9*x^2 = 9*a, so the two images are related by a simple scale factor, even though each one is a nonlinear function of radiance.

However, the log plot hints that something is off: the offset between the two curves is log2(9) ≈ 3.17 stops rather than the expected log2(3) ≈ 1.58 stops, so this type of nonlinearity is not completely hidden. One could also conclude that the exposure controls on the sensor are simply inaccurate (an incorrect exposure time would produce a similar offset between the two curves, so the two situations are easy to confuse).

This nonlinearity was pretty obvious, but subtler variations may be much more difficult to spot, so we should really find a way to recover the true response curve. Without a photon counter, that is.

3 Existing algorithms

  • Review: Best algorithms for HDR image generation. A study of performance bounds. [1]
  • Debevec97 [2][3], implemented in mkhdr [4]
  • Robertson02 [5], implemented in pfshdrcalibrate [6]
  • other algorithms that I can try?

3.1 Debevec97 results

3.2 Robertson02 results

4 Custom algorithms

4.1 Median vs exposure

This assumes the exposure setting from the sensor is accurate, so the input signal that will be digitized is proportional to the number of photons (scene radiance multiplied by exposure time).
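
The idea, roughly, is that for a static scene the median output value, plotted against exposure time, traces out the shape of the response curve. Here is a minimal Octave sketch on a purely synthetic sensor; the toy response y = x^0.8 and all variable names are illustrative assumptions, not the actual test setup:

t = [1 2 5 10 20 30];                  % exposure times (arbitrary units)
x = rand(100);                         % static scene radiance (synthetic)
m = zeros(size(t));
for k = 1:numel(t)
    frame = (t(k) * x) .^ 0.8;         % "capture" a frame at this exposure (toy nonlinearity)
    m(k) = median(frame(:));           % median output value at this exposure
end
loglog(t, m, 'o-');                    % median output vs exposure time, log-log
xlabel('exposure time'), ylabel('median sensor output')
% a linear sensor would give a straight line of slope 1 here; the toy sensor gives slope 0.8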

Unfortunately, this method delivered inconsistent results (two experiments resulted in two different curves: http://files.apertus.org/AXIOM-Beta/snapshots/nonlinearitytests-21.01.2016/curve-new.png and http://files.apertus.org/AXIOM-Beta/snapshots/nonlinearitytests-21.01.2016/curve-old.png).

4.2 Matching per-pixel curves

This algorithm is roughly inspired by Robertson02, but it has no solid mathematical backing; only the hope that it will converge to a good solution.

Test data: variable exposures in fine increments, 100 frames averaged at each exposure. From these, we can extract a few per-pixel response curves and match them to recover an overall response curve for the entire sensor. The test image can be a grayscale gradient from black to white (for example, the grayscale strip from an IT8 chart). A rough code sketch of the initial-guess step is given after the outline below.

  1. Initial guess:
    • plot per-pixel response curves on a graph
    • for each sensor output value, let's say from 16 to 2048 in 0.25-stop increments, compute the median exposure required to get that output
    • shift each curve horizontally, in log space, to match the median exposure
    • repeat until convergence
  2. Refinement
    • Assume each pixel curve may be shifted by a constant offset vertically
    • Repeat the same algorithm used for the initial guess, but also shift the curves vertically, in linear space, by a constant offset
    • Assume the average shift value is 0 (the algorithm may converge to a wrong solution without this constraint)
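
A rough Octave sketch of the initial-guess iteration on synthetic data. Everything here is an assumption for illustration only: the toy per-pixel model is a single response curve shifted horizontally in log-exposure by an unknown per-pixel gain, and the convergence tolerance is arbitrary.

npix   = 50;                                   % number of pixel curves to match
ev     = linspace(-2, 14, 400);                % log2 exposure axis (fine increments)
gain   = 2 .^ (0.3 * randn(npix, 1));          % unknown per-pixel horizontal shift
y      = (gain * 2.^ev) .^ 0.9;                % toy curves, one row per pixel

levels = 2 .^ (4 : 0.25 : 11);                 % output levels 16..2048, 0.25-stop steps
shift  = zeros(npix, 1);                       % current horizontal shift estimate

for iter = 1:20
    % log-exposure needed by each (shifted) pixel curve to reach each output level
    le = zeros(npix, numel(levels));
    for p = 1:npix
        le(p, :) = interp1(y(p, :), ev + shift(p), levels, 'linear', 'extrap');
    end
    med   = median(le, 1);                     % median log-exposure per output level
    delta = median(med - le, 2);               % horizontal correction per pixel curve
    shift = shift + delta;                     % shift each curve towards the median
    if max(abs(delta)) < 1e-6, break; end      % stop once the shifts settle
end
% shift(p) should now equal log2(gain(p)) up to a common constant, and plotting
% y(p,:) against ev + shift(p) collapses all pixel curves onto one overall curve

The refinement step from the outline would add a per-pixel vertical offset as a second unknown inside the same loop, constrained to average zero.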

Problems:

  • you can't operate on too many pixel curves at once (it can get slow and memory-intensive)
  • there's no proof that it will converge, nor, if it does, of how accurate the solution would be (but we can try it on synthetic data)

4.3 Direct per-pixel curves

Storing a response curve for each pixel would be really expensive in terms of memory, but it may be worth trying.
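
For a rough sense of scale (assuming the full 4096×3072 CMV12000 frame and one 16-bit sample per pixel per exposure): a six-point curve for every pixel comes to about 4096 · 3072 · 6 · 2 bytes ≈ 150 MB, before any compression or modelling.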

Test data: bracketed exposures at 1, 2, 5, 10, 20 and 30 ms, with a 700-frame average for each exposure (4200 images in total, captured overnight). Averaging reduces the dynamic (temporal) noise by log2(sqrt(700)) ≈ 4.7 stops, so the static variations should be reasonably clean.
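
A quick synthetic sanity check of that figure (pure Gaussian temporal noise, no sensor model; the frame size is arbitrary):

N     = 700;
noise = randn(100, 100, N);               % N synthetic frames of unit temporal noise
avg   = mean(noise, 3);                   % the N-frame average
printf('single-frame noise std: %.3f\n', std(noise(:)));
printf('averaged noise std:     %.4f\n', std(avg(:)));
printf('reduction: %.1f stops (expected %.1f)\n', ...
       log2(std(noise(:)) / std(avg(:))), log2(sqrt(N)));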

Because the test image was not perfectly uniform (we didn't actually shoot a flat-field frame), we probably won't be able to recover the PRNU (photo response non-uniformity).