Response Curves - apertus° wiki

Identifying response curves on the CMV12000 sensor


Intro

We would like the data coming from the sensor to be linear, that is, proportional to the number of photons received, plus some constant offset.

Is it already linear, or do we have to adjust it?

This sensor has a high dynamic range mode called PLR (piecewise linear response). In this mode, the sensor response is highly nonlinear (configurable, but the output is not exactly piecewise linear, so we still have to identify the curves). Details about this mode on the PLR page.

Here we'll focus on identifying the response curves.

Naive method (why it's not good)

One may simply assume there is some input range where the sensor response is linear, and use that range to correct nonlinearities that occur outside this range (for example, with underexposure, overexposure, or by comparing a PLR mode with a regular exposure mode). This can be good enough for initial tests, but it has a major problem.

Suppose we have a sensor with this response: y = (t*x)^2 (where t is exposure time, x is radiance, x*t is the number of photons captured, and y is the sensor output value, all in arbitrary units). Let's "expose" two synthetic images with it, in Octave:

x = linspace(0, 1, 1000);             % synthetic "image" with range from 0 to 1 (black to white)
a = x.^2;                             % "expose" the image for 1 time unit
b = (3*x).^2;                         % "expose" the image for 3 time units
subplot(131), plot(x, a, x, b)        % plot the sensor responses vs input signal (radiance)
subplot(132), plot(a, b);             % plot the second image against the first, assuming the first might be linear
subplot(133), plot(log2(a), log2(b)); % also try a logarithmic plot
hold on, plot(log2(a),log2(3*a),'k'); % expected response for the logarithmic plot


Did you expect that? Our simulated sensor has an obviously nonlinear response, yet comparing the two images suggests its output might actually be linear!

However, the log plot indicates there may be a problem, so this type of nonlinearity is not completely hidden. One could also conclude that the exposure controls on the sensor are simply inaccurate (in that case you would also get an offset between the two curves, so the two situations are easy to confuse).

This nonlinearity was pretty obvious, but subtler variations may be much more difficult to spot, so we should really find a way to recover the true response curve. Without a photon counter, that is.
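The trap above is easy to verify numerically: since a = x^2 and b = (3x)^2 = 9x^2, the second image is exactly 9 times the first, so the a-vs-b plot is a perfect straight line. A quick check (a NumPy translation of the key step from the Octave snippet):

```python
import numpy as np

# With y = (t*x)^2 the two "exposures" are a = x^2 and b = (3x)^2 = 9*x^2,
# so b is an exact linear function of a even though the response itself
# is quadratic in radiance.
x = np.linspace(0, 1, 1000)
a = x**2
b = (3 * x)**2
assert np.allclose(b, 9 * a)

# A truly linear sensor would give b = 3*a; in the log plot this shows up
# as an offset of log2(9) stops instead of log2(3) stops.
print(round(float(np.log2(9)), 2), round(float(np.log2(3)), 2))  # -> 3.17 1.58
```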

Test data

Bracketed images of the grayscale line from an IT8 chart, 1...100 ms in 1 ms increments; the exposure sequence was executed 100 times, and images at identical settings were averaged.


Existing algorithms

  • Review: Best algorithms for HDR image generation. A study of performance bounds. [1]
  • Debevec97 [2][3], implemented in mkhdr [4]
  • Robertson02 [5], implemented in pfshdrcalibrate [6]
  • ArgyllCMS, shaper+matrix algorithm
  • other algorithms that I can try?

Debevec97 results


Robertson02 results


These curves seem to indicate that our black level may be a little too high. Figure out why.

Scripts, logs:

ArgyllCMS shaper+matrix results

WIP, source code here.

Q: can this be used on bracketed images?

Custom algorithms

Median vs exposure

This assumes the exposure setting from the sensor is accurate, so the input signal that will be digitized is proportional to the number of photons (scene radiance multiplied by exposure time).

Unfortunately, this method delivered inconsistent results (two experiments resulted in two different curves - [7] [8]).
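For reference, the method itself can be sketched on synthetic data (the flat frames, the quadratic test response, and the noise level are all invented for illustration):

```python
import numpy as np

# Synthetic bracket: 1..100 ms flat frames from a made-up quadratic
# response y = (t*x)^2 with x = 0.01, plus a little temporal noise.
exposures = np.arange(1, 101, dtype=float)            # ms
rng = np.random.default_rng(0)
frames = [(t * 0.01)**2 + rng.normal(0, 1e-5, (32, 32)) for t in exposures]

# If the exposure setting is accurate, the input signal is simply
# proportional to t, so (exposure, median output) pairs sample the
# response curve directly.
medians = np.array([np.median(f) for f in frames])

# On log-log axes the slope estimates the response exponent
# (2 for this synthetic quadratic sensor).
slope = np.polyfit(np.log(exposures), np.log(medians), 1)[0]
print(round(float(slope), 2))  # -> 2.0
```

The inconsistency noted above would show up as this recovered curve changing between experiments at identical settings.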

"Black Hole" anomaly

Here's an example showing how much the exposure setting can be trusted. The crop was taken from a bracketed exposure (1, 2, 5, 10, 20, 30 ms) repeated 700 times and averaged; the image in the figure is from the 30 ms exposure.

blackhole.jpg blackhole.png

That's right - when exposure time increases, very dark pixels become even darker!

Also note that the black level, after correcting the image with dark frames and black reference columns, is set to 128 (the entire image is shifted by a constant so that the black reference columns reach this value). The pixels showing this anomaly are below this black level. There is detail in the image, even in a single (non-averaged) exposure, but for some unknown reason it ends up below the black level.

This behavior cannot be reproduced on dark frames, though, so probably (unconfirmed hypothesis) the black level drops (not sure whether globally or locally) when the image content gets brighter. The detail in the dark area of the test image looks normal (not reversed), so the total number of photons captured is probably the variable we should account for.
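The black-level normalization described above can be sketched like this (the array shapes, the choice of which columns act as black reference columns, and the dark-frame value are assumptions for illustration; the CMV12000 details may differ):

```python
import numpy as np

def normalize_black_level(raw, dark, ref_cols=slice(0, 8), target=128.0):
    """Dark-frame subtract, then shift the whole frame by a constant so
    the black reference columns average out at the target black level."""
    img = raw.astype(np.float64) - dark        # remove fixed-pattern dark signal
    offset = target - img[:, ref_cols].mean()  # single constant for the frame
    return img + offset

# Tiny synthetic example: after correction, the reference columns must
# sit at the target level on average.
rng = np.random.default_rng(1)
raw = rng.integers(100, 4000, (16, 16)).astype(np.float64)
dark = np.full((16, 16), 96.0)
out = normalize_black_level(raw, dark)
print(round(float(out[:, 0:8].mean()), 3))  # -> 128.0
```

Pixels affected by the anomaly would end up below 128 after this shift, exactly as observed.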

Matching per-pixel curves

This algorithm is loosely inspired by Robertson02, but without any solid mathematical backing; only the hope that it will converge to a good solution.

  1. Initial guess:
    • plot per-pixel response curves on a graph, trusting exposure control on the sensor for accuracy
    • for each sensor output value, let's say from 16 to 2048 in 0.25-stop increments, compute the median exposure required to get that output
    • shift each curve horizontally, in log space, to match the median exposure
    • repeat until convergence
  2. Refinement:
    • Assume each pixel curve may be shifted by a constant offset vertically
    • Repeat the same algorithm used for initial guess, but also shift the curves vertically, in linear space, by a constant offset
    • Assume the average shift value is 0 (the algorithm may converge to a wrong solution without this constraint)


Limitations:

  • you can't operate on too many pixel curves at once (it can get slow and memory-intensive)
  • there's no proof on whether it will converge, and if yes, how accurate the solution will be (but we can try it on synthetic data)
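The horizontal-alignment step of the initial guess can be sketched on synthetic per-pixel curves (the per-pixel gains and the gamma-style response are invented, and the vertical-offset refinement is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
exposures = np.array([1, 2, 5, 10, 20, 30], dtype=float)   # ms
gains = rng.uniform(0.8, 1.2, 51)                          # unknown per-pixel sensitivity
outputs = (gains[:, None] * exposures[None, :])**0.8       # made-up common response

log_t = np.log2(exposures)
log_y = np.log2(outputs)

# For a grid of output levels, interpolate the (log) exposure each
# pixel needed to reach that level.
levels = np.linspace(log_y.min(), log_y.max(), 33)
log_t_at = np.array([np.interp(levels, ly, log_t) for ly in log_y])

# The median exposure per level is the reference curve; each pixel's
# horizontal shift is its median deviation from that reference.
ref = np.median(log_t_at, axis=0)
shifts = np.median(log_t_at - ref, axis=1)

# For this synthetic sensor the true shift is -log2(gain/median gain):
# a more sensitive pixel reaches every output level sooner.
true = -np.log2(gains / np.median(gains))
print(round(float(np.abs(shifts - true).max()), 3))  # -> 0.0
```

On real data the curves are not straight lines in log space, so this step would be iterated, re-estimating the reference after each round of shifts.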

Example (only a single line from the test images was used):

Initial guess:


Second guess:


Note the response looks a bit different from Robertson02. Who is right?

Direct per-pixel curves

Storing response curves for each pixel would be really expensive in terms of storage (memory), but may be worth trying.

Test data: bracketed exposures at 1,2,5,10,20,30 ms. 700-frame average for each exposure (total 4200 images, captured overnight). This would reduce dynamic noise by log2(sqrt(700)) = 4.7 stops, so the static variations should be reasonably clean.
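The quoted figure comes from averaging: the temporal noise of an N-frame average drops by sqrt(N), i.e. by log2(sqrt(N)) stops. A quick check:

```python
import numpy as np

N = 700
print(round(float(np.log2(np.sqrt(N))), 1))  # -> 4.7

# Empirical sanity check: averaging N unit-noise frames shrinks the
# noise standard deviation by roughly sqrt(N) (~26.5x here).
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, (N, 10000))
reduction = float(np.log2(frames[0].std() / frames.mean(axis=0).std()))
assert abs(reduction - 4.7) < 0.2
```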

Since we didn't actually shoot a flat-field frame, the test image is not perfectly uniform, so we probably won't be able to recover the PRNU.


Curves from grayscale IT8 reference data

Response curves can be also estimated from the IT8 reference data, which was (hopefully) measured with much better accuracy than what we can achieve with our current setup.


Advantages:

  • we have some absolute reference data!
  • we can get a quick estimation of some part of the curve from a single image


Disadvantages:

  • in our setup, the illumination is nonuniform
  • the dynamic range of the IT8 chart is not very high (7.33 stops on the bottom gray scale)
  • few data points (because, without a matrix, we can only use grayscale swatches; on the good side, these few data points are not very noisy)


Workarounds:

  • we can account for the nonuniform illumination by checking the brightness levels at the edges of the chart (light gray); it's not very exact, just better than nothing
  • to cover the entire dynamic range, we can use bracketed images, but this is not perfect - the image levels appear to vary with number of photons, see the "Black Hole" anomaly
  • we may be able to use the curve matching algorithm to account for these unwanted variations with exposure time
Correcting for nonuniform illumination

Sampling data from the edge of the chart (left) lets us correct for nonuniform illumination: one can either adjust the chart itself (right), or the reference data (preferred).

it8_lum_sampling_areas.jpg it8_lum_correction.jpg

More graphs: [9] [10]

Why is it better to adjust the reference data instead of the chart pixels?

  • black level in our setup is uncertain
  • the chart borders are fairly bright, so the measurements of border brightness are not really influenced by small black level variations
  • adjusting the image is a multiplicative correction (an additive one in log space), so we must operate on linearized data
  • if the black level is uncertain, the adjusted dark swatches will have large errors
  • if the response curve is not linear (and not known), adjusting the data under the assumption of linearity will introduce extra errors
  • on the other hand, reference IT8 data is already in linear XYZ space, so adjusting it will not introduce extra errors (other than our less-than-accurate measurements)
  • the measurement errors can hopefully be solved with an iterative procedure (anyone able to prove it?)

Plotting raw data vs reference values on IT8 gray swatches gives:


In the left figure, the two grayscale lines from the IT8 chart diverge because of nonuniform illumination. After adjusting the reference data, the match is much better, though still not perfect (we can't really measure the nonuniform illumination across the chart, other than by including it in the model when performing the color profiling step).

Iterative procedure for estimating the response curve in the presence of nonuniform illumination
  1. initial estimation of response curve, from noisy data (because of nonuniform illumination)
  2. adjust the reference data from brightness on chart borders (use griddata to interpolate)
  3. estimate the response curve again, this time from much cleaner data
  4. repeat steps 2-3 until convergence
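Step 2 of the loop can be sketched like this (all border positions and values are invented; here the interpolation is shown in 1-D with np.interp, while a full 2-D correction would use griddata on (x, y) samples):

```python
import numpy as np

# Invented border samples: light-gray border brightness measured at a
# few x positions, with a smooth falloff along x.
border_x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
border_val = 1.0 - 0.2 * border_x

# Illumination at the swatch centres, by interpolation from the border.
swatch_x = np.array([0.25, 0.5, 0.75])
illum = np.interp(swatch_x, border_x, border_val)

# Reference IT8 values are linear XYZ, so correcting them is a plain
# multiplication -- no response curve needed, unlike adjusting pixels.
ref = np.array([0.2, 0.4, 0.6])
ref_adjusted = ref * illum
print(np.round(ref_adjusted, 3))  # -> [0.19 0.36 0.51]
```

Step 3 would then re-fit the response curve against `ref_adjusted` instead of the raw reference values, and the loop repeats until the curve stops changing.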

Unfortunately, this is not going to solve the mismatch between the two scales. Better get some properly illuminated charts :)

Estimating response curve from bracketed IT8 images


© 2017 apertus° Community