Response Curves

From apertus wiki
Latest revision as of 16:01, 12 November 2018

Identifying response curves on the CMV12000 sensor

1 Intro

We would like the data coming from the sensor to be linear, that is, proportional to the number of photons received, plus some constant offset.

Is it already linear, or do we have to adjust it?

This sensor has a high dynamic range mode called PLR (piecewise linear response). In this mode, the sensor response is highly nonlinear (configurable, but the output is not exactly piecewise linear, so we still have to identify the curves). Details about this mode are on the PLR page.

Here we'll focus on identifying the response curves.

1.1 Naive method (why it's not good)

One may simply assume there is some input range where the sensor response is linear, and use that range to correct nonlinearities that occur outside this range (for example, with underexposure, overexposure, or by comparing a PLR mode with a regular exposure mode). This can be good enough for initial tests, but it has a major problem.

Suppose we have a sensor with this response: y = (t*x)^2 (where t is exposure time, x is radiance, x*t is the number of photons captured, and y is the sensor output value, all in arbitrary units). Let's "expose" two synthetic images from it, in octave:

 x = linspace(0, 1, 1000);             % synthetic "image" with range from 0 to 1 (black to white)
 a = x.^2;                             % "expose" the image for 1 time unit
 b = (3*x).^2;                         % "expose" the image for 3 time units
 subplot(131), plot(x, a, x, b)        % plot the sensor responses vs input signal (radiance)
 subplot(132), plot(a, b);             % plot the second image, under the assumption that first one might be linear
 subplot(133), plot(log2(a), log2(b)); % also try a logarithmic plot
 hold on, plot(log2(a),log2(3*a),'k'); % expected response for the logarithmic plot

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/naive.png

Did you expect that? Our simulated sensor has an obviously nonlinear response, yet comparing the two images suggests its output might actually be linear!

However, the log plot indicates there may be a problem, so this type of nonlinearity is not completely hidden. One could also blame inaccurate exposure controls on the sensor: those would likewise produce an offset between the two curves, so the two situations are easy to mistake for each other.

This nonlinearity was pretty obvious, but subtler variations may be much more difficult to spot, so we should really find a way to recover the true response curve. Without a photon counter, that is.
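
The same experiment, translated to NumPy, makes the failure explicit: pixel by pixel, the two exposures differ by an exactly constant factor (so the comparison looks linear), but the factor is 9 rather than the 3x exposure change we dialed in.

```python
import numpy as np

# NumPy re-run of the Octave example above: sensor response y = (t*x)^2.
x = np.linspace(0.01, 1, 1000)      # scene radiance (skip 0 to keep logs finite)
a = x ** 2                          # "exposed" for 1 time unit
b = (3 * x) ** 2                    # "exposed" for 3 time units

# The two exposures are related by an exactly constant factor, which is
# just what a linear sensor would produce...
print(bool(np.allclose(b / a, 9)))  # True

# ...but the factor is 9, not 3. In the log plot this shows up as a constant
# offset of log2(9) ~ 3.17 stops instead of the expected log2(3) ~ 1.58 stops,
# which is the clue that the response is nonlinear.
offset = np.mean(np.log2(b) - np.log2(a))
print(round(offset, 2))             # 3.17
```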

2 Test data

Bracketed image of the grayscale line from an IT8 chart, 1...100ms in 1ms increments, exposure sequence executed 100 times, images at identical settings averaged.

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8-grayscale.jpg

3 Existing algorithms

  • Review: Best algorithms for HDR image generation. A study of performance bounds. [1]
  • Debevec97 [http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf] [http://pages.cs.wisc.edu/~csverma/CS766_09/HDRI/hdr.html], implemented in mkhdr [http://duikerresearch.com/mkhdr-archive/]
  • Robertson02 [http://pages.cs.wisc.edu/~lizhang/courses/cs766-2008f/projects/hdr/Robertson2003ETA.pdf], implemented in pfshdrcalibrate [http://resources.mpi-inf.mpg.de/hdr/calibration/pfs.html]
  • ArgyllCMS, shaper+matrix algorithm
  • other algorithms that I can try?
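
For orientation, the core of Debevec97 is a linear least-squares solve for the log response curve g (the gsolve routine from the paper). A minimal NumPy sketch follows; the weighting, the smoothness constant, and the synthetic linear-sensor test are illustrative only, and the implementations linked above handle robustness far more carefully.

```python
import numpy as np

def gsolve(Z, log_t, lam=10.0, n=256):
    """Recover the log response curve g, Debevec-Malik style least squares.
    Z: (pixels, exposures) integer samples in [0, n-1]; log_t: log exposure
    time of each column. Returns g as an array of length n, with g(n/2) = 0."""
    w = lambda z: min(z, n - 1 - z) + 1                  # hat-shaped weighting
    P, E = Z.shape
    A = np.zeros((P * E + n - 1, n + P))
    b = np.zeros(len(A))
    k = 0
    for i in range(P):
        for j in range(E):
            wij = w(int(Z[i, j]))
            A[k, Z[i, j]] = wij                          # g(Z_ij) ...
            A[k, n + i] = -wij                           # ... - ln(E_i)
            b[k] = wij * log_t[j]                        # = ln(t_j)
            k += 1
    A[k, n // 2] = 1                                     # gauge: g(mid) = 0
    k += 1
    for z in range(1, n - 1):                            # curvature penalty
        A[k, z - 1:z + 2] = lam * w(z) * np.array([1.0, -2.0, 1.0])
        k += 1
    return np.linalg.lstsq(A, b, rcond=None)[0][:n]

# Synthetic sanity check: a linear 8-bit sensor, so g should come back
# monotonic and roughly logarithmic.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.05, 1.0, 40)                    # 40 sampled pixels
times = np.array([1, 2, 5, 10, 20, 30], float)           # bracketed exposures (ms)
Z = np.clip(np.round(255 * np.outer(radiance, times) / 30), 0, 255).astype(int)
g = gsolve(Z, np.log(times))
print(bool(g[64] < g[128] < g[192]))                     # True
```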

3.1 Debevec97 results

TODO

3.2 Robertson02 results

response-curve-pfs.png

These curves seem to indicate that our black level may be a little too high. Figure out why.

Scripts, logs: http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/pfs/

3.3 ArgyllCMS shaper+matrix results

WIP, source code [https://github.com/apertus-open-source-cinema/misc-tools-utilities/commit/155be5e8cac9c7d158b8cf1a3055c833c9eab9a9 here].

Q: can this be used on bracketed images?

4 Custom algorithms

4.1 Median vs exposure

This assumes the exposure setting from the sensor is accurate, so the input signal that will be digitized is proportional to the number of photons (scene radiance multiplied by exposure time).

Unfortunately, this method delivered inconsistent results (two experiments resulted in two different curves: [http://files.apertus.org/AXIOM-Beta/snapshots/nonlinearitytests-21.01.2016/curve-new.png] [http://files.apertus.org/AXIOM-Beta/snapshots/nonlinearitytests-21.01.2016/curve-old.png]).
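
A hypothetical sketch of the idea (the original scripts are not shown here, so the statistic and the made-up square-root response below are assumptions): track the median pixel value across the bracket; if the programmed exposure times are exact and the sensor is linear, median output vs time is a straight line through the origin, and any curvature reveals the nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.1, 0.9, (64, 64))      # fixed synthetic scene radiance
times = np.arange(1, 101)                    # 1..100 ms, as in the test data

# median output at each exposure, under a made-up square-root response
medians = np.array([np.median(np.sqrt(scene * t / 90.0)) for t in times])

# For this compressive response, output-per-millisecond drops as exposure
# grows - exactly the curvature the method is meant to expose.
slope_lo = medians[9] / times[9]             # around t = 10 ms
slope_hi = medians[89] / times[89]           # around t = 90 ms
print(bool(slope_lo > slope_hi))             # True
```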

4.1.1 "Black Hole" anomaly

Here's an example showing how much the exposure setting can be trusted. The crop is taken from a bracketed exposure (1, 2, 5, 10, 20, 30 ms) repeated 700 times and averaged; the image in the figure is from the 30 ms exposure.

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/blackhole.jpg http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/blackhole.png

That's right - when exposure time increases, very dark pixels become even darker!

Also note the black level, after correcting the image with dark frames and black reference columns, is set to 128 (the entire image is adjusted by adding a constant, so the black reference columns reach this value). The pixels showing this anomaly are below this black level. There is detail in the image, even in a single (non-averaged) exposure, but for some unknown reason, it ends up below the black level.

This behavior cannot be reproduced on dark frames though, so probably (unconfirmed hypothesis) the black level goes down (not sure if globally or locally) when the image content gets brighter. The detail in the dark area on the test image looks normal (not reversed), so probably the total number of photons captured is the variable we should account for?

4.2 Matching per-pixel curves

This algorithm is roughly inspired by Robertson02, but without any solid mathematical background; only with the hope that it will converge to a good solution.

  1. Initial guess:
    • plot per-pixel response curves on a graph, trusting exposure control on the sensor for accuracy
    • for each sensor output value, let's say from 16 to 2048 in 0.25-stop increments, compute the median exposure required to get that output
    • shift each curve horizontally, in log space, to match the median exposure
    • repeat until convergence
  2. Refinement:
    • Assume each pixel curve may be shifted by a constant offset vertically
    • Repeat the same algorithm used for initial guess, but also shift the curves vertically, in linear space, by a constant offset
    • Assume the average shift value is 0 (the algorithm may converge to a wrong solution without this constraint)
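
The initial-guess loop above can be sketched on synthetic data (NumPy here; the saturating-exponential response, the dense bracket, and the level range are all made up for illustration). Each pixel sees a different radiance, so its output-vs-log-exposure curve is one master curve shifted horizontally, and the loop recovers those shifts.

```python
import numpy as np

rng = np.random.default_rng(2)
times = 2 ** np.linspace(0, 7, 200)                  # synthetic bracket, 1..128 ms
log_t = np.log2(times)
radiance = rng.uniform(0.2, 2.0, 50)                 # 50 sample pixels
# made-up response: a saturating exponential of the photon count
out = 4095 * (1 - np.exp(-0.02 * np.outer(radiance, times)))

levels = 2 ** np.arange(8, 10.25, 0.25)              # output levels 256..1024, 0.25-stop steps
shifts = np.zeros(len(radiance))
for _ in range(10):                                   # repeat until convergence
    # per pixel: log exposure needed to reach each output level
    per_pixel = np.array([np.interp(levels, out[i], log_t + shifts[i])
                          for i in range(len(radiance))])
    median_curve = np.median(per_pixel, axis=0)
    # shift each curve horizontally, in log space, toward the median curve
    shifts -= np.mean(per_pixel - median_curve, axis=1)

# the shifted per-pixel curves collapse onto a single master response curve
spread = np.std(per_pixel - median_curve)
print(bool(spread < 0.01))
```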

Problems:

  • you can't operate on too many pixel curves at once (it can get slow and memory-intensive)
  • there's no proof on whether it will converge, and if yes, how accurate the solution will be (but we can try it on synthetic data)

Example (only a single line from the test images was used):

Initial guess:

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/response-curve-test-init.png

Second guess:

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/response-curve-test.png

Note the response looks a bit different from Robertson02. Who is right?

4.3 Direct per-pixel curves

Storing response curves for each pixel would be really expensive in terms of storage (memory), but may be worth trying.

Test data: bracketed exposures at 1,2,5,10,20,30 ms. 700-frame average for each exposure (total 4200 images, captured overnight). This would reduce dynamic noise by log2(sqrt(700)) = 4.7 stops, so the static variations should be reasonably clean.
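
The noise figure quoted above is just log2(sqrt(700)); a quick check, plus a simulated confirmation that averaging N frames gains log2(sqrt(N)) stops:

```python
import numpy as np

# Averaging N frames shrinks random noise by sqrt(N), i.e. log2(sqrt(N)) stops.
N = 700
print(round(float(np.log2(np.sqrt(N))), 1))  # 4.7

# empirical confirmation with simulated Gaussian read noise
rng = np.random.default_rng(3)
frames = rng.normal(0.0, 1.0, (N, 10000))    # N noisy frames of a flat signal
gain = np.log2(frames[0].std() / frames.mean(axis=0).std())
print(round(float(gain), 1))                 # ~4.7
```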

Since the test image was not perfectly uniform (we didn't actually shoot a flat-field frame), we probably won't be able to find out the PRNU.

WIP

4.4 Curves from grayscale IT8 reference data

Response curves can also be estimated from the IT8 reference data, which was (hopefully) measured with much better accuracy than what we can achieve with our current setup.

Advantages:

  • we have some absolute reference data!
  • we can get a quick estimation of some part of the curve from a single image

Problems:

  • in our setup, the illumination is nonuniform
  • the dynamic range of the IT8 chart is not very high (7.33 stops on the bottom gray scale)
  • few data points (because, without a matrix, we can only use grayscale swatches; on the good side, these few data points are not very noisy)

Solutions/workarounds:

  • we can account for the nonuniform illumination by checking the brightness levels at the edges of the chart (light gray); it's not very exact, just better than nothing
  • to cover the entire dynamic range, we can use bracketed images, but this is not perfect - the image levels appear to vary with number of photons, see the "Black Hole" anomaly
  • we may be able to use the curve matching algorithm to account for these unwanted variations with exposure time

4.4.1 Correcting for nonuniform illumination

Sampling data from the edge of the chart (left) lets us correct for nonuniform illumination: one can either adjust the chart itself (right), or the reference data (preferred).

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8_lum_sampling_areas.jpg http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8_lum_correction.jpg

More graphs: [http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8_lum_correction_3d.png] [http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8_lum_correction_2d.png]

Why is it better to adjust the reference data instead of the chart pixels?

  • black level in our setup is uncertain
  • the chart borders are fairly bright, so the measurements of border brightness are not really influenced by small black level variations
  • adjusting the image must be done in log space, so we must operate on linearized data
  • if the black level is uncertain, the adjusted dark swatches will have large errors
  • if the response curve is not linear (and not known), adjusting the data under the assumption of linearity will introduce extra errors
  • on the other hand, reference IT8 data is already in linear XYZ space, so adjusting it will not introduce extra errors (other than our less-than-accurate measurements)
  • the measurement errors can hopefully be solved with an iterative procedure (anyone able to prove it?)

Plotting raw data vs reference values on IT8 gray swatches gives:

http://files.apertus.org/AXIOM-Beta/snapshots/response-curves/it8_lum_check.png

In the left figure, the two grayscale lines from the IT8 chart diverge because of nonuniform illumination. After adjusting the reference data, the match is much better, though it's still not perfect (because we can't really measure the nonuniform illumination in the chart, other than by including it in the model when performing the color profiling step).

4.4.2 Iterative procedure for estimating the response curve in the presence of nonuniform illumination

  1. initial estimation of response curve, from noisy data (because of nonuniform illumination)
  2. adjust the reference data from brightness on chart borders (use griddata to interpolate)
  3. estimate the response curve again, this time from much cleaner data
  4. repeat steps 2-3 until convergence
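
Step 2 of the loop can be sketched as follows (the chart geometry, falloff model, and sample positions are made up; SciPy's griddata stands in for whichever griddata the original scripts use): sample the light-gray border, interpolate the illumination over the swatch positions, and fold it into the linear reference values.

```python
import numpy as np
from scipy.interpolate import griddata

xs = np.linspace(0, 10, 21)                          # top/bottom border samples
side_ys = np.linspace(0.5, 6.5, 13)                  # left/right border samples
border_xy = np.array([(x, y) for x in xs for y in (0.0, 7.0)] +
                     [(x, y) for x in (0.0, 10.0) for y in side_ys])
# made-up radial falloff, brightest at the chart centre
illum_model = lambda p: 1.0 - 0.03 * np.hypot(p[:, 0] - 5, p[:, 1] - 3.5)
border_val = illum_model(border_xy)                  # brightness sampled on the border

swatch_xy = np.column_stack([np.linspace(1, 9, 12),  # 12 gray-swatch centres
                             np.full(12, 3.0)])
illum = griddata(border_xy, border_val, swatch_xy, method='linear')

ref = np.linspace(0.05, 0.9, 12)                     # linear reference values
ref_adjusted = ref * illum / illum.mean()            # adjust the reference, not the image
print(bool(np.all(np.isfinite(ref_adjusted))))       # True
```

As noted above, this is "not very exact, just better than nothing": linear interpolation from the border cannot fully capture brightening toward the chart centre, which is what steps 3-4 of the iteration are meant to absorb.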

Unfortunately, this is not going to solve the mismatch between the two scales. Better get some properly illuminated charts :)

4.4.3 Estimating response curve from bracketed IT8 images

WIP