Identifying response curves on the CMV12000 sensor
We would like the data coming from the sensor to be linear, that is, proportional to the number of photons received, plus some constant offset.
Is it already linear, or do we have to adjust it?
This sensor has a high dynamic range mode called PLR (piecewise linear response). In this mode, the sensor response is highly nonlinear (configurable, but the output is not exactly piecewise linear, so we still have to identify the curves). Details about this mode on the PLR page.
Here we'll focus on identifying the response curves.
One may simply assume there is some input range where the sensor response is linear, and use that range to correct nonlinearities that occur outside this range (for example, with underexposure, overexposure, or by comparing a PLR mode with a regular exposure mode). This can be good enough for initial tests, but it has a major problem.
Suppose we have a sensor with this response: y = (t*x)^2 (where t is exposure time, x is radiance, x*t is the number of photons captured, and y is the sensor output value, all in arbitrary units). Let's "expose" two synthetic images from it, in Octave:
x = linspace(0, 1, 1000);  % synthetic "image" with range from 0 to 1 (black to white)
a = x.^2;                  % "expose" the image for 1 time unit
b = (3*x).^2;              % "expose" the image for 3 time units
subplot(131), plot(x, a, x, b)          % plot the sensor responses vs input signal (radiance)
subplot(132), plot(a, b);               % plot the second image against the first, assuming the first one might be linear
subplot(133), plot(log2(a), log2(b));   % also try a logarithmic plot
hold on, plot(log2(a), log2(3*a), 'k'); % expected response for the logarithmic plot
Did you expect that? Our simulated sensor has an obviously nonlinear response, yet comparing the two images suggests its output might actually be linear!
However, the log plot indicates there may be a problem, so this type of nonlinearity is not completely hidden. One could also conclude that the exposure controls on the sensor were not accurate (in that case you would also get an offset between the two curves, so it's easy to confuse the two situations).
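To see why the comparison looks linear: b = (3x)^2 = 9x^2 = 9a, so the a-vs-b plot is an exact straight line through the origin, just with slope 9 instead of the slope 3 a linear sensor would give. A small NumPy translation of the check (plots omitted):

```python
import numpy as np

x = np.linspace(0, 1, 1000)   # synthetic scene radiance, black to white
a = x**2                      # "expose" for 1 time unit:  y = (t*x)^2
b = (3 * x)**2                # "expose" for 3 time units

# b is an exact linear function of a (b = 9*a), so plotting a vs b looks
# perfectly linear even though the sensor response is quadratic
assert np.allclose(b, 9 * a)

# the giveaway is the log-log offset: a truly linear sensor would show
# log2(b) - log2(a) = log2(3), but the quadratic one shows log2(9)
offset = np.log2(b[1:]) - np.log2(a[1:])   # skip x=0 to avoid log(0)
print(offset.mean())   # ~3.17 = log2(9), not ~1.58 = log2(3)
```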
This nonlinearity was pretty obvious, but subtler variations may be much more difficult to spot, so we should really find a way to recover the true response curve. Without a photon counter, that is.
Bracketed image of the grayscale line from an IT8 chart, 1...100ms in 1ms increments; the exposure sequence was executed 100 times, and images taken at identical settings were averaged.
These curves seem to indicate that our black level may be a little too high. Figure out why.
WIP, source code here.
Q: can this be used on bracketed images?
This assumes the exposure setting from the sensor is accurate, so the input signal that will be digitized is proportional to the number of photons (scene radiance multiplied by exposure time).
Here's an example showing how much the exposure setting can be trusted. Crop taken from a bracketed exposure sequence (1, 2, 5, 10, 20, 30 ms) repeated 700 times and averaged; the crop in the figure is from the 30 ms exposure.
That's right - when exposure time increases, very dark pixels become even darker!
Also note the black level, after correcting the image with dark frames and black reference columns, is set to 128 (the entire image is adjusted by adding a constant, so the black reference columns reach this value). The pixels showing this anomaly are below this black level. There is detail in the image, even in a single (non-averaged) exposure, but for some unknown reason, it ends up below the black level.
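For clarity, the black-level adjustment described above is just a single global offset; a minimal sketch (the function name and column layout are made up for illustration, this is not the actual pipeline code):

```python
import numpy as np

TARGET_BLACK = 128  # black level used in the text

def set_black_level(img, black_cols, target=TARGET_BLACK):
    """Shift the whole frame by a constant so the mean of the black
    reference columns lands on `target` (hypothetical helper)."""
    offset = target - img[:, black_cols].mean()
    return img + offset

# tiny synthetic frame: columns 0 and 1 act as black reference columns
frame = np.full((4, 6), 500.0)
frame[:, :2] = 120.0                        # black columns read slightly low
corrected = set_black_level(frame, [0, 1])
print(corrected[:, :2].mean())              # -> 128.0
```

Pixels whose corrected value falls below 128 are exactly the anomalous ones described above.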
This behavior cannot be reproduced on dark frames though, so probably (unconfirmed hypothesis) the black level goes down (not sure if globally or locally) when the image content gets brighter. The detail in the dark area on the test image looks normal (not reversed), so probably the total number of photons captured is the variable we should account for?
This algorithm is roughly inspired by Robertson02, but without any solid mathematical background; only the hope that it will converge to a good solution.
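For readers who want to experiment, here is a minimal sketch of a Robertson02-style alternating estimation; this is a generic reimplementation of the idea, not the actual code used for the figures on this page. It alternates between estimating per-pixel radiance from the current response curve, and re-estimating the curve from those radiances:

```python
import numpy as np

def estimate_response(Y, t, levels=256, iters=10):
    """Alternating estimation loosely following the Robertson02 idea.
    Y: (n_exposures, n_pixels) integer codes in [0, levels)
    t: (n_exposures,) exposure times
    Returns g: (levels,) inverse response, code value -> relative exposure."""
    t = np.asarray(t, dtype=float)
    g = np.linspace(0.0, 1.0, levels)   # start from a linear guess
    for _ in range(iters):
        # estimate per-pixel radiance from the current curve
        x = (t[:, None] * g[Y]).sum(axis=0) / (t ** 2).sum()
        # re-estimate the curve: each code value maps to the mean of
        # t_i * x_j over the samples that produced that code
        tx = t[:, None] * x[None, :]
        for m in np.unique(Y):
            g[m] = tx[Y == m].mean()
        g /= g.max()                    # remove the arbitrary overall scale
    return g

# sanity check on a synthetic *linear* sensor: the recovered curve
# should stay linear (codes that never occur keep their initial value)
x_true = np.linspace(0.05, 1.0, 300)
t = np.array([1.0, 2.0, 4.0])
Y = np.clip(np.round(t[:, None] * x_true[None, :] / 4.0 * 255), 0, 255).astype(int)
g = estimate_response(Y, t)
```

Real implementations also weight each sample (for example, down-weighting values near black and near saturation), which is omitted here for brevity.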
Example (only a single line from the test images was used):
Note the response looks a bit different from Robertson02. Who is right?
Storing response curves for each pixel would be really expensive in terms of memory, but may be worth trying.
Test data: bracketed exposures at 1, 2, 5, 10, 20, 30 ms. 700-frame average for each exposure (4200 images in total, captured overnight). This would reduce dynamic noise by log2(sqrt(700)) ≈ 4.7 stops, so the static variations should be reasonably clean.
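As a sanity check of that figure: averaging N independent frames divides the noise standard deviation by sqrt(N), which is log2(sqrt(700)) ≈ 4.7 stops for 700 frames:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 700
# simulate 700 frames of unit-sigma noise over 10000 pixels
single = rng.normal(0.0, 1.0, size=(n_frames, 10_000))
averaged = single.mean(axis=0)

# noise reduction in stops from averaging the frames
reduction_stops = np.log2(single.std() / averaged.std())
print(reduction_stops)   # ~4.7 = log2(sqrt(700))
```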
Since the test image was not perfectly uniform (we didn't actually shoot a flat-field frame), we probably won't be able to find out the PRNU.
Response curves can be also estimated from the IT8 reference data, which was (hopefully) measured with much better accuracy than what we can achieve with our current setup.
Sampling data from the edge of the chart (left) lets us correct for nonuniform illumination: one can either adjust the chart itself (right), or the reference data (preferred).
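One possible way to implement the preferred option (all names here are hypothetical, not the actual profiling code): fit a smooth illumination profile to the gray samples from the chart edge, then multiply the reference values by that profile instead of dividing the chart pixels:

```python
import numpy as np

def illumination_profile(edge_samples, positions, deg=1):
    """Fit a low-order polynomial to gray-strip samples along the chart
    edge and return it normalized to mean 1 (hypothetical helper)."""
    coeffs = np.polyfit(positions, edge_samples, deg)
    profile = np.polyval(coeffs, positions)
    return profile / profile.mean()

# synthetic example: a uniform gray strip under a 10% left-to-right falloff
pos = np.linspace(0.0, 1.0, 24)       # swatch positions along the edge
true_gray = 0.5                       # uniform reflectance
falloff = 1.05 - 0.10 * pos           # nonuniform illumination
samples = true_gray * falloff         # what the sensor actually measures

profile = illumination_profile(samples, pos)
reference = np.full_like(pos, true_gray)
adjusted_reference = reference * profile   # adjust the reference, not the pixels
# the adjusted reference now varies the same way the measured samples do
```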
Why is it better to adjust the reference data instead of the chart pixels?
Plotting raw data vs reference values on IT8 gray swatches gives:
In the left figure, the two grayscale lines from the IT8 chart diverge because of nonuniform illumination. After adjusting the reference data, the match is much better, though it's still not perfect (because we can't really measure the nonuniform illumination in the chart, other than by including it in the model when performing the color profiling step).
Unfortunately, this is not going to solve the mismatch between the two scales. Better get some properly illuminated charts :)