Factory Calibration (firmware 2.0)

Latest revision as of 15:44, 18 January 2024

In the following instructions, commands that need to be done on the Beta are prefixed with

  [operator@beta] ~ command

... while commands that need to be done on your computer are prefixed with

  $ command


1 Notice

This page is for firmware 2.0. For firmware 1.0, see the Factory Calibration page.

This page assumes you have installed firmware 2.0 on a microSD card of at least 16GB, so that darkframes can be captured on the Beta directly.

Hint: create a variable containing your Beta's IP for easy access, for example:

$ export BETA=192.168.1.101
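
You can check that the variable and the connection work with a quick test (a sketch; uptime is just an arbitrary harmless command):

$ ssh operator@$BETA uptime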

If your startup script doesn't do it already, don't forget to run axiom_start.sh:

  [operator@beta] ~ sudo axiom_start.sh

TO DO

It is good practice to add a "blank snap" when changing exposure timing, i.e.:

  [operator@beta] ~ axiom_snap  -2 -b -r -e 10ms -z && axiom_snap -2 -b -r -e 10ms > test3.raw12

2 Preparations

Install on your AXIOM Beta:

[operator@beta] ~ sudo pacman -S python-numpy

Install the dcraw and octave packages on your PC. For Ubuntu this would look like:

$ sudo apt-get install dcraw octave

Download and compile raw2dng on your PC: https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw2dng
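
A minimal sketch of the download-and-build step, assuming the repository can be cloned with git and that the raw2dng directory provides a Makefile (check the repository's own instructions if it differs):

$ git clone https://github.com/apertus-open-source-cinema/misc-tools-utilities.git
$ cd misc-tools-utilities/raw2dng
$ make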




3 Check range of the input signal

On the Beta set gain to x1 by running:

[operator@beta] ~ sudo /usr/axiom/script/axiom_set_gain.sh 1

Download this Octave file to your PC into your current work directory:

$ wget https://raw.githubusercontent.com/apertus-open-source-cinema/misc-tools-utilities/master/darkframes/read_raw.m

Capture an overexposed image with the Beta and check the levels:

$ ssh operator@$BETA "axiom_snap -2 -b -r -e 100ms" > snap.raw12 
$ ./raw2dng snap.raw12 --totally-raw
$ octave
octave:1> a = read_raw('snap.DNG')
octave:2> prctile(a(:), [0.1 1 50 99 99.9])

If everything worked you will get a wall of numbers now. TODO: We should extract the essential pieces of information here... (min/max maybe)?

Lower numbers should be around 50...300 (certainly not zero). Higher numbers should be around 4000, but not 4095.

Repeat for gains 2, 3, 4.
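
The repeat can be scripted from your PC along these lines (a sketch reusing the commands above; the snap-x$g.raw12 file names are just examples):

for g in 2 3 4; do
  ssh operator@$BETA sudo /usr/axiom/script/axiom_set_gain.sh $g
  ssh operator@$BETA "axiom_snap -2 -b -r -e 100ms" > snap-x$g.raw12
  ./raw2dng snap-x$g.raw12 --totally-raw
done

Then load each snap-x*.DNG in Octave and check the percentiles as before.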



4 RCN calibration

RCN stands for Row/Column Noise correction, meaning we filter out the fixed pattern noise.

Clear the old RCN values by typing on the Beta:

sudo axiom_start.sh
sudo /usr/axiom/script/axiom_rcn_clear.py

Now you need to make sure that your Beta is not capturing any light (really not a single photon should hit the sensor :) ):

  1. close the lens aperture as far as possible
  2. attach lens cap
  3. put black lens bag over Beta
  4. turn off all lights in the room - do this at night or in a completely dark room

Take 64 dark frames at 10ms and gain x1 (~1.2GB) with the following script, executed on the Beta:

sudo /usr/axiom/script/axiom_set_gain.sh 1
axiom_sequencer_stop.sh    # disable HDMI stream
for i in `seq 1 64`; do
  axiom_snap -2 -b -r -e 10ms > dark-x1-10ms-$i.raw12
done
axiom_sequencer_start.sh   # re-enable HDMI stream

Compute a temporary dark frame for RCN calibration:

raw2dng --no-blackcol --calc-darkframe dark-x1-10ms-*.raw12

This should process quite quickly and output something like the following at the end:

Averaged 64 frames exposed from 12.00 to 12.00 ms. 
Could not compute dark current.
Please use different exposures, e.g. from 1 to 50 ms.
Dark offset : 0.00
Writing darkframe-x1.pgm...
Done.


Rename the darkframe and, if you computed it on your PC, upload it to your Beta:

mv darkframe-x1.pgm darkframe-rcn.pgm
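
A minimal sketch of the upload using scp (assuming the operator account and its home directory; adjust the target path if you keep calibration files elsewhere):

$ scp darkframe-rcn.pgm operator@$BETA:~/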

Set the RCN values:

sudo /usr/axiom/script/axiom_rcn_darkframe.py darkframe-rcn.pgm

Put this in your startup script (i.e. axiom_start.sh):

/usr/axiom/script/axiom_rcn_darkframe.py darkframe-rcn.pgm

4.1 Troubleshooting

If you get an error report like this:

Traceback (most recent call last): 
File "rcn_darkframe.py", line 17, in <module>
import png
ImportError: No module named 'png'

Make sure the Beta is connected to the Internet (via Ethernet) and run:

pip install pypng

and then run the Python script again.

If you get something like this instead:

Traceback (most recent call last): 
File "/usr/axiom/script/axiom_rcn_darkframe.py", line 75, in <module>
dark = read_pnm(filename)
File "/usr/axiom/script/axiom_rcn_darkframe.py", line 53, in read_pnm
format, width, height, samples, maxval = png.read_pnm_header( fd )
AttributeError: module 'png' has no attribute 'read_pnm_header'

Make sure the Beta is connected to the Internet (via Ethernet) and run:

sudo pip install --force "pypng==0.0.18"



5 RCN validation

5.1 Method 1: on the Beta

On the Beta, capture one darkframe without compensations:

[operator@beta] $ sudo /usr/axiom/script/axiom_rcn_clear.py  
[operator@beta] $ axiom_snap -2 -b -r -e 10ms > dark-check-1.raw12

Capture one darkframe with compensations:

[operator@beta] $ sudo /usr/axiom/script/axiom_rcn_darkframe.py darkframe-rcn.pgm 
[operator@beta] $ axiom_snap -2 -b -r -e 10ms > dark-check-2.raw12


Then use raw2dng to analyze the differences:

[operator@beta] $ raw2dng --no-darkframe --check-darkframe dark-check-1.raw12 
[operator@beta] $ raw2dng --no-darkframe --check-darkframe dark-check-2.raw12

With the compensated snapshot the column noise should disappear, and only row noise left should be dynamic (not static). Visual inspection: the dark frame should have only horizontal lines, not vertical ones.

Sample output:

Average     : 127.36               # about 128, OK 
Pixel noise : 5.44 # this one is a bit high because we only corrected row and column offsets (it's OK)
Row noise  : 2.30 (42.2%) # this one should be only dynamic row noise - see Method 3 below.
Col noise  : 0.20 (3.8%) # this one is very small, that's what we need to check here



5.2 Method 2: visual method

Put a lens cap on the camera and check the image on a HDMI monitor.

In the camera set the matrix gains to:

[operator@beta] $ sudo /usr/axiom/script/axiom_mat4_conf.sh  20 0 0 0  0 10 10 0  0 10 10 0  0 0 0 10  0 0 0 0

run:

[operator@beta] $ sudo /usr/axiom/script/axiom_rcn_clear.py

The static noise profile should be visible.

From the folder containing darkframe-rcn.pgm (should be ~), run:

[operator@beta] $ sudo /usr/axiom/script/axiom_rcn_darkframe.py darkframe-rcn.pgm

The static noise profile should be gone. You will still see dynamic row noise (horizontal lines flickering) - that's expected.



5.3 Method 3: with Octave

Capture 2 frames:

ssh operator@$BETA "axiom_snap -2 -b -r -e 10ms" > dark-check-1.raw12  
ssh operator@$BETA "axiom_snap -2 -b -r -e 10ms" > dark-check-2.raw12

Convert the two darkframes with raw2dng:

raw2dng dark-check-*

Make sure you have the required octave function file in place:

wget https://raw.githubusercontent.com/apertus-open-source-cinema/misc-tools-utilities/master/darkframes/read_raw.m

You also need the Octave "signal" and "control" packages from http://octave.sourceforge.net/packages.php; inside Octave, install each downloaded package with:

pkg install package_name 

To check whether the entire row noise is dynamic, load the two raw images in Octave and check the correlation between the two row noise samples:

pkg load signal 
a = read_raw('dark-check-1.DNG');
b = read_raw('dark-check-2.DNG');
ra = mean(a'); ra = ra - mean(ra);
rb = mean(b'); rb = rb - mean(rb);
xcov(ra, rb, 0, 'coeff')

The result should be very small (about 0.1 or lower). When running this check on two uncalibrated dark frames, you will get around 0.8 - 0.9.



6 Step 3: Dark frame calibration

This step significantly improves image noise levels. More info: https://www.apertus.org/magic-lantern-getting-to-grips-with-axiom-beta-image-sensor-article-feb-2016 and https://www.magiclantern.fm/forum/index.php?topic=11787.msg129672#msg129672. Make sure the RCN calibration from the previous steps is in place before continuing here.

Take 400 dark frames at various exposure times for each of the 4 gains, and compute darkframes for each gain. This will require around 8GB of disk space at a time (the raw12 files are deleted after each gain).

Script to do it remotely, from your PC:

for g in 1 2 3 4; do
  ssh operator@$BETA sudo /usr/axiom/script/axiom_set_gain.sh ${g}
  for t in {1..100}; do
    for i in {1..4}; do
      ssh operator@$BETA "axiom_snap -2 -r -e ${t}ms" > dark-x${g}-${t}ms-v${i}.raw12
    done
  done
  raw2dng --calc-dcnuframe dark-x$g-*.raw12
  rm *.raw12
done

Script to do it on the Beta directly:

for g in 1 2 3 4; do
  sudo /usr/axiom/script/axiom_set_gain.sh ${g}
  for t in {1..100}; do
    for i in {1..4}; do
      axiom_snap -2 -r -e ${t}ms > dark-x${g}-${t}ms-v${i}.raw12
    done
  done
  raw2dng --calc-dcnuframe dark-x$g-*.raw12
  rm *.raw12
done

This produces the following files:

darkframe-x1.pgm, dcnuframe-x1.pgm, darkframe-x2.pgm, dcnuframe-x2.pgm, darkframe-x3.pgm, dcnuframe-x3.pgm, darkframe-x4.pgm, dcnuframe-x4.pgm

Store these files in a safe place as they will be used in post-processing.

Place them in the directory where you capture raw12 files or experimental raw HDMI recordings, so raw2dng will use them.



7 Dark Frame Validation

Capture a few new raw12 darkframes with RCN correction enabled and the PGM files in place for raw2dng.

raw2dng --check-darkframe dark*.raw12 > dark-check.log

If the calibration worked, you should get lower noise values than in step 2 (the RCN calibration).

average value: close to 128

pixel noise: about 3 or 4 (may increase at longer exposure times)

row noise and column noise should look similar to this:

Pixel noise : 3     
Row noise  : 1.70
Col noise  : 0.15
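
To skim just the relevant lines out of dark-check.log, something like this works (a sketch; the label spellings follow the sample output above):

$ grep -E 'Average|noise' dark-check.log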



8 Color profiling

Set gain x1.

ssh root@$BETA "./set_gain.sh 1"

Take a picture of the IT8 chart, correctly exposed.

Edit the coordinates and the raw file name in calib_argyll.sh.

ssh root@$BETA "./cmv_snap3 -2 -b -r -e 10ms" > it8chart.raw12
./calib_argyll.sh IT8

Save the following files:

  • ICC profile (*.icc)
  • OCIO configuration (copy/paste from terminal) + LUT file (*.spi1d)



9 Color profiling validation

Render the IT8 chart in Blender, using the OCIO configuration.

Same with the ICC profile (Adobe? RawTherapee? What apps support ICC?)

(todo: detailed steps)



10 Step 5: HDMI dark frames

For experimental 4k raw recording (https://www.apertus.org/axiom-beta-uhd-raw-mode-explained-article-may-2016) step 3 calibration is not required (step 2 should be in place though). Instead darkframes are collected from HDMI recordings.

Record a 1-minute clip with lens cap on.

Average odd and even frames.

(todo: polish and upload the averaging script)

(todo: check if the HDMI dark frames can be computed from regular dark frames)

Results: darkframe-hdmi-A.ppm and darkframe-hdmi-B.ppm.
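
Until the averaging script is published, the individual frames can be split by parity with ffmpeg into two sets of PPM images (a sketch; clip.mov stands for your recorder's file, and averaging each set into darkframe-hdmi-A.ppm / darkframe-hdmi-B.ppm still has to be done with a tool of your choice):

$ ffmpeg -i clip.mov -vf "select='eq(mod(n,2),0)'" -vsync vfr even-%05d.ppm
$ ffmpeg -i clip.mov -vf "select='eq(mod(n,2),1)'" -vsync vfr odd-%05d.ppm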




11 Step 6: HDMI filters for raw recovery

This calibration is for the recorder, not for the camera. It's for recovering the original raw data from the HDMI stream, so it has nothing to do with sensor profiling and such.

Record some scene with high detail AND rich colors.

Take a raw12 snapshot in the middle of recording. The HDMI stream will pause for a few seconds.

Upload two frames from the paused clip, together with the raw12 file. This calibration will be hardcoded in hdmi4k.

The two frames must be in the native format of your video recorder (not DNG). You should be able to cut the video with ffmpeg -vcodec copy.
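
A cut with stream copy could look like this (a sketch; the input name and the timestamps are placeholders for wherever the pause happened in your recording):

$ ffmpeg -ss 00:01:00 -i clip.mov -t 2 -c copy paused-part.mov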




12 TODO

  • batch script to copy all the utilities for the workflow
  • implement this: https://wiki.apertus.org/index.php/Calibration_files
  • automate HDMI calibration