[[File:BetaGettingStarted.jpg | thumb | 400px]]
 
 
Note: Some of the instructions we have prepared are written in a way that they can also be followed by people without advanced technical knowledge. If you are more of a "techie", please keep this in mind and skip or ignore the steps or passages which deal with information you already know.
 
==Getting Started==
 
===Prep your AXIOM Beta camera for use===
# Use a micro-USB cable to connect the camera's MicroZed development board (USB UART) to a computer. The MicroZed board is the backmost, red PCB. (There is another micro-USB socket on the Power Board, but that is the JTAG Interface.)
# Connect the ethernet port on the MicroZed to an ethernet port on your computer. You might have to use an ethernet adapter on newer, smaller machines which come without a native ethernet port.
# Connect the AC adapter to the camera's Power Board. (The power cord plugs into an adapter that connects to the Power Board; to power the camera off at a later point, you need not disconnect the adapter from the board but can just unplug the cord from the adapter.)
 
 
[[File:BetaGuide.jpg  | 500px]]
 
 
===Prep your computer for use with your camera===
To communicate with your AXIOM Beta camera, you will send it instructions via your computer's command line.
 
In case you have not worked with a shell (console, terminal) much or at all before, we have prepared detailed instructions to help you get set up. The steps required to prepare your machine sometimes differ between operating systems, so pick the ones that apply to your system.
 
Note that dollar signs <code>$</code> placed in front of commands are not meant to be typed in but denote the command line prompt (a signal indicating the computer is ready for user input). It is used in documentation to differentiate between commands and output resulting from commands. The prompt might look different on your machine (e.g. an angled bracket <code>></code>) and be preceded by your user name, computer name or the name of the directory which you are currently inside.
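For example, in the following snippet only <code>echo hello</code> is actually typed; the <code>$</code> is the prompt and the second line is the command's output:
 $ echo hello
 hello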
 
====USB to UART drivers====
For the USB connection to work, you will need drivers for bridging USB to UART (USB to serial). They [https://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx can be downloaded] from e.g. Silicon Labs' website – pick the software provided for your OS and install it.
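On Linux, the matching in-kernel driver is usually called cp210x; after plugging in the camera you can check whether it was loaded (a quick sanity test, assuming you are allowed to read the kernel log, otherwise prefix <code>sudo</code>):
 dmesg | grep -i cp210x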
 
====Serial console====
The tool we recommend for connecting to the AXIOM Beta camera via serial port from Mac OS X or Linux is [http://linux.die.net/man/1/minicom minicom]; for connections from Windows machines, we have used [http://www.putty.org PuTTY].
 
[[/Mac OS X setup/]]<br>
[[/Linux setup/]]
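Once minicom is installed, a typical invocation looks like this (the device path below is just an example; see the serial connection section further down for how to find the right one on your system):
 minicom -D /dev/ttyUSB0 -b 115200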
 
===Serial connection (via USB)===
By default, the Beta requests an IP address via DHCP.

If DHCP is not available, you can set the camera's IP address manually (from the serial console) with the <code>ifconfig</code> command, e.g.:
 ifconfig eth0 192.168.0.9/24 up
Make sure to use the correct interface name and IP address.
 
 
The camera is accessed via the serial console (USB UART).<br>
Drivers for bridging USB to UART [https://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx can be downloaded] from Silicon Labs' website. We have successfully used minicom, screen and PuTTY to access the UART.
 
Once the Beta is connected and powered on, it is listed as a USB device in the <code>/dev</code> directory of your file system, e.g. <code>/dev/ttyUSB0</code> (Linux) or <code>/dev/tty.SLAB_USBtoUART</code> (Mac OS X).<br>
You can use e.g. <code>ls -al /dev/ | grep -i usb</code> to list all USB devices connected to your machine.
 
 
To connect to the camera, use the command:
 screen <device_path> 115200

e.g. (depending on your operating system):
 screen /dev/ttyUSB0 115200
 screen /dev/tty.SLAB_USBtoUART 115200

You might have to run the command with superuser rights, i.e.:
 sudo screen /dev/ttyUSB0 115200
 
 
On successful connection, you will be prompted to enter user credentials needed for logging into the camera.<br>
If your terminal remains blank, try pressing enter.
 
The default credentials are:
user: root
password: beta
 
====Disconnect from the camera====
To disconnect from the camera again, use:
exit
The result will be a logout message followed by a new login prompt. To suspend or quit your <code>screen</code> session (and return to your regular terminal window) use one of the following key combinations:
 CTRL+a CTRL+z   (suspend)
 CTRL+a CTRL+\   (quit)
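Alternatively, you can detach from the <code>screen</code> session (leaving it running in the background) and resume it later:
 CTRL+a d     (detach)
 screen -r    (resume the detached session)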
 
===Ethernet connection (using SSH)===
 
To access the Beta via SSH – for working with it remotely or even on your local machine – authentication via SSH keys is required.
 
You will have to add your public key to the file<br>
<code>/root/.ssh/authorized_keys</code><br>
on your Beta camera.
 
'''Note'''<br>
We have previously run into problems with <code>screen</code> and copy-pasting entire keys, and eventually ended up splitting keys into two parts and copy-pasting them separately.
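A sketch of how the key can be appended in two pieces from a serial-console session on the camera (the key fragments and the <code>user@host</code> comment below are placeholders for your own public key):
 mkdir -p /root/.ssh
 echo -n 'ssh-rsa AAAAB3Nza...first-half-of-key' >> /root/.ssh/authorized_keys
 echo '...second-half-of-key user@host' >> /root/.ssh/authorized_keys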
 
===Start the camera===
 
The init script is automatically run at startup:
./kick.sh
 
This will initialize all systems and train the sensor communication.
 
 
==Capture an image==
 
Write the image to the file snap.raw16 and display it with ImageMagick:
 ssh root@*BETA-IP* "./cmv_snap3 -e 10ms" | tee snap.raw16 | display -size 4096x3072 -depth 16 gray:-

The same in 12 bit (more efficient, same data):
 ssh root@*BETA-IP* "./cmv_snap3 -2 -e 10ms" | tee snap.raw12 | display -size 4096x3072 -depth 12 gray:-
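If you only want to save the snapshot (for example on a machine without ImageMagick installed), drop the <code>display</code> stage and redirect the output to a file:
 ssh root@*BETA-IP* "./cmv_snap3 -2 -e 10ms" > snap.raw12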
 
==Overlay Images==
Clear the overlay:
 ./mimg -a -o -P 0

Enable the overlay:
 gen_reg 11 0x0104F000

Disable the overlay:
 gen_reg 11 0x0004F000
 
==Additional outputs==

Display voltages and current flow:
 ./pac1720_info.sh
Output:
 <nowiki>ZED_5V        5.0781 V [2080]  +29.0625 mV [2e8]   +968.75 mA
BETA_5V       5.1172 V [20c0]  +26.6016 mV [2a9]   +886.72 mA
HDN           3.2422 V [14c0]   -0.0391 mV [fff]     -1.30 mA
PCIE_N_V      3.2422 V [14c0]   -0.0391 mV [fff]     -1.30 mA
HDS           3.2422 V [14c0]   +0.0000 mV [000]     +0.00 mA
PCIE_S_V      3.2422 V [14c0]   -0.0391 mV [fff]     -1.30 mA
RFW_V         3.2812 V [1500]   +0.2734 mV [007]     +9.11 mA
IOW_V         3.2422 V [14c0]   +0.0000 mV [000]     +0.00 mA
RFE_V         3.2812 V [1500]   +0.2344 mV [006]     +7.81 mA
IOE_V         3.2812 V [1500]   +0.0781 mV [002]     +2.60 mA
VCCO_35       2.5000 V [1000]   +0.6641 mV [011]    +22.14 mA
VCCO_13       2.4609 V [ fc0]   +1.2500 mV [020]    +41.67 mA
PCIE_IO       2.4609 V [ fc0]   -0.0391 mV [fff]     -1.30 mA
VCCO_34       2.4609 V [ fc0]   +0.8203 mV [015]    +27.34 mA
W_VW          1.9922 V [ cc0]   -0.0781 mV [ffe]     -2.60 mA
N_VW          3.1641 V [1440]   +0.0000 mV [000]     +0.00 mA
N_VN          1.8750 V [ c00]  +15.4297 mV [18b]   +514.32 mA
N_VE          3.1641 V [1440]   +0.0000 mV [000]     +0.00 mA
E_VE          1.9922 V [ cc0]   -0.0391 mV [fff]     -1.30 mA
S_VE          1.9531 V [ c80]   +0.0000 mV [000]     +0.00 mA
S_VS          2.9297 V [12c0]   +0.3906 mV [00a]    +13.02 mA
S_VW          1.9922 V [ cc0]   -0.1562 mV [ffc]     -5.21 mA</nowiki>

Read Temperature on Zynq:
 ./zynq_info.sh
Output:
 ZYNQ Temp    49.9545 °C
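To keep an eye on the supply rails continuously you can re-run the script at an interval, assuming the <code>watch</code> utility is present on the camera's Linux image:
 watch -n 1 ./pac1720_info.sh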


==HDMI Modes==
cmv_hdmi3.bit is the FPGA bitstream loaded for the HDMI interface. We use symlinks to switch this file easily.
 
===Enable 1080p60 Mode===
rm -f cmv_hdmi3.bit
ln -s cmv_hdmi3_60.bit cmv_hdmi3.bit
sync
reboot now
 
===Enable 1080p30 Mode===
rm -f cmv_hdmi3.bit
ln -s cmv_hdmi3_30.bit cmv_hdmi3.bit
sync
reboot now
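To double-check which bitstream is currently selected before rebooting, list the symlink:
 ls -l cmv_hdmi3.bit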
 
=Tools=
==cmv_reg==
Get and set CMV12000 image sensor registers (the CMV12000 has 128 16-bit registers).
 
Details are in the sensor datasheet: https://github.com/apertus-open-source-cinema/beta-hardware/tree/master/Datasheets
 
 
'''Examples:'''


Read register 115 (which contains the analog gain settings):
 cmv_reg 115

Return value:
 0x00
This means we are currently operating at analog gain x1 (unity gain).


Set register 115 to gain x2:
 cmv_reg 115 1
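For debugging, all 128 registers can be dumped in one go; a minimal sketch, assuming <code>cmv_reg</code> prints the register value when called with just a register number (as in the read example above):
 for i in $(seq 0 127); do echo -n "reg $i: "; cmv_reg $i; done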


==set_gain.sh==

Set gain and related settings (ADC range and offsets).

 ./set_gain.sh 1
 ./set_gain.sh 2
 ./set_gain.sh 3/3 # almost the same as gain 1
 ./set_gain.sh 3
 ./set_gain.sh 4

==cmv_snap3==

Capture and store image snapshots and sequences.


Example:
 ssh root@cameraip "./cmv_snap3 -B0x08000000 -x -N8 -L1024 -2 -e 5ms" >/tmp/test.seq12

 -B sets the buffer base and is required at the moment.
    0x08000000 means 128 MB: Linux has been squeezed into 128 MB, which leaves 1024 MB - 128 MB for sequences.
    Note that there is no safety net and no plausibility check, i.e. if you reach or specify addresses where Linux is running, a reset will probably be necessary.
 -x skips the first frame (optional); the first frame after a pause tends to look different.
 -N sets the number of frames to capture (again, note that going above 0x3FFFFFFF overwrites the Linux area).
 -L sets the number of lines (even values only, max. 3072).
 -2 switches to 12-bit output (the only format tested so far :)
 -e sets the exposure, as usual.

To split one seq12 file into multiple individual files:
 split -b <size> <file.seq12> <prefix>

 <size>    the size in bytes of one image (4096 x number-of-lines x 12 bit / 8 bit)
 <prefix>  the output file name prefix
Optionally add --additional-suffix=.raw12 for the proper extension and -d (or --numeric-suffixes) for numeric numbering of the split files.

Example for 1024 lines (input file: test.seq12):
 split -b 6291456 -d test.seq12 output --additional-suffix=.raw12

Example for 1080 lines (input file: test.seq12):
 split -b 6635520 -d test.seq12 output --additional-suffix=.raw12
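The <code>-b</code> value is just the formula above evaluated for your line count; you can let the shell do the arithmetic, e.g. for 1080 lines:
 LINES=1080
 echo $((4096 * LINES * 12 / 8))   # prints 6635520 (bytes per frame)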


Use ImageMagick to convert a raw12 file into a color preview image:
 cat test.raw12 | convert \( -size 4096x3072 -depth 12 gray:- \) \( -clone 0 -crop -1-1 \) \( -clone 0 -crop -1+0 \) \( -clone 0 -crop +0-1 \) -sample 2048x1536 \( -clone 2,3 -average \) -delete 2,3 -swap 0,1 +swap -combine test_color.png


Capture directly to DNG in the camera, without saving the raw12:
./cmv_snap3 -2 -b -r -e 10ms | raw2dng snap.DNG
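If you have instead split a sequence into individual raw12 files (see above), each file can be converted with raw2dng in a loop; a sketch, assuming <code>raw2dng</code> is available in the current directory:
 for f in output*.raw12; do ./raw2dng "$f"; done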
=General Info=
Stop HDMI live stream:
fil_reg 15 0
Start HDMI live stream:
fil_reg 15 0x01000100
==Operating System==

So far we have been able to reuse an Arch Linux image for the ZedBoard on the MicroZed. To do so, some software such as the FSBL and U-Boot was added. More information can be found here: http://stefan.konink.de/contrib/apertus/ I will commit myself to producing a screencast of the entire bootstrap process, from the Xilinx software to booting the MicroZed.

I would suggest running Arch Linux on the AXIOM Beta for development purposes. If we need to shrink it down, that will be quite trivial. Obviously we can take the embedded approach from there, as long as we don't fall into the trap of libc implementations with broken threading.


==Moving Image Raw Recording/Processing==
 
Note: This only works with the experimental raw mode enabled on the AXIOM Beta 1080p60 (A+B Frames) and is only tested with the Atomos Shogun currently.
 
To measure the required compensations with a different recorder, follow [[raw processing recorder benchmarking | this guide]].
 
Postprocessing software to recover the raw information (DNG sequences) is on github: https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw-via-hdmi
 
Required packages: ffmpeg, build-essential
 
Mac requirements for compiling: gcc 4.9 (via Homebrew):
 brew install homebrew/versions/gcc49

Also install ffmpeg.
 
To do all the raw processing in one single command (after ffmpeg codec copy processing):
./hdmi4k INPUT.MOV - | ./raw2dng --fixrnt --pgm --black=120 frame%05d.dng
 
===hdmi4k===
Converts a video file recorded in AXIOM raw to a PGM image sequence and applies the darkframe (needs to be created beforehand).
 
Currently clips must go through ffmpeg before hdmi4k can read them:
ffmpeg -i CLIP.MOV -c:v copy OUTPUT.MOV


To cut out part of a video between IN and OUT with ffmpeg while maintaining the original encoding:
 ffmpeg -i CLIP.MOV -ss IN_SECONDS -t DURATION_SECONDS -c:v copy OUTPUT.MOV


<nowiki>
hdmi4k
HDMI RAW converter for Axiom BETA

Usage:
  ./hdmi4k clip.mov
  raw2dng frame*.pgm [options]

Calibration files:
  hdmi-darkframe-A.ppm, hdmi-darkframe-B.ppm:
  averaged dark frames from the HDMI recorder (even/odd frames)

Options:
-                   : Output PGM to stdout (can be piped to raw2dng)
--3x3               : Use 3x3 filters to recover detail (default 5x5)
--skip              : Toggle skipping one frame (try if A/B autodetection fails)
--swap              : Swap A and B frames inside a frame pair (encoding bug?)
--onlyA             : Use data from A frames only (for bad takes)
--onlyB             : Use data from B frames only (for bad takes)</nowiki>
 
 
===raw2dng===
Converts a PGM image sequence to a DNG sequence.
 
  <nowiki>DNG converter for Apertus .raw12 files
 
Usage:
  ./raw2dng input.raw12 [input2.raw12] [options]
  cat input.raw12 | ./raw2dng output.dng [options]


Flat field correction:
- each gain requires two reference images (N=1,2,3,4):
- darkframe-xN.pgm will be subtracted (data is x8)
- gainframe-xN.pgm will be multiplied (1.0 = 16384)
- reference images are 16-bit PGM, in the current directory.

Options:
--black=%d          : Set black level (default: autodetect)
                      - negative values allowed
--white=%d          : Set white level (default: 4095)
                      - if too high, you may get pink highlights
                      - if too low, useful highlights may clip to white
--width=%d          : Set image width (default: 4096)
--height=%d         : Set image height
                      - default: autodetect from file size
                      - if input is stdin, default is 3072
--swap-lines        : Swap lines in the raw data
                      - workaround for an old Beta bug
--hdmi              : Assume the input is a memory dump
                      used for HDMI recording experiments
--lut               : Linearize sensor response with per-channel LUTs
                      - probably correct only for one single camera :)
--fixpn             : Fix pattern noise (slow)
--no-darkframe      : Disable dark frame (if darkframe.pgm is present)
--no-gainframe      : Disable gain frame (if gainframe.pgm is present)

Debug options:
--dump-regs         : Dump sensor registers from the metadata block (no output DNG)
--fixpn-dbg-denoised: Pattern noise: show denoised image
--fixpn-dbg-noise   : Pattern noise: show noise image (original - denoised)
--fixpn-dbg-mask    : Pattern noise: show masked areas (edges and highlights)
--fixpn-dbg-row     : Pattern noise: debug rows (default: columns)
</nowiki>


Example:
 ./raw2dng --fixrnt --pgm --black=120 frame%05d.dng
===EDL Parser===
This script can take EDLs to reduce the raw conversion/processing to the essential frames that are actually used in an edit.
This way a finished video edit can be converted to raw DNG sequences easily.
Requirements: ruby
<nowiki># Parses an EDL (passed as the first argument) and prints a bash script that
# extracts and converts only the clips/frames actually used in the edit.
puts "BEFORE EXECUTION, PLS FILL IN YOUR WORK DIRECTORY IN THE SCRIPT (path_to_workdir)"
puts "#!/bin/bash"
i=0
ffmpeg_cmd1 = "ffmpeg -i "
tc_in = Array.new
tc_out = Array.new
clip = Array.new
file = ARGV.first
ff = File.open(file, "r")
ff.each_line do |line|
clip << line.scan(/NAME:\s(.+)/)
tc_in << line.scan(/(\d\d:\d\d:\d\d:\d\d).\d\d:\d\d:\d\d:\d\d.\d\d:\d\d:\d\d:\d\d.\d\d:\d\d:\d\d:\d\d/)
tc_out << line.scan(/\s\s\s\d\d:\d\d:\d\d:\d\d\s(\d\d:\d\d:\d\d:\d\d)/)
end
c=0
clip.delete_at(0)
clip.each do |_entry|
if clip[c].empty?
tc_in[c] = []
tc_out[c] = []
end
c=c+1
end
total_frames = 0
tc_in = tc_in.reject(&:empty?)
tc_out = tc_out.reject(&:empty?)
clip = clip.reject(&:empty?)
tc_in.each do |f|
tt_in = String.new
tt_out = String.new
tt_in = tc_in[i].to_s.scan(/(\d\d)\D(\d\d)\D(\d\d)\D(\d\d)/)
tt_out = tc_out[i].to_s.scan(/(\d\d)\D(\d\d)\D(\d\d)\D(\d\d)/)
framecount = ((tt_out[0][0].to_i-tt_in[0][0].to_i)*60*60*60+(tt_out[0][1].to_i-tt_in[0][1].to_i)*60*60+(tt_out[0][2].to_i-tt_in[0][2].to_i)*60+(tt_out[0][3].to_i-tt_in[0][3].to_i))
framecount = framecount + 20
tt_in_ff = (tt_in[0][3].to_i*1000/60)
frames_in = tt_in[0][0].to_i*60*60*60+tt_in[0][1].to_i*60*60+tt_in[0][2].to_i*60+tt_in[0][3].to_i
frames_in = frames_in - 10
new_tt_in = Array.new
new_tt_in[0] = frames_in/60/60/60
frames_in = frames_in - new_tt_in[0]*60*60*60
new_tt_in[1] = frames_in/60/60
frames_in = frames_in - new_tt_in[1]*60*60
new_tt_in[2] = frames_in/60
frames_in = frames_in - new_tt_in[2]*60
new_tt_in[3] = frames_in
frames_left = (tt_in[0][0].to_i*60*60*60+(tt_in[0][1].to_i)*60*60+(tt_in[0][2].to_i)*60+(tt_in[0][3].to_i))-10
new_frames = Array.new
new_frames[0] = frames_left/60/60/60
frames_left = frames_left - new_frames[0]*60*60*60
new_frames[1] = frames_left/60/60
frames_left = frames_left - new_frames[1]*60*60
new_frames[2] = frames_left/60
frames_left = frames_left - new_frames[2]*60
new_frames[3] = frames_left
tt_in_ff_new = (new_frames[3]*1000/60)
clip[i][0][0] = clip[i][0][0].chomp("\r")
path_to_workdir = "'/Volumes/getztron2/April Fool 2016/V'"
mkdir = "mkdir #{i}\n"
puts mkdir
ff_cmd_new = "ffmpeg -ss #{sprintf '%02d', new_frames[0]}:#{sprintf '%02d', new_frames[1]}:#{sprintf '%02d', new_frames[2]}.#{sprintf '%02d', tt_in_ff_new} -i #{path_to_workdir}/#{clip[i][0][0].to_s} -frames:v #{framecount} -c:v copy p.MOV -y"
puts ff_cmd_new
puts "./render.sh p.MOV&&\n"
puts "mv frame*.DNG #{i}/"
hdmi4k_cmd = "hdmi4k #{path_to_workdir}/frame*[0-9].ppm --ufraw-gamma --soft-film=1.5 --fixrnt --offset=500&&\n"
ff_cmd2 = "ffmpeg -i #{path_to_workdir}/frame%04d-out.ppm -vcodec prores -profile:v 3 #{clip[i][0][0]}_#{i}_new.mov -y&&\n"
puts "\n\n\n"
i=i+1
total_frames = total_frames + framecount
end
puts "#Total frame: count: #{total_frames}"</nowiki>
Redirect its output to a file to obtain a Bash shell script.
Note from the programmer: This is really unsophisticated and messy. Feel free to alter and share improvements.
==Userspace==
Arch Linux comes with systemd, which has the advantage that the boot process is incredibly fast. Standard tools such as sshd and dhcpcd come preinstalled. We may need other tools such as an FTP server, a web server, etc.
* FTP: I would suggest vsftpd here.
* Web server: I am able to modify Cherokee with custom C code to directly talk to specific camera sections. Cherokee already powers the WiFi module of the GoPro.
One idea for storing camera-relevant parameters inside the camera and providing access from most programming languages is to use a database like http://en.wikipedia.org/wiki/Berkeley_DB


==Image Processing Nodes==

===Debayering===

A planned feature is to generate this FPGA code block with "dynamic reconfiguration", meaning that the actual debayering algorithm can be replaced at any time by loading a new FPGA binary block at run-time.

This is meant to simplify creating custom debayering algorithms with a script-like programming language that can be translated to FPGA code and loaded into the FPGA dynamically for testing.

===Peaking Proposal===

[[Peaking]] marks areas with high image frequencies with colored dot overlays. These marked areas are typically the ones currently "in focus", so this is a handy tool to see where the focus lies on screens with a lower resolution than the camera is capturing.

'''Handy Custom Parameters:'''

*color
*frequency threshold

'''Potential Problems:'''

*There are sharper and softer lenses, so the threshold depends on the glass currently used. With a sharp lens the peaking could show areas as "in focus" when they actually are not, and with softer lenses the peaking might never show up at all because the threshold is never reached.

===Image Blow Up / Zoom Proposal===

Digital zoom into the center area of the image to check focus.

As an extra feature, this zoomed area could be moved around the full sensor area.

[[File:20140909152450-look-around.jpg | 300px]]

This feature is also related to the "Look Around" feature, where the viewfinder sees a larger image area than is being output to the clean feed.

The re-sampling method used to scale the image up/down in real time can be of rather low quality (nearest neighbor, bilinear, etc.) as it is only for preview purposes.

==Clipping==

 scn_reg 28 0x00 # deactivate clipping
 scn_reg 28 0x10 # activate low clipping
 scn_reg 28 0x20 # activate high clipping
 scn_reg 28 0x30 # activate high+low clipping


==Pattern noise correction==

The following raw2dng options deal with pattern noise correction:

 --rnfilter=1        : FIR filter for row noise correction from black columns
 --rnfilter=2        : FIR filter for row noise correction from black columns and per-row median differences in green channels
 --fixrn             : Fix row noise by image filtering (slow, guesswork)
 --fixpn             : Fix row and column noise (SLOW, guesswork)
 --fixrnt            : Temporal row noise fix (use with static backgrounds; recommended)
 --fixpnt            : Temporal row/column noise fix (use with static backgrounds)
 --no-blackcol-rn    : Disable row noise correction from black columns (they are still used to correct static offsets)
 --no-blackcol-ff    : Disable fixed frequency correction in black columns

Flat field correction:

 --dchp              : Measure hot pixels to scale dark current frame
 --no-darkframe      : Disable dark frame (if darkframe-xN.pgm is present)
 --no-dcnuframe      : Disable dark current frame (if dcnuframe-xN.pgm is present)
 --no-gainframe      : Disable gain frame (if gainframe-xN.pgm is present)
 --no-clipframe      : Disable clip frame (if clipframe-xN.pgm is present)
 --no-blackcol       : Disable black reference column subtraction (enabled by default if a dark frame is used; reduces row noise and black level variations)
 --calc-darkframe    : Average a dark frame from all input files
 --calc-dcnuframe    : Fit a dark frame (constant offset) and a dark current frame (exposure-dependent offset) from files with different exposures (starting point: 256 frames with exposures from 1 to 50 ms)
 --calc-gainframe    : Average a gain frame (aka flat field frame)
 --calc-clipframe    : Average a clip (overexposed) frame
 --check-darkframe   : Check image quality indicators on a dark frame

Debug options:

 --dump-regs         : Dump sensor registers from metadata block (no output DNG)
 --fixpn-dbg-denoised: Pattern noise: show denoised image
 --fixpn-dbg-noise   : Pattern noise: show noise image (original - denoised)
 --fixpn-dbg-mask    : Pattern noise: show masked areas (edges and highlights)
 --fixpn-dbg-col     : Pattern noise: debug columns (default: rows)
 --export-rownoise   : Export row noise data to octave (rownoise_data.m)
 --get-pixel:%d,%d   : Extract one pixel from all input files, at given coordinates, and save it to pixel.csv, including metadata. Skips DNG output.

Example:
 ./raw2dng --fixrnt --pgm --black=120 frame%05d.dng