AXIOM Beta/experimental raw video
Note that this method is outdated and has been replaced by a better method explained here: AXIOM Beta Raw Recording
AXIOM Beta UHD Raw Workflow Article: https://www.apertus.org/axiom-beta-uhd-raw-mode-explained-article-may-2016
1 Experimental UHD Raw Recording
Note: This experimental raw mode works only in 1080p60 (A+B frames) and has currently only been tested with the Atomos Shogun.
To measure the required compensations with a different recorder, follow the guide in the Raw Processing Recorder Benchmarking section below.
This mode requires darkframes, which are created in the course of a camera Factory Calibration. Early Betas are not calibrated yet; this step needs to be completed by the user.
1.1 Enable raw Recording Mode
On the AXIOM Beta execute:
./hdmi_rectest.sh
Inside that script, the following commands are worth noting:
Enable experimental raw mode:
scn_reg 31 0x0A01
Disable experimental raw mode:
scn_reg 31 0x0001
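For convenience, the two register writes can be wrapped in a small helper script on the camera. This is only a sketch and not part of the Beta software; it simply assumes scn_reg is available in the shell, as it is in hdmi_rectest.sh:
#!/bin/bash
# hypothetical helper - wraps the two register writes shown above
case "$1" in
  on)  scn_reg 31 0x0A01 ;;   # enable experimental raw mode
  off) scn_reg 31 0x0001 ;;   # disable experimental raw mode
  *)   echo "usage: $0 on|off" ;;
esac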
If you get an error like this:
Traceback (most recent call last):
  File "rcn_darkframe.py", line 17, in <module>
    import png
ImportError: No module named 'png'
Make sure the Beta is connected to the Internet via Ethernet and run:
pip install pypng
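To verify that the install worked, a quick sanity check (not part of the original workflow) is to import the module directly:
# use python3 instead if that is what the camera image provides
python -c "import png; print('pypng is available')"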
1.2 Processing
Post-processing software to recover the raw information (DNG sequences) is on GitHub: https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw-via-hdmi
Required packages: ffmpeg, build-essential
Mac requirements for compiling: gcc 4.9 (via Homebrew):
brew install homebrew/versions/gcc49
Also install ffmpeg.
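A possible way to fetch and build the tools on the host machine is sketched below; the exact make invocation is an assumption (check the repository README), and gcc-4.9 refers to the Homebrew compiler mentioned above:
git clone https://github.com/apertus-open-source-cinema/misc-tools-utilities.git
cd misc-tools-utilities/raw-via-hdmi
make                # Linux
make CC=gcc-4.9     # Mac, assuming the Makefile honors the CC variable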
To do all the raw processing in a single command (after the ffmpeg codec-copy step):
./hdmi4k INPUT.MOV - | ./raw2dng --fixrnt --pgm --black=120 frame%05d.dng
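To run this over a whole card of clips, the pipeline can be wrapped in a loop. This is only a sketch; the per-clip directories and the copy.MOV intermediate name are assumptions:
for clip in *.MOV; do
  name="${clip%.MOV}"
  mkdir -p "$name"
  # codec-copy pass so hdmi4k can read the clip, then raw -> DNG
  ffmpeg -i "$clip" -c:v copy "$name/copy.MOV"
  ./hdmi4k "$name/copy.MOV" - | ./raw2dng --fixrnt --pgm --black=120 "$name/frame%05d.dng"
done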
1.2.1 Raw Processing Recorder Benchmarking
We can analyze footage recorded by third-party recorders, but we need the following:
- your Beta running in the experimental 4k raw mode (1080p60 with A+B frames)
- a short HDMI-captured clip from the third-party recorder
- a raw12 still image captured during the HDMI recording
A script like this is helpful to execute during the HDMI recording:
#stop HDMI stream:
fil_reg 15 0

#capture image
./cmv_snap3 -r -2 -e 10ms > image.raw12

#start HDMI stream:
fil_reg 15 0x01000100
Taking a snapshot during HDMI recording with the above script will pause the HDMI stream for a few seconds, during which it alternates between two frames. These two frames come from the same raw data as image.raw12, so they contain everything needed to figure out what kind of processing the HDMI recorder applies to the image, and how to undo it in order to recover the raw data (a frame-pair extraction sketch follows at the end of this section).
Ideally, the scene should contain fine details (such as tissue, fine print) and rich colors. A color chart (which usually contains some fine print as well) is a very good choice.
- an HDMI-captured 1-minute clip with dark frames (lens cap on the camera, black cloth covering the camera in a dark room)
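As described above, the paused stream alternates between two frames that correspond to image.raw12. A minimal sketch for pulling such a pair out of the recorded clip with ffmpeg; the clip name and the 12-second offset are placeholders, so adjust them to your recording:
# extract two consecutive frames (one A and one B frame) from the paused section
ffmpeg -ss 12 -i SHOGUN_CLIP.MOV -frames:v 2 pair_%d.ppm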
1.3 Experimental Raw Video to PGM Sequence Conversion
hdmi4k converts a video file recorded in AXIOM raw mode to a PGM image sequence and applies the darkframe (which needs to be created beforehand).
Currently clips must go through ffmpeg before hdmi4k can read them:
ffmpeg -i CLIP.MOV -c:v copy OUTPUT.MOV
To cut out a section of a video between IN and OUT with ffmpeg while maintaining the original encoding:
ffmpeg -i CLIP.MOV -ss IN_SECONDS -t DURATION_SECONDS -c:v copy OUTPUT.MOV
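For example, assuming the wanted section starts 90 seconds into the clip and is 5 seconds long:
ffmpeg -i CLIP.MOV -ss 90 -t 5 -c:v copy OUTPUT.MOV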
hdmi4k
HDMI RAW converter for Axiom BETA

Usage:
./hdmi4k clip.mov
raw2dng frame*.pgm [options]

Calibration files:
hdmi-darkframe-A.ppm, hdmi-darkframe-B.ppm:
averaged dark frames from the HDMI recorder (even/odd frames)

Options:
-       : Output PGM to stdout (can be piped to raw2dng)
--3x3   : Use 3x3 filters to recover detail (default 5x5)
--skip  : Toggle skipping one frame (try if A/B autodetection fails)
--swap  : Swap A and B frames inside a frame pair (encoding bug?)
--onlyA : Use data from A frames only (for bad takes)
--onlyB : Use data from B frames only (for bad takes)
1.4 EDL Parser
This script takes an EDL and reduces the raw conversion/processing to only the frames that are actually used in an edit. This way, a finished video edit can easily be converted to raw DNG sequences.
Requirements: ruby
puts "BEFORE EXECUTION, PLS FILL IN YOUR WORK DIRECTORY IN THE SCRIPT (path_to_workdir)" puts "#!/bin/bash" i=0 ffmpeg_cmd1 = "ffmpeg -i " tc_in = Array.new tc_out = Array.new clip = Array.new file = ARGV.first ff = File.open(file, "r") ff.each_line do |line| clip << line.scan(/NAME:\s(.+)/) tc_in << line.scan(/(\d\d:\d\d:\d\d:\d\d).\d\d:\d\d:\d\d:\d\d.\d\d:\d\d:\d\d:\d\d.\d\d:\d\d:\d\d:\d\d/) tc_out << line.scan(/\s\s\s\d\d:\d\d:\d\d:\d\d\s(\d\d:\d\d:\d\d:\d\d)/) end c=0 clip.delete_at(0) clip.each do |fuck| if clip[c].empty? tc_in[c] = [] tc_out[c] = [] end c=c+1 end total_frames = 0 tc_in = tc_in.reject(&:empty?) tc_out = tc_out.reject(&:empty?) clip = clip.reject(&:empty?) tc_in.each do |f| tt_in = String.new tt_out = String.new tt_in = tc_in[i].to_s.scan(/(\d\d)\D(\d\d)\D(\d\d)\D(\d\d)/) tt_out = tc_out[i].to_s.scan(/(\d\d)\D(\d\d)\D(\d\d)\D(\d\d)/) framecount = ((tt_out[0][0].to_i-tt_in[0][0].to_i)*60*60*60+(tt_out[0][1].to_i-tt_in[0][1].to_i)*60*60+(tt_out[0][2].to_i-tt_in[0][2].to_i)*60+(tt_out[0][3].to_i-tt_in[0][3].to_i)) framecount = framecount + 20 tt_in_ff = (tt_in[0][3].to_i*1000/60) frames_in = tt_in[0][0].to_i*60*60*60+tt_in[0][1].to_i*60*60+tt_in[0][2].to_i*60+tt_in[0][3].to_i frames_in = frames_in - 10 new_tt_in = Array.new new_tt_in[0] = frames_in/60/60/60 frames_in = frames_in - new_tt_in[0]*60*60*60 new_tt_in[1] = frames_in/60/60 frames_in = frames_in - new_tt_in[1]*60*60 new_tt_in[2] = frames_in/60 frames_in = frames_in - new_tt_in[2]*60 new_tt_in[3] = frames_in frames_left = (tt_in[0][0].to_i*60*60*60+(tt_in[0][1].to_i)*60*60+(tt_in[0][2].to_i)*60+(tt_in[0][3].to_i))-10 new_frames = Array.new new_frames[0] = frames_left/60/60/60 frames_left = frames_left - new_frames[0]*60*60*60 new_frames[1] = frames_left/60/60 frames_left = frames_left - new_frames[1]*60*60 new_frames[2] = frames_left/60 frames_left = frames_left - new_frames[2]*60 new_frames[3] = frames_left tt_in_ff_new = (new_frames[3]*1000/60) clip[i][0][0] = clip[i][0][0].chomp("\r") path_to_workdir = "'/Volumes/getztron2/April Fool 2016/V'" mkdir = "mkdir #{i}\n" puts mkdir ff_cmd_new = "ffmpeg -ss #{sprintf '%02d', new_frames[0]}:#{sprintf '%02d', new_frames[1]}:#{sprintf '%02d', new_frames[2]}.#{sprintf '%02d', tt_in_ff_new} -i #{path_to_workdir}/#{clip[i][0][0].to_s} -frames:v #{framecount} -c:v copy p.MOV -y" puts ff_cmd_new puts "./render.sh p.MOV&&\n" puts "mv frame*.DNG #{i}/" hdmi4k_cmd = "hdmi4k #{path_to_workdir}/frame*[0-9].ppm --ufraw-gamma --soft-film=1.5 --fixrnt --offset=500&&\n" ff_cmd2 = "ffmpeg -i #{path_to_workdir}/frame%04d-out.ppm -vcodec prores -profile:v 3 #{clip[i][0][0]}_#{i}_new.mov -y&&\n" puts "\n\n\n" i=i+1 total_frames = total_frames + framecount end puts "#Total frame: count: #{total_frames}"
Pipe its output to a file to obtain a shell script.
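For example (the script and file names here are placeholders, not fixed names from the repository):
# remember to set path_to_workdir inside the Ruby script first
ruby edl_parser.rb project.edl > convert.sh
chmod +x convert.sh
./convert.sh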
Note from the programmer: This is really unsophisticated and messy. Feel free to alter and share improvements.