In recent years, image sensor technology has advanced slowly, with relatively small improvements, while image post-processing technology has advanced rapidly in comparison. Because camera manufacturers don't allow access to truly raw data, this raises the question: are today's advances in camera technology mostly attributable to better algorithms and methods designed to reduce noise and other artifacts in the "pure raw" data? This is exactly the question we want to investigate, while at the same time developing and applying such methods and algorithms ourselves. The big difference, though, is that we want to do this in a fully transparent way, where users have the option to utilise post-processing or not.
Inspired by the Blender open movies (e.g. https://gooseberry.blender.org/), the idea is to combine creating content with undertaking development around it, involving software/hardware developers as well as artists, filmmakers, etc.
It's important that collected footage is not advertised as a camera showreel or as sample footage meant to showcase the absolute best quality the camera is capable of capturing (because we know we are not yet utilising the full potential of the image sensor and post-processing). It's important to emphasise that the purpose of test footage is to gradually improve the image processing of footage shot on the AXIOM Beta. Footage should be seen and communicated as a 'development snapshot' of the current state the camera is in (always improving). It should also motivate others to collect sample footage and help search for and fix problems.
The goal is to collect diverse footage, perhaps short films - always with the intention of getting the best out of the camera, but also with the hope of running into problems with the footage shot, so we can identify them and compensate for or fix them (either in-camera or in a post-processing step).
The focus is less on usability or general camera operation (we know we are not quite there yet) and more on the created content (videos, time-lapses, stop motion, still images).
This project should not be promoted on social media; we want to specifically target tech people and AXIOM Beta owners in our community.
- Time Lapse and Stop Motion sequences -> Raw12
- Still images -> Raw12
- Video -> HDMI external recording
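Working with the Raw12 captures above means unpacking them in post. As a minimal sketch, assuming the common 12-bit packing of two pixels per three bytes in big-endian order (the function name, dimensions, and exact byte order are assumptions - verify against the camera's actual Raw12 layout):

```python
import numpy as np

def unpack_raw12(data: bytes, width: int, height: int) -> np.ndarray:
    """Unpack 12-bit packed raw data (2 pixels per 3 bytes) into uint16.

    Assumed layout: byte0 = p0[11:4], byte1 = p0[3:0] | p1[11:8],
    byte2 = p1[7:0] -- check this against the real AXIOM Raw12 format.
    """
    raw = np.frombuffer(data, dtype=np.uint8).astype(np.uint16)
    b0, b1, b2 = raw[0::3], raw[1::3], raw[2::3]
    p0 = (b0 << 4) | (b1 >> 4)          # first pixel of each byte triplet
    p1 = ((b1 & 0x0F) << 8) | b2        # second pixel of each byte triplet
    pixels = np.empty(p0.size * 2, dtype=np.uint16)
    pixels[0::2] = p0
    pixels[1::2] = p1
    return pixels.reshape(height, width)
```

The result is a plain sensel array (still with the Bayer pattern), ready for dark frame subtraction and debayering.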
For all of the above it is essential that RCN and dark frame calibration have been done properly: Factory Calibration
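For context, generic dark frame calibration can be sketched as follows. This illustrates the general technique (averaging dark captures and subtracting them to remove fixed-pattern noise), not the exact AXIOM factory calibration procedure; the function names, black level handling, and the 12-bit clipping range are assumptions:

```python
import numpy as np

def build_dark_frame(dark_frames):
    """Average several dark captures (lens cap on, same exposure/gain as the
    shot) to estimate the sensor's fixed-pattern noise."""
    return np.mean(np.stack(dark_frames), axis=0)

def apply_dark_frame(image, dark, black_level=0):
    """Subtract the averaged dark frame from a capture, clipping so values
    stay within the assumed 12-bit range."""
    corrected = image.astype(np.int32) - dark.astype(np.int32) + black_level
    return np.clip(corrected, 0, 4095).astype(np.uint16)
```

More dark frames averaged means a cleaner noise estimate, since random read noise averages out while the fixed pattern remains.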
First images from the Brussels local group (IRL Brussels - Info & Research Lab Brussels):
4 Animation & Overlays
Colophon link: Colophons
Image overlay for released footage ("axiom beta shot on" overlay.psd): https://cloud.apertus.org/index.php/s/ScpzwtzPGX4GPt5?path=%2F
To read the current firmware version (hash) (only works with AXIOM Beta Firmware 2.0), run:
cd /opt/axiom-firmware; git describe --always --abbrev=8 --dirty
5 Development Goals/Areas
- Color calibration (LUTs, processing steps, guides)
- Tools/algorithms for noise reduction (static, dynamic, interframe - all in post-processing, not in-camera)
- Finding better camera settings/register combinations
- Best practice guides (for camera operators but also developers, etc.)
- Image enhancements (moiré removal, super-resolution, dynamic range recovery, etc.) through post-processing tools
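Interframe noise reduction in its simplest form averages aligned frames of a static scene, so random noise drops roughly with the square root of the frame count. A minimal sketch under that assumption (illustrative names, no motion compensation or alignment):

```python
import numpy as np

def temporal_average(frames):
    """Naive interframe noise reduction: average a stack of already-aligned
    frames of a static scene. Fixed-pattern noise is NOT removed this way --
    that is what dark frame calibration is for."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)
```

Anything beyond a locked-off shot needs motion estimation first, which is exactly where the super-resolution research linked below becomes relevant.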
Research topic - super-resolution from moving images: https://twitter.com/docmilanfar/status/1198862133821231104
Post-processing moiré removal tests: https://www.magiclantern.fm/forum/index.php?topic=20999.0
AXIOM Beta Raw Image Processing Tutorial: https://docs.google.com/document/d/1-kFjebwvUg3e8jLKwXSemBWsNIL6_P-NPFUXbv7-jmk/edit
AXIOM Beta Image Processing Path (Proposed - USB3 not operational yet): https://docs.google.com/drawings/d/1wvNTYma4cDJ_wkUr9bzW8o8Q6uoexQrEMj5ubpIP3bA/edit