==Scope==
In the last few years, image sensor technology has advanced slowly, with relatively small improvements being made. Image post-processing technology, by comparison, has advanced rapidly. Because camera manufacturers don’t allow access to real raw data, this raises the question: are today's advances in camera technology mostly attributable to better algorithms and methods designed to reduce noise and other artifacts/problems in “pure raw” data? This is exactly the question we want to investigate while, at the same time, developing and applying such methods and algorithms ourselves. The big difference, though, is that we want to do this in a fully transparent way, where users have the option to utilise post-processing or not.


Inspired by the Blender open movies (e.g. https://gooseberry.blender.org/), the idea is to combine creating content with undertaking development around it, involving software/hardware developers as well as artists, filmmakers and others.


==Perception==
It’s important that collected footage is not advertised as a camera showreel or as sample footage meant to showcase the absolute best quality the camera is capable of capturing (we know we are not yet utilising the full potential of the image sensor and post-processing). It’s important to emphasise that the purpose of the test footage is to gradually improve the image processing of footage shot on the AXIOM Beta. Footage should be seen and communicated as a ‘development snapshot’ of the current state the camera is in (always improving). It should also motivate others to collect sample footage and to help find and fix problems/issues.


The goal is to collect nice and diverse footage, maybe short films - always with the intention of getting the best out of the camera, but also with the awareness that we hope to run into problems with the footage we shoot, so that we can identify them and compensate for or fix them (either in the camera itself or in a post-processing step).

The focus is less on usability or general camera operation (we know we are not quite there yet) and more on the created content (videos, time-lapse, stop motion, still images).

This project should not be promoted on social media; we specifically want to target tech people and AXIOM Beta owners in our community.


==Media==
* Time Lapse and Stop Motion sequences -> Raw12
* Still images -> Raw12
* Video -> HDMI external recording

For all of the above it is essential that RCN and dark frame calibration have been done properly:
[[Factory Calibration]]
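As a rough illustration of why this matters, below is a minimal Python/numpy sketch of unpacking a raw12 frame and subtracting a dark frame. It is only a sketch under stated assumptions: the common packing of two 12-bit samples into three bytes, an assumed 4096x3072 full-resolution frame, and hypothetical file names; the actual calibration procedure and tools are described in [[Factory Calibration]].

 import numpy as np
 WIDTH, HEIGHT = 4096, 3072  # assumed full-resolution frame size
 def load_raw12(path, width=WIDTH, height=HEIGHT):
     # assumption: two 12-bit samples packed into three bytes
     raw = np.fromfile(path, dtype=np.uint8)
     raw = raw[:width * height * 3 // 2].reshape(-1, 3).astype(np.uint16)
     pix0 = (raw[:, 0] << 4) | (raw[:, 1] >> 4)     # first 12-bit sample
     pix1 = ((raw[:, 1] & 0x0F) << 8) | raw[:, 2]   # second 12-bit sample
     return np.stack([pix0, pix1], axis=-1).reshape(height, width)
 # hypothetical file names: a dark frame shot with the lens capped at the
 # same gain/exposure settings as the footage it is meant to correct
 frame = load_raw12("frame_000001.raw12").astype(np.int32)
 dark = load_raw12("dark_frame.raw12").astype(np.int32)
 # remove the fixed-pattern offset and clip back into the 12-bit range
 corrected = np.clip(frame - dark, 0, 4095).astype(np.uint16)
 corrected.tofile("frame_000001_darksub.bin")  # stored as plain 16-bit values, not re-packed raw12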


===Images===
First images from the Brussels local group (IRL Brussels - Info & Research Lab Brussels):
 
https://cloud.apertus.org/index.php/s/Lpo9f28GmsNeeTs
 
https://cloud.apertus.org/index.php/s/tX5Zcao3JNjcsnL
 
==Animation & Overlays==
Colophon link: [[Colophons]]


image overlay for released footage - axiom beta shot on overlay.psd: https://cloud.apertus.org/index.php/s/ScpzwtzPGX4GPt5?path=%2F

[[File:Axiom beta shot overlay.jpg]]

To read the current firmware version (hash) of the camera (only works in AXIOM Beta Firmware 2.0), do:

 cat /etc/issue

or

 cd /opt/axiom-firmware; git describe --always --abbrev=8 --dirty


==Development Goals/Areas==
* Color Calibration (LUTs, processing steps, guides)
* tools/algorithms for noise reduction (static, dynamic, interframe; all in post-processing, not in-camera) - see the sketch after this list
* finding more optimal camera settings/register combinations
* best practice guides (for camera operators but also developers, etc.)
* image enhancements (moire removal, superresolution, dynamic range recovery, etc.) through post-processing tools
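As a concrete (and deliberately simple) example of the interframe noise-reduction item above, the sketch below averages several frames of a static scene. It assumes the frames are already unpacked to 16-bit numpy arrays and is illustrative only, since any motion between frames would smear:

 import numpy as np
 def temporal_average(frames):
     # average aligned frames of a static scene: random (temporal) noise drops
     # roughly with the square root of the frame count, while fixed-pattern
     # noise stays - which is why dark frame calibration comes first
     stack = np.stack([f.astype(np.float32) for f in frames])
     return np.clip(stack.mean(axis=0), 0, 4095).astype(np.uint16)
 # tiny synthetic demo: 16 noisy captures of the same 12-bit scene
 rng = np.random.default_rng(0)
 scene = rng.integers(200, 3800, size=(32, 32)).astype(np.float32)
 noisy = [np.clip(scene + rng.normal(0, 50, scene.shape), 0, 4095).astype(np.uint16)
          for _ in range(16)]
 print(np.abs(temporal_average(noisy) - scene).mean())  # well below the per-frame noise of 50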




==Notes==
Research topic - superresolution from moving images:
https://twitter.com/docmilanfar/status/1198862133821231104
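To make the idea tangible, here is a heavily simplified shift-and-add sketch in Python/numpy (not the method from the linked thread): frames are upsampled, aligned to the first frame by integer-pixel phase correlation, and averaged. Real multi-frame superresolution needs sub-pixel registration and proper handling of the Bayer pattern; this is only an illustration.

 import numpy as np
 def phase_correlation_shift(ref, img):
     # integer (dy, dx) shift that maps ref onto img, estimated via phase correlation
     R = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
     R /= np.abs(R) + 1e-12
     corr = np.abs(np.fft.ifft2(R))
     dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
     if dy > ref.shape[0] // 2:  # wrap large positive indices back to negative shifts
         dy -= ref.shape[0]
     if dx > ref.shape[1] // 2:
         dx -= ref.shape[1]
     return dy, dx
 def shift_and_add(frames, scale=2):
     # nearest-neighbour upsample, align everything to frame 0, then average
     up = [np.kron(f.astype(np.float32), np.ones((scale, scale))) for f in frames]
     acc = np.zeros_like(up[0])
     for f in up:
         dy, dx = phase_correlation_shift(up[0], f)
         acc += np.roll(f, (-dy, -dx), axis=(0, 1))
     return acc / len(up)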


Postprocessing Moire Removal Tests:
https://www.magiclantern.fm/forum/index.php?topic=20999.0


AXIOM Beta Raw Image Processing Tutorial:
https://docs.google.com/document/d/1-kFjebwvUg3e8jLKwXSemBWsNIL6_P-NPFUXbv7-jmk/edit




AXIOM Beta Image Processing Path (Proposed - USB3 not operational yet):
https://docs.google.com/drawings/d/1wvNTYma4cDJ_wkUr9bzW8o8Q6uoexQrEMj5ubpIP3bA/edit
