In the interest of clarity, it should be noted as you browse these entries (and those on other pages) that the AXIOM Gamma has been shelved for the time being. This means that, whilst we think there is scope for further development of the camera, research may not recommence for some time. There are three reasons for this:
The image sensor in the AXIOM Gamma will be mechanically adjustable in flange focal distance (back focus, Z-axis), in shift along the X/Y axes, and in rotation around the X/Y/Z axes.
To create space for the heat pipes and to decouple the image sensor from the rest of the module, two rigid-flex PCBs are added to the existing designs. The rigid part of each rigid-flex PCB will be screwed to the back of the module, exposing the connectors at the back.
Two cooling options:
Usage of a Peltier element proves to be problematic:
High power consumption renders mobile usage inefficient
Power dissipated inside the Peltier element generates additional heat that also has to be removed
Water condensation inside the electronics on the cold Peltier side requires sealing -> increased complexity of manufacturing, design, maintenance, repairs, etc.
One sensor front-end module without a Peltier element for general usage, one version with a Peltier element for people who are aware of the implications and/or for experimentation purposes
Sensor access is implemented using a Xilinx Kintex-7 160T FPGA.
It captures raw image data from the sensor and is responsible for general “raw” processing of the sensel (image sensor pixel) values, e.g. spatial and temporal binning of the raw data if lower resolutions and/or frame rates are desired in subsequent processing steps.
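The binning mentioned above can be sketched in Python/NumPy. This is illustrative only: on the camera this processing happens in FPGA fabric, and the function names and the averaging strategy here are our own assumptions, not the actual gateware.

```python
import numpy as np

def bin_spatial(raw, factor=2):
    """Average factor x factor blocks of sensel values (spatial binning)."""
    h, w = raw.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of the factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def bin_temporal(frames):
    """Average consecutive frame pairs (temporal binning), halving the rate."""
    return (frames[0::2] + frames[1::2]) / 2

frame = np.arange(16, dtype=float).reshape(4, 4)
print(bin_spatial(frame))  # 2x2 output; each value is the mean of a 2x2 block
```

Spatial binning trades resolution for sensitivity and bandwidth; temporal binning trades frame rate for the same.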
Board-To-Board Connection (B2B)
Finger on backplane and PCIE-164 connector on PPM.
For sensor adjustment, a PCIe riser card can be used to detach the PPM and the ISM from the Body while still accessing the set screws.
Connects PPM, CPM and HSIOMs via high-speed serial links and LSIOM via low-speed links.
The backplane circuitry includes power-consumption measurement, management of an external battery, as well as hot-plug detection and powering of the modules.
The Backplane contains a muxed JTAG interface to allow debugging of all add-on boards via a central connector.
Most likely a Zynq 7030 FPGA + dual ARM core System on Chip (SoC).
Also called High Speed Module
Generally used for storage, input or output of high-speed signals. Typical examples would be SSD storage, HDMI/SDI output or SDI input.
For this reason, the HSIOMs have dedicated high-speed access to the preprocessed image data coming from the PPM and the processed data from the CPM.
Connections are typically 4 lanes, either connected directly or via a high-speed mux. The theoretically achievable data rate would be 10 Gbps per lane (the limit of the mux, although the FPGA transceivers are not much faster), but the practically reachable data rate will likely be lower, depending on EMI performance, power consumption and board losses. For a 4K raw image stream, only a fraction of the theoretical speed is needed.
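A quick link-budget check illustrates why only a fraction of the link is needed. The sensor format, bit depth and frame rate below are assumptions for the sake of the example (4096x2160 sensels, 12 bit raw, 30 fps), not confirmed AXIOM Gamma figures; only the 4 lanes and 10 Gbps per lane come from the text above.

```python
# Assumed raw-stream parameters (not confirmed specs):
width, height, bits, fps = 4096, 2160, 12, 30
raw_gbps = width * height * bits * fps / 1e9   # ~3.2 Gbps payload

# Link capacity per the text: 4 lanes at a theoretical 10 Gbps each
lanes, lane_gbps = 4, 10
link_gbps = lanes * lane_gbps

print(f"raw stream: {raw_gbps:.1f} Gbps of a {link_gbps} Gbps link")
```

Even with generous overhead, such a stream occupies well under a quarter of the theoretical link capacity.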
Cooling will be done as a module (central heatsink plus fan).
The backplane needs to account for that as it will have to go “around” the Cooling Bay.
Finger on backplane and PCIE-164 connector on LSIO Bridge.
Docking to LSB
LSIOMs have no high-speed access to the raw image data stream, but are connected to the CPM and PPM via multiple LVDS lanes to allow medium-rate traffic (1-2 Gbps total, shared across all LSIOMs).
Typical examples: Audio, Timecode, Trigger or Genlock inputs/outputs, Gyroscopes, Accelerometers, GPS modules.
Last LSIOM in stack.
The Zynq provides a 1080p30 (or 1080p60 if bandwidth allows) stream via four LVDS lanes to an LSIOM with an NVIDIA Tegra K1 chip, which further distributes the stream over AOSP, e.g. via HDMI to be viewed on an external screen (preferably with touchscreen support). Communication between Zynq and Tegra happens over the Serial Peripheral Interface (SPI).
The Tegra K1 on a dedicated LSIOM handles encoding to 1080p60, with SPI between the K1 and the Zynq, as the Zynq has no GPU to encode on and encoding would take up too much capacity of its CPU.
For faster prototyping, several ready-made modules can be used: one made by GE (too expensive), one made by Toradex (open-source baseboard).
To save bandwidth on the lanes themselves and processing power on the Zynq, another possibility would be to send a dedicated raw stream to the Android module and debayer it there with CUDA/OpenGL.
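To illustrate what such a debayer step does, here is a deliberately naive NumPy version for an RGGB mosaic, collapsing each 2x2 cell into one RGB pixel. A real CUDA/OpenGL implementation would interpolate per sensel at full resolution; the RGGB pattern and the function below are assumptions for illustration only.

```python
import numpy as np

def debayer_nearest(raw):
    """Naive 2x2 'superpixel' debayer for an assumed RGGB mosaic:
    each 2x2 cell becomes one RGB pixel at half resolution."""
    r  = raw[0::2, 0::2]          # red sensels
    g1 = raw[0::2, 1::2]          # first green sensel per cell
    g2 = raw[1::2, 0::2]          # second green sensel per cell
    b  = raw[1::2, 1::2]          # blue sensels
    return np.dstack([r, (g1 + g2) / 2, b])

mosaic = np.array([[100, 50],
                   [70, 30]], dtype=float)   # one RGGB cell
print(debayer_nearest(mosaic))               # one RGB pixel: R=100, G=60, B=30
```

On a GPU this maps naturally onto a fragment shader or CUDA kernel, which is why offloading it to the Tegra would relieve the Zynq.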
As the Zynq will generate a scaled-down 1080p30 version of the sensor image, it will also be sent to a new LSIOM running AOSP with a Transition-Minimized Differential Signaling (TMDS) output, preferably HDMI.
The 1080p signal would take up approximately four LVDS data lanes.
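A rough estimate of the per-lane load when the stream is spread over those four lanes, assuming 1920x1080 at 30 fps and 16 bit per pixel (e.g. YCbCr 4:2:2); the real pixel format and per-lane serialization are not specified here, so these are example figures only:

```python
# Assumed stream parameters (not confirmed specs):
payload_mbps = 1920 * 1080 * 30 * 16 / 1e6   # ~995 Mbps total payload
per_lane = payload_mbps / 4                  # spread over four LVDS lanes
print(f"{payload_mbps:.0f} Mbps total, ~{per_lane:.0f} Mbps per lane")
```

At roughly 250 Mbps of payload per lane plus protocol overhead, this sits comfortably within typical LVDS lane rates.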
The downscaled stream from the Zynq presents itself as a standard camera, but without direct controls; sensor configuration should always be done via the Zynq. Partial sensor control will be achieved via an SPI connection between the K1 and the Zynq.
This page is work in progress.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 645560