Revision as of 15:32, 15 February 2019

1 Overview

TODO: Overview text.


Currently, booting up the camera works as follows:

  • A systemd service is activated which runs a shell script.
  • This script then loads a bitstream into the FPGA and uses other scripts and C programs to train the LVDS channels and set up the HDMI output.

When the service is disabled, the user can run this script manually. This has two problems:

  • Nothing keeps the user from running the script even if it has already been activated by systemd.
  • On the other hand, simple scripts which query the registers (e.g. to get the temperature of the sensor) can be run even if the FPGA bitstream didn't load.

All of those lead to a solid lockup of the Beta: if you write to one of the memory addresses used for communicating with the PL and there is no handler in the FPGA, the ARM cores lock up solid, with no recovery possible; they basically wait for an ACK/NACK forever.

Research is taking place in the Labs (https://lab.apertus.org/T757). Code can be found on GitHub (https://github.com/apertus-open-source-cinema/axiom-control-daemon).


2 Structure

TODO: Add image which shows structure of module communication (WebGUI -> WSServer -> Daemon, and back)


The control daemon project currently consists of three different modules:

  • Web UI - HTML5, sends requests to the backend, currently by using jQuery (still evaluating alternatives).
  • WSServer - receives WebSocket requests, converts them to Flatbuffers packages and sends them to the daemon through a socket.
  • Daemon - processes Flatbuffers packages received over a UNIX domain socket and calls the suitable handler.

The communication with the daemon is done via a UNIX domain socket. Two different protocols are used when communicating with the daemon. The first is a handshake protocol, which tells the user what parameters and modules are available, their minimum and maximum values, as well as a description (the handshake protocol is not implemented yet). The second protocol is used for writing and reading the different parameters. It uses Flatbuffers (https://google.github.io/flatbuffers/). The Flatbuffers schema can be found at https://github.com/apertus-open-source-cinema/axiom-control-daemon/blob/master/Schema/axiom_daemon.fbs.

Communication of the Web UI with the daemon is bridged from the UNIX domain socket and Flatbuffers to WebSocket and JSON by the WSServer. The WSServer accepts the following fields, which get translated to the Flatbuffers schema (and vice versa):

field      type    example              comment
sender     string  WSServer
module     string  "image_sensor"
command    string  "set" or "get"
parameter  string  "analog_gain"
value1     string  "4"
value2     string  "5.7"
status     string  "success" or "fail"  used for the reply from the Daemon
message    string                       status message, to get more info when a request fails
timestamp  string                       date and time of the camera when the reply was sent back to the client
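As a sketch, a "set" request sent to the WSServer might look like the following JSON. Only the field names come from the table above; the values (and the sender name "WebUI") are illustrative, not taken from a real session:

```json
{
  "sender": "WebUI",
  "module": "image_sensor",
  "command": "set",
  "parameter": "analog_gain",
  "value1": "4",
  "value2": "0"
}
```

A reply from the daemon would carry the same fields plus status, message and timestamp.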


TODO: Add JSON/REST package description from Lab (https://lab.apertus.org/T865)

3 Build

Required packages (names vary between Linux distributions):

  • cmake
  • systemd
  • clang
  • ninja
  • git

Steps:

  • Install required packages
  • Clone the repo (https://github.com/apertus-open-source-cinema/axiom-control-daemon)
  • cd into cloned repo
  • mkdir build && cd build
  • cmake -GNinja ..
  • ninja

4 Setup daemon

5 Setup WebUI

6 CLI

To set or get parameters from the command line, DaemonCLI can be used:

Syntax: DaemonCLI <module> set_/get_<parameter> <value>

Prepend set_ or get_ to the parameter name to tell the daemon whether the parameter should be set or read.

DaemonCLI image_sensor set_digital_gain 2

6.1 Available modules

general       General methods, like getting available parameters through get_available_methods
image_sensor  CMV12000 (currently)

6.2 Available parameters (per module)

image_sensor

digital_gain     Digital gain
analog_gain      Analog gain
config_register  read/write an arbitrary sensor config register

7 Development notes

CMV12000Adapter will be used as an example. Please look at the class for implementation examples.

7.1 Add new parameters

Before registering new parameters, two methods should be added: a setter and a getter.

Declaration syntax:

   bool <setter_name>(std::string value1, std::string value2, std::string& message)
   bool <getter_name>(std::string& value, std::string& message)

Note the ampersand (&) after the type; it is very important for returning multiple values. As methods can return only one value (bool in this case), variables are passed by reference (&) so that their values can be set inside the method. The values can be modified as usual and will be passed back to the caller (see IDaemonModule.h and MessageHandler.cpp for examples).

The setter always receives two values; in case just one is required, value2 will be "0" (zero).

After the setter and getter are implemented, it's time to attach them to a parameter name. Here they are registered to the gain parameters:

void CMV12000Adapter::RegisterAvailableMethods()
{
    AddParameterHandler("analog_gain", GETTER_FUNC(&CMV12000Adapter::GetAnalogGain), SETTER_FUNC(&CMV12000Adapter::SetAnalogGain));
    AddParameterHandler("digital_gain", GETTER_FUNC(&CMV12000Adapter::GetDigitalGain), SETTER_FUNC(&CMV12000Adapter::SetDigitalGain));
}

This code attaches methods like SetAnalogGain() and GetAnalogGain() to the corresponding parameter "analog_gain".

8 Unit tests

Unit tests have been added to the project to verify correct functionality. The Catch2 framework is used because it is single-header only and utilizes the C++11 way of doing things.

Note (for development on a PC): RAM access on the camera is different from x86/x64 CPUs; modified classes have to be used to bypass this, otherwise a SEGFAULT would be the result.

In the CMake scripts a switch called ENABLE_MOCK was added so that users can disable any code which won't work on a regular PC (see CMV12000AdapterTests.cpp for an example). When building on the camera, cmake .. is sufficient, but for development on a PC one should use:

$ cmake -DENABLE_MOCK=ON ..
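For reference, such a switch is typically defined in CMakeLists.txt along these lines. This is a sketch, not the project's actual build script; only the option name ENABLE_MOCK comes from the text above:

```cmake
# Sketch of a CMake option gating camera-only code paths.
# Only the name ENABLE_MOCK comes from the text above; the rest is illustrative.
option(ENABLE_MOCK "Replace camera register access with mocks for PC builds" OFF)

if(ENABLE_MOCK)
    # Sources can then use #ifdef ENABLE_MOCK to select the mock classes.
    add_compile_definitions(ENABLE_MOCK)
endif()
```

Passing -DENABLE_MOCK=ON on the command line, as shown above, overrides the OFF default for that build directory.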