masterFIP Software
Overview
Following the evaluation of different implementation solutions, the technical choice for the masterFIP design was Mock Turtle (also referred to as the White Rabbit Node Core).
Mock Turtle is an HDL core implementing a generic distributed control
system node, based on multiple deterministic CPU cores on which users
can run any sort of hard real-time application.
The applications can be written in bare-metal C, using the standard GNU
toolchain, cross-compiled and loaded into the CPUs. The CPUs can
communicate with each other through
a dedicated Shared Memory and with the host through Host Message
Queues (HMQ).
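As a rough illustration of this model, the sketch below shows what a bare-metal application on one of the CPUs could look like; all names (SHM_BASE, hmq_recv, hmq_send) are hypothetical placeholders, not the actual Mock Turtle rt library API:

    /* Hypothetical bare-metal Mock Turtle application skeleton.
     * SHM_BASE and the hmq_* helpers are illustrative placeholders. */
    #include <stdint.h>

    #define SHM_BASE 0x10000000u  /* hypothetical Shared Memory base address */
    static volatile uint32_t *const shm = (volatile uint32_t *)SHM_BASE;

    /* hypothetical helpers wrapping the Host Message Queues */
    extern int  hmq_recv(int queue, uint32_t *buf, int max_words);
    extern void hmq_send(int queue, const uint32_t *buf, int n_words);

    int main(void)
    {
            uint32_t msg[8];

            for (;;) {
                    /* pick up a request from the host... */
                    if (hmq_recv(0, msg, 8) > 0) {
                            shm[0] = msg[0];     /* ...share it with the other CPU */
                            hmq_send(0, msg, 1); /* ...and acknowledge to the host */
                    }
            }
    }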
This project includes:
- the real-time software, written in bare-metal C, running on the embedded CPUs
- the associated libraries for the development of user-space applications
- an example application
The project is currently hosted on GitLab.
Folder Structure:
- rt/ba: real-time software running on CPU0
- rt/cmd: real-time software running on CPU1
- rt/common: shared memory allocation functions
- rt/common/hw: wbgen2-generated header file with register definitions
- include: common definitions used by rt and user space applications
- mockturtle: Mock Turtle as a submodule
- lib: library for building user-space applications with masterFIP
- tools/testbed: example application
Real-time software
The real-time software runs in the Mock Turtle CPUs.
Note that with this architecture, even if the communication with the
host is lost, the WorldFIP macrocycle will continue running;
the produced data will not be up-to-date, as they come from the host,
but the WorldFIP macrocycle will not be disrupted.
In this design, Mock Turtle has been configured as the following table shows:
Parameter | Value |
CPUs | CPU0, CPU1: running at 100 MHz |
White Rabbit support | No |
Remote Message Queues | 0 |
CPU0 memory size | 98304 bytes (96 KB) |
CPU1 memory size | 8192 bytes (8 KB) |
Shared Memory size | 65536 bytes (64 KB) |
HMQ MT -> host | 8 |
HMQ host -> MT | 2 |
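For reference, these parameters map naturally onto a set of compile-time constants shared between the rt and user-space code. The header below is only a hypothetical illustration of such a file; the names are not taken from the project's actual headers under include/:

    /* Hypothetical mirror of the Mock Turtle configuration above;
     * all names are illustrative, not the project's definitions. */
    #define MT_CPU_CLOCK_HZ    100000000  /* CPU0, CPU1 at 100 MHz */
    #define MT_CPU0_MEM_BYTES  98304      /* 96 KB */
    #define MT_CPU1_MEM_BYTES  8192       /*  8 KB */
    #define MT_SHM_BYTES       65536      /* 64 KB */
    #define MT_HMQ_MT_TO_HOST  8          /* queues MT -> host */
    #define MT_HMQ_HOST_TO_MT  2          /* queues host -> MT */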
The following figure shows the main modules of Mock Turtle, as
configured for this project.
The communication with the fmc_masterfip_core is through a set of
wbgen2-defined control and status
registers.
Figure: Mock Turtle main modules and configuration
CPU0 is the heart of the design; its purpose is to "play" the
WorldFIP macrocycle in a deterministic way, as described in the masterFIP
functional specs.
For example, it initiates the delivery of a WorldFIP question frame by
providing the frame bytes to the fmc_masterfip_core,
and then awaits the reception of the response frame. It retrieves
the consumed data from the fmc_masterfip_core, packs them
into the corresponding HMQ (according to the frame type) and can notify
the host through an IRQ.
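A rough sketch of one such exchange is given below; all identifiers (core_send_frame, core_wait_response, hmq_send, host_irq_notify) are hypothetical placeholders for the wbgen2-defined register interface, not the project's actual functions:

    /* Hypothetical sketch of one CPU0 variable exchange. */
    #include <stdint.h>

    extern void core_send_frame(const uint8_t *frame, int len);
    extern int  core_wait_response(uint8_t *frame, int max_len, int timeout_us);
    extern void hmq_send(int queue, const uint8_t *buf, int len);
    extern void host_irq_notify(void);

    static void exchange_variable(const uint8_t *question, int qlen, int hmq)
    {
            uint8_t response[128];
            int rlen;

            core_send_frame(question, qlen);  /* question frame out */
            rlen = core_wait_response(response, sizeof(response), 100);
            if (rlen > 0) {
                    hmq_send(hmq, response, rlen); /* consumed data to host */
                    host_irq_notify();             /* optional host IRQ */
            }
    }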
The main interaction between the host and CPU0 is the macrocycle
configuration. In principle this takes place only once, at startup,
when
CPU0 is loaded through a dedicated HMQ (RD HMQ0) with the macrocycle
configuration: for example, the number and size of produced/consumed
variables,
the lengths of the periodic/aperiodic windows, etc.
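The configuration message could look something like the following structure; the field names are hypothetical and merely echo the parameters listed above, not the project's actual message format:

    /* Hypothetical layout of the macrocycle configuration sent over
     * RD HMQ0; field names are illustrative only. */
    #include <stdint.h>

    struct macrocycle_cfg {
            uint32_t n_produced_vars;   /* number of produced variables   */
            uint32_t n_consumed_vars;   /* number of consumed variables   */
            uint32_t var_size_bytes;    /* payload size of each variable  */
            uint32_t periodic_len_us;   /* length of the periodic window  */
            uint32_t aperiodic_len_us;  /* length of the aperiodic window */
    };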
The following figure shows the states of the CPU0 state machine.
During WorldFIP operation the state machine is in the "RUNNING" state.
Note that at the end of every window (periodic/aperiodic window of a
macrocycle) CPU0 polls the Shared Memory to check for a "RESET/STOP"
command from the host (placed there by
CPU1).
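In code, this end-of-window check could amount to a poll on an agreed Shared Memory word, along the lines of the sketch below; the command word location and encodings are hypothetical, not the project's actual layout:

    /* Hypothetical end-of-window poll performed by CPU0. */
    #include <stdint.h>

    enum host_cmd { CMD_NONE = 0, CMD_STOP = 1, CMD_RESET = 2 };

    extern volatile uint32_t *shm_host_cmd; /* written by CPU1, read by CPU0 */

    static int check_host_command(void)
    {
            uint32_t cmd = *shm_host_cmd;     /* polled at each window boundary */

            if (cmd != CMD_NONE)
                    *shm_host_cmd = CMD_NONE; /* consume the command */
            return (int)cmd;
    }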
CPU1 polls the host to retrieve new payload bytes for
production or a "RESET/STOP" command. When new data are received from
the host through a dedicated HMQ, CPU1 puts them
into the Shared Memory, from where CPU0 retrieves them and provides them
to the fmc_masterfip_core for serialization. CPU1 does not need access
to the fmc_masterfip_core;
however, access is possible for debugging purposes.
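CPU1's role can then be summarized by a loop along these lines; again, the helper names and the Shared Memory layout are hypothetical placeholders:

    /* Hypothetical sketch of the CPU1 main loop. */
    #include <stdint.h>

    extern int  hmq_recv(int queue, uint8_t *buf, int max_len); /* host -> MT */
    extern int  is_host_command(const uint8_t *buf, int len);   /* RESET/STOP? */
    extern void shm_write_host_cmd(uint32_t cmd);               /* for CPU0 */
    extern void shm_write_payload(const uint8_t *buf, int len); /* for CPU0 */

    int main(void)
    {
            uint8_t buf[128];
            int len;

            for (;;) {
                    len = hmq_recv(1, buf, sizeof(buf)); /* poll dedicated HMQ */
                    if (len <= 0)
                            continue;
                    if (is_host_command(buf, len))
                            shm_write_host_cmd(buf[0]);  /* RESET/STOP for CPU0 */
                    else
                            shm_write_payload(buf, len); /* fresh produced data */
            }
    }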
Library
The masterFIP API is a simplified version of Alstom's FDM.
It offers functions for the hardware configuration, the macrocycle
configuration, the definition of IRQs, the exchange of consumed/produced
data and error handling.
A detailed description of all the functions/structures is available in
the Project info section below.
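As an illustration of the intended call flow only, a user-space application could look like the sketch below; the masterfip_* names are hypothetical placeholders, not the actual library API (see the documentation referenced above for the real functions and structures):

    /* Hypothetical user-space usage sketch of the masterFIP library. */
    #include <stddef.h>

    struct masterfip_dev;  /* opaque device handle (hypothetical)     */
    struct macrocycle_cfg; /* as sketched earlier (hypothetical)      */

    extern struct masterfip_dev *masterfip_open(int board);
    extern int masterfip_macrocycle_config(struct masterfip_dev *dev,
                                           const struct macrocycle_cfg *cfg);
    extern int masterfip_irq_subscribe(struct masterfip_dev *dev);
    extern int masterfip_wait_irq(struct masterfip_dev *dev);
    extern int masterfip_get_consumed(struct masterfip_dev *dev,
                                      unsigned char *buf, size_t len);

    int main(void)
    {
            struct masterfip_dev *dev = masterfip_open(0); /* open board 0 */
            unsigned char consumed[128];

            if (!dev)
                    return 1;

            /* one-off macrocycle configuration, at startup */
            masterfip_macrocycle_config(dev, NULL /* cfg filled per application */);
            masterfip_irq_subscribe(dev);  /* get notified on new consumed data */

            for (;;) {
                    masterfip_wait_irq(dev);
                    masterfip_get_consumed(dev, consumed, sizeof(consumed));
                    /* ...hand the consumed data to the application... */
            }
    }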
Project info
Project Status
Date | Event |
07-2015 | Definition of the wbgen2 interface |
10-2015 | First version: 1 Mbps master exchanging periodic variables with 25 nodes |
03-2016 | Implementation of aperiodic messages |
06-2016 | Migration of Cryo application |
09-2016 | Migration of RadMon application |
10-2016 | Migration of QPS application |
02-2017 | Migration of FGC application |
03-2017 | Section-wide software & gateware review: review intro |
Contacts
E.Gousiou, 27 February 2017