Following the evaluation of different implementation options, the technical choice for the masterFIP design was the use of Mock Turtle (also referred to as the White Rabbit Node Core).
Mock Turtle is an HDL core of a generic distributed control system node, based on multiple deterministic CPU cores on which users can run any sort of hard real-time application. The applications can be written in bare-metal C, using the standard GNU toolset, cross-compiled and loaded into the CPUs. The CPUs can communicate with each other through a dedicated Shared Memory, and with the host through Host Message Queues (HMQ).
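The Shared Memory handoff between CPUs can be sketched as below. This is a host-runnable simulation, not firmware code: the struct layout and the names smem_slot, smem_write and smem_read are assumptions for illustration, and ordinary RAM stands in for the Mock Turtle Shared Memory.

```c
/* Host-runnable sketch of the Shared Memory handoff pattern. The names
 * (smem_slot, smem_write, smem_read) are hypothetical; the real firmware
 * uses the Mock Turtle Shared Memory region, not ordinary RAM. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE 64

/* One slot of the simulated Shared Memory: a sequence counter lets the
 * consumer CPU detect that fresh payload bytes have arrived. */
struct smem_slot {
    volatile uint32_t seq;      /* bumped by the producer on each update */
    uint8_t payload[SLOT_SIZE]; /* bytes destined for WorldFIP production */
};

/* Producer side: copy the new payload, then bump the sequence counter. */
static void smem_write(struct smem_slot *s, const uint8_t *data, size_t len)
{
    memcpy(s->payload, data, len);
    s->seq++;
}

/* Consumer side: take a copy only if the data changed since last read. */
static int smem_read(struct smem_slot *s, uint32_t *last_seq, uint8_t *out)
{
    if (s->seq == *last_seq)
        return 0;               /* nothing new: keep the previous data */
    *last_seq = s->seq;
    memcpy(out, s->payload, SLOT_SIZE);
    return 1;
}
```

On the real hardware the two sides run on different CPUs, so the update protocol (write payload first, bump counter last) is what keeps a reader from consuming a half-written slot.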
This project includes:

- real-time software, written in bare-metal C, running in the Mock Turtle CPUs
- the associated libraries for the development of user-space applications
- rt/common/hw: wbgen2-generated header file with the register definitions
- include: common definitions used by rt and user-space software
- mockturtle: Mock Turtle as a submodule
- lib: library for building user-space applications with masterFIP
- tools/testbed: example application
The real-time software runs in the Mock Turtle CPUs.
Note that with this architecture, even if the communication with the
host is lost, the WorldFIP macrocycle will continue running;
the produced data will not be up to date, since they come from the host, but the WorldFIP macrocycle will not be disrupted.
In this design, Mock Turtle has been configured with the following parameters:

- CPU0, CPU1: running at 100 MHz
- White Rabbit support
- Remote Message Queues
- CPU0 memory size
- CPU1 memory size
- Shared Memory size
- HMQ MT -> host
- HMQ host -> MT
The following figure shows the main modules of Mock Turtle, as configured for this project. The communication with the fmc_masterfip_core is through a set of wbgen2-defined control and status registers.
CPU0 is the heart of the design; its purpose is to "play" the WorldFIP macrocycle in a deterministic way, as described in the masterFIP specification. For example, it initiates the delivery of a WorldFIP question frame by providing the frame bytes to the fmc_masterfip_core, and then waits for the reception of the response frame. It retrieves the consumed data from the fmc_masterfip_core, packs them into the corresponding HMQ (according to the frame type) and can notify the host through an IRQ.
The main interaction between the host and CPU0 is for the macrocycle configuration. This in principle takes place only once, at startup: CPU0 is loaded through a dedicated HMQ (RD HMQ0) with the macrocycle configuration, for example the number and size of produced/consumed variables, the lengths of the periodic/aperiodic windows, etc.
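A configuration message of this kind could look like the struct below. This is an illustrative layout modeled on the parameters just listed; the struct name, field names and the MAX_VARS bound are assumptions, not the actual masterFIP message format.

```c
/* Hypothetical layout of the macrocycle configuration sent once over
 * RD HMQ0 at startup. Names and sizes are illustrative assumptions. */
#include <assert.h>
#include <stdint.h>

#define MAX_VARS 64                 /* illustrative bound on variables */

struct mcycle_config {
    uint32_t n_produced;            /* number of produced variables    */
    uint32_t n_consumed;            /* number of consumed variables    */
    uint32_t prod_size[MAX_VARS];   /* payload size per produced var   */
    uint32_t cons_size[MAX_VARS];   /* payload size per consumed var   */
    uint32_t periodic_window_us;    /* length of the periodic window   */
    uint32_t aperiodic_window_us;   /* length of the aperiodic window  */
};
```

Keeping every field a fixed-width uint32_t makes the message trivially copyable through an HMQ slot, with no padding or endianness surprises between the host driver and the bare-metal firmware on the same SoC.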
The following figure shows the states of the CPU0 state machine.
During WorldFIP operation the state machine is in the "RUNNING" state. Note that at the end of every window (periodic/aperiodic window of a macrocycle) CPU0 also polls RD HMQ0 to check for a "RESET" request from the host.

CPU0 state machine
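The behaviour described above can be condensed into a small transition function. Only the RUNNING state is named in the text; the other state names and the event codes are illustrative assumptions for this sketch.

```c
/* Minimal sketch of the CPU0 state machine. Only RUNNING is named in
 * the documentation; ST_IDLE, ST_RESET and the events are illustrative. */
#include <assert.h>

enum fip_state { ST_IDLE, ST_RUNNING, ST_RESET };
enum fip_event { EV_NONE, EV_CONFIG_RECEIVED, EV_WINDOW_END_RESET };

/* Advance the state machine: a configuration on RD HMQ0 starts the
 * macrocycle; at the end of each window, a pending reset request from
 * the host (also seen while polling RD HMQ0) leaves the RUNNING state. */
static enum fip_state step(enum fip_state s, enum fip_event ev)
{
    switch (s) {
    case ST_IDLE:
        return ev == EV_CONFIG_RECEIVED ? ST_RUNNING : ST_IDLE;
    case ST_RUNNING:
        return ev == EV_WINDOW_END_RESET ? ST_RESET : ST_RUNNING;
    case ST_RESET:
        return ST_IDLE;         /* re-arm and wait for a new config */
    }
    return s;
}
```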
CPU1 mainly polls the host to retrieve new payload bytes for production. When new data are received from the host through a dedicated HMQ, CPU1 puts them into the Shared Memory for CPU0 to retrieve and provide to the fmc_masterfip_core for serialization. CPU1 does not need access to the fmc_masterfip_core; however, access is possible for debugging purposes.
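One iteration of the CPU1 polling path can be sketched as below. hmq_poll_host is a hypothetical non-blocking check of the host-to-MT HMQ; in this host-runnable simulation it delivers one queued message and then reports the queue empty, and a plain array stands in for the Shared Memory.

```c
/* Sketch of the CPU1 polling path. hmq_poll_host() is a hypothetical
 * stand-in for a non-blocking host->MT HMQ check, simulated here. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAYLOAD_SIZE 8

static uint8_t shared_mem[PAYLOAD_SIZE]; /* stands in for Shared Memory */
static int pending = 1;                  /* simulate one queued message  */

/* Return the payload length if the host queued new data, else 0. */
static int hmq_poll_host(uint8_t *out)
{
    static const uint8_t msg[PAYLOAD_SIZE] = {9, 8, 7, 6, 5, 4, 3, 2};
    if (!pending)
        return 0;
    pending = 0;
    memcpy(out, msg, PAYLOAD_SIZE);
    return PAYLOAD_SIZE;
}

/* One iteration of the CPU1 main loop: move fresh payload bytes into
 * the Shared Memory, where CPU0 will pick them up for serialization. */
static int cpu1_poll_once(void)
{
    uint8_t buf[PAYLOAD_SIZE];
    int len = hmq_poll_host(buf);

    if (len > 0)
        memcpy(shared_mem, buf, (size_t)len);
    return len;
}
```

Because CPU0 simply reuses whatever is in the Shared Memory, this structure is also what makes the macrocycle keep running with stale data when the host stops answering, as noted earlier.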
The masterFIP API is a simplified version of Alstom's FDM. It offers functions for the hardware configuration, the macrocycle configuration, the definition of IRQs, and the exchange of consumed/produced data.
A detailed description of all the functions/structures is available in
the Project info section below.