How do you know the GPS receivers in Gran Sasso and CERN are not introducing any systematic error?
Metrology labs around the world use GPS time transfer techniques to
exchange observations of the delays of their local clocks with respect
to GPS clocks. These observations are then merged by the Bureau
International des Poids et Mesures (BIPM) in Paris to produce
Coordinated Universal Time (UTC). In addition to these techniques,
metrology labs also exchange observations using other independent means
such as Two-Way Satellite Time Transfer (TWSTT). In this setup, a
certain bandwidth is rented from a geostationary satellite operator to
establish a two-way link, which is completely independent of GPS.
Travelling atomic clocks (after accounting for relativity and other
effects) are a third, independent way of verifying that GPS time
transfer is unbiased. The satellite receivers in Gran Sasso and at CERN
are the same brand and model, as are the antennae and the antenna
cables. They have been characterized by two independent metrology labs
(METAS and PTB) using different methods, which agreed to within 2 ns
even though the calibrations were conducted two years apart.
Have you taken the antenna cable delay into account?
Calibrations conducted by METAS and PTB both included the final cables
and antennae. This is a very important concept: inclusive calibration.
It is also used to calibrate other things like fiber links. You have to
try to include as much as possible of your final setup in the
calibration procedure. So at METAS, for example, both GPS receivers were
set up with their final cables and antennae, and their PPS outputs were
measured against the METAS reference PPS. This provided, in addition to
the relative calibration of interest, an absolute calibration with
respect to UTC.
What software have you used to convert from RINEX to CGGTTS?
Does this system suffer from any kind of Sagnac effect, related to the rotation of the Earth?
Sagnac effect corrections and many other effects on the GPS time
transfer are properly treated by the RINEX to CGGTTS conversion
software. See "Use of geodetic receivers for TAI" by P. Defraigne,
G. Petit and C. Bruyninx.
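To get a feel for the size of the Sagnac correction over terrestrial baselines, here is a rough order-of-magnitude sketch. It is illustrative only, using the standard rotating-frame formula with an east-west ground path at the equator (a worst-case geometry, not the actual CERN-LNGS baseline):

```python
# Illustrative sketch: one-way Sagnac delay for an east-west ground path
# of length L at the equator, using dt = 2 * omega * A / c^2, where A is
# the area swept by the Earth-center-to-signal radius vector, projected
# onto the equatorial plane (A ~ R*L/2 for a short equatorial path).

OMEGA_E = 7.2921159e-5   # Earth rotation rate, rad/s
R_E = 6.378137e6         # Earth equatorial radius, m
C = 299_792_458.0        # speed of light, m/s

def sagnac_delay_ns(path_length_m: float) -> float:
    """One-way Sagnac delay (in ns) for an equatorial east-west path."""
    area = 0.5 * R_E * path_length_m          # swept area, m^2
    return 2.0 * OMEGA_E * area / C**2 * 1e9  # seconds -> ns

# For a 10 km path: ~0.05 ns one-way, i.e. ~0.1 ns asymmetry between
# the two directions of a two-way exchange.
print(sagnac_delay_ns(10_000))
```

This matches the ~0.1 ns asymmetry figure quoted later for a 10 km fiber at the equator.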
Was the PTB calibration done with a traveling cesium clock? If so, did you apply relativistic corrections to its time base?
No, it wasn't. The PTB calibration was done with a traveling GPS setup
used for time comparison between metrology labs. This unit consists of
a GTR50 GPS receiver, an SR60 Time Interval Counter (TIC), a GPS
antenna, and ~50 m of low-tempco HELIAX foam coaxial cable. This system
has traveled through several metrology labs, where it has been verified
to be stable at the 1 ns level.
What is the impact of the Ionosphere on the accuracy of the time transfer?
Geodetic receivers can form the so-called ionosphere-free combination
(P3) from the code measurements transmitted on the L1 and L2 carriers.
Since L1 and L2 are two different frequencies, and ionospheric delay
affects them differently, it is possible to compute an approximation of
the ionospheric delay and remove it. Additionally, the common-view
technique helps reduce the impact of ionospheric delays over baselines
of the order of 800 km, by selecting satellites which are visible
simultaneously from both stations. On top of that, the use of a cesium
atomic clock allows additional filtering of daily delay oscillations.
Once all these techniques are applied, the overall impact of
ionospheric delays is below 1 ns. See "Use of geodetic receivers for
TAI" by P. Defraigne, G. Petit and C. Bruyninx.
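As an illustration of how the dual-frequency combination removes the first-order ionospheric delay, here is a minimal sketch with toy numbers (not actual receiver data). The first-order ionospheric delay scales as 1/f², so a weighted difference of the L1 and L2 pseudoranges cancels it:

```python
# Illustrative sketch: ionosphere-free combination P3 from dual-frequency
# pseudoranges, P3 = (g*P1 - P2) / (g - 1) with g = (f1/f2)^2.

F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def p3(p1_m: float, p2_m: float) -> float:
    """Ionosphere-free pseudorange (metres) from P1 (on L1) and P2 (on L2)."""
    g = (F1 / F2) ** 2
    return (g * p1_m - p2_m) / (g - 1.0)

# Toy check: take a geometric range of 20,000,000 m with 5 m of
# ionospheric delay on L1; the delay on L2 is larger by (f1/f2)^2.
rho = 20_000_000.0
iono_l1 = 5.0
iono_l2 = iono_l1 * (F1 / F2) ** 2

print(p3(rho + iono_l1, rho + iono_l2))  # ~20000000.0, delay removed
```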
If needed, the time transfer could still be improved by adding a
posteriori knowledge of the satellite orbits, atmospheric delays and
GPS clock corrections.
Why are the GPS receivers installed on the surface?
It is very convenient to keep an antenna cable length of ~50 m, so that
the whole ensemble can be dismounted and sent for calibration to
another lab (the PolaRx2e sent to METAS), or another lab can bring a
traveling GPS setup to our installations (the PTB calibration at CERN
and LNGS).
What about the paper claiming the result is explained by relativistic motion of the GPS clocks?
The author of that paper assumes time is being measured using clocks
which are moving with respect to the detectors. This is not the case.
GPS Disciplined Oscillators (GPSDOs) indeed use signals coming from GPS
satellites to produce a clock signal, but both general and special
relativity effects are taken into account to manufacture this clock
signal. If these effects were not compensated, normal GPS applications
such as positioning of hand-held receivers would not work. Common-view
time transfer is a very well known and tested technique in the
metrology world. It is used in the production of Coordinated Universal
Time (UTC), and an effect such as that claimed in the paper would have
been noticed there long ago.
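A minimal sketch of why common view cancels the satellite clock error (simulated numbers, not real measurements): both stations observe the same satellite at the same epoch, so the unknown satellite clock offset drops out when the two observations are differenced.

```python
# Illustrative simulation of common-view time transfer. Each station
# measures (local clock - satellite clock) plus its own receiver noise;
# the difference of the two measurements depends only on the two
# station clocks, not on the satellite clock.

import random
random.seed(0)  # reproducible toy run

sat_clock_error = random.uniform(-100e-9, 100e-9)  # unknown sat offset, s
clock_a, clock_b = 12.0e-9, -30.0e-9               # true station offsets, s

meas_a = clock_a - sat_clock_error + random.gauss(0, 1e-9)
meas_b = clock_b - sat_clock_error + random.gauss(0, 1e-9)

cv = meas_a - meas_b  # common-view difference: sat clock cancels
print(f"recovered A-B offset: {cv * 1e9:.1f} ns "
      f"(true: {(clock_a - clock_b) * 1e9:.1f} ns)")
```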
Fiber length calibration
Why haven't you used fiber reflectometry to measure fiber lengths?
We avoided using reflectometry to calibrate our delays for the
following reasons:
- Commercial reflectometers are known to be accurate at the ~10 ns
  level, whereas we know that our internal delay chain is stable at
  the ~1 ns level.
- Lab tests show that swapping the transmitter and receiver over a
  ~5 km optical fiber roll changes the delay by only about 0.1 ns.
  Sagnac effects over an extended 10 km fiber would add an additional
  asymmetry of only ~0.1 ns at the Earth's equator. This means that a
  simple two-way scheme over a spare fiber has the potential to perform
  a time transfer between two points 10 km apart with ~1 ns accuracy
  (or better, if special care is taken during the measurement and
  monitoring of the feedback delay is enabled).
- Delay paths should include as many components as possible in order
  to reduce any systematic error.
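The two-way scheme mentioned above can be sketched as follows, with hypothetical timestamps and assuming a symmetric fiber delay in both directions. Each end timestamps its own transmission and the other end's arrival; the path delay then cancels in the half-difference, leaving only the clock offset:

```python
# Illustrative two-way time transfer over a single fiber (toy numbers).
# If the forward and backward delays are equal, the fiber delay cancels
# and only the A-B clock offset remains; any direction asymmetry (e.g.
# the ~0.1 ns swap effect or Sagnac term) appears directly as an error.

fiber_delay = 25_000.0  # one-way delay in ns (~5 km of fiber)
offset_b = 42.0         # clock B is 42 ns ahead of clock A (unknown to the method)

# A -> B: sent at t1 on A's clock, received at t2 on B's clock.
t1 = 0.0
t2 = t1 + fiber_delay + offset_b
# B -> A: sent at t3 on B's clock, received at t4 on A's clock.
t3 = 100_000.0
t4 = (t3 - offset_b) + fiber_delay

recovered = ((t2 - t1) - (t4 - t3)) / 2.0  # fiber_delay cancels
print(recovered)  # -> 42.0
```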
These fibers are not length-compensated on-line. What is the length change throughout the year?
The fiber delay tempco is of the order of 50 ps/km/°C. The fibers in
Gran Sasso are buried deep underground, where temperature changes below
1 °C are expected. This represents a maximum change of less than 1 ns
throughout the year in the OPERA fiber. At CERN we have ~2 km of fiber
between the CERN Control Room (CCR) and HCA442. This fiber is not
buried as deeply as the one used at LNGS. Assuming the fiber is buried
at ~1 m, the seasonal temperature change could be ~15 °C, implying a
possible change in fiber delay of 1.5 ns.
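The arithmetic above can be checked directly. Note that the LNGS fiber length used below is a hypothetical figure for illustration only; the document does not state it:

```python
# Illustrative check of the seasonal delay change from the quoted
# tempco of ~50 ps/km/degC: delay change = tempco * length * delta_T.

TEMPCO_PS_PER_KM_PER_DEGC = 50.0

def delay_change_ns(length_km: float, delta_t_degc: float) -> float:
    """Seasonal fiber delay change in ns."""
    return TEMPCO_PS_PER_KM_PER_DEGC * length_km * delta_t_degc / 1000.0

print(delay_change_ns(2.0, 15.0))   # CERN surface fiber: -> 1.5 ns
print(delay_change_ns(10.0, 1.0))   # hypothetical ~10 km buried fiber, <1 degC: 0.5 ns
```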
Additionally the electronics could be affected by temperature changes.
We have tested in the lab that after heating the CTRI to ~80 °C with a
heat gun, the reception delay increases by ~2 ns. Note that this is
well beyond the electronics' normal operating environment.
Beam Current Transformer cable calibration
What does the BCT measure?
The BCT is made of a toroidal magnetic core with a winding around it
that delivers a voltage when the beam goes through the center of the
toroid. It's a bit like a conventional transformer, with the beam
playing the role of the primary winding and its effect being observed in
the secondary winding.
Why is the BCT cabling delay so large?
There are no electronics near the beam pipe for radiation reasons. The
original mission of that BCT was to make sure the beam had the required
characteristics for CERN Neutrinos to Gran Sasso (CNGS) operation. Using
it as a means to provide a time-stamped acquisition for neutrino time of
flight experiments was an afterthought. In any case, the cable length
and the amplifier combined with the BCT still provide a bandwidth of
several hundred MHz, so there is no appreciable distortion in the
signal.
You have calibrated using an LHC beam. How do those measurements apply to the CNGS beam?
Lab measurements indicate that there is a variation in beam-to-output
delay with respect to transverse beam position. We measured an effect of
almost 10 ns for a 10 cm transverse displacement. But the difference in
transverse position between an LHC beam and a CNGS beam is less than 2
mm. So this effect is not a problem. Also, for the purpose of measuring
BCT delay, the actual time structure of the beam is irrelevant. This is
why we chose the LHC beam, with its properly separated bunches that make
analysis of the delay much simpler.
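The scaling argument above can be made explicit, assuming the position dependence is roughly linear over small displacements (an assumption made here for illustration):

```python
# Illustrative scaling estimate: a 10 cm transverse displacement shifts
# the beam-to-output delay by almost 10 ns, i.e. a sensitivity of about
# 1 ns/cm. The LHC-vs-CNGS position difference is under 2 mm (0.2 cm),
# so the resulting delay difference is well under 1 ns.

sensitivity_ns_per_cm = 10.0 / 10.0  # ~1 ns per cm, from the lab measurement
displacement_cm = 0.2                # < 2 mm LHC-vs-CNGS difference

print(sensitivity_ns_per_cm * displacement_cm)  # -> 0.2 ns at most
```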