Joe Neil, Director of Solution Architecture at Microsemi, travels the world working with mobile operators of all shapes and sizes. His role is primarily focussed on their network architecture, and specifically on the design of timing and synchronisation. With networks currently deploying LTE Advanced Pro and looking ahead towards 5G, he shares his insights on how and where tighter timing constraints will be met.
How would you summarise the current evolution of network timing today?
We’ve seen steady adoption of LTE worldwide over the past few years, with available spectrum expanding both to lower (700MHz) and higher (3.5GHz) frequencies. Almost everyone is now heavily using the 2.6GHz band, which offers both FDD and TDD modes.
Fundamentally the mobile paradigm is now about bandwidth to the user, and interference control. As we go to higher frequencies to increase bandwidth, we see greater problems with in-building penetration and dispersion. The distance between cell sites has reduced to a few hundred metres apart in major urban centres so coverage is less of an issue, but each site requires more antennas, supporting sophisticated features such as beam-forming, carrier aggregation and massive MIMO.
Everyone is now looking at deploying these LTE-Advanced features from later 3GPP releases to squeeze the most out of their networks. Managing co-channel interference remains the number one issue when addressing capacity growth and this means that synchronisation between cell sites is critical, especially where phase control is used to deliver the advanced services.
End-to-end network phase synchronisation of +/-1.5us has been the stated requirement in recent years. This translates to as little as 1us between the eNodeB base station and the Primary Reference Time Clock (PRTC), the source of timing.
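A phase budget like this is simply an additive allocation of time error along the chain. The sketch below is an illustration only, with assumed (not normative) figures; real allocations come from ITU-T G.8271.1 and equipment specifications:

```python
# Illustrative end-to-end phase-timing budget, in nanoseconds.
# All individual allocations are assumptions for this sketch.
BUDGET_E2E_NS = 1500           # +/-1.5us end-to-end requirement

prtc_error_ns = 100            # assumed PRTC/grandmaster accuracy
network_error_ns = 1000        # assumed packet-network allocation (PRTC -> eNodeB)
enodeb_error_ns = 150          # assumed base-station internal error
holdover_margin_ns = 250       # assumed margin for rearrangements/holdover

total = prtc_error_ns + network_error_ns + enodeb_error_ns + holdover_margin_ns
print(f"allocated {total} ns of a {BUDGET_E2E_NS} ns budget")
assert total <= BUDGET_E2E_NS, "budget exceeded"
```

The point of the exercise is that the ~1us network allocation is only one slice of the ±1.5us total, which is why every other error source has to be kept small.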
PTP has become adopted by almost every network
An apparently simple solution to implement accurate local timing would be to use standalone GPS receivers at each cell site. It is now generally accepted that this is either too costly, unable to provide consistent timing signals, or too dependent on the GNSS systems, which can be jammed or interrupted quite easily. Not all cell sites have a clear view of the sky, which limits performance. An unusual but telling example I came across was in an Australian city where it’s not permitted to prune the trees. The leaves periodically covered the GNSS antenna and blocked the signal, requiring a very tall pole to be erected specifically to rise above the tree canopy. In Hong Kong a customer had to put his antenna on a 50 foot pole on top of his building, which was sandwiched between several much taller high-rises. Even then his view of the sky was severely restricted. These are amusing one-off anecdotes, but they illustrate the difficulty of relying on GNSS alone. 5G densification of eNBs at street level and indoors will make this problem much worse.
An alternative solution is essential. The majority of networks have already adopted PTP, and most LTE networks – even those with heavy GNSS deployment – are planning to use PTP as part of their network phase timing distribution. A common architecture is to host a highly centralised enhanced Primary Reference Time Clock (ePRTC), incorporating robust caesium clocks that can retain accurate time for up to 14 days, and to feed that signal to smaller, more cost-effective PRTC gateway clocks located strategically around the network.
When we first started using PTP (Precision Time Protocol, IEEE 1588) to distribute accurate time across a packet network, many engineers doubted it could ever be adequately reliable. Fortunately, we’ve proved over the past decade that it can be done very cost-effectively.
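Underneath, PTP recovers time from a two-way timestamp exchange between master and slave. A minimal sketch of the standard offset/delay calculation is below; it assumes a symmetric path delay, which is precisely the assumption that asymmetric packet networks break:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 two-way exchange (all timestamps in ns).

    t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    Assumes forward and reverse path delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # estimated one-way path delay
    return offset, delay

# Example: slave clock 500ns ahead of master, one-way delay 10us.
offset, delay = ptp_offset_and_delay(t1=0, t2=10_500, t3=20_000, t4=29_500)
print(offset, delay)  # 500.0 10000.0
```

Any asymmetry between the two directions lands directly in the offset estimate as error, which is why variable per-hop delay matters so much for phase timing.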
We’ve seen PTP technology evolve in several waves:
- 2008-12 was mainly for WCDMA, when most carriers became familiar with PTP and how to engineer it in their network.
- 2012-18 saw PTP evolve for use with LTE, initially to support frequency synchronisation in FDD mode. Some large TDD networks, such as those in China, Japan and India, adopted PTP for phase timing in this period.
- 2017 onwards has seen worldwide adoption of PTP for phase timing as LTE-Advanced features become more widely used.
Transporting time seamlessly remains a major challenge
Distributing time where delays are quantified and stable is fairly easy. But transport networks are extremely diverse and delay through packet networks is notoriously variable. This is most noticeable at network boundaries, where transport technologies change.
For example, one operator has a very nicely engineered L3 MPLS IS-IS network core with VPLS and Carrier Ethernet L2 rings at the edge. At each aggregation point it runs BGP gateways, resulting in complex interworking between different L3 instances or between L3 and L2. This introduces random delay because protocol conversion isn’t a deterministic process. Even a few nanoseconds of added variability can push the system out of specification.
The solution here, and increasingly the architecture of choice, is to deploy several gateway clocks near the Mobile Edge rather than driving everything centrally, and this fits neatly with the notion of Mobile Edge Computing as we move into the 5G era.
Moreover, it also tells us that very high speed transmission links alone don’t mitigate timing errors – you really need to engineer a thorough and robust solution that caters for any protocol interworking.
How will timing requirements change with 5G?
The good news is that end-to-end macro network timing requirements are unlikely to change soon. The +/-1.5us inter-radio phase timing for LTE-Advanced should be adequate for 5G NR radios at any frequency, both sub-6GHz and millimetre wave – at least for now.
What is new are some very tight timing requirements between the REC (Radio Equipment Control) and RE (Radio Equipment)/multiple antennas. In the ITU’s time error budgets for LTE, +/-400ns was allocated between Baseband Units (BBU) and Remote Radio Heads (RRH), typically connected by a CPRI interface. In 5G, these devices evolve into Centralised Units (CU) and Distributed Units (DU) connected via eCPRI.
Phase timing error between DUs connected to the same CU is targeted to be tighter still, with the precise value depending on the type of service being used. For example, intra-band carrier aggregation of 3x20MHz would need a relative timing control of +/-130ns, or roughly +/-65ns per radio against a common reference. Although theoretically possible in the lab, nobody is seriously thinking of engineering a Wide Area Network at +/-65ns – at least not yet. Some very large operators (for example, China Mobile) have publicly argued that +/-130ns to +/-200ns is possible, and the usual Network Element vendors are looking at this very seriously.
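The jump from a +/-130ns relative target to a +/-65ns per-radio figure follows from simple worst-case arithmetic, sketched below as an illustration (not a normative budget):

```python
# If each radio holds absolute time error within +/-te_abs_ns of a shared
# reference, the worst case is one radio at +te_abs and the other at
# -te_abs, so their relative error can reach twice the per-radio bound.
def worst_case_relative_te(te_abs_ns: float) -> float:
    return 2 * te_abs_ns

print(worst_case_relative_te(65))  # a +/-65ns per-radio budget gives 130ns relative
```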
I would reiterate that this level of accuracy is the timing error between multiple antennas connected to the same control unit, typically connected via fibre or millimetre wave. The distance between the two might be quite short but could extend to 20 or 30km given the right fronthaul transmission.
I expect the performance we can achieve over a well-engineered network will continue to evolve and improve in the coming years. The key, of course, is “well engineered” – there will be less and less room for elasticity (noise) in the network.
eCPRI will become the new multi-vendor interface
While CPRI was effectively a proprietary interface because each vendor had their own variants, there is a lot of commercial pressure from major operators to make eCPRI a truly global open standard. Carriers are now insisting that RAN vendors comply and support interworking between the Radio Equipment (RE) and the Radio Equipment Controllers (REC). It may take time for this initiative to percolate around the rest of the world, and large equipment vendors will continue to offer “locked” or quasi-locked systems, but eventually it will be the generic model.
This should allow a wider ecosystem of suppliers who provide various formats of antennas with embedded active radio heads to thrive. Products could include everything from large outdoor multifunction active antennas to in-building DAS systems.
Timing and sync, and especially engineering competence in this domain, will be just as important for these distributed radio units as for the central network components. Many more vendors will need adequate technical expertise to meet these extremely demanding timing requirements in a robust and reliable manner.
Serious clocking expertise, and high performance clocks of all shapes and sizes, as systems or as components, will be in strong demand in the future in order to deliver the consistent network performance required by users. Time and again in the past we have seen that failure to engineer the timing network correctly ends up being very expensive, and that saving money on timing components (or rigorous network engineering) has been a false economy. This will hold true as we move into the always-on, high-bandwidth model promised by 5G.
Microsemi is a sponsor of ThinkSmallCell
For more information about Microsemi timing and synchronisation solutions visit their website at https://www.microsemi.com/applications/mobile-infrastructure/picocell-enterprise-small-cell