Sometimes it seems that network architects behave like five-star chefs, dreaming up ever-new ways to combine the same key ingredients and present them differently. NFV (Network Function Virtualisation) and MEC (Mobile Edge Computing) allow important functions to be moved around – centralised or distributed as required. But is this all just table dressing, or can it enable new services?
Should RAN Virtualisation be renamed RAN splitting?
It seems to me that RAN virtualisation, which applies NFV, is mostly aimed at centralising RAN functions – reducing the intelligence at the very edge of the network (i.e. in small cells) and pooling it in large data centres. This C-RAN (Centralised RAN or Cloud RAN) offers benefits in spectral efficiency, because all the radio nodes can be tightly synchronised and co-ordinated. The downside is the much higher-capacity backhaul (usually called fronthaul at this point in the network) required, which in some architectures means dark fibre to each endpoint.
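To see why dark fibre comes into the picture, a back-of-envelope calculation helps. The figures below are illustrative assumptions, not from the article: a single 20 MHz LTE carrier, 2x2 MIMO, 15-bit I/Q samples and CPRI-style 8b/10b line coding – a common configuration for fully centralised baseband.

```python
# Back-of-envelope fronthaul estimate for one C-RAN sector.
# All parameters are illustrative assumptions (CPRI-style I/Q transport).

SAMPLE_RATE = 30.72e6     # samples/sec for a 20 MHz LTE carrier
BITS_PER_SAMPLE = 15 * 2  # 15-bit I + 15-bit Q per sample
ANTENNAS = 2              # 2x2 MIMO
LINE_CODING = 10 / 8      # 8b/10b line-coding overhead

raw_gbps = SAMPLE_RATE * BITS_PER_SAMPLE * ANTENNAS / 1e9
line_gbps = raw_gbps * LINE_CODING

print(f"raw I/Q rate:   {raw_gbps:.2f} Gbps")   # ~1.84 Gbps
print(f"on the fibre:   {line_gbps:.2f} Gbps")  # ~2.30 Gbps, before control words
```

Roughly 2.5 Gbps of constant-rate transport to serve a cell whose peak user throughput is around 150 Mbps – the fronthaul carries over ten times the air-interface rate, which is why less aggressive "RAN splitting" options that move only some functions to the centre are attractive.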
A commercial trade-off has to be made between the cost of deploying additional radio heads/small cells and the savings achieved. This may matter more in outdoor metro scenarios than in-building, because site logistics drive a higher cost of deployment per site. I find it hard to believe there would be significant savings in direct RF hardware costs – you have to do the processing somewhere – but it is argued that some outdoor macrocell sites don’t have the space for the associated air-con and ancillary equipment, which would be more efficiently centralised.
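The shape of that trade-off can be sketched with simple arithmetic. Every number below is hypothetical – the point is the structure: per-site savings and baseband pooling gains on one side, recurring fronthaul cost on the other.

```python
# Illustrative C-RAN business-case sketch (all figures hypothetical).
SITES = 100
SAVING_PER_SITE = 15_000       # annual saving: power, cooling, cabin space
POOLING_GAIN = 0.2             # fraction of baseband capacity saved by pooling
BASEBAND_COST_PER_SITE = 10_000
FIBRE_COST_PER_SITE = 8_000    # annual leased dark-fibre fronthaul per site

net_per_site = (SAVING_PER_SITE
                + POOLING_GAIN * BASEBAND_COST_PER_SITE
                - FIBRE_COST_PER_SITE)
print(f"net annual benefit per site: {net_per_site:,.0f}")
print(f"across {SITES} sites:        {net_per_site * SITES:,.0f}")
```

With these made-up numbers the case is positive; double the fibre cost (the in-building situation, where fibre runs are long relative to site savings) and it flips negative, which matches the intuition that the economics differ between metro and enterprise deployments.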
Some C-RAN products have also been designed for larger in-building enterprise applications, directly competing with DAS (Distributed Antenna Systems).
Perhaps it would be more appropriate to call this “RAN Splitting”. There are several technical architectures proposed – the Small Cell Forum paper on the topic provides a useful technical comparison. Implementation may not necessarily require new standards, as long as the external interfaces (radio to handsets, S1 to core network) are compliant. The Forum proposes its nFAPI interface, which would allow different vendors to supply the (fairly dumb) radio nodes and the centralised baseband equipment.
Isn’t this the same as Core Network Virtualisation?
I don’t think so. Many core network functions used to be implemented on custom-designed hardware – so-called “Telecom Appliances” – with diverse platforms for voice, data and messaging functions, amongst others.
Virtualising those functions allowed them to run on standard datacentre machines and to be relocated and/or consolidated into fewer datacentres. This has added efficiency and economies of scale for multi-national network operators. It’s not uncommon for your SMS text message to be handled centrally in another country, quite transparently to end users.
The cost savings from using standardised, commodity hardware platforms – reducing the need for a diverse range of hardware support skills, parts and equipment – are significant. These savings are less visible on the RF side of the business, which needs dedicated equipment at each cellsite.
So what about mixing them up?
A more radical architectural change involves distributing some core network functions to the edge of the network. The concept can seem counter-intuitive when so many of our smartphone apps communicate with centralised servers in the cloud.
This so-called Mobile Edge Computing initiative identifies three primary drivers:
- Latency: where the connection runs over a satellite link or has limited bandwidth, and a faster response is required
- Robustness: where there is a need to continue providing service when the backhaul connection is offline
- Data filtering and consolidation: where large amounts of local data can be reduced down to key information events
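The third driver – data filtering and consolidation – can be sketched in a few lines. This is a hypothetical example, not any particular MEC API: raw local readings are reduced at the edge to threshold-crossing events, so only a handful of records ever traverse the backhaul.

```python
# Sketch of edge-side data reduction (hypothetical example): a stream of raw
# sensor readings is reduced to the few points where the value crosses a
# threshold, before anything is sent over the backhaul link.

def filter_to_events(readings, threshold):
    """Return only (time, value, direction) tuples at threshold crossings."""
    events = []
    above = False
    for t, value in readings:
        crossed = value > threshold
        if crossed != above:  # state change -> this is a key information event
            events.append((t, value, "rise" if crossed else "fall"))
            above = crossed
    return events

# 100 raw readings: a flat baseline with one excursion between t=40 and t=60.
readings = [(t, 20 + (5 if 40 <= t < 60 else 0)) for t in range(100)]
events = filter_to_events(readings, threshold=22)
print(events)  # two events instead of 100 readings
```

A hundred readings collapse to two events – the kind of reduction that makes a constrained or metered backhaul link workable.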
An earlier article looked at running Mobile Edge Computing Apps on Enterprise servers. It’s also possible to run core network functions locally. This would be of most interest to:
- First Responders: Tactical deployment of a local standalone LTE core network which continues to provide service regardless of backhaul connection.
- Enterprise: In-building calls and data services can continue in the event of backhaul outages.
- Remote/Rural: Sites such as oil rigs, mines and rural villages could continue to operate when backhaul links are offline.
An Edge Core architecture adds flexibility and value to network operation, differentiating between network operators. This doesn’t require any standards changes. Commercially available products, such as the Quortus ECX, provide mobile core intelligence suitable for enterprise deployment. The software is scalable and has been proven to run on-board a small cell itself.
With so much focus on RAN virtualisation at the moment, it’s worth considering whether this is really mostly about centralising the RAN rather than changing its external capabilities.
Greater resilience, robustness and lower latency for the end user services might better be achieved by distributing some of the core network functions into local controllers at the network edge. This has already been proven useful in Enterprise, Rural/Remote and First Responder applications.
We'll be expanding on this theme alongside other presenters at this month's Cambridge Wireless Small Cell SIG meeting. Join us there for an interactive debate on the value and future of Small Cells and backhaul in the world of RAN Virtualisation.