Amit Jain, VP of Product Management at SpiderCloud, argues that their well-known Enterprise RAN architecture already fits the current model of Cloud RAN. He’s planning to scale their E-RAN product up in size and scope through virtualisation, and shared his views and the reasoning behind this evolution.
Is the term Cloud RAN well understood today?
“If you had asked me a year ago what Cloud RAN meant, I would have said it was about a centralised baseband architecture – taking the entire basestation function from Layer 1 (physical) to Layer 3 (network) out of the radio head and running it centrally. Today, I’d say that the term isn’t restricted to any single architecture. The overall concept is to split functions between the radio head and a central location in order to maximise performance while minimising cost. The assessment needs to take into account the number of radio access points, the backhaul and the central processing resource required.
“The term C-RAN is used for both Centralised RAN and Cloud RAN. The former is mostly about shifting resources from one place to another, while Cloud RAN looks at sharing resources and optimising system efficiency.
“We sit on the Cloud RAN side and believe the underlying LTE architecture of our product today is basically a Cloud RAN. We’ve not called it that in the past because we’ve been more focussed on solving customer problems and maturing our system. But with Cloud RAN being defined in the industry, especially through the Small Cell Forum work, I feel it is time to be clear about what we’re doing.”
What’s your split of functionality between the small cell radio head and centralised processor?
“In SpiderCloud’s LTE Enterprise RAN solution, the eNodeB functionality is split between the radio head (called the Radio Node) and the central controller (called the Services Node). Layer 1 and Layer 2 run on the small cell, with Layer 3 processed on the central Services Node. We approached LTE differently from how we designed our 3G system, where the Services Node acts like an RNC (Radio Network Controller) and the radio head as a NodeB (3G basestation).”
“The benefit is that all the UE (smartphone/device) data sessions are anchored on the Services Node. When moving between radio nodes within the building, all the handovers stay within the building and are co-ordinated by the controller. This offloads a lot of the signalling and handover processing from the operator’s core network. It also allows us to manage interference and optimisation through SON.
“The architecture connects radio heads and the central controller using standard Ethernet with relatively little overhead above the user data traffic. We don’t require dedicated CAT5 cabling or fibre throughout the building and can choose to share existing Ethernet transport where appropriate. Our ability to share Ethernet is what allows us to build an LTE small cell clip-on module for Cisco Wi-Fi APs. With most of the ‘heavy lifting’ processing done at the edge, each time you add a new small cell/radio head you increase the capacity of the system. The central Services Node requires comparatively little processing power per node, so it can scale up to 100 dual-carrier nodes per server today.”
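To illustrate the anchoring point Jain makes above, here is a minimal sketch (all class and identifier names are invented for illustration, not SpiderCloud’s software) of why anchoring UE sessions on the central controller keeps intra-building handovers away from the operator’s core:

```python
# Illustrative model of the described split: Layer 1/2 runs on each Radio
# Node, while Layer 3 and mobility anchoring sit on the Services Node, so
# handovers between Radio Nodes generate no core-network signalling.

class ServicesNode:
    """Central controller: anchors UE sessions and coordinates handovers."""

    def __init__(self):
        self.anchored_ues = {}      # ue_id -> currently serving radio node
        self.core_signalling = 0    # count of messages sent to the core

    def attach(self, ue_id, radio_node):
        self.anchored_ues[ue_id] = radio_node
        self.core_signalling += 1   # the initial attach does reach the core

    def handover(self, ue_id, target_node):
        # The session anchor (this controller) does not change during an
        # intra-building handover, so no core signalling is generated.
        self.anchored_ues[ue_id] = target_node

sn = ServicesNode()
sn.attach("ue-1", "radio-node-A")
for target in ["radio-node-B", "radio-node-C", "radio-node-A"]:
    sn.handover("ue-1", target)

print(sn.core_signalling)           # 1 -- only the attach touched the core
print(sn.anchored_ues["ue-1"])      # radio-node-A
```

However many times the device roams between Radio Nodes, the core sees a single attached session at the Services Node.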
How would you scale this up further?
“We plan to evolve our Services Node by virtualising it, allowing it to scale up to the largest scenarios we can envisage. Virtualisation does not change our architecture, but it allows significantly greater overall capacity. In the future we want to be able to scale up to at least 1000 nodes per Services Node. This would ensure we can handle a large campus deployment from a central IT facility. Elsewhere, we could cover a downtown area from a single machine room, rather than needing to deploy an independent Services Node for every enterprise.
“We plan to use Intel-based servers for the virtualised Services Node. We don’t have to, but we think they have the broadest ecosystem for virtualisation technology today. In the future, we would consider other platforms if they make commercial sense.
“This would all be connected through standard Ethernet. We can tolerate up to a 20ms end-to-end delay, a figure that can easily be met by commercial Carrier Ethernet services within the same state or region. We already have installations in London, located up to 10km away, connected using Gigabit Metro Ethernet.”
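A quick back-of-envelope check shows how comfortable that 20ms budget is. The figures below are editorial assumptions (light travels roughly 200 km per millisecond in fibre; the per-hop switching delay is an illustrative guess), not SpiderCloud’s measurements:

```python
# Rough sanity check of a 20 ms fronthaul budget over carrier Ethernet.

SPEED_IN_FIBRE_KM_PER_MS = 200.0    # ~2/3 of c: about 200 km per ms

def propagation_delay_ms(distance_km):
    """One-way propagation delay in fibre."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_MS

def total_delay_ms(distance_km, switch_hops=5, per_hop_ms=0.05):
    """Propagation plus an assumed ~50 us of switching delay per hop."""
    return propagation_delay_ms(distance_km) + switch_hops * per_hop_ms

BUDGET_MS = 20.0
for km in (10, 100, 1000):
    delay = total_delay_ms(km)
    status = "OK" if delay <= BUDGET_MS else "over budget"
    print(f"{km:>5} km: {delay:6.2f} ms  {status}")
```

Under these assumptions the 10km London links come in well under a millisecond, and even links spanning hundreds of kilometres stay inside the budget, which is consistent with the “same state or region” claim.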
What’s your position on Cloud RAN for 3G?
“There is very little discussion around 3G Cloud RAN today; it’s all about 4G and preparing for 5G. While 3G and GSM won’t disappear overnight, I’d expect all major new investment to be in 4G. Existing 2G/3G equipment already serves the areas it needs to and can continue to do so until the spectrum is refarmed. It’s not that there are technical barriers preventing a 2G/3G Cloud RAN; it’s simply that we don’t see market pull from operators to develop it.”
How would Cloud RAN encompass LAA?
“LAA (running LTE in the unlicensed 5GHz Wi-Fi band) has some important implications for Cloud RAN. The key feature is LBT (Listen Before Talk). My view is that you need the LTE baseband to be co-located with the unlicensed channel sensing capability built into the radio heads. If we look at Wi-Fi today, every access point listens, and only when no transmission is detected on the channel will it send out a message to reserve it. The 802.11 specification then grants a maximum of 9 microseconds for data transmission to start. This works because every access point is running the baseband.
“Imagine what would happen if you tried to do this in a DAS architecture, where the baseband is separated from the radio link by several hundred microseconds or even milliseconds of delay. The system simply wouldn’t work effectively alongside existing Wi-Fi. So for LBT to work well, I believe you need agility within the radio access point to react very rapidly. You just can’t do this if you separate the carrier sensing capability from the baseband processing.”
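The timing argument can be sketched numerically. This is an editorial illustration of the reasoning above, using the 9 microsecond slot Jain cites and assumed (not measured) reaction latencies:

```python
# Why LBT needs the baseband next to the carrier sensing: once the channel
# is sensed idle, transmission must start within roughly one 9 us slot, or
# a contending Wi-Fi node will seize the channel first.

SLOT_TIME_US = 9.0

def can_win_channel(sense_to_transmit_us):
    """True if transmission can begin within one slot of sensing idle."""
    return sense_to_transmit_us <= SLOT_TIME_US

# Baseband co-located with the radio head: reaction is on-chip, ~1 us.
print(can_win_channel(1.0))                        # True

# DAS-style split: sensing at the radio but baseband a fronthaul round
# trip away -- an assumed 300 us one-way delay alone blows the budget.
fronthaul_one_way_us = 300.0
print(can_win_channel(2 * fronthaul_one_way_us))   # False
```

With the baseband remote, the sense-decide-transmit loop is dominated by fronthaul delay that is orders of magnitude larger than the slot time, which is the heart of the co-location argument.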
How does this affect Mobile Edge Computing?
“Mobile Edge Apps would typically be hosted on a centralised controller for each Enterprise. Scaling up to handle large campus environments allows us to ensure the same App has access to all users in that scenario.
“Where a larger central server handles multiple buildings and/or businesses, then different Apps can be hosted there, partitioned in their own sandbox for security.
“MEC is still at the stage where the industry is trying to figure out which Apps people really care about and which will generate the most value. We’ve provided the platform and the underlying architecture choices to let the market decide.
“We’ve shown there is a market for MEC but there needs to be more standardisation. ETSI’s participation is key and has also helped attract more software developers. We believe ours is the only architecture today that can amortise a single MEC processor across multiple radio nodes.”