Tom McQuade has been involved with small cells for longer than most: six years at Picochip followed by three years with Radisys. As General Manager for the CellEngine™ small cell software business, his role involves making difficult choices about where to direct engineering resources, prioritising effort between individual customer demands, evolving standards and longer-term product investment. Even with a staff of some 250 engineers across the portfolio (both R&D and customer engagement), some tough decisions have to be made. He shares some insights into the fundamentals behind those trade-offs.
Radisys CellEngine software is platform independent and has been demonstrated on most, if not all, of the main silicon platforms. They have three strategic partnerships, with Intel, Broadcom and Octasic. They supply source code to many small cell vendors, who can then customise, extend or enhance it as they see fit. Roadmap releases keep up with evolving standards and add further features.
What does a software vendor need from a small cell chipset?
SoC (System-on-Chip) vendors need to provide me with three fundamental capabilities:
- Scalar processor(s) with adequate processing capacity (MIPS)
- Baseband processors with adequate signal processing (DSP)
- Layer 1 firmware to decode the physical RF signal (PHY)
After adding our software stack, the resulting small cell product balances performance between three KPIs:
- Active concurrent users
- Peak throughput/data-rate
- Stability
The benchmark for today’s small cell products is set by Intel and Broadcom, whose chipsets handle up to 64 active users with very good throughput and high stability. Running those chipsets with 128 users would impact throughput and/or stability unless further software engineering investment was made to optimise for that.
Lower cost chipsets in the range provide more limited capacity of 8 or 16 users for residential and SoHo products. Other chip vendors offer more powerful chipset designs which can handle higher capacity, more suitable for urban or rural small cells and so can justify their higher cost.
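The segmentation described above can be summarised in a short, purely illustrative sketch. The tier names and user counts below simply restate the capacity figures from the text; this is not a Radisys or chipset-vendor API.

```python
# Illustrative only: chipset capacity tiers as described in the interview.
CHIPSET_TIERS = {
    "residential": {"max_active_users": 8},   # lower-cost chipsets
    "soho": {"max_active_users": 16},
    "enterprise": {"max_active_users": 64},   # today's benchmark (Intel, Broadcom)
}

def tier_for_users(active_users: int) -> str:
    """Pick the smallest tier that can serve the required number of active users."""
    for name, spec in sorted(CHIPSET_TIERS.items(),
                             key=lambda kv: kv[1]["max_active_users"]):
        if active_users <= spec["max_active_users"]:
            return name
    raise ValueError("Exceeds single-small-cell capacity; a higher-capacity design is needed")

print(tier_for_users(12))  # -> soho
```

Urban or rural small cells above 64 users would fall outside these tiers, which is where the more powerful (and more expensive) chipset designs come in.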
Generally speaking, adding features tends to have a second-order performance impact, but something like 4x4 MIMO would have a dramatic effect.
Maintenance also makes demands
We always need to be current on the external signalling interfaces, such as S1 and X2, in order to be communicating correctly with the core network for interoperability reasons. We’ve already shipped 3GPP Release 10 and are working on Release 11. We support the Small Cell Forum/ETSI plugfests to validate interworking across the wider ecosystem.
When SoC vendors launch new chipsets or revise their PHY firmware, we’ll start a new test cycle. This can also involve taking advantage of new or improved capabilities, requiring additional engineering investment.
We’re always looking for insights from carriers (network operators) and SoC vendors to determine where the market is headed, and as a result we generally avoid surprises. Several of the Release 10 LTE-Advanced features are optional and can be deferred or even ignored. A popular capability is Carrier Aggregation, which retains feature parity with the macrocell network (and would also be a precursor for LAA). CoMP (Coordinated Multi-Point) is less urgent for today’s small cells.
What’s your view on RAN Virtualisation?
There are broadly two reasons why you might want to do this.
- To reduce overall system cost, benefitting from consolidating high capacity processing into a single point.
- To increase spectral efficiency, reducing the number of access points/small cells needed to serve an area.
The radio heads used in any of these architectures would still need the same SoC and supporting hardware that we use today – they wouldn’t be significantly cheaper whether or not the full small cell stack runs onboard. So I don’t see any reason to do anything differently below 64 users per small cell on the basis of radio head cost.
Once you get above that capacity per radio head, RAN virtualisation would allow the same hardware to support many more concurrent users. We’ve been working closely with ASOCS on a project which is similar to the Small Cell Forum nFAPI architecture but takes it a step further, introducing greater flexibility. Our stack is inherently suitable for a virtualised environment by the nature of its design, so it can easily adapt to any of the variety of architectures being proposed.
I can also see that for high traffic density situations, a centralised architecture can achieve higher spectral efficiency. There are already some larger enterprise small cell systems with central controllers and/or baseband aimed at that market. The benefit would be that slightly fewer radio heads could deliver the same overall throughput.
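That spectral-efficiency argument can be illustrated with back-of-the-envelope arithmetic. All numbers here are hypothetical, chosen only to show the shape of the trade-off, not vendor or measured figures (the ~20% coordination gain in particular is an assumption for illustration).

```python
import math

# Hypothetical figures purely to illustrate the trade-off described above.
def radio_heads_needed(area_demand_mbps: float, per_head_mbps: float) -> int:
    """Round up: each radio head contributes a share of the area's total throughput."""
    return math.ceil(area_demand_mbps / per_head_mbps)

# Uncoordinated standalone cells vs centrally coordinated cells that suffer
# less inter-cell interference and so deliver more useful throughput each.
standalone = radio_heads_needed(1000, per_head_mbps=100)
centralised = radio_heads_needed(1000, per_head_mbps=120)  # assumed ~20% gain
print(standalone, centralised)  # 10 vs 9: "slightly fewer radio heads"
```

The point is not the exact numbers but that coordination buys a modest reduction in radio head count for the same area throughput, consistent with "slightly fewer radio heads" rather than a dramatic saving.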
Any closing thoughts on market direction?
After many years of substantial investment from silicon, software and equipment vendors, I continue to be optimistic that the small cell market will eventually take off.
I’m often asked whether Carrier Wi-Fi will co-exist with small cells or kill the need for them. I don’t see a 100% market position for either technology – each carrier (network operator) will want to have as big a toolbox as possible, then determine what works best for them. We can expect more intelligence to be added to improve co-existence as time goes by.
Both carriers and vendors have difficult choices to make about where and when to invest in new technologies. Ongoing discussion and debate throughout the industry helps everyone to make the best informed decisions and agree the most appropriate way forward.