It seems counterintuitive, but there is an argument that a few massively capable small cells are a less effective way of providing capacity than many simpler, less sophisticated ones. Read on for an explanation.
The three dimensions of cellular capacity
There are commonly thought to be three main methods of increasing the total capacity of wireless systems:
- Increase spectral efficiency (e.g. using LTE rather than 2G or 3G)
- Use more spectrum (e.g. acquire more licensed spectrum or encroach into unlicensed spectrum using LAA)
- Reuse the spectrum (e.g. shrink cell sizes and deploy lots of small cells)
There is a generally accepted view that spectral reuse (option 3) can release far more capacity density than the other two.
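To see why, it helps to note that total capacity is roughly the product of all three dimensions, and densification is the only one with real headroom. Here is a minimal sketch of that arithmetic; all the figures (spectral efficiencies, bandwidths, cell counts) are illustrative assumptions, not measured values.

```python
# Rough capacity-density model: aggregate capacity scales with the product of
# spectral efficiency (bit/s/Hz), bandwidth (MHz) and cell density (cells/km^2).
# All input figures are illustrative assumptions.

def capacity_density(spectral_eff_bps_hz, bandwidth_mhz, cells_per_km2):
    """Aggregate capacity in Mbit/s per square kilometre."""
    return spectral_eff_bps_hz * bandwidth_mhz * cells_per_km2

baseline      = capacity_density(1.5, 20, 3)   # a macro layer, say
better_air    = capacity_density(3.0, 20, 3)   # doubling spectral efficiency: 2x
more_spectrum = capacity_density(1.5, 40, 3)   # doubling the spectrum: 2x
densified     = capacity_density(1.5, 20, 30)  # ten times as many cells: 10x

print(baseline, better_air, more_spectrum, densified)  # 90.0 180.0 180.0 900.0
```

Spectral efficiency gains per air-interface generation are modest (fractions of a doubling), and new spectrum arrives slowly, whereas cell density can in principle be multiplied many times over.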
So what does this mean for small cell sizing?
For maximum performance, I'd ideally want a dedicated small cell which can allocate all its resources to serving my device. It would have a nice big 20MHz carrier, perhaps augmented by a second one to help out with larger downloads (possibly using LAA). Some might ask for more, wanting gigabit data rates, but a few hundred megabits dedicated to me and backed up with high-speed backhaul would be pretty amazing.
Once this small cell is shared with others, the average capacity available to me is inevitably going to decline. Depending on who uses it and how, I might start to notice delays, and as it gets busier I may start to see buffering. As the cell is shared with more and more users, voice calls and streaming services need to be prioritised to avoid more serious problems, but quality will be affected at some point.
With larger numbers of small cells, each reusing the available spectrum, there is less contention for the available capacity. Fewer users share each cell's large resource. Think of it as driving on an empty freeway which you don't mind sharing with a few others, versus being packed in tightly at rush hour.
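The effect of spreading the same user population over more reused cells can be sketched with a trivial calculation. The per-cell capacity and user counts below are assumptions chosen for illustration.

```python
# Sketch of per-user share of capacity when the same spectrum is reused
# across many small cells instead of one shared cell.
# The capacity figure and user counts are illustrative assumptions.

CELL_CAPACITY_MBPS = 300  # assumed peak capacity of one cell's spectrum

def per_user_mbps(active_users, cells):
    """Average throughput per user, assuming users spread evenly and the
    full spectrum is reused in every cell."""
    users_per_cell = max(1, active_users / cells)
    return CELL_CAPACITY_MBPS / users_per_cell

print(per_user_mbps(120, 1))   # one big shared cell: 2.5 Mbps each
print(per_user_mbps(120, 40))  # forty small cells:   100.0 Mbps each
```

The same 120 users and the same 300 Mbps of spectrum yield a forty-fold improvement per user, purely from reuse.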
Scaling up from small vs downsizing from macro
There seems to be a trend for small cell technology to grow to ever higher capacity per radio node, driven by ever more capable baseband processing chipsets. It's not uncommon to find specifications for 64 or 128 concurrent users in some higher-end enterprise products. These have come a long way from the original four-user residential femtocell designs of some years ago.
Another approach has been to take existing macrocell designs and make them more compact. These can cater for several hundred concurrent users in the same sector, almost in the same ballpark as large scale macrocells powering huge cell towers.
You could get the impression that all small cells will have this huge capacity and processing power in the long term. The downside would be not just the higher cost of the silicon, but also the higher power consumption and supporting circuitry. These larger chips are more akin to those used in cloud servers than in tablets or mobile devices.
Mass deployment brings benefits
One advantage claimed for deploying fewer, more capable small cells is saving on deployment and longer-term operational costs. Larger buildings justify local controllers or similar centralised co-ordination, which enhances the performance of the system. This improves overall quality, reduces dropped calls and squeezes the most out of the available system.
So perhaps a few high-capacity nodes won't meet ever-growing data demand alone, and we should expect many more simpler units to be deployed in the long term. From the viewpoint of spectral efficiency and total system throughput, that makes sense to me.
This leads me to believe that the ideal long-term small cell product for indoor capacity might be relatively limited in concurrent users, but streamlined to handle several 20MHz bands, possibly including LAA. Sufficient backhaul might use Cat5e cable running at 2.5Gbps using NBASE-T.
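A quick back-of-envelope check suggests that a 2.5Gbps link leaves comfortable headroom for such a unit. The per-carrier peak rate below is an assumption (roughly a 2x2 MIMO LTE carrier), chosen only to illustrate the sizing.

```python
# Back-of-envelope backhaul check for the suggested product: a handful of
# aggregated 20MHz carriers, each assumed to peak at around 150 Mbps
# (roughly 2x2 MIMO LTE). Figures are assumptions for illustration.

PEAK_PER_20MHZ_CARRIER_MBPS = 150
NBASE_T_LINK_MBPS = 2500  # 2.5GBASE-T over Cat5e

carriers = 3  # e.g. licensed carriers plus an LAA carrier
peak_air_interface = carriers * PEAK_PER_20MHZ_CARRIER_MBPS
headroom = NBASE_T_LINK_MBPS / peak_air_interface

print(peak_air_interface, round(headroom, 1))  # 450 Mbps peak, ~5.6x headroom
```

Even tripling the aggregated bandwidth would still fit within the link, which is why Cat5e at 2.5Gbps looks sufficient for this class of product.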