What is Network Slicing?

Another new term has been introduced into the mobile network lexicon in recent years – Network Slicing. What is it, and how does it impact network architecture evolution? Is this architecture competitive or complementary to Small Cell evolution?



Quick recap of server virtualisation

Silicon processors have become enormously powerful over the past few decades. In 1971 (45 years ago), Intel's first microprocessor, the 4004, had 2,300 transistors.

1993 saw the Intel Pentium with 3.1 million.

More recently, in 2014, Intel's 18-core Xeon Haswell sports over 5 billion, with some predicting that Moore's Law is now reaching its physical limits.

Dedicating the full power of such CPUs to a single application is overkill. Timesharing, introduced many years ago, allocated fixed proportions of processor time to multiple tasks. Virtualisation takes this a step further, creating several instances of what appears, to the software executing within each one, to be a full, independent hardware platform. Resources (time, minimum memory etc.) can be partitioned and allocated to each instance. Different operating systems and software can run within each, avoiding the need for different variants of hardware platform.

A common implementation of this is found in server farms used for website hosting and Cloud applications. Within each powerful standard server, multiple virtual servers can run, each with its own operating system (Linux, Windows etc.) and disk space. Pricing models can be set at multiple levels depending on the minimum memory and processing power allocated.

Instances of virtual servers can be spun up and down on demand. Some of the largest applications, such as Facebook, are said to run tens of thousands of instances at any one time, with an average lifespan of just 36 hours.

But there’s more processing power distributed in our hands

Equally, huge processing power is available within each smartphone and personal computer. There are around 2 billion smartphones and 2 billion personal computers in use worldwide today, accessing around 1 billion unique websites running on around 10 million servers. [Microsoft had 1 million in 2013, Google had 900K in 2011, Amazon thought to have over 2 million in 2014]

This means there are about 200 smartphones plus 200 personal computers per cloud server worldwide. Servers can be much more powerful than smartphones: for example, the iPhone 6 delivers around 6 GFLOPS (billion floating point operations per second) versus an Intel Xeon Haswell at 200 to 750 GFLOPS. The bottom line is that there's probably more processing power embedded in all our smartphones and PCs than concentrated in the cloud itself.
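A quick back-of-envelope check of those figures (all of them rough public estimates, as quoted above):

```python
# Rough public estimates quoted in the text above
SMARTPHONES = 2e9          # smartphones in use worldwide
PCS = 2e9                  # personal computers in use worldwide
CLOUD_SERVERS = 10e6       # cloud servers worldwide

IPHONE6_GFLOPS = 6         # approximate peak for an iPhone 6
XEON_GFLOPS = 750          # approximate peak for a top-end Haswell Xeon

# About 400 client devices (200 smartphones + 200 PCs) per server
devices_per_server = (SMARTPHONES + PCS) / CLOUD_SERVERS

# Aggregate edge compute (smartphones alone) vs aggregate cloud compute
edge_gflops = SMARTPHONES * IPHONE6_GFLOPS
cloud_gflops = CLOUD_SERVERS * XEON_GFLOPS

print(f"Devices per cloud server: {devices_per_server:.0f}")
print(f"Smartphone/cloud compute ratio: {edge_gflops / cloud_gflops:.1f}x")
```

Even crediting every server with the top-end Xeon figure, smartphones alone edge ahead in aggregate, before PCs are counted at all.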

Definition of Network Slicing

The concept of Network Slicing is similar to server farms – creating multiple instances of parallel network functions running on the same chip. This is extended throughout the Radio Access Network, not just the core. The idea has been touted for a few years, with Ericsson unveiling the concept for their IP routers back in 2013 and promoting it for 5G radio access last year.

An example would be to allocate a fixed partition of network resources (say 10%) for use exclusively by IoT devices. This would set hard limits on the maximum and minimum network capacity used, allowing a different set of QoS parameters to be applied.

Another example is to partition resources for each technology generation, so that a basestation has a known allocation for 2G, 3G and 4G.

Somewhat more radically, if a basestation were to be shared between network operators, then independent virtual network partitions could be allocated to each.

The vision of network slicing goes further than setting standard parameters and includes dynamically creating and terminating virtual slices on-demand.
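A minimal sketch of that on-demand idea, again with hypothetical names: slices are admitted only while guaranteed capacity remains, and terminating one returns its share to the pool.

```python
# Illustrative on-demand slice lifecycle (hypothetical, not a real API)
class SliceManager:
    def __init__(self):
        self.slices = {}  # slice name -> guaranteed share of capacity

    def create(self, name, min_share):
        # Admit the slice only if its guarantee still fits
        if sum(self.slices.values()) + min_share > 1.0:
            raise RuntimeError("insufficient guaranteed capacity")
        self.slices[name] = min_share

    def terminate(self, name):
        # Freed capacity returns to the common pool
        self.slices.pop(name, None)

mgr = SliceManager()
mgr.create("stadium-event", 0.30)  # spun up for a special event
mgr.terminate("stadium-event")     # torn down once the event ends
```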

Ericsson are positioning this technology as part of their 5G vision, alongside SDN and NFV. They demonstrated a lab system with SK Telecom in October 2015.

Demonstrating the concept with LTE today

Cavium and Argela partnered to demonstrate the concept live at MWC. Cavium are known for high-powered, high-performance network processors. Their Octeon Fusion-M processor is targeted at macrocells and "smart radio heads" for LTE Release 11/12. The largest chip in the family supports up to 3,600 simultaneous active users and/or 12 concurrent LTE sectors.
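From the figures quoted, a quick sanity check of what that headline capacity implies per sector if all 12 sectors were active at once:

```python
# Headline figures for the largest Fusion-M part, as quoted above
MAX_ACTIVE_USERS = 3600
MAX_LTE_SECTORS = 12

# If both limits were exercised together, each sector would average
# this many simultaneous active users
users_per_sector = MAX_ACTIVE_USERS / MAX_LTE_SECTORS
print(f"{users_per_sector:.0f} active users per sector")
```

That is comfortably macrocell-class density, which underlines why this silicon targets the high end rather than residential small cells.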

With such large capacity on a single chip, you can see how the concept of partitioning capacity above might also be used here. You could assign defined resources to specific sectors, frequency bands, technology generations or even network operators.

I think it would be fair to comment that these processors are not the lowest cost and are not targeted at the residential or lower-end enterprise market.

Raj Singh, General Manager of the Wireless Broadband Group at Cavium, talked me through the network slicing concept with an LTE demo.

He thought that the architecture was particularly suitable for public venues, potentially replacing Digital DAS. It would allow the same chipset to host multiple carriers and multiple bands using any protocol. The business case for Enterprises, where traffic levels and building sizes may be smaller, could be more challenging.

Argela have pivoted away from their earlier business model and no longer directly offer small cells. Instead they develop IPR and show proofs of concept such as this. Oguz Sunay, CTO of Argela Technologies USA, talked me through the demo on their stand, explaining how an app can be used to create a new virtual network instance on demand. He thought this might be done for special events, with different parameters set depending on the event type.

I am somewhat sceptical that network operators or venues themselves would have the time and inclination for frequent manual configuration to address different usage patterns. It seems to me that more automation, perhaps linking any frequent reconfiguration or change of profiles through a SON system, would be more acceptable. If that is the case, then why not simply use SON to continuously adapt and optimise resources?

Statistical gain reduces nearer to the network edge

While I can see the point of dynamically sharing large centralised resources and taking the time to partition and allocate those between different applications, I struggle to see such clear cost benefits for the majority of small cells.

It seems to me the debate is between deploying fewer, very high-capacity RF heads versus a larger number of lower-capacity independent small cells. As the network densifies, it is spectrum reuse that will provide the greatest capacity gains, and that implicitly requires more small cells (or RF heads).

The cost of the hardware for a “smart RF head” versus a full standalone small cell isn’t that different. The calculation may be different for a very high capacity macrocell, where site space, power and access are important issues compared with an enterprise small cell.

Where maximum capacity is required from limited spectrum, tightly co-ordinated and locally controlled LTE systems such as Airvana's OneCell architecture can squeeze out extremely high capacity density. Most enterprise scenarios should be well served by appropriately dimensioned LTE Enterprise Small Cells, possibly augmented by LAA or Wi-Fi in unlicensed spectrum.


I’m really impressed with the enormous capacity of the latest macrocell processors: more than enough to satisfy the needs of a large macrocell in a single chip.

Dividing and partitioning these resources for dedicated purposes is becoming quite feasible, aimed at the larger macrocells and higher capacity public venues.

For the larger macrocell sites this may have some merit – less equipment onsite means fewer site visits, and centralised baseband processing may save energy. But for smaller, lower-capacity small cells, this seems much less advantageous to me.

It also offers an innovative approach for use by or to replace today’s Digital DAS architectures, allocating clearly partitioned resources for each sector and network operator.

The larger long-term opportunity may be in 5G applications. While it is still early days to determine how that will evolve, high processing capacity and equitable sharing of resources will be important.
