There are many different spectrum options for LTE – more than 50 bands are already standardized – combined with a choice of FDD and TDD modes. This is likely to lead to a two-tier market for smartphones and mobile devices, but what will it mean for small cell vendors?
The diversity of spectrum choices for LTE
One of the great aspects of LTE is that it can be adapted and used in almost any spare piece of spectrum available. It's been designed to squeeze into small gaps, as narrow as 1.4MHz (comparable to the 1.25MHz carriers used by CDMA), or expand up to 20MHz with a commensurate increase in data rate. It can be used in paired spectrum, with separate uplink and downlink frequencies (FDD or Frequency Division Duplex), or ping-pong between sending and receiving on the same frequency (TDD or Time Division Duplex).
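To illustrate how those channel sizes scale, here's a small sketch based on the standardized E-UTRA channel bandwidths and their resource block counts (each resource block is 180kHz wide; the rest of the channel is guard band). The function name is my own, purely for illustration.

```python
# The standardized E-UTRA channel bandwidths (MHz) and the number of
# 180kHz resource blocks each provides.
LTE_BANDWIDTH_TO_RBS = {
    1.4: 6,     # narrow enough to slot into a refarmed CDMA-sized gap
    3.0: 15,
    5.0: 25,
    10.0: 50,
    15.0: 75,
    20.0: 100,  # widest single carrier before Release 10 aggregation
}

def usable_spectrum_mhz(channel_mhz):
    """Spectrum actually occupied by resource blocks (excludes guard bands)."""
    return LTE_BANDWIDTH_TO_RBS[channel_mhz] * 0.18

print(round(usable_spectrum_mhz(20.0), 2))  # 18.0 of the 20MHz carries data
```

This is why the data rate rises roughly in proportion to channel width: a 20MHz carrier offers about 16 times the resource blocks of a 1.4MHz one.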
As if that wasn't enough, Release 10 introduced carrier aggregation. You can split your data session across several different frequencies and bands, combining them to achieve a much higher overall peak data rate.
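As a back-of-envelope sketch (not from any spec), carrier aggregation simply sums the capacity of the component carriers. The 5 Mbps-per-MHz figure below is an assumed rule of thumb, roughly matching the ~100 Mbps often quoted for a single 20MHz downlink carrier with 2x2 MIMO:

```python
# Assumed downlink spectral efficiency (rule of thumb, not a spec value):
# ~100 Mbps over a 20MHz carrier gives roughly 5 Mbps per MHz.
MBPS_PER_MHZ = 5.0

def aggregated_peak_mbps(carrier_bandwidths_mhz):
    """Rough peak downlink rate when a session is split across carriers."""
    return sum(bw * MBPS_PER_MHZ for bw in carrier_bandwidths_mhz)

# Hypothetical holdings: a 20MHz carrier in one band aggregated with a
# 10MHz carrier in another.
print(aggregated_peak_mbps([20, 10]))  # 150.0
```

Real achievable rates depend on the device category, MIMO configuration and radio conditions, but the principle is simply additive.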
3G was much simpler
3G was far more straightforward. Spectrum regulators around the world worked hard to free up a common band of 2100MHz. A few regions went their own way – Japan at 1500MHz (although one operator did use the 2100MHz and captured international roaming traffic), Australia struck out at 850MHz (and achieved awesome range of up to 200km), and North America continued to use the 1900MHz band.
What this meant was that device vendors didn't have to think too hard about which frequencies to develop for. While the range of 3G handsets for Australia and Japan was initially more limited, pretty much every 3G handset included the 1900 and 2100MHz frequencies so that it worked everywhere.
With the 3G data service being so similar to 2G, data roaming was commercially and technically easy to implement – albeit quite expensive.
A fragmented LTE spectrum
Many of the 50+ LTE commercial systems operate on different frequencies and modes. It's a very diverse and fragmented marketplace, and likely to continue.
For example in Latin America, the continuing operation of analogue TV broadcast means that the popular 700-800MHz band used in the US and some parts of Europe (where the digital TV switchover has been completed) isn't available. So instead, they've recently proposed adopting the 450MHz band which will give even better range and in-building penetration for rural and remote areas.
The more popular frequencies include the 700MHz and 2600-2700MHz bands (the latter including a piece allocated to TDD in the middle). Spectrum pricing for the 700MHz range averages some 12 times more than that for 2600MHz worldwide, probably related to the longer range and better in-building penetration of the lower frequency. This translates to fewer cellsites being required (until capacity demands more cellsites, at which point small cells come into their own).
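The range advantage of lower frequencies can be sketched with a simple free-space path loss calculation. Real network planning uses empirical models (Hata, COST-231) and full link budgets, so the numbers below only show the trend, and the 120dB budget is an arbitrary illustrative assumption:

```python
import math

# Free-space path loss: FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
def cell_radius_km(f_mhz, max_path_loss_db=120.0):
    """Distance at which free-space loss uses up the allowed link budget."""
    return 10 ** ((max_path_loss_db - 32.44 - 20 * math.log10(f_mhz)) / 20)

r700 = cell_radius_km(700)
r2600 = cell_radius_km(2600)

# Under this model the radius scales inversely with frequency, and the
# covered area (hence the relative site count) with its square.
print(round(r700 / r2600, 2))         # 3.71 -> ~3.7x the radius at 700MHz
print(round((r700 / r2600) ** 2, 1))  # 13.8 -> ~14x fewer sites for coverage
```

Interestingly, the roughly order-of-magnitude reduction in site count is in the same ballpark as the 12x price premium mentioned above, though real-world propagation at low frequencies is usually even more favourable indoors.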
We've also seen some operators keen to "refarm" their existing 2G and 3G spectrum, which if permitted by the regulator would allow them to deploy or expand the more efficient LTE technology without buying additional spectrum. This may also provide a faster route to market, especially if the new spectrum has to be cleared first (i.e. move existing users such as public safety users to a different system).
A conundrum for device manufacturers
This diversity presents device manufacturers with a difficult conundrum. It would be very hard to develop smartphones with the same size, weight and battery life that could adapt to all these different frequencies and modes. They have to pick a few and target which markets to operate in.
Undoubtedly, the mainstream device vendors will choose those options expected to be most common - especially in developed countries where prices should be higher.
A clear example of this is the latest Apple iPad (spec sheet), which includes LTE that works in the US (on the 700MHz and 2100MHz bands) but is incompatible with European (800MHz and 2600MHz) or Australian (1800MHz) frequencies. After advertising it as a 4G device, Apple has had to offer refunds to consumers who were misled into thinking it would work with their own country's LTE networks.
A two tier market for devices or a growth in Mi-Fi?
This complexity isn't going to suddenly disappear. It's really technically quite difficult to make a device with the adaptability to handle all these different options, especially at lower frequencies.
Perhaps there will be growing demand for "Mi-Fi" devices, which provide us with our own personal area network and drive the various smartphones, tablets and other gadgets we'll be carrying around.
I've also heard it said that some of the Tier 2 device vendors have already targeted these slightly more unusual bands. With a plethora of Android smartphones and tablets available from many ODMs, demand will still be significant even in low ARPU countries.
What this could mean is that in some countries, a different range of devices and handsets will be available. Mainstream iconic brands would have to be used either via 3G or via a Mi-Fi unit.
Time will tell if this becomes true.
What does this mean for small cell vendors?
With forecasts for public access LTE small cells high, many vendors are developing their own products. It's relatively easy to modify the RF front end to use a different frequency (or even mode), although different passive components may have to be used.
The latest small cell broadband chipsets are capable of handling 3G and LTE in the same chipset (and/or multiple LTE carriers at different frequencies), to achieve the highest peak rates likely to be used in the field.
In addition, some RF chipset vendors claim a much wider range and adaptability to match these different requirements, leading to fewer differences in the manufacturing supply chain. The choice of a common RF part that can be used in multiple markets will reduce costs and complexity. Where 3G and LTE are combined or multiple LTE carriers used, careful choice of the RF will again be important.
Other than the physical layer, there are very few issues in the software stacks and backend related to spectrum choice, although I suspect some might argue there is significant intellectual property (and performance benefit) from Self-Organising Networks (SON) features that take the frequencies and operating modes fully into account.
Do you agree with the summary above? If not, or if you want to expand on the points made, feel free to comment below – you can do so anonymously if you prefer.