I've read several articles recently arguing that mobile broadband networks are quickly filling up with traffic, affecting the end-user experience. The dramatic reduction in pricing (1000 times cheaper), together with the wide availability of USB data dongles and 3G service, means that mobile broadband is moving from occasional outdoor/on-the-road access to being marketed as a direct competitor to DSL wireline broadband at similar or even lower prices.
There are concerns that the networks aren't geared up to take the strain (particularly of streaming video), and operators are actively looking at alternative ways to satisfy customer demand. One example is that Apple iTunes downloads to the iPhone are restricted to WiFi rather than being sent over the mobile broadband network.
One entrepreneur has even suggested a combined power-charger and dataloader for your mobile phone, which would receive data from its servers over the power line and the broadband router in your house. This is similar to schemes proposed by ip.access and others, where data synchronisation occurs through your femtocell at home rather than while you are out and about.
The bottlenecks for wireline broadband
In a wireline broadband service, such as DSL, the capacity bottleneck is not between the subscriber premises and the local telephone exchange (also called the Central Office). It's usually somewhere between the local telephone exchange, where the DSL traffic is terminated and concentrated, and one of the national internet exchange peering points. Contention ratios of 50:1 are typical. Pricing for businesses and premium users takes advantage of this, charging extra for lower contention ratios of 20:1 or 10:1 and for prioritising traffic above other users. Some ISPs offer a premium traffic service so that VoIP (which can be patchy) operates much more effectively and sounds better, rather than losing packets and dropping short bursts of audio at peak times.
Many countries are investing in massive programs to increase the performance and capacity of the "last mile", using technologies such as ADSL2+ (offering up to 24Mbit/s) or fibre to the kerb (typically up to 40Mbit/s) or even fibre to the home (typically 100Mbit/s). My own DSL service here in the UK gives me a line speed of about 5Mbit/s even though I'm something like 3-4km from the local exchange. However, I'd still expect the performance of my wireline broadband to be limited mostly by the contention ratio at my local exchange - something that should be cheaper to fix than recabling every individual home.
Some countries have relatively poor international capacity, so accessing servers and websites in the same country works very well while international access can be poor. I've found this to be true in several Asia-Pacific countries, including Australia and Malaysia, but have no objective measures of relative international interconnection capacity.
Marketing Promotions for Wireline Broadband
The marketing promotions from ISPs and other broadband providers tend to focus on the peak transmission capacity of the "last mile" and on the monthly data allowance. Other decision factors, such as the quality of customer care and billing, are often ignored but are becoming more important; several consumer groups publish survey results on these aspects. Some ISPs also offer premium packages to businesses and heavy users, with lower contention ratios and prioritisation of VoIP and/or streaming IPTV traffic.
Traffic Shaping is coming to wireline networks
There is evidence that many ISPs are investing in more active management of their data traffic to improve the customer experience whilst keeping costs in check.
A typical setup might involve the ISP marking packets at the DSLAM and using DiffServ to define three or four levels of priority including VoIP, IPTV and best effort.
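To make this concrete, here is a minimal sketch of how such DiffServ marking might look in software. The class names and the mapping are my own illustration; the DSCP values follow the standard recommendations (EF for voice, AF41 for video, default for best effort):

```python
# Illustrative sketch only: maps traffic classes to DiffServ code points.
# DSCP values follow standard recommendations; the class names are assumptions.
DSCP_MAP = {
    "voip":        46,  # Expedited Forwarding (EF): lowest latency
    "iptv":        34,  # Assured Forwarding AF41: high-priority video
    "best_effort":  0,  # Default Forwarding: everything else
}

def mark_packet(traffic_class: str) -> int:
    """Return the DSCP value a DSLAM might stamp into the IP header."""
    return DSCP_MAP.get(traffic_class, DSCP_MAP["best_effort"])
```

Downstream routers then only need to honour the DSCP field; the classification decision is made once, at the edge.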
I think we can expect more visibility of traffic shaping and QoS in the coming years, so that priority is given to traffic that requires it. This will result in either changes to the user experience or differential pricing for heavy and high-priority data users. It will rely on DPI (Deep Packet Inspection) to track and actively manage each packet and data session at the edge of the network.
The bottlenecks for Wireless Broadband
Wireless service providers also have two bottlenecks: the "last mile" provided by the radio interface using a shared resource for all users within a geographic area, and the backhaul connection from the cellsites into their core network. In the days of voice only traffic, operators would normally provide 100% backhaul capacity so that the full voice carrying capacity of each cellsite could be supported at all times. With the onset of data traffic, where usage can vary widely, many cellsites have more total radio capacity than backhaul - effectively providing a similar contention mechanism as found in the wireline DSLAMs.
As with wireline operators, wireless providers are using a number of methods of increasing their "last mile" capacity using techniques such as improved RF modulation (HSPA+), different RF technology (4G), buying additional spectrum and/or increasing the number of cellsites. Backhaul transmission from the cellsites is also going through a major change, with high capacity metro-ethernet links delivered by fibre and microwave as well as other options.
The shared resource of broadband radio capacity within a wireless network does come with some quality of service capabilities, but these are not yet fully utilised or marketed. These can also allow prioritisation of data traffic sessions on an individual basis. I'm not aware of any mobile broadband operator today that offers a "premium" or "business" broadband data package - all users are effectively offered the same best effort capacity.
Deep Packet Inspection is one solution
The telecoms industry is investing heavily in DPI (Deep Packet Inspection): everyone from silicon chip designers to gateway vendors and software houses is making substantial developments in this field. This is not quite the same issue as the "net neutrality" debate, because prioritisation should apply in the same way regardless of whether services are provided and delivered by the operator or a third party. The technology examines each data packet as it enters the system and classifies it based on traffic type, session type and the user's tariff plan.
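As a rough illustration of that classification step, the sketch below assigns a priority from a simplified traffic-type guess and the subscriber's tariff. Real DPI engines use far deeper payload and flow analysis; the port and prefix heuristics here are stand-ins of my own:

```python
# Hypothetical DPI classification step (simplified heuristics, not a real engine).

def classify(payload: bytes, dst_port: int) -> str:
    """Guess the traffic type from port and payload prefix."""
    if dst_port in (5060, 5061) or payload.startswith(b"SIP/"):
        return "voip"
    if b"BitTorrent protocol" in payload[:30]:
        return "p2p"
    return "web"

def priority(traffic_type: str, tariff: str) -> int:
    """Lower number = higher priority; premium subscribers move up one level."""
    base = {"voip": 0, "web": 1, "p2p": 2}[traffic_type]
    return max(base - 1, 0) if tariff == "premium" else base
```

The key point is that the user's tariff plan feeds into the decision alongside the traffic type, which is exactly what would enable differentiated packages.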
Already, wireline ISP's such as the UK's PlusNet use this technology to groom and shape the traffic through their network for the benefit of all users. Their traffic shaping policy applies between the peak hours of 4pm and midnight, during which peer-to-peer traffic is substantially reduced. They publish their traffic mix and claim much higher customer satisfaction levels than those operators who do not do this.
Wireless operators are also likely to deploy DPI in the near future, and can use both this and the built-in QoS policing capabilities of their radio networks to offer a differentiated service. I believe we can expect mobile broadband providers to offer a range of packages, from a best effort service similar to today's through to a premium service that reliably supports applications like VoIP and delivers consistently faster data throughput. QoS cannot be guaranteed in these cases (except perhaps for low data rate services such as VoIP), but those with a premium data service would always get a better service.
A more sophisticated approach would be to include a capacity limit for premium service (which is used first), after which you drop down to a lower quality/speed service (or pay more). This is similar to the schemes we see for some wireless ISPs which dramatically reduce performance from megabit speeds down to 50kbit/s or less when the usage cap is exceeded.
Offloading traffic is effective, but needs an incentive
Once we have these premium versus best effort mobile broadband service packages in place, customers may be more interested in actively offloading data traffic from the premium macrocellular network. There has to be some incentive for them to change behaviour: either it's cheaper or it works faster/better. AT&T and other mobile operators include access to almost 20,000 WiFi hotspots - indeed, AT&T bought Wayport for $275M this week so it can offer free access to iPhone and Blackberry users. Devices such as the iPhone automatically seek out and use WiFi where it's available, freeing up capacity in the outdoor wireless broadband network. We are likely to see the growth of stealthy engineering solutions on mobile devices that offload data wherever possible.
But I would see a commercial offer being required to incentivise the end-user to change their behaviour.
A possible 4-tier data tariff scheme
I propose a tariff bundle structure which involves:
|Tier 1|Premium data sent over the outdoor mobile network|Small limit|
|Tier 2|Best effort data sent over the outdoor mobile network|Medium limit|
|Tier 3|Data sent using public WiFi hotspots or 3rd party femtocells|Medium limit|
|Tier 4|Data sent over home/office WiFi or femtocell|Unlimited or very large limit (say 100GB)|
Larger/more expensive bundles would be available for higher capacity premium usage of Tier 1.
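One way to picture the accounting behind such a scheme is the sketch below, where premium (Tier 1) usage spills over into best effort (Tier 2) once its allowance is exhausted. The cap figures are illustrative assumptions, not proposed values:

```python
# Sketch of 4-tier usage accounting; monthly caps in GB are assumptions.
CAPS = {1: 2, 2: 10, 3: 10, 4: 100}

def record_usage(usage: dict, tier: int, gb: float) -> int:
    """Charge gb of traffic to a tier and return the tier actually charged.

    If premium (Tier 1) would exceed its cap, the traffic falls back to
    best effort (Tier 2), mirroring the drop-down described above.
    """
    if tier == 1 and usage.get(1, 0) + gb > CAPS[1]:
        tier = 2  # premium allowance used up: fall back to best effort
    usage[tier] = usage.get(tier, 0) + gb
    return tier
```

For example, a user with a 2GB premium cap who has already used 1.5GB would see their next 1GB session charged to Tier 2 instead.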
Associated aspects and clarifications:
- Providing a femtocell to the customer free of charge. Data traffic through your own femtocell would be free of charge - it competes with WiFi that is perceived as free today.
- Access from a wide range of public WiFi hotspots free of charge (in a flat rate bundle) with a large usage cap to avoid abuse.
- Premium QoS outdoor broadband data up to a lower fixed capacity limit
- Best effort mobile data either on request (from your application) or where you've exceeded the fixed capacity limit (up to a secondary capacity limit).
- Some benefit from opening up your femtocell for use by others - perhaps rewarded with an increased premium capacity allowance when out and about.
- Clear information and visibility of current usage of your monthly limits - a bit like a fuel meter in your car.
Perhaps the user experience could be stepped or staged in several increments: for light users, and at the start of each billing cycle, it works very well; if you then use it extensively during the month, performance degrades. The usage caps for individual users would need to start at different points during the month, so they may not be aligned with the billing cycle. This would encourage users either to upgrade to a higher capacity/premium service (at a higher price) or to manage scarce network resources more carefully. A knock-on effect would be to encourage software application developers to ensure that their programs work efficiently. Meanwhile, occasional or thrifty users would find the performance excellent for the times they use it.
Comparable schemes can be found in the use of other shared resources. One example is peer-to-peer filesharing, where those who have already downloaded a file can make it available for upload and sharing with others. Systems like BitTorrent track the ratio of download and upload for each user, rewarding those who continue to provide uploads for the benefit of the community with better download services. This self-policing scheme encourages users to behave generously, leaving their downloaded files available for sharing more of the time.
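A toy version of such a share-ratio incentive, translated to the network-sharing context, might look like this (the thresholds and tier names are assumptions of mine, chosen only to show the shape of the mechanism):

```python
# Toy share-ratio incentive, similar in spirit to BitTorrent communities:
# users who contribute more than they consume get preferential treatment.

def share_ratio(contributed_gb: float, consumed_gb: float) -> float:
    """Ratio of capacity contributed (e.g. via an open femtocell) to consumed."""
    return contributed_gb / consumed_gb if consumed_gb else float("inf")

def service_level(contributed_gb: float, consumed_gb: float) -> str:
    ratio = share_ratio(contributed_gb, consumed_gb)
    if ratio >= 1.0:
        return "priority"   # generous sharers get better service
    if ratio >= 0.5:
        return "standard"
    return "throttled"      # persistent consumers are deprioritised
```

The same logic could reward users who open their femtocell or WiFi to others with an increased premium allowance when out and about.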
Perhaps we can learn a lesson from these schemes when looking to encourage users to share the precious mobile broadband spectrum through femtocells and WiFi hotspots.