Wednesday 18 December 2013

Optimal spectrum refarming for LTE


When looking to refarm some spectrum for LTE (e.g. 1800MHz spectrum from GSM) the following simple approach will lead to optimal results.

Start by deciding how much spectrum you would ideally refarm. This will typically be 20MHz. Assuming this were possible, pick the centre frequency for this allocation; this will be your EARFCN. Then look at how much spectrum you can actually refarm. This will typically be less, as traffic on the legacy RAT might not have reduced enough, or frequency re-planning your whole legacy network will take time. Most operators go for 10MHz, but in some cases 5MHz is also used.

Deploy your network.

After some time has passed and more spectrum is available, keep the centre frequency the same and simply expand the bandwidth. Some cells might be using 10MHz, others 15MHz or 20MHz, but because the centre frequency has not changed, all mobility can be intra-frequency. No need for inter-frequency handovers, no need for additional neighbour planning, no need for measurement gaps, no need for additional SIBs to be broadcast. UEs will seamlessly reselect and hand over, taking the bandwidth in use into account every time, as this is broadcast in the MIB, which is read in idle mode and after every handover.
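The fixed-centre idea can be sketched numerically. This is a small illustration assuming band 3 (1800MHz), where TS 36.101 gives F_DL = 1805 + 0.1 x (EARFCN - 1200); the EARFCN value 1650 is purely hypothetical.

```python
# Sketch of fixed-centre bandwidth expansion, assuming 3GPP band 3
# (1800MHz) where F_DL = 1805 + 0.1 * (EARFCN - 1200) per TS 36.101.

def earfcn_to_dl_mhz(earfcn):
    """Downlink centre frequency in MHz for a band 3 EARFCN."""
    return 1805.0 + 0.1 * (earfcn - 1200)

def channel_edges(earfcn, bw_mhz):
    """Lower and upper channel edges for a given bandwidth."""
    centre = earfcn_to_dl_mhz(earfcn)
    return centre - bw_mhz / 2, centre + bw_mhz / 2

# Hypothetical EARFCN picked as the centre of the full 20MHz target.
earfcn = 1650
for bw in (10, 15, 20):
    low, high = channel_edges(earfcn, bw)
    print(f"{bw:>2}MHz cell: {low:.1f} - {high:.1f} MHz "
          f"(centre {earfcn_to_dl_mhz(earfcn):.1f})")
```

Every bandwidth is symmetric around the same centre frequency, so cells running 10, 15 or 20MHz all remain intra-frequency neighbours of each other.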

Although the above might sound like the obvious way of doing things, both EE in the UK (see here) and other LTE deployments (see here) don't follow it, but rather offset their two bandwidth allocations, leading to needless inter-frequency mobility.

Sunday 8 December 2013

PRACH preamble power considerations in LTE

Unlike UMTS, the PRACH in LTE is used only for the transmission of random access preambles. These are used when the UE wants to access the system from RRC idle, as part of the RRC re-establishment procedure following a radio link failure, during handover or when it finds itself out of sync.

As part of the PRACH procedure the UE needs to determine the power to use for the transmission of the preamble, and for this it looks at SIB2 for the preambleInitialReceivedTargetPower IE. As shown in the extract above (taken from a live network) this is expressed in dBm, and in this specific case it is set to -104dBm. So this is the expected power level of the PRACH preamble when it reaches the eNodeB.

Also broadcast is the reference signal power, which in our case is set to 18dBm. Based on this and a current measurement of the RSRP, the UE can determine the pathloss. Once it knows the pathloss it can then determine how much power it needs to allocate to the PRACH preamble so that it reaches the eNodeB at -104dBm.

So let's say that the UE measures an RSRP of -80dBm. Based on the broadcast reference signal power it can calculate the pathloss, PL = 18 - (-80) = 98dB. This means that for a preamble to reach the eNodeB at -104dBm it needs to be transmitted at PPRACH = -104 + 98 = -6dBm. That is fine.

But what happens if we consider other values of RSRP? For example cell edge? Cell edge can be determined by the value of the qRxLevMin. Looking at SIB1 from the same network we can see that this is set to -128dBm (IE x 2). 

So at an RSRP of -128dBm the pathloss is PL = 18 - (-128) = 146dB, meaning the UE needs to transmit the preamble at PPRACH = -104 + 146 = 42dBm. Is this OK? Actually no, as LTE UEs are only capable of transmitting at a maximum power of 23dBm. Does this mean the UE does not even go through the PRACH procedure? No, but it will be limited to transmitting at 23dBm, meaning that the preamble will reach the eNodeB at -123dBm, which means that the probability of a successful detection is very low.

In fact, based on this network, we can say that anywhere in the cell where the RSRP is below -109dBm, a PRACH attempt will be power limited, with a lower probability of detection. This is something to think about next time your LTE signal strength is low and your phone seems unresponsive..
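The arithmetic above fits in a few lines, using the values quoted from this network's SIBs (-104dBm preamble target, 18dBm reference signal power, 23dBm UE maximum):

```python
# Back-of-the-envelope PRACH preamble power check.

TARGET_DBM = -104      # preambleInitialReceivedTargetPower from SIB2
REF_SIGNAL_DBM = 18    # broadcast reference signal power
UE_MAX_DBM = 23        # power class 3 UE maximum output power

def preamble_tx_power(rsrp_dbm):
    """Required (uncapped) preamble power, and whether the UE is power limited."""
    pathloss = REF_SIGNAL_DBM - rsrp_dbm
    required = TARGET_DBM + pathloss
    return required, required > UE_MAX_DBM

print(preamble_tx_power(-80))    # (-6, False): the example above
print(preamble_tx_power(-110))   # (24, True): capped at 23dBm
```

The power-limited boundary falls where pathloss equals 23 - (-104) = 127dB, i.e. an RSRP of 18 - 127 = -109dBm, matching the figure above.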

Sunday 27 October 2013

3.2Mbps @ 2003? I don't think so..

Three UK have put up the above graphic on their website, here, depicting their network evolution from a throughput point of view.

It is quite nice to look at, but was 3.2Mbps possible in 2003? I don't think so, as at that time only R99 networks were available and the maximum throughput was 384kbps. Speeds only increased with the first HSDPA networks in 2005, and even then they were limited to 1.8Mbps (category 12 devices).

Let's see how long it takes for them to correct this.. :)

Wednesday 16 October 2013

Deep dive into commercial LTE networks

I was recently in Greece and took the opportunity to have a closer look at two commercial LTE networks in order to understand what capability and configuration operators are using in the field.

The networks in question were Cosmote and Vodafone. Cosmote launched 4G around November 2012, initially limited to PS devices (dongles, Mi-Fi etc.), and later added support for smartphones. Vodafone has had a very limited LTE offering since the end of 2012, also limited to PS devices, but has since expanded its LTE network and added support for smartphones.

Spectrum:
Both operators are using re-farmed 1800MHz spectrum to support 4G services. Each operator seems to have re-farmed 20MHz of spectrum, which is split into a 10MHz block and an overlapping 20MHz block. In busy/important areas the 20MHz block is used; in other areas, the 10MHz block. Obviously, as the spectrum is overlapping, the two blocks cannot be used in the same geographic area. Details are shown below.
Vendors:
Vendor details are not usually shared in the public domain although occasionally certain vendors/operators do make announcements about awarded contracts. In this particular case I could not find any announcements in the public domain but what I call "L3 signatures" indicate that Vodafone are using Huawei EUTRAN while Cosmote uses NSN.

Idle Mode Selection/re-selection:
Unlike 3G which uses both a quality (Qqualmin) and signal strength (Qrxlevmin) selection criterion, LTE only uses signal strength (at least in rel8). Vodafone have configured their Qrxlevmin to -128dBm RSRP while Cosmote uses -130dBm. Although the standard allows for up to -140dBm, both of these values can be considered quite low. The obvious benefit is that UEs stay in LTE for longer, the obvious question is what is performance like (especially in the uplink) at such low RSRP values?

Specifically for intra-frequency cell re-selection, Vodafone UEs start searching when the RSRP is equal to or less than -68dBm. They perform the reselection if the neighbour cell is 4dB stronger.

Cosmote UEs start searching when the RSRP is equal or less than -68dBm as well but perform the reselection if the neighbour cell is 2dB stronger.

From an inter-frequency (not applicable for these networks) and IRAT re-selection point of view, LTE uses priorities similar to HCS. The configured priorities are shown below. As can be expected 4G has the highest priority followed by 3G and finally 2G.
Vodafone UEs will start measuring lower priority RATs at -118dBm and reselect when the serving cell RSRP falls below -128dBm.
Cosmote UEs will start measuring lower priority RATs at -124dBm and reselect at the same threshold.
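The priority-based idle mode behaviour described above can be sketched as a simple decision function. This is a deliberately simplified model using the Vodafone values; it ignores Treselection timers and the target cell's own suitability check.

```python
# Simplified sketch of priority-based idle mode reselection towards
# lower priority RATs (Vodafone values from the text above).

SEARCH_THRESHOLD_DBM = -118    # start measuring lower priority RATs
LEAVE_THRESHOLD_DBM = -128     # leave LTE below this serving RSRP

def idle_mode_action(serving_rsrp_dbm):
    """What the UE does at a given serving cell RSRP."""
    if serving_rsrp_dbm > SEARCH_THRESHOLD_DBM:
        return "camp on LTE, no inter-RAT measurements"
    if serving_rsrp_dbm > LEAVE_THRESHOLD_DBM:
        return "camp on LTE, measure 3G/2G"
    return "reselect to lower priority RAT"

for rsrp in (-100, -120, -130):
    print(rsrp, "->", idle_mode_action(rsrp))
```

For Cosmote the two thresholds collapse into one (-124dBm), so the middle "measure but stay" band disappears.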

Intra-frequency mobility:
Intra-frequency mobility in LTE is governed by event A3. A comparison of the A3 configuration for each operator is shown below.
a3-offset is IE x 0.5dB, so Vodafone trigger a handover when the neighbour cell is a3-offset + hysteresis stronger, which equates to 3dB. Cosmote also trigger a handover at 3dB; however, they don't make use of the hysteresis.
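The A3 entry condition itself (TS 36.331) is the inequality below; the cell- and frequency-specific offsets are omitted here since both networks run a single carrier, and the measurement values are illustrative.

```python
# Event A3 entry condition, simplified: the neighbour (mn) must beat the
# serving cell (mp) by a3-offset plus hysteresis before timeToTrigger starts.

def a3_triggered(mn_dbm, mp_dbm, a3_offset_db, hysteresis_db):
    """Mn - Hys > Mp + Off, per the TS 36.331 A3 entering condition."""
    return mn_dbm - hysteresis_db > mp_dbm + a3_offset_db

# Illustrative 3dB total margin (2dB offset + 1dB hysteresis).
print(a3_triggered(-90, -92, 2.0, 1.0))   # neighbour 2dB stronger: not yet
print(a3_triggered(-88, -92, 2.0, 1.0))   # neighbour 4dB stronger: triggered
```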

Connected mode IRAT mobility:
Both operators use event A2 to trigger IRAT mobility actions. The mobility mechanism itself is through an RRC connection release with redirect (PS handover is not supported). Vodafone use a measurement based approach. Two A2 thresholds are defined. The first at -120dBm RSRP triggers measurements against 3G & 2G neighbour cells. Depending on what is detected by the UE the appropriate re-direct RAT is selected. The second A2 threshold at -126dBm, is used to trigger a blind redirect to 3G directly. This is used when the UE does not report anything back following the first A2 event.

Cosmote on the other hand only use the blind re-direct and the UE is always re-directed to 3G at -124dBm RSRP.

CSFB:
Both operators use "basic" CSFB (i.e no DMCR, no SIB tunneling) to 3G (2100MHz band).
Interestingly enough, Cosmote use a CSFB inter-working function as described here. Although this eliminates any TAC to LAC planning it does create an additional call setup delay as shown from the measurements below.
RRC connection management:
The RRC state machine in 4G is very simple, as only two states are defined: Idle and Connected. To transition from RRC_CONNECTED to RRC_IDLE, Vodafone use a 5s inactivity timer while Cosmote use a 30s inactivity timer. It can be expected that 5s leads to an increase in signalling, while 30s will impact battery life (assuming connected mode DRX is shorter than idle mode DRX, which in this case it is).
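The trade-off can be illustrated with a toy model: given the gaps between packets, each gap longer than the inactivity timer produces one release to idle (and a subsequent connection setup when traffic resumes). The traffic pattern below is invented for illustration.

```python
# Toy model of the inactivity timer trade-off: count how many
# RRC_CONNECTED -> RRC_IDLE transitions each timer setting produces.

def idle_transitions(gaps_s, inactivity_timer_s):
    """Number of releases to idle for the given inter-packet gaps."""
    return sum(1 for gap in gaps_s if gap > inactivity_timer_s)

# Hypothetical smartphone traffic: chatty keep-alives with a few pauses.
gaps = [2, 8, 1, 40, 3, 12, 60, 2, 25]

print("5s timer :", idle_transitions(gaps, 5), "transitions")
print("30s timer:", idle_transitions(gaps, 30), "transitions")
```

The short timer produces more transitions (more signalling); the long one keeps the UE connected through the pauses (more battery drain, given the shorter connected mode DRX).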

That is it, quite an interesting deep dive into commercial LTE deployments and it also establishes something of a baseline for other networks I look at.

Thursday 29 August 2013

A breath of fresh 4G air


Today was a "historic" day for 4G in the UK. After almost 11 months of 4G monopoly by EE, two more operators, O2 and Vodafone, launched 4G services. But considering their tariffs are high and their coverage is poor, that is not what I wanted to talk about.

What drew my attention today is the announcement from Three UK, the smallest of the 4 operators and the undisputed disruptor in the otherwise stale UK market that described the terms of their 4G offering due in December this year.

I quote "..Every customer with a 4G ready device will get a 4G upgrade at no extra cost, with no need to go to a store, no need for a new contract, Sim or tariff change. The All You Can Eat data offering will still be available on 4G".

Now, that is the way to launch 4G in my opinion. Why spend millions upgrading your network to 4G only to then apply such high tariffs that nobody uses it? In today's smartphone world every operator's 3G network is heavily overloaded, so it makes perfect sense to me to offer 4G as an extension to someone's data contract and not charge extra for it. That means you are getting a return on investment, and you are also improving the customer experience on 3G by offloading data traffic from it, thus reducing churn.

Saturday 17 August 2013

LTE contracts by vendor

A few days ago I came across an interesting article that lists the proportion of LTE contracts awarded by vendor worldwide (as of August 2013). The source is Informa Telecoms & Media and according to them the data has been verified by the vendors themselves.

Clearly two companies, China's Huawei and Sweden's Ericsson, dominate the market, with Huawei at a slight advantage. Third is NSN, followed by "Other", which includes Samsung, Alcatel-Lucent and ZTE. Of these, Samsung and ZTE can be considered developing, while Alcatel-Lucent seems a shadow of its former self.

I guess what this article fails to mention is the scale of these contracts. From that perspective Ericsson would definitely be ahead as it has secured big contracts in the US, Korea & Japan where the vast majority of LTE users and traffic is today (approx 85% according to some recent stats published by the GSMA). Huawei on the other hand is pretty much banned from the US so that doesn't help either.

The full article can be found here

Thursday 25 July 2013

VoLTE checklist


Had enough of CSFB call set up delays, delays in returning to LTE after voice call termination, additional signalling due to routing/location/tracking area updates?

Time to deploy VoLTE!

Here is my checklist to get things started..

1. An IMS network

2. UEs that support VoLTE

3. Support for QCI 1 (voice) and 5 (IMS signalling)

4. Support of QoS on your LTE network to prioritise QCI 1 & 5 and pre-empt other QCIs if needed

5. Semi-persistent scheduling so you don't run out of PDCCH capacity

6. Support for RoHC

7. A dropped call rate on LTE equal or better than the CS dropped call rate on the legacy network

8. Support for RRC Re-establishment (intra & inter eNodeB) to help with the above

9. Support for TTI bundling to improve cell edge performance

10. If your LTE coverage is not as good as the legacy network, then SRVCC support to hand over calls from VoLTE to the legacy CS domain

11. Upgrade legacy MSCs to connect to IMS

12. Upgrade legacy MSCs to connect to the MME via the Sv interface

13. UE battery life in LTE equal to or better than 3G

Easy :)

Monday 15 July 2013

4GEE bandwidth now @ 20MHz

Back in October when EE launched their 4G network, I wrote a post on their spectrum usage at the time (here). As recently announced by EE themselves, they have now managed to re-farm another 10MHz block from their 1800MHz spectrum and can thus offer LTE at the maximum 3GPP rel8 bandwidth of 20MHz.

As can be seen from the SIB19 extract below their new 20MHz EARFCN is 1667. This effectively extends the original 1617 EARFCN by 10MHz to the right of the 1800MHz spectrum allocation.
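This can be sanity-checked numerically, assuming band 3 where TS 36.101 gives F_DL = 1805 + 0.1 x (EARFCN - 1200):

```python
# Check that a 20MHz carrier at EARFCN 1667 extends the original
# 10MHz carrier at EARFCN 1617 by 10MHz to the right (band 3 assumed).

def dl_centre_mhz(earfcn):
    return 1805.0 + 0.1 * (earfcn - 1200)

def edges(earfcn, bw_mhz):
    c = dl_centre_mhz(earfcn)
    return round(c - bw_mhz / 2, 1), round(c + bw_mhz / 2, 1)

old = edges(1617, 10)   # original 10MHz carrier
new = edges(1667, 20)   # new 20MHz carrier
print("10MHz @ 1617:", old)
print("20MHz @ 1667:", new)
# Both carriers share the same lower edge; the upper edge has moved
# 10MHz up, exactly as described.
```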

By defining both carriers in SIB19 they don't need to constantly update their 3G network as the 20MHz roll-out continues, as UEs will scan both centre frequencies and pick the one available to reselect to.

From a device perspective all LTE capable UEs must support 20MHz bandwidth so they all take advantage of the additional capacity.

Next step for EE is 3GPP rel10 carrier aggregation, which they announced they are looking into and will take advantage of their additional spectrum assets in the 800MHz and 2600MHz bands.

Sunday 5 May 2013

How safe is your network?

I was recently watching a video on youtube about GSM hacking (here) and it got me thinking, how safe are GSM networks and what can operators do to make them safer?

The first thing I looked at was the actual encryption algorithm itself. This is typically termed A5/X, as there are a few different variants. A5/1 is the original encryption algorithm. A5/2 is a deliberately weakened version, developed at a time when the standardisation community did not want to pass on the details of A5/1 to certain non-trusted countries. A5/3 is the stronger version of A5/1. Finally, A5/4 was recently introduced; it is even stronger, but still in its infancy in terms of support on UEs and deployment in live networks. So of all these options A5/3 sounds like the right choice to encrypt your voice calls. Right? Let's see..

First of all we need a device that supports A5/3. Even though the encryption algorithm has been around for a while, a lot of device manufacturers deliberately disable it. For the purposes of our testing I found a device that actually supported it as shown below.

Next we just need to make some voice calls and see which encryption algorithm the network instructs the device to use. The actual encryption is initiated via the RRC Ciphering Mode Command as shown below.
As can be seen, the algorithm identity selected is 0. Looking at 3GPP TS 44.018 we find the table below, which indicates that algorithm 0 equates to the older and easier-to-crack A5/1. All 4 networks tested used A5/1.
The second thing to look at is how often the ciphering key (termed Kc) is renewed. Obviously, reusing the same key is bad practice, and the ideal situation is to use a different cipher key for every call. The cipher key is generated by algorithm A8 using the RAND number and the individual subscriber key Ki. Ki resides in the SIM and RAND is sent via the authentication procedure. It follows that in order to generate a new ciphering key Kc, the user needs to be re-authenticated. So how often does this happen?
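The derivation chain above can be sketched in code. The real A3/A8 algorithms are operator-specific (COMP128 variants are common) and run inside the SIM and AuC, so HMAC-SHA1 here is purely a stand-in to show the data flow, not the actual cryptography.

```python
# Illustrative sketch of GSM key derivation: A3 produces the
# authentication response (SRES), A8 produces the cipher key (Kc),
# both from the subscriber key Ki and the network-supplied RAND.
import hashlib
import hmac

def a3_a8_stand_in(ki: bytes, rand: bytes):
    """Stand-in for A3/A8; real algorithms are operator-specific."""
    digest = hmac.new(ki, rand, hashlib.sha1).digest()
    sres = digest[:4]       # A3 output: 32-bit authentication response
    kc = digest[4:12]       # A8 output: 64-bit cipher key Kc
    return sres, kc

ki = bytes(range(16))        # subscriber key; never leaves the SIM/AuC
rand1 = b"\x01" * 16         # RAND from one authentication
rand2 = b"\x02" * 16         # RAND from a later re-authentication

_, kc1 = a3_a8_stand_in(ki, rand1)
_, kc2 = a3_a8_stand_in(ki, rand2)
print("new RAND gives new Kc:", kc1 != kc2)
```

The point of the sketch: without a fresh RAND (i.e. without re-authentication) there is no fresh Kc, so every call between authentications is ciphered with the same key.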

Looking at operator 1, the same cipher key is re-used 7 times as shown below (only the messages of interest are shown and the rest are filtered).
Operator 2, seemed to have some random pattern of re-authenticating. This could be as low as once per call, but on other occasions the same cipher key was re-used 15 times.
Operator 3 had a regular pattern, re-using the same cipher key 15 times.
Finally operator 4 was the best of all re-using the same cipher key 5 times.
From an operator's point of view, of course, re-authenticating the user generates signalling load on the core network, as the MSC/VLR has to fetch authentication triplets from the Authentication Centre. This explains why some operators are more or less generous with their cipher keys.

So what can you do as a subscriber to better protect yourself? Unless you have a trace tool, figuring out which ciphering algorithm your operator uses and how often the ciphering key changes is not easy, as operators do not make this information public. An easier choice would be to use the 3G network for your voice calls, which means having a 3G device and finding a network that provides adequate 3G coverage. If you are "brave" enough you could even lock your device to 3G only, to prevent any accidental wandering onto the 2G network. To date, to my knowledge, there have been no documented successful attacks on 3G networks.

Saturday 4 May 2013

Spectrum for rent, with a twist


At the recent auctions for 800MHz & 2600MHz spectrum here in the UK, a company by the name of Niche Spectrum Ventures Ltd (a subsidiary of BT) won 2x15MHz of FDD and 10MHz of TDD. 

There has been a lot of speculation about how a company that does not own a mobile infrastructure could use such spectrum and the conclusion is usually a) they build a network from scratch or b) they re-sell or lease the spectrum to an existing mobile operator.

However another business model I have been thinking of is a lot more interesting and it goes something like this..

A company wins some spectrum. It purchases and installs LTE small cells in key locations. These are fairly cheap and easy to install, and if the company has transport network assets (like BT), all the better. Once the small cells are installed and connected to an IP backbone, the company sells access to existing mobile operators. The way it does this is via MOCN (Multi Operator Core Network) functionality. For each customer that signs up, their PLMN ID is broadcast by the small cell and their core network is connected to the IP backbone. The specs allow for up to 6 PLMNs to be broadcast, so potentially all the existing mobile operators could be customers.

What about the interworking of the LTE small cells with the existing network of the customer? Well, LTE has a raft of SON features that can take care of all of that with a minimum of manual intervention. For neighbour planning, the ANR feature can take care of it; this works for intra-frequency LTE, inter-frequency LTE and IRAT. All it takes is for some UEs to camp on the small cell and send some measurement reports. PCI planning can also be automated, as can RACH planning. As the core network is owned by the incumbent mobile operators, do they have to do anything? Not much. The S1 setup procedure takes care of that, as it allows the MME and eNodeB to exchange the information they require to interwork. Finally, from a UE perspective, MOCN functionality is mandatory in LTE, so UE support is guaranteed.
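The sharing model can be sketched as a tiny data structure. The class and method names are hypothetical; the hard limit of six PLMN identities in SIB1 is from TS 36.331, and the example PLMN IDs are illustrative UK codes.

```python
# Hypothetical sketch of the MOCN sharing model: one small cell
# broadcasting the PLMN ID of each operator that signs up.

MAX_PLMN = 6   # SIB1 carries at most six PLMN identities (TS 36.331)

class SharedSmallCell:
    def __init__(self):
        self.plmn_ids = []   # (mcc, mnc) tuples broadcast in SIB1

    def add_customer(self, mcc: str, mnc: str):
        """Add an operator's PLMN ID to the broadcast list."""
        if len(self.plmn_ids) >= MAX_PLMN:
            raise ValueError("SIB1 can carry at most 6 PLMN identities")
        self.plmn_ids.append((mcc, mnc))

cell = SharedSmallCell()
cell.add_customer("234", "15")   # e.g. Vodafone UK
cell.add_customer("234", "30")   # e.g. EE
print(cell.plmn_ids)
```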

As the company only owns radio network assets the management of the network is fairly simple as everything else (core network, billing, subscriber management etc) belongs to the mobile network operators.

That is it. All it takes is a commercial agreement, the PLMN ID of the customer and some minimal configuration.

Tuesday 23 April 2013

CELL_FACH to LTE reselection in 3GPP release 11

Just to keep everybody on their toes, it seems the wise people at 3GPP decided to introduce mobility from 3G CELL_FACH to LTE from release 11 onwards. As the figure above shows (TS 36.331), that all-important arrow from CELL_FACH to E-UTRA RRC_IDLE has been introduced. Why it was not included in the first place is still a mystery to me, as from a UE complexity point of view I wouldn't have thought it would be that difficult. From a RAN point of view it just re-uses the FACH measurement occasion concept, defined since release 99, which allows a UE to search for inter-frequency and IRAT neighbours during specific time intervals when it knows the RAN will not send it any data on the SCCPCH.

In addition to the above, SIB19 on 3G has some changes as the operator can choose if all EARFCNs will be searched for or only the higher priority ones (if applicable).

Although CELL_FACH can be thought of as a transient state, it is possible for a UE to stay there for a long time, depending on RNC inactivity timers and the keep-alive periodicity of applications on the UE. With this change, the possibility of being "stuck" in 3G is minimised.

Sunday 21 April 2013

Base station in disguise, gone wrong..


One of the issues faced by operators around the world is getting planning permission for base stations. At least here in the UK, one of the most common designs is the base station disguised as a lamp post. They usually blend in nicely, the public don't even notice them, and some even have a functioning lamp, so they serve a practical purpose besides housing an antenna.

Sometimes however things go wrong as the photograph above shows. I wonder how many people notice that it is a bit strange to have two lamp posts separated by 1 metre. Clearly the traditional lamp post should be removed but I guess someone got a bit lazy.. 

Monday 8 April 2013

Small Cells, Big Impact (my version)

I was reading the now few-months-old announcement from AT&T about their small cell strategy here, and a few things came to mind. First, the story. AT&T announced back at the end of January that they were planning to roll out 40,000 small cells by the end of 2015. They have also posted a video on the link above that shows one of the products they are trialling, which, although the logo is purposefully out of focus, can easily be identified as the Alcatel-Lucent 9364 Metro Cell described here. The video also mentions that AT&T's plan is to roll these out to improve coverage, which is where, in my opinion, an important distinction has to be made.

Small cells can be used for two scenarios. Scenario 1: Not spot. Scenario 2: Hot spot.

Scenario 1 is the easier of the two and the one AT&T has opted for. Regardless of how much an operator can try some areas will always suffer from poor/no coverage. Once these areas are identified a small cell can easily be installed to provide coverage. Small cells due to their femto background are easily deployed, mostly auto-configure and require just a backhaul connection which can be of xDSL nature. Due to their small size they are also usually exempt from the usual painful planning permission process.

Scenario 2 is the more difficult one. Here there is macro coverage, but due to the high amount of traffic the macro network cannot cope. The solution? Either try to use Wi-Fi offload (with the problems described in my previous post here) or use a small cell to absorb some of the traffic. However, placing a small cell among large, powerful macro cells is not as easy as some people make it sound. At best, the range of the small cell is severely interference limited; at worst, it acts as a noise source affecting macro network performance. In addition, UE uplink power can interfere with the small cell when the UE is power controlled by a macro cell further away. The solution to these problems is the new industry buzzword "HetNets" (Heterogeneous Networks). But what does heterogeneous mean anyway?

heterogeneous [ˌhɛtərəʊˈdʒiːnɪəs]adj
1. composed of unrelated or differing parts or elements
2. not of the same kind or type

When it comes to mobile telecoms, HetNets is essentially an umbrella term to describe a number of features (some standardised and some not) that allow nodes of different types (macro cells, small cells etc.) to co-exist. To date most of it is theoretical or simulation based, so it is too soon to draw any conclusions as to how well HetNets will perform.

Another thing to bear in mind is that HetNets require close co-operation between the small cell and the macro cells, which is where small cells from a femto background might face some problems: their flat architecture (combined RNC/NodeB) essentially makes interactions between small cell and macro cell inter-RNC based. Traditional macro cell vendors have recognised this opportunity, which is why most of them are introducing small cells into their product portfolios.

All is not lost though, as vendors from a femto background already have a lot of experience in interference mitigation techniques and also auto configuration which when dealing with a large number of cells is a strong requirement.

It is also worth remembering that LTE is based on a flat architecture, so any advantage traditional macro vendors have might be short lived.

So, it will be interesting to see which small cells are the eventual winners. Will it be the enhanced femto cells or the miniaturised macro cells?

Thursday 4 April 2013

Nokia 6150, thirteen years on


When I started working in the mobile telecoms industry back in 2000, my employer at the time gave me a Nokia 6150. It was my first mobile phone and I loved it. At the time Nokia was the dominant force in mobile handsets and the 6150 was regarded as a top end device. Apple on the other hand in 2000 was purely focused on computers, launching products such as the Power Mac G4 and the iBook.

As time passed I got other mobiles and sold off the Nokia 6150. A few days ago, however, in a fit of nostalgia, I went on eBay and purchased another 6150. A couple of days later I had it in my hands, and the rubbery keys felt instantly familiar. As I powered it on to make that first call, it got me thinking how much things have changed in the past 13 years..

So let's see. The Nokia 6150 supports GSM. That is it. No GPRS, no EDGE, no 3G, no HSDPA, no HSUPA, no HSPA+. Life was simple back then. Just GSM. The 6150 was dual band, so it supported GSM in the 900MHz (no E-GSM though) and 1800MHz bands. This at the time was quite a revolution. It also supported SMS, but only to a maximum of 160 characters, with no concatenation. For data, it supported CSD (Circuit Switched Data), which was essentially a dial-up modem at an amazing speed of 9.6kbps! From a codec perspective it supported Full Rate, Half Rate and Enhanced Full Rate. No AMR, obviously. Looking further into its capabilities, it also supports the A5/2 cipher algorithm, which has since been removed as a ciphering option by the GSMA.

Fast forward 13 years and the mobile telecoms industry has completely changed. But GSM networks are still around and are still backwards compatible as my Nokia 6150 works perfectly. I guess this is testament to those people in ETSI who developed GSM.

Comparing the Nokia 6150 to my current mobile, an iPhone 4, is obviously an unfair comparison but one thing has not changed...

The weight. They both weigh 140g!

Saturday 23 March 2013

CSFB alternative using E-UTRA detection


Traditional LTE deployments, set the LTE carrier(s) at a higher priority than the 3G carrier(s) thus ensuring the UE will always camp on the LTE carrier when present. Any voice calls are handled by CSFB with an associated delay in call set-up and a few signalling issues as described in previous posts.

I have recently been thinking of the IE eutraDetection as broadcasted in SIB19 on 3G (shown above) and an alternative approach to CSFB came to mind..

First let's look at how the IE eutraDetection is described in the specifications. The applicable document is TS 25.331 and the description is given as:


Furthermore section 8.6.2.5 provides this additional information "If the IE E-UTRA detection is included in a received message and set to TRUE and the UE is in CELL_PCH, URA_PCH state or idle mode, the UE may detect the presence of a E-UTRA cell on a frequency with a priority lower than the current UTRA cell and report the information to the NAS."

Taking the above into consideration the alternative to CSFB would be to set the LTE carrier(s) at a lower priority thus ensuring the UE camps on the 3G carrier(s). By setting the IE E-UTRA detection to TRUE the UE will display the 4G icon on the screen as that is passed on to the NAS layer. This will ensure the customer is happy as for all he/she knows they are camped on the 4G network.

Any voice calls would be set up on the 3G carrier directly without the need for CSFB and avoiding any call set-up delays. If on the other hand the UE wanted to establish a data session the RNC would trigger a re-direct or handover to 4G. Once the UE would finish with the data transfer it would reselect back to 3G but still display the 4G icon as before. So in essence the opposite of CSFB can be created, which could be termed PSFB :)

One complication with this approach is that UEs, especially smartphones, are prone to many small bursts of data. These could be handled on the 3G layer, with a traffic volume measurement report used to trigger the redirect to 4G for larger data transfers. At the same time, ISR (Idle mode Signalling Reduction) could be used to minimise the signalling when moving between the 3G and 4G networks.

So that is it, some small development needed on the RNC SW and PSFB could become reality..

Friday 8 March 2013

The issues with Wi-Fi offloading


Thinking about how my data consumption has changed over the past ten years, it is quite clear that it has increased at an enormous rate. Where ten years ago I was happy with some email and static web pages, today there is video content embedded everywhere, web pages are dynamic, flash based content, streaming, VoIP, attachments, the "Cloud", synching etc, etc. Either consciously or subconsciously this is true for most people and most people want to access all this on the go.

Mobile networks have obviously evolved over the last ten years to cater for this, and 3GPP is continuously working to improve the standards to allow for faster throughputs and, more importantly, more efficient networks.

Sometimes, however, either because site density is not adequate, spectrum is not enough, or networks are not configured properly, things grind to a halt, which is where "Wi-Fi offloading" comes into play.

As a quick search on Google will show, Wi-Fi offloading is presented as the solution to the problem (the cynic might say this mostly comes from Wi-Fi AP manufacturers) and quite a few operators have either partnered with Wi-Fi network operators or deployed their own networks, in the hope of offloading some traffic from the cellular network.

But does this really work? The biggest problem with Wi-Fi is obviously the fact that it uses shared spectrum. How big a problem is this? Well, a quick scan of the available Wi-Fi networks, as shown in the picture above, will quickly put things into perspective. Sitting at home I could pick out 17 access points of considerable strength, all fighting for the coveted non-overlapping channels. There were even two double-bandwidth 802.11n APs spreading themselves over 40MHz. So QoS is obviously an issue here, as all of these uncontrolled access points can appear anywhere, anytime, ready to interfere with your "offloading".
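The channel contention is easy to quantify: in the 2.4GHz band, channel centres sit only 5MHz apart while each channel occupies roughly 20MHz, which is why only channels 1, 6 and 11 can coexist without overlapping. A quick sketch:

```python
# Why 2.4GHz Wi-Fi has so few non-overlapping channels: centres are
# 5MHz apart but each channel is ~20MHz wide.

def centre_mhz(channel):
    """Centre frequency of a 2.4GHz Wi-Fi channel (channel 1 = 2412MHz)."""
    return 2407 + 5 * channel

def channels_overlap(ch_a, ch_b, width_mhz=20):
    """True if the two channels' occupied bandwidths overlap."""
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < width_mhz

print(channels_overlap(1, 6))    # 25MHz apart: no overlap
print(channels_overlap(1, 3))    # 10MHz apart: heavy overlap
```

With 17 APs competing for three clean channels (and 40MHz 802.11n carriers eating two of them at once), collisions are unavoidable.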

So even though Wi-Fi might be available, it is possible that the user experience on the cellular network is much better. This is something people in the industry are aware of, and it was interesting to see that for a while even Apple were thinking of switching from Wi-Fi to cellular when things get bad. The screenshot below, taken from an iOS 6 beta, shows this, but for reasons unknown it never made it into the official release (for now).
There are interference mitigation techniques of course, ranging from switching channels (not good if they are all congested) to using various smart antenna techniques (beamforming etc) to try to improve things. I have no personal experience of these, but of course these will come at a cost and with a few caveats.

Furthermore, there is always the problem of seamless mobility between Wi-Fi and cellular, for which various solutions have been put forward, though none has made it into the mainstream yet.

So it seems the only positive aspect of Wi-Fi offloading might be that it is free to use, but then of course that creates another problem for the operator, as there is no return on investment.

This then is where the industry has started thinking about "small cells" and another story begins...

Saturday 2 February 2013

Combined (3G & 2G) Compressed Mode Patterns

A key piece of functionality in WCDMA is compressed mode, which creates measurement gaps that allow the UE to measure either 2G or 3G inter-frequency neighbours. Typically, most UTRAN implementations restrict the usage of compressed mode to either 2G measurements or 3G inter-frequency measurements.

As almost all 3G network deployments today utilise multiple 3G FDD carriers, the operator has to make a choice. When radio conditions deteriorate, should traffic be handed over to another 3G FDD carrier or to 2G? Handing over to 2G is often seen as the safe choice to make, but why re-direct traffic to 2G when potentially other 3G carriers of good radio quality exist?

This problem can be solved by the use of combined (3G & 2G) compressed mode patterns, which is what I recently observed on a live network. With combined compressed mode patterns the UE is asked to measure both 2G and 3G inter-frequency neighbours at the same time. Based on the measurement reports received, the RNC can then decide which layer to hand the UE over to (with an obvious preference for 3G if possible). The RRC signalling flow can be seen below (click to enlarge).

The procedure is initiated by the UE sending a MEASUREMENT REPORT for event 2d (estimated quality of the used frequency is below a certain threshold). The RNC responds by sending a PHYSICAL CHANNEL RECONFIGURATION message which configures the compressed mode patterns. The UE accepts these and then two MEASUREMENT CONTROL messages are sent. The first provides a 3G neighbour list for the UE to measure (and activates the compressed mode patterns) and the second provides the 2G neighbour list for the UE to measure. Following this, the UE starts sending periodic MEASUREMENT REPORTs (separate ones for 2G & 3G) indicating which (if any) neighbours it has detected.

An extract of the PHYSICAL CHANNEL RECONFIGURATION message is provided below.

dpch-CompressedModeInfo
                            {
                              tgp-SequenceList
                              {
                                {
                                  tgpsi 2,
                                  tgps-Status deactivate : NULL,
                                  tgps-ConfigurationParams
                                  {
                                    tgmp gsm-CarrierRSSIMeasurement,
                                    tgprc 0,
                                    tgsn 4,
                                    tgl1 7,
                                    tgd 270,
                                    tgpl1 8,
                                    rpp mode1,
                                    itp mode0,
                                    ul-DL-Mode ul-and-dl :
                                      {
                                        ul sf-2,
                                        dl sf-2
                                      },
                                    dl-FrameType dl-FrameTypeA,
                                    deltaSIR1 12,
                                    deltaSIRAfter1 6
                                  }
                                },
                                {
                                  tgpsi 3,
                                  tgps-Status deactivate : NULL,
                                  tgps-ConfigurationParams
                                  {
                                    tgmp gsm-initialBSICIdentification,
                                    tgprc 0,
                                    tgsn 4,
                                    tgl1 7,
                                    tgd 270,
                                    tgpl1 8,
                                    rpp mode1,
                                    itp mode0,
                                    ul-DL-Mode ul-and-dl :
                                      {
                                        ul sf-2,
                                        dl sf-2
                                      },
                                    dl-FrameType dl-FrameTypeA,
                                    deltaSIR1 12,
                                    deltaSIRAfter1 6,
                                    nidentifyAbort 66
                                  }
                                },
                                {
                                  tgpsi 1,
                                  tgps-Status deactivate : NULL,
                                  tgps-ConfigurationParams
                                  {
                                    tgmp fdd-Measurement,
                                    tgprc 0,
                                    tgsn 4,
                                    tgl1 7,
                                    tgd 270,
                                    tgpl1 8,
                                    rpp mode1,
                                    itp mode0,
                                    ul-DL-Mode ul-and-dl :
                                      {
                                        ul sf-2,
                                        dl sf-2
                                      },
                                    dl-FrameType dl-FrameTypeA,
                                    deltaSIR1 12,
                                    deltaSIRAfter1 6
                                  }
                                }
                              }

As can be seen, 3 TGPSIs (Transmission Gap Pattern Sequence Identities) are configured. TGPSI 1 is for FDD measurements (i.e. other 3G frequencies), TGPSI 2 is for 2G RSSI measurements and TGPSI 3 is for initial BSIC identification of the 2G cells.

The compressed mode IEs for each TGPSI result in a measurement gap as shown at the top of the post (for TGPSI 1). This essentially consists of a single measurement gap, 7 slots in duration (4.67ms). The other TGPSIs are configured in an identical fashion. The overall repeating pattern is shown below (click to enlarge).
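The gap timing can be sanity-checked from the IEs in the trace with a few lines of Python, assuming the standard WCDMA numerology of 2560 chips per slot at 3.84Mcps and 15 slots per 10ms radio frame:

```python
# A WCDMA slot is 2560 chips at 3.84 Mcps, i.e. 2/3 ms; a radio frame is
# 15 slots (10 ms). tgl1 and tgpl1 below are the values from the trace.

CHIP_RATE_MCPS = 3.84
SLOT_CHIPS = 2560
SLOT_MS = SLOT_CHIPS / (CHIP_RATE_MCPS * 1000)   # 0.666... ms per slot

tgl1 = 7      # transmission gap length, in slots
tgpl1 = 8     # transmission gap pattern length, in frames

gap_ms = tgl1 * SLOT_MS
pattern_ms = tgpl1 * 15 * SLOT_MS                # 15 slots per 10 ms frame

print(f"gap = {gap_ms:.2f} ms every {pattern_ms:.0f} ms pattern")
# gap = 4.67 ms every 80 ms pattern
```

So each pattern opens one 4.67ms gap every 80ms, which matches the figure above.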


Thursday 24 January 2013

AT&T 3G micro cell, some facts and figures

I was recently looking through some Cisco Live! presentation material on the web and came across an interesting presentation given in June 2012 in San Diego about femto cells. In it, the presenter provides some interesting numbers about AT&T's femto cell offering which is marketed under the name 3G micro cell.

AT&T's micro cell has been around since April 2010 and just over two years later it seems they have a network of 650,000 femto cells! Quite an impressive number in my opinion and probably the largest femto cell deployment in the world.

Looking at some additional figures from their roadmap the 3G micro cell is based on Cisco's DPH-153 product with an AT&T casing. It is capable of 4 simultaneous users which is a typical value for a residential femto cell.

In terms of maximum Tx power it is capable of 13dBm, which translates to 20mW. This is actually very low considering most residential Wi-Fi routers are capable of around 100mW (e.g. Belkin N1, Apple Airport Extreme etc.).
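The dBm to mW conversion is simply mW = 10^(dBm/10); a quick Python check of the two figures:

```python
import math

# dBm is a logarithmic power unit referenced to 1 mW: mW = 10^(dBm/10).
# 13 dBm works out to ~20 mW, while a ~100 mW Wi-Fi router is 20 dBm.

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

print(round(dbm_to_mw(13)))   # ~20 mW
print(round(mw_to_dbm(100)))  # 20 dBm
```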

HSPA capability is stated as 14.4Mbps in the DL, i.e. 16QAM @ 15 HS-PDSCH codes, and 1.5Mbps in the UL, i.e. 10ms TTI @ 2 x SF4 E-DPDCH. Obviously the ADSL/cable backhaul has to be capable of supporting these speeds as well.

Looking at the AT&T 3G micro cell website, there is also an interesting capability that AT&T have enabled: the customer can configure whether the femto cell hands out to the macro or not. At first I thought this was strange, but looking through the FAQ it seems this is a recommendation for when a customer experiences dropped calls in their house. As the AT&T macro network does not have the capability to hand into the femto, it is possible that a user initiates a call on the femto and, as he moves around the house (towards a window, garden, balcony etc.), hands out to the macro. Then, as he moves back into the house, the macro signal strength deteriorates and the call drops. To avoid this happening, all mobility is disabled and the user stays connected to the femto until a radio link failure occurs, which typically happens at a much lower signal level than a handover threshold.

There is a lot more information about femto cells (from Cisco's perspective at least) including the figures above at the Cisco Live! website.

Wednesday 23 January 2013

Inter Working Function for CSFB


Traditionally, support for CSFB requires an upgrade on the legacy MSCs to support the SGs interface. Additionally, careful tracking area to location area mapping is required to avoid MT call failures when the UE falls back to an LA that is not the same as the one it registered on; alternatively, call handling procedures such as MTRR or MTRF are required, with associated upgrades on MSCs and HLRs.

Putting all this together can be an expensive and complicated process, which is where the IWF for CSFB comes in. The IWF provides a number of advantages. At the most basic level it is an interface between the MME and the legacy CS network (as shown above). On one side it communicates with the MME using SGsAP, while on the other side it communicates with the MSCs and HLRs using MAP. At the same time it provides a VLR function with which all the LTE UEs are registered. This is done by creating a "virtual" LA and updating the HLR accordingly. Any MT calls are thus routed to the IWF. At the same time it eliminates the need for TA - LA planning, as all TAs can point towards this one "virtual" LA.

For any MO CS calls, the UE falls back towards the legacy network and discovers a "real" LA. This triggers an LA update and subsequently the call is set up.

For any MT CS calls the process is slightly more complicated and is illustrated in the diagram below (click to enlarge).


The call setup flow can be broken down into 4 phases. In phase 1 the incoming call is routed to the IWF and paging is initiated over the SGs interface. This triggers the CSFB procedure. In phase 2 the UE falls back towards a "real" LA and the LA update procedure is triggered. Phase 3 is the clever bit. Here the IWF sends the HLR an SRI to discover where the UE is located (note that the HLR is still waiting for a response to the PRN message from phase 1). The HLR queries the MSC and responds. The IWF then uses this information to respond to the original PRN request from the HLR. The HLR then sends this back to the GMSC and the call is routed. Quite clever!

There is however one drawback with the IWF, and that is that every call (MO & MT) requires a LAU before the call is set up. This adds approx. 1 to 2 seconds to the overall setup time. With classic CSFB, the LAU is not performed and as such faster call setup times can be achieved.

To summarise then the IWF has the advantages of..
1. No upgrade to SGs for legacy network
2. No TA - LA mapping/planning
3. No requirement for MTRR/MTRF

..but the disadvantages of:
1. Longer call set up time due to mandatory LAU procedure

Saturday 12 January 2013

4GEE bandwidth @ 10MHz


Back in October 2012 Everything Everywhere, a.k.a. EE, launched 4G LTE services in the UK. As EE is the merger of T-Mobile UK and Orange UK, it "re-farmed" part of the 1800MHz spectrum these two companies had and used that for its 4G LTE network.

At the time I wondered how much spectrum they had managed to re-farm for launch, and a few days back I had some empirical confirmation of how much that is. The log extract below is taken from a WCDMA SIB19 message, which is used by the UE for reselection purposes from 3G to 4G.


Part of the information broadcasted in SIB19 is the IE "measurementBandwidth". As LTE allows for flexible bandwidth allocation (1.4, 3, 5, 10, 15 & 20MHz), the UE has to be informed how much bandwidth is used in order to average the RSRP measurements as explained here.

Rather than specifying the bandwidth in MHz, 3GPP uses Resource Blocks (RBs) instead. An RB is 180kHz wide, so 50 RBs give us 9MHz. Taking into account the guard bands on each side of the allocated bandwidth we get a total bandwidth of 10MHz.
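The RB-to-bandwidth mapping can be sketched in a few lines of Python. The table of transmission bandwidth configurations is the standard one from the LTE specs (TS 36.101):

```python
# Each LTE resource block spans 12 subcarriers x 15 kHz = 180 kHz.
# The standardised channel bandwidths leave roughly 10% as guard band,
# so 50 RBs (9 MHz occupied) correspond to a 10 MHz channel.

RB_KHZ = 180

# Channel bandwidth (MHz) per number of RBs, from 3GPP TS 36.101
CHANNEL_MHZ_BY_RB = {6: 1.4, 15: 3, 25: 5, 50: 10, 75: 15, 100: 20}

n_rb = 50                              # measurementBandwidth from SIB19
occupied_mhz = n_rb * RB_KHZ / 1000
print(occupied_mhz)                    # 9.0 MHz occupied by the 50 RBs
print(CHANNEL_MHZ_BY_RB[n_rb])         # 10 MHz channel including guard bands
```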

This means that, at least until EE manage to re-farm some additional spectrum, they can only offer 50% of the potential maximum LTE throughput. The good thing about EE is that they have quite a lot of 1800MHz spectrum, so I imagine they will continuously monitor their 2G traffic (the 1800MHz spectrum was originally awarded to T-Mobile and Orange for GSM services) and, as soon as possible, re-farm another 5MHz block, taking the allocated bandwidth to 15MHz and finally 20MHz. As all LTE devices have to support the maximum of 20MHz, this can happen transparently to the end user.

Looking at the additional information in the log extract, we can also see the exact frequency EE are using, which is broadcasted as EARFCN 1617. This translates to a centre DL frequency of 1846.7MHz as shown below.
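For those wanting to verify the number, here is the standard 3GPP EARFCN-to-frequency formula (F_DL = F_DL_low + 0.1 * (N_DL - N_Offs-DL)) applied to band 3 in Python:

```python
# For band 3 (1800 MHz), F_DL_low = 1805 MHz and N_Offs-DL = 1200,
# with valid downlink EARFCNs running from 1200 to 1949.

def band3_dl_freq_mhz(earfcn):
    """Centre DL frequency in MHz for a band 3 EARFCN."""
    assert 1200 <= earfcn <= 1949, "outside band 3 EARFCN range"
    return 1805.0 + 0.1 * (earfcn - 1200)

print(band3_dl_freq_mhz(1617))   # 1846.7 MHz
```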

This in turn can be mapped on EE's 1800MHz spectrum holdings (source: OFCOM), as shown below in red.
Taking into account that EE currently use 10MHz for LTE, we can also see the top theoretical speed their network could achieve. This is shown in the table below.


In the DL, using 2x2 MIMO, a maximum throughput of 73Mbps is possible. As 4x4 MIMO is not currently implemented by either operators or UE manufacturers, that column is a bit academic. In the UL a maximum throughput of 36Mbps is possible. As a rule of thumb, about 2/3 of those speeds is achievable in the best "real life" situations, which works out to approx. 50Mbps in the DL and approx. 24Mbps in the UL. Obviously, as more users are added to the system the throughput will decrease accordingly.
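The rule-of-thumb arithmetic, for completeness (the 2/3 factor is just my rough estimate, not a spec value):

```python
# Theoretical peaks at 10 MHz from the table above, scaled by the
# ~2/3 "best real life" rule of thumb.

peak_dl_mbps = 73   # 2x2 MIMO downlink
peak_ul_mbps = 36   # uplink

print(round(peak_dl_mbps * 2 / 3))   # ~49 Mbps DL
print(round(peak_ul_mbps * 2 / 3))   # 24 Mbps UL
```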

The final point to note about the log extract is the priority assigned to the LTE layer. This is configured as 6. As the highest priority is 7, EE have left some room for configuring a higher priority layer. This could be a "small cell" layer or, if they are feeling confident of obtaining some 800MHz spectrum in the upcoming auction, they could use that as a "coverage" layer, forcing UEs to camp there by assigning it priority 7, and use the current 1800MHz layer as a "capacity" layer.

Saturday 5 January 2013

Mobile Terminated Roaming Forwarding for LTE CSFB


As described in a previous post, one of the issues with LTE CSFB is what happens when the UE falls back to the target RAT but the LA is not the same as the one the UE registered in during the combined attach procedure. This typically occurs on the borders of Location Areas (LA) and Tracking Areas (TA), or when the target layer is 3G and the UE can only acquire 2G (e.g. indoors).

In order to avoid MT call setup failures, operators must implement either MTRR (Mobile Terminating Roaming Retry) described here or MTRF (Mobile Terminating Roaming Forwarding) which will be described in this post.

The signalling flow for MTRF is shown above (click to enlarge) and essentially consists of 3 phases.

Phase 1 and phase 2 are identical to the MTRR signalling flow, with the exception that the network elements involved must support MTRF and signal as much with the applicable IE (MTRF Supported) defined in the specs.

In phase 3, however, rather than cancelling the call setup procedure and re-initiating it towards the "new" MSC as in MTRR, MTRF uses the old MSC as a relay and the call setup procedure continues with an additional PRN & IAM exchange.

So essentially MTRF manages to cut down the signalling flow and the whole MT procedure completes in 3 phases as opposed to 4. This results in a reduced call setup delay and as such better customer experience.

In the final post on the subject I will describe the IWF solution, which overcomes the problem of misaligned LAs & TAs (and a number of other issues) with the inclusion of an additional core network element.

Thursday 3 January 2013

F-DPCH on a live network (finally)


F-DPCH (Fractional Dedicated Physical CHannel) was first defined in 3GPP Rel-6 and then further enhanced in Rel-7 to overcome some soft handover limitations. Once HSDPA is used for the transfer of user data and L3 signalling (RRC & NAS), F-DPCH allows the multiplexing of up to 10 users on a single SF256 code for the purposes of sending TPC (Transmit Power Control) commands in the downlink.

As shown in the table below, 3GPP define 10 slot formats which effectively move the position of the TPC bits in the slot.


The obvious benefit of using F-DPCH is the saving in OVSF codes in the DL. Even though the rel99 DPCH uses only an SF256 code, once the number of simultaneous users increases (it can easily reach 50-60 users in a busy cell) the impact is quite substantial, as sixteen SF256 codes occupy the same portion of the OVSF tree as one SF16 code. So clearly 50-60 SF256 codes will block a large portion of the SF16 codes that HSDPA requires. With F-DPCH we can multiplex 10 users on one SF256 code, so the impact on the OVSF tree is greatly reduced. On top of this there are savings in DL power and a reduction in DL noise.
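A quick back-of-the-envelope calculation illustrates the saving, assuming (for the sake of the sketch) that the dedicated SF256 codes can be packed optimally under SF16 codes:

```python
import math

# In the OVSF tree, one SF16 code sits above 256/16 = 16 SF256 codes, so
# N occupied SF256 codes block at least ceil(N/16) SF16 codes from HSDPA.
# With F-DPCH, up to 10 users share a single SF256 code.

def sf16_blocked(sf256_codes_used):
    return math.ceil(sf256_codes_used / 16)

users = 60
print(sf16_blocked(users))                   # rel99 DPCH: 4 SF16 codes blocked
print(sf16_blocked(math.ceil(users / 10)))   # F-DPCH: 1 SF16 code blocked
```

So for a busy cell of 60 users, F-DPCH reduces the blocked portion of the HSDPA code space from four SF16 codes to one.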

It has always struck me as quite strange, then, that mobile operators did not embrace the usage of F-DPCH sooner. As I recently discovered though, things are slowly starting to change and F-DPCH is being used by some operators.

This particular network was Wind in Greece, specifically in an area of Athens that had been upgraded as part of a vendor swap (to Huawei).

As can be seen from the trace below, the UE signals to the RNC its support for F-DPCH in the RRC Connection Request. This allows the RNC to configure SRBs over HSPA and F-DPCH from the RRC Connection Setup message if the establishment cause is related to a PS session. In this particular network, this option was not used and the SRBs were initially mapped to DCH channels.


The actual F-DPCH configuration is activated in the Radio Bearer Setup phase, as shown below. At the same time the SRB is also mapped onto HSPA channels, as this is a prerequisite for F-DPCH.


Looking at the IEs present, we can deduce which slot format is being used (0 in this case) and which OVSF code is being used (SF256, code 17 in this case).