

Viewing 15 posts - 1 through 15 (of 19 total)

    Hi all,

    This is just out of curiosity.

    Before the advent of GSM (digital), the then-existing analog mobile service (FDMA-based) provided a net BW of 30 kHz per user.

    But in GSM we say that the effective BW per user is 25 kHz, while the channel BW is 200 kHz, and this whole 200 kHz is made available to a user for a short time (one timeslot, TS).

    How, then, can we say that in GSM the BW per user is 25 kHz?



    I’ve never read that the BW for GSM is 25 kHz… what’s your source?


    200 kHz / 8 timeslots = 25 kHz… but that’s a very unscientific computation 🙂


    Yes, they are using the same logic: 200/8 = 25 kHz. You can search Google
    with the keyword “200/8=25khz” and see how many people in the world support that theory.

    But I don’t.

    That is why, in the last line of my post, I asked:
    “How can we say that in GSM the BW per user is 25 kHz?”

    So what do you think the effective BW per user in GSM should be?

    I think it should be 400 kHz.


    Also, if we reject the theory 200/8 = 25 kHz, then…

    What is the logic for having only 8 TS?

    Why not 16, why not 32, or anything else?


    mkt, it’s not because many people say something that it is correct 🙂

    Why 8 TS?
    The reason is almost certainly related to the fact that speech is put into frames of 20 ms. When GSM was invented, 20 ms of speech could be encoded/compressed/etc. into 4 bursts of 0.577 ms each:
    20 ms => 4 × 0.577 ms = 2.3 ms
    Every 20 ms, one user needs 2.3 ms of airtime, so during those 20 ms on the air only 8 calls can be multiplexed:

    20 / 2.3 = 8.7

    The remainder is used for in-band signalling (SACCH).
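    The timeslot arithmetic above can be sketched directly (figures as quoted in the post):

```python
# Sketch of the 8-timeslot argument (figures from the post, rounded).
FRAME_MS = 20.0        # speech frame duration
BURST_MS = 0.577       # one GSM normal burst
BURSTS_PER_FRAME = 4   # bursts needed per 20 ms of speech (per the post)

airtime_per_user = BURSTS_PER_FRAME * BURST_MS   # ~2.31 ms per 20 ms frame
max_users = FRAME_MS / airtime_per_user          # ~8.7

print(f"airtime per user: {airtime_per_user:.2f} ms")
print(f"users per frame:  {max_users:.1f} -> 8 full timeslots")
```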


    For capacity purposes some people make this comparison, saying GSM = 25 kHz BW, but I agree with Pix that it’s not the right way of doing the comparison. During a normal burst the whole 200 kHz of spectrum is available to one user, while on the analog side only 30 kHz is available.

    In GSM a 20 ms speech frame is interleaved and sent over 8 separate half-bursts. If my memory is right, the 20 ms speech frame, after source coding + channel coding, bit puncturing, etc., yields 456 bits, and these are sent on 8 half-bursts of 57 bits each. The choice of 8 TS in GSM is pretty much tied to the 20 ms sampling period.
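    The bit budget quoted above works out exactly (a quick check of the post’s figures):

```python
# A 20 ms speech frame, after channel coding, yields 456 bits,
# interleaved over 8 half-bursts (figures from the post).
CODED_BITS = 456
HALF_BURSTS = 8
FRAME_S = 20e-3

bits_per_half_burst = CODED_BITS // HALF_BURSTS   # 57 bits per half-burst
gross_rate_kbps = CODED_BITS / FRAME_S / 1000     # 22.8 kbit/s gross per channel

print(bits_per_half_burst, gross_rate_kbps)
```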



    I would actually consider translating GSM spectrum usage back to an equivalent FDMA analogue usage to be anomalous. Yes, you could crudely take the 200 kHz occupied by GSM and divide it by the number of timeslots carried on that 200 kHz, but this doesn’t tell the whole story of a cellular network’s capacity.

    The problem is that spectrum efficiency, in terms of capacity, between GSM and FDMA analogue systems (such as AMPS) depends on more than just kHz occupied. The required C/I, and hence the frequency re-use distances, also need to be considered to derive the true capacity improvement gained by migrating from AMPS to GSM. GSM, for example, needs only 12 dB C/I, while AMPS needed about 17 dB C/I to resolve the same speech quality. Because of the improved C/I, the same GSM frequencies can be re-used closer together than their AMPS counterparts, allowing the capacity of the spectrum in terms of MHz per square km to be increased as well.

    Figures I remember from dim dark history are that GSM has an effective capacity increase over 30 kHz AMPS of about 3 times. This is a far higher ratio than the 1 call/25 kHz vs. 1 call/30 kHz argument put earlier in this thread.
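    As a back-of-envelope illustration of the re-use argument, here is a sketch using the textbook hexagonal-reuse relation C/I ≈ (√(3N))^γ / 6. The path-loss exponent γ = 4 and the 6 first-tier interferers are standard planning assumptions, not figures from this post, and the result only captures the reuse + TDMA part of the gain:

```python
import math

# Smallest valid hexagonal reuse cluster N meeting a C/I target,
# assuming 6 first-tier co-channel interferers and path loss exponent 4.
# Valid cluster sizes satisfy N = i^2 + i*j + j^2.
def min_cluster_size(ci_db, gamma=4.0, interferers=6):
    ci_lin = 10 ** (ci_db / 10)
    n_exact = (interferers * ci_lin) ** (2 / gamma) / 3
    valid = sorted({i*i + i*j + j*j for i in range(5) for j in range(5)} - {0})
    return next(n for n in valid if n >= n_exact)

n_gsm = min_cluster_size(12)    # GSM needs ~12 dB C/I
n_amps = min_cluster_size(17)   # AMPS needed ~17 dB C/I

calls_per_mhz_gsm = (1000 / 200) * 8 / n_gsm    # 8 calls per 200 kHz carrier
calls_per_mhz_amps = (1000 / 30) / n_amps       # 1 call per 30 kHz channel

print(n_gsm, n_amps)                            # 4 and 7
print(calls_per_mhz_gsm / calls_per_mhz_amps)   # ~2.1x from reuse + TDMA alone
```

    The remaining gap up to the ~3× figure quoted above would come from the effects this crude model ignores (frequency hopping, DTX, tighter real-world planning, etc.).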

    Also note that GSM can actually double its timeslot capacity by switching codecs and using only every second timeslot in a series to transmit speech, so two mobiles share a timeslot alternately. This is where half-rate speech comes from; it effectively allows 16 users on one transmitter instead of the normal 8, albeit at the trade-off of a much lower bitrate voice codec and hence degraded voice quality.


    For GSM900 the frequency bands are 890–915 MHz and 935–960 MHz. Why, then, was the 20 MHz of bandwidth from 915–935 MHz left out of the specifications? Note: the duplex distance of 45 MHz could still be maintained.


    If you do the sums:

    890 + 45 = 935 MHz – still in band.
    889.8 + 45 = 934.8 MHz – yes, the downlink falls within 915–935 MHz, but the paired uplink at 889.8 MHz is out of band.

    Likewise in reverse: 915 + 45 = 960 MHz – still in band. 915.2 + 45 = 960.2 MHz – out of band.

    Hence, to maintain a 45 MHz offset, you can’t use the middle band 915–935 MHz for standard GSM. E-GSM can – but then it uses 880–890 MHz + 925–935 MHz in countries that allow that band to be allocated.
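    The pairing constraint above can be checked in a few lines (band edges as quoted in the post):

```python
# Primary GSM900: uplink 890-915 MHz, downlink 935-960 MHz, 45 MHz duplex.
UL = (890.0, 915.0)
DL = (935.0, 960.0)
DUPLEX = 45.0

def valid_pair(uplink_mhz):
    """True only if both the uplink and its paired downlink fall in band."""
    downlink_mhz = uplink_mhz + DUPLEX
    return UL[0] <= uplink_mhz <= UL[1] and DL[0] <= downlink_mhz <= DL[1]

print(valid_pair(890.0))   # True: pairs with 935 MHz
print(valid_pair(889.8))   # False: uplink below the band edge
print(valid_pair(915.2))   # False: downlink 960.2 MHz out of band
```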

    Normally the band 915–935 MHz is also used by other radio spectrum users: in particular some ISM (Industrial, Scientific or Medical) unlicensed high-power transmitters, Trunked Mobile Radio in some countries, and 2–4 Mbit/s fixed radio links in other places.



    Thanks for your insight into the GSM/AMPS systems. Do you have the same kind of analysis for GSM/UMTS? If not, what’s your opinion on the comparative capacity of those systems?



    In comparing GSM to WCDMA/UMTS there are several aspects to consider. The first is that GSM is a combined FDMA/TDMA system, whereas UMTS is a CDMA-only system with no requirement for FDMA; this means that a one-for-one MHz spectrum comparison is not possible.

    WCDMA/UMTS occupies 5 MHz per channel. Note, however, that the adjacent cell/base station occupies the same 5 MHz, so no frequency division is required: every cell operates on the same frequency. Each cell has a voice-only capacity of around 60–80 circuits for CS voice with all CS64, PS and HS bearers turned off, although this varies with distance because one of the limiting resources is RF amplifier power. The general assumption is therefore that traffic is distributed radially from the site fairly uniformly (real-world conditions can temper this).

    GSM occupies 200 kHz per channel, but can’t have an adjacent cell on the same channel without rendering the C/I inoperable. So when first setting up a comparison we need to consider how much of an equivalent 5 MHz bandwidth would be available to a single GSM cell in a network. If we consider, say, a 4/12 re-use plan, each cell can have 3 frequencies, re-used three other times for three adjacent cells. 3 frequencies give GSM 24 circuits per cell. Effectively, GSM can use only 600 kHz of the available 5000 kHz in any given cell location because of the need for FDMA in the GSM system.

    Comparing capacity then, you can see that for the same 5 MHz on a single cell in a multi-cell network, 70 / 24 ≈ 3 times more capacity is available on a WCDMA network on the same or similar frequency band as GSM (cf. 850 MHz UMTS vs. 900 MHz GSM).
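    Putting the comparison above into numbers (the post’s own figures; taking 70 circuits as the mid-point of the quoted 60–80 range):

```python
# Per-cell capacity in the same 5 MHz, using the post's figures.
UMTS_CCTS_PER_CELL = 70      # mid-point of the quoted 60-80 CS-voice range

GSM_CARRIERS_PER_CELL = 3    # post's figure for a 4/12-style re-use plan
TS_PER_CARRIER = 8

gsm_ccts = GSM_CARRIERS_PER_CELL * TS_PER_CARRIER   # 24 circuits per cell
ratio = UMTS_CCTS_PER_CELL / gsm_ccts               # ~2.9x

print(gsm_ccts, round(ratio, 1))
```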

    As a bonus, when you compare similar-band UMTS vs. GSM, you get a 5–7 dB improvement in depth of coverage, because the UMTS system’s Ec/No requirements are about half those of GSM. If you wanted to trade that for capacity per sq km, you could increase your capacity over GSM even further.



    What would happen if, in the same system you described, more than 50% of each cell’s capacity were being used? Would that lead to a coverage degradation or a voice-quality degradation?

    This is called “cell breathing”, but when does it occur? If it happens at 50%, then the capacity benefit over GSM is rather inconsequential (35 channels vs. 24 channels).

    Thanks a lot, WNN, your explanations are very valuable 🙂


    In my experience, cell breathing doesn’t affect capacity as much as it affects coverage. If you design your link budgets and cell sizes based on the worst-case Ec/No you are prepared to accept in the network (and, due to the pole capacity equations, you basically run out of power and can’t push any more calls on, even if you increase available power, once you pass about 70–75% load), you can still realise the capacity you expect (say around 60–70 ccts/cell/carrier) at a full useable load of 70%, over the full footprint of your desired cell coverage.
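    The “running out of power past ~70–75% load” behaviour follows from the standard WCDMA uplink noise-rise relation. This is the textbook formula, not something stated in the post:

```python
import math

# Standard WCDMA uplink noise rise as a function of fractional load
# relative to pole capacity: noise_rise_dB = -10 * log10(1 - load).
def noise_rise_db(load):
    return -10 * math.log10(1 - load)

for load in (0.5, 0.7, 0.75, 0.9):
    print(f"{load:.0%} load -> {noise_rise_db(load):.1f} dB noise rise")
# The curve steepens sharply past ~70-75% load, which is why planners
# cap useable load there: extra power buys almost no extra capacity.
```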

    I have seen this live in networks previously.

    So the capacity gain of UMTS over other cellular systems still holds. Sure, load has impacted it a little, but 50% load does not result in a 50% capacity reduction; rather, it results in a coverage reduction, which, if your inter-site distances and inter-cell border designs are good, you can tolerate without harm.

    Prediction systems like Forsk’s Atoll can run simulations that show you what happens in these circumstances, so that you can design around their impact.


    I will try that in Atoll (I have only used it with 2G networks so far); it’s true that it’s a good way to assess the behaviour of a 3G network under load.

    Thanks for your explanations!

    By the way, if you don’t mind: what’s your working experience?

  • The forum ‘Telecom Design’ is closed to new topics and replies.