I would actually consider translating GSM spectrum usage back to an equivalent FDMA analogue usage to be anomalous. Yes, you could crudely take the 200 kHz occupied by a GSM carrier and divide it by the number of timeslots carried on that 200 kHz, but this doesn't tell the whole story of a cellular network's capacity.
The problem is that spectral efficiency, in capacity terms, between GSM and FDMA analogue systems (such as AMPS) depends on more than just kHz occupied. The required C/I, and hence the frequency re-use distance, also needs to be considered to derive the true capacity improvement gained by migrating from AMPS to GSM. GSM, for example, needs only about 12 dB C/I, where AMPS needed about 17 dB C/I for comparable speech quality. Because of the lower C/I requirement, the same GSM frequencies can be re-used closer together than their AMPS counterparts – allowing the capacity of the spectrum, in terms of calls per MHz per square km, to be increased as well.
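To make the re-use argument concrete, here is a toy calculation using the standard textbook first-tier co-channel interference model for hexagonal cells (C/I ≈ (√(3N))^γ / 6, with γ the path-loss exponent). The specific C/I targets, path-loss exponent of 4, and six-interferer assumption are illustrative assumptions, not measured figures, so treat the output as a sketch of the mechanism rather than a definitive capacity number:

```python
import math

# Valid hexagonal cluster sizes: N = i^2 + i*j + j^2 for integer i, j
VALID_N = [1, 3, 4, 7, 9, 12, 13, 16, 19, 21]

def required_cluster(ci_db, path_loss_exp=4.0, interferers=6):
    """Smallest valid re-use cluster size meeting a C/I target (dB).

    Textbook first-tier model: C/I = (sqrt(3*N))**gamma / interferers.
    Solving for N and rounding up to the next valid cluster size.
    """
    ci_linear = 10 ** (ci_db / 10.0)
    n_required = ((interferers * ci_linear) ** (2.0 / path_loss_exp)) / 3.0
    return next(n for n in VALID_N if n >= n_required)

def calls_per_cell_per_mhz(channel_khz, slots_per_carrier, ci_db):
    """Voice channels available per cell from 1 MHz of spectrum."""
    n = required_cluster(ci_db)
    total_channels = (1000.0 / channel_khz) * slots_per_carrier
    return total_channels / n

amps = calls_per_cell_per_mhz(30, 1, 17)    # 30 kHz FM, ~17 dB C/I assumed
gsm = calls_per_cell_per_mhz(200, 8, 12)    # 8 full-rate slots, ~12 dB C/I
print(f"AMPS: {amps:.1f}  GSM: {gsm:.1f}  gain: {gsm / amps:.1f}x")
```

With these assumptions the 17 dB target forces AMPS onto a 7-cell cluster while GSM fits a 4-cell cluster, and GSM comes out roughly 2x ahead per cell per MHz even before codec or frequency-hopping gains – illustrating why the per-call kHz comparison alone understates the difference.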
Figures I remember from dim dark history are that GSM gives an effective capacity increase over 30 kHz AMPS of about 3 times. This is a far higher ratio than the 1 call/25 kHz vs 1 call/30 kHz argument put earlier in this thread.
Also note – GSM can effectively double its timeslot capacity by switching codecs so that each mobile transmits speech in only every second occurrence of its timeslot – two mobiles then share a timeslot alternately. This is where half-rate speech comes from, and it allows 16 users on one transceiver instead of the normal 8 – albeit at the trade-off of a much lower-bitrate voice codec and hence degraded voice quality.
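The alternating-frame sharing can be sketched in a few lines. This is a simplified illustration of the idea only (real GSM half-rate subchannel mapping is defined in the channel-organisation specs and involves more than frame parity):

```python
def slot_occupancy(frames=4, slots=8):
    """Toy model of half-rate sharing: per TDMA frame, record which
    half-rate subchannel (0 or 1) owns each timeslot.

    Subchannel 0 speaks on even frames, subchannel 1 on odd frames,
    so two users alternate on the same physical timeslot.
    """
    return [[(slot, frame % 2) for slot in range(slots)]
            for frame in range(frames)]

schedule = slot_occupancy()
# Distinct (slot, subchannel) pairs = distinct half-rate users served
users = {pair for frame in schedule for pair in frame}
print(len(users))  # 8 slots x 2 subchannels = 16 half-rate users
```

Over any two consecutive frames every timeslot carries both of its subchannels once, which is how one transceiver ends up serving 16 half-rate calls instead of 8 full-rate ones.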