- This topic has 4 replies, 1 voice, and was last updated 21 years, 11 months ago by Peter Turner.
8th January 1999 at 12:02 #32309
James (Guest)
I want to dimension a cell with 1,000 users and 80 channels and compute the blocking probability. I then compare the result with a cell of 2,000 users and 160 channels.
The question is: why is the blocking probability of the small user group greater than that of the large user group?

8th January 1999 at 12:02 #32310
James Kenny (Guest)
I can’t give you a statistically based answer because I’m not a mathematician, but what you have noticed is correct. If you double the number of lines, you MORE than double the amount of traffic they can handle. That’s why, from a traffic engineering point of view at least, it’s best not to split a group of lines into multiple trunk groups (e.g. one for incoming and one for outgoing).
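As a numerical sketch of this point (not from the thread itself): the snippet below uses the standard Erlang B recurrence to find, by bisection, the offered load a trunk group can carry at a fixed 1% blocking target. The function names and the 40/80-trunk example are my own choices for illustration.

```python
def erlang_b(trunks, offered):
    """Erlang B blocking probability via the standard recurrence."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered * b / (n + offered * b)
    return b

def capacity(trunks, target_blocking=0.01):
    """Offered load (erlangs) the group carries at the target blocking,
    found by bisection (blocking increases with offered load)."""
    lo, hi = 0.0, 2.0 * trunks
    for _ in range(60):
        mid = (lo + hi) / 2
        if erlang_b(trunks, mid) < target_blocking:
            lo = mid
        else:
            hi = mid
    return lo

a40 = capacity(40)
a80 = capacity(80)
print(f"40 trunks carry {a40:.1f} E, 80 trunks carry {a80:.1f} E at 1% blocking")
print(f"doubling trunks more than doubles capacity: {a80 > 2 * a40}")
```

Running this shows that 80 trunks carry noticeably more than twice the traffic of 40 trunks at the same grade of service, which is exactly the "don't split your trunk groups" advice above.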
Perhaps someone else can offer a more scientific reason?
James.

1st February 1999 at 12:03 #32311
Justin Strong (Guest)
I won’t go into the math, but I will appeal to intuition. Mathematicians may cringe at my example, but I believe it is correct.
The basic principle is that the more servers (circuits) you have the better *potential* utilization there will be.
You can think of Erlang B as modeling a set of bank tellers serving customers that will leave if no teller is free (no queueing).
Assume there are 10 tellers and people are assigned to a teller based on the first letter of their name. (e.g. A-D goes to teller 1). If that teller is not free the customer leaves the bank and has to try again later. They cannot use another teller even though that teller may be free.
If we change this and say people can go to any teller as long as that teller is free, it should be apparent that the tellers will be better utilized.
The same principle applies to telephony. The more circuits in a group the better *potential* utilization there will be. By having a larger circuit group you are making a larger pool of circuits available to each call attempt.
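The bank-teller picture can be checked with a small Monte Carlo sketch (mine, not from the thread). Customers arrive at random and leave immediately if their teller is busy; in the partitioned case each customer is bound to one teller (uniform random assignment stands in for the name-initial rule), while in the pooled case any free teller will do. The parameters (10 tellers, 7 erlangs of offered load) are illustrative assumptions.

```python
import random

def simulate(pooled, n_tellers=10, arrival_rate=7.0, service_rate=1.0,
             n_customers=100_000, seed=1):
    """Fraction of customers lost in a no-queue (loss) system."""
    rng = random.Random(seed)
    free_at = [0.0] * n_tellers      # time at which each teller is next free
    t = 0.0
    lost = 0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)       # next customer arrives
        if pooled:
            idx = min(range(n_tellers), key=lambda i: free_at[i])
        else:
            idx = rng.randrange(n_tellers)       # fixed assignment "by name"
        if free_at[idx] > t:
            lost += 1                            # teller busy: customer leaves
        else:
            free_at[idx] = t + rng.expovariate(service_rate)
    return lost / n_customers

print(f"partitioned tellers: {simulate(pooled=False):.3f} of customers lost")
print(f"pooled tellers:      {simulate(pooled=True):.3f} of customers lost")
```

The pooled system loses far fewer customers at the same load, which is the "larger pool of circuits" effect described above.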
Objective Systems Integrators Performance Management Solutions

11th February 1999 at 12:04 #32312
Ayal Lior (Guest)
This can be explained in two ways:
1. The law of large numbers says that for a larger sample, the deviation from the expectation is smaller.
2. Look at a normal approximation to the Erlang B distribution: the relative spread (standard deviation divided by the mean) is inversely related to the square root of N, again saying that for a large population the relative deviation gets smaller.
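To put a number on point 2 (a sketch of mine, using the two offered loads discussed later in the thread): for Poisson traffic the standard deviation is the square root of the mean, so the relative spread shrinks as the mean grows.

```python
import math

# Relative spread (std dev / mean) of Poisson traffic falls as 1/sqrt(mean).
for mean in (75, 150):
    cv = math.sqrt(mean) / mean
    print(f"mean {mean:3d} erlangs: relative spread = {cv:.4f}")
```

Doubling the mean load cuts the relative spread by a factor of about 1.41, so the large group runs proportionally closer to its average.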
I hope this helps.
Best regards.

16th February 1999 at 12:05 #32313
Peter Turner (Guest)
Technically we would say that the statistical multiplexing effects are greater.
The general principle is that the larger the sample, the smaller the expected variability (in percentage terms).
In simple language, let's take the example given. Suppose each user offers 0.075 E of traffic; then 1,000 users offer 75 erlangs while 2,000 users offer 150 erlangs. If there were an infinite number of outgoing trunks, we would say that on average 75 or 150 of them would be in use. Under the usual assumptions, the number of outgoing trunks in use is governed by a Poisson distribution.
We can determine that in:
Case 1 (1,000 users): Prob(more than 80 trunks used) = 0.2589
Case 2 (2,000 users): Prob(more than 160 trunks used) = 0.2071
In both cases the excess above the mean is the same in percentage terms, but in the second case that degree of variability is less likely.
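These tail probabilities can be recomputed exactly with a short sketch (the `poisson_tail` helper is mine, not from the thread). The first case reproduces the quoted 0.2589; the second comes out a little below the quoted 0.2071, which looks like it came from a normal approximation, but the ordering, and hence the argument, is unchanged.

```python
import math

def poisson_tail(mean, k):
    """P(X > k) for X ~ Poisson(mean), summing the pmf term by term."""
    term = math.exp(-mean)       # P(X = 0)
    cdf = term
    for n in range(1, k + 1):
        term *= mean / n         # P(X = n) from P(X = n - 1)
        cdf += term
    return 1.0 - cdf

p1 = poisson_tail(75, 80)        # case 1: mean 75 E, more than 80 trunks busy
p2 = poisson_tail(150, 160)      # case 2: mean 150 E, more than 160 trunks busy
print(f"case 1: {p1:.4f}   case 2: {p2:.4f}")
```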
Hope this helps.
N.B. Blocking or sizing estimates based on the Poisson distribution (or its normal approximation) are quite reasonable and give results close to those from Erlang B. This is useful if Erlang B tables are not available.