Two Six Sigma professionals recently posed questions in the iSixSigma Discussion Forum relating to queuing theory in a call center.
One reader wanted to know how to calculate average and standard deviation for service time and interarrival time – the amount of time between the arrival of one customer and the arrival of the next. The other had a question about coming up with an accurate “standard time” for a call center process. Forum participants responded with some advice about the need to gather more detailed data, plus some helpful rules of thumb about working with mean and median times.
wkueku – I need some advice about how to calculate a) average and standard deviation service time and b) average and standard deviation interarrival time in minutes. I am evaluating a waiting line model.
Let’s say we have data for a typical day [see Tables 1 and 2]. I would like comments on [which] approach [would be best] to take.
Table 1: Sample Service Time Data
| Call Type | Frequency | Customer Service Representative Time (in seconds) |
Table 2: Sample Interarrival Time Data
| Time | Avg. # Calls | # Calls Delayed | Avg. Delay (in minutes) | # Calls Abandoned |
SheMBB – What do you hope to accomplish with the model? The problem you’re trying to solve will affect the approach to the model. The data you’ve included is summary data. I recommend using call level details to calculate these statistics. Call level details allow you to see the variability and better represent that in your model.
wkueku – Yes, I agree that individual data rather than summary data would be better to see the variation, but if all I have at this point is summary data, I would like to leverage it somehow. The objective is to see if the current system – phone, people, and performance – supports the call volume.
SheMBB – You can’t get to standard deviation from this level of detail. Average agent time for all call types is the summed product of the CSR [customer service representative] and percentage frequency – or about 72 seconds per call. Call duration is generally a pretty skewed distribution. Call center models I’ve built have been more accurate using median instead of mean, which you can’t get from this data. It may also make sense to look at each call type separately, rather than this consolidated average – especially if all agents can’t field all call types.
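The weighted average SheMBB describes can be sketched in a few lines. The call types, frequencies, and times below are hypothetical (the original table data is not shown here); they are chosen only to illustrate the summed-product calculation.

```python
# Hypothetical call-type mix: {call type: (percentage frequency, CSR seconds)}.
# These numbers are illustrative, not the poster's actual data.
call_types = {
    "billing": (0.4, 60),
    "support": (0.4, 90),
    "sales":   (0.2, 60),
}

# Weighted average agent time = sum of (frequency x CSR seconds) over call types.
weighted_avg_seconds = sum(freq * secs for freq, secs in call_types.values())
print(f"Average agent time per call: {weighted_avg_seconds:.0f} seconds")
```

Note that this yields only a mean; as SheMBB points out, neither a standard deviation nor a median can be recovered from frequency-weighted summary data.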
To get an approximate average interarrival time, you could divide the duration of each time period (30 minutes) by the number of calls. So, for 6:30 a.m. it would be 1:22 between calls [see Table 2]. This assumes calls are pretty consistent throughout the half hour; actual averages from the data would be different. This table includes abandoned calls, which should be included in the total number of calls. I would be very hesitant to make any changes to the system based on this level of data. About the only decision I would trust from this level of detail is to get more detailed information.
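The approximation above is simply the interval length divided by the call count. A minimal sketch, assuming a hypothetical count of 22 calls in the 6:30 a.m. half hour (the actual count from Table 2 is not reproduced here):

```python
# Approximate average interarrival time for one 30-minute period.
interval_seconds = 30 * 60   # duration of each time period
n_calls = 22                 # hypothetical call count for 6:30 a.m.

avg_interarrival = interval_seconds / n_calls
minutes, seconds = divmod(avg_interarrival, 60)
print(f"~{int(minutes)}:{seconds:04.1f} between calls")  # roughly 1:22
```

As SheMBB cautions, this assumes arrivals are spread evenly across the half hour, and abandoned calls must be counted in `n_calls`.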
naliakba – I have a similar query. I want to come up with a “standard time” for a particular process. I have already collected data over a period of time, covering all types of variation. Please advise whether it is a good idea to use the mean from a graphical summary when the data is normal. I am also getting skewed data – a flat histogram, sometimes bimodal. Is there an existing standard or guideline for arriving at a standard time for a given process?
SheMBB – How I would arrive at standard time for a process depends on several things, including the problem I’m trying to address, what I intend to do with the information, the distribution of the data and the variability in the data. There is not a single, standard answer. Rather than thinking of a process in terms of a single number, you’re better off understanding the distribution of your data.
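The point about skew is easy to demonstrate: on a right-skewed distribution, a few long calls pull the mean well above the typical value, while the median stays put. The durations below are hypothetical, chosen only to show the effect.

```python
import statistics

# Hypothetical call durations in seconds: mostly short calls plus a long tail.
durations = [45, 50, 55, 60, 65, 70, 300, 600]

mean = statistics.mean(durations)      # pulled upward by the two long calls
median = statistics.median(durations)  # reflects the typical call

print(f"mean = {mean:.1f} s, median = {median:.1f} s")
```

Here the mean lands well above the median, which is why call center models built on the median (or on the full distribution) tend to describe typical behavior better than a single average.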