
Mean Vs. Average delta explanation


#43818

    Tim
    Member

    I’m managing my first black belt project, a process improvement effort for electric service restoration following storms.  I have a question that stems from establishing a savings target. 
    I’m having a difficult time understanding the difference I calculate between the average spend vs. the mean of the individual spends for an activity.  Here’s the example:
    Total spend on an activity over  the past 2 years was $10,825,530
Total number of events resulting in this spend during this same period is 115,354.
    If I divide the spend/events to establish a simple average, I calculate $93.85 per event.
    Now comes the thing that puzzles me.  If I calculate that same “cost per event” for each instance across the two years then use those 26 data points in Minitab to calculate the mean, I get $75.49.
I believe this is a result of variation across the 26 instances (having different spends and different numbers of events), but I guess I would have thought that the two "averages" would be the same or close to the same value. I'm wondering if the fact that I'm deriving this number and then analyzing it is a problem.
I think I should just drop the whole concept of calculating an average using total $/total events for fear of confusing my audience, and instead use the mean and related attributes for my current-state performance and related improvement target.
    Any suggestions/thoughts????
    Can anyone explain this to me?  Is it a population vs. sample issue?
    Thanks!
     

    #139521

    Jered Horn
    Participant

    Where did the number “26” come from?

    #139525

    Tim
    Member

26 is the number of storms we had in the past 2 years. During each storm, we performed this activity. I have a "spend" amount and a number of events (or jobs) performed for each of these 26 storms over the past 2 years…

    #139533

    Talaid
    Participant

I believe this is a case where the expected value (mean) demonstrates the importance of understanding your grouping of events when evaluating data. You are interpreting your data from two different perspectives: average cost per job overall, and average job cost for each storm. When you take a simple average and divide the total cost of $10,825,530 by the number of jobs, 115,354, you have found the cost per job ($93.85). Now, here is where things change. You have 26 storms, and each storm has its own set of jobs associated with it. When you calculated the job cost in Minitab, you found the average cost of a job for storm 1, storm 2, storm 3, and so on, and then an average was taken of those averages ($75.49). In this way, no weight is given to how many events occurred in each storm, just the average cost of a job for that storm. Each perspective has its uses, and it all depends on how you want to interpret the data. I hope this is clear and it helps!
As a side note, for the simple average to come out so much higher than the average job cost by storm, the storms with the most jobs must also tend to have the highest cost per job, because the pooled figure is a job-weighted average that the biggest storms dominate. This makes sense if you have a few very large storms that cause a lot of damage and many low-impact storms: the handful of big, expensive storms carries most of the jobs and pulls the pooled cost per job up toward its level, while each small storm counts equally in the storm-by-storm average and holds it down. Good luck!
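Here is a quick sketch in Python of the two calculations, using made-up storm figures (the actual 26-storm data isn't posted in this thread), just to show how they diverge:

```python
# Toy example: three hypothetical storms, each a (total spend, job count)
# pair. Made-up numbers, not Tim's actual data.
storms = [
    (30_000, 400),        # small storm: about $75 per job
    (26_000, 350),        # small storm: about $74 per job
    (2_875_000, 25_000),  # large storm: about $115 per job, most of the jobs
]

# "Simple average": pool all spend and all jobs, divide once.
total_spend = sum(spend for spend, jobs in storms)
total_jobs = sum(jobs for spend, jobs in storms)
pooled_avg = total_spend / total_jobs

# "Mean of means": average each storm first, then average those averages.
per_storm_avg = [spend / jobs for spend, jobs in storms]
mean_of_means = sum(per_storm_avg) / len(per_storm_avg)

print(f"pooled cost per job:        ${pooled_avg:.2f}")     # ~ $113.83
print(f"mean of per-storm averages: ${mean_of_means:.2f}")  # ~ $88.10

# The pooled figure is a job-weighted mean of the per-storm averages,
# so the 25,000-job storm dominates it; in the mean of means every
# storm counts equally, no matter how many jobs it had.
weighted = sum(jobs * avg for (spend, jobs), avg
               in zip(storms, per_storm_avg)) / total_jobs
assert abs(weighted - pooled_avg) < 1e-9
```

The same mechanism should explain your numbers: $93.85 is weighted toward whatever the biggest storms cost per job, while $75.49 treats all 26 storms alike.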

    #139537

    Tim
    Member

Thanks, Andy. You've hit the nail on the head. I appreciate your feedback and your help clarifying the difference between the average "cost per job" in aggregate and the average of the average cost for jobs in storm 1, storm 2, ….
We do in fact have more "small" storms with fewer affected customers and fewer jobs. And while the larger storms have more affected customers and more jobs to spread the costs across, several X's drive up the costs we incur for this activity in large storms, so their cost per job comes out much higher than the small storms', just as you inferred. For instance, that $93.85 average cost per job spans both small and large storms. Looked at separately, the small storms' average cost per job is $74.34, whereas the large storms' is $114.50, even with more customers/jobs to spread the cost over.
One driver here is how we allocate resources during large storms – it's more of an "all hands on deck" response, with lots of wait time punctuated by periods of heroic effort to get the power restored.
In your opinion, which measure is better to use as a baseline for current-state performance (and thus the improvement target): the average cost per job in aggregate, or the average of the average cost per job for storm 1, storm 2, etc.? While most of my improvement efforts are focused on the larger storms, I do expect some improvement on the small storms as well…
    Thanks again for your help and feedback.  This has been puzzling me!  LOL!

    #139729

    Talaid
    Participant

I don't have the experience to correctly answer that question. I am a junior in college and very interested in Six Sigma, but I have not completed my own project. My gut feeling is to use the average cost by storm so you can identify the special causes that go hand in hand with large storms. That way each storm is on a level playing field, and any changes implemented to improve job completion for each storm will be more visible. The drawback is that it may be harder to see the broad impact of improvements across the board than it would be with the aggregate average. Best of luck.
    Andy

    #139730

    Savage
    Participant

    Tim,
Andy brought up another good point when he mentioned the expected value (expectation, etc.). You could even throw in a discrete probability distribution over the severity of the events and the costs associated with each, to give you perhaps an even clearer picture of this average…or not. But all kidding aside, good luck with your project.
    Matt

    #139731

    BOAMBB
    Participant

    Tim,
    It sounds like you have a bimodal distribution – one distribution of costs associated with small storms and one distribution of costs associated with large storms.  The key factors that drive cost would seem to be different in each distribution.  Have you considered moving away from grand-mean calculations and doing separate analyses for the different distributions?
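For instance, a rough sketch in Python with hypothetical per-storm data (the size cut-off and the figures are assumptions, just to illustrate the idea):

```python
# Hypothetical (total spend, job count) pairs; the real cut-off
# between the two storm populations would come from Tim's own data.
all_storms = [
    (30_000, 400), (26_000, 350), (45_000, 600),  # small storms
    (2_875_000, 25_000), (3_450_000, 30_000),     # large storms
]

LARGE_STORM_JOBS = 5_000  # assumed threshold separating the two modes

small = [s for s in all_storms if s[1] < LARGE_STORM_JOBS]
large = [s for s in all_storms if s[1] >= LARGE_STORM_JOBS]

def pooled_cost_per_job(group):
    """Total spend divided by total jobs for one stratum."""
    spend = sum(s for s, j in group)
    jobs = sum(j for s, j in group)
    return spend / jobs

print(f"small storms: ${pooled_cost_per_job(small):.2f} per job")
print(f"large storms: ${pooled_cost_per_job(large):.2f} per job")
```

With separate baselines like these, improvement targets can be set per stratum instead of against one grand mean that mixes the two populations.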
    boambb

    #139733

    Heebeegeebee BB
    Participant

    boambb,
Right On!!!  From the limited data, it sure sounded like there were multiple populations in the mix.
It sounds like some stratification/rational subgrouping is called for.
     

    #139749

    Tim
    Member

    boambb –
Yes, it is in fact a bimodal distribution, as you indicate. Yes, I have separated the large and small storms. I'm also considering using the median rather than the mean. One of the key improvement opportunities is around resource management. While it will have a positive impact on the small storms, the largest gain will come from the large storms. I like your suggestion. I may be able to find another "x" that has greater influence in the small storms and address that one as well, setting different improvement targets for the different types of storms….
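Here's a quick illustration of why the median appeals to me, in Python with made-up per-storm cost-per-job values (not our real data):

```python
# A few big storms skew the per-storm cost-per-job figures upward;
# the median resists that pull while the mean follows it.
import statistics

cost_per_job = [72, 74, 75, 76, 78, 80, 83, 110, 118]  # made-up, skewed

print(statistics.mean(cost_per_job))    # ~85.1, pulled up by the big storms
print(statistics.median(cost_per_job))  # 78, stays near the typical storm
```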
     
    Thanks!
    Tim

    #139751

    OLD
    Participant

    Tim:
     
As you begin to look at your X's, you may want to consider the time of day and the day of the week that the storms occur (not that you have much control over the weather to schedule your storms!). Available resources will likely differ depending on the time of day the storm hits: a daytime storm may cost less to repair than a comparable night-time storm, and a weekend storm may cost more than a weekday storm. Data/facts may lead to changes in staffing, manpower coverage, and equipment availability.
     
Depending on where you are located geographically, you may also want to consider the time of year or season. Winter ice storms may cost more than summer storms (due to the inefficiencies of working in cold weather), and the type of storm may have an impact too: ice storms vs. tornadoes vs. wind, etc. It all gets back to having the appropriate resources available for the repair. Once you know more about the data, short-term or alternative resource solutions may become more apparent.
     
I seem to remember a statistic from somewhere saying that 98% of all power outages last less than two seconds (that may not be exact, so please validate/verify). With storms, you are truly dealing with the exceptions, and thus with some very unique/creative solutions.
     
    Sounds like a fun project!
     Good Luck! OLD

    #139753

    Tim
    Member

    OLD –
Very insightful!  These are indeed drivers, as they affect overtime and premium compensation, especially with represented labor. That might drive you to consider working crews on split shifts to avoid OT pay, but there are efficiency and safety issues with working around live power in the dark! While I've touched on this aspect of the problem a little, I've also recognized that the time the storm hits is out of my control. I can, however, control our "reaction" to the storm. Knowledge of these X's may prove useful when we make decisions about the timing of our response. For instance, calling crews in at 5 a.m. vs. waiting for the normal shift start might not make financial sense unless a health or safety issue or a large commercial customer is involved.
My project appears to have some even lower-hanging fruit than this – waste in the form of wait time incurred as a result of our "all hands on deck" approach. That appears to be where my biggest savings opportunity exists initially.
I'm in the Midwest. We've been looking at season, temperatures, wind speed, etc., and have developed a response strategy based on many of these factors. To your point, an ice storm that hits while many trees still have their leaves, followed by heavy winds and cold temps = VERY BAD news for us! LOL
One of my peers has worked extensively on minimizing outage frequency and duration. These are both public service commission measures we are rated on. As you might suspect, the small, one-customer outages don't influence our performance on these metrics as heavily as the mass outages that may last for several days. Massive improvements on the one- and two-customer outage situations have very little impact on this metric. However, minor improvements on the very large-scale outages have a very large influence on our performance vs. the metric.
    Thanks for the feedback and ideas!
    Tim

    #139758

    better than darth
    Participant

If you want to calculate the cost savings, first compute the total benefits your project will deliver, then subtract the total spend to get the savings. Also, include a financial analyst on your project team to compute the figure accurately.
