iSixSigma

Lean Productivity Project – Improve Phase?


  • #39900

    Cravens
    Participant

    I need help from those of you who know both Lean and Six Sigma well.
    Question: when improving productivity using the quick-hit Kaizen event approach, what validation tools are appropriate to confirm that an improvement was made? My research indicates that most projects of this type do not use the statistical validation tests that would be typical on a traditional Six Sigma project. Also, these quick-hit projects tend to focus on total labor cost and inventory reduction over a short 2-3 week period.
    So, what approach can you, or should you, use under these circumstances?

    #122512

    BTDT
    Participant

    Fred: I agree that most Lean projects summarize the benefits based on faith and before/after photographs of the workstations.
    In a Six Sigma project, you have gathered baseline productivity data during your Measure phase. Once the improvements have been made (by whatever means) and the controls have gone into place, you gather the same data using the same operational definitions of your key process indicators. You only have to wait long enough to prove that the new process is stable and sustainable. Perform the statistical test to verify the change, then total and summarize the differences.
    BTDT
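    A minimal sketch of that verification step, assuming the KPI is cycle time in minutes per order; the sample values and the choice of Welch's t-test are illustrative assumptions, not part of BTDT's post:

        # Hedged sketch: verify a before/after productivity improvement.
        # Assumes cycle-time samples (minutes per order) were collected with
        # the same operational definition before and after the change.
        import numpy as np
        from scipy import stats

        before = np.array([12.1, 14.3, 11.8, 13.5, 15.0, 12.9, 14.1, 13.2])  # hypothetical
        after = np.array([10.2, 11.1, 9.8, 10.9, 11.5, 10.4, 10.8, 11.0])    # hypothetical

        # Welch's t-test does not assume equal variances; for heavily skewed
        # transactional data, a nonparametric test (Mann-Whitney U) is safer.
        t_stat, p_val = stats.ttest_ind(before, after, equal_var=False)
        print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")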

    #122516

    Cravens
    Participant

    BTDT, thanks for your feedback.
    How would you suggest handling processes that are highly human dependent, combined with a highly variable order mix? In other words, each person fills many orders during the course of a day, each of which may vary significantly in work content.
    I am struggling to bring this down to an as-is cost per activity as a starting place, since orders vary so much in size and quantity. Any suggestions or alternatives to consider in this situation?

    #122517

    BTDT
    Participant

    Fred: If you cannot segment the data in some logical way (international versus domestic orders, for example), then accept that the sample is a random sample of 'typical' data. Gather data for a longer period of time, until you are satisfied you have an adequate sample of 'typical' cycle times. If the cycle times are human dependent, then group the data by operator.
    We commonly find a great deal of variation in cycle times for transactional processes. We found our cycle times followed log-normal or Weibull probability distributions. In either case the data can be transformed to be approximately normal, or you can do the process capability calculation with the assumed distribution.
    Use Levene's test (a test of equal variances) to show differences in the spread of the distributions. Use Mood's median test to show the difference in medians.
    Cheers, BTDT
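    A short sketch of those two tests in Python with SciPy; the log-normal samples below are hypothetical stand-ins for the by-operator cycle times described above:

        # Hedged sketch: compare two operators' cycle-time distributions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        op_a = rng.lognormal(mean=2.0, sigma=0.4, size=40)  # hypothetical cycle times
        op_b = rng.lognormal(mean=2.3, sigma=0.6, size=40)  # hypothetical cycle times

        # Levene's test for homogeneity of variance (robust to non-normality)
        w_stat, p_var = stats.levene(op_a, op_b)
        print(f"Levene: W = {w_stat:.2f}, p = {p_var:.4f}")

        # Mood's median test for a difference in medians
        chi2, p_med, grand_median, table = stats.median_test(op_a, op_b)
        print(f"Mood's median test: chi2 = {chi2:.2f}, p = {p_med:.4f}")

        # A log transform (np.log) often makes log-normal data near-normal
        # before a capability calculation.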

    #122518

    6sigmatools
    Participant

    First, in order to collect data, it may be difficult to do the many time studies needed to get a baseline. In this case I suggest using a "walking traveler," in which individuals log their time per step, time per order pick, or whatever data is necessary for you; a sketch of one such log record follows. Make sure you define start and stop dates for collecting your data. There is no use constantly collecting data that is never used for improvement.
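    A minimal sketch of what one walking-traveler record might look like, assuming a flat CSV log; every field name and value here is hypothetical:

        # Hedged sketch: append one "walking traveler" record per process step.
        import csv
        from datetime import datetime

        with open("traveler_log.csv", "a", newline="") as f:
            writer = csv.writer(f)
            # operator, order, step, start time, stop time (all hypothetical)
            writer.writerow(["op_07", "ORD-1234", "pick",
                             datetime(2005, 11, 1, 9, 14).isoformat(),
                             datetime(2005, 11, 1, 9, 21).isoformat()])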
     
    Second, you will need to be able to chart data based on a constant. In this case, you may be able to take all of the different options, the usage per option, and the time per option, and compute a weighted average. Since there is so much variation, a standard average will not work well. The weighted average can be your baseline; then you can chart the variation from that baseline using an MR chart, bar chart, etc.
    In most cases, you will find that you have so much noise in your process that doing a good Kaizen event will reduce that noise; you can find a constant more easily after that.
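    A rough sketch of charting daily deviations from a weighted-average baseline with moving ranges; the baseline value and daily times are hypothetical:

        # Hedged sketch: moving-range (MR) check on deviation from a baseline.
        import numpy as np

        baseline = 11.5  # hypothetical weighted-average minutes per order
        daily = np.array([12.0, 11.2, 13.1, 10.9, 11.8, 12.4, 11.1])  # hypothetical
        deviation = daily - baseline

        mr = np.abs(np.diff(deviation))  # moving ranges of successive points
        mr_bar = mr.mean()
        ucl_mr = 3.267 * mr_bar          # standard MR-chart limit (D4 = 3.267 for n = 2)
        print(f"MR-bar = {mr_bar:.2f}, UCL = {ucl_mr:.2f}")
        print("Moving ranges beyond the limit:", mr[mr > ucl_mr])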

    #122544

    Cravens
    Participant

    BTDT, this helps give me some ideas. I understand and could collect the data by individual operator; however, how do I bring the by-operator data (for many operators) back to a single process DPMO and sigma level? Maybe I am misunderstanding what is being suggested. Thank you.

    #122545

    Cravens
    Participant

    6sigmatools,
    I am not familiar with the technique you mentioned regarding using a constant and then converting to a weighted average. Can you elaborate, or point me to where I can learn more about this technique for my application?
    In this situation, there are literally thousands of SKUs, and the quantity of items varies for each as well. With this much volatility in what is being ordered, picked, and packed, and the large number of people involved, I am struggling to get this down to a common denominator or a constant as suggested.
    Would appreciate any additional thoughts or ideas. I appreciate your insights.

    #122546

    BTDT
    Participant

    Fred: If all the operators are part of normal business operations, then you include all of them in the determination of your day-to-day process capability as part of Measure, and report a baseline DPMO.
    When you are doing Analyze, you can break the productivity measurement into groups to see if you can find the vital Xs that are contributing to the variation in the process. If there are differences by product line, then consider segmenting the business processes. If there are differences between operators, consider focused training or streaming of different types of requests.
    Does that help clarify things?
    BTDT
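    A minimal sketch of the baseline DPMO and sigma-level arithmetic, pooling all operators into one data set; the defect and unit counts are hypothetical, and the 1.5-sigma shift is the conventional short-term adjustment:

        # Hedged sketch: one pooled baseline DPMO across all operators.
        from scipy.stats import norm

        defects = 385        # hypothetical: orders that missed the cycle-time spec
        units = 12000        # hypothetical: total orders, all operators combined
        opportunities = 1    # one defect opportunity per order (assumption)

        dpmo = defects / (units * opportunities) * 1_000_000
        sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5  # conventional 1.5 shift
        print(f"DPMO = {dpmo:,.0f}, sigma level = {sigma_level:.2f}")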

    #122570

    6sigmatools
    Participant

    The weighted average calculation looks like this:
    product A: price * volume = total $ for A
    product B: price * volume = total $ for B
    product C: price * volume = total $ for C
    weighted average = (sum of total $) / (sum of volume)
    In Excel, the formula is:
    =SUMPRODUCT(B2:B6,C2:C6)/SUM(C2:C6)
    (see http://www.beyondtechnology.com/tips011.shtml)
    This will allow you to weight against the highest-volume items…
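    The same calculation as a quick script, with hypothetical per-SKU pick times standing in for prices:

        # Hedged sketch: weighted-average baseline, mirroring SUMPRODUCT/SUM.
        times = [1.2, 3.5, 0.8]    # minutes per pick for products A, B, C (hypothetical)
        volume = [500, 120, 900]   # picks per period (hypothetical)

        weighted_avg = sum(t * v for t, v in zip(times, volume)) / sum(volume)
        print(f"Weighted-average pick time = {weighted_avg:.2f} min")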

    #122666

    Cravens
    Participant

    BTDT, this really helps. I am still perplexed about how I can combine all the operators into one data set to calculate the total process DPMO, sigma level, etc. When I combine the data sets for each operator, I am almost certain that the combined data set will show a lack of homogeneity, since it contains apples and oranges, so to speak.
    If this does happen, how can you get to a combined total process DPMO and sigma level?
    Am I making this more complicated than needed?

    #123037

    R.M.Parkhi
    Participant

    The easiest and statistically reliable method is the B vs. C technique propounded by Shainin. You may refer to the book 'World Class Quality: Using Design of Experiments to Make It Happen' by Keki R. Bhote, published by the American Management Association. After going through this, if you still have a problem, you may mail it to me and I shall try to help.
    Regards,
    R.M.Parkhi
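    For reference, the core of a B vs. C comparison is a simple rank test. The sketch below assumes three "better" (B) runs and three "current" (C) runs, with lower cycle time being better; complete separation of the ranks occurs by chance only 1 time in 20, giving roughly 95% confidence. The sample values are hypothetical:

        # Hedged sketch: Shainin-style B vs. C with 3 runs each.
        # If every B result beats every C result, the chance of that
        # ranking under no real difference is 1 / C(6,3) = 1/20.
        b_runs = [10.1, 9.8, 10.4]   # hypothetical improved-process cycle times
        c_runs = [12.0, 11.6, 12.8]  # hypothetical current-process cycle times

        if max(b_runs) < min(c_runs):
            print("No overlap: B beats C with ~95% confidence.")
        else:
            print("Ranks overlap: no conclusion at this confidence level.")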

