# Process Sigma Calculation

Six Sigma – iSixSigma Forums › Old Forums › General › Process Sigma Calculation

#26999

LaySim
Participant

Hi,

I found the Process Sigma Calculator on iSixSigma, which works by just keying in "Total Opportunities" and "Total Defects".

Could somebody teach me the way to calculate this?

Thanks.

0
#65670

Jim Parnella
Participant

Laysim,
Here is how to calculate the values on the Process Sigma Calculator found at
https://www.isixsigma.com/sixsigma/six_sigma_calculator.asp

Defects Per Million Opportunities (DPMO) =
(Total Defects / Total Opportunities) * 1,000,000

Defects (%) =
(Total Defects / Total Opportunities)* 100%

Yield (%) =
100 – %Defects

Process Sigma (type this formula into Excel):
=NORMSINV(1-(total defects / total opportunities))+1.5

Be sure to include the Equals (=) sign. This will give you your process sigma (or sigma capability) assuming the 1.5 sigma shift.

For example if you type this into Excel,
=NORMSINV(1-100/1000000)+1.5
you will get 5.22 for your Process Sigma.
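If you want to check these numbers outside Excel, here is a minimal Python sketch of the same four formulas (the function name is mine, not part of the iSixSigma calculator); Python's `statistics.NormalDist().inv_cdf` plays the role of Excel's NORMSINV:

```python
from statistics import NormalDist

def process_metrics(total_defects, total_opportunities):
    """Replicate the calculator: DPMO, defect %, yield %, and
    process sigma with the conventional 1.5-sigma shift."""
    dpo = total_defects / total_opportunities
    dpmo = dpo * 1_000_000
    defect_pct = dpo * 100
    yield_pct = 100 - defect_pct
    # NormalDist().inv_cdf is the equivalent of Excel's NORMSINV
    sigma = NormalDist().inv_cdf(1 - dpo) + 1.5
    return dpmo, defect_pct, yield_pct, sigma

# Jim's example: 100 defects in 1,000,000 opportunities
dpmo, defects, yld, sigma = process_metrics(100, 1_000_000)
print(round(sigma, 2))  # 5.22
```

The same function applied to 3 defects in 100 opportunities reproduces the 30,000 DPMO and 3.38 sigma discussed later in the thread.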

0
#65673

LaySim
Participant

Hi Jim,

Thanks for the explanation. My confusion is: are we calculating process sigma using DPMO or yield? To my understanding, DPMO cannot be converted to defect %, can it?

Say I have 10 parts, and each part has 10 opportunities for defects. So
Total Opportunities = 10 X 10 = 100

And say, there is 1 part with 3 defects, and the other 9 parts are free of defects, so
DPMO = 3 / 100 X 1000000 = 30,000

But, for this case, since there is only 1 part with defect, so
Defect % = 1 / 10 X 100 = 10%

And thus, Yield = 100 – 10 = 90%
( but not 97% {100-3/100×100} ), right?

So, in this case, should the process sigma be 2.78 (where yield = 90%) or 3.38 (where yield = 97%) ?

Thanks again.

0
#65677

Jim Parnella
Participant

LaySim,
You make a valid point with your question, but I believe that the Sigma Level is based on DPMO, not on yield. Therefore the Sigma Level from your example is 3.38, not 2.78.

Anyone else have a take on this?

0
#65678

Thomas Jeffers
Member

Hi LaySim and Jim,

I’ve been “lurking” in this discussion area for a while now and finally decided to post :). This is an excellent exchange on topics that I haven’t been able to discuss at my current employer. I want to commend you two for the exchange.

I haven’t been practicing six sigma for as long as you two, but from what I’ve gleaned from the magazines and books, the key distinguishing feature is “who’s the customer?” LaySim, when you ask whether the yield should be 97% (because 3 defects out of 100 opportunities) or 90% (because 1 defective part out of 10), you make a key distinction. 97% is acceptable to the customer. Whether all three defects were on different parts or on the same part, the customer would find them unacceptable – provided you have defined defects in terms of your customer’s needs. When you talk about 90%, yes, the customer is going to find the defects also, but it strikes me as more internally focused.

The point to six sigma, if I understand it correctly, is that our processes will never operate at their maximum potential unless we look for and root out the defects. Only by counting all defects and opportunities (total opportunities should be 100) will we have the ability to analyze the data through statistical measures to uncover the root cause of the defects. Once we have agreed that this is the way we should measure, it doesn’t make sense to have two standards. It’s too confusing for the shop floor, for management and anyone else looking at our control charts.

In my humble opinion, you should stick with the 97%. It contains much more detail about the process capability. It’s like reporting only the average when you have all the data and could also report the standard deviation!

I hope my two cents are worthwhile. And thanks again for the dialogue. Happy new year everyone.

Thomas

0
#65680

howe
Participant

Thomas-
Although you say you haven’t been practicing Six Sigma very long, I’d say you have an excellent fundamental understanding. Good assessment!

0
#65681

Thomas Jeffers
Member

Hi Mike,

Thanks for your comment. I appreciate it and always look for new ways to increase my six sigma knowledge.

Talk to you soon,
Thomas

0
#69072

Prashant Nasery
Participant

Hi Jim,
I was pondering the same question many others have had, namely “How does the sigma calculator calculate sigma?” My search led me to your message on this site dated 2-Jan-2001. I definitely benefited from learning the Excel formula you gave, and I am grateful to you. However, I still have a question at the conceptual level.
What I understand of process sigma level is that, in order to determine the sigma level for a variable characteristic, we need to know the tolerance limits (i.e. the Upper Specification Limit and Lower Specification Limit) and the process sigma (i.e. sigma level = (USL-LSL)/sigma value).
However, for an attribute characteristic, usually a binomial distribution is used, wherein to calculate the sigma value we need to know n (sample size) and p (percentage defective, or probability of defectives in a population). In your Excel formula we are calculating the sigma level for an attribute situation, but using a normal-distribution formula (which is used for a variable characteristic). Can you please help us understand the concept underlying your Excel formula?
I shall be much obliged if you could help me understand the concept of process sigma level for an attribute characteristic.
Regards,
Prashant Nasery

0
#69212

Tara Fernandes
Member

Does anybody know how to calculate process sigma using a software program that does not have the NORMSINV function? Or alternatively, what are the calculations that occur when this function is executed in MS Excel?

0
#69919

Bonnie Paris
Participant

Prashant,
Process sigma level for an attribute characteristic can be viewed in several ways. The simplest way – and the way that the iSixSigma calculator and Jim’s equation handle it – is to use p, the fraction nonconforming in the binomial distribution, as alpha in the normal distribution.
For example, a photo processing lab has an error rate of 0.03% defects, and we want to know at what “sigma” the process is performing. p = 0.0003, and so we take alpha as 0.0003.
Then we use the inverse of the standard normal distribution and find
Z(1-alpha) = Z(1-0.0003) = Z(0.9997) = 3.43 “sigma”
** Note, the iSixSigma calculator adds on 1.5 sigma to account for process shift, so if you put this problem into the iSixSigma calculator or use Jim’s equation, the result will be 4.93 sigma.
This approach can be thought of as saying, “I know the process is binomial, but if a normal process were producing the same proportion of errors, what would the sigma of the process be?”  This is my preferred approach because it is easy to explain.
Another approach for binomial attribute data is to invoke the Central Limit Theorem and use a normal approximation to the binomial, with mean = n*p and sigma = SQRT(n*p*(1-p)). Then you are dealing with a normal distribution – but this tends to be much more difficult to explain.
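Both approaches can be sketched in a few lines of Python (the function names are mine; `statistics.NormalDist` supplies the standard normal quantile):

```python
from math import sqrt
from statistics import NormalDist

def sigma_from_p(p, shift=1.5):
    """First approach: treat the binomial fraction nonconforming p
    as a normal tail probability and read off the Z value."""
    return NormalDist().inv_cdf(1 - p) + shift

# Photo-lab example: p = 0.0003
print(round(sigma_from_p(0.0003, shift=0), 2))  # 3.43, the unshifted Z
print(round(sigma_from_p(0.0003), 2))           # 4.93 with the 1.5 shift

def binomial_normal_approx(n, p):
    """Second approach: CLT normal approximation to the binomial,
    mean = n*p, sigma = sqrt(n*p*(1-p))."""
    return n * p, sqrt(n * p * (1 - p))
```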
I hope my comments were helpful to you.
Regards,
Bonnie Paris

0
#70074

Ken K.
Participant

Using:
Sigma =NORMINV(1-DPMO/1000000,1.5,1)
The Excel NORMINV function’s estimate of Sigma is accurate to two decimal places between 1.4 and 5.7 Sigma. It is accurate to one decimal place between 5.7 and 6.1 Sigma.
Using a much longer formula:
Sigma =SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))-(2.55155+0.802853*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))+0.010328*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^2)/(1+1.432788*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))+0.189269*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^2+0.001308*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^3)+1.5
This “long” formula is accurate to two decimal places between 3.2 and 9 Sigma. It is accurate to one decimal place between 2.5 and 3.2 Sigma. It should not be used below 2.5 Sigma or above 9 Sigma.
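A Python rendering of the long formula (my own translation, not Ken’s) makes the grouping explicit: LN is the natural log, its argument is 1 divided by the square of the whole (1-EXP(-DPMO/1000000)) term, and every squared or cubed quantity in the rational approximation is the entire square-root expression:

```python
from math import exp, log, sqrt

def sigma_long(dpmo):
    """Translation of the 'long' Excel formula. X is the whole term
    SQRT(LN(1/(1-EXP(-DPMO/1000000))^2)); X^2 and X^3 in the rational
    approximation are powers of that whole term."""
    e = 1 - exp(-dpmo / 1_000_000)
    x = sqrt(log(1 / e**2))  # natural log, like Excel's LN
    return (x
            - (2.55155 + 0.802853 * x + 0.010328 * x**2)
            / (1 + 1.432788 * x + 0.189269 * x**2 + 0.001308 * x**3)
            + 1.5)

print(round(sigma_long(1000), 2))  # 4.59, matching NORMSINV(1-0.001)+1.5
```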

0
#70087

T. Fernandes
Member

Thanks Ken

0
#70210

Kumar
Participant

Hi Ken K.,
The formula can be given in a simplified form, which is quite obvious when we take the LN term as, say, X, and rewrite the formula.

0
#70388

d
Participant

Can anyone tell me the exact formula to use in Access to calculate the sigma level? I have a wide range of defects (from 50,000 DPMO to 800,000 DPMO). My data is all in Access, and I need to calculate the sigma level and output it in an Access report. Access does not have an LN function, so that part of any equation must be approximated with Log. I got lost trying to convert the previously posted formula.
Thanks for any help.

0
#70390

Ashman
Member

I looked at the process sigma calculator (button in upper right below the search box), and they present a link to a forum post (https://www.isixsigma.com/forum/showmessage.asp?messageID=437) that gives information on calculating the sigma using a NORMSINV function. Can you use the NORMSINV function in Access?
Steve

0
#70392

d
Participant

Steve,
That’s the problem. I don’t know how to use the Excel formula in Access, or even if that is possible. I tried to use a formula from the book Implementing Six Sigma (great book, by the way), but it seems to fall apart somewhere between 553,000 DPMO and 554,000 DPMO. I have emailed the author. I was hoping someone would make it easy for me! Using the Excel formula would be fine; I just don’t know a clean way, and I am a novice at Access also!

0
#70663

Colin Woods
Participant

Hi,
I too have spent today trying to get Access to calculate a Sigma score, but the NORMSINV function isn’t there in Access. I tried to use a lookup table, but that didn’t work either. I would love to hear how to solve this!

0
#71227

RB
Participant

I have a query that gets its data from a table. See the SQL statement below:

SELECT DISTINCTROW Table6.[Assembly Name],
    tblCMS.[Number of Opportunities for Defects],
    Sum(Table6.[Number of Defects]) AS [Sum Of Number of Defects],
    Count(*) AS [Count Of Table6],
    ([Sum Of Number of Defects]/([Count Of Table6]*[Number of Opportunities for Defects]))*1000000 AS DPMO,
    ([Sum Of Number of Defects]/([Count Of Table6]*[Number of Opportunities for Defects])) AS [Percent Defective],
    (100-([Sum Of Number of Defects]/([Count Of Table6]*[Number of Opportunities for Defects]))) & "%" AS Yield,
    Sigma([Sum Of Number of Defects],([Count Of Table6]*[Number of Opportunities for Defects])) AS Sigma
FROM tblCMS INNER JOIN Table6 ON tblCMS.[Assembly Name] = Table6.[Assembly Name]
GROUP BY Table6.[Assembly Name], tblCMS.[Number of Opportunities for Defects];

I run this query to activate the following code in a module:

Function Sigma(Arg1, Arg2) As Double
On Error GoTo Err_Sigma

    Dim Calculate_With_Excel As Object

    Set Calculate_With_Excel = CreateObject("Excel.Sheet")
    Sigma = 0
    Calculate_With_Excel.ActiveSheet.Range("A1").Formula = Arg1
    Calculate_With_Excel.ActiveSheet.Range("B1").Formula = Arg2
    Calculate_With_Excel.ActiveSheet.Range("C1").Formula = "=NORMSINV(1-(A1/B1))+1.5"
    Sigma = Calculate_With_Excel.ActiveSheet.Range("C1").Value

    Calculate_With_Excel.Close

    Set Calculate_With_Excel = Nothing

Exit_Sigma:
    Exit Function

Err_Sigma:
    ' Didn't care about the error trap... they can only be division-by-zero errors
    Resume Exit_Sigma
End Function

Works great!

0
#71316

Wayne Lombardo
Member

Ken, does this formula have all the brackets in place? There seem to be some missing if you count them.

0
#71322

Jean Luc Lemieux
Participant

We are using Minitab with the Six Sigma package at work, and we found big differences (deltas, or inaccuracies) between the sigma level results from the Minitab package and the Excel NORMINV formula, especially when dealing with large DPMO figures. I would be looking for a more stable and accurate function to implement in Excel.

0
#71363

Alok Jain
Participant

Dear Sir
Excellent
Alok Jain

0
#71445

Brian Birk
Participant

Thanks for all the great info in this forum. I’ve noticed a lot of people are looking for an algorithm to compute the inverse normal, and I’ve found a site that has the info. Check out: http://www.math.uio.no/~jacklam/notes/invnorm/ The site contains good information, including not only pseudocode on how to do the computation but also MATLAB and Perl scripts. Hope this helps.
Brian

0
#71545

Dave Harrold
Participant

I’m having trouble understanding exactly what the calculator wants in the way of data from me. Can you explain Units, Opportunities, and Defects using this golf example?
A golfer plays 70 rounds. Within each round are 18 greens. Ideally he/she wants to have no more than two putts on each green. However, the golfer three putts one green on each of the 70 rounds.
Is this what I’d put into the Sigma calculator?
Total Units = 70 x 18 = 1,260 total greens
Opportunities per Unit = 2 putts
Total defects = 1 (or is it 70 x 1 = 70)?

0
#71547

McNabb
Participant

Total Units = 70 x 18 = 1,260 total greens
Opportunities per Unit = 2 putts

Total defects = 1 (or is it 70 x 1 = 70)?
Dave,
It all depends on how you define the opportunity and defects, but I think you are on the right track. If you are defining a defect as any time a golfer over-putts (more than 2) on a green, the opportunities are 1,260 (as you suggest), and I would consider it one opportunity per green (not two as you suggest). If they three-putt one of the greens, that is 1 defect. Plug ’em in!
This is a very discrete analysis. You can look at it as more continuous by saying the spec limit is two putts and looking at the continuous data. Anything greater than 2 would be a defect in this case.
Mary
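Running this counting convention through the formulas from earlier in the thread gives a concrete number. The figures below are my own reading of the example (one opportunity per green, one three-putt per round over all 70 rounds), not part of the original posts:

```python
from statistics import NormalDist

# Hypothetical reading of the golf example: 70 rounds x 18 greens,
# one opportunity per green, one three-putt (defect) per round.
opportunities = 70 * 18   # 1,260 greens
defects = 70              # one three-putt in each of the 70 rounds
dpo = defects / opportunities
dpmo = dpo * 1_000_000
sigma = NormalDist().inv_cdf(1 - dpo) + 1.5
print(round(dpmo), round(sigma, 2))  # 55556 3.09
```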

0
#71550

Dave Harrold
Participant

So maybe the way to look at it isn’t for 70 rounds of golf but for one 18 hole round of golf.
Then the calculator inputs would be:
Total Units = 18 greens
Opportunities = 2 strokes/putts per green
Total defects = 1 extra stroke/putt on one of those 18 greens
Does that make sense?

0
#71552

RGD
Participant

I believe the most important thing about counting opportunities and defects is not necessarily how it is done, but how consistently it is done.
Assume, for example, that you own several golf courses. Course one counts opportunities as two putts per hole (36 opportunities) and defines a defect as any putt over two per hole. Then a three-putt would count as one defect.
The second course also counts opportunities as two putts per hole (36 opportunities) but defines a defect as any putt that is more than three inches outside of its target. Your first-putt target is a spot about 12 inches from the hole and you end up 16 inches short. That is your first defect. Now assume that the second putt misses the target of going in the cup. You now have two defects. Furthermore, if you had made the second putt, you would still have one defect, because your second putt was burdened by coming up short on the first putt.
The real issue becomes how you compare two courses that are not counting opportunities and defects the same way. It is more important that everyone be on the same page than how specific scenarios are counted.

0
#71591

schaeffers
Member

Hi all,
I have been struggling, like all of you, with this problem. To help companies we have made a free Six Sigma calculator, both for variables and for attributes. We have all the formulas, and I’m willing to offer some free help to get things working for you. We have the calculations done in Visual Basic and can offer you some help in that direction.
We can also give you complete explanations of how the calculations need to be done. We used the calculations given by Mikel Harry in his eight books, The Vision of Six Sigma.
What’s the catch? Nothing; everything I offer is free. However, you can also use SPC Wizard, which does all the calculations for you, can read Excel or Access directly, and shows control charts, histograms, and multi-vari analysis as well.
For the free calculator, or requests for the calculations, check http://www.spcwizard.com.
I don’t look at the forum very often, so if you are in a hurry, send me an email directly.

Kind regards,

Marc Schaeffers
Netherlands

0
#71592

schaeffers
Member

Hi,
I did not read all the replies, so at the risk of repeating something:
Aren’t we looking in completely the wrong direction?
I believe you are trying to treat variable data as attributes. If you assume a specification limit, for example par, you can just calculate averages and standard deviations and calculate sigma levels from there.
You can calculate sigma levels directly from variable data.
Even better, use control limits, Cp, Cpk and Ppk, because they will give you more information about your process.
Kind regards

Marc Schaeffers

0
#71824

Chaturvedi
Participant

The formula is not working correctly. It is ambiguous which part of the formula is to be squared/cubed.
For DPMO of 111111 the Sigma should be 2.72 according to Excel
but it is returning -1.05155
The formula has been translated as
function test(d)
{
var x;
x=Math.log(1/(1-Math.exp(-d/1000000))*(1-Math.exp(-d/1000000)));
x=x/Math.log(10);
result=Math.sqrt(x)-(2.55155+0.802853*Math.sqrt(x)+0.010328*Math.sqrt(x)*0.010328*Math.sqrt(x))/(1+1.432788*Math.sqrt(x)+0.189269*Math.sqrt(x)*0.189269*Math.sqrt(x)+0.001308*Math.sqrt(x)*0.001308*Math.sqrt(x)*0.001308*Math.sqrt(x))+1.5
return result;
}
Please check if the formula is correct

0
#71825

Chaturvedi
Participant

This formula is not returning the correct inverse of the standard normal cumulative distribution.
For a probability of 0.908789, the value returned should be 1.3333,
but the value returned is 0.6270580123677483.
I checked the correct value by using the NORMSINV(0.908789) function in Excel. Please advise whether this function is equivalent to the NORMSINV function.

0
#71852

Brian Birk
Participant

Pankaj,
That’s odd; my function seems to return 1.33. Maybe you should double-check your algorithm. Here is what I’m using (in Java code). Hope this helps.

public class invNorm {

    public invNorm() {
    }

    /*
     * LTQNORM  Lower tail quantile for standard normal distribution.
     * Z = LTQNORM(P) returns the lower tail quantile for the standard normal
     * distribution function. I.e., it returns the Z satisfying Pr{X < Z} = P,
     * where X has a standard normal distribution.
     *
     * LTQNORM(P) is the same as SQRT(2) * ERFINV(2*P-1), but the former
     * returns a more accurate value when P is close to zero.
     *
     * The algorithm uses a minimax approximation by rational functions and
     * the result has a relative error less than 1.15e-9. A last refinement
     * by Halley's rational method is applied to achieve full machine precision.
     *
     * Author: Peter J. Acklam
     * Time-stamp: 2000-07-19 16:44:07
     * E-mail: [email protected]
     * URL: http://www.math.uio.no/~jacklam
     */
    public static double ltqNorm(double p) {
        // Coefficients in rational approximations.
        double[] a = {-3.969683028665376e+01,  2.209460984245205e+02,
                      -2.759285104469687e+02,  1.383577518672690e+02,
                      -3.066479806614716e+01,  2.506628277459239e+00};
        double[] b = {-5.447609879822406e+01,  1.615858368580409e+02,
                      -1.556989798598866e+02,  6.680131188771972e+01,
                      -1.328068155288572e+01};
        double[] c = {-7.784894002430293e-03, -3.223964580411365e-01,
                      -2.400758277161838e+00, -2.549732539343734e+00,
                       4.374664141464968e+00,  2.938163982698783e+00};
        double[] d = { 7.784695709041462e-03,  3.224671290700398e-01,
                       2.445134137142996e+00,  3.754408661907416e+00};

        // Define break-points.
        double plow  = 0.02425;
        double phigh = 1 - plow;

        // Rational approximation for lower region:
        if (p < plow) {
            double q = Math.sqrt(-2 * Math.log(p));
            return (((((c[0]*q+c[1])*q+c[2])*q+c[3])*q+c[4])*q+c[5]) /
                    ((((d[0]*q+d[1])*q+d[2])*q+d[3])*q+1);
        }

        // Rational approximation for upper region:
        if (phigh < p) {
            double q = Math.sqrt(-2 * Math.log(1 - p));
            return -(((((c[0]*q+c[1])*q+c[2])*q+c[3])*q+c[4])*q+c[5]) /
                     ((((d[0]*q+d[1])*q+d[2])*q+d[3])*q+1);
        }

        // Rational approximation for central region:
        double q = p - 0.5;
        double r = q * q;
        return (((((a[0]*r+a[1])*r+a[2])*r+a[3])*r+a[4])*r+a[5])*q /
               (((((b[0]*r+b[1])*r+b[2])*r+b[3])*r+b[4])*r+1);
    }
}

0
#71853

Brian Birk
Participant

Well, that posted code got pretty scrambled... the cut-and-paste method didn’t work too well. If you like, I will email you a copy. You can reach me at [email protected]
Good luck,
Brian

0
#71883

Kevin Hankins
Participant

The Six Sigma philosophy is to _understate_ your process by the 1.5 sigma shift.  You always shift the 1.5 toward the Customer Spec Limit.
Therefore, the Calculator is wrong.  It overstates the answer by 3.0 sigma, because the 1.5 should be -1.5, not +1.5.

Kevin Hankins
[email protected]

0
#72333

Sagar Sonar
Member

Where can I find the derivation of this equation? Wanted to know what assumptions are made while deriving this.

Thanks,
Sagar

0
#74377

RR Kunes
Member

I am not aware of any algorithm within Minitab which calculates sigma.
The NORMINV function is more accurate than most other methods, specifically better than utilizing tabulated Z values, as the Z values presume normality.
As long as you maintain the same method of calculation before and after the project, it really doesn’t matter.

0
#74378

RR Kunes
Member

LaySim,
Please be very careful with terminology: defects vs. defectives. Your operational definition must be agreed and consistent prior to calculating sigma values. What does the customer view as a defect?
DPMO refers to defects per million opportunites. Therefore your definition of a defect must be understood.

There is an alternate method which uses the equation (1-DPO) x 100 to calculate yield.

Your error was in the assumption that you had 90% yield. This is the argument regarding defects and defectives.

0
#74875

Participant

After reading through nearly all the Sigma Level replies, I have a question regarding a variable or continuous data process. If I use the customer’s spec limit, say USL, and work through the Z score calculation, I effectively arrive at my Sigma Level, or how my process is performing with respect to my nearest customer spec limit.
The trouble is, when I run the Z calc, I don’t get the same answer as when I convert to a DPM approach.
And, no, I’m not applying the 1.5 sigma shift yet – no point if the numbers don’t jibe.
Any help on this one?

0
#85817

Member

Please could you tell me how to calculate the sigma level for just one data point.
This is a yearly process; it happens once a year, so we have just one observation point.

0
#88587

Participant

Because Excel has the statistical functions that you need, you can “ACCESS” them by using the Office Automation functions. Please observe below the procedure as indicated on the Microsoft Support site (support.microsoft.com).

HOW TO: Call Excel Functions from Within Microsoft Access 2000

The information in this article applies to: Microsoft Access 2000; Microsoft Visual Basic for Applications.
This article was previously published under Q198571. For a Microsoft Access 97 version of this article, see 153748.
This article applies to a Microsoft Access database (.mdb) and to a Microsoft Access project (.adp).
Advanced: Requires expert coding, interoperability, and multiuser skills.

SUMMARY
This article shows you how to use Automation to call Microsoft Excel functions from within Microsoft Access.
Microsoft Excel has some functions that Microsoft Access does not have, for example, statistical functions and add-in functions. By using Automation, you can use these Excel functions in Access. To do so, first set a reference to the Microsoft Excel object library.

Setting a Reference to the Microsoft Excel Object Library
In Access, press ALT+F11 to switch to the Visual Basic Editor. On the Tools menu, click References. In the References box, click the Microsoft Excel 9.0 Object Library check box, and then click OK. After you set the reference, you can use the Object Browser to view all the Microsoft Excel objects, properties, and methods, including the functions available through the Application object.

The following two sample procedures use Microsoft Excel statistical functions.

NOTE: Excel does not automatically close after you open it through Automation. The following sample procedures use the Quit method to close Excel. For additional information about quitting Excel, please see the following article in the Microsoft Knowledge Base: 210129 ACC2000: Applications Run from Automation Do Not Always Close.

Example 1
The following subroutine uses Automation to call the Excel Median() function. Half of the set of numbers fall below the median and half above it.

Sub xlMedian()
   Dim objExcel As Excel.Application
   Set objExcel = CreateObject("Excel.Application")
   MsgBox objExcel.Application.Median(1, 2, 5, 8, 12, 13)
   objExcel.Quit
   Set objExcel = Nothing
End Sub

The subroutine displays 6.5 in a message box.

Example 2
The following subroutine uses Automation to call the Excel ChiInv() function, which returns the inverse, or the one-tailed probability, of the chi-squared distribution:

Sub xlChiInv()
   Dim objExcel As Excel.Application
   Set objExcel = CreateObject("Excel.Application")
   MsgBox objExcel.Application.ChiInv(0.05, 10)
   objExcel.Quit
   Set objExcel = Nothing
End Sub

The subroutine displays 18.3070290368475 in a message box.

You can simplify the code by calling the Excel reference directly:

Sub xlChiInv()
   MsgBox Excel.Application.ChiInv(0.05, 10)
End Sub

The equivalent call for NORMSINV would be:

Sub xlNormSinv()
   MsgBox Excel.Application.NormSinv(0.91945)
End Sub

NOTE: When you use this syntax, Excel remains in memory until you reset the code or close the database.

I hope this helps you solve your problem.

0
#89683

ROSS
Member

The correct calculations for sigma are as follows:
c: uniform subgroup, number of nonconformities:
Sigma = SQRT( cbar )
where “cbar” is the average number of nonconformities per subgroup.
pn: number rejected per sample:
Sigma = SQRT( n x pbar x ( 1 – pbar ))
where “pbar” is the average reject proportion and “n” is the sample size.
… etc. Basic statistics have not changed since the advent of six sigma!
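These textbook control-chart formulas are easy to sanity-check in a few lines; a minimal sketch (the example numbers are mine):

```python
from math import sqrt

def c_chart_sigma(cbar):
    """c chart: sigma = sqrt(cbar), where cbar is the average number
    of nonconformities per subgroup."""
    return sqrt(cbar)

def np_chart_sigma(pbar, n):
    """np (pn) chart: sigma = sqrt(n * pbar * (1 - pbar))."""
    return sqrt(n * pbar * (1 - pbar))

print(c_chart_sigma(4))                   # 2.0
print(round(np_chart_sigma(0.1, 50), 3))  # 2.121
```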

0
#89684

ROSS
Member

Bonnie,
The statistics for calculating sigma have not changed with the advent of “six sigma”. Attribute charts (c, u, pn, and p) and their calculations are the same as they have always been.
Tony

0
#89687

ROSS
Member

Where possible, it is always preferable to use variable data rather than counting defects, because variable data gives more information about the process. Counts should only be used for discrete events. Where counts are used, standard pn, p, c and u chart calculations should be used.
Tony

0
#101002

Sorour
Participant

I am trying to figure out how to apply the calculation to my inventory control process. I am pretty new to all this, so bear with me.
I have 14,000 locators and 20,000 SKUs, almost 600,000 parts. After completing our last cycle count, we ended up with a 99.6% accuracy rate. How would we convert that to a sigma? By accounting requirement, we cycle count 80% of the locators twice a year. Using Oracle terminology, we have a count tag, which equates to the quantity of a part number listed in a locator; a variance is when the count tag quantity doesn’t match the count.
The recount matches either the first count, the count tag, or a third value. The opportunity here is to determine the sigma value of the total inventory and the counting. Thanks for your help in advance.

0
#101030

GDS
Participant

Paul,
We measure location accuracy, and use the following formula to calculate a DPMO, we then convert the DPMO to a sigma value.
Locations in error / total locations counted X 1,000,000 = DPMO
1/1,000 = 0.001; 0.001 X 1,000,000 = 1,000 DPMO
DPMO of 1,000 = 4.59 sigma

0
#101032

Sorour
Participant

Ok, thank you, the picture is getting clearer now.
If we had 120 locators in error and a total of 14,000 locators counted, then the formula would be:
120/14,000 X 1,000,000 = 8,571 DPMO
Then the DPMO of 8,571 = ? sigma.
How did you get there? Is this good, and why?
Thanks
Paul

0
#101046

GDS
Participant

Paul,
There is a “Sigma Calculator” in the upper right-hand corner of this page; click on it and enter your data.

A DPMO of 8,571 is 3.88 sigma. (The bottom line of DPMO: if you counted a million locators, you could have around 8,571 errors.)
Your question of “Is this good and why?” is up to you and your management: how much is your missing or misplaced inventory costing your operation?
Our inventory accuracy target is 99.95%; we are running at 99.98%. We count 61,497 locations per month with 15 errors on average, which gives us a DPMO of 244 and 4.99 sigma. We have been cycle counting now for two years; prior to cycle counting, our inventory accuracy ran around 99.00% based on the annual physical inventory. I feel that sharing our inventory information with shop-floor personnel, along with several Six Sigma projects, has helped us in this area. However, I feel we still have room to improve.
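As a check, the conversions quoted in this exchange can be reproduced with the same NORMSINV-style calculation used earlier in the thread (the function name is mine; `statistics.NormalDist` stands in for NORMSINV):

```python
from statistics import NormalDist

def dpmo_to_sigma(dpmo):
    """Convert DPMO to process sigma with the 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Paul's locators: 120 errors in 14,000 counts
print(round(120 / 14_000 * 1_000_000))  # 8571 DPMO
print(round(dpmo_to_sigma(8_571), 2))   # 3.88
# GDS's monthly counts: 15 errors in 61,497 locations, about 244 DPMO
print(round(dpmo_to_sigma(244), 2))     # 4.99
```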

0
#101590

Dorene
Participant

I’m working on a similar project as a Six Sigma Black Belt on inventory accuracy. You mentioned collecting data on errors in inventory locations – do you mean that the number of parts in a physical location does not match what is recorded in your ERP system? For opportunities for defects on a single unit, I’m debating whether to include the reasons for defects (e.g. planning error, operator move error, etc.) as possible defects, or to just use “1” for defects on a single unit (meaning the physical quantity of parts either matches the system or it doesn’t). Does anyone have any opinions on this?
Paul, how are you quantifying your savings for this project? Inventory Record Accuracy is important, but how are you justifying it? Would you mind sharing what measures you’re using also?
Would really appreciate anyone’s input on this – this is a most helpful forum! Thanks!

0
#101595

Sorour
Participant

We use a cycle counting process that begins from the first locator (location) and goes to the last. We have 13 subinventories in one organization and about 22 organizations in the system. Our parts range anywhere from $0.01 to tens of thousands of dollars, and there are about 550,000 parts in one subinventory. My job is to analyse the process and results to see improvement, and to quantify the results, among other things. :)
We have found that cycle count methods like ABC analysis and different categories of counting have one systemic problem: you never pick up the lost parts. You are targeting one type of criterion, and you will not find extra or misplaced parts. If the count sheet has 10 parts on it with associated quantities, you can verify the accuracy of the system; but if you have parts that were put away wrong, or placed in areas and not entered into the system, there will be no finding them without a bin-to-bin count. Since our auditors require at least one count per year of each subinventory, we just do a location-to-location count. We do not use designated locations for certain part numbers, so a by-part-number count could run a counter all over the warehouse; instead we use a one-part-number-per-location rule. I also extract from the system the “should be empty” locators and do checks on those to find lost parts.
We are in the service and warranty repair business, so a one-cent part could send a $50,000 unit into an awaiting-parts area and increase turnaround time for the repair. Justifying the counting cost is not a problem for us, because it is mandated by accounting, and it also keeps our ordering and forecasting system functioning well.
Hope this helps; it is sort of an oversimplification.
Paul

0
#101597

Sorour
Participant

Part two:
I think I was talking about breaking down the reasons for counting errors as they pertain to the act of counting not the location accuracy.
When you have a defect there are 3 possible numbers at work: the system number, the “actual” number of parts in the location, and the number the counter marks down on the count sheet.  We use a blind count so as not to encourage short cuts.  That said, a second count is taken on the location, and the rule of thumb is that the second count should match either the system or the counter, so you would perceive that you have an accurate count.  The number of second counts should equal the number of defects.  The matching numbers are what I want to quantify, to determine how many times the counter is right and how many times the counter is wrong based on the re-count.  This should tell me what to expect from the work performed, and where to look a little further into so-called perfect counts and really bad counts.
Now one might say that if the re-count or second count produces yet another number – let’s say the system says there should be 100, the first count is 99, the re-count is 101, but the reality is 98 – then we need to look at the counting methodology, to save time and money doing all this stuff.
Counting 1, 2, 3, 4, 5 and 1, 2, 3, 4, 5 isn’t just something you learned in pre-school for fun.
Paul

0
#101598

Dorene
Participant

Thanks, your reply brought a new world of issues to my attention! I never thought about studying the cycle counting process itself – only fixing the problems that were found.
We do use ABC classification. To help us find extra parts, we do blind counts. However, if a part was not entered in the system, then you’re right, it wouldn’t be discovered. It’s something that I need to think about further. We only just started cycle counting and our accuracy is really low – approximately 50%. Therefore, we have many more problems than just the cycle counting process. However, the total net variance hasn’t been too much of a problem (so far), so I’m trying to justify working on this as a Six Sigma project. I tried this though – I took (100% – X% accuracy) * \$total inventory = \$ at risk. My objective is to improve IRA and thereby reduce the \$ at risk.
To make sure I understand correctly, you use 3 for Defect Opportunities/Unit because you’re using Six Sigma to check if the cycle counting process is in control?
Did you ever use Six Sigma to improve Inventory Record Accuracy (IRA)? Your IRA is very high now – did your organization use Six Sigma to get to that percentage? What would your recommendation be on Defect Opportunities/Unit that I should use – 1 (either match system or don’t), X number (possible reasons for an error), or 3 (like how you’re doing it?) My objective is to drive up IRA.
Thank you for your help thus far.

0
#101601

Sorour
Participant

You’re way beyond the average Joe with the sigma stuff; you’re probably working with people who don’t really know much about it, too.
One suggestion I have, with an accuracy rate as low as 50%, is to make sure that the new put-aways are put out in front of your count, and audit the heck out of the system entry and the placement.  This way your numbers will improve because you are counting process-improved inventory (little tricks, sorry).  Try to count a logical area where a whole put-away may have taken place.  That way you may be able to correct by moving a part or two with no system work; cheaters would not count that as a DPMO.  Begin your process improvement at receiving.  Audit, audit, audit the system and the data entry.
Good luck

0
#103274

V. Laxmanan
Member

Dear All:
While I was thinking about the 1.5 sigma shift question posed recently, I found the thread started by LaySim, which poses a very valid question.  How do we account for defects observed and what is Process Sigma?
In this example, there are 10 opportunities to produce a defect in each unit produced. If 10 parts are produced, the total opportunities for producing a defect = 100.
9 parts were defect free and 1 part had 3 defects.
Defects per opportunity (DPO) = 3/100 = 0.03
Yield = 1- DPO = 1- 0.03 = 0.97
The yield of 0.97 means a Process Sigma of 3.38 after adding the 1.5 sigma shift. We use the formula,
Process Sigma = 1.5 + NORMSINV(Yield)         ……….(1)
where yield is expressed as a fraction, not a percent.
An alternative view of the same situation, which LaySim is talking about is the following.
Scrap rate = Defective parts / Total Parts = 1/10 = 0.10
Yield = 1 – Scrap rate = 1 – 0.10 = 0.90
The Process Sigma for 90% yield is lower and equal to 2.78.  What should be used?  This is the question that LaySim is asking.
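The two alternatives can be checked with a short sketch. (Python’s standard-library `NormalDist.inv_cdf` plays the role of Excel’s NORMSINV in equation (1); the 1.5 shift is the conventional assumption.)

```python
from statistics import NormalDist

def process_sigma(yield_fraction, shift=1.5):
    """Equation (1): Process Sigma = shift + NORMSINV(yield)."""
    return NormalDist().inv_cdf(yield_fraction) + shift

# DPO view: 3 defects in 100 opportunities -> yield 0.97
sigma_dpo = process_sigma(1 - 3 / 100)
# Scrap-rate view: 1 defective part in 10 parts -> yield 0.90
sigma_scrap = process_sigma(1 - 1 / 10)

print(round(sigma_dpo, 2))    # 3.38
print(round(sigma_scrap, 2))  # 2.78
```

The same three defects thus give two quite different sigma values depending on which yield definition feeds the formula, which is exactly LaySim’s question.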
Obviously, only 9 parts were shipped to the customer in the example given by LaySim.  The 1 part, which had all 3 defects, was not shipped.  The profits and revenues of LaySim’s company depend on the parts shipped to the customer, not on how we calculate the Process Sigma. If the 3 defects had been distributed differently, we would have different levels of shipments.
1 defect each in parts A, B, C = 70% shipments.
2 defects in part A and 1 defect in part B = 80% shipments.
According to the probability theory we assume in such calculations, all these are equiprobable events. If one part had all 3 defects, something is wrong. Very soon, we might have a whole bunch of equally likely scenarios where shipments would vary from 70%, to 80%, to 90%, for each lot of 10 parts produced.
Hence, even with just 3 defects per 100 opportunities, it is conceivable that the number of parts shipped is as low as 70% of the total parts produced. Ultimately, this is what, in my humble opinion, should drive Six Sigma methodologies, not just the DPMO, yield, and Process Sigma calculations, with and without the 1.5 sigma shift.
In a way, that is also what other recent posters are telling us.  A good example is Newbie’s post.  The number of opportunities to produce a defect is 30 per unit produced.  Newbie finds that he could never exceed 3.90 sigma. Is this a case of Diminishing Returns?
I think we should look at both definitions of yield.  It is not an either-or proposition. Yield defined using the scrap rate affects the company’s bottom line.  Yield defined using the DPO tells us something about the process capability.  If the process must improve, every opportunity where a defect can be produced must be addressed and fixed.  That’s tough, as Newbie tells us. But there is no brick wall or diminishing returns either – only DPO, DPMO, and Process Sigma calculations that lose sight of the company’s profitability.
Now, imagine you are the owner of a small company that has just shipped out 7 out of 10 parts because 3 were defective. You are wondering what went wrong and why 3 were scrapped. Your quality guy comes and tells you that the yield was 97%, since there were 100 opportunities to produce defects in 10 parts and we had only 3 defects per 100 opportunities! I know, if I said that, I would be shown the door and told never to come back.
Don’t think of these parts as trinkets being shipped out at \$5 or \$10 a piece. The ten parts could each be worth \$1000, \$10,000, or \$50,000, or \$1 million.  There are many small businesses shipping small volumes with very high value added.
I remember seeing a show recently (History Channel, I think) about a family-owned small company making high-priced, made-to-order motorcycles. The son was working for his father. The father was “snarling” at the son many times.  The son stayed calm and was focused on what he was doing.  As I watched the show, I was thinking about Six Sigma and how it might apply here.  What are the DPO, DPMO, yield, etc., the long-term and short-term variations?
Then it hit me – the only thing that matters is what happens when the motorcycle is shipped and/or the customer comes to take it out for a test drive!
I have seen many other shows like this.  Another one I remember is a show where the company was making specialized cars for the super rich. The person who had ordered the vehicles, a celebrity, flew in from Paris, just to try it out before taking delivery.  Just imagine what would happen if he did not have a big smile on his face after the test drive and also if he goes around telling all his super rich friends about the lousy job that they had done.
And so, the yield based on the actual scrap rates should also become a part of the whole Six Sigma methodology.  I know there are many experts here, who have been doing Six Sigma for much longer than I have been. I would certainly like to hear from all. Best regards.
Laxman

0
#103279

Taylor
Member

Then it hit me – the only thing that matters is what happens when the motorcycle is shipped and/or the customer comes to take it out for a test drive!. Just imagine what would happen if he did not have a big smile on his face after the test drive and also if he goes around telling all his super rich friends about the lousy job that they had done.
You have, through logical deduction and mathematical analysis, reached the true essence of the Six Sigma contribution.  Bravo.  Nothing beats a smiling test drive.   All of that other stuff, e.g., TRIZ, Lean using value stream mapping, reliability test designs, Eyring models, TPS, etc., etc., etc., that gets bounced around and intertwined in this forum, meshing together developmental and manufacturing concepts trying to show interrelationships, is obviously crap.   Take her out for a test run and if you come back smiling, whatever you did to get there must have been right.  I like it and am OK with it.  Like you said earlier, say it and move on.   I’m also OK with that.

0
#103282

Savage
Participant

Laxman,
Personally speaking, I report yield both ways and have for many years.  I’m sure many others do so as well.
Not to be critical, but sorry, there’s nothing new in what you’ve posted here.

0
#103286

V. Laxmanan
Member

Dear Todd:
Thanks a lot. That makes me smile too!
By the way, here’s the explanation given in Zack Swinney’s article which can be found on this website regarding why a “shift”, whatever its numerical value, is used in the Process Sigma calculation.
Zack says,
After a process has been improved using the Six Sigma DMAIC methodology, we calculate the process standard deviation and sigma value. These are considered to be short-term values because the data only contains common cause variation — DMAIC projects and the associated collection of process data occur over a period of months, rather than years. Long-term data, on the other hand, contains common cause variation and special (or assignable) cause variation. Because short-term data does not contain this special cause variation, it will typically be of a higher process capability than the long-term data. This difference is the 1.5 sigma shift. Given adequate process data, you can determine the factor most appropriate for your process.
The graphs on pages 186 to 190 in The Six Sigma Way Team Field Book by Pande, Neuman, and Cavanagh, show how the mean value of a normal distribution shifts, or keeps changing, with time as we observe the process over a long period of time.
Now, to get back what I started talking about here in this forum.  These variations in the mean value over time can be understood using the idea of a work function.
I have been observing such variations in mean values for the last six years, in the profits and revenues data for hundreds and thousands of corporations reported in leading business magazines.  And, I found a simple explanation, based on the idea of a work function.
The profits-revenues graph is a straight line for Microsoft and for many other corporations (please see my posts; sorry, I don’t remember which one now). The “scatter” is due to the work function, which takes into account many, many factors not included in x, the revenues, which we implicitly assume determines the profits.  What is the correlation coefficient?  Often greater than 95%, even 98% or 99%.  Where there is scatter and a low value for the correlation coefficient (I am giving the R-squared values), the companies have lousy performance.
In other words, like a friend of mine, who is a Professor of Marketing, told me recently, “Laxman, what you are saying is that if a company focuses its energies on increasing its revenues, everything else follows.  We don’t have to worry about all these other measures like assets, shareholder equity, etc. So, it is OK to aggressively go after sales by offering rebates and discounts.”
I said, thanks and yes. That’s what this linear relation means and profits revenues relation is enshrined as a law, Profits = (Revenues – Cost) which is exactly the same as Einstein’s K = (hf – W).   Yes, the Professor of Marketing heard this from me, before I came to the forum.
If we prepare a graph of the mean values of a process and plot it versus time, what would you find?  A big scatter, right?
No. I don’t see it that way.  The scatter is due to the changes in what I have called the work function over time.  That is what one must investigate.
This is a simpler approach that everyone (hopefully the remaining 56%) will be able to follow. The whole idea of approaching problems that we have to address in many walks of life, in the real world, using notions of probability and statistics (originally developed to understand games of chance, or what was going on in the gambling houses of London and Paris) itself must be revisited.
Gambling houses and casinos are different.  The same dice are rolled. The same deck with the same 52 cards is played.  But even here, they pull a fast one on you.  Sometimes the dealer will combine 2, 3, or 4 decks. This is what I learned from a friend who likes to visit casinos and plays blackjack. Equiprobable events probably occur in gambling houses, not in the real world outside these houses.
But, as I have shown in my posts, even a coin tossing experiment can be interpreted using the idea of a work function!
Probability and statistics, and the coin tossing experiment, are the very philosophical underpinning of Six Sigma.  Thus far, nobody has come forward with an alternative to the statistics-based approach.  So the statistics (or the mumbo-jumbo, the term used in at least one article that I found on this website by a statistics expert) continued.
I am not trying to challenge the use of statistics. I am simply suggesting that we can use the far-reaching idea of a work function that Einstein conceived to explain photoelectricity.  In due time, if it replaces statistics, so be it.
There is enough in many posts to date that Six Sigma practitioners, if they are willing, can try and see for themselves how the work function approach may actually make a lot more sense than the use of MINITAB and other statistical tools.  Before you use MINITAB you must understand what the software will do for you.  It is like relying on the calculator without ever learning your multiplication tables.
My older son once asked me, “Dad, what do you have against computers?” I told him, NOTHING.
But, I have a better computer and I like to use it.  Instead of being made up of silicon chips, this one has “chips” made out of carbon, the element right above silicon in the periodic table.
I was talking about the human brain. The body is mostly carbon, hydrogen, and oxygen atoms weaved together by nature.  The human brain is also a computer – a different kind of a computer and a far more powerful one.
Before using MINITAB you must be able to do all the calculations by hand or by writing your programs, using something like Microsoft Excel, which is what I do.  I then check the calculations with the MINITAB output that you will find in many textbooks now.
The main point: we must understand what we are doing using statistics, or whatever new “model” is used to understand the data that we are generating every day. I am proposing a new model – the work function. I am surprised that there are still so many questions.
To me it looks so much simpler and easier to use! I have checked this so many times and reworked problems from statistics textbooks.  I have posted my version of the solutions to at least two problems now.
I am ready for it, if you are – to take this work function far beyond physics, to solve many problems that affect a corporation’s bottom line, our jobs, etc.  I sincerely hope all this actually begins to show some rewards and benefits. Regards.
Laxman

0
#103288

V. Laxmanan
Member

Dear Matt:
Thanks.  I am glad to know that you have been reporting yield both ways. I would have been surprised if solid-thinking people had not already recognized that they should look at yield both ways.
What I am trying to show here is how to integrate all this using the simple idea of a work function. Sometimes stories have to be told to get the message across – especially for newcomers to the field and to the business world in general.  There are many young people here asking questions.  They are probably starting out on their first jobs and training to become Black Belts and trying to implement Six Sigma. So, the stories have to be told. I hope you will agree.
I am also new to this forum. But I have been around – all my jobs were with some of the top and largest organizations – where I have actually helped implement process changes and improvements aimed at producing cost benefits across the board for the company.  Only recently did I begin to see the connection between Six Sigma and Einstein’s work function, which brought me to the forum.  Again, thanks for your comments – especially that there are many who use both definitions of yield. Regards.
Laxman

0
#114024

preethija
Participant

Can you point me to a case study so that I can understand more?

0
#114391

Vikas
Member

I would like to know how to calculate sigma using VBScript in ASP.
Data:
total_defects = 611
total_units = 1081
DPMO = (total_defects / total_units) * 1,000,000 = 565,217.39
Defects (%) = (total_defects / total_units) * 100 = 56.52%
Yield (%) = 100 – Defects (%) = 43.48%
Sigma = ?
Vikas
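A sketch of the missing step (in Python rather than VBScript, which has no built-in inverse normal function; `NormalDist.inv_cdf` is the equivalent of Excel’s NORMSINV, and the conventional 1.5 shift is assumed):

```python
from statistics import NormalDist

total_defects = 611
total_units = 1081

dpmo = total_defects / total_units * 1_000_000    # defects per million opportunities
yield_fraction = 1 - total_defects / total_units  # fraction defect-free, ~0.4348
sigma = NormalDist().inv_cdf(yield_fraction) + 1.5

print(round(dpmo))      # 565217
print(round(sigma, 2))  # 1.34
```

With a yield below 50%, the inverse normal term is negative, so the sigma level comes out below the 1.5 shift itself.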

0
#117422

Anonymous
Participant

Many thanks for the Excel calculations.
Please can someone explain why we have to assume a 1.5 sigma shift?

0
#117544

Sankar
Participant

Hi
Check this link to know the answer.
https://www.isixsigma.com/library/content/c010701a.asp
Thanks
Ganeshps

0
#120577

Participant

Can anybody clarify how the formula =NORMSINV(1-(Total Defects / Total Opportunities))+1.5 works, and how to interpret its result? Does it mean defects per million?
Does “total opportunities” mean the gross output? If that is the case, then why can’t we convert the defects from the actual output into a per-million figure and derive the result?
Friends, kindly clarify.

0
#120580

Participant

Hello Laxman
I need a clarification on the opportunities side. If the final output comes from several processes (say 5 processes), there is a possibility of getting defects in any one of them. If that is the case, then while arriving at the opportunities, can we take the number of processes into account? For example, say 10 outputs are produced through 3 processes, and each output carries 10 defect opportunities per process; why can’t we then consider the total opportunities as 10 (opportunities) * 3 (processes) * 10 (outputs) = 300?
thanks

0
#130076

liane evans
Participant

hi
If I have a sigma of 2, what exactly does this mean? Does it mean that 3.4 defects per million opportunities lie within 2 standard deviations from the mean? I am having trouble interpreting what exactly the sigma value means and how it relates to the normal distribution.

many thanks

0
#130096

jimmie65
Participant

In a nutshell, the sigma value is how many standard deviations fit between the mean and your spec limits.
A 2 sigma process means approximately 97.7% of your units/parts/process meet specifications – a DPMO of about 22,750 (and a long way from 3.4).
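The conversion jimmie65 describes can be sketched numerically. (A minimal illustration; these are one-sided, unshifted tail areas – apply the 1.5 shift first if you want the conventional “process sigma” convention, e.g. 6 sigma shifts to 4.5.)

```python
from statistics import NormalDist

def dpmo_from_sigma(sigma):
    """One-sided normal tail area beyond the spec limit, per million opportunities."""
    return (1 - NormalDist().cdf(sigma)) * 1_000_000

print(round(dpmo_from_sigma(2)))       # 22750 -> ~97.7% within spec
print(round(dpmo_from_sigma(4.5), 1))  # 3.4   -> the famous figure at 6 sigma minus the 1.5 shift
```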

0
#143059

bhavana
Participant

I would like to know more about “Double Sigma Calculations”.
Kind Regards,
Ms.Bhavana H.S.
Officer-CSC,
Bilcare Ltd.
India.

0
#161272

Azim
Participant

Hi,
I am a beginner and wanted to know: if I have a process where 1,850 documents are processed, 35 documents are defects, and each document has 10 parameters to quality check, how do I calculate the sigma value of this process?
Thanks,
Azim
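One way to read Azim’s numbers (a hedged sketch: it assumes each of the 35 defective documents carries exactly one defect, treats the 10 quality-check parameters as the opportunities per document, and applies the conventional 1.5 shift; if documents can have multiple defects, the defect count would change):

```python
from statistics import NormalDist

documents = 1850
defects = 35                 # assumption: one defect per defective document
opportunities_per_unit = 10  # the 10 quality-check parameters

total_opportunities = documents * opportunities_per_unit   # 18,500
dpmo = defects / total_opportunities * 1_000_000
sigma = NormalDist().inv_cdf(1 - defects / total_opportunities) + 1.5

print(round(dpmo))      # 1892
print(round(sigma, 2))  # 4.4
```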

0
#169187

chhabra
Participant

I have a query. There are 10 samples drawn from a batch. The USL is 105% and the LSL is 95%. An individual sample can be in the range of 90–110%. The %RSD is NMT (not more than) 6%. What is the process sigma?

0
Viewing 67 posts - 1 through 67 (of 67 total)

The forum ‘General’ is closed to new topics and replies.