Hypothesis testing

Six Sigma – iSixSigma Forums Old Forums General Hypothesis testing

Viewing 7 posts - 1 through 7 (of 7 total)
  • #49866


    I have two categories of products, A and B. Category A products have more features than category B products.
    During our design qualification testing, B products go through two tests in sequence, i.e., test X first followed by test Y, whereas A products are qualified directly through test Y.
    My question: we are trying to establish a similar route for B products as well, i.e., subjecting them directly to test Y. In that case, can we compare the failure rates of the two product categories through their respective sequences using hypothesis testing, to conclude that testing through two steps does not surface more failures?
    Is it a prerequisite that the failure rate means follow a normal distribution?
    Appreciate the help in advance.



    In general, many common hypothesis tests assume that your data adhere to the normal distribution (or are reasonably close to normally distributed).  If your data are not normally distributed, there are several options available to you:  transform the data, create a subgrouping strategy that allows you to take advantage of the central limit theorem and then perform tests based on the subgroup averages, or use tests that do not rely on the normal distribution.
    Also, don’t overcomplicate things.  If your testing is non-destructive, all you have to do is show that product that fails test X always fails test Y, and that product that passes test Y never fails test X. I use the terms “always” and “never,” but I should probably say that the difference in results falls within acceptable risk parameters.  You could accomplish something like this by performing an attribute agreement analysis.
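    The kind of agreement check described above can be sketched by cross-tabulating pass/fail results for the same units run through both tests. A minimal illustration (the outcome lists below are made-up data, not from this thread):

    ```python
    # Hypothetical sketch: cross-tabulate pass/fail outcomes for units that
    # went through both test X and test Y (non-destructive testing assumed).
    from collections import Counter

    # 1 = fail, 0 = pass; one entry per unit (invented data for illustration)
    test_x = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
    test_y = [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]

    table = Counter(zip(test_x, test_y))
    agreement = (table[(0, 0)] + table[(1, 1)]) / len(test_x)

    print(f"pass X / pass Y: {table[(0, 0)]}")
    print(f"fail X / pass Y: {table[(1, 0)]}")  # units X rejects that Y would accept
    print(f"pass X / fail Y: {table[(0, 1)]}")  # units Y catches that X missed
    print(f"fail X / fail Y: {table[(1, 1)]}")
    print(f"overall agreement: {agreement:.0%}")
    ```

    The off-diagonal counts are the interesting ones: if “fail X / pass Y” stays within your acceptable risk, test X is not rejecting product that Y would have accepted.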



    Hi Jsev607,
    I am pretty new to the concept of hypothesis testing and just thought it may be useful in the current scenario.
    Just to shed some light on this, my intention is to study the impact if the B products are directly subjected to test Y (like A products) instead of the current sequence of test X followed by test Y. Failures occur during both tests X and Y, so it would not be possible to say “product that passes test Y never fails test X.” I just want to statistically verify whether test X degrades the products in a way that makes them fail test Y.
    Need your feedback, and maybe some links to the new approach you are referring to.



    So, just to be clear: you are saying that you are trying to evaluate whether performing test X results in a degradation of your product’s integrity such that it is no longer capable of passing test Y?
    So if we let:
    FR1= Failure rate at test Y, given product tested with X
    FR2= Failure rate at test Y, given product not tested with X
    Your null hypothesis would then be:
    Ho:  FR1=FR2
    And your alternative hypothesis would then be:
    Ha:  FR1>FR2
    Your next steps are to:

    1) Determine whether sigma is known or unknown (this determines what type of test you perform)
    2) Decide what power (1 − beta) you are looking for (this helps you determine sample size)
    If you perform 100% testing of all product and have the data available to you, then your sigma is known.  If not, then it is unknown.  You can find power and sample-size calculators online that may help you determine a sample size, but I would recommend you get a program such as Minitab or Statgraphics to help you in your analysis.
    In the future, I would leave the “A” products out of your explanation.  Based on your explanation, they don’t appear to be relevant to the test, unless you are trying to use the data from the “A” products as your comparison group (which is not advisable).
    Finally, even if you find a statistically significant difference between your groups, that does not automatically mean you should eliminate test X.  If test X finds failures in your product that test Y doesn’t, and the customer would agree that those failed products are not suitable for use, then you really shouldn’t eliminate the testing (unless product never fails test X).  What you should really be doing in that case is digging into the causes of those failures and reducing or eliminating them.
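    The one-sided comparison of FR1 and FR2 framed above can be sketched as a pooled two-proportion z-test. A minimal version using only the Python standard library (the failure counts below are hypothetical, chosen only to illustrate the calculation):

    ```python
    # Sketch of the one-sided two-proportion z-test for
    # Ho: FR1 = FR2  vs  Ha: FR1 > FR2.  Counts are invented for illustration.
    from math import sqrt
    from statistics import NormalDist

    def two_prop_ztest(fail1, n1, fail2, n2):
        """z statistic and one-sided p-value that group 1's failure rate exceeds group 2's."""
        p1, p2 = fail1 / n1, fail2 / n2
        pooled = (fail1 + fail2) / (n1 + n2)          # pooled failure rate under Ho
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 1 - NormalDist().cdf(z)             # upper tail, since Ha: FR1 > FR2
        return z, p_value

    # Hypothetical data: 18/200 fail Y after going through X; 9/200 fail Y directly
    z, p = two_prop_ztest(18, 200, 9, 200)
    print(f"z = {z:.2f}, one-sided p = {p:.3f}")
    ```

    A small p-value would suggest test X does degrade product with respect to test Y; packages like Minitab run the same test with continuity-correction options.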



    This is a binomial situation: each test either failed or it didn’t, so you have a proportion of failures (a failure rate).
    The 2-proportion test is what you want. Minitab and other packages do this easily and have a sample size calculator so you can figure out how many units from each group are required.
    You don’t need the standard deviation; you know it already once you know the sample size and the proportion.
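    The sample size calculation mentioned above can be sketched with the usual normal-approximation formula for comparing two proportions. The baseline and detectable failure rates below are assumptions for illustration, not figures from the thread:

    ```python
    # Rough sample-size sketch for a one-sided two-proportion test
    # (normal approximation); rates and defaults below are assumed.
    from math import ceil, sqrt
    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        """Approximate units needed per group to detect p1 vs p2, one-sided."""
        z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for alpha
        z_b = NormalDist().inv_cdf(power)       # quantile for desired power
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(num / (p1 - p2) ** 2)

    # e.g. detect a rise from a 5% to a 10% failure rate
    print(n_per_group(0.10, 0.05))
    ```

    Note how quickly the required sample grows as the two failure rates get closer together; that is why small suspected effects make test-elimination studies expensive.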



    Hi Daves,
    Is it necessary to test the same B products through both X and Y (Group 1) and other B products directly through Y (Group 2)?
    Is your recommendation to compare Group 1 vs. Group 2?
    Or is it OK to use Group 1 vs. Group 3 (A products directly through test Y)? That seems similar to comparing cats and dogs, for example.



    The short answer is “How the hell would I know?”
    A brief roadmap for any hypothesis test is:
    1) Frame a practical problem statement
    2) Frame a statistical problem in hypothesis terms
    3) Choose a test statistic
    4) Gather data
    5) Analyze for the statistical significance
    6) Does it answer the practical question?
    I responded to 3) only.
    Your description leads me to believe that you have binomial data that you want to compare between two groups. A proportions test does that.
    YOU DA MAN for 1), 2), and all the rest!
    I can’t tell enough from your post to help any more. It sounds like the practical problem is to reduce testing.
    I don’t have any knowledge of your process or of any external factors that might need to be considered, and I don’t want to say anything more unless you want me to come on as a paid consultant.
    When you decide what groups you want to compare in the statistical problem, the proportions test, I believe, is how to do it.

