Confidence Interval vs. Level

Six Sigma – iSixSigma Forums Old Forums General Confidence Interval vs. Level

Viewing 4 posts - 1 through 4 (of 4 total)


    Can someone explain the difference between a confidence interval and a confidence level? I’ve always had trouble with this and I need to get a handle on the concept. Thanks. Ashe



    The question seemed easy to me, but I am stuck now... let’s try this.
    I hope you follow cricket :o). Let’s look at the following discussion.
    I ask you: “How much will Australia score in today’s match?” You would look at Australia’s past scores and calculate the average score.
    Your reply: “Australia will score 250 runs.”
    I would say: “OK, can you put a $100 bet on Australia scoring exactly 250 runs?”
    Well, you would say: “Hey, I am not sure Australia will score exactly 250 runs, but I am 80% sure that Australia will score somewhere between 225 and 275 runs, and I can bet $100 on that.”
    I increase the bet amount to $1,000! You don’t want to lose $1,000, so you widen your interval: calculating that 99% of the time Australia has scored between 210 and 290 runs, you place your bet on that interval.
    So what are we doing? We are widening our confidence interval to get more confidence level.
    The confidence interval is the range (defined by two values, e.g. 210 and 290), and the confidence level is the percentage of readings expected to fall in that interval (e.g. 99%).
    Hope this helped…
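Hemanth’s betting intuition can be checked numerically. The sketch below is a minimal illustration, assuming (hypothetically) that the scores follow a normal distribution with a mean of 250 runs and a standard deviation of 20 runs; it shows that demanding a higher confidence level forces a wider interval:

```python
from statistics import NormalDist

# Hypothetical model: past scores average 250 runs with a
# standard deviation of 20 runs (both numbers assumed for illustration).
scores = NormalDist(mu=250, sigma=20)

for level in (0.80, 0.99):
    tail = (1 - level) / 2            # probability left in each tail
    lower = scores.inv_cdf(tail)      # lower end of the interval
    upper = scores.inv_cdf(1 - tail)  # upper end of the interval
    print(f"{level:.0%} level -> interval from {lower:.0f} to {upper:.0f} runs")
```

Raising the level from 80% to 99% widens the interval, just as in the betting example: more confidence demands a bigger range.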


    Kiran Varri

    I liked Hemanth’s explanation a lot; very easy to understand. But yesterday when I asked for help, someone posted that the difference between the upper and lower confidence limits is the confidence interval, and that what lies outside the confidence interval is the confidence level, hence they are inversely proportional.
    But according to Hemanth’s example they seem to be directly proportional.
    Please help.
    Kiran Varri



    Kiran, good morning!
    The difference between the terms “confidence interval” and “confidence level” can be confusing. It is unfortunate that the two terms sound so similar.
    To keep things straight, try using the term “margin of error” instead of “confidence interval”. The margin of error in a political poll, for instance, is often stated like this: “plus or minus 4%”. In this example, the range of values from “minus 4%” to “plus 4%” is the confidence interval.
    So, to help explain the terminology, let’s use 1) “margin of error” and 2) “confidence level” instead. Here are some explanations and examples of the difference:
    1) Since sampling is only an approximation of reality, every sample will have error. The margin of error is the amount of error that you can tolerate in a sample. In political polling, for instance, it is common to hear a news reporter say something like “Candidate YYY leads our poll of 1,200 possible voters. Candidate YYY was favored by 57% of possible voters. Our poll has a margin of error of plus or minus 4%.”
    This means that the sample is trying to predict what the whole population of voters would actually do. In this case the prediction is that 57% would vote for Candidate YYY. However, since this is just a sample, it has error associated with it: plus or minus 4%. Consequently, the REAL percentage of voters who would vote for Candidate YYY is somewhere between 53% and 61%.
    These kinds of polls ALWAYS have a “confidence level” associated with them as well, even though news reporters almost never tell you what it is. Typically, you can back into it if they tell you how many people they polled; often the “confidence level” on these polls is either 90% or 95%. See item #2 below for what “confidence level” means.
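Under the usual normal approximation for a sample proportion, the margin of error can be computed directly from the sample size and the chosen confidence level. A minimal sketch using the poll figures above (57% of 1,200 voters, and an assumed 95% level, since the post does not state one):

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p: float, n: int, level: float) -> float:
    """Normal-approximation margin of error for a sample proportion."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)  # two-sided critical value
    return z * sqrt(p * (1 - p) / n)

# Figures from the poll example: 57% of 1,200 voters at a 95% level.
moe = margin_of_error(0.57, 1200, 0.95)
print(f"Margin of error: +/- {moe:.1%}")
```

Note that 1,200 voters at a 95% level gives roughly plus or minus 2.8%, not the 4% quoted in the example; a 4% margin would correspond to a smaller sample or a higher confidence level, which is exactly why you can “back into” the level from the published numbers.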
    2) The “confidence level” is the amount of uncertainty you can tolerate. It represents the likelihood that one sample drawn from a population will give the same result as the NEXT sample (or samples) drawn from the same population. The confidence level is usually set in advance, and is typically 90%, 95%, or 99% in American industry (though it can be any number up to 100%).
    Suppose that you draw 20 sample lots from a production line and judge each item in each lot as “good” or “bad”. With a confidence level of 95%, you would expect that for about one of the lots (1 in 20), the sampled percentage of “good” items would NOT be a good predictor of the REAL percentage (the sample percentage would be more than the margin of error away from the true value).
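That 1-in-20 expectation can be illustrated with a quick simulation. This is only a sketch with assumed numbers (a hypothetical true 90% “good” rate and 200 items inspected per lot): each lot’s sample percentage, plus or minus its margin of error, should miss the true percentage roughly 5% of the time at a 95% level.

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)

TRUE_P = 0.90    # assumed true fraction of "good" items
LOT_SIZE = 200   # assumed items inspected per lot
LEVEL = 0.95
z = NormalDist().inv_cdf(1 - (1 - LEVEL) / 2)

n_lots = 2000    # many lots, so the simulated miss rate is stable
misses = 0
for _ in range(n_lots):
    good = sum(random.random() < TRUE_P for _ in range(LOT_SIZE))
    p_hat = good / LOT_SIZE
    moe = z * sqrt(p_hat * (1 - p_hat) / LOT_SIZE)  # lot's margin of error
    if abs(p_hat - TRUE_P) > moe:
        misses += 1

# Roughly 1 lot in 20 (about 5%) should miss the true percentage.
print(f"Lots whose interval missed the true value: {misses / n_lots:.1%}")
```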
    …. does this help? ….
    Best regards,


The forum ‘General’ is closed to new topics and replies.