Your English is likely better than my Spanish, so not to worry.
Near the top of the page is a dark blue bar. Modify it so it shows: Search:[Entire site] for [Project Identification]. Then click on Go. What is displayed will provide plenty of learning.
There was a discussion here a while ago that was quite good too. The best projects are tho…[Read more]
As I understand it via a translator, you are looking for the locations of some project lists for your consideration. Is that correct? If so, please narrow the question to your area of interest: health services, manufacturing, teaching, etc.
Please use English, even if your English is not good. English really helps me a lot. I’m not sure how…[Read more]
Lee replied to the topic Need help with a 96% Confidence Interval Question? in the forum General 10 years, 8 months ago
Most textbooks have what you need, filed under “Confidence Intervals”, subcategory “of the mean”.
Your posting does violate a rule: you cannot get more lemon juice from a lemon than what is inside the lemon. (Otherwise known as significant digits.) Your input data is to the nearest tenth, yet you are asking for an output that is 100 times mor…[Read more]
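As a generic illustration of the textbook “confidence interval of the mean” recipe, here is a minimal Python sketch. The sample data are invented, not from the original question, and the normal (z) quantile is used for simplicity; for a small sample, a t quantile would be more appropriate:

```python
import math
from statistics import mean, stdev

# Hypothetical sample data (not from the original post), recorded
# to the nearest tenth to mirror the significant-digits point.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0, 9.7]

n = len(data)
xbar = mean(data)
s = stdev(data)

# Two-sided 96% interval: the standard-normal quantile for
# alpha = 0.04 (i.e., z at 0.98) is about 2.054.
z = 2.054
half_width = z * s / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(round(lo, 2), round(xbar, 2), round(hi, 2))
```

Note that reporting the interval beyond one decimal place would overstate the precision of data measured to the nearest tenth, which is exactly the lemon-juice point above.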
Lee replied to the topic BlackBelt prjct on employee turnover(atrition cntrl) in the forum General 10 years, 9 months ago
Subdivide your turnover into two groups: those who leave within 60 or 90 days and those who leave after that. I found that the bulk of our problem was in the first 60 days, and that caused me to really question the adequacy of the orientation (Was it well planned out? Was it logically arranged? Did it cover the right things? etc.).
I i…[Read more]
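The early-versus-late subdivision described above can be sketched in a few lines of Python. The tenure numbers here are invented purely for illustration, and 60 days is used as the cut point:

```python
# Hypothetical tenure-at-exit data in days (illustrative only).
tenure_days = [12, 35, 48, 55, 61, 75, 120, 200, 365, 400, 30, 18]

# Split leavers at the 60-day mark, as suggested above.
early = [t for t in tenure_days if t <= 60]
later = [t for t in tenure_days if t > 60]

share_early = len(early) / len(tenure_days)
print(f"{share_early:.0%} of leavers exit within 60 days")
```

If the early share dominates, that points the investigation toward orientation and onboarding rather than long-run retention factors.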
My personal suspicion, based on no facts at all, is that when the automated systems were originally installed/created, the sample sizes were small. But as computer speeds increased, the sample sizes were increased while the programming was not updated to use the Std Dev method, possibly because some applications still have small sample sizes. A…[Read more]
The second one is the source I used to get d2 values for large samples. What I’m suggesting is that you contact him, as I did, so you get more direct contact with the person who not only has an Excel pro…[Read more]
and the remainder of the message is …
1. Take a step beyond attribute checks (go/no-go) and look at the reject cartons. You indicate you have a very good supplier, so why not partner with them to solve your specific quality issues? Help them get better; they may not have the training at their facility to know how to make the next st…[Read more]
Forest – thanks for the reply. I have been without internet connection for a couple of days, so I was not ignoring your advice & thoughts.
I did try transforming the data and found the following: the Johnson transformation is the only one that produced a normally distributed data set, but I am very far from convinced that the transform has…[Read more]
A bit rich for my blood at this time, but look at http://www-stat.wharton.upenn.edu/~lzhao/papers/newtest.pdf
I have completed examining the residuals for two of the 100+ brines. There is no autocorrelation above the 5% significance level (I was not taught about this in the BB training I had, but in Minitab it looks like the goal is to stay between the red 5% lines, and it does). Because the brines are not made on any time frequency (i.e., not each Tue, not each 10th of the mo…[Read more]
Look at http://www.stat.unc.edu/teach/rosu/Stat31/E1_104.html perhaps that is what you are looking for. I have not personally used the site.
Thanks for the reply Darth.
As far as the normality assumption goes, I thought I had read at one time that IMR charts were sensitive to that assumption, while XBar-R charts draw upon the Central Limit Theorem to remove that sensitivity to the underlying data distribution, making them more robust and the most often used. I’ll go back an…[Read more]
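The Central Limit Theorem point above can be checked with a quick simulation: draw individual values from a strongly skewed distribution and compare their skewness against the skewness of subgroup means of size 5 (the kind of statistic an XBar-R chart plots). All numbers here are simulated, not from any real chart:

```python
import random
import statistics

random.seed(1)

# Skewed individual values (exponential, theoretical skewness ~2).
individuals = [random.expovariate(1.0) for _ in range(5000)]

# Subgroup means of size 5, as an XBar-R chart would plot.
size = 5
means = [statistics.mean(individuals[i:i + size])
         for i in range(0, len(individuals), size)]

def skew(xs):
    """Sample skewness: mean of standardized cubes."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return statistics.mean([((x - m) / s) ** 3 for x in xs])

print(round(skew(individuals), 2), round(skew(means), 2))
```

The subgroup means come out markedly less skewed than the raw individuals, which is the CLT effect that makes XBar-R charts more forgiving of non-normal data than IMR charts.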
In your reply you wrote of “simple guardbands”. If you asking if there is a natural boundary to the values, the answer to that is no. The measuring scale is from zero through 45 (over that is just recorded as “Over”). The absolute value of the averages is around 20, no average or measure under 5, and the bulk of the values from 15 to around 30.…[Read more]
The limits are being calculated with the standard formulas for an IMR chart, with the range determined from successive times the process is used. (Limits = Average +/- 2.659*average Range).
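The IMR limit formula quoted above (Average +/- 2.659 * average moving range) is easy to sketch in Python. The readings below are made up for illustration only:

```python
from statistics import mean

# Illustrative individual readings (not the poster's data).
readings = [20.1, 19.8, 21.2, 20.5, 19.9, 20.7, 20.3]

# Moving ranges between successive readings, matching "range
# determined from successive times the process is used".
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]

center = mean(readings)
mr_bar = mean(moving_ranges)

# Standard individuals-chart limits: X-bar +/- 2.659 * MR-bar
# (2.659 = 3 / d2, with d2 = 1.128 for subgroups of size 2).
ucl = center + 2.659 * mr_bar
lcl = center - 2.659 * mr_bar
print(round(lcl, 2), round(center, 2), round(ucl, 2))
```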
I had first looked at the residuals a while back, but I have noted that the number of outliers is much smaller than expected (out of 500-700+ readings with not…[Read more]
Thanks for the reply.
The process is fairly complicated (50+ inputs, likely only one or two that are the culprit) so I was exploring the line I presented. Apparently that is a dead end, so I just have to find a different solution.
I now have to look at the efficiency of improving that process vs another process. Thus far the spread of the d…[Read more]
Regarding d2, there is a person on this site going by “Bower Chiel” who provided me with the formula for d2 and a spreadsheet to calculate d2 values. The computations do not have to be re-created from scratch.
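For reference, the textbook definition of d2 is the expected range of n standard-normal draws, d2(n) = integral of 1 − Φ(x)^n − (1 − Φ(x))^n over all x. A plain-Python numeric sketch of that definition (this is my own illustration, not Bower Chiel's spreadsheet) is:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def d2(n: int, lo: float = -8.0, hi: float = 8.0, steps: int = 16000) -> float:
    """Expected range of n standard-normal draws, by trapezoidal
    integration of 1 - Phi(x)**n - (1 - Phi(x))**n."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        f = 1.0 - phi(x) ** n - (1.0 - phi(x)) ** n
        total += f * (0.5 if i in (0, steps) else 1.0)
    return total * h

print(round(d2(2), 3))   # classical tables give about 1.128 for n = 2
```

Because nothing here depends on table lookups, this works for the large subgroup sizes that printed d2 tables usually stop short of.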
Bower, I do not want to plagiarize your fine work or fail to give you credit. Please respond in this thread if you can.
Just my approach, for which I find nothing written:
When I start to process data I first try to determine what the physical process is, because the fundamentals/physics behind that process should reveal what the “real”/accepted variables (x’s) are. In those cases, I am essentially banking my reputation as a process improvement person on t…[Read more]