iSixSigma

Lee

Forum Replies Created


Viewing 79 posts - 1 through 79 (of 79 total)
  • Author
    Posts
  • #187271

    Lee
    Participant

    Your English is likely better than my Spanish, so not to worry.
    Near the top of the page is a dark blue bar.  Modify it so it shows: Search:[Entire site] for [Project Identification]. Then click on Go.  What is displayed will provide plenty of learning.
    There was a discussion here a while ago that was quite good too.  The best projects are those that are of concern to your boss.  Rather than approaching them and asking "Do you have a project for me?", I usually just say "Where are we losing the most money right now?"  That puts the same question in words they are very familiar with, gets them talking about several areas of concern, and by default gets their endorsement of the area selected for improvement.
    Go to whoever trained you for your GB for ideas too; they likely know of some projects.
    One last comment: the processes here tend to change quite frequently, so I factor that in too.  I had one project where, by the time I found a solution a week later, the shop floor had radically changed their process.  So I was left with a solution but no problem to go with it; that is a failed project.
    Congratulations on your GB!
     

    0
    #187261

    Lee
    Participant

    If I understand the translation correctly, you are looking for lists of potential projects for your consideration.  Is that correct?  If so, please narrow the question to your area of interest: health services, manufacturing, teaching, etc.
    Please use English, even if your English is not good.  English really helps me a lot.  I’m not sure how many on this list are fluent in Spanish.

    0
    #187260

    Lee
    Participant

    1.  I get irritated when people call me Gene instead of my given name, Eugene.
    2.  6.5 yrs in Navy
    3.  I am not Freddy.  There are two different people here.
    4.  Not sure who is the fool here, but I do know who has the facts.
     

    0
    #187194

    Lee
    Participant

    Most textbooks have what you need, filed under "Confidence Intervals", subcategory "of the mean".
    Your posting does violate a rule: you cannot get more lemon juice from a lemon than what is inside the lemon (otherwise known as significant digits).  Your input data is to the nearest tenth, yet you are asking for an output that is 100 times more precise.  That should not be, neither in my shop nor in yours.  Review the significant-digit rules first.
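    To make the textbook recipe concrete, here is a minimal sketch in Python (my tool of choice; any stats package does the same).  The values in readings are hypothetical, recorded to the nearest tenth, so the interval is reported to a matching precision.

        # Hedged sketch: t-based confidence interval for the mean.
        import numpy as np
        from scipy import stats

        readings = np.array([10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 10.2, 10.5])  # hypothetical data

        n = readings.size
        mean = readings.mean()
        sem = stats.sem(readings)  # standard error of the mean, s / sqrt(n)

        # 95% two-sided interval using the t distribution with n-1 degrees of freedom
        lo, hi = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
        print(f"mean = {mean:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")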

    0
    #187074

    Lee
    Participant

    I'm bowing out of this thread.  Little further productivity seems likely.
    I'm disappointed that we did not get beyond stating opinions.  Nevertheless, as the Yanks would say, if you happen to be in my neck of the woods, feel welcome to stop by for a spot of tea.  That offer is good for ALL who contributed to this discussion.

    0
    #187061

    Lee
    Participant

    Good answer.  On the other hand, I have no reason to believe that people do not post under more than one name; i.e., I have no way of knowing whether Stevo and Darth (used as an example only) are one and the same person.

    0
    #187059

    Lee
    Participant

    OK, I hear you saying that I have an uninformed answer.  I proposed one idea for how to test the effectiveness of S&S with a metric.  Please post a better proposed metric so we can get beyond stating opinions; after all, it is not our opinions that matter but what the data shows.  Right?

    0
    #187036

    Lee
    Participant

    OK, I'm up for it: why do we have a forum that has an anonymous nature?  I have seen no reason to use anything other than my real name.

    0
    #187034

    Lee
    Participant

    Agreed on the "…that have a little Socratic flavor" approach.
    A further thought on the S&S approach.  Regardless of the view on how much value it has, I have noted that most people cite instances of its value that involve face-to-face encounters.  S&S likely has less value on a forum because one cannot read the other person's face; just my opinion, based on few facts.

    0
    #187028

    Lee
    Participant

    Hit a wrong key.  As I tried to relate:
    I try never to use S&S because it can have adverse consequences.  Think of it this way: what gets an animal to come out of its cage?  Yelling at the animal (S&S) has a low probability of success.  Using coaching/mentoring/rewards for even coming toward the cage door is effective.
    In situations of grave danger, S&S might be used briefly to gain attention.  After the attention is gained, coaching/mentoring etc. is effective.
    Perhaps someone with access to the right information could shed light on this: if S&S were effective, then a newbie whose first posts to this site draw S&S responses would become a frequent poster.  Look at the nature of the responses to initial posts and tell me whether there is a difference in who keeps posting to the site.  Therein will be a strong clue as to the training effectiveness of S&S.
     

    0
    #187027

    Lee
    Participant

    I try to never use S&S, for this reason:
    What

    0
    #186842

    Lee
    Participant

    Subdivide your turnover into two groups: those who leave within 60 or 90 days and those who leave after that.  I found that the bulk of our problem was in the first 60 days, and that caused me to really question the adequacy of the orientation (Was it well planned out?  Was it logically arranged?  Did it cover the right things?  etc.).
    I interviewed the presenters (we had different presenters for each subject area) and asked each of them to provide six questions that could be used to assess whether the participants had learned the most important things from their presentation.  I then formed a post-orientation quiz to see if the new employees had learned that material.  The central idea is this: if the student has not learned, then the teacher has not taught.  We modified content as needed.
    Second, all material was reviewed by all presenters, and overlaps and inconsistencies were eliminated.  Central idea: we should not confuse new employees with inconsistent statements.
    Third, all presentation materials were reviewed and upgraded.  Several modules had far too much technical content for the first day, and many had boring slides and lacked any hint of enthusiasm.
    We did other things too, but the above is perhaps enough to get you started.  The largest gain came when we looked at the time until people left; that drove the area we selected to upgrade.
     

    0
    #186782

    Lee
    Participant

    My personal suspicion, based on no facts at all, is that when the automated systems were originally installed/created, the sample sizes were small.  As computer speeds increased, the sample sizes were increased, but the programming was not updated to use the standard-deviation method, possibly because some applications still have small sample sizes.  Another thought is that it was easier to tell the programmer how to use the range and d2 than to explain the standard-deviation calculations, so the range is still used.  Like I said, those thoughts are based on zero facts.
    We need to hear more from those who are using the tables as to why they need d2 for large sample sizes.

    0
    #186760

    Lee
    Participant

    Two comments:

    1.  For values, see https://www.isixsigma.com/tools-templates/control-charts/control-chart-table-of-constants/

    The second one is the source I used to get d2 values for large samples.  What I'm suggesting is that you contact him, as I did, so you have direct contact with the person who not only has an Excel program to get the numbers but also has copies of background papers that explain the d2 value.

    2.  The request for d2 values for large samples is cropping up more often.  We need to understand why that is, just as we would look into other seemingly unusual requests.  My interest was more academic in nature, so that does not count for much.  However, I understand that with the advent of mass production, automated data collection has made larger sample sizes very low cost.  Is that what is happening in each case?  Posters requesting d2 values for n>25, please respond; indeed, why not hear from anyone using d2 values for n>10?

    Technology may be driving us toward re-thinking what a typical sample size is.

    Have a statistically good day.
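    For anyone who needs d2 beyond the published tables, here is a minimal simulation sketch in Python (my choice of tool; the Excel program mentioned above is the more polished route).  It relies only on the definition of d2 as the expected range of n independent standard normal observations.

        # Hedged sketch: estimate the control-chart constant d2 by simulation.
        # d2(n) is the expected range of n independent standard normal values;
        # published tables stop around n = 25, but the definition holds for any n.
        import numpy as np

        def estimate_d2(n, reps=100_000, seed=1):
            rng = np.random.default_rng(seed)
            samples = rng.standard_normal((reps, n))
            ranges = samples.max(axis=1) - samples.min(axis=1)
            return ranges.mean()

        for n in (2, 5, 10, 50, 100):
            print(n, round(estimate_d2(n), 3))
        # For n = 2 and n = 5 this should land near the tabled 1.128 and 2.326.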

    0
    #186755

    Lee
    Participant

    I can help with d2, and I know who might be able to help with other constants for large sample sizes.  E-mail me directly at Eugene_Lanning at Cargill.dot com

    0
    #186647

    Lee
    Participant

    and the remainder of the message is …
    1.  Take a step beyond doing attribute checks (go/no-go) and look at the rejected cartons.  You indicate you have a very good supplier, so why not partner with them to solve your specific quality issues?  Help them be better; they may not have the training at their facility to know how to make the next step.
    2.  You said their turnaround is quite rapid.  Is the printing issue related to cartons being stacked so quickly that the ink is not dry?  Different inks dry at different rates depending on the thickness applied and the paper stock.
    3.  Remember that you are the customer here, and "The customer is always …", but balance that with demands that are realistic for your supplier.  Have you had a sit-down with them yet?

    0
    #186646

    Lee
    Participant

    Just a couple of thoughts –
     

    0
    #186444

    Lee
    Participant

    Forest, thanks for the reply.  I have been without an internet connection for a couple of days, so I was not ignoring your advice and thoughts.
    I did try transforming the data and found the following: the Johnson transformation is the only one that produced a normally distributed data set, but I am far from convinced that the transform has anything to do with the actual process.  Nevertheless, when the transform was applied, both the UCL and the LCL shifted by only about 10% of their values, and the difference between the lines was essentially the same.  So, independent of the side one takes on transforms, for this application there was not much effect.
    My current thoughts center around common cause variability, as you suggested.
     

    0
    #186343

    Lee
    Participant

    A bit rich for my blood at this time, but look at http://www-stat.wharton.upenn.edu/~lzhao/papers/newtest.pdf
     

    0
    #186326

    Lee
    Participant

    I completed examining the residuals for two of the 100+ brines.  There is no autocorrelation above the 5% significance level (I was not taught about this in my BB training, but in Minitab it looks like the goal is to stay between the red 5% lines, and it does).  Because the brines are not made on any fixed time frequency (i.e., not every Tuesday, not every 10th of the month, not every 5 hours, etc.), I do not really expect that any of the remaining brines has autocorrelation either.
    Thanks for the suggestion, though; I should be checking for it in other applications too.
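    For anyone without Minitab, here is a minimal sketch of the same check in Python (my choice of tool).  The residuals variable is a stand-in for the residuals of one brine; the confidence bounds play the role of Minitab's red 5% lines.

        # Hedged sketch: check residuals for autocorrelation against ~5% significance bounds.
        import numpy as np
        from statsmodels.tsa.stattools import acf

        rng = np.random.default_rng(0)
        residuals = rng.normal(size=60)  # stand-in data; use your own residuals here

        acf_vals, confint = acf(residuals, nlags=15, alpha=0.05, fft=True)
        for lag in range(1, len(acf_vals)):
            lo, hi = confint[lag] - acf_vals[lag]  # bounds re-centered on zero
            flag = "OUT" if not (lo <= acf_vals[lag] <= hi) else ""
            print(f"lag {lag:2d}: r = {acf_vals[lag]: .3f} {flag}")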

    0
    #186322

    Lee
    Participant

    Look at http://www.stat.unc.edu/teach/rosu/Stat31/E1_104.html and see whether that is what you are looking for.  I have not personally used the site.

    0
    #186307

    Lee
    Participant

    Could be, I have not done that to confirm.   Thanks.

    0
    #186306

    Lee
    Participant

    Thanks for the reply Darth.
    As far as the normality assumption goes, I thought I had read at one time that I-MR charts are sensitive to that assumption, but that Xbar-R charts draw on the Central Limit Theorem to remove that sensitivity to the underlying data distribution, which is why they are more robust and most often used.  I'll go back and review what I thought I knew.
    As far as "setting the control limits": I agree 100% that the calculations are what they are.  That being said, my phraseology was misleading; what I meant is that I do the calculations and control the value (from the calcs) provided to the production floor, so from the operators' view I "set" the number.
    Now, for the use of an I-MR chart: there is no rational subgrouping that I can justify.  There are about 100 different solutions (generically we refer to them as "brines") made here, all in a batch process.  There is no time pattern to when any specific brine is made; some are made once a month or so, some less frequently, some as often as several times a week.  Because of the batch process, the non-predictability of when any specific brine is made, and the fact that a specific brine can be made on either shift, I did not perceive any rational subgrouping that would apply.  Hence (am I on track still?), I did not see the applicability of the Xbar-R chart for this situation.
    Now, as to what is being measured: the pH of the solution and the Brix (%) of the solution.  The Brix (%) is a measure of the equivalent dissolved sugar level.  The meter has been calibrated, operators trained, etc.  It is a linear scale that runs from 0 through 45% for 0 through 45% dissolved sugar.
    I'll check out the thread you mentioned; perhaps that will also provide some clues.  In an earlier study of the Brix (%) data my notes (which I had almost forgotten) read:
    The residuals were input into Minitab 15.1.0.0 for evaluation.  The shape of the distribution was compared to that of the following standard distributions: Normal, 3-Parameter Lognormal, 2-Parameter Exponential, 3-Parameter Weibull, Smallest Extreme Value, Largest Extreme Value, 3-Parameter Gamma, Logistic, and 3-Parameter Loglogistic.  It is noted that several of these fits are not possible because of the negative values inherent in residuals.
     

    Additionally, a Johnson transformation was requested that would yield a normal distribution for the residuals, so that possible transforms for the data prior to evaluation by Helix might be evaluated.  The automated Box-Cox transformation is also not possible because of the negative values inherent in residuals, although a series of manual transformations is feasible.
    No Johnson transformation with p>0.1, or other data-shape curve with p>0.05, was found, so adjusting the limit lines to produce the classical probability lines is not possible.  Manual Box-Cox transformations were made at exponents of -2, -1.5, -1, -0.5, 0.5, 1.5, 2, and 0.37.  The p value was at its maximum at 0.37, but was only 0.007, still under the traditional 0.05 that would indicate normality.
     
    Thanks for your time … I think I will gather a new set of data to examine and try looking at it with a set of "new eyes".  My suspicion now is that there is a process variable at play that I do not know about.

    0
    #186285

    Lee
    Participant

    In your reply you wrote of "simple guardbands".  If you are asking whether there is a natural boundary to the values, the answer is no.  The measuring scale is from zero through 45 (anything over that is just recorded as "Over").  The absolute value of the averages is around 20, with no average or measurement under 5, and the bulk of the values from 15 to around 30.  There is no nearby constraint or limit on the values recorded.

    0
    #186284

    Lee
    Participant

    The limits are being calculated with the standard formulas for an I-MR chart, with the range determined from successive times the process is used (limits = average +/- 2.659 * average moving range).
    I had first looked at the residuals a while back, but I noted that the number of outliers is much smaller than expected (out of 500-700+ readings, not one outlier recorded).  My concern is that the limits are set too loose.  Although I could not locate materials that spoke of adjusting the UCL and LCL, it might have been that more experienced persons would know of such a practice (I take it as confirmation that the reason I could not find anything is that the concept is not the best one).  It appears that some variable is not controlled and causes the non-normal distribution, as suggested by HBGB.  I have considered that when an outlier occurs the logs are being falsified to a value that is in the acceptance range, but that is a painful idea to entertain except as a last resort, and it would make that practice the norm across 8 operators (unlikely).
    Thanks for your time and thoughts …
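    For reference, a minimal sketch of the I-MR limit calculation described above, in Python (the data are placeholders standing in for successive readings of one brine):

        # Hedged sketch of individuals-chart (I-MR) limits:
        # limits = mean +/- 2.659 * (average moving range), where 2.659 is about 3/d2 for n = 2.
        import numpy as np

        def imr_limits(x):
            x = np.asarray(x, dtype=float)
            mr_bar = np.abs(np.diff(x)).mean()  # average moving range of successive points
            center = x.mean()
            return center - 2.659 * mr_bar, center, center + 2.659 * mr_bar

        readings = [20.1, 19.8, 20.4, 20.0, 19.9, 20.3, 20.2]  # placeholder Brix(%) values
        lcl, center, ucl = imr_limits(readings)
        print(f"LCL = {lcl:.2f}, center = {center:.2f}, UCL = {ucl:.2f}")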

    0
    #186283

    Lee
    Participant

    Thanks for the reply.
    The process is fairly complicated (50+ inputs, likely only one or two of which are the culprits), so I was exploring the line I presented.  Apparently that is a dead end, so I just have to find a different solution.
    I now have to look at the efficiency of improving that process versus another process.  Thus far the spread of the data has not been correlated with a significant change in the as-shipped product (after more process steps).
    Thanks, again, for your input.

    0
    #186174

    Lee
    Participant

    Regarding d2: there is a person on this site going by "Bower Chiel" who provided me with the formula for d2 and a spreadsheet to calculate d2 values.  The computations do not have to be re-created from scratch.
    Bower, I do not want to plagiarize your fine work or fail to give you credit.  Please respond to this thread if you can.
     
     

    0
    #186062

    Lee
    Participant

    Just my approach, for which I find nothing written:
    When I start to process data, I first try to determine what the physical process is, because the fundamentals/physics behind that process should reveal what the "real"/accepted variables (x's) are.  In those cases, I am essentially banking my reputation as a process-improvement person on the work of others who are more knowledgeable than I am.  In essence I am taking the approach that I am very unlikely to be wrong about the regression equation.  Now, if the p values are >0.05 or the r-squared value is not >0.9 to 0.95, then I look for an additional variable in the data (such as shift-to-shift differences, measurement accuracy, or variables that are not well controlled) and improve on those.
    When I can find little on the process fundamentals/physics and have to branch out on my own (quite often), I will accept p values up to around 0.2 and r-squared values over 0.8.  If the process involves a lot of people-determined x's, I accept p values of around 0.5 (in a lot of bio-med and social-services work they are doing quite well if they get r-squared > 0.5).
    The r-squared and p values I accept are largely determined by my guide: if I am wrong about the x's, then the predictive capability is very poor.  Poor predictive values mean that I will lose face; i.e., the number of times I'm called upon to solve problems will drop.  To advance the processes here I need to have a high batting average.
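    To put those thresholds next to actual output, here is a minimal regression sketch in Python with statsmodels (variable names and data are made up):

        # Hedged sketch: fit a regression and read off the p-values and R-squared
        # that the acceptance guides above refer to.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        x1 = rng.uniform(0, 10, 80)
        x2 = rng.uniform(0, 5, 80)
        y = 3.0 + 1.2 * x1 + 0.4 * x2 + rng.normal(0, 1.0, 80)  # synthetic response

        X = sm.add_constant(np.column_stack([x1, x2]))
        fit = sm.OLS(y, X).fit()

        print(fit.rsquared)   # compare against the 0.8 / 0.9-0.95 guides above
        print(fit.pvalues)    # compare each term against the chosen p-value cutoff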
     

    0
    #186011

    Lee
    Participant

    You wrote, "It either means that there was a big coincidence or the measurement device may not have good enough resolution."  Other possibilities are that someone improved the process, or that persons are rounding values prior to making the log entry.

    0
    #186000

    Lee
    Participant

    Let the experts weigh in too, but here is my 3 cents' worth (2 cents, adjusted for inflation):
    SPC charts are not designed to detect only unusually poor situations; rather, they detect unusual deviations.  Example: the weight per specs is 10 grams, and the SPC chart has UCL = 12 grams and LCL = 11 grams.  A reading of 10.5 grams might save you money, and thus be "good", but it is nevertheless a deviation from the norm and is thus flagged by the SPC chart.
    Regarding the range: the LCL is impacted by the value of D3, and D3 is in fact 0 for sample sizes of 6 and less.  For larger samples, the probability of a range of 0 is increasingly small (the likelihood of all sampled units being identical decreases).  Like the average-weight example cited above, a range of 0 would be unusual and thus flagged by the SPC chart, even though it might be desirable.
    If a range of zero is in fact found, one might look at the measurement accuracy of your instrument and see whether it is too coarse for the anticipated range, thus making it appear as though the range is zero.
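    For concreteness, a minimal sketch of the range-chart limits in Python.  The D3/D4 values below are what I recall from standard SPC constants tables; verify them against your own reference before relying on them.

        # Hedged sketch: range-chart limits, LCL = D3 * Rbar, UCL = D4 * Rbar.
        # Constants assumed from standard tables (check against your reference).
        D3 = {2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0.076, 8: 0.136}
        D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114, 6: 2.004, 7: 1.924, 8: 1.864}

        def r_chart_limits(r_bar, n):
            return D3[n] * r_bar, D4[n] * r_bar

        r_bar = 0.37  # hypothetical average range
        for n in (5, 7):
            lcl, ucl = r_chart_limits(r_bar, n)
            print(f"n={n}: LCL={lcl:.3f}, UCL={ucl:.3f}")  # LCL stays 0 until n >= 7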

    0
    #185950

    Lee
    Participant

    I found the technique for what I wanted.  Each unit has a different variable name for the load, and each unit for which the load is not applicable has that load set to zero.  That forces the coefficient on Time to be the same for all units, and each unit then gets a different coefficient on Load.
    Thanks for the new perspective that started breaking up my mindset.
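    A minimal sketch of that set-up in Python (names and data are made up): one Load column per unit, zeroed out for rows belonging to the other unit, so Time^2 gets a common coefficient while each unit gets its own Load coefficient.

        # Hedged sketch of the per-unit Load columns described above.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 60
        unit = rng.integers(1, 3, n)               # unit 1 or 2
        time = rng.uniform(0, 10, n)
        load = rng.uniform(0, 100, n)

        load_u1 = np.where(unit == 1, load, 0.0)   # Load_in_Unit1
        load_u2 = np.where(unit == 2, load, 0.0)   # Load_in_Unit2
        temp = 20 + 0.5 * time**2 + 0.03 * load_u1 + 0.07 * load_u2 + rng.normal(0, 1, n)

        X = sm.add_constant(np.column_stack([unit, time**2, load_u1, load_u2]))
        fit = sm.OLS(temp, X).fit()
        print(fit.params)   # one Time^2 coefficient, separate Load coefficients per unit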

    0
    #185948

    Lee
    Participant

    Let me try again, sans fat fingers ;)
    Thanks for the reply.  In the stepwise regression (or regular), Minitab is not producing a warning about the squared time, but it is nice to know how to sidestep that issue via normalization.
    Now, is it possible to have the coefficients on Load be different?  I.e., what I am getting is
    Temp = c0 + c1*Unit + c2*Time^2 + c3*Load
    What I want is
    Temp = c0 + c1*Unit + c2*Time^2 + c3*Load_in_Unit1 + c4*Load_in_Unit2
    In other words, I want to see the differing slopes on Load for the units rather than having them lumped together.  I do want the coefficient on Time to be the same, though.
    Thanks for the additional time.
     
    Eugene

    0
    #185947

    Lee
    Participant

    Thanks for the reply.  In the stepwise regression (or regular), Minitab is not producing a warning about the squared time, but it is nice to know how to sidestep that issue via normalization.
    Now, is it possible to have the coefficient on Load be different, i.e., what I am getting is
     

    0
    #185915

    Lee
    Participant

    I presume you mean Juran’s Quality Handbook rather than Juran’s Quality Control Handbook.  Correct?

    0
    #185906

    Lee
    Participant

    Does anyone know of a way to conduct a poll on this site, even if via another party?
     
    We have quite a few suggestions thus far, and the time is approaching to conduct a survey based on the nominations to help ferret out the top 5.  The survey should probably account for whether a respondent has not used or has never heard of a listed book, and for the experience level of the respondent.  I'd just as soon see this as a project involving more than just myself, so ideas are solicited.

    0
    #185905

    Lee
    Participant

    Your point that "…reference books are generally dry material that may not instill the passion …" is well taken.  Perhaps that is Phase II of this effort.
     
    Thanks for contributing to the list.

    0
    #185880

    Lee
    Participant

    Thanks for the reply, Robert.  I agree that a good part of our job is to be proficient in statistics, and smart enough to REALLY simplify it for the non-SS types we interact with.
    Regarding books at home: I keep what ends up being most of my books at home due to limited work space and a culture here that does not encourage looking things up in books at the office; one is expected to just know it (so I do most reading/learning at home to fit in better).
     

    0
    #185878

    Lee
    Participant

    Pardon my lack of rigor.  Yes, more precisely:
               the most-used books are a function of a person's job field, certification level, experience level (measured as years or by projects completed?), current type of projects worked, source of initial training (the books one is initially exposed to are likely to be the ones most referenced later), personal preference, what country one practices in, the number of responses in each of the preceding variables, etc.
    Rather than get into a long study that would cost more than the value obtained, I figure it is more efficient to ask for the books that are most recommended, imprecise as that is.  Most likely the recommended books will be nominated by the more seasoned practitioners who have some basis for their recommendation.  Even an imprecise list is better than no list at all.
     

    0
    #185875

    Lee
    Participant

    Looks like we have a diversity of opinions (no surprise there).  Is there a way on this forum to list, say, 10 books and then run a poll on the top 5?
    My vision of the end point is that newer persons could start with a library of those good books, and that when responses are posted, references would be to those books whenever possible.  That way responses could be understood more quickly (no delay while yet another book is ordered).  I understand that focusing on 5 books would not cover all cases, but it seems it would be better than the current way of doing things.

    0
    #185870

    Lee
    Participant

    FWIW, I keep Implementing Six Sigma (Breyfogle) here at work, but the others at home (the other books are ones I paid for).

    0
    #185848

    Lee
    Participant

    I would guess so, but they would be Little, therefore they readily escape notice.

    0
    #185786

    Lee
    Participant

    A normally distributed data set will not necessarily produce a nice straight line.  In the past, someone on this forum suggested that I generate "a bunch" of data sets that are normally distributed, each with about the number of data points I expected for the application I was tackling, and then let Excel plot each of them.  Doing that gives a feel for how much deviation you should expect from a normally distributed data set.
    I did it and found it very instructive.
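    If you prefer Python to Excel, a minimal sketch of the same exercise (the sample size of 30 is just a stand-in for whatever your application uses):

        # Hedged sketch: several normal samples on normal probability plots,
        # to calibrate your eye for how "wiggly" truly normal data can look.
        import numpy as np
        from scipy import stats
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(4)
        n_points = 30  # roughly the number of points in your application

        fig, axes = plt.subplots(2, 3, figsize=(10, 6))
        for ax in axes.flat:
            sample = rng.standard_normal(n_points)
            stats.probplot(sample, dist="norm", plot=ax)
            ax.set_title("")
        plt.tight_layout()
        plt.show()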

    0
    #185640

    Lee
    Participant

    I could not find the data either, at least not by the time I got tired of searching.  I assume the data was removed in some update of their web site.
    I considered just getting a large bag and giving it to my grandsons to sort by color.  Certainly that is sampling, but a sample of one large bag is better than ignorance (especially when we think we know the colors are not evenly distributed).

    0
    #185602

    Lee
    Participant

    I’d like a copy of the spreadsheet too.  ealanni(at)windstream(dot)net.
    Now a question: to keep life simple on the production floor, I almost always (40 or 50 instances) use a sample size of 5; in one or two instances, a sample size of 10.  Because of the desire to keep the sample size small enough to keep things simple, I do not need d2 values beyond what any readily available source provides.  So, my question is this: what causes the need for sample sizes of >100 and the abandonment of simplicity/speed of analysis on the plant floor?
     

    0
    #185576

    Lee
    Participant

    You said "… or changing our supplier to someone who will give us 10 free harnesses for every one we receive defective."  Once I had a similar dilemma, so in contract negotiations we got the manufacturer boasting about how defect-free his product was.  Then we reinforced their stance by suggesting they offer to increase their warranty to 5x the product price, because it would make us feel more comfortable about our risk yet have no financial impact on them, given their superior product.  That got some realistic discussion of defect probability going (the sales rhetoric was dropped), and we ended up with a 1.25x multiplier.
    Look to see whether the defective cables have a warranty of any sort, express or implied, and then make claims on that warranty.  That will likely spur a more realistic discussion of the defect rate.

    0
    #185575

    Lee
    Participant

    Thanks, that also explains why the measured pH for some mixtures has a standard deviation that is a lot wider than for other mixtures.

    0
    #185544

    Lee
    Participant

    Thanks, I'll seek out a chemist.  I had thought that for a given mixture the distribution would depend more on the pH meter's response characteristics and the stability of the mixing process, and I need to somehow recognize that the meter readings are on a log scale while the process variations are (presumably) on something other than a log scale.

    0
    #185479

    Lee
    Participant

    1.  Good basic approach you use: normality is checked before generating non-meaningful statistics.
    2.  I do not consider myself adequate to respond to the Johnson-transformation aspect of your query, but I can respond this way: it is likely that you can find some sort of standard transformation that makes the data look normal, but the burning question is what is inherent in the process that would cause the data to fit the transformation selected.  In other words, if the data are not normal and a log transformation makes them normal, but there is no process fundamental for why the log should be present, then beware that the transformation is likely not valid.  Just a self-imposed rule I use, and it forces me to think about the process and the measurements.
    3.  You mention outliers … have you considered recording errors and special causes?  Perhaps there is another independent variable manifesting itself.
    4.  I presume you have weighed the costs/time in getting a larger data set.

    0
    #185439

    Lee
    Participant

    Thanks for the reply; I was thinking there was another whole system (other than regression) that I could not recall.
    In my case the uncertainty in the recorded temperature is about the same as the uncertainty that the variation in time would cause, so I wanted to be a tad more cautious in the analysis before I went on.  I have a book at home that will help, so I'll look more at regression with errors in both variables tonight.
    Thanks again.
    Eugene

    0
    #185418

    Lee
    Participant

    Sorry, I missed your extensive analysis of this site's use.  Where is it posted?  I did not say a discussion of Gemba was not relevant; what I did suggest was that not all postings in a thread are related to Six Sigma.  Simply counting postings is not a good indicator of the apparent intended use of a site.
     

    0
    #185412

    Lee
    Participant

    Traffic count is a lousy indicator to use.  If you must use a traffic count, then at least remove the threads, like this one, that are 75+% non-Six Sigma related.

    0
    #184402

    Lee
    Participant

    Just a thought: are your devices subject to fatigue?  By that I mean, if device one is tested at load level x and cycled nn times, do we know whether that test will influence the results of a second test of the same device at load x' cycled nn times?  The classic example is the bending of metal, where prior testing will influence the results of successive testing.
    Also, was your planned test structured to reasonably replicate the user environment?  Does it consider the VOC?
    Other considerations, although not a comprehensive list: a) Test one of each unit ahead of the planned test to get a feel for where the unit breaks, as that helps ensure that testing will indeed go to failure.  b) Do you have persons who can do a comprehensive failure analysis (where exactly did it break, and what can be done to fix that) and re-engineering work?  c) Although I do not have all of my resources on hand, be aware that not all units of device A will fail at the same load, so do a bit of reading on binary logistic regression and how to handle your data; I seem to recall a good article on this site about that too.
    Just my initial thoughts …

    0
    #184172

    Lee
    Participant

    I agree that averaging is not appropriate, but that is what I found they are doing.  I give them credit for at least trying to make sense of the information.  I'm trying to bring a bit more sophistication to the data review and interpretation.
    I have started to challenge some concepts, like "If we can figure out how to get all 3s to be 4s…".  To me that is nonsense, because some people had mini-debates going on in their heads about whether to mark 3 or 4, so even if you tried to get all 3s to be 4s, some 4s would slip to 3s.
    I intend to get the raw results and then, by category and as a whole, apply a chi-squared test to see whether, at say an 80% confidence level, the results are the same or not.  Does that seem reasonable to you?
    Thanks for the reply…
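    For what it is worth, a minimal sketch of that comparison in Python (the response counts are made up for illustration; an 80% confidence level corresponds to alpha = 0.20):

        # Hedged sketch: chi-squared test of homogeneity on raw survey counts.
        import numpy as np
        from scipy.stats import chi2_contingency

        counts = np.array([
            [2, 5, 14, 20, 9],    # session A: number of 1s, 2s, 3s, 4s, 5s
            [1, 4, 10, 24, 11],   # session B
        ])

        chi2, p, dof, expected = chi2_contingency(counts)
        alpha = 0.20  # 80% confidence level
        print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
        print("results differ" if p < alpha else "no evidence of a difference")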

    0
    #184167

    Lee
    Participant

    I had forgotten about the chi-squared test; it seems simple and applicable.

    0
    #183796

    Lee
    Participant

    Thanks for the replies. 

    0
    #170953

    Lee
    Participant

    The key concept at play here, to my knowledge, is that the Xbar-R chart has the Central Limit Theorem embedded in its methodology.  When we plot the average of the sampled weights (typically 5 packages/units/locations), or thicknesses, or whatever, the Central Limit Theorem is at work.
    If you are plotting individuals, the Central Limit Theorem is not used and is not applicable, so the underlying distribution becomes a significant concern when the chart is created.

    0
    #169311

    Lee
    Participant

    Late, but here.  Please provide some samples for consideration in the training I conduct.
    [email protected]

    0
    #169309

    Lee
    Participant

    Please send them to [email protected]

    0
    #169307

    Lee
    Participant

    Please send the ppt.  Thanks in advance.
    [email protected]

    0
    #167054

    Lee
    Participant

    Thanks for the name to search on; that should help in looking at the possibility of two-variable control charts.
    You are right about the operation.  Meat is being sealed in a pouch composed of two pieces of plastic.
    I agree that the seal is a quality issue, but many here only see it as an operational issue (aside from the hidden factor of re-sealing the meat).
    Right now I'm just getting the basics of the process.  It could be that time and temperature are not KIVs.  I removed the variables of plastic type, machine number, machine manufacturer, meat type, die size, etc. when I collected the 50 data pairs.  I have yet to investigate the operator (shift) as a potential uncontrolled variable.
    Currently some people test the package seal qualitatively, as not everyone believes that the vacuum testing is adequate (we do have the tester, though).  I looked into a pull-test machine a bit and found that they are designed for lab use, not the production-floor environment; the cost was certainly in line with lab equipment, space is an issue, and training would have been an issue.

    0
    #161875

    Lee
    Participant

    To separate it out in my mind, I go back to a study I read about once (I do not recall where anymore).  The study was of the birth rate in Chicago, many years ago.  They studied all sorts of factors, and even took into account the 9-month lag time.  What they found was that the birth rate was highly correlated with the thickness of the ice on the Great Lakes.  Now we all know (I hope) that ice does not *cause* babies.  The causal chain was that as the lakes froze over, shipping dropped off, more men were home, …
    I just thought I'd share the memory aid I use to remind myself to be very cautious with correlation results, especially if one does not understand the basic principles behind what is being studied.

    0
    #155267

    Lee
    Participant

    Thanks for taking the time to put together a reply.
    Regarding your first comment: I have come to the conclusion that the VOC is really a fairly complex idea.  Thus, when I hear a simplistic answer, or a "whatever" type of reply, I conclude that I'm not talking to the critical customer; there is likely another one out there someplace who does have more expectations.
    Regarding your second set of comments: I'm familiar with the Kano concept, but have not used QFD as much as I should.  I used a simplified QFD on a past project, but not on this one.  Unraveling what will delight a customer is not a simple task and will require insight/innovation.
    Please do not think that the delay in this response is a reflection of not caring; quite the contrary.  I had a deadline for getting some new procedures out for review that had to be done too.
    Eugene
     

    0
    #155266

    Lee
    Participant

    Thanks for the feedback.  You inferred a conclusion that I was approaching: the VOC is sometimes straight up (easily understood and complete), but when it is not, the BB needs to develop a skill in asking questions that draws out the kind of VOC that is actionable (that we can act on/respond to).
    Thanks again,
    Eugene

    0
    #155199

    Lee
    Participant

    Can someone comment on two customer concepts that are evolving in my head?
    First, the customer that sets the expectations is not necessarily the immediate recipient of the process output.  Example: at this facility we have a product that is cut, and often the cuts are incorrect (rather than being the right weight, two pieces need to be put together to make the weight).  After the cutting, the product is coated, and that coating is done on a weight basis only.  After it is coated, the next step is packaging, but extra labor is expended when two pieces have to be used to make the package weight.  When I looked at the process, the cutter only optimized the machine for his own ease, as the weight produced was the same whether each piece was the right weight or was comprised of two pieces (his goal was to produce 10,000 lbs today).  The recipient of the cutting (the coater) did not have an expectation on the individual piece weight.  In this case I saw the customer as "once removed" in the stream, as the coater effectively muted the packager's expectations for an efficient system, which led to no change in the cutting process (until I looked at it and said it was nonsense).  So … have others seen instances of what I loosely call a "once removed" customer?  If a process is defined too broadly, it is hard to grasp all of it and change it.  If a process is defined too narrowly, we may miss the VOC that helps us have a better system.
     
    The second concept that is evolving, for me, is that of customer sophistication.  In my experience, some of our external customers simply look at a sample of our final products and then say they want x lbs of it.  Asking them about their expectations is almost as fruitless as asking a wall (OK, a bit of an overstatement).  Other customers, however, interact with us and develop a pretty detailed written description (expectations, VOC).  The sophisticated customers seem to be a lot easier to understand from a VOC perspective, and the less sophisticated are harder to work with (to understand how one might delight them, as even they do not seem to know).  Do others also see a wide range of sophistication among their customers?
    Thanks for your reply,
    Eugene

    0
    #154575

    Lee
    Participant

    I’m not the swiftest boat in the ocean, but this is what I do:
    1.  I do not bother with the standard deviation.  True, it is just a formula, and any set of numbers will produce a std dev value.  However, that value ONLY has the interpretation of encompassing 68% of the data if the data are normally distributed, and I have no reason to suppose that survey results are normally distributed (example: you intend to teach that 2+2=4.  At the end of the teaching you survey whether you met your objective.  Hopefully all of the participants would agree that you did, so the responses are not normally distributed).
    2.  I use surveys for the classes I teach.  I run a chi-squared test to compare the survey results from two different sessions.  Normally I look for at least an 80% confidence level that the survey results are not the same.  If that trigger is met, I re-examine the activities to determine what I did that increased the survey results (If I modified the instructional material, is that the cause?  Did someone make an insightful observation that helped the class?  etc.  Then I make sure that factor is retained for the next class.).  At the other end of the spectrum is a decrease in the survey response (Did I forget a topic?  Did a change, intended for the better, make the link between topics less clear?  Did the background of the participants change in an unexpected way?  etc.)
    I do not have a good basis for the 80% confidence level I use, other than that at that level I can usually figure out what changed that may have created the different survey response; i.e., the 80% level was empirically determined by me and allows me to focus on a few course changes at a time.
     
    Eugene

    0
    #152734

    Lee
    Participant

    What I was looking for was something in the format below (but I have not yet seen it):
    IMHO, Xbar-R charts: 80% of the time; attribute charts: 15% of the time; I-MR charts: 5% of the time.
    So, in practice across industries in the US/world, what are your humble estimates of the percentages?

    0
    #152659

    Lee
    Participant

    Good thoughts, and not presented in the demeaning format typical of the forum.
    I'll be looking up the references you mentioned.

    0
    #152609

    Lee
    Participant

    This is not much of an objective basis, but it is one criterion I'm beginning to accept: pick projects for which the system is relatively stable.  In other words, take a look at the time since the existing process last changed, and the time in which you might be able to complete the project.
    Example: if a process tends to change every two months, but you expect it will take three months to complete your project, you will likely find your solution is based on a process that is no longer in use; equally bad, you may find yourself extensively revising the project, which will take even more time.
    Since I'm relatively new to SS, I'd be glad to hear others' views on this.  One possibility is to just create a whole new system (DSS) for those cases, but I have not gone down that road yet.
    Eugene

    0
    #152462

    Lee
    Participant

    In my late-Friday post it was not obvious (at least to me, on a Monday morning) what I was trying to accomplish.  We are using the ANSI standard to determine a "good" sample size for our lots; the lots vary in size from <10 to about 500.  The sample testing is to determine whether we "have a problem".  That implies that we set an AQL, but to do that I need to express the AQL in terms of the probability of future testing being satisfied, as that is closer to the terms/way of thinking used here.  The example provided was to see if I was handling the ANSI tables correctly.
    Thanks
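    To connect an AQL-style plan to "probability of future testing being satisfied", here is a minimal sketch in Python.  It is not the ANSI tables themselves; the plan values (n, c) are hypothetical, and the binomial model is an approximation that works best when the sample is small relative to the lot.

        # Hedged sketch: probability of accepting a lot under a single sampling plan
        # (sample n, accept if defects <= c), at several assumed defect rates.
        from scipy.stats import binom

        n, c = 32, 1  # hypothetical plan values
        for p_defect in (0.005, 0.01, 0.025, 0.05, 0.10):
            p_accept = binom.cdf(c, n, p_defect)
            print(f"defect rate {p_defect:.1%}: P(accept) = {p_accept:.3f}")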

    0
    #147603

    Lee
    Participant

    I too found no canned function, so I made a spreadsheet that accepts values, plots the histogram, provides statistics, and then overlays the normal curve shape over the histogram of plotted points.  It is just part of a worksheet I use to generate SPC charts.  It is not 100% automated, but it beats nothing.
    Contact me offline, at [email protected] if you want to followup.
    Eugene
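    For anyone wanting the same thing outside a spreadsheet, a minimal sketch in Python (the data below are stand-in values):

        # Hedged sketch: histogram with a normal curve (matched mean and std dev) overlaid.
        import numpy as np
        from scipy import stats
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(5)
        data = rng.normal(loc=2.35, scale=0.21, size=250)  # stand-in for real measurements

        mu, sigma = data.mean(), data.std(ddof=1)
        x = np.linspace(data.min(), data.max(), 200)

        plt.hist(data, bins=20, density=True, alpha=0.6, edgecolor="black")
        plt.plot(x, stats.norm.pdf(x, mu, sigma), linewidth=2)
        plt.title(f"mean = {mu:.3f}, std dev = {sigma:.3f}")
        plt.show()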

    0
    #147602

    Lee
    Participant

    I'll be looking at the process again today and will look for alternate metrics that may be easier to control.  Alternate drivers is a good idea.  Thanks for the help.
    Eugene

    0
    #147585

    Lee
    Participant

    First, thanks for the reply.
    Regarding the SPC chart limits: you did check the problem before you replied, and caught an issue.  I usually think in terms of the std dev of the packages, so the quoted std dev was from the 250 individual package weights and is 0.212 lb.  The std dev of the 50 sets of 5 is 0.117.  To construct the SPC chart I used UCL = A2*R + Ave, where A2 is 0.577 for groups of 5, R is the average range (0.3724 for the 50 sets), and the average of the 50 averages is 2.349; thus UCL = 2.564.
    As for the operators: I agree that they are smarter than they are given credit for.  It amazes me that we hire people who are involved in civic duties, who run households and pay taxes, yet when they clock in they are believed to be incapable of thought!  Sometimes I think that a process improvement would be to remove the time clocks.  Anyway, in my case the operators are not seasoned persons; some have been at the job as little as two days, some as long as 6 months, fewer beyond that.  I'm tending to believe that better operator training is the stone we need to step on to move forward.
    As for the area of concern: my concern is producing product that is outside of the 2 to 2.5 lb range.  The customer specs allow two pieces to be inserted, so using two pieces is not a deviation.  Of the 250 individual packages weighed (data taken from the raw SPC chart data), 41 packages were in fact not within the spec range.  What I'm fighting (I think) is the mindset of "it's within specs, so it's OK" vs "the process changed, so let's find out what happened".  The lingering question I had is whether an SPC chart is still applicable in a process that is dominated by human behavior rather than machine performance.
    So my line of thought is to provide better training on how to do the job, and to fight the Goliath of "it's in spec, so it's OK".  The question is: "Is the SPC chart applicable for this situation?"
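    For reference, a minimal sketch reproducing the limit calculation quoted above (numbers taken from this post):

        # Hedged sketch: Xbar-chart limits, UCL = Xbarbar + A2 * Rbar, LCL = Xbarbar - A2 * Rbar.
        A2 = 0.577           # constant for subgroups of 5
        x_bar_bar = 2.349    # average of the 50 subgroup averages (lb)
        r_bar = 0.3724       # average subgroup range (lb)

        ucl = x_bar_bar + A2 * r_bar
        lcl = x_bar_bar - A2 * r_bar
        print(f"UCL = {ucl:.3f} lb, LCL = {lcl:.3f} lb")  # UCL comes out near 2.564, as stated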

    0
    #147570

    Lee
    Participant

    Is a copy of the spreadsheet still available?
    [email protected]
     
    Thanks, Eugene

    0
    #127802

    Lee
    Participant

    Sigma is the standard deviation of your data, and the degrees of freedom is your sample size minus 1.
    Hope that helps!
    Eugene

    0
    #106603

    Lee
    Participant

    Ritz, hello.
    Thank you for your information. It is very useful for me.
    I am using the Excel data analysis tool now, so thank you for your advice.
    It is very pleasant to hear that there are some connections between you and Ukraine.
    So I will raise my level in statistics.
    Have a nice day!
    Regards
    P.S. If you ever wish to visit Kiev, please let me know; I hope I can help you with that.

    0
    #105969

    Lee
    Participant

    Ritz, thank you very much for this complete information!
    Yesterday I improved my statistics knowledge, and today I understand you entirely.
    I have an additional question: why is ANOVA an inappropriate test?  I mean, if before testing I have performed the Anderson-Darling test, then ANOVA is suitable, is it not?
    And I will impose on your goodness one more time.  You see, I am from Ukraine and have no statistical software tools, so I try to work out the algorithm and implement it myself.  Can you suggest some links to web sites with such detailed explanations?
    Thank you very much; you have helped me greatly.
    Regards
    Eugene
     

    0
    #105906

    Lee
    Participant

    Ritz, thank you for the help.
    But I still have some questions; I have just started to learn statistics, hence the question.
    I use single-factor ANOVA analysis (is it also the t-test?), but it does not answer the question of by how much one mean is less than or greater than the other.  I am interested in a quantitative estimate.
    Can you explain what factors are responsible for the quantitative ratios between the means?
    Thank you again
    Regards
    Eugene

    0
    #105905

    Lee
    Participant

    Renato, thank you for the help, first of all.
    But I still have some questions; I have just started to learn statistics, hence the question.
    I use single-factor ANOVA analysis; as I understand it, the ratio between F and F critical determines the statistical significance of the difference between the two means, but it does not answer the question of by how much one is less than or greater than the other.  I am interested in a quantitative estimate.
    You say the p-value represents that quantitative estimate, does it not?
    Could you tell me about this, please?
    Thank you again
    Regards
    Eugene

    0
    #98221

    Lee
    Participant

    I understand you are all on the Six Sigma wagon, but please come up with some jokes that really pertain to Black Belts.  I am sure you are all very creative.  The above are old jokes that used to be attributed to engineers in general.

    0
Viewing 79 posts - 1 through 79 (of 79 total)