# The 1.5 Shift – The Published Evidence


This topic contains 29 replies, has 14 voices, and was last updated by lin 14 years, 11 months ago.

February 5, 2004 at 6:17 pm #34468

Robert Butler (Participant, @rbutler)

My first encounter with 1.5 was not a pleasant one. The company I worked for was supplying product to a customer who made frequent comments concerning the capability of our process and, per their long-term projections, the probability of our impending failure to meet requirements. It was my responsibility to meet with their rep to iron out the differences. In the meeting I was introduced to the notion of the inevitability of a 1.5 shift and the predicted failings of my company. I freely admitted that I had never heard of the concept and I asked for details. I was told the idea had been around for a very long time, there were countless published papers on the subject, and the 1.5 shift was a fact which had been verified on hundreds of separate occasions. It was strongly intimated that I was not only a poor analyst but also hopelessly out of touch with current best practices. After recovering from this verbal beating, I asked for references. In so many words my assailant admitted he didn't happen to know of any offhand, but he assured me the information was readily available and all I had to do was search the web to find it. A search of the statistical literature didn't reveal a thing. While searching the web I found the iSixSigma site and posts like the following:

"Is There Any Empirical Evidence To The 1.5 Shift?" (posted Friday, 11th October 2002): "First, a long time is years. Long-term variation should only be talked about for a process that has been observed over years, not months or weeks as some would like to think. The empirical evidence comes from Motorola. They studied processes that they had applied Six Sigma to years after the project ended and that is when they noticed a 1.5 sigma shift, on average. I don't know whether that is published anywhere. Anybody with a link?"

The answers went pretty much like this (posted Friday, 11th October 2002): There is a great deal of discussion here regarding the assumed 1.5 sigma shift.
Okay, deal with it statistically: someone somewhere made a lot of observations and so here we are. What does it mean to you as a six sigma practitioner? It means that in most cases you will overstate your actual results by 1.5 sigma in the long term. Long term is not relevant in a continuous improvement culture. Strive to maintain and develop a six sigma continuous improvement culture and the question becomes moot. The answers to the 1.5 sigma shift have been well documented in countless responses.

In short, the web was equally mute with respect to information and case studies. I found the lack of solid information, about a subject which had been so vehemently promoted by our customer, to be very distressing. Finally, courtesy of this site, one of the posters referenced the then recently published article by Davis Bothe, "Statistical Reason for the 1.5 Sigma Shift," Quality Engineering 14(3), 479-487 (2002). I immediately got a copy and read it. The last part of the opening paragraph of Bothe's paper went right to the heart of the problem then, and it appears to be central to recent exchanges on this forum: "When asked the reason for such an adjustment, Six Sigma advocates claim that it is necessary, but they offer only personal experiences and three dated empirical studies (2-4) as justification (two of these studies are 25 years old, the third is 50)." The papers referenced by Bothe are the Bender, Evans, and Evans papers listed below. Bothe showed that certain conditions will give a process shift between 1.3 and 1.7. However, he was also very clear that it could be more, it could be less, and that all of what he had to say was predicated on the assumption of a stable process variance. He concluded by asking, "What if sigma is also subjected to undetected increases and decreases?
Further studies are needed to determine how these changes would affect estimates of outgoing quality."

The post of 26 August (below) is the only listing of citations in support of the 1.5 shift I have seen in the public domain, and it mirrors the antiquity of the papers exhumed by Bothe's careful research. It also reinforces the impression that these few articles are really all there is as far as formal support for the 1.5 shift is concerned.

"'Z' Short Term And Long Term" (posted Tuesday, 26th August 2003): "…Again, you really need to read the following resources:"

- Harry, M. J. and Prins, J. (1991). The Vision of Six Sigma: Mathematical Constructs Related to Process Centering. Motorola University Press, Motorola Inc., Schaumburg, Illinois.
- Harry, M. J. and Stewart, R. (1988). Six Sigma Mechanical Design Tolerancing. Publication Number 6s-2-10/88. Motorola University Press, Motorola Inc., Schaumburg, Illinois.
- Harry, M. J. and Lawson, R. J. (1988). Six Sigma Producibility Analysis and Process Characterization. Publication Number 6s-3-03/88. Motorola University Press, Motorola Inc., Schaumburg, Illinois.
- Bender, A. (1975). "Statistical Tolerancing as it Relates to Quality Control and the Designer." Automotive Division Newsletter of ASQC.
- Evans, David H. (1974). "Statistical Tolerancing: The State of the Art, Part I: Background." Journal of Quality Technology, 6(4), pp. 188-195.
- Evans, David H. (1975). "Statistical Tolerancing: The State of the Art, Part II: Methods for Estimating Moments." Journal of Quality Technology, 7(1), pp. 1-12.
- Evans, David H. (1975). "Statistical Tolerancing: The State of the Art, Part III: Shifts and Drifts." Journal of Quality Technology, 7(2), pp. 72-76.
If this is the case, then the hundreds of independent confirmations and dozens of papers claimed by that customer rep of a few years ago (and also claimed by my Black Belt instructors and other Six Sigma practitioners) are reduced to three old papers, a few articles by one other individual, anecdotes… and the Bothe summary. For a concept that is central to so much Six Sigma rhetoric, this is unacceptable.

My experience in industry closely mirrors the many facets of the Bothe paper. I have worked on processes that exhibited long-term drift which could probably be summarized and guarded against by invoking 1.5 (or something more or less) as a production protection factor. I have worked on processes where the combination of changes in mean and variance worked, over the long term (as in 3.5 years long term), in the opposite direction, and I have worked on processes where careful implementation of standard SPC practices held the long-term variation and mean shift to within limits much smaller than those predicted by the automatic invocation of 1.5. Given the nature of the second law of thermodynamics, this really isn't too surprising. Contrary to commonly held belief, the second law of thermodynamics does not say that entropy is constantly increasing in all systems (yes, I know, I have textbooks that offer up this kind of sound-bite science too). What it does say is that entropy is constantly increasing in a closed system. I have never worked on a closed-system process.

Recently on this forum there has been a veritable blizzard of posts concerning the existence and the extent of the 1.5 shift. Posts ranging from flippant to densely unreadable have been exchanged and, at the moment, one of the discussants has given the impression he is going to try to set up some kind of panel of judges at the University of Arizona to assess the merits of the expressed views.
The meager supply of carefully written recent papers on the necessary and sufficient circumstances required for the occurrence of the 1.5 shift suggests the interests of Six Sigma practitioners would be far better served by an extended discourse in refereed journals – not a one-time Olympic-Judges-Please-Hold-Up-Your-Score-Card contest in an obscure university auditorium.

February 6, 2004 at 8:05 am #95138

Amen.

Well said – but I bet you can't sell any snake oil with answers like this one. It needs to be more confusing and long-winded. It also must hold the threat of intimidation: that you must be an idiot if you don't fall for parlor tricks like showing that SPC with a sample of 4 can let in a 1.5 shift (they forget about what happens with a rational start criterion and a rational sampling frequency).

I have been doing this for as long as Dr. Harry and his lap dog, Reigle. I have never seen a 1.5 shift in a process with real controls, SPC or otherwise.

You have to wonder why it is soooo important to these guys. Note to Mikel – you have made tens of millions of dollars, mostly from other people covering your back – go retire and enjoy life. Note to Reigle – get a life.

February 6, 2004 at 9:52 am #95142

Anonymous

Stan,

I believe that they have now shifted their position from that of a process mean shift to one of sampling error in the estimation of sigma (n=30) in a short-term process capability study. I don't agree with this position either. What is your conclusion in this respect?

Regards,

Andy

February 6, 2004 at 11:10 am #95143

Have you ever seen a magician or a scam artist draw a lot of attention to something while something else is going on? That is the SPC-for-a-subgroup-of-4 and sampling-error explanation. Most are not statistically adept enough (yes, even the "experts," the BBs and MBBs with a few years' experience and no real stats education) to know, and this is especially true in the board room. Not only do they not understand, they also don't want everyone else to know they don't understand.

By the way, do I believe in the principles and tools of variation reduction? Absolutely. It's just that all this other crap does not need to be there.

February 6, 2004 at 11:33 am #95144

Whether the 1.5 shift exists or not is not clear (to me). From this forum it is clear that there are two schools of thought. To my little understanding, when I measure my process with a set of data (n=100), I get one particular mean and std. deviation; within the same process, when I measure another subgroup of n=100, I get a different mean and std. dev. Now if I repeat this process, I have various means and std. deviations.

This itself is a sample of means, from which I can plot and check the mean of the means and the std. dev. of the means. I did this out of my own interest on one of our assembly processes. My shift is nowhere near 1.5. To be precise, it is 0.98s.

Now, my question is: if there is variation between and within the subgroups, how rational is it to estimate my long-term or short-term sigma level? There are no clear definitions of how to define long term and short term (to my understanding).

I am lost in this confusion, when SS is supposed to be data driven! Is it really data driven to say a 1.5 shift exists? What happens if I don't consider it?
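As an editorial aside, the experiment described above is easy to simulate. The sketch below (all parameters are illustrative, not taken from the post) draws repeated subgroups of n=100 from a perfectly stable normal process and measures how far the subgroup means wander, in units of the overall standard deviation; even with zero true shift, sampling error alone produces a nonzero apparent "shift":

```python
# Simulate a stable process with no real shift, then measure the
# apparent shift of subgroup means. Parameters are illustrative.
import random
import statistics

random.seed(42)

n, g = 100, 50  # subgroup size, number of subgroups
subgroup_means = []
pooled = []
for _ in range(g):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    subgroup_means.append(statistics.fmean(sample))
    pooled.extend(sample)

sigma = statistics.stdev(pooled)  # overall ("long-term") estimate
grand = statistics.fmean(subgroup_means)

# Largest excursion of any subgroup mean from the grand mean, in sigmas:
max_shift = max(abs(m - grand) for m in subgroup_means) / sigma
print(f"worst observed 'shift' = {max_shift:.2f} sigma")
```

With these settings the worst excursion is typically a few tenths of a sigma, which is worth keeping in mind before reading any observed shift as a real process change.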

February 6, 2004 at 3:33 pm #95151

Mario Perez-Wilson (Participant, @Mario-Perez-Wilson)

This is an excerpt from my book Six Sigma – Understanding the Concept, Implications and Challenges. I was at Motorola in a role to promote and implement Six Sigma.

Is Six Sigma 3.4 ppm? Six Sigma is not 3.4 ppm. The whole misunderstanding about 3.4 ppm resulted from Motorola's document "Our Six Sigma Challenge". In it Motorola asserted that if a process was made to be Six Sigma, by having the design specifications be twice the process width, the process would be extremely robust. Even if such a process was surprised by a significant or detrimental shift in average, as high as ±1.5 sigma, the customers would not perceive a degradation in quality. At worst case, a shift of 1.5 sigma would make a zero-defects product become 3.4 ppm defective, and the customer would only perceive an increase from zero to about 3 products defective, assuming a production run of 1,000,000. This was supposed to be the warranty Six Sigma processes brought to the customer, not actual ppm levels for Six Sigma.

The problem became widespread when Dr. Mikel Harry [1] attempted to find a mathematical justification for a ±1.5 sigma shift in average by erroneously quoting an article written by David H. Evans. In the series of articles "Statistical Tolerancing: The State of the Art", and more specifically in Part III, Shifts and Drifts [4], Evans discusses a tolerance stacking problem in which multiple disks are stacked to produce a final stack assembly. He states "…that a slight shift in the mean thickness of the disks could cause a drastic increase" in the fraction out-of-tolerance of the final stack assembly. He also states a good quality control program would detect any shift in means in the components or disks. But he cites a solution for setting tolerances proposed by A. Bender: take the variance of the linear combination of the individual disk variances, take its square root, multiply it by a factor of 1.5, and use this as the standard deviation of the final stack assembly. In other words, Bender suggested amplifying the standard deviation by a factor of 1.5 to compensate for any shift in mean of any individual disk, and to compensate for the lack of prediction. Nowhere do Evans or Bender suggest the mean be shifted by any constant, far less by 1.5 sigma. Furthermore, Evans states that "…it is almost impossible to predict quantitatively the changes in the distribution of a component [disk] value." Dr. Mikel Harry erroneously conflated a 1.5 inflation of the estimator of the standard deviation with a shift in mean of 1.5 sigma.
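The algebra at the center of this dispute is easy to verify directly: multiplying the root-sum-square of the component standard deviations by 1.5 is identical to inflating each component standard deviation by 1.5 before combining, which is presumably why the two readings get conflated. A minimal sketch with made-up component values:

```python
import math

# Hypothetical disk-thickness standard deviations (illustrative values):
s = [0.12, 0.08, 0.15, 0.10]

# Bender's suggestion: inflate the RSS of the stack by a factor of 1.5 ...
v_total = 1.5 * math.sqrt(sum(si ** 2 for si in s))

# ... which is algebraically identical to inflating each component first:
v_each = math.sqrt(sum((1.5 * si) ** 2 for si in s))

assert math.isclose(v_total, v_each)
print(round(v_total, 4))  # 0.3463
```

Either way it is the standard deviation that is being inflated; nothing in this identity moves the mean.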

What is a ±1.5 Sigma Shift? The plus-or-minus 1.5 sigma shift surfaced when Motorola, in their explanation of "Why Six Sigma?", used it as a worst-case scenario of a significant shift in process average. They stated that a ±1.5 sigma shift would not show a detriment in the out-of-tolerance percentage to their customers if their processes were designed to have their specification limits at twice the process width, i.e., at Six Sigma levels. It does NOT imply a process mean shifts by about ±1.5 sigma over time or on average.

Mario Perez-Wilson
asc@mpcps.com
http://www.mpcps.com

February 6, 2004 at 4:23 pm #95157
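The worst-case arithmetic Mario describes (a zero-defects Six Sigma design degrading to roughly 3.4 ppm under a 1.5 sigma shift toward one limit) can be checked with a normal tail probability. This is a sketch using only the standard library; the one-sided 4.5 sigma tail is the usual convention behind the 3.4 ppm figure:

```python
import math

def norm_sf(z: float) -> float:
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Six Sigma design (spec limit 6 sigma away), worst-case 1.5 sigma shift:
ppm_shifted = norm_sf(6.0 - 1.5) * 1e6
print(f"{ppm_shifted:.2f} ppm")   # about 3.40 ppm

# Perfectly centered process, both tails combined:
ppm_centered = 2.0 * norm_sf(6.0) * 1e6
print(f"{ppm_centered:.4f} ppm")  # about 0.002 ppm
```

The gap between those two numbers is exactly the "warranty" reading of the shift: 3.4 ppm is the guaranteed ceiling under the worst assumed centering error, not a prediction of actual defect rates.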

Reigle Stewart (Participant, @Reigle-Stewart)

Ram: First, you are treating the 1.5s shift as a constant, but it is not. Even Dr. Harry says the shift factor is NOT a constant (as so many falsely say). The shift factor will vary as degrees of freedom and alpha vary. For the case alpha=.005 and df=29, the shift expectation (using the chi-square distribution) is 1.46s, or about 1.5s. Of course, this is a statistical worst-case condition. So we expect to see a 1.5s shift or less (for the vast majority of samplings); we do not expect to see anything greater than 1.5s. For your case of n=100 and subgroup size g=1, the equivalent shift expectation would be about 1.2s. Again, this is a worst-case expectation for the case of g=1 and n=100. So your discovery of a 1.0s shift is clearly within expectation, given df=99 and alpha=.005. If you gathered g=10 such subgroups of data (each reflecting n=100) such that ng=1,000, the shift expectation would be about 1.06s. Again, you observed a shift of .98s. This is very close to your estimate. You can read a lot more by referencing several of the questions on the Ask Dr. Harry forum on this website. Several of these questions directly relate to what you are asking, including the definitions of what constitutes short-term and long-term variation and how the shift factor can be computed using the ANOVA model.

Reigle Stewart

February 6, 2004 at 7:41 pm #95163

Anonymous

Mario,

You have made a fine contribution …

Good luck!

Andy

February 7, 2004 at 12:03 am #95168

Reigle Stewart (Participant, @Reigle-Stewart)

Mario Perez-Wilson: Your last post is seriously in error (and maybe even a little self-serving). In your recent post, you stated: "Nowhere, do Evans or Bender suggest the mean be shifted by any constant, far less a 1.5 sigma shift. Furthermore, Evans states that '…it is almost impossible to predict quantitatively the changes in the distribution of a component [disks] value.'" So far, this is true; however, Dr. Harry has never said (anywhere or anytime, to my knowledge) what you are saying he said. In fact, a cursory review of Dr. Harry's and Dr. Lawson's 1992 book (Six Sigma Producibility Analysis) shows Dr. Harry quoting Evans' work. In this book, Dr. Harry DOES NOT confuse an inflation rate of 1.5 with a 1.5 sigma off-set in the mean. In fact, he and Dr. Lawson demonstrate how such an expansion can be equivalently expressed in the form of a linear off-set in the mean. You also stated: "Dr. Mikel Harry had erroneously misinterpreted a 1.5 magnitude of inflating the estimator of the standard deviation with a shift in mean of 1.5 sigma." Again, this is FALSE. Evans is quoted as: "A solution proposed by Bender … allows for shifts and drifts. Bender suggests that one should use V = 1.5*sqrt(Var(X)) as the standard deviation of the response" to relate component tolerances and the response tolerance. Given that V is the standard deviation of the stack, the aforementioned equation can be algebraically rewritten as V = sqrt((1.5*S1)^2 + … + (1.5*Sn)^2). So Bender is applying the recommended uniform correction to all of the individual component standard deviations. Hence, he is recommending an inflationary correction of c=1.5 for any given component. Dr. Harry has clearly demonstrated that when alpha=.005 and df=29, the inflation rate is c=1.4867. Obviously, this is very close to c=1.5 (for all practical purposes). Furthermore, Dr. Harry demonstrated that Z.shift = 3(c-1). So, for the case of c=1.5, the equivalent mean off-set is computed as Z.shift = 3(1.5-1) = 1.5. Again, all Dr. Harry is saying (and has said over the years) is that an expansion of the standard deviation (due to error) can be expressed as an equivalent linear off-set in the mean.

Reigle Stewart

February 7, 2004 at 12:25 am #95169

Reigle Stewart (Participant, @Reigle-Stewart)

Mario Perez-Wilson: You stated in your recent post: "The plus or minus 1.5 sigma shift surfaced when Motorola, in their explanation of 'Why Six Sigma?', used it as a worst case scenario of a significant shift in process average. They stated that a ±1.5 sigma shift would not show a detriment in the out-of-tolerance percentage to their customers if their processes were designed to have their specification limits be at twice the process width, or at Six Sigma levels. It does NOT imply a process mean shifts about ±1.5 sigma over time or as an average." I could not agree more with you, but you make it sound like Dr. Harry DOES imply this when, in reality, he DOES NOT. Dr. Harry has said (over and over) that every process will exhibit a unique amount of shift and drift, but when the exact amount cannot be estimated, the RULE OF THUMB is to use 1.5. What you forgot to mention in your discourse is that Bill Smith asked Dr. Harry (in 1985) to investigate Bill's intuition-based assertion that the average process will shift and drift by about 1.5 sigma. Well, Dr. Harry did, and found statistical evidence to support his intuitive case. Even Bill did not try to assert that 1.5 is a constant; nobody has asserted this except novice practitioners. Yes, we treat it as a constant for purposes of first-order benchmarking, but this is quite justifiable since most of the benchmarking data from which a sigma value is calculated is based on TDPU (which is long-term by nature); use of the 1.5 shift is merely a way to APPROXIMATE the short-term capability.

Respectfully,
Reigle Stewart

February 7, 2004 at 1:28 am #95170

Reigle,

Out of curiosity, would you agree that I can live my life as a Six Sigma professional and NEVER incorporate the 1.5 sigma shift?? That I can complete my DMAIC projects and maintain the improvements without considering the shift at all?

In an average project (for the sake of argument, let's say an average project exists), what does the 1.5 sigma shift tell me or buy me?

I'm just trying to understand.

February 7, 2004 at 3:11 am #95172

Reigle Stewart (Participant, @Reigle-Stewart)

Matt: As I understand, your question is: "Out of curiosity, would you agree that I can live my life as a Six Sigma professional and NEVER incorporate the 1.5 sigma shift? That I can complete my DMAIC projects and maintain the improvements without considering the shift at all?" Answer: Yes, it is entirely possible that you might go an entire career and never find a need to use the 1.5 sigma shift. Yes, you can complete many types of DMAIC projects and never use the shift factor. But remember, some projects (like DFSS-related projects) might necessarily rely on its use. For example, assembly gaps are quite often highly dependent on process centering error. Many design engineers know that processes shift and drift, but they don't know how much they should compensate their design configuration or tolerances for its effect. Remember, the 1.5 is a rule of thumb and not an absolute. The quantity Z.shift=1.5 should only be applied when no other information or data is available.

Respectfully,
Reigle Stewart

February 7, 2004 at 3:15 am #95173

Reigle Stewart (Participant, @Reigle-Stewart)

Matt: As a post-script, many physicians (general practitioners) go their entire career and never use surgical instruments, but does that mean we should not teach how to use them at medical schools?

Reigle Stewart

July 14, 2004 at 12:38 pm #103402

Surprised (Member, @Surprised)

I happened to find this response from earlier this year to a message on the shift. Look at the previous two messages in the thread.

In it, Reigle states “Remember, the 1.5 is a rule-of-thumb and not an absolute. The quantity Z.shift=1.5 should only be applied when no other information or data is available.”

Yet, I don't think I remember reading this in Reigle's voluminous, long-winded posts of late, as it implies that any shift may be much smaller than 1.5 (when dealing with highly precise, automated manufacturing in a very controlled area, like making microprocessors) or much greater, and perhaps even 1.5 (when dealing with less precise, less controlled manufacturing, like making snow shovels or answering questions over the telephone).

My examples, of course.

Funny thing about it: I believe that this response to Matt's question, and the brief, clear answer, is telling.

He admits that the 1.5 is a rule of thumb and not an absolute. So why is it even considered part of the methodology, unless it is used to inflate numbers (and many EGOS) and get users closer to their "goal"?

The shift is like the fuel additives for gasoline available at the local store: use it or don't use it, but know that by using it you're probably not improving your situation or your gas mileage.

Do you have a long-winded explanation for us, Reigle??

July 14, 2004 at 1:15 pm #103403

We don't need Reigle's long-winded explanation; we have direct quotes from Dr. Harry himself. He refers to the 1.5 shift as a "statistical correction". In his e-book, he states:

Statistically speaking, the 1.5 shift factor represents the extent to which a theoretical ±4 sigma design model should be "shifted" from the target value of its performance specification in order to study the potential influence of long-term random effects. In this sense, it provides an equivalent first-order description of the expected "worst-case" sampling distribution under the conditions of df=29 and 1-alpha=.995. … As previously discussed, this statement is fully constrained to the case n=30 and alpha=.005 and is generally limited to the purpose of design qualification and producibility analysis, as well as several other common types of engineering assessments.

So, he never promised us a rose garden, nor declared this to be a universal truth, despite many attempts by others to do so. How many real-life applications use n=30 and alpha=.005? If yours does, this could work; if not, collect some data and determine your own process shift, if any.
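For the curious, the n=30, alpha=.005 arithmetic behind that constrained claim can be reproduced numerically. The sketch below takes the construction described in Reigle's earlier posts at face value (a worst-case sigma inflation c from the lower chi-square bound on a sample variance, then Z.shift = 3(c-1)); it uses SciPy for the chi-square quantile:

```python
from scipy.stats import chi2

alpha, df = 0.005, 29  # n = 30 short-term capability study

# Worst-case inflation of sigma implied by the lower chi-square bound
# on the sample variance at this alpha and df:
c = (df / chi2.ppf(alpha, df)) ** 0.5

# The equivalent mean off-set, Z.shift = 3(c - 1):
z_shift = 3 * (c - 1)

print(round(c, 4))        # 1.4867
print(round(z_shift, 2))  # 1.46
```

Change df to 99 (a single sample of n=100) and the worst-case inflation drops noticeably, which is the sense in which the 1.5 is tied to the n=30 convention rather than being a universal constant.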

July 14, 2004 at 3:28 pm #103409

cornucopia (Participant, @cornucopia)

Well stated. It gets no clearer or more meaningful than that. Believe it or don't. Use it or don't. Embrace it or don't. Speak to it or don't. It really matters not. There is no more logic or science to it than what was just expressed by light saber boy. To make more of it is to waste your time chasing shadows. Get past it. The great debate, if it occurs, will have both sides making the same point using different examples, analogies, and terminology. Shift away from concerns about the shift. It's of no value – reread Darth's post whenever you have a concern about the 1.5 sigma shift. Post it on the walls of your offices, cubicles, or little telecommuting briefcases. But, please, quit wasting bandwidth and time posting goofy questions and long-winded commercially driven responses about it.

July 14, 2004 at 3:57 pm #103412

So Darth, how do you explain the 1.5 shift built into the conversion of attribute data (by definition it is long term) to sigma levels?

This does not jibe with the cop-out explanation from the esteemed Dr.

July 14, 2004 at 4:59 pm #103414

Reigle,

Can you explain how you say that the maximum shift you can ever expect from a process is 1.5 sigma? Please give some calculations on the screen.

July 14, 2004 at 6:05 pm #103419

I intend to debate that very point directly with Dr. Harry later this month in Phoenix… oh wait… that's you. I look forward to his explanation.

July 14, 2004 at 6:09 pm #103420

Harry does a long and confusing (for me) derivation in his book. The actual number computes to 1.46, but that would sound silly as a sound bite, so he rounds to 1.5 for convenience. Again, it is predicated upon his limited assumptions, so it is more of a nice math exercise than of much practical use for many.

July 14, 2004 at 6:51 pm #103424

Dr Harry is my hero (Participant, @Dr-Harry-is-my-hero)

Darth – do you have a Dr Harry action figure at home that you put on your pillow at night? You are pathetic. Six sigma is great, and you have obviously made a nice career out of it somewhere (please don't tell me where, I am frightened to know), but get yourself a dog or a hobby of some sort. You know way too much about the man and reference him / drop his name way too often for it to be healthy.

July 14, 2004 at 7:06 pm #103426

You think Darth drops his name a lot??? Where have you been??

July 14, 2004 at 7:08 pm #103427

You say it is not a constant, but all the conversion tables of sigma level to DPMO treat it as a constant. And these are provided to operations, not as a tool for design as you state.

What gives? Are you saying we should throw these tables out?

July 14, 2004 at 7:10 pm #103428

Matt,

I've only got a sample of one (Reigle), but this Dr. Harry worshiping crowd does seem to be a little slow.

July 14, 2004 at 7:10 pm #103430

Oh, come on now. Don't be too hard on good ole Darth. He means no harm! He just wants to be part of the "cool crowd". He goes out of his way to be part of any group, regardless.

July 14, 2004 at 7:14 pm #103431

Hey Hero, you have obviously not been following my posts over time. You must have meant our friend Reigle. I am certainly not one of the Harry worshipers. I respect what he has done and give him the appropriate credit, as all of us in the business should do. My recent posts have attempted to answer some basic questions, when in reality, if some of the posters would actually do a little research, do a little reading, and attempt to understand what he has written, we might not have some of these long-winded, repetitive, and wandering threads. I put you in that classification.

July 14, 2004 at 7:15 pm #103432

Darth does? You, sir, are either very uninformed (as in not reading and understanding posts) or an idiot, or both. My bet is both. Darth can be counted on for objectivity, and for being, occasionally, a little too supportive of Stan's obsessive anti-Harry commercialism and self-promotion campaign.

July 14, 2004 at 7:16 pm #103433

Glad to see you have shortened your name to SS instead of SSMBB. But, as usual, you are quick with the quip and insult and slow on the intellectual contribution. Feel free to add something of value instead of insulting.

July 14, 2004 at 7:21 pm #103434

Darth isn't the only one lately. Seems like a lot are starting to see through the commercialism and self-promotion. You left out the part about selling vaporware; we are obsessive about this nasty little habit as well. If you don't understand the vaporware part, talk to the hundreds who attended two weeks of it at Croatenville.

July 14, 2004 at 7:23 pm #103435

"Darth can be counted on for objectivity and being, occasionally, a little too supportive of Stan's obsessive anti-Harry commercialism and self-promotion campaign."

Darth? No way! You surely must be talking about someone else. He just wants some recognition. That's all.
