Likert Scale for DOE

    #54182

    Sam
    Participant

    We messed up some of the DOEs we ran in past months, so now we are doing small, simple experiments in our processes to improve them and to learn design of experiments. Right now I am planning an experiment with 3 factors at two levels, a resolution III design. I am planning to use a Likert scale of 1-3-5-7: bad, not so bad, it's tasty, very tasty. I will have something like 8 repeat Y's for this. Now my question is: is the Likert scale I am choosing the right one? Any comments/improvement ideas on the scale? I know the analysis of repeat measures is different, and I will need help with it, which I will ask for after collecting the data…
    Thank you in advance for the help… I am hoping Robert Butler (@rbutler) will see this post and give his comments.

    0
    #194062

    Chris Seider
    Participant

    I suspect you will not find many statistically valid factors with such an approach. Is there no way to have a better response variable, or at least to have a large group of assessors and find the % of "very tasty" ratings for the product?

    0
    #194064

    Sam
    Participant

    Thank you @cseider. The group size is 8; is this enough for the % analysis you mentioned? Could you explain this further, please?

    0
    #194068

    Chris Seider
    Participant

    If you have a percentage, then you have a chance of finding something significant. If you look up power and sample size for proportions, you will see it will be difficult to detect small changes in the % of "very good" tastiness. You can probably detect big changes with n=8 assessors. Read up on "power and sample size" under the Stat menu of Minitab.
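
    A minimal sketch of that power check in R (base stats' power.prop.test; the proportions here are made up purely for illustration):

        # power to detect a shift in the % "very tasty" with n = 8 assessors per condition
        power.prop.test(n = 8, p1 = 0.25, p2 = 0.875)   # big shift: moderate power
        power.prop.test(n = 8, p1 = 0.25, p2 = 0.375)   # small shift: very low power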

    If you have any doubts, consider doing an attribute gage R&R of your assessors on tastiness. You will see interesting results that may make you wonder if you can ever detect small changes in tastiness.

    I'm not being difficult, just realistic about your upcoming challenge, but DON'T let that stop you, especially if it's really important to your marketing and sales or production.

    0
    #194070

    Robert Butler
    Participant

    I'm sorry for the slow response but I had to give some thought to a few things. First, you've changed the rules of engagement from our first discussion. Back then the question was a simple one of determining which of X number of items were significantly different given that we had Y number of testers, each of whom tested each of the X items. Now you are running an experimental design, and you are not just interested in item differences; you are interested in knowing which of 3 variables are having a significant effect, and in what direction.

    The repeated measures analysis for this kind of problem is different from the one I cited in the Vittinghoff book. What you now have is a full-blown repeated measures regression problem, and that is a different animal. The simplest way to get around this is to follow CSeider's recommendation and turn the question into the percentage of respondents who rate the concoction "very tasty".

    If you have 8 tasters and each is given only one of 4 choices, then the only percent values you can have for "very tasty" are 0, .125, .25, .375, .5, .625, .75, .875, and 1.

    If you are running a Res III design for 3 variables then the experiments would be (-1,-1,-1), (1,1,-1), (1,-1,1), and (-1,1,1) (or the other half if you prefer). If you only replicate a single point, say (-1,-1,1), you would have to get perfect replication in the extreme percentages to have any chance of statistical significance, and when you did this all factors would be significant, which won't tell you much of anything (try, for example, the percentages 0, 1, 1, .125, 0 for the above 5 points). The slightest deviation will result in non-significance (try, for example, the percentages 0, 1, 1, .125, .125). The same situation will apply if you run these responses as repeated measures.
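
    For concreteness, a base-R sketch that generates the half fraction listed above (the generator X3 = -X1*X2 gives this half; +X1*X2 gives the other):

        # resolution III half-fraction for 3 two-level factors
        design <- expand.grid(X1 = c(-1, 1), X2 = c(-1, 1))
        design$X3 <- -design$X1 * design$X2   # flip the sign for the other half
        design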

    What you will have to do is take the 4 design points of your Res III design and replicate the same 4 compositions. Note I said REPLICATE, which means you will have to make 4 more material batches from scratch and then have your same 8 raters rate those combinations.

    If you do this you can run the analysis on percentages as noted by CSeider and you will have a reasonable chance of seeing some differences if they exist. For example, if the above 4 design points are numbered 1-4 and the replicates of the same 4 points are numbered 5-8, then you could see changes like the following: .25, .875, .875, .25 for experiments 1-4 and then .5, .625, .625, .5 for experiments 5-8, and still detect a significant difference (P = .039). (Note: for this example I'm setting up the percentages to favor significance for factor #1.)
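
    A quick way to check those numbers in R with a plain regression on the percentages (a sketch; the design points and percentages are exactly the ones above):

        X1 <- rep(c(-1,  1,  1, -1), 2)
        X2 <- rep(c(-1,  1, -1,  1), 2)
        X3 <- rep(c(-1, -1,  1,  1), 2)
        p  <- c(.25, .875, .875, .25,   # experiments 1-4
                .5,  .625, .625, .5)    # replicates 5-8
        summary(lm(p ~ X1 + X2 + X3))   # X1: t = 3.0 on 4 df, p = .039; X2 and X3 drop out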

    The results will be the same whether you run percentages or a full-blown repeated measures mixed model.

    0
    #194071

    Sam
    Participant

    Thank you CSeider and Robert Butler.
    Robert: I bought Vittinghoff's book and was reading and rereading the fecal fat example in chapter 8, trying to relate it to my design above. I could not relate that repeated measures example to my design, and all this time I thought it was because I did not understand it well (^_^)…
    I will proceed with the percentage approach suggested by CSeider. A quick question: if I don't get any "very tasty" ratings from any assessors, can I do the same analysis with the "tasty (5)" rating to find the important factors contributing to the taste?

    I have asked another regression question as a separate topic. The question is actually from the Hunter textbook (already solved in the text). I would appreciate it if you could answer that one as well.

    Thanks a million for all the help..

    0
    #194072

    Robert Butler
    Participant

    Yes, you can run the analysis for any one of the category choices as a percentage against all of the other choices, and with 8 samplers the possible percentage values will remain the same.

    As long as you have the Vittinghoff book I’d recommend taking the time to read and understand all of chapter 8. Repeated measures are extremely common and if you understand what they are and how to identify them you will save yourself a lot of grief and you will also avoid making a lot of mistakes as far as analysis of this kind of data is concerned.

    0
    #194077

    Sam
    Participant

    Thank you Robert. We are making the first concoction (run) now. We might be able to do two runs today. I will post the combinations, e.g. (-1,-1,-1), and the output (Y) here.

    0
    #194081

    Chris Seider
    Participant

    @Peach Can we get samples if you find a very tasty product? Tongue in cheek response…oh wait, a pun on a pun.

    Good seeing your posts, RB.

    @rbutler

    0
    #194085

    Sam
    Participant

    Chris: If you are living in Ohio, I may be able to get you some samples (^_^).
    Robert and Chris: We ran 2 combinations yesterday, (-1,-1,1) and (-1,1,-1). I will post the results here in two hours.

    0
    #194086

    Chris Seider
    Participant

    I used to live in Uniontown, OH! :) I fly back to that continent tomorrow…err your yesterday (so confusing with this darn int’l date line). :)

    0
    #194087

    Sam
    Participant

    Here are the three runs we have done so far with 9 assessors/tasters. I will post the remaining runs with my analysis. Thank you again for all the help.
    X1 X2 X3 asr1 asr2 asr3 asr4 asr5 asr6 asr7 asr8 asr9
    -1 -1 1 5 3 3 3 5 5 5 3 3
    -1 1 -1 3 3 1 1 3 3 3 1 1
    1 1 1 3 5 5 5 3 3 5 5 5

    0
    #194088

    Sam
    Participant

    One important point that I forgot to mention before is the levels I used: the -1 and +1 levels are not equally spaced about the midpoint. For example, for X1 I have 5 grams as the midpoint, with a low level of 3 grams and a high level of 9 grams. I tried but could not achieve equal spacing because we know those settings would not give useful results. What is the best way to standardize the independent variables?
    @rbutler

    0
    #194089

    Robert Butler
    Participant

    Are you running midpoints? Your posts gave the impression that you were running a two-level Res III design. Either way it doesn't matter: your low level still codes as -1 and your high level still codes as 1. If you did run a midpoint that isn't equidistant from the low and the high, then the code change would be with respect to the midpoint, in that it would no longer be reported as 0 but rather as some scaled value dependent on the low and high points of the design.

    For example, let’s say the low and the high for the design are 0 and 10 but that the actual midpoint of the overall space is 8. If you run an experiment at 8 the code for that would be .6 instead of 0.

    0
    #194091

    Sam
    Participant

    I am not running midpoints. Here is the complete set of experiments I am running: a Res III design with one replicate.
    X1 X2 X3
    -1 -1 1
    -1 1 -1
    1 1 1
    1 1 1
    1 -1 -1
    -1 -1 1
    -1 1 -1
    1 -1 -1
    I was under the impression that, whether or not you run a midpoint, it is always better to standardize the independent variables to reduce error (I cannot remember where I read this). I think these things matter when you run regression with actual values and the ranges of the variables (X1, X2, X3) differ a lot (e.g., X1 in the range 1-10 and X3 in the range 1000-1200 or something). Here the actual values of X1, X2, and X3 are all between 1 and 19. @rbutler

    0
    #194094

    Robert Butler
    Participant

    That's right, center points have nothing to do with standardization or, as it used to be called, "normalization".

    If you have X’s that have large differences in size (X1 from 1000-2000, X2 from 1-5, etc.) then even with double precision it is easy to get roundoff error when running the regression and you can have variables exhibiting significance due to nothing more than scale differences. If you “normalize” everything to the range -1 to 1 then this problem goes away. A number of the statistics packages do this for you automatically but others don’t so I always run the “normalization” myself. Indeed, I do this just as a matter of course whether I think I need to or not.

    The way you scale everything to a -1 to 1 range is you take the min and max values for a particular X and you compute M1 = (Xmax + Xmin)/2 and M2 = (Xmax – Xmin)/2 and then the “normalized” X will be Xnorm = (X – M1)/M2. The only time you don’t do this is with a mixture design – for those designs the “normalization” range is 0 to 1 and not -1 to 1.
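
    As a sketch, that coding is a one-liner in R (the function name is mine, not a standard one):

        # code X onto [-1, 1]: Xnorm = (X - M1)/M2 with M1 = (Xmax + Xmin)/2, M2 = (Xmax - Xmin)/2
        code_x <- function(x, lo, hi) (x - (hi + lo)/2) / ((hi - lo)/2)
        code_x(8, lo = 0, hi = 10)   # 0.6, matching the midpoint example a few posts up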

    0
    #194099

    Sam
    Participant

    Thank you Robert. @rbutler
    We completed one more run today.
    There is something very important we found out while doing the experiment: the high level of X3 is way too high. Too high meaning we will not get a very tasty product with X3 that high. It is very clear to everyone now that the level we chose is not the right one. We did not know this before the experiment.
    Right now we have stopped the experiment. We have already done 4 runs. What I am planning is to redo the experiment with a modified high level for X3. Any better ideas/comments are welcome.

    0
    #194100

    Robert Butler
    Participant

    How do you know that? While there is confounding in the first 3 runs, note that the best sequences of ratings both occurred while X3 was at the high level.

    Before deciding that this is an issue you should take a look at the 4th run in the presence of the other 3. With 4 runs you certainly have independence and you should quantify the effects of changing X1-X3 and see what you see.

    If you are going to drop the level of X3 without looking at the results of the first 4 then your decision to redo the first 4 with the lower level of X3 is the easiest and least complicated. If you choose to do this then I’d recommend you re-run the first experiment (-1,-1,1) first just to see what happens to your ratings. This is a single point comparison and it is fraught with all of the issues one normally has with one-to-one comparisons but it is a quick check to see if your thinking is correct.

    0
    #194101

    Sam
    Participant

    @rbutler
    Thank you Robert for the help.
    I just came out of myriad meetings/finger-pointing sessions. We have found some big issues with runs #1 and #2. There is a special device we use to measure the ingredients (X1, X2, X3) and there is an order in which to mix them (e.g., step 1: mix xyz with abc, then add X1, etc.). The measurement of ingredients and the order of mixing for those two runs were wrong (the procedure was nonstandard for this experiment). The preparation of each run takes time, so both shifts were working on it and I could not stay with the second shift to help them.
    To answer your question: first of all, I did not see the interaction you saw. The reason we all thought the X3 level is way too high is that the taste of X3 was very prominent, and normally people don't like it when it's too high (think of X3 as salt or sugar in normal food; people don't like it if there's too much). There was no evidence from the experiment that X3 is high, just that everybody who tasted thought X3 was high. With this new finding, all the runs we did are in question. I will update you soon. Looks like we will need to re-run the messed-up runs. Grrrrrrrrrr.

    0
    #194108

    Sam
    Participant

    @rbutler
    Robert: This is one DOE where everything that could go wrong did go wrong…
    Please ignore the previous results I posted. We completed 4 runs over the past three days. The remaining 4 (the replicate) will be done toward the end of this week. Here are the results of the 4 runs:
    X1 X2 X3 Ass1 Ass2 Ass3 Ass4 Ass5 Ass6 Ass7 Ass8
    -1 -1 1 5 5 3 5 5 5 3 5
    -1 1 -1 1 1 1 1 1 3 1 1
    1 1 1 3 3 3 3 3 3 5 3
    1 -1 -1 5 5 5 3 5 5 5 5

    0
    #194109

    Sam
    Participant

    By the way, these are new runs we did after the mess-up we found. The levels of all experiments are the same; we did not change anything other than running the experiment again.

    0
    #194114

    Robert Butler
    Participant

    If we run the analysis as a straight-up repeated measures regression, where the repeats are within each assessor and we treat the within-assessor structure as autocorrelated, then the final model is

    Rating = 3.4 +.6*X1 -1.2*X2 +.4*X3

    and the measures of significance for the three x’s are

    X1: P = .0005
    X2: P < .0001
    X3: P = .0024

    If we use the conservative estimate of a significant P value when using ordinal measures, then instead of P < .05 we would use P < .01. All three of the X's meet this criterion. You get the same model whether you run backward elimination or forward selection with replacement. It would appear that a unit increase in X2 more than offsets a unit increase in both X1 and X3, so for a first look using this screening design, the results suggest the biggest killer in terms of taste is X2.

    0
    #194115

    Robert Butler
    Participant

    By the way, what I've done for you is the minimum, that is, the simple act of regression, and I did this for you since you don't have repeated measures capability. Before you do anything else you need to finish the regression analysis: specifically, you need to examine the residuals and run the usual diagnostics. This will take some manual effort on your part, but you have the equation, the actual values, and the assessor IDs, therefore you should be able to run the diagnostics to determine whether the model is adequate or there are issues.

    0
    #194116

    Sam
    Participant

    @rbutler
    Thank you, Robert, a million for all the help. I was waiting for the results of the replicate (the other 4 runs) to do the analysis. I did look at the coefficients of the three variables and found that X2 is the big one. What we did not like is the sign of X2: it means we had too much X2 and need to go down to get a tasty product. I used rating 5 for the percentage conversion (.75, 0, .125, .875). Here is the equation I got: Y = 0.438 + 0.0625 X1 - 0.375 X2 - 0.000000 X3. This is obviously different from your equation because you used the raw data, not percentages.
    Now here is my understanding of this type of design; please correct me if I am wrong.
    I cannot do an ANOVA because the 3 degrees of freedom are used to calculate the three coefficient estimates, leaving no df for error. So no F or P statistics, no graphs, no residuals. Is this correct?
    Now regarding your analysis: you used the raw data, with no conversion to percentages, and treated the different assessors as repeat measures, correct? In that case, how did you account for the variation between the assessors, which is higher than in an actual repeat experiment? An actual repeat, in my thinking, would be the same person repeating the same combination 8 times. Please correct me if I am wrong.
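
    A compact way to reproduce that percentage fit in R (a sketch using the four runs posted above):

        X1 <- c(-1, -1,  1,  1)
        X2 <- c(-1,  1,  1, -1)
        X3 <- c( 1, -1,  1, -1)
        p  <- c(.75, 0, .125, .875)   # fraction of the 8 assessors rating a 5
        coef(lm(p ~ X1 + X2 + X3))    # 0.4375, 0.0625, -0.375, 0; saturated fit, no df for error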

    0
    #194119

    Robert Butler
    Participant

    That's correct: a conversion to percentage values leaves you with one measure per experimental condition, so there are 0 degrees of freedom left for error assessment. If you run the analysis as repeated measures (every measurement within an assessor is a repeat) then you have to define the covariance structure of the measurements within the assessor. There are any number of structures from which to choose; I chose autocorrelated since the measurements within each assessor were made over time.

    If you have a computer package that can run repeated measures, then buried in the code are instructions for evaluating within and between variance to construct a model. Unfortunately, I can't think of a simple way to explain how this works. Since you have the Vittinghoff book I would recommend reading the rest of chapter 8 and then looking up one of the repeated measures model-building routines available in R and using it to run the analysis the way I did. This, of course, means learning R, but since it is R you can download the program and try it without incurring any costs.
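
    One such routine (my choice for illustration, not necessarily the one used above) is gls() from the nlme package with an AR(1) within-assessor correlation structure; the data frame dat and its columns are assumed names:

        library(nlme)
        # dat: one row per tasting, with columns rating, X1, X2, X3,
        # assessor (factor) and run (tasting order within assessor)
        fit <- gls(rating ~ X1 + X2 + X3,
                   data = dat,
                   correlation = corAR1(form = ~ run | assessor))
        summary(fit)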

    0
    #194120

    Sam
    Participant

    Thank you Robert. I will read Vittinghoff's chapter 8 to learn how to handle repeated measures. I do not know whether Minitab can handle repeated measures. Learning R will take time because I have never used it, and R lacks a GUI.
    I guess the results of the replicate we will run toward the end of the week should help the percentage analysis.
    Did you consider assessor 1, assessor 2, assessor 3, 4, etc. of each run as repeats?

    0
    #194124

    Robert Butler
    Participant

    The repeated measures were within each assessor; therefore there were 4 repeats for each assessor, corresponding to the 4 taste samples. The result of the replicate will definitely help the percentage analysis: you should be able to evaluate the combination of the 8 runs in the standard fashion.

    0
    #194152

    Sam
    Participant

    @rbutler
    Hi Robert: Here are my comments and questions after I glanced through chapter 8 and read your post. I have put them in separate points for clarity. I would appreciate it if you could tell me whether I got any of the points below wrong.

    1. The experiment (including the replicate) we did (are doing) can be analyzed using the regular regression method by converting it into percentages?

    2. Since the taste measures are correlated within the assessors, we need to include this fact in the model for accurate calculation of errors?

    3. Repeated measures analysis can take the within-assessor taste correlation into account and hence give a better model?

    4. In your repeated measures analysis you included the correlated term (within each assessor) in the model, and the correlation structure you specified is autocorrelation?

    5. Do you agree that for a first experiment the percentage analysis is good enough?

    I found out Minitab can do repeated measures analysis, but I have not found a way to specify the correlation structure (autocorrelation, etc.). Also, I forgot to mention this at the beginning: this is the first experiment we did on this subject. The idea was to learn and then do follow-up experiments to find the tasty product (hence point 5).
    We completed two more runs and the remaining two will be complete by Friday. I will post the results here.
    Thank you so much. I am learning a lot from you, Sir.

    0
    #194154

    Robert Butler
    Participant

    To your points:

    1. The experiment (including the replicate) we did (are doing) can be analyzed using the regular regression method by converting it into percentages? – Yes, standard regression methods will work.

    2. Since the taste measures are correlated within the assessors, we need to include this fact in the model for accurate calculation of errors? – You should keep this in mind, but without the ability to run repeated measures there's not much you can do about it. If you don't have some way to tell the computer that the measures are repeated, it will use the wrong estimate of error for purposes of comparison. Back in the old days (roughly pre-1995), when repeated measures methods were not available in regression packages, the usual procedure was to tighten the critical p-value from < .05 to < .01. We would also watch the relationship between R2, Sy.x, and Mallows' Cp and quit accepting terms as significant when one or more of these statistics plateaued during backward elimination or stepwise regression (see the sketch after this list).

    In this instance the latter won't be of much use but the former might be.

    3. Repeated measures analysis can take the within-assessor taste correlation into account and hence give a better model? – Yes it does; however, it isn't so much that it is a better model, rather it is a case of proper construction of the model given the nature of the data used to build it.

    4. In your repeated measures analysis you included the correlated term (within each assessor) in the model, and the correlation structure you specified is autocorrelation? – Yes. This was a choice based on the description of the way in which the experiments were run and tested.

    5. Do you agree that for a first experiment the percentage analysis is good enough? – Yes. Of course, the icing on the cake (or cookie or whatever) will be a confirmation run. I'd recommend a blind test of the "optimum" formulation from the design against either the worst from the design or the current standard.
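
    Here is the sketch mentioned in point 2: one way to watch R2 and Mallows' Cp across candidate subsets in R, using the leaps package (the data frame df and its columns are assumptions for illustration):

        library(leaps)
        # all-subsets search over the three factors; look for where R2 and Cp plateau
        rs <- summary(regsubsets(p ~ X1 + X2 + X3, data = df))
        cbind(rs$which, R2 = rs$rsq, Cp = rs$cp)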

    0
    #194182

    Sam
    Participant

    @rbutler
    Hi Robert: Here are the results of the remaining runs:

    X1 X2 X3 Ass1 Ass2 Ass3 Ass4 Ass5 Ass6 Ass7 Ass8
     1  1  1    5    5    3    5    5    5    5    5
    -1 -1  1    3    5    3    5    5    5    5    5
    -1  1 -1    1    1    3    1    1    1    3    1
     1 -1 -1    5    5    3    5    5    5    3    5

    Y = 3.6250 - 0.6875 X1 + 0.9375 X2 - 0.6250 X3 (there is a weird pattern in the residuals vs. fits graph)

    This is the model I came up with treating the measurements as repeats and the assessors as random. Can you please post your model? I would like to see it for comparison, because I am using Minitab for the analysis without knowing very well how to use it for repeated measures.
    Regarding the confirmation run, I did not understand the optimum formulation you mentioned. Are you saying to run a combination that the model predicts will give a high (desirable) Y value (in this case, increase X2 and decrease X1 and X3) and then compare the prediction with the actual result? Or are you saying to pick a combination we already ran (out of the 8 runs) and do it again for confirmation?
    As always thank you for all the help
    Sam

    Here is the complete output from Minitab…
    Factor Type Levels Values
    X1 fixed 2 -1, 1
    X2 fixed 2 -1, 1
    X3 fixed 2 -1, 1
    Assessors random 8 1, 2, 3, 4, 5, 6, 7, 8

    Analysis of Variance for Outpt, using Adjusted SS for Tests

    Source DF Seq SS Adj SS Adj MS F P
    X1 1 30.250 30.250 30.250 37.72 0.000
    X2 1 56.250 56.250 56.250 70.15 0.000
    X3 1 25.000 25.000 25.000 31.18 0.000
    Assessors 7 5.000 5.000 0.714 0.89 0.520
    Error 53 42.500 42.500 0.802
    Total 63 159.000

    S = 0.895481 R-Sq = 73.27% R-Sq(adj) = 68.23%

    Term Coef SE Coef T P
    Constant 3.6250 0.1119 32.38 0.000
    X1
    -1 -0.6875 0.1119 -6.14 0.000
    X2
    -1 0.9375 0.1119 8.38 0.000
    X3
    -1 -0.6250 0.1119 -5.58 0.000
    Assessors
    1 -0.1250 0.2962 -0.42 0.675
    2 0.1250 0.2962 0.42 0.675
    3 -0.6250 0.2962 -2.11 0.040
    4 -0.1250 0.2962 -0.42 0.675
    5 0.1250 0.2962 0.42 0.675
    6 0.3750 0.2962 1.27 0.211
    7 0.1250 0.2962 0.42 0.675

    0
    #194185

    Robert Butler
    Participant

    There’s something odd about the squashed table you posted with the heading “Term Coef SE Coef T P” because when I copy it out and align everything I get a bunch of stray “-1’s” which I don’t understand.

    The equation I got is

    predicted rating = 3.6272 +.6932*x1 -.9631*x2 +.6488*x3

    As you can see, the signs of the coefficients are the reverse of yours. It confirms that X2 is the bad actor. All of the term p-values are < .0001.

    The residual plot is banded, which is what you would expect given that the Y responses are restricted to a discrete set of values. I've never seen a book on regression diagnostics that discusses this pattern, but Shayle Searle discussed it in an article in The American Statistician back around 1988.

    As for building an optimum: this means taking advantage of what you have found (your regression equation), building a recipe on the basis of what it predicts should be an optimum, and seeing what you can see.
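
    As a concrete (hypothetical) illustration of reading the optimum off the equation: with X2's coefficient negative and the other two positive, the best in-region coded setting is (X1, X2, X3) = (1, -1, 1):

        b <- c(int = 3.6272, X1 = .6932, X2 = -.9631, X3 = .6488)
        x <- c(X1 = 1, X2 = -1, X3 = 1)
        unname(b["int"] + sum(b[names(x)] * x))   # about 5.93 on the 1-7 taste scale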

    0
    #194186

    Robert Butler
    Participant

    Found the reference for the parallel lines (banding) in residual plots.

    Shayle Searle, "Parallel Lines in Residual Plots," The American Statistician, August 1988, Vol. 42, No. 3. According to the article you may also find it referenced in McCullagh and Nelder (1983), Generalized Linear Models.

    0
    #194188

    Sam
    Participant

    @rbutler Thank you Robert. We will go through the article of Shayle Searle.

    I do not know why there is a -1 right next to the name of each variable (X1, X2, …). I used the General Linear Model routine in Minitab for the analysis; standard regression in Minitab does not have an option for random variables, which is why I used GLM. When I use standard regression it gives me this output: Y = 3.36 + 0.687 X1 - 0.938 X2 + 0.625 X3 + 0.0595 Assessors. This is similar to yours…

    I guess we are going backwards: the more we learn about regression, the more we get confused over the basics. I have the following basic regression question after the discussion we had here about centering and scaling…

    Here I used -1 and +1 for the two levels. The way we interpret the above model is: take X2; for each unit we go down in X2 from the midpoint (the midpoint of -1 and +1), we get an increase of 0.938 in Y (an increase because the coefficient is negative). Is this correct?

    FYI, there are a bunch of us here who follow each and every post of yours. We are learning a lot from you; thank you, and we hope you continue posting on this site. We got training from a BIG consulting group, but the trainer does not have very good knowledge of regression and DOE. Whenever we ask questions he always refers us to the PDF presentation we received, which does not have any details.

    0
    #194189

    Robert Butler
    Participant

    To your question: "The way we interpret the above model is: take X2; for each unit we go down in X2 from the midpoint (the midpoint of -1 and +1), we get an increase of 0.938 in Y (an increase because the coefficient is negative). Is this correct?"

    Yes, this is correct, and if you put X2 at the value corresponding to the scaled value of 0, the effect of X2 will be neutral.

    There is a caution here. The residual plot indicates the model is adequate over the region you investigated. If you choose a level of X2 (or any of the other X's) that is either lower than the lowest value or greater than the largest value you used in your design, you will be making predictions for values that are outside of your experimental region. This is called extrapolation, and it is risky: if you extrapolate, you are assuming that the trends observed inside your design space do not change and continue in the same direction with the same magnitude outside of that space. In other words, you are making predictions in a region where you have made zero measurements.

    You can certainly go ahead and make a prediction that is an extrapolation, and you can go ahead and build an experiment to test the extrapolation. Just be prepared for a situation where your extrapolated prediction and the results of the actual experiment don't agree.

    P.S. I’m glad you and your friends are finding these posts useful. Thanks for the compliment.

    0
    #194191

    Robert Butler
    Participant

    An additional thought. If you and your friends are having trouble understanding various aspects of regression then, at the risk of sounding like a book pusher, I’d recommend going to the library and getting a copy of Regression Analysis by Example by Chatterjee and Price through inter-library loan. It is a very practical and readable book and, as the title indicates, it is by example.

    0
    #194193

    Sam
    Participant

    Thank you Robert… We actually bought the Chatterjee book a couple of weeks back… We have not had a chance to go through it yet…

    0
    #194256

    Sam
    Participant

    @rbutler
    We did the confirmation runs outside the experimental region. We made a product with X1, X2, and X3 at levels the model predicted would give a Y rating above 7 and gave it to all 8 tasters. All of them rated it 5.
    What we found out is that our definition of rating 7 was a pie-in-the-sky type of thing. The way we defined rating 7 was "the best xxx you have ever tasted in your life." This made the tasters reserve rating 7 in anticipation of the best product in the world… Anyway, we got a very good product from the first experiment itself. We might do an additional experiment, but right now I am planning another DOE on a different subject, which I will post in another thread here. As always, we are hoping for your valuable comments on that as well.
    Thanks a million, Robert.

    0
    #194258

    Robert Butler
    Participant

    You’re welcome – I’m glad I was able to help.

    0