Forum Replies Created
October 29, 2009 at 2:05 pm #186461
Well, your answer is hidden in two questions that I want to ask you…
1. If your family is planning a vacation to Florida and you have the responsibility of booking the tickets – how do you book them: a collective booking for all, OR separate bookings for each individual on different airlines and via different routes? Also, write down the reason for your decision.
2. Have you checked what ANOVA and the t-test actually compare? Write down the differences and then please reframe your query.
Hope you don’t take my comments negatively…but a little groundwork will definitely help you here and may bring out the real question that you have.
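To make the second question concrete: a t-test compares the means of exactly two groups, while one-way ANOVA tests whether the means of two or more groups differ. A quick sketch with scipy (all sample data here is invented for illustration):

```python
from scipy import stats

# Three hypothetical samples of some cycle-time metric (invented data)
a = [12.1, 11.8, 12.4, 12.0, 11.9]
b = [12.3, 12.5, 12.2, 12.6, 12.4]
c = [11.7, 11.9, 11.6, 12.0, 11.8]

# t-test: compares the means of exactly TWO groups
t_stat, p_two = stats.ttest_ind(a, b)

# one-way ANOVA: tests whether the means of two OR MORE groups differ
f_stat, p_many = stats.f_oneway(a, b, c)

print(f"t-test (a vs b): p = {p_two:.4f}")
print(f"ANOVA (a, b, c): p = {p_many:.4f}")
```

The point of the exercise above is exactly this distinction: which comparison your question actually needs.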
Thanks – MM
September 29, 2009 at 7:28 am #185790
Interesting thoughts…and thanks for voicing them.
I think you are right; we should first understand what we mean by productivity. We should also identify what we want to report: is it the effort spent on an activity, or the cases/reports resolved?
I think effort spent on an activity is something that can yield results…though both of the metrics mentioned above seem to show the same thing, albeit in different ways…
Hmmm….any more thoughts from anyone?
Also, if a cop is made to work for more than 9 hours due to staff shortage, how can we showcase this? There are studies available which state the ideal public-to-cop ratio…Shall we just go ahead and compare the ideal ratio with the actual, or should there be some other metric?
MM
September 25, 2009 at 10:29 am #185708
That’s a good point. However, this metric and % of cases resolved are already in place. But there is a flip side – the cops discourage the booking of complaints…
Among other reasons, this is because the force is understaffed. These guys end up working more than they are supposed to, multi-tasking, etc…
Any other suggestions?
September 24, 2009 at 9:24 am #185658
Since there are different things that you repair, or different types of repairs, there is segmentation, and the data needs to be looked at separately for each.
Also, comparison of means is done after checking the normality of the data.
As we say: questions lead, tools follow. What exactly do you want to do, and why?
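The order of operations above – check normality first, then pick the comparison – can be sketched as follows (the two samples and the 0.05 threshold are illustrative assumptions, not a recommendation):

```python
from scipy import stats

# Two hypothetical repair-time segments (invented data)
group_a = [4.2, 5.1, 4.8, 5.0, 4.6, 4.9, 5.3, 4.7]
group_b = [5.6, 6.0, 5.8, 6.2, 5.9, 5.7, 6.1, 5.5]

# Step 1: check normality of each segment separately (Shapiro-Wilk)
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

# Step 2: compare means only if normality holds; otherwise fall back
# to a nonparametric comparison
if normal:
    result = stats.ttest_ind(group_a, group_b)
else:
    result = stats.mannwhitneyu(group_a, group_b)

print(f"p-value: {result.pvalue:.4f}")
```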
Thanks – MM
September 18, 2009 at 1:42 pm #185534
Thanks Darth. It was a great example.
Have a great weekend.
September 18, 2009 at 10:59 am #185529
I think Darth mentioned the correct approach, as did the person who asked you to explore the definition of floppiness.
No advice from me on this…
But…I do have a question on transformation of data … why do we transform data OR in which situations should we transform data?
If I should post this query in another thread, please reply to this post of mine and I will act on it immediately.
Thanks in advance.
March 2, 2009 at 1:29 pm #181849
You’re welcome. Also, it will be helpful to know what you want to do with the sample. For a normal audit, I think the sample size calculators should do.
In case you want to run a specific test, use the following:
In Minitab: Stat > Power and Sample Size
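If Minitab isn’t at hand, the same kind of calculation can be sketched in Python with statsmodels (the effect size, alpha, and power below are placeholder targets, not recommendations):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size needed per group for a two-sample t-test, assuming
# a medium effect (Cohen's d = 0.5), 5% alpha, and 80% power
analysis = TTestIndPower()
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n:.1f}")
```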
This may be useful to you. Read the help section if you get stuck.
March 2, 2009 at 12:37 pm #181847
Hi – this depends on what type of data you are capturing: Continuous or discrete. There are different sample size calculators for this.
Also, in case you have JMP, it will help you take into account power etc. Though others in the forum will be able to shed more light on this. :)
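The two calculator types mentioned boil down to two textbook margin-of-error formulas; a minimal sketch at 95% confidence (the proportion, sigma, and margins below are assumed values, not from any real data):

```python
import math

z = 1.96           # z-score for 95% confidence

# Discrete (proportion) data: n = z^2 * p * (1 - p) / e^2
p, e_prop = 0.5, 0.05          # assumed proportion and margin of error
n_discrete = (z**2 * p * (1 - p)) / e_prop**2

# Continuous data: n = (z * sigma / e)^2
sigma, e_mean = 10.0, 2.0      # assumed std dev and margin of error
n_continuous = (z * sigma / e_mean) ** 2

print(math.ceil(n_discrete), math.ceil(n_continuous))  # 385 97
```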
Thanks.
March 2, 2009 at 10:13 am #181846
I think there is a possibility of using DMAIC or DFSS as required. And I do agree with the other comments that six sigma tools can be utilized individually.
For example, police response time to a crime scene: control charts can work here, as can a DMAIC or DFSS project.
In the labs as well, we can use control charts to check the turnaround time for reports, etc.
Also, as I understand it, lab reports generally say that a particular reading is OK if it is within x and y – but is that with 100% confidence? And what about the accuracy of the instrument giving the reading – what’s the CI for it?
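For a metric like response time, an individuals chart is the usual starting point; here is a minimal sketch of the 3-sigma limits using the moving-range estimate of sigma (all response times invented):

```python
# Hypothetical police response times in minutes (invented data)
times = [8.2, 7.5, 9.1, 8.8, 7.9, 8.4, 9.0, 8.1, 7.7, 8.6]

# Individuals chart: estimate sigma from the average moving range
mr = [abs(b - a) for a, b in zip(times, times[1:])]
mr_bar = sum(mr) / len(mr)
center = sum(times) / len(times)
sigma_hat = mr_bar / 1.128          # d2 constant for moving range of 2

ucl = center + 3 * sigma_hat
lcl = center - 3 * sigma_hat
out_of_control = [t for t in times if t > ucl or t < lcl]
print(f"CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, outliers={out_of_control}")
```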
Unfortunately, I am not aware of six sigma use in the police force. But given an opportunity, money, and someone to blame in case of failure – I would love to experiment with this idea. :o)
Thanks.
March 2, 2009 at 9:31 am #181844
First things first: do a basic scoping of your project (people of which tenure, which designation, which particular department, etc.). Determine the possible x’s that can cause attrition (for example: travel time to office, salary, last three ratings, etc.).
Build your data sheet around those possible x’s. It’s easy to proceed from there.
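As a hedged illustration of such a data sheet (all column names and values below are invented), one row per person, with the y alongside the candidate x’s:

```python
import pandas as pd

# One row per employee; columns are the candidate x's (all invented)
data = pd.DataFrame({
    "employee_id":     [101, 102, 103, 104],
    "travel_time_min": [45, 20, 75, 30],
    "salary_band":     ["B", "C", "A", "B"],
    "last_rating":     [3, 4, 2, 5],
    "attrited":        [1, 0, 1, 0],   # the y: left (1) or stayed (0)
})

# First look: does travel time differ between leavers and stayers?
print(data.groupby("attrited")["travel_time_min"].mean())
```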
Thanks.
March 2, 2009 at 9:23 am #181843
Before you proceed further, please check whether all the events are of the same nature. If not, you may want to analyze them separately.
Thanks
December 22, 2008 at 11:29 am #178973
Another way is understanding the Voice of the Customer. I would think that in the advertising world, a thorough understanding of the customer’s wants and needs, followed by a Kano analysis, would help ensure that you give the customer not only what they want, but also some additional service or product that will delight them. See an article published on this web site last January titled “Is VOC Killing Innovation?”
May 5, 2008 at 10:02 am #171739
Thanks Mr Kid for the reply.
Currently there are 13 people in the team. We have used overtime and rewarded people for it. I cannot get more people on a permanent basis since the budget is not available – the client is only paying this much.
I was thinking of building some kind of model where I show that for x number of days I require 13 + n people to process the work and bring the queue down to 15 days, after which the existing 13 can manage it.
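A minimal sketch of such a backlog model (every figure below is invented; the arrival rate, throughput, and targets would come from your own queue data):

```python
import math

# Invented figures: daily arrivals, per-person throughput, current backlog
arrivals_per_day = 130    # new cases arriving daily
cases_per_person = 10     # cases one person clears per day
team_size = 13            # current headcount (clears exactly the arrivals)
backlog = 600             # cases currently queued
target_backlog = 150      # queue level you want to get down to

def days_to_target(extra_people):
    """Days of (13 + extra) staffing needed to shrink the backlog."""
    surplus = (team_size + extra_people) * cases_per_person - arrivals_per_day
    if surplus <= 0:
        return None  # the queue never shrinks at this staffing level
    return math.ceil((backlog - target_backlog) / surplus)

for extra in (1, 2, 3):
    print(f"+{extra} people: {days_to_target(extra)} days")
```

The same table, printed for a range of headcounts, is essentially the matrix that showcases the overload.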
I am trying to free up some capacity from other queues and am working on it. But meanwhile I need this matrix/model to showcase the work overload that the team has. Maybe it will get me the required headcount.
March 19, 2008 at 7:31 am #169853
The Madras Management Association is conducting a 4-day GB certification workshop starting this weekend (22–23 Mar & 29–30 Mar).
Please contact them @ 044 – 24962766
February 4, 2008 at 1:00 pm #168140
Hi Neal – it’s the same process that they are doing, but since the work is lengthy, it has been divided into three parts. This has been done to ensure accuracy. I am trying to reduce the transportation time by improving the work allocation method.
I have thought of certain ideas, like auto-forwarding to the 1st team, who upon completion send the work to a mailbox from where it is auto-forwarded to the next team – but the catch is the number of people present in each batch, and the time and accuracy of turning the rules on and off daily.
I was also thinking of copying the mails (they contain attachments of approx 2 MB each) into some type of system which would have an IN sheet at the start and an OUT sheet at the end. That way I’d be able to track the movement of work from one team to another and from one individual to another, check the time taken at each step, get all the MIS, and also (very important) avoid any client information staying on any individual’s system, even in the work tool designed…But I could not come up with any compatible system – can’t use Oracle or VB – the bane of working in an MNC…
So, any suggestions…
February 4, 2008 at 11:29 am #168133
If the subgroup size is constant, use an np chart; if it varies, use a p chart.
February 4, 2008 at 11:26 am #168132
P or np chart…
January 22, 2008 at 3:12 pm #167553
Hmmm…I am currently in a training and the instructor is saying that we need to check that…guess I will have to read more about this…thanks.
October 31, 2007 at 3:20 pm #164185
I just got the ppt for the same…thanks, folks, for the prompt help…appreciate it.
October 31, 2007 at 3:14 pm #164183
Thanks ATI…however, this still does not completely solve my query…I am looking for at least 2 examples each for these situations…can you help here?
October 31, 2007 at 2:51 pm #164180
Thanks, this is what I was looking for…
Can you add more tests to these, please?
August 6, 2007 at 4:07 pm #159609
Thanks Annon.
August 6, 2007 at 2:55 pm #159602
CE = Customer Experience
July 10, 2007 at 3:14 pm #158416
What all do we include in “Discriminant Analysis”?
July 6, 2007 at 5:22 pm #158334
Well, everything has been answered by people more experienced than me. But here are my two bits, from my recent experience at my new job at a call center:
1. Agents have to discuss a policy with customers during the call if the customer is out of the warranty period. Each of them is doing this in his/her own manner – leading to inconsistency in delivery, information not provided correctly, etc…you can guess the rest.
2. If the customer is out of warranty and accepts the charge for the out-of-warranty service, then he/she has to provide debit/credit card details before we can proceed.
Solution that I have:
1. Automate the script for the out-of-warranty policy and let the customer know that he/she is going onto the automated script for 2 minutes
2. Automate the acceptance of debit/credit card details in the IVR if the customer accepts point 1
Gain: a standard script, defined time, and debit/credit card detail security
Suggestion for you: listen to the calls, identify what you can automate – and implement ASAP
July 5, 2007 at 3:59 pm #158272
I did not understand what Ashok is asking here – my guess is that he answered someone’s question in a new post.
Also, as far as the SD goes – Omashi and Jeff – 3 SD will mean + and – 3, i.e., 3 on either side. Is this not so?
July 2, 2007 at 6:30 pm #158125
Well, you have to follow the plan advocated by all…and the plan is…
First, get the leadership trained and have them lead projects – make them understand the power of six sigma, and very importantly, make them understand that the “six sigma department” is not after their jobs or out to showcase how incompetent they are (hehehehe), but is a tool to improve further.
I hope you are getting the gist: start from the top down…and meanwhile train the bottom ranks in something like Yellow Belt and the 7 wastes. Good luck to you. In case you need any help, keep posting in this forum; there are several experienced and knowledgeable people here who have done this.
July 2, 2007 at 6:13 pm #158124
Hi Omashi – I wanted to understand this, and also to try to answer Mak….
There are two formulas to find (in my easier terminology) outliers relative to the USL and the LSL.
Per me, Omashi has given the formula for the LSL side because you have specified that there is only an LSL identified in your process and no USL.
As far as I remember, the formula for the USL side is (USL – x̄) / 3SD.
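The two expressions being discussed are the one-sided capability ratios; a small sketch (spec limits and measurements below are invented):

```python
import statistics

# Invented measurements with an assumed two-sided spec
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
lsl, usl = 9.0, 11.0

mean = statistics.mean(data)
sd = statistics.stdev(data)

cpl = (mean - lsl) / (3 * sd)   # capability against the lower spec limit
cpu = (usl - mean) / (3 * sd)   # capability against the upper spec limit
cpk = min(cpl, cpu)             # overall capability is the worse side
print(f"Cpl={cpl:.2f}, Cpu={cpu:.2f}, Cpk={cpk:.2f}")
```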
Omashi, am I correct?
June 26, 2007 at 12:26 pm #157932
ISO and COPC mention version control – I am not aware of exactly what they state. Anyway, I don’t think one needs a program to maintain version control. There are a few things that need to be kept in mind while doing this task:
1. The document should be available in one place only, and people should not be able to use the “save as” option on it.
2. Make the document read-only
3. Keep modification rights with a limited set of people
4. Design a robust process for ensuring that only the updated document is available for viewing – use FMEA to ensure the maximum number of pitfalls are covered
5. Design a process for updating the document – use FMEA again for this
I can go on like this…you can build further on these points…
June 25, 2007 at 8:06 pm #157911
First of all, we have to understand why we use control charts. A control chart is a simple and great tool for finding two things: outliers and specific trends. Both of these help point us in the direction of reducing variation, and can surface a quick hit or best practices in a process. Also, before turning on all the tests, you should theorize on the action you would take if one of the 8 tests fails. Will it be worth it…
June 25, 2007 at 7:57 pm #157910
Please elaborate on your query…
June 25, 2007 at 7:54 pm #157909
You can do Gage R&R in Minitab too.
June 25, 2007 at 3:27 pm #157889
Here is one example that I use while training.
In an earlier age, we used to walk from one place to another. If we wanted to get somewhere faster, we would run, or hire two people to carry us faster. That is DMAIC.
DMADV is when the wheel was invented.
Hope this helps.
May 17, 2006 at 1:04 pm #137708
May 16, 2006 at 4:02 pm #137683
August 16, 2004 at 2:54 pm #105783
December 19, 2003 at 3:35 pm #93708
I teach this to my middle-school-age son in a way that he can understand:
Smaller P ==> More Significant Difference ==> Believe the Difference in the Results
Bigger P ==> Less Significant ==> Do Not Trust the Difference in the Results, because there is a big chance that random factors produced them.
November 28, 2003 at 5:07 am #93019
One is BMG SG.
The other organization I’m not sure about – maybe Singapore Quality Ins.
May 23, 2002 at 12:53 pm #75680
We used RTY as a metric to compare a yearly turnover rate, along with FMEA and surveys. From these basic tools, we were able to examine the current practices and improve. Sorry for the non-believers; it works, and we obtained significant improvements, especially on third shift.
November 12, 2001 at 2:22 pm #69894
I too, think the costs are outrageous and do not reflect anything more than the greed of the companies that have copied and recopied the program. I saw similar problems when ISO 9000 came out and with the computer certification programs being offered at present.
Many of these companies expect the person’s company to pay these prices, but in fact, it is usually the individual who is forced to take out the loan and pay for years, just to maintain their place in the workforce.
My suggestion to those who cannot afford these prices: wait a year and buy the book! You can always be certification-ready; if your company insists on the title, then when they pay for it, take the exam! This works with the ISO 9000-2000 and MCSE programs. Companies will gladly pay for the exams, even when they refuse to pay for the courses!