
DMADV Case Study: Performance Management System Redesign

To facilitate the development of its employees and better respond to the changing business environment, one department of a large financial-services company decided to revamp its existing performance management system through a Six Sigma project. A pre-project analysis revealed that a complete redesign of the system was required. As incremental improvement in the existing system was not possible, the project team followed the Design for Six Sigma (DFSS) DMADV (Define, Measure, Analyze, Design, Verify) roadmap, incorporating best practices from Six Sigma, project management and information technology (IT) service management. This case study covers a few major aspects of the project, which could readily be applied in similar situations across various industries and business environments.

Define Phase

In the Define phase, the team created a goal statement: To implement a comprehensive, well-aligned and consistent performance management system for Department A.

The team looked at the existing performance management system. It had the following prominent attributes:

  • The system supported all product lines of Department A, covering more than 200 employees.
  • The implementation and usage of the system were limited to individual departments.
  • The company had functional silos, and employee goals were determined within the department.
  • Unit managers (there were multiple units within each line of business) were responsible for setting performance targets for their units.
  • Individual performance was compared against the set target.

Measure Phase

As part of the Measure phase of the project, the team analyzed the existing performance management system. Then by interviewing key stakeholders, team members identified what the company wanted from such a system.

In particular, the improvement team focused on the identification of:

  • Most useful features of the current system
  • Not-so-useful features
  • Missing features (i.e., needed improvements)

When interviewing managers, team members asked the following questions:

  • How can this system help you in coaching, performance appraisal and decision making?
  • What information do you want to receive in order to develop and maintain employee and service performance?

The team further had to consider these factors for the performance management system (see the illustrative sketch after this list):

  • Frequency: How often should metrics be updated?
  • Availability: At what times should the performance management system be available?
  • Security: Who should be able to see what information?
  • Continuity: Is it a business-critical application? What needs to happen during/after a disaster?
  • Capacity: How many users need to be supported? How much data needs to be stored?
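For illustration only, these factors might be captured as a simple requirements record. The field names and values below are hypothetical placeholders (informed by details given later in the article and its comments), not the project's actual specification.

    # Hypothetical sketch: capturing the five factors as a requirements record.
    from dataclasses import dataclass

    @dataclass
    class SystemRequirements:
        refresh_frequency: str        # Frequency: how often metrics are updated
        availability_window: str      # Availability: when the system must be up
        allowed_roles: tuple          # Security: who may see which information
        business_critical: bool       # Continuity: drives disaster-recovery needs
        max_users: int                # Capacity: number of users to support
        data_retention_years: int     # Capacity: how much history to keep

    requirements = SystemRequirements(
        refresh_frequency="daily",
        availability_window="Mon-Sat",
        allowed_roles=("manager", "department_head"),
        business_critical=False,
        max_users=250,                # the department had more than 200 employees
        data_retention_years=2,
    )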

Analyze Phase

Based on the information gathered during the Measure phase, the team identified the vital problems of the existing system and derived key requirements for the new system as an output of the Analyze phase (table below).

Deriving the Key Requirements of a New Performance Management System

  • Problem: Managers were able to influence the targets heavily and were setting lenient targets. As a result, many employees were rated high performers while the business was not benefitting equally.
    Requirement: The head of the department sets product-line targets, which are used to compare roll-up level metric values and determine each unit's performance. Current achieved performance levels at the unit and department level should be used to derive future performance targets.
  • Problem: Performance metrics and ratings across product lines were not standardized, making it very difficult to translate and roll up metrics from the individual employee to the department level.
    Requirement: Standardize the performance management process, metrics and ratings to enable quick understanding of the process, roll-up of metrics and comparison of employees across product lines (see the sketch after this table).
  • Problem: Performance metrics were available only at the end of the month, making it difficult for managers and individual associates to take corrective action proactively.
    Requirement: The system should be refreshed daily to provide up-to-date information.
  • Problem: Productivity was weighted much higher than quality, and the focus on productivity came at the expense of quality.
    Requirement: Give equal importance to productivity and quality. The department head should also have a mechanism to change the relative importance according to business need.
  • Problem: Individuals within each product line were compared against each other, even if performing different tasks. This comparison was not standardized, making the entire system biased toward some types of tasks.
    Requirement: Set up peer groups to enable fair comparison. Together with standardized performance metrics and ratings, this should enable comparison across the board.
  • Problem: Individual employees did not have access to their own performance metrics, hindering self-directed performance improvements.
    Requirement: Each employee should have access to their own performance metrics and a way to compare them against the baseline and against the peer group.
  • Problem: Month-over-month metric trends were not available, and creating such a trend report required a lot of manual effort.
    Requirement: Month-over-month metric values and trends should be generated automatically, with a facility to select the reporting period.
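To make the standardization and roll-up requirement concrete, here is a minimal sketch of how standardized employee scores might be averaged up to the unit and the department level. The employees, units, scores and the simple averaging rule are illustrative assumptions, not the project's actual data or aggregation logic.

    # Minimal sketch: rolling up standardized metrics from employee to unit to
    # department level. All names and values are illustrative.
    from statistics import mean

    # Each record: (employee, unit, standardized productivity, standardized quality)
    records = [
        ("emp_01", "unit_a", 0.92, 0.88),
        ("emp_02", "unit_a", 1.05, 0.97),
        ("emp_03", "unit_b", 0.81, 1.02),
    ]

    def roll_up(records, level):
        # Average the standardized scores at the requested level ("unit" or "department").
        groups = {}
        for employee, unit, productivity, quality in records:
            key = unit if level == "unit" else "department"
            groups.setdefault(key, []).append((productivity, quality))
        return {key: (mean(p for p, _ in scores), mean(q for _, q in scores))
                for key, scores in groups.items()}

    print(roll_up(records, "unit"))        # per-unit averages, comparable across product lines
    print(roll_up(records, "department"))  # department-level roll-up used for target setting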

Design Phase

With its functional silos, the business environment was not conducive to a solution that incorporated a balanced scorecard approach: the silos made it difficult to cascade organizational-level targets to departmental-level targets and further down to individual-level targets. For this reason, the balanced scorecard approach was eliminated from the scope of the project.

Because a key requirement of the new performance management system was to move from using a fixed-target system to a dynamic-target system, two alternatives for measuring baseline performance were thoroughly explored: 1) the best known performer, also referred to as k-performer, and 2) the average performer. Ultimately the department decided to go with average performance as the baseline.

Before the decision was made to move from a fixed-target-based system to one based on a dynamic target, there was much deliberation on that question. Some of the prominent points that came out of those discussions would be of interest to all practitioners:

  • A fixed-target system provides visible targets to employees. Typically, it does not require complex calculations and makes it easy for individual employees to determine their own rating based solely on their own performance. On the other hand, a dynamic-target determination system (whether the best performer or the average performer is used as the baseline) makes it harder to determine the target and leaves employees with some guesswork until the final performance targets are derived and announced.
  • A fixed-target system is susceptible to manipulation: targets can be set either too strict or too lenient. A dynamic-target determination system addresses this weakness. It is also self-corrective in nature and adapts to business and employee performance. For example, in a fixed-target system, if there is not enough work to achieve productivity targets, employees would not have enough opportunity to meet the targets and would be rated “below target.” A dynamic-target determination system accommodates such fluctuations and is thus more robust.
  • Setting the k-performer as the baseline would make the entire population’s performance ranking highly vulnerable to the performance of one individual, similar to the impact an outlier can have on a set of data. Setting the average performance as the baseline reduces this vulnerability, making the system more robust (see the sketch below).
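As a rough illustration of the last point, the sketch below contrasts the two baseline options. The scores, thresholds and rating bands are made up for the example; they are not the project's actual figures.

    # Minimal sketch contrasting the k-performer baseline with the average baseline.
    # Scores, thresholds and rating bands are hypothetical.
    from statistics import mean

    peer_scores = [0.95, 0.98, 1.00, 1.02, 1.60]   # 1.60 is one exceptional performer (an outlier)

    k_performer_baseline = max(peer_scores)        # best-known performer as the baseline
    average_baseline = mean(peer_scores)           # average performer as the baseline

    def rating(score, baseline):
        # Rate an individual relative to the chosen baseline.
        ratio = score / baseline
        if ratio >= 1.0:
            return "above target"
        if ratio >= 0.9:
            return "on target"
        return "below target"

    # With the k-performer baseline, the single outlier pushes every other score
    # "below target"; with the average baseline, the same scores spread across the bands.
    for score in peer_scores:
        print(score, rating(score, k_performer_baseline), rating(score, average_baseline))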

Equipped with input from stakeholders and the derived requirements for a new performance management system, the team moved forward with a solution following these five steps:

  1. Identification of alternatives
  2. Comparison of alternatives
  3. Selection of the most feasible alternative
  4. Creation of an implementation plan
  5. Implementation of the solution

Those steps were considered from both a functional and an IT perspective.

Verify Phase

As part of process control, a technique for automated data validation and verification was employed. It flags any out-of-order data point to line managers and the department director, and these out-of-order data points, along with any other significant events, are recorded in an event log. An incident log was also established to capture incidents related to the performance management system. Together, the event log and the incident log play a pivotal role in identifying improvement opportunities.
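A minimal sketch of that kind of automated check is shown below, assuming simple range-based limits. The limits, field names and in-memory event log are illustrative assumptions, not the project's actual implementation.

    # Minimal sketch: flag out-of-order data points and record them in an event log.
    # The valid range, field names and log format are assumptions for illustration.
    import datetime

    VALID_RANGE = (0.0, 2.0)   # assumed plausible range for a standardized metric value
    event_log = []             # the project would persist this, not keep it in memory

    def validate(employee_id, metric_value):
        # Return True if the data point is plausible; otherwise log an event.
        low, high = VALID_RANGE
        if low <= metric_value <= high:
            return True
        event_log.append({
            "timestamp": datetime.datetime.now().isoformat(),
            "employee": employee_id,
            "value": metric_value,
            "event": "out-of-order data point flagged for review",
        })
        return False

    validate("emp_01", 0.97)   # passes
    validate("emp_02", 7.50)   # flagged and written to the event log
    print(event_log)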

About 12 months after implementation, the system has been performing to expectations. An evaluation of whether to maintain the status quo or to improve the system further will be made as part of the strategic planning session for the following year.

Comments

Vijaya Sunder M

Hi Dushyant,

Thanks for sharing the project case study.
Did the project use any design/CTQ identification tools such as QFD? If so, can you please share that?
Also, can you please elaborate on how you measured the attributes defined (Frequency, Availability, Security, Continuity, Capacity) before and after the project?

Dushyant Thatte

Hello Vijaya,

Please find attached some additional comments. I hope this will help.

Frequency: How often should metrics be updated? --> We decided to update the metrics on a daily basis. A batch job is scheduled every morning, and we receive a notification on the success or failure of the job. The job status is also captured in a database, which is used to track job performance (frequency of data update).

Availability: At what times should the performance management system be available? --> The system is expected to be available Monday through Saturday; Sunday can be used as a maintenance window. Any unavailability is reported via an incident ticket, and the time to resolve the incident is counted as downtime. The total downtime in the month is divided by the total expected available time to arrive at the unavailability percentage. The target is less than 5% unavailability; since it is not the most critical system, this relatively high unavailability is tolerable.
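A minimal sketch of that calculation, using hypothetical incident durations and an assumed service window:

    # Minimal sketch of the unavailability calculation described above.
    # Incident durations and the expected service window are hypothetical.
    incident_downtime_hours = [1.5, 0.75, 2.0]   # time to resolve each incident ticket in the month
    expected_available_hours = 6 * 12 * 4        # Mon-Sat, ~12 h/day, ~4 weeks (assumed window)

    unavailability_pct = 100 * sum(incident_downtime_hours) / expected_available_hours
    print(f"Unavailability: {unavailability_pct:.2f}% (target: below 5%)")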

Security: Who should be able to see what information? --> This is handled mostly through policy documentation, which is followed whenever a new user access request is received (e.g., only managers and above should have access). A regular audit of user access levels is used to measure compliance.

Continuity: Is it a business-critical application? What needs to happen during/after a disaster? --> It is identified as a non-business-critical application. The metrics could be calculated manually if need be.

Capacity: How many users need to be supported? How much data needs to be stored? --> The system is designed to hold at least two years of data.

Mark Aguilar

Thanks Mr. Dushyant for this one! I will use this for my report in school. It helped a lot!

Dushyant Thatte

Glad to hear that. I’m happy to know that this has helped you.

Jim Kurtz

Thank you for sharing this case study; I found it very useful in helping me get a better handle on DMADV. One thing I noticed is that this case study has no explicit problem statement. Is this omission typical of DMADV? (I did read that “A pre-project analysis revealed that a complete redesign of the system was required,” but I still thought there would be a problem statement.) Do DMADV problem statements usually state explicitly that new products or redesigns are necessary, or does the problem statement drive the goal creation that leads to the choice of DMADV (over DMAIC)?

Dushyant Thatte

Sorry for such a delayed response, Jim. However, better late than never.

The omission of the problem statement from this article was accidental. We do have a defined problem statement for DMADV projects as well.

There is one small variation: at times a project that started as DMAIC may change into a DMADV project after the Analyze phase or during the Improve phase.

I hope this is still helpful to you. Please feel free to reach out to me.

Regards,
Dushyant
