Several information technology (IT) metrics can be developed on the basis of best practice frameworks such as Capability Maturity Model Integration (CMMI) and the Information Technology Infrastructure Library (ITIL). Measuring and collecting such data, often as part of process improvement initiatives, brings to light what is happening in IT processes.

Following are a few tips for metrics definition and deployment. These tips are divided into the strategic, tactical and operational aspects of metrics.

Strategic: “Why Do I Need a Metrics Program?”

One reason a metrics program is beneficial is that it helps IT collect voice-of-the-customer data. For instance, an IT practitioner might ask internal customers: In which areas of IT is business leadership trying to gain insight? What is the motivation for focusing on those areas?

Practitioners do not want to risk tracking the wrong metrics. Designing a dashboard around the wrong set of metrics may mean a scrapped project, as leadership may soon realize that what they obtained from the metrics initiative is not what they wanted in the first place.

A detailed study, starting with interviews with the leadership, is necessary. The practitioner must articulate the needs behind the metrics initiative, and verify them with leadership, before defining the individual metrics.

It is also important that practitioners do not force mature metrics, such as function points, from Day 1. Asking for high-maturity metrics will not help if the organization lacks the measurement framework and the resources to collect them. Until the leadership and the project teams understand the need for and benefits of a particular measure, and are assured that the effort needed to gather it will pay off, resistance may remain high.

Suppose the leadership wants to see metrics around productivity, but the organization does not count function points. If the practitioner tries to enforce function point or lines-of-code counting within project teams, the deployment risks failure because of strong resistance from those teams. Instead, practitioners can create demand for the higher-maturity metric by showing how it would fill a gap. It may help to start with simpler metrics, such as lines of code, and demonstrate how they yield crude productivity measures. Every report, however, should carry the message that the productivity calculation could be improved greatly with the help of function points.
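As a rough illustration of that message, consider a minimal Python sketch with hypothetical figures (the 12,000 lines of code, 1,600 person-hours and 150 function points are made up for this example): the same effort data yields a crude lines-of-code productivity number today, and a more comparable function-point number once counting is in place.

    # Crude productivity from lines of code (LOC) versus function points (FP).
    # All figures are hypothetical, for illustration only.
    loc_delivered = 12_000        # new or changed lines of code in the release
    effort_person_hours = 1_600   # total effort booked against the release

    # Crude measure: easy to collect, but it rewards verbose code and
    # cannot be compared across programming languages.
    loc_productivity = loc_delivered / effort_person_hours
    print(f"Productivity: {loc_productivity:.1f} LOC per person-hour")

    # Once function points are counted, the same effort data yields a
    # size measure that is comparable across teams and languages.
    function_points = 150
    fp_productivity = function_points / effort_person_hours
    print(f"Productivity: {fp_productivity:.3f} FP per person-hour")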

Tactical: “What Metrics Do I Need?”

One key aspect of metrics collection is the choice of unit. To collect project metrics, practitioners may ask process owners to provide defect counts in a data collection template assigned to the project. Although this is a less time-consuming way of collecting defect information, the approach has two caveats:

  1. Lack of detail: With this method, only the number of defects is collected, not the details of each defect. This prevents further analysis of the data or its reuse for other purposes. For example, the effort spent on defect closure could feed a different metric, such as percentage rework effort.
  2. Difficulty in verification: With only one total figure at hand, it is difficult to go back and check the accuracy of individual values. Also, if a particular type of defect was excluded from consideration by mistake, there is no way to catch the error, and an inaccurate defect count would be reported.

On the other hand, if a practitioner asks the process owners to log an entry for each defect (resolution time, effort spent, a brief description and a unique identification number), different metrics can be generated from the information, and each entry can be verified against the system.
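As a sketch of what per-defect logging enables, the Python record layout below is hypothetical (the field names and figures are not from any particular tool), but it shows how one log supports the defect count, percentage rework effort and average resolution time, while each entry remains traceable to the source system.

    from dataclasses import dataclass

    @dataclass
    class DefectRecord:
        """One log entry per defect; field names are illustrative."""
        defect_id: str               # unique identification number
        description: str             # brief description
        resolution_days: float       # resolution time
        closure_effort_hours: float  # effort spent on closure

    # Hypothetical monthly log for one project.
    defect_log = [
        DefectRecord("DEF-101", "Null pointer in billing batch", 2.0, 6.0),
        DefectRecord("DEF-102", "Incorrect tax rounding", 5.0, 14.0),
        DefectRecord("DEF-103", "Report layout issue", 1.0, 3.0),
    ]
    total_project_effort_hours = 400.0  # hypothetical total project effort

    # The single number the template approach would have captured...
    defect_count = len(defect_log)

    # ...plus metrics an aggregate count could never support.
    rework_effort = sum(d.closure_effort_hours for d in defect_log)
    percent_rework_effort = 100 * rework_effort / total_project_effort_hours
    avg_resolution_days = sum(d.resolution_days for d in defect_log) / defect_count

    print(f"Defects: {defect_count}")
    print(f"Percentage rework effort: {percent_rework_effort:.1f}%")
    print(f"Average resolution time: {avg_resolution_days:.1f} days")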

Activities in IT projects or programs, and the metrics related to them, may broadly be divided into three categories: development, maintenance/enhancement and production support. Accordingly, the unit of work varies, and so does the priority of metrics. But in the absence of any special considerations, the overall set of metrics covers four areas:

  • Effort: productivity, utilization, throughput – Effort-related metrics are difficult to obtain because effort data may not be logged against each deliverable. Practitioners should persuade process owners to log this information. Even if the data is not 100 percent accurate, it adds great value in the calculation of resource capacity utilization and productivity.
  • Quality: defect, defect prevention, training – Defect-related information is more readily available, but defects are rarely logged across every phase. Some organizations log only system testing and user acceptance testing defects, or just one of the two. More mature organizations also record metrics related to training and to the penetration of the process improvement program.
  • Budget: cost variance, return on investment – Cost variance, along with productivity, is a metric that leadership at most organizations takes an interest in. But the answers are not as obvious as they seem. IT organizations may not maintain cost accounts at the individual project level, and sometimes one project's deficit is offset with funds from another. Although these practices spare IT process owners some overhead, they also prevent cost calculations at the project level.
  • Schedule: service-level agreement (SLA), slippage – Schedule slippage can be viewed from two angles: go-live slippage, which is visible and impactful to the customer, and internal slippage, which is visible to the project team but not to the customer. Tracking internal slippage requires planned end dates for every phase of the development life cycle. For production support, the time stamp at each stage of the issue life cycle may be recorded against the SLA expectations. Phase-wise schedule information shows where, within the life cycle, delays occur. (A minimal calculation sketch for these metric categories follows this list.)
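The arithmetic behind these categories is simple ratios and differences. The Python sketch below uses hypothetical inputs (in practice they would come from time-tracking, defect-tracking, finance and ticketing systems) to show one minimal calculation for each category.

    # Minimal calculations for the four metric categories above.
    # All inputs are hypothetical, for illustration only.

    # Effort: capacity utilization and throughput
    logged_effort_hours = 1_450
    available_capacity_hours = 1_600
    deliverables_completed = 29
    utilization = logged_effort_hours / available_capacity_hours
    throughput = deliverables_completed / logged_effort_hours

    # Quality: defect density per deliverable
    defects_logged = 58
    defect_density = defects_logged / deliverables_completed

    # Budget: cost variance against the approved budget
    budgeted_cost = 250_000.0
    actual_cost = 272_500.0
    cost_variance_pct = 100 * (actual_cost - budgeted_cost) / budgeted_cost

    # Schedule: go-live slippage and SLA compliance for production support
    planned_go_live_day = 120
    actual_go_live_day = 131
    go_live_slippage_days = actual_go_live_day - planned_go_live_day
    tickets_within_sla = 182
    tickets_total = 200
    sla_compliance_pct = 100 * tickets_within_sla / tickets_total

    print(f"Utilization: {utilization:.0%}, throughput: {throughput:.3f} deliverables/hour")
    print(f"Defect density: {defect_density:.1f} defects per deliverable")
    print(f"Cost variance: {cost_variance_pct:+.1f}%")
    print(f"Go-live slippage: {go_live_slippage_days} days, SLA compliance: {sla_compliance_pct:.0f}%")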

Operational: “How Do I Get Those Metrics?”

For a metrics rollout across an organization, the deployment should be scheduled in small steps rather than planned as one large rollout on a single day. A communication to all employees, such as “Start collecting metrics from Nov. 1, 2008,” might arouse doubt and resistance. Instead, the rollout should be staged across groups over a span of one to three months and deployed team by team.

Following are five steps to carry out for each team in a metrics deployment:

  1. Prepare a schedule. Break down the whole deployment, team by team and department by department. Provide this plan to the metrics initiative manager to track progress against it.
  2. Meet individual process owners. Describe the objectives of the metrics initiative, the benefits of tracking metrics and the means to collect them. Tell each process owner that when the monthly data analysis is done, the metrics will first be verified and reported back to them.
  3. Watch over the metrics collection process. This should be done with the project teams after obtaining consent. Meet the team members, and equip them with the necessary data collection tools and methods (spreadsheets, definitions and diagrams). Assure them that help will be offered until they get comfortable.
  4. Avoid big meetings with multiple process owners. People who have been working on processes for some time are often opinionated, and may come armed with special cases that defy ground rules and derail discussions. Deal with such veterans individually and, if possible, with only their supervisors present.
  5. Create excitement within project teams. At the end of the month, practitioners should not forget to report the metrics to the individual process owners who helped collect them. They should discuss observations, possible suggestions and highlights coming out of the metrics. Inviting subjective inputs and commentaries for the metrics – which will be included in the high-level dashboards – is important. Practitioners also should discuss process improvement opportunities.

To bring more project leaders into the metrics program voluntarily and enthusiastically, remember to share success stories in newsletters and in mailers from leadership teams.

While conducting an introductory session with a process owner, tell them how another process owner expressed enthusiasm after seeing the metrics for their team, and show those results too, if possible. Process owners may view metrics as overhead initially, but providing them the monthly metrics with interpretations will turn them into enthusiasts rather than resistors.
