iSixSigma

Software Development Assessments for the 21st Century

During the last 35 years, software development and technology processes in general have evolved at a rapid, even chaotic, rate. These processes range from small, Lean, agile (sometimes labeled iterative) development pockets to large, bureaucracy-laced legacy projects of tremendous scope (sometimes labeled waterfall development) and everything in between.

Many organizations have attempted to classify, characterize and assess these processes in order to drive significant improvement. The result is a number of standards and associated assessments, all professing to be a magic bullet. Most of these standards share philosophies, and some are derivatives of others. Although the majority of these efforts yield some improvement, it is often not the enterprise-wide, step-function improvement hoped for.

Of course there are best-practice stories from each assessment, usually where management has taken the time to really understand its culture, needs and requirements, and has committed to and aligned around a change in culture. But in today's environment, the pressure to deliver results now is great. Quick fixes that drive never-ending reactive behavior and the associated resource shuffling are common. The bottom line is that many organizations, unless mandated to do so, simply do not take an objective look at themselves, and thus become stuck in their ways. Or they embrace ad hoc improvement, trying many different things and hoping for the best.

So how does an organization start the process of driving enterprise-wide change without investing significant time in resource-intensive assessments to establish a quantitative baseline and measure subsequent progress? The answer, as with many things today, may lie in web technology. But before examining that technology, it is important to understand why assessments are done in the first place.

Why Do an Assessment?

Process or organizational assessments, if done right, produce good information and data about how an organization performs its work. Assessments provide an objective view of processes, tools, behaviors and the consistency of their application across an organization. From that baseline, companies can identify process and organizational strengths as well as weaknesses, where opportunities for improvement exist. In some cases an organization also may receive a numeric score, ranking or dashboard color relating its performance to best practices. The results also may be displayed demographically by location, division, product line, project team and so on, which is useful for planning and implementing an improvement strategy.

Normally, and especially for lower-maturity processes and organizations, self-assessment is not an option. Objectivity is difficult to achieve. In addition, most companies do not have individuals with the skills to perform the assessment and compare their organization to best practices; if they do employ such a person, that person is usually too busy with daily issues to address organization-wide needs.

Hence, a team of objective experts usually is brought in to perform the assessment. These on-site, interview-based assessments are high-overhead events, especially for medium- to large-size organizations. They require extensive planning, and can be time consuming and disruptive. Further, the sample sizes end up being small relative to the total population of the subject organization, putting the accuracy of the results at risk. Additional risk comes from interviewee manipulation and preprogramming, and from the biases of both the individuals involved and the methodology itself.

An Alternate Method: Web Technology

There is a strong correlation between sample size and the accuracy of results. But improving accuracy by increasing the sample can drive up the overhead of the assessment considerably. Consider that for a 2,000-person organization, a 5 percent sample would require 100 interviews, or as much as 200 person-hours of interviewer effort at up to two hours each. It would also mean 200 hours of lost labor for the interviewees. Add in planning, work disruption, travel, compilation of results and follow-up, and this is a significant resource drain. Yet even at a 5 percent sample size, the accuracy of the results and conclusions may be at significant risk.
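The arithmetic above, together with a standard margin-of-error calculation for survey proportions, can be sketched as follows. This is a minimal illustration; the two-hour interview length and the 95 percent confidence level are assumptions, not figures from any particular assessment methodology.

```python
import math

def interview_overhead(population, sample_frac, hours_per_interview=2.0):
    """Estimate interviewer effort plus interviewee lost labor, in person-hours."""
    interviews = round(population * sample_frac)
    interviewer_hours = interviews * hours_per_interview
    interviewee_hours = interviews * hours_per_interview  # respondents lose the same time
    return interviews, interviewer_hours, interviewee_hours

def margin_of_error(population, sample, p=0.5, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    standard_error = math.sqrt(p * (1 - p) / sample)
    fpc = math.sqrt((population - sample) / (population - 1))
    return z * standard_error * fpc

interviews, effort, lost = interview_overhead(2000, 0.05)
print(interviews, effort, lost)              # 100 interviews, 200.0 + 200.0 hours
print(round(margin_of_error(2000, 100), 3))  # 0.096, i.e. roughly +/- 10 percentage points
```

On worst-case (50/50) questions, a 100-person sample from a 2,000-person organization leaves answers uncertain by roughly plus or minus 10 percentage points, which illustrates the accuracy risk the article describes.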

One alternative, web technology, makes it possible to scan large swaths of an organization, at a fraction of the cost, with far less work disruption. The key to the technology is capturing and documenting the best practices, developing a question set that can be answered by all layers of the organization, and processing the resultant information in a statistically valid and graphically friendly manner.

Web-enabled assessment processes are becoming more advanced and alleviate much of the downside of on-site, interview-based assessments. Because the question set is standard and fixed, the sample size can be as high as 100 percent of the organization, the questions can be answered confidentially and at the respondent's leisure, and much of the bias can be eliminated or normalized. In addition, the cost per online interview is an order of magnitude lower than for on-site assessments, so many more interviews can be conducted, improving accuracy and reducing risk. The result is a highly accurate assessment of the as-is behaviors and practices of the subject organization. It also provides great insight into differences in operational behavior between business units, product lines, project teams or virtually any defined demographic.

Benefits of Technology

Technology allows much more detailed responses to be captured, and results to be organized demographically. In addition, an array of analytical techniques can be applied efficiently. Through the use of fixed questions with Likert-type scaled responses, multiple-choice questions and open-ended questions requiring a typed response, a complete and objective characterization is acquired.

The technology also brings significant efficiency to the process through preloaded demographics, alignment of pertinent questions to respondent types, and the ability for respondents to answer the questions in multiple sittings rather than sitting with an interviewer for one to two hours.
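To make the mechanics concrete, the question and response types described above can be sketched as a simple data model. This is illustrative only; the class and field names are assumptions, not the schema of any particular assessment tool.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class QuestionType(Enum):
    LIKERT = "likert"            # fixed scaled response, e.g. 1-5
    MULTIPLE_CHOICE = "choice"
    OPEN_ENDED = "text"

@dataclass
class Question:
    qid: str
    qtype: QuestionType
    text: str
    applies_to_roles: set = field(default_factory=set)  # align questions to respondent types

@dataclass
class Response:
    qid: str
    respondent_id: str
    demographics: dict                  # e.g. {"division": "East", "site": "Plant 2"}
    likert_value: Optional[int] = None  # used for Likert questions
    choice: Optional[str] = None        # used for multiple-choice questions
    free_text: Optional[str] = None     # used for open-ended questions
    session: int = 1                    # responses may span multiple sittings

def mean_likert(responses, qid, demo_key, demo_value):
    """Average Likert score for one question within one demographic slice."""
    values = [r.likert_value for r in responses
              if r.qid == qid
              and r.demographics.get(demo_key) == demo_value
              and r.likert_value is not None]
    return sum(values) / len(values) if values else None
```

With responses stored this way, a mean Likert score can be computed per division, site or project team, which supports the demographic breakdowns the article describes.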

As this technology continues to be adopted, the day may come when organizations share data industry-wide, helping to propagate more meaningful standards databases.
