Successful corporate initiatives such as Lean Six Sigma require careful planning, prioritization, resource allocation, budgeting, training, and sound review and reward mechanisms. These initiatives also must consider the stability, accuracy and maturity of the core processes, measurement systems and the people they will affect.

In most organizations there is a wide range of both experience and infrastructure to deal with such issues. This variation can be attributed to a variety of causes including – organizational structure (e.g., centralized versus decentralized), organization size, geographical diversity, previous initiative rollout experience, management style/culture, management infrastructure (e.g., tools to enable clear and concise communication, measurement and reward/recognition) and contention for the best resources. Within this dynamic environment of moving parts lies the groundwork for a highly successful initiative or – at the other end of the spectrum – a train wreck.

Understanding these processes and systems enables an organization to logically build out an implementation plan while mitigating potential risks of failure. This is the concept behind a readiness assessment used to drive successful initiative planning and rollout.

Use What’s Already Available

Several factors must be considered when deciding when, how and just how much assessment to do. Mature organizations may already have in place a variety of comprehensive and regularly scheduled assessments, from which much of the necessary information may already be available. These may include assessments and audits such as the ISO Series (ISO9000), CMM/CMMi®, national or state quality awards, or a company’s own existing assessment program. While each of these assessments may have been created for a purpose other than determining readiness for a Six Sigma initiative, many of their attributes are useful in determining readiness or maturity.

Assessment Goals

  • Validate critical business measures and select high-leverage projects (from existing clusters).
  • Identify leverage points and areas of opportunity in upcoming projects and existing processes.
  • Identify potential obstacles and mitigation strategies.
  • Recommend a set of next steps and tune the Six Sigma deployment plan accordingly.
  • Minimize risk and improve cycle time of deployment (especially project selection).

Existing strategy documentation, goals and objectives, policies, procedures, process documentation, data, organizational charts and other pertinent plans and information will provide a basis for evaluating both the scope of the prospective assessment and serve to help in the planning and preparation for the assessment.

Communicate to Avoid Organizational Resistance

Any assessment can be a source of trepidation for an organization. Without communicating the purpose of the assessment, the schedule, the roles and responsibilities of individuals during the assessment, the confidentiality of statements and documentation, and what will be done with the results, organizational resistance can create an environment where gleaning accurate information will be nearly impossible. Generally speaking, any assessment should be communicated to the organization well in advance of when people will be required to respond.

Communications also may need to be repeated to reinforce their importance. It is usually best if senior management details the assessment's purpose, scope, context, size, location(s), dates and intended follow-up. This communication also should state a firm commitment to the process, a solicitation for full and forthright cooperation, and an assurance of confidentiality. This type of communication, when reinforced in regularly scheduled meetings and informal communications, will help to reduce organizational resistance and assure a free flow of accurate information.

Post-assessment communications also should be scheduled. Sharing information with the people who participated helps develop buy-in for the results and subsequent actions to be taken.

Planning Essential for Assessment Success

Planning for an assessment will be gated by the scope and context of the initiative and the organization. Most readiness assessments involve interviewing or surveying a cross-functional and/or cross-organizational group of people. One way to envision the survey/interview population is to imagine a diagonal slice of the organization. Given this view, the company needs to be sure it elicits information from all constituencies in a way that provides a balanced view of what is going on (from strategic/organizational elements to tactical/operational elements).

Similarly, if an organization is structured by product line, it needs to acquire equally distributed information across all product lines. Practically speaking, this means everyone from senior executives to process participants must be interviewed. These views will help to characterize and highlight the alignment of the organization's strategies, goals, programs, processes and metrics. This alignment is important to test for, as strong alignment contributes significantly to the strength and longevity of a successful initiative. Lack of alignment means that communications, priorities and actions will become clouded and subject to drift or ambiguity.

This distribution also provides a clear view of the aggregate organization, with visibility into “leverageable strengths” and/or specific opportunities for improvement. For these reasons, some level of randomness must be assured when developing a sample population to survey or interview.
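The "diagonal slice" described above can be approximated with stratified random sampling: group the roster by function and organizational level, then draw randomly within each group so every constituency is represented. The sketch below illustrates the idea; the roster fields, functions, levels and 4 percent rate are hypothetical assumptions, not part of any specific assessment methodology.

```python
import random

# Hypothetical roster: each associate is tagged with a function and a level.
roster = [
    {"name": f"assoc_{i}",
     "function": random.choice(["operations", "finance", "engineering", "sales"]),
     "level": random.choice(["executive", "manager", "individual"])}
    for i in range(1000)
]

def diagonal_slice(roster, rate=0.04, seed=42):
    """Stratified random sample: group by (function, level), then draw
    roughly `rate` of each stratum so all constituencies are represented."""
    rng = random.Random(seed)
    strata = {}
    for person in roster:
        strata.setdefault((person["function"], person["level"]), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * rate))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

sample = diagonal_slice(roster)
print(len(sample))  # every function/level combination contributes at least one person
```

Because every stratum contributes at least one participant, senior executives are never drowned out by the much larger population of individual contributors, preserving the balanced strategic-to-operational view the article calls for.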

One of the primary inputs to the assessment process is what people in the organization actually do in the performance of their daily work. Deciding how that information is elicited, how much of it is needed and from whom are critical decisions in the process. When selecting the survey or interview population, several factors should be considered:

  1. Sample size (how many and from where) – This is often determined by the size of the organization and whether or not web-based technology is used. If assessing an organization of 500 to 5,000 associates with an interview-based model, sample sizes should be 3 to 5 percent of the population. When using web-based technology, a sample size of 100 percent is not out of the question. Web-based assessments, while lacking actual observation and dialogue, add the capability to collect, process and store a large amount of data quickly, which enhances the ability to reuse and drill down into the detailed data.
  2. Demographics (various locations, cultures and/or geographies) – Demographics are important to capture in an assessment. Often locally tuned processes and practices are getting results and thus may be ripe for adoption by the organization at large. Sometimes certain geographies are required to comply with local laws or requirements. A supply chain view also is worth considering. It can be highly desirable to include several customers, clients or even suppliers in the assessment. This is a great way to see how external sources view the strengths and weaknesses of a process.
  3. Availability of information (from other sources, including other audits and assessments) – If a recent and rich source of information exists (a previous audit or assessment), it only stands to reason that this information is useful, even if it appears to conflict with other findings. Such conflicts often arise when work is being done in a vacuum or in a pocket of excellence. All are useful things to learn.
  4. Use of technology (specifically, web-based survey and assessment tools) – Web-based tools for assessments are gaining popularity and accuracy. It is important to differentiate between “survey only” and assessment tools. Assessment tools usually have some type of intelligence built in to generate comparative and gap analysis automatically. They typically can trap user comments and opinions, which can then be stored for quick retrieval and summarization. There are many advantages to web-based systems but no web-based system can observe actual behavior. Usually some combination of web technology and observation-based validation provides the most accurate result.
  5. Timing (of the initiative and other things going on in the organization) – All efforts should be made to plan an assessment in concert with the initiative scheduling process. Sometimes when senior management announces a timeline for an initiative it does not consider the assessment. The assessment becomes an afterthought and then must be rushed. While not the optimum way to execute, good planning and frequent communication can help overcome this shortfall.
  6. Organizational culture (receptiveness or resistance to assessment) – If this is the first time an organization has undergone a formal assessment, it is normal to see significant organizational resistance. The best way to overcome this is by meticulous planning and frequent communication. Organizations used to such events will be able to move faster with less communication.
  7. Availability of people (resource bandwidth given the type of work being performed) – Careful consideration should be given to making people available to participate without any negative consequences. Typically a survey or interview will require an hour or so of preparation and an hour in the actual interview or survey. Management must create safe ground for employees to participate.
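The 3 to 5 percent guideline in item 1 translates directly into interview counts. A quick sketch of that arithmetic, using the article's own 500-5,000 associate range (the function name and population values are illustrative):

```python
def interview_sample_size(population, low=0.03, high=0.05):
    """Return the (min, max) interview count implied by the
    3-5 percent guideline for interview-based assessments."""
    return round(population * low), round(population * high)

for population in (500, 2000, 5000):
    lo, hi = interview_sample_size(population)
    print(f"{population} associates -> {lo} to {hi} interviews")
# 500 associates -> 15 to 25 interviews
# 2000 associates -> 60 to 100 interviews
# 5000 associates -> 150 to 250 interviews
```

At two hours per participant (preparation plus interview, per item 7), even the low end of a 5,000-person organization implies roughly 300 staff-hours, which is why the web-based option in item 4 becomes attractive at scale.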

Once the above is understood and agreed to by the target organization and the assessment team, development of the interview schedule (or, in the case of a web-based tool, a deployment plan) should commence. Schedules need to be specific about times, locations, names, titles and functional responsibilities. The organization being assessed typically provides an administrative resource to manage and communicate the master schedule and any changes which may occur during the course of interviews.

Assembling the Results of Assessment

A typical readiness assessment is conducted by a team of two or more assessors with a goal of deriving an objective and unbiased expert characterization of an organization based on the information collected. Consensus between assessors is critical to drive out error and bias and fully examine all points of view. The examiner training used by the Malcolm Baldrige National Quality Award program provides an excellent model for assessment teams to build and derive consensus leading to valid conclusions. Generally a documented body of knowledge (quality, Six Sigma, management, software, etc.) is tapped to help identify an organization's leverageable strengths, opportunities for improvement and priorities for the pending initiative.

Examples of typical results might point to (as a leverageable strength) “a strong sense of pervasive process focus and documentation as evidenced by procedures and the rigor with which they are used.” It is a strength because the organization recognizes the need and importance of process focus and has an established system for keeping processes documented. It is leverageable because best practices suggest process rigor and documentation are a critical success factor to institutionalize and control an optimized system or process.

A corresponding opportunity for improvement may indicate, “The measurement system is inconsistently used and the collection and characterization of data is highly variable.” This might suggest that a significant effort using measurement system analysis (MSA) would be required to validate both project selection criteria and the execution of the projects themselves. Additionally, pockets of excellence (places where excellent measurement systems have been validated and are in use) may indicate areas ripe for early adoption of more advanced techniques. These characterizations are usually completed process-by-process, scored and aggregated into an organization-wide baseline.
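The process-by-process scoring and aggregation mentioned above can be sketched as a simple roll-up: score each core process on each assessed dimension, then average per dimension to expose organization-wide strengths and gaps. The 1-5 scale, process names and dimensions below are illustrative assumptions, not a prescribed scoring model.

```python
# Hypothetical 1-5 maturity scores, one entry per core process,
# one score per assessed dimension.
scores = {
    "order_fulfillment": {"process_focus": 4, "measurement": 2, "training": 3},
    "procurement":       {"process_focus": 3, "measurement": 3, "training": 2},
    "billing":           {"process_focus": 5, "measurement": 2, "training": 4},
}

def baseline(scores):
    """Aggregate process-by-process scores into per-dimension averages,
    forming an organization-wide baseline."""
    dims = {}
    for per_process in scores.values():
        for dim, value in per_process.items():
            dims.setdefault(dim, []).append(value)
    return {dim: sum(values) / len(values) for dim, values in dims.items()}

print(baseline(scores))
# In this made-up data, measurement averages lowest across processes,
# flagging it as the organization-wide opportunity for improvement.
```

A baseline like this makes the pattern in the example above visible at a glance: process focus scores high everywhere (a leverageable strength), while measurement lags across processes (an opportunity requiring MSA work before project execution).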

Process Maturity Assessment (One per Core Process)

Conclusion: Creating Action and Deployment Plans

Communication and use of the assessment results require careful interpretation and discussion amongst the core deployment team. Identified gaps and opportunities for improvement may indicate places to avoid for early adoption until fundamental and systematic changes can be implemented (e.g., establishment of a process management system or establishment of a measurement system). On the other hand, leverageable strengths and pockets of excellence may provide rich early adoption process and project opportunities. Action plans and deployment plans can now be intelligently prepared, communicated, prioritized and implemented.

The assessment process also helps to identify and validate initial project clusters for consideration as Lean or Six Sigma projects. These will need to be qualified, quantified, scoped and prioritized accordingly as the program rolls out.

It is important that everyone involved in the assessment be kept abreast of the results and what is being done with them. This keeps an organization engaged in the process and helps to develop buy-in for the plan. In many cases the organization may elect to perform a follow-up assessment one or two years later to observe changes in the baseline. Many times an organization either becomes satisfied with the great results of the early part of the program (low-hanging fruit) or becomes distracted and complacent. In this case, major savings opportunities may be missed, causing the program to fade. Supplementing a readiness assessment with a deployment assessment enables an organization to keep the initiative on track and monitor organizational behaviors and performance related to the initiative's critical success factors.

In terms of payback, deployments preceded by proper planning, communication and prioritization will be significantly larger, occur sooner and last longer than initiatives which simply jump to training the masses hoping for the best. “Ready, Aim, Fire” always beats “Ready, Fire, Aim!”
