The Lean Six Sigma (LSS) toolbox can always be extended with new approaches that open up new opportunities for improvement. Philosophically, improvement in terms of velocity, cost, quality, performance, reliability and widespread customer acceptance remains relevant but is also complex. One way of extending the use of these methodologies and tools is by developing new, nonlinear approaches to improvement within the transactional improvement space – particularly upstream in the areas of innovation, new product development and the software development life cycle.

There have been attempts to improve new product development and software development by applying Lean manufacturing tools such as value stream mapping, Kaizen, 5S (sort, simplify, shine, standardize, sustain), flow and bottleneck management, agile thinking and so on. Many of these attempts waste time, are met with resistance, fail to address true root causes and may be short lived. Why? Because complex creative processes involve unpredictability, engineering judgments rather than hard data, a high degree of informal activity underlying a formal process, and fuzzy cause-and-effect relationships in space and time. The philosophy of continuous improvement is applicable to any business, but improving highly complex processes requires nonlinear thinking, creativity and adaptive improvement methodologies.

In knowledge-based processes there is a high degree of people-to-people innovation and people-to-technology interaction that does not lend itself to a traditional linear approach to improvement such as DMAIC (Define, Measure, Analyze, Improve, Control). In certain downstream situations in the new product development or software development process, such as design standardization, validation and test, and commercialization, problems are known with greater certainty and the linear DMAIC approach works well. Experience has shown, however, that this approach is less effective upstream in research and development, innovation and technology development, hardware and software concept engineering, and the functional specification stage.

Several nonlinear process improvement approaches, referred to here collectively as abstraction factor analysis, have generated success in improving new product development and software development.

The Typical Development Scenario

If you walk into product development and software development organizations, you will likely find a few practices that are common to each. They include the following:

  • The perceived workload of projects exceeds the resources available to execute and complete projects on time and on budget. Schedules are set artificially aggressively to win approval for revenue plans, and schedules and priorities are in a state of constant flux. Yet a closer look at project activities and flow shows that more than half of the elapsed time is wait time – waiting for someone else to complete a particular piece of development before the process can continue.
  • Requirements are either misinterpreted, incomplete or characterized by a moving target of feature creep, added functionality and drive-by engineering, causing rework loops in individual projects. In a software organization, for example, development occurs before hardware is developed or before many critical business and operating requirements are even known.
  • Projects are planned and executed through shared teams of resources, often from different functions, business units and physical locations. This situation creates overloaded resources, conflicting priorities, the cannibalization of resources, and weak system capabilities to manage and smooth out constraints. Additionally, there are integration issues as the individual elements of development come together.
  • There are meetings in which people offer technical but non-actionable explanations for why projects are behind, and share perception-based judgments about how to realign individual activities and get projects back on track. These decisions are made in silos without a full understanding of the impact on all projects, resources and schedules.
  • There are unplanned workloads for sustaining engineering and software maintenance, testing and debugging activities or customer field issues that create resource distractions from gate-approved projects (those hardware and software products that are approved, budgeted and resourced for development through commercialization).
  • There are conflicting expectations and performance metrics that drive behaviors at a business unit or project level that are misaligned with aggregate company objectives. Executives and program managers learn about problems after they occur or when they are about to occur with no opportunity for proactive recovery.

This list of items continues; however, it should be noted that these observations are more prevalent in small- and mid-sized organizations than in larger organizations with an abundance of resources to dedicate to programs. One challenge that stands out in product and software development is the ongoing prioritization and efficient use of the right resources to keep projects moving – simple in concept, difficult in practice.

Quantifying Soft-decision Processes

These environments give rise to a key question: How do you quantify and track all of the informal activities surrounding shared resource decisions, and their relative impacts on time to market, under a geographically distributed formal stage-gate process or another structured development methodology (such as agile)? In other words, how do you quantify the soft-decision processes of an organizational structure with its chain of command and span of control networks? And before that, how do you even convince an organization that there is a problem at all when the typical scenario described previously is an accepted and institutionalized way of conducting business?

With a little creative LSS thinking, and by using an organization’s existing information technology (IT) infrastructure and statistical software (such as Minitab) to analyze and correlate the results, a nonlinear approach can reveal the “soft stuff” hiding within the formal process.

Abstraction Factor Analysis

Abstraction factor analysis is a fact-based approach to understanding and evaluating the use of shared resources on development projects. An abstraction factor is a measurement of the number of steps a resource is away from the executive development program manager responsible for the successful completion of a project. A factor of one represents a team resource that is a direct report; a factor of nine represents a team resource that is several functions, business units and site locations away from the executive development program manager. The higher the abstraction factor, the more likely a resource is to receive invisible direction from others in the process.
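The article does not prescribe a formula for the factor, but one simple way to compute it is to treat the organization chart as a graph and count the steps between the executive program manager and each team resource. The Python sketch below is a minimal illustration under that assumption; the edge-list format and the names are hypothetical.

```python
from collections import deque

def abstraction_factor(org_edges, program_manager, resource):
    """Count organization-chart steps between the executive development
    program manager and a team resource (a direct report returns 1)."""
    # Build an undirected adjacency map from (manager, report) pairs.
    graph = {}
    for manager, report in org_edges:
        graph.setdefault(manager, set()).add(report)
        graph.setdefault(report, set()).add(manager)

    # Breadth-first search outward from the program manager.
    visited = {program_manager}
    queue = deque([(program_manager, 0)])
    while queue:
        person, steps = queue.popleft()
        if person == resource:
            return steps
        for neighbor in graph.get(person, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, steps + 1))
    return None  # resource does not appear in the org chart data

# Hypothetical org chart: a direct report scores 1, a resource one
# manager away scores 2, and so on.
org = [("PM", "Engineer A"), ("PM", "Manager B"), ("Manager B", "Engineer C")]
print(abstraction_factor(org, "PM", "Engineer A"))  # 1
print(abstraction_factor(org, "PM", "Engineer C"))  # 2
```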

How to Conduct the Analysis

To begin the analysis, prepare a portfolio of gate-approved projects and list the team members and other resources involved in each project. In effect, create a matrix for each live project that resembles the one below.

Figure: Project Matrix

Next, compare each project matrix to the organization chart and calculate the abstraction factors for each person and for each project. Also include other critical project data (stage/gate status, budget, actual time reported by individual by project, development complexity, etc.) in the abstraction factor baseline matrix. This provides a baseline of quantifiable data to evaluate project status and resource utilization, and to calculate the costs and lost opportunities associated with disruptions to the development process. These costs and lost opportunities are not available on financial statements because they are hidden.
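As a rough illustration of what the abstraction factor baseline matrix might look like, the sketch below assembles one row per person per gate-approved project into a pandas DataFrame and summarizes it by project. The column names and values are assumptions chosen for illustration only.

```python
import pandas as pd

# Hypothetical baseline: one record per person per gate-approved project.
baseline = pd.DataFrame([
    {"project": "P-101", "person": "Engineer A", "abstraction_factor": 1,
     "stage_gate": "Design", "budget_k": 250, "planned_hrs": 300, "actual_hrs": 210},
    {"project": "P-101", "person": "Engineer C", "abstraction_factor": 4,
     "stage_gate": "Design", "budget_k": 250, "planned_hrs": 200, "actual_hrs": 90},
    {"project": "P-102", "person": "Engineer C", "abstraction_factor": 6,
     "stage_gate": "Validation", "budget_k": 400, "planned_hrs": 150, "actual_hrs": 175},
])

# Project-level view: average abstraction factor and actual-versus-planned
# time, a first quantifiable look at hidden disruption risk.
summary = baseline.groupby("project").agg(
    mean_abstraction=("abstraction_factor", "mean"),
    planned_hrs=("planned_hrs", "sum"),
    actual_hrs=("actual_hrs", "sum"),
)
summary["pct_of_plan"] = 100 * summary["actual_hrs"] / summary["planned_hrs"]
print(summary)
```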

The analysis is not complete yet; more data sampling is needed to explain and quantify “why.” Abstraction factor analysis, along with the development time reporting system, provides the ability to drill down to specific projects and individuals, to measure actual versus planned/allocated time spent on projects, and to assess whether the best resources are assigned to the highest-risk programs. With this approach, team members come to project reviews equipped with analytics and facts about what is going on relative to staffing and resource management, rather than relying on perceptions and opinions.

It is not unusual to find that actual development time may be as much as 30 percent below planned development time at any given point in time. The opposite may be true, however, where a project is resource overloaded at the expense of all other projects in the portfolio. Both situations create significant project delays, design process quality issues, budget overruns and late time-to-market.
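To make both failure modes easy to spot in the drill-down, a simple screen such as the following can flag projects that are starved of planned time as well as projects absorbing time at the expense of the rest of the portfolio. The thresholds are illustrative assumptions, not values prescribed by the method.

```python
def resource_flag(pct_of_plan, under=70.0, over=115.0):
    """Classify actual versus planned development time (percent of plan).
    Threshold values here are illustrative assumptions."""
    if pct_of_plan < under:
        return "under-resourced"   # e.g. 30 percent or more below plan
    if pct_of_plan > over:
        return "over-loaded"       # absorbing time at other projects' expense
    return "on plan"

# Hypothetical drill-down values: percent of planned time actually logged.
for project, pct in [("P-101", 62.5), ("P-102", 130.0), ("P-103", 98.0)]:
    print(project, resource_flag(pct))
```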

More Data Sampling

The next step of the nonlinear analysis approach is to design a sampling plan – additional data elements that help describe and solve the problem through a process of transaction stream mapping. Every time a process breakdown or disruption to a project schedule is detected in any gate-approved project, answers to the questions listed below should be collected. Detection occurs during weekly program manager reviews, or disruptions may be reported voluntarily by team members as they are redirected and realize that the changes will affect their planned workloads.

This sampling process is an objective effort to bring to light the organizational dynamics at play behind the formal system. The sample points are based on individual activities in each development project that are not done, incomplete, incorrect or late, identified during, and in between, the program manager’s weekly review meetings. For each incident (defined as a redirection of work that consumes a day or more of planned resource time), a 5W + 1H (who, what, where, when, why and how) analysis is completed to better understand each disruption sample point; a minimal record structure is sketched after the list:

  • Who caused the disruption (name and reason – continue to ask “why” to get to the root cause) and on what project?
  • What caused the disruption to happen?
  • Where in the organizational network did the disruption occur (individual, manager, organization)?
  • When did the disruption occur? Document other potential events that might have caused the conflict.
  • Why did the disruption happen? What was the root cause of the disruption?
  • How could this disruption have been prevented?
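One lightweight way to capture these 5W + 1H sample points is a structured incident record such as the sketch below; the field names and example values are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DisruptionIncident:
    """One 5W + 1H sample point: a redirection of work that consumes a day
    or more of planned resource time on a gate-approved project."""
    project: str             # project that lost the planned time
    who: str                 # person/role that caused the redirection, and why
    what: str                # what happened to the planned activity
    where: str               # individual, manager or organization in the chain
    when: str                # date detected, plus any coincident events
    why: str                 # root cause reached by repeatedly asking "why"
    how_prevent: str         # how the disruption could have been prevented
    abstraction_factor: int  # factor of the redirected resource
    days_lost: float         # planned resource time redirected, in days

# Hypothetical incident logged during a weekly program manager review.
incident = DisruptionIncident(
    project="P-101",
    who="Business unit manager responding to a customer escalation",
    what="Engineer pulled off validation tasks for three days",
    where="manager",
    when="2024-03-04",
    why="No reserved capacity for sustaining engineering work",
    how_prevent="Plan sustaining capacity outside gate-approved projects",
    abstraction_factor=5,
    days_lost=3.0,
)
```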

Since program managers and design teams collect samples across all gate-approved projects over time, this sampling is a good indicator of how the overall development process works. After the data is collected, it can be analyzed using tools such as Pareto analysis, scatter plots, regression analysis and other simple graphical and tabular analytics.
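For example, a Pareto view of the collected incident records (sketched here with pandas rather than Minitab, and using hypothetical data) quickly shows which root causes account for most of the lost planned time:

```python
import pandas as pd

# Hypothetical incident log collected across all gate-approved projects.
incidents = pd.DataFrame([
    {"root_cause": "Unplanned sustaining work", "days_lost": 3.0},
    {"root_cause": "Conflicting business unit priorities", "days_lost": 5.0},
    {"root_cause": "Unplanned sustaining work", "days_lost": 2.0},
    {"root_cause": "Requirements change", "days_lost": 1.5},
    {"root_cause": "Conflicting business unit priorities", "days_lost": 4.0},
])

# Pareto view: rank root causes by total planned days lost and show the
# cumulative share each cause contributes.
pareto = (incidents.groupby("root_cause")["days_lost"].sum()
          .sort_values(ascending=False).to_frame())
pareto["cum_pct"] = 100 * pareto["days_lost"].cumsum() / pareto["days_lost"].sum()
print(pareto)
```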

Abstraction factor analysis quantifies the surface-level hidden waste in the development process due to shared resource issues. This waste may include issues such as the following:

  • Projects that are furthest behind schedule represent the greatest lost revenue potential because they are complex and have high abstraction factors.
  • Resources are working less than their allocated time on development projects because they are being directed to work on other tasks by other executives and managers in the abstraction factor chain.
  • The best resources with the right skills may be misaligned to the specific needs of projects. The best resources may also be spread so thin across multiple activities that they cannot be effective at any single effort.
  • The harder people work, the more they fall behind, because they are working against and competing for each other’s resources rather than operating in a synchronized, fact-based program flow.
  • The latter phases of development are time compressed, creating the need for shortcuts and workarounds.

With abstraction factor analysis, program managers can see with data how to course correct; the data exposes disruptions to the schedule due to shared resource issues. They can collaborate and smooth out project flows, and prevent larger problems from occurring downstream in the development process. This type of analysis also provides residual input into future resource planning and talent development. (Executives and managers in the abstraction streams operate with the best of intentions, but they can be blind to the impact of their staffing changes on the development portfolio.)

Managing Abstraction Factors

The process of abstraction factor analysis can help manage shared resources more effectively on an ongoing basis. A two-tiered approach to development leadership is recommended:

  1. A steering committee for new product/new software development that conducts a higher-level review of projects relative to revenue plans on a monthly basis and makes strategic commitments to project resources.
  2. A core team of program managers that conducts weekly collaborative reviews of projects, discusses particular shared resource issues and resets the process for the next few weeks.

This is a gross oversimplification of the teams’ complete roles and responsibilities, but the overall approach works. Abstraction factor analysis equips program managers with the ability to chase down gaps and root causes of planned versus actual resource contributions by project on a week-to-week basis and then reset resource commitments. It is even possible to monitor project flows against plans in near real time with project tracking software.

For example, projects and tasks that are on time and on budget are marked by green zone designations. If an individual commitment is coming due, the project tracking software highlights the event in yellow. If a task reaches red status (past due), this initiates an immediate shut-down-and-recovery session (usually a short, stand-up meeting). Over time, data patterns arise about the inefficiencies of shared resource practices, such as the common inflection points (points where alternate direction is given to a resource) that drive the portfolio off track in the interest of a single project or individual’s goal, or the recurring weak abstraction streams (links in the organization chart where resource disruption occurs) in projects.
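A minimal sketch of the green-yellow-red logic follows; the warning window and date handling are assumptions, since the actual rules would live in the project tracking software and the team’s working agreements.

```python
from datetime import date

def task_status(due: date, today: date, warn_days: int = 5) -> str:
    """Classify a committed task: green (on track), yellow (coming due
    within the warning window) or red (past due)."""
    days_remaining = (due - today).days
    if days_remaining < 0:
        return "red"     # past due: trigger a shut-down-and-recovery session
    if days_remaining <= warn_days:
        return "yellow"  # commitment coming due: highlight for review
    return "green"       # on time and on plan

print(task_status(date(2024, 6, 20), date(2024, 6, 10)))  # green
print(task_status(date(2024, 6, 12), date(2024, 6, 10)))  # yellow
print(task_status(date(2024, 6, 8), date(2024, 6, 10)))   # red
```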

Improving Product and Software Development

Quantifying the complex problems of shared resources provides a more scientific approach to taking the best corrective actions. Often, once a problem is quantified, a reorganization and realignment of critical resources is necessary, but this is just the beginning of major improvement opportunities. There are many solutions that have been implemented based on the results of this nonlinear approach. Below is a partial list of improvements implemented in product development and software development.

  • Continuous customer collaboration (requirements process control and stability)
  • Co-location of key technical resources into clustered work groups (dedicated cells)
  • Monitoring actual versus scheduled individual/team time (line of balance)
  • Elimination of inflection points (managing bottlenecks)
  • Paired development practices (shadowing, mistake proofing)
  • Interactive development (problem solving in empowered teams)
  • Smaller bits of development with more frequent reviews (pull and one piece flow)
  • Sustainable development process (synchronization, continuous flow)
  • Digital communication and dashboards (visual management)
  • Structured program management practices (standard work, plan discipline)
  • Frequent design reviews (daily team meetings)
  • Green-yellow-red management practices (flow regulation, proactive problem solving)
  • Development shutdowns and recalibrations (quality at source)
  • Clear metrics and planned versus actual progress (timely closed-loop feedback)
  • Business case reviews, lost revenue due to late projects (ownership)
  • Developing quality into the process versus testing it in (prevention versus detection via inspection/test)
  • Fast-track development, validation and tests (cycle time reduction, risk analysis)

The largest advantage of abstraction factor analysis is that it replaces the informality of managing a complex process with actionable facts and analytics in a formal collaborative leadership environment. The goal is to drive down the abstraction factors, personal leadership interventions and inflection points through total portfolio management, organizational and process improvement, and empowerment of program managers and teams.

Summary

Abstraction factor analysis is not a cure-all for development issues, nor is it a standalone improvement tool. It applies experimental design thinking to a complex transactional process, and it was developed to provide a means of seeing the invisible disruptions of development program flow behind a formal stage-gate process.

Continuous improvement in knowledge-based transactional environments is challenging and complicated. Serious and sustainable improvement in product or software development (and in other organizations) is impossible unless leadership shares a recognition of the need to change. When team members at all management levels accept that their shared resource practices are a root cause of late time to market, improvement becomes much easier to enable.

The nature of product and software development is not steady, and there are dozens of opportunities for improvement beyond the one discussed here. This article addresses only the shared resource issue, albeit a huge opportunity. Overall, these development processes are highly complex and innovative, and will always have some degree of unplanned variation; that is the nature of human multiple-input, multiple-output processes, multiple-input, unknown-output processes, and socially complex processes. Design and software engineers have a maximum capacity (similar to machines); when they are overloaded, they find workarounds and shortcuts, or multitask to the point that everything they work on becomes less efficient.

Restraining these people in a rigid standard development process does not solve the problem, nor does the advice “do your best” or “you have to work harder.” Standard development processes and practices promote efficiency and effectiveness because they provide a baseline to measure progress against. Every project is not identical in tasks, scope, and complexity, however, and there will always be legitimate reasons to veer on and off a highly rigid process, such as fast-tracking development on simpler projects.

Organizations must create the ability to sense-interpret-decide-act-monitor within their development environments as unforeseen events begin to appear. Abstraction factor analysis enables executives and program managers to make better fact-based decisions about the assignment and utilization of shared resources, to visualize workloads under various planned scenarios, and to optimize revenue opportunities through a more intelligent, analytics-based approach to shared resource management.

About the Author