The Myth of the Perfect Process

Like so many things involving risk reduction in safety, trying to implement a safety system through statistical process control is theoretically possible, but practically impossible. Here's why.

Ever since statistical process control (SPC) was conceived at Bell Laboratories by Walter A. Shewhart in the early 1920s, manufacturers have pursued the great white whale that is the perfect process. Along the way, any loss prevention engineer who has tried to implement a safety system using SPC (and later Six Sigma) knows that — like so many things involving risk reduction in safety — it is theoretically possible, but practically impossible.

Statistical process control is challenging because it relies on a process that is “in control,” which means that the process reliably returns a predictable result within calculated upper and lower control limits. The typical profile of a controlled process should follow a normal distribution of central tendency (where the mean, mode, and median of the data are equal), and there must be a virtual absence of “special cause variation”; i.e., the process must be well defined, tightly controlled, and executed largely according to plan. SPC works well when these conditions are met, but many shops still struggle to get their safety processes sufficiently tight.
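The idea of control limits can be sketched in a few lines of code. This is an illustrative X-bar-style check with invented measurements and a simple three-sigma rule; a real control chart would use rational subgroups and published chart constants.

```python
# Minimal sketch of a control-limit check (illustrative only; real SPC
# uses subgroup ranges and published control-chart constants).
from statistics import mean, stdev

def control_limits(samples, sigmas=3):
    """Return (lower, upper) control limits at mean +/- sigmas * stdev."""
    m = mean(samples)
    s = stdev(samples)
    return m - sigmas * s, m + sigmas * s

def out_of_control(samples, limits):
    """Return the points that fall outside the control limits."""
    lo, hi = limits
    return [x for x in samples if x < lo or x > hi]

# A stable process: measurements cluster tightly around 10.0
history = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01]
limits = control_limits(history)

# A new reading showing special-cause variation gets flagged
print(out_of_control(history + [10.50], limits))
```

The in-control history produces no alarms; the single drifting reading does, which is all a control chart really promises.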

One of the reasons for this is simple: The higher the degree of process automation, the more controlled the risk reduction elements within that process. However, mistakes can occur even in automated processes. And the more manual the process — the higher the degree of difficulty in transforming complex human variability into robot-like precision — the more natural variation in the process, because people tend to vary greatly in size, shape, aptitude, skills, and attitude . . . hence less control and more mistakes. There are no absolutely perfect processes.

In a world that operates with no perfect processes, I personally find that few safety professionals really understand Geometric Dimensioning and Tolerancing (GD&T), tolerance stacking, and how variance plays into safety. That’s unfortunate, because every process has tolerances that it must operate within to be acceptable — which applies directly to loss prevention control.

To understand how tolerance stacking relates to safety, let’s use the analogy of a relatively plain bottle of water. The bottle and the bottle cap are each manufactured in separate operations, under different conditions, then joined together in still another distinct operation, the bottling process, under other conditions. The outer rim of the mouth of the bottle must be finished to a very specific size, as must its inner rim, as must the inner rim of the bottle cap, etc. Tolerance is typically expressed as X ± Y. If the bottle cap is too small or too large (i.e., outside the upper or lower control limits of the final specification), it will not fit on the bottle. Similarly, if the bottle itself is too large or too small, it will likewise be rendered useless.

But what if the bottle cap is just a tiny bit big and the bottle is just a tiny bit small? In this case, the tolerances from each operation have “stacked,” meaning they add together to cause more combined variability than the original specification of mechanical fit allowed. That specification identified how big or little either the cap or the bottle could be on its own, but failed to consider what would happen when both the bottle and the cap drift toward opposite limits at once. Now apply this simple example of tolerance stacking to an operation that is far more complex than capping a bottle, say an automobile, where there are tens of thousands of critical tolerances that must be considered.
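To put rough numbers on the bottle-and-cap example, here is a worst-case stack-up calculation. All dimensions and tolerances are invented for illustration; the point is simply that individual tolerances add.

```python
# Worst-case tolerance stack-up for the bottle/cap fit (hypothetical numbers).
# Each dimension is (nominal, tolerance), i.e. nominal +/- tolerance, in mm.
cap_inner = (28.0, 0.15)     # cap inner diameter
bottle_outer = (27.8, 0.15)  # bottle mouth outer diameter

nominal_clearance = cap_inner[0] - bottle_outer[0]     # 0.20 mm by design
stacked_tolerance = cap_inner[1] + bottle_outer[1]     # tolerances add: 0.30 mm

# Worst case: the cap comes out at its smallest, the bottle at its largest.
min_clearance = nominal_clearance - stacked_tolerance
# Negative clearance means interference: the cap will not fit, even though
# each part sits exactly at its own tolerance limit and so is "in spec."
print(round(min_clearance, 2))
```

Each part passes its own inspection, yet the assembly fails — which is exactly why a fit specification has to budget for the whole stack, not each part in isolation.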

Both of these cases deal with a relatively high degree of mechanical automation in their processes, yet both must reduce the risk of product failure through control of tolerance stacking that accumulates from mechanical variation. In other words, the automated processes being performed are not perfect. Now apply this concept to safety in a manual operation that uses people, where the goal is to reduce the risk of accidents through control of tolerance stacking that accumulates from human variation.

Consider a manual process in your shop, which has the best engineers available creating the standard operating procedures for the tasks to be performed in that process. Who exactly did the process have in mind? How much force is acting on the human body when a person performs each job task? How much of this force can an average person withstand before their mind, knees, back, elbows, or other body parts give out? In other words, what cumulative forces act on the body, and what are the tolerances of both the body and the process?
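One way practitioners put numbers on these questions is the revised NIOSH lifting equation, which discounts a 23 kg load constant by multipliers for how far out, how high, how far, and how awkwardly a lift occurs. The sketch below uses the published geometric multipliers but takes the frequency and coupling multipliers as inputs (in practice they come from NIOSH lookup tables); every task value here is invented for illustration.

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm, cm):
    """Recommended Weight Limit (kg) per the revised NIOSH lifting equation.

    h_cm:  horizontal distance of the hands from the ankles
    v_cm:  vertical height of the hands at the lift origin
    d_cm:  vertical travel distance of the lift
    a_deg: asymmetry (twisting) angle in degrees
    fm, cm: frequency and coupling multipliers (from NIOSH tables)
    """
    lc = 23.0                          # load constant, kg
    hm = 25.0 / h_cm                   # horizontal multiplier
    vm = 1 - 0.003 * abs(v_cm - 75)    # vertical multiplier
    dm = 0.82 + 4.5 / d_cm             # distance multiplier
    am = 1 - 0.0032 * a_deg            # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

# Hypothetical task: hands 40 cm out, starting 60 cm up, lifting 50 cm,
# twisting 30 degrees; fm and cm values assumed for illustration.
rwl = niosh_rwl(h_cm=40, v_cm=60, d_cm=50, a_deg=30, fm=0.88, cm=0.95)

# Lifting index: actual load vs. recommended limit; above 1.0 signals
# elevated risk for the assumed 15 kg load.
lifting_index = 15.0 / rwl
print(round(rwl, 1), round(lifting_index, 2))
```

Notice how quickly the multipliers compound: a 23 kg constant shrinks to under 10 kg for this modest task, which is tolerance stacking applied to the human body.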

Some great companies out there perform ergonomic evaluations that can calculate these statistics for you. Other human factors evaluations can test an individual’s abilities to perform at given specifications. Both of these services used together can give you the bottle and bottle cap numbers, figuratively speaking. But is that enough? While it makes sense that doing this kind of analysis would return a safer workplace, remember that this only deals with a process that is under control, where people operate within a normal bell-shaped curve.

I am a strong proponent of both ergonomics and human factors engineering, but let’s not run off half-cocked here. Continuing our bottle and bottle cap analogy, would we be successful by establishing a process, then expecting the equipment that makes the bottles and caps to follow it? Of course not. Yet in too many cases, ergonomic studies are only ordered for processes that have already hurt someone. Even the most sophisticated human factors programs tend to measure a person’s ability to do a job before they are hired (or more likely as part of a post-offer qualification), so the processes that are most likely to hurt someone tend to be addressed only after a human has been harmed — and a person’s on-going ability to perform a job is seldom evaluated.

In effect, even in the best circumstances, we only have a single snapshot of risk in an imperfect process.

Don’t misinterpret what I’m saying here . . . I certainly don’t knock shops whose safety management systems have reached this height of human factors sophistication. What a great start! This is an important accomplishment, but it really doesn’t provide us with as much protection as you might think. And, let’s face it, most of us haven’t come anywhere close to this level of behavioral engineering.

Even on a smaller scale, many of us don’t recognize that we can have all the protection in the world, but if our process is filled with variation — and every process is — then the risk of injuries can still remain great.

Phil La Duke

Phil La Duke is a partner in the Performance Assurance Practice at ERM: Environmental Resources Management, 3352 128th Avenue, Holland, MI 49424, 313-244-2525. You can also follow Phil and reach him on his blogs at


