Six Sigma is a well-used management ‘buzzphrase’. As the name suggests, it has its roots in statistical analysis. Some years ago Motorola, under the direction of Bill Smith, pioneered its application to drive product quality improvement. Their approach was based on the fundamental axiom that for any property varying within the Normal (or Gaussian) distribution, we know what proportion of the values will fall within any given number of standard deviations (Sigma) of the mean.

[Figure: the Normal distribution ‘bell curve’]

In fact:

  • 68.27% of the observations will occur within one standard deviation of the mean.
  • A further 27.18% between one and two standard deviations.
  • A further 4.28% between two and three.
  • . . . and so on.

So 99.73% (that is, 68.27 + 27.18 + 4.28) of all values will fall within three standard deviations either side of the mean. By the time we reach six standard deviations this percentage has risen to 99.9999998027. So only about two values in every thousand million will fall outside the range.
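For anybody who wants to check these figures, the coverage within a given number of standard deviations of the mean follows directly from the error function. The short Python sketch below (the function name is our own) reproduces the percentages quoted above:

```python
import math

def within_k_sigma(k):
    """Probability that a Normally distributed value falls within
    k standard deviations either side of the mean."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    print(f"within {k} sigma: {within_k_sigma(k) * 100:.10f}%")

# Prints (rounded): 68.27% at 1 sigma, 95.45% at 2, 99.73% at 3,
# and 99.9999998027% at 6 sigma.
```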

Thus if we look at this from the other side, so to speak: in a process where we are cutting metal tubes and the standard deviation of the length produced is 0.1mm, then roughly 0.27% of the items will be produced with a length more than 0.3mm from the mean, and only about two items in every thousand million will fall more than 0.6mm from the mean. So if we require a length of 945mm ± 0.3mm, 99.73% will be within tolerance. At one time a reject rate of under 0.3% might have been thought quite good, but Bill Smith went further to highlight that variances at that sort of level were still too high.
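The tube-cutting example can also be checked by straightforward simulation. The sketch below simply draws a large number of lengths from a Normal distribution with the mean, standard deviation and tolerance assumed in the text; it is illustrative, not a model of any real process:

```python
import random

random.seed(1)
MEAN, SIGMA, TOLERANCE = 945.0, 0.1, 0.3   # mm, as in the example above

samples = 1_000_000
rejects = sum(
    1 for _ in range(samples)
    if abs(random.gauss(MEAN, SIGMA) - MEAN) > TOLERANCE
)
# Expect roughly 0.27%, i.e. around 2,700 parts per million.
print(f"reject rate: {rejects / samples:.4%}")
```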

At the same time as the Motorola approach was being developed we were hearing of Japanese companies achieving ‘single figure’ reject rates. While some people initially felt that their own performance levels (of, say, 1% or 2%) matched the Japanese, any such illusions were quickly dispelled by the explanation that the Japanese measurement was in parts per million. Of course one per cent = 10,000ppm, so the Japanese were in fact in a completely different league! Even getting to the point where the standard deviation is one third of the acceptable tolerance still means a failure rate of around 2,700ppm.

Bill Smith actually set a target for Motorola that all measurements within 4.5 standard deviations of the mean should be within tolerance. This gives a failure rate of 3.4ppm (strictly, the single tail beyond 4.5 standard deviations), which is comparable with the results being reported from Japan. He went on to explain that achieving this level of performance requires a process capability of 6 Sigma. In other words, the standard deviation of the process must be one sixth of the tolerance either side of the mean.

Why is this so? Smith argued that process capability is ‘what we can achieve’, which does not mean we will achieve it constantly. Performance drifts over time as tools wear, temperature fluctuates, operators become tired, complacent, or both, and so on. The allowance of 1.5 standard deviations of drift, the ‘6’ required in order to achieve ‘4.5’, is entirely empirical; there is no scientific proof. It is simply an assessment by somebody who was clearly an expert in the field.
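The arithmetic behind the 4.5/6 argument can be laid out in a few lines. The sketch below follows the conventional reading of Smith’s allowance, namely that the process mean may drift by up to 1.5 standard deviations, leaving 4.5 standard deviations between the mean and the nearer tolerance limit in the worst case; it is an illustration of that reading, not a derivation:

```python
import math

def tail_beyond(k):
    """One-sided probability of a Normal value lying more than
    k standard deviations above the mean."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# Worst case after a 1.5 sigma drift on a 6 sigma capable process:
print(f"beyond 4.5 sigma (one side):  {tail_beyond(4.5) * 1e6:.1f} ppm")     # ~3.4 ppm
# Perfectly centred 6 sigma process, both tails:
print(f"beyond 6 sigma (both sides): {2 * tail_beyond(6.0) * 1e6:.4f} ppm")  # ~0.002 ppm
```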

The Statistical Approach

We should qualify all of this with the statement from the introduction: this applies to any property falling within the Normal distribution. The Normal distribution generally applies to outputs which are recognised as the collective result of a number of other variables. The length of a piece of tubing, as in the example above, may be the result of a number of properties of the material, elements of the drive system within the piece of plant, the tooling, and such aspects as air temperature and pressure. As a result, the assumption that the length will fall within such a distribution seems valid, and experience suggests that plotting this parameter would produce the type of ‘bell curve’ shown, with results generally gathering around the mean.

Some people will now be wondering whether the Six Sigma approach can therefore be applied universally to, for example, all business process measurements. Do all fall within this distribution? Quite simply, no.

A fundamental of any statistical approach, of course, is to assign mathematical functionality to the inputs and outputs of the process. In this case:

y = f(x) + ε

Where the output (y), such as the length of the finished item, is a function of:

  • The material or component (x)
  • What we do to it – the function (‘f’)
  • An error or ‘random’ variable (ε)
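As a toy illustration of this relationship (the numbers and the function below are invented purely to show the principle), the spread of the output tightens as the uncontrolled ε term is reduced:

```python
import random
import statistics

random.seed(2)

def process(x, noise_sigma):
    """A made-up f(x) plus a random error term epsilon."""
    f_x = 945.0 + 0.5 * (x - 10.0)            # the controlled part, f(x)
    epsilon = random.gauss(0.0, noise_sigma)  # the part we try to eliminate
    return f_x + epsilon

for noise in (0.10, 0.02):
    outputs = [process(10.0, noise) for _ in range(100_000)]
    print(f"noise sigma {noise}: output sd = {statistics.stdev(outputs):.3f}")
```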

A question we should ask ourselves is “is anything really ‘random’?” Operations managers among our readers will probably agree that there is always a reason for variation. After all, even tossing a coin is not a ‘random’ process. The coin comes down on heads or tails not because of pure chance but because of the physics of the exercise: the force with which the coin is projected, the rotational force or torque imparted by the flick of a thumbnail, even matters such as atmospheric pressure and humidity all play their part. These last two may be recognised by some as factors in their own operations. A damp environment can lead to quality problems in many forms of manufacturing, but this cannot in any way be described as random; if humidity is a problem we can address it.

So we can control materials and the process and, by doing so, we should be able to control the output. One lesson the Japanese taught us in their development of Quality Assurance (as opposed to Quality Control) is that of managing processes to prevent errors. This took many forms – improved maintenance of plant, better training of operators, quality circles in which multi-disciplinary teams addressed all the causes of potential problems, sourcing based on partnership with suppliers who were as focused on problem-free manufacturing as the people buying their components and so on.

Controlling the nature of the process (‘f’) and eliminating the random elements (‘ε’) was thus central to Bill Smith’s approach and undoubtedly delivered success. Of course there are many elements of quality improvement and, in fact, many aspects to Six Sigma these days. At the heart of these sits DMAIC, a structured approach to addressing problems based on clearly defined steps:

[Figure: the DMAIC cycle (Define, Measure, Analyse, Improve, Control)]

“Six Sigma Performance”

We do hear this term sometimes and people wonder what it might mean.

Of course, we all achieve six sigma performance 99.9999998027% of the time for processes whose outputs fall within the Normal distribution.

That is, this overwhelming majority of outputs will fall within six standard deviations of the mean. Whether the standard deviation of the process is sufficiently controlled to provide the quality levels required by the business is another matter. The term has now been ‘branded’ to mean something else: authors, lecturers and, yes, management consultants often appear to need to sell ‘products’ as opposed to their own services, and this is just one example among many.

Caveat

Some people promoting the Six Sigma approach seem now to have lost sight of the fact that it is simply one of many powerful tools available to the business change agent. Indeed they now bundle other tools from within the Lean framework (e.g. poka yoke devices, design for manufacturing and even the cultural aspects of kaizen) into an offering they market as ‘Lean Six Sigma’ as if it were one entity!

The cynics among the MLG team have even been heard to speculate whether ‘Lean Six Sigma’ wasn’t simply a creation of the George Bush Jnr administration, resentful of the fact that Western manufacturing had been led into the modern world by Taiichi Ohno and the other Japanese fathers of best practice. If Bill Smith’s statistical approach could be expanded and proffered as the overarching concept to all the other ideas then perhaps President Bush could sleep more soundly at night as the leader of the nation whose industry led the world.

Of course in real terms this isn’t a problem as long as the key players in a business (management who must give direction and the improvement teams who must make change happen) understand all the tools available and can make best use of the ones most relevant to the challenges faced.