The Evolution of Quality
As a highly generalized statement, it was around the beginning of the 20th century that the industrial revolution began to reach critical mass as a pervasive tour de force in shaping global consciousness, as well as in shaping a new economy. In part, this new economy was based on a number of revolutionary ideas, among which were several new ways of thinking about quality. The idea of quality was no longer rooted in the skills of craftsmanship but had become inextricably tied to the idea of factory yield.
Once, the craftsman created quality with the art of his mind, heart and hands. But then the science of progress and technology enabled the creation of relatively low-cost, quality products via mass production. The very nature of economic activity had shifted from the personal to the systematic. Specialized skill gave way to compartmentalized science. Handcrafted goods gave way to machined products. In short, the era of craftsmanship was eclipsed by the era of managementship and engineering.
Birth of a Management System
As mass production and the consumer society took hold, the ideal of quality was realized as a product of scientific discipline, not of personal skill. Advances in engineering, management, statistics, communications and transportation - along with the proliferation of machines and the explosion of demand - created the shift from a highly subjective definition of quality to one that was markedly more objective and defined.
Inasmuch as the idea of science was to systematize knowledge (derived from quantitative observation and experimentation) to determine the nature of nature, the idea of quality, at this time, was to systematize the way organizations engaged in determining and realizing what was "good." The idea of quality had evolved in lockstep with the idea of science such that it was no longer grounded in the subjectivity and skill of the artisan. The notion of quality, as it were, took on the characteristics of the new age by integrating, practicing and embodying the tenets of science.
Along with engineering and production, quality evolved into a science that had to be managed due to its organizational rather than personal nature. As organizational life evolved, we were clearly still concerned with the quality of products, if only in a rudimentary way. But we also became concerned with the quality of cost, the quality of time, the quality of actions and tasks — on a mass scale.
No wonder Frederick Taylor – so well known for his innovative contributions to the field of time and motion measurement – was principally focused on productivity, or how to improve the "yield" of time, money and resources - also called "efficiency." For an industrial business to be successful, it had to yield more products at a lower cost and somewhat higher quality to keep pace with increasing customer and shareholder demand.
The Science of Quality
Around this same general time, the work of Walter Shewhart began to emerge in the context of achieving a priori control over product quality through the application of mathematical statistics. In many ways, Shewhart's work was a clear signal that thought was evolving as the economy shifted from the dominance of the craftsman to the dominance of the machine. Shewhart's basic premise was that, through scientific measurement and statistics, we can control the product by controlling the variations that emanate from the process. By doing this, we can minimize the production-related errors that lead to "non-quality" outcomes and, thereby, improve the yield of the production system and lower operational costs.
Such a scientific framework stood in stark contrast to the modus operandi of the craftsman, whose errors were accepted as part of the process. To err is human, so went our mental programming, and it wasn't unusual for a craftsman to scrap an entire job and begin anew when its quality suffered. Granted, the better the craftsman was, the less this occurred. But whether quality was good or bad, the craftsman was its source – and the target of blame when quality fell short of rightful expectation.
Building on the work of Shewhart, Dr. W. Edwards Deming more deeply challenged many of the quality-related beliefs, practices and methods of the era. As the worn but important story goes, Deming traveled to Japan with the charge of helping it rebuild its economy after the devastation of World War II. The visionary that he was, Deming saw the worldwide need for making the realization of quality more organized and manageable - systematizing the way corporations made that which was good.
In a nutshell, it was the responsibility of management, not workers, to ensure that products were made and delivered according to specification, within budget and on time. It was through managementship - the judicious use of control toward the attainment of an end - that Deming believed a company could ensure the quality of its products, the quality of its work, its processes, its costs, its timing and so on. Controls, tools, organization, coordination - these became the new guidewords for ensuring quality outcomes at the process, operations and business levels of a corporation.
Parallel to Deming, corporations engaged in the disciplined practice of end-of-line testing and inspection and in-field testing, and began, however tentatively, to consider the idea and utility of customer feedback - a relatively foreign concept at the time. While Deming's message was one of a priori control (defect prevention), the overriding practice of the day was one of a posteriori control (detect and fix). American factories were operating at full capacity to meet pent-up demand from the years of war and, in that environment, Deming's message was only partially heard and partly heeded.
While Deming and others were proliferating the tenets of quality management, most corporations were operating by the principle of build and ship — meet production quotas at any cost. After all, the U.S. was the world's benchmark producer, so why would a highly profitable business want to expend the effort and resources required to improve quality? The cost of scrap and rework was simply not perceived to be significant in light of boom-time demand. The accounting systems were not designed to isolate, capture, consolidate and report many of the costs associated with poor quality, so when the cost of poor quality was examined, the number looked relatively small, insignificant and quite tolerable. Besides, such costs were simply viewed as a normal component of doing business, the unavoidable consequences of capitalism and mass production.
On top of this, the science of quality was still in its infancy; it was more a vision and a theory than it was a sophisticated and proven practice underpinned by demonstrated success, globally speaking. Given this, the task of transmitting quality improvement principles and practices to the masses was too cumbersome and daunting. Still further, while Deming's message was clear, there was no do-or-die imperative to improve quality on this side of the Pacific. Nor was there a structured, repeatable, mature and readily available methodology for solving problems and preventing defects.
Looking back, it's easy to understand that the very idea and aim of quality management was a direct outgrowth of the pre-scientific era. For hundreds, even thousands of years, the human mind had been conditioned to think in terms of a Platonic ideal - quite literally a god-given sensibility for what was good, beautiful and useful. Inasmuch as science was systematizing the ideal of truth, the likes of Deming were systematizing the manifestation of truth in tangible form, fit and function.
Connecting Yield and Quality
But while Deming was preaching higher quality, business was demanding higher yield. The compromise position was to practice quality management in a yield-driven sort of way. As long as companies detected and fixed defective parts and products, they could ensure their processes yielded their intended benefit. It was the simple idea that process yield is a function of output over input – regardless of how much rework had to be done to make the ratio 1.0, or 100 percent. This was the fallacy in the reasoning: as long as yield was high, product quality had to be optimum. Interestingly, virtually no meaningful or scientific consideration was given to the idea of first-time process yield - executing each operation right the first time. It was simply believed that if the product came out right (through continual detection and repair), then the process must be right too.
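To see the fallacy in concrete terms, consider a minimal sketch, using invented numbers, of how a process can report a perfect final yield while concealing a much lower first-time yield:

```python
# Hypothetical numbers, invented purely for illustration.
units_started = 100   # units entering the process
units_reworked = 30   # units that failed inspection and were repaired
units_shipped = 100   # every unit eventually conforms and ships

final_yield = units_shipped / units_started
first_time_yield = (units_started - units_reworked) / units_started

print(f"Final yield:      {final_yield:.0%}")       # 100% - looks perfect
print(f"First-time yield: {first_time_yield:.0%}")  # 70% - the hidden reality
```

By the output-over-input measure, this process is flawless; yet nearly a third of its work was done twice.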
It is astounding, knowing what we know today about "hidden processes," how many corporate executives still believe that the first-time yield of an operation is given by the simple function of final output over input. The existence and impact of invisible processes eludes them to the extent that they believe scrapped units are their only source of lost yield. In reality, any business-, operations- or process-level transformation that does not yield "entitlement value" cannot be considered "quality yield."
It is important to understand the concept of throughput yield as the statistical probability that any given value opportunity produced at a particular step in the process will conform to its respective performance standard. It is the statistical likelihood of "doing it right the first time" at a given point in the process. It is also necessary to understand the concept of rolled-throughput yield as the statistical probability of being able to fully create all the critical-to-quality characteristics (CTQs) of a product or service – error free, the first time through. Expressed differently, rolled-throughput yield is the statistical probability, based on performance data, of doing all things right at each step across a complete series of process steps.
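As a simple illustration of the arithmetic, the sketch below assumes a hypothetical five-step process with invented per-step throughput yields; rolled-throughput yield is the product of the individual step yields:

```python
from math import prod  # Python 3.8+

# Hypothetical per-step throughput yields: the probability that each
# step delivers its CTQs right the first time, based on performance data.
step_yields = [0.98, 0.95, 0.99, 0.97, 0.96]

# Rolled-throughput yield: the probability of passing every step
# error free, the first time through - the product of the step yields.
rty = prod(step_yields)

print(f"Rolled-throughput yield: {rty:.1%}")  # about 85.8%
```

Even though no single step falls below 95 percent, the chance of a unit making it through all five steps defect free the first time is under 86 percent - the multiplicative arithmetic that a final-yield view conceals.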
If supplementary processes or activities are needed or otherwise engaged to provide a quality outcome, then the final yield is not quality yield — it is just "final yield." The idea of quality yield can only be realized when the final yield is the result of executing all the necessary transformations right the first time. Only when this is accomplished will the intrinsic or "entitlement" value be realized. The augmentation of a process to realize a quality outcome (by way of repair) only reduces the overall value of that outcome, because additional cost and time are required. Hence, the producer and/or customer ultimately bear a negative consequence, even though the product conforms to its performance standards.
In other words, the quality of a business is a function of first-time transformational yield - not final product yield. It is the extent to which a corporation can conduct its millions of daily transformations right the first time that determines the extent to which it produces full value for customers and shareholders. It is in this broad sense that we define transformational yield as the confidence with which a corporation can enact all of its critical value-based activities right the first time. Only when this aim is realized can we say with any degree of certainty that the corporation is a quality business.
The principle we learn from this is that we can realize better business yield only through the systemic and systematic improvement of the many value-based transformations that are regularly performed at all levels of a corporation. As this occurs, the quality of business increases.
A Burning Platform for Change
In an almost religious way, Deming exhorted us to focus on product quality because he knew there was an inherent relationship between higher quality and lower costs, between better quality and more satisfied customers, between fewer defects and increased yield. What we lacked at the time (1950 - 1980) was the proper construct and language for closing the gap between the aims of quality and the needs of business. Furthermore, American companies had no real burning platform for change, as they enjoyed their respective roles and places in the world's greatest industrial machine. They, along with the general population, were fatter and happier than they ever could have imagined.
That burning platform emerged, however, as Japan rebuilt itself into the world's second-largest and one of its most powerful economies by the 1970s. Most of us are familiar with the rise of Japanese global market share in cars, televisions, steel and consumer electronics during the post-war era. In turn, this translated into America's decline in global market share in these same products during this same time period.
Although the Americans invented many of the key theories and practices of quality management, it wasn't until 1980 that the quality movement reached the mainstream, in the form of an NBC documentary titled If Japan Can... Why Can't We? It was a 90-minute, hard-hitting program that examined the achievements of Japanese and American industry and "that helped us start the introspection, all of the soul searching and rethinking of how we wanted to function." (Gerald, 1990)
The NBC production galvanized a nation and sparked a decade of searching for answers to America's productivity and quality problems. As of 1990, more than 6,000 copies of the documentary were ordered, 4,000 more than the next most popular NBC video program, the Frost/Nixon interview. In short, the program solidified and institutionalized the growing perception that America's business problems were a direct result of its quality problems.
Perhaps this explains why the nation's business leaders began to look to the likes of Dr. Deming and Dr. Juran for guidance on how to systematically improve quality. As Xerox, Ford, 3M, General Motors, Motorola, Florida Power & Light - and an enormous host of others - began to adopt the ideas and methods of continuous improvement, the quality movement quickly reached critical mass. It is absolutely not a stretch to say that, by 1990, almost every company in the Fortune 500 had initiated a quality program in one form or another.
Perhaps the quintessential manifestation of this drive was when Motorola's chief executive Bob Galvin placed quality at the very top of the agenda for every board of directors meeting. Quality was number one for Motorola, and all else would follow. Galvin embodied at the top what so many quality practitioners were practicing at the bottom: even though we don't know exactly how quality impacts our business, we know it is good and, therefore, we will embrace it on the faith that every defect and error we prevent will show up on our income statement.
We mentioned that Motorola had already won the Baldrige Award in 1988, in no small part due to Six Sigma, and it had become known among other quality aspirants as a benchmark. The company had perfected its processes to the point of producing no more than 3.4 defects per million opportunities for such defects. Motorola's Bandit pager was produced so reliably that it became more cost effective to replace rather than repair in the extremely rare event of a failure.
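For reference, the 3.4 figure reflects the standard defects-per-million-opportunities (DPMO) arithmetic. A minimal sketch, with invented counts, shows how the measure is computed:

```python
# Standard DPMO arithmetic, shown with invented counts for illustration.
defects = 17                  # hypothetical defects observed
units = 10_000                # hypothetical units produced
opportunities_per_unit = 5    # hypothetical defect opportunities per unit

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO: {dpmo:.1f}")    # 340.0; six sigma performance targets 3.4
```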
But Motorola's process-level success was also, paradoxically, a proxy for the failure of TQM. As companies heard the story of Six Sigma via the Baldrige requirement to "spread the word," they began to realize it was a superior method of quality improvement. Nevertheless, their main preoccupation was with winning a Baldrige, a PR maneuver that seemed much easier than implementing the rigors of Six Sigma. As more and more companies won the Baldrige, more and more doubts emerged, because there was a growing sense that quality was not necessarily correlated with business performance.
The Failure of TQM
Some companies that won the Baldrige Award were not perceived by the public to make "quality" products or to be business exemplars. Furthermore, when the financial performance of Baldrige winners became the subject of scrutiny, the conclusion was that they did not necessarily perform better than non-award-winning companies. Even among companies that tried to implement the Baldrige criteria without chasing the award, executive management formed the impression that such criteria, and the initiatives designed to realize them, were simply not powerful enough to move the needle of corporate business performance.
A 1996 study, the results of which were recently published in Quality Progress (Berquist, 1999), strongly indicates that the impact of TQM practices may not be nearly as significant as some think. After examining performance data from Baldrige and state quality award winners, applicants and non-applicants, the study's authors said they could not "conclusively determine whether quality award winning companies perform better than others."
Even before this, TQM skepticism was already building. In a 1994 Quality Digest article titled Is TQM Dead?, editor Scott Madison Paton cited study after study that brought the viability of TQM into serious question. The article pointed out that: "Only 20 percent of Fortune 500 companies are satisfied with the results of their TQM processes, according to a 1992 Rath & Strong survey." (Paton, 1994) "Florida Power & Light remains the only U.S. company to have won Japan's coveted Deming Prize. Its winning strategy was largely dismantled after complaints of excessive bureaucracy and red tape."
Paton continued: "A survey of 300 electronics companies by the American Electronics Association found 73 percent had quality programs in place, but, of these, 63 percent said they had failed to improve quality by even as much as 10 percent." Paton went on to indicate that: "A study of 30 quality programs by McKinsey & Co. found that two-thirds of them had stalled or fallen short of yielding real improvements."
Then Paton made a simple but profound point: "The 'balkanization' of TQM spreads on an almost daily basis as TQM splinters into ever-smaller spheres of influence like ISO 9000 and Reengineering..." Then, after asking five noted quality experts for their respective definitions of TQM, Paton reports that he got five different answers. To this he said: "TQM is a philosophy, not a science." In that sense, Paton concluded, TQM wasn't dead. Its failure just proved "that bad management is still alive and kicking."
Many couldn't agree more: the era of TQM can be characterized as a period of intellectual divergence in which a theoretical connection between quality and business performance was conceptually established. In practical terms, however, no such connection was made outside the confines of isolated organizational and operational pockets. Quality was good, but it was not necessarily good for all. Quality was free, but it wasn't always so free that it impacted the income statement. Quality was focused, but not so much that it became a tour de force in the overall performance of a global corporation.
Clearly, something was missing, and the management community moved in rapid fashion to fill the void. The helium was released from the balloon of business performance, and something had to keep it afloat. In chronological terms, this was the time period of the early nineties, when the US economy was faltering into recession, and when it became clear that the church of quality had failed to save the day. TQM was undeniably good for detecting, fixing and even preventing defects. But was it a system for managing overall business performance and achieving economic breakthrough on a global scale? Could it help create a quality business?
In the minds of senior executives, TQM was squarely about working on product quality problems, not about managing the business enterprise. No one would deny the need for or importance of the quality function, as it was good for operations, good for its perceptual value and good in the eyes of the customer. Like ISO 9000, TQM was a ticket to punch in the eyes of senior management. It had reached its point of diminishing return in terms of providing the impetus and ingenuity needed for solving the business problems of the day.
New Programs Abound
From this perspective, it is easy to understand why such initiatives as reengineering, restructuring and downsizing emerged as the new alternatives for management attention. These types of initiatives provided a lot more opportunity to cut costs and improve productivity. Simply stated, reducing defects and improving "conformance to requirements" was not viewed as a strategy for business success – a way to capture market share and produce greater shareholder value. There was a much broader spectrum of economics that the quality leaders and technicians simply overlooked, or were not trained, able or empowered to address.
Maybe these new initiatives, in combination with traditional quality approaches, would be powerful enough to achieve business breakthrough. In the case of reengineering, at least, its inventor, Michael Hammer, made this claim with regard to revamping major cross-functional processes through the combination of new technology and a mentality of "tossing aside old systems and starting over." (Hammer, 1993)
Naturally, Hammer was not shy in his 1993 book, Reengineering the Corporation, about pointing out that his business improvement thrust yielded a 90 percent reduction in cycle time and a 100 percent improvement in productivity in IBM Credit's credit issuance process. He cites other dramatic improvements in headcount reduction at a Ford department, as well as dramatic cycle time and cost reductions in Kodak's new product development process.
We note that Hammer made a significant contribution to management thinking and practice with his emphasis on discontinuous change - breakthrough as opposed to continuous improvement. In this sense, he also caught the attention of the quality community, which had for so long enjoyed its run as the only game in town. Now, breakthrough was the game, and slow, incremental improvement schemes were fully out of vogue in the minds and on the dockets of business leaders.
Yet in another sense, Hammer's new approach was very much like the quality initiatives he overshadowed: his reengineering success was focused very much on isolated pockets of activity within a business. In this sense, as with TQM, reengineering lacked the scope and depth of reach to transform a corporation and set it on a new performance continuum. While reengineering disturbed certain parcels of business activity, it did not disturb the whole. It simply didn't get to the control function of a corporation.
Nevertheless, in terms of the popular business consciousness, a philosophical shift was underway. Business improvement was not so much a function of quality improvement as quality improvement was a function of business improvement. In other words, if you focus on improving the business, you can't escape the requirement to improve quality in all facets. But if you focus on improving quality, you can get away with doing so to the exclusion of improving the business.
At the risk of oversimplification, the era of TQM simply had its Ys and Xs mixed up, where Y is the output and X represents the inputs. Improvement initiatives were more symptomatically driven than they were problematically driven. Furthermore, the priorities and practical expression of TQM were largely disconnected from the priorities and practicalities of business. The time was more than ripe to bring the two trains of thought together — to pragmatically merge the best of quality with the best of business.