Design, innovation, agility
This paper was prepared as the opening address to the Design Research Society’s conference, Quantum Leap—Managing New Product Innovation, held at the University of Central England in Birmingham, September 1998. Professor Bruce Archer is President of the Design Research Society.
In welcoming you to this Conference, let me try to interpret its title for a moment. In their Introduction to the preprint papers, the Conference organisers, Bob Jerrard, Myfanwy Trueman and Roger Newport, equate the term ‘Quantum Leap’ with the term ‘a breakthrough in product development’. They ask the Conference to address questions such as: ‘How do successful companies use design in order to reduce risk and uncertainty in developing innovative new products?’ and ‘To what extent can design be used creatively to make a major breakthrough or quantum leap in successful innovation practice?’ These questions clearly lie in the domain of design practice and design management practice. Let me, in my role as President of the Design Research Society, also try to place them in the context of Design Research, and in the climate of ultra-fast moving product development. I hasten to add that Bob, Myfanwy and Roger are also very active members of the Design Research Society, so that what I am about to say is as familiar to them as it is to me.
The term Quantum Leap, borrowed from particle physics, implies, for me, a jump to a higher energy level. It also contains something of the notion of a ‘Paradigm Shift’ as described by Thomas Kuhn in his famous book, The structure of scientific revolutions, published in 1962. I shall be saying more about paradigm shifts as my argument develops. The second part of the title of this conference—Managing New Product Innovation—indicates the field in which this Quantum Leap, or paradigm shift, has to occur or is occurring. Actually, the term New in the title is tautological. Robert L Charpie defined the term ‘Innovation’ for the US Department of Commerce in his influential report Technological innovation: Its environment and management in 1967, as ‘The successful bringing to market of new or improved products, processes or services’. So the term ‘New’ is contained within the term ‘Innovation’. Thus, we are asked to address the issues in the management of the task of bringing about paradigm shifts in the design and marketing of new or improved products. This is the very stuff of the Design Methods and Design Research movements. Let us remind ourselves of the way in which it has developed. The first Design Methods Conference was held in London in 1962, thirty-six years ago. The second was held here, in Birmingham, three years later, in 1965. Unusually, the Organising Committee of the 1962 conference, under the chairmanship of John Page, then of the Department of Architecture, Sheffield University, and the secretaryship of Peter Slann of the Department of Aeronautics at Imperial College London, remained in being for several years after the 1962 Conference had finished. It was this 1962 Organising Committee that announced in March 1966 a decision to form a Design Research Society. We called a First Meeting of Founder Members for 2 February 1966, later postponing it to 27 April 1966. So the Design Research Society was formally founded on 27 April 1966. The list of the names of the 167 Founder Members reads like a roll call of the great and the good of the design methods movement of the 1960s and 1970s.

www.elsevier.com/locate/destud 0142-694X/99 $—see front matter Design Studies 20 (1999) 565–571 PII: S0142-694X(99)00025-3 © 1999 Published by Elsevier Science Ltd All rights reserved Printed in Great Britain
The systems approach
The interesting thing about the membership of the Design Research Society, then as now, is the eclecticism of its composition. It included, and includes today, architects, computer scientists, engineers, ergonomists, industrial designers, planners, cognitive psychologists and systems analysts. Its membership was then, and is now, drawn from industries as diverse as advertising, aerospace, building construction, civil engineering, computing, consumer products, health care, ship building and textiles. The driving idea behind the formation of the Society was an interest in the things these practitioners had in common, rather than the things that distinguished them. We felt that the cognitive processes of matching a perceived need with a proposed configuration were the same, or similar, whatever the field of application. This notion in 1966 was, in part, a reflection of the intense interest that had been generated during the late ’50s and early ’60s by the release of historical details about the successes and failures of cross-disciplinary teamwork in war-time enterprises of the Second World War. The optimisation of food production and distribution; the development of weapons systems; the search for means of defence against the enemy’s weapon systems; the development of new materials; the formulation of new approaches to war-time logistics, notably the organisation of convoys of shipping across the oceans and the hunt for U-boats; the development of computer systems; even the search for strategies for the conduct of military operations: these had resulted in the evolution of a new discipline, Operational Research, originated on this side of the Atlantic by Professor P M S Blackett, who was Chief Scientific Adviser to the government of the day in 1941. Operational Research, we learned, was characterised by the cross-disciplinary collaboration of teams of scientists, engineers and others, of surprisingly diverse backgrounds, in attempts to solve pressing, practical wartime problems. Importantly, the experience of the Operational Research teams also consolidated a new approach, the Systems Approach, to the analysis of problems. The systems idea is usually credited to Ludwig von Bertalanffy, a biologist, who published a book, The organism considered as a physical system, in 1940. He later went on to publish another book, General systems theory, in 1956, but many later systems analysts had misgivings about this attempt at postulating a generalisable theory of systems. Ken Boulding tried to do better in his General systems theory—the skeleton of a science, also published in 1956, but it was not until people such as Herbert Simon published,
for example, The new science of management decision in 1960, that it was generally accepted that systems analysis was not so much an explanatory theory as a useful methodology. C West Churchman put it all in a nutshell in The systems approach, 1968. The Design Methods Movement was a child of the post-Operational Research era, and systems analysis dominated our early thinking.
Conjectures and refutations
But if operational research was the mother of Design Research, Karl Popper (spiritually, at least) was its father. In 1959, just when the Design Methods Movement was beginning to establish itself, Karl Popper’s seminal book The logic of scientific discovery was translated into English and published in London. The original had been written in German. This was followed in 1963, the year after the first Design Methods Conference, by his even more influential book, written originally in English this time, entitled Conjectures and refutations. The essence of Karl Popper’s message in Conjectures and refutations was that we should reject the old Baconian principle that the true scientist should arrive at a scientific theory through inductive reasoning. He argued that we must accept, instead, that most, if not all, scientific discovery is based on the positing of an insightful tentative explanation about the meaning of the evidence. This, said Popper, was followed by an exploration of the implications of such an explanation. But most importantly, argued Popper, this had to take the form of serious, comprehensive, systematic attempts to find any flaws in the theory posited. You can spring from your bath shouting ‘Eureka!’, or wake up in the morning with a conception of relativity, or visualise the structure of the double helix. You do not, said Popper, have to prove whence these conceptions came. What you do have to do is apply every test you can think of to discover any flaws in, or
limitations to, your proposition. Hence Popper’s title Conjectures and refutations. Or, to put it more precisely, real science proceeds by the postulating of informed conjectures, followed by systematic attempts at the refutation of these conjectures. The impact on the infant discipline of Design Methodology was immense. Conjecture, exploration and refutation (or, more popularly, proposition, development and test) is exactly what designers do! Design activity was scientifically respectable! More than that, in the light of the Popperian revolution, we can assert that research can just as properly be conducted through the medium of design activity itself as by orthodox scientific enquiry! The Design Methods Movement had matured into the new discipline of Design Research.
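The conjecture-and-refutation cycle described above can be caricatured as a generate-and-test loop. The following sketch is purely illustrative: the design problem (choosing a beam depth), the thresholds, and all function names are invented, not drawn from Popper or from any design method in the text.

```python
import random

def conjecture():
    """Posit a candidate design parameter (an informed guess)."""
    return random.uniform(10.0, 100.0)   # hypothetical beam depth, mm

def attempt_refutation(depth_mm):
    """Try hard to find flaws: return a list of failed checks.
    An empty list means 'not yet refuted', never 'proven'."""
    failures = []
    if depth_mm < 40.0:
        failures.append("deflects too much under load")
    if depth_mm > 80.0:
        failures.append("too heavy for the housing")
    return failures

def design_by_conjecture_and_refutation(max_tries=1000):
    """Propose, then test; keep only a conjecture that survives
    every refutation attempt we can currently throw at it."""
    for _ in range(max_tries):
        candidate = conjecture()
        if not attempt_refutation(candidate):
            return candidate
    raise RuntimeError("no candidate survived refutation")

depth = design_by_conjecture_and_refutation()
assert 40.0 <= depth <= 80.0
```

The point of the sketch is the asymmetry Popper insisted on: the loop never proves a candidate correct, it merely retains one that has so far resisted refutation.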
Both this new insight into the nature and status of design activity, and the Popperian revolution concerning the nature and status of scientiﬁc activity which brought it about, are examples of ‘Paradigm Shifts’ as described by Thomas Kuhn in The structure of scientiﬁc revolutions published, not at all coincidentally, in 1962, the year in which the ﬁrst Design Methods conference was held. In The structure of scientiﬁc revolutions Kuhn acknowledged that throughout the history of science the great majority of workers in any ﬁeld accepted that, if they were to be taken seriously by their peers, they had to present their evidence and conduct their arguments in ways that were considered respectable according to the canons of their time. But Kuhn also pointed out that, every now and again, someone would publish a serious scientiﬁc account that offered a new and revolutionary explanation for existing phenomena. Or perhaps the new explanation would demonstrate a new manner of reasoning. In such an event, scientists everywhere would look again at their own data, and test the consequences of using this new
manner of reasoning. If it seemed to work, then soon everyone would recognise this as a new canon of good practice. Kuhn called this a ‘paradigm shift’. Darwin’s The origin of species caused such a paradigm shift, as did Newton’s Laws of mechanics before him and Einstein’s Theory of relativity after him. Let us look at some of the paradigm shifts that have occurred in the short history of Design Research. We have already come across the systems approach, which galvanised and transformed the academic side of the Design Methods Movement in its earliest years. In reality, this only became a practicable aid for professional design activity when Peter Checkland published his Techniques in ‘soft systems’ practice in 1979. Another, lesser, paradigm shift came with the growing recognition in the scientific world of ‘Action Research’, usefully summarised by M Foster in An introduction to the theory and practice of action research in 1972. Action Research recognises that sometimes it is impracticable for the investigator to maintain the traditional stance of objectivity and non-intervention. In some circumstances, the investigator (say, a surgeon) may of necessity be an actor in the situation (a need for surgical intervention in an unusual case) under investigation. In Action Research the investigator takes some action in and on the real world in order to change something and thereby to learn something about it. A great deal of real-world design activity takes the form of Action Research and this experience represents a useful bridge between design practice and design scholarship.
There was an even more signiﬁcant paradigm shift that also occurred in the 1970s. After the end of the Second World War, the US government of the day adopted a far-sighted and magnanimous policy, known as the Marshall Plan, calculated to help rebuild the economies of the
war-ravaged countries of Europe and the Paciﬁc. As elsewhere, Japanese manufacturing industry was offered the opportunity to nominate any expert advice they chose to receive. It is hard to remember that before the Second World War, Japanese goods were famous for being shoddy. Not surprisingly under those circumstances, the Japanese industry bosses therefore asked, amongst other things, for expert advice on product quality control. The army generals and the civil servants who were running the Marshall Plan turned to the American academic world to recommend a quality control expert. They were offered the name of the American production engineering theorist, Professor W Edwards Deming. What the generals and civil servants did not appreciate was that to a production engineer the word quality has a meaning different from that to which the man in the street is accustomed. To a production engineer, the word quality means the degree of adherence of an item to its speciﬁcation. Or more precisely, the amount of variation between one exemplar of the item and the next. Thus high quality means there is very, very little variation between successive items. This has little to do with the speciﬁcation level of the item concerned. In quality control terms, one can choose to manufacture high quality (that is, very highly consistent) exemplars of a product, whether that product is at a high, luxury speciﬁcation or at a low, cheap and nasty speciﬁcation. Similarly, one can choose to manufacture low quality (that is, very variable) exemplars of a product, regardless of whether it is at a luxury speciﬁcation or at a cheap and nasty speciﬁcation. Deming’s pet theory was that, contrary to the beliefs then current world-wide in industrial practice, overall manufacturing costs could be reduced rather than increased by working to closer and closer dimensional and materials tolerances. No one in manufacturing industry believed him. As a budding young mechanical
engineer, I was certainly brought up in the Ford–Taylor tradition that, for the sake of economy, one should work to the coarsest tolerances one could reasonably get away with. The first piece of writing I ever had published, in 1957 I think it was, extolled the advantages of specifying coarse tolerances. In Deming’s revolutionary view, if all parts production was geared to the closest possible tolerances, and if all parts were thus guaranteed to be very highly consistent, there was much money to be saved by omitting inter-process inspection, reducing inter-process piece-part stocks and eventually by automating assembly processes. Not only that, according to Deming, one could then guarantee the performance and durability of the assembled finished product. The Japanese industrialists of the time were predisposed to believe him. It chimed with the Japanese approach to the arts. Consequently, within the framework of the Marshall Plan, one Japanese industry after another adopted the Deming Quality Assurance principles and found that his theories did indeed deliver lower costs and better performance. Then, with Deming’s direct assistance, Japanese firms pushed Quality Assurance downstream to the component supplier phase, to achieve yet more cost savings and yet shorter lead-times. This combination of price, quality and speed of delivery revolutionised Japan’s position in world markets, and very nearly wiped out whole industries in the West. Eventually, of course, European and North American firms overcame their prejudices and they, too, adopted the Quality Assurance principle. In the meanwhile, flushed with their success, Japanese manufacturers extended the Deming theory by pushing Quality Assurance principles upstream to the design phase, and went on from there to develop a system of market-led product innovation techniques that went to the heart of delivering to consumers the product attributes they really valued.
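The production engineer's sense of 'quality' described above, consistency between successive items rather than specification level, can be put in numerical terms. The sketch below is illustrative only: the part, the measurements and the figures are all invented, and the coefficient of variation is simply one convenient measure of spread.

```python
import statistics

# Hypothetical shaft diameters (mm) from two production runs.
# The "luxury" run has a grander nominal specification but wide
# variation; the "budget" run has a humbler specification but
# very tight tolerances.
luxury_run = [25.10, 24.85, 25.30, 24.70, 25.20]
budget_run = [10.01, 10.00, 9.99, 10.00, 10.01]

def consistency(measurements):
    """Coefficient of variation: spread relative to the mean.
    In the production engineer's vocabulary, a lower value
    means a higher-quality (more consistent) run."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Despite its lower specification, the budget run is the
# higher-quality run in Deming's sense of the word.
assert consistency(budget_run) < consistency(luxury_run)
```

This is the distinction the generals and civil servants missed: the quality-control question is how little the items vary, not how luxurious the specification is.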
A means for achieving ever-shortening leadtimes was the introduction of the practice called ‘Concurrent Engineering’. Concurrent Engineering describes the compression of research, design, development, production and marketing processes by, as the term implies, conducting them simultaneously, using mixed discipline teams, often on the basis of a common computer model of the emerging product requirements, product form and manufacturing processes. Before this, for much of my professional lifetime, the traditional way of conducting a product development project was in the manner of a relay-race. Market researchers produced research reports and handed them to the designers; designers prepared design concepts and handed them to the development engineers; development engineers produced piece-part speciﬁcations and handed them to the jig and tool designers, and so on. ‘Throwing the package over the wall to the next department’ is how management commentators described it. We all know the story. It would take several years to get a new or improved product to the market. By contrast, under the Concurrent Engineering principle, a product development team of all the relevant disciplines (research, design, development, tooling, production, distribution, marketing and after-sales service) would work together from the outset with a common goal. Dramatic shrinking of lead-times resulted. Market competition expanded to a global scale. Oddly, the implications of Concurrent Engineering have not yet been fully appreciated in general design practice and design education. Some industrial design schools and engineering departments remain unsure as to the knowledge bases their students are expected to acquire, and the skills they are expected to develop. Some schools, perhaps unconsciously, still promote the role model of the sole designer rather than the role of a
member of a joint product development team. It is one of the functions of this Conference to clarify such questions.
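The lead-time arithmetic behind the relay-race and concurrent models sketched above is simple to state. The phase list, durations and coordination overhead below are invented for illustration, and the concurrent case is idealised.

```python
# Hypothetical phase durations for one product project, in weeks.
phases = {"research": 8, "design": 10, "development": 16,
          "tooling": 12, "marketing": 6}

# Relay-race model: each department waits for the previous one
# to throw the package over the wall, so durations add up.
sequential_lead_time = sum(phases.values())

# Concurrent Engineering (idealised): the mixed-discipline team
# works on all phases at once around a shared product model, so
# lead time is governed by the longest single phase plus an
# assumed 25% coordination overhead.
concurrent_lead_time = max(phases.values()) * 1.25

assert concurrent_lead_time < sequential_lead_time
```

Under these invented figures the relay race takes 52 weeks and the concurrent team 20, which is the kind of dramatic shrinkage of lead-times the text describes, though real projects add coupling and rework that no one-line model captures.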
If the design schools have not yet fully come to terms with the implications of Concurrent Engineering, manufacturing industry and the business schools have gone some way towards clarifying the context. Towards the end of the 1980s, articles appeared in the general business journals and on the business pages of the daily press describing the turbulent state of the market, the speed with which competitors were able to bring new and improved products to market and the implications for organisational change. In 1989, the Department of Trade and Industry commissioned P A Consulting to prepare a report, Manufacturing into the 1990s. It found that many firms had concluded that the only answer to shrinking lead-times and global marketing was the restructuring of corporate organisation. Market and technological change must be detected earlier, and product, process and marketing responses must be introduced sooner. In 1991, J D Blackburn published a book in the United States under the title Time-based competition, in which the term ‘responsiveness’ was attributed to companies that were able to counter competition with speedy and appropriate action. It was not long before ‘responsive’ companies were being described as ‘lean’ companies. The term ‘lean’ has a deceptively docile ring about it. In fact, it is intended to describe the leanness of the panther: fast, agile and deadly. A ‘lean’ company has shed all its surplus fat. It has few levels of command in its management. It does not carry the burden of more fixed assets than can be helped. To this end, it engages in widespread subcontracting. It carries minimum stocks. It ensures that both incoming materials from subcontractors and outgoing finished products to customers are Quality Assured and delivered ‘just in time’. It expects to cover the costs of any new product launch quickly. A company exhibiting such responsiveness is described as ‘agile’. At the limit, the agile company almost disappears. In their book Agile competitors and virtual organisations, 1996, Goldman, Nagel and Preiss describe this as ‘the disintegration of manufacturing’, and they identify many well-known brand suppliers as ‘virtual manufacturers’. In 1997, the Management Best Practice Directorate of the Department of Trade and Industry sponsored a mission by a representative group of industrialists and academics to the United States, with a view to visiting leading manufacturers and business schools and to bringing back intelligence of the most effective corporate strategies. ‘Responsiveness’ was the message and ‘Agility’ was the key. There are, in fact, many agile companies and virtual manufacturers in Britain today. In a Workshop organised by the Institution of Electrical Engineers on 23 February 1998 at Savoy Place, a number of such companies reported their progress. One of them, Raleigh Industries, reported a reduction in manufacturing lead-times from 42 days in 1987 to 4 days in 1997. Another, Van den Bergh Foods Limited, reported a reduction from 3 or 4 days’ production lead-time to 15 hours!
Why am I making such heavy weather of this? Because it has a signiﬁcant bearing on the two questions with which we began: ‘To what extent can design be used creatively to make a major breakthrough or quantum leap in successful innovation practice?’ and ‘How do designers and design managers deal with the problems of ultra-fast moving joint product development teams?’ In respect of the ﬁrst of these questions, ‘To what extent can design be used creatively to make a major breakthrough or quantum leap in
successful innovation practice?’ it becomes clear that agile companies are far less constrained by existing technologies and by investments in plant than were their less agile counterparts of the past. Since most of the manufacturing is subcontracted, a design incorporating new technology, or one which demands new manufacturing processes, is not ruled out by plant investment problems. Nor need it be obstructed by limitations in the knowledge and skills of the design team. The design and development team must be as versatile and agile as the company itself. It must have at its fingertips a wide range of knowledge about appropriate technologies, and rapid access to a comprehensive database on materials, processes and available ready-made components. To a great extent, under these circumstances, as the Japanese example has shown, the design driver must be inspired insights into customers’ real needs and values. The company’s existing plant and technology count for very little. What the customer perceives as attractive counts for everything. This brings us to the second question with which we began: ‘How do designers and design managers deal with the problems of ultra-fast moving joint product development teams?’ Not every new product launch has to be radical. Customers’ real needs and values are not always answered by fundamentally new products. ‘Innovation’ includes ‘improved’ as well as ‘new’. In finding an ultra-rapid answer to a competitor’s move, the company will often decide to take advantage of its existing position in the market and in the supply chain by offering a ‘new, improved’ model rather than a totally new one. If this is to be easily achievable, existing designs have to be ‘Robust’.
A robust design is one that will accommodate changes in user requirements, technology or market competition with minimum upset. It is one whose subsystems have minimum interdependency, so that any subsystem can be replaced with a revised version without encountering knock-on effects in other subsystems. This is a design principle that is more radical than it may sound at first hearing. For generations of designers, it has been a matter of fact that an integrated design is smaller, lighter and cheaper than one which is composed of subsystems bolted together. An unfortunate consequence for consumers in past decades has been that if anything has gone wrong with a product, the whole thing has had to be thrown away. Nevertheless, for the sake of cheapness, lightness and compactness, manufacturers have gone for integrated designs and consumers have bought them. The paradigm shift we are now addressing will do little to change the experience of the consumer. The decoupling of subsystem interdependency in design terms does not necessarily mean bolt-together subsystems in manufacturing terms, and does not make it any easier, per se, to carry out repairs. It does, however, represent a paradigm shift in the practice of design.

Flash of light
To return to our examination of the title of this Conference, it appears to me that any Quantum Leap in the management of new product innovation is contingent upon recognition of the nature of corporate agility. Corporate agility has major implications for interdisciplinary product design teamwork, and requires a closer look at the subtle nature of robust design. To my generation, in a certain sense, the advocacy of robust design looks like a quantum leap backwards. If I remember rightly, in my second-year introduction to quantum mechanics, I learned that a return of a sub-atomic particle to a lower energy level results in a brilliant, but fleeting, flash of light. Let us all watch out for that flash of light in the course of the next three days.