A Critical Review of European Information Society Policy


Professor Nicholas Garnham

With comments from Robin Mansell, London School of Economics and Political Science; Johannes M. Bauer, Michigan State University; W. Edward Steinmueller, SPRU–University of Sussex; Martin Fransman, University of Edinburgh; Jean Paul Simon, France Telecom; Peter Johnston, DG Information Society, European Commission; Anders Henten, Technical University of Denmark; William H. Melody

Edited By Pascal Verhoest, TNO & ENCIP

About ENCIP…
ENCIP is a collaborative network for socio-economic research on ICT and policy. ENCIP’s activities aim to create an open intellectual environment for the design of policies for ICT, to strengthen the European and international ICT research infrastructure, and to build bridges between the academic world, the ICT industry and the policy-making arena.

EuroCPR …
The European Communications Policy Research (EuroCPR) Conference is held annually to facilitate systematic interaction between academic research, European and national policy makers, and industrial representatives in the communications sector. EuroCPR’s strategy is to bring well-grounded theoretical and empirical research to bear upon current policy issues. Research presented at the EuroCPR conference is leading edge. It is chosen for its innovative character and for the contribution it will make to current policy and business strategy analysis. To optimise dialogue, the number of invited guests is limited, and discussion is of the highest quality.

Acknowledgements We are most grateful to Nicholas Garnham for accepting the intellectual challenge we offered and for making available the transcript of his presentation. All the commentators are thanked for their valuable contributions. Finally, thanks are due to Cynthia Little and Remi Douine for the editing and formatting of this document.

Table of Contents
Introduction by Pascal Verhoest
Contradiction, Confusion and Hubris: A Critical Review of European Information Society Policy by Nicholas Garnham
Strategic Interests in Information Societies by Robin Mansell
Mechanical to Adaptive Policy by Johannes M. Bauer
Half-Empty and Half-Full Glasses by W. Edward Steinmueller
Creating a ‘Critical Space’ for Analysis and Debate by Martin Fransman
‘Le retour des éponges’ by Jean Paul Simon
European Research and Telecommunications Policy: an Evaluation Perspective by Peter Johnston
Are Industrial Policies Irrelevant or Obsolete? by Anders Henten
On Muddling Through Contested Terrain by William H. Melody
Contradiction, Confusion and Hubris: An Afterword by Nicholas Garnham

Introduction
by Pascal Verhoest1
I have seldom experienced such tension as was apparent during Professor Nicholas Garnham’s keynote address at EuroCPR 2004: heads were nodding or shaking in approval or disapproval and the conference hall hummed with sotto voce comments. Many attendees asked for a transcript of Nicholas Garnham’s address; some suggested follow-up activities. This document, and a special session devoted to it on the occasion of the EuroCPR 2005 conference in Potsdam, are the fruits of these suggestions.

If Nicholas Garnham was aiming to be provocative, he certainly succeeded. The collection of comments that follows bears witness to this. But his presentation was more than just provocative: it was also a well substantiated essay describing a negatively balanced information and communication technology policy and research agenda, or, perhaps more accurately, portraying a technology policy and related research programme that unsuccessfully claim to be in the interests of society. Nicholas Garnham points to the many theoretical and other inconsistencies in policy and related research, with their claims of being at the source of social progress. However, many of the commentators disagree with his view and reject his scepticism about the capacity of research and policy to direct processes of social change.

Has Nicholas Garnham embraced post-modernist scientific relativism? I think not. Nicholas Garnham was never a positivist, but he has not stopped being a modernist: if he had, he would not be advertising his opinions in his keynote address, nor submitting them to the critical scrutiny of some of his most esteemed peers. This collection represents critical reflection in the best modernist tradition.

1. Managing Director of ENCIP; Senior Strategist in ICT Policy and Innovation of the TNO institute for Information and Communication Technology (TNO-ICT) in Delft, the Netherlands; Professor of Political Economy of Communication at the Free University of Brussels (VUB).

Contradiction, Confusion and Hubris: A Critical Review of European Information Society Policy
by Nicholas Garnham
Let every student of nature take this as a rule, that whatever the mind seizes upon with particular satisfaction is to be held in suspicion.
Francis Bacon

EuroCPR was founded as the EU began the move from state monopoly of telecommunication networks and services to regulated competitive market provision – a process usually referred to as liberalisation or de-regulation. The purpose of EuroCPR was to match the telecommunication policy research effort in the US by developing a European research capability, in part to avoid simply importing US regulatory models and their accompanying politico-cultural ideological assumptions. The aim was also to bring that capability into a dialogue with policy makers, with the objective of ensuring a more rational policy-making process, with policy based upon the best available evidence and dispassionate analysis rather than the crude interplay of economic and political interests. As one of the parties involved, I think I can say how impossibly naive that aim now looks. Indeed, I would suggest that, in the light of nearly 20 years’ experience, the results have not been very encouraging. I want today to explore the reasons for this and what useful conclusions we might draw for both research and policy making.

Two years ago Eli Noam asked, and rightly, where the voices of critical, or even sceptical, researchers could be heard during the ‘telecoms mania’ of the late 1990s. In trying to explain the telecom/Internet boom and bust, Martin Fransman described it in terms of a ‘consensual vision’ which, although it turned out to be wrong in almost every respect, and although there were good reasons to dispute it at the time, was accepted by both market actors and analysts as incontrovertible Truth. I want to follow Noam and Fransman by asking: where are the voices of the sceptical researchers? Are policy makers now being gripped by the new consensual vision of the information society? Have we learnt nothing?


So far from the abundant research, and the increased use of this research by policy makers, leading to a firmer grasp of realities and better policy formation, I would argue that as policy has moved through its three cycles – liberalisation, convergence and the information society – its ambitions have grown. It has progressed from the aim of shaping a competitive telecommunications market to that of bringing into existence a new social order, or at least a new economic order – the Lisbon goal of making Europe ‘the most competitive and dynamic knowledge-based economy in the world by the year 2010’ – but at the same time its research-based grasp of reality has weakened. In short the story now is one of policy over-reach and research under-reach. The problem in my view lies in both the policy-making and research processes.

Before going on to explain how and why I believe this to be the case, I will share with you my conclusions – conclusions that I confess I find uncomfortable. Here I should note that I have arrived by a somewhat different route at conclusions similar to those reached by Johannes Bauer in his paper for this conference. (It is good to know that one is not alone.) These conclusions are that researchers (and I certainly include myself here), at least in their roles as policy analysts and advisers, do not well understand the processes they are analysing and exaggerate the possible scope of their understanding, but are unwilling to admit to this limitation. Policy makers are unwilling, maybe congenitally unable, either to accept the very limited scope of their powers or to admit to their failures. Both researchers and policy makers thus tend to suffer from severe historical amnesia. In short, on both sides the story of the last 20 years is one of hubris.

Why is this historical amnesia both important and debilitating?
Because it tends to make both research theories and models, and the policies that draw on them for support, appear much more coherent and tidy than in fact they are. In judging both the applicability and relevance of an economic theory or model, or the efficacy of a policy, we need to know the problem it was designed to address. In discussing the research here I will focus on economics, because I think it uncontroversial to say that, for better or worse, it is economic theories and models that have driven, or at least been mobilised to legitimate, the telecommunication/information society policy process.

There is a tendency in all academic disciplines, and economics is no exception, to search for maximal internal logical coherence and scope in theories. In my view it is more useful to see theories and models as responses to real world problems, and applicable only to that set of problems. It follows from this that theories, and related policies designed to address such problems, are likely, when faced by the extremely complex social system that is the contemporary global economy, to throw up new problems requiring new theoretical and policy responses elsewhere in the system. In such a situation, and in my view this has been the situation in the telecommunications/information society field over the last two decades, to cling to a theoretical position in the name of intellectual consistency, or to a policy in the name of political consistency, is a grave error. This has particularly applied in recent years to models of the market and competition.

Thus, to understand the strengths and weaknesses of policy in the telecommunication/information society field, and of the research underlying it, we need to be clear about the problems these policies and theories were designed to address. The difficulty is that there is agreement neither about the problems and their relative importance, nor about the appropriate theoretical way to approach them. This in particular has led policy makers, who of course want to please everyone, to formulate sets of mutually contradictory policies and/or to give regulatory agencies the invidious task of balancing conflicting goals. Thus they mobilise what they do not seem to appreciate, or hope no one else will notice, are mutually contradictory theories.

In the current information society climate I have a strong sense that, in the intellectual and political excitement about detailed analysis and policy implementation, we have been distracted from the underlying issues. The result, in my view at least, is that too much policy research and formulation is now operating in a self-justifying world of its own. This reminds me of the old Irish joke in which, on being asked the way somewhere, the reply comes: ‘If I wanted to get there I wouldn’t start from here’. Where then would I start?
If we examine the development of research and policy as it has moved from telecommunications, to converged media, to the information society, the conclusion one reaches is not so much that either the economic theories and models or the policies were wrong (although some clearly were) as that they were partial and often contradictory. Policy has been driven by a range of different interests with differing definitions of the problem, different aims and different supporting economic models and theories. At the start of the liberalisation process we can identify a range of policy goals and supporting analysis each of which involved different conceptions of the role of telecommunications in the economy, different models of the economy and of markets and, therefore, different definitions of the problem that liberalisation and re-regulation were designed to solve.


First, there is the industrial policy approach. Here telecommunication operators were seen primarily as investors in networks and associated technologies. At national level the problem was seen as insufficient investment in network modernisation and an associated inadequacy of network facilities and services to meet demand. At the European level the focus of attention was the fragmented nature of the European equipment manufacturing industry, caused by national monopsonies, which was seen as the cause of US and Japanese domination of the global equipment business. It was this policy perspective and analysis that led directly into the series of Framework programmes and the development of European standards, especially for mobile telephony.

Second, there was an approach that stressed the role of the telecoms network as a key business infrastructure. This was a response to pressure from the corporate sector, particularly from multinationals, for competitive supply of both networks and services, and at a European level for cross-border operation and, thus, harmonisation of regulation. Here consumer welfare was seen to derive primarily from increased business efficiency.

Another approach stressed innovation, and saw telecommunications primarily as a field for ICT development and the regulated monopoly structure as a barrier to the development of the ICT industry and services. Here liberalisation was in part a response, as it had been in the US, to pressure from the ICT industry.

At the European level there was an approach which stressed the importance of cross-national infrastructures, both as unifying mechanisms and as contributing to the efficiencies derived from market scale. Here it was not the monopoly nature of the existing industry so much as its national nature that was the problem.

Finally, there was an approach that stressed the general role of telecommunications in economic and social development.
It is here that we find the seeds of what became information society policy, with its vision of telecommunications as the infrastructure of a knowledge society, leading to Internet mania. This approach had close links with the innovation approach, if only as part of the marketing discourse of the ICT industry. Here the network modernisation and service innovation debate took on an aspect of religious faith, thus often bypassing rigorous economic or cost-benefit analysis.

Across these policy perspectives ran a series of different and often conflicting theories of market and market power.

1. Neo-classical competitive equilibrium. In the political rhetoric it was this market model that was largely mobilised both to describe and to justify liberalisation. Re-regulation was to create easy market entry and price competition, which was supposed ipso facto to increase consumer welfare. Here the arguments were about natural monopoly, cost-based pricing and so on.

2. Schumpeterian competition. Here the goal was not price competition but entrepreneurial innovation, which in its turn depended upon the monopoly rents derivable from successful innovation.

3. The Hayekian model of the market as a search mechanism in the face of necessary uncertainty, with prices and price competition seen not as driving productive and allocative efficiency, but as signalling choices in the face of information overload.

Let me now look at each of these policy strands and their underlying justificatory arguments in more detail.

(a) Telecommunication network access and services as a retail commodity. The problem was seen as a loss of consumer welfare through monopoly rents, and the solution as the creation of fully competitive markets. The market model used was one of price competition with low barriers to entry. The debate was over the extent of natural monopoly, the position of public service, and the degree to which network facilities and services could be efficiently unbundled. The expectation was that sector-specific regulation could be phased out in favour of general competition. It is within this thematic that most regulatory debate and policy development has taken place.

(b) Telecommunications as a business service and investment good. This is often confused with theme (a), and this confusion continues to bedevil information society policy, but in fact the characteristics of the market, the definition of the problem and the possible policy responses are all very different. Here the drivers of the policy were major corporate users and their allies in government. Attention was focused on leased lines and advanced services. The problem was seen to be the slowness of response, the one-size-fits-all assumption, and the price-averaging policies of monopoly incumbents. At a more general level the problem was one of corporate efficiency, productivity and inward investment rather than consumer welfare directly.

In Europe there was a particular version of these problems, namely the lack of European level operators or unified regulations in the face of the desire of multinationals for one-stop shopping.

(c) Industrial policy, focusing on telecommunications as a market for equipment, as a major investor in, and therefore driver of, technological development, and as a provider of network capacity for upstream hardware and software developers. Because of the attention given to competitive retail and wholesale markets this crucial driver of liberalisation policy has been overlooked. It is important to note here that this has been a consistent concern, reflected in papers presented at successive EuroCPR meetings, including this one. It is important because, as we shall see, it has returned to haunt information society policy.

We can see here two sets of arguments and two relevant interest groups. On the one hand, the problem, from the view of both national governments and the Commission, was the nationally based monopsonistic relationship between incumbents and equipment suppliers. There was a need to consolidate manufacturing (and associated research and development – R&D) on at least the European scale to challenge North American and Japanese competitors, while at the same time ensuring continuing investment by operators in both R&D and equipment purchase. We can witness in current policy, and in the recurrent versions of the Framework programme, the continuing strength of this industrial policy strand and of the equipment manufacturers’ lobby. The problem, however, was that this policy was quite at odds with a policy aimed at squeezing operator margins in the name of consumer welfare. In practice regulators were often left to balance the two.

The second set of arguments saw the whole field of telecommunications as one of technological innovation in two senses.

1. It was necessary to have both network and service competition in order that new network technologies and new software could enter the market and ‘fight it out’. Thus the telecommunications market was seen primarily not as a competitive market for a homogeneous range of services, whether voice telephony or data transmission, but as a market for technological and service innovation. Here the main policy driver was the ICT industry.
We should not forget that the liberalisation movement in the US started with the desire of the computer industry to have greater access to, and control over, the telecommunication network and the services that ran over it.

2. The telecommunication network was the necessary infrastructure for innovation, not only in ICTs and software, but across the whole field of the economy. It is here that we find the seeds of one strand of information society policy and its stress on broadband and e-business within a general innovation agenda. I will return to this later.


From this position the goal was network and service innovation. The problem was, and still remains, who will pay for this innovation. Here two different economic models have been mobilised, each quite different from the price-based, low entry barrier consumer market model that has dominated regulatory debate.

The first, from which is derived the principle of technological neutrality in regulation, is the Hayekian model of the market as the only available tool for selecting technological options in the face of radical uncertainty. In this model prices are signals that aggregate a wide range of choices between technological solutions across a myriad of users with different assessments of their needs. There is much to be said for it within its own terms as a critique of state planning. But it is quite incompatible with industrial policy of the current EU type. It is not useful in the face of network technologies with long lead times and constant or growing returns to scale, and where it is very difficult to identify a user that is setting a price. In particular, it is incompatible with the influential path-dependency and endogenous growth models which drive much information society policy. It is also important to stress that it is totally incompatible, for better or worse, with a policy to encourage broadband or to use broadband penetration as a benchmark for policy success, to set targets for R&D expenditure or to target specific areas for research support.

The second model, perhaps now the most influential of all, is the Schumpeterian model of competition through innovation. Because the concept of innovation is now so central to economic and regulatory policy, and the key driver of information society policy, it is important to outline the main features of the Schumpeterian model and the problem it was addressing. Schumpeter, like Keynes, was faced with the stagnation of the capitalist industrial economies.
In his view the equilibrium model of inter-firm price competition as the driver of capitalist development led inevitably to squeezed rates of profit and sectoral oligopolies, and thus to a fall in rates of investment. As a way out he argued that dynamic market development was driven not by price competition but, on the contrary, by entrepreneurial product and process innovation. The risk of such innovation was covered by the possibility of monopoly rents for successful innovation. It is essentially this argument that has been used to defend Microsoft against anti-trust action. Crucially, it is incompatible with a price-competition, low entry barrier competitive market model.

Within our discussion two further problems arise here. First, the regulatory structure needed to encourage such entrepreneurial innovation is quite different from that required to maximise consumer welfare in established product and service markets. The difficulty in recent years in the telecommunication/ICT field has been to know which situation we face. Thus, we have witnessed, and continue to witness in broadband, a constant policy fluctuation between the need to support network and service innovation and the need to maximise consumer welfare.

Second, as in the telecommunication/Internet bust, you can have too much innovation, and the process can then lead to serious overshoot and over-investment. In this case no innovator can establish the monopoly rents required to make the process profitable, and consumers suffer because all investment is taken from current consumption as a bet on the future. It is important to stress that the telecommunication/Internet boom of the late 1990s was essentially a Schumpeterian boom. Analysts, investors and managers bought the Schumpeterian argument. The result was speculative competition between investors for a share of the future excess monopoly profits that the winners in this winner-takes-all market would achieve, and it was upon this that valuations were based.

It is important to stress here that the Schumpeterian analysis has developed in two divergent directions. One sees the process of innovation as a competition between entrepreneurs to produce distinct products and services. The central dilemma here is one of incentives to innovate and the risk/reward ratio. The other, within a more general evolutionary economics framework, places the emphasis on innovation in general purpose technologies, and the problem then becomes one of the long-run adoption process. We can see the different policies to which these strands lead if we look at the field of intellectual property (IP). The first strand favours a strengthening of IP protection as an incentive to innovate. The second sees the situation as one of the optimum sharing of knowledge in a socio-economic learning process, and thus favours Open Source and minimal IP protection.

Against this background let me now turn back to EU information society policy and the consensual vision underlying it.
Francis Lorentz gave us a very clear and concise presentation of this vision in his opening remarks at this conference. As with EU telecommunications policy, of which it is in part a development, information society policy is characterised by a major cleavage and contradiction between an industrial policy and state intervention strand and a competitive market strand. To take one example, after years of championing the market competition model and technological neutrality in telecommunication networks and services, policy, now that the market has not delivered the supposedly desired goods, has flipped to an obsession with broadband roll-out, the targeting of a specific range of so-called information society technologies (IST) through the Framework programmes, and even an attempt to legislate at state level for a specific level of R&D expenditure. Indeed, my main criticism of the EU information society policy label is that it serves as cover for the continuation of a largely failed industrial policy and a protection at EU level of the budgets associated with it.

I think one of the best ways to understand telecommunication and then information society policy in Europe is by tracing, back to the original European Coal and Steel Community, the twists and turns of an industrial policy and state planning approach – a state planning approach that owed its prestige and strength, and a significant proportion of its manpower, to the success of the French Commissariat du Plan in the 1950s and 1960s – as they grappled with the pro-competitive and anti-state provisions of the Treaty of Rome.

From the point of view of policy makers and lobbyists the information society brand or consensual vision has three advantages.

1. It is sufficiently vague as to hide these contradictions.

2. It is sufficiently forward looking to distract attention from the past failures of what is in effect the same policy.

3. It conveys an illusion of socially useful activity and relevance while distracting attention from analysis of the problems to which these policies are a supposed solution.

The consensual vision consists first of a definition of the economic problems facing the EU to which information society policy is a response, and second of a set of analyses of these problems from which appropriate policy solutions flow. The definition of the policy problem has very old roots in EU policy thinking and lies at the heart of the Lisbon goal: it is the Colbertism underlying Lorentz’s presentation; it is that we are in competition with the US and Japan in a zero-sum economic game. Since Japan has temporarily dropped out of the race, this is the Défi américain all over again. In the old days we had to have a European computer industry or an aerospace industry. Now we have to have European broadband penetration and e-business at the same levels as, or better than, those of the US.
Let us just note in passing the fundamental argument of economists that, while firms compete on global markets, nations and regions do not. And note the fact that, at the same time as the EC was, within this analysis, constructing Esprit, Framework and related programmes to compete with a supposedly superior US, the European economies were consistently out-performing the US. In short, while in the nineteenth and early twentieth centuries the great powers competed by building bigger and better fleets, and during the cold war bigger and better missiles, the new national virility symbols are broadband networks and e-business start-ups. Progress of a sort, I suppose.


The crucial point to be made at this juncture is that, in so far as there has been in recent years a widening of the GDP per capita gap, this has little if anything to do with ICTs or a knowledge economy, or with a superior endogenous competitive environment à la Michael Porter, but is very largely due to an increase in the working age population through increased rates of immigration.

In my judgement the economic problem is quite other, and it is shared by the US and Europe. In spite of the euphoria associated with the telecommunication and Internet booms, and with claimed increased sustainable rates of productivity growth associated with ICTs, which is one of the core components of the consensual vision, the real underlying problem is the stagnation of the major industrial economies and the accompanying three decades of decline in corporate profitability. This is due to saturated markets, the completion of the sectoral consolidation process on a global scale and, thus, the end of returns to economies of scale in mature markets, and crucially to the shift in the centre of economic gravity to the service sector.

This is central to arguments about the information society because the term can be seen as a re-labelling of the service or post-industrial societies that were the favoured terms in the 1960s, ‘70s and ‘80s, and because one version of information society theory saw ICTs as the productivity-enhancing machines of the service sector, while service workers have been renamed knowledge workers without any change in their functions. Some of you probably remember how in the 1980s, within the OECD, countries measured their percentage of knowledge or information workers, instead of broadband or e-business penetration, as tokens of success or failure in a race once again to catch up with the US. Plus ça change.
The whole liberalisation, de-regulation movement can be seen as a response to the problem of service sector productivity and especially of that significant part that lay and still lies in the public sector. Now what is significant in recent years is that whatever gains in productivity growth rates have been achieved have been not in the service sector, nor even generally in the manufacturing sector, but in the manufacture of the chips themselves. In my view one of the major weaknesses of current information society research and policy is a misunderstanding of the potential relationship between ICTs and services and thus an exaggeration of the contribution they could ever make to enhancing either productivity or service quality. As with the original telecommunications policy the consensual vision of the information society, while in using a common label it enables people to believe
they share a common vision, in fact contains within it a number of distinct visions or strands of analysis leading to distinct policy interventions, each of which needs to be judged on its own terms rather than taken as a single package. This is important for us because these differing visions have different views of the role of ICTs – if any – within the information society. Indeed it is important to stress the 'if any', because the ICT and telecommunications industries and their associated policy makers have succeeded in propagating the erroneous view that the information society is primarily about them. The core of the information society vision, derived mainly from Daniel Bell, can be simply stated: capitalist economies are shifting from fixed to human capital as the key source of value added and economic growth. There are, however, different versions of this vision. The Bell version places the emphasis on the contribution of trained scientific labour power in the process of production. Here the ICT and pharmaceutical industries are key cases. ICT growth is the result of this process, but not its cause. The policy problems are the production and efficient deployment of highly skilled labour power. The benchmarks are numbers of university graduates and R&D intensity. 1. The Bell version should not, however, be confused with the more general knowledge- or symbolic-worker vision. This holds that an ever-increasing proportion of employment involves not working with machines to manipulate matter – the classic industrial model – but creating, manipulating and distributing information. This is also sometimes dubbed the immateriality thesis. The problem is that this takes different forms, with different economic implications and different relations to ICTs. On the one hand it describes the growth of the service sector, where humans are dealing directly with other humans and the value exchanged is embedded directly in human labour.
Assessment of the impact of ICTs on productivity, quality or market growth in services remains highly controversial. On the other hand it describes the bureaucratic overhead that results from growth in market size and complexity. It represents the costs of coordination. Thus its increase is not a sign of economic efficiency. It is here that the long-held-out, but continually dashed, hopes for the impact of ICT investment on productivity largely lie. It is worth pointing out that one of the sources of the information society idea lies in Japan's response in the early 1970s to the problem of its dependency on imported oil. The purpose of the information society was to create an economy that was no longer producing and distributing things, but producing symbols. This, in their view, also had environmental advantages. Against this background it is worth noting that the
US, supposedly the world's leading information society, is so dependent on increasing oil consumption that it cannot even ratify the Kyoto protocol. 2. A variant of the immateriality vision is the death of distance. This holds that the move from atoms to bits, allied to the rapidly falling cost of high-capacity digital networks, removes the transport-cost barriers to the size of market that firms, particularly service firms, can efficiently serve. This is true, but can be exaggerated and needs analysing market by market, service by service. 3. The symbolic economy/content vision assumes the growth of the media sector and the positive impact of so-called new media. This version of the vision is really just the leisure society thesis revisited, and does not stand up well to even cursory empirical investigation. The successful communication developments have been those that sell connectivity, with people supplying their own content. The media content sector has grown in nominal terms, but this is largely due to a relative price effect. Consumption time has not increased; indeed, with rising wealth it tends to decline. 4. Finally, we have the Schumpeterian innovation vision. Here innovations may or may not be knowledge intensive. ICTs are simply one among many cases of innovation. Information in this model is much more about creative clusters and their modes of intercommunication, and about the interaction between information and markets in terms of innovation choice and risk. The policy focus here is the context for entrepreneurial activity and the creation of innovation clusters. The policy dilemma is the tension in the innovation process between sharing, on the one hand, and property rights and the capture of returns, on the other – exemplified, above all, in the intellectual property area.
In conclusion, the key division for policy debate is between those who focus on ICT and communication as a new growth sector in its own right – and thus on what can be done, if anything, by public bodies to optimise that growth and capture those markets – and those who see communication networks and services as the essential infrastructure underpinning beneficial economic developments elsewhere. From this perspective the old questions of the appropriate role of public finance and management in infrastructural provision have not gone away. In particular, the current policy field is riven by contradictions between liberalisation/market competition and public intervention/regulation, and between technological neutrality and industrial and research policies. In short, the problems with information society policy and related research are, first, that they are based on a faulty analysis of the underlying difficulties facing the EU in terms of competition with the US. Second, even if we accept the underlying analysis and the Lisbon goal, there is no evidence that we in fact have the policy instruments that might produce the desired result. Third, this is in part
because the theories and models underlying policy formulation and implementation are much more controversial, partial, doubtful and contradictory than either theorists or policy makers are prepared to admit. And this is in large part, as Bauer has stressed, because the world that theorists are trying to understand, and policy makers to steer, is inherently more complex than either researchers or policy makers feel comfortable admitting. This then leads me to my final conclusion: that we must take all grand visions, plans and theories with a very large pinch of salt. This is not, I must stress, an argument against public intervention and in favour of letting the market rip, although versions of it have been used as such. Market messianism is subject to the same critique. We know that markets are of different types, are messy mixtures of private actions and public rules and institutions, and have extremely unpredictable results. So the message is that both researchers and policy makers need to be much more humble. We cannot, I think, avoid policies and regulatory interventions. No human society or social group can be unplanned in that extreme sense. The best we can hope for is messy, short-term interventions in specific areas to solve specific problems. There are no general answers or rules, and the results of intervention are likely to be very different from what was planned.


Strategic Interests in Information Societies
by Robin Mansell2
The EuroCPR 2004 conference organising committee invited Professor Nicholas Garnham to deliver the conference keynote speech because we felt that it would be timely to hear from an acknowledged academic expert who has invested considerable effort in analysing information society developments from a sceptical point of view. We hoped for, and received, a robust critique of historical and current developments in academic theory, policy rhetoric and policy practice. Since this terminology gained currency in policy discourses, Nicholas Garnham has been very concerned to dispel the mystique that so often shrouds discussions about the origins and prospects of the information society.3 In his speech, Nicholas Garnham untangles many of the webs of contradictory assumptions that infuse discussions about information society policies and regulatory approaches in the European Union. There is much in what he argues to agree with. Underpinning his argument about European policy and regulation with respect to the information society is the following:

... the model or concept of the Information Society is at present the dominant way of thinking both among academics but also within the corporate and political areas, not just about the relationship between ICTs on the one hand and communication and culture on the other, but about the nature, and development dynamics, of society per se. It is a model now widely mobilised not just to understand the world but also to change it. In assessing its accuracy as a model more is therefore at stake than merely theoretical disputes among scholars.4

2 Professor, Dixons Chair in New Media and the Internet, Department of Media and Communications, London School of Economics and Political Science.
3 Garnham, N. (1994) 'Whatever happened to the information society?' in R. Mansell (ed.) Management of Information and Communication Technologies: Emerging Patterns of Control. ASLIB: London; Garnham, N. (1997) 'Europe and the Global Information Society: The History of a Troubled Relationship', Telematics & Informatics, Vol. 14, No. 4, pp. 323-327.
4 Garnham, N. (2001) 'The Information Society: Myth or Reality?' paper presented to the Bugs, Globalism and Pluralism Conference, GRICIS, Montreal, 19-22 September.



Nicholas Garnham’s basic premise is that ‘the arguments that ICTs have caused an epochal shift in either economy or society are specious’.5 In earlier papers he has argued that whatever the impact of innovations in the telecommunication infrastructure and in service provision, this cannot be understood without a detailed examination of specific changes in the labour process and the specific social and economic activities in which different types of ICTs play an increasing role. In his keynote speech for EuroCPR 2004, Nicholas Garnham discusses the contradictory economic theories that have been mobilised to legitimate EU policy in the telecommunication infrastructure and services area and more broadly to underpin a mix of policies and strategies. He expresses profound concern that both the academic and policy communities seem to have become spell-bound by a ‘consensual vision’ of the progressive development of a European information society, which is expected to bring social and economic benefit to all. He asks ‘where are the voices of the sceptical researcher?’ According to Nicholas Garnham, academics have not only failed to challenge the erroneous premises of the predominant vision of the ‘information society’, but have also presented theoretical explanations that over-reach in their explanatory claims. It is certainly the case that much work within the economics discipline, especially following game theoretic modelling and simulation approaches, often gives rise to dubious claims about appropriate policy or regulatory practice and its likely outcomes. It seems to me, however, that we need to delve more deeply into the relationships between other strands of scholarly argument and the policy making community if we are to understand the silence of the sceptics. 
In his speech Nicholas Garnham acknowledges that 'policy was driven by a range of different interests with differing definitions of the problem, different aims and different supporting economic models and theories', and yet he also asserts that policy makers 'want to please everyone'. Given his enormous command of the history of the European economy and the economic determinants of innovation and competitiveness, it comes as no surprise that Nicholas Garnham should want the research community to unravel specific interests, problems and goals, and their alignment with different economic models and theories. Is it any wonder that there are contradictions between industrial policy initiatives and the drive to liberalise markets and stimulate competition in the ICT sector, given the array of economic interests that are at play in Europe? Clearly, different business interests stand to gain from different approaches to information society policy. However, when it comes to the collective processes of governance within which policy makers operate, Nicholas Garnham seems to lose sight – at least in this speech – of the highly differentiated interests of policy makers themselves –

5 See footnote 3.


interests which, while certainly contradictory, do not extend to pleasing 'everyone' when 'everyone' is taken to include citizens as well as firms and regulatory institutions. Contradictions between the interests of those who advocate industrial policy and those who support the strengthening of competition in the marketplace are no surprise, but neither is the tendency for policy makers to lose sight of the citizen's interest in favour of other interests. It is also not surprising that the sceptical researchers have been relatively silent. In the context of communications policy research and policy making in the US, Sandra Braman6 has drawn attention to the absence of sceptical academic voices. She examines communications policy as a context for research, in terms of the relationships of academics to policy makers, and the pressures on academics within universities. It seems to me that, in the European context, if we are to better understand the hegemony of the information society consensual vision and the roles of scholarship and policy makers in sustaining it, we need also to study the incentives and structural dynamics of the research process itself and of the intersections between research, policy making and implementation. European academic researchers have had to contend with the near-term strategic research agendas fostered by successive Framework Programmes and with the vicissitudes and contradictory incentives of their nationally-based universities and research institutes. In these contexts, individual researchers have made choices about their 'political strategies' towards their research topics. The dynamics of policy researchers' own institutionalisation and exposure to the forces now governing higher education have led many policy researchers away from sceptical inquiry. In arguing this, I fully acknowledge my own culpability at various times.
In addition, especially for younger researchers, much of the training provided with respect to many information society developments is infused with hype and is mainly ahistorical. It is little wonder that the voices of those researchers who believe in e-business and ICTs as the primary drivers of the economy are loudest, since in many academic programmes this is the dominant viewpoint being taught. In his speech, Nicholas Garnham argues that ‘researchers … do not well understand the processes they are analysing and exaggerate the possible scope of their understanding, but are unwilling to admit to this limitation’. This is so, especially when it comes to grand theories of the economic or social dynamics of the so-called information or network society. But Nicholas Garnham himself notes elsewhere in his paper that ‘it is important to note here that this [scepticism about grand claims] has been a consistent concern that has been reflected in papers
6 Braman, S. (ed.) (2003) Communication Researchers and Policy-Making, Cambridge MA: MIT Press.


presented at successive EuroCPR meetings’. It seems therefore that there have in fact been some sceptical voices – the problem being that either those researchers were unable to bring their insights to the attention of European policy makers or, that, when they did so, their insights were unpalatable given the overriding consensus vision of the information society. Beyond the arena of economic theory there is another fault line that becomes visible when we attempt to move in the direction that Nicholas Garnham advocates when he says ‘we must take all grand visions, plans and theories with a very large pinch of salt … We know that markets are of different types, are messy mixtures of private actions and public rules and institutions and have extremely unpredictable results’. If researchers take this observation seriously, there is a danger of tipping in one of two different directions. The first is the tendency to tip wholly towards the current fascination with the highly situated character of all socio-technical developments of which ICTs are just one small part. There are increasing numbers of studies offering enriched descriptions of ICTs as artefacts and of the local contexts in which they are deployed.7 There are studies of the ways in which various situations are said to ‘coproduce’ knowledge within communities of practice.8 These research traditions often provide insights into the processes through which ‘epistemic cultures’9 give rise to knowledge systems that shape the use of ICTs. Bowker and Star10 and others have developed research within the actor–network theory11 tradition with interesting insights into the emergence of infrastructure ‘ecologies’ through their detailed studies of individuals’ practices. 
Researchers working in these traditions often argue that empirical research that focuses on ICT developments ‘in practice’ or ‘in situ’ provides the best way to avoid the technologically deterministic arguments that so infuse the consensual vision of the information society. However, because these studies do not begin with questions about politics and power in the wider context, they are poorly designed to tackle the difficult problems and questions that normally confront the policy researcher or the policy maker. They are not designed to address questions about the rate of investment in technologies such as broadband, wireless networks, etc.,
7 Orlikowski, W. and Iacono, S. (2001) 'Desperately Seeking the "IT" in IT Research – A Call to Theorizing the IT Artifact', Information Systems Research, 12(2): 404-28.
8 Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press.
9 Knorr-Cetina, K. (1999) Epistemic Cultures, Cambridge MA: Harvard University Press.
10 Bowker, G. and Star, L. (2000) Sorting Things Out: Classification and its Consequences, Cambridge MA: MIT Press.
11 Latour, B. (1999) Pandora's Hope, Cambridge MA: Harvard University Press.


or about monopolisation tendencies in ICT markets, much less the effectiveness of public policy responses. I suspect Nicholas Garnham would not advocate that we all travel down this particular research route, interesting though it is. If we did so, we would be unlikely to have anything – however modest – to say to policy makers who claim to want to secure economic growth and equitable social development. The second tendency is for researchers to tip in the direction of scholarship predicated on the belief that, because socio-technical systems are complex (witness the growing complexity of open network systems), their dynamics are best understood through the lens of 'complexity theory'. Professor Garnham argues that 'the world is complex and policy makers and researchers should admit this'. Indeed, we should: but should we also jump onto the 'complexity theory' bandwagon that increasingly is attracting social scientists across the disciplinary compass? In his address, Nicholas Garnham refers to Professor Johannes Bauer's paper12 prepared for the EuroCPR conference. Bauer's argument is that as the network infrastructure evolves from a closed to an open system, policy tools and instruments 'are typically confined to more limited aspects of the sector. Thus policy, while not irrelevant, may not be able to steer the evolution or performance of the sector effectively'. The argument here is that because the infrastructure system is more complex today than it was in the days of public or private telecommunication monopolies, it is more difficult to identify policy instruments or their impacts. This is undoubtedly so. But this does not mean that politics and the strategic interests of firms have disappeared. In addition, what appears to be evolving is not a purely open system, but rather a mixed system which combines open and proprietary infrastructure systems in new ways.
These new ways must become the subject of critical analysis, because their interpenetration with the economy and, more generally, the social order is unlikely to leave behind the conflicts of the past, or to smooth the pathway such that all of the population benefits. Bauer's portrayal of the policy-making process moves from a highly rationalist and deterministic model, whereby a policy instrument is deemed to be effective 'if it is sufficient to cause a desired or prevent an undesired effect', to one in which policy making is regarded simply as a process of trial and error. In the rationalist model, when the interests of firms, governments and regulatory institutions come into play, they do so in relatively predictable ways. When he considers recent developments in technologies such as WiFi (Wireless Fidelity), VoIP (Voice over Internet Protocol) and the Internet, however, he argues that because these innovations
12 Bauer, J. (2004) 'Harnessing the Swarm: Communications Policy in an Era of Ubiquitous Networks and Disruptive Technologies', paper presented at EuroCPR 2004, 28-30 March, Barcelona.
introduce greater system complexity, they are also disruptive. Therefore, we should seek to understand their impact through the lens of models of complex adaptive systems. In this context, ‘only in rare circumstances … will it be possible to define and implement effective policies’. Although he acknowledges that complexity theory ‘does not necessarily provide radically new and different answers to the problem of governance’ or policy making, he calls upon researchers to take the emergent and unpredictable properties of complex systems into account in their analyses. He concludes that ‘seen from a complexity perspective policy needs to become more humble, seeking for local improvements’. As Bauer points out, scholars such as Hughes13 and Mayntz and Hughes14 have been studying the evolution of large technical systems for some time. These studies have revealed the complex forms of interaction between the many components (human and technical) of these systems without the aid of formal ‘complexity theory’ as exemplified by Waldrop15 or Kauffman.16 The observation that complex systems often behave in non-linear ways is not the exclusive preserve of ‘complexity theory’. This observation and related ones about system dynamics have been central to theories of technical change and innovation for some time.17 Recourse to the tools of complexity theory for understanding socio-economic and technical systems is, of course, revealing of the phases of evolution through which systems may pass – between order and chaos. 
However, what follows from this theoretical standpoint is typically the observation that 'complex adaptive systems are self-organising', and the claim that the resulting 'spontaneous order' would be superior to any attempt at planning.18 In both cases of the tipping phenomenon discussed here, there is a danger that the historically developed structural features and dynamics of our economy will disappear entirely from view, making it more rather than less difficult to comprehend any of the underlying – and troublesome – features of emerging information societies. The turn to situatedness renders policy intervention difficult if not impossible, not because we claim to understand too much, but because we
13 Hughes, T. P. (1983) Networks of Power: Electrification in Western Society, 1880-1930, Baltimore MD: Johns Hopkins University Press.
14 Mayntz, R. and Hughes, T. P. (1988) The Development of Large Technical Systems, Frankfurt am Main: Campus Verlag.
15 Waldrop, M. M. (1992) Complexity: The Emerging Science at the Edge of Order and Chaos, New York: Simon and Schuster.
16 Kauffman, S. A. (1995) At Home in the Universe: The Search for Laws of Self-Organization and Complexity, New York: Oxford University Press.
17 Freeman, C. and Louçã, F. (2001) As Time Goes By: From the Industrial Revolutions to the Information Revolution, Oxford: Oxford University Press.
18 Johnston, P. (2004) 'Comments on a paper by Johannes Bauer', EuroCPR 2004, 28-30 March, Barcelona.


claim to understand too little about anything other than local, time-bound situations. The application of complexity theory to an understanding of the dynamics of information societies renders all of their properties emergent and entirely unpredictable. This does not augur well even for the limited, problem-solving interventions that Nicholas Garnham still advocates when he says that 'the best we can hope for is messy, short term interventions in specific areas to solve specific [real world] problems'. I agree with Nicholas Garnham when he says that distortions and inequality within the European economy are not due to ICTs or the knowledge economy per se. As sceptical policy researchers we should indeed start from people and real social and economic problems when we attempt to understand the origins and consequences of technological innovation and the growing reliance on services in the industrialised and some developing economies. But this should not stop us from being concerned about the growing distortions in the economy, which are the reason why policy research on the emerging 'information' or 'knowledge' society does matter. One answer to Nicholas Garnham's question about 'who will pay for innovation' in ICT and related sectors is that an increasingly privileged minority of the population will pay, and will reap whatever benefits are to be gained from so doing. The rest of the population could remain socially or economically disadvantaged in many ways, regardless of their ability or willingness to pay for the new services that become available. This is why policy research in this area matters and why some – albeit limited – institutional and policy interventions will continue to be attempted, even if far from perfectly.
In the policy arena, Nicholas Garnham has always professed caution with respect to public policy promotion of investment in advanced ICTs where it is unclear what the demand for them will be, whether they will come to be regarded as technologies or services that should be made universally available, and who should bear the major cost of such investment. So, for example, with respect to investment in broadband technology in advance of clear demand in the market, Nicholas Garnham19 has argued that ‘no convincing general case has been made for such an intervention, or for the massive diversion of investment funds that would be required. The uncertainties of demand and the dangers of wasteful investment are simply too great’. Any attempt to alter existing policies should be a matter for future consideration on the basis of the empirical evidence of take-up and a consideration of the role that access to such technologies has come to play in the lives of citizens. In other words, as Nicholas Garnham argues in this EuroCPR keynote speech, our efforts at policy analysis need to address real problems and the

19 Garnham, N. (1997) 'Universal Service', Chapter 16 in W. H. Melody (ed.) Telecom Reform: Principles, Policies and Regulatory Practices, Lyngby: Technical University of Denmark, pp. 207-212.


diagnosis of the need for intervention must be responsive to those specific problems as they change through time. It seems to me that Nicholas Garnham offers some very important lessons for those of us who choose to remain actively involved in policy research that addresses ‘information society’ developments in whatever way. These are to ensure that our analyses are historically informed and responsive to social and economic problems. We should be more sceptical of simple theoretical frameworks that purport to explain more than their premises permit and we should be deeply aware of the politics of both policy research and policy making when we present our analyses of the contradictory dynamics that underpin information societies. As Professor Garnham argues, ‘we cannot … avoid policies and regulatory interventions’ but we can and should admit to the specific and limited nature of the insights that we have and to the uncertainty that prevails in those contexts in which all such interventions are implemented.


Mechanical to Adaptive Policy
by Johannes M. Bauer20
Four years after the Lisbon Strategy was adopted, Europe has made seemingly little progress toward the stated goal of becoming the most competitive knowledge-based economy. Professor Nicholas Garnham diagnoses hubris, a misconception of the role of information and communication technology (ICT) and inconsistent policies as the main culprits. In my own paper for the EuroCPR conference in Barcelona, I came to similar, fairly sceptical conclusions as to whether ICT and its effects on society can be socially engineered to achieve the broad range of objectives commonly discussed under the heading of information society policy. While I find myself in broad agreement with the overall tenor of Nicholas Garnham's arguments, I can see important differences in the analysis and the overall outlook for the governability of ICT. Before addressing these points of deviation it may be helpful to briefly assess the state of European ICT policy. Although, as Nicholas Garnham points out, it does not necessarily make sense to benchmark European achievements against the United States, this is one of the stated goals of European policy makers. Accepting this logic for a moment, an interesting bifurcation becomes visible. During the past two decades, Europe has been able to close the significant ICT infrastructure performance gap relative to the United States in many areas. Availability of service has increased throughout Europe, real prices have declined, and in areas such as mobile voice communications the Union took the global lead during the second half of the 1990s. There is evidence that the closing of the productivity gap between Europe and the United States during the 1980s and 1990s is related to ICT use. Yet these factors could not be translated into the envisioned macroeconomic dynamic. Unemployment remains stubbornly high, growth expectations relatively low, and during the past few years, European productivity trends have once again lagged behind the United States'.
This evidence illustrates that for macroeconomic goals ICT policy is not a sufficient, but only a necessary condition that needs to go hand in hand with other measures.


20 Professor, Telecommunication, Information Studies and Media, Michigan State University.


Did anything go wrong? At a superficial level one could be tempted to argue that the EU suffered bad luck in announcing an information society policy shortly before the information technology stock market meltdown, and in later insisting that its goals could be realised as soon as the sector recovered. But the problems run deeper and have to do with the fundamental conditions of governing modern ICT and its effects. Sociologists have long pointed to the increasing differentiation of modern societies, which has also affected the ICT subsystem. This process was accelerated by the deregulatory measures of the past decades, which led to new market entry and ran parallel with the differentiation of both supply and demand. There is now evidence and reason to believe that this process has increased the degree of complexity of the ICT sector and its interactions with the socio-economic system. In highly complex systems, changes in one node directly and indirectly affect many other nodes and thus percolate through the entire system. Although the working of complex evolving systems can be understood and modelled, their future states elude forecasting, except over very short time horizons. The performance of such systems at macro levels is shaped by micro-level processes, but not fully determined by them: it is an emergent property of a self-organising system. Policy makers are but one component of this overall system: they can influence it, but not fully control its outcomes. In contrast, the traditional policy approach assumes that policy makers are external agents who can steer the ICT subsystem and its effects. Once the complex evolving system nature of ICT is recognised, the ability of policy to govern is seen in a new light. Complexity does not imply that policy is ineffective or that it cannot improve the working of a system.
Unfortunately, complexity theory was appropriated by proponents of a paleo-liberal position, a fallback to the classical liberal stance, which claims that collective action cannot and should not improve the ‘spontaneous order’ emerging from unregulated market forces. However, this is not a logical conclusion from the application of complexity theory, which points to limits but also to new opportunities to govern ICT and its effects. Policy faces several challenges: like other agents in complex systems, policy makers lack complete knowledge; historical, political and institutional constraints limit the range of feasible courses of action; and conflicts of interest as well as the necessity to reconcile the logic of several sub-systems (e.g. law, economics, technology) impose additional constraints. Under these conditions, policy making cannot be conceptualised as the nearly mechanical, rational pursuit of ends, as prescribed by the dominant orthodoxy linking policy to market failure. Rather, as was already recognised by John Dewey in the 1920s, it resembles social experimentation with uncertain outcomes. Good policy facilitates experimentation, regularly monitors the experience, and continuously adapts its methods and approaches. A complexity perspective lends additional credence to the institutional argument that markets do not exist in the abstract and that the legal and regulatory frameworks in which markets are embedded matter. Contrary to the present dominant paradigm, which asserts the superiority of unregulated markets, the examination of complex evolving systems also reveals that multiple stable constellations, characterised by more or less collective intervention, do exist and may yield similar overall performance, although with vastly different consequences for participation in and distribution of the benefits of the information society. Applying this logic, the shortcomings of the past and present approach quickly become visible, and several are pointed out by Nicholas Garnham. A core problem with the Lisbon Strategy and the earlier sector transformation during the late 1980s and 1990s is the strong focus on supply-side measures and the comparative neglect of demand-side policies. Second, regulatory policy has become an exercise in the micro-engineering of market conditions at the firm level, focusing for example on interconnection and unbundling, but has neglected to directly address issues at a higher, sectoral or macroeconomic level. The persistent employment problems cannot be effectively solved by relying on a trickle-down supply-side philosophy. (In contrast to Europe, the United States, with its tax cuts and expansionary public sector spending, has created a massive fiscal impulse at a macroeconomic level.) Nicholas Garnham reveals inconsistencies in the analytical foundations of ICT policy by pointing to the conflicting conclusions emanating from a Schumpeterian versus a Hayekian framework. I want to add that present regulatory theory is predominantly based in the orthodox neoclassical model, which is but a special case of a more general dynamic theory à la Hayek or Schumpeter. It is not surprising that an intellectual framework rooted in static economic analysis at some point reaches its limits in designing policy for highly dynamic industries. Complexity theory allows several additional insights.
It is generally easier to specify conditions for local improvements if the status quo ante is far from the best-practice frontier. Given the gross inefficiencies that occurred under state ownership, it was relatively easy to point to options for improvement. Under these conditions, even an inadequate conceptual framework, such as the dominant neoclassical school, may correctly point towards performance improvements. (In the United States, with its higher level of efficiency prior to reform, the best course of action was and is less evident, as is indicated by the highly contested, slow-moving nature of its communications policy.) However, if every nation follows the same policy blueprint, it may remain unknown whether other, superior paths of action exist. Therein lies a danger of the present dominance of a new orthodoxy: no competing alternative approaches are institutionally tested, thus foreclosing opportunities for social learning. Furthermore, with the caveat that generalisations need to be made with caution, policies affecting the overall legal and regulatory framework of the ICT industry are probably easier to design and more effective than narrow industrial policy interventions. This is clearly illustrated by the beneficial effects of a more competitive overall market organisation. It does not necessarily follow that industrial policy is irrelevant. However, the conditions for its success are often poorly understood, and the experience from one case (e.g. GSM) does not easily transfer to new generations of technologies (e.g. UMTS). I am tempted to argue that discretionary measures probably have a higher likelihood of success at a local level. Again, this is demonstrated by a large number of successful initiatives to utilise the benefits of ICT. From this vantage point, one can also derive a renewed argument for direct public sector activity as an alternative framework to private sector activity. Lastly, the complexity lens helps to explain the presence of the hubris identified by Nicholas Garnham. The multiple and complicated relations in a social system cannot be understood or communicated easily. It is therefore tempting, particularly in the context of policy making, to simplify to the point of myth and to exaggerate the problem-solving capacity of policy. With the exception of scholars arguing from an institutional perspective, academics, experts and advisers bear their share of responsibility for this grand over-simplification, which became the new shared mental model for the organisation of ICT. Nevertheless, if a productive way forward is sought, it is necessary (and on this point I agree wholeheartedly with Nicholas Garnham) to recognise the limits of governance. Does a complexity view lead to an even more pessimistic conclusion than his? I think not: it forces us to re-think the basic foundations within which ICT unfolds and, in doing so, encourages us to consider more fundamental alternatives than most of the policy debates of the past two decades have entertained. Policy does matter and its choices make a difference, albeit in ways that are more complicated than commonly perceived.


Half-Empty and Half-Full Glasses
by W. Edward Steinmueller21
Professor Nicholas Garnham and I are in broad agreement regarding many of the issues – particularly the Colbertist heritage of the Lisbon proclamations and the centrality of productivity improvements in the service sector as the holy grail of information and communication technology (ICT) advance. Nicholas Garnham claims hubris is the leitmotif in our current opera, and one could certainly agree that the Lisbon pronouncements fit this role. His remarks, however, also indicate contradictory elements in policy that reflect, in my view, a policy praxis that is far less Colbertist than the policy rhetoric. In part, the distinction is one of tone – while academics are able to admit to ignorance and uncertainty, policy makers find such confessions untenable. Policy makers, and politicians in particular, prefer the slow poison of the loss of public trust that occurs when yesterday’s certitudes are buried with today’s proclamations to the hemlock of confessing their own errors. There are two points, however, on which I believe that Nicholas Garnham has reached conclusions that are quite incorrect. The first concerns the role of ICTs in the pursuit of the holy grail of service sector productivity improvement and in the so-called ‘productivity paradox’, a role that Nicholas Garnham discounts. When Daniel Bell first identified the coming of a post-industrial society, the context was one of sustained productivity growth in the domestic industry of the US. This productivity growth permitted the release of labour from industry to services, just as the prior improvements in agricultural productivity, largely the consequence of mechanisation, had permitted the release of labour to manufacturing.
While productivity growth has largely returned to its historical rate of advance, that is, the one prevailing from 1870 through the 1920s, it has continued despite the fact that the share of service workers in modern economies has roughly doubled since Bell’s work was first published in 1973, while the share of manufacturing workers has halved. Professor Garnham joins several scholars in arguing that the making of ‘chips’ (or, more precisely, productivity improvements in the electronics and software industries) is central to the macroeconomic productivity improvements of the last years of the 20th century. The view, however, is contested by Moses Abramovitz and Paul A. David, who argue, along with Stephen D. Oliner and Daniel E. Sichel, that appropriate quality adjustments (many of which are the consequence of ICT use) substantially raise productivity, improving the performance of other sectors. There are really only two distinct candidates (and maybe a third based on their interactions) for explaining the continued improvement in productivity – the first is organisational innovation, the other is the contribution of ICTs. Hubris is notably absent in the policy research community’s frank admission of ignorance concerning how the effects of general-purpose technologies can be reliably traced using existing methods. This ignorance, however, must not be taken to mean that these effects are absent. The second point on which Nicholas Garnham’s scepticism risks throwing the baby out with the bath water concerns the role of the global information infrastructure in realigning the international division of labour and creating ‘knowledge-based’ economies. The organisation and co-ordination of transnational value chains may produce deplorable labour conditions in those countries bidding for a ‘piece of the pie’ offered by access to global markets. This ‘globalisation’ process does, however, provide a basis for generating the export income necessary for gaining access to investment and technological resources that were far less mobile through most of human history than they have become in the past several decades. While liberalisation of international trade structures can be credited with enabling these developments, improvements in international telecommunications have been essential for their implementation. The ‘hollowing out’ of the corporate monoliths responsible for the realisation of the Fordist aspirations shared by all of the great powers is one of the uncomfortable consequences of these developments.

21 Professor in Information and Communication Technology Policy, Science and Technology Policy Research, University of Sussex.
The economies of scale that Professor Garnham argues were responsible for the post-World War II surge in industrial output, and that have provided the rationale for the single European market, are no longer the driving force in the growth of advanced economies. Instead, the flexible reconfiguration of international component sourcing and the management of the supply chains created by this global outsourcing have become central. The principal value added by the advanced industrial economies is now in the creation of intangibles – brand, design, customer service and logistics. These are all activities where specific kinds of knowledge play a central role and where ICTs are employed as the primary tools of production. Both of these comments emphasise the role of the telecommunication and computing industries in the supply of improved producer goods and services. Concerning the more difficult issue of the ‘retail’ provision of telecommunication and information services, I find less to disagree with in Professor Garnham’s remarks. Areas of broad agreement include:


- Commodity production of content is a passenger in the vehicle driven by the use of the new technologies for inter-personal communication, including ‘self-publishing’ of content.
- The creation of new business for consumer e-commerce applications represents, to date, a significant addition to a minor industry – postal shopping or mail order.
- The efficiencies of liberalisation with regard to key network infrastructure technologies are over-emphasised, although the specific target of broadband development has served to offset this rhetoric.

That hopes for content, e-commerce and a self-organising infrastructure have failed to live up to expectations is, in both Nicholas Garnham’s and my view, the consequence of mistaken assumptions. We differ, however, about which assumptions are incorrect. Nicholas Garnham argues that information society theorists have adopted a simplified view of a complex reality, which has led to grandiose claims concerning policy relevance and opportunity. In Nicholas Garnham’s view, the incorrect assumption appears to be this simplification itself. My view is that the predominant strand in social science analysis of information society developments is a rejection of technocratic and modernist visions that assure a simple or regular route to progress. It is not that the theorists have got it wrong; it is that the only theorists being given the stage are those who are willing to subscribe to a simplistic and technologically determinist vision of the imminence and benefits of information society developments. Those with more measured or complex views are left to pursue their academic musings with their students. Lest I join their ranks, let me be very concise. The knowledge-based economy and the information society are inextricably linked concepts in which the tools provided by advanced communication and computing technologies are of central importance in meeting human aspirations. The problem is that, as in all human endeavour, the aspirations of the participants are in conflict. Unlike the traditional concerns of public administration and management, the politics of the information society are underdeveloped and unbalanced. The supply-side voices have for too long dominated the discussion, to the extent of suppressing other voices and even pushing them into a sort of ‘underground’ whose most visible manifestation is the free/libre open source software movement in Europe and other countries. As Eli Noam noted some years ago, networks have politics.
Many of the inadequacies of our current policy frameworks are the consequence of a failure to make a break from the politics of the past. Championing large industrial players prolonged and intensified the processes of re-adjustment and re-structuring. Embracing the new ‘insurgent’ Schumpeterian competitors is likely to be just as damaging. Restoring the ‘user’ and the concept of ‘public welfare’ to centre stage will go a long way towards addressing the policy disconnects that Nicholas Garnham correctly identifies.


Creating a ‘Critical Space’ for Analysis and Debate
by Martin Fransman22
Professor Nicholas Garnham has raised some fundamental issues regarding European policy making and the vested interests and theories that go along with it. He suggests that ‘we’ (including politicians, bureaucrats, policy-makers and researchers) have imbibed a ‘consensual vision of the information society’. However, he argues, this consensual vision ‘in fact contains within it a number of distinct visions … each of which needs to be judged on its own terms’. The thrust of his speech is to try to unbundle this consensual vision and its components and to provide an explanation of why the unbundled components have taken the form they have, focusing specifically on the vested interests and thinking that have informed their construction. However, at a more general level he begins by asking ‘where are the voices of the sceptical researcher?’, implying that there is insufficient ‘sceptical research’. It is this issue that I want to focus on (leaving his substantive questions about the ‘new consensual vision of the information society’ for another occasion).

The need for a ‘critical space’

I agree, as I think most serious analysts would (whether in academia, government or the private sector), that ‘sceptical research’ should be an important component in any healthy politico-economic system. Sceptical research may be defined as research that aims to ‘stand back’ from the phenomena under investigation in order to examine fundamental questions regarding these phenomena.23 These fundamental questions relate largely to issues of cause and effect. For example, within the European information and communication technology (ICT) area, it is important that questions such as the following are discussed. What are the main policies that are being pursued at European and national levels?
What are the drivers of these policies, that is why are these policies being followed rather than other possible policies, including the difficult question of the interests that might have motivated the policies?24 What are the effects of the
22 Professor, Institute for Japanese-European Technology Studies, University of Edinburgh.
23 This is not to suggest that research can ever be completely objective and impartial. We are all inevitably and irrevocably embedded in systems of thought of various kinds and these colour our judgements.
24 Most analysts would accept that policies are always motivated by specific interests, agreeing that interests and the power to give them effect are not evenly distributed amongst any population. This issue is analytically difficult, however, because the chains of causation that link interests to policies are often complex and indirect, resulting in a



policies, including who the main winners and losers are likely to be as a result of these policies?25 Important though it is to ask such questions, it is necessary to stress that definitive answers will rarely be produced. Complexity and interest are significant constraints. It is precisely for this reason that a ‘critical space’ is needed within the politico-economic system so that these kinds of questions can be posed, counterposed, analysed and debated. Such a space will facilitate the open-ended process of analysis, criticism and scepticism that is needed. Do we in Europe have an adequate critical space? Nicholas Garnham clearly feels that the answer to this question is, on the whole, negative. He is not suggesting that what he calls sceptical research is impossible (his own paper being proof of the existence of such research). Rather, he implies that there is not as much sceptical research as there should be. Support for his view comes from the recent boom and bust. One of the main puzzles left unanswered in an analysis of the telecoms/ boom and bust is why there were so few critical voices challenging the thinking that drove the events of the boom (that led, consequentially, to the bust).26 This raises questions about the role played by institutions such as universities and research institutes. In short, the puzzle suggests the relative absence of a critical space for analyses and debates of precisely the kind that Professor Garnham has raised. In turn, this raises two further fundamental questions: 'Why is there the absence of a critical space?' and 'Can anything be done to create a critical space in national and global politico-economic systems?'

Why is there the absence of a critical space?

The puzzle is a puzzle because national systems contain institutions that have supposedly evolved precisely to address the need for critical analysis.
These institutions, including universities, corporate research organisations, and other research institutes, contain researchers or, as Adam Smith called them, ‘philosophers or men of speculation, whose trade it is, not to do any thing, but to observe every thing’ – people who are relatively removed from routine everyday tasks in order to facilitate their critical function. Supposedly, it is the job of these researchers to provide the critical analysis that, it is commonly agreed, is a necessary component of a healthy politico-economic system. But, if the diagnosis of a relative absence of critical analysis in the telecoms/ boom and bust is correct (and Nicholas Garnham suggests there is a similar absence in the case of what he
degree of ambiguity regarding which particular interests have motivated which policies.
25 These questions are equally difficult, again largely because of complexity regarding both cause and effect.
26 For a detailed elaboration see Fransman, M. (2004, forthcoming) ‘The Telecoms Boom and Bust 1996-2003 and the Role of Financial Markets’, Journal of Evolutionary Economics.


calls the ‘new consensual vision’ of the information society), why have these institutions and their researchers not played the role of critic they were intended to play? My aim here is to raise this question, rather than to develop a detailed explanation of the complex processes that constrain (and, at the limit, prevent) the critical function from being realised. However, I will mention two relevant points. The first is that the evolution of relatively autonomous institutions27 (such as universities) does not necessarily create the possibility for relatively autonomous thinking. Thomas Kuhn’s research on scientific paradigms, and the ‘invisible (and sometimes too visible) colleges’ that support them, should be sufficient to convince us of this.28 The second point relates to the interests (conscious and unconscious) that influence the thinking and research of researchers. To give a blunt example, how many ICT researchers from supposedly relatively autonomous research institutions are ‘tied up’, one way or another, with the projects and participants of the European Commission’s information society-related programmes and, therefore, compromised to some extent in so far as critical thinking is concerned?

Can anything be done to create a critical space?

The obstacles are enormous. If it is correct to say that there are often (if not always) vested interests behind major policies and programmes that allocate significant resources, why would these interests be willing to support critical analyses that might include analyses of the role that they have played? Similarly, why would politicians, bureaucrats and policy-makers be willing to put up with critical analysis of the processes and outcomes which they have helped shape (and in some cases mis-shape)? In short, the sceptic (or the realist) may conclude that there is no role for real, independent, critical analysis – that is just the way the system works.
When we add to these political points the Kuhnian constraints on critical thinking that takes place outside the paradigmatic box, the obstacles are compounded. However, there is, perhaps, one optimistic ray of light. As the telecoms/ boom and bust has amply demonstrated – and, even more starkly, the Iraq War with its absence of weapons of mass destruction, the main rationale for going to war – great dangers face politico-economic systems that lack a truly critical space. Might this provide the incentive to focus attention and priority on how to create a critical space? Three questions follow:
1. Are there ‘players’ with sufficient political clout who would want to create a critical space?
2. What conditions are necessary for a critical space to be created and for it to function effectively?
3. What processes would need to be put in motion for a critical space to be realised?
Nicholas Garnham’s speech creates the opportunity for these questions to be put on the table for critical analysis and debate.

27 That is, relatively autonomous from political and private sector interests.
28 According to Thomas Kuhn in his book, The Structure of Scientific Revolutions (1962), scientific thinking is advanced through the development of what he called paradigms, which conceptually structure thinking in a particular area. His view is that communities of practitioners support and are supported by the paradigms that they generate. He argues that ‘normal’ scientific advance typically involves the adaptation and extension of existing paradigmatic thinking rather than the serious consideration of alternative paradigms. Practitioner communities are strongly wedded to their paradigm, often constituting ‘invisible colleges’ of support. It is only when a paradigm collapses under the weight of anomalies and contradictions that cannot be explained by the paradigm that a ‘scientific revolution’ occurs and the conditions are created for a paradigm to be replaced by one or more alternative paradigms. In the context of this current piece the point is that paradigmatic thinking crosses institutional boundaries and may compromise attempts to create the conditions for critical thinking.


‘Le retour des éponges’
by Jean Paul Simon29
Voici le retour des éponges naguère méprisées. (‘Here is the return of the once-despised sponges.’) Michel Serres

However sceptical or pessimistic Nicholas Garnham’s assessment of twenty years of European public policies and related research in the communications sector may appear, I do endorse most of its conclusions, including the final plea to be more humble. But I wonder whether this plea could not also be applied to Nicholas Garnham’s own critique and deconstruction of the policy and research under scrutiny. In some way, Nicholas Garnham’s critique reminds me of the kind of criticism we heard at the end of the 1970s and beginning of the 1980s from a group of French philosophers (G. Deleuze, J.F. Lyotard and others) about what they called the end of ‘grand narratives’ (fin des grands récits). The aim of these narratives was to provide a coherent account of reality, their consistency being an output of the narrative itself and relying more on the evidence that was omitted than on the evidence provided. Of course, such narratives, like the policies described by Nicholas Garnham, did not expect to be received with scepticism. On the contrary, they relied on a suspension of disbelief, without which they would be attacked30 as plain ideology or a matter of simple faith. What is interesting, though, is that these philosophers had previously been involved in various kinds of radical political activism,31 hence their tone of disenchantment, which we can also detect in Nicholas Garnham’s text. Some of this disenchantment stems from an initial lack of modesty on the part of both sides: public policy

29 Senior VP for international regulatory strategy, France Telecom. Caveat: the views developed in this paper are purely my own and do not represent the viewpoint of France Telecom.
30 For Deleuze it took the form of a plea for immanence against transcendence; J.F. Lyotard was heralding the coming of the ‘post-modern society’. Their critique of the co-extensivity of the ‘rational’ and the ‘real’ is also a critique of the Hegelian linear teleology.
31 For instance, J.F. Lyotard was a member in the 1950s and ‘60s of the radical left-wing group ‘Socialisme et barbarie’, together with philosophers like C. Castoriadis and C. Lefort.


makers and social researchers. Nicholas Garnham is entirely right in (re)stating that ‘policy was driven by a range of different interests with differing definitions of the problem, different aims and different supporting economic models and theories’ later to be criticised. However, this is precisely what was concealed beneath the former ‘grand récit’. So it was not only out of naivety that we (and here I must include myself too) adhered to the goal of nurturing the dialogue and producing some sound social research to allow ‘a more rational policy’. This shared ‘consensual/grand vision’ brought together researchers in search of a lever for action, directed away from old-style academic research, and European policy makers looking for new areas to conquer: communications was one such32. It was a reciprocal process in which both parties gained from supporting the basic narrative, mixing politics and policy. So, where does the lack of modesty fit in relation to this tentative account of ‘failed’ policies? The mutual agreement between researchers and policy makers was based on a very crude or mechanical belief that it was possible to trigger the right output provided one chose the appropriate approach, another consequence of this kind of ‘grand récit’, linear by its very essence. We believed, or pretended to believe, although without any evidence, that, as Nicholas Garnham states in his conclusion, ‘we had the policy instruments that could produce the desired results’. This was a very technologically deterministic view. Eventually, of course, we were to discover not only that the reality was much more complex as a result of the interaction between several factors, but also that the theories, and of course the policies, were not living up to expectations. Among the many aspects that were underestimated was the role of the user/consumer.
As Nicholas Garnham rightly points out, this role was mostly rhetoric to justify the policies that were to bring benefits to the consumers and ‘increase the social optimum’. Of course, in terms of macro-economic public policies this might seem familiar, especially for those policies related to industry, where consumer welfare was seen as a by-product of ‘increased business efficiency’. In fact, because his focus is mainly on the internal contradictions of such an approach rather than on its narrow scope, Nicholas Garnham seems not to question this supply-side approach. At the EU level, there is an irony in the fact that while some departments were framing such public policies, others were trying to take account of this social

32 It must be remembered that in the 1970s and early ‘80s the Commission was like a ‘sleeping beauty’ waiting to be awakened. Member states favoured intergovernmental relationships over leaving the field open to the Commission’s initiatives. The ITT Task Force was not launched until 1983.


dimension.33 Contradictions within large organisations are somewhat familiar to the social sciences, but not easily acknowledged by policy makers. During the same period research was being conducted on usage and usage patterns,34 but feeding the results into the public policy debate was extremely problematic. Grasping reality is always a difficult task, especially in the context of social phenomena that seem to be quite removed from what is on the surface. What cannot be denied is that we in the communications sector witnessed tremendous changes. They may not have been the direct result of the policies in place, but they did happen. Perhaps it simply took some time to embody the ‘Daniel Bell vision’, as Nicholas Garnham puts it, or even the much-celebrated Galbraith vision of the late ’60s for the corporate world. I personally am not convinced of the accuracy of these, also partial, visions. However, to stay in line with the hypotheses of some of the historians (and even though I find this kind of formulation somewhat pompous), the coming of the communications era may well have coincided with the waning of the long nineteenth century that ended, according to French historian Maurice Agulhon, around 1950.35 J. Le Goff even talks of a long ‘Middle Ages’. However, this is not an explanation; it is merely another assumption in need of justification. Without getting caught up in another kind of deterministic model (or circular explanation), let me just emphasise that the new models of communication that emerged were, in any case, more suited to present social relations (individual interaction), granting the vagueness of this expression and remaining agnostic about its causes, and that they took some time to emerge.
Although remaining neutral about what triggered these models, one of their clear outputs is the proliferation of information and communication systems, not only in quantitative terms (not so long ago we had one telecommunication provider and one broadcaster), but also in qualitative terms (the drastic changes in the way we communicate with each other). This observation, however, still leaves aside the question of causality: are policy makers creating their environment or is it the other way around? But I will carefully leave aside the structure/agency issue, which is far too complex for such a short review. Suffice to say that here, too, we need to get away from mechanical deterministic models. My own view is that we need to work through the complexity of the interaction of several layers of parameters, of ‘mediations’,36 to
understand the processes, and that the path from structure to agency (or vice versa) is winding, and may even move in circles.37 To sum up, although I agree with Nicholas Garnham’s conclusions and am impressed by the subtle deconstruction of some of our latest myths, this does not mean I share either his disappointment or his alleged pessimism. Obviously a lot of things have changed during the last two decades, and although most of the reasons offered to account for these changes may have been unconvincing, the changes did occur, and Nicholas Garnham underestimates them. Also, even if we doubt the capacities of policy makers and scientists, if we live in a world of disenchantment this should not prevent us from setting up a modest research agenda and avoiding another fit of ‘historical amnesia’. In other words, and continuing the historical metaphor, after leaving behind the ultimate ‘grands récits’ it may be time to start some ‘micro storia’, à la Ginzburg or N. Zemon Davis, with its intricate focus on micro-processes and on the specific entanglement of parameters. When communications become commonplace, the ‘grands récits’ are over and blind faith has been ousted, but this may offer the opportunity for more modesty and the chance to find new evidence from a different angle. Finally, I have to acknowledge that ‘reconstructing’ the past, even as some kind of social history of communications, may not be attractive to policy makers; nevertheless, it would produce some positive feedback into the public debate and, to end on an optimistic note, perhaps prevent policy failures. In his book Le passage du Nord-Ouest, M. Serres, as an epistemologist, explained that he was trying to find the north-west passage, that is, the right connection between ‘hard’ and ‘soft’ sciences; we may have to follow a similar path to bridge the gap between critical research and public policies. However, this implies taking complexity into account, hence the metaphor of ‘sponges’.

33 For instance, the FAST programmes.
34 See, for instance, the works of the ‘interactionist’ school, of Steve Woolgar, Roger Silverstone, and others.
35 See Histoire vagabonde. Ethnologie et politique dans la France contemporaine, Paris: Flammarion.
36 See the seminal work of C. Castoriadis (1975) L’institution imaginaire de la société, Paris: Le Seuil. For communications see ‘Les médiations’, Réseaux 60, 1993.
37 See P. Flichy on innovation, L’innovation technique. Récents développements en sciences sociales. Vers une nouvelle théorie de l’innovation, Paris: La Découverte, 1995.


European Research and Telecommunications Policy: an Evaluation Perspective by Peter Johnston38
It is refreshing and stimulating to hear such a robust critique of European telecommunications policy and of the research behind it. I agree with much of what Nicholas Garnham now identifies as problematic. However, he oversimplifies and overstates some of the failures and reaches too modest a conclusion about the new opportunities for constructive policy intervention. The original purpose of EuroCPR was to develop a European research capability in telecommunication regulation and information society developments and bring it into dialogue with policy makers. The aim was to support rational policy making based on the best available evidence and dispassionate analysis. This purpose is still valid, and is more necessary than ever – with an enlarged European Union, a greater than ever impact of electronic communications on society and economic growth, and strengthened commitments to evidence-based policy development at the EU level. We should not be dismayed by the collective inability of the financial and research communities to foresee and anticipate the collapse of the 1998-2000 investment boom in Internet-based businesses and telecommunications. The myopia was much wider, and in fact many observers within the analysis community did sound the alarm39 – the ‘consensual vision’ that the high growth rates of 1997-2000 for a wide range of ‘Internet start-ups’ could persist was clearly unfounded. However, new innovations such as those mobilised by Amazon and eBay, and by search engines such as Google, have been rewarded with very substantial first-mover and winner-takes-all monopoly rents. It should have been possible to anticipate that the growth in the personal computer (PC) market (driven by dial-up access to the Internet) and in GSM – Global System for Mobile Communications – handset sales – the end-user markets which pulled up most of the ICT sector growth – would pass the turning
38 Head, Evaluation and Monitoring, DG Information Society, European Commission.
39 ‘The new technological style: mismatch, instability, plasticity and potential’, Professor Andrew Tylecote, UK, p. 39 in ‘The New Economy of the Global Information Society’, DG-INFSO, European Commission, May 2000.



point to saturation in 2000.40 However, this would have required an earlier and wider understanding of market dynamics in emerging complex ‘network’ sectors. Nicholas Garnham is right to observe that ‘economic theories and models have driven … and legitimate … the telecom/information society policy process’. He is also right to observe that the extremely complex system of the global economy requires new theoretical and policy capabilities, and that there has been no single policy goal – we have been engaged with a combination of the five policies he identifies, which I would re-define as three:
1. Industrial policy to consolidate the European ICT sector for a single market and as a globally-competitive private sector, rather than as public monopolies or ‘national champions’.
2. Consumer welfare policies to provide a wider range of more affordable communications services to all Europeans.
3. Policies for sustained economic growth, stimulating innovation across the whole economy, including in the provision of public services.
These are tightly linked policy goals, and it has made good sense to pursue them together – in a coherent package of support for research and technology development (RTD) and transnational collaboration; legislative measures for competitive provision of an ever-widening range of services; and support for accelerated structural change. Nicholas Garnham sees an incompatibility between industrial policies to enable a consolidated European Union information and communication technology (ICT) sector to contend with US and Japanese competitors, and the goal of ‘squeezing operator margins in the name of consumer welfare’ – with regulators left to balance two incompatible goals. I beg to differ. In general, consolidation and competition have achieved economies of scale and scope, with substantial price reductions for consumers and a wider range of services available to more people.
Fast technology development and innovation, together with new common standards (notably the Internet protocols – IPs), have radically changed the cost structure of communications services, to the great benefit of users. Nicholas Garnham is not right to claim that the information society label covers ‘continuation of a largely failed industrial policy’ and ‘protects at EU level … the budgets associated with it’. The European ICT sector has been successfully transformed – it is globally competitive in key areas (not all), and consumers – both individually and European society as a whole – have benefited. Measured against most criteria, the EU

40 ‘The dynamics of Internet and GSM growth, and inherent market fluctuations’, Robert Pestel and Peter Johnston, ECPR 2003.



telecom and information society policies have not failed – they have been remarkably successful, but because of this they now need re-orientation. Nicholas Garnham is right to stress the fundamental incompatibility of a ‘Hayekian Dogma’ of ‘technological neutrality in regulation’ with the development of network technologies with long lead times, and with ‘Schumpeterian’ encouragement of innovation and structural change through widespread availability, affordability and use of broadband access to ‘Internet-type’ services. To achieve these goals, it is necessary to target specific technological areas for RTD, and the regulatory structure needs to encourage entrepreneurial innovation. The world has changed in the last decade. The mix of EU policies for an ‘information society’ that emerged through the 1990s, with significant re-definition in 199541 and in 2000,42 is in need of review and re-orientation.

The technology landscape and perspectives are radically different – with an established dominance of the IPs (including the transition to IPv6); the emergence of ‘open source’ software as a serious alternative to proprietary systems; and peer-to-peer infrastructures (e-mail and blogging; eBay and music sharing) invading domains of previously centrally-provided commercial activities. The globalisation of production and trade, including services, has made the ICT sector a highly interdependent global network of multinationals and their supply chains. It remains vital to keep Europe competitive – but this now means that the social, education, research, institutional and fiscal environments (and communications infrastructures) must facilitate competitive enterprise and innovation. It is not enough that we have a few ‘globally competitive’ ICT suppliers. This broader competitiveness goal is not a ‘zero sum’ goal that must be achieved at the expense of the US and Asia. It is a collaborative goal, in which all world regions can find a sustainable place. Europe needs to remain one of the key locations for global innovation and business – with Europe’s competitive advantage lying in its high skills and in its linguistic and cultural diversity.

Nicholas Garnham questions whether the ICT sector remains critical to the development of an information society as envisaged by Daniel Bell, and whether ICT growth is a cause or a result of a transition to a knowledge-based service economy. If ICT development has reached a new ‘plateau of capability’, the technologies themselves could clearly be treated as commodities, without strategic
41 The Bangemann Report, 1995.
42 The ‘Lisbon Strategy’ and eEurope Action Plans.


value. However, all the evidence points to continued rapid evolution in technological capabilities – both in IT, through nanoelectronics, and in communications, through photonics, radio and grid collaboration technologies. In these circumstances, it does not matter whether we see growth in the ICT sector as a cause or a result of social and economic change – the technologies will continue to co-evolve with society and the economy as part of a highly linked complex system. Nicholas Garnham clearly sees the goal of transforming Europe into a competitive and dynamic knowledge society as ‘policy overreach’, and concludes that we must be more humble – limiting ourselves to ‘short-term interventions in specific areas’. Yet the European Union can surely not retreat into ‘rearranging the deck-chairs’. Public intervention to shape the future networked knowledge society, and to secure Europe’s place in it, cannot be carried through at the national level. We have no global institutions – with the possible exception of the World Trade Organization – with the mandate or leverage to address societal goals in this transition. We are indeed fortunate to have an enlarged and strengthened European Union with the full set of instruments at its disposal for coherent and effective public policy intervention. We must continue to set ambitious goals and use all our instruments to attain them.


Are Industrial Policies Irrelevant or Obsolete?
by Anders Henten43
In his address Nicholas Garnham follows the best of traditions, building on the ancient Greek philosopher Socrates in revealing the inconsistencies in the thinking of his opponents (in this case the proponents of ‘information society policies’) as well as in emphasising that we really do not know much and need to be humble. However, I doubt whether, in contrast to Socrates, Nicholas Garnham is looking for a consistent Truth in the messy reality. According to Nicholas Garnham, all we can hope for is ‘messy, short term interventions in specific areas to solve specific problems’, and we should take ‘grand visions, plans and theories with a large pinch of salt’, as reality is so complex that we will probably get it wrong in trying to intervene on the basis of visions, plans and theories. There are a lot of enlightened and instructive observations in Nicholas Garnham’s address from which we can all learn. However, I do not agree with its fundamental pessimism, and would be unhappy to settle for and be content with ‘messy, short term interventions’ if this means not striving for more consistent policies in the area of information and communication technology (ICT) developments and usage. Furthermore, I am certain that not all other parties will lay down their ‘grand visions, plans and theories’. In the same vein as Nicholas Garnham’s realistic remarks that ‘we cannot avoid policies and regulatory interventions’ and that ‘no human society or social group can be unplanned in the extreme sense’, it is not realistic to avoid ‘grand visions, plans and theories’. They are necessary to rally the support of people, social groups and interests for social projects; there is an ongoing struggle for domination between different visions, plans and theories, and it would be unwise not to participate if one has a vision, a social interest or even just a point of view. ‘Information society’ visions and plans can easily be seen as an expression of this need for social visions.
From the beginning of the 1990s, the information society concept has been adopted by policy makers as a new development of their social visions, or as a substitute for visions that were diminishing or lacking. A problem with

43 Associate Professor, Center for Tele-Information, Technical University of Denmark.


the information society concept, however, is that it is so broad that it can cover almost anything and may well function as a smoke-screen for many different policies. In a paper from 2000 on information society visions in the Nordic countries,44 I pointed to a number of paradoxes in relation to information society visions: that they are similar in all countries, even though these countries may differ greatly; that they are backed by all political quarters, even though information society developments are supposed to revolutionise societies; that the information society narrative is developing in a period when the great narratives are supposed to have disappeared; and that information society planning is developing in a period when state planning is considered to be obsolete. These paradoxes are, of course, stretched to the (parodic) limit, and mainly serve to highlight some of the weaknesses in the information society concept, and the frequently extremely abstract character of information society visions and plans. All too often, the information society concept acts to cloud the understanding of the specific issues beneath it. On this point I absolutely agree with Nicholas Garnham, but I still think that the development of information society concepts is an important battleground for different upcoming social visions. And now and then issues become clearer, as in the case he mentions of intellectual property rights (IPRs), where one position is to use the protective mechanisms as incentives for innovation, while another focuses on the sharing of knowledge and joint learning. This is an example of an issue crying out for social science analysis, including positive as well as normative approaches. Behind the information society smoke-screen Nicholas Garnham finds many different and often contradictory policies.
The most important contradiction for him lies between the policies for maximising consumer welfare on the one hand, and those supporting network and service innovation on the other – in other words, between competition policy and industrial policy. Not that he says that consumer welfare and innovation are necessarily in conflict, but rather that the related policies are different in nature and ‘need to be judged on their own terms rather than taken as a single package’. A reading of Nicholas Garnham’s address leaves the impression that he sees the major problem lying, at present, in industrial policies – the Colbertism of European Union policies, as he calls it. He stresses more than once in his address that there are also problems involved in the market models and competition promoted in recent years, and in reliance on letting the market rule. However, the main focus of his concern is on the side of industrial policy. And this may be justified in the sense that industrial policy has received the most emphasis in recent EU policies.
44 Henten A. and Kristensen T. (2000) ‘Information society visions in the Nordic countries’, Telematics and Informatics, 17(1-2): 77-104.


After the big move toward liberalisation of the telecommunications area and the introduction of competition, there is now much emphasis in EU policies on the development of broadband access, and services requiring broadband capacity (the Lisbon strategy). Nicholas Garnham is worried that such industrial policies may be unnecessary, and probably even wrong and a waste of public money, if economic support of various kinds is involved. The reason, according to him, is that we understand the complex reality in a much too simplified manner. An alternative point of view might be that state intervention in different kinds of industrial policy moves is necessary to support markets and make them function in the best possible way. According to this view, state intervention cannot be limited to mere regulatory measures, but must also encompass industrial policy initiatives on the supply as well as the demand side. The dangers in such a point of view are obvious. Formerly much industrial policy was criticised for supporting dying industries on the basis of pressure from often unclear alliances between the owners and the people employed in them. Such industrial policies still exist, but are being superseded by policies supporting new industry areas such as biotechnology, nanotechnology and ICTs. But even here, there is a great risk in ‘picking the winners’, as the winners chosen may eventually be out-competed by similar winners in other countries reproducing the same industrial priorities. And, even in a situation where the choice was ‘right’ in the sense that a successful industrial development is achieved, one might question the necessity of supporting industrial development with public money that could have gone into, for instance, social programmes. However, in spite of these reasonable objections there may still be a need for industry policies which – on either the demand side or the supply side – support upcoming industries. 
Analyses of industrial development successes around the world often identify government intervention in the shape of industrial policy initiatives as one of the important building blocks. Nicholas Garnham’s basic assumption is that we know very little about the actual functioning of markets and the results of our policy interventions. We are dealing with complex systems, and the risk of getting it wrong is more than likely. He is not alone in this point of view, as he notes in his address: ‘It is good to know that one is not alone’. Indeed, those who contest this point of view are not alone either. As Fred Weingarten describes in a short piece in a book edited by Sandra Braman,45 in relation to the Office of Technology Assessment (OTA) in the US,46 closed in 1995, there has for a number of years been a very sceptical attitude in the US toward government of any kind or at any level. This has affected a broad range of policies

45 Braman S. (ed.) (2003) Communication Researchers and Policy-Makers, Cambridge, Massachusetts: MIT Press Sourcebooks.
46 Weingarten F. (2003) ‘Obituary for an agency’, in S. Braman (ed.) Communication Researchers and Policy-Makers, Cambridge, Massachusetts: MIT Press Sourcebooks, pp. 245-51.


including communication policies. In Europe, however, this attitude towards government intervention is not nearly so widespread. But it may very well influence our thinking on policy interventions in markets – an example of a theme where it is important not simply to import US discussions, but to develop a European research capability. This brings me to the last point – or rather Sandra Braman’s book leads me to it. The topic of this book is the relationship between social science research and policies in the communication areas. Nicholas Garnham also touches on this topic, but mainly he deals with the relationships between policies and markets. An obvious reason for this prioritisation is that relationships between the research community and the policy community have been relatively close, content-wise at least. It can with good reason be claimed that communication policy research has in some cases had fairly good relations with policy making. But this does not mean that research has been leading policy making. In various instances it has been the other way round – some research has acted as justification for the dominant policy directions. In this criticism of some of the research one must agree with Garnham. There is a tendency for research in areas related to ‘information society’ subjects to be more administrative than critical – to use the wording of Paul Lazarsfeld in a paper written in 1941.47 An attempt to describe and analyse European social science research on communication issues, including research in policy-related matters, was made in a recent study commissioned by IPTS called ‘Mapping the European knowledge base on socio-economic impact studies of IST’.48 More work in this field absolutely needs to be done in order to better understand the relationships between research, policy making and market developments.
One has to agree with Nicholas Garnham that there is often little clarity in research regarding information society issues. He starts by outlining a development in research through three cycles: liberalisation, convergence and the information society. Where policy ambitions have been increasing, he says, research has weakened. The reason is partly that liberalisation and competition are limited issues on which much knowledge has been developed over many decades. Information society issues are much broader and need more – not less – research, despite the criticisms that can be raised against some of the research performed. To sum up: in spite of the many points on which one can agree with Nicholas Garnham, my main objection to his address is that its basic pessimism could easily
47 Reprinted in Braman S. (ed.) (2003) Communication Researchers and Policy-Makers, Cambridge, Massachusetts: MIT Press Sourcebooks, pp. 493-509.
48 IPTS/ESTO (2004) ‘Mapping the European Knowledge base on socio-economic impact studies of IST’, March.


lead to a retreat from the discussions, or even battles, between different kinds of social interests shaping policies for industry and broader social developments around the information society concept. The most important issue, in my opinion, in Nicholas Garnham’s address and in this comment is that the exclusion of industrial policy initiatives seems to be an unnecessary handicap in the application of different societal tools to industrial and broader economic development.


On Muddling Through Contested Terrain
by William H. Melody49
Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas.
J.M. Keynes

Nicholas Garnham provides an incisive, insightful and sweeping critique of European information society policy over the past 20 years and the body of research that has sought to inform it. This has led some of his listeners/readers to the pessimistic conclusion that, according to Nicholas Garnham, there is little point in making policy in this field and even less in attempting policy research – and, I suppose, in having EuroCPR conferences. Nicholas Garnham pulls back from these seemingly logical conclusions in the last few paragraphs, almost recoiling from the implications of his analysis, noting that the case for ‘letting the market rip’ is also subject to devastating critique (and he has done that on other occasions!).

Nicholas Garnham states at the end of his speech that he is not making a case against public intervention, but ‘there are no general answers or rules and the results of intervention are likely to be very different from what was planned’. This is certainly not an argument in support of public intervention or a suggestion that government policies, and particularly policy research, can expect to accomplish anything constructive. Nicholas Garnham’s belated unsupported qualification of his analysis is blown away by his preceding withering critique. He recommends that policy makers and policy researchers should be more humble, but he does not suggest that humility will improve the research or the policies, and offers no recommendations as to how either policy or policy research can be improved. This is indeed a pessimistic tale, but I suggest that not only is this not the whole story, but also the failures Nicholas Garnham sees are dramatic evidence of the urgent

49 Managing Director, LIRNE.NET and the World Dialogue on Regulation for Network Economies (WDR). Visiting Professor, Technical University of Denmark, London School of Economics and University of Witwatersrand.



need for more critical policy research as the only avenue through which policies and economic performance are likely to be improved. The observations at the end of Nicholas Garnham’s tour de force recognise that the whole story has other important elements not explored in his address. The real world is characterised by imperfect policy research informing imperfect policy development attempting to shape imperfect markets. Much of the debate across the entire field of politics and markets50 (that is, government policies attempting to shape markets) from Adam Smith to the present day has been essentially about which is less harmful to economic development: market failure, or policy and regulatory failure. Smith attacked policy failure. Keynes attacked market failure. National telecommunication and other monopolies were justified because of market failure. Liberalisation was justified because the monopoly policies were failing. Garnham now attacks information society policy failure, but cagily does not tell us where he comes down on the relative merits and demerits of market and policy failures in the European information and communication technology (ICT)/information society area today, let alone what policy directions or policy research he would recommend. In more recent years, the politics and markets debate has been supplemented by a related debate over whether policy research is capable of generating knowledge that can help reduce both policy and market failures, and influence the trend of events more closely toward the goals of economic and social policies. EuroCPR represents researchers, policy makers and a few industry managers who believe that policy and policy research can make a difference in the communication/ICT/information society field. Nicholas Garnham’s critique argues that in its nearly 20-year experience, information society policy research in Europe has made no evident contribution, and that the belief that it can is naive because the real world is far too complex.
I believe this conclusion to be both inadequately supported and wrong. Policy making and policy research are arenas where alternative and often contradictory views, policy options, theories and evidence are highly contested in continuously changing environments. They do not yield a consistent pattern of evolution or development, and it is naive to expect that they should. Garnham has documented some of the contradictions in providing a critique of EU information society policy and its various rationales. The contradictions, which he has headlined, simply describe the policy and policy research processes. The specific critique, which gets

50 See Lindblom C.E. (1977) Politics and Markets: The World’s Political Economic Systems, New York: Basic Books; and (2001) The Market System, New Haven: Yale University Press.



second billing, is the real substance of his contribution, deserving critical attention from the research community.

The marketplace for policies and ideas

It would be an error to read Nicholas Garnham’s critique as a linear logical-positivist argument directed against the lack of direct linkages between good research results, good policy selection and a better economic and social order, as is evident from exposure to any of his extensive writings on the ‘contradictions’ of capitalism and the ‘confusions’ surrounding knowledge of communication processes, let alone markets and policies. Rather, it should be read as a wake-up call for policy makers and researchers who, Nicholas Garnham believes, have become far too complacent and self-satisfied during a period of little demonstrated accomplishment. Neither the policy nor the policy research has been particularly well founded, and there has been far too little critical research. Many of us will agree. Nicholas Garnham presents a powerful argument that policy makers and policy researchers should step back and take stock – that is, critically assess what they have accomplished and what they are doing, examine the tacit assumptions they are taking for granted, and take a fresh, more critical look at the issues of the day and the ways they are being examined. This commentary is not the place to do this, but let me set a framework and context for such an analysis and for assessing Nicholas Garnham’s contribution. This is important because the obvious conclusion of Nicholas Garnham’s analysis, although he does not state it, is an urgent need for more and better critical policy research to inform better policies to manage the highly imperfect markets that characterise information economies. The concepts of monopoly and competition, and their policy implications, are well understood with respect to markets.
Monopoly power allows the terms of trade to be dictated by the monopolist, taxes consumers unjustifiably, and erects artificial barriers both to innovation and the participation of efficient competitors. Public intervention attempts to negate the monopoly power and/or its effects by competition policy, direct government regulation or public supply. The policy goal is to foster competitive markets where they can function effectively. With limited knowledge, bounded rationality, and a significant degree of uncertainty, contested markets are the best place to decide issues of resource allocation and economic efficiency. The overriding objective of policy here is to maintain a structure and process for the functioning of markets, not to determine or predict the results. The best results are unpredictable and likely to come from open, contested markets. They are characterised by contradictions and to outside observers apparent confusion.


The arena for policy development and implementation through regulation is structured differently, but in democratic societies similar objectives prevail. Policy makers and regulators have local monopolies in their areas of specialisation, but they too have limited knowledge and bounded rationality, and must contend with a significant degree of uncertainty, both with respect to policy processes and the external economic, social and political environment in which they must function. They must seek information in an attempt to deepen their knowledge and reduce uncertainty, but must recognise that most information provided to them is deliberately partial and intended to bias their views to favour one of the many special interests that will be affected by their policy decisions. Thus policy development and implementation is contested ground where the results – good or bad – are most often unpredictable, especially during periods of major technological or institutional change. Good policy and regulation, like markets, are also judged more by their structures and processes, for example whether they are independent, transparent, inclusive and accountable, than by whether the specific policies and regulations shaped the future as predicted. This is because we have some clear agreed standards for judging the former and in most cases only a few vague highly contested reference points for judging the latter. Here also, the best results are unpredictable and likely to come from open, contested competition among participants in the policy development process. This also seems full of contradictions and confusion to the outside observer. However, most of the research submitted to policy makers is in support of different preconceived positions on policy issues. It is administrative research purchased to bolster the case of some special interest. 
Thus the arena for policy research is also contested ground, but within the restricted limits of conflicting special interests it is a very constrained and narrowly focused contest, excluding some large interest groups in society as well as detached, independent, more objective policy research. Neither policy development nor the policy research informing it is extended to include all affected interest groups (for example general consumers, employees, the poor), or broader collective or public interests. In these respects, the structures and processes of policy development and policy research are often deficient. For research on the implications of policy options for general interest groups that are not active participants in policy processes, and for more detached critical research51 on policy issues and their implications for broader public interests, one must look to universities and other sources of independent policy research. This has always been a rump minority of the research community, but fundamentally important for keeping policy and regulation from being captured by special interests. EuroCPR, the UK's Programme on Information and Communication Technologies (PICT) network, the Telecommunications Policy Research Conference (TPRC) in the US, and similar organisations in other countries were established to foster independent critical research on policy and regulatory issues and an ongoing dialogue with policy makers, as a recognised minority – but hopefully influential – input to policy-making processes. This helps overcome the deficiencies of the narrow policy development and policy research processes described above. It opens them to a broader range of contested competition among policy options, information sources and ideas. And, as researchers know better than most people, the battle of ideas has always been highly contested ground. How then should one conceptualise this policy process, characterised by uncertainty, bounded rationality, limited information, contested policy options and contested policy research information? Charles Lindblom captured it well 45 years ago in a classic article, 'The Science of Muddling Through'.52 Given the legacy constraints and conflicting pressures, in addition to the characteristics noted above, the policy-making process in democratic societies is in reality one of muddling through to achieve incremental changes to the inherited system. If the policy process is essentially muddling through contested terrain, then the only certain way we have to judge its effectiveness is by the characteristics of its structures and processes. Policy that has beneficial consequences (predicted and unpredicted) is a bonus, and is to be sought by the cultivation of research, data accumulation, information gathering, theoretical development, knowledge building and advocacy. But it is only in unusual cases that clear evidence can support the attribution of success or failure to specific policies or policy research.

51 On the characteristics and implications of administrative and critical research, see Melody, W.H. and Mansell, R.E. (1983) 'The Debate over Critical vs. Administrative Research: Circularity or Challenge', Journal of Communication 33(3): 103-117.
In many cases, the primary contribution of policy research is helping to block policy options that would serve only narrow vested interests.

Some evidence from personal experience

Throughout my career I have spent considerable time as both a policy researcher and a policy advisor in a number of countries. While at the US Federal Communications Commission (FCC) in the late 1960s and early 1970s, and later as an expert witness and advisor to the US Justice Department on the AT&T antitrust case, I was a leader in the movement to liberalise telecommunications in the US. I have been asked a number of times over the years whether, in retrospect, our efforts yielded the changes we intended, and whether they actually yielded public benefits rather than just consumer confusion and a growing market for telecom lawyers and consultants. My best defence is structural. A lot of people had good ideas about improving telecom services in a wide variety of different circumstances. The monopoly policy of the day rejected their proposals out of hand. There were no good reasons that we could see for preventing these people from going ahead with their proposals, although there were a lot of contradictory and confusing theories and research studies put forward by the AT&T monopoly and its extensive network of interests and consultants. The proposed new competitors would be putting at risk only their own money, time, skills and credibility, and harming no one. Moreover, research had demonstrated that a liberalised entry policy would allow the FCC to use competition as a far more powerful tool for achieving public policy objectives than any of the directives it could issue to AT&T. It made policy implementation more effective. We did not know whether any of the ideas proposed by new entrants would survive in the marketplace, and assumed that, as in any dynamic market, some would and some would not. Our policy research and policy recommendations were part of a highly contested policy process characterised by contradictory policy proposals, theories and evidence. As events unfolded, our critical research was sometimes accepted and sometimes rejected. We won some battles, lost others, and many battles continued under different conceptualisations, evidence and analysis. Looking back on this period, it would not be difficult to make a case that the liberalisation era in the US was characterised by contradiction, confusion and hubris.

52 Lindblom, C.E. (1959) 'The Science of "Muddling Through"', Public Administration Review 19(2): 79-88. See also Lindblom, C.E. (1979) 'Still Muddling, Not Yet Through', Public Administration Review 39: 517-529, and his classic work, (1990) Inquiry and Change: The Troubled Attempt to Understand and Shape Society, New Haven: Yale University Press.
Our research failed to predict either that AT&T would defy FCC decisions and engage in rampant and extensive anti-competitive behaviour, or later that we would actually convince the judge that the dramatic step of breaking up AT&T was a necessary solution, an idea that had not been entertained at the time the FCC's competition policy was developed. We never considered the possibility of widespread consumer fraud and confusion, the explosive growth launched in the terminal equipment and value-added services markets, the endless appeals on telecom issues to the courts, or that we were preparing the ground for the introduction of the Internet. Was the policy change a good one? Was the policy research that informed it good? Did it achieve anything constructive? These are still contested issues today. Far greater contributions during my tenure at the FCC, in my judgement, were the organisational and institutional improvements with which I was associated: 1) the adoption of Notices of Inquiry as a vehicle for flagging evolving
policy and regulatory issues and obtaining advance information in an open, transparent process, used effectively in the first Computer Inquiry (1968), and now being used to gather information bearing on future Internet regulatory issues; 2) the establishment of an independent unit within the FCC to participate as an advocate of the public interest in important cases; 3) the establishment of a central policy planning unit to ensure that the benefits from research are fed into FCC policy analyses and deliberations; and 4) the establishment of the TPRC to foster policy research and dialogue with policy makers. Have all these institutional changes led to better decisions by the FCC, with additional benefits for the US economy and society? This might be a researchable question, but probably is not, because there are far too many intervening variables. All I can say is that the field of telecom policy research in universities and other independent research organisations has grown rapidly in the US and makes continuing inputs into policy processes that are manifestly more open, inclusive and informed than they were previously. Policy decisions may appear to some analysts to be as contradictory and confused as ever, but at least the process for reaching them is more informed, inclusive, transparent and accountable. This process at least tends to eliminate the worst policy options for society in general, but it does not guarantee the best, or a consistent pattern of development.

Assessing Garnham's critique

As the first Director of PICT in the UK (1985-88), my objectives included fostering independent policy research in universities and dialogue with policy makers and the new regulator, Oftel. This was desperately needed, in my judgement, in a UK environment where policy making was a very closed, opaque process and academic research had virtually no role.
Ironically, the best proposal for running an annual UK CPR conference came from the Centre headed by Nicholas Garnham at PCL (now University of Westminster). The UK CPR has since grown into EuroCPR and Nicholas Garnham has been the central figure in its activities over the years. So one can understand his desire to see some unequivocal evidence of positive effects from this enterprise. If Nicholas Garnham finds no comfort in his review of European information society policy development and policy research, perhaps he might find it in the institutional changes that have created a EuroCPR, an ENCIP, a PICT programme and other independent public interest policy research centres and networks. Unfortunately not. His address does not provide an assessment of EuroCPR and similar initiatives to foster public interest policy research that have brought structural changes in the policy research and policy formulation processes. But he has examined this issue on another occasion. In a 2002 paper, 'Universities,
Research and the Public Interest',53 he provides an analysis as pessimistic as the present paper, asking 'whether university research can, if it ever could, any longer be seen as a source of disinterested public interest expertise, especially in the field of information and communication policy and regulation' (p. 25), and then presents a number of reasons why he believes it cannot. In essence, he concludes that independence and the capability for critical research in universities are being steadily eroded by both industry and government. Here also Nicholas Garnham offers no suggestions for attempting to resolve the dilemma he identifies in the institutions to which he has devoted his career. Nicholas Garnham's analysis of the substantive policy issues is contestable on a number of grounds. A few are noted here. It fails to consider that the major factor determining the failures of economic and social policy implementation of all kinds over the past decade has been the overriding influence of the speculative rise and fall of stock markets, driven by factors entirely outside European information society policy and policy research, in a boom-and-bust scenario that has historically been a part of capitalist development.54 European information society policy has been a negotiated product between the EU and 15 national governments with widely differing problems, priorities and interests. The so-called European information society policy statements that Nicholas Garnham focuses on do not meet the simple definition of policy as an actionable commitment, as has been pointed out in publications by a number of EuroCPR members over the years.55 Most are called vision statements, which I characterised some time ago as statements of aspiration.56 They, along with the Lisbon mantra, are intended to inspire the nation states, under EU leadership, to make appropriate policy changes to move in the direction of idealistic goals.
Apparent contradictions and confusions in policies between countries and time periods are often, on closer analysis, shown simply to be different policies for different circumstances and stages of economic development. Does the sudden reversion of France to old school industrial policy and support for national champions reflect contradiction, confusion or a new political strategy stimulated by the EU enlargement? The hubris, of course, is a requirement of French culture!
53 Mansell, R., Samarajiva, R. and Mahan, A. (2002) Networking Knowledge for Information Societies: Institutions and Intervention, Delft: DU Press, pp. 24-7. For an alternative analysis, see Melody, W.H. (1997) 'Universities and Public Policy', in Smith, T. and Webster, F. (eds) The Postmodern University: Contested Visions of Higher Education in Society, Milton Keynes: OU Press, pp. 72-84.
54 See Perez, C. (2002) Technological Revolutions and Financial Capital, Aldershot: Edward Elgar.
55 See, for example, Webster, F. (1995) Theories of the Information Society, London: Routledge.
56 Melody, W.H. (1996) 'Toward a Framework for Designing Information Society Policies', Telecommunications Policy, 20(4): 243-59.


Nicholas Garnham's analysis also fails to consider the wide diversity in the policy research community. EuroCPR is primarily an organisation for the discussion of critical public interest research. Most of its members are based in universities or independent research centres. Most of the research that Nicholas Garnham criticises has been administrative research produced by the special interests hyping policies, products and stocks. They do not attend EuroCPR. They are the ones who most need to hear Nicholas Garnham's message, but I suspect their hubris levels have been lowered dramatically by the stock market collapse. The biggest problem with public interest research is the need for more powerful advocacy. Maybe EuroCPR participants need more hubris in the highly contested arena of policy research attempting to influence policy development. Nicholas Garnham's doom-and-gloom scenario has overshadowed his insightful analysis of the central political economic forces driving European information society policy development over the past 20 years. He presents a highly plausible case that, beneath all the apparent contradictions and confusion, the underlying issue has been, and remains, the Keynesian problem of systemic deficient demand. Whatever the policy labels and rationalising theories, the real underlying policy goal has been to provide a major stimulus to demand, primarily in the form of industrial policy that will foster employment and economic growth in the home market. This contribution to the contested arena of policy research deserves serious testing through critical analysis and debate, as it has important implications for both policy analysis and policy research.

Conclusion

Nicholas Garnham has demonstrated once again his considerable skill as an analytical demolition expert.
His wake-up call should be heeded and his critique used as a stimulus to improving both the quantity and quality of critical public interest research and its influence upon policy development. But it is also time for him to direct his analytical skills more constructively to the important issue of how imperfect policy research can inform imperfect policy development to shape imperfect markets to align more closely with public interest goals. The imperfections in markets can only be mitigated by effective policy and regulation. The imperfections in policy and regulation can only be mitigated by better information and knowledge generated from research. This is the continuing formidable challenge to the research community in the years ahead.


Contradiction, Confusion and Hubris: An Afterword
by Nicholas Garnham
I am both gratified and flattered that so many distinguished scholars have taken time and energy from what I know are busy lives to respond so trenchantly and illuminatingly to my original intervention. That intervention was intended to stimulate debate and I am glad that it has begun to do so in such a fruitful fashion. I am pleased to find that there is a large measure of agreement with the broad thrust of my argument. Where there are disagreements it will not be my purpose here to defend myself. Rather, this response gives me an opportunity to expand upon and clarify some of the main arguments and positions which the occasion of my original keynote meant were necessarily abbreviated in presentation; sometimes, it is clear, abbreviated to the point of inviting misunderstanding. First, in response in particular to Melody, let me stress that the target of my critique was Information Society (IS) policy and the way in which the narrower aims of telecoms policy and regulation had been hijacked to serve as a building block for a wider set of socio-economic policy goals dubbed 'Information Society'. Thus I do not agree with Peter Johnston that the aim of EuroCPR was to build 'a research capability in telecommunications regulation and Information Society developments'. The aim was to build capability in Communication Policy research, initially focused on telecommunications, but expanding under the impact of so-called convergence towards a wider view of the communications sector, including traditional media. In so far as it has developed in the direction of the IS it is part of the problem, not part of the solution. Thus when Melody describes his participation in efforts at the FCC to deregulate US telecoms he is describing what I, and I think Bauer also, would call necessarily messy, short-term experimentation.
Because it is experimentation, it necessarily rests upon an evidence-based test of its efficacy, even if the actual political process of regulatory development in reality usually short-cuts or aborts such evidence-based experimentation. I have no problem with the continuation of such work and such policy intervention. Indeed I recently
contributed to a seminar assessing Ofcom's latest document on Telecommunication Regulatory Strategy in just this spirit. I agree that the imperfections of markets can only be mitigated by effective policy and regulation. The problems in telecoms policy, thus narrowly defined, remain those of defining the market and, in a situation of market turmoil, judging the moment when intervention is either desirable or possible. I also think, and I should perhaps have stressed this more clearly, that the 'public interest goals' with which, in Melody's model, it is the purpose of regulation to align imperfect markets are much more problematic and difficult to define than I think Melody is assuming. Indeed, one of my main critiques of IS policy is that it defines a very vague set of public interest goals and then assumes that a range of very questionable policy interventions are the royal road to those goals. Here the public interest goals are just another word for the 'visions' for which Henten calls. I am accused of pessimism because I will not buy into these, in my view flawed, visions. Here I will just have to put my hand up and admit that I am not a vision person. Simon has thus correctly identified my disenchantment with 'grand narratives'. One can, I think, identify a number of visions with influence on policy and the interests they broadly serve; and in opposition one can point out their flaws, whether in their realism or in their negative social consequences. This critique can and should be based on evidence, and indeed on the 'micro storia' for which Simon calls, a call I heartily endorse. But the 'optimistic' alternative is not simply to produce another equally flawed, even if alternative, vision. As Brecht said, 'pity the country that needs heroes'. Let me now turn therefore to my critique of IS policy and its use of telecoms and ICT industrial policy. Here my critique had two parts, which may have become confused.
The first was a macro-level critique of the Lisbon agenda and the assumptions lying behind it as to the contemporary dynamics of global capitalism and Europe's place within it. Here Melody is right to stress that underlying my whole position, and my original keynote intervention, is an argument in political economy. This was assumed rather than argued in my original intervention because the nature of the occasion led me to focus on the specifics of IS policy and its link with ICT and telecoms policy. But one response in particular makes it clear that this level of argument has to be pursued in more depth. As Melody rightly says, 'this contested arena of policy research deserves serious testing through critical analysis and debate'. The illustration that this is so comes in Johnston's response: while apparently accepting many of my micro arguments concerning industrial policy, he continues to support a broad continuation of such policies on the grounds of the very political economy of the global information society and
Europe's position in it that I thought had been one of the main targets of my critique. Let me therefore clarify. My argument has two steps. The first is that IS policy, and the associated vision of a knowledge economy, is a response to a particular and flawed analysis of the global political economy. It is this that primarily invalidates the Lisbon agenda, not its specifics relating to telecommunications and IS technologies. The second step in the argument is that, even if the political economic analysis were broadly correct, IS technologies do not play the role ascribed to them and thus the related policies are misguided. In response, Johnston claims first that the EU has had three policy aims: a) industrial policy to consolidate the European ICT sector and make it globally competitive; b) consumer welfare policies to provide a wider range of more affordable services; and c) policies for sustained economic growth stimulating innovation across the whole economy, including in the provision of public services; and that these goals are tightly linked. The central thrust of my argument, to which I still hold, was to challenge this linkage, and especially the linkage between c) on the one hand and a) and b) on the other. It is on this assumed linkage that the IS policy argument rests. Johnston then goes on to claim that the policies in their first phase have been 'remarkably successful'; in particular he claims that 'consolidation and competition have achieved economies of scale and scope, with substantial price reduction for consumers and a wider range of services available to more people'. There seems to be a general problem with how one can have both greater consolidation and greater competition. In fact, of course, there has been consolidation in some parts of the total sector and greater competition in others. In so far as consumer welfare has been enhanced, it has been the result of narrowly focused regulation and extensive liberalisation of carriage.
The extent to which this would have been driven anyway, even in a less liberalised market, by technological development remains in my view an open historical question. What does not seem to be questionable is that industrial supply-side policies have had nothing whatever to do with it, for better or worse. Johnston then goes on to argue that, while policy needs adjustment in the light of changes in the 'technology landscape' and the global market, policies remarkably similar to those of the past are still needed to 'transform Europe into a competitive and dynamic knowledge society', and that to reject this as I do is to retreat to 'rearranging the deckchairs'. Well, it is true that I would rather have better-arranged deckchairs than a liner foundering because it is steering for a distant waypoint on
a faulty navigation system rather than skirting the reefs in its immediate vicinity through the visual sighting of channel markers. I think, on the contrary, that we need a much more disaggregated analysis of Europe's supposed economic problems (much exaggerated, in my judgement) and thus of the policy instruments that might possibly make a difference, without seeking linkages we can neither understand nor control. This then leads me to the two major thematic issues raised by respondents – issues which I think need much further debate. The first is the central political economic question raised primarily by Steinmueller, but also by Simon and Bauer: how do we understand changes in the global capitalist economic system over the last three decades, and what role have ICTs played in those changes? This remains difficult and contentious terrain. I will simply say here that I think the jury is still out, and probably always will be, on the contribution of ICTs, and therefore of future IS-oriented policies, to differential rates of productivity growth and the dynamics and restructuring of global markets. I will just note, however, first, that even if one accepts the measuring methodologies, a good proportion of US productivity growth outside the ICT sector itself seems to be attributable to good old-fashioned labour exploitation; and second, that contrary to the post-Fordists and weightless economy advocates, the drive for economies of scale in goods manufacturing still seems to be the major driver of globalisation, and the major shortages and bottlenecks now affecting the global economy are oil and physical transport infrastructures.
Thus, while I have no basic disagreement with Steinmueller that there has been a range of economic 'activities where specific kinds of knowledge play a central role and where ICTs are employed as the primary tools of production', leading to an emphasis on the role of the telecommunications and computing industries in the supply of improved producer goods and services, I do disagree as to the relative weight to be placed upon this within an overall political economy, and the extent to which these developments can be shaped, whether to be accelerated, redirected or aborted, by policy intervention. Finally, Steinmueller argues that the concepts of the knowledge-based economy and the IS, rightly understood, are inextricably linked to a shift away from supply-side and technologically determined analyses and policies towards a user-based approach. While this sounds immediately sympathetic, since users can easily be taken to be you and me and all the other little people just waiting to be liberated from the supply-dominated 'system', there are in my judgement two problems with this approach. This may be as much my failure of understanding and perception as a failure of the approach, but in debate we can perhaps see whether others share my worries. The first problem is with the definition of users, especially since, as Steinmueller himself claims, the current major users are the supply side, and at the same time the concept of users can too
easily be confused with consumers. Secondly, and here there are similarities with socialism, I have never yet seen a halfway clear or realistic description of what a user-oriented knowledge economy or set of policies would actually look like, or how they would work. Finally, I turn to the important question raised by both Mansell and Fransman: if it is true that there has been an absence of sceptical, critical voices in the IS policy arena, how do we remedy the situation? How, sociologically and politically, do we create a space for the development and propagation of such voices? This is a particularly urgent and pertinent question if, as I have been arguing, policy is an essentially experimental process. It is also ironic that we should be asking this question in the context of the IS, which is defined by some at least as a society which maximises diversity of voice and thus maximises cultural diversity and innovation of all types. I have no answer to this question. But I think we do need to start by focusing on the contradictions that circulate around current IS policies, at least in Europe. First, of course, intellectual property policy and the inherent tension, and thus the balance to be struck, between the free circulation of ideas and incentives to creation. Second, the tension between the drive to increase the output of trained 'knowledge workers' and the desire of governments and industrial corporations to minimise the expenditure on, and supposedly maximise the economic efficiency of, the knowledge worker and knowledge output system, higher education and research, and to link it to narrow definitions of its contribution to the nation's economy. Finally, in the EU, the centralisation and increased directedness of research policy and funding that stems from the top-down industrial and economic policy drive associated with the Lisbon agenda and other IS initiatives.