
Computers and the Social Sciences 2 (1986)

© Paradigm Press, Inc.

Intellectual Assembly Lines: The Rationalization of Managerial, Professional, and Technical Work

Judith A. Perrolle
Abstract: This paper examines recent trends in computer technology in the context of the theoretical argument that intellectual labor is being subjected to the same processes of rationalization and control that affected manual labor during the industrial revolution. It is argued that expert systems and other products of knowledge engineering are being developed as mechanisms to rationalize and mechanize the mental labor of individuals in technical, professional, and managerial occupations. Because there are at present very few applications of “intelligent” software, empirical evidence of their effects is meager. However, debates about the impact of computerization on the labor process must take into account the theoretical potential for automating expert knowledge, professional judgment, and managerial decision-making.
Developments in the computer field becoming known as knowledge engineering offer the technical means to create intellectual assembly lines for managerial, professional, and technical occupations. An intellectual assembly line is a division of labor in which the rationality of the bureaucratic organization acquires the mechanized efficiency of the factory, and in which mental labor is subjected to both the rationalization of its knowledge and the gradual automation of its productive activity. Technical, professional, and managerial work all involve the exercise of expert knowledge. Professional and managerial jobs also involve autonomous judgments based upon experience. Managerial activity in addition includes the evaluation and control of the work of others. To argue that these mental activities can be organized in assembly-line fashion presumes that computers can perform as technical experts, can acquire a kind of judgment based upon general principles and experience, and can make managerial decisions. These are precisely the claims of the research area known as artificial intelligence, which is part of the emerging field of knowledge engineering. Knowledge engineering includes efforts to organize intellectual activity into a set of computer-coordinated tasks by means of data management and decision-support systems (Hayes-Roth, 1984). It also involves attempts to mechanize actual decision-making and knowledge production activities using expert systems and other types of artificial intelligence software (Coombs, 1984; Winston and Prendergast, 1984). While theoretical debates about the prospects for automating thought processes are found in a variety of disciplines, social theories of the effects of computing (for a review, see Kling, 1980) generally neglect the issue of artificial intelligence. Although there are enormous discrepancies between optimistic claims that knowledge engineering can embody intellectual activity in computer systems and the actual performance of intelligent software, there are enough successes to demonstrate that machines can perform what were previously human mental activities (Pylyshyn, 1980). Although there are fewer than 200 commercial expert systems in operation (Frenkel, 1985), the rapid spread of robotics in industry and the growing business and military support for “fifth generation” technology are indicators that, despite the theoretical reservations of philosophers and cognitive scientists, artificial intelligence is becoming a social fact.

Judith A. Perrolle teaches at Northeastern University.
The Deskilling Debate

Debates about the effects of computerization on the labor process involve differing assumptions about both the nature of intellectual work and management strategies. Theorists who see a general trend toward the subordination of intellectual work predict deskilling (Cooley, 1980), white collar proletarianization (Wright and Singelmann, 1982; Salaman, 1982), and a declining middle class (Kuttner, 1983). They share the view presented in Braverman’s Labor and Monopoly Capital (1974) that a devaluation of intellectual work to reduce the costs and power of labor is in the interests of business and industrial management. They also tend to agree that the structural consequences of such a devaluation will be to reduce the size, status, and power of the white collar middle-class occupations (Abercrombie and Urry, 1983; Goldthorpe, 1982). Claims that professional, managerial, and technical work will be deskilled assume that the different experiences of highly skilled and less skilled mental labor with computerization so far are due to the lack of means to subordinate higher-level work. These claims in turn rest on the assumption that Taylorism is the ideology behind the choice of technology under capitalism and that capital is ultimately extracted from subordinated labor. Under this set of assumptions, we could expect intellectual assembly lines to develop as the techniques of artificial intelligence improve upper management’s ability to control the mental labor process. Artificial intelligence would be viewed as a labor-saving technology, performing professional and middle-level managerial tasks. The replacement of highly paid professionals by computer systems operated by less skilled labor would be considered an improvement in labor organization.

Different assumptions are made by those who argue that computers will enhance the quality and working conditions of intellectual labor, freeing humans from the drudgery of routine mental activity and freeing them for creative thought.
The projected social consequences of this arrangement have been stated most optimistically by Daniel Bell (1980: 204-205), who envisions a growing egalitarianism as a large class of “knowledge elite” acquires computer-enhanced skills. The theoretical argument that intelligent software will enhance mental work assumes that creative intellectual activity is uniquely human and can never be automated. Routine thought processes that are amenable to mechanization are considered mental drudgery; optimizing highly skilled human capital is believed to be the appropriate managerial strategy for dealing with intellectual labor. Under this set of assumptions, knowledge engineering applications should not reduce the wages, autonomy, or skill of employees in the professional, managerial, and higher-level technical categories.

Available evidence on the deskilling debate is mixed and poorly supported by empirical research (Attewell and Rule, 1984). Deskilling claims are best supported for the lower levels of mental work: skilled blue collar, clerical, and technician jobs (Ayres and Miller, 1983; Downing, 1980; Gottfried, 1982; Shaiken, 1984; Straw and Foged, 1983). For managerial, professional, and more highly skilled technical occupations, skill and autonomy enhancement (or at least the subjective impression of it) tend to be reported. Even among clerical workers, however, both skilling and deskilling evidence has been reported (Kling, 1985). Even when jobs are deskilled, the people performing those jobs may not be, as when unskilled workers enter low-skill computerized jobs, improving their relative position. It is clear that computer technology itself does not automatically have a single effect upon conditions of work. New means to rationalize intellectual activity and to embody technical skill, professional judgment, and decision-making logic in computers will not necessarily lead to intellectual assembly lines. They will, however, extend the deskilling debate to higher levels of the stratification system.

The Rationalization of Mental Labor

The idea of using computer technology as a means to rationalize intellectual labor dates back at least to the end of the seventeenth century, when Leibniz wrote:

It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could be safely relegated to anyone else if machines were used. (1959: 156-164)


Charles Babbage, whose 1833 design for the “analytical engine” was the prototype of the modern digital computer, developed factories organized around the principle that
Human labor is similar to capital, raw materials, etc. It is therefore subject, or ought to be subject, to similar input/output analyses, measurement, standards and controls. (Babbage, 1982)
Although not a direct precursor of Frederick Taylor’s work, Babbage was interested in the same sort of time and motion studies that became the hallmark of Scientific Management (Gideon, 1982:114). The substitution of machinery for labor was an early part of the industrialization process. Karl Marx (1973:110-126) agreed with Babbage’s definition of a machine as a division of labor in which a single engine links particular operations performed by a single instrument. Although Marx is often misquoted as having said that the hand mill produced feudalism and the steam mill produced capitalism, his theory was not one of simple technological determinism. Before new machinery could be introduced, he argued, those who have the power to redefine tasks and products must reorganize work to accommodate the equipment. While Marx applied his theory to the reorganization of manual labor under capitalism, the subject of rationalized intellectual labor was taken up by Max Weber and later theorists of bureaucratic organizations.

Today, the automation of bureaucratically rationalized mental labor by computers is made possible by an extension of industrialization which Norbert Wiener called a “Second Industrial Revolution,” in which “the sporadic design of individual automatic mechanisms” is replaced by “communication between machine and machine” (1967:208). Wiener’s cybernetics is the study of communication and control in humans and machines. Based on the theoretical work of Willard Gibbs (whose research institute became a model for the contemporary division of labor in science), cybernetics made intellectual assembly lines theoretically possible, with coordination of rationalized mental tasks performed by communication and control technology. However, because computers

can accommodate multiple tasks occurring at different tempos and sequences, intellectual assembly lines need not look like factories. Computers can be used to coordinate work performed

by geographically dispersed individuals working at their own pace without direct human supervision. In theory, the rationalization of mental labor which characterizes the intellectual assembly line could integrate individual efforts into larger human projects as easily as it could subordinate their mental activity to alienating working situations. Thus, the way in which computerized work is rationalized depends more upon who is able to define whom as “excellent men” or “anybody else” than upon purely technological possibilities.
The Mechanization of Thought Processes

The field of computer science known as artificial intelligence involves the design of computer programs and automated equipment, such as industrial robots, with a limited capacity to behave in ways that at least resemble human thought processes (for a technical survey, see Barr and Feigenbaum, 1982; Hayes-Roth, 1983; or Coombs, 1984; for a sympathetic popular history, see McCorduck, 1979). Information from the outside world can be sought, interpreted, and used as the basis for “heuristic” decisions which in humans would be called “best guesses.” Such programs can, within the narrow range of the worlds to which they are applied, draw inferences, suggest solutions to previously unsolved problems, select relevant information according to their own internal criteria, and modify their own behavior as a result of the outcomes of their previous actions. The theoretical possibility of representing human knowledge and decision-making processes in computer programs has been fiercely debated on both scientific and moral grounds, with the strongest objections coming from the philosopher Hubert Dreyfus in What Computers Can’t Do (1972) and the artificial intelligence expert Joseph Weizenbaum in Computer Power and Human Reason (1976). One important issue is the degree to which human decision-making is believed to be rational and logical. Intelligent software has been most successful for those applications in which the knowledge of human experts is characterized by great rationality; to claim that such programs can


perform in any area of human expertise is essentially to define all areas of human expertise as rationalizable. Such arguments are best received by industrial and professional managers interested in routine applications of knowledge and technique and by those who subscribe to the Taylorist principle that successful management involves maximum rationalization and control over labor. Automated programming, industrial planning by machine, and mechanization of the professions were topics on the agenda of a 1958 international conference on the emerging field of artificial intelligence (National Physical Laboratory, 1959). In addition to Leibniz’s goal of saving the labor of excellent men, managerial control and profitability were among the reasons advanced for supporting A.I. During the next twenty-five years, artificial intelligence was transformed from academic research projects to widely publicized commercial applications (Feigenbaum and McCorduck, 1983; Hayes-Roth, 1984). Expert system developers promise that their software will “capture” the knowledge of experts in programs that enable a less skilled person to achieve expert results:

Knowledge is a scarce resource whose refinement and reproduction creates wealth. Traditionally the transmission of knowledge from human expert to trainee has required education and internship years long. Extracting knowledge from humans and putting it in compatible forms can greatly reduce the costs of knowledge reproduction and exploitation. . . . Skill means having the right knowledge and using it effectively. Knowledge engineering addresses the problem of building skilled computer systems, aimed first at extracting the expert’s knowledge and then organizing it in an effective implementation. (Hayes-Roth, Waterman, and Lenat, 1983:5, 13)

While the debate between those who argue that machines can think and those who argue that they can’t is quite complex (for reviews, see Boden, 1977, and Haugeland, ed., 1981), the practical success of “intelligent” programs that play chess, infer chemical structures from molecular data, and diagnose illnesses indicates quite clearly that artificial intelligence is being “put to work” at industrial and professional tasks, despite the reservations of many theorists. The most ambitious practical proposals involving expert systems are those for the new fifth generation “supercomputers” (Feigenbaum and McCorduck, 1983). Promising higher industrial productivity and greater national security, the proposals call for many areas of military and civilian expert decision-making to be turned over to the faster, soon-to-be smarter machines. In his thoughtful critique of the fifth-generation idea, Joseph Weizenbaum questions Feigenbaum’s assertion that computers will produce the future knowledge of the world, asking how we are to understand just what information a computer actually produces and how it does so (Weizenbaum, 1983). But if information itself is seen as a commodity produced for profit by the rational organization and mechanization of intellectual labor, then information can be produced by the computer in the same way that products were made by the factory machinery of the first industrial revolution: through the alienation of laborers from the production process (Perrolle, 1985).

The Transformation of Technical Skill: Rationalization and Mechanization in Software Production

In its short history, computer programming has been transformed from a manual task of wiring boards (performed by women clerical workers) to a romanticized craft popularly believed to be one of the major sources of future high-tech employment. Today, however, software production is being rapidly rationalized into routine work (Kraft, 1977; Kraft and Dubnoff, 1983). The word “computer” first described the jobs of women who performed calculations and wired hardware for the pioneering ENIAC, and only later came to mean the machines that replaced them. The manual and routine mental work of the women was taken over by machines; the creative component was transferred to male mathematicians who became known as programmers. This process simultaneously produced both skill enhancement and deskilling as the intellectual work was differentiated into design and execution tasks. The design phase was redefined as creative work; the routinized mental labor was devalued in symbolic


and monetary terms and viewed as the appropriate target for automation. From the compilers of the 1950s to contemporary structured programming, relational databases, application generators, and expert systems, technological developments in software production have all been applied to the routinization of programming, even though most were introduced to spare humans from mental drudgery.

In 1958, Commander Grace Murray Hopper reported two consequences of her recently invented compiler: first, U.S. Naval officers found to their satisfaction that the new computer techniques gave project managers better control over the activities of programmers; second, experiments indicated that a new division of labor in programming, with highly skilled systems analysts producing flowcharts and clerically trained high school graduates producing code, was the optimal use of the new techniques. Programmers who at first opposed the change for fear of losing their jobs found that the new division of labor provided them with upward mobility while creating new low-level jobs for the coders (Hopper, 1959). Analyses of software production in the 1960s and 1970s have documented the emergence of a hierarchical division of labor similar to that in blue collar industries (Kraft, 1977; Kraft and Dubnoff, 1983).

Today, structured programming and its extensions offer new control mechanisms at a time when data security from high-tech crimes is a growing concern in economic institutions. They offer a way to replace temperamental programmer-craftsmen with better disciplined and less expensive technical laborers organized into intellectual assembly lines. Structured programming began with a 1967 paper by the Dutch computer scientist Edsger Dijkstra, who may become known as the Henry Ford of computer programming. He offered an elegant mathematical approach to the problem of program complexity and thus the hope of “bug-free” software (Olson, 1984).
By rationalizing the process of software design and coding, structured programming offers firms a 10 percent to 20 percent increase in program productivity (McClure, 1984). However, there is at present no good empirical research supporting these claims (Vessey and Weber, 1984).
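The programming discipline at issue can be sketched in a few lines of modern code (Python rather than the languages of the period; all names and the task itself are invented for illustration). Each routine is a single-entry, single-exit unit built only from sequence, selection, and iteration, so it can be specified, assigned, and checked as an independent piece of work:

```python
# A minimal illustration of structured decomposition: every routine has one
# entry and one exit, uses only sequence, selection, and iteration, and can
# be written and verified independently of the others.

def parse_record(line):
    """Selection: validate one input record, or reject it by returning None."""
    fields = line.strip().split(",")
    if len(fields) != 2:
        return None
    name, hours = fields
    try:
        return (name, float(hours))
    except ValueError:
        return None

def total_hours(records):
    """Iteration: reduce the validated records to a single total."""
    total = 0.0
    for _, hours in records:
        total += hours
    return total

def run_report(lines):
    """Sequence: compose the separately written parts."""
    records = [r for r in (parse_record(l) for l in lines) if r is not None]
    return total_hours(records)

print(run_report(["ada,3.5", "grace,4.0", "bad line"]))  # 7.5
```

Each function here is the software equivalent of an interchangeable part: its specification, not its author, determines how it fits into the larger program.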

Structured programs are easy to understand, fix, modify, and divide into separate parts. Well-defined tasks for programmers that can be integrated into larger programming projects are the software equivalent of interchangeable parts. According to the software engineer Frederick Brooks, Jr., the major impact of structured programming has been to introduce the concept of “control structures” into program design (1982:144). But such control structures also control programmers by limiting the scope of their activity. An extension of the concept to database design has produced the “relational database,” which maintains data in forms that can be used without being directly accessed (Codd, 1982). Although this introduces important technical improvements in data security, task coordination, and software reliability, the restructured working conditions restrict the autonomy and responsibility of programmers. When combined with research on programmer knowledge (cf. Soloway and Ehrlich, 1984), structured programming techniques can be used in application generators. While application generators are not, strictly speaking, expert systems, they do enough “reasoning” to enable a relatively inexperienced programmer to produce software (Keller and Townsend, 1984). In a recent survey of one small company (50 programmers) which converted to application generators, productivity did increase markedly over a five-year period while real wages fell. Younger programmers were enthusiastic about them, reporting that their skills were enhanced. More experienced programmers, however, reported being “deskilled” (Perrolle, et al., 1986).

Many artificial intelligence experts believe that software production will soon be largely performed by expert systems (Wenger, 1984; Frenkel, 1985). According to Bruce Buchanan (Shurkin, 1983:77), a major problem in software production is the time it takes programmers to convert acquired knowledge into programs. The implementation of “knowledge acquisition” systems, which connect the expert directly with the computer, would save all that programmer labor. Programmer labor, however, is a significant part of those expanding high-tech jobs which proponents of the information revolution are promising.
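The “knowledge acquisition” idea (expert knowledge entered as data rather than hand-coded by a programmer) can be illustrated with a toy sketch. The rules and names below are invented, and real expert systems of the period used far richer representations; the point is only that once a generic inference routine exists, adding expertise means adding rules, not writing programs:

```python
# Toy forward-chaining inference: the expert's knowledge is entered as
# if-then rules (data), and a generic interpreter applies them until no
# rule adds a new fact, so no programmer translates knowledge into code.

RULES = [
    # (set of conditions that must all be known facts, conclusion to add)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_physician"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep applying rules until stable
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"}, RULES)))
```

A clerk who knows nothing about the rules can run `infer` on a patient's symptoms, which is precisely the substitution of less skilled for expert labor that the chapter describes.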


Expert Systems in the Professions

Although business analysts report that “most of today’s expert systems are limited in scope and quite costly” (Alexander, 1984:118), specialists within the computer industry (Hayes-Roth, 1983; d’Agapeyeff, 1984; Basden, 1984) predict a steady growth in the replacement of humans with expert systems in narrowly defined areas of expertise. About a quarter of the “serious” expert systems in use in 1984 were in the professions, as shown in Table 1. Some knowledge engineers have begun to identify their potential for automating professional work as a problem. Feigenbaum recently pointed out that “Everyone worries about the fate of the blue-collar workers . . . it’s the highly paid professionals we ought to start worrying about” (1984). Although the use of expert systems presupposes, at least initially, that there are human experts to be consulted, in their industrial and professional applications expert systems use those experts as models for work settings in which people of much lower skills can achieve the same results. This implies that the “knowledge elite” is likely to be much smaller than usually predicted. Also, rather than being composed of our most creative thinkers, it is likely to be composed of those who have most successfully kept their knowledge to themselves.

TABLE 1. SUCCESSFUL EXPERT SYSTEMS, BY OCCUPATIONAL AREA, 1984

PROFESSIONAL
    Medical Research                15.9%
    Engineering                      7.2%
    Professional Services            3.7%
TECHNICAL
    Computing                       19.6%
    Electronics                      6.5%
    Oil and Mineral Exploration      7.2%
    Financial Services               3.6%
MANAGERIAL                          10.9%
MILITARY                             3.6%
OTHER                               21.8%

N = 138. Based on data from Tim Johnson, The Commercial Application of Expert Systems Technology. London: Ovum.


At the 1958 international artificial intelligence conference, physician Francois Paycha outlined the logic of medical diagnosis and argued that mechanization could solve some of its difficulties. Legal expert Lucien Mehl proposed that
a machine for processing information can be an effective aid in searching for sources of legal information, in preparing the decision of the administrator or judge, and finally in checking the coherence of the solutions arrived at. (Mehl, 1959:757)

Although Paycha suggested that we could not anticipate the wider social consequences of mechanized medical diagnosis, Mehl echoed Leibniz’s belief that the labor of excellent men would be saved for devotion “to research proper, to true scientific thought.” He further argued that, although judicial machines would be suited to conducting legal argument, they could never replace human legal experts because they were incapable of formulating precepts.

In the next decades, medical and legal knowledge became the subject of intensive efforts to develop intelligent databases and software. While expert systems developers would claim that computers do have the technical capabilities to replace many of the functions of lawyers, the trends in computer usage indicate that they are being adopted in ways which facilitate the existing arrangements of legal practice. Although computer programs could be developed to render rational judgments for some sorts of cases, the human quality remains an almost sacred element in the administration of justice; we are thus unlikely to experience computerized judges. Most legal experts would agree with Joseph Weizenbaum (1976) that any conceivable intelligence on the part of a computer would lack the element of human wisdom. Even using computers as “informants” or providers of expert testimony is controversial (Marx and Richman, 1984; Jenkins, 1979). What we can expect is accelerated use of computers to process court cases (now terribly backlogged in most jurisdictions) and to provide legal research services for attorneys. We may also expect computer law to become a professional specialization; by 1985 over 1000 lawyers belonged to national and regional computer law organizations (Connolly, 1985). The Lexis and Westlaw systems are examples of specialized database services for legal research (Bander and Sweetgall, 1983). Their use may lead to concentration of power in larger law firms, which are able to afford these services. Centralization tendencies could be avoided by making legal information services available inexpensively to individuals and small law firms.

In the medical profession as well, expert systems seem to be emerging as aids for human experts. While some skills, like using a scalpel, may be lost to laser surgery (Freifeld, 1984), techniques like computer-animated x-rays will give physicians more skill in diagnosing patients (Science 86, March 1986:10). The serious threat to the status of doctors is from institutional pressures of hospital administrations and health care insurers (Anderson and Jay, 1985). Many government officials and health care administrators would like to rationalize the mental labor of physicians. But, despite the opinion among knowledge engineers that medical diagnosis is a relatively straightforward problem, no one has serious plans to automate doctors in the near future. In the long run, the impact of computers on the social organization of medicine may be as dramatic as the telephone’s and automobile’s contributions to shifting health care out of doctors’ offices and into hospitals (Starr, 1983). But instead of becoming automated, physicians may use computer-based communications networks to move health care back out of hospitals.

The effects of computers on relatively powerful professions like law and medicine depend less upon technical possibilities for expert systems than upon political and economic issues of professional autonomy, credentials, regulation, and the role of paraprofessionals.
Mental labor is most likely to be subject to rationalization, control, and eventual automation in professions that allow their work to be done by less-skilled assistants. This is not a question of whether legal secretaries and nurses are capable of performing more skilled tasks, or even whether they in fact do so under a doctor or lawyer’s direction. It is a question of whether the paraprofessional can be managed directly by institutions without the services of the supervisory professional. As in other


instances of computerization, skill enhancement may occur for paraprofessionals as the professionals lose part of their privileged status. Spokesmen for professional engineering have warned for decades that professional status is reduced by change that threatens expert knowledge:
The engineer who at one time was the educated and elite leader in matching science to society is fast becoming just another member in the industrial labor force. (Forrester, 1967)

A review of the effects of computers on creativity in chemical engineering education (Drake and Perrolle, 1984) suggests that the employment of less expensive and more narrowly trained technical people may exacerbate the problem of obsolescence for more experienced engineers. In addition, it appeared that the mental labor saved by the use of expert systems may be subjected to heavy pressures for higher productivity rather than freed for more interesting types of work. In actual implementations, however, intellectual assembly line arrangements sometimes prove unsatisfactory, even when initially chosen by management (Cass, 1985; Perrolle, et al., 1986). Engineering problem-solving often calls for broader understanding and more flexible thinking than can be embodied in even an extremely “intelligent” program. In the hands of experts, as in Digital Equipment Corporation’s most recent chip design project (Bairstow, 1985), expert systems can save the labor of excellent people.
Computerized Decision-Making

In the 1950s, the application of computers to management decision-making was believed to be limited to the performance of routine clerical tasks and to objective decisions based solely on economic criteria. While admitting that management decisions ought to be objective wherever possible (and thus should be subject to automation), Merriman and Wass (1959) affirmed the subjective nature of managerial decisions as part of the spiritual nature of man. Like doctors and lawyers, managers claimed for themselves a special and creative role in human decision-making. In the next decades’ debates over the possibility and desirability of mechanized thought processes, it was widely asserted that managers’ functions simply could not be performed by machines.

Today, the capacities of expert systems include such domains as financial services currently performed by highly paid managerial employees (Sullivan, 1984). As Gio Wiederhold (1984) argues, the use of knowledge-based systems “can move well-understood human decision-making into the computer systems.” This includes a wide range of middle managerial tasks. Even more important are developments in management information systems that allow a concentration of decision-making into the hands of fewer managers. Herbert Simon (1985), Kenneth Arrow (1980), and other economists who have examined the impact of information systems on business decision-making are optimistic that centralization will not occur, but they recognize the possibility. In some areas, like modern petrochemical plants and the military, the new technological possibilities for centralized decision-making are already being realized. Embedded systems, combinations of hardware and software designed to function in integrated environments, are altering military and production technology. In chemical processing plants, integrated management information systems permit centralized control of everything from purchasing decisions on feedstocks to projected markets, pricing, process design, and overall system optimization (Drake and Perrolle, 1984; “The New Cockpits of Industry,” 1983). This industrial trend extends the workplace routinization process to financial and other middle-level managers who formerly made independent evaluations and decisions in their own area of expertise. The U.S. Defense Department’s Ada project and Strategic Computing Initiative represent major efforts to rationalize, centralize, and automate military decision-making.
Since almost all of the post-World War II developments in computer technology have been funded by the military (Atwater, 1982), Defense Department priorities will probably drive the development of intelligent software. The Ada project, intended to produce a huge standardized language for large-scale intelligent software applications, is a step toward promoting large, centralized control structures (for a description of the Ada language, see Barnes, 1982). Technical criticisms of the Ada project (Skelly, 1982; Ledgard and Singer, 1982; Wichmann, 1984; and Hoare, 1981) include arguments that it is too large and too expensive to be implemented except by large organizations. Some critics suggest that it represents a stifling by military interests of other new programming ideas (Rosenberg, 1983; Begley, 1983). The Strategic Computing Initiative (Office of Technology Assessment, 1986; Lin, 1985; Parnas, 1985) is diverting expert systems research toward pilot’s assistants, autonomous tanks, and battlefield management systems. Although there will undoubtedly be non-military spinoffs, the hierarchical nature of military decision-making may strongly promote decision-centralizing software as the industry standard. Also, as many opponents of militarized expert systems and structured programming fear, the belief that such systems can be made bug-free and reliable enhances the probability that military decision-making (especially in the area of nuclear strategy) will be embedded in these structures. Critics believe that the risk of an accidental computerized triggering of nuclear war is being significantly increased by the Pentagon’s chosen directions in computer development.
The Devaluation of Mental Labor

In both an economic and a cultural sense, and regardless of the outcome of the deskilling debate, the spread of knowledge engineering will devalue some kinds of mental labor. In the economic sense, professional, technical, and managerial employees who do the kind of thinking that machines do (or that inexpensive labor does with machines) will see a relative reduction in their wages and salaries unless they can acquire new tasks or protect their existing areas of expertise from automation. As knowledge engineering rationalizes and automates some areas of mental labor, those who are less successful at finding creative new activities may shift the focus of job satisfaction from autonomy and real control over the labor process to symbolic gestures of social standing. Already the terminology of computer technology defines workers subjected to the control of management systems as computer “users.” Job titles containing the words “manager,” “designer,” and “analyst” often do not correspond very well to wages and actual working conditions. Even computer equipment repairers (who often replace parts with little understanding of how the machinery works) wear business suits and carry their tools in briefcases. Among the middle class there is a growing concern for what Randall Collins (1979:72) calls a consciousness of formalism “directed away from the material realities of work experience and into the purely relative values of cultural currency.” In a culture concerned with self and status, the very meaning of work is changing. What one does in an instrumental sense is being replaced by what one displays in terms of symbolic status. So long as the illusion is maintained that employees on intellectual assembly lines are managing a system which enhances their intellectual skills, the symbolic token may be satisfactory. The contradiction in this arrangement is that if the computer software devalues labor in economic terms, the illusion will become increasingly difficult to maintain. In the long run, capitalist culture may teach that intellectual skills are not a source of human satisfaction; in the short run, downwardly mobile white-collar workers’ demand for the material rewards “due” their middle-class status is predicted to create a crisis of distribution (Leontief, 1980).

The mechanization of thought processes may be translated into a cultural devaluation of the rational, logical aspects of human knowledge and intelligence. Sherry Turkle (1983) finds young children exposed to computerized toys stressing “feelings” rather than “thinking” as the defining criteria of being alive and human. Critics of artificial intelligence and humanist critics of the social injustices of Western technological society (cf. Capra, 1982) tend to agree in condemning instrumental rationality as a form of tyranny over the human spirit.
These combined assertions, that the essence of human thought is “what machines can’t do” and that it is feelings rather than logic which make humans human, somewhat paradoxically help to legitimate turning over instrumental decision-making processes to expert systems programs. The machines are only behaving in coldly instrumental ways which are not true expressions of our humanity. Unfortunately, instrumental decision-making is at the heart of democratic political institutions. A devaluation of decision-making logic may render the democratic process even more concerned with emotional symbols of group solidarity and less concerned with rational discussions of issues than it already is.

REFERENCES

Abercrombie, Nicholas, and John Urry. 1983. Capital, Labor, and the Middle Classes. London: George Allen and Unwin.

Alexander, Tom. 1984. “Why Computers Can’t Outthink the Experts,” Fortune, August 20:105-118.

Anderson, James G., and Stephen J. Jay. 1985. “The Impact of Computers on the Practice of Medicine,” presentation to the American Sociological Association. Washington, DC: August.

Arrow, Kenneth. 1980. “The Economics of Information,” in Michael Dertouzos and Joel Moses, eds., The Computer Age: A Twenty-Year View. Cambridge: MIT Press.

Attewell, Paul, and James Rule. 1984. “Computing and Organizations: What We Know and What We Don’t Know,” Communications of the ACM 27, 12 (December):1184-1192.

Atwater, Harry. 1982. “Electronics and Computer Development: History,” Technology and Responsibility 1, 2.

Ayres, Robert, and Steven Miller. 1983. “Robotic Realities: The Near-Term Prospects and Problems,” Annals of the American Academy of Political Science 470 (November):28-55.

Babbage, Charles. 1982. Cited in Philip Kraft, Butler-Cox Foundation Lecture, Davos, Switzerland.

Bairstow, Jeffrey N. 1985. “Chip Design Made Easy: A New Generation of Tools Enables Nonexperts to Design Custom Integrated Circuits Cheaply and Easily,” High Technology 5, 6 (June):18-25.

Bander, Edward, and Susan Sweetgall. 1983. “Westlaw and Lexis: A Comparison,” pages 9-12 in New Technology and the Law, special issue of The Advocate 14, 2.

Barnes, J.G.P. 1982. Programming in Ada. Reading, MA: Addison-Wesley.

Barr, Avron, and Edward A. Feigenbaum. 1982. The Handbook of Artificial Intelligence. Stanford: HeurisTech Press.

Basden, A. 1984. “Application of Expert Systems,” pages 59-75 in Coombs, Developments in Expert Systems.

Begley, Sharon. 1983. “Can Ada Run the Pentagon?” Newsweek, January 10:71.

Bell, Daniel. 1980. “The Social Framework of the Information Society,” in Michael Dertouzos and Joel Moses, eds., The Computer Age: A Twenty-Year View. Cambridge: MIT Press.

Boden, Margaret. 1977. Artificial Intelligence and Natural Man. New York: Basic Books.

Braverman, Harry. 1974. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. New York: Monthly Review.

Brooks, Frederick P., Jr. 1982. The Mythical Man-Month: Essays on Software Engineering. Reading, MA: Addison-Wesley.

Capra, Fritjof. 1982. The Turning Point: Science, Society, and the Rising Culture. New York: Bantam.

Cass, Christopher. 1985. “Linking Computer Technology with Plastic Modelling to Produce Quality-Assured Piping Drawings,” paper presented to the American Engineering Model Society. Boston: May.

Codd, E.F. 1982. “Relational Database: A Practical Foundation for Productivity,” Communications of the ACM 25, 2 (February):109-117.

Collins, Randall. 1979. The Credential Society. New York: Academic Press.

Connolly, James. 1985. “Patent Disputes, Hacking Major DP Law Issues in ’85,” Computerworld (January 21):14.

Cooley, Mike. 1980. Architect or Bee? The Human/Technology Relationship. Boston: South End Press.

Coombs, M.J., ed. 1984. Developments in Expert Systems. New York: Academic Press.

d’Agapeyeff, Alex. 1984. Quoted in Computer 17, 12 (December):106.

Downing, Hazel. 1980. “Word Processors and the Oppression of Women,” pages 275-287 in Tom Forester, ed., The Microelectronics Revolution. Cambridge, MA: MIT Press.

Drake, Elisabeth, and Judith A. Perrolle. 1984. “Computer-Aided Creativity,” presentation to the American Society of Chemical Engineers, Atlanta (March).

Dreyfus, Hubert L. 1972. What Computers Can’t Do. New York: Harper and Row.

Feigenbaum, Edward. 1984. Lecture at the Massachusetts Institute of Technology, October 31.

Feigenbaum, Edward, and Pamela McCorduck. 1983. The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World. Reading, MA: Addison-Wesley.

Forrester, Jay. 1967. Speech at the NAE Fall meeting, Washington, DC, cited in Nigel Calder, Technopolis: Social Control of the Uses of Science. New York: Clarion Books, 1970:152.

Freifeld, Karen. 1984. “Obsoleting the Scalpel,” Fortune (August).

Frenkel, Karen A. 1985. “Toward Automating the Software Development Cycle,” Communications of the ACM 28, 6 (June):578-589.

Giedion, Siegfried. 1982. Mechanization Takes Command. New York: Norton.

Goldthorpe, John. 1982. “On the Service Class, Its Formation and Future,” pages 162-185 in Anthony Giddens and Gavin MacKenzie, eds., Social Class and the Division of Labor. New York: Cambridge University Press.

Gottfried, Heidi. 1982. “Keeping the Workers in Line,” Science for the People 14, 4 (July/August):19-24.

Haugeland, John, ed. 1981. Mind Design. Cambridge: MIT Press.

Hayes-Roth, Frederick. 1984. “The Knowledge-Based Expert System: A Tutorial,” IEEE Computer 17, 9 (September):11-28.

———. 1983. Roundtable discussion at Carnegie-Mellon (June 3), reported in IEEE Spectrum (November):114-115.

———, Donald A. Waterman, and Douglas B. Lenat, eds. 1983. Building Expert Systems. Reading, MA: Addison-Wesley.

Hoare, C.A.R. 1981. “The Emperor’s Old Clothes,” Communications of the ACM 24, 2 (February):75-83.

Hopper, Grace. 1959. “Automatic Programming: Present Status and Future Trends,” in National Physical Laboratory, 1959:155-194.

Jenkins, Martha M. 1979. “Computer-Generated Evidence Specially Prepared for Use at Trial,” pages 283-295 in William E. Cwiklo, ed., Computers in Litigation Support. New York: Petrocelli.

Keller, Robert, and Peter Townsend. 1984. “Knowledge-Based System,” Computerworld Office Automation 32.

Kling, Rob. 1985. “The Impacts of Computing on the Work of Managers, Data Analysts, and Clerks,” working paper 78-64. Irvine, CA: Department of Information and Computer Science, University of California, Irvine.

———. 1980. “Social Analyses of Computing: Theoretical Perspectives in Recent Empirical Research,” Computing Surveys 12, 1 (March):61-110.

Kraft, Philip. 1977. The Sociology of Computer Programmers. New York: Springer-Verlag.

———, and Steven Dubnoff. 1983. “The Division of Labor, Fragmentation, and Hierarchy in Computer Software Work,” paper presented to the Society for the Study of Social Problems, Detroit, Michigan.

Kuttner, Robert. 1983. “The Declining Middle,” The Atlantic Monthly (July):60-71.

Ledgard, Henry F., and Andrew Singer. 1982. “Scaling Down Ada (Or Towards a Standard Ada),” Communications of the ACM 25, 2 (February):121-125.

Leibniz. 1959. In D.E. Smith, ed., A Sourcebook of Mathematics, Vol. 1. New York: Dover.

Leontief, W. 1980. “The Distribution of Work and Income,” in Scientific American’s The Mechanization of Work. San Francisco: W.H. Freeman.

Lin, Herbert. 1985. “The Development of Software for Ballistic-Missile Defense,” Scientific American 253, 6 (December).

Marx, Gary T., and Nancy Richman. 1984. “Routinizing the Discovery of Secrets: Computers as Informants,” American Behavioral Scientist 27, 4 (March/April):423-452.

Marx, Karl. 1973. The Poverty of Philosophy. Moscow: Progress Publishers.

McClure, Carma. 1984. Computer Applications Seminar in Structured Techniques for Fourth Generation Languages. April 2-4, Washington, DC.

McCorduck, Pamela. 1979. Machines Who Think. San Francisco: W.H. Freeman.

Mehl, L. 1958. “Automation in the Legal World,” in National Physical Laboratory, Vol. II, 1959:755-780.

Merriman, J.H.H., and D.W.G. Wass. 1958. “To What Extent Can Administration Be Mechanized?,” in National Physical Laboratory, Vol. II, 1959:809-818.

National Physical Laboratory. 1959. Mechanisation of Thought Processes, Proceedings of a Symposium November 24-27, 1958. London: Her Majesty’s Stationery Office.

“The New Cockpits of Industry.” 1983. Fortune (November 28):108-117.

Office of Technology Assessment. 1986. Strategic Defenses. Princeton, NJ: Princeton University Press.

Olson, Steve. 1984. “Sage of Software,” Science84 (January/February):74-80.

Parnas, David L. 1985. “Software Aspects of Strategic Defense Systems,” American Scientist 73 (September-October):432-440.

Paycha, F. 1958. “Medical Diagnosis and Cybernetics,” in National Physical Laboratory, Vol. II, 1959:635-660.

Perrolle, Judith A. 1985. “Computers and Capitalism,” in Williamson, Evans, and Rustad, eds., Social Problems: The Contemporary Debates. Boston: Little, Brown.

———, et al. 1986. Preliminary case studies of Massachusetts firms introducing computer-aided design systems.

Pylyshyn, Zenon W. 1980. “Artificial Intelligence,” Chapter 6 in Bruce Arden, ed., What Can Be Automated? The Computer Science and Engineering Research Study. Cambridge, MA: MIT Press.

Rosenberg, Ronald. 1983. “The Military Goes Great Guns for Ada,” The Boston Globe (January 23):A1-A5.

Salaman, Graeme. 1982. “Managing the Frontier of Control,” in Anthony Giddens and Gavin MacKenzie, eds., Social Class and the Division of Labor. Cambridge: Cambridge University Press.

Shaiken, Harley. 1984. Work Transformed: Automation and Labor in the Computer Age. New York: Holt, Rinehart, and Winston.

Shurkin, Joel N. 1983. “Expert Systems: The Practical Face of Artificial Intelligence,” Technology Review (November/December):72-78.

Simon, Herbert. 1985. “The Consequences of Computers for Centralization and Decentralization,” in Williamson, Evans, and Rustad, eds., Social Problems: The Contemporary Debates. Boston: Little, Brown.

Skelly, Patrick G. (for the ACM Standards Committee). 1982. “The ACM Position on Standardization of the Ada Language,” Communications of the ACM 25, 2 (February):118-120.

Soloway, Elliot, and Kate Ehrlich. 1984. “Empirical Studies of Programming Knowledge,” IEEE Transactions on Software Engineering SE-10, 5 (September):596-609.

Starr, Paul. 1983. The Social Transformation of American Medicine. New York: Basic Books.

Straw, Ronnie, and Lorel Foged. 1983. “Technology and Employment in Telecommunications,” Annals of the American Academy of Political Science 470 (November):163-170.

Sullivan, Kathleen. 1984. “Financial Industry Fertile Ground for Expert Systems,” Computerworld, October 22:29-31.

“Supercomputers: The High-Stakes Race to Build a Machine that Thinks,” Newsweek (July 4):58-64.

Turkle, Sherry. 1983. “The Psychological Machine: Computers and the Culture of Self-Reflection.” Lecture at the New York Academy of Sciences Science Week Symposium (April 8).

Vessey, Iris, and Ron Weber. 1984. “Research on Structured Programming: An Empiricist’s Evaluation,” IEEE Transactions on Software Engineering 10, 4 (July):397-407.

Wegner, Peter. 1984. “Capital-Intensive Software Technology, Part 3: Knowledge Engineering,” IEEE Software 1, 3 (July):33-37.

Weizenbaum, Joseph. 1983. “The Computer in Your Future,” New York Review (October 27):58-62.

———. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman.

Wichmann, Brian A. 1984. “Is Ada Too Big? A Designer Answers the Critics,” Communications of the ACM 27, 2 (February).

Wiederhold, Gio. 1984. “Knowledge and Database Management,” IEEE Software 1, 1 (January):63-73.

Wiener, Norbert. 1967. The Human Use of Human Beings. New York: Avon Books.

Winston, Patrick H., and Karen A. Prendergast, eds. 1984. The AI Business: The Commercial Uses of Artificial Intelligence. Cambridge, MA: MIT Press.

Wright, Erik Olin, and Joachim Singelmann. 1982. “Proletarianization in the Changing American Class Structure,” American Journal of Sociology 88, Supplement:S176-S209.