
MANAGERIAL ECONOMICS

Monopolistic Price

A monopoly is an enterprise that is the only seller of a good or service.

In the absence of government intervention, a monopoly is free to set any price it chooses and will usually set the price that yields the largest possible profit. Just being a monopoly need not make an enterprise more profitable than other enterprises that face COMPETITION: the market may be so small that it barely supports one enterprise. But if the monopoly is in fact more profitable than competitive enterprises, economists expect that other entrepreneurs will enter the business to capture some of the higher returns. If enough rivals enter, their competition will drive prices down and eliminate monopoly power.
Before and during the period of classical economics (roughly 1776-1850),

most people believed that this process of monopolies being eroded by new competitors was pervasive. The only monopolies that could persist, they thought, were those that got the government to exclude rivals. This belief was well expressed in an excellent article on monopoly in the Penny Cyclopedia (1839, vol. 15, p. 741):

It seems then that the word monopoly was never used in English law, except when there was a royal grant authorizing some one or more persons only to deal in or sell a certain commodity or article. If a number of individuals were to unite for the purpose of producing any particular article or commodity, and if they should succeed in selling such article very extensively, and almost solely, such individuals in popular language would be said to have a monopoly. Now, as these individuals have no advantage given them by the law over other persons, it is clear they can only sell more of their commodity than other persons by producing the commodity cheaper and better.

Even today, most important enduring monopolies or near monopolies in the United States rest on government policies. The government's support is responsible for fixing agricultural prices above competitive levels, for the exclusive ownership of cable television operating systems in most markets, for the exclusive franchises of public utilities and radio and TV channels, for the single postal service; the list goes on and on. Monopolies that exist independent of government support are likely to be due to smallness of markets (the only druggist in town) or to rest on temporary leadership in INNOVATION (the Aluminum Company of America until World War II).

Why do economists object to monopoly? The purely economic argument against monopoly is very different from what noneconomists might expect. Successful monopolists charge prices above what they would be with competition, so customers pay more and the monopolists (and perhaps their employees) gain.
It may seem strange, but economists see no reason to criticize monopolies simply because they transfer wealth from customers to monopoly producers. That is because economists have no way of knowing who is the more worthy of the two parties: the producer or the customer. Of course, people (including economists) may object to the wealth transfer on other grounds, including moral ones. But the transfer itself does not present an economic problem.

Rather, the purely economic case against monopoly is that it reduces aggregate economic welfare (as opposed to simply making some people worse off and others better off by an equal amount). When the monopolist raises prices above the competitive level in order to reap his monopoly PROFITS, customers buy less of the product, less is produced, and society as a whole is worse off. In short, monopoly reduces society's income.

The following is a simplified example. Consider the case of a monopolist who produces his product at a fixed cost (where cost includes a competitive rate of return on his INVESTMENT) of $5 per unit. The cost is $5 no matter how many units the monopolist makes. The number of units he sells, however, depends on the price he charges, according to the DEMAND schedule shown in Table 1. The monopolist is best off when he limits production to 200 units, which he sells for $7 each. He then earns monopoly profits (what economists call economic rent) of $2 per unit ($7 minus his $5 cost, which, again, includes a competitive rate of return on investment) times 200 units, or $400 a year. If he makes and sells 300 units at $6 each, he earns a monopoly profit of only $300 ($1 per unit times 300 units). If he makes and sells 420 units at $5 each, he earns no monopoly profit, just a fair return on the capital invested in the business. Thus, the monopolist is $400 richer because of his monopoly position at the $7 price.

Table 1. Demand Schedule

Price   Quantity Demanded (units per year)
$7      200
$6      300
$5      420
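The monopolist's choice over the Table 1 schedule can be checked with a short sketch; this is a minimal Python illustration of the arithmetic above, not part of the original text:

```python
# Demand schedule from Table 1: price -> quantity demanded per year.
demand = {7: 200, 6: 300, 5: 420}
unit_cost = 5  # includes a competitive rate of return, so "profit" here is economic rent

# Monopoly profit at each price: (price - unit cost) * quantity sold.
profits = {price: (price - unit_cost) * qty for price, qty in demand.items()}
best_price = max(profits, key=profits.get)

print(profits)     # {7: 400, 6: 300, 5: 0}
print(best_price)  # 7
```

The $7 price maximizes the monopolist's rent at $400 a year, exactly as in the text.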

Society, however, is worse off. Customers would be delighted to buy 220 more units if the price were $5: the demand schedule tells us they value the extra 220 units at prices that do not fall to $5 until they have 420 units. Let us assume these additional 220 units have an average value of $6 for consumers. These additional 220 units would cost only $5 each to produce, so consumers would gain $1 per unit on 220 units, or $220, of satisfaction if the competitive price of $5 were set. Because the monopolist would cover his costs of producing the extra 220 units, he would lose nothing on them. Producing the extra 220 units, therefore, would benefit society to the tune of $220. But the monopolist chooses not to produce the extra 220 units, because to sell them at $5 apiece he would have to cut the price on the other 200 units from $7 to $5. The monopolist would lose $400 (200 units times the $2 per unit reduction in price), but consumers would gain the same $400. In other words, selling at a competitive price would transfer $400 from the monopolist to consumers and create an added $220 of value for society.
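The welfare arithmetic above can be made explicit in a few lines; this is a sketch of the example's numbers only, not a general model:

```python
unit_cost = 5  # cost per unit, including a competitive return
monopoly_price, monopoly_qty = 7, 200
competitive_price, competitive_qty = 5, 420

extra_units = competitive_qty - monopoly_qty  # 220 extra units sold at the $5 price
avg_value = 6                                 # assumed average value per extra unit

# New value created for society by producing the extra units.
gain_on_extra = extra_units * (avg_value - unit_cost)
# Wealth shifted from the monopolist to consumers on the first 200 units.
transfer = monopoly_qty * (monopoly_price - competitive_price)

print(gain_on_extra)  # 220
print(transfer)       # 400
```

The $400 is a pure transfer; the $220 is the deadweight loss the monopoly imposes on society.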

The desire of economists to have the state combat or control monopolies has undergone a long cycle. As late as 1890, when the Sherman ANTITRUST law was passed, most economists believed that the only antimonopoly policy needed was to restrain government's impulse to grant exclusive privileges, such as that given to the British East India Company to trade with India. They thought that other sources of market dominance, such as superior EFFICIENCY, should be allowed to operate freely, to the benefit of consumers, since consumers would ultimately be protected from excessive prices by potential or actual rivals.

Traditionally, monopoly was identified with a single seller, and competition with the existence of even a few rivals. But economists became much more favorable toward antitrust policies as their view of monopoly and competition changed. With the development of the concept of perfect competition, which requires a vast number of rivals making the identical commodity, many industries became classified as oligopolies (i.e., ones with just a few sellers). And oligopolies, economists believed, surely often had market power: the power to control prices, alone or in collusion.

More recently, and at the risk of being called fickle, many economists (I am among them) have lost both our enthusiasm for antitrust policy and much of our fear of oligopolies. The declining support for antitrust policy has been due to the often objectionable uses to which that policy has been put. The Robinson-Patman Act, ostensibly designed to prevent price discrimination (i.e., companies charging different prices to different buyers for the same good), has often been used to limit rivalry rather than increase it. Antitrust laws have prevented many useful mergers, especially vertical ones. (A vertical merger is one in which company A buys another company that supplies A's inputs or sells A's output.)
A favorite tool of legal buccaneers is the private antitrust suit, in which successful plaintiffs are awarded triple damages.

How dangerous are monopolies and oligopolies? How much can they reap in excessive profits? Several kinds of evidence suggest that monopolies and small-number oligopolies have limited power to earn much more than competitive rates of return on capital. A large number of studies have compared the rate of return on investment with the degree to which industries are concentrated (measured by the share of industry sales made by, say, the four largest firms). The relationship between profitability and concentration is almost invariably loose: less than 25 percent of the variation in profit rates across industries can be attributed to concentration.

A more specific illustration of the effect the number of rivals has on price can be found in Reuben Kessel's study of the underwriting of state and local government BONDS. Syndicates of investment bankers bid for the right to sell an issue of bonds by, say, the state of California. The successful bidder might bid 98.5 (or $985 for a $1,000 bond) and, in turn, seek to sell the issue to investors at 100 ($1,000 for a $1,000 bond). In this case the underwriter spread would be 1.5 (or $15 per $1,000 bond).

In a study of thousands of bond issues, after correcting for size, safety, and other characteristics of each issue, Kessel found the pattern of underwriter spreads shown in Table 2. For twenty or more bidders, which is effectively perfect competition, the spread was ten dollars. Merely increasing the number of bidders from one to two was sufficient to halve the excess spread over the ten-dollar competitive level. Thus, even a small number of rivals may bring prices down close to the competitive level. Kessel's results, more than any other single study, convinced me that competition is a tough weed, not a delicate flower.

Table 2. Number of Bidders and Underwriter Spread

No. of Bidders   Underwriter Spread
1                $15.74
2                $12.64
3                $12.36
6                $10.71
10               $10.23
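Table 2's spreads can be restated as the excess over the ten-dollar competitive level, which is the comparison Kessel's result turns on; a quick sketch:

```python
# Underwriter spreads from Table 2, keyed by number of bidders.
spreads = {1: 15.74, 2: 12.64, 3: 12.36, 6: 10.71, 10: 10.23}
competitive_spread = 10.0  # the spread observed with twenty or more bidders

# Excess spread over the competitive level, per number of bidders.
excess = {n: round(s - competitive_spread, 2) for n, s in spreads.items()}
print(excess)  # {1: 5.74, 2: 2.64, 3: 2.36, 6: 0.71, 10: 0.23}

# A second bidder cuts the excess spread by more than half.
print(excess[2] <= excess[1] / 2)  # True
```

With one bidder the excess is $5.74; with two it falls to $2.64, and by ten bidders it has nearly vanished.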

If a society wishes to control monopoly, at least those monopolies that were not created by its own government, it has three broad options. The first is an antitrust policy of the American variety; the second is public regulation; and the third is public ownership and operation. Like monopoly, none of these is ideal.

Antitrust policy is expensive to enforce: the Antitrust Division of the Department of Justice had a budget of $133 million in 2004, and the Federal Trade Commission's budget was $183 million. The defendants (who also face hundreds of private antitrust cases each year) probably spend ten or twenty times as much. Moreover, antitrust is slow moving. It takes years before a monopoly practice is identified, and more years to reach a decision; the antitrust case that led to the breakup of the American Telephone and Telegraph Company began in 1974 and was under judicial administration until 1996.

Public regulation has been the preferred choice in America, beginning with the creation of the Interstate Commerce Commission in 1887 and extending down to municipal regulation of taxicabs and ice companies. Yet most public regulation has the effect of reducing or eliminating competition rather than eliminating monopoly. The limited competition, and the resulting higher profits for owners of taxis, is the reason New York City taxi medallions sold for more than $150,000 in 1991 (at one point in the 1970s, a taxi medallion was worth more than a seat on the New York Stock Exchange). Moreover, regulation of natural monopolies (industries, usually utilities, in which the market can support only one firm at the most efficient size of operation) has mitigated some monopoly power but usually introduces serious inefficiencies in the design and operation of such utilities.

A famous theorem in economics states that a competitive enterprise economy will produce the largest possible income from a given stock of resources. No real economy meets the exact conditions of the theorem, and all real economies will fall short of the ideal economy, a difference called market failure. In my view, however, the degree of market failure for the American economy is much smaller than the political failure arising from the imperfections of economic policies found in real political systems. The merits of laissez-faire rest less on its famous theoretical foundations than on its advantages over the actual performance of rival forms of economic organization.

Break-Even Point
A company's break-even point is the amount of sales or revenues that it must generate in order to equal its expenses. In other words, it is the point at which the company neither makes a profit nor suffers a loss. Calculating the break-even point (through break-even analysis) can provide a simple, yet powerful quantitative tool for managers. In its simplest form, break-even analysis provides insight into whether revenue from a product or service has the ability to cover the relevant costs of production of that product or service. Managers can use this information in making a wide range of business decisions, including setting prices, preparing competitive bids, and applying for loans.

The break-even point has its origins in the economic concept of the point of indifference. From an economic perspective, this point indicates the quantity of some good at which the decision maker would be indifferent (i.e., would be satisfied without reason to celebrate or to opine). At this quantity, the costs and benefits are precisely balanced. Similarly, the managerial concept of break-even analysis seeks to find the quantity of output that just covers all costs so that no loss is generated. Managers can determine the minimum quantity of sales at which the company would avoid a loss in the production of a given good. If a product cannot cover its own costs, it inherently reduces the profitability of the firm.

Typically the break-even scenario is developed and graphed in linear terms. Revenue is assumed to be equal for each unit sold, without the complication of quantity discounts. If no units are sold, there is no total revenue ($0). However, total costs are considered from two perspectives. Variable costs are those that increase with the quantity produced; for example, more materials will be required as more units are produced. Fixed costs, however, are those that will be incurred by the company even if no units are produced. In a company that produces a single good or service, this would include all costs necessary to provide the production environment, such as administrative costs, depreciation of equipment, and regulatory fees. In a multi-product company, fixed costs are usually allocations of such costs to a particular product, although some fixed costs (such as a specific supervisor's salary) may be totally attributable to the product.

Figure 1 displays the standard break-even analysis framework. Units of output are measured on the horizontal axis, whereas total dollars (both revenues and costs) are the vertical units of measure. Total revenues are nonexistent ($0) if no units are sold. However, the fixed costs provide a floor for total costs; above this floor, variable costs are tracked on a per-unit basis. Without the inclusion of fixed costs, all products for which marginal revenue exceeds marginal costs would appear to be profitable. In Figure 1, the break-even point illustrates the quantity at which total revenues and total costs are equal; it is the point of intersection for these two totals. Above this quantity, total revenues will be greater than total costs, generating a profit for the company. Below this quantity, total costs will exceed total revenues, creating a loss.

To find this break-even quantity, the manager uses the standard profit equation, where profit is the difference between total revenues and total costs. Setting the profit to $0, he or she then solves for the quantity that makes this equation true, as follows.

Let:
TR = total revenues
TC = total costs
P = selling price per unit
F = fixed costs
V = variable cost per unit
Q = quantity of output

Then:
TR = P × Q
TC = F + (V × Q)
Profit = TR − TC

Because there is no profit ($0) at the break-even point, TR − TC = 0, and so (P × Q) − (F + V × Q) = 0. Solving for Q gives:

Q = F / (P − V)

This is typically known as the contribution margin model, as it defines the break-even quantity Q as the number of times the company must generate the unit contribution margin (P − V), or selling price minus variable cost, to cover the fixed costs. It is particularly interesting to note that the higher the fixed costs, the higher the break-even point. Thus, companies with large investments in equipment and/or high administrative costs may require greater sales to break even.

As an example, if fixed costs are $100, price per unit is $10, and variable cost per unit is $6, then the break-even quantity is $100 / ($10 − $6) = $100 / $4 = 25 units. When 25 units are produced and sold, each of these units will not only have covered its own marginal (variable) costs, but will also have contributed enough in total to cover all associated fixed costs. Beyond these 25 units, all fixed costs have been paid, and each unit contributes to profits by the excess of price over variable cost, or the contribution margin. If demand is estimated to be at least 25 units, then the company will not experience a loss. Profits will grow with each unit demanded above this 25-unit break-even level.

While it is useful to know the quantity of sales at which a product will cease to generate losses, it may be even more useful to know the quantity necessary to generate a desired level of profit (let D = desired level of profit).
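The contribution margin model is easy to sketch in code; the function below folds the desired profit D in as an optional argument (the function name and defaults are my own, not from the text):

```python
def break_even_quantity(fixed, price, variable, desired_profit=0.0):
    """Units needed so that revenue covers variable costs, fixed costs,
    and any desired profit: Q = (F + D) / (P - V)."""
    margin = price - variable  # unit contribution margin
    if margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return (fixed + desired_profit) / margin

# The worked example: F = $100, P = $10, V = $6.
print(break_even_quantity(fixed=100, price=10, variable=6))  # 25.0

# With a desired profit of $60, treated as an addition to fixed costs.
print(break_even_quantity(fixed=100, price=10, variable=6, desired_profit=60))  # 40.0
```

Each unit contributes a $4 margin, so $100 of fixed costs requires 25 units, and each further $4 of required profit adds one unit to the target.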

TR − TC = D
(P × Q) − (F + V × Q) = D
Q = (F + D) / (P − V)

Here, the desired profit is regarded as an increase in the fixed costs to be covered by sales of the product. As the decision-making process often requires profit figures for payback period, internal rate of return, or net present value analysis, this form may be more useful than the basic break-even model.

Break-even analysis is a technique widely used by production managers and management accountants. It is based on categorising production costs between those which are "variable" (costs that change when the production output changes) and those that are "fixed" (costs not directly related to the volume of production). Total variable and fixed costs are compared with sales revenue in order to determine the level of sales volume, sales value, or production at which the business makes neither a profit nor a loss (the "break-even point").

The Break-Even Chart

In its simplest form, the break-even chart is a graphical representation of costs at various levels of activity, shown on the same chart as the variation of income (or sales revenue) with the same variation in activity. The point at which neither profit nor loss is made is known as the "break-even point" and is represented on the chart below by the intersection of the two lines:

In the diagram above, the line OA represents the variation of income at varying levels of production activity ("output"). OB represents the total fixed costs in the business. As output increases, variable costs are incurred, meaning that total costs (fixed + variable) also increase. At low levels of output, costs are greater than income. At the point of intersection, P, costs are exactly equal to income, and hence neither profit nor loss is made.

Fixed Costs

Fixed costs are those business costs that are not directly related to the level of production or output. In other words, even if the business has a zero output or high output, the level of fixed costs will remain broadly the same. In the long term, fixed costs can alter, perhaps as a result of investment in production capacity (e.g., adding a new factory unit) or through the growth in overheads required to support a larger, more complex business. Examples of fixed costs:

- Rent and rates
- Depreciation
- Research and development
- Marketing costs (non-revenue related)
- Administration costs

Variable Costs

Variable costs are those costs which vary directly with the level of output. They represent payment for output-related inputs such as raw materials, direct labour and fuel, and for revenue-related costs such as commission. A distinction is often made between "direct" variable costs and "indirect" variable costs. Direct variable costs are those which can be directly attributed to the production of a particular product or service and allocated to a particular cost centre. Raw materials and the wages of those working on the production line are good examples. Indirect variable costs cannot be directly attributed to production, but they do vary with output. These include depreciation (where it is calculated in relation to output, e.g., machine hours), maintenance and certain labour costs.
Semi-Variable Costs

Whilst the distinction between fixed and variable costs is a convenient way of categorising business costs, in reality there are some costs which are fixed in nature but which increase when output reaches certain levels. These are largely related to the overall "scale" and/or complexity of the business. For example, when a business has relatively low levels of output or sales, it may not require costs associated with functions such as human resource management or a fully resourced finance department. However, as the scale of the business grows (e.g., output, number of people employed, number and complexity of transactions), more resources are required. If production rises suddenly, then some short-term increase in warehousing and/or transport may be required. In these circumstances, we say that part of the cost is variable and part fixed.
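One way to picture a semi-variable cost is as a step function: the variable cost per unit stays constant while the fixed overhead jumps when output crosses capacity thresholds. The thresholds and amounts below are illustrative assumptions, not figures from the text:

```python
def total_cost(quantity, variable_per_unit=6.0):
    """Total cost with stepped fixed overhead (illustrative thresholds)."""
    if quantity <= 1000:
        fixed = 100.0    # base overhead at low output
    elif quantity <= 5000:
        fixed = 160.0    # extra warehousing/transport at higher output
    else:
        fixed = 250.0    # further step for a larger, more complex operation
    return fixed + variable_per_unit * quantity

print(total_cost(500))   # 3100.0
print(total_cost(2000))  # 12160.0
```

Between thresholds the cost behaves like an ordinary fixed-plus-variable line; the "semi-variable" character shows up only at the jumps.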

2) Demand Forecasting and the Delphi Technique

What is a demand forecast?

Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase. Demand forecasting involves techniques including both informal methods, such as educated guesses, and quantitative methods, such as the use of historical sales data or current data from test markets. Demand forecasting may be used in making pricing decisions, in assessing future capacity requirements, or in making decisions on whether to enter a new market.

Methods that rely on qualitative assessment

Forecasting demand based on expert opinion. Some of the methods of this type are:

- Unaided judgment
- Prediction markets
- Delphi technique
- Game theory
- Judgmental bootstrapping
- Simulated interaction
- Intentions and expectations surveys
- Conjoint analysis

Delphi technique
The Delphi technique is a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem (Linstone and Turoff 1975:3). Furthermore, it is a method for the systematic solicitation and collation of judgments on a particular topic through a set of carefully designed sequential questionnaires, interspersed with summarized information and feedback of opinions derived from earlier responses (Delbecq et al. 1975:10). It is used most frequently to integrate the judgments of a group of experts. A key feature of this technique, however, is that the respondents do not meet, and their responses may be anonymous. We still consider it to be a dialogue method, however, because conversation between the parties occurs, even though it is not face-to-face.

Three separate groups of actors are generally involved:

1. Respondent group: those whose judgments are obtained through completing the process
2. Staff group: those who design the initial questions, summarise the responses and prepare the questions for subsequent phases
3. Decision-makers: those wishing to receive a product such as a consensus position from experts, or a recommendation (adapted from Delbecq et al. 1975).

Although some flexibility exists in implementation, the core method, as described by Delbecq et al. (1975:11), is as follows. First, the staff team, in collaboration with decision-makers, develops an initial questionnaire and distributes it to the respondent group. The respondents independently generate their ideas in answer to the first questionnaire and return it. The staff team then summarizes the responses to the first questionnaire and develops a feedback report along with the second set of questionnaires for the respondent group. Having received the feedback report, the respondents independently evaluate earlier responses.
Respondents are asked to independently vote on priority ideas included in the second questionnaire and mail their responses back to the staff team. The staff team then develops a final summary and feedback report to the respondent group and decision makers. Variations of this basic approach include:

- whether the respondent group is anonymous
- whether open-ended or structured questions are used to obtain information from the respondent group
- whether the responses are collected in written form or verbally (for example, over the phone)
- how many iterations of questionnaires and feedback reports are used
- what decision rules are used to aggregate the judgments of the respondent group.

The number of participants can range from a few to many hundreds. The larger the number of iterations employed, the closer to consensus will be the result. Written questionnaires can be in pencil-and-paper form or distributed and returned using electronic communication tools including email and the internet. Computer-based systems, using highly structured questionnaires, can produce real-time findings.

Examples of its use in research integration

1. The environment: developing an environmental plan for a university

What was the context for the integration, what was the integration aiming to achieve and who was intended to benefit?

Senior administrators at Dalhousie University in Halifax, Nova Scotia, Canada, were aware of a significant gap between the university's environmental policies and their implementation. As a result, they resolved to develop an implementation plan that would be acceptable to all those who would be responsible for making it work. Those responsible for developing the implementation plan used the Delphi technique to consult with key representatives of the university community in order to generate ideas about the most desirable and feasible ways in which to incorporate the new Environmental Policy into the activities and structure of the university. Modifying a Delphi study for policy research can be used to generate ideas and provide decision-makers with the strongest arguments for and against different resolutions to an issue (Wright 2006:763).

Who did the integration, how was it undertaken and what was integrated?

A panel of 28 individuals was selected, with equal numbers drawn from the identified key stakeholders: students, staff, faculty, and administrators. A core feature was that the Delphi study participants would be anonymous to one another, as the Delphi technique was implemented by email between the panellists and the project managers, rather than through face-to-face discussion. This was considered important as it gave equal weight to each panellist's judgments, avoiding problems that the power imbalances among the panellists (for example, between students and faculty) might otherwise create. No information was provided on what was integrated.

The Delphi questionnaires were distributed and responses received by email. Round one posed the open-ended question: After reading the Environmental Policy, what recommendations do you have to incorporate it into the activities and structure of Dalhousie University? A master list of 125 suggestions was developed from the responses. In round two, the participants were asked to review the master list from round one and rate each item for desirability and feasibility (separately) on a five-point Likert scale. The responses to the second round were analysed statistically for measures of central tendency and dispersion. The items were categorised as those that received consensus: a) for being desirable and feasible; b) for being desirable but not feasible; or c) rated as either not desirable or unsure. Each participant received a personalised questionnaire in round three, listing that person's ratings in round two, along with the group responses. They were asked to reconsider their ratings and make any changes. In round three, the majority of participants modified two to five of their round-two ratings.

What was the outcome of the integration?

The results of the Delphi technique study were used by the university managers as a key input to developing the Environmental Policy Implementation Plan. The features that made the Delphi technique useful were identified as anonymity, encouraging exploratory thought and developing innovative ideas, achieving consensus, serving as an educative tool about environmental issues and being a tool for empowerment (Wright 2006).

2. Public health: estimating the incidence of Salmonella poisoning

What was the context for the integration?

Despite food poisoning through food-borne Salmonella infection being an important public health problem in the United Kingdom in the mid-1990s, official statistics were not able to provide an accurate estimate of the incidence of infections.
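The round-two analysis in the Dalhousie example above (summarising five-point Likert ratings by central tendency and dispersion) can be sketched in a few lines; the item names and votes here are hypothetical:

```python
from statistics import median, pstdev

# Hypothetical round-two votes on a 1-5 Likert scale for two suggested items.
ratings = {
    "recycling stations": [5, 4, 5, 4, 5],
    "green roof retrofit": [2, 3, 1, 2, 3],
}

# Median as central tendency, population standard deviation as dispersion;
# low dispersion around a high median would suggest consensus on desirability.
summary = {item: (median(votes), round(pstdev(votes), 2))
           for item, votes in ratings.items()}
print(summary)  # {'recycling stations': (5, 0.49), 'green roof retrofit': (2, 0.75)}
```

In a real Delphi round, such summaries would be fed back to each panellist alongside their own ratings for reconsideration.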
It was agreed that the official data significantly underestimated the true incidence, but experts' views differed about the level of under-reporting.

What was the integration aiming to achieve and who was intended to benefit?

The Delphi technique was used by Henson (1997:197) to: a) reconcile differences in expert opinion and provide more reliable estimates of the incidence of food-borne Salmonella; and b) identify expert opinion about the effectiveness of the available measures for control of the infection. This dialogue method was chosen because it was, in the view of the person who implemented the study, a recognised technique for reconciling differences in group judgements where there is inherent uncertainty as to the actual state of the world. In this case, the group consists of experts on food-borne Salmonella in the UK. The aim is to generate data which may overcome acknowledged problems with published statistics (Henson 1997:196).

Who did the integration, how was it undertaken and what was integrated?

The Delphi study was initiated by conducting a workshop in which seven experts in food-borne Salmonella infection examined the issues to be covered in the survey. They did so using the nominal group technique, discussed below. The workshop identified the precise wording to be used in the Delphi study questions. Some 62 experts (their areas of expertise were not specified) in food-borne Salmonella infection, identified by workshop participants, were then invited to be part of the Delphi study, and 42 of them agreed to do so. Five Delphi rounds were conducted during a seven-month period, with three exploring the experts' judgments of the incidence of infection and all of them exploring the effectiveness of control measures. This was done by means of questionnaires, but further details were not given. The first question was: What would you estimate to be the total number of persons ill due to infection with non-typhoid Salmonella in the UK from all sources (food and non-food), over the course of one year? The second asked what proportion of infections participants thought was food-borne, and the third invited them to identify the proportion of cases by type of food. For each question, they were asked to advise how they produced their estimates and any difficulties they encountered in doing so. The results of the first and second rounds were fed back to participants, showing them the median, minimum and maximum responses from the whole panel and inviting them to revise their estimates of incidence. In round one, participants were also asked to list the control strategies available for reducing the incidence of food-borne Salmonella infection. In round two, they were asked to refine the list, and in round three the refined list was presented along with the question: Taking each control strategy in turn, consider how effective it would be at reducing the total incidence of food-borne non-typhi Salmonella in the UK?
This question was repeated in the fourth and fifth rounds, with the findings of the previous round fed back to participants.

What was the outcome of the integration?

An important outcome of the process was the narrowing of the range of estimates for the incidence of infection, as participants reflected on the median and range of responses to the incidence questions. Regarding the effectiveness of control measures, one approach (food irradiation) was identified by the panel as being particularly effective. Considerable disagreement remained, however, about which other measures were effective, even after three rounds considering this question. The author concludes that this is not really problematic, as the Delphi study provides 'a good summary measure of expert opinion in an area which is characterised by great uncertainty and the spread of responses provides a good indication of the range within which we can expect to find the actual state of the world' (Henson 1997:203).

3. Security: developing a new medical school curriculum addressing bio-terrorism

What was the context for the integration?

Since the 11 September 2001 attacks on the New York World Trade Centre and the Pentagon in the United States, responding to the medical sequelae of bio-terrorism and biological warfare incidents is no longer considered solely the province of emergency medicine specialists. Rather, it is seen as something that all healthcare providers need to be prepared to handle.

What was the integration aiming to achieve and who was intended to benefit?

Medical educators in the United States set out to develop new medical school curriculum guidelines relating to bio-terrorism so as to equip the next generation of medical graduates to respond to this threat. They used an internet-based Delphi survey to identify the educational objectives to be covered by the curriculum guidelines (Coico et al. 2004). The Delphi technique was chosen for this purpose because, in the view of those who wished to develop the new curriculum guidelines, it can provide 'a relatively rapid means of gaining a consensus on complex issues' (Coico et al. 2004:367).

What was being integrated?

This consensus came through the integration of the judgments of a group of experts in microbiology and immunology who were engaged in medical education in US universities. Some 89 per cent of panellists had PhD degrees, 7 per cent were physicians and 77 per cent were involved in medical curriculum development. Two-thirds rated their expertise concerning bio-terrorism, biological warfare and bio-defence as strong or moderate.

Who did the integration and how was it undertaken?

A total of 237 people were invited, by email, to join the Delphi panel and 64 (25 per cent) participated in one or more rounds. The Delphi process comprised three internet-based rounds using a 'dynamic Web-based questionnaire' (Coico et al. 2004:367). The responses were captured from the web server onto spreadsheets.
Before the first round, participants provided demographic information, including a self-assessment of their expertise in bio-terrorism. Previous workshop discussions had produced a list of six content-related curriculum categories for bio-terrorism teaching and learning: 'general issues, bio-defence, public health, infection control, infectious diseases and weaponizable toxins' (Coico et al. 2004:368). These were put to the participants in the first Delphi round, and they were asked to add knowledge, skills and attitude objectives to the list of educational objectives. They were also asked for suggestions about any content areas that seemed irrelevant to the project. In round two, the responses to round one were fed back to participants and they were asked to assess, for three identified levels of medical training, the relative importance of each objective. The results of round two were fed back to the panel in round three in the form of 'percentage endorsed' figures, and they were asked to identify

their top five curriculum objectives in each category. They were also asked to rate the usefulness of nine different methods of teaching/learning and assessment of bio-terrorism and bio-defence topics. The products of round two were also passed to an independent expert committee to obtain their views. This separate, independent committee had members who were experts from other professions and disciplines concerned with the issues being addressed by the panel. Its function was to receive the panel's findings and consider their implications.

What was the outcome of the integration?

Although the authors of the paper reporting on this project stated that they would have benefited from a higher participation rate, they felt that the Delphi technique provided 'an opportunity to explore bioterrorism-related curriculum issues in depth' (Coico et al. 2004:372). The outcome was the inclusion, in the US Medical Licensing Examination, of approximately one-third of the educational objectives identified through the Delphi study.

4. Technological innovation: developing professional association policies and practices for shifting from paper to electronic communications

What was the context for the integration?

The Institute of Electrical and Electronics Engineers (IEEE) describes itself as 'the world's leading professional association for the advancement of technology', and the largest, with more than 365 000 members in more than 150 nations (<>). In the late 1990s, it identified the need to establish policies and procedures governing its transition from hard copy to electronic communication and dissemination of information within the institute and beyond it. Indeed, in 1996, it adopted the slogan 'IEEE: networking the world'.

What was the integration aiming to achieve and who was intended to benefit?

The institute used the Delphi method to assess the benefits of and obstacles to its transition to electronic communications.
It saw this method as

a technique that is considered appropriate when the research purpose is to glean and synthesize expert opinion about complex issues and to identify recommendations for addressing them. The technique is frequently used in exploratory research and in efforts aimed at technological forecasting, including technological trajectories and the impacts of technological change… Use of the method in this research project has allowed the researchers an opportunity to pool a wide range of expert opinion in order to arrive at a series of focused predictions that may guide the IEEE's approach during this significant transition period. (Herkert and Nielsen 1998:80)

Who did the integration, how was it undertaken and what was integrated?

A pool of institute members (the exact number was not reported) was identified by the project managers and invited to participate in the study. They came from five areas: institute leadership and staff, institute technical activities representatives, institute regional activities representatives, customers and informed others. Forty agreed to participate in the study and 30 provided demographic information and responded to round one. (It is not clear whether the Delphi questionnaires were distributed and returned electronically or in pen-and-paper format.)

In round one, participants were asked to assess:

1. the potential contribution of electronic communication and information dissemination in fulfilling the institute's strategic planning goals and objectives that did not rely explicitly on the use of electronic media
2. the impact of electronic communication and information dissemination with respect to the five strategic planning goals and objectives that relied explicitly on the use of electronic media.

They were also invited to provide open-ended commentary on the benefits of and obstacles to the use of electronic media (Herkert and Nielsen 1998:82–4). A round one example question was:

Products and Services Objective: Make all IEEE information products and databases of value to members available in electronic form as quickly as possible.
I agree; making products and databases available in electronic form as quickly as possible is a valuable objective.
I disagree; making products and databases available in electronic form as quickly as possible is not a valuable objective.
Discuss your answer in the space provided below.

In round two, the panellists were given a synthesis of the obstacles to the institute's increasing reliance on electronic communication derived from round one and were asked to identify the 10 obstacles that they each considered most problematic.
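The step of reducing each panellist's top-10 list to a shared set of major obstacles is, in effect, a vote tally. A minimal sketch of that aggregation follows; the obstacle labels are invented for illustration, not taken from the IEEE study, and a simple frequency cut like this is only one plausible way the 11 major obstacles could have been derived.

```python
from collections import Counter

def rank_obstacles(selections):
    """Tally each panellist's list of most-problematic obstacles and
    return the obstacles ordered from most to least frequently chosen."""
    tally = Counter()
    for chosen in selections:
        tally.update(chosen)
    return [obstacle for obstacle, _ in tally.most_common()]

# Hypothetical selections from three panellists.
panel = [
    ["copyright", "archiving", "access inequity"],
    ["copyright", "archiving", "cost"],
    ["copyright", "cost", "training"],
]
print(rank_obstacles(panel)[0])  # → copyright (chosen by all three)
```

The ranked list would then be truncated to the most frequently chosen items before being fed back to the panel in the next round.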
In round three, they were provided with a list of the 11 major obstacles identified in round two and asked what actions the institute should take to benefit maximally from electronic communications while avoiding its potential pitfalls. Content analysis was conducted on the responses to round three to identify the path forward.

What was the outcome of the integration?

This application of the Delphi technique resulted in the IEEE identifying six key factors affecting the adoption and use of electronic media:

1. characteristics of the IEEE as technology initiator;
2. characteristics of the potential individual adopter;
3. characteristics of the potential organizational adopter;
4. characteristics of the technology;
5. outcomes; and
6. characteristics of the contextual environment. (Herkert and Nielsen 1998:95–6)

This finding, combined with a content analysis of the panellists' qualitative responses, enabled the investigators to develop a range of recommendations for consideration by the executives of the institute to guide it in embracing electronic communication methods.