Policy Studies Review, Summer 1988, Vol. 7, No. 4, pp. 720-737

METHODS OF THE SECOND TYPE: COPING WITH THE WILDERNESS OF CONVENTIONAL POLICY ANALYSIS

William N. Dunn

INTRODUCTION

In the past twenty years we have made considerable progress in developing new and more appropriate theories, models, and methods for public policy analysis. Policy analysis has moved well beyond descriptive theories of the policy-making process into realms of normative decision theory such as cost-benefit analysis, input-output analysis, linear programming, the Delphi technique, and a host of additional methodologies that help fashion potential solutions to public policy problems. Indeed, it is no exaggeration to say that policy analysis is among the most powerful problem-solving methodologies available to the applied political scientist.

Despite these notable achievements, the field of public policy analysis has not acknowledged and forthrightly faced a major methodological problem: Methods employed by most policy analysts are not appropriate for structuring policy problems as a prelude to their possible solution. This methodological deficit is often fatal, judging by the large number of systematic studies, case materials, and records of experience showing that policy analysts fail more often because they formulate the wrong problem than because they choose the wrong solution (see, e.g., Ackoff, 1974b; Fischhoff, 1977, 1986; Hogwood & Peters, 1985; Mitroff, Mason, & Barabba, 1983; Sieber, 1981). To paraphrase John Tukey, we sorely lack methods that yield "an approximate answer to the right question, which is often vague, [rather] than an exact answer to the wrong question, which can always be made precise" (quoted in Rose, 1977, p. 23).

This methodological deficit creates enormous difficulties for practicing policy analysts. In the absence of appropriate problem-structuring methods, how can we expect to formulate problems that encompass the proper elements, for example, the proper policy objectives, alternatives, and expected outcomes? Given a particular problem formulation, how do we know whether all important elements have been included in the set? In short, how do we know when we have formulated an approximate solution to the right problem, as distinguished from an exact solution to the wrong problem?

In addressing these questions, policy analysts face a situation similar to that of the homesteaders described by Kline (1980). The homesteaders, while clearing their land, are aware that enemies lurk in the wilderness that lies beyond the clearing. To increase their security, the homesteaders clear a larger and larger area, but never feel completely safe. They frequently must decide whether to clear more land or attend to their crops and domesticated animals within the perimeter. The homesteaders do their best to push back the wilderness, knowing full well that the wilderness is always there, and that one day enemies may surprise and destroy them. The homesteaders also are aware that there are enemies within, enemies that may undermine and distort their judgment. They hope they will not choose to tend the crops and livestock when they should have chosen to clear more land.

Author's note: Earlier versions of this paper benefited from comments and criticisms offered by Ralph Banks, Eugene Bazan, Ronald Brunner, Anthony Cahill, Bahman Fozouni, Burkhart Holzner, Stephen Linder, Ian Mitroff, Guy Peters, Jerome Ravetz, and three anonymous reviewers.
THE STRUCTURE OF POLICY PROBLEMS

Policy analysts, like homesteaders, require methods that enable them to know when they have cleared enough land, that is, to know when they have approximated the proper boundaries of a problem. The bulk of policy-analytic methods available today, however, assume that the boundaries of policy problems are well defined. Writing more than twenty years ago, Zeckhauser and Shaefer (1968, p. 29) provided a view of policy problems that accurately describes much current analytic thinking. Here, the view is that problems come in relatively well-bounded packages involving an explicit statement of a decision maker's preferences, a careful exposition of the alternative actions available, and a model that relates these alternatives to the stated preferences in a manner that permits an efficient choice among the alternatives (see also Stokey & Zeckhauser, 1978; Nagel, 1982, pp. 4-5).

This view of policy problems assumes that the boundaries that circumscribe preferences, alternatives, and their relationships are relatively well defined. In effect, the analyst is faced with what Mitroff (1974; also see Mitroff & Blankenship, 1973) defines as a structured decision problem, where the relationships between decision makers (Di), preferences or utilities (Uij), alternatives (Aj), outcomes (Oj), and states of nature (Sj) are certain, probabilistic, or uncertain. Structured problems "are problems about which enough is known so that problems can be formulated in ways that are susceptible to precise analytic methods of attack" (Mitroff, 1974, p. 224).

Structured decision problems are properly contrasted with problems that are ill-structured (Simon, 1973), squishy (Strauch, 1976), or messy (Ackoff, 1974b). Formally, an ill-structured problem is one for which decision makers (Di), preferences or utilities (Uij), alternatives (Aj), outcomes (Oj), or states of nature (Sj) are unknown or equivocal. In practice, ill-structured problems (cf. Harmon & King, 1985, p. 28) have a number of common properties:

*Policy Goals. The goals of policy are ambiguous or unknown, so that determining what goals to achieve is part of the problem. "Our problem is not to do what is right," stated Lyndon Johnson. "Our problem is to know what is right" (quoted by Wood, 1968, p. v).

*Policy Phases. The phases through which policy goals are to be achieved are indeterminate. Since linkages among phases involve feedback and feed-forward loops that may occur at any time, the pattern of phases is more like a tangled river network (Beer, 1981, p. 30) than an assembly line, tree, or cycle (cf. Jones, 1977). There is no assurance that success at one phase will lead to success at another, for example, that adopting an optimal policy alternative will lead to its successful implementation.

*Policy Instruments. The policy instruments required to achieve goals are ambiguous or unknown. Knowledge about what policy instruments work best under which conditions is often rudimentary or simply unavailable (Linder & Peters, 1985, 1987). Even when analysts use advanced decision technologies (e.g., computer-designed event and fault trees), they frequently overlook instruments that are critical to the success of policies. For example, nuclear near-disasters such as the Browns Ferry fire, which "was started by a technician checking for an air leak with a candle, in direct violation of standard operating procedures" (Fischhoff, 1977, p. 181).

*Policy Problem Domain.
The domain of potentially relevant goals, phases, and instruments is unbounded. No exhaustive or even approximate set of goals, phases, and instruments is available to the policy analyst. The problem domain thus appears to be unmanageably huge, with policy analysts engaged in what Dery (1984, p. 6) represents as "a never-ending discourse with reality."

Characterized in this fashion, ill-structured problems are not uncommon; they are pervasive (Simon, 1973, p. 186). The pervasiveness of ill-structured problems is a consequence of the fact that conflicting representations of problems are continuously created, maintained, and changed by stakeholders who affect and are affected by the policy-making processes of modern governments.

If one or a few policy stakeholders were harmoniously engaged in the formulation of problems, then policy analysts would spend most of their time solving relatively well-structured problems. The typical case, however, is one in which practicing analysts expend large amounts of time and energy investigating the conflicting problem definitions of large numbers of policy stakeholders. Such is the case with many health and educational policies, which Sieber (1981) characterizes as "fatal remedies" arising as a consequence of faulty problem definitions. The "policy pathologies" documented by Hogwood and Peters (1985) also involve ill-structured problems that inadvertently were defined as well-structured problems, thus promoting apparently correct solutions to the wrong problems. Relatedly, in legislative policy analysis, it is often the case that policy analysts must address not only the problem definitions of legislators, but also those of legislative staff, executive agency personnel, and representatives of a multitude of public interest groups. Seasoned analysts know well that the process of formulating problems occurs throughout the policy-making process; it involves legislators and other immediate clients of policy analysis as well as street-level bureaucrats (Lipsky, 1971) and ordinary citizens situated at the "periphery" of the policy-making process (see Sabatier & Mazmanian, 1983, pp. 149-151). Thus, the process of formulating problems is not confined, temporally or spatially, to those phases of policy making conventionally labeled problem formulation or agenda setting. Competing problem formulations are distributed throughout the policy-making process, creating a plethora of ill-structured problems.

Stakeholders situated at various locations in the policy-making process actively construct, on the basis of their own experiences, different representations of problems. In effect, stakeholders create ill-structured problems by bringing to the policy-making process competing sets of assumptions about those external events that John Dewey liked to call a "problem situation." These sets of assumptions, variously described as conceptual models (Allison, 1971), systems of interpretation (Heclo, 1976), cognitive maps (Axelrod, 1976), schemata (Taylor & Crocker, 1980), frames of reference (Holzner & Marx, 1979), and construction systems (Dunn, Cahill, Dukes, & Ginsberg, 1986), vary among policy stakeholders, who construct, in the service of their own interests and survival, competing representations of the same problem situation. Yet critical elements of a problem situation may lie outside the boundaries of an individual's construction system; what is unrecognized and unknown cannot be understood or anticipated.
Inadequately trained technicians at nuclear power facilities may endanger millions of citizens by searching for air leaks with candles (Fischhoff, 1977), warnings placed on cigarette packages may preclude future opportunities to deal with public health problems (Sieber, 1981), and the institutionalization of pretrial release actually may increase the jail population (Nagel & Neef, 1976).

A major challenge for policy analysis is to deal with the complexity arising from the mutual construction of policy problems. Policy analysts working in real-life policy contexts, as distinguished from analysts working in the rarefied atmosphere of academic publishing, typically are not faced with a single, well-formulated problem. Rather, they are faced with many problems, which are distributed throughout the policy-making process, owned by diverse stakeholders whose perspectives and actions are interdependent, and continuously altered as a consequence of incremental policy learning. Under these circumstances the practicing policy analyst appears as a modern Diogenes engaged in a never-ending search for the right problem (Dery, 1984, pp. 6-7; see also Wildavsky, 1979).

Thus, practicing analysts normally face a large, tangled network of competing problem formulations that are socially constructed, distributed, and dynamic. In these circumstances, practicing analysts are faced with a metaproblem (Dror, 1971), a problem-of-problems that is ill-structured because the domain of relevant policy goals, phases, and instruments is unbounded, that is, unmanageably huge. Here, the central task of policy analysis is to structure a problem-of-problems, a second-order entity that may be defined as the class of all first-order problems, which are its members. Unless these two levels of problems are clearly distinguished, analysts run the risk of formulating the wrong problem by confusing member and class, thus ignoring the rule that "whatever involves all of a collection must not be one of the collection" (Whitehead & Russell, 1910, p. 37; also see Watzlawick, Weakland, & Fisch, 1974, p. 6).

THE PRINCIPLE OF METHODOLOGICAL CONGRUENCE

The distinction between member and class, between second-order and first-order problems, provides a basis for assessing the appropriateness of different methods now available to the practicing policy analyst. Methods of policy analysis can be assessed according to what might be called the principle of methodological congruence: The appropriateness of a particular type of method is a function of its congruence with the type of problem under investigation. The principle of methodological congruence is similar to a more general principle which, originally stated by L.A. Zadeh, asserts that conventional scientific methodologies are incompatible with social problems that have exceeded a given threshold of complexity (see Brewer & de Leon, 1983, p. 125).

The bulk of methods available to practicing policy analysts are methods of the first type, that is, methods appropriate to relatively well-structured problems where relationships between decision makers (Di), utilities (Uij), alternatives (Aj), outcomes (Oj), and states of nature (Sj) are certain or probabilistic.
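The well-structured case can be made concrete with a minimal sketch, not part of the original apparatus. Assuming purely illustrative names and numbers (none drawn from a real analysis), a first-order problem in which Di, Uij, Aj, Oj, and Sj are fully specified in advance reduces to an expected-utility calculation:

```python
# Minimal sketch of a well-structured (first-order) decision problem.
# All names and numbers are illustrative assumptions: the alternatives
# (Aj), states of nature (Sj) with known probabilities, and utilities
# (Uij) are all given in advance.

# States of nature (Sj) and their probabilities.
states = {"economy_grows": 0.6, "economy_stalls": 0.4}

# Utilities (Uij) of each alternative (Aj) under each state,
# reflecting the decision maker's (Di) stated preferences.
utilities = {
    "build_highway":  {"economy_grows": 90, "economy_stalls": 20},
    "expand_transit": {"economy_grows": 60, "economy_stalls": 55},
}

def expected_utility(alternative: str) -> float:
    """Probability-weighted utility of one alternative."""
    return sum(p * utilities[alternative][s] for s, p in states.items())

# With the problem bounded in advance, choice is a simple maximization.
best = max(utilities, key=expected_utility)
for a in utilities:
    print(f"{a}: EU = {expected_utility(a):.1f}")
print("choice:", best)
```

Everything second-order, namely which alternatives, states, and utilities belong in these tables at all, is precisely what such a calculation must take as given; supplying those elements is the problem-structuring task.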
These first-order methods, well-known to users of standard policy analysis textbooks, include a range of methodologies for policy modeling: input-output analysis, linear and nonlinear programming, macroeconomic and microeconomic analysis, land-use analysis, game theory, and system dynamics (see, e.g., Greenberger, Crenson, & Crissey, 1976; White, 1983). Yet the principle of methodological congruence asserts that these first-order methods are inappropriate and therefore should not be used to address those second-order problems that have been characterized as squishy, messy, or ill-structured. Cost-benefit analysis, for example, is an appropriate and useful first-order method that enables calculations of net benefits or benefit-cost ratios for a set of well-bounded alternatives. But cost-benefit analysis does not and cannot generate the policy goals, phases, and instruments that should be included in the set. The generation of goals, phases, and instruments is a second-order task that cannot be accomplished with methods of the first type.

The principle of methodological congruence is generally ignored by those who continue to approach relatively ill-structured problems with methods suitable for relatively well-structured problems. Although many observers contend that the complexity of problems requires multiple methods (e.g., Brewer & de Leon, 1983, pp. 137-138; Quade, 1985), these observers do not clearly distinguish between first-order and second-order methods; nor do they acknowledge that first-order methods, whether used singly or in multiples, are inappropriate for most second-order problems most of the time. When second-order methods are presented for consideration, it is more than a little doubtful whether the resultant methodological advice represents much more than a set of general recipes, for example, "the analyst should prepare a menu of policy alternatives that cover a range of appropriate, possible, and feasible solutions to the problem" (Brewer & de Leon, 1983, p. 65). Such recipes, as Linder and Peters (1985, p. 240) correctly observe, typically represent "advice on what not to do and what pitfalls to avoid in applying one's intuitions ... an interactive and creative process with few rules and guidelines ..."

The principle of methodological congruence, in addition to specifying the conditions under which distinct levels of methods are appropriate, helps remove the ambiguities evident in discussions of Type III errors in policy analysis. In contrast to Type I and Type II statistical errors, Type III errors are typically defined in somewhat vague terms as conceptualizing, formalizing, or solving the "wrong" or "less appropriate" problems (see, e.g., Dunn, 1981, pp. 109-110; Mitroff & Featheringham, 1976; Raiffa, 1968, p. 264). By drawing on the distinction between member and class we may formulate a more concrete definition of Type III errors in policy analysis: solving the wrong problem by employing a method whose level is incongruent with that of the problem under investigation.

The practical consequences of methodological congruence and incongruence are displayed in Figure 1, which supplies several much-needed distinctions among key methodological aspects of the process of policy analysis.

[Figure 1 appears here: a schematic distinguishing problem sensing, problem structuring, and problem solving, together with the error-correcting processes of problem resolving, problem unsolving, and problem dissolving.]

*Problem Sensing and Problem Structuring.
The process of policy analysis does not begin with clearly articulated problems, but with a sense of diffuse worries and inchoate signs of stress (Rein & White, 1977, p. 262). These diffuse worries and inchoate signs of stress are not problems, but problem situations. Policy problems "are products of thought acting on environments; they are elements of problem situations that are abstracted from these situations by analysis. What we experience, therefore, are problem situations, not problems, which, like atoms or cells, are conceptual constructs" (Ackoff, 1974b, p. 21).

*Problem Structuring and Problem Solving. Policy analysis is a multilevel process that includes first-order methods of problem solving as well as second-order methods of problem structuring. This multilevel process is what others (e.g., Linder & Peters, 1985; Dryzek, 1983; Miller, 1985) describe as policy design or design science. Methods of problem structuring are meta-methods, in the sense that they are "about" and "come before" methods of problem solving. The principle of methodological congruence dictates that first-order methods of problem solving (e.g., cost-benefit analysis) are rarely if ever appropriate for structuring second-order problems that are ill-structured. When practitioners apply first-order methods to second-order problems they typically make "errors of a third kind: solving the wrong problem" (Raiffa, 1968, p. 264).

*Problem Resolving, Problem Unsolving, and Problem Dissolving. The terms problem resolving, problem unsolving, and problem dissolving refer to three kinds of error-correcting processes in policy analysis (see Simon, 1973; Ackoff, 1974a; Dunn, 1981, p. 99). Although the terms "resolving," "unsolving," and "dissolving" stem from the same root (L. solvere = to solve or dissolve), these error-correcting processes occur at distinct levels (see Figure 1). Problem resolving involves the reanalysis of a correctly structured problem in order to reduce calibrational errors: for example, rejecting the null hypothesis when it is true (Type I error) or accepting the null hypothesis when it is false (Type II error). Problem unsolving, by contrast, involves the abandonment of a solution based on the wrong problem and a return to problem structuring as a means to formulate the right problem. Finally, problem dissolving involves the abandonment of an incorrectly formulated problem prior to any effort to solve it.

In summary, policy analysis is a dynamic, multilevel process in which different methods perform distinctly different functions. Because methods appropriate at one level are inappropriate at the next level, questions of appropriateness cannot be satisfactorily resolved without first considering the level of the problem to which a method is applied. When policy analysts violate the principle of methodological congruence they are likely to solve the wrong problem, thus committing a Type III error.

METHODS OF THE SECOND TYPE

Methods of the second type, although generally available to the policy analysis community, are seldom included as an integral part of training in policy analysis. Textbooks, manuals, and handbooks of policy analysis rarely cover a range of second-order methods that have been expressly designed for structuring those second-order problems variously described as squishy, messy, or ill-structured.
For example, authoritative sources such as the Encyclopedia of Policy Studies (Nagel, 1983) contain almost no references to methods of the second type such as brainstorming (Osborn, 1948), synectics (Gordon, 1961), policy capturing (Hammond, 1980), the analytic hierarchy process (Saaty, 1980), interpretive structural modeling (Warfield, 1976), multiple perspective analysis (Linstone et al., 1981), and strategic assumption surfacing and testing (Mason & Mitroff, 1981). Apart from one reference to the analytic hierarchy process in a chapter on policy analysis and management science (White, 1983), the majority of references to these methods appear in syntheses of literature in areas of technology assessment (Porter & Rossini, 1983) and ethics (Dunn, 1983), areas that typically are regarded as peripheral to mainstream policy analysis.

Among the many hypotheses that might be advanced to explain the lack of attention to methods of the second type, several appear most plausible. First, it bears notice that those who write most about methods of policy analysis have far fewer opportunities to work in complex, real-life contexts than practitioners who are the presumed beneficiaries of these methods. Perhaps in response to the rarefied incentive systems of academic tenure review and professional publishing, those who write about methods of policy analysis simply may be unaware that practicing policy analysts spend most of their time structuring policy problems, not solving them. Consequently, there is a considerable time lag between the emergence of practical needs and efforts to satisfy them through the development of appropriate new methodologies.

A second plausible hypothesis has to do with the vague or ambiguous character of many methods of the second type. Although there is some recognition that competing perspectives of a problem may assist in the discovery of otherwise hidden policy goals, phases, and alternatives, the available guidelines for conducting brainstorming sessions (Osborn, 1948), continuous decision seminars (Lasswell, 1960), or multiple perspective analyses (Linstone et al., 1981) represent general heuristics that cannot be easily replicated or evaluated by two or more analysts. Even when methods of the second type are well specified and replicable, as is the case with policy capturing (Adelman, Stewart, & Hammond, 1975), the analytic hierarchy process (Saaty, 1980), and strategic assumption surfacing and testing (Mason & Mitroff, 1981), standards for evaluating the extent to which an analyst has performed well in structuring a problem are ambiguous or simply unavailable. As Mitroff and Mason (1981, pp. 73-86) acknowledge, there is no test that guarantees the completeness of a set of problem representations. Saaty (1980, p. 14) makes much the same point when he acknowledges that "there is no set procedure for generating objectives, criteria, and activities to be included in a hierarchy or even a more general system."

The ambiguity or outright absence of performance criteria is closely related to problems of interpreting the results of laboratory and field research on the efficacy of methods of the second type.
Understandably, research on the performance of these methods in producing improved problem representations has yielded equivocal or conflicting findings (see, e.g., Cosier, 1978, 1981; Cosier, Ruble, & Aplin, 1978; Mason, 1969; Mitroff, Barabba, & Kilmann, 1977; Mitroff & Mason, 1981; Schwenk & Cosier, 1980).

The further development and application of methods of the second type presuppose the identification of explicit criteria for assessing the performance of analysts in structuring problems. A first step in this process is to recognize that all methods of the second type are heuristics that aim at the discovery of the elements that define a problem, their general relationships, or both. Apart from this broad aim, methods of the second type perform functions that are constitutive and regulative. Methods designed to discover the elements that define a problem are constitutive, since they answer the question: What elements constitute the problem? Regulative methods, by contrast, aim at the discovery of patterned relations among these elements. Here the question is: How are the elements that define the problem regulated? The distinction between constitutive and regulative methods parallels the conclusion that problem definition in policy analysis is based on what philosopher John Searle calls constitutive rules, not regulative ones (Dery, 1984, p. 5). The distinction also underscores the fact that the process of structuring policy problems requires that we test our explanation or comprehension of the patterns believed to regulate a problem (regulative rules) as well as our definition of its constituent elements (constitutive rules).

Methods of the second type also differ in terms of their replicability. Methods with low replicability involve general and vague guidelines, while methods with high replicability involve specific and readily comprehensible prescriptions for carrying out a defined sequence of operations. As instruments of discovery, such highly replicable methods have been characterized by Landa (1984, p. 39) as algorithmic heuristics and contrasted with nonalgorithmic heuristics that are based on imprecise and vague methodological prescriptions; for example, the prescription to examine multiple perspectives of a problem in order to discover its true nature and potential solutions.

Methods of the second type may be sorted into four distinct categories (see Figure 2) formed by the intersection of types of purpose that are constitutive or regulative, and degrees of replicability that are high or low.

FUNCTIONS OF METHODS OF THE SECOND TYPE

  REPLICABILITY              PURPOSE OF METHOD
  OF METHOD       Constitutive                 Regulative

  Low             Element Enumeration          Pattern Enumeration
                  (Multiple Perspective        (Interpretive Structural
                  Analysis)                    Modeling)

  High            Element Estimation           Pattern Estimation
                  (Policy Grid Analysis)       (Strategic Assumption
                                               Surfacing and Testing)

Figure 2.

In one category are methods the primary purpose of which is to enumerate elements believed to constitute the boundaries of a problem. Although they represent a powerful medium for generating potential policy goals and alternatives, these methods are sufficiently ambiguous, imprecise, and general that they cannot be easily replicated by two or more analysts.
Methods of the second type such as brainstorming (Osborn, 1948), continuous decision seminars (Lasswell, 1960), and multiple perspective analysis (Linstone et al., 1981) appear to belong in this category.

In a second category are methods the main purpose of which is to enumerate patterns that regulate elements within the boundaries of a previously defined problem space. Although they represent a useful medium for representing a variety of patterned relations among policy goals and alternatives--for example, patterns represented by physical and biological metaphors and analogies such as trees, rivers, waterfalls, or epidemics--these methods are again so imprecise, ambiguous, and general that they cannot be replicated with the confidence that two or more analysts will reach the same conclusions. Synectics (Gordon, 1961) and other forms of analogical reasoning (see, e.g., Rein, 1976; Schon, 1983) appear to belong in this category of methods of the second type.

A third category of methods of the second type includes procedures the main purpose of which is to estimate elements that constitute the boundaries of a problem. These methods are highly replicable by virtue of their relative specificity, precision, and comprehensibility. Presently, there are few methods in this category, which means that the process of defining policy problems still resembles what Dery (1984, pp. 6-7) appropriately characterizes as "a never-ending discourse with reality, to discover yet more facets, more dimensions of action, more opportunities for improvement [i.e., problems]." Although the absence of boundary-approximating rules or tests frequently makes it impossible to know whether we have defined the right problem (see Saaty, 1980, p. 14; Mitroff & Mason, 1981, pp. 73-86), methods such as policy grid analysis (Dunn, Cahill, Dukes, & Ginsberg, 1986; Dunn & Ginsberg, 1986; also see Kelly, 1955) appear to provide replicable estimation procedures.

In the fourth category are methods of the second type that aim primarily at estimating patterns believed to regulate relationships between previously defined elements, for example, patterns of conflict and cohesiveness, distance and proximity, or consistency and inconsistency among policy actors, policy goals, policy alternatives, and policy outcomes. In this category are highly replicable methods that include policy capturing (Adelman, Stewart, & Hammond, 1975), interpretive structural modeling (Warfield, 1976), Q-methodology (Brown, 1980), the analytic hierarchy process (Saaty, 1980), and strategic assumption surfacing and testing (Mason & Mitroff, 1981). Singly and in combination, these methods are a powerful medium for creating spatial, geometric, and quantitative representations of the structure of problems. In addition to their replicability, these methods provide tests for estimating the ecological validity (Adelman, Stewart, & Hammond, 1975), consistency (Saaty, 1980), and plausibility (Mason & Mitroff, 1981) of patterns believed to represent the structure of policy problems. Nevertheless, these methods do not permit estimates of the boundaries of a problem.

PERFORMANCE CRITERIA

Our confidence in methods of the second type would be enhanced considerably if explicit criteria were available to assess the performance of policy analysts in structuring ill-structured problems.
In this context, Rescher (1980) has formulated an integrated set of criteria for assessing the plausibility of inductive estimates made under conditions where deterministic and probabilistic conclusions are not possible. These criteria are suitable for assessing the performance of policy analysts in using methods of the second type.

Methods of the second type, as we saw above, are appropriate under those numerous real-life conditions where the analyst's problem is relatively ill-structured, that is, where the analyst does not know and therefore must discover the appropriate decision makers (Di), utilities (Uij), alternatives (Aj), outcomes (Oj), and states of nature (Sj). The function of methods of the second type is to produce plausible estimates of the elements that constitute the problem as well as the patterns regulating these elements (see Figure 2). The plausibility of these estimates--as opposed to their statistical probability or deductive certainty--depends on the extent to which they satisfy the requirements of inductive estimates in general (see Rescher, 1980, pp. 24-26):

*Character. An estimate (P*) of a problem (P) must have the same character as its estimanda. For example, an estimate of a length must be a length, not a temperature. In estimating the elements that constitute a problem we typically must estimate elements that are subjective in character, since policy problems are products of thought acting on environments. In turn, estimates of patterns that regulate relationships among elements typically must be systemic, since policy problems are second-order entities composed of the diverse problem formulations of many interdependent stakeholders. In short, any estimate P* of a problem P must have the character of a representation that is subjectively meaningful to a complex system of policy stakeholders.

*Replicability. The process of producing an estimate (P*) of a problem (P) must be replicable, thus maximizing the likelihood that two or more analysts can obtain similar results in similar circumstances. Without replicable methods policy analysts are little more than intelligent clinicians who must rely on their own wits and whatever anecdotal knowledge they have acquired through trial and error in the field (Fischhoff, 1986, p. 112). The replicability of methods of the second type is a function of their specificity, precision, and comprehensibility to analysts faced with ill-structured problems.

*Coordination. An estimate (P*) of a problem (P) must coordinate with the elements of that problem and their relationships. The closer the coordination, the more accurate the estimate. Estimates based on observations of the ways that policy stakeholders actually construe a problem should coordinate more closely with it than estimates based on the specialized constructs of analysts, for example, constructs such as "rationality," "utility," or "revealed preference."

*Cost-Effectiveness. The cost of producing an estimate (P*) of a problem (P) must be reasonable. A method that yields inaccurate estimates at a low cost may be far less cost-effective than a method that yields more accurate estimates at a higher cost, particularly in cases where an inaccurate estimate obscures externalities, sleeper effects, spillovers, and other negative unanticipated consequences. In short, the opportunity costs of defining the wrong problem may be enormous.

*Correctness-in-the-Limit.
As the information on which an estimate is based becomes increasingly complete, the estimate (P*) of a problem (P) eventually should approximate the true answer being estimated. Although the "true" definition and structure of a problem cannot be known with certainty, increasingly complete information should produce increasingly accurate estimates. In the limit, an estimate (P*) should converge on the true but unknown value of the problem (P).

Methods of the second type now available to practicing policy analysts satisfy one or several, but not all, of these criteria. For example, methods such as multiple perspective analysis and synectics, while they satisfy the criterion of character, do not fare well on criteria of replicability, coordination, and correctness-in-the-limit. Methods such as policy capturing, Q-methodology, interpretive structural modeling, the analytic hierarchy process, and strategic assumption surfacing and testing satisfy in different ways criteria of character, replicability, and coordination. As we have seen, however, these methods of the second type do not provide any rule or test that permits analysts to assess the completeness of a set of elements defining a problem. In effect, these methods do not satisfy the correctness-in-the-limit criterion and, as such, do not provide plausible estimates (P*) of the elements that constitute a problem (P).

In contrast to these methods of the second type, policy grid analysis (Dunn, Cahill, Dukes, & Ginsberg, 1986; Dunn & Ginsberg, 1986; also see Bannister & Mair, 1968) satisfies a performance criterion on which all other methods are lacking: correctness-in-the-limit. Apart from meeting the criteria of character, replicability, coordination, and cost-effectiveness, the unique advantage of policy grid analysis is its capacity to estimate the boundaries of a second-order problem composed of the competing problem representations of policy stakeholders. The method of estimation is precise and readily comprehensible, since it involves a series of three simple steps (Dunn, Cahill, Dukes, & Ginsberg, 1986).

The first step is saturation sampling, whereby the analyst generates a snowball sample of stakeholders who affect or are affected by an existing or proposed policy. Each person in an initial set of stakeholders, selected to maximize differences in issue position and influence, is asked to name others who agree and disagree with one or more existing or proposed policies. The stakeholders so named are asked to name others, who in turn provide additional names. The process is continued until no new stakeholders are named, at which point "there is no sampling variance, because the total universe has been surveyed, unless one considers the group as some subsample from a super population" (Sudman, 1976, p. 211).

The second step involves the elicitation of constructs used by stakeholders to represent a given problem situation. These constructs are the "ideas, basic paradigms, dominant metaphors, standard operating procedures, or whatever else we choose to call the systems of interpretation by which we attach meaning to events" (Heclo, 1976, pp. 253-254). Although there are many ways to elicit policy-relevant constructs (e.g., face-to-face interviews, telephone interviews, mailed questionnaires, computer conferencing), the basic procedure is to ask stakeholders to contrast a range of policies and then supply the constructs in terms of which the contrasts were made.
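The logic of these steps, together with the boundary estimate described next, can be rendered as a brief computational sketch. This is an illustration under stated assumptions, not a published implementation: the naming and elicitation functions stand in for fieldwork (interviews or questionnaires) and are hypothetical placeholders, and all data are invented.

```python
# Sketch of the policy grid procedure under stated assumptions:
# name_others(s) and elicit_constructs(s) are hypothetical stand-ins
# for fieldwork; the stopping logic is the saturation rule itself.

def snowball_sample(initial_set, name_others):
    """Step 1: saturation sampling. Expand the sample until no newly
    named stakeholder appears (the total universe has been surveyed)."""
    sampled, frontier = [], list(initial_set)
    seen = set(frontier)
    while frontier:
        stakeholder = frontier.pop(0)
        sampled.append(stakeholder)
        for named in name_others(stakeholder):
            if named not in seen:  # continue only while new names appear
                seen.add(named)
                frontier.append(named)
    return sampled

def cumulative_new_constructs(sampled, elicit_constructs):
    """Steps 2-3: elicit each stakeholder's constructs and tally only the
    nonduplicative ones; the running totals trace the cumulative curve
    whose flattening (slope near zero) estimates the boundary P*."""
    known, curve = set(), []
    for stakeholder in sampled:
        known |= set(elicit_constructs(stakeholder))
        curve.append(len(known))
    return curve

# Toy illustration with invented data:
ties = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": []}
views = {"a": {"cost", "safety"}, "b": {"safety", "equity"},
         "c": {"cost"}, "d": {"equity"}}
sample = snowball_sample(["a"], lambda s: ties[s])
print(cumulative_new_constructs(sample, lambda s: views[s]))
# -> [2, 3, 3, 3]; the plateau marks the estimated problem boundary.
```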
When stakeholders can supply no new constructs, the analyst assumes that an accurate estimate (P*) of the boundaries of the stakeholder's problem (P) has been made. The constructs that form the stakeholder's representation of the problem situation can be subjected to various forms of statistical analysis (see Bannister & Mair, 1968; Dunn, Cahill, & Kearns, 1985; Slater, 1977), although this is not a necessary feature of the process.

The third step in policy grid analysis is second-order boundary estimation. Here the analyst constructs a cumulative frequency distribution of all unique constructs elicited from stakeholders at various stages in the selection of the snowball sample. The various stakeholders are plotted on the horizontal axis; the number of nonduplicative and therefore new constructs is plotted on the vertical axis (Figure 3). As information about the constructs of a gradually expanding sample of stakeholders becomes more complete, estimates of the elements that define the systemic problem space become more accurate. These improvements in the accuracy of estimates represent a continuous transformation of individual problem representations into a systemic or second-order problem representation.

[Figure 3: Cumulative frequency distribution of nonduplicative constructs elicited from successive stakeholders; the curve rises steeply and then flattens at the estimated boundary P*, where the slope of the curve equals zero. SOURCE: Dunn, Cahill, and Dukes (1986), p. 372.]

The cumulative frequency distribution displayed in Figure 3, based on data from a project in criminal justice policy, illustrates how the boundaries of an ill-structured problem may be estimated. As the analyst plots the nonduplicative and therefore new constructs of each successive stakeholder, the slope of the curve displays different rates of change. Initially, there is a rapid rate of change, which then decreases to the point of stagnation; this point represents the true but unknown boundary of the problem, as estimated by the rule that P* lies where the slope of the curve (dy/dx) equals zero. This rule (the first derivative of the cumulative function) is one among several conceivable ways of estimating the boundaries of the problem and, at the same time, reaching the entirely credible conclusion that the collection of additional information from policy stakeholders is unlikely to improve the accuracy of the problem representation.

The estimation procedures outlined above would appear to satisfy the requirements of inductive estimates in general: character, replicability, coordination, cost-effectiveness, and correctness-in-the-limit. Previous applications of similar procedures in contexts at least as complex as public policy (see, e.g., Stabell, 1978) suggest that systemic problem representations normally may be estimated on the basis of a small number of interviews, perhaps as few as twenty, which means that the universe of available problem representations is limited and exhaustible. These estimation procedures permit policy analysts to transform an otherwise unmanageably huge set of individual problem representations into a second-order problem representation that satisfies the rule: "whatever involves all of a collection must not be one of the collection" (Whitehead & Russell, 1910, p. 37).

CONCLUSION

Policy analysts, like allegorical homesteaders, face the seemingly unmanageable task of coping with the wilderness of ill-structured problems, while concurrently attending to well-structured problems that can be addressed with conventional methods of policy analysis.
What has appeared thus far as an unmanageable task may become manageable once analysts acknowledge the principle of methodological congruence: The appropriateness of a particular type of method is a function of its congruence with the type of problem under investigation. In practical terms this means that policy analysts who wish to improve the problem-solving capacities of governments can employ methods that do not ignore the political, social, cultural, and psychological complexity of the policy-making process.

To be sure, conventional methods of the first type are appropriate and useful for solving first-order problems that are relatively well-structured. Contexts of practice, however, are pervaded by second-order problems that, variously described as squishy, messy, or ill-structured, are the class of all first-order problems. Just as methods of the first type are congruent with the analytic demands of first-order problems, so are methods of the second type congruent with the complex analytic demands of second-order problems. When policy analysts fail to observe this principle of methodological congruence they are likely to solve the wrong problem.

Methods of the second type are not limited to general heuristics, but include specific and readily comprehensible procedures for estimating the boundaries and structure of ill-structured problems. Since these estimation procedures appear to satisfy the requirements of inductive estimates in general, policy analysts can assess their own performance in providing approximate answers to the right question, thus coping with the enemies who lurk in the wilderness of conventional policy analysis.

REFERENCES

Ackoff, R.L. (1974a). Beyond problem solving. General Systems, XIX, 237-239.
Ackoff, R.L. (1974b). Redesigning the future: A systems approach to societal problems. New York: John Wiley.
Adelman, L., Stewart, T.R., & Hammond, K.R. (1975). A case history of the application of social judgment theory to policy formulation. Policy Sciences, 6, 137-159.
Allison, G.T. (1971). Essence of decision: Explaining the Cuban missile crisis. Boston: Little, Brown.
Axelrod, R.A. (1976). Structure of decision. Cambridge, MA: Harvard University Press.
Bannister, D., & Mair, J.M.M. (1968). The evaluation of personal constructs. New York: Academic Press.
Beer, S. (1981). The brain of the firm. New York: John Wiley.
Brewer, G.D., & de Leon, P. (1983). The foundations of policy analysis. Homewood, IL: Dorsey Press.
Brown, S.R. (1980). Political subjectivity: Applications of Q-methodology in political science. New Haven: Yale University Press.
Cosier, R.A. (1978). The effects of three potential aids for making strategic decisions on prediction accuracy. Organizational Behavior and Human Performance, 22, 295-306.
Cosier, R.A. (1981). Dialectical inquiry in strategic planning: A case of premature acceptance. Academy of Management Review, 6, 643-648.
Cosier, R.A., Ruble, T.L., & Aplin, J.C. (1978). An evaluation of the effectiveness of dialectical inquiry systems. Management Science, 24, 1483-1490.
Dery, D. (1984). Problem definition in policy analysis. Lawrence, KS: University Press of Kansas.
Dror, Y. (1971). Design for policy sciences. New York: Elsevier.
Dryzek, J. (1983). Don't toss coins into garbage cans: A prologue to policy design. Journal of Public Policy, 3, 345-367.
Dunn, W.N. (1981). Public policy analysis: An introduction. Englewood Cliffs, NJ: Prentice-Hall.
Dunn, W.N. (1983). Values, ethics, and standards in policy analysis. In S.S. Nagel (Ed.), Encyclopedia of policy studies.
New York: Marcel Dekker.
Dunn, W.N., Cahill, A.G., Dukes, M.J., & Ginsberg, A. (1986). The policy grid: A cognitive methodology for assessing change dynamics. In W.N. Dunn (Ed.), Policy analysis: Perspectives, concepts, and methods. Greenwich, CT: JAI Press.
Dunn, W.N., Cahill, A.G., & Kearns, K.P. (1984). Applications of the policy grid to work-related frames of reference (Working paper). Pittsburgh: University of Pittsburgh, University Program for the Study of Knowledge Use.
Dunn, W.N., & Ginsberg, A. (1986). A sociocognitive approach to organizational analysis. Human Relations, 39(11), 955-975.
Fischhoff, B. (1977). Cost-benefit analysis and the art of motorcycle maintenance. Policy Sciences, 8, 177-202.
Fischhoff, B. (1986). Clinical policy analysis. In W.N. Dunn (Ed.), Policy analysis: Perspectives, concepts, and methods (pp. 111-128). Greenwich, CT: JAI Press.
Gordon, W.J. (1961). Synectics. New York: Harper and Row.
Greenberger, M., Crenson, M.A., & Crissey, B.L. (1976). Models in the policy process. New York: Russell Sage Foundation.
Hammond, K.R. (1980). Introduction to Brunswikian theory and methods. New Directions for Methodology of Social and Behavioral Science, 3, 1-12.
Harmon, P., & King, D. (1985). Expert systems: Artificial intelligence in business. New York: John Wiley.
Heclo, H.H. (1976). Policy dynamics. In R. Rose (Ed.), The dynamics of public policy. Beverly Hills: Sage Publications.
Hofstadter, D.R. (1979). Gödel, Escher, Bach. New York: Basic Books.
Hogwood, B.W., & Peters, B.G. (1985). The pathology of public policy. Oxford: Clarendon Press.
Holzner, B., & Marx, J. (1979). Knowledge application: The knowledge system in society. Boston: Allyn and Bacon.
Jones, C.O. (1977). An introduction to the study of public policy (2nd ed.). North Scituate, MA: Duxbury Press.
Kelly, G. (1955). The psychology of personal constructs. New York: W.W. Norton.
Kline, M. (1980). Mathematics: The loss of certainty. New York: Oxford University Press.
Landa, L.N. (1984). Algorithmic-heuristic theory. In R.J. Corsini & B.D. Ozaki (Eds.), Encyclopedia of psychology. New York: John Wiley.
Lasswell, H.D. (1960). Technique of decision seminars. Midwest Journal of Political Science, 4(2), 213-226.
Linder, S.H., & Peters, B.G. (1985). From social theory to policy design. Journal of Public Policy, 4(3), 237-259.
Linder, S.H., & Peters, B.G. (1987). A design perspective on policy implementation: The fallacies of misplaced prescription. Policy Studies Review, 6(3), 459-475.
Linstone, H.A., et al. (1981). The multiple perspective concept: With applications to technology assessment and other decision areas. Technological Forecasting and Social Change, 20, 275-325.
Lipsky, M. (1971). Street-level bureaucracy and the analysis of urban reform. Urban Affairs Quarterly, 6, 391-409.
Mason, R.O. (1969). A dialectical approach to strategic planning. Management Science, 15, 403-414.
Mason, R.O., & Mitroff, I.I. (1981). Challenging strategic planning assumptions: Concepts, techniques, and methods. New York: John Wiley.
Miller, T.C. (1985). Conclusion: A design science perspective. In T.C. Miller (Ed.), Public sector performance: A conceptual turning point. Baltimore: Johns Hopkins University Press.
Mitroff, I.I. (1974). The subjective side of science. New York: Elsevier.
Mitroff, I.I., Barabba, V.P., & Kilmann, R.H. (1977). The application of behavioral and philosophical technologies to strategic planning: A case study of a large federal agency.
Management Science, 24, 44-58.
Mitroff, I.I., & Blankenship, L.V. (1973). On the methodology of the holistic experiment: An approach to the conceptualization of large-scale social experiments. Technological Forecasting and Social Change, 4, 339-353.
Mitroff, I.I., & Featheringham, T. (1976). Towards a behavioral theory of systemic hypothesis-testing and the error of the third kind. Theory and Decision, 7, 205-220.
Mitroff, I.I., & Mason, R.O. (1981). Creating a dialectical social science: Concepts, methods, and models. Dordrecht: D. Reidel.
Mitroff, I.I., Mason, R.O., & Barabba, V.P. (1983). The 1980 census: Policymaking amid turbulence. Lexington, MA: D.C. Heath.
Nagel, S.S. (1982). Policy evaluation. New York: Praeger.
Nagel, S.S. (Ed.). (1983). Encyclopedia of policy studies. New York: Marcel Dekker.
Nagel, S.S., & Neef, M. (1976). Two examples from the legal process. Policy Analysis, 2(2), 356-357.
Osborn, A.F. (1948). Your creative mind. New York: Charles Scribner.
Porter, A.L., & Rossini, F.A. (1983). Technological innovation and its assessment. In S.S. Nagel (Ed.), Encyclopedia of policy studies. New York: Marcel Dekker.
Quade, E.S. (1985). Analysis for public decisions (2nd ed.). New York: Elsevier.
Raiffa, H. (1968). Decision analysis. Reading, MA: Addison-Wesley.
Rein, M. (1976). Social science and public policy. Baltimore: Penguin Books.
Rein, M., & White, S.H. (1977). Policy research: Belief and doubt. Policy Analysis, 3(2), 239-272.
Rescher, N. (1980). Induction. Pittsburgh: University of Pittsburgh Press.
Rose, R. (1977). Disciplined research and undisciplined problems. In C.H. Weiss (Ed.), Using social research in public policy making (pp. 23-36). Lexington, MA: D.C. Heath.
Saaty, T.L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Sabatier, P.A., & Mazmanian, D.A. (1983). Policy implementation. In S.S. Nagel (Ed.), Encyclopedia of policy studies. New York: Marcel Dekker.
Schon, D.A. (1983). The reflective practitioner. New York: Basic Books.
Schwenk, C.R., & Cosier, R.A. (1980). Effects of the expert, devil's advocate, and dialectical inquiry methods on prediction performance. Organizational Behavior and Human Performance, 26, 409-424.
Sieber, S.D. (1981). Fatal remedies. New York: Plenum.
Simon, H.A. (1973). The structure of ill-structured problems. Artificial Intelligence, 4, 181-201.
Slater, P. (1977). Dimensions of intrapersonal space. London: Wiley.
Stabell, C.B. (1978). Integrative complexity of information environment perception and information use: An empirical investigation. Organizational Behavior and Human Performance, 22, 116-142.
Stokey, E., & Zeckhauser, R. (1978). A primer of policy analysis. New York: W.W. Norton.
Strauch, R.E. (1976). A critical look at quantitative methodology. Policy Analysis, 2, 121-144.
Sudman, S. (1976). Applied sampling. New York: Academic Press.
Taylor, S.E., & Crocker, J. (1980). Schematic bases of social information processing. In E. Higgins (Ed.), Social cognition. Hillsdale, NJ: Erlbaum.
Warfield, J.N. (1976). Societal systems: Planning, policy, and complexity. New York: John Wiley.
Watzlawick, P., Weakland, J., & Fisch, R. (1974). Change: Principles of problem formation and problem resolution. New York: W.W. Norton.
White, M.J. (1983). Policy analysis and management science. In S.S. Nagel (Ed.), Encyclopedia of policy studies. New York: Marcel Dekker.
Whitehead, A.N., & Russell, B. (1910). Principia mathematica (Vol. 1).
Cambridge: Cambridge University Press.
Wildavsky, A. (1979). Speaking truth to power. Boston: Little, Brown.
Wood, R.C. (1968). Foreword. In R.A. Bauer & K.J. Gergen (Eds.), The study of policy formation. New York: The Free Press.
Zeckhauser, R., & Shaefer, E. (1968). Public policy and normative economic theory. In R.A. Bauer & K.J. Gergen (Eds.), The study of policy formation. New York: The Free Press.
