
SEP, ‘Scientific Explanation’

http://plato.stanford.edu/entries/scientific-explanation/

Nayuta Miki Jan/24/2012

1

Background
• Contrast and continuity between scientific and ordinary explanation: examples from ordinary life are often taken seriously.
• Many philosophers, especially DN theorists, think of concepts like “explanation”, “law”, “cause”, and “support for counterfactuals” as part of an interrelated family or circle of concepts that are “modal” in character. They assume that it is circular to explain one of them in terms of the others and prefer a Humean account of causation and related concepts.

2

The Deductive-Nomological Model

The Basic Idea
An explanandum is a sentence that describes the phenomenon to be explained. The explanans are the class of those sentences that are adduced, or cited, to account for the phenomenon.

The DN model
The explanans successfully explain the explanandum iff
1. The explanandum is a logical consequence of the explanans.
2. The sentences constituting the explanans are true.
3. The explanans contain at least one “law of nature” as an essential premise (without it, the derivation would be invalid).

Problems
• What counts as a law? Is Mendel’s law, which has exceptions, a law?
• Do statistical laws explain, and if so, how? → Hempel (1965):


– Deductive-statistical explanation: the deduction of a narrower statistical uniformity from more general statistical premises.
– Inductive-statistical explanation: an explanation is successful to the extent that its explanans confers high probability on its explanandum outcome.
• Counterexamples to sufficiency:
1. Explanatory irrelevancies:
– Explanans: All males who take birth control pills regularly fail to get pregnant (law). John Jones is a male who has been taking birth control pills regularly.
– Explanandum: John Jones fails to get pregnant.
→ This satisfies the DN criteria but doesn’t seem explanatory.
2. Explanatory asymmetries:
– Explanandum: the height of a flagpole.
– Explanans: the length of the shadow cast by the pole, the angle of the sun above the horizon, and laws about the rectilinear propagation of light.
→ This satisfies the DN criteria but doesn’t seem explanatory.
→ “a derivation can satisfy the DN criteria and yet fail to identify the causes of an explanandum—when this happens the derivation will fail to be explanatory.” Or we can put it this way: “the regularity account of causation favored by DN theorists is at best incomplete.”
• A counterexample to necessity: The impact of my knee on the desk caused the tipping over of the inkwell (Scriven 1962).
→ Hempel (1965): the use of “caused” implicitly claims that there is a “law” (the hidden structure strategy).
– Suppose that we are presented with an explanation from economics that does not appeal to any generalization that can be counted as a law, but that underlying this “non-ideal” explanation is some incredibly complex set of facts described in terms of classical mechanics and electromagnetism. If this underlying “explanation” is computationally intractable and full of irrelevant detail, in what sense is it an ideal against which the original explanation should be measured?
– Assume that an explanation provides understanding. Many users of “non-ideal” explanations typically don’t know the hidden structure, so how does it contribute to understanding? It seems that what makes a “non-ideal” explanation an explanation must be features that can be known by those who use the explanation.

3

The Statistical Relevance Model (Salmon 1971)

The Basic Idea
Given some class A, an attribute C will be statistically relevant to another attribute B if and only if P(B | A.C) ≠ P(B | A).

The SR Model (intuitive)
Statistically relevant properties are explanatory. Statistically irrelevant properties are not.

A homogeneous partition of a class A with respect to B is a set of subclasses Ci of A that are mutually exclusive and exhaustive such that
• ∀i, j [i ≠ j → P(B | A.Ci) ≠ P(B | A.Cj)]
• ¬∃Dk ⊂ A [P(B | A.Ci.Dk) ≠ P(B | A.Ci)] → no further statistically relevant partition of any cell can be made with respect to B.

The SR Model
An explanation of why some member x of the class characterized by attribute A has attribute B consists of the following information:
1. The prior probability of B within A: P(B | A)
2. A homogeneous partition of A with respect to B, together with the probability of B within each cell of the partition: P(B | A.Ci)
3. The cell of the partition to which x belongs.
*No reference is made to the values of the relevant probabilities.

Example
S: having a strep infection
Q: recovering quickly
T: treated with penicillin
R: having a penicillin-resistant strain
Assume that P(Q | S.T.R) = P(Q | S.-T.R) = P(Q | S.-T.-R), and that this common value differs from P(Q | S.T.-R). Then [S.T.-R], [S.(T.R ∨ -T.R ∨ -T.-R)] is a homogeneous partition of S with respect to Q.
An SR explanation of why x, who has a strep infection, recovers quickly consists of
1. a statement of the probability of quick recovery among all those with strep
2. a statement of the probability of quick recovery in each of the two cells of the above partition
3. a statement of the cell to which x belongs, which is [S.T.-R]
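The relevance and homogeneity conditions in the strep example can be checked numerically. The probability values below are invented for illustration (they are not from the SEP entry); the only constraint taken from the example is that the three cells other than [S.T.-R] share one recovery probability.

```python
# Illustrative (invented) numbers for Salmon's strep example.
# Within the reference class S (strep infection):
#   T = treated with penicillin, R = penicillin-resistant strain.
# p_cell[(t, r)] = probability of each (T, R) cell within S;
# p_q[(t, r)]    = P(Q | S and that cell), Q = quick recovery.
p_cell = {(1, 0): 0.4, (1, 1): 0.1, (0, 1): 0.2, (0, 0): 0.3}
p_q = {(1, 0): 0.9,                            # treated, non-resistant
       (1, 1): 0.3, (0, 1): 0.3, (0, 0): 0.3}  # equal, as the example assumes

def prob_q(cells):
    """P(Q | S restricted to the union of the given (T, R) cells)."""
    w = sum(p_cell[c] for c in cells)
    return sum(p_cell[c] * p_q[c] for c in cells) / w

all_cells = list(p_cell)
prior = prob_q(all_cells)                 # P(Q | S), the prior probability
# T is statistically relevant to Q within S: P(Q | S.T) != P(Q | S)
p_q_given_t = prob_q([(1, 0), (1, 1)])
print(prior, p_q_given_t)                 # 0.54 vs 0.78: relevant

# The two-cell partition [S.T.-R] vs [S.(T.R v -T.R v -T.-R)]:
cell1, cell2 = [(1, 0)], [(1, 1), (0, 1), (0, 0)]
print(prob_q(cell1), prob_q(cell2))       # 0.9 vs 0.3: cells differ
# cell2 is homogeneous: partitioning it further by T or R changes nothing
print(prob_q([(1, 1)]), prob_q([(0, 1)]), prob_q([(0, 0)]))  # all 0.3
```

Note that the SR explanation itself only reports these probabilities and x's cell; it does not require the probability of recovery to be high.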

*The cell to which x belongs is [S.T.-R]. Notice that the same explanans will explain both why a subject with strep and certain other properties recovers quickly if he does, and also why he does not recover if he does not.

Problems
1. It is dubious that the sorts of “statistical explanation” found in the social and biomedical sciences satisfy the homogeneity condition.
2. The SR model assumes that what is explained is an individual fact, such as Albert becoming a juvenile delinquent. But we can think that what is explained is not an individual fact but a more general one, such as why the expected incidence of delinquency is higher among certain subgroups than others. If this is correct, there is no obvious need for a separate theory of statistical explanation of individual outcomes of the sort that Hempel and Salmon sought to devise. But I think it is a mistake.
3. In order to bridge causal claims and statistical relevance relations, Salmon assumes the Causal Markov condition.
Causal Markov condition: A pair (G, P) of a directed acyclic graph G and a probability distribution P satisfies the Causal Markov condition ⇔ for each node X of G, conditional on the set of all parents of X, {X} and the set of non-descendants of X are independent.
There are problems with this condition:
• When the variables are characterized in an insufficiently fine-grained way, this condition fails to hold.
• The Causal Markov condition underdetermines causal relationships. The situation where A is a common cause of B and S and the one where B causes A, which in turn causes S, can have the same statistical relevance relationships. In both cases, P(S | A.B) = P(S | A).
[Two figures: (i) A as a common cause of B and S; (ii) B → A → S.]
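The underdetermination point can be seen numerically: a common-cause structure and a chain structure can both make A screen off S from B. The conditional probabilities below are invented for illustration; only the two graph shapes come from the text.

```python
# Two causal structures over binary variables A, B, S that induce the
# same screening-off relation (all numbers invented for illustration):
#   (i)  common cause:  B <- A -> S
#   (ii) chain:         B -> A -> S
# In both, S is independent of B conditional on A: P(S | A.B) = P(S | A).

def joint_common_cause():
    pa = 0.3                       # P(A=1)
    pb_a = {1: 0.8, 0: 0.2}        # P(B=1 | A)
    ps_a = {1: 0.7, 0: 0.1}        # P(S=1 | A)
    return {(a, b, s): (pa if a else 1 - pa)
            * (pb_a[a] if b else 1 - pb_a[a])
            * (ps_a[a] if s else 1 - ps_a[a])
            for a in (0, 1) for b in (0, 1) for s in (0, 1)}

def joint_chain():
    pb = 0.4                       # P(B=1)
    pa_b = {1: 0.9, 0: 0.2}        # P(A=1 | B)
    ps_a = {1: 0.7, 0: 0.1}        # P(S=1 | A)
    return {(a, b, s): (pb if b else 1 - pb)
            * (pa_b[b] if a else 1 - pa_b[b])
            * (ps_a[a] if s else 1 - ps_a[a])
            for a in (0, 1) for b in (0, 1) for s in (0, 1)}

def cond_prob_s(joint, a, b=None):
    """P(S=1 | A=a) or, if b is given, P(S=1 | A=a, B=b)."""
    keep = [k for k in joint if k[0] == a and (b is None or k[1] == b)]
    z = sum(joint[k] for k in keep)
    return sum(joint[k] for k in keep if k[2] == 1) / z

for joint in (joint_common_cause(), joint_chain()):
    # Screening off: conditioning on B adds nothing once A is fixed.
    assert abs(cond_prob_s(joint, 1, 1) - cond_prob_s(joint, 1)) < 1e-12
    assert abs(cond_prob_s(joint, 0, 0) - cond_prob_s(joint, 0)) < 1e-12
```

Since the same statistical-relevance facts fit both graphs, the statistics alone do not determine which causal structure is the right one.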

4

The Causal Mechanical Model (Salmon 1984)

The Basic Idea
A causal process is a physical process that is characterized by the ability to transmit a mark (e.g., a dent in an automobile fender) in a continuous way. A causal interaction is a spatio-temporal intersection between two causal processes which modifies the structure of both (e.g., a collision between two cars that dents both).

The CM Model
An explanation of some event E will trace the causal processes and interactions leading up to E, or at least some portion of these, as well as describing the processes and interactions that make up the event itself.

“Suppose that a cue ball, set in motion by the impact of a cue stick, strikes a stationary eight ball with the result that the eight ball is put in motion and the cue ball changes direction. The impact of the stick also transmits some blue chalk to the cue ball, which is then transferred to the eight ball on impact. The cue stick, the cue ball, and the eight ball are causal processes, as is shown by the transmission of the chalk mark, and the collision of the cue stick with the cue ball and the collision of the cue and eight balls are causal interactions. [...] citing such facts about processes and interactions explains the motion of the balls after the collision.”

Problems
1. Those features of a process P in virtue of which it qualifies as a causal process may not be the features of P that are causally or explanatorily relevant to the outcome E that we want to explain (Hitchcock 1995).
• The usual elementary textbook “scientific explanation” of the motion of the balls following their collision refers to the mass and velocity of the balls rather than their color or the presence of the blue chalk mark. They are all marks that make the processes involving them causal (according to Salmon’s definition), but only some are explanatorily relevant.
– Salmon (1994): A causal process is a process that transmits a non-zero amount of a conserved quantity (a quantity so characterized in physics) at each moment. → Still the same problem arises: why is the linear momentum of a moving ball causally relevant while other conserved quantities are not? Furthermore, there are many cases in which the explanatorily relevant variables are not conserved quantities.
– Salmon (1997): Both statistical relevance relations and connecting causal processes are required for an explanation. → Both of them are insufficient to pick out explanatorily relevant causal relationships, so there is no reason to assume that this new proposal works well.

2. Cases that lack a spatio-temporal interaction at the individual level:
• Newtonian gravitational theory, which involves “action at a distance.”
• Causation by omission: “I killed the plant by not watering it.”
• Explanations involving complex systems, e.g., the expansion of a gas into a larger container from a smaller one. Standard explanations refer not to the causal processes and interactions of molecules but to the overall behavior of the gas. “This treatment abstracts radically from the details of the causal processes involving particular individual molecules and instead focuses on identifying higher level variables”. Even if an account that traced individual molecular trajectories were to be produced, a great deal of the information it contains would be irrelevant to the behavior of the gas.
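The “elementary textbook explanation” contrasted with Salmon’s mark criterion under problem 1 cites only mass and velocity. A minimal sketch of that explanation, under the idealizing assumption of a one-dimensional elastic collision (masses and velocities invented):

```python
# For a head-on elastic collision, momentum and kinetic-energy
# conservation fix the outgoing velocities from masses and incoming
# velocities alone; the chalk mark and the balls' colors play no role.

def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal-mass balls, cue ball moving at 2 m/s, eight ball at rest:
# the balls exchange velocities.
u1, u2 = elastic_collision_1d(0.17, 2.0, 0.17, 0.0)
print(u1, u2)  # 0.0 2.0
```

This is Hitchcock’s point in miniature: the variables doing the explanatory work (mass, velocity) are not the features (mark transmission) that make the processes causal on Salmon’s definition.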