My father, who was a political scientist at Harvard, used to say with a superior smile that the Department of Social Relations (Harvard's home for sociologists) ought to be investigated. It was the 1950s and he was imagining Talcott Parsons investigated by some intellectual equivalent of the House Un-American Activities Committee. I laughed, and majored in Economics. A few years later, though, when I was a section man (such was the sexist terminology of 1966) in the interdisciplinary major called Social Studies, I was required to teach Marx, Durkheim, and Weber to undergraduates, which entailed actually reading some sociology. So I soon lost my sneering rights about the field.
Since then I've never been quite able to close my mind to what sociologists say, though like most economists I've given it the old college try. Despite my professional oath never to listen to anyone outside economics, I've listened to David Riesman and C. Wright Mills, for example, and to a long list of sociologically-oriented anthropologists, and latterly to the group of British sociologists (and an occasional Frenchman) doing social studies of science: Michael Mulkay, Harry Collins, Trevor Pinch, Bruno Latour, among many others.
But I worry. My worry is something like the opposite of my father's. It's not that sociology is insufficiently Rigorous. It's that sociology may be rigorously following political science itself into what might be called econowannabe-ism: the promiscuous use of rational choice "models" backed with econometrics. I was a colleague of Gary Becker's when he was conspiring with Jim Coleman at Chicago to accomplish just this for sociology. At the time I thought it was neat. But since then I've seen it for what it is: one idea of how people behave, useful so far as it goes; but really stupid as an all-purpose scientific program. Really. Stupid.
I don't want to offend any of you. I just want to inform you of two modest little points of method. They are obvious, and obviously true. I don't think if you examine them you'll find anything wrong with them, so trivial are they. The merest common sense. But the consequences of violating them have become so grim in modern economics that the field has stopped advancing scientifically. Really, my dears, I want you to know what you are getting into before you become honorary economists.
The two little points are these:
1) Nothing about the world (though of course plenty about how we speak about the world) can be proven on a blackboard. Obvious, right? You want to know about the power dynamics of families? Well, game theory written out on a blackboard can keep you clear about your own way of speaking about threat points and the like. That's nice. I'm not against theory. Swell. But the blackboard stuff doesn't in itself, without calibrating the theory to the facts of the world, tell you anything at all about families in the world. Not anything. If you know as much about the history of philosophy as I do (viz., nothing but what I learned in a freshman survey) you'll recognize this as the proposition that the synthetic a priori is an impossibility.
2) No finding of fit or statistical significance testifies in itself to the scientific importance of an effect. Fit and importance are not the same thing. Nor is fit something that you "first" determine, and "then" move to substance. The substance of an effect is, to use a technical term, its oomph. Oomph ordinarily has nothing whatever to do with whether the coefficient is statistically significant at the .01 or .05 or .10 level.
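A little simulation makes the divorce between fit and oomph vivid. This is my own sketch, with invented slopes and sample sizes, not anything from the economics literature:

```python
# Sketch: statistical significance vs. oomph. All numbers are invented
# for illustration; the point is that the two orderings can reverse.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """OLS slope, its standard error, and the two-sided p-value."""
    n = len(x)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    se = np.sqrt((resid @ resid) / (n - 2) / ((n - 1) * np.var(x, ddof=1)))
    return b, se, 2 * stats.t.sf(abs(b / se), df=n - 2)

# A trivially small effect (true slope 0.005) in a huge sample:
# typically "significant" at any conventional level.
x1 = rng.normal(size=1_000_000)
y1 = 0.005 * x1 + rng.normal(size=x1.size)
print("tiny effect, n = 1,000,000:", ols_slope(x1, y1))

# A big effect (true slope 2.0) in a small noisy sample:
# typically fails a .05 test.
x2 = rng.normal(size=12)
y2 = 2.0 * x2 + rng.normal(scale=8.0, size=x2.size)
print("big effect,  n = 12       :", ols_slope(x2, y2))
```

The first coefficient would sail through a .01 significance filter and matters not at all; the second would be tossed out as "insignificant" and is four hundred times bigger.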
Now if you are a rational choice wannabe, or if you have participated in the long and distinguished tradition, quite independent of econometrics, of applications of statistical methods to sociology, you are going to misunderstand me on one or both of these points. How can I be so sure? Because economists do.
A theoretical economist like the Nobel laureate Kenneth Arrow, who agrees with me about little point Number 2, statistical significance [Arrow 1959], thinks that what I'm saying in little point Number 1 is that economists should eschew theory. Ken said so [McCloskey 1998]. I'm not. Theorizing about the economy or the society is inevitable and desirable, and mathematics is surely one way to think. What I am objecting to is what Joseph Schumpeter long ago called the Ricardian Vice, after the English economist who first masterfully practiced it: namely, the claim to draw conclusions about the world from blackboard theorizing. Thus, Ricardo showed that free trade was good because it exploited comparative advantage. Likewise, Gary Becker "shows" that families are efficient because they exploit comparative advantage. In neither case does the theoretical reasoning in itself, without calculating the oomph, "show" anything of the kind. It shows that under assumptions A the conclusions C follow. Well, grand. But now it remains to be seen whether the assumptions A have oomph in the world. That is an empirical inquiry.
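To see the bare logical form, here is the schoolroom version of Ricardo's argument, with the unit labor costs usually reported from his Principles (my rendering, not a quotation):

```latex
% Assumptions A: two goods, two countries, constant unit labor costs,
% frictionless trade. Labor-years per unit of output:
\[
\begin{array}{lcc}
                & \text{cloth} & \text{wine} \\
\text{England}  & 100          & 120 \\
\text{Portugal} & 90           & 80
\end{array}
\qquad
\frac{120}{100} > \frac{80}{90}
\]
% England's relative cost of wine exceeds Portugal's, so conclusion C
% follows: England specializes in cloth, Portugal in wine, and trade
% raises the joint output of both goods, given A.
\]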
An empirical economist who agrees with me about little point 1, the futility of theory without observation, will think that what I'm saying in little point 2 is that statistical observation is a bad thing. No, no, a thousand times no (how's that for piling up sample size?). There's nothing at all wrong with Isaac Ehrlich, one of Gary's students so long ago, fitting murder rates across states to the presence of capital punishment and coming up with a 7:1 coefficient: one execution prevents seven murders, said Isaac. If true (it's not, but that's not the issue here), the magnitude of the capital-punishment effect on murder rates is scientifically relevant. Its statistical significance in the narrow sense is not. That our sample sizes or the quality of our data are such that the fits are good or bad is a tad interesting, for some few uses of Isaac's finding. But without a loss function, without some notion of how much it matters to us whether we commit Type I or Type II errors, the calculation of one side of the matter (statistical significance) is mostly useless. It certainly is no way to select the truth.
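A toy calculation shows what the loss function does. This is my illustration, not Ehrlich's regression; the estimate, standard error, and loss weights are all invented:

```python
# Sketch: the same estimate, with the same p-value, warrants different
# actions under different loss functions. All numbers are invented.
import numpy as np
from scipy import stats

b, se = 3.0, 2.0               # hypothetical estimated effect and standard error
p = 2 * stats.norm.sf(b / se)  # two-sided p-value, about 0.13: "insignificant"

# Rough shortcut, assumed for illustration: treat the true effect theta
# as distributed N(b, se^2) and average each loss over that distribution.
theta = np.linspace(b - 8 * se, b + 8 * se, 4001)
w = stats.norm.pdf(theta, loc=b, scale=se)
w /= w.sum()

def expected_loss(act, loss_I, loss_II):
    """loss_I weights acting when theta <= 0 (Type I-style harm); loss_II
    weights failing to act when theta > 0 (Type II-style forgone benefit)."""
    if act:
        return loss_I * np.sum(w * np.clip(-theta, 0.0, None))
    return loss_II * np.sum(w * np.clip(theta, 0.0, None))

for loss_I, loss_II in [(100.0, 1.0), (1.0, 1.0), (1.0, 100.0)]:
    act = expected_loss(True, loss_I, loss_II) < expected_loss(False, loss_I, loss_II)
    print(f"loss ratio {loss_I:5.0f}:{loss_II:<5.0f} -> act? {act}  (p = {p:.2f} throughout)")
```

The p-value never moves; what flips the decision is the relative cost of the two errors, which, as Neyman, Pearson, and Wald say next, the mathematics cannot supply.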
In 1933 the founders of the official method of modern statistics (which, strange to say, applied statisticians in economics and sociology never actually use) put it this way:
Is it more serious to convict an innocent man or to acquit a guilty? That will depend on the consequences of the error; is the punishment death or fine; what is the danger to the community of released criminals; what are the current ethical views on punishment? From the point of view of mathematical theory all that we can do is to show how the risk of errors may be controlled and minimised. The use of these statistical tools in any given case, in determining just how the balance should be struck, must be left to the investigator. (Neyman and Pearson 1933, p. 296)
Abraham Wald, another founder of theoretical statistics, went further: "The question as to how the form of the weight [that is, loss] function should be determined, is not a mathematical or statistical one. The statistician who wants to test certain hypotheses must first determine the relative importance of all possible errors, which will depend on the special purposes of his investigation" (1939, p. 302, italics supplied).
Economists, however, have carried on without the slightest recognition of the two little points. When I would make the point about the insignificance of statistical significance I would often get the reply, "Oh yeah. But we don't do such a stupid thing as mixing up merely statistical significance in R. A. Fisher's way of talking with scientific or policy or oomphity significance." Uh-huh. I got so tired of this reply that Stephen Ziliak and I decided to test it on the entire empirical contents of the American Economic Review in the 1980s. We applied a 19-item questionnaire about the proper use of statistical techniques to the 182 main empirical articles (most of the rest were violations of point 1, trying to prove on a blackboard that city size results from economies of information or that anticipated changes in Federal Reserve policy have no effect on inflation). The result? Fully 96 percent of the papers misused statistical significance to claim substantive significance, and 70 percent used only statistical significance, never asking how big is big.
Let me be clear. A scientist must of course think and watch, theorize and observe. Real and progressive sciences, like social history or geomorphology, do both. But the trouble with blackboard proof and statistical significance is that though they look like thinking and watching, they are actually not. They are, as the physicist Richard Feynman once memorably expressed it, "cargo cult" methods [1985]. He was referring to the New Guinea tribesmen who were so impressed by the cargo planes that would take off and land during the War that afterwards they made coconut shell runway lights and smoothed earth landing fields to encourage the great birds to return. Their "airports" looked a lot like the real thing, and so far as the tribesmen were concerned were the real thing. (I'll bet the tribeswomen had more sense). But no aviation was accomplished.
I regret to say that blackboard theorizing without empirical purchase and statistical significance without practical significance constitute a cargo cult in modern economics, and in modern population biology, and in choice-theoretic sociology, and (terrifyingly) in medical statistics. Please, please, let's get back to the sort of real science that social historians and particle physicists do, where we deal, as scientists do, with the one thing needful in science: oomph. And shun the cargo cult, oh ye econowannabes.
Works Cited