Philosophy of special sciences

The last decade has seen the flourishing of this new genre well beyond the three most familiar disciplines: philosophy of mathematics, philosophy of physics, and philosophy of biology. (I suppose philosophy of mind is a close cousin, even if it is not quite the same as what one might call philosophy of cognitive science.) A selection of recent titles:

Baird, Scerri, and McIntyre, Philosophy of Chemistry (2011)

Bechtel, “Philosophy of Cell Biology” (2019)

Boniolo and Nathan, Philosophy of Molecular Medicine (2016)

Brigandt, “Philosophy of Molecular Biology” (2018)

Broadbent, Philosophy of Epidemiology (2013)

Cartwright and Montuschi, Philosophy of Social Science (2014)

Fagan, Philosophy of Stem Cell Biology (2013)

Gabbay, Thagard, and Woods, Philosophy of Statistics (2011)

Green, “Philosophy of Systems and Synthetic Biology” (2017)

Griffiths and Stotz, Genetics and Philosophy (2013)

Kincaid and Ross, The Oxford Handbook of Philosophy of Economics (2009)

O’Malley, Philosophy of Microbiology (2014)

Plutynski, Explaining Cancer (2018)

Thagard, Woody, and Hendry, Philosophy of Chemistry (2012)

Valles, Philosophy of Population Health (2018)

Beyond these, there have also been several new books in the philosophy of medicine: Broadbent, Howick (on evidence-based medicine), Stegenga, and Thompson and Upshur.

A rather glaring omission is the philosophy of engineering.

No knower is an island

Woodblock depicting the island of Bensalem from Bacon’s New Atlantis

Popper on “Crusonian science” in The Open Society and Its Enemies (1945), a particularly vivid illustration of what has become a central area of research in academic philosophy:

Two aspects of the method of the natural sciences are of importance in this connection. Together they constitute what I may term the ‘public character of scientific method’. First, there is something approaching free criticism. A scientist may offer his theory with the full conviction that it is unassailable. But this will not impress his fellow-scientists and competitors; rather it challenges them: they know that the scientific attitude means criticizing everything, and they are little deterred even by authorities. Secondly, scientists try to avoid talking at cross-purposes. (I may remind the reader that I am speaking of the natural sciences, but a part of modern economics may be included.) They try very seriously to speak one and the same language, even if they use different mother tongues. In the natural sciences this is achieved by recognizing experience as the impartial arbiter of their controversies. When speaking of ‘experience’ I have in mind experience of a ‘public’ character, like observations, and experiments, as opposed to experience in the sense of more ‘private’ aesthetic or religious experience; and an experience is ‘public’ if everybody who takes the trouble can repeat it. In order to avoid speaking at cross-purposes, scientists try to express their theories in such a form that they can be tested, i.e. refuted (or else corroborated) by such experience.

This is what constitutes scientific objectivity. Everyone who has learned the technique of understanding and testing scientific theories can repeat the experiment and judge for himself. In spite of this, there will always be some who come to judgements which are partial, or even cranky. This cannot be helped, and it does not seriously disturb the working of the various social institutions which have been designed to further scientific objectivity and criticism; for instance the laboratories, the scientific periodicals, the congresses. This aspect of scientific method shows what can be achieved by institutions designed to make public control possible, and by the open expression of public opinion, even if this is limited to a circle of specialists. Only political power, when it is used to suppress free criticism, or when it fails to protect it, can impair the functioning of these institutions, on which all progress, scientific, technological, and political, ultimately depends.

In order to elucidate further still this sadly neglected aspect of scientific method, we may consider the idea that it is advisable to characterize science by its methods rather than by its results. Let us first assume that a clairvoyant produces a book by dreaming it, or perhaps by automatic writing. Let us assume, further, that years later as a result of recent and revolutionary scientific discoveries, a great scientist (who has never seen that book) produces one precisely the same. Or to put it differently we assume that the clairvoyant ‘saw’ a scientific book which could not then have been produced by a scientist owing to the fact that many relevant discoveries were still unknown at that date. We now ask: is it advisable to say that the clairvoyant produced a scientific book? We may assume that, if submitted at the time to the judgement of competent scientists, it would have been described as partly ununderstandable, and partly fantastic; thus we shall have to say that the clairvoyant’s book was not when written a scientific work, since it was not the result of scientific method. I shall call such a result, which, though in agreement with some scientific results, is not the product of scientific method, a piece of ‘revealed science’.

In order to apply these considerations to the problem of the publicity of scientific method, let us assume that Robinson Crusoe succeeded in building on his island physical and chemical laboratories, astronomical observatories, etc., and in writing a great number of papers, based throughout on observation and experiment. Let us even assume that he had unlimited time at his disposal, and that he succeeded in constructing and in describing scientific systems which actually coincide with the results accepted at present by our own scientists. Considering the character of this Crusonian science, some people will be inclined, at first sight, to assert that it is real science and not ‘revealed science’. And, no doubt, it is very much more like science than the scientific book which was revealed to the clairvoyant, for Robinson Crusoe applied a good deal of scientific method. And yet, I assert that this Crusonian science is still of the ‘revealed’ kind; that there is an element of scientific method missing, and consequently, that the fact that Crusoe arrived at our results is nearly as accidental and miraculous as it was in the case of the clairvoyant. For there is nobody but himself to check his results; nobody but himself to correct those prejudices which are the unavoidable consequence of his peculiar mental history; nobody to help him to get rid of that strange blindness concerning the inherent possibilities of our own results which is a consequence of the fact that most of them are reached through comparatively irrelevant approaches. And concerning his scientific papers, it is only in attempts to explain his work to somebody who has not done it that he can acquire the discipline of clear and reasoned communication which too is part of scientific method. 

In one point—a comparatively unimportant one—is the ‘revealed’ character of the Crusonian science particularly obvious; I mean Crusoe’s discovery of his ‘personal equation’ (for we must assume that he made this discovery), of the characteristic personal reaction-time affecting his astronomical observations. Of course it is conceivable that he discovered, say, changes in his reaction-time, and that he was led, in this way, to make allowances for it. But if we compare this way of finding out about reaction-time, with the way in which it was discovered in ‘public’ science—through the contradiction between the results of various observers—then the ‘revealed’ character of Robinson Crusoe’s science becomes manifest.

To sum up these considerations, it may be said that what we call ‘scientific objectivity’ is not a product of the individual scientist’s impartiality, but a product of the social or public character of scientific method; and the individual scientist’s impartiality is, so far as it exists, not the source but rather the result of this socially or institutionally organized objectivity of science.

A rather promiscuous, historically minded syllabus on this subject broadly construed—what we could mean by the “social character” of knowledge—might run through Socratic dialogue; Descartes on self-knowledge; Hegel and the dialectical turn; Freud and the psychoanalytic turn (as rupture of Cartesianism); Marx and the ideological turn (as rupture of autonomous liberal subject); reactions to British idealism, solipsism, and skepticism; Peirce, Dewey, and other pragmatists on communities of inquiry; Kuhn, Lakatos, and midcentury philosophy of science (normal science, research programs); Barthes and Foucault on the author; post-structuralism and existentialism on humanism; more contemporary logical puzzles over private language, self-knowledge, and common knowledge; externalism in epistemology, philosophy of language, and philosophy of mind; social construction and the science wars (post-Sokal); Foucault, Ian Hacking, and historical ontology; the career of “social epistemology” (compare “standpoint epistemology”) as new research programs (especially the new epistemology of trust and testimony); the rise of sociology and especially political economy of science; epistemic scrutiny of mathematical practice and new anxieties over mathematical knowledge (post-Four Color Theorem); and contemporary work on democracy and epistemology.

Writ large, this story is often as much about the self—its transparency or opacity, its autonomy or social conditioning—as it is about knowledge. How much can one do alone? How far can one be Crusonian? On one side there is inwardness, individuality, privacy, personality, property, skepticism, logic, and pure reason (or at least the romantic artist, the Crusonian pure reasoner); on the other there is outwardness (whether the external world or other selves), community, public life, impersonality, the commons, trust, conversation, and the dialogic imagination.

See also

Elizabeth Anderson, “The Epistemology of Democracy”

Michael Brady and Miranda Fricker, The Epistemic Life of Groups

Helen Longino, Science as Social Knowledge

Science in crisis

An abbreviated list of a new genre yoking together meta-science, sociology of science, and social epistemology, focusing on varieties of scientific malfeasance:

R. Barker Bausell, The Problem with Science: The Reproducibility Crisis and What to Do about It (2021)

Aubrey Clayton, Bernoulli’s Fallacy: Statistical Illogic and the Crisis of Modern Science (2021)

Nicolas Chevassus-au-Louis, Fraud in the Lab: The High Stakes of Scientific Research (2019)

Ben Goldacre, Bad Science: Quacks, Hacks, and Big Pharma Flacks (2010)

Gareth Leng and Rhodri Ivor Leng, The Matter of Facts: Skepticism, Persuasion, and Evidence in Science (2020)

Philip Mirowski, Science-Mart (2011)

Stuart Ritchie, Science Fictions: Exposing Fraud, Bias, Negligence, and Hype in Science (2020)

There’s also a counterpart genre that defends science against these worries (cf. Latour on critique running out of steam):

Harry Collins, Rethinking Expertise (2008)

Harry Collins, Are We All Scientific Experts Now? (2014)

Harry Collins and Robert Evans, Why Democracies Need Science (2017)

Lee McIntyre, The Scientific Attitude (2020)

Naomi Oreskes, Why Trust Science? (2019)

And finally, there are volumes that focus on trust, democracy, mistrust and distrust, consensus and dissensus, and the sociology and politics of expertise:

Mark B. Brown, Science in Democracy: Expertise, Institutions, and Representation (2009)

Gil Eyal, The Crisis of Expertise (2019)

Stephen Hilgartner, Science on Stage: Expert Advice as Public Drama (2000)

Philip Kitcher, Science in a Democratic Society (2011)

Robert N. Proctor and Londa Schiebinger, Agnotology: The Making and Unmaking of Ignorance (2008)

David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020)

Naomi Oreskes and Erik M. Conway, Merchants of Doubt (2011)

Zeynep Pamuk, Politics and Expertise: How to Use Science in a Democratic Society (2021)

The rise and fall of therapeutic rationality

This ProPublica story—not just the spread of disinformation about these drugs, but specifically doctors’ complicity in generating runs and shortages, endangering patients who need them for chronic diseases such as lupus—reminds me of what the physician-historian Scott Podolsky calls a “pyrrhic victory” in the battle over “therapeutic rationality” in his wonderful book The Antibiotic Era: Reform, Resistance, and the Pursuit of a Rational Therapeutics—which anyone interested in the history or philosophy of medical evidence should go read immediately.

Podolsky shows that in the 1970s a powerful backlash from a coalition of doctors and pharmaceutical companies against the FDA’s new power to regulate drugs helped ensure we have no robust, centralized public oversight of prescription practices. (If you’re surprised to see doctors opposing what you think of as the public good, you’ll be even more surprised to read about their opposition to universal health insurance in Paul Starr’s The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry.)

Here’s how Podolsky puts it:

The limits to government encroachment on the prescribing of antibiotics in the United States would be reached with Panalba and the fixed-dose combination antibiotics. While the FDA had been empowered to remove seemingly ‘irrational’ drugs from the marketplace, no one had been empowered to rein in the seemingly inappropriate prescribing of appropriate drugs. The 1970s would witness ongoing professional and government attention given to the increasingly quantified prevalence of ‘irrational’ antibiotic prescribing and its consequences, and such attention would in fact lead to attempts to restrain such prescribing through both educational and regulatory measures. The DESI process, though, had generated a vocal backlash against centralized attempts to further delimit individual antibiotic prescribing behavior in the United States, resulting in generally failed attempts to control prescribing at local, let alone regional or national, levels in the United States.

In the case of antibiotics, the result has been decades of promiscuous prescription, as overuse of antibacterials helped to breed a new generation of antibiotic-resistant “superbugs”—at the very same time that pharmaceutical companies, deciding that these drugs weren’t profitable, stopped trying to develop new ones. We thus have very few antibiotics to take the place of the ones that no longer work, even though isolated voices have been sounding the alarm all along—just as others have regarding pandemics. (Obama’s administration not only put in place a pandemic response team that Trump’s administration dismantled; it also developed a “National Action Plan for Combating Antibiotic-Resistant Bacteria.”) This may be the least familiar massive negative externality of our time. Another result of such promiscuous prescription is much better known: we call it the opioid crisis.

However you view the FDA today—emblem of consumer protection or bureaucratic mismanagement, regulatory capture or government barrier to innovation, success story or failure—there is no question that public oversight of drugs is important and that it is high time to rethink how we regulate prescriptions, too.

The most amazing fact

A charming discussion of what should be called the fundamental theorem of computation theory, in Epstein and Carnielli, Computability: Computable Functions, Logic, and the Foundations of Mathematics (2008):

We have studied one formalization of the notion of computability. In succeeding chapters we will study two more: recursive functions and functions representable in a formal system.

The Most Amazing Fact
All the attempts at formalizing the intuitive notion of computable function yield exactly the same class of functions.

So if a function is Turing machine computable, it can also be computed in any of the other systems described in Chapter 8.E. This is a mathematical fact which requires a proof. […] Odifreddi, 1989 establishes all the equivalences. […]

The Most Amazing Fact is stated about an extensional class of functions, but it can be stated constructively: Any computation procedure for any of the attempts at formalizing the intuitive notion of computable function can be translated into any other formalization in such a way that the two formalizations have the same outputs for the same inputs.

In 1936, even before these equivalences were established, Church said,

We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or of a lambda-definable function of positive integers). This definition is thought to be justified by the considerations which follow, so far as positive justification can ever be obtained for the selection of a formal definition to correspond to an intuitive notion.

So we have

Church’s Thesis: A function is computable iff it is lambda-definable.

This is a nonmathematical thesis: it equates an intuitive notion (computability) with a precise, formal one (lambda-definability). By our amazing fact this thesis is equivalent to

A function is computable iff it is Turing machine computable.

Turing devised his machines in a conscious attempt to capture in simplest terms what computability is. That his model turned out to give the same class of functions as Church’s (as established by Turing in the paper cited above) was strong evidence that it was the “right” class. Later we will consider some criticisms of Church’s Thesis in that the notion of computability should coincide with either a larger or a smaller class than the Turing machine computable ones.
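
The equivalence at the heart of the Most Amazing Fact can be seen in miniature. Here is a hedged sketch in Python (my own illustration, not from Epstein and Carnielli): numbers encoded as Church numerals in the lambda-definable style, with arithmetic defined purely by function application, agreeing with ordinary machine arithmetic on the same inputs.

```python
# Church numerals: the numeral n is the function applying f to x n times.
# This is the lambda-definability side of Church's Thesis in miniature.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by applying 'add one' to 0."""
    return n(lambda k: k + 1)(0)

def from_int(k):
    """Encode an ordinary integer as a Church numeral."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

# Two formalizations, same outputs on the same inputs:
assert to_int(add(from_int(3))(from_int(4))) == 3 + 4
assert to_int(mul(from_int(3))(from_int(4))) == 3 * 4
```

This is, of course, only a translation between one formalism and ordinary arithmetic for two particular functions; the theorem proper, as the excerpt notes, covers every computation procedure in every one of the rival formalizations.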

Mechanism in biology

William Bechtel, Discovering Cell Mechanisms: The Creation of Modern Cell Biology (2006)

William Bechtel, “Mechanism and Biological Explanation,” Philosophy of Science (2011)

William Bechtel, “Biological Mechanisms: organized to maintain autonomy,” in Systems Biology: Philosophical Foundations (2007)

Carl Craver and James Tabery, “Mechanisms in Science,” Stanford Encyclopedia of Philosophy (2015)

Carl F. Craver and Lindley Darden, In Search of Mechanisms: Discoveries Across the Life Sciences (2013)

Margaret Gardel, “Moving beyond molecular mechanisms,” Journal of Cell Biology (2015)

Daniel J. Nicholson, “The concept of mechanism in biology,” Studies in History and Philosophy of Biological and Biomedical Sciences (2012)

Rob Phillips, “Musings on mechanism: quest for a quark theory of proteins?” Journal of the Federation of American Societies for Experimental Biology (2017)

James Tabery, Monika Piotrowska, and Lindley Darden, “Molecular Biology,” Stanford Encyclopedia of Philosophy (2015)

The two cultures of statistical modeling

Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning” (2011):

At the Brains, Minds, and Machines symposium held during MIT’s 150th birthday party, Technology Review reports that Prof. Noam Chomsky

derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior.

The transcript is now available, so let’s quote Chomsky himself:

It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

This essay discusses what Chomsky said, speculates on what he might have meant, and tries to determine the truth and importance of his claims. Chomsky’s remarks were in response to Steven Pinker’s question about the success of probabilistic models trained with statistical methods.

  1. What did Chomsky mean, and is he right?
  2. What is a statistical model?
  3. How successful are statistical language models?
  4. Is there anything like their notion of success in the history of science?
  5. What doesn’t Chomsky like about statistical models?
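
The kind of model at issue can be shown in a few lines. Here is a minimal sketch (my own toy, not Norvig's code; the corpus is invented) of the purely statistical approach Chomsky objects to: a bigram model that assigns conditional probabilities to word sequences from nothing but counts, with no grammar or meaning anywhere in it.

```python
# A toy of the statistical culture: estimate P(w2 | w1) from raw counts.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which, with sentence boundary markers."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for w1, w2 in zip(words, words[1:]):
            follows[w1][w2] += 1
    return follows

def prob(follows, w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1); 0 if w1 is unseen."""
    total = sum(follows[w1].values())
    return follows[w1][w2] / total if total else 0.0

corpus = ["the dog barks", "the cat sleeps", "the dog sleeps"]
model = train_bigram(corpus)

# "dog" follows "the" in two of the three sentences:
assert abs(prob(model, "the", "dog") - 2 / 3) < 1e-9
# No analysis, only frequencies: unattested pairs simply get zero.
assert prob(model, "dog", "the") == 0.0
```

Whether "approximating unanalyzed data" this way is a notion of success novel in the history of science is precisely what Norvig's essay goes on to dispute.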

The abstract of Leo Breiman, “Statistical Modeling: The Two Cultures” in Statistical Science (2001):

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
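
Breiman's contrast can be made concrete in a small, standard-library-only sketch (my own illustration, not from the paper): a "data culture" model that assumes a linear stochastic mechanism and estimates its two parameters, against an "algorithmic culture" model that treats the mechanism as unknown and simply predicts from the nearest observed point.

```python
# Data culture:        assume y = a + b*x + noise; estimate a and b.
# Algorithmic culture: treat the mapping as a black box; predict the y
#                      of the nearest training point (1-nearest-neighbor).

def fit_linear(xs, ys):
    """Closed-form least squares for the data model y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def fit_1nn(xs, ys):
    """Algorithmic model: return the y of the closest training x."""
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# A nonlinear truth the assumed data model cannot capture: y = x**2.
train_x = [i / 10 for i in range(0, 100, 2)]
test_x  = [i / 10 for i in range(1, 100, 2)]
train_y = [x * x for x in train_x]
test_y  = [x * x for x in test_x]

linear = fit_linear(train_x, train_y)
nearest = fit_1nn(train_x, train_y)

# On held-out points, the algorithmic model tracks the unknown
# mechanism far more closely than the misspecified data model.
assert mse(nearest, test_x, test_y) < mse(linear, test_x, test_y)
```

The point of the toy is Breiman's, not a general verdict: when the assumed stochastic model is wrong, its conclusions suffer, while the algorithmic model never claimed to know the mechanism in the first place.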