The rise and fall of therapeutic rationality

This ProPublica story—not just the spread of disinformation about these drugs, but specifically doctors’ complicity in generating runs and shortages, endangering patients who need them for chronic diseases such as lupus—reminds me of what the physician-historian Scott Podolsky calls a “pyrrhic victory” in the battle over “therapeutic rationality” in his wonderful book The Antibiotic Era: Reform, Resistance, and the Pursuit of a Rational Therapeutics—which anyone interested in the history or philosophy of medical evidence should go read immediately.

Podolsky shows that in the 1970s a powerful backlash from a coalition of doctors and pharmaceutical companies against the FDA’s new power to regulate drugs helped ensure we have no robust, centralized public oversight of prescription practices. (If you’re surprised to see doctors opposing what you think of as the public good, you’ll be even more surprised to read about their opposition to universal health insurance in Paul Starr’s The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry.)

Here’s how Podolsky puts it:

The limits to government encroachment on the prescribing of antibiotics in the United States would be reached with Panalba and the fixed-dose combination antibiotics. While the FDA had been empowered to remove seemingly ‘irrational’ drugs from the marketplace, no one had been empowered to rein in the seemingly inappropriate prescribing of appropriate drugs. The 1970s would witness ongoing professional and government attention given to the increasingly quantified prevalence of ‘irrational’ antibiotic prescribing and its consequences, and such attention would in fact lead to attempts to restrain such prescribing through both educational and regulatory measures. The DESI process, though, had generated a vocal backlash against centralized attempts to further delimit individual antibiotic prescribing behavior in the United States, resulting in generally failed attempts to control prescribing at local, let alone regional or national, levels in the United States.

In the case of antibiotics, the result has been decades of promiscuous prescription, as overuse of antibacterials helped to breed a new generation of antibiotic-resistant “superbugs”—at the very same time that pharmaceutical companies, deciding that these drugs weren’t profitable, stopped trying to develop new ones. We thus have very few antibiotics to take the place of the ones that no longer work, even though isolated voices have been sounding the alarm all along—just as others have regarding pandemics. (Obama’s administration not only put in place a pandemic response team that Trump’s administration dismantled; it also developed a “National Action Plan for Combating Antibiotic-Resistant Bacteria.”) This is maybe the least familiar massive negative market externality of our time. Another result of such promiscuous prescription is much better known: we call it the opioid crisis.

However you view the FDA today—emblem of consumer protection or bureaucratic mismanagement, regulatory capture or government barrier to innovation, success story or failure—there is no question that public oversight of drugs is important and that it is high time to rethink how we regulate prescriptions, too.

The most amazing fact

A charming discussion of what should be called the fundamental theorem of computation theory, in Epstein and Carnielli, Computability: Computable Functions, Logic, and the Foundations of Mathematics (2008):

We have studied one formalization of the notion of computability. In succeeding chapters we will study two more: recursive functions and functions representable in a formal system.

The Most Amazing Fact
All the attempts at formalizing the intuitive notion of computable function yield exactly the same class of functions.

So if a function is Turing machine computable, it can also be computed in any of the other systems described in Chapter 8.E. This is a mathematical fact which requires a proof. […] Odifreddi, 1989 establishes all the equivalences. […]

The Most Amazing Fact is stated about an extensional class of functions, but it can be stated constructively: Any computation procedure for any of the attempts at formalizing the intuitive notion of computable function can be translated into any other formalization in such a way that the two formalizations have the same outputs for the same inputs.

In 1936, even before these equivalences were established, Church said,

We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or of a lambda-definable function of positive integers). This definition is thought to be justified by the considerations which follow, so far as positive justification can ever be obtained for the selection of a formal definition to correspond to an intuitive notion.

So we have

Church’s Thesis: A function is computable iff it is lambda-definable.

This is a nonmathematical thesis: it equates an intuitive notion (computability) with a precise, formal one (lambda-definability). By our amazing fact this thesis is equivalent to

A function is computable iff it is Turing machine computable.

Turing devised his machines in a conscious attempt to capture in simplest terms what computability is. That his model turned out to give the same class of functions as Church’s (as established by Turing in the paper cited above) was strong evidence that it was the “right” class. Later we will consider some criticisms of Church’s Thesis in that the notion of computability should coincide with either a larger or a smaller class than the Turing machine computable ones.
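
The constructive reading of the Most Amazing Fact is easy to mimic in a toy program. Here is a minimal sketch in Python (my own illustration, not the book’s): the successor function computed three ways, as a lambda term over Church numerals, by a tiny single-tape Turing machine on unary numerals, and by a direct definition, with all three agreeing on every input tested. The encodings, state names, and helper names are my own choices.

    # A toy illustration (not from Epstein and Carnielli) of the Most Amazing
    # Fact: the successor function computed in three formalisms -- as a lambda
    # term over Church numerals, by a small Turing machine over unary numerals,
    # and by a direct definition -- with identical outputs on identical inputs.

    # 1. Lambda calculus: Church numerals and the successor term \n.\f.\x. f (n f x)
    def church(n):
        """Encode the natural number n as a Church numeral: f, x |-> f^n(x)."""
        return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

    def unchurch(c):
        """Decode a Church numeral back to a Python int."""
        return c(lambda k: k + 1)(0)

    church_succ = lambda n: lambda f: lambda x: f(n(f)(x))

    # 2. Turing machine: walk right over a unary input of n ones, append one more.
    #    Transition table: (state, symbol) -> (write, move, next_state)
    DELTA = {
        ("scan", "1"): ("1", +1, "scan"),
        ("scan", "_"): ("1", +1, "halt"),
    }

    def tm_succ(n):
        tape = dict(enumerate(["1"] * n))   # unary input; blanks ("_") elsewhere
        head, state = 0, "scan"
        while state != "halt":
            write, move, state = DELTA[(state, tape.get(head, "_"))]
            tape[head] = write
            head += move
        return sum(1 for s in tape.values() if s == "1")

    # 3. Direct definition.
    def succ(n):
        return n + 1

    # The three formalizations yield the same outputs for the same inputs.
    for n in range(50):
        assert unchurch(church_succ(church(n))) == tm_succ(n) == succ(n)

Of course this only spot-checks agreement on finitely many inputs; the equivalences Odifreddi collects are general translations between the formalisms, exactly in the sense of the constructive statement above.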

Mechanism in biology

William Bechtel, Discovering Cell Mechanisms: The Creation of Modern Cell Biology (2006)

William Bechtel, “Mechanism and Biological Explanation,” Philosophy of Science (2011)

William Bechtel, “Biological Mechanisms: organized to maintain autonomy,” in Systems Biology: Philosophical Foundations (2007)

Carl Craver and James Tabery, “Mechanisms in Science,” Stanford Encyclopedia of Philosophy (2015)

Carl F. Craver and Lindley Darden, In Search of Mechanisms: Discoveries Across the Life Sciences (2013)

Margaret Gardel, “Moving beyond molecular mechanisms,” Journal of Cell Biology (2015)

Daniel J. Nicholson, “The concept of mechanism in biology,” Studies in History and Philosophy of Biological and Biomedical Sciences (2012)

Rob Phillips, “Musings on mechanism: quest for a quark theory of proteins?” Journal of the Federation of American Societies for Experimental Biology (2017)

James Tabery, Monika Piotrowska, and Lindley Darden, “Molecular Biology,” Stanford Encyclopedia of Philosophy (2015)

The two cultures of statistical modeling

Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning” (2011):

At the Brains, Minds, and Machines symposium held during MIT’s 150th birthday party, Technology Review reports that Prof. Noam Chomsky

derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior.

The transcript is now available, so let’s quote Chomsky himself:

It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

This essay discusses what Chomsky said, speculates on what he might have meant, and tries to determine the truth and importance of his claims. Chomsky’s remarks were in response to Steven Pinker’s question about the success of probabilistic models trained with statistical methods.

  1. What did Chomsky mean, and is he right?
  2. What is a statistical model?
  3. How successful are statistical language models?
  4. Is there anything like their notion of success in the history of science?
  5. What doesn’t Chomsky like about statistical models?
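
To make concrete the sort of thing Norvig means by a statistical language model (his question 3), here is a toy sketch in Python, with a made-up corpus: a bigram model that assigns conditional probabilities to words purely from counts, with no analysis of meaning, which is roughly the kind of “approximating unanalyzed data” Chomsky objects to.

    # A toy sketch of a "statistical language model": a bigram model that
    # estimates P(next word | current word) purely from counts in a corpus.
    # The corpus here is a placeholder; real models use vastly more data.
    from collections import Counter

    corpus = "the dog chased the cat . the cat chased the mouse .".split()

    bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
    unigrams = Counter(corpus)                   # counts of single words

    def p_next(word, nxt):
        """Maximum-likelihood estimate of P(nxt | word)."""
        return bigrams[(word, nxt)] / unigrams[word]

    # The model "succeeds" by approximating the data: after "the", it prefers
    # whatever actually followed "the" in the corpus.
    print(p_next("the", "cat"))    # 0.5
    print(p_next("the", "dog"))    # 0.25
    print(p_next("the", "ran"))    # 0.0 (unseen bigram)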

The abstract of Leo Breiman, “Statistical Modeling: The Two Cultures” in Statistical Science (2001):

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
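
To see the contrast in miniature, here is a small sketch using scikit-learn (my choice of tools, not Breiman’s): a linear regression stands in for the data-modeling culture, which posits a stochastic model of how the data were generated, and a random forest, Breiman’s own invention, stands in for the algorithmic culture, which treats the data mechanism as unknown and is judged by predictive accuracy on held-out data. The synthetic data and parameter settings are illustrative.

    # A minimal sketch of Breiman's two cultures on synthetic nonlinear data:
    # a "data model" (linear regression) versus an "algorithmic model"
    # (random forest), compared by mean squared error on a held-out set.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(1000, 2))
    y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(scale=0.1, size=1000)  # nonlinear truth

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    data_model = LinearRegression().fit(X_train, y_train)
    algorithmic_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("linear model MSE: ", mean_squared_error(y_test, data_model.predict(X_test)))
    print("random forest MSE:", mean_squared_error(y_test, algorithmic_model.predict(X_test)))

On data like these, where the generating mechanism is nonlinear, the algorithmic model will typically win on predictive accuracy, which is Breiman’s point about the cost of exclusive reliance on data models.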