The most amazing fact

A charming discussion of what should be called the fundamental theorem of computation theory, in Epstein and Carnielli, Computability: Computable Functions, Logic, and the Foundations of Mathematics (2008):

We have studied one formalization of the notion of computability. In succeeding chapters we will study two more: recursive functions and functions representable in a formal system.

The Most Amazing Fact
All the attempts at formalizing the intuitive notion of computable function yield exactly the same class of functions.

So if a function is Turing machine computable, it can also be computed in any of the other systems described in Chapter 8.E. This is a mathematical fact which requires a proof. […] Odifreddi, 1989 establishes all the equivalences. […]

The Most Amazing Fact is stated about an extensional class of functions, but it can be stated constructively: Any computation procedure for any of the attempts at formalizing the intuitive notion of computable function can be translated into any other formalization in such a way that the two formalizations have the same outputs for the same inputs.
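
To make this constructive statement concrete, here is a minimal sketch in Python (my illustration, not the book's; all names are made up): addition written once as a lambda-definable term over Church numerals and once in recursive-function style, with a pair of translations showing that the two agree on the same inputs.

```python
# Addition in two formalizations of computability, plus translations
# between them (all names here are illustrative).

# Lambda-definability: a Church numeral encodes n as the function that
# applies its argument f exactly n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_church(k):
    """Translate an ordinary integer into a Church numeral."""
    n = ZERO
    for _ in range(k):
        n = SUCC(n)
    return n

def from_church(n):
    """Translate a Church numeral back into an ordinary integer."""
    return n(lambda k: k + 1)(0)

# Recursive-function style: addition by primitive recursion on the
# second argument: add(m, 0) = m and add(m, n + 1) = succ(add(m, n)).
def add_rec(m, n):
    return m if n == 0 else add_rec(m, n - 1) + 1

# The two formalizations yield the same outputs for the same inputs.
for m in range(5):
    for n in range(5):
        assert from_church(ADD(to_church(m))(to_church(n))) == add_rec(m, n)
```

The to_church/from_church pair is a tiny instance of exactly the kind of translation the constructive statement asserts must always exist.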

In 1936, even before these equivalences were established, Church said,

We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or of a lambda-definable function of positive integers). This definition is thought to be justified by the considerations which follow, so far as positive justification can ever be obtained for the selection of a formal definition to correspond to an intuitive notion.

So we have

Church’s Thesis: A function is computable iff it is lambda-definable.

This is a nonmathematical thesis: it equates an intuitive notion (computability) with a precise, formal one (lambda-definability). By our amazing fact this thesis is equivalent to

A function is computable iff it is Turing machine computable.

Turing devised his machines in a conscious attempt to capture in simplest terms what computability is. That his model turned out to give the same class of functions as Church’s (as established by Turing in the paper cited above) was strong evidence that it was the “right” class. Later we will consider some criticisms of Church’s Thesis to the effect that the notion of computability should coincide with either a larger or a smaller class than the Turing machine computable ones.
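
To give “Turing machine computable” similarly concrete shape, here is a minimal one-tape Turing machine simulator, a sketch under my own conventions rather than Turing’s original formulation, shown computing the successor function on unary numerals.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) to (new_state, write, move), where
    move is -1 (left) or +1 (right); the machine halts when no rule applies.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    low, high = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(low, high + 1)).strip(blank)

# Successor on unary numerals: scan right past the 1s, append one more 1.
succ_rules = {
    ("start", "1"): ("start", "1", +1),  # move right over the input
    ("start", "_"): ("halt", "1", +1),   # write a final 1 and stop
}

assert run_tm(succ_rules, "111") == "1111"  # succ(3) = 4
```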

Mechanism in biology

William Bechtel, Discovering Cell Mechanisms: The Creation of Modern Cell Biology (2006)

William Bechtel, “Mechanism and Biological Explanation,” Philosophy of Science (2011)

William Bechtel, “Biological Mechanisms: organized to maintain autonomy,” in Systems Biology: Philosophical Foundations (2007)

Carl Craver and James Tabery, “Mechanisms in Science,” Stanford Encyclopedia of Philosophy (2015)

Carl F. Craver and Lindley Darden, In Search of Mechanisms: Discoveries Across the Life Sciences (2013)

Margaret Gardel, “Moving beyond molecular mechanisms,” Journal of Cell Biology (2015)

Daniel J. Nicholson, “The concept of mechanism in biology,” Studies in History and Philosophy of Biological and Biomedical Sciences (2012)

Rob Phillips, “Musings on mechanism: quest for a quark theory of proteins?” The FASEB Journal (2017)

James Tabery, Monika Piotrowska, and Lindley Darden, “Molecular Biology,” Stanford Encyclopedia of Philosophy (2015)

The two cultures of statistical modeling

Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning” (2011):

At the Brains, Minds, and Machines symposium held during MIT’s 150th birthday party, Technology Review reports that Prof. Noam Chomsky

derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior.

The transcript is now available, so let’s quote Chomsky himself:

It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

This essay discusses what Chomsky said, speculates on what he might have meant, and tries to determine the truth and importance of his claims. Chomsky’s remarks were in response to Steven Pinker’s question about the success of probabilistic models trained with statistical methods. The essay takes up five questions:

  1. What did Chomsky mean, and is he right?
  2. What is a statistical model?
  3. How successful are statistical language models?
  4. Is there anything like their notion of success in the history of science?
  5. What doesn’t Chomsky like about statistical models?
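
As a concrete anchor for the second question, here is a minimal sketch of the kind of model at issue, a bigram language model whose probabilities come entirely from counting a corpus (a toy illustration, not code from the essay):

```python
from collections import Counter

# A toy corpus; the models under discussion are trained on billions of words.
corpus = "the dog barks . the cat meows . the dog runs .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def prob(sentence):
    """P(w1 .. wn) estimated as the product of P(w_i | w_{i-1}) from counts."""
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigrams[(prev, word)] / unigrams[prev]
    return p

# The model prefers word orders for which the corpus gives evidence.
assert prob("the dog barks") > prob("dog the barks")
```

Real language models smooth these counts so that unseen word pairs keep a small nonzero probability, and they train on enormously larger corpora, but the notion of success, fit to observed data, is the one Chomsky is objecting to.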

The abstract of Leo Breiman’s “Statistical Modeling: The Two Cultures,” Statistical Science (2001):

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.
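
Breiman’s two cultures are easy to see side by side. Here is a minimal sketch, assuming scikit-learn is available (the example is mine, not Breiman’s): the data-modeling culture posits a stochastic form such as a linear model and interprets its coefficients; the algorithmic culture fits a black-box predictor and judges it only by held-out accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-6, 6, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)  # a nonlinear mechanism

X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

# Data-modeling culture: assume y = b0 + b1 * x + noise, then interpret b1.
data_model = LinearRegression().fit(X_train, y_train)

# Algorithmic culture: treat the data mechanism as unknown, fit a black box,
# and validate it only by predictive accuracy on held-out data.
algo_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

print("linear model R^2:", data_model.score(X_test, y_test))
print("random forest R^2:", algo_model.score(X_test, y_test))
```

On data whose generating mechanism is nonlinear, the assumed linear form explains little of the variance while the black box tracks the function closely, which is the gap Breiman is pointing at.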