Untyped lambda calculus is inconsistent - General


Pedro Minicz

What does it mean to say that the untyped lambda calculus is inconsistent?

A bit of context: I've been intrigued by the Metamath proof assistant for a while now. It is basically a "substitution checker": the user writes a series of substitution steps and Metamath makes sure they match. Basically the user carries out the computations by hand, as if the user were writing a series of alpha/beta reduction steps and the computer were checking that they are valid. This equivalence between logic and computation is fantastic, and weird.
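To make that concrete, here is a minimal Haskell sketch of that kind of checker (nothing to do with Metamath's actual implementation; names are illustrative, and de Bruijn indices sidestep alpha-conversion): the user supplies a list of terms, and the machine only checks that each one follows from the previous by a single beta-reduction step.

```haskell
{-# LANGUAGE LambdaCase #-}

-- Lambda terms with de Bruijn indices (no variable names, no alpha issues).
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free indices >= cutoff c by d.
shift :: Int -> Int -> Term -> Term
shift d c = \case
  Var m | m >= c    -> Var (m + d)
        | otherwise -> Var m
  Lam b             -> Lam (shift d (c + 1) b)
  App f a           -> App (shift d c f) (shift d c a)

-- Substitute s for index n, decrementing indices above it
-- (the binder that bound n has just been consumed by a beta step).
subst :: Int -> Term -> Term -> Term
subst n s = \case
  Var m | m == n    -> s
        | m > n     -> Var (m - 1)
        | otherwise -> Var m
  Lam b             -> Lam (subst (n + 1) (shift 1 0 s) b)
  App f a           -> App (subst n s f) (subst n s a)

-- All terms reachable from t in exactly one beta step.
step :: Term -> [Term]
step t = case t of
  App (Lam b) a -> subst 0 a b : congruences
  _             -> congruences
  where
    congruences = case t of
      App f a -> [App f' a | f' <- step f] ++ [App f a' | a' <- step a]
      Lam b   -> [Lam b' | b' <- step b]
      _       -> []

-- The "proof checker": every consecutive pair must be one valid step,
-- e.g. checkDerivation [App (Lam (Var 0)) (Lam (Var 0)), Lam (Var 0)] == True.
checkDerivation :: [Term] -> Bool
checkDerivation ts = and (zipWith (\u v -> v `elem` step u) ts (drop 1 ts))
```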

Pedro Minicz

I suspect it has to do with terms that don't normalize, such as Ω = (λx. x x) (λx. x x). If that is the case, what is inconsistent about non-terminating programs? (I can see why that would be inconsistent in a language where types encode propositions, because an infinite loop inhabits any type, but I don't see how it would make a language inconsistent where the terms themselves encode propositions, which is how I assume logic would be encoded in the untyped lambda calculus.)
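(Incidentally, Ω is easy to exhibit even in a typed language: a minimal Haskell sketch, with illustrative names, where a recursive newtype makes self-application typeable and forcing omega loops forever.)

```haskell
-- A value of U is a function from U to U, so "x x" becomes typeable.
newtype U = U (U -> U)

-- λx. x x, with the newtype wrapper peeled off before applying.
selfApply :: U -> U
selfApply u@(U f) = f u

-- Ω = (λx. x x) (λx. x x): evaluating this never reaches a result.
omega :: U
omega = selfApply (U selfApply)
```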

Pedro Minicz

Also, suppose we have an oracle that can tell whether a lambda term normalizes (modulo some reduction strategy). Would the set of all normalizing untyped lambda terms be consistent?

Georgi Lyubenov // googleson78

I'm almost ready to bet this inconsistency is described in Barendregt's untyped lc book

Georgi Lyubenov // googleson78

"The Lambda Calculus: Its Syntax and Semantics"

Georgi Lyubenov // googleson78

in there, iirc, lambda calculus is defined as a formal system with some rules, like beta-reduction, alpha-reduction, etc., and then, using these rules, you can prove any term you like
and I guess this is what is described in the Wikipedia article I linked that's on Curry's paradox

Gabriel Lebec

Who says untyped LC is inconsistent? Different people mean different things by consistency, apparently.

Gabriel Lebec

Another early problem in the λ-calculus was whether it is consistent at all. In this context, inconsistency means that all terms are equal: one can reduce any λ-term M to any other λ-term N. That this is not the case is an early result of λ-calculus.

Gabriel Lebec

The Church-Rosser theorem gives us, among other things, that the plain λ-calculus—that is, the theory of equations between λ-terms—is consistent, in the sense that not all equations are derivable.

Gabriel Lebec

It sounds to me like untyped LC is consistent.

Gabriel Lebec

The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.[8][9]

Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus.[10] In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.[11]

Gabriel Lebec

Hm. This is ambiguous Wikipedia-speak; it is not clear whether STLC is consistent but ULC isn't, or whether both ULC and STLC are…

Gabriel Lebec

Hard to pin down from informal wikipedia entries…

Pedro Minicz

I am pretty sure I've heard it in many places, usually not explained at all, just "ULC is inconsistent" before proceeding to talk about STLC. I believe Barendregt's Lambda Calculi with Types also makes such a claim.

Gabriel Lebec

The intro to that paper says LC is consistent, and that it was only an earlier version of the system, which dealt with more than just functions, that was inconsistent

Gabriel Lebec

I guess it doesn't quite say that

Gabriel Lebec

just that "the subsystem dealing with functions only became a successful model"

Reed Mullanix

Consistency means a lot of different things to different people, sadly. It's like the term "normal"

Pedro Minicz

Gabriel Lebec said:

(screenshot)

Indeed, "consistency" depends on the situation. I'd interpret LC as "consistent for modeling" computations (the definition of "consistency" could be "can compute anything Turing machines can") and "inconsistent for logic" as in "bad for trying to encode propositions and reason about them".

Pedro Minicz

Just a thought I had; I'll leave it here in case it interests anyone. If LC is ("logically" or "ultimately") inconsistent, then Turing machines are inconsistent, and then the concept of computation itself is inconsistent. If computation is inconsistent, then so are proof assistants and the process of elaborating proofs on paper. Therefore, LC cannot be ultimately inconsistent, just very, very inconvenient to use for formal logic, in the sense that all simple encodings fail (you could implement Coq or Agda or any other proof assistant on top of LC and do your proofs using it).

Nick Scheel

You’re mixing up usages of inconsistency. Lambda calculus and Turing machines are both perfectly fine models of computation; there’s no dispute there. They both allow for potentially nonterminating computations, which is desirable for some applications. But untyped lambda calculus is horrible as a language for math/logic in two ways:

At the level of propositions (think: types), lambda calculus cannot be used as a language for expressing propositions, because every proposition would then be derivable (inconsistency). Note that this is shown by adding logical symbols (like implication, conjunction, etc.) to the lambda calculus to represent logical propositions, and then using standard deductive inference rules to derive the inconsistency. (This is Curry’s Paradox.)

At the level of proofs of propositions (think: terms of types), untyped lambda calculus is still inconsistent. In particular, if you try to assign types to terms in naïve ways that still let in fixpoint operators, you quickly find that you can produce a proof of any proposition (a term of any type). This is the kind of inconsistency that is usually meant when talking about this kind of thing. (Haskell is logically inconsistent for the same reason: every type has an inhabitant.) There are probably deeper/stronger results lying around about the relationship between nonterminating computation and trivial proof systems (ones that don’t involve fixpoint operators), but I don’t know them off the top of my head.

(Note that in dependent type theories, lambda calculus is used to represent both types and terms, propositions and proofs. So it’s doubly bad if the types aren’t strong enough to prevent these inconsistencies.)

Anyways. The cause of both of these issues is the unlimited recursion allowed by naïve fixpoint operators. The problem thus becomes how to allow certain fixpoints while remaining consistent, in the senses outlined above. And it turns out this requires computational termination of the fixpoints, and since termination is undecidable, there is no single unique solution.
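Here is a minimal Haskell sketch of the second point above (names are illustrative): with an unrestricted fixpoint, or a negative recursive type, every type gets an inhabitant, so read as logic every proposition gets a "proof".

```haskell
import Data.Function (fix)

-- Route 1: plain nontermination. `fix id` typechecks at every type,
-- so as a proof term it "proves" every proposition.
anything :: a
anything = fix id

-- Route 2: a negative recursive type, the typed shadow of Curry's paradox.
newtype Bad a = Bad (Bad a -> a)

paradox :: Bad a -> a
paradox b@(Bad f) = f b           -- apply the wrapped function to itself

alsoAnything :: a                 -- again an inhabitant of every type
alsoAnything = paradox (Bad paradox)
```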

I hope this Wikipedia article will clear up further confusion you may have: https://en.wikipedia.org/wiki/Deductive_lambda_calculus

Logic as a field is full of disappointing results and many limitations, but it is also very rewarding once you understand more of the big picture of how things fit together.

Nick Scheel

P.S. We don’t actually know that ZFC set theory is consistent. We suspect it is, and we know many (type) theories that are equiconsistent with it, but we cannot prove that ZFC is consistent from its own axioms (this is Gödel’s second incompleteness theorem).

Cody Roux

The original intent was this: the λ-calculus is simple, provides a binding construct, and gives the ability to encode and compute over various datatypes, particularly booleans.

Because of this, Church (and others) came up with a powerful and simple foundational system, based on the usual encoding of the booleans (⊤ = λx y. x, ⊥ = λx y. y), where every proposition could be expressed as an equality between lambda terms. For example, the proposition ∀x, P(x) would simply be encoded as λx. P(x), and its truth would simply be the equality λx. P(x) = λx. ⊤.
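That boolean encoding transcribes directly into Haskell; a minimal sketch with illustrative names (RankNTypes is needed because the Church booleans are polymorphic):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church boolean picks one of its two arguments.
type CBool = forall a. a -> a -> a

true, false :: CBool
true  x _ = x    -- ⊤ = λx y. x
false _ y = y    -- ⊥ = λx y. y

-- Negation just swaps the branches: ¬b = λx y. b y x.
cnot :: CBool -> CBool
cnot b x y = b y x

-- Sanity check: cnot true 1 2 == 2, and cnot false 1 2 == 1.
```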

Unfortunately, it is pretty easy to build a contradiction in this system: simply take C := Y (λp. ¬p). Then we have C = ¬C as a theorem.
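(A minimal Haskell sketch of that construction, with illustrative names, using the built-in Bool rather than the Church encoding to keep the types first-order: the combinator Y f = (λx. f (x x)) (λx. f (x x)) becomes typeable with a recursive newtype, and C = Y ¬ then satisfies C = ¬C by unfolding, and simply diverges when evaluated.)

```haskell
-- The same self-application trick as before, generalized to any result type.
newtype Rec a = Rec (Rec a -> a)

-- Y f = (λx. f (x x)) (λx. f (x x)); unfolding gives y f = f (y f).
y :: (a -> a) -> a
y f = g (Rec g)
  where g r@(Rec h) = f (h r)   -- g plays the role of λx. f (x x)

-- C := Y (λp. ¬p). By the unfolding above, c = not c, i.e. C = ¬C:
-- read as logic it is a contradiction; run as a program it just loops.
c :: Bool
c = y not
```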

To remedy this, Church introduced (simple) types, and the resulting system is still used, e.g. as the basis of the HOL theorem proving system. See e.g. the LOG theory from this document.