In the last post, considerations related to partial functions led us to present a logic without truth and implication, using the binary minus operation as a dual of implication and as a substitute for unary negation. But a logic without implication and equivalence is strange. So in this post we want to omit negation and falsehood instead.
(This post got surprisingly long. Sometimes I got slightly sidetracked; for example, the discussion of the parity function and its sequent rules could have been omitted. But I already tried to remove most of those sidetracks. The real reason for the size explosion is the examples of axiomatic theories in predicate logic. But those had to be discussed, since they make it clear that one can really omit falsehood without losing anything essential.)
The classical sequent calculus
Before starting to omit logical operations, let us first recall the classical sequent calculus, including as many logical operations as is barely reasonable. If we limit ourselves to constants, unary operations, and binary operations, then we get:
- Constants for truth (⊤) and falsehood (⊥)
- A unary operation for negation (¬)
- Binary operations for: or (∨), and (∧), implication (→), minus (−), nand (⊼), nor (⊽), equivalence (↔), and xor (⊕). The remaining binary operations add nothing new: reverse implication, reverse minus, first, second, not first, not second, true, and false.
We work with sequents Γ ⊢ Δ, and interpret the propositions A ∈ Γ (and B ∈ Δ) as subsets of some universe set X. We interpret the sequent itself as ⋂_{A∈Γ} A ⊆ ⋃_{B∈Δ} B. Let Γ and Δ stand for arbitrary finite sequences of propositions.
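This set interpretation is easy to make concrete. Here is a small Python sketch (the particular universe and subsets are just an illustrative assumption of mine, not from the post):

```python
def sequent_holds(gamma, delta, universe):
    """Check the sequent Gamma |- Delta under the set interpretation:
    the intersection of the antecedent sets is contained in the union
    of the succedent sets (relative to the given universe)."""
    inter = set(universe)
    for a in gamma:
        inter &= a
    union = set()
    for b in delta:
        union |= b
    return inter <= union

# Illustrative universe and propositions (my own choice of example):
X = {1, 2, 3, 4}
A = {1, 2}
B = {2, 3}

print(sequent_holds([A, B], [A], X))   # the sequent A, B |- A is valid
print(sequent_holds([], [A, B], X))    # |- A, B is not: X is not a subset of A | B
```

Note that an empty antecedent yields the whole universe as intersection, and an empty succedent yields the empty set as union, matching the usual reading of empty sequent sides.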
[Table: left and right structural rules]
[Table: left and right logical rules]
The rules for equivalence were taken from here, where they are also proved. Observe that (A ↔ B) ↔ C = A ⊕ B ⊕ C. Clearly ⊕ is the parity function, which is famous for its computational complexity. It is easy to derive the rules for it (from the rules given above), but those rules have exponentially many sequents above the line.
The rules for ∨ and ∧ are obvious. The rules for ⊼ and ⊽ are also obvious, if we agree that A ⊼ B ⊼ C should be read as ¬(A ∧ B ∧ C) and A ⊽ B ⊽ C as ¬(A ∨ B ∨ C). Then A ⊼ B ⊼ C = (A ∧ B) ⊼ C and A ⊽ B ⊽ C = (A ∨ B) ⊽ C, so there is no need for multi-argument versions of nand and nor.
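All of these identities can be verified mechanically by truth table; here is a quick Python check (the encodings of the operations are my own, but the obvious ones):

```python
from itertools import product

def iff(a, b): return a == b          # equivalence
def xor(a, b): return a != b          # exclusive or
def nand(*args): return not all(args) # n-ary nand
def nor(*args): return not any(args)  # n-ary nor

for a, b, c in product([False, True], repeat=3):
    # chained equivalence computes parity: (A <-> B) <-> C = A xor B xor C
    assert iff(iff(a, b), c) == xor(xor(a, b), c)
    # ternary nand/nor reduce to binary nand/nor of a conjunction/disjunction
    assert nand(a, b, c) == nand(a and b, c)
    assert nor(a, b, c) == nor(a or b, c)
print("all identities hold")
```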
Omitting falsehood from classical logic
If we want to omit falsehood, it is not sufficient to just omit falsehood (⊥). The operations minus (−) and xor (⊕) must be omitted too, since ⊥ = A − A = A ⊕ A. Also nand (⊼) and nor (⊽) must be omitted, since ⊥ = (A ⊼ (A ⊼ A)) ⊼ (A ⊼ (A ⊼ A)) and ⊥ = A ⊽ (A ⊽ A). (They are functionally complete, so the only surprise is the length of the formulas.) And if we want to keep truth (⊤) or any of the remaining binary operations (or (∨), and (∧), implication (→), or equivalence (↔)), then also negation (¬) must be omitted, since for example ⊥ = ¬⊤.
But without negation, how can we express the law of excluded middle or double negation? Both A ∨ ¬A and ¬¬A → A use negation. If we look at proof by cases, then (¬A → A) → A or rather its generalization ((A → B) → A) → A suggests itself, which is called Peirce's law. And the sequent calculus suggests (A → B) ∨ C ⊣⊢ A → (B ∨ C), which expresses a nice algebraic property of classical logic. But how do we prove that we get classical logic, and what does that mean?
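Both Peirce's law and the equivalence of (A → B) ∨ C with A → (B ∨ C) are straightforward classical truth-table facts, which a few lines of Python can confirm:

```python
from itertools import product

def implies(a, b): return (not a) or b  # the material conditional

for a, b, c in product([False, True], repeat=3):
    # Peirce's law ((A -> B) -> A) -> A is a classical tautology
    assert implies(implies(implies(a, b), a), a)
    # classically, (A -> B) v C is equivalent to A -> (B v C)
    assert (implies(a, b) or c) == implies(a, b or c)
print("ok")
```

Neither fact holds intuitionistically, which is exactly why they can serve as negation-free markers of classical logic.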
At least classical tautologies like ((A → B) → A) → A not using any of the omitted operations should still be tautologies. If “still be tautologies” is interpreted as being provable in the classical sequent calculus, then this follows from cut elimination. The cut elimination theorem says that the rule (Cut) can be avoided or removed from proofs in the sequent calculus. Since for all other rules, the formulas above the line are subformulas of the formulas below the line, only operations occurring in the sequent to be proved can occur in a cut-free proof.
However, my intended interpretation of “still be tautologies” is being provable in the natural deduction calculus. But let us not dive into the natural deduction calculus here, since this post will get long anyway. We still need to clarify what it means that we get classical logic. The basic idea is that logical expressions involving falsehood can be translated into expressions avoiding falsehood, which still mean essentially the same thing. But what exactly can be achieved by such a translation? Time for a step back.
Boolean functions of arbitrary arity
A Boolean function is a function of the form f : Bⁿ → B, where B = {0, 1} is a Boolean domain and n is a non-negative integer called the arity of the function. Any Boolean function can be represented by a formula in classical logic. If we omit falsehood, a Boolean function f with f(1, …, 1) = 0 certainly cannot be represented, for otherwise our goal to remove falsehood would have failed. The question is whether we can represent all Boolean functions with f(1, …, 1) = 1, and whether the complexity of this representation is still comparable to the complexity of the representation where falsehood is available.
A two-step approach allows us to convert a given representation into one where falsehood is avoided. The first step is to replace ¬A by A → ⊥, A ⊕ B by (A ↔ B) → ⊥, A − B by (A → B) → ⊥, A ⊼ B by (A ∧ B) → ⊥, and A ⊽ B by (A ∨ B) → ⊥. This step increases the size of the representation only by a small constant factor. The second step is to replace ⊥ by the conjunction of all input variables. This step increases the size of the representation by a factor n in the worst case. It works, since at least one of the input variables is 0 if f is evaluated at an argument different from (1, …, 1); and at (1, …, 1) itself the remaining operations all evaluate to 1 anyway, matching f(1, …, 1) = 1.
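The two-step translation can be sketched in Python. The tuple-based formula encoding below is a hypothetical mini-language invented just for this illustration:

```python
from itertools import product

# Formulas as nested tuples: ('var', i), ('bot',), ('not', f), ('and', f, g),
# ('or', f, g), ('imp', f, g).

def evaluate(f, env):
    op = f[0]
    if op == 'var': return env[f[1]]
    if op == 'bot': return False
    if op == 'not': return not evaluate(f[1], env)
    if op == 'and': return evaluate(f[1], env) and evaluate(f[2], env)
    if op == 'or':  return evaluate(f[1], env) or evaluate(f[2], env)
    if op == 'imp': return (not evaluate(f[1], env)) or evaluate(f[2], env)

def translate(f, n):
    """Step 1: rewrite 'not A' as 'A -> bot'; step 2: replace 'bot' by
    the conjunction of all n input variables."""
    op = f[0]
    if op == 'var':
        return f
    if op == 'bot':
        g = ('var', 0)
        for i in range(1, n):
            g = ('and', g, ('var', i))
        return g
    if op == 'not':
        return ('imp', translate(f[1], n), translate(('bot',), n))
    return (op, translate(f[1], n), translate(f[2], n))

# f(A, B) = A or not B satisfies f(1, 1) = 1, so the translation preserves it:
f = ('or', ('var', 0), ('not', ('var', 1)))
g = translate(f, 2)
assert all(evaluate(f, env) == evaluate(g, env)
           for env in product([False, True], repeat=2))
print("translation preserves the represented function")
```

The sketch only handles the operations needed for the example; extending it to ⊕, −, ⊼, and ⊽ follows the same pattern.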
If one can define a new variable, then the factor n can be avoided by defining F := A₁ ∧ … ∧ Aₙ. However, the underlying idea is to have a logical constant F together with the axioms F → Aᵢ. This is equivalent to the single axiom (F → A₁) ∧ … ∧ (F → Aₙ), which simplifies to F → A₁ ∧ … ∧ Aₙ.
Interpretation of logic without falsehood
When I initially wondered about omitting negation and falsehood from classical logic, I didn’t worry too much about “what it means that we get classical logic” and said
For a specific formula, falsehood gets replaced by the conjunction of all relevant logical expressions.
Noah Schweber disagreed with my opinion that “you can omit negation and falsehood, and still get essentially the same classical logic” and said
First, I would like to strongly disagree with the third sentence of your recent comment – just because (basically) the same proof system is complete for a restricted logical language when restricted appropriately, doesn’t mean that that restricted logic is in any way similar to what you started with.
So I replaced my harmless initial words by
One approach might be to translate ⊥ as F, where F is a free propositional variable. It should be possible to show that this works, but if not then we just keep falsehood in the language.
However, these new words are problematic, since ⊥ → A is a classical tautology, but F → A is not. My initial words were not ideal either, since the meaning of “the conjunction of all relevant logical expressions” remained unclear.
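The asymmetry is a one-line truth-table observation:

```python
def implies(a, b): return (not a) or b

# bot -> A holds for every value of A ...
assert all(implies(False, a) for a in [False, True])
# ... but F -> A, with F a free propositional variable, has a falsifying
# assignment (F = True, A = False), so it is not a tautology
assert not all(implies(f, a) for f in [False, True] for a in [False, True])
print("ok")
```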
The intuition behind my initial words was that falsehood is just the bottom element of the partial order from the Boolean algebra. Removing the explicit symbol (name) for the bottom element from the language should be harmless, since the bottom element can still be described as the greatest lower bound of “some relevant elements”. (However, removing the symbol does affect the allowed homomorphisms, since the bottom element is no longer forced to be mapped to another bottom element.)
The intuition behind my new words was an unlucky mix up of ideas from Curry’s paradox and the idea to use a free variable as a substitute for an arbitrary symbol acting as a potentially context dependent replacement for falsehood. We will come back to those ideas in the section on axiomatic theories in predicate logic without falsehood.
What are the “relevant logical expressions” for the conjunction mentioned above? Are A ∨ B, A ∧ B, or ∀x∀y (x = y) relevant logical expressions? The first two are not, since A ∧ B ⊢ A ∨ B and A ∧ B ⊢ A ∧ B, i.e. the propositional variables themselves are already sufficient. The third one is relevant, at least for predicate logic with equality. For propositional logic, it is sufficient to replace falsehood by the conjunction of all proposition variables occurring in the formula (and the assumptions of its proof). For predicate logic, it is sufficient to replace falsehood by the conjunction of the universally quantified predicate symbols, including the equality predicate as seen above (if present).
Axiomatic theories not using negation or falsehood
If the axioms or axiom schemes of an axiomatic theory (in classical first order predicate logic) don’t use negation or falsehood or any of the other operations we had to remove together with falsehood, then we don’t need to explicitly remove falsehood.
Let us look at the theory of groups as one example. If we denote the group multiplication by ∘ and the unit by e, then the axioms are
- ∀x∀y∀z ((x ∘ y) ∘ z = x ∘ (y ∘ z))
- ∀x (e ∘ x = x)
- ∀x∃y (y ∘ x = e)
We see that neither falsehood nor negation occurs in those axioms. This is actually a bit of a cheat, since it would seem that implication doesn't occur either. It actually does occur in the axioms governing equality. Here are some axioms for equality:
- ∀x (x = x)
- ∀x∀y (x = y → y = x)
- ∀x∀y∀z (x = y → (y = z → x = z))
They only use implication, but something like the axiom schema
∀x∀y (x = y → (φ(x) → φ(y)))
is still missing. Here φ is any formula, and may contain more free variables in addition to x and y. This doesn't work for us, since φ could also be a formula which uses negation or falsehood! But at least for the theory of groups, the following two axioms (derived from instances of the axiom scheme) should be sufficient.
- ∀x∀y∀z (x = y → x ∘ z = y ∘ z)
- ∀x∀y∀z (x = y → z ∘ x = z ∘ y)
Here is a typical theorem of the theory of groups that should now be provable: ∀x (x ∘ e = x).
Here is an informal proof: Take y with y ∘ x = e. Since y ∘ x = e and the congruence axiom for multiplication on the right, we have x ∘ e = x ∘ (y ∘ x). Since x ∘ (y ∘ x) = (x ∘ y) ∘ x and transitivity of equality, we have x ∘ e = (x ∘ y) ∘ x. Since e ∘ x = x and transitivity of equality, we have x ∘ e = x, provided that (x ∘ y) ∘ x = e ∘ x. If we could show x ∘ y = e (this was omitted in the axioms given above), then the proof would be complete.
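As a sanity check of the underlying group-theoretic fact (a left unit together with left inverses and associativity already forces a right unit), a brute-force Python search over all binary operations on up to three elements finds no counterexample; the encoding below is my own illustration:

```python
from itertools import product

def tables(n):
    """All binary operations on {0, ..., n-1}, as dicts from pairs to values."""
    cells = list(product(range(n), repeat=2))
    for values in product(range(n), repeat=len(cells)):
        yield dict(zip(cells, values))

def is_counterexample(op, n, e=0):
    """Associative, with e = 0 as left unit and a left inverse for every
    element, yet e fails to be a right unit?"""
    if any(op[(op[(x, y)], z)] != op[(x, op[(y, z)])]
           for x, y, z in product(range(n), repeat=3)):
        return False  # not associative
    if any(op[(e, x)] != x for x in range(n)):
        return False  # e is not a left unit
    if any(all(op[(y, x)] != e for y in range(n)) for x in range(n)):
        return False  # some element has no left inverse
    return any(op[(x, e)] != x for x in range(n))

# No finite counterexample on 1, 2 or 3 elements:
assert not any(is_counterexample(op, n)
               for n in (1, 2, 3) for op in tables(n))
print("left unit + left inverse + associativity force a right unit")
```

Of course this finite search is no substitute for the formal derivation; it merely illustrates that the theorem is not sensitive to the missing axiom instances.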
Axiomatic theories in predicate logic without falsehood
Let us next look at Robinson arithmetic, which is essentially Peano arithmetic without the induction axiom schema. Its axioms are
- Sx = 0 → ⊥
- Sx = Sy → x = y
- x = 0 ∨ ∃y (x = Sy)
- x + 0 = x
- x + Sy = S(x + y)
- x · 0 = 0
- x · Sy = x · y + x
Here are the additional axioms for equality.
- x = x
- x = y → y = x
- x = y → (y = z → x = z)
- x = y → Sx = Sy
- x = y → x + z = y + z
- x = y → z + x = z + y
- x = y → x · z = y · z
- x = y → z · x = z · y
Universal quantification has been omitted, but that should not worry us here. We see that the first axiom is the only one using falsehood. We could replace it by
- Sx = 0 → y = z
Or we could also introduce a constant F and replace the first axiom by the axioms
- Sx = 0 → F
- F → y = z
The second approach has the advantage that if we add the induction axiom schema to get Peano arithmetic, we can just replace falsehood by the constant F. The first approach has the advantage that it shows that falsehood was never really required for the induction axiom schema.
The second approach comes closer to my motivations to remove falsehood from the language. I wanted to avoid negation and falsehood, because if there is no (bottom) element connecting everything together, then the damage caused by the principle of explosion can be restricted to specific domains of discourse. I also had the idea to use more specific propositional constants instead of falsehood. For ZFC set theory, one could use one propositional constant for the axiom asserting the existence of the empty set, and another propositional constant for the axiom asserting regularity. However, as soon as one adds the axioms governing those two constants, both constants will probably turn out to be equivalent.
Such a long post, for such a trivial thing as removing falsehood from the language. At least it should be clear now that one can really remove falsehood if desired, and what it means that one doesn't lose anything essential. But is there anything to be gained by removing falsehood? The initial idea to remove falsehood came from the rule (A → B) ∨ C ⊣⊢ A → (B ∨ C) (suggested by the sequent calculus), which characterises classical logic by a nice algebraic property (not involving negation or falsehood). The motivation to finish this post came from the realisation that implication is the central part of logic. I just read this recent post by Peter Smith on the material conditional, and realised that one really gains something by removing falsehood: the counterintuitive material conditional goes away, and implication becomes the first-class citizen it should have always been.
The counterintuitive part didn't really disappear. The material conditional decomposes into the two sequents ⊢ A → B, A and B ⊢ A → B. The proof sketch for those is to derive ⊢ A → B, A from A ⊢ B, A, and to derive B ⊢ A → B from A, B ⊢ B. The proof of the second doesn't even use the special properties of classical logic, but this part is the counterintuitive one: Why should A → B be true, if B is true? Well, the intuitive everyday logic is more a sort of modal logic than a propositional or predicate logic.
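Semantically, both halves of this decomposition are easy to confirm by truth table (again with my own encoding of the material conditional):

```python
from itertools import product

def implies(a, b): return (not a) or b  # the material conditional

for a, b in product([False, True], repeat=2):
    # sequent |- A -> B, A : at least one formula on the right is true
    assert implies(a, b) or a
    # sequent B |- A -> B : the left side entails the right side
    assert (not b) or implies(a, b)
print("both sequents are valid")
```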