This course begins with logic. For our purposes, logic is a sort of model for "truth", analogous to how equations can model the flight of objects in physics. Practically speaking, a brief encounter with logic will give you a foundation for parsing, understanding, and finally proving mathematical statements.

> Everybody who has worked in formal logic will confirm that it is one of the technically most refractory parts of mathematics. The reason for this is that it deals with rigid, all-or-none concepts, and has very little contact with the continuous concept of the real or of the complex number, that is, with mathematical analysis. Yet analysis is the technically most successful and best-elaborated part of mathematics. Thus formal logic is, by the nature of its approach, cut off from the best cultivated portions of mathematics, and forced onto the most difficult part of the mathematical terrain, into combinatorics.
>
> — John von Neumann
We will begin with propositional logic and expand from there later on. The basic element of propositional logic is the proposition. Propositions simply model statements that can either be true or false.
Definition 1.1. A proposition is a statement that can be true or false.
Example. The following sentences are propositions:
The following are not propositions:
The upshot is that we'll be very permissive about what counts as a
proposition. As long as we can reasonably interpret a statement as
an assertion about something that can be true or false, it's a proposition.
∎
If propositional logic stopped there, it wouldn't be very interesting. But we can already notice that simply modeling statements as true and false suggests some intuitive relationships. For instance, we don't expect that the first two examples can both be true (since Illinois only has one capital), and the third example is clearly related somehow to the first two.
To begin unraveling these relationships and refining our model for truth, we can classify propositions into two broad types.
Definition 1.2. An atomic proposition is one that cannot be broken down into smaller propositions. A compound proposition is one that can.
Example. In the first example, arguably the following are atomic propositions:
In propositional logic, we represent propositions symbolically by formulas. Formulas are made up of propositional variables, parentheses, and connectives. Propositional variables, usually lowercase roman letters like $p,q,r,\dots$, represent atomic propositions and connectives allow us to build compound propositions by joining one or more propositions together. Parentheses are used to denote the ordering that we should interpret the connectives in. Hypothetically, we would be able to express every proposition that shows up in this course symbolically, but we won't go that far.
There are four basic logical connectives, called Boolean connectives, after George Boole who defined them in 1854. We will define and consider each one.
Definition 1.3. $\neg P$ is called negation and is pronounced "not $P$". $\neg P$ is true if and only if $P$ is false.
For instance, $\neg q$ is the proposition I cannot take two weeks off. Note that negation is a unary connective, in that it only applies to one formula, while all the other connectives are binary. Observe that because of this, we tend to express atomic propositions positively.
The unary connective $\neg$ is defined for a propositional formula $P$ by the following truth table:
$P$ | $\neg P$ |
---|---|
$T$ | $F$ |
$F$ | $T$ |
Definition 1.4. $P \wedge Q$ is called conjunction, and is pronounced "$P$ and $Q$". $P \wedge Q$ is true if and only if both $P$ and $Q$ are true.
This connective expresses the idea that both $P$ and $Q$ are true. For instance, $Q \wedge R$ is the proposition I can take two weeks off and I can fly to Tokyo.
The binary connective $\wedge$ is defined for propositional formulas $P$ and $Q$ by
$P$ | $Q$ | $P \wedge Q$ |
---|---|---|
$T$ | $T$ | $T$ |
$T$ | $F$ | $F$ |
$F$ | $T$ | $F$ |
$F$ | $F$ | $F$ |
Definition 1.5. $P \vee Q$ is called disjunction and is pronounced "$P$ or $Q$". $P \vee Q$ is true if and only if $P$ is true or $Q$ is true or both.
For example, $Q \vee R$ is the proposition I can take two weeks off or I can fly to Tokyo. One tricky thing to note is that in English we often use "or" to mean "exclusive or". For instance, when you are asked whether you prefer beef or chicken, the expectation is that you may choose only one, not both. (In many cases, like it's my way or the highway, it's implicit that "my way" is not the highway.) The logical connective $\vee$ allows for both $P$ and $Q$ to be true, which corresponds to something like "and/or" in English.
The binary connective $\vee$ is defined for propositional formulas $P$ and $Q$ by
$P$ | $Q$ | $P \vee Q$ |
---|---|---|
$T$ | $T$ | $T$ |
$T$ | $F$ | $T$ |
$F$ | $T$ | $T$ |
$F$ | $F$ | $F$ |
Definition 1.6. $P \rightarrow Q$ is called implication and is pronounced "if $P$, then $Q$" or "$P$ implies $Q$". $P \rightarrow Q$ is true if and only if $Q$ is true or $P$ is false.
For example, $P \rightarrow R$ would be the proposition If flights to Tokyo are under \$900, then I can fly to Tokyo. The direction in which you read this connective is very important. For instance, it may be the case that flights to Tokyo are not under \$900 but I am still able to fly to Tokyo. Here, $P$ is called the hypothesis and $Q$ is called the conclusion.
Implication is defined for propositional formulas $P$ and $Q$ by
$P$ | $Q$ | $P \rightarrow Q$ |
---|---|---|
$T$ | $T$ | $T$ |
$T$ | $F$ | $F$ |
$F$ | $T$ | $T$ |
$F$ | $F$ | $T$ |
Implication is probably less familiar than conjunction or disjunction, but it's particularly important since it's heavily used in expressing mathematical statements. Also, unlike conjunction and disjunction, implication is not commutative: $P \rightarrow Q$ and $Q \rightarrow P$ are not logically equivalent.
One way to think about implications is as a claim, as we do in mathematics. Is the claim true? This depends on whether the hypothesis is true. If the hypothesis isn't true, then the claim can be considered vacuously true. This is sort of like when you begin a claim about all positive numbers that are less than 0: you can say anything about these numbers because they don't exist!
However, if the hypothesis is true, then the truth of our claim depends on whether the conclusion is also true. If it is, then all is right with the world. But if our hypothesis is true while our promised conclusion is false, then we have to conclude that we didn't know what we were talking about and that our claim was false after all.
It's important to note that while "implication" or "if $A$ then $B$" as commonly used in English implies some causal or logical connection between $A$ and $B$, no such connection is required in our setting. For example, If one plus one is two then Neptune is a planet is true simply on the basis of the conclusion being true.
Finally, we use the symbol $P \leftrightarrow Q$, pronounced "$P$ if and only if $Q$" ("$P$ iff $Q$" for short), as shorthand for $(P \to Q) \wedge (Q \to P)$. This will be true whenever $P$ and $Q$ are both true or both false.
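The connectives above can be modeled directly as functions on truth values. Here is a minimal sketch in Python (the function names are our own, chosen for readability, and are not standard notation):

```python
# Each Boolean connective as a function on Python booleans.

def neg(p):
    """Negation: true if and only if p is false."""
    return not p

def conj(p, q):
    """Conjunction: true if and only if both p and q are true."""
    return p and q

def disj(p, q):
    """Disjunction (inclusive or): true if p is true, q is true, or both."""
    return p or q

def implies(p, q):
    """Implication: true if and only if q is true or p is false."""
    return (not p) or q

def iff(p, q):
    """Biconditional: shorthand for (p -> q) and (q -> p)."""
    return conj(implies(p, q), implies(q, p))

# Spot-check a few rows of the truth tables above.
assert implies(False, True)       # a false hypothesis makes the implication true
assert not implies(True, False)   # true hypothesis, false conclusion: false
assert iff(False, False)          # both false: the biconditional holds
```

Note that `implies` is defined exactly as in Definition 1.6: it holds when the conclusion is true or the hypothesis is false, with no causal connection required.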
Implication leads us naturally to the notion of proof. Two things I might wish to prove are that some statement is always true (a tautology) or always false (a contradiction). One straightforward approach is to use truth tables, which extend the tables we used above.
Consider the statement $A \to (B \to A)$. We claim that this statement is always true (feel free to take a moment to think about why). To show this, we'll consider every possible truth value for $A$ and $B$:
$A$ | $B$ | $B \to A$ | $A \to (B \to A)$ |
---|---|---|---|
$T$ | $T$ | $T$ | $T$ |
$T$ | $F$ | $T$ | $T$ |
$F$ | $T$ | $F$ | $T$ |
$F$ | $F$ | $T$ | $T$ |
Note that we introduced an intermediate column for $B \to A$ that we were then able to reference when constructing the final column. Since the final column was always true, $A \to (B \to A)$ is a tautology.
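The truth-table argument above can also be carried out mechanically: enumerate every assignment of truth values to $A$ and $B$ and evaluate the formula on each. A short Python sketch (using an `implies` helper of our own):

```python
def implies(p, q):
    """p -> q is true if and only if q is true or p is false."""
    return (not p) or q

# Evaluate A -> (B -> A) under every assignment to A and B,
# one result per row of the truth table.
rows = [implies(a, implies(b, a))
        for a in (True, False)
        for b in (True, False)]

# The final column is true in every row, so the formula is a tautology.
assert len(rows) == 4
assert all(rows)
```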
For a more complex example, let's try to show $(A \wedge B) \to C \leftrightarrow A \to (B \to C)$. Note that $\leftrightarrow$ binds most loosely, so we're showing the equivalence of $(A \wedge B) \to C$ and $A \to (B \to C)$.
$A$ | $B$ | $C$ | $A \wedge B$ | $(A \wedge B) \to C$ | $B \to C$ | $A \to (B \to C)$ | $(A \wedge B) \to C \leftrightarrow A \to (B \to C)$ |
---|---|---|---|---|---|---|---|
$T$ | $T$ | $T$ | $T$ | $T$ | $T$ | $T$ | $T$ |
$T$ | $T$ | $F$ | $T$ | $F$ | $F$ | $F$ | $T$ |
$T$ | $F$ | $T$ | $F$ | $T$ | $T$ | $T$ | $T$ |
$T$ | $F$ | $F$ | $F$ | $T$ | $T$ | $T$ | $T$ |
$F$ | $T$ | $T$ | $F$ | $T$ | $T$ | $T$ | $T$ |
$F$ | $T$ | $F$ | $F$ | $T$ | $F$ | $T$ | $T$ |
$F$ | $F$ | $T$ | $F$ | $T$ | $T$ | $T$ | $T$ |
$F$ | $F$ | $F$ | $F$ | $T$ | $T$ | $T$ | $T$ |
Note that we've elided the columns for $((A \wedge B) \to C) \to (A \to (B \to C))$ and $(A \to (B \to C)) \to ((A \wedge B) \to C)$, both due to space constraints and because we can easily just compare the values in the 5th and 7th columns to determine the value of the 8th.
As we can see, truth tables can get very large: if you have $n$ different propositional variables, you're looking at a truth table with $2^n$ rows, which quickly becomes unmanageable. In the next lecture, we will introduce natural deduction, a system for proving the correctness of statements in propositional logic (and more powerful logics as well).
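The exponential blow-up is easy to see if we write a generic brute-force checker: a formula in $n$ variables forces us to examine all $2^n$ assignments. A sketch in Python (`is_tautology` is our own name), used here to verify the equivalence from the example above:

```python
from itertools import product

def implies(p, q):
    """p -> q is true if and only if q is true or p is false."""
    return (not p) or q

def is_tautology(formula, n):
    """Check a formula of n variables by enumerating all 2**n assignments."""
    return all(formula(*vs) for vs in product((True, False), repeat=n))

# The equivalence ((A and B) -> C) <-> (A -> (B -> C)); for booleans,
# == plays the role of the biconditional.
equiv = lambda a, b, c: implies(a and b, c) == implies(a, implies(b, c))

assert is_tautology(equiv, 3)   # checks all 2**3 = 8 rows of the table above
assert not is_tautology(lambda a: a, 1)   # a bare variable is not a tautology
```

With $n = 3$ this is only 8 rows, but the same check on 30 variables would already require over a billion rows, which is why we will want proof systems such as natural deduction.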