CMSC 27100 — Lecture 1

Welcome to CMSC 27100! Basic course information can be found on the home page of the course website.

What is this course about?

This course is discrete mathematics, to be contrasted with continuous mathematics. Continuous mathematics is the kind of math that people like Euclid (Geometry) and Newton (Calculus) studied, requiring them to come up with notions like irrational numbers. By contrast, we will only be concerned with things like integers and rational numbers, which are conveniently the numbers that we use to count and to program computers.

It is also fundamentally about proof. Proof forms the foundation of mathematics, which in turn formed the foundation of computer science: Turing, von Neumann, Hopper and the other founders of CS were all mathematicians. As a result, every theoretical computer science course you take will have proof at its foundations (and more courses involve deep theory than you may realize).

Why should you take this course?

There are two good reasons to take this course:

This course will enable you to become a computer scientist. It will give you the foundations to analyze the running time of algorithms (Algorithms), to classify problems according to their difficulty (Complexity Theory) and solvability (Computability Theory). It will allow you to reason about deductive systems (Type Theory), number theory (Cryptography) and stochastic processes (Machine Learning). Ultimately, it will allow you to do computer science.

(The other core course for doing any advanced computer science is Linear Algebra, which forms the basis for key areas of computer science from Machine Learning to Quantum Computing. Unfortunately, we will not have time to cover Linear Algebra in this course, but we highly recommend it for future courses.)

This course will make you better at life. Two of the subjects covered in this course, logic and probability theory, are core to making good decisions in almost every endeavor. While "logic" in our context is not precisely the logic of Mr. Spock from Star Trek (fortunately) or "logic" in its colloquial sense, it is broadly useful. At their core, the arguments you encounter in everyday life approximate the kind of formal logic we will study in this class. Translating these arguments into formal logic will allow you to see holes where they exist and identify fallacies. Probability theory proves even more central to life: it helps you win games, assess risk in investing, model elections, and balance risk against reward in a pandemic.

What will we be covering in this course?

This course is a grab bag of fun discrete mathematical treats.

  1. Logic and set theory. These form the basis for the ideas and proof strategies that we'll use throughout the course.
  2. Number theory. Number theory is the study of the properties of numbers and in particular, the prime numbers.
  3. Combinatorics. Combinatorics is the study of how to count things. This sounds trivial until you have to start figuring out how many unique ways there are to arrange or choose a collection of objects.
  4. Probability theory. Probability is the study of quantifying the likelihood that events will occur. We focus on discrete probability, where there are countably many possible outcomes.
  5. Graph theory. Graph theory is the study of graphs. These are not graphs in the calculus or stats sense, but graphs as a set of vertices connected by edges. Graphs can represent relations among a set of objects.

Keep in mind that while this is an explicit list of content that we expect to cover, there is a lot more that we'll be covering implicitly, in the course of studying these topics. As already mentioned, a big part of this course is to develop your skills in mathematical proof.

Logic

Everybody who has worked in formal logic will confirm that it is one of the technically most refractory parts of mathematics. The reason for this is that it deals with rigid, all-or-none concepts, and has very little contact with the continuous concept of the real or of the complex number, that is, with mathematical analysis. Yet analysis is the technically most successful and best-elaborated part of mathematics. Thus formal logic is, by the nature of its approach, cut off from the best cultivated portions of mathematics, and forced onto the most difficult part of the mathematical terrain, into combinatorics.

(John von Neumann)

Mathematical logic is the study of reasoning from the point of view of mathematics. On its own it's a rich and deep field of study. In the context of computer science, the development of mathematical logic in the early 20th century led directly to the notion of computability. For the purposes of this course, the basics of mathematical logic allow us to develop a language to express mathematical ideas and a framework in which to prove their validity.

In the strictest sense, the language is really just a collection of symbols and some rules about how to arrange them. So, you're allowed to write something that looks like $(P \wedge Q) \to R$, but not something like $P \to \wedge Q$, and so on and so forth. The magic comes when we define meaning for these symbols.

Propositional logic

We will begin with propositional logic and expand from there later on. The basic element of propositional logic is the proposition.

Definition 1.1. A proposition is a statement that is either true or false.

We can classify propositions into two broad types.

Definition 1.2. An atomic proposition is one that cannot be broken down into smaller propositions. A compound proposition is one that is not atomic.

In mathematical logic, we represent propositions symbolically by formulas. Formulas are made up of propositional variables, parentheses, and connectives. Propositional variables, usually uppercase roman letters like $P,Q,R,\dots$, represent atomic propositions and connectives allow us to build compound propositions by joining one or more propositions together. Parentheses are used to denote the ordering that we should interpret the connectives in. We represent propositional formulas by lowercase Greek letters, $\alpha, \beta, \dots$. Hypothetically, we would be able to express every proposition that shows up in this course symbolically, but we won't go that far.

There are four basic logical connectives, called Boolean connectives, after George Boole who defined them in 1854. We will define and consider each one. For our running examples, let $P$ be the proposition "flights to Tokyo are under \$900", let $Q$ be "I can take two weeks off", and let $R$ be "I can fly to Tokyo".

Definition 1.3. $\neg P$ is called negation and is pronounced "not $P$". $\neg P$ is true if and only if $P$ is false.

For instance, $\neg Q$ is the proposition "I cannot take two weeks off". Note that negation is a unary connective, in that it only applies to one formula, while all the other connectives are binary. Observe that because of this, we tend to express atomic propositions positively.

The unary connective $\neg$ is defined for a propositional formula $\varphi$ by the following truth table:

$\varphi$ $\neg \varphi$
$T$ $F$
$F$ $T$

Definition 1.4. $P \wedge Q$ is called conjunction, and is pronounced "$P$ and $Q$". $P \wedge Q$ is true if and only if both $P$ and $Q$ are true.

This connective expresses the idea that both $P$ and $Q$ are true. For instance, $Q \wedge R$ is the proposition "I can take two weeks off and I can fly to Tokyo".

The binary connective $\wedge$ is defined for propositional formulas $\varphi$ and $\psi$ by

$\varphi$ $\psi$ $\varphi \wedge \psi$
$T$ $T$ $T$
$T$ $F$ $F$
$F$ $T$ $F$
$F$ $F$ $F$

Definition 1.5. $P \vee Q$ is called disjunction and is pronounced "$P$ or $Q$". $P \vee Q$ is true if and only if $P$ is true or $Q$ is true or both.

For example, $Q \vee R$ is the proposition "I can take two weeks off or I can fly to Tokyo". One tricky thing to note is that in English we often use "or" to mean "exclusive or". For instance, when you are asked whether you prefer beef or chicken, the expectation is that you may only choose one and not both. The logical connective $\vee$, by contrast, allows for both $P$ and $Q$ to be true, which corresponds to something like "and/or" in English.

"Or" is defined for propositional formulas $\varphi$ and $\psi$ by

$\varphi$ $\psi$ $\varphi \vee \psi$
$T$ $T$ $T$
$T$ $F$ $T$
$F$ $T$ $T$
$F$ $F$ $F$
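
To see the inclusive/exclusive distinction concretely, here is a small sketch in Python (purely illustrative; the function names are ours, not standard notation):

    # Inclusive "or" (the connective ∨): true when at least one of p, q is true.
    def inclusive_or(p, q):
        return p or q

    # Exclusive "or" (the "beef or chicken" reading): true when exactly one is true.
    def exclusive_or(p, q):
        return p != q

    for p in (True, False):
        for q in (True, False):
            print(p, q, inclusive_or(p, q), exclusive_or(p, q))

    # The two disagree only on the row p = q = True:
    # inclusive or is True there, exclusive or is False.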

Definition 1.6. $P \rightarrow Q$ is called implication and is pronounced "if $P$, then $Q$" or "$P$ implies $Q$". $P \rightarrow Q$ is true if and only if $P$ is false or $Q$ is true.

For example, $P \rightarrow R$ would be the proposition "If flights to Tokyo are under \$900, then I can fly to Tokyo". The direction in which you read this connective is very important. For instance, it may be the case that flights to Tokyo are not under \$900 but I can still fly to Tokyo. Here, $P$ is called the hypothesis and $Q$ is called the conclusion.

Implication is defined for propositional formulas $\varphi$ and $\psi$ by

$\varphi$ $\psi$ $\varphi \rightarrow \psi$
$T$ $T$ $T$
$T$ $F$ $F$
$F$ $T$ $T$
$F$ $F$ $T$

Implication is probably less familiar than conjunction or disjunction but it's particularly important since it's heavily used in expressing mathematical statements. Unlike conjunction and disjunction, implication is not commutative, which means that $P \rightarrow Q$ and $Q \rightarrow P$ are not logically equivalent.

One way to think about implications is as a claim, as we do in mathematics. Is the claim true? This depends on whether the hypothesis is true. If the hypothesis isn't true, then the claim can be considered vacuously true. This is sort of like when you begin a claim about all positive numbers that are less than 0: you can say anything about these numbers because they don't exist!

However, if the hypothesis is true, then the truth of our claim depends on whether the conclusion is also true. If it is, then all is right with the world. But if our hypothesis is true while our promised conclusion is false, then we have to conclude that we didn't know what we were talking about and that our claim was false after all.

It's important to note that while "implication" or "if A then B" as commonly used in English implies some causal or logical connection between A and B, no such connection is required in our setting. For example, "if one plus one is two, then Neptune is a planet" is true simply on the basis of the conclusion being true.

Finally, we use the symbol $P \leftrightarrow Q$, pronounced "P if and only if Q" ("P iff Q" for short), as shorthand for $(P \to Q) \wedge (Q \to P)$. This will be true whenever $P$ and $Q$ are both true or both false.
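
Since implication tends to be the least intuitive connective, it may help to see it computed mechanically. The following sketch (illustrative Python, with helper names we made up) encodes $\varphi \to \psi$ as "not $\varphi$, or $\psi$", which reproduces the table above, and checks that $P \leftrightarrow Q$ agrees with $(P \to Q) \wedge (Q \to P)$:

    def implies(p, q):
        # p -> q is false only when p is true and q is false,
        # i.e. it is equivalent to (not p) or q.
        return (not p) or q

    def iff(p, q):
        # p <-> q: true exactly when p and q have the same truth value.
        return p == q

    for p in (True, False):
        for q in (True, False):
            # iff is shorthand for the conjunction of the two implications.
            assert iff(p, q) == (implies(p, q) and implies(q, p))
            print(p, q, implies(p, q), iff(p, q))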

Truth Tables

Implication leads us naturally to the notion of proof. Two things I might wish to prove are that some statement is always true (a tautology) or always false (a contradiction). One straightforward approach is to use truth tables, which extend the tables we used above.

Consider the statement $A \to (B \to A)$. We claim that this statement is always true (feel free to take a moment to think about why). To show this, we'll consider every possible truth value for $A$ and $B$:

$A$ $B$ $B \to A$ $A \to (B \to A)$
$T$ $T$ $T$ $T$
$T$ $F$ $F$ $T$
$F$ $T$ $T$ $T$
$F$ $F$ $T$ $T$

Note that we introduced an intermediate column for $B \to A$ that we were then able to reference when constructing the final column. Since the final column was always true, $A \to (B \to A)$ is a tautology.
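
If you would rather let a machine fill in the rows, here is a short sketch (again illustrative Python; `implies` is our own helper) that enumerates every assignment to $A$ and $B$ and confirms the formula is true in each:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # Every one of the 2^2 assignments to (A, B) should make A -> (B -> A) true.
    for a, b in product((True, False), repeat=2):
        assert implies(a, implies(b, a))

    print("A -> (B -> A) is a tautology")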

For a more complex example, let's try to show $(A \wedge B) \to C \leftrightarrow A \to (B \to C)$. Note that $\leftrightarrow$ is deemed to bind loosely, so we're showing the equivalence of $(A \wedge B) \to C$ and $A \to (B \to C)$.

$A$ $B$ $C$ $A \wedge B$ $(A \wedge B) \to C$ $B \to C$ $A \to (B \to C)$ $(A \wedge B) \to C \leftrightarrow A \to (B \to C)$
$T$ $T$ $T$ $T$ $T$ $T$ $T$ $T$
$T$ $T$ $F$ $T$ $F$ $F$ $F$ $T$
$T$ $F$ $T$ $F$ $T$ $T$ $T$ $T$
$T$ $F$ $F$ $F$ $T$ $T$ $T$ $T$
$F$ $T$ $T$ $F$ $T$ $T$ $T$ $T$
$F$ $T$ $F$ $F$ $T$ $F$ $T$ $T$
$F$ $F$ $T$ $F$ $T$ $T$ $T$ $T$
$F$ $F$ $F$ $F$ $T$ $T$ $T$ $T$

Note that we've elided the columns for $((A \wedge B) \to C) \to (A \to (B \to C))$ and $(A \to (B \to C)) \to ((A \wedge B) \to C)$, both due to space constraints and because we can easily just compare the values in the 5th and 7th columns to determine the value of the 8th.
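
The same brute-force check works here, now over all $2^3 = 8$ assignments (an illustrative Python sketch, reusing the hand-rolled `implies` from before):

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # (A ∧ B) -> C and A -> (B -> C) should agree on all 2^3 = 8 assignments,
    # which makes the biconditional between them a tautology.
    for a, b, c in product((True, False), repeat=3):
        assert implies(a and b, c) == implies(a, implies(b, c))

    print("(A ∧ B) -> C and A -> (B -> C) are logically equivalent")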

As we can see, truth tables can get very large: if you have $n$ different propositional variables, you're looking at a truth table with $2^n$ rows. This is bad. In the next lecture, we will introduce natural deduction, a system for proving the correctness of statements in propositional logic (and more powerful logics as well).