I mentioned earlier that one of the things this structure allows us to do is, under certain conditions, "divide", in the sense that there is an operation on elements of our structure that reverses multiplication. I put "divide" in quotes because the operation is not really division. It's more accurate to say that we'll be showing that multiplicative inverses exist. These are numbers $a^{-1}$ such that $a \cdot a^{-1} = 1$.
First, we need the following notions.
Definition 8.7. An integer $p$ greater than 1 is called prime if the only positive divisors of $p$ are 1 and $p$. Otherwise, $p$ is called composite.
Definition 8.8. Two integers $a$ and $b$ are relatively prime (or coprime) if $\gcd(a,b) = 1$.
Example 8.9. A prime $p$ is relatively prime to every positive integer less than it, since $p$ has no divisors except 1 and itself. However, composite numbers can also be relatively prime, even to each other. The numbers 10 and 21 are not prime, since $10 = 2 \cdot 5$ and $21 = 3 \cdot 7$. However, they are relatively prime, since $\operatorname{Div}(10) = \{\pm 1, \pm 2, \pm 5, \pm 10\}$ and $\operatorname{Div}(21) = \{\pm 1, \pm 3, \pm 7, \pm 21\}$.
Theorem 8.10. If integers $m \gt 0$ and $n$ are relatively prime, then $n$ has a multiplicative inverse mod $m$. That is, there exists an integer $a$ such that $n \cdot a \equiv_m 1$.
Proof. Per Bézout's identity, there exist integers $a$ and $b$ such that $a \cdot n + b \cdot m = 1$. Then, \begin{align*} [1]_m &= [a \cdot n + b \cdot m]_m \\ &= [a]_m \cdot [n]_m + [b]_m \cdot [m]_m \\ &= [a]_m \cdot [n]_m + [b]_m \cdot [0]_m \\ &= [a]_m \cdot [n]_m \end{align*} $$\tag*{$\Box$}$$
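The proof is constructive: the extended Euclidean algorithm computes the Bézout coefficients $a$ and $b$, and the coefficient of $n$ is the inverse. Here's a sketch in Haskell (the names `egcd` and `modInverse` are ours, not from any library):

```haskell
-- egcd n m returns (g, a, b) with a*n + b*m == g == gcd n m.
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd n 0 = (n, 1, 0)
egcd n m = let (g, a, b) = egcd m (n `mod` m)
           in (g, b, a - (n `div` m) * b)

-- modInverse n m: Just a with n*a ≡ 1 (mod m), when n and m are
-- relatively prime; Nothing otherwise.
modInverse :: Integer -> Integer -> Maybe Integer
modInverse n m =
  case egcd n m of
    (1, a, _) -> Just (a `mod` m)
    _         -> Nothing
```

For example, `modInverse 3 4` evaluates to `Just 3`, matching the computation in Example 8.11.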
Example 8.11. Consider $\mathbb Z_4$. There are four equivalence classes: $0,1,2,3$. Since 1 and 3 are relatively prime to 4, they have inverses: $1^{-1} = 1$ (this is obvious) and $3^{-1} = 3$, which we get by observing that $3 \cdot 3 \equiv_4 9 \equiv_4 1$. However, 2 has no inverse: \begin{align*} 2 \cdot 0 &= 0 \\ 2 \cdot 1 &= 2 \\ 2 \cdot 2 &= 4 \equiv_4 0 \\ 2 \cdot 3 &= 6 \equiv_4 2 \end{align*}
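For a small modulus we can also just search for inverses by brute force, checking every candidate product; `inverseOf` is a throwaway name of ours:

```haskell
-- inverseOf m a: all b in Z_m with a*b ≡ 1 (mod m).
inverseOf :: Integer -> Integer -> [Integer]
inverseOf m a = [ b | b <- [0 .. m - 1], (a * b) `mod` m == 1 ]
```

Here `inverseOf 4 3` gives `[3]`, while `inverseOf 4 2` gives `[]`, in agreement with the table above.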
One might ask whether there is any integer $m$ for which $\mathbb Z_m$ has a multiplicative inverse for all non-zero elements. Our discussion about prime numbers gives us a hint.
Corollary 8.12. If $p$ is prime and $p \not\mid a$, then $a$ has a multiplicative inverse mod $p$.
This is easy to see: the only positive divisors of $p$ are 1 and $p$, so if $p \not\mid a$, then $\gcd(a, p) = 1$ and Theorem 8.10 applies.
Definition 8.13. When it exists, we denote the multiplicative inverse of $a$ by $a^{-1}$.
Up until now, we've been working in $\mathbb Z$, where division doesn't generally work. However, we've now established sufficient conditions for structures where "division" does work in a sense, in that multiplicative inverses are guaranteed to exist for every nonzero element. This means that we can solve linear congruences like $$6x \equiv_{17} 144$$ using our usual algebraic techniques: multiply both sides by $6^{-1}$. In fact, assuming we're working with a single modulus throughout, it's not hard to solve a system of such equations in the usual way.
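As a sketch of that technique in Haskell: to solve $ax \equiv_m c$, multiply both sides by $a^{-1}$. The helper `solve` and its brute-force inverse search are our own names, not library functions:

```haskell
-- solve m a c: the x in Z_m with a*x ≡ c (mod m), found by
-- multiplying c by the inverse of a (located by brute force).
solve :: Integer -> Integer -> Integer -> Maybe Integer
solve m a c =
  case [ b | b <- [1 .. m - 1], (a * b) `mod` m == 1 ] of
    (aInv : _) -> Just ((aInv * c) `mod` m)
    []         -> Nothing
```

Here `solve 17 6 144` evaluates to `Just 7`: the inverse of 6 mod 17 is 3 (since $6 \cdot 3 = 18 \equiv_{17} 1$), and $3 \cdot 144 = 432 \equiv_{17} 7$. Indeed, $6 \cdot 7 = 42 \equiv_{17} 8 \equiv_{17} 144$.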
By now we've seen the basics of number theory, including the Division Theorem, Euclid's algorithm, modular arithmetic, and the notions of prime and relatively prime numbers. There's more to see, but in order to progress any further, we're going to need another tool in our mathematical toolbox: induction.
Induction is a general technique for proving mathematical statements that works particularly well for theorems that have the form $\forall n\in\mathbb{N}, P(n)$ for some predicate $P$.
Theorem 9.0. Let $P(n)$ be a predicate with domain $n \in \mathbb{N}$. If $P(0)$ holds, and for every $k \in \mathbb{N}$, $P(k)$ implies $P(k+1)$, then $P(n)$ holds for all $n \in \mathbb{N}$.
This is called the principle of mathematical induction.
How and when do we make use of this? Mathematical induction is a property of the natural numbers. What this means is that if there is some property that we want to prove about all natural numbers, all we need to do is prove that the property holds for the number 0 and that for any other natural number $k$, if it's true for $k$, then we can show that it's true for the next natural number, $k+1$.
More formally, to prove that $P(n)$ holds for all $n \in \mathbb N$, we prove two things: the base case, that $P(0)$ holds; and the inductive step, that for an arbitrary $k \in \mathbb{N}$, if $P(k)$ holds (the inductive hypothesis), then $P(k+1)$ holds.
So why does this even work? Basically, what we're really proving is a very large, infinite chain of implications, \begin{align*} P(0)&, \\ P(0) &\rightarrow P(1), \\ P(1) &\rightarrow P(2), \\ P(2) &\rightarrow P(3), \\ & \vdots \end{align*} The usual analogy for this is a chain of dominoes: we set up our argument so that any one domino is guaranteed to knock over the next domino ($P(k) \rightarrow P(k+1)$ for any $k$). Once we've set this up, all we need to do is knock over the first domino (showing $P(0)$) to set off the chain reaction.
This leads us to what a proof by induction actually looks like.
There is an injured shark thrashing about in the water. Nearby there are $n$ sharks. Each would like to eat the injured shark but would become injured in the process and potential victims of the remaining sharks. Every shark is perfectly logical and values its life more than eating. What will happen?
Feel free to give this puzzle some thought. Assume there is always a closest shark that could, if it chose to, eat the injured shark.
A good way to think about such puzzles (and math problems broadly!) is to consider small numbers. Let's do that. With $n = 0$, there are no other sharks, so nothing happens. With $n = 1$, the lone shark eats safely: no one is left to attack it. With $n = 2$, neither shark eats, since eating would leave it injured next to a healthy shark. With $n = 3$, the closest shark can eat: it becomes the injured shark among two healthy ones, which is exactly the safe $n = 2$ situation.
There's an emerging pattern here, so let's state our theorem.
Theorem 9.3. If $n$ is odd, the closest shark will eat the injured shark; otherwise, nothing will happen.
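The inductive structure of the argument can be captured as a little recursive model in Haskell. This is our own rendering, under the modeling assumption that the closest shark eats exactly when doing so would leave it safe:

```haskell
-- willEat n: with n healthy sharks and one injured shark, does the
-- closest healthy shark eat?
willEat :: Int -> Bool
willEat 0 = False                  -- no shark left to do any eating
willEat n = not (willEat (n - 1))  -- eat iff it would then be safe,
                                   -- i.e. nothing happens with n-1 sharks
```

Unfolding the recursion gives `willEat n == odd n`, the parity in the theorem.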
Note that we started with the $n = 0$ case, which was trivial. That's fine: $0$ is still the smallest natural number. Here it might feel tempting to also prove the $n = 1$ case, which is fine as long as we prove the $n = 0$ case as well. Later we will see some proofs where it's acceptable to start with some $n > 0$, because the claim has the form "if $n$ is greater than $5$, then...". But even in those cases we can start at zero (we might just have to treat the boundary value, like $k = 5$, separately in the inductive step).
If you enjoy these kinds of problems, try the Blue Eyes logic puzzle. (If you have brown eyes, I hope you don't find yourself on an island with 99 blue eyed people and a green eyed guru.)
We can more easily justify induction if we define the natural numbers themselves inductively, like you might define a datatype in Haskell, Racket, or ML.
In the late 1800s, Giuseppe Peano presented the following definition of the natural numbers that relied on only two symbols: \begin{align*} \mathbb{N} =~& 0 & \\ \text{or } & S~n & \text{where $n$ is a number} \end{align*}
$S$ is just a symbol here, standing for successor. In this system, we would write $1$ as $S~0$ and $5$ as $S~(S~(S~(S~(S~0))))$. Essentially, this is just counting with your fingers. (Compare this to our prior definition of numbers in terms of sets, back in lecture 4.)
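In Haskell, this definition becomes a two-constructor datatype (a rendering of ours, not standard library code):

```haskell
-- Peano naturals: a number is either zero or the successor of a number.
data Nat = Zero | Succ Nat
  deriving (Show, Eq)

-- Convert to an ordinary Int by counting the Succ's.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n
```

So `toInt (Succ (Succ (Succ (Succ (Succ Zero)))))` counts five `Succ`s, giving `5`.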
There's a nice correspondence here with how you would define a list in most functional programming languages. Here's Haskell:
data List a = Nil
            | Cons a (List a)
Since $0$ and $S~n$ together define all the numbers, it should make sense that if a proposition $P$ is true about $0$ and adding an $S$ preserves $P$'s truth, then $P$ is always true. Formally:
Definition 9.1. Peano's Axiom of Induction: For any proposition $P$ about a natural number, if $P(0)$ holds, and $P(n)$ implies $P(S~n)$ for every number $n$, then $P(n)$ holds for every number $n$.
Let's prove something in Peano arithmetic. To begin with, here's Peano's definition of $+$, recursing on the first argument: \begin{align*} 0 + m &= m \\ (S~n) + m &= S~(n + m) \end{align*}
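The definition of $+$ transcribes directly into Haskell (our own sketch, with the Peano datatype included so the snippet stands alone):

```haskell
-- Peano naturals.
data Nat = Zero | Succ Nat
  deriving (Show, Eq)

-- Peano addition, recursing on the first argument:
--   0 + m = m
--   (S n) + m = S (n + m)
plus :: Nat -> Nat -> Nat
plus Zero     m = m
plus (Succ n) m = Succ (plus n m)
```

For instance, `plus (Succ Zero) (Succ (Succ Zero))` reduces to `Succ (Succ (Succ Zero))`.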
Note that the definition doesn't say that $n + 0 = n$. So let's prove it ourselves: by induction!
Theorem 9.4. $\forall n, n+0 = n$
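A proof sketch, by induction on $n$, using only the two defining equations of $+$:

\begin{align*}
\textbf{Base case: } & 0 + 0 = 0 && \text{by the first equation of $+$.} \\
\textbf{Inductive step: } & \text{assume } k + 0 = k. \text{ Then} \\
& (S~k) + 0 = S~(k + 0) && \text{by the second equation of $+$} \\
& \phantom{(S~k) + 0} = S~k && \text{by the inductive hypothesis.} \tag*{$\Box$}
\end{align*}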
Using these axioms, we can prove all of the desired properties of $+$, $\times$, and $\leq$, including commutativity, associativity and the transitivity of $\leq$. These make great (strongly recommended) exercises for practicing induction.