I was having dinner with two friends, a mathematician and a scientist. The mathematician mentioned that “regular” mathematicians (a funny idea) take no notice of the foundations of mathematics. That work belongs to logicians and philosophers. The classic story is about Zermelo-Fraenkel set theory, which on its own is too weak to be really interesting. In other words, it doesn’t assume enough to get to what we consider interesting math. But when you add the special ingredient, the axiom of choice, you get enough power to do interesting math, and you also end up with paradoxes, or at least results strange enough to be called paradoxes, like Banach-Tarski.
These paradoxes were intensely troubling to early twentieth-century mathematicians. Bertrand Russell’s famous paradox asks whether the set of all sets that do not contain themselves contains itself. The troubling thing is that it turns out that set theory, if made powerful enough to form any set you can describe, is self-contradictory: it leads to paradoxes. But if you make it weaker, you don’t get anything interesting out. Mathematicians, for the most part, don’t worry about Russell’s paradox any more than the rest of us worry about whether the statement “this statement is false” is true or false.
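The paradox fits in two lines. In a standard formulation (the notation is mine, not in the original conversation), define the Russell set by unrestricted comprehension and then ask the question:

```latex
% Russell's paradox: let R be the set of all sets
% that do not contain themselves
R = \{\, x \mid x \notin x \,\}
% Asking whether R contains itself gives a contradiction
% either way:
R \in R \iff R \notin R
```

Zermelo-Fraenkel set theory avoids this by refusing to let you build sets by arbitrary description in the first place, which is exactly the “make it weaker” move.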
David Hume, my favorite philosopher, asked a similar, troubling question about science. He asked, how do we know that the sun will rise tomorrow? Other people had worked on a similar question, which is: what is the probability that the sun will rise tomorrow? Hume pointed out that we cannot ever really know that the sun will rise tomorrow. We just guess that, because it has risen so many times before, it will probably rise again.
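The probability version of the question has a classic (and much-contested) answer, Laplace’s rule of succession. As a sketch, assuming a uniform prior on the unknown chance of a sunrise:

```latex
% Laplace's rule of succession: after observing n sunrises
% in n days, with a uniform prior on the success probability,
% the posterior probability of one more sunrise is
P(\text{sunrise tomorrow} \mid n \text{ past sunrises}) = \frac{n+1}{n+2}
```

Hume’s point cuts underneath this arithmetic: the calculation only works if you already assume the future resembles the past, which is the very thing in question.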
This was intensely troubling to other philosophers (especially Kant). It is also a problem for scientists, because it calls the possibility of induction into question. Knowledge like “I did this experiment and saw this result” isn’t very interesting; we want to conclude that “if you do this experiment, you will see this result.” Strictly speaking, however, you can never prove a theory. Karl Popper later came up with the answer most scientists now lean on: empirical falsification. We say you can’t ever prove a theory, but you can disprove theories by making contrary observations. In other words, it is relatively easy to disprove a theory: you just do the experiment and hope for a result different from the one the theory predicts. To prove the theory, on the other hand, you’d need to do the experiment everywhere, under all possible conditions, forever.
Just as the mathematician was losing no sleep over the axiom of choice, the scientist was losing no sleep over the problem of induction. Sure, she said, we can’t show that a theory is definitely true, but we can show that it’s probably good enough, and induction works, so we are in good shape. I don’t debate that, but it is interesting that we rely on induction to show that induction works! Today’s scientists are a product of hundreds of years of inductive thinking about induction. We’ve all heard the story about how the Newtonian physics we learn in high school isn’t strictly true, but it’s pretty close to true for a whole range of applications. We’re used to the idea that theories can be incredibly accurate but not universal because we’ve seen that process play out many times.
It’s interesting to wonder how much of our comfort with induction comes from observing this history of scientific theories, and how much comes from other social factors, like postmodernism’s relativist view of the world. If we’re embedded in a culture that asserts that there is no universal truth, it’s easier, as a scientist, to conclude that there is no universal truth, while attributing that conclusion to scientific induction rather than to some other influence.
I’m not saying induction is useless, or that we know nothing. I’m just saying that, strictly speaking, we don’t know that we know anything, and that’s pretty fun to think about!