Ever feel like the future is a complete mystery? That life is just one unpredictable event after another? We often talk about things being “unforeseeable” or “coming out of nowhere,” but what if I told you that our brains are actually predicting the future every single second, and that much of the world around us is far more predictable than we give it credit for?
The truth is, we are all prediction machines, constantly forecasting everything from the next breath we take to whether the corner shop will have granola. We couldn’t function otherwise. The reason we often don’t see this inherent predictability, or dismiss it as simple “common sense,” is that the true nature of prediction is often hidden, woven into the fabric of our reasoning and the complex systems of the world.
Your Brain: A Master of Hidden Predictions
Think about it: every time you walk, your brain is predicting the ground beneath your feet, the movement of your limbs, and how to maintain balance. When you reach for a coffee cup, your brain is predicting its location, weight, and temperature. These are not mystical visions, but predictions based on information gathered from your past experiences.
Neuroscientists suggest that our brains are fundamentally Bayesian machines. This means our perception isn’t just a passive reception of sensory data, but an active process where information travels “up” from our senses and “down” from our internal models of the universe. Our perception is the commingling of these bottom-up and top-down streams. We constantly form hypotheses (priors) about the world and then test them against the evidence from our senses (data).
The crucial element in this process is prediction error: the difference between what our brain predicts and what it actually receives from our senses. When our predictions match reality, our brain “stays quiet”. But when they don’t, it generates signals that prompt updates to our internal model of the world. This constant effort to minimize prediction error is fundamental to how our brains learn and adapt. It’s why a sudden silence after a repetitive background hum is so noticeable—your brain predicted the sound, and its unexpected absence creates a prediction error that grabs your attention.
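To make that loop concrete, here’s a minimal toy sketch in Python – not a model of real neurons, just the bare logic of error-driven updating, with an illustrative learning rate:

```python
# Toy sketch of prediction-error-driven updating (illustrative, not a neural model).
# The "brain" keeps a running estimate of a signal and changes its model only in
# proportion to the prediction error: the gap between expectation and input.

def update_estimate(estimate: float, observation: float, learning_rate: float = 0.3) -> float:
    """Move the internal estimate toward the observation by a fraction of the error."""
    prediction_error = observation - estimate  # the surprise signal
    return estimate + learning_rate * prediction_error

estimate = 0.0
signal = [1.0] * 10 + [0.0] * 3  # a steady background hum, then sudden silence
for observation in signal:
    error = observation - estimate
    print(f"observed={observation:.1f}  predicted={estimate:.2f}  error={error:+.2f}")
    estimate = update_estimate(estimate, observation)
```

While the hum persists, the error shrinks toward zero and the model “stays quiet”; the moment of silence produces a sudden large error after a long run of near-zero ones – exactly the attention-grabbing surprise described above.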
The Language of Predictability: Probability and Bayes’ Theorem
While much of this happens unconsciously, the underlying principles are deeply rooted in mathematics, particularly in probability theory and Bayes’ Theorem.
Quantifying Uncertainty: Mathematics provides tools to reason about uncertain outcomes. When we talk about probability, it’s often a statement about our degree of belief in something, reflecting our knowledge and ignorance about the world, rather than an absolute fact.
The Crucial Distinction: A common trap is confusing two related but distinct questions:
Sampling Probability: “How likely am I to see this data, given a certain hypothesis?”.
Inferential (Inverse) Probability: “How likely is it that my hypothesis is correct, given this data?”. This is what we really want to know for understanding the world.
The Role of Priors: Bayes’ Theorem is the equation that allows us to move from sampling probability to inverse probability. To do this, it requires a prior probability – essentially, how likely you thought a hypothesis was before seeing any new evidence. This prior belief profoundly influences how new evidence shifts your confidence. For instance, if a disease is very rare (i.e., your prior probability of having it is very low), then even a positive result from a highly accurate medical test is more likely to be a false alarm than a true detection.
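Here is that medical-test example worked through in a few lines of Python. The specific numbers – a 1-in-1,000 prevalence, 99% sensitivity, a 5% false-positive rate – are illustrative assumptions, not figures from any particular test:

```python
# Bayes' theorem on the rare-disease example (all numbers are illustrative).
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

prior = 0.001          # assumed prevalence: 1 in 1,000 people has the disease
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# Total probability of a positive test, summed over both hypotheses:
p_positive = sensitivity * prior + false_positive * (1 - prior)

posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.019, i.e. about 2%
```

Even with an impressively accurate test, a positive result only lifts the probability of disease from 0.1% to about 2% – the low prior dominates.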
Why We Miss the Obvious Predictions
Despite our inherent predictive abilities and the mathematical tools available, we often fall into traps that make the world seem less predictable than it is:
Intuition’s Weakness: Our common sense is strong for basic concepts like addition, but it’s “pretty weak and unreliable” when it comes to assessing the likelihood of rare events. This is why improbable things happen a lot. We’re often surprised by these occurrences, like a lottery number combination appearing twice in a week, even though given enough chances (e.g., all lottery games across the country for years), such coincidences are not surprising at all – the short calculation after this list makes that concrete. The Baltimore stockbroker parable perfectly illustrates how we are surprised by improbable successes because we are blind to the much larger number of failures.
Misinterpreting Statistical Significance: In science, we often use p-values to decide if a result is “statistically significant”. However, many people, including scientists, commonly misinterpret a p-value of 0.05 (meaning a 1-in-20 chance of seeing data at least this extreme if the null hypothesis were true) as meaning there’s only a 5% chance their hypothesis is false. This ignores the crucial prior probability. Without considering prior beliefs (e.g., that psychic powers are unlikely), an exciting-sounding “statistically significant” result can still very probably be wrong – the second sketch after this list puts numbers on this.
The “True or False” Trap: We tend to think of beliefs as binary: either true or false. This rigid thinking makes it hard to incorporate new, nuanced evidence. Instead of gradually adjusting our confidence, we’re forced to abruptly “reject or accept”.
The File Drawer Problem: The scientific community often only publishes “novel,” “statistically significant” results, while studies that find no effect (“null results”) are often “file-drawered”. This creates a distorted view of the evidence, making it seem like certain effects are more prevalent than they are, just like the Baltimore stockbroker only shows you his winning picks.
P-hacking: There’s pressure to get publishable results, leading to practices (often unintentional) like tweaking analyses until a p-value drops below the arbitrary 0.05 threshold. Scientists might “torture the data until it confesses”.
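Two quick sketches tie the first two traps above to the arithmetic. First, coincidences: with illustrative numbers, an event with a one-in-a-million chance on any single occasion becomes near-inevitable once there are enough occasions:

```python
# How "improbable" events become likely given enough chances (illustrative numbers).

p_single = 1e-6        # chance of the coincidence on any one occasion
n_trials = 2_000_000   # e.g. many lotteries, many draws, many years

# P(at least one occurrence) = 1 - P(it never happens)
p_at_least_once = 1 - (1 - p_single) ** n_trials
print(f"P(at least once in {n_trials:,} trials) = {p_at_least_once:.2f}")  # ~0.86
```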
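Second, the p-value trap. If we treat a study like the diagnostic test from earlier – assuming, for illustration, that only 1 in 100 hypotheses of this kind is actually true, and that studies have a conventional 80% power – then a “statistically significant” result is still probably a false alarm:

```python
# Why p < 0.05 does not mean "95% likely to be true" (illustrative assumptions).
# A study is treated as a diagnostic test for its hypothesis; the prior matters.

prior = 0.01   # assumed: 1 in 100 hypotheses of this kind is actually true
power = 0.8    # P(significant result | hypothesis true) -- a common target
alpha = 0.05   # P(significant result | hypothesis false) -- the 0.05 threshold

p_significant = power * prior + alpha * (1 - prior)
posterior = power * prior / p_significant
print(f"P(hypothesis true | significant result) = {posterior:.2f}")  # ~0.14
```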
Prediction in the Big Picture
Beyond our personal cognition, prediction plays a vital role in larger systems:
Weather Forecasting: The atmosphere is chaotic in the long term – sensitive dependence on initial conditions imposes a “hard limit” of about two weeks – but short-range weather prediction has dramatically improved thanks to more data and computational power. The first sketch after this list shows sensitive dependence in miniature.
Artificial Intelligence (AI): At its heart, AI is about predicting uncertain things, and it is fundamentally Bayesian. Even large language models (LLMs) that “predict the next word” do so by implicitly building sophisticated models of the world to make accurate predictions.
The Wisdom of Crowds and Prediction Markets: When done right, aggregating information from diverse groups can lead to superior predictions. Prediction markets, for example, are a promising tool that uses market prices to predict event probabilities, leveraging dispersed information from many people who are incentivized to state their true beliefs. The second sketch below simulates the simplest version of this effect.
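The first sketch illustrates sensitive dependence using the logistic map – a standard toy chaotic system, not a weather model, chosen only because it is the simplest thing that shows the effect. Two starting points differing by one part in a million diverge completely within a few dozen steps:

```python
# Sensitive dependence on initial conditions in the logistic map (a toy
# chaotic system -- not a weather model, just the simplest illustration).

r = 4.0                          # map parameter in the chaotic regime
x_a, x_b = 0.200000, 0.200001    # two "measurements" differing by one millionth

for step in range(1, 26):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 5 == 0:
        print(f"step {step:2d}: difference = {abs(x_a - x_b):.6f}")
# The gap roughly doubles each step, so a millionth-sized measurement error
# swamps the "forecast" within about twenty steps.
```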
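And the second sketch is a minimal wisdom-of-crowds simulation, under the assumption that individual errors are independent and unbiased – the condition that makes simple averaging work (real prediction markets add incentives and stake-weighting on top of this):

```python
# Minimal wisdom-of-crowds simulation (assumes independent, unbiased errors).
import random

random.seed(1)
truth = 72.0  # the quantity being estimated

guesses = [truth + random.gauss(0, 15) for _ in range(1000)]  # noisy individuals
crowd_estimate = sum(guesses) / len(guesses)

typical_individual_error = sum(abs(g - truth) for g in guesses) / len(guesses)
print(f"crowd error: {abs(crowd_estimate - truth):.2f}")             # well under 1
print(f"typical individual error: {typical_individual_error:.2f}")  # around 12
```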
Embracing Uncertainty for Better Prediction
While some problems, like the ultimate questions of existence, may reach the “limits of quantitative reasoning”, for many real-world situations, we can become better predictors by understanding the principles of probability and Bayesian thinking:
Acknowledge Degrees of Belief: Instead of rigid “true/false” thinking, cultivate a mindset of confidence levels. This allows for a more nuanced understanding where new evidence can adjust beliefs up or down, rather than forcing an “all-or-nothing” change.
Beware of “Definition” Arguments: Many everyday arguments are about the definition of words rather than objective facts that change predictions about the world.
“Crooked Scientists”: Even as we learn, some “hardwired priors” are resistant to change (e.g., avoiding pain or death). Like a crooked scientist who rigs the experiment so the results match the hypothesis, the brain satisfies these priors by acting rather than updating: our evolutionary programming drives us to change our environment to match these critical predictions, rather than changing the predictions themselves.
Learning from Superforecasters: Experts in forecasting, known as superforecasters, implicitly or explicitly use Bayesian reasoning. They start with an “outside view” (base rates or prior probabilities) and then update it with an “inside view” (the specific details of the situation) – the sketch below shows this update in odds form. They also meticulously keep score of their predictions to calibrate their confidence and reduce overconfidence.
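Here is that outside-view-then-inside-view update written out in odds form, with illustrative numbers – an assumed 10% base rate, and case-specific evidence judged three times likelier if the hypothesis is true than if it is false:

```python
# Outside view + inside view, via Bayes' rule in odds form (illustrative numbers).

base_rate = 0.10        # outside view: e.g. ~10% of comparable cases pan out
likelihood_ratio = 3.0  # inside view: assumed P(evidence | true) / P(evidence | false)

prior_odds = base_rate / (1 - base_rate)        # 1 : 9
posterior_odds = prior_odds * likelihood_ratio  # 3 : 9, i.e. 1 : 3
posterior = posterior_odds / (1 + posterior_odds)
print(f"updated probability: {posterior:.2f}")  # 0.25
```

The base rate anchors the forecast; the evidence moves it, but only as far as its likelihood ratio warrants – one reason superforecasters resist jumping straight to the vivid inside view.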
In essence, our lives are a continuous exercise in prediction and inference. By understanding the underlying principles—that probability is often about our degree of belief, that our brains are constantly updating their models, and that even improbable events are bound to happen given enough chances—we can sharpen our “common sense” and make more informed decisions. Mathematics provides an “exoskeleton” for our intuition, allowing us to navigate the world with a more principled approach to uncertainty.