
In our era of increasingly sophisticated artificial intelligence, what can an 18th-century Scottish philosopher teach us about its fundamental limitations? David Hume’s analysis of how we acquire knowledge through experience, rather than through pure reason, offers an interesting parallel to how modern AI systems learn from data rather than explicit rules.
In his groundbreaking work A Treatise of Human Nature, Hume asserted that “All knowledge degenerates into probability.” This statement, revolutionary in its time, challenged the prevailing Cartesian paradigm, which held that certain knowledge could be achieved through pure reason. Hume’s empiricism went further than that of his contemporaries in emphasizing how our knowledge of matters of fact (as opposed to relations of ideas, like mathematics) depends on experience.
This perspective provides a parallel to the nature of modern artificial intelligence, particularly large language models and deep learning systems. Consider the phenomenon of AI “hallucinations”—instances where models generate confident but factually incorrect information. These aren’t mere technical glitches but reflect a fundamental aspect of how neural networks, like human cognition, operate on probabilistic rather than deterministic principles. When a model like GPT-4 or Claude generates text, it is not retrieving certain facts from a database but sampling from probability distributions learned from its training data.
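To make the sampling point concrete, here is a minimal sketch in Python (the vocabulary and logits are invented for illustration; this is not any real model’s API). The model assigns a probability to every candidate token and draws from that distribution, so a low-probability wrong answer is never impossible, which is one way to see why hallucinations are built into the mechanism rather than being a removable bug.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from the model's probability distribution."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-word candidates after "The capital of France is":
vocab = ["Paris", "Lyon", "Berlin", "Rome"]
logits = [4.0, 1.0, 0.5, 0.2]               # invented scores, for illustration only
rng = np.random.default_rng(0)
counts = {w: 0 for w in vocab}
for _ in range(1000):
    counts[vocab[sample_next_token(logits, rng=rng)]] += 1
print(counts)  # overwhelmingly "Paris", but the wrong answers have nonzero probability
```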
The parallel extends deeper when we examine the architecture of modern AI systems. Neural networks learn by adjusting weights and biases based on statistical patterns in training data, essentially creating a probabilistic model of the relationships between inputs and outputs. This has some parallels with Hume’s account of how humans learn about cause and effect through repeated experience rather than through logical deduction, though the specific mechanisms are very different.
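The learning mechanism itself fits in a few lines. The sketch below, again in Python and deliberately reduced to a single weight rather than a real network, shows the idea the paragraph describes: the parameter is nudged repeatedly toward the statistical pattern in noisy examples, and the “rule” relating input to output is never stated explicitly anywhere.

```python
import numpy as np

# A one-weight "network" learns y ≈ 3x purely from repeated exposure to examples.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.1, 200)    # noisy observations of a regularity

w, lr = 0.0, 0.5
for _ in range(200):
    grad = np.mean(2.0 * (w * x - y) * x)  # gradient of mean squared error w.r.t. w
    w -= lr * grad                          # adjust the weight toward the pattern

print(round(w, 2))  # ~3.0: a statistical habit formed from experience, not deduction
```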
These philosophical insights have practical implications for AI development and deployment. As these systems become increasingly integrated into critical domains—from medical diagnosis to financial decision-making—understanding their probabilistic nature becomes crucial. Just as Hume cautioned against overstating the certainty of human knowledge, we must be wary of attributing inappropriate levels of confidence to AI outputs.
Current research in AI alignment and safety reflects these Humean considerations. Efforts to develop uncertainty quantification methods for neural networks—allowing systems to express degrees of confidence in their outputs—align with Hume’s analysis of probability and his emphasis on the role of experience in forming beliefs. Work on AI interpretability aims to understand how neural networks arrive at their outputs by examining their internal mechanisms and training influences.
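One concrete flavor of uncertainty quantification is the ensemble approach: train several models and treat their disagreement as a confidence signal. The sketch below simplifies this to polynomial models so it runs with NumPy alone; real systems would use ensembles of neural networks or Bayesian approximations, but the logic is the same.

```python
import numpy as np

# Fit many small models on bootstrap resamples; the spread of their predictions
# serves as an uncertainty estimate.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 100)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.1, 100)

models = []
for _ in range(20):
    idx = rng.integers(0, len(x), len(x))         # bootstrap resample
    models.append(np.polyfit(x[idx], y[idx], 5))  # one small model per resample

for query in (0.0, 2.0):  # inside vs. far outside the training range
    preds = [np.polyval(m, query) for m in models]
    print(f"x={query}: mean={np.mean(preds):+.2f}, spread={np.std(preds):.2f}")
# Near the data the models agree (small spread); at x = 2.0 they diverge,
# flagging a prediction that should not be trusted.
```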
The challenge of generalization in AI systems—performing well on training data but failing in novel situations—resembles Hume’s famous problem of induction. Just as Hume questioned our logical justification for extending past patterns into future predictions, AI researchers grapple with ensuring robust performance beyond the training distribution. Few-shot learning (learning from minimal examples) and transfer learning (applying knowledge from one task to another) are technical responses to this challenge. Where Hume identified the logical problem of justifying inductive inference, AI researchers face the concrete engineering problem of building systems that generalize reliably beyond their training data.
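The problem can be shown in miniature. In the toy example below, the “world” is assumed to be a sine wave and the model a flexible polynomial; the model fits its experience (the training interval) almost perfectly and still fails badly in a novel region.

```python
import numpy as np

# A model that fits the training range well can be wildly wrong beyond it.
rng = np.random.default_rng(3)
x_train = rng.uniform(0.0, 1.0, 50)
y_train = np.sin(2.0 * np.pi * x_train)   # the true underlying pattern
coeffs = np.polyfit(x_train, y_train, 9)  # flexible model, fit in-distribution

for query in (0.5, 1.5):  # inside vs. outside the training range
    print(f"x={query}: true={np.sin(2.0 * np.pi * query):+.2f}, "
          f"predicted={np.polyval(coeffs, query):+.2f}")
# In-range the fit is accurate; out-of-range the extrapolation can be off by
# orders of magnitude, with nothing in the training data to warn us.
```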
Hume’s skepticism about causation and his analysis of the limits of human knowledge remain relevant when assessing AI capabilities. While large language models can generate sophisticated outputs that might seem to demonstrate understanding, they are fundamentally pattern-matching systems trained on text, operating on statistical correlations rather than causal understanding. This aligns with Hume’s insight that even human knowledge of cause and effect rests on observed patterns rather than on any perceived necessary connection.
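A toy confounder makes the distinction concrete (the rain-and-umbrella probabilities below are invented). A purely statistical learner observing only umbrellas and wet ground finds that one strongly predicts the other, yet the correlation is not a causal lever: handing out umbrellas on a dry day would not wet the ground.

```python
import numpy as np

# Rain causes both umbrellas and wet ground; a pattern-matcher trained only on
# (umbrella, wet_ground) pairs learns the conjunction, not the cause.
rng = np.random.default_rng(4)
n = 100_000
rain = rng.random(n) < 0.3
umbrella = rain & (rng.random(n) < 0.9)   # most people caught in rain carry one
wet = rain & (rng.random(n) < 0.95)       # rain almost always wets the ground

print(f"P(wet | umbrella)    = {wet[umbrella].mean():.2f}")   # high
print(f"P(wet | no umbrella) = {wet[~umbrella].mean():.2f}")  # low
# The observed pattern is Hume's "constant conjunction"; intervening on the
# umbrella variable would do nothing, because it is not the cause.
```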
As we continue advancing AI capabilities, Hume’s framework offers a useful caution. It reminds us to approach AI-generated information with skepticism and to design systems that acknowledge their probabilistic foundations. It also suggests that we may be approaching the limits of current AI, even as we pour more money and energy into the models. Intelligence, as we understand it, may have limits. If the data we can feed LLMs is restricted to human-written text, that supply will quickly be exhausted. That may sound like good news if your greatest concern is an existential threat posed by AI. But if you were counting on AI to power economic progress for decades, it is worth revisiting the 18th-century philosopher: Hume’s account of how knowledge depends on experience rather than pure reason illuminates the inherent constraints on artificial intelligence.
Related Links
My hallucinations article – https://journals.sagepub.com/doi/10.1177/05694345231218454
Russ Roberts on AI – https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/
Cowen on Dwarkesh – https://www.dwarkeshpatel.com/p/tyler-cowen-3
Liberty Fund blogs on AI
Joy Buchanan is an associate professor of quantitative analysis and economics in the Brock School of Business at Samford University. She is also a frequent contributor to our sister site, AdamSmithWorks.
READER COMMENTS
Ahmed Fares
Dec 26 2024 at 3:56pm
Islam is a religion which rejects causality. For example, you might think that the reason birds stay up in the air is because they flap their wings. The Qur’an would like to disabuse you of that notion:
Do they not see the birds above them with wings outspread and [sometimes] folded in? None holds them [aloft] except the Most Merciful. Indeed He is, of all things, Seeing. —Qur’an 67:19
David Hume would end up calling that “constant conjunction”. I believe Hume got it from Al-Ghazali, who wrote the following hundreds of years earlier:
It is because of continuous creation that causality does not and cannot exist. There is no causal glue to bind events together. Continuous creation is known by scripture and is confirmed by experience.
Incidentally, Kant was a moron. He said that it was David Hume’s refutation of causality which awoke him from his dogmatic slumber. The guy spent his whole life arguing against the wrong person.
Roger McKinney
Dec 27 2024 at 10:52am
And that’s why modern science was born in Christian Europe and not the Muslim world.
David Seltzer
Dec 27 2024 at 1:47pm
Kant was a moron? Dude! COME ON!
Ahmed Fares
Dec 28 2024 at 4:44pm
re: what if Churchill was a Kantian?
There is a debate about how many German attacks the British allowed to take place in order to protect the fact that they had broken the Enigma code. A quote:
History detective: Did Churchill sacrifice a city to protect a secret?
A cover story breaks Kantian ethics because it is a lie, and it uses people as a means to an end.
Kantians tie themselves up in knots with questions like these.
David Seltzer
Dec 28 2024 at 8:13pm
Your argument is worth considering, as you point out means and ends. An ad hominem attack diverts from your point. I don’t believe Kant was feeble-minded. In the case of the Enigma code, how else does a leader, Churchill, make that trade-off when the empire is on the brink of destruction?