The Book Club on Mike Huemer’s Knowledge, Reality, and Value continues. Today, I cover Part 2 of the book. To repeat, though I’m a huge fan of the book, I’m focusing almost entirely on disagreements.
1. One of Huemer’s preferred responses to the classic Brain-in-a-Vat (BIV) scenario is that – especially compared to the Real World story – it’s a weakly supported theory.
The Real World theory predicts (perhaps not with certainty, but with reasonably high probability) that you should be having a coherent sequence of experiences which admit of being interpreted as representing physical objects obeying consistent laws of nature. Roughly speaking, if you’re living in the real world, stuff should fit together and make sense. The BIV theory, on the other hand, makes essentially no predictions about your experiences. On the BIV theory, you might have a coherent sequence of experiences, if the scientists decide to give you that. But you could equally well have any logically possible sequence of experiences, depending on what the scientists decide to give you.
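The contrast can be put in explicitly Bayesian terms. Here is a minimal sketch, with invented numbers; the function name and all probabilities below are illustrative assumptions, not figures from the book:

```python
def posterior_real_world(prior_real, p_coherent_given_real, p_coherent_given_biv):
    """Posterior probability of the Real World theory after observing a
    coherent stream of experiences (Bayes' theorem with two hypotheses)."""
    prior_biv = 1.0 - prior_real
    evidence = (prior_real * p_coherent_given_real
                + prior_biv * p_coherent_given_biv)
    return prior_real * p_coherent_given_real / evidence

# Even from an agnostic 50/50 prior, the likelihood ratio dominates:
# the Real World theory makes coherent experience likely (say 0.9),
# while BIV spreads its probability over every logically possible
# stream of experiences (say 1e-12 for any particular coherent one).
print(posterior_real_world(0.5, 0.9, 1e-12))  # extremely close to 1
```

Whatever the exact numbers, the structure is the point: a theory that predicts almost nothing in particular gets almost no credit from the evidence.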
Why, though, couldn’t we race the Real World theory against the Simulation-of-the-Real-World theory? Instead of a generic story of BIVs, we could have a specific story saying the real world exists and provides the inspiration for the simulation we’re in. I know it sounds ad hoc, but notice that modern technologists are already trying to simulate the real world with virtual reality and such.
Huemer goes on to argue for “direct realism” as an even more straightforward response to BIV. Question for him: To what extent is this approach compatible with just saying that the reasonable Bayesian prior assigns overwhelming probability to the Real World story? (In Moorean terms, this closely resembles an appeal to “initial plausibility.”)
2. Huemer exhaustively covers the modern Gettier-inspired objections to the classic definition that knowledge is “justified, true belief.” The objections to the definition are convincing. Still, what’s wrong with the slightly modified view that when we call X “knowledge,” we almost always mean that X is a “justified, true belief”? By analogy, we would probably call a floating antigravity platform a “table.” Yet when we call something a table, we almost always mean a platform on top of one or more supports. In other words, we can think of “justified, true belief” as a helpful description of knowledge rather than a strict definition. What, if anything, is wrong with that?
3. Crazy as it seems, I have no other notable disagreements with Part 2. So let me end with my favorite insight. After showing the many desperate attempts to define “knowledge,” Huemer explains that mathematically precise definitions are grossly overrated:
This is no counsel of despair. The same theory that explains why we haven’t produced any good definitions also explains why it was a mistake to want them. We thought, for instance, that we needed a definition of “know” in order to understand the concept and to apply it in particular circumstances. Luckily, that isn’t how one learns a concept, nor how one applies it once learned. We learn a concept through exposure to its usage in our speech community, and we apply it by imitating that usage. Indeed, even if philosophers were to one day finally articulate a correct definition of “knowledge”, free of counter-examples, this definition would doubtless be so complex and abstract that telling it to someone who didn’t already understand the word would do more to confuse them than to clarify the meaning of “knowledge”.
So often I’ve heard people say, “Is X good? It depends on how you define ‘good.'” And what I want to reply is, “We both know what the word ‘good’ means. The question is whether X possesses it.”
As usual, please leave questions for Huemer and myself in the comments, and we’ll respond in the next week or so.
READER COMMENTS
Anon
Jun 8 2021 at 6:06pm
To Huemer,
1. To my mind, part of the problem with questions like “Is there a God?” is not that they are meaningless or that they have no answer. Rather, it’s that they are unanswerable.
Think of the question “How many hairs were there in Hitler’s mustache?” For sure there is a correct answer, but it is impossible for us to know what it is. This, of course, does not mean that we can’t reason through the problem and rule out silly answers, but the actual fact of the matter will remain unknown.
2. On pages 87-88 you use two kinds of beliefs to illustrate some of the problems with relativism: belief (or lack thereof) in the afterlife, and belief in the goodness/badness of polygamy.
The first of these examples is a factual question that must have a definite answer (either there is an afterlife or there isn’t). The second example, however, is not a factual question, and will depend on what each particular culture considers good and bad.
It seems as though you see these two examples as equally effective at rejecting relativism of all kinds, whereas I would have imagined that only the first example can disprove relativism, and even then only epistemic relativism, not moral relativism.
Curt
Jun 13 2021 at 7:08pm
I too found that the discussion of ‘polygamy is wrong’ ignores the ambiguity of the proposition… and is thus not very convincing as an argument against relativism. Wrong in a legal sense? Wrong as in damaging to others or oneself? Wrong as in against God’s wishes? How can we test the proposition?
Benjamin
Jun 9 2021 at 1:10am
Hello, first I need to say that I am a big fan of both of you, and I love these book clubs, so thanks Bryan and Michael for your blogs. And the book is great!
Now for my own disagreement.
Michael seems to dismiss Karl Popper too quickly, but I have a hard time distinguishing his epistemology from Popper’s (at least as presented by David Deutsch). Concretely, he says
This sounds exactly like Popperian fallibilism, since you are admitting that the moment you get a good reason to think that what seems true to you is false, you should doubt it, and thus the original “foundation” is still fallible.
Similarly, the critique of the BIV is basically the same as David Deutsch’s critique in his book The Fabric of Reality: “BIV is a bad explanation because from it anything goes, and so it is not really an explanation.” Deutsch (and, I think, most if not all people) is satisfied with that as grounds for preferring Real World to BIV, but Michael goes on to make an unnecessary argument (not even completely expressed in the book) with made-up probabilities to try to justify this. Did he really do the calculations to come up with the 99.999999999999% he mentions?
When discussing the critique of justificationism
But that is not so: you don’t have to change your beliefs unless you have a good reason to change them, or have a better alternative. “Your belief is not justified” is not, by itself, a good reason to change it, especially if the belief to which you should change is also not justified. Since justification skeptics would say that no belief is justified, they would say you need a reason to change your belief other than whether it is justified or not.
Maybe what is happening here is that Popper/Deutsch use “justified” to mean “held with absolute certainty, so that you never have to worry about it again,” while Michael uses “justified” to mean “supported by good reasons, though you can change it if a better reason comes along.” With these meanings in mind, I don’t think there is a disagreement between Popper and Michael.
In summary, what I don’t understand is why it is better to believe
– I am justified in believing X because it is the theory with the highest probability (but new arguments/evidence can make me change my mind, so my belief is still fallible)
than
– I am not justified in believing X, but I can do so until I have a good reason to stop and pick an alternative.
And yes, what counts as “a good reason” is also fallible. But all the probability arguments in the world would still be fallible too, so it is not as if you are gaining much.
Finally, I would like to mention that a lot of the discussion of knowledge is about “X is true” as a necessary condition for believing X. But “X contains some truth” seems more important to me.
Every day, construction work is carried out using equations approximating Newton’s theory of gravity. These equations are false. They still contain some truth, though, and that is why these constructions work well enough.
Thanks again!
Henri Hein
Jun 10 2021 at 2:46pm
I agree you shouldn’t change your beliefs willy-nilly, but I don’t think Huemer was implying that. Rather, if we find our beliefs to be unjustified, we should keep looking until we discover beliefs that are justified, assuming (or hoping) they exist.
Benjamin
Jun 11 2021 at 10:16am
Exactly. But I think the point of Popper was that you will never be guaranteed that your justification is correct.
The moment you have a problem with your justification, you should start looking for some other belief. But until you have something better, you can still act as if your belief is true, if it is the best one you have. Popper’s idea is that you should not worry about perfect (completely justified) beliefs, just better ones.
The classic example is Quantum Theory and General Relativity: they cannot both be correct, since they contradict each other, but until we find something better we can treat them as true.
Hellestal
Jun 9 2021 at 9:34am
Non-simulation is VASTLY simpler than Brain-in-Vat.
The same misconception has come up on his blog, too. Puzzling. “In the BIV Hypothesis, you only need the brain in its vat, the scientists, and the apparatus for stimulating the brain, to explain all experiences. Vastly simpler.”
No.
From a mathematical perspective — entropic measure of information — describing the accurate rules of physics of our entire universe is going to be orders and orders and orders of magnitude simpler than describing in detail any relevant subset of the universe.
In order to create an accurate description of “only” the brain in its vat, the scientists, and the brain apparatus — as if that were all that existed, without relying on the simple rules of physics playing out from a (presumably simple) original condition — you would need an absolutely absurd quantity of description. Overwhelming.
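This description-length point can be made concrete with compression as a crude stand-in for Kolmogorov complexity. The sketch below is my own illustration, not from the comment; the two “worlds” and their sizes are arbitrary:

```python
import random
import zlib

# A "world" generated by a simple rule (a toy stand-in for laws of physics
# playing out from a simple initial condition):
rule_world = bytes((i * i) % 251 for i in range(100_000))

# A "world" specified in full detail by hand, with no generating rule
# (pseudo-random bytes, seeded for reproducibility):
random.seed(0)
handmade_world = bytes(random.randrange(256) for _ in range(100_000))

def description_length(data: bytes) -> int:
    """Compressed size in bytes, a rough proxy for description length."""
    return len(zlib.compress(data, level=9))

print(description_length(rule_world))      # tiny: the rule compresses away
print(description_length(handmade_world))  # near 100,000: nothing to compress
```

Specifying the brain, the scientists, and the apparatus directly, without letting simple laws generate them, is analogous to the second case.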
This is similar to the question of whether the universe sharply ends at the edge of observable space. Using Huemer’s implied notion of “simplicity”, we should assume that objects outside of the cosmological horizon simply vanish into nothingness, because that’s “simpler”. There’s less stuff.
But that is not, actually, simpler. Using the relevant metric, it would require (again) an enormous explosion in the complexity of the basic rules of physics to posit that everything suddenly vanishes once it reaches the place where we are no longer able to perceive it. (You don’t actually need the cosmological horizon for this: you can just as easily assume that nothing in the universe exists except for when you, personally, are paying attention to it. Same idea. Same problem of ENORMOUS complexity in the fundamental rules of the world for such a thing to work.)
This also relates to the low entropy starting condition of our universe, which has also come up on his blog. I don’t know why this puzzles anyone. A low entropy starting condition is a mathematically simpler starting condition.
That is literally what “low entropy” means: simplicity.
If we have a prior probability that we should favor simpler explanations that adequately describe what we see over more complex ones — and we SHOULD: there is often one simplest explanation, but once we add complexity there is literally no end to what we can speculate, no end to the extra complexities we can add — then the most sensible prediction for the start of the universe is low entropy. That is the mathematically simplest way to start.
For BIV: You need a universe, which should ideally have simple rules of physics and a simple (low-entropy) starting condition. Then the simple rules of physics need to play out to, eventually, create the scientists and their mad inventions. Then you need them to SUCCEED in creating (or, ugh, harvesting) the brain for their apparatus. And then their apparatus needs to work, according to the rules of the universe, in a way that can potentially create what we see now with enough complexity and consistency (or apparent consistency) for us to assume it is real.
For our world: You need a universe, which should ideally have simple rules of physics and a simple (low-entropy) starting condition. Then the simple rules of physics need to play out to, eventually, create us. And that’s it. That’s all. You’re done.
There is no comparison here. BIV really does get its ass stomped. It’s not close.
John Alcorn
Jun 9 2021 at 12:24pm
Part II of the book — chapters 6, 7, & 8 — is about epistemology. I learned much from Prof. Huemer’s refreshingly clear account.
To my surprise, Prof. Huemer largely neglects social epistemology. The exception is Section 8.5.3 (pp. 128-132), “A Wittgensteinian View of Concepts.” Here is Prof. Huemer’s apt punchline:
However, social epistemology has much greater scope than ‘how one learns words & concepts.’
Compare the Wikipedia entry on social epistemology (and the embedded links there).
Compare also Arnold Kling, “Epistemology as a social process,” a blogpost prompted by Prof. Huemer’s new book. Note also Prof. Huemer’s comment at Dr. Kling’s blogpost.
Henri Hein
Jun 11 2021 at 2:04am
The burning question I have: is the typo in the footnote on p.95 on purpose?
I was also struck by Huemer’s concluding paragraph(s) in Part 2, but for a different reason. I was comparing it with chapter 3. In the section “Attacks on Objectivity,” Huemer gives short shrift to those who are critical of objectivity and rationality. Most people who use “know” in the vernacular, though, form that knowledge without any recourse to rationality, or at best use rationality as a cursory cloak for their belief, which they call knowledge. This is true not only for religious and moral absolutists, but for anyone with a strong opinion. For instance, many Flat Earthers will claim they know the Earth is flat. Many progressives will claim they know the optimal minimum wage. The Overconfidence Effect tells me I should be skeptical when people claim they know things.
I sympathize both with the skeptical view and with the defenders of knowledge. I think the skeptic’s claim that we cannot know anything with 100% certainty must be correct. Some of the counters to skepticism seem a bit off, at least the ones that try to categorize skepticism itself as knowledge. As an analogy, let’s imagine Matt showing us a mathematical proof. We point out that the jump from, say, step 3 to step 4 is too wide, and thus we reject Matt’s proof. It is not a rebuttal for Matt to claim that we don’t know step 4 does not follow from step 3, and that since we haven’t disproved his proof, he stands by it. He must either prove that step 4 follows from steps 1-3, or admit his proof falls short. The presumption is against the proof. The ideal of knowledge should meet the same standard.
It’s OK to admit there is no such ideal, but then we still need to know when knowledge is “good enough” to proceed with. That might be more of a practical question that falls outside the bounds of philosophy, but surely, if epistemology is good for anything, it should at least inform the engineers and scientists and others who need to know the answer to that question.
Thanks to John Alcorn for the pointers to social epistemology. We shouldn’t be surprised Huemer doesn’t cover stuff like this; the book is a general overview, not an exhaustive treatment. I don’t know if it counts as social epistemology, but I do strongly believe that our most solid cases of knowledge are built from cooperation. In the example with the octopus, if my friend is next to me and is also seeing the octopus, and we both talk about what we are seeing, and our descriptions match, the likelihood that I am truly seeing the octopus goes up. If our descriptions differ, and what my friend describes sounds more like a coffee cup, then at least one of us should start distrusting our senses. I do understand that I’m relying on sensory input to exchange views with my friend, but if they are describing in words what I am seeing with my eyes, it would be an extraordinary coincidence if there were a match by accident.