The Book Club on Mike Huemer’s Knowledge, Reality, and Value continues.  Today, I cover Part 2 of the book.  To repeat, though I’m a huge fan of the book, I’m focusing almost entirely on disagreements.

1. One of Huemer’s preferred responses to the classic Brain-in-a-Vat (BIV) scenario is that – especially compared to the Real World story – it’s a weakly supported theory.

The Real World theory predicts (perhaps not with certainty, but with reasonably high probability) that you should be having a coherent sequence of experiences which admit of being interpreted as representing physical objects obeying consistent laws of nature. Roughly speaking, if you’re living in the real world, stuff should fit together and make sense. The BIV theory, on the other hand, makes essentially no predictions about your experiences. On the BIV theory, you might have a coherent sequence of experiences, if the scientists decide to give you that. But you could equally well have any logically possible sequence of experiences, depending on what the scientists decide to give you.

Why, though, couldn’t we race the Real World theory against the Simulation-of-the-Real-World theory?  Instead of a generic story of BIVs, we could have a specific story saying the real world exists and provides the inspiration for the simulation we’re in.  I know it sounds ad hoc, but notice that modern technologists are already trying to simulate the real world with virtual reality and such.

Huemer goes on to argue for “direct realism” as an even more straightforward response to BIV.  Question for him: To what extent is this approach compatible with just saying that the reasonable Bayesian prior assigns overwhelming probability to the Real World story?  (In Moorean terms, this closely resembles an appeal to “initial plausibility.”)
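To make the Bayesian framing concrete, here is a toy calculation (every number below is invented purely for illustration): even a 50/50 prior gets swamped by the likelihood asymmetry Huemer describes, since the BIV theory spreads its probability across every logically possible experience-stream.

```python
# Toy Bayesian comparison of the Real World theory vs. the BIV theory.
# All numbers are made up purely for illustration.

prior_real = 0.5          # prior probability of the Real World theory
prior_biv = 0.5           # prior probability of the BIV theory

# Likelihood of observing a coherent, law-governed experience-stream:
# the Real World theory predicts it with high probability...
p_coherent_given_real = 0.9
# ...while the BIV theory spreads probability over all logically
# possible experience-streams, so any particular one is vanishingly likely.
n_possible_streams = 10**6
p_coherent_given_biv = 1 / n_possible_streams

# Bayes' theorem: posterior is proportional to prior times likelihood.
unnorm_real = prior_real * p_coherent_given_real
unnorm_biv = prior_biv * p_coherent_given_biv
posterior_real = unnorm_real / (unnorm_real + unnorm_biv)

print(f"Posterior probability of the Real World theory: {posterior_real:.6f}")
# -> roughly 0.999999: the likelihood ratio dominates even a 50/50 prior.
```

On this framing, the Simulation-of-the-Real-World variant from point 1 can be read as an attempt to match the Real World theory’s likelihood, which would push the whole dispute back onto the prior.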

2. Huemer exhaustively covers the modern Gettier-inspired objections to the classic definition of knowledge as “justified, true belief.”  The objections to the definition are convincing.  Still, what’s wrong with the slightly modified view that when we call X “knowledge,” we almost always mean that X is a “justified, true belief”?  By analogy, we would probably call a floating antigravity platform a “table.”  Yet when we call something a table, we almost always mean a platform on top of one or more supports.  In other words, we can think of “justified, true belief” as a helpful description of knowledge rather than a strict definition.  What, if anything, is wrong with that?

3. Crazy as it seems, I have no other notable disagreements with Part 2.  So let me end with my favorite insight.  After surveying the many desperate attempts to define “knowledge,” Huemer explains that mathematically precise definitions are grossly overrated:

This is no counsel of despair. The same theory that explains why we haven’t produced any good definitions also explains why it was a mistake to want them. We thought, for instance, that we needed a definition of “know” in order to understand the concept and to apply it in particular circumstances. Luckily, that isn’t how one learns a concept, nor how one applies it once learned. We learn a concept through exposure to its usage in our speech community, and we apply it by imitating that usage. Indeed, even if philosophers were to one day finally articulate a correct definition of “knowledge”, free of counter-examples, this definition would doubtless be so complex and abstract that telling it to someone who didn’t already understand the word would do more to confuse them than to clarify the meaning of “knowledge”.

So often I’ve heard people say, “Is X good?  It depends on how you define ‘good.’”  And what I want to reply is, “We both know what the word ‘good’ means.  The question is whether X possesses it.”

As usual, please leave questions for Huemer and me in the comments, and we’ll respond in the next week or so.