Let me outline an argument for the importance of overcoming bias:

1. Our beliefs have many errors, i.e., deviations from truth.

2. Reducing error is an important goal, for which we are willing to pay substantial costs.

3. The causes of our errors can be seen as ranging from context-specific to general trends.

4. We in fact have many identifiable, stable, general error trends, in addition to a legion of context-specific causes.

5. By reflecting on error causes, we can seek ways to adjust our patterns of thought and social institutions to reduce error.

6. For a substantial fraction of error causes, we can in fact find feasible adjustments.

7. It is often more cost-effective to seek and implement adjustments for general trends than for context-specific errors.

I think that there are many strategies for finding truth. A biased trial-and-error strategy that makes lots of trials may be better than a strategy that focuses on bias but makes few trials. The only bias that really worries me is confirmation bias.

Think of seeking truth as like one of those parlor guessing games where you make a guess and then get feedback of the form “You’re getting warmer” or “You’re getting colder.”

Suppose you and I each get half an hour of guesses and feedback. If I make lots of biased guesses and you make one unbiased guess, then at the end of half an hour I may be a lot closer than you are to the right answer, even though where I am is conditional on my initial biases.

The one bias that really causes trouble is confirmation bias. Confirmation bias means that when I get feedback that says, “You’re getting colder,” I interpret it as saying “You’re getting warmer.” In that case, additional guesses and feedback may not help me.

I have nothing against trying to overcome other types of biases. But for me, the most important bias to worry about is confirmation bias. That is the one that merits eternal vigilance.

To use statistical jargon, I believe that as long as I do not suffer from confirmation bias, then a process of guessing and then obtaining feedback will asymptotically converge to the truth. It may not be as efficient as a process that is free of other biases, but the benefits of getting rid of those other biases may not exceed the costs.
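The parlor-game argument above can be sketched as a toy simulation. This is purely illustrative (the warmer/colder game on a number line, with a made-up step-halving guesser): a guesser who starts from a biased position but reads the feedback honestly converges, while one with confirmation bias, who reads "colder" as "warmer", never does.

```python
def guess_with_feedback(start, truth, steps, direction=1, flip_feedback=False):
    """Hill-climb toward `truth` using only warmer/colder feedback.

    flip_feedback=True models confirmation bias: 'colder' feedback is
    read as 'warmer', so the guesser never corrects course.
    """
    pos, step = start, 1.0
    prev_dist = abs(pos - truth)
    for _ in range(steps):
        pos += direction * step
        dist = abs(pos - truth)
        warmer = dist < prev_dist
        if flip_feedback:
            warmer = not warmer      # misread the feedback
        if not warmer:
            direction = -direction   # turn around
            step /= 2                # and search more finely
        prev_dist = dist
    return pos

# An honest guesser starting far off (a biased start) still closes in on
# the truth; a confirmation-biased guesser wanders away from it.
honest = guess_with_feedback(start=0, truth=10, steps=60)
biased = guess_with_feedback(start=0, truth=10, steps=60, flip_feedback=True)
```

With honest feedback the step size halves after every overshoot, so the guess converges geometrically; with flipped feedback the guesser keeps marching in whatever direction the misreading endorses.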

## READER COMMENTS

## Adrian Tschoegl

## Sep 11 2007 at 11:55am

You may be interested in my March 25, 2007 contribution to Robin’s blog under the rubric,

“Useful bias”

I would like to introduce the perhaps, in this forum, heretical notion of useful bias. By useful bias I mean the deliberate introduction of an error as a means to solving a problem. The two examples I discuss below are concrete rather than abstract and come from my training as an infantry officer many years ago. Now technology solves the problems they solved, but the examples may still serve to illustrate the notion.

The first example comes from land navigation, which is the use of compass and map to get from one point to another. One standard problem is to get from a point in a wood, or other occluded terrain, to a point on a road or the like, some distance away. The unbiased approach is to take a bearing, i.e., determine a direction, from where one is to where one wants to go, and then follow it. The problem is that as one follows the bearing, with each step a little random lateral error creeps in so that when one reaches the road one may not be sure whether the point one is seeking is to the right or the left. The biased approach is to follow a bearing that is sufficiently to the left or right of the objective that when one reaches the road one can assume with a high degree of probability that the objective is to the right or left.
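The navigation example can be sketched numerically. This is a minimal simulation under assumed numbers (100 legs of travel, 0.5 m of random lateral drift per leg, a 50 m deliberate offset; all figures are illustrative, not doctrine): with an unbiased bearing, which side of the objective you come out on is a coin flip, while with a deliberate left offset you arrive left of it essentially every time, so "turn right" is a reliable rule.

```python
import random

def walk_to_road(aim_offset, legs=100, drift_sd=0.5):
    """Walk a bearing in `legs` short legs; each leg adds random lateral
    drift. Returns the final lateral position relative to the objective
    (negative = left of it, positive = right of it)."""
    lateral = aim_offset
    for _ in range(legs):
        lateral += random.gauss(0, drift_sd)
    return lateral

random.seed(1)
trials = 1000
# Unbiased aim (offset 0): left vs. right of the objective is a coin flip.
unbiased_left = sum(walk_to_road(0) < 0 for _ in range(trials))
# Deliberate 50 m left offset: the accumulated drift (std dev ~5 m here)
# almost never overcomes the offset, so you know to turn right.
offset_left = sum(walk_to_road(-50) < 0 for _ in range(trials))
```

The deliberate bias does not shrink the error; it converts an error of unknown sign into one of known sign, which is what makes it useful.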

The second example comes from directing artillery fire to strike a target that one can observe, but that is an unknown distance away. The unbiased approach is to estimate (guess) the distance, notify the gunners, observe the first shot, and then walk subsequent shots towards the target in increments of distance. (Up 100. Up 100. etc.) The biased approach is “bracketing” the target. In bracketing, the observer estimates (guesses) the distance, and then adds a large increment to the estimate to ensure that the first shot will fall beyond the target. The observer then halves the distance between subsequent shots, ideally cycling through a sequence of over and under shots. As n increases, X + (0.5)^n β, where β is the unknown bias in estimating the unknown range X, converges on X. Experiments have shown that on average, bracketing will result in a faster convergence of the fire on the target than will walking.
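The two fire-adjustment strategies can be compared in a few lines. This is a sketch under assumed, illustrative numbers (a 100 m walking increment, an 800 m deliberate overshoot, and a 25 m "close enough" radius), not a model of real gunnery:

```python
def walk_fire(true_range, first_guess, increment=100):
    """'Walking' fire: add a fixed increment each shot until a round
    lands at or beyond the target. Returns the number of adjusting shots."""
    shot, shots = first_guess, 0
    while shot < true_range:
        shot += increment
        shots += 1
    return shots

def bracket_fire(true_range, first_guess, overshoot=800, close_enough=25):
    """Bracketing: deliberately overshoot so the target is trapped between
    a short and a long round, then halve the bracket each shot, so the
    remaining error shrinks like (0.5)^n."""
    low, high = first_guess, first_guess + overshoot
    shots = 0
    while high - low > close_enough:
        shots += 1
        mid = (low + high) / 2
        if mid < true_range:
            low = mid    # short round: raise the bracket floor
        else:
            high = mid   # long round: lower the bracket ceiling
    return shots
```

For a target at 1750 m and a first guess of 1000 m, walking needs 8 adjusting shots while bracketing needs 5: the deliberate overshoot buys geometric rather than linear convergence.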

So long as satellites and batteries don’t go dead, GPS and laser range finders now solve the land navigation and ranging problems in an unbiased manner. Still, the questions that motivated this post remain: is the notion of useful bias itself useful? That is, are there other, more pacific examples in the cognitive realm?

## Matt

## Sep 11 2007 at 12:38pm

Adrian’s example of a useful bias when targeting an enemy formation should be looked at when the forward observer is not sure what kind of enemy formation he is looking at. A fast-moving mechanized unit may not be the target you want to creep up on, as your creep rate may be slower than the enemy’s movement.

Confirmation biases are a problem for economists because we often see the first, expected effect, and the side effects come later. We may know the target is out there, but if we creep up on it, it performs differently, responding to the bias and giving the wrong signals.

But, overall, Arnold seems to lean toward the comfort zone effect, where there is a level of error in everyday things that we need to operate efficiently.

## General Specific

## Sep 11 2007 at 12:53pm

In a mathematical sense, one can make an initial estimate that prevents a solution from converging to the optimal answer. Humans are not computers, but one must still be concerned that bias is both inefficient and non-convergent. I would assume that a good faith effort on the part of a human inquirer would overcome bias.

The example discussed in the comments above, of purposely introducing errors, is valid, but it must be clear that errors are effective, as described above, only if they are random. There are efficient algorithms for solving a problem–whether genetic algorithms or otherwise–in which the initial estimates are randomized. Errors are not an effective means if not random. Bias is not random, therefore it cannot be considered an effective way to reach an optimal solution.

## Adrian Tschoegl

## Sep 11 2007 at 1:22pm

General Specific: I like your point about seeding certain search algorithms with random errors to reduce the probability of finding local maxima, but would reiterate that the deliberate errors in the land navigation and the adjusting fire examples are biased errors (they have a direction) aimed at overcoming problems created by random error.

## General Specific

## Sep 11 2007 at 1:59pm

Adrian: Understood. Your examples are interesting. What distinguishes them: they have a known solution that is visible. We know when we have the answer–we’ve reached the target. In the first example, a bias is introduced so that the final error is in a known direction–and can then easily be removed. The second example is like a binary search–guess low, high, then halve away (or something appropriate) until the solution is reached.

The difficulty is when the solution (target) is unknown. E.g. are higher infant mortality rates in the US a result of improved technology, poor pre-natal care, etc. Who knows. In the best of all worlds, people with different biases will pursue the issue and then–working together–create an optimal solution. In the real world though, I see people with different biases–conservative and liberal to use somewhat useless and loaded terms–as talking past one another. Their biases prevent them from accepting any solution that doesn’t fall within their desired solution set.

## Stan

## Sep 11 2007 at 4:41pm

I seem to be biased towards Robin’s viewpoint for a couple reasons.

1. He seems to have studied physics, economics, statistics, philosophy, and mathematics very extensively. I have interests in all these areas, but, being an economics grad student, I seem to have only dabbled in each of them by comparison. Since we have the same interests but he has studied more than I have, it is hard to take his opinions with a grain of salt.

2. The passion and sincerity with which Robin approaches overcoming bias, combined with his evidently high degree of intelligence, make me suspect that there is something that Arnold and Tyler are missing. I have not seen the same passion from Arnold on a topic, even when he discusses health care or global warming.
