Somebody mentioned an experiment that “claims to test rationality by asking a group of people to rate their driving/whatever ability. Most people report ‘above average’, and the conclusion drawn is that people irrationally overstate their abilities.”
What’s going on here is amazingly complicated. To judge a skill, you (or any agent) have to rely on your own judgment. And you can’t very well notice all your own mistakes, or you wouldn’t make them. Your own mistakes are necessarily hard for you to see.
This is like resubstitution error in statistics: If you use data to construct a statistical model, then calculating the model’s error rate by plugging the same data back into the model (the resubstitution error rate) will normally give you too small a number. The only general ways I know to get accurate model error rates involve constructing the model with less than all the data. A rational agent ought to make use of all the information it has, but that has the side effect that the agent can’t tell how often it is wrong, and so by extension it can’t tell how often other agents are wrong either. How do you know if somebody else is wrong if you don’t know what’s right?
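To make the resubstitution point concrete, here is a small sketch of my own (not part of the original argument) in Python, assuming scikit-learn is available; the synthetic dataset and the decision-tree model are arbitrary illustrative choices. Scoring the model on the very data used to build it reports an error near zero, while a held-out split shows how often the model is actually wrong.

    # Illustrative sketch only: resubstitution error vs. held-out error.
    # Assumes scikit-learn; the dataset and model are arbitrary choices.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # A noisy synthetic classification problem (20% of labels flipped).
    X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                               flip_y=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                        random_state=0)

    # An unpruned decision tree can effectively memorize its training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # Resubstitution error: judge the model with the same data that built it.
    resub_error = 1 - model.score(X_train, y_train)    # typically about 0.00

    # Held-out error: judge it with data it has never seen.
    holdout_error = 1 - model.score(X_test, y_test)    # typically far larger

    print(f"resubstitution error: {resub_error:.2f}")
    print(f"held-out error:       {holdout_error:.2f}")

Cross-validation is the usual refinement of the same idea: the model is always judged on data it did not get to use.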
I think it’s safe to ignore the exact details of this kind of psych experiment. People mostly don’t know the difference between mean and median, or have much idea of accident rates (not that they’d take base rates into account anyway, according to other findings). You might be able to find an understandable way to ask a more precise question, like “Imagine all the drivers in this area ranked according to their probability of being involved in a fatal accident. What do you think is your percentile rank?” But I suspect you’d get the same kind of answers, with the same kind of errors.
So what these experiments suggest (though they don’t prove it) is that humans fail to (fully) correct for the error of relying on their own judgment. I don’t think it matters whether one calls that rational or irrational.
Why are you worried about rationality in the first place? The theory that humans are approximately decision-theoretically rational was touched with empirical antimatter by the 1970s, and nothing can unannihilate it. The modern idea of “limited rationality” admits that no real agent can be completely decision-theoretically rational anyway, because that requires the ability to follow arbitrarily long chains of reasoning instantly. So nowadays the trend is away from approximate complete rationality and toward exact limited rationality, if you can make sense of that. In the case of humans, we don’t know what the limits are to our reasoning ability, so we don’t have the means to decide whether humans are limitedly rational in any given sense. (If you like, you can assume a set of limitations, which amounts to choosing to define humans as rational or irrational. But why?)
In other words, the question “are humans rational?” is ill-formed. Economists can stop worrying about it. Unfortunately, instead you have to worry about mysterious details of human psychology. And that is the point of behavioral economics, though I doubt that many behavioral economists see it quite my theoretical way.
Original version, October 1997.
Updated and added here January 2012.