Book Review: Fooled By Randomness
Posted: May 25, 2010 | Author: Greg Linster | Filed under: Book Reviews
Using his trademark aphoristic bent, Friedrich Nietzsche wrote: “Arrogance in persons of merit affronts us more than arrogance in those without merit: merit itself is an affront”. I’ve come to realize that some people find Nassim Taleb’s arrogance quite repugnant, but, personally, I find it rather charming. I suspect that the same people who find Taleb’s arrogance off-putting are the people who wish they possessed a shred of his erudition. Nietzsche was certainly on to something; it’s hard to avoid being offended by your betters.
I think I first read Fooled By Randomness circa 2006. Recently, I felt a longing to reread Taleb’s first non-technical book. Wow, what a wise decision that was! I actually digested more from the rereading than I did from the initial reading (and I digested quite a bit the first time). Both times, I focused on reading the book very, very slowly. Obviously, the fact that I took the time to reread this book is indicative of how valuable I think it is.
Known for his great wit, the baseball pitcher Vernon Louis “Lefty” Gomez was fond of saying, “I’d rather be lucky than good.” This phrase, in essence, is one of the central themes of the book. Although it sounds like a hackneyed platitude, Gomez understood the role of randomness in our lives. Due to myriad biases, however, we humans tend to attribute our successes to skill and blame bad luck for our failures. Is your rich neighbor or your boss really as skilled as she thinks she is?
Parts of the book are also about the hindsight bias and the narrative fallacy. We humans are great at fabricating post hoc narratives about our world. It’s how we understand (and misunderstand) the world, but we must remember not to take our stories too seriously. “A mistake is not something to be determined after the fact,” writes Taleb, “but in the light of the information until that point.”
One of Taleb’s favorite philosophers is Karl Popper. However, Taleb wasn’t always enthralled with the man who espoused the beauty of empirical falsification. Prior to rediscovering the great philosopher, Taleb went through a self-identified anti-intellectual phase early in his career as a trader. He feared becoming a corporate slave with “work ethics” (a term he interprets to mean inefficient mediocrity). “Philosophy, to me,” Taleb writes, “became something rhetorical people did when they had plenty of time on their hands; it was an activity reserved for those who were not well versed in quantitative methods and other productive things. It was a pastime that should be limited to late hours, in bars around the campuses, when one had a few drinks and a light schedule — provided one forgot the garrulous episode as early as the next day. Too much of it can get a man in trouble, perhaps turn one into a Marxist ideologue.” As they say, the dose makes the poison.
Speaking of poison, another interesting idea that Taleb espouses is that being too attached to your beliefs is poisonous. As he puts it: “Loyalty to ideas is not a good thing for traders, scientists — or anyone.” I like to think about it this way: there are times we shouldn’t trust experts precisely because they are experts. There is no incentive to be brutally critical of your own ideas. A scientist or a preacher who has built a career on a certain idea obviously has a lot invested in that idea. How likely are they to be critical of their own position when their livelihood depends on it being accepted? What if they are putting out pseudo-scientific nutritional guidelines that cause harm, but help them keep their job?
According to Popper there are only two types of theories:
1) Theories that are known to be wrong, as they were tested and adequately rejected (he calls them falsified).
2) Theories that have not yet been shown to be wrong (not yet falsified), but remain exposed to being proved wrong.
If you accept Popper’s epistemology, as I do, you can never claim to know that a theory is true. In other words, we gain knowledge only by proving that things are false. For instance, when I accidentally find myself in a theistic debate, people often challenge me to tell them how the universe came into existence. When I say “I don’t know,” they become infuriated. How dare I have the gall to dismiss some of their religion’s claims as untrue without proposing my own claim about reality? Yet that’s exactly the point. I gain knowledge by learning what’s wrong, not by making claims about what I think is right.
So what should we make of Taleb’s extreme and obsessive Popperism in a more practical sense? How does he recommend we apply it to our lives? I think it can be summarized in the following passage:
I speculate in all of my activities on theories that represent some vision of the world, but with the following stipulation: No rare event should harm me. In fact, I would like all conceivable rare events to help me. My idea of science diverges with that of the people around me walking around calling themselves scientists. Science is mere speculation, mere formulation of conjecture.
The following thought experiment really helped me internalize this message. Assume you participate in a gambling game with a 999/1000 chance of winning $1 [Event A] and a 1/1000 chance of losing $10,000 [Event B]. A straightforward calculation shows that the expectation is a loss of roughly $9 (multiply each probability by its outcome and sum: 0.999 × $1 − 0.001 × $10,000 ≈ −$9). Which event would you bet on? I suspect that most people consider only the frequency, or probability, in their decision, but on its own that is irrelevant. According to Taleb, even MBAs and economists with some statistical training fail to understand this point. The magnitude of the outcome matters just as much as its likelihood. Think of a trader who bets on Event B: sure, he is likely to bleed slowly for long periods of time, but when the rare event happens, the payoff is astronomical compared to the losses. Most of us, however, are schooled in environments that focus on games with symmetrical outcomes (e.g., a coin toss). The great psychologist and father of behavioral economics, Daniel Kahneman, also reminds us that we are loss averse and psychologically struggle with the idea of bleeding out small losses for extended periods, even if there is eventually the opportunity for a huge payday.
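For readers who like to see the arithmetic spelled out, here is a minimal sketch of the expected-value calculation behind the thought experiment (the probabilities and payoffs are the ones from the example above; the variable names are my own):

```python
# Expected value of the asymmetric bet in Taleb's thought experiment.
# Event A: 999/1000 chance of winning $1.
# Event B: 1/1000 chance of losing $10,000.
outcomes = [
    (0.999, 1),        # Event A: small, frequent win
    (0.001, -10_000),  # Event B: rare, large loss
]

# Expectation = sum of (probability x payoff) over all events.
expectation = sum(p * payoff for p, payoff in outcomes)
print(f"Expected value per round: ${expectation:.2f}")  # -> roughly -$9.00
```

The point of the asymmetry is visible in the numbers: Event A is 999 times more frequent, yet the single rare outcome dominates the expectation.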
Once you realize that life is full of scenarios with asymmetrical payoffs, your thinking (if you’re anything like me, anyway) will be permanently altered. In fields like, say, writing, the outcomes are asymmetrical. In other words, there is no linear relationship between the number of hours spent writing and the amount of income one makes. One may spend a long time writing for free and then finally catch a huge book deal. For me, this is somewhat of a moot point because I’d write for free with no justification other than the fact that it’s fun and makes me happy. However, all other things being equal, if I could also make money doing something I love, I would be very happy.
Here’s another piece of practical wisdom that I really enjoyed: “stay away from people of a competitive nature, as they have a tendency to commoditize and reduce the world to categories, like how many papers they publish in a given year, or how they rank in the league tables.” These are the same kinds of people who think that their GPA reflects their intelligence. Or that the number of hours they spend running on a treadmill reflects their fitness. Or that their inherited wealth says something about their genetic fitness. Or that their expensive clothes make them beautiful. I could continue on and on, but I think you get the point.
I often hear those around me complaining about how life will be better when they achieve “X”. Alas, I’m human and guilty of making claims like this on occasion too. The trouble is that, for most of us anyway, we won’t really experience long-term improvements in our happiness when we achieve “X”. Throughout the book, Taleb devotes a fair amount of time to alerting readers to what the literature in behavioral economics tells us about our irrational tendencies and biases.
For example, there’s the social treadmill effect: you get rich, move to a rich neighborhood, then feel poor again once you compare yourself to your new peers. Then you may work your ass off and get rich again, only to repeat the cycle. If you want to feel worse about yourself, the most reliable advice I know of is to hang around people who are wealthier than you. I often try to remind myself that I’m living a life that is materially better than that of 99.9% of all humans who have ever existed, and yet I still have the audacity to claim, sometimes, that I don’t have enough. Pathetic.
At one point in the book, Taleb writes: “I see no special heroism in accumulating money, particularly if, in addition, the person is foolish enough to not even try to derive any tangible benefit from wealth (aside from the pleasure of regularly counting the beans)”. In other words, money is only valuable if you use it as a tool to extract enjoyment from life.
If it isn’t clear, I think he is referring to the likes of Warren Buffett, whom people tend to see as virtuous simply because he has been able to accumulate hoards of money. What many people fail to understand is that there is nothing virtuous about having money just for the sake of having it. How someone earned what they have tells you a lot more about them than how much they have. We generally tend to think that having money signals other traits about a person, but I’ll remind you that there is a lot of noise in those signals (think inheritance). Having money doesn’t necessarily signal any superior traits.
Those who want to make a lot of money are greedy and shouldn’t try to deny that motivation. Greed, however, is not necessarily a bad thing. As Adam Smith taught us, one man’s greed can create more wealth for society as a whole (provided the individual’s wealth is ethically obtained).
Do cigarette smokers understand probabilities? If so, how can they rationally understand the ills of cigarettes and yet be foolish enough to smoke them anyway? When I go for walks near hospitals I’m always surprised by the number of people in scrubs (perhaps some of whom are doctors and nurses) who I assume are well aware of how harmful cigarettes are, but smoke them anyway. Apparently, intellectually understanding something and being able to put it into practice are two different things.
One thing Taleb also writes about is the selection bias in blogging and book reviewing. The cover of my edition of Fooled By Randomness has an excerpt praising Taleb as one of the “hottest thinkers” in the world. While I certainly agree, I couldn’t help but smirk after reading that line — can you say selection bias?
Any book that is worth reading twice is worth reading more than twice. When you love a writer, you want to hear his opinion on just about everything.