Malcolm Gladwell was recently in London marketing his latest book, Talking to Strangers. The book is about the nature of truth. It weaves together stories, some well-known, some less so, of people who are misunderstood, often with damaging consequences.
Most of the stories are about people who were trusted when they shouldn’t have been. People like Bernie Madoff, who ran his Ponzi scheme undetected for over twenty years, and Ana Montes, a Cuban spy who operated at the heart of US intelligence for sixteen years. Others are about people who weren’t trusted when they should have been, like Amanda Knox, wrongfully convicted of murder by an Italian court. Gladwell wonders why we so often get it wrong. His focus is less on the peddler of the deceit and more on the consumer. He tells the story of Montes through the eyes of Reg Brown and Scott Carmichael, counterintelligence personnel who, for all the signs that were there, failed to uncover the double agent among them. The Bernie Madoff story he tells through the eyes of Harry Markopolos, one of the few people to doubt Madoff, yet whose warnings went unheeded.
Gladwell suggests three reasons why it’s so difficult to get people’s intentions right. The first he calls default to truth — our core working assumption that people are basically honest. The second he calls the illusion of transparency, that we trust our own instincts more than the evidence suggests we should. And the third is that we underestimate the importance of context.
Each of these is a notable behavioural bias. Gladwell pulls them together to create a framework for understanding why communication with strangers can go off-kilter. But by themselves they each offer useful insights into how we think. In this post I discuss the first two — default to truth and the illusion of transparency. I will return to the last idea, the importance of context, later.
Default to Truth and Fake News
We all know politicians lie. The latest Ipsos MORI Veracity Index, taken in the heat of the UK General Election campaign, found that just 14% of the public said they trust politicians in general to tell the truth, the lowest score in ten years. (Although pollsters themselves ranked fairly low at 54%, which may call into question the veracity of the data!) Surveys conducted by the fact-checking charity Full Fact show that 77% of people think truthfulness in UK politics has got worse since the last general election in 2017. Indeed, according to Tortoise Media, a total of 77 untruths were uttered by parties during their campaigning for the 2019 general election. It’s no wonder the audience laughed when, in a party leaders’ election debate during the campaign, Boris Johnson said the truth was important to him.

It’s the same in the US. The Washington Post runs a tally of President Trump’s false and misleading statements. At the time of writing he was on 15,413, equivalent to an average of 14.6 per day in office, with the rate having increased over the past few months.
Yet people continue to be taken in by what politicians say, in spite of the output of the various fact-checking services that have sprung up in recent years. Are people just gullible? According to Tim Levine, professor of communications at the University of Alabama, they’re not — at least not consciously. Levine’s theory is that our minds are hardwired to accept virtually all of the messages we receive as honest. He calls it Truth-Default Theory and it’s a big part of Gladwell’s book. It explains why we trust people we shouldn’t.
From an evolutionary standpoint, this makes sense. As infants, we have no choice but to trust that our parents will cater to our needs, and as adults it is simply easier and more harmonious to conduct a lifestyle based on openness rather than paranoia. At an individual level, studies show a broad correlation between generalized trust and well-being. Indeed, sometimes we actively don’t want to spot lies in order to preserve our self-confidence. Collectively, defaulting to trust removes sufficient friction from relationships to allow co-operation to develop, and the creation of social institutions which concentrate trust allows such co-operation to scale.
Yet our blissful ignorance of deceit leads to a paradox. It’s called the Gell-Mann Amnesia Effect, after the Nobel Prize-winning physicist Murray Gell-Mann. His friend Michael Crichton describes it:
“You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”
So hardwired is our default to truth that it resets immediately after an alarm has been triggered.
The phenomenon is described in Levine’s work, which is collated in his book Duped: Truth-Default Theory and the Social Science of Lying and Deception, published in the shadow of Gladwell’s. In one of his classic experiments he showed that subjects were able to spot a truth 84% of the time but a lie only 23% of the time. The reason they were better at identifying truths than lies follows directly from our propensity to accept what is said as true. In fact, the only reason lie accuracy was as high as 23% is that some people are exceptionally bad liars.
The experiment was repeated in an environment of high suspicion. Subjects were directly and explicitly told that the experiment was about lie detection, that the partner would be lying about some of their answers, and that it was their job as a subject in the experiment to identify the lies. Despite these cues, people were still better at identifying truths than lies. They now spotted truths with 74% accuracy and lies with 43% accuracy. Even when supposedly looking for lies, they performed worse than chance at identifying them. The alarm did nothing to quash their propensity to see things as inherently truthful.
But does heightened suspicion lead to a better overall outcome? A lie detection rate of 43% is worse than chance but it’s a lot better than the 23% subjects were hitting before their antennae were raised. Perhaps that is enough to improve accuracy over the combined wave of truths and lies that subjects are exposed to?
Well, not really. For any wave of communication where the proportion of truths is about 66% or more, overall accuracy in the base case will be better than overall accuracy in the case of high suspicion. For example, if the mix is 90% truths and 10% lies, the base case yields an overall accuracy of 78.1% (0.843*0.90 + 0.225*0.10) and the high suspicion case an overall accuracy of 70.5% (0.735*0.90 + 0.431*0.10). Being highly suspicious becomes a handicap to detecting whether you are being told a lie or not. It is only when the proportion of truths slips below about 66% and the proportion of lies rises above about 34% that, according to these numbers, the increased suspicion pays off. And that’s quite extreme. UK politicians told 77 lies during the 2019 election campaign; as a fraction of everything they said, that’s way below 34%.
| | Low Suspicion | High Suspicion |
| --- | --- | --- |
| Truth accuracy | 84.3% | 73.5% |
| Lie accuracy | 22.5% | 43.1% |
| Overall accuracy based on mix: | | |
| 100% Truth | 84.3% | 73.5% |
| 90% Truth / 10% Lies | 78.1% | 70.5% |
| 80% Truth / 20% Lies | 71.9% | 67.4% |
| 70% Truth / 30% Lies | 65.8% | 64.4% |
| 60% Truth / 40% Lies | 59.6% | 61.3% |
| 50% Truth / 50% Lies | 53.4% | 58.3% |
| 40% Truth / 60% Lies | 47.2% | 55.3% |
| 30% Truth / 70% Lies | 41.0% | 52.2% |
| 20% Truth / 80% Lies | 34.9% | 49.2% |
| 10% Truth / 90% Lies | 28.7% | 46.1% |
| 100% Lies | 22.5% | 43.1% |
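As a sanity check on that arithmetic, here is a minimal Python sketch (mine, not Levine’s) that reproduces the blended accuracies in the table and solves for the crossover mix of truths and lies:

```python
# Detection rates quoted in the post (Levine's experiments), used to compute
# expected accuracy over a mix of truths and lies. Variable names are mine.

LOW_SUSPICION = {"truth": 0.843, "lie": 0.225}
HIGH_SUSPICION = {"truth": 0.735, "lie": 0.431}

def overall_accuracy(rates, truth_share):
    """Expected accuracy when a share `truth_share` of statements are true."""
    return rates["truth"] * truth_share + rates["lie"] * (1 - truth_share)

for truth_share in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5):
    low = overall_accuracy(LOW_SUSPICION, truth_share)
    high = overall_accuracy(HIGH_SUSPICION, truth_share)
    print(f"{truth_share:.0%} truths: low suspicion {low:.1%}, high suspicion {high:.1%}")

# Crossover: the truth share at which both conditions are equally accurate.
gain_on_lies = HIGH_SUSPICION["lie"] - LOW_SUSPICION["lie"]        # 0.206
loss_on_truths = LOW_SUSPICION["truth"] - HIGH_SUSPICION["truth"]  # 0.108
crossover = gain_on_lies / (gain_on_lies + loss_on_truths)
print(f"High suspicion only pays off when truths fall below ~{crossover:.0%}")
```

With these detection rates the crossover lands at roughly 66% truths, which is where the two columns of the table swap over.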
Of course, this analysis depends on the numbers, which will vary depending on conditions. These numbers, though, are directionally right. Of the multiple experiments overseen by Levine, truth accuracy has averaged around 72%, and lie accuracy around 43%. We typically perform better than chance at spotting truths and worse than chance at spotting lies.
The analysis is also based on a premise that the proportion of untruths out there has gone up. Sure, we are overwhelmed with fake news. But the sheer amount of overall news has also gone up. The increase in the total volume of communication we are subject to has been huge — tweets, articles, emails, texts, posts, podcasts, the list goes on. Most of it is true. In fact, according to Levine, the distribution of lies tends to be highly skewed, with a few prolific liars generating most of the lies. Most people are mostly honest most of the time; deception, by contrast, tends to come from a small number of actors. It’s a classic power law.

Serota and Levine, Few Prolific Liars
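To get a feel for that skew, here is a short simulation with invented numbers (not Serota and Levine’s data) in which most people tell few or no lies while a small minority tell a lot:

```python
import random

random.seed(1)

# Synthetic illustration only: most people tell 0-2 lies a day, while a small
# minority of "prolific liars" tell many. All parameters are invented.
population = 1_000
lies_per_person = []
for _ in range(population):
    if random.random() < 0.05:                    # ~5% prolific liars
        lies_per_person.append(random.randint(5, 20))
    else:                                         # everyone else
        lies_per_person.append(random.choices([0, 1, 2], weights=[60, 30, 10])[0])

lies_per_person.sort(reverse=True)
top_5_percent = sum(lies_per_person[: population // 20])
print(f"Top 5% of people tell {top_5_percent / sum(lies_per_person):.0%} of all lies")
```

Under these made-up parameters the most prolific twentieth of the population accounts for well over half of all lies told, which is the shape the chart above describes.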
The bottom line is this. If the proportion of lies out there hasn’t increased, but our suspicion threshold has gone up, then we will end up getting more wrong. Even if the proportion of lies has increased slightly, it may still lead to getting more wrong. We may catch slightly more of the liars, but at the cost of discounting many of those bearing the truth.
Fraud: The Canadian Paradox
Another way to look at truth and lies is through fraud, which is arguably deceit with a financial incentive. Over the past few years, like fake news, fraud volumes have risen, empowered by the same online connectivity.
According to UK Finance, unauthorised financial fraud losses through the banking system totalled £844.8 million in 2018, an increase of 16% compared with 2017. In addition, banks and card companies prevented £1.66 billion of unauthorised fraud in 2018, indicating total attempted fraud of around £2.5 billion. The largest component of this is card fraud (56%). Relative to the volume of card spending, fraud has not changed that much over the past ten years: in 2018 card fraud losses amounted to 8.4p for every £100 spent, down from a peak of 12.4p in 2008. Clearly there is a cyclical component to this, as fraud will typically rise in periods of recession. In addition, new methods of fraud are being devised. UK Finance began collating instances of ‘authorised push payment’ fraud in 2017, where the genuine customer is deceived into making a payment to an account controlled by a criminal. These now make up 30% of fraud losses.
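For what it’s worth, a quick back-of-the-envelope check on those UK Finance figures (the numbers are as quoted above; the script is just an arithmetic aid):

```python
# Back-of-the-envelope check on the UK Finance 2018 figures quoted above.
losses_m = 844.8       # unauthorised fraud losses, £m
prevented_m = 1_660.0  # unauthorised fraud prevented by banks and card firms, £m

attempted_m = losses_m + prevented_m
print(f"Total attempted unauthorised fraud: £{attempted_m / 1_000:.1f}bn")
print(f"Share of attempts stopped: {prevented_m / attempted_m:.0%}")

# Card fraud loss rate: 8.4p lost per £100 spent, expressed as a percentage.
print(f"Card fraud loss rate: {8.4 / 10_000:.3%} of card spending")
```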

In his book Lying for Money, Dan Davies points out a seeming paradox: fraud is typically higher in high-trust societies and lower in low-trust ones. He calls it the Canadian paradox and asks why it is that the Canadian financial sector is so fraud-ridden that Vancouver was once dubbed the ‘scam capital of the world’ (the strange tale of Quadriga being the latest example), while ship owners in Greece regularly do multimillion-dollar deals on a handshake.
The explanation of the paradox lies in Levine’s Truth-Default Theory. Fraud exploits our natural default towards truth; if we weren’t so naturally trusting, it wouldn’t happen. The difference between fraud and other forms of deceit is that fraud can be outlawed. But that imposes costs. Davies proposes that fraud is an equilibrium quantity in the economic sense. He suggests that a key decision any economy has to make is how much to spend on fraud checks. Given that checking costs money and trust is really productive, the optimal level of fraud is unlikely to be zero.
Whether the cost is direct, as in Davies’ formulation, or indirect, through higher generalised levels of suspicion — which prevents bad stuff from happening but makes it harder for the good stuff to get through — too much checking creates a friction that can cause more harm than the fraud it prevents. (Those who have recently tried to open a bank account or transfer sums of money may wonder whether that point has already been passed.)
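Davies’ equilibrium point can be made concrete with a toy model (the numbers and functional form are mine, not his): spend more on checks and fraud losses fall, but with diminishing returns, so the cost-minimising level of fraud sits well above zero.

```python
# Toy illustration of the idea that the optimal level of fraud is not zero.
# All numbers and the functional form are invented for illustration.

def total_cost(check_spend_m: float) -> float:
    """Cost of checks plus expected fraud losses (£m), with diminishing
    returns: each extra £1m of checking removes less fraud than the last."""
    baseline_fraud_m = 1_000.0  # losses if nothing were checked at all
    remaining_fraud_m = baseline_fraud_m / (1.0 + 0.01 * check_spend_m)
    return check_spend_m + remaining_fraud_m

best_spend = min(range(0, 2_001), key=total_cost)
residual_fraud = 1_000.0 / (1.0 + 0.01 * best_spend)
print(f"Cost-minimising spend on checks: £{best_spend}m")
print(f"Fraud losses still incurred at that point: £{residual_fraud:.0f}m")
```

In this toy world, driving fraud all the way to zero would mean spending far more on checks than the remaining fraud actually costs, which is precisely the friction Davies has in mind.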
The Illusion of Transparency
Tim Levine says: “Appearing honest and being honest are usually unrelated”.
He’s right, and it’s why it’s so difficult to detect a liar; yet everybody thinks they can do it. In her book The Confidence Game: The Psychology of the Con and Why We Fall For It Every Time, Maria Konnikova cites an exercise Stanford Business School professor Roderick Kramer conducts with his class. Each year he asks students how good they are at judging someone’s trustworthiness. About 95% of them think they are above average, and over three quarters of the class rank themselves in the top quarter for their character-reading ability. It clearly makes no sense, but it is a trap we fall into time and time again.
And not just in lie detection. Daniel Kahneman describes what he terms the planning fallacy in his book, Thinking, Fast and Slow. It stems from plans and forecasts that are unrealistically close to best-case scenarios and that could be improved by consulting the statistics of similar cases. But people rarely consult the statistics of similar cases, seeing their case as special, into which they have special insight. Which is why in rail projects undertaken worldwide between 1969 and 1998, average cost overrun was 45% and average passenger take-up was overestimated by 106%. And why the Scottish Parliament building in Edinburgh came in at a cost of £431 million, after an initial estimate of £40 million.
Michael Mauboussin has built on this principle to encourage investors to adopt an outside view when they look at investment opportunities. He suggests they consider base rates gleaned from similar types of situations that have occurred in the past in order to protect against the thinking that ‘this time is different’.
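As a purely hypothetical illustration of what taking the outside view might look like in practice (the figures and the weighting scheme are assumptions of mine, not Mauboussin’s prescription):

```python
# Hypothetical illustration of blending an inside-view estimate with a base
# rate from comparable past projects. Figures and weights are illustrative.

inside_estimate_m = 100.0     # project team's own cost estimate, £m
base_rate_overrun = 0.45      # typical overrun seen in similar past projects
weight_on_inside_view = 0.3   # how much credence the inside view gets

outside_estimate_m = inside_estimate_m * (1 + base_rate_overrun)
blended_m = (weight_on_inside_view * inside_estimate_m
             + (1 - weight_on_inside_view) * outside_estimate_m)

print(f"Inside view:  £{inside_estimate_m:.0f}m")
print(f"Outside view: £{outside_estimate_m:.0f}m")
print(f"Blended:      £{blended_m:.0f}m")
```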
When it comes to trust, people think they can gauge it by getting up close and taking the inside view. We’ve already established that people don’t much trust politicians. In the US, according to Gallup (January 2019), 20% of people approve of the way Congress is handling its job. Strangely, though, at the same time 53% of people approve of the way their individual congressional representative is handling their job. Everyone approves of the representative they know, but not of the collective. Known as Fenno’s paradox, this can also explain the success of many candidates who pitted themselves against Parliament in the recent UK “People versus Parliament” election.
The ‘circle of trust’ (as Robert de Niro dubbed it in Meet the Parents) has wider political implications. Nassim Taleb has said:
“With my family, I’m a communist. With my close friends, I’m a socialist. At the state level of politics, I’m a Democrat. At higher levels, I’m a Republican, and at the federal levels, I’m a Libertarian.”
But there’s a huge difference between intimacy and seeing someone briefly as a means to gauge trust. Judges could take heed. Gladwell cites research in his book showing that judges in bail hearings are not that good at predicting who will reoffend. Out of over 550,000 defendants brought before arraignment hearings in New York between 2008 and 2013, judges released 400,000. Out of those 400,000, 15% failed to appear in court later and 26% were re-arrested for another crime while on bail. A computer algorithm designed by a group of researchers did much better. The algorithm showed that the judges missed many of the really bad guys: half of the defendants flagged as the riskiest group were given bail by the judges. Among this group 56% failed to show up to court and 63% went on to re-offend while on bail.
Nor were the judges necessarily consistent. Hannah Fry describes in her book, Hello World, a study in which UK judges were asked whether they’d award bail to a number of imaginary defendants. Not only did the judges fail to agree unanimously on a single case, but in several cases individual judges didn’t manage to make the same decision on the same case when confronted with it a second time.
The illusion of transparency provides some support to the use of artificial intelligence in lie detection. We think we can spot liars, but we can’t; computers don’t make that pretence. Clearly relying on computers for such sensitive work has drawbacks, a key one being that they have the capacity to embed bias in decisions, for example around gender or race. As Fry writes, “using algorithms as a mirror to reflect the real world isn’t always helpful, especially when the mirror is reflecting a present reality that only exists because of centuries of bias.”
There’s another reason, though, why opposition to artificial intelligence arises: when things go wrong we prefer to blame a person rather than a system. The trouble with this reasoning is that everything is a system, including the one the judges operate in, both consciously and unconsciously.
Gladwell opens and closes his book with the story of Sandra Bland, a young African American woman who was pulled over by police near Houston, Texas in July 2015 for a minor traffic violation. She ended up being arrested and jailed, and three days later committed suicide in her cell. Most people blame Brian Encinia, the policeman who pulled her over. But after a detailed analysis of the case, Gladwell writes:
What went wrong that day on FM 1098 in Prairie View, Texas, was a collective failure. Someone wrote a training manual that foolishly encouraged Brian Encinia to suspect everyone, and he took it to heart. Somebody else higher up in the chain of command at the Texas Highway Patrol misread the evidence and thought it was a good idea to have him and his colleagues conduct Kansas City stops in a low-crime neighbourhood. Everyone in his world acted on the presumption that the motorists driving up and down the streets of their corner of Texas could be identified and categorized on the basis of the tone of their voice, fidgety movements, and fast-food wrappers. And behind every one of those ideas are assumptions that too many of us share—and too few of us have ever bothered to reconsider.
A system was at play, and the way we think about systems is the subject of the next post.