Historical analogies are a way for us to find our footing. Psychologists regard them as a key plank in the learning process. ‘History doesn’t repeat itself, but it often rhymes,’ they say. So it’s no surprise that the coronavirus crisis draws comparison with a range of historic events.
The disease itself has been compared with the flu pandemic of 1918. Although there have been other flu pandemics since, the ‘Spanish flu’ of 1918 is known for its heavy death toll of 20-50 million. Like all analogies, this one has its differences, and a stand-out difference is that the 1918 flu targeted the young, while coronavirus seems to target the old. That the 1918 pandemic was, oddly, largely forgotten as a historical event is another feature that doesn’t resonate with the way life is being experienced under coronavirus.
The public response has been compared with the Second World War. Angela Merkel said (on 18 March), “Since the Second World War, there has been no challenge to our nation that has demanded such a degree of common and united action.” Meanwhile, in the UK, commentators have summoned the spirit of the Blitz to ease the burden of social restrictions. Of course, this time both nations are fighting a common enemy. And Keep Calm and Carry On is being used a lot more now than it was back then.
The suddenness is being compared with 9/11. Worldviews changed that day, just as they may be changing again. 9/11 heralded a fear of terror on a scale never before confronted. Like today’s virus, the terror did not discriminate. It is hard to believe that eight weeks ago the stock market was making new highs and the prospect that the state could legislate against basic human freedoms to the extent it has seemed ridiculous. In the space of a few weeks travel restrictions escalated from a few flights being cancelled from ‘high risk’ places like Beijing and Northern Italy, to flights being cancelled because demand for non-essential travel collapsed, to flights being cancelled because countries literally closed their borders.
From an economic perspective, the analogy is the global financial crisis of 2008. Company finances are being ravaged and monetary authorities are rolling out many of the packages to ease strains in the financial system that they moth-balled after the 2008/09 financial crisis. In fact, the CEO of Marriott International, Arne Sorenson, went as far as to say on 20 March that “Covid-19 is having a more severe and sudden financial impact on our business than 9/11 and the 2009 financial crisis combined.” On Thursday 26 March, US jobless claims jumped to 3.28 million, four times the prior peak in records that have been compiled since 1967. JPMorgan economists expect US GDP to be down 40% in 2Q with unemployment hitting 20%.
Other analogies abound. The Great Depression of the 1930s; natural disasters; and Prohibition – which amputated an entire industry from the economy, much as industries such as hospitality are being poleaxed today.
Of them all, the financial crisis of 2008 seems to offer the best framework. But not for its steer on the economic fallout, rather for the spotlight it shines on our understanding of the virus itself.
Mark to Market
One of the challenges in the early stages of the financial crisis was dimensioning the losses that were building. No-one knew how big they were, nor where they were. The Financial Crisis Inquiry Commission report recalls that in early 2007 “Goldman marked mortgage-related securities at prices that were significantly lower than those of other companies.” The divergence came to a head in April 2007 when Goldman valued a Bear Stearns hedge fund’s positions at between 65 cents and 100 cents on the dollar. A few weeks later it revalued the positions, going as low as 55 cents on the dollar. Ralph Cioffi, the manager of the Bear Stearns fund, resisted. He proposed using fair value marks based on his team’s models, implying losses that were significantly less than those using Goldman’s marks.
The different approach to marks back then is reflected in the different approach to testing today. On 1 March Nassim Taleb made the point on Twitter that the “losses” were by then endemic; it was just that some countries were measuring them more accurately than others.
Several weeks later the lack of wholesale testing still makes it difficult to dimension the true scale of the problem. By now it is apparent that the number of confirmed cases (2 million globally at the time of writing) is not a valid reflection of the number of actual cases. But how many actual cases there are remains unknown.
- An Imperial College study estimates that 4.04% of the UK population has been infected as at 14 April (with 95% confidence between 2.87% and 5.67%).
- The UK based Centre for Mathematical Modelling of Infectious Diseases estimates that 3.6% of symptomatic cases in the UK were reflected in official reported numbers as of 14 April (with 95% confidence between 3.1% and 7.7%).
- A survey of 1,500 people in Austria found that 0.33% were “acutely infected” at the beginning of April (with 95% confidence between 0.12% and 0.76%).
- Iceland’s small population and developed healthcare system have allowed it to test more of its population than any other nation. By 11 April it had tested 10% of its population, with results implying that between 0.3% and 0.8% of the country could be infected.
- In Italy all the residents of the small town of Vò were tested in the two weeks leading up to 20 March. The process found that at the time of the first symptomatic case some 3% of the population had already been infected.
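The confidence intervals in surveys like Austria’s are standard binomial intervals and can be reproduced in a few lines. As a rough illustration (assuming, hypothetically, that 0.33% of 1,500 corresponds to 5 positive tests), a Wilson score interval gives:

```python
import math

def wilson_interval(positives, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical round numbers: roughly 5 positives out of 1,500 sampled
low, high = wilson_interval(5, 1500)
print(f"prevalence: {5/1500:.2%}, 95% CI: {low:.2%} to {high:.2%}")
```

This lands close to the reported 0.12%–0.76% range; the published figures will differ slightly depending on the exact method used (Clopper-Pearson, for instance).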
As with the Bear Stearns hedge fund over a decade ago, some countries are incentivised to under-represent their caseload. One example is Japan, where the number of confirmed cases began to ramp up immediately after the Tokyo Olympics had been officially postponed, leading some to speculate that the number had been artificially suppressed. China too was criticised for under-reporting the severity of the disease when it first broke out in Wuhan in December 2019.
Mark to Model
As well as questioning the validity of valuations being employed, the financial crisis also exposed flaws in the models used to underpin various financial structures. “Financial institutions and credit rating agencies embraced mathematical models as reliable predictors of risks, replacing judgment in too many instances. Too often, risk management became risk justification.” (Financial Crisis Inquiry Commission report).
Right now, policy is being steered by a range of epidemiology models that are being used to predict the spread of the virus. The problem with these models – as with all models, including those used on Wall Street in 2007/08 – is that they are only as good as the assumptions they rest on. “All models are wrong,” said statistician George Box, (“but some are useful.”) When it comes to epidemiological models there is a broad range of uncertainty around many of the inputs.
That uncertainty can be categorised in three buckets.
The first covers the quality of the data, such as the number of people infected. As outlined above, this remains a key unknown. Some models use statistical techniques to infer total infections, such as the one from the Centre for Mathematical Modelling of Infectious Diseases which extrapolates from fatalities; others use sampling, as in Iceland. Even the hardest variable of all – fatalities – is subject to uncertainty. The UK’s Department of Health and Social Care (DHSC) reports daily the number of people who died in hospital before 5pm the previous day. But what about those who didn’t die in hospital, but died in care homes or at home? (My neighbour was allowed home from hospital to die – where does he appear in the DHSC dataset?) And what about those who died anyway from other causes – those who died with Covid rather than from Covid (like a friend’s father)? There is also typically a lag between when a death occurs and when it is recorded in the data – ‘incurred but not reported’ is the jargon used in the insurance industry. If that lag is consistent it may not impact the output of the model much, but any change in the lag can skew resulting projections materially.
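The extrapolate-from-fatalities approach can be sketched in miniature. The infection fatality rate (1%) and the infection-to-death lag (roughly three weeks) below are illustrative assumptions, not the Centre’s actual parameters:

```python
def implied_infections(cumulative_deaths, ifr=0.01):
    """Scale recorded deaths up by 1/IFR to estimate cumulative infections.

    Because of the infection-to-death lag, the estimate applies to
    roughly three weeks before the deaths were recorded. Both the 1%
    IFR and the lag are illustrative assumptions.
    """
    return cumulative_deaths / ifr

# 10,000 recorded deaths with a 1% IFR imply ~1,000,000 infections
# as of roughly three weeks earlier.
print(f"{implied_infections(10_000):,.0f}")
```

The sensitivity is obvious: halve the assumed IFR and the implied caseload doubles, which is why the unknown IFR dominates the uncertainty around these estimates.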
The amount of data being generated through this crisis is immense. Unlike in 1918 when ‘20-50 million’ died, by the time this is over we will know precisely how many died. But it may take hindsight to discern the signal from the noise. When I look back at my notes from the financial crisis, there were a lot of datapoints that were fleetingly useful yet turned into dead ends.
Data quality is further compromised when comparing between countries. Datasets from different countries are not really compatible with each other, yet with every political system on the planet confronted with a common challenge, there is a tendency for commentators to compare responses. While the overall shape of the spread looks similar, mortality rates remain stubbornly different. Many explanations have been offered, including healthcare capacity, population density, global connectedness, prevalence of underlying conditions like obesity, timing of lockdown, and demographics. But it may not be until we are further along the curve that robust comparisons can be made. Singapore had the virus under control before seeing a second wave emerge in April. Germany is being seen as a leader in testing, yet its regime is not without criticism. Sweden has resisted a lockdown on the scale mandated in other European countries; the jury is still out on its strategy.
The second source of uncertainty stems from how the models have been calibrated. The Imperial College model is a bottom-up “individual-based simulation model”. The Institute for Health Metrics and Evaluation (IHME) model is a top-down “curve-fitting tool to fit a nonlinear mixed effects model to the available… cumulative death data”. Both models were initially calibrated using data from Wuhan. In Imperial’s case, “based on fits to the early growth-rate of the epidemic in Wuhan, we make a baseline assumption that R0=2.4 but examine values between 2.0 and 2.6.” In IHME’s case, “the value of the covariate multipliers… was assumed to closely follow the fit obtained from data from Wuhan, which is the time series to reach a stable state in the training dataset.”
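The sensitivity to R0 can be demonstrated with a minimal textbook SIR model (far simpler than Imperial’s individual-based simulation, and purely illustrative). Moving R0 across Imperial’s examined range of 2.0 to 2.6 materially changes the share of the population ultimately infected in an unmitigated epidemic:

```python
def sir_attack_rate(r0, infectious_days=7.0, dt=0.1, days=500):
    """Integrate a minimal SIR model and return the final share infected.

    This is a standard textbook SIR, not Imperial's model: gamma is the
    recovery rate, beta = r0 * gamma the transmission rate. The 7-day
    infectious period is an illustrative assumption.
    """
    gamma = 1.0 / infectious_days
    beta = r0 * gamma
    s, i, r = 1.0 - 1e-4, 1e-4, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i  # everyone ever infected

for r0 in (2.0, 2.4, 2.6):
    print(f"R0={r0}: final attack rate {sir_attack_rate(r0):.0%}")
```

The gap between the low and high ends of the range amounts to roughly ten percentage points of the population, which is the difference between a health system coping and being overwhelmed.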
Yet Wuhan may not be representative of how the virus spreads in other places and may skew the models to the pessimistic side. Clearly, as more data accrues from other parts of the world, the models’ performance should improve. So-called VaR (value-at-risk) models employed by financial institutions before the financial crisis purported to predict with at least 95 percent certainty how much a firm could lose if market prices changed. But the models failed because they relied on assumptions based on limited historical data.
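A toy historical-simulation VaR shows the failure mode concretely. Fitted to a hypothetical window of calm daily returns, the 95 percent VaR looks reassuringly small; it can say nothing about a crash-sized move that sits outside the window:

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the loss not exceeded on
    `confidence` of the observed days."""
    losses = sorted(-r for r in returns)
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# A hypothetical calm window: daily returns all within +/-2%
calm = [0.010, -0.005, 0.002, -0.020, 0.015, -0.010, 0.008, -0.012,
        0.004, -0.003, 0.011, -0.015, 0.006, -0.008, 0.013, -0.001,
        0.009, -0.011, 0.003, -0.006]
var_95 = historical_var(calm)
print(f"95% one-day VaR: {var_95:.1%}")  # capped by the worst day it has seen
```

A -10% day, of the kind seen in March 2020, is five times worse than anything in this window, yet the model assigns it no weight at all because it has never observed one.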
The third source of uncertainty stems from the feedback mechanism inherent in the models. What, for example, is the realised impact of policy intervention on behaviour? It seems that in the first three weeks of lockdown in the UK, compliance has been higher than anticipated – schools had been expected to operate at 20 percent capacity, but have been running at 2 percent; the furlough scheme had been expected to cost £10bn, but recent estimates are that it will cost £40bn. These compliance rates may not be constant as we move into the second three weeks.
Another feedback loop emerges from the unintended consequences of behaviour driven by the model. The best example of this outside of epidemiology is in car design. Studies have shown that motorists typically drive faster when wearing seatbelts and closer to the car in front when their vehicle is fitted with anti-lock brakes. Epidemiological models don’t currently incorporate widespread mask-wearing as a parameter, at least in the UK, where it has not yet been advised. If they did, they would presumably also have to accommodate an increase in risk behaviour, as more people use masks as cover to take risks they otherwise wouldn’t.
The key here is that models are not static – they are dynamic. Their dynamism sometimes breeds confusion, as if a model should spit out an unchanging answer. Republicans on the House Oversight Committee are calling for a hearing to review the ‘modelling platforms’ used to project the extent and impact of the coronavirus pandemic, claiming they exhibit “conflicting data”.
There are echoes of the Y2K bug in some of the criticism being levelled at modellers. An estimated US$300bn was spent prior to 1 Jan 2000 in solving a problem that never materialised. Planes didn’t fall out of the sky, power systems didn’t fail and bank accounts weren’t wiped out. Yet because the alternative history in which that US$300bn wasn’t spent was never exposed, so-called ‘fear mongers’ got a bad name. That bad name persisted through Brexit and into the current crisis.
The rise of artificial intelligence means that the world runs on models like never before. But just when they should be at their most useful, there is a danger that they lapse into their most useless. Even the simple earnings model of the kind I have been grinding out for 25 years is under scrutiny. Jamie Dimon, Chairman and CEO of JPMorgan said on his recent earnings call:
“There are no models that have GDP down 40%, unemployment growing this rapidly… There are also no models have ever dealt with a government, which is doing a PPP program, which might be $350 billion, it might be $550 billion, unemployment where it looks like 30% or 40% of people going unemployment but higher income than before they went on unemployment. What does that mean for credit card or something like that or that the government is just going to make direct payments to people? … And I think people – you’re making too much mistake trying to model it. When we get the end of the second quarter, we’ll know exactly what happened in the second quarter…”
Perhaps the key flaw inherent in financial models in 2007/08 relates to correlation. A major assumption backing many of the securitisation models in circulation at the time was that house prices would not decline simultaneously across the whole of the US. After all, they hadn’t before. When house prices started to fall nationwide and defaults increased, it turned out that mortgage-backed securities were in fact much more highly correlated than the rating agencies had estimated. Eventually, in late 2008, Moody’s was forced to introduce an asset correlation assumption two to three times higher than it used before the crisis. By then it had downgraded most of its ratings on CDO structures.
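The effect of the correlation assumption can be reproduced with a one-factor Gaussian copula, the workhorse behind many securitisation models of the era. The portfolio below (100 loans, each with a 2 percent default probability) is entirely hypothetical; the point is that raising asset correlation barely moves the expected loss while fattening the tail dramatically:

```python
import random
from statistics import NormalDist

def tail_loss(rho, pd=0.02, n_loans=100, sims=2000, pctile=0.99, seed=1):
    """99th-percentile portfolio loss under a one-factor Gaussian copula.

    Each loan defaults when sqrt(rho)*Z + sqrt(1-rho)*e_i falls below
    the default threshold; Z is the shared (housing-market-like) factor.
    All parameters are illustrative.
    """
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd)
    losses = []
    for _ in range(sims):
        z = rng.gauss(0, 1)  # the common factor hits every loan at once
        defaults = sum(
            1 for _ in range(n_loans)
            if (rho ** 0.5) * z + ((1 - rho) ** 0.5) * rng.gauss(0, 1) < threshold
        )
        losses.append(defaults / n_loans)
    losses.sort()
    return losses[int(pctile * sims)]

print(f"low correlation (5%):   99th pct loss {tail_loss(0.05):.0%}")
print(f"high correlation (30%): 99th pct loss {tail_loss(0.30):.0%}")
```

Moving the correlation assumption from 5 to 30 percent is exactly the kind of two-to-three-times revision Moody’s was eventually forced into, and the tail loss it implies is several times larger.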
Health system capacity is similarly built on the assumption that people do not get sick simultaneously across a population. Nor does it account for the additional layer of correlation that emerges when healthcare staff themselves get sick at the same time, restricting the supply of healthcare provision while demand is going up. On 5 April the UK Secretary of State for Health and Social Care reported that 5.7% of hospital doctors were off sick or otherwise absent because of the virus; a doctors’ survey suggested a much higher figure of 14.6%.
A lot of modelling of optimal critical care capacity has taken place, but it tends to optimise for efficiency rather than for resilience and understates the likelihood of a rise in correlation. At midnight on Thursday 27 February there were 4,122 adult critical care beds available in England, according to NHS data, of which 81.1% were occupied. That occupancy rate had ranged between 75.3% and 88.1% over the ten years since August 2010. Since then, demand has clearly gone up. Fortunately, UK healthcare capacity is not yet full. The ability to create new capacity quickly is helpful, as the fit-out of the NHS Nightingale Hospital at the ExCeL Centre demonstrates, although the availability of additional ventilators remains an issue.
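The arithmetic of why an occupancy-optimised system has so little runway is stark. Taking the NHS figures above (4,122 beds, 81.1 percent occupied) and a hypothetical surge that starts at 50 critical-care patients and doubles every three days (illustrative numbers, not measured ones), the spare capacity is exhausted in under two weeks:

```python
def days_until_full(total_beds=4122, occupancy=0.811,
                    initial_patients=50, doubling_days=3):
    """Days until an exponentially growing surge exhausts spare beds.

    The surge parameters (50 patients, 3-day doubling) are illustrative
    assumptions; the bed figures come from NHS data for 27 February.
    """
    spare = total_beds * (1 - occupancy)
    day = 0
    while initial_patients * 2 ** (day / doubling_days) < spare:
        day += 1
    return day

print(f"spare beds: {4122 * (1 - 0.811):.0f}")
print(f"exhausted on day {days_until_full()}")
```

Under these assumptions the roughly 780 spare beds are gone on day 12, which is why capacity optimised for average occupancy offers so little protection against correlated demand.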
The financial crisis taught that buffers are needed to compensate for the risk of correlation. Banks established additional capital and liquidity buffers following their crisis. It seems clear that those same buffers were not in place with respect to critical care beds, ventilators, personal protective equipment and other elements of healthcare provision. It is perhaps no coincidence that Goldman Sachs had at its disposal millions of masks that it was able to donate to healthcare systems in the U.S. and Europe, which it had acquired over a number of years following prior epidemics like SARS “as a part of our operational risk management efforts”.
Network effects and super-spreaders
When officials let Lehman go bankrupt on Monday 15 September 2008, they didn’t anticipate the chaos it would unleash. Andrew Ross Sorkin relays in his book, Too Big to Fail, the conversation that Hank Paulson, US Secretary of the Treasury, had with President Bush that morning:
Paulson said he was cautiously optimistic that investors would be able to accept the news but warned him that there could be further pressure on the financial system… Paulson walked Bush through the Fed’s plan to keep Lehman’s broker-dealer functioning so that it could complete its trades with other banks. “We’re hoping that over the next couple of days, they can unwind this thing in an organized way,” he said. While Paulson was clearly more disturbed than the president about Lehman’s bankruptcy, he expressed his elation about Bank of America’s decision to buy Merrill Lynch, a sign, he suggested, “of strength” in the market that might “mitigate” the possibility of panic.
But the possibility of panic wasn’t mitigated. Over the next few days shockwaves reverberated throughout the financial system. On Monday officials spent the day looking for ways to support AIG. On Tuesday Lloyd Blankfein, CEO of Goldman Sachs “told Paulson about a new problem he was seeing in the market: Hedge funds that had traded through Lehman’s London unit were suddenly being cut off, sucking billions of dollars out of the market.” On Wednesday Tim Geithner, President of the New York Fed, started the day “thinking about what fresh hell the day would bring. He was most anxious about the latest shocking development: A giant money market fund, Reserve Primary Fund, had broken the buck a day earlier… Investors had started liquidating their accounts, which in turn forced managers to impose a seven-day moratorium on redemptions. Nobody, Geithner worried, knew just how extensive the damage could end up being.”
What officials had underestimated is the connectedness of the financial system. The two most widely cited explanations as to why Lehman wasn’t bailed out are that moral hazard made it politically unacceptable, and that legal structures were not in place to do it. But a third is an error of omission: policymakers lacked a detailed understanding of the linkages at play within the financial system. The ultimate cost involved in cleaning up the chaos – the bailout of AIG, the support given to other financial institutions, the structures put in place to underpin money market funds – was a multiple of what it would have cost to bail out Lehman. One estimate puts the cost at US$498bn. This compares with estimated losses to Lehman creditors of US$145bn, which represents the maximum loss the US government might have incurred had it bailed Lehman out (the actual loss would likely have been lower had the resulting chaos been averted).
The message here is clear and it is one Matthew Jackson makes in his book, The Human Network:
From a network perspective, stopping contagions at earlier points is always easier and cheaper than letting them play out and then trying to clean things up afterward.
There are many features of a financial network that make it prone to contagion. Adam Kucharski lists some of them in his book, The Rules of Contagion. Two in particular are relevant to coronavirus. First, rather than connections being scattered evenly, a handful of firms typically dominate the network, as network maps of the 66 largest banks in the Fedwire system make clear.
Second, financial networks are typically ‘disassortative’, which means that highly connected elements, rather than forming clusters between themselves, are mostly linked to less connected elements. This can lead to a more widespread contagion. Lehman was undoubtedly a dominant player in the financial network; when it failed it had trading relationships with over a million counterparties. Although the post-crisis rallying cry centred around banks being ‘too big to fail’, the more accurate representation was that many were ‘too connected to fail’.
Disease can propagate through a network of people in a similar way. Epidemiologists have long known that some people are simply more connected than others, and some places ‘hotter’. In the SARS epidemic, this was a major factor: 20 percent of cases caused almost 90 percent of transmission. In the current coronavirus epidemic, the disease spread very quickly through a religious sect in South Korea. Indeed, across the world, various hubs have become sources of ‘super-spreading’ events – churches, ski resorts, hospitals, a Biogen company conference in Boston. Yet it’s not known exactly how: whether the cause is asymptomatic carriers, people with symptoms that linger but who are not sick enough to isolate, or people who shed an unusual amount of virus. And from a population perspective, unlike with SARS, the precise distribution of super-spreaders is not currently known either.
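The ‘20 percent cause 90 percent’ pattern falls naturally out of an overdispersed offspring distribution. A standard way to model this, following the negative-binomial approach used in the super-spreading literature (with illustrative parameters), is to draw each case’s infectiousness from a gamma distribution and its secondary cases from a Poisson:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for the modest rates drawn here)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def top20_share(r0=2.0, k=0.1, cases=20_000, seed=7):
    """Share of all transmission caused by the most infectious 20% of cases.

    k is the negative-binomial dispersion parameter: small k means a few
    cases do most of the transmitting. r0 and k are illustrative.
    """
    rng = random.Random(seed)
    offspring = sorted(
        (poisson(rng.gammavariate(k, r0 / k), rng) for _ in range(cases)),
        reverse=True,
    )
    total = sum(offspring)
    return sum(offspring[: cases // 5]) / total

print(f"k=0.1 (SARS-like): top 20% cause {top20_share(k=0.1):.0%}")
print(f"k=10 (near-even):  top 20% cause {top20_share(k=10):.0%}")
```

With a SARS-like dispersion parameter (k around 0.1), a small minority of cases accounts for the bulk of transmission; with a large k, transmission is spread almost evenly, and targeting super-spreaders buys far less.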
After the crisis, financial regulators learned the importance of super-spreaders from epidemiologists. Andy Haldane, currently Chief Economist at the Bank of England, argued in 2009 that financial ‘super-spreaders’ need to be actively targeted. Subsequently, the most connected banks were formally tagged (as ‘Global Systemically Important Financial Institutions’) and required to hold more capital; ring-fences were put in place around bank activities; and the network was more regularly mapped.
Even though these ideas emerged from epidemiology, how granularly they are employed in the current crisis is unclear. Most epidemiological models assume a homogeneous network structure with people drifting around and interacting with each other in fairly evenly distributed ways. For example, the Imperial model assumes that social distancing reduces contact outside the household by an even 75 percent.
Since the financial crisis, financial institutions have had to regularly log their (trading) positions with regulators. People, on the other hand, don’t have to log their (physical) positions with regulators, at least in Western liberal democracies. But coronavirus could be changing that. Apple and Google recently announced that they are collaborating on building surveillance capability into their smartphones. Given such specific and real-time information on how people interact, a much more accurate model of virus transmission can be crafted. According to CNBC, “the way the system is envisioned, when someone tests positive for Covid-19, local public health agencies will verify the test, then use these apps to notify anybody who may have been within 10 or 15 feet of them in the past few weeks.” Apple and Google have insisted that governments will not be able to require citizens to use the software and that users will have to opt-in. But the more people that opt in, the more accurate the network representation. At 100 percent adoption there’s no need for a model at all – the territory will have displaced the map.
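The envisioned matching mechanism can be sketched as a simple set intersection. All identifiers below are hypothetical; the real protocol uses rotating cryptographic tokens broadcast over Bluetooth, and the matching happens on the device itself:

```python
# Each phone broadcasts rotating tokens and logs the tokens it hears.
# Nothing identifying leaves the phone until a user tests positive and
# uploads their own broadcast tokens. All names and tokens are made up.

heard_tokens = {
    "alice_phone": {"tok_93", "tok_17", "tok_55"},
    "bob_phone": {"tok_17", "tok_02"},
    "carol_phone": {"tok_88"},
}

def phones_to_notify(positive_case_tokens, logs):
    """Return phones that logged any token broadcast by a positive case."""
    return {phone for phone, heard in logs.items()
            if heard & positive_case_tokens}

# A positive case broadcast tok_17 and tok_41 over the past two weeks
print(phones_to_notify({"tok_17", "tok_41"}, heard_tokens))
```

The privacy properties follow from the design: the server only ever sees the tokens of confirmed cases, while the map of who met whom stays distributed across handsets.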
Fortunately, one way in which financial networks differ from the networks through which coronavirus spreads is in the number of routes transmission can take. In the financial crisis, transmission occurred through lending relationships, through shared exposure to trading positions, and indirectly through fear and panic. In this way, it was more like HIV than coronavirus. HIV can spread via sexual contact, needle sharing and blood transfusion. As far as we know, coronavirus spreads only through respiratory droplets.
Coming out of this…
When the outbreak began, I thought the analogy to 2008/09 would extend to measures of what to look out for as a leading indicator of the recovery. Back then the flow of new non-performing loans was a useful metric to track and its peak presaged the recovery. Today’s analogue would be the rate of change in confirmed coronavirus cases. Now I’m not so sure. Too much uncertainty overshadows these numbers, and they are in any case a function of the trade-offs made in locking down the economy. Fatalities represent a more robust number, but they necessarily lag.
The real leading indicator for recovery is a peak in uncertainty – new NPL formation was simply the proxy. And the thing that will reduce uncertainty now is more testing, including serological testing to reveal how many people have had the disease and, by extension, how many remain susceptible. Unusually, financial markets are discounting less uncertainty than broader society, because financial regulators worked quickly to reduce the range of outcomes by supporting asset prices. The VIX ‘uncertainty gauge’ peaked on 16 March and has been coming down since. This is different from 2008/09, when markets discounted a much higher degree of uncertainty than broader society.
By the time we come out of this, the world will have changed. How fleetingly is a subject of much debate. Some hold the view that life will revert to normal, much as it did after the 1918 flu pandemic. Others have written that the experience may leave generational scars that will influence how we behave for years to come. And a third perspective is that the crisis will simply turbocharge trends that were in train anyway.
Looking back at the financial crisis, several developments seem likely.
First, just as banks were made to sacrifice efficiency for resilience after their crisis, other segments of the economy will be made to do so now. For banks this shift came through their requirement to hold more capital and more liquidity. New financial securities were devised to provide them with contingent capital that would kick in at the point of maximum need (securities that are being stress tested in the current environment). Banks’ historic tendency towards procyclicality – whereby they would take on more risk at the same time as everyone else and vice versa – was tackled by regulators through the introduction of ‘countercyclical’ buffers.
It seems clear that, like banks before them, other players in the economy will hoard more cash after this crisis than they did before, either voluntarily or through regulation. The impact this has on the economy could be significant. Moreover, as is apparent from bank valuations post-crisis, the equity value of resilience is a lot lower than the equity value of efficiency. That’s not entirely consistent with theory. The argument goes that higher resilience ➡ less risk ➡ lower cost of capital, so even though returns would be lower, valuations need not be. That argument didn’t play out, and banks enter this crisis trading on lower valuations than they did in the last one. The implications for overall market valuations as more segments of the market choose resilience over efficiency are significant.
Second, the financial crisis put a big spotlight on how a part of the world people had paid little regard to actually works. Ben Bernanke, Chairman of the Federal Reserve, told the House Financial Services Committee on 24 September 2008:
“People are saying, ‘Wall Street, what does it have to do with me?’ That is the way they are thinking about it. Unfortunately, it has a lot to do with them. It will affect their company, it will affect their job, it will affect their economy. That affects their own lives, affects their ability to borrow and to save and to save for retirement and so on.”
The same could be said of public health policy today. Healthcare is a public good. Private health insurance does not provide immunity from the disease, nor from the economic and social fallout that stems from it. We want to go about our lives confident that our neighbours are no more carriers of disease than they are security threats. Public healthcare is more than just a safety net; it is a core element of our societal infrastructure: this crisis shows that we need it collectively as well as individually. Of course, in countries like the UK, the health system is publicly owned, so the asymmetry at the heart of the financial crisis – that banks kept their profits but socialised their losses – is not present. But a debate around resource allocation will inevitably arise, with those at the centre being held to account.
Third, the financial crisis of 2007/08 was not a single event, but a series of crises that rippled through the financial system and ultimately the economy. This point is often lost on those who seek to establish a single cause. As prevailing market conditions change, bad business models have a tendency to be exposed. As Warren Buffett said, “Only when the tide goes out do you discover who’s been swimming naked.” So far, the coronavirus crisis has led to a distinct crisis in the oil market. However, few other shoes have dropped that haven’t been picked up by central banks. That could be a function of the economy freezing, or it could be because insufficient time has elapsed. It seems likely that more shoes will drop, and the further they are from the epicentre, the more concerning.
Fourth, as discussed above, it became apparent during the financial crisis that regulators did not have an accurate map of how the financial system worked. The same is true now of how social mixing works. Reports suggest that technology may be deployed to scale up contact tracing as part of a wider exit strategy. The implications for personal privacy are significant. But just as banks agreed to a revised contract with their regulators, people may agree to this one. The trade-off between personal privacy and collective health is one we may be asked to make.
Perhaps the biggest takeaway is a reminder not to fight the last battle. So much work has gone into avoiding a specific repeat of the 2007/08 financial crisis that we forgot to draw more general conclusions from that experience. Those lessons – always mark-to-market, question your models, watch out for correlation, build buffers and map your network – have never been more relevant.