Long-time readers will be familiar with the work of Dan Davies. His 2018 book on fraud, Lying for Money, is a classic in the genre. In the early days of Net Interest, Dan contributed a post that looked at Wirecard as an extension of earlier European frauds. Well, I’m pleased to host Dan here again to coincide with the release of his new book, The Unaccountability Machine: Why Big Systems Make Terrible Decisions – and How The World Lost its Mind. Like me, Dan is a former equity research analyst and has seen up close how large organisations, in the finance industry and beyond, frequently make poor decisions. His book explores the phenomenon through the lens of “management cybernetics”, a useful framework you’ll no doubt apply in all sorts of places once you’ve read the book.

Paid subscribers can read on for my comments on banks’ first quarter earnings. It’s been a year since the collapse of Silicon Valley Bank. Has anything changed?
First, though, over to Dan:
Decisions Nobody Made
“The purpose of a system is what it does” — Stafford Beer
About five years ago, I started to get very interested in an obscure subject called “management cybernetics”. It was a product of the technological dreamscape of the 1960s and 70s; after the invention of the computer, but before it became ubiquitous, in a period when there was still room for speculation about how artificial intelligence would change our world.
I had just finished my previous book (Lying for Money: How Legendary Frauds Reveal the Workings of the World), and was keen to say a bit more about how organisations go wrong. It seemed to me that it might be possible to expand the concept of a “criminogenic organisation” (one where the incentives structurally produce illegal behaviour) to a more general “bad-decision-o-genic organisation”.1 And furthermore, that the weird mixture of pure mathematics, philosophy, accountancy, physics and economics that came together in the work of now-forgotten management gurus like Stafford Beer and Barry Clemson might be the way to think about it.
What do bad decision-making organisations have in common? Quite a few things, but one of the clearest signs is something you might call an “accountability sink”. This is something that might be familiar to anyone who has been bumped from an overbooked flight. There is no point getting angry at the gate attendant; they are just implementing a corporate policy which they have no power to change. But nor can you complain to the person who made the decision – that is also forbidden by the policy. The airline has created an arrangement whereby the gate attendant speaks to you with the voice of an amorphous algorithm, but you have to speak back as if to a human being like yourself. The communication between the decision-maker and the decided-upon has been broken – they have created a handy sink into which negative feedback can be poured without any danger of it affecting anything.
This breaking of the feedback links is, I think, one of the most important things that has happened to large organisations – banks, but also large corporations and government departments – over the last fifty years. In most cases, it’s not been carried out purely as a responsibility-dodging exercise, or as part of a conscious effort to make things worse. That has happened, on occasion, but for the most part, after spending a lot of time looking into examples, I concluded that feedback links were being broken simply because they had to be. The world keeps growing and getting more complicated, which means that individual managers gradually become overwhelmed; the problem of trying to get a sensible drink from the firehose of information that pours into any large organisation every day has become unbearable.
And this is why institutions have started delegating decisions to systems – credit scoring algorithms, regulatory risk weighting formulas and the like. As well as allowing decision-making to be automated and industrialised, they provide a psychological defence system, shielding individual human beings from the consequences of having to make a decision and own it.
Most of the time, these systems work well. But when they break down, the consequences can be spectacular. Because every such algorithm or rulebook is, implicitly, based on a model of the thing it’s meant to govern. And every such model is capable of failure. And when something comes along that’s outside the model – like, for example, a sustained nationwide fall in US house prices – you end up in a situation where literally nobody knows what to do.
It’s really striking to compare the two big crises of the last two decades. The Global Financial Crisis beginning in 2007 was purely a matter of book entries in computers. No actual physical capital was destroyed, nobody died. By contrast, the COVID-19 pandemic beginning in 2020 was a massive blow to productive capacity – millions of people died, buildings were rendered unusable. But it was the first of these two crises that led to massive scarring and a prolonged global recession, not the second. Why?
The reason, arguably, is that if you consider the world economy as an organism, the pandemic attacked its muscles and sinews while the financial crisis attacked its brain. The global financial services industry is a crucial part of the distributed decision-making system of the world, and its core component is a very old, but still surprisingly poorly understood technology called debt.
In the strictest, purest sense, debt is an “information technology” – it’s one of the mechanisms human beings have invented to handle information. By structuring an investment in someone else’s project as a debt, you immediately reduce the space of possible outcomes to two – you get paid back, or you don’t. There are a lot of other information-processing techniques that banks and investors use, from statistical credit scoring to modern portfolio theory, but this is the big one. It allows a modern bank to keep track of vastly more financial investments than would ever have been possible for a medieval merchant in the first days of double-entry book-keeping. Rather than having to preserve face-to-face relationships with every single borrower, you can rely on the fact that 99% of mortgage loans get paid back in full and on time, and concentrate your attention on managing the 1% of cases where something goes wrong.
The trouble is that if you build a business on this basis, what happens when it turns out that there’s a small variance? Unfortunately, a small variance in the proportion of good loans from 99% to 97% means a tripling in the number of bad loans, and consequently a huge excess load on the systems that are meant to deal with them. Faced with this massive cognitive overload, the system froze. And even more unfortunately, in a world in which trillions of dollars need to be rolled over and refinanced every day, the one thing that the financial sector cannot do is stop for a moment to regain its bearings. If information processing were free and the bankruptcy process frictionless, the Global Financial Crisis would have been over in a month. As it was, all the information which had been attenuated by the use of multiple layers of secured debt came back, suddenly unattenuated and needing to be dealt with.
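The arithmetic here is worth spelling out, because the asymmetry is what does the damage. A toy sketch, with hypothetical numbers (the one-million-loan book and the staffing assumption are illustrative, not from any real bank):

```python
# Hypothetical bank: one million mortgages on the book, with workout teams
# sized for the ~1% of loans that go bad in a normal year.
loans = 1_000_000

normal_default_rate = 0.01    # 99% of loans repay in full and on time
stressed_default_rate = 0.03  # repayment slips just two points, to 97%

normal_bad_loans = int(loans * normal_default_rate)
stressed_bad_loans = int(loans * stressed_default_rate)

print(normal_bad_loans)    # problem loans the system is built to handle
print(stressed_bad_loans)  # problem loans it suddenly has to handle
print(stressed_bad_loans / normal_bad_loans)
```

A two-point move in the headline repayment rate looks trivial, but the workload lands entirely on the exception-handling side of the business, which sees its caseload triple overnight.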
That’s the “cybernetic history” of the debt crisis which I outline in my book, and I think it’s a useful alternative perspective to the economic one, and one which makes it more comprehensible that a relatively small market for synthetic CDOs turned into a continental crisis. But this might not even have been the most pernicious use of debt seen in our lifetimes.
Another aspect of debt, considered as an information technology, is that it affects the information environment of the borrower. If you are managing a company which has borrowed money, making your payments becomes one of the survival conditions for that company. At low levels of debt, generating short term cash flow is one priority among others, but for a highly indebted company it becomes a signal which swamps all others. You might want to change the world, but if you don’t meet the coupon payments, you’ll never get the chance to see if your other strategic priorities would have worked.
Consequently, a company with lots of debt cannot help but have a bias toward the short term. Which might be considered problematic, as the last few decades in the Western capitalist world have seen the rise of an industry (leveraged buyouts, or “private equity”) which has made it part of its fundamental operating strategy to load companies up with debt. Considered in this light, debt is a technology of control as well as of information – it’s a means of exerting discipline on management teams who might otherwise be tempted to follow priorities other than short-term financial returns.
This is, as far as I can tell, the real meaning behind the populist critiques of “financialisation” in the economy. There’s really nothing particularly bad about the growth of the financial sector, even to the extent that it’s outstripped the growth of the “real” economy. Quite simple mathematics ought to be enough to convince us that as the economy grows, the number of links and relationships between producers, consumers and investors will grow at a faster rate, and so you’d expect the parts of the economy in which decision making and information processing take place to grow faster than the “real” economy. It’s the same logic by which the brains of primates take up proportionally more energy than those of rodents; finance is part of the real economy, just like the cerebellum is a real organ.
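That “quite simple mathematics” can be made concrete. A minimal sketch of the combinatorics, assuming (for illustration only) that any pair of participants in an economy can in principle transact:

```python
# If an economy has n participants and any pair can trade, the number of
# possible bilateral relationships is n * (n - 1) / 2 -- it grows roughly
# with the square of the number of participants, not linearly.
def possible_links(participants: int) -> int:
    return participants * (participants - 1) // 2

for n in (10, 100, 1_000):
    print(n, possible_links(n))
```

So when the number of participants grows tenfold, the space of relationships that must be priced, monitored and settled grows roughly a hundredfold, which is one way of seeing why the information-processing part of the economy should outgrow the “real” part.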
What’s bad about “financialisation” is neither more nor less than the over-use of debt. Modern corporations do often behave badly, and they make systematically worse decisions than they used to; this isn’t a delusion of age. They do this partly because they have outsourced key functions (cutting themselves off from important sources of information), and partly because their priorities are warped by the need to generate short term cash flow. Both of these problems can in large part be traced back to the private equity industry, working either as a direct driver of excess leverage, or as a constant threat which makes managers behave as if they were already subject to its discipline.
Management science and cybernetic history are all about things which began as solutions, then turned into problems because the world changed. Once upon a time, back in the 1970s, private equity and LBOs were the solution to a problem of lazy, sclerotic incumbent management teams, self-dealing and failing to make tough decisions. But it’s now the 2020s, and private equity may itself be the biggest problem in our global information processing system. The way that corporate history progresses is that we try to keep up with the ever-increasing complexity of the world, and then when this is no longer possible, we have a crisis and reorganise. We’ve had the crisis – or perhaps we are still going through it – and now it’s time to think about how to reorganise.
The Unaccountability Machine by Dan Davies, Profile Books £22, 304 pages
Paid subscribers read on for my update on US bank earnings and how they are doing against the backdrop of interest rates that are priced to stay higher for longer.