The Enemy Within
On 18 June 1984, five thousand striking miners descended on Orgreave, South Yorkshire, intent on disrupting deliveries of coal to the British Steel Corporation coking plant.10 They were met by six thousand policemen, many mounted on horseback. The battle that ensued has been described as one of the most violent industrial disputes in British history. Having been outmanoeuvred by strikers in the past, the police approached Orgreave as a battle — as a chance to put an end to the strike once and for all. After the police trapped the miners in a nearby field, the Chief Constable ordered a mounted charge, followed by another, and another. During the final onslaught, officers charged in behind the cavalry and began beating miners with batons. Several hours later, the few miners that remained were charged again — this time entirely out of the field — leaving an “out-of-control police force [charging] pickets and onlookers alike on terraced, British streets”.
Despite the clear overreaction on the part of the police, almost a hundred miners were charged with various offences relating to the dispute. Seventy-one faced life sentences. They were acquitted after the Independent Police Complaints Commissioner concluded that the police had used excessive force, as well as exaggerating the violence they faced from the strikers, and committing perjury in an attempt to prosecute them. In an eerie precursor to the Hillsborough disaster, the miners also faced trial by media, where they were depicted as violent thugs attempting to undermine the rule of law. The escapade was described as “the worst example of a mass frame-up in this country this century” by one of the miners’ lawyers. The Battle of Orgreave — as it is now known — remains a stain on British policing to this day.
The Battle of Orgreave was one of the bloodiest confrontations of the miners’ strike of 1984–1985, which was called in response to Thatcher’s programme of pit closures. Though the miners did not know it at the time, Thatcher’s decision to close the pits was a deliberate attempt to spark a confrontation between mineworkers and the state — a confrontation for which she had been preparing from the moment she stepped into office. The Ridley Plan — named after its initiator, the right-wing Conservative MP Nicholas Ridley — was drawn up in 1977 and leaked to the press in 1978, and laid out a strategy for dealing with the “political threat” from the “enemies of the next Tory government”.11 It was an exhaustive battle plan detailing how the Conservatives could conclusively defeat a major union in the event of a national strike. First, the government would spend several months stockpiling coal and planning imports before announcing pit closures. Once strike action began, the plan recommended listing unions in order of their strength and taking them on one by one, starting with the most militant. It even included plans to come after the strikers in their homes by cutting off their access to dole money. Ridley anticipated that these actions would build up to a series of outright battles, for which the government should prepare by drastically expanding the capabilities of the police force, equipping officers with the latest anti-riot gear and training them for what amounted to a military exercise.
Ridley’s plan was followed almost to the letter. Thatcher stockpiled six months’ worth of coal by expanding output dramatically from 1983. One miner described how, looking back, the government had effectively forced them to “dig their own graves” in this pre-strike period.12 Ian MacGregor, the multi-millionaire US businessman whom Arthur Scargill — president of the National Union of Mineworkers — referred to as the “Yankee steel butcher”, was brought in to increase the “efficiency” of the UK’s coal industry by announcing a brutal programme of pit closures. Isolated strike action began to break out in pits across the country, and in March 1984 Scargill called a national miners’ strike. This was when the Ridley Plan truly came into its own. Thatcher’s beefed-up and newly mobile police force was ruthless. Orgreave is perhaps the most famous encounter of the strike, but many other skirmishes took place throughout the country, leaving hundreds injured and thousands arrested. The miners didn’t stand a chance. Within ten years, Thatcher had closed down nearly every pit in the country, and all but broken the miners.
Looking back, this course of events has an air of inevitability about it. The miners were fighting an uphill battle against the rise of renewable energy sources and cheap labour from abroad. The demise of the dirty, dangerous, and polluting coalfields was portrayed as a story of modernisation, which would see Britain transition from a traditional manufacturing economy to a modern service-based one. In the end, coal mining may not have had a much longer future in the UK. But the brutality with which the miners were repressed, the speed with which functioning collieries were closed after the end of the strike and the decline into which many pit communities sank through the 1980s and 1990s were far from inevitable. In what often appeared like a personal vendetta, Thatcher decided almost as soon as she became Conservative leader to stake her entire political career on a face-off with what she called the “enemy within”. What could she possibly have hoped to gain from imposing such acute suffering on the electorate? Pit villages in South Wales hardly seemed a threat to her deregulation of the stock market or her privatisation agenda.
But this is to fundamentally mistake the nature of Thatcher’s vision. In order to build the new economic model theorised by the right-wing activists at Mont Pèlerin, the last remnants of the old one had to be destroyed. As long as the British labour movement was there to resist it, Thatcher would never have been able to institutionalise neoliberalism. As one striking miner put it, “[w]e knew from day one we were firmly in Thatcher’s sights. What was stopping privatisation, what was stopping letting rip with profits, their philosophy of a free-market economy? The thing that was stood in the way was us”. In taking on the miners, Thatcher wasn’t just driving the final nail into the coffin of the British mining industry; she was waging war on the labour movement as a whole. By taking out the strongest and best-organised workers first, she knew that when she came for the remainder, resistance would seem futile. Workers in nationalised industries found themselves unable to resist privatisation, which proceeded apace with few disruptions. What remained of the labour movement found it far harder to counter the steady decline in wages relative to productivity, the deterioration in conditions and the rise of flexible working. Union membership has more than halved since the 1980s, even as the population has grown.13
From the start, this project was cloaked in the language of “efficiency”, “modernisation”, and — most pernicious of all — “economic freedom”.14 Thatcher’s groupies argued that the unions were vested interests getting in the way of the operation of the free market. The neoclassical theory of wage determination posits that workers are paid a wage equal to the “marginal product” of their labour.15 Essentially, firms pay workers a wage equal to the value of the output they produce. If a firm paid a worker more than this amount, a competitor could undercut it whilst still making a profit; if it paid less, a competitor could poach the worker with a higher salary and still turn a profit. In the perfect world of equilibrium inhabited by the professional economist, the economy runs like a well-oiled machine, everyone fulfils their function, and society’s resources are used optimally. By extension, workers who demand wages above their marginal productivity reduce the profitability of the companies they work for, and therefore the efficiency of the economy as a whole. The unions were committing a cardinal sin — disrupting the operation of the free market — and the state had no choice other than to intervene.
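The textbook version of this claim fits in one line. In the standard neoclassical sketch (my notation, not the book’s), a firm hires labour $L$ at wage $w$, sells output $f(L)$ at price $p$, and maximises profit:

$$\max_{L}\;\pi = p\,f(L) - wL \quad\Longrightarrow\quad p\,f'(L) = w.$$

The first-order condition says the wage equals the marginal revenue product of labour: pay more and a rival can undercut you; pay less and a rival can poach your workers, which is exactly the arbitrage argument above.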
But over the course of the post-war period, the marginal productivity theory of distribution largely held. When the UK had a powerful labour movement able to argue for pay rises on behalf of its members, aggregate wages and productivity rose in unison — workers were paid a wage equal to what firms could afford, no more and no less. In fact, there is now a great deal of evidence to suggest that strong union movements actually raise productivity and improve firm performance.16 But after Thatcher’s battle with the unions, wages stopped rising in line with productivity. Without the unions to demand that firms pay workers a salary equal to their marginal output, bosses had no incentive to do so. Instead, they set about internally redistributing resources from workers to shareholders, making billions in the process.17
The downward pressure on workers’ wages was reinforced by the Conservatives’ approach to macroeconomics. Theoretically, workers being paid less than their fair share of output could have left their companies and found other jobs. But the neoliberals also had a plan to prevent workers from voting with their feet. Before starting her war on the unions, Thatcher announced a war on inflation, which was running at 13% in the year she took office. Her main weapon was to be the new economic ideology of monetarism: the theory that governments can control inflation by controlling the money supply. Monetarism owed its growing attractiveness to Keynesianism’s failure to explain the concurrent rises in inflation and unemployment of the 1970s.18
According to the Phillips curve, there should have been a trade-off between these two variables — and for most of the post-war period there was — but the relationship broke down in the 1970s. Expanding government spending would have been the remedy for rising unemployment, while cutting it would have been the response to inflation — with both happening at the same time, the Keynesians were stuck. Monetarists explained the phenomenon by attributing the rise in inflation to low interest rates and excessive government spending. The only way to tackle stagflation, they argued, was to cut government spending and raise interest rates. Whilst such a course of action might create mass unemployment or cause a recession, this was the price that had to be paid for the greater good of controlling the money supply.
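Monetarism’s core claim rests on the quantity theory of money, the standard equation of exchange, given here as background rather than as anything from the text:

$$MV = PY,$$

where $M$ is the money supply, $V$ the velocity of circulation, $P$ the price level and $Y$ real output. If $V$ is stable and $Y$ grows at its own pace, inflation tracks money growth; hence the monetarist prescription to squeeze $M$ through higher interest rates and lower government borrowing, whatever the short-run cost to output and employment.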
And this is exactly what happened. As soon as she came into office, Thatcher raised interest rates to 17%.19 Given that high levels of inflation were primarily being driven by rising costs rather than rising demand, this had the predictable effect of strangling economic activity. The economy shrank by 2% in 1980 and a further 1% in 1981. Companies laid off workers and the ranks of the “reserve army” swelled. Unemployment began to rise at the start of the 1980s, and between 1983 and 1986 it never fell below three million — double the level of 1979. At such levels of unemployment, workers who are laid off find it almost impossible to find another job. Without any voice in determining their own pay and conditions, and unable to leave their jobs to find others, workers had no choice but to accept the pay they were offered by corporations. Bosses, of course, knew this, and reduced workers’ pay accordingly — either actively docking pay or allowing it to be eroded by inflation.
In this sense, the war on inflation functioned as Thatcher’s second front in her war against the unions. Discussing the monetarist policies of the 1980s, Alan Budd, a government advisor at the time, worried that “what was engineered in Marxist terms was a crisis of capitalism, which recreated a reserve army of labour and has allowed the capitalist to make high profits ever since”.20 Disempowering the unions and increasing unemployment would reduce wages — meaning more money going to owners rather than workers — and permanently reduce workers’ collective bargaining power. Monetarism was, in this way, an overtly political approach to monetary policy, even as its adherents claimed it was based on neutral economic analysis.
There is, of course, no such thing as neutral economic analysis, even though the neoliberal narrative presented itself as such. The war against the unions was justified in terms of “efficiency”, whilst monetarism was justified on the basis that it would prevent inflation. But the demise of the unions has created inefficiency in the labour market by increasing the returns to capital well above what they should be in the imaginary neoclassical economy. And monetarism failed to achieve its stated aim of controlling the money supply: as we’ll see in the remainder of the chapter, Thatcher’s deregulation of the banking system meant that the broad money supply increased faster than at almost any other point in history. But as this debt was mainly driven into housing rather than consumer goods, it created asset price inflation rather than consumer price inflation — in other words, inflation that benefitted the wealthy, not workers. Thatcher’s war on the labour movement was aimed at breaking the last vestige of opposition to financialisation — and she succeeded.
Privatised Keynesianism
In crushing organised labour, Thatcher set the stage for the institutionalisation of finance-led growth. Without resistance from the country’s workers, she could go about entrenching neoliberalism and empowering the financial elites that had brought her to power. Her reforms to the stock market, the removal of restrictions on capital mobility and the rise of shareholder value ideology had ushered in a new world order for corporate Britain.21 Profitability was restored, and businesses no longer had to worry about the problem of union activity. Stock prices soared. Top salaries skyrocketed. And the profit share of national income grew at the expense of the labour share.
But whilst this system worked for a time, such levels of excess are unsustainable over the long term. Rising inequality leads to falling domestic demand, as the rich spend a lower proportion of their incomes than the poor, which ultimately harms capitalists’ profits.22 Two problems had begun to assert themselves by the end of the 1980s. First, the financialisation of the firm and the demise of the unions led to falling pay and increasing inequality. Whilst real wages grew by an average of 3% a year through the 1970s and 1980s, this figure fell to just 1.5% in the 1990s and 1.2% in the 2000s as unemployment rose and bargaining power fell.23 Rising GDP benefitted owners rather than workers. Modelling from the TUC suggests that the wage share of national income fell from a peak of 64% in the mid-1970s to around 54% in 2007.24
Whilst pay was increasing in absolute terms in this period, most of these increases went to the top of the income spectrum, and inequality increased substantially. The UK’s Gini coefficient rose from 0.3 at the start of the 1980s to 0.34 by the start of the 1990s, a rise driven primarily by increases in pay at the top.25 Whilst incomes for the top 10% grew by an average of 2.5% a year between 1980 and 2000, those of the bottom 10% grew by just 0.9%.26 The ratio of CEO pay to the pay of the average worker increased from 20:1 in the 1980s to 149:1 by 2014.27
Secondly, and relatedly, investment in fixed capital — in the physical machinery and infrastructure needed for production — began to fall substantially from the end of the 1980s onwards. Investment in fixed capital matters because it is a critical determinant of long-term productivity, and therefore of the health of the economy. As we’ve seen, if businesses aren’t investing in production, they’re either distributing their cash to shareholders or investing in financial markets. Investment in fixed capital fell from around 25.4% of GDP in 1989 to 18.9% just five years later.28 And it kept falling: by 2004, it had reached just 16.7%. This was partly due to the state cutting its investment in the real economy. But, as outlined in the previous chapter, investment was also falling due to the financialisation of the corporation, with firms distributing their revenues to shareholders, investing them in financial markets or buying up other corporations. The long, slow decline of the UK’s manufacturing sector, which invests more in fixed capital than the services sector, also contributed.
Falling pay, rising inequality, and low investment threatened to recreate the conditions that preceded the Great Depression. Keynes and others argued that the best way to combat the low-wage, low-investment, low-demand doom-loop was for the government to intervene at strategic points to smooth the twists and turns of the business cycle. Governments would signal their willingness to do this by committing to maintain full employment, whatever the stage of the business cycle. If unemployment was rising, the state would step in to pick up the slack — either by directly increasing spending or by cutting interest rates to boost investment in the private sector. The problem, as outlined in Chapter One, was that this model of growth gave workers more power. Thatcher’s goal was to get back to a time when “the markets” — i.e. the bosses — were in control; a time when workers could be traded by businesses like any other input to production, rather than causing trouble by demanding that bosses treat them like human beings. But she had to do this without engineering a return to Depression-era economics.
The solution to this problem was to change the engine of demand: rather than business investment and state spending fuelling economic growth, from the 1980s onwards debt-fuelled consumption came to be the main driver of increasing output.29 Increases in consumption came to outstrip increases in wages; in the context of stagnating pay, the gap between income and expenditure would be covered by personal borrowing. 1988 was the first year on record in which consumers’ expenditure exceeded their incomes.30 The Lawson boom — the economic boom named after the Chancellor who presided over it — saw tax cuts, a reduction in interest rates (once the union movement had been dealt with, of course), and an across-the-board increase in household borrowing and spending. Growth increased in the short term, before collapsing in an equally large bust when interest rates had to be hiked again to keep the UK in the Exchange Rate Mechanism. In a mini precursor to 2008, a housing crisis ensued. But it wasn’t long before stability was restored and debt began to climb once again — and it didn’t stop climbing for nearly two decades.
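Stated as a simple accounting identity (my notation, not the book’s): if household consumption $C_t$ exceeds wage income $W_t$, and savings are already exhausted, the gap must be covered by new borrowing $\Delta D_t$:

$$C_t = W_t + \Delta D_t \quad\Longrightarrow\quad \Delta D_t = C_t - W_t > 0 \;\text{ whenever spending outruns pay.}$$

Growth driven by consumption amid stagnant wages therefore requires, as a matter of arithmetic, a continuously rising stock of household debt, which is exactly what the following two decades delivered.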
The genius of basing demand on private debt was that it allowed people to buy more, propelling economic growth, whilst also directing a greater portion of people’s income towards interest payments and fees that went to financiers.31 Under this system, individuals would use the tools provided to them by financial markets to weather the storms created by changes in the business cycle, making a tidy profit for the finance sector in the process.32 In this way, “privatised Keynesianism” replaced the Keynesian model of demand management that governed the post-war consensus.33
Had this model relied only on unsecured lending, like credit cards or student loans, it wouldn’t have lasted very long. If you’re borrowing to go on holiday, you have to assume that your wages are going to carry on rising so that you’ll easily be able to afford the interest payments tomorrow. And as we know, wage increases weren’t keeping up with productivity increases at this point. Instead, privatised Keynesianism relied on secured lending — lending backed up by an asset like a house. When you borrow to purchase a house, you’re left with an asset that can produce income and be sold if you can’t make the payments. What’s more, when lots of people invest in the same asset, the price of that asset tends to rise. If everyone wants to buy housing, and is able to access a mortgage, but the housing stock remains fixed, then the price of housing will rise. From the end of the recession of the early 1990s, the amount of money created and directed into housing increased at a far faster rate than the number of houses for sale, pushing up prices.
In place of rising wages, Thatcher may as well have said “let them eat houses”. Financial deregulation and Right-to-Buy, combined with the pension fund capitalism unleashed by the Big Bang, allowed the Conservatives to transform British middle earners into mini-capitalists who would benefit from the financialisation of the economy. By providing capital gains to a large swathe of the population, the Conservatives created a class of people who had a material interest in the economy remaining as it was, even if most of the gains from growth were going to the top 1%.
Blowing Bubbles
In October 2018, the record for the most expensive UK home was broken when the penthouse at One Hyde Park was sold for £160m.34 Initially, the identity of the buyer was shrouded in mystery. The property had been purchased through a shell corporation located in the tax haven of Guernsey, where companies aren’t required to disclose their beneficial owners. But a few days later the buyer’s identity was revealed. As it turned out, the developer, multi-millionaire property tycoon Nick Candy, had sold the penthouse to himself via Project Grande (Guernsey) Limited — a joint venture between his brother Christian Candy and the former Prime Minister of Qatar — so he could release the equity with an £80m loan from Credit Suisse.
Together, Nick and Christian Candy are worth £1.5bn. In the mid-1990s they got their first break when a family member gave them a £6,000 loan that they used to buy, renovate, and sell a flat in London, making a £50,000 profit. Like many property developers at the time, they used these profits to buy and flip a series of flats in London, riding the wave of the housing boom and making themselves incredibly rich. The brothers developed One Hyde Park, the most expensive residential development in the world when it was completed in 2011. Today, they are famous for their fabulous wealth, their aggressive tax avoidance and their long list of celebrity clients, including Kylie Minogue and Katy Perry. How was it possible that, over a period of just twenty years, these two brothers turned £6,000 into £1.5bn (that is, £1,500,000,000) just by investing in UK property?
The Candy brothers aren’t alone in making their fortunes on soaring London property prices. In fact, they are only the UK’s 52nd wealthiest property developers, falling far behind names such as the Reuben brothers, the Grosvenors, and the Barclays. 163 of the 1,000 richest people in the UK made their money in property, making property the single biggest source of wealth on the Sunday Times Rich List. But it’s not just the wealthy who have benefitted from rising house prices: everyone who bought a home before the boom of the 1980s has seen a windfall gain. Property wealth, worth some £4.6tn, is the second most significant source of wealth in the UK after private pension wealth.35 Prices in London have risen faster than those in other parts of the country, and property wealth now represents almost 50% of the net wealth of people living in London, compared to 26% for those living in the North East.
This increase in house prices began during the 1980s, as part of Thatcher’s push to create a “property-owning democracy”.36 Right-to-Buy legislation, which allowed tenants of social housing to purchase their home at between one- and two-thirds of its market value, was passed in 1980. In 1984, the amount of time a tenant had been living in a flat before they were able to benefit from Right-to-Buy was reduced, and the potential discounts on the property’s value were increased. In the first seven years of the 1980s, 6% of Britain’s social housing stock was sold to private owner-occupiers. But the privatisation of Britain’s social housing stock would not have been enough to create Thatcher’s nation of home owners. People needed mortgages — and that required a change to the country’s financial system. So, Thatcher deregulated the banks.
When banks lend, they create new money.37 This unique state-provided privilege is what makes a bank a bank, differentiating banks from other financial institutions like building societies. If I deposit £100 in, say, a building society, the society can keep £10 of this money and lend £90 to someone else. No new money has been created — it has just been moved from one place to another. Banks, on the other hand, can lend without first taking a deposit, because states give them the right to issue loans in the national currency, subject to certain rules. BigBank Inc could lend £90 to a consumer without actually having £90 in deposits. The amount that banks are able to lend is determined by central bank regulation. The central bank might require commercial banks to hold a certain amount of highly liquid capital (cash, shareholders’ equity, or anything relatively easy to sell) relative to their loans. Once BigBank has lent the £90 out, it might have to find £9 worth of capital to stay within the rules. But the remaining £81 is new money — the bank has not borrowed it from anyone else; it has simply created it out of thin air. Increases in bank lending can therefore increase the money supply — the total amount of money in circulation.
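A minimal sketch of the paragraph’s arithmetic, assuming a flat 10% capital requirement (the figures and the BigBank framing are the text’s toy example, not a model of real-world banking regulation):

```python
# Toy illustration of money creation through lending, following the
# paragraph's accounting: a bank issues a loan without taking a deposit,
# sets aside capital against it, and the remainder is new money.

def new_money_from_loan(loan: float, capital_ratio: float = 0.10) -> float:
    """Return the net new money created by a loan, after the bank
    sets aside `capital_ratio` of the loan's value as capital."""
    capital_required = loan * capital_ratio  # e.g. £9 against a £90 loan
    return loan - capital_required           # e.g. £81 of new money

loan = 90.0
print(f"Loan issued:      £{loan:.0f}")
print(f"Capital required: £{loan * 0.10:.0f}")
print(f"Net new money:    £{new_money_from_loan(loan):.0f}")
```

Contrast this with the building society in the same paragraph: lending £90 out of a £100 deposit moves existing money around, so the equivalent function would simply return zero.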
Prior to the 1980s, there were many more restrictions on banks’ ability to create money in this way. Before 1981 the banking “corset” — formally, the supplementary special deposits scheme — required banks to place a certain amount of cash at the Bank of England before issuing loans above a certain amount, which restricted lending and therefore the money supply.38 Once restrictions on capital mobility were removed, banks found it much easier to bypass the corset by moving their activities abroad, so it was scrapped. The removal of restrictions on capital mobility also meant that banks now had access to the big pools of money that had by then emerged at the international level. They found it much easier to borrow — whether from institutional investors or from other banks — and could use this borrowing to increase their lending. Banks started to play a much greater role in mortgage lending: in 1980, banks were responsible for just 5% of mortgage lending; this had risen to 35% just two years later.
Another important set of reforms to the financial system was the changes made to the UK’s building societies.39 Building societies had been a feature of the British financial system since the eighteenth century, when they were created in the UK’s new industrial towns and cities to build homes for those who could afford them. Older homeowners put their savings in the societies, which were then lent to younger members as mortgages. Unlike banks, they could not create money — they could only lend out the money they held in deposits. Building societies continued to grow until, by 1980, they were responsible for 90% of UK mortgage lending, which gave them, in the words of the Bank of England, a “virtual monopoly” on the mortgage market.
In 1986 — the same year as the Big Bang — the Building Societies Act was passed, which aimed to increase competition in the sector by removing the regulation that prevented building societies from operating like normal banks. After the Act was passed, building societies could do pretty much all of the things that banks could do — including create money by issuing credit. Members were bought out, becoming rich in the process, while new borrowers faced higher interest rates. Eventually many of these building societies — including Northern Rock — ended up undertaking the kind of sub-prime lending activities that caused the crisis.
Throughout the 1980s, banks and former building societies issued billions of pounds worth of mortgage debt to allow people to purchase their own homes — many used this debt to buy their council houses. This lending surge increased the UK’s broad money supply from around 40% of GDP in 1985 to 85% in 1990, mirrored by an increase in the amount of credit provided by financial institutions.40 There was now a wall of money chasing after the same amount of housing stock — and the inevitable consequence of such a scenario is house price inflation. To illustrate the point, imagine that two couples both want to buy the same house.41 The asking price is £50,000, so couple A goes to the bank and asks for a £50,000 mortgage, which is granted. Couple B then goes to another bank and asks for a £55,000 mortgage to outbid couple A, which is also granted, so couple A returns to ask for £60,000. Prices are pushed up according to how much banks are willing to lend — and without strict limits on bank lending, this led to an ongoing rise in house prices.
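The same dynamic in a few lines of code — a hypothetical sketch of the bidding war just described, in which each round’s losing couple simply borrows more (the figures and the 10% bid increment are illustrative):

```python
# Two couples bid for the same house; each round the losing couple
# returns to its bank for a bigger mortgage. The sale price ends up
# tracking what banks will lend, not what the buyers earn.

def bidding_war(asking_price: int, rounds: int = 4, step: float = 0.10) -> int:
    price = asking_price
    for i in range(rounds):
        price = round(price * (1 + step))  # rival outbids by borrowing ~10% more
        print(f"Round {i + 1}: winning bid £{price:,}")
    return price

final_price = bidding_war(50_000)
print(f"Sold for £{final_price:,} against a £50,000 asking price")
```

In reality the war ends only when a lender refuses to extend the next, larger mortgage — which is why limits on bank lending, rather than buyers’ incomes, set the ceiling on prices in this story.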
As finance came to colonise the real estate market, housing was transformed into a speculative asset. Average house prices increased tenfold between 1979 and the 2008 crash, whilst consumer prices increased by just half that amount.42 In London and the south east, the situation is even more extreme. In purchasing a house, one was no longer purchasing a roof over one’s head, one was purchasing a future: a pension, an inheritance for one’s children, and thirty years of continuous mortgage repayments. The boom, of course, ended in a bust. But before the crisis, this debt-fuelled consumption-driven growth model transformed the nature of Britain’s economy, its society, and its politics.
Thatcher couched her push for home ownership as a progressive bid to turn the country into a nation of responsible homeowners, who wouldn’t rely on the state to support their ambitions for a better life.43 Sensible, savvy consumers would choose to invest their hard-earned savings in housing and pensions, with the promise that these would continuously increase in value. Free markets would carry the nation to prosperity, unencumbered by the overbearing influence of the paternalistic state. If you were too poor, stupid or lazy to take advantage of the fantastic opportunity given by this brave new world, well, that was your own fault. The state’s role would be limited to controlling inflation, which might erode the value of your assets and hard-earned savings, by controlling interest rates.
It’s worth stating at this point just how much of a lie this vision really was. Too often people critique the Thatcherite vision by arguing that, whilst individual freedom is just as important as Thatcher claimed, it is also important to look after the collective. Sometimes individual freedom needs to be curbed in order to control the markets and reduce social ills like inequality and poverty. This argument may be true, but it accepts the neoliberal discourse on its own terms. One cannot understand Thatcherism by looking at Thatcher’s language — one has to understand the aims of her vision by looking at who benefitted from these changes. In doing so, it is easy to see how the language of neoliberalism served to conceal what was really going on: a transfer of society’s resources from those who work for a living to those who own the assets.
The Conservatives transformed British political economy by providing the wealthy with free money — realised in the capital gains they derived from increases in the value of their homes and their pensions. These rising asset prices compensated for falling wages amongst those who were able to access the credit required to purchase these assets. Middle earners were persuaded to support Thatcher’s model on the basis that they too might become wealthy one day. The rest of society — the majority — didn’t feature, other than as inputs to the production process. Thatcher may have talked about freedom, but she created a society based on unfreedom: the non-choice between work at a wage below what one deserves and destitution.
Similarly, Thatcher’s government never actually controlled the money supply, despite its monetarist rhetoric: the broad money supply increased dramatically over the course of the 1980s because of rising mortgage debt. Instead, it focused on curbing wage inflation by cutting the size of the state and restricting collective bargaining. Asset prices — mainly houses and other financial assets — rose substantially under Thatcher, even as consumer price inflation was brought under control. The ideological battle between individual freedom and collective justice provided a smokescreen that allowed the neoliberals to stratify British society — co-opting middle earners by turning them into mini-capitalists and creating a marginalised class of poorly-paid, precarious, and heavily indebted workers beneath them.
A central plank of the finance-led growth regime has been the replacement of wages by debt and private wealth as the central determinants of many households’ sense of economic prosperity.44 When households fall on hard times, rather than taking their fight to employers, they are much more likely to take out new debt. When planning for the future, those who own homes and have private pensions are more likely to rely on the value of these assets than on the social security provided by the state. In other words, the financialisation of the household has radically individualised people’s experience of the economy, leaving them to rely on individual financial management rather than collective political mobilisation to improve their standard of living.
Financialisation and Politics
The capitalist societies described by Marx were divided between the owners of the most important resources and those who worked for them. Capitalists would use their political and economic power to force everyone else to work for them at a wage below what they deserved. Landowners and financiers used their control over land and capital to extract wealth from both capitalists and workers. Property was passed down through the generations, and it was all but impossible to move between the classes. The state existed to protect owners — the franchise was strictly limited, and policy was determined by battles between different classes of owner.
But during the golden age of capitalism, Marx’s analysis of class no longer seemed to fit the experience of the global North. Strong unions meant that most workers were being paid an income approaching the value they produced for capitalists. The extension of the franchise transformed the state, which was now stepping in to provide public services and promote full employment. A new class of professional managers emerged, earning high wages, and often being remunerated in shares as well, undermining the distinction between owners and some workers. Increasing social mobility and high wages produced a society that looked much less stratified than the one analysed by Marx in the nineteenth century. Many resources were owned collectively, meaning that the amount people had to spend on rent was relatively low. And finance was reined in, meaning less was being spent on interest payments.
But under finance-led growth, society has come to look a lot more like the ones described by Marx. The wage share has fallen, and the profit share has risen. Within the profit share, the rentier share has also risen. The increase in income accrued from sources like interest and property rents has made financialised capitalism much less productive. When large amounts of income are diverted to economic rents, less money is reinvested in production and more accrues to the owners of already-existing assets. No new jobs are created when I pay my landlord rent or when a corporation pays interest to a bank — income is simply transferred from one place to another. The combination of a falling wage share and a rising rentier share saps demand out of the real economy, as well as increasing financial instability, contradictions that will be analysed later in this book.
The divide between the owners of the means of production and rentiers on the one hand, and those who are forced to work for a living on the other, is the divide between the many and the few — between those who live off work and those who live off wealth. This is the fundamental divide that characterises capitalist societies today. The political salience of this divide may rise and fall depending upon wider political economic conditions, but it never goes away. Even before the overt conflict of the 1970s, the class divisions in British society were a primary feature of politics. The division between owners and workers has become more obvious under finance-led growth as profits and asset prices have risen and wages have stagnated. But as society has become more polarised, this division has seemed to become less politically salient. Thatcher may have managed to physically constrain the resistance to her agenda of privatisation and deregulation in the 1980s, but why did people continue to support politicians advocating similar policies all the way until 2007?
The genius of Thatcherism was to mute people’s awareness of the divide between owners and workers by extending asset ownership to middle earners. The expansion of home ownership and the financialisation of the housing market convinced middle earners to side with owners instead of workers. The Conservatives built a large, stable voter base by creating an alliance between homeowners and the 1%. Middle earners who happened to be alive at the right time were able to buy homes, invest their savings in stock markets and benefit from capital gains. Bankers and financiers made huge amounts of money through mortgage lending and securitisation, whilst middle earners benefitted from rising wealth. The former provided the money, the latter the votes. This group by no means represents a majority of British society, but it has emerged as an exceptionally powerful minority.
Today we know that from the 1990s, the UK was sleepwalking into a debt crisis — one that would end in a much bigger crash than that of 1989.45 But at the time, the country was blissfully unaware of the problems that were being stored up for the future. To many people, the avalanche of cheap credit seemed like a gift from the heavens. This boom coincided with the fall of the Berlin Wall and a new era of globalisation, during which cheap consumer goods from all over the world would become more readily available than ever before in history. Working people were able to afford plasma screen TVs, mobile phones, and video game consoles.
But people’s experiences of the long boom differed depending upon their class position. On the one hand, the new property-owning classes were able to release equity from their homes to finance consumption. In this way, middle earners were able to acquire elite identities through wealth, even as wealth became ever more concentrated amongst an increasingly entrenched top 1%. On the other hand, those without access to such wealth and capital gains could still buy into the new consumer culture by taking out unsecured credit through credit cards, overdrafts, and payday loans. Corporations also jumped on the bandwagon, offering consumers low-interest credit to purchase cars and consumer durables.
Over time, the differential experiences of finance-led growth led to a divergence between the economic experience of the property-owning classes and those of everyone else. For the wealthy, debt is a luxury. Access to interest-only or low-deposit mortgages has allowed many families to jump onto the property ladder and watch their wealth increase, transforming their class identity. For others, debt is a curse. Barely able to make ends meet on their low incomes, it has become easier and easier for the poorest in society to get access to emergency loans which charge usurious rates of interest. Payday lenders will target the most desperate people in society — those with poor credit scores who have fallen on hard times — knowing that they will be unable to access credit anywhere else. A single parking ticket, a broken car or a dental emergency can leave these people bankrupt — or, for those like Jerome, much worse.
The decline of the union movement has only exacerbated these problems. Before financialisation took off, British workers had been bound together by their collective experience of exploitation in the workplace, and their organisation against it in the union movement. Without participation in the labour movement, the experience of exploitation and poverty came to be terrifyingly individualised. This was a process helped along by the changing nature of the labour market, the erosion of the welfare state and the general decline in civic participation and social capital that marked the financialisation of society. Many of those previously employed in mining or manufacturing found themselves perpetually on the dole, chastised by the media as the welfare-dependent, undeserving poor. Their experience of poverty was unique, one infused with overtones of shame, isolation, and anger.
Others found work in poorly-paid, insecure jobs in the emerging service sector, in hospitality or retail. Even the most dedicated unionists found it hard to organise in these sectors, with workers spread across the country, paid partly in tips or commissions, and forced to undertake the kind of psychologically-warping emotional labour that can erode one’s capacity for genuine connection with other human beings. Those who had been granted an education might have been allowed access into the ranks of the civil service, joining the salaried professionals themselves. Most found themselves in debt of one kind or another. This was perhaps the most torturous aspect of the new poverty: the power asymmetry between debtor and creditor is far more extreme than that between worker and boss. There can be no organising against loan sharks or payday lenders, still less against commercial banks.
At the same time, the state was retreating from providing the kind of social security that had been a hallmark of the post-war era. Risks that had formerly been socialised were privatised, encouraging middle earners to “think like capitalists” in planning and insuring for the future. Private health insurance coverage has increased as wealthier consumers seek out better care than that available on the NHS. Rising tuition fees have shifted the burden of paying for education onto individuals, who find themselves saddled with debt well into their careers. Many working families were taken in by the “delusion of thrift”, believing that their pensions and properties were increasing in value because of smart investments rather than a generalised environment of asset price inflation. This was, of course, a delusion — one that was quickly shattered in 2007, and the legacy of which many families are still dealing with. Many people’s pensions were effectively wiped out in 2008 (only to be revived through QE), some homes were foreclosed upon, and personal bankruptcies soared. Unsurprisingly, the assumption by ordinary households of what were previously socialised risks has led to a pervasive rise in feelings of anxiety and insecurity.
As greater portions of society came to rely on privatised insurance to mitigate personal risk, the socialised risk of the welfare state came to be seen as something for the poor and, increasingly, the lazy. Those who owned property extricated themselves from the welfare system, relying on asset price inflation to insure them against future risks. But this has had profound political consequences, including “alienating those with property from a welfare state for which they pay but from which they derive little benefit”.46 Such a situation allows the welfare state to be redefined as something for the poor, and eventually the lazy and unproductive. Slowly, as state benefits are restricted to an ever-smaller section of the population, support for the welfare state declines and it becomes far easier to cut. This process has been reinforced by the “neoliberal welfare discourse”, which locates the blame for worklessness amongst the unemployed themselves.
The changing relationship between class and politics was made strikingly clear by the rise of New Labour, which Thatcher later described as her greatest achievement. It might appear odd for Thatcher to praise a party that kept hers out of government for more than a decade, but she was, as ever, astute: the rise of New Labour consolidated her grand bargain between elites and the mini-capitalists. The New Labour project was based around the idea that class was no longer politically relevant; that electoral politics could be confined to societal and cultural issues, and to debates over how the gains from growth should be distributed. The commanding heights of the economy would be “left to the free market”, under the watchful eye of independent technocrats in central banks and regulatory bodies. But the entire economic model that New Labour had blindly accepted was premised upon the continuous expansion of debt and asset prices, and the Tories managed to hand over the reins just as it was entering its least stable phase. Thatcher called New Labour her greatest achievement because, just as in the 1950s the Conservative Party couldn’t touch the unions, in the 1990s the Labour Party couldn’t touch the banks.
Chapter Four
Thatcher’s Greatest Achievement: The Financialisation of the State
The Establishment decided Thatcher’s ideas were safer with a strong Blair government than with a weak Major government. — Tony Benn.
On 26 June 2002, Gordon Brown delivered a speech to City dignitaries assembled at Mansion House. “Mr Lord Mayor, Mr Governor, my Lords, Aldermen, Mr Recorder, Sheriffs”, he pronounced, “let me at the outset pay tribute to the contribution you and your companies make to the prosperity of Britain”.1 These might sound like strange remarks from the party that had, two decades previously, pledged to nationalise the banking system. But in many ways, its close relationship with the City was one of the defining characteristics of New Labour, which consistently deregulated the finance sector. Blair attempted to woo ordinarily hostile investors and executives in the City through his famous “prawn cocktail offensive”. Financiers have always been, and would continue to be, natural supporters of the Conservative Party, but Blair and Brown made significant inroads with the sector during their tenure. The consequence of this offensive was, as the FSA later noted, a total failure to properly regulate financial institutions, which ultimately contributed to the financial crisis.2
Given the power that the City of London Corporation holds within British politics, it is perhaps unsurprising that Blair felt the need to get the institution on side. Some have referred to the City as a state within a state: a shady, arcane institution designed to corrupt British politics and promote the interests of reclusive financiers.3 The City is the only space in the UK over which Parliament has no authority, and its representative in the House of Commons is the only unelected person allowed to enter the chamber.4 Its political architecture continues to be based on the medieval guild system, under which businesses have votes, with larger businesses carrying greater weight than smaller ones. In 2011, the Bureau of Investigative Journalism revealed that the City had spent more than £92m lobbying politicians and regulators in the wake of the financial crisis to limit new regulation.5 The Bureau was able to link these lobbying efforts to a series of legislative changes, including reductions in bank taxation and regulation.
But whilst the relationship between the political parties and the City may occasionally veer into outright corruption, the influence of the City on British politics is less of an aberration than a reflection of the UK’s political economy.6 In other words, it’s not so much that a small set of financial interests centred in the City of London have “captured” policymaking (though they undoubtedly have); it’s more that the individuals making policy conflate the interests of the City with those of the British state as a whole. Politicians like Blair and Brown weren’t simply vying for access to the City’s lobbying budget, they genuinely believed that deregulating financial markets would help to boost economic growth and tax revenues that could be spent on making society more equal. By taxing the revenues of the big banks, and the salaries of their employees, the British state would be able to provide public services and welfare for those in parts of the country where traditional industries had been destroyed. Globalisation may have harmed British manufacturing, but it could help to provide support for those “left behind” by bolstering the City as a global financial centre.
Whilst finance has always played a central role in British politics, in the 1980s and 1990s the City’s dominance was taken to a whole new level. This was initially catalysed by Thatcher’s policies — from bank deregulation, to right-to-buy, to the Big Bang. But Blair and Brown took this process one step further. They developed a complex and arcane regulatory architecture for the City that was easy for insiders to manipulate. These organisations were given a mandate to implement “light touch” regulation on the finance sector, to encourage “innovation” and promote investment.7 Meanwhile, billions of pounds were pumped into the UK’s real estate market, inflating a bubble that would eventually burst in the biggest financial crisis since 1929. The revenues from this model were then used to expand the provision of welfare and public services for those left out of the boom, under the auspices of the private sector, which was given responsibility for delivering public services. In other words, Blair maintained Thatcherite political economy, but sought to make the grossly stratified society that resulted slightly less unfair. However, in expanding the size of the state without challenging the dominance of finance, Blair managed to do something that no other government had achieved: financialise the state itself.
Thatcher’s Greatest Achievement
By the 1990s, high and rising levels of inequality were a defining feature of the British economy. The Conservatives had attempted to naturalise this inequality by claiming that it was the result of market forces in a globalised world.8 Over the long term, they claimed, the efficiency gains from trade would make everyone better off. Of course, the kind of globalisation taking place in the 1990s was not primarily based on trade. Instead, the 1980s had marked the start of the era of financial globalisation, which was only ever meant to serve the interests of the 1%.9 Financiers had been pushing behind the scenes for the removal of restrictions on capital mobility for decades, and when their wish was finally granted, it precipitated a global financial boom. With the political foundations of the new world order firmly hidden from sight, politicians were free to claim that rising inequality was a natural state of affairs. A focus on redistribution replaced the Labour Party’s previous “obsession” with ownership — the gains from growth didn’t have to be equally distributed if the state could tax the wealthy and redistribute their income. In other words, rather than attempting to challenge a fundamentally unfair and unstable system, Blair accepted finance-led growth and aimed to make it slightly less unjust.10
And in many ways, he succeeded. As John Hills argues in his survey of inequality in Britain during the Blair years, New Labour’s policies did marginally reduce the large inequities that had resulted from the advent of finance-led growth.11 On average, in the middle of the distribution, income differences narrowed. Child and pensioner poverty fell, and there were notable reductions in geographical inequality in some areas as Blair attempted to keep voters in traditional Labour seats on side. His hallmark focus on education meant that there was a marked narrowing of attainment gaps between the wealthiest and the poorest children.
But Hills also points out that, despite the then widespread view that New Labour reduced inequality throughout society, the picture is actually much more complex. Incomes for the top 1% grew extremely quickly — far outpacing income growth for the rest of the population. Meanwhile, the incomes of the very poorest in society grew more slowly than the average, for reasons highlighted in the previous two chapters. The combination of these two trends meant that the incomes of the richest and the poorest in society diverged substantially over Blair’s tenure. Wealth inequality also continued to grow — unsurprising given that rising asset prices are a defining feature of finance-led growth. Hills’ assessment is that New Labour managed to make slight improvements to the highly unequal income distribution handed to them by Thatcher, but that the problem of inequality was much more deeply rooted than Blair and others had assumed, and “less amenable to a one-off fix”.
In fact, rising inequality is an inherent part of finance-led growth. The growth of shareholder value ideology during the 1980s meant that companies were more focused on increasing their profits and distributing the returns to shareholders than paying and retaining their workforce. The rapid growth of the finance sector and related “professional services” industries in the City also meant rising salaries for those at the top. But perhaps the greatest driver of inequality under New Labour was rising asset prices, driven by the billions of pounds worth of new money being pumped into property and stock markets every year.
Blair and Brown had to be seen to be doing something about these issues. For a start, a commitment to making British society fairer was the one thing that differentiated Labour from the Conservatives. More generally, voters were starting to express real concern about rising inequality. As a result, Blair and Brown had to perform a balancing act: alleviating the most obvious symptoms of inequality without undermining the incentives that made Thatcherism work.12
Out of this quagmire emerged a threefold strategy. Firstly, New Labour would adopt Thatcher’s language about welfare — the responsibility for unemployment would be placed firmly on the shoulders of the unemployed.13 The only difference was that workers’ laziness and irresponsibility would be met with a “compassionate” response from the state. The emphasis would be placed on skills acquisition — hence Blair’s famous focus on education as the route out of poverty. Welfare-to-work programmes were introduced, and tax credits were brought in to subsidise low pay and “encourage” people back to work. None of these measures, of course, tackled the structural causes of low pay or unemployment; instead, they served to consolidate the division between the deserving and undeserving poor that underlay Thatcherite ideology. Those who took advantage of these welfare programmes would be seen as deserving, whilst those who did not would be punished.
Secondly, the state would learn to behave more like a private organisation itself, based on the emerging ideology of “new public management” (NPM).14 NPM advocates argued that the best way to run an economy was to subject all areas of economic activity — including state spending — to the discipline of the market. If markets didn’t naturally exist, then they should be created. After all, the lazy, corrupt, and inefficient bureaucrats who staffed the public sector had to be incentivised to behave in the best interests of the taxpayer. Introducing private-sector management techniques would promote public sector “efficiency” and improve “customer service”.15 Middle and upper management were empowered to introduce and police a set of rigid targets to hold civil servants and public sector workers to account. Mirroring the process that had taken place in the private sector after the famous “it’s not what you pay them but how” paper, senior civil servants started to be remunerated based on performance.
On the one hand, new public management ideology forced the civil service to operate much more like a private business.16 New policies were ruthlessly subjected to techniques like cost–benefit analysis to determine whether or not they would be “profitable” for the state to undertake. Such a process is, of course, meaningless, because states are not businesses. The vast majority of a state’s citizens do not behave like “customers” that will pick another, cheaper state if they don’t like the quality of service they receive. But treating the state like a business ended up benefitting those that do — the international capitalist class who can threaten to move if they are taxed too much. On the other hand, new public management also had what might be considered an unintended consequence — an increase in public sector bureaucracy. Middle management in the sector has grown substantially, and employees are continuously assessed against useless metrics that serve to create more work for all involved.
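Cost–benefit analysis of the kind described above typically reduces a policy to a net present value calculation — a hypothetical sketch, with the discount rate and cash flows invented purely for illustration:

```python
# Minimal net-present-value calculation of the kind used to decide
# whether a policy is "profitable" for the state to undertake.

def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Discount a stream of yearly net benefits (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# A policy costing 100 up front that yields 30 a year for four years:
flows = [-100, 30, 30, 30, 30]
print(f"NPV at 3.5%: {npv(flows, 0.035):.1f}")  # positive => deemed "worth doing"
```

The point of the critique above is that this framing imports a business logic — a single discounted bottom line — into decisions whose benefits are not reducible to cash flows.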
The third, and perhaps the most important, element of New Labour’s strategy for tax and spend would be to encourage the private sector to undertake public spending on the state’s behalf. The logic behind the outsourcing agenda and private financing initiatives was a natural extension of new public management thinking — what better way to introduce market discipline into the public sector than to have private companies undertake spending for the state themselves? This would be justified on the basis of “efficiency”, but its true purpose would be to allow private corporations to profit from the necessary redistribution created by the finance-led growth model. New Labour’s promise to the electorate centred on alleviating inequality without killing the goose that laid the golden eggs — finance. The genius of the privatisation agenda was using this expansion in state spending to make the goose even fatter.
PFI: Profits for Investors
Proposals for a tunnel linking the UK and France date back to the nineteenth century.17 In 1802 the French engineer Albert Mathieu-Favier put together a blueprint to dig a tunnel under the English Channel, illuminated by oil lamps to light a path for horse-drawn carriages. A desire to seal off the cliffs of Dover from any European invasion prevented the project from being taken up until 1980, when Thatcher’s newly elected Conservative government agreed to work with the socialist French President François Mitterrand to take forward the proposition. Thatcher had one condition: the project would be financed privately. This was no small ask. At the time, the £5bn Channel Tunnel was the largest infrastructure project ever proposed. Whilst France’s state-owned banks and well-regulated private investors eagerly stepped forward to provide their half of the funding, the City was less keen on the idea. This was something of an embarrassment for a British government intent on proving that it was host to the most powerful financial centre on the planet, and it took interventions by the Bank of England and the government to finally ensure that adequate capital was raised.
But this wasn’t enough to please the banks, which were worried about being exposed to what looked like a hare-brained, politically-motivated white elephant. They demanded that a new body, which became known as Eurotunnel, be created to place some distance between the banks and the construction firms. At this point, the project was becoming incredibly expensive and complicated. The Channel Tunnel Group in the UK and France–Manche in France would invest in Eurotunnel, which would be floated as a public company, before itself financing Transmanche Link, which would undertake the actual construction. Eurotunnel would, in turn, gain the concession to run services that ran through the tunnel in coordination with SNCF and British Rail and would recoup its costs over the long term through “usage charges” paid to it by the train operators.
Almost as soon as construction began, costs began to mount. The engineering problems were almost enough to derail the project on their own, but the real trouble lay with figuring out who amongst the plethora of different actors involved would pick up the extra costs. Adding to the trouble, high interest rates meant that the financing costs of the project were 140% higher than expected — a year-long delay cost an extra £700m in interest charges. By 1995, Eurotunnel was up and running, a year late, and 80% over budget. The company lost £900m in its first year of operation. Three years later, it had undergone three state rescues. In 2003, its interest payments of £320m were almost double its operating profits of £170m.
And yet, when the government decided that it needed to upgrade the rail network that went through the tunnel, it concluded that the project should, once again, be privately financed. HS1 — otherwise known as the Channel Tunnel Rail Link — also turned out to be a disaster. Yet again, a consortium was created to raise the finance needed for the project. Yet again, overoptimistic assumptions about future revenues meant that it was unable to find the funding it needed. And yet again, the government stepped in to bail out the private consortium and save the project. The Public Accounts Committee found that the project had left taxpayers “saddled with £4.8bn worth of debt”.18
As private financing has been extended into ever more areas of public spending, public investment has collapsed, falling to just 2.6% of GDP in 2018 — well below the OECD average of 3.2%.19 A recent report on private financing initiatives from the National Audit Office revealed that most of these projects have been entirely unsuitable for private financing, and that some projects are costing the public 40% more than would have been the case had public money been used directly.20 Public borrowing is always, other than in extreme cases in which states are deemed uncreditworthy, cheaper than private borrowing because it is incredibly difficult for states to default. Even when they do, it is either because they have borrowed in a foreign currency, like Argentina today, or because they don’t have control over their monetary policy, like Greece.
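To see why the source of financing matters so much, consider a stylised comparison (the rates and figures here are illustrative assumptions, not drawn from any actual project). The standard annuity formula gives the annual repayment $A$ on a principal $P$ borrowed for $n$ years at interest rate $r$:

$$ A = P \cdot \frac{r}{1 - (1+r)^{-n}} $$

On £1bn borrowed over thirty years, a gilt-style rate of 3% implies repayments of roughly £51m a year; a private-finance rate of 7% implies roughly £81m a year, nearly 60% more every year for three decades, before contractual fees and guaranteed returns are added.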
So why did successive governments continue to press ahead with PFI? Supporters argued that PFI transferred risk from the taxpayer onto the private sector.21 If the contractors delivered on time and on budget they would get paid — if they didn’t, they would lose out and their shareholders would suffer. This was supposed to introduce market discipline into the provision of public contracts. Again, the ideological justification fell far short of the reality. The private sector prefers to operate in the absence of competition. So, in exchange for entertaining the government’s new publicity stunt, the companies involved made sure that the contracts were written in such a way as to ensure that, whatever happened, they would get their money. This meant that private companies were undertaking spending on the state’s behalf without incurring any risk whatsoever.
The second justification was even more spurious. Having inherited the idea that the state functioned like a household — and that the role of the prime minister was similar to that of a good housewife — New Labour had an incentive to ensure that public spending did not reach what looked like unsustainable levels.22 This analogy was always ridiculous, especially when it comes to investment spending. If the government borrows to invest in infrastructure projects that expand the productive potential of the economy, GDP will rise, tax revenues will follow and, over the long term, the project will pay for itself. Whilst the New Labour government undoubtedly knew this, they also knew that the returns wouldn’t be recouped for many years, whilst the impact on government debt would be immediately obvious. New Labour had to avoid looking like it was going back to the bad old days of socialism, and this is where PFI came in. Private financing allowed New Labour to shift the immediate cost of borrowing off the government’s books and onto those of the private sector, even though the cost would ultimately fall on the state itself.
Private financing is another avenue through which the British state has attempted to implement a regime of privatised Keynesianism.23 Combined with the increases in household debt described in the last chapter, PFI and other outsourcing initiatives would allow for the further displacement of public spending with private debt. Except under this scheme, the private debt would be held by wealthy shareholders rather than households, and it would be backed up by an implicit government guarantee. Private corporations would be able to borrow on financial markets without taking any risk, as the state would always be there to step in and pay back the debt. This meant implementing the logic of Keynesianism — that states should borrow to invest to mute the ups and downs of the business cycle — whilst skimming some cash off the top for the private sector. In other words, state-sponsored rentierism.
State-guaranteed private borrowing creates the problem of moral hazard, a situation in which economic actors are shielded from the negative consequences of their actions. Before 2007, the banks knew that if they ran into trouble the government would always be there to bail them out — they could take huge risks today, without having to face the consequences tomorrow. This problem of moral hazard is what underlay the collapse of PFI giant Carillion.24 The firm was accepting government contracts at very low prices — less than the amount it needed to deliver the work — and eventually found itself unable to deliver its contracts and pay its shareholders. Rather than admitting it was in trouble, the company increased the amount it was paying out to shareholders and started to take on new government contracts to cover the costs of the old ones — effectively throwing good money after bad. They did so betting, no doubt, that the state would step in to rescue the company were it to encounter financial difficulties. But when Carillion collapsed in January 2018, the government did not step in to help — perhaps because of the public outrage at the incredible irresponsibility of the firm.
When the auditors came in to manage Carillion’s bankruptcy, they found that the company had just £29m in cash and owed £1.2bn to the banks, meaning that it didn’t even have enough money to pass through administration before entering liquidation. Carillion had become a giant, state-sponsored Ponzi scheme, siphoning off money from the taxpayer and channelling it into the pockets of wealthy shareholders. Whilst many of those shareholders who did not sell at the first signs of trouble have now lost out, the real losers have been the contractors and workers hired by Carillion, many of whom have found themselves out of pocket. Today, billions of pounds’ worth of taxpayers’ money is being funnelled into inefficient, financialised outsourcing giants like Carillion, only to enrich executives and shareholders, whilst leaving taxpayers to foot the bill.
The demise of Carillion epitomised the failure of New Labour’s experiments with private financing. But PFI wasn’t the only route through which public spending has become financialised — the rise of outsourcing more broadly was also to blame. Government spending can be divided into investment spending, which requires a big outlay up front to construct a potentially revenue-generating asset, and current spending, which pays for day-to-day public services provision. Upgrading the UK’s rail network might, for example, require billions of pounds to be spent today for improvements that will be felt tomorrow, whilst paying the salaries of NHS staff requires a continuous payment every year. PFI was meant to keep investment spending off the government’s books by requiring a private company to raise a lot of money up front, which the government could repay over a period of decades, with interest. But New Labour also wanted to bring the private sector into the delivery of day-to-day spending. So, it turned to outsourcing — paying a private company directly for the delivery of a public service. Many of the same firms that were brought in to deliver big infrastructure projects were also used to deliver public services.
Outsourcing has an ambiguous record.25 There are examples of relative success, where public procurement has been used wisely, as well as examples of dramatic failures, with low-quality services being delivered by unscrupulous contractors at a huge cost to the taxpayer. There are arguments for outsourcing government projects when procurement is done well and includes, for example, commitments to use companies with unionised workforces and with high environmental standards. But today, outsourcing is mostly dominated by a few big firms delivering low-quality services whilst skimming money off the top for shareholders and executives. G4S managed the security for the London Olympic Games so badly that the government was forced to bring in the army to support them.26 Serco operates some of the UK’s most brutal detention centres and has even been accused of using inmates as cheap labour.27 Capita is known for gouging many of the UK’s local authorities by delivering low-quality services at eye-watering prices.28 These outsourcing oligopolies have their tentacles spread all over the spending of the British state, from schools and hospitals, to prisons and detention centres.
The steady privatisation of public spending around the world was recently identified by the UN as the source of pervasive human rights abuses.29 The UN’s expert panel claimed that “[g]overnments trade short-term deficits for windfall profits and push financial liabilities on future generations”. Neoliberal governments have relied on privatised public spending in order to alleviate some of the inequality created by the finance-led growth regime, and to mute the ups and downs of the business cycle. They have, however, shied away from returning to the old Keynesian model of promoting full employment, given the implications this would have for power relations between workers and owners. Instead, they have sought to create a model of privatised Keynesianism, which allows executives and shareholders to profit from public spending through monopolistic corporations that pay executives huge sums whilst hiring workers on poorly-paid, precarious and insecure contracts. In other words, privatisation attempts to deal with some of the many contradictions of finance-led growth, whilst maintaining the power relations upon which it rests.
But private financing and outsourcing did not just allow private investors to extract large sums of money from the taxpayer. These innovations were also designed to insulate the private sector from democratic accountability.30 When the public sector provides a poor service, citizens can lobby, campaign, and vote against the politicians in charge. The more democratic and decentralised the state, the more this pressure is felt. But if a private organisation is providing a poor-quality service, to whom does the service user complain? She could try to complain to the organisation itself, but why would senior executives listen to a disgruntled service user when their profits are guaranteed by the state? She might try to influence politicians, but they would just tell her to take it up with the company. Without a real market, in which consumers can respond to poor outcomes by changing providers, private provision of public services insulates the providers from democratic accountability.
Today, our public services provide lower-quality services to fewer people at a higher cost; in other words, they are far less efficient. They are bureaucratic monoliths, managed according to the profit-maximising logic of the free market, without the countervailing competitive pressure that would require them to raise standards. The decline in the quality of public services has often been part of a deliberate strategy to encourage middle earners to take up private forms of social insurance, insulating them from the ongoing deterioration of the public sphere. The state is consigned to offering low-quality services to the poor, who are rendered voiceless in the face of the giant bureaucracies in control of many of our public services.
How did neoliberal states get away with such obvious disregard for such a large portion of their citizens? They did what they always do: they claimed that they didn’t have a choice.
The Bond Vigilantes
In 1983, Edward Yardeni, an economist at a major US brokerage house, coined the term “bond vigilantes”.31 These vigilantes, Yardeni claimed, would “watch over” domestic governments’ policies to determine “whether they were good or bad for bond investors”. In other words, in the era of capital mobility, it was up to states to prove to investors that their country was worth investing in. If states were found wanting, the vigilantes would flee, pockets stuffed full of cash. Yardeni’s bond vigilantes are a personification of the logic of market discipline. States that failed to safeguard the value of foreign investors’ capital would face capital flight as investors sold those states’ assets, including their governments’ bonds.
This capital flight would send the value of the country’s currency tumbling, which in import-dependent countries would lead to a rise in inflation and increase the cost of servicing foreign debt. For those countries with fixed exchange rates, it would necessitate cuts to public spending or a humiliating devaluation. The bond vigilantes could also more directly impact a government’s credibility by selling government debt. The higher the demand for a particular state’s government bonds, the lower the yield — the greater investors’ confidence in a country’s ability to pay its debts, the lower that country’s borrowing costs. If investors lost confidence, disaster could ensue: a mass sell-off of a country’s debt could trigger a sharp rise in the cost of debt servicing, potentially catalysing a solvency crisis.
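The inverse relationship between bond prices and yields is easiest to see with a simplified example (the figures are illustrative). For a perpetual bond paying a fixed annual coupon $C$ and trading at price $P$, the yield is:

$$ y = \frac{C}{P} $$

A bond paying £5 a year and trading at £100 yields 5%; if a sell-off pushes the price down to £80, the same £5 coupon yields 6.25%. A falling price for existing debt therefore translates directly into a higher borrowing cost on any new debt the government issues.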
Prior to the liberalisation of international capital markets, most developed countries didn’t have to worry too much about international financial markets’ views on their domestic policy decisions. Investors were constrained in their ability to move their money around the world, for the very reason that large volumes of capital flowing into or out of a country would have made it all but impossible for governments to maintain the exchange rate pegs at the heart of Bretton Woods. But with the removal of restrictions on capital mobility and the rise of the institutional investor, this all changed. Suddenly, a decision on the part of a few big investors to divest from a particular country could spark a crisis. This gave the vigilantes a great deal of power. International investors found themselves able to undermine — and sometimes even bring down — democratically-elected governments that they judged to be unsound economic managers.
Perhaps the best example of this kind of market discipline is the capital flight that befell French President Mitterrand’s government in 1983.32 Mitterrand had been elected in 1981 on a socialist platform that was essentially an extension of the post-war consensus. His 110 propositions for France included commitments to revive growth through a large Keynesian programme of investment, to nationalise key industries, to increase the country’s wealth taxes and to democratise the institutions of the European Community. This, Mitterrand hoped, would lay the groundwork for the “French road to socialism”. He could not have picked a more inopportune moment to advance such an agenda. International finance had been emboldened by the death of Bretton Woods and the birth of neoliberalism in the US and the UK — investors were not about to allow one of the world’s largest economies to fall to the scourge of a renewed socialism.
France — like much of the rest of the global North at the time — was also in the midst of its own economic crisis. International competition was eroding corporate profitability under French social democracy, and, when the oil price spike hit, rising inflation and unemployment brought the economy grinding to a halt. Just as it is today, the French state’s ability to use monetary policy to counteract these pressures was limited due to the country’s participation in the European Monetary System (EMS), which required it to peg its currency to the Deutsche Mark. France was also then enduring the effects of the Volcker shock — the interest rate hike pursued by the US Federal Reserve that saw billions of dollars’ worth of capital flow into the US — which placed a strain on economies all over the world. Mitterrand’s nationalisations of French banks were not exactly encouraging international investors to keep their money in the country, and France was also running a trade deficit. These factors all contributed to a mass exodus out of French assets — from bank deposits, to property, to government bonds — and France lost around $5bn in capital flight between February and May 1981. Mitterrand faced a choice between implementing capital controls or giving in to the demands of international finance by implementing a harsh austerity agenda, reneging on his promises of a French road to socialism. In the end, he chose the latter.
This story seems to suggest that, by the 1980s, investors had become powerful enough to force democratically-elected governments to promote their interests — or, as the latter would argue, to abide by the logic of the market. This is what explains the rise of neoliberalism: states had no choice other than to implement “investor-friendly” policies, like reducing taxes, deregulating financial markets, and making credible commitments to respect private property rights and to keep inflation low. But the story is more nuanced. The increasing power of the bond vigilantes benefitted neoliberal states just as much as investors — Thatcher, Reagan, and others who sought to implement their radical economic agenda in the face of popular opposition could credibly claim that there was no alternative to cutting public spending, shrinking the state and deregulating markets. The idea that governments must compete for international investment has now become a central plank of economic discourse, reproduced by the financial and popular press.
The rise of finance came to shape the way the modern state functions, just as it has shaped the functioning of the modern corporation or household. But just as it is unwise to view the financialisation of the corporation as a battle between “good” capitalists and parasitic financial elites, it would be mistaken to view the financialisation of the state as something driven from the outside. Neoliberal politicians were not terrified into submission by the bond vigilantes; rather, they worked with these investors to rebuild the global economy in the interests of global capital, just as they had rebuilt their domestic economies along the same lines.33 The bond vigilantes provided cover. States would deregulate financial markets, making investors more powerful, thereby allowing governments to invoke the logic of market competition to justify their imposition of neoliberal policies on an unwilling populace. By the 1980s, the bond vigilantes had made it possible for politicians like Thatcher and Reagan to claim that there was no alternative to neoliberalism — any attempt at socialist experimentation would be severely punished by the markets, just look at Mitterrand’s France.
Illiberal Technocracy
The bond vigilantes supported a project that aimed to place fiscal policy outside of the realm of political debate. In the era of capital mobility, states would have no choice other than to do as the markets wished. But whilst this discourse contained an element of truth, states that controlled their own monetary policy retained far more power than it suggested. The bond vigilantes knew that much more had to be done to place economics outside of the realm of politics. Developments in academic economics would provide the perfect justification.
In the 1970s, neoclassical economists took Hicks’ version of Keynesianism and incorporated it into the theoretical framework established by classical economics to create what economist Joan Robinson described as “bastard Keynesianism”.34 This was an innovation, they claimed, permitted by advancements in mathematics that allowed economists to undertake complex modelling exercises that would reveal the fundamental “laws” of economic activity, based on simplifying assumptions about human behaviour. Human beings were perfectly rational, utility-maximising computational machines who interacted with one another in orderly, predictable ways producing clear, linear patterns at the macroeconomic level. The best neoclassical economists will tell you that these assumptions are not meant to accurately reflect reality, and that their outcomes cannot easily be translated into policy solutions. The worst will tell you that the assumptions don’t matter if the results are right, and that it isn’t their business what policymakers do with the findings of academics. As is so often the case in the economics profession, the worst won out, and the findings of neoclassical economics seeped into political discourse. The end result was the dissemination of the view that economics could be reduced to a set of neutral economic facts, which could be innocently handed over to policymakers, who would then be able to implement the “optimal” set of policies to maximise growth.
From this point on, the economic success of a particular government would be judged objectively based on technocratic measures such as GDP growth, inflation, and unemployment. These metrics came to dominate the discourse of economics — particularly the almighty metric of GDP. The combination of technocratic neoclassical economic discourse and the hegemony of GDP was the final nail in the coffin of political contestation over the economy — from then on, economics would be a self-contained, academic subject best left to the “experts”. Of course, what the rise of the expert really meant was the capture of policy-making by the powerful.35 In the absence of any accountability to voters, decisions about macroeconomic policy could be based on the returns such policies would provide to the wealthy.
Perhaps the best example of how rule-by-experts facilitates policy capture has been the move towards central bank independence. Neoclassical economists argued that politicians exhibited an “inflationary bias”, which made them poor economic managers. Failing to consider the long-term implications of their actions, politicians would reduce interest rates and increase spending today in order to boost growth and secure re-election. Ultimately, however, this would damage the economy in the long-run by raising inflation, which would erode consumers’ incomes. The solution was clear: this powerful tool had to be placed on the top shelf, away from the prying hands of the political toddlers focused only on their own electoral prospects.
Some argued that central bank independence was supposed to bring about high interest rates, which would damage industrial capital and promote the interests of finance capital — but under the conditions of financialisation, the situation is much more nuanced. Historically, there has been an assumed dichotomy between the interests of finance capital and those of industrial capital.36 The former has been assumed to prefer high interest rates to maximise the returns on lending, whilst the latter are assumed to prefer low interest rates to allow them to borrow cheaply. But as firms have become financialised, the interests of these two groups have merged.37 Amongst businesses committed to shareholder value, high profits mean high returns for investors, eroding investors’ commitment to high interest rates. Bankers themselves also tend to rely less on high interest rates to make their profits in modern financial systems. As interest rates fell, banks came to rely on the fees derived from processes like securitisation rather than interest rates themselves.
Equally, however, it is not in the interests of asset holders for interest rates to be kept low for too long as high interest rates are also a guarantee against inflation. Inflation can harm long-term asset-holders because it erodes the value of their assets. If inflation is running at 5% per year and my investments are delivering a nominal return of 4%, my returns are negative in real terms. This might have made financiers conflicted about interest rates — they want high profits, but they don’t want inflation. The triumph of Thatcherism was to ensure that British capitalists could have their cake and eat it. Profits soared with deregulation, privatisation, and tax reductions, but little of this accrued to workers in the form of rising wages. Thatcher’s attack on the unions placed them in a much weaker position, preventing them from demanding pay increases in line with inflation. This meant that any increase in costs would be absorbed by the workforce in the form of shrinking pay packets.
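The arithmetic behind this claim is straightforward: the real return is approximately the nominal return minus inflation,

$$ r_{\text{real}} \approx r_{\text{nominal}} - \pi = 4\% - 5\% = -1\% $$

(the exact figure, $1.04/1.05 - 1$, is about $-0.95\%$). Hence asset holders’ insistence that inflation be kept below the returns their portfolios generate.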
The guarantee that inflation would be kept relatively low meant that monetary policy could be directed towards inflating asset prices.38 With central bank policy effectively captured by the finance sector, interest rates remained low throughout the 1990s, supporting an expansion in lending and an increase in asset prices. Most commentators agree that low interest rates were a central cause of the dot-com bubble that emerged towards the end of the 1990s, culminating in the crash in the early Noughties. Under financialisation, independent central banks have been able to provide the two macroeconomic conditions that benefitted investors most: low consumer price inflation, and high asset price inflation. Absent any democratic accountability, they could not be blamed for the financial instability this would inevitably cause. In fact, politicians encouraged the financial boom of the 1990s. Regulation was eased and “light touch” organisations like the FSA were set up to supervise the finance sector, often staffed by ex-financiers.39
In many ways, by the 1990s, global capital needed New Labour more than it needed another Thatcher. New Labour succeeded in hiding the stark class divisions that marked British society by the late 1990s, whereas Thatcher’s Conservative Party had made them more obvious. Blair showed himself capable of naturalising the finance-led growth model in a way that Thatcher never could. Class, we were told, no longer mattered. A rising tide would lift all boats. All that was needed was for educated policymakers to pick the “right” policies to maximise economic growth. Whilst the British state continued to pursue economic policies that were in the interests of elites, the battle lines between the elite and everyone else were no longer visible — they had been blurred by rising home ownership and consumer debt. Some argued that the battle lines had ceased to exist. The end stage of capitalism was to produce a classless utopia. It would take the largest financial crisis since 1929 for the class foundations of finance-led growth to be revealed once again.
CHAPTER FIVE
THE CRASH
Stability leads to instability. The more stable things become and the longer things are stable, the more unstable they will be when the crisis hits. — Hyman Minsky
On 15 September 2008, Lehman Brothers, one of America’s largest and oldest banks, filed for bankruptcy. The bank held $600bn worth of assets, making this the largest bankruptcy in American history.1 Financial markets looked on in shock. Just days earlier, the US government had nationalised Fannie Mae and Freddie Mac — two highly subprime-exposed mortgage lenders. The fact that the US government had allowed Lehman Brothers to collapse sparked a worldwide panic. With mortgage default rates skyrocketing, there was no telling how many other banks were exposed to subprime losses on a similar scale to Lehman’s.
The trouble had started the year before, when mortgage defaults had begun to rise in the US. Many mortgages that had been issued in the boom years were flexible: subject to low fixed interest rates for the first few years of the loan, followed by higher ones down the line. People who took out these loans were assured that they would always be able to refinance their mortgage when the teaser rates expired. But at the beginning of 2007, refinancing became more difficult, and many consumers found themselves stuck with high interest payments that they couldn’t afford. House prices levelled off in 2006 and then began to fall. Defaults escalated, and banks began to worry. Had the trouble ended at US mortgages, we might have been left with a US, and perhaps a UK, housing crisis. But by 2007, mortgages were no longer just mortgages. The debt that had been created by the banks in the boom between the 1980s and 2007 had been transformed into the plumbing of the entire global financial system. Every day, millions of dollars’ worth of mortgages were packaged up into securities, traded on financial markets, insured, bet against, and repackaged into a seemingly endless train of financial intermediation.
As the crisis escalated, it was presented as an archetypal financial meltdown, driven by the greed and financial wizardry of the big banks, whose recklessness had brought the global economy to its knees. But whilst the big banks’ relentless desire for returns had escalated the crisis, its causes could be traced back to what was taking place in the real economy: mortgage lending.2 And this was driven by financialisation. The Anglo-American model of finance-led growth — described in this book so far from the British perspective — was uniquely financially unstable, even as policymakers believed that they had mastered boom and bust. The Anglo-American model was premised upon the kind of debt-fuelled asset price inflation that has always resulted in bubbles. The one that burst in 2008 just happened to be the largest, most global, and most complex bubble ever witnessed in economic history.
In this sense, 2008 wasn’t simply a transatlantic banking crisis, it was the structural crisis of financial capitalism, emerging from the inherent contradictions of finance-led growth itself. The political regime of privatised Keynesianism, necessary to mitigate the fall in demand associated with low-wage, rentier capitalism, was always inherently unstable. Bank deregulation had created a one-off rush of cheap money that had inflated a bubble in housing and asset markets. The state allowed this bubble to grow for reasons of political expediency, rather than deflating it in the interests of financial stability. An economy that is creating billions of pounds worth of debt used for speculation rather than productive investment is an economy living on borrowed time. And in 2008, that time ran out.
Bubble Economics
As the financial crisis cascaded throughout the global economy, the Queen famously asked a group of economists why no one had seen “it” coming. All over the world, economists were asking themselves the same question. For the previous decade, the profession had been patting itself on the back for having “solved” the major problem at the heart of economic policy: mitigating the ups and downs of the business cycle. By absorbing some of Keynes’ insights on aggregate demand into the classical economic framework, the “neoclassical” economists — as they came to be known — claimed to have built highly accurate macroeconomic models that were able to produce the perfect answer to any policy question. Their success at prediction was, they argued, what underlay the so-called “Great Moderation” that preceded the financial crisis: a period of high growth, low inflation, and relative stability. As it turns out, the Great Moderation was no such thing. As the upswing in asset prices continued, greater amounts of risk built up in the system.3 Part of the reason the financial crisis of 2008 was so big is that the period of exuberance that had preceded it had been so long.
According to Hyman Minsky, “stability is destabilising” — long periods of calm in financial markets encourage behaviours that lead to instability.4 Minsky’s work built on Keynes’ theory that investment is driven by human psychology more than by any objective market rationality. The combination of these psychological factors, and the ability of modern capitalism to create huge amounts of debt, gives rise to financial systems that are fundamentally unstable. Financial markets tend to be characterised by periodic bubbles and panics, which in turn impact the real economy, causing credit crunches and recessions.
Instability results from the psychological factors that drive investment under conditions of uncertainty. Investment decisions are determined by the cost of the investment and the expected returns to be derived from it. Keynes argued that these two variables, costs and expected returns, are governed by different price systems. Keynes’ two price theory, later added to by Minsky, showed that the costs associated with an investment — including the costs of financing it if the business is borrowing, and the risks associated with that borrowing — are determined by what is going on in the economy now. The other side of the equation — the expected returns derived from the investment — is driven by what businesses think is going to be happening in the economy tomorrow. These expectations are subject to uncertainty — about future economic growth, the potential for bankruptcy, etc. — and are therefore more subject to the caprices of human psychology.
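One way to summarise this two-price logic (a stylised rendering rather than Keynes’ or Minsky’s own notation) is as a decision rule: investment goes ahead when the demand price of a capital asset, the discounted value of the returns it is expected to yield, exceeds its supply price, the current cost of acquiring it:

$$ \text{invest if} \quad P_d = \sum_{t=1}^{T} \frac{E[R_t]}{(1+\rho)^t} > P_s $$

Everything on the left-hand side rests on expectations about the future; everything on the right is set by conditions today. That asymmetry is the door through which psychology enters.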
This understanding of the relationship between uncertainty and prices is one of Keynes’ most important theoretical innovations. Human beings are generally quite bad at understanding the nature of uncertainty, often confusing it with risk. But whereas risk is quantifiable, uncertainty is not. Risk measurements can be applied to simple events like rolling a dice but trying to measure uncertainty is like trying to determine whether or not I’ll still own the dice in ten years’ time. I can invest on the basis that the economy has grown for the last several quarters, assessing the probability that this trend will continue, but there is no way I can account for the possibility that a major new invention will be brought to the market or that Earth will be hit by an asteroid. Optimism and pessimism therefore matter when it comes to investment, perhaps even more than the issues traditionally accounted for by economics, like current costs or growth rates. If investors are optimistic, they will not only expect that their future returns will be higher, they may also anticipate their future borrowing costs will fall and judge it quite unlikely that they will go bankrupt. As a result, they are much more likely to invest and to borrow to invest. The important thing to note here is that what drives this investment decision is not so much what is going on in the economy today, but what business owners think is probably going to happen tomorrow — a time horizon over which they can’t claim to have certain knowledge.
On aggregate, these differences in behaviour can lead to the emergence of bubbles. When the economic cycle is on the upswing, the prices of financial assets like stocks and shares start increasing. Investors buy these securities, expecting the good times to continue for the foreseeable future. When lots of investors buy the same asset, the price rises. Think, for example, about the increase in the price of Bitcoin, which was driven by expectations about the crypto-currency’s future value almost entirely divorced from its utility. As investors experience several periods of strong returns, they start borrowing greater sums to invest. Banks also tend to lend more to businesses when the economy is doing well. More money enters the financial system, pushing up asset prices even further and creating a self-reinforcing cycle of optimism-driven asset price inflation.
Eventually, the financial cycle enters a phase of “Ponzi finance”, with investors piling into assets one after another based purely on the speculation-driven price rises of the recent past. Just like a Ponzi scheme which uses new recruits to pay off old lenders, investors end up taking out debt simply to repay interest. This underlies Minsky’s famous insight that “stability is destabilising”: when investors experience an extended period of high returns without any crashes, they become overexuberant about the prospects of future growth and take risks they otherwise might not.
But eventually lending dries up, investment slows, and asset prices start to level off. Investors begin to sense that the party might be coming to an end and either hold off buying or start to sell their assets. Asset prices begin to fall on the back of slowing demand, just as they rose due to rising demand during the upswing. Believing that their assets will continue to fall in value, investors begin to panic sell, catalysing a chain reaction throughout the financial system. In extreme cases, this panic selling can cause prices to fall in the real economy. Falling asset values dampen profitability, reducing investment, wages, and investors’ and households’ wealth, leading to lower spending. Unrestricted lending exacerbates these dynamics by prolonging the upswing and deepening the downturn. Falling profits may require firms to sell off even more assets, or lay off workers, to repay their debts. Those who have used debt to purchase assets during the upswing may find themselves in negative equity — with assets worth less than the total amount of debt they have outstanding. They will put off all but essential purchases in an effort to pay off their debts, reducing demand, but they may still end up going bankrupt.
Historically, these observations have been applied mainly to business investment but the financialisation of the household meant that they could be applied to ordinary consumers too. Before 2007, consumers were borrowing huge amounts to purchase houses, increasing house prices and turning mortgage lending into a speculative game. With house prices rising, and credit more readily available than ever, houses became incredibly valuable financial assets. People began purchasing housing not just because they needed it, but because they expected that it would continuously rise in value. Some bought second homes, third homes, and fourth homes, all financed by debt. People also began to refinance their homes to “release their equity”, allowing them to purchase yet more assets — or even just to pay for holidays and new TVs.
This bubble was so big, and went on for so long, for two main reasons. On the one hand, instability emerged naturally due to changes that had taken place in the real economy. The financialisation of Anglo-American capitalism witnessed in the latter half of the twentieth century led to a falling labour share of national income and a rising rentier share. Rising inequality threatened to dampen demand and reduce growth. Bank deregulation and privatisation concealed these trends by expanding access to credit and asset ownership, allowing some working people to benefit from the increase in asset prices, even as others were left behind. The financialised state, meanwhile, used its control over economic policy and financial regulation to promote the interests of the elites that were doing so well out of the boom. Soon the bubble took on a life of its own. Rising house prices left consumers feeling wealthier, and therefore able to take out even greater amounts of credit, even as their wages declined relative to their productivity. This surge in private debt left both the British and American economies uniquely vulnerable to a crash. But this instability can be traced back to the chronic shortfall in demand that emerged from the disparities naturally created by finance-led growth.
On the other hand, the reason this boom was able to go on for so long was that financial globalisation and bank deregulation dramatically increased the amount of liquidity in the international financial system. Financial globalisation allowed banks and investors to draw on capital that had been stored away in states with lots of savings. Financial deregulation reduced restrictions on lending and allowed banks to use this capital to create more money. International banks developed ever more ingenious ways to evade the restrictions on lending that continued to exist. Mortgages were the dynamite at the centre of the explosive device that caused the economic crisis, but the explosive device itself had been transformed due to the financial innovation seen before the crash.
This transformation had several features. The removal of restrictions on capital mobility led to a wave of financial globalisation associated with significant increases in capital flows. The development of “securitisation” allowed ordinary mortgages to be turned into financial assets that could be sold to investors. The rise of the shadow banking system meant that banks were able to lend more than otherwise would have been possible. Finally, banks’ reliance on market-based finance — i.e. borrowing from other financial institutions rather than state-backed bank deposits — allowed investors from all over the world to get in on the game, but also left global banks uniquely exposed to any changes in lending conditions.
When talking about the financial crisis, commentators have tended to focus on this latter set of issues. But whilst the intricacies of global finance are important in determining the way the crisis happened, it is also critical to bear in mind that these factors merely served to prolong underlying trends that had their roots in the real economy — in the rise of finance-led growth that had led to falling wages, rising inequality, and ever higher levels of private debt. It was the combination of financialisation at the level of the real economy, and the growth of an interconnected, highly leveraged and unstable financial system, that explains the unique depth and breadth of the crisis, as well as much of what has taken place since.
Financial Globalisation
With the collapse of Bretton Woods and the removal of restrictions on capital mobility, capital was now free to flood into nearly every corner of the globe, giving rise to a new era of financial globalisation. Total cross-border capital flows increased from 5% of global GDP in the mid-1990s to 20% in 2007 — growing three times faster than trade flows.5 Amongst the so-called “advanced economy” group, ownership of foreign assets rose from 68% of GDP in 1980 to 438% in 2007. In other words, by 2007, the foreign assets these economies held were worth more than four times their combined GDP.6
Financial globalisation has transformed states’ relationships with the rest of the world.7 According to traditional macroeconomic models, international trade is governed by the same principles of general equilibrium that govern national economies. Exchange rates, interest rates, trade, and financial flows are meant to adjust in order to bring supply and demand for different economies’ goods, services, and assets into balance. When a country runs a current account deficit — when it buys more from its trading partners than it sells to them — domestic currency flows out of the country. This is because the income from the current account has to come in the form of the domestic currency — if a consumer in the USA wants to buy a widget from the UK, they have to convert their dollars to sterling. High supply and low demand for a currency means falling prices. In other words, running a current account deficit means a falling exchange rate — your currency becomes less valuable relative to other currencies. A less valuable currency makes your exports cheaper to international consumers and should therefore increase demand for those exports. Over the long term, countries with current account deficits should experience falling exchange rates, making their exports more competitive, increasing demand for those exports, and reversing the deficit — and vice-versa for surplus countries. The relationship between the current account and the exchange rate is supposed to lead to equilibrium at the global level — no country should be able to run a current account deficit, or surplus, for a long time.
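The accounting underlying this is the balance of payments identity (stated here in simplified form, ignoring reserve movements and measurement errors): the current account and the financial account must sum to zero,

$$ \underbrace{CA}_{\text{trade and income flows}} + \underbrace{FA}_{\text{net sales of assets to foreigners}} = 0 $$

so a country running a current account deficit must, by definition, be selling an equivalent value of assets, or IOUs, to the rest of the world.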
But from around 1990, large imbalances arose at the global level between “creditor” countries with large current account surpluses, and “debtor” countries with large current account deficits. Countries like the US and the UK had large and growing current account deficits, whilst Japan, China, and Germany had big surpluses. Where was the equilibrium? The US and the UK — deficit countries — should have seen large falls in the value of their currencies. China, Germany, and Japan — surplus countries — should have seen increases. Depreciations should have led to export growth in the deficit countries, and appreciations should have led to export falls in the surplus ones.
To understand what was going on, one must understand the relationship between the current account — which measures flows of income — and the financial account — which measures investment flows. If you think of a country’s current account as being like the current account of an individual, then it is mainly composed of income and expenditure. Income from a wage or another source goes in (like income from selling exports) and expenditure goes out (like expenditure from buying imports). But in modern financialised economies these are not the only sources of income available to consumers. They are also likely to have another account that contains, say, a mortgage. This is a form of income — a big cash transfer from a bank that has been used to buy a house — which also entails a certain amount of expenditure in the form of repayments. In the same way, countries are able to “borrow” money from the rest of the world via their financial accounts by selling assets.
The financial account (once known as the capital account) measures flows into and out of UK assets. To stay with the mortgage example, if a UK consumer borrows £500,000 from a foreign bank to buy a house, that will represent a £500,000 inflow via the financial account. This can seem counterintuitive: even though the consumer has borrowed from the rest of the world, they have still received £500,000 now, which counts as a positive sum on the financial account. If a foreign investor built a factory in the UK for £500,000, this would also represent an inflow via the financial account — but just like the loan, it also represents a future liability, because the income the factory generates will flow back to that investor over the long term.
Before the crisis, the US and the UK were seeing lots of capital flow out of their economies via the current account, meaning an increase in the supply of their currencies on international markets. This should have led to depreciations of their currencies. But demand for sterling and the dollar remained high, because there was high demand for British and American assets. The UK might have been losing sterling via the current account, but international investors were lending it back to us in exchange for our assets via the financial account. The rising value of house prices, and the proliferation of mortgage-backed securities (MBSs), meant that investors from all over the world, just like those in domestic markets, wanted to put their money into Anglo-American financial and housing markets. To do so they had to buy dollars or sterling, which maintained demand for these currencies, even as their current account deficits increased. Households were also able to purchase more imports because rising house prices made them feel wealthier.
All in all, this meant that the current account deficit expanded even as the currency appreciated. This created a self-reinforcing cycle. Rising currency values made British exports seem less competitive and imports seem cheaper. British exporters — especially manufacturers — found it harder to compete on international markets. Between the 1970s and 2007, the share of manufacturing in the British economy fell from 30% to just 10%. These economic changes reinforced financialisation by increasing the relative importance of the finance sector in driving economic growth. By the early Noughties, even if we had wanted to get off the train that was careering towards a cliff edge, we would have been unlikely to be able to do so.
Securitisation, Shadow Banking and Inter-Bank Lending
On its own, rising capital mobility would not have been sufficient to turn several large, but localised, housing bubbles into a global financial crisis. International investors needed assets to invest in — housing alone wasn’t enough. New, giant international banks, based mainly in Wall Street and the City of London, were only too happy to oblige. These banks placed British and American mortgages at the heart of the global financial system by turning them into financial securities that could be traded on financial markets — a process called securitisation.8 The securitisation of Anglo-American mortgage debt was central to both the long pre-crash boom and the swift collapse of the banking system in 2008. The American aspect of this equation was many times larger than the Anglo part, and far more important to the global financial system, but relative to the size of their respective economies, both experienced a surge in securitisation.
The process of securitisation involves turning claims into financial securities. A claim is a contract that entitles the holder to an amount of income at some point in the future. For example, a loan made by a bank to an individual or company is a claim on being paid back at a later date. Financial securities are claims that are traded in financial markets, and include equities (stakes in the ownership of a corporation, also called stocks or shares), fixed-income securities (securities based on underlying agreements to repay a certain amount of money over a certain period of time), and derivatives (bets on the future value of other securities or commodities). For example, a bank with some mortgages on its books may want to sell those mortgages now rather than waiting a few decades for the debt to be repaid. To access the money to which they are now entitled, they can turn the mortgage into a security and sell it on to another investor. The price of the security will reflect the underlying value of the loan, subject to interest rates, inflation, risk and other factors.
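In its simplest form (a textbook approximation, not how any particular trading desk priced these products), the price of such a security is the discounted value of the repayments it entitles the holder to receive:

$$ P = \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t} $$

where $CF_t$ is the expected cash flow in year $t$ and the discount rate $r$ bundles together prevailing interest rates, expected inflation, and a premium for the risk that the borrower defaults.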
In the run up to the financial crisis, banks wanted to increase their lending to meet rising demand for credit. They were, however, constrained by regulation that required them to hold a certain proportion of the amount they lent out as cash, shareholders’ equity and certain other liquid assets. If they wanted to lend more, they needed more cash. Banks therefore took the mortgages on their books, placed them “off balance sheet” (in the shadow banking system described below) and securitised them, allowing investors to invest in them. In doing so, they were essentially selling other investors the future income stream that the mortgage would generate, pocketing the cash today, and then lending it to other individuals to create new mortgages. Minsky predicted that this kind of behaviour would come to dominate financial markets, when he wrote “that which can be securitised will be securitised”. In the US, the issuance of residential mortgage-backed securities (RMBSs) peaked at $2trn in 2007. US securities were sold to investors in the rest of the world, which bought them based on the assumption that they were as safe as US government debt, but with higher returns.
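The balance-sheet logic can be sketched in a few lines of code. This is a deliberately simplified illustration: it assumes a flat 10% capital requirement, ignores risk weights, fees, and the off-balance-sheet vehicles involved, and every figure is invented.

    # A stylised bank balance sheet (all figures invented for illustration).
    # A capital requirement caps lending at capital / requirement, so moving
    # mortgages off the books via securitisation restores lending headroom.
    CAPITAL = 1_000_000        # shareholders' equity
    REQUIREMENT = 0.10         # capital must cover 10% of loans held

    def lending_headroom(loans_on_book: float) -> float:
        """New lending still possible under the capital requirement."""
        return max(0.0, CAPITAL / REQUIREMENT - loans_on_book)

    loans = 10_000_000                 # at the limit: 10% of 10m = 1m = CAPITAL
    print(lending_headroom(loans))     # 0.0 -- no room to lend

    loans -= 4_000_000                 # securitise and sell 4m of mortgages
    print(lending_headroom(loans))     # 4000000.0 -- room to lend again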
But securitisation didn’t just increase banks’ access to liquidity; it also allowed them to disguise the risks they were taking. Once they had lent as much as they could to creditworthy borrowers, banks started to increase their lending by issuing mortgages to customers who might not be able to repay them. The American government supported the emergence of what came to be known as “sub-prime” lending in an attempt to extend mortgages to a wider section of the electorate, just as Thatcher sought to increase home ownership in the UK through right-to-buy and financial deregulation. US federal bodies like Fannie Mae and Freddie Mac — both Government Sponsored Enterprises (GSEs) — would purchase mortgages from the banks and package them up into financial securities, before selling them on financial markets, backed by a state guarantee.9 This created a large and deep market for mortgage-backed securities (MBSs) of varying qualities, and allowed the banks to lend more to less creditworthy consumers, because they would receive an immediate return — and insulate themselves from risk — by selling the mortgage to a GSE.
Eventually, the GSEs started to package up good mortgages with bad ones, using complex mathematical models to get the balance just right. The GSEs would take a bunch of good mortgages and add in just the right number of sub-prime mortgages to allow them to create financial securities that investors (and ratings agencies) would consider risk-free. Imagine baking a cake and adding just a small enough dose of poison that it doesn’t kill whoever eats it — the security contained just enough sub-prime debt to boost returns while still looking risk-free. As the housing bubble expanded, more and more sub-prime mortgages were created. As more of these sub-prime mortgages were baked into these securities, the quality of the securitised products deteriorated, culminating in the creation of the collateralised debt obligations (CDOs) that appeared to make even the riskiest mortgages risk-free. At the same time, new financial institutions got into the securitisation game, competing with Fannie Mae and Freddie Mac to parcel up mortgages into securities that could be bought and sold. The so-called “originators” would create mortgages, and either sell them to securitisers, or securitise them themselves.
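The mathematics behind getting the balance “just right” can be sketched in a few lines. The toy model below is an illustration of the logic, not any bank’s actual model; it shows why pooled mortgages looked safe if, and only if, defaults were assumed to be independent of one another:

```python
from math import comb

# A toy version of the tranching logic. Assume a pool of n mortgages,
# each defaulting with probability p.
def senior_tranche_hit(n: int, p: float, attachment: float) -> float:
    """Probability that defaults exceed the tranche's attachment point,
    if defaults are independent (the models' crucial assumption)."""
    k = int(attachment * n)
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1, n + 1))

# 100 mortgages, 5% default probability each, and a senior tranche that
# only takes losses beyond 20 defaults: the modelled risk is vanishingly small...
print(senior_tranche_hit(100, 0.05, 0.20))  # roughly 1e-7
# ...but if defaults are perfectly correlated (house prices falling
# everywhere at once), the pool behaves like one giant mortgage and the
# senior tranche is hit with probability p = 5% -- hundreds of thousands
# of times more often than the models suggested.
```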
Armed with the latest mathematical insights, the securitisers were confident that, even if some people started to default on their mortgages, their securities would retain their value. The ratings agencies — which received their revenues from the very financial institutions they were supposed to be rating — unsurprisingly agreed, and continued to give US MBSs and CDOs high ratings, similar to those they granted to US government debt, even as the quality of these securities deteriorated. This process was reinforced by the insurance industry. Companies like AIG allowed the owners of these securities to hedge against a potential default by the mortgage-holder by taking out infamous “credit default swaps”. If the value of the security fell, the owners would be due an insurance pay-out. The government, securitisers, ratings agencies, and insurance industry collaborated to make it seem as though huge amounts of money were being made without anyone taking any real risk. And as long as house prices kept rising and securities kept being issued, the gamble paid off. But eventually this long period of stability destabilised the entire financial system.
We heard earlier about Keynes’ insights on the difference between risk and uncertainty — and they are central to understanding why securitisation wasn’t as safe as people thought.10 Risk is measurable and quantifiable — simple measures of probability are built on the measurement of risk. We may not know what the outcome will be when we roll a dice, but we can predict that the probability of rolling a 5 is 1/6. But not all events are like rolling a dice. In fact, few events are — especially in a complex system like the economy, where there are too many variables interacting with one another for us to predict outcomes with any confidence. In such situations, all we can do is extrapolate from the past, and the future is therefore uncertain. Uncertainty is a completely different beast to measurable risk. Unlike risk, uncertainty is unquantifiable — the future is filled not only with known unknowns, but with unknown unknowns.
As Frank Knight, an American economist, pointed out almost a century ago, human beings treat uncertainty like risk. We use past experience to extrapolate the likelihood that an event will occur in a particular way. Having invested in one company in a particular industry and received a large return on our investment, we might assume that investing in another business in the same industry will be just as profitable. But we have no way of knowing what will happen to the business, or indeed the industry, in the future — there is too much uncertainty to give a reliable estimate of the probability that the business will provide a good return on investment. In quantifying and mitigating the risks associated with defaults, the securitisers claimed to have created completely risk-free products. But whilst mathematical models can help to mitigate risk, they can’t mitigate uncertainty. Perversely, the exuberance created by the belief that risk had disappeared encouraged investors to take even greater risks based on uncertain assumptions about the future.
This approach to predicting the future wasn’t confined to the banks themselves. Regulators also viewed their role as mitigating future risks, which could be predictably measured from institution to institution.11 The approach to regulation before the crisis largely focused on making sure each bank had enough capital to allow it to withstand a crisis of moderate severity. The Basel Accords, first agreed by the Basel Committee on Banking Supervision as Basel I in 1988 and amended during rounds II and III in 2004 and 2010, aimed to harmonise international regulation on banking by setting minimum capital requirements for banks. Bank capital consists of funding that can absorb losses — principally shareholders’ equity — alongside highly liquid, easy-to-sell assets like cash. For example, if a bank makes £10m worth of loans, then a capital requirement of 10% will mean it has to hold £1m of such capital. Capital requirements limit banks’ profits, because they force them to hold some non-profitable but safe assets like cash and shareholders’ equity. But if a bank gets into trouble and investors start to demand their money back, they ensure that the bank has a buffer with which to absorb the losses and meet those demands.
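The arithmetic is simple enough to write down. A minimal sketch, using the figures from the example above:

```python
# The capital arithmetic from the example above, as a one-line rule of thumb.
def required_capital(loans: float, ratio: float = 0.10) -> float:
    """Minimum capital held against a loan book under a flat requirement."""
    return loans * ratio

print(required_capital(10_000_000))  # £10m of loans at 10% -> £1,000,000.0
```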
The Basel Accords rested on the idea that regulation should serve to measure and mitigate predictable risks — they were not built to deal with a crisis of generalised uncertainty. Regulators could encourage banks to hold a specific amount of capital to cover any foreseeable losses, but it is impossible to predict when, where, and what kind of financial crisis might arise over the course of the financial cycle. In an interconnected financial system, regulators should also have realised that banks might be subject to unpredictable systemic risks that would affect the entire financial network, not just individual banks.
In fact, the Basel Accords ended up contributing to the crisis. Banks were required to hold different levels of capital against different assets depending on how risky the asset was judged to be. Riskier assets were associated with higher capital requirements. Mortgages and MBSs were judged to be low-risk, so banks had to hold less capital to insure themselves against potential losses. Banks worked out that they could increase their profits under Basel by holding low levels of capital against risky mortgages that provided them with high returns. This “regulatory arbitrage” — opportunities for profit-seeking created by regulation — encouraged banks to hold securities based on mortgage debt, even though they were often far riskier than many other assets.
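The incentive is easiest to see with numbers. The risk weights below are illustrative assumptions (the real weights varied across Basel rounds and credit ratings), but the logic of the arbitrage is the same:

```python
# Capital = base ratio x risk weight x exposure.
# The weights here are illustrative assumptions, not the actual Basel tables.
BASE_RATIO = 0.08
RISK_WEIGHTS = {"corporate_loan": 1.00, "residential_mortgage": 0.50, "aaa_mbs": 0.20}

def capital_needed(asset: str, exposure: float) -> float:
    """Capital a bank must hold against a given exposure."""
    return BASE_RATIO * RISK_WEIGHTS[asset] * exposure

for asset in RISK_WEIGHTS:
    print(asset, capital_needed(asset, 10_000_000))
# £10m of corporate loans requires £800,000 of capital; the same £10m held
# as highly rated MBSs requires just £160,000 -- five times the exposure per
# pound of capital, which is exactly the incentive described above.
```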
Capital requirements also fostered the growth of the shadow banking system by encouraging the banks to undertake many activities “off balance sheet” in the less-regulated shadow banking sector. Shadow banks are institutions that lend money without taking deposits guaranteed by the state, and without direct access to central bank funding. Shadow banking is riskier than conventional banking because shadow banks cannot, in normal times, turn to the central bank when they get into trouble. And because the state takes less responsibility for activities that take place in the shadow banking system, these institutions are subject to less regulation. Basel II encouraged banks to create shadow banking entities “at arm’s length” from the main, regulated banking system. The banks could place riskier assets in the shadow banks, allowing them to disguise their exposure to these assets, without insulating themselves from the risks associated with this lending. These shadow banks were able to take more risk, and earn higher profits, even though these risks would ultimately be borne by the traditional banks themselves. As regulation of the traditional banking sector increased, the shadow banks — many of which were set up by the banks — increased their market share. Banks’ share of lending in the US fell from almost 100% before 1980 to just 40% in 2007.
Another change that took place in the international financial system before 2008 concerned the way in which banks raise funds.12 The traditional, neoclassical view of banking is that banks simply intermediate between savers and borrowers by receiving deposits and lending them out. State reserve requirements would determine the amounts that banks were able to lend — and they would lend as much as they were able to under domestic law. For example, if a bank had £10,000 worth of deposits, and regulation required it to keep 10% of its deposits in reserves at the central bank, it could only make £9,000 worth of loans. But this story hasn’t been true of the sophisticated financial systems that have emerged in the global North for decades — in the UK, for example, banks haven’t had any reserve requirements since 1981. Instead, banks lend as much as they can — limited only by demand — and then borrow to meet regulatory requirements. So, a bank might lend as much as it can to borrowers, before borrowing the funds it needs to meet those requirements from an investor or another bank by the end of the day.
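A stylised comparison of the two sequences, with hypothetical figures throughout:

```python
# Two stylised sequences of events; all figures are hypothetical.

# Textbook model: deposits come first, and the reserve ratio caps lending.
deposits = 10_000
reserve_ratio = 0.10
textbook_max_loans = deposits * (1 - reserve_ratio)   # £9,000, as above

# Modern model (simplified): lend whatever borrowers demand first,
# then raise whatever funding the position requires by the end of the day.
loans_made = 25_000                                   # limited by demand
wholesale_funding_needed = loans_made - deposits      # raised in wholesale markets
print(textbook_max_loans, wholesale_funding_needed)
```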
One source of funding for the banks in the pre-crisis period came from the so-called “money market funds” (MMFs). Wealthy savers seeking out higher interest rates than were available in traditional bank accounts deposited their cash in MMFs, which were seen as equivalent to normal bank deposits. Investors could take out their cash at any time — the only catch was that MMFs weren’t guaranteed by the state, but this was far from the minds of most investors before 2007. The MMFs would then lend their capital to the banks, often via the shadow banking system, which could offer them a relatively high rate of return. They were joined by the other institutional investors that had emerged in the 1980s, which agglomerated the savings of corporations and wealthy individuals and lent these to the banks.
Another source was the development of the “repo” — short for “repurchase agreement” — markets. In a repo transaction, one investor sells a security to another and agrees to buy it back at a later date at a pre-agreed price. Banks would borrow from investors by “repo-ing” MBSs — selling them, before buying them back a few days or weeks later at a slightly higher price. Effectively, repo transactions are a form of collateralised loan, with banks borrowing from investors using securities as collateral. In repo-ing an MBS, the bank would take a haircut: the lender would advance slightly less than the security’s market value, meaning the bank had to use some of its own money to fund the transaction. This process allowed banks to invest in billions of dollars’ worth of securities, using a tiny fraction of their own cash.
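How a tiny fraction becomes billions falls straight out of the haircut arithmetic. A minimal sketch, with hypothetical figures:

```python
# How a haircut translates into leverage. Figures are hypothetical.
def max_repo_position(own_cash: float, haircut: float) -> float:
    """Largest securities holding fundable when a repo lender advances
    (1 - haircut) of the collateral's value, so the borrowing bank only
    has to put up the haircut itself."""
    return own_cash / haircut

print(max_repo_position(1_000_000, 0.02))  # a 2% haircut: £1m funds a £50m position
```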
During the 2000s, all of the innovations described so far came together to create an incredibly risky and complex matrix at the heart of the international financial system.13 Banks would set up “structured investment vehicles” (SIVs) — shadow banks — and then place assets like mortgages into these SIVs. The SIVs would raise funds by borrowing on the money markets, rather than taking cash from depositors, for example by issuing asset-backed commercial paper (ABCP), a form of short-term corporate bond. The SIVs packaged up the loans into securities and sold some on, often to investors in surplus countries, but kept others — particularly the lower quality mortgages that were harder to sell. Shadow banks would also engage in complex repo transactions using the securities as collateral, relying on the assumption that they would always be able to roll these loans over. The traditional banks that had set up the SIVs were ultimately responsible for any losses made on these assets, meaning that any problems in the SIV would have a knock-on impact on the bank that had set it up and funded it.
Bailout Britain
In the early 2000s, the global economy had emerged from the bursting of the tech bubble stronger than ever. Investors were convinced that economists really had managed to tame the business cycle once and for all. But by 2006, the “goldilocks” economy — as some termed the neither too hot nor too cold economic conditions that prevailed during the early Noughties — had begun to falter. US house prices peaked in 2006, and then started to fall. Similar trends prevailed in the UK.14
Banks had forayed into sub-prime markets and started to offer mortgages with low or no deposits based on the assumption that house prices would continue rising forever. As a result, when prices started to fall, many homeowners fell into negative equity — meaning that they owed more in mortgage debt than their house was worth. In such a situation, consumers had a choice: keep paying a mortgage worth more than their home, or sell at a loss. Those who could opted to sell, some at any price, sending prices tumbling even further.
Falling house prices cascaded through the financial system. The financial securities that banks had been selling rapidly lost their value when the quality of the underlying mortgages was called into question. Many of these assets were held in the shadow banks that were operating at much higher levels of leverage than traditional banks, meaning even a small fall in asset prices could render them insolvent. These shadow banks, and many traditional banks, had also been financing much of their borrowing on international financial markets over very short time horizons. The securities that were tumbling in value had often been used as collateral for this lending. Combined with the general climate of fear and uncertainty as to who was solvent and who wasn’t, banks and their counterparts in the shadow banking system suddenly lost access to funding.
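A back-of-the-envelope way to see why leverage mattered so much, with hypothetical figures:

```python
# Why leverage made small price falls lethal. Figures are hypothetical.
def wipeout_loss(leverage: float) -> float:
    """Fractional fall in asset values that erases all equity on a
    balance sheet with assets/equity equal to `leverage`."""
    return 1 / leverage

print(wipeout_loss(10))   # at 10x leverage, a 10% fall in assets wipes out equity
print(wipeout_loss(30))   # at 30x, a fall of barely 3.3% does the same
```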
When banks could no longer rely on borrowing from other financial institutions to finance their liabilities, they started to sell their assets. Fire sales of asset-backed securities sent their prices tumbling even further. The repo markets that had developed before the crash, which allowed banks to borrow from one another using debt-based securities as collateral, seized up. Adam Tooze puts it succinctly: “Without valuation these assets could not be used as collateral. Without collateral there was no funding. And if there was no funding all the banks were in trouble, no matter how large their exposure to real estate”. Retail bank runs had been a thing of the past since the introduction of deposit insurance, but what happened in 2007 was essentially a giant bank run, led by other banks, which created a liquidity crisis — the banks didn’t have enough cash to meet their current liabilities. But the fire selling that resulted rapidly turned this liquidity crisis into a solvency crisis — the banks’ debts grew larger than their assets.
The panic quickly spread across the pond to the City of London. Whilst the subprime crisis was mainly driven by US consumers, the resulting panic and falling value of MBSs, CDOs and similar instruments affected securities from all over the world. As these securities fell in value, funding markets seized up, and many UK banks found themselves in the same situation as their US counterparts. British banks were part of the same international financial system as American ones: they were reliant on wholesale funding, and they had been exposed to billions of dollars’ worth of US mortgage debt. But the British banks had also been involved in the securitisation game themselves.
By the end of 2007, mortgage lending in the UK had reached 65% of GDP — just eight percentage points lower than in the US — and British banks issued £227bn worth of residential and commercial mortgage-backed securities in 2008 — 12% of GDP.15 Many of these mortgages had very high loan-to-value ratios (in some cases the loan was worth more than the house itself), as well as the kind of adjustable rates that had become so popular in the US.16 In 2008, the Bank of England’s Financial Stability Report stated that “adverse credit and buy-to-let loans [have] risen from 9% at the end of 2004 to 14% at the end of 2007”. The Bank expressed concern that many of these loans had adjustable rates and that refinancing was becoming more difficult, meaning borrowers would face rising interest rates. The Bank wrote:
As in the United States, this repayment shock is occurring at the same time as house prices are falling. Those who bought in recent years with high loan to income multiples and/or high LTV ratios will be particularly vulnerable to further shocks to their disposable income, such as higher inflation or unemployment.
In fact, the UK housing market had begun to turn at around the same time as that in the US.17 The sub-prime market wasn’t as large in the UK, but underwriting standards had been deteriorating for many years. Banks like Northern Rock were issuing mortgages worth much more than the underlying value of the home and securitising them in the same way as their US counterparts, whilst relying on similar funding models. The crisis began in the US — a far larger and more systemically important market, with unique vulnerabilities18 — but the boom would have ended in the UK at some point anyway. 2008 was a crisis of the Anglo-American model, also pursued by states like Iceland and Spain, and now Australia and Canada — not simply a crisis in US mortgage markets. The size, severity, and global nature of the crash were undoubtedly a result of its genesis in US markets, but what 2007 showed is that the model of debt-fuelled asset price inflation is inherently unstable. At some point, the debt has to stop growing. And when the debt stops growing, the entire system breaks down.
When the crash hit, governments around the world looked on in horror.19 When it began, they had treated the financial crisis like any other financial panic — as an issue of liquidity, or access to cash. They assumed that the panic would pass, revealing that the banks were creditworthy. Trillions of dollars’ worth of loans were made available to banks by central banks all over the world. But regulators quickly realised that this wouldn’t be enough. As panic spread through the system and prices tumbled, banks became insolvent, not just illiquid — i.e. they weren’t just facing a cash-flow problem, they were bankrupt. They needed capital — cash, equity, and other high-quality assets. They needed a bailout.
It was Gordon Brown who first realised what was going on. Having spent his holiday reading up on the events surrounding the Wall Street Crash, he realised that the panic selling that had started in 2008 had eroded the value of banks’ assets to such an extent that many were now effectively insolvent. Giving them access to central bank funding would involve throwing more money into a never-ending hole. Some of the UK’s banks — notably RBS and HBOS — had become unimaginably large and overleveraged, only to see the value of their assets plummet overnight. Mervyn King, the governor of the Bank of England, agreed. The problem was capital, not liquidity. In effect, the banks had to be forced to take money from the state in exchange for shares — they had to be nationalised.
On 8 October 2008, the government announced that £500bn would be made available to the banks — some in the form of loans and guarantees to support liquidity, and some in the form of taxpayer investment in exchange for equity. Most of the investment went to the basket case that was the Royal Bank of Scotland, indebted up to its eyeballs after its recent purchase of the Dutch bank ABN AMRO under the reckless leadership of Fred Goodwin. The US was forced to take a similar approach, eventually spending over $200bn on purchasing bank equity, and a further $70bn bailing out the distressed insurer AIG. Socialism for the banks saved the global economy from the Great Depression 2.0.
Aside from the bailouts themselves, what prevented the crash from becoming a new global depression were the coordinated international stimulus packages implemented by the world’s largest economies. Keynesian economics was back in vogue. In most states, automatic stabilisers — the falling tax revenues and rising welfare payments associated with recessions — combined with discretionary fiscal spending — i.e. planned, not automatic, increases in spending — limited the impact of the downturn. The US American Recovery and Reinvestment Act20 — worth over $800bn between 2009 and 2019 — helped to stem job losses by channelling investment into infrastructure, and supported demand by providing financial support to the unemployed. Other G20 states followed suit with their own stimulus programmes. But it was China that saved the day. The Chinese stimulus programme — which included measures to stimulate bank lending as well as increases in central and local government spending — was worth almost 20% of GDP in 2009.21 Ongoing expansionary fiscal and monetary policy — far more than exports — has supported high growth rates in China and its major trading partners ever since.
Monetary policy changes pursued by the world’s four major central banks — the Federal Reserve (the Fed), the Bank of England (BoE), the European Central Bank (ECB), and the Bank of Japan (BoJ) — also helped. Interest rates were reduced to historic lows. But with households already heavily indebted, businesses uncertain of the future, and banks unwilling to lend, cutting interest rates wasn’t going to be enough. So, the world’s central banks tried something new: quantitative easing (QE). Since 2009, these four central banks have pumped more than $10trn of digitally created money into the global financial system by purchasing government bonds and other financial assets, which has pushed up asset prices across the board.22 The Fed’s balance sheet peaked at around $4.5trn in 2015, or a quarter of US GDP — the value of the UK’s programme as a percentage of GDP peaked at a similar level.23 The BoJ’s apparently unending QE programme has seen its assets climb to over $5trn, larger than Japan’s entire economy.24 In many countries, it is hard to see how this expansion in central bank balance sheets will ever be reversed.25
For a time, it looked as though this coordinated action might bring a relatively swift end to the series of overlapping recessions then taking place in the economies of the global North. But then came the Eurozone crisis. Just as Chinese money had flooded into US debt before the crisis, German money, derived from its large current account surplus, flowed into debt booms in the UK and the Eurozone — notably in Ireland and Spain. In Europe’s periphery, states like tiny, overindebted Latvia faced similar problems. The tell-tale signs of finance-led growth — rising debt, housing booms, and rising current account deficits — started to afflict many EU economies. As Tooze points out, several EU countries were staggeringly “overbanked” by 2007 — the liabilities of Ireland’s banks were worth 700% of its GDP. When the crisis hit, Europe’s banks needed bailing out too.
But there were no mechanisms to orchestrate such a bailout at the EU level. Instead, the burden fell on individual economies like Greece, Spain, Portugal, and Ireland to save their bloated financial systems. Unable to print their own currency, states like Greece and Ireland were forced to seek bailouts from the international institutions formerly restricted to bailing out low-income states in the global South. But there was a problem: many of these countries were effectively insolvent. Their debts were too large ever to be repaid. Rather than accept that the debts needed to be written off, and the system transformed, the EU — helped along by the IMF — decided to impose austerity on the struggling economies, immiserating a large portion of Europe’s population. The nationalised banks received easier treatment than the indebted states that had bailed them out: this was socialism for the banks, and ruthless free-market capitalism for everyone else.
In the wake of the Eurozone crisis, it didn’t take long for Keynes to fall out of favour again. The Greek crisis was exacerbated by Greece’s inability to print its own currency due to its membership of the Euro. But the idea that the financial crisis was sparking a new wave of sovereign debt crises, from which no economy would be safe, spread like wildfire. Rather than identifying the root cause of the crisis — the model of finance-led growth constructed in the 1980s — politicians, academics, and commentators around the world seized on the narrative that the recession stemmed from too much government borrowing. Governments, they patiently explained, are like households — they can only spend as much as they earn. If they borrow too much one year, they must save to pay it back the next. And if they borrow too much over a short period of time, they would be passing down the burden of those debts to their grandchildren. For the good of future generations, governments around the world would have to tighten their belts. Nowhere — other than in bailed-out Greece — did this go further than in the UK, where the coalition government implemented an austerity programme so harsh that it has been linked to 120,000 deaths over the last decade.26
Having socialised the costs of the banks’ recklessness, financialised states around the world failed to use their control over the banking system to support growth, for fear of interfering with the operation of the “free market”. The British state, now a majority owner of several large banks, refused to use that control to direct lending to productive purposes. Despite the rhetoric about paying down the debt, the government did not even try to sell its shares in the banks at a competitive price, instead selling them at a loss to the taxpayer, even as it asked the British people to foot the bill.27 These decisions were justified using familiar tropes. The market, on this one occasion, had failed. But that didn’t undermine capitalism as a social and economic system; and state ownership of the banks certainly didn’t undermine the state’s commitment to enforcing private property. In fact, the way the bailouts were conducted reinforced the logic of finance-led growth: the state would use its power to give the markets what they wanted, and working people would be forced to pick up the tab.
Transatlantic Banking Crisis or Structural Crisis of Financial Capitalism?
Reading this account on its own could lead one to conclude that what happened in 2007 was simply a transatlantic banking crisis with its origins in the US, a crisis that then spread around the world through a combination of financial globalisation and financial innovations like securitisation. In the aftermath of the crash, this is the view that dominated. It was the parasitical rentiers in the international finance sector that had brought the global economy to its knees. Greedy bankers, out-of-touch economists, and regulators asleep at the wheel all shared the blame in popular readings of the crisis.28 Such accounts undoubtedly deliver an accurate analysis of the events surrounding 2008, but they do not tell the whole story. Finance is not some ethereal activity that sits atop the “real” economy — it has its roots in normal economic activity. International banks may have been playing reckless games with one another, but the source of their profits was lending to households and businesses.
The global financial crisis may have broken out in the US in 2008, but it had its origins in the unique Anglo-American model of finance-led growth pursued since the 1980s. The financialisation of the firm provided an immediate fix to the profitability crisis of the 1970s — a fix built on the repression of wages and productive investment. The states that had encouraged the financialisation of the firm deregulated their banking sectors in order to give households greater access to credit and expand asset ownership. In doing so, they were attempting to disguise the chronic shortfall in demand finance-led growth threatened to create, and to make the system politically sustainable. Rising mortgage lending increased house prices, eventually inflating a bubble that saw the British and American housing markets turned into a giant Ponzi scheme. Banks took this mortgage debt, packaged it up and sold it on international financial markets, disguising the amount of risk they were taking. Capital flooded into the US and the UK to take advantage of the boom, and repressed activity in the rest of the economy. The spark that set the whole thing alight came from the US, but the fallout extended throughout the financialised economies of the global North, and was particularly severe in Britain, whose economy had been buoyed by rising debt and asset prices for decades. Whilst 2008 may look like a transatlantic banking crisis, it was more than this: it was a structural crisis of financialised capitalism.
Understanding the financial crisis therefore requires adopting an historical approach to analysing the evolution of a model that was born forty years earlier. This allows us to recognise that financialisation, as a fix to the contradictions of the previous model, contains its own inherent contradictions. Just as Kalecki helped us to understand the contradictions of social democracy long before that system broke down, economists like Keynes and Minsky helped us to understand the contradictions of financialised capitalism long before it collapsed. The fact that these things were predictable means they were endogenous to the functioning of the system — they were inherent features of finance-led growth. And that is the most important message to take from this story. The global financial crisis wasn’t an aberration; it wasn’t a couple of bad years in an otherwise well-functioning economy. It represented a deep-seated crisis, the roots of which lay in the economic model pursued in Anglo-America up to 2007.
Today, just like those living through the crisis decade of the 1970s, we are living in what Gramsci called the interregnum: that moment between the death of the old and the birth of the new. The implications of this insight will be discussed in the next chapter. But if the reader takes only one thing from this book, let it be this: poor regulation, bad economics and greedy bankers all contributed to the particularly explosive events of 2008, but the financial crisis had far deeper roots. A crash — if not necessarily the crash that we got — was woven into the DNA of the economic system that was built in the 1980s. And nothing but wholesale economic transformation will deliver us from its shadow.
Looking back, this course of events has an air of inevitability about it. The miners were fighting an uphill battle against the rise of alternative energy sources and cheap labour from abroad. The demise of the dirty, dangerous, and polluting coalfields was portrayed as a story of modernisation, which would see Britain transition from a traditional manufacturing economy to a modern service-based one. In the end, coal mining may not have had a much longer future in the UK. But the brutality with which the miners were repressed, the speed with which functioning collieries were closed after the end of the strike and the decline into which many pit communities sank through the 1980s and 1990s were far from inevitable. In what often appeared like a personal vendetta, Thatcher decided almost as soon as she became Conservative leader to stake her entire political career on a face-off with what she called the “enemy within”. What could she possibly have hoped to gain from imposing such acute suffering onto the electorate? Pit villages in South Wales hardly seemed a threat to her deregulation of the stock market or her privatisation agenda.
But this is to fundamentally mistake the nature of Thatcher’s vision. In order to build the new economic model theorised by the right-wing activists at Mont Pèlerin, the last remnants of the old one had to be destroyed. As long as the British labour movement was there to resist it, Thatcher would never have been able to institutionalise neoliberalism. As one striking miner put it, “[w]e knew from day one we were firmly in Thatcher’s sights. What was stopping privatisation, what was stopping letting rip with profits, their philosophy of a free-market economy? The thing that was stood in the way was us”. In taking on the miners, Thatcher wasn’t just putting the nail in the coffin of the British mining industry, she was waging war on the labour movement as a whole. By taking out the strongest and best organised workers first, she knew that when she came for the remainder, resistance would seem futile. Workers in nationalised industries found themselves unable to resist privatisation, which proceeded apace with few disruptions. What remained of the labour movement found it far harder to counter the steady decline in wages relative to productivity, the deterioration in conditions and the rise of flexible working. Union membership has more than halved since the 1980s, even as the population has grown.13
From the start, this project was cloaked in the language of “efficiency”, “modernisation”, and — most pernicious of all — “economic freedom”.14 Thatcher’s groupies argued that the unions were vested interests getting in the way of the operation of the free market. The neoclassical theory of wage determination posits that workers are paid a wage equal to the “marginal product” of their labour.15 Essentially, firms pay workers a wage equal to the value of the output they produce. If a firm paid a worker more than this amount, another firm could afford to undercut it whilst still making a profit; and if it paid workers less, another firm could poach them with a higher salary and still make a profit. In the perfect world of equilibrium inhabited by the professional economist, the economy runs like a well-oiled machine, everyone fulfils their function, and society’s resources are used optimally. By extension, workers who demand wages above their marginal productivity reduce the profitability of the companies they work for, and therefore the efficiency of the economy as a whole. The unions were committing a cardinal sin — disrupting the operation of the free market — and the state had no choice but to intervene.
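For readers who want the formal version: the standard textbook derivation assumes a competitive firm choosing employment L to maximise profit, given output price p, wage w and production function Q(L). This is the theory the Thatcherites invoked, sketched here rather than endorsed:

```latex
\max_{L}\ \pi = p\,Q(L) - wL
\quad\Longrightarrow\quad
\frac{d\pi}{dL} = p\,Q'(L) - w = 0
\quad\Longrightarrow\quad
w = p\,Q'(L)
```

That is, at the profit-maximising level of employment, the wage equals the value of labour’s marginal product: the claim described in the text.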
But over the course of the post-war period, the marginal productivity theory of distribution largely held. When the UK had a powerful labour movement able to argue for pay rises on behalf of its members, on aggregate, wages and productivity rose in unison — workers were paid a wage equal to what firms could afford, no more and no less. In fact, there is now a great deal of evidence to suggest that strong union movements actually raise productivity and improve firm performance.16 But after Thatcher’s battle with the unions, wages stopped rising in line with productivity. Without the unions to demand that firms paid workers a salary equal to their marginal output, bosses had no incentive to do so. Instead, they set about internally redistributing resources from workers to shareholders, making billions in the process.17
The downward pressure on workers’ wages was reinforced by the Conservatives’ approach to macroeconomics. Theoretically, those workers being paid less than their fair share of output could have left their companies and found other jobs. But the neoliberals also had a plan to prevent workers from voting with their feet. Before starting her war on the unions, Thatcher announced a war on inflation, which was running at 13% in the year she took office. Her main weapon was to be the new economic ideology of monetarism: the theory that governments can control inflation by controlling the money supply. The growing attractiveness of monetarism emerged out of Keynesianism’s failure to explain the concurrent increases in inflation and unemployment of the 1970s.18
According to the Phillips Curve, there should have been a trade-off between these two variables — and for most of the post-war period there was — but this relationship broke down in the 1970s. Expanding government spending would have been the solution to rising unemployment, but reducing it would have been the response to inflation — with both happening at the same time, the Keynesians were stuck. Monetarists explained this phenomenon by attributing the rise in inflation to low interest rates and too much government spending. The only way to tackle stagflation was to reduce government spending and raise interest rates to reduce inflation. They argued that, whilst such a course of action might create mass unemployment or cause a recession, this was the price that had to be paid for the greater good of controlling the money supply.
And this is exactly what happened. As soon as she came into office, Thatcher raised interest rates to 17%.19 Given that high levels of inflation were primarily being driven by rising costs, not rising demand, this had the predictable effect of strangling economic activity. The economy shrank by 2% in 1980 and a further 1% in 1981. Companies laid off workers and the ranks of the “reserve army” swelled. Unemployment began to rise at the start of the 1980s, and between 1983 and 1986 it never fell below three million — double the level of 1979. At such levels of unemployment, workers who are laid off find it almost impossible to find another job. Without any voice in determining their own pay and conditions, and unable to leave their jobs to find others, workers had no choice but to accept the pay that they were offered by corporations. Bosses, of course, knew this, and reduced workers’ pay accordingly — either actively docking pay or allowing it to be eroded by increases in inflation.
In this sense, the war on inflation functioned as Thatcher’s second front in her war against the unions. When discussing the monetarist policies of the 1980s, Alan Budd, a government advisor at the time, worried that “what was engineered in Marxist terms was a crisis of capitalism, which recreated a reserve army of labour and has allowed the capitalist to make high profits ever since”.20 Disempowering the unions and increasing unemployment would reduce wages — meaning more money going to owners rather than workers — and permanently reduce workers’ collective bargaining power. Monetarism was, in this way, an overtly political approach to monetary policy, even as its adherents claimed it was based on neutral economic analysis.
There is, of course, no such thing as neutral economic analysis, even though the neoliberal narrative presented itself as such. The war against the unions was justified in terms of “efficiency”, whilst monetarism was justified on the basis that it would prevent inflation. But the demise of the unions has created inefficiency in the labour market by increasing the returns to capital well above what they should be in the imaginary neoclassical economy. And monetarism failed to achieve its stated aim of controlling the money supply: as we’ll see in the remainder of the chapter, Thatcher’s deregulation of the banking system meant that the broad money supply increased faster than at almost any other point in history. But as this debt was mainly driven into housing rather than consumer goods, it created asset price inflation rather than consumer price inflation — in other words, inflation that benefitted the wealthy, not workers. Thatcher’s war on the labour movement was aimed at breaking the last vestige of opposition to financialisation — and she succeeded.
Privatised Keynesianism
In crushing organised labour, Thatcher set the stage for the institutionalisation of finance-led growth. Without resistance from the country’s workers, she could go about entrenching neoliberalism and empowering the financial elites that had brought her to power. Her reforms to the stock market, the removal of restrictions on capital mobility and the rise of shareholder value ideology had ushered in a new world order for corporate Britain.21 Profitability was restored, and businesses no longer had to worry about the problem of union activity. Stock prices soared. Top salaries skyrocketed. And the profit share of national income grew at the expense of the labour share.
But whilst this system worked for a time, such levels of excess are unsustainable over the long term. Two problems had begun to assert themselves by the end of the 1980s. First, the financialisation of the firm and the demise of the unions led to falling pay and rising inequality; and rising inequality leads to falling domestic demand, as the rich spend a lower proportion of their incomes than the poor, which ultimately harms capitalists’ profits.22 Real wages grew by an average of 3% a year through the 1970s and 1980s; but as unemployment increased and bargaining power fell, this figure dropped to just 1.5% in the 1990s and 1.2% in the 2000s.23 Rising GDP benefitted owners rather than workers. Modelling from the TUC suggests that the wage share of national income fell from a peak of 64% in the mid-1970s to around 54% in 2007.24
Whilst pay was increasing in absolute terms in this period, most of these increases went to the top of the income spectrum, and inequality increased substantially. The UK’s Gini coefficient rose from 0.3 at the start of the 1980s to 0.34 by the start of the 1990s, a rise driven primarily by increases in pay at the top.25 Whilst incomes for the top 10% grew at an average of 2.5% a year between 1980 and 2000, they increased at just 0.9% for the bottom 10%.26 The ratio of CEO pay to the pay of the average worker increased from 20:1 in the 1980s to 149:1 by 2014.27
Secondly, and relatedly, investment in fixed capital — in the physical machinery and infrastructure needed for production — began to fall substantially from the end of the 1980s onwards. Investment in fixed capital matters because it is a critical determinant of long-term productivity, and therefore the health of the economy. As we’ve seen, if businesses aren’t investing in production, they’re either distributing their cash to shareholders or investing in financial markets. Investment in fixed capital fell from around 25.4% of GDP in 1989 to 18.9% just five years later.28 And it kept falling: by 2004, it had reached just 16.7%. This was partly due to the state cutting its investment in the real economy. But, as outlined in the previous chapter, investment was also falling due to the financialisation of the corporation, with firms distributing their revenues to shareholders, investing them in financial markets or buying up other corporations. The long and slow decline of the UK’s manufacturing sector, which invests more in fixed capital than the services sector, also contributed.
Falling pay, rising inequality, and low investment threatened to recreate the conditions that preceded the Great Depression. Keynes and others argued that the best way to combat the low-wage, low-investment, low-demand doom-loop was for the government to intervene at strategic points to curb the twists and turns of the business cycle. Governments would signal their willingness to do this by committing to maintaining full employment, whatever the stage of the business cycle. If unemployment was rising, the state would step in to pick up the slack — either by directly increasing spending or by cutting interest rates to boost investment in the private sector. The problem, as outlined in Chapter one, was that this model of growth gave workers more power. Thatcher’s goal was to get back to a time when “the markets” — i.e. the bosses — were in control; a time when workers could be traded by businesses like any other input to production, rather than causing trouble by demanding bosses treat them like human beings. But she had to do this without creating a return to Depression-era economics.
The solution to this problem was to change the engine of demand: rather than business investment and state spending fuelling economic growth, from the 1980s debt-fuelled consumption came to be the main driver of increasing output.29 Increases in consumption came to outstrip increases in wages. In the context of stagnating wages, the gap between income and expenditure would be covered by personal borrowing. 1988 was the first year ever that consumers’ expenditure exceeded their incomes.30 The Lawson boom — the economic boom named after the Chancellor who presided over it — saw tax cuts, a reduction in interest rates (once the union movement had been dealt with, of course), and an across-the-board increase in household borrowing and spending. Growth increased in the short term, before collapsing in an equally large bust when interest rates had to be hiked again to keep the UK in the Exchange Rate Mechanism. In a mini precursor to 2008, a housing crisis ensued. But it wasn’t long before stability was restored, and debt began to climb once again — and it didn’t stop climbing for nearly two decades.
The genius of basing demand on private debt was that it allowed people to buy more, propelling economic growth, whilst also directing a greater portion of peoples’ income towards interest payments and fees that went to financiers.31 Under this system, individuals would use the tools provided to them by financial markets to weather the storms created by changes in the business cycle, making a tidy profit for the finance sector in the process.32 In this way, “privatised Keynesianism” replaced the Keynesian model of demand management that governed the post-war consensus.33
Had this model relied only on unsecured lending, like credit cards or student loans, it wouldn’t have lasted very long. If you’re borrowing to go on holiday, you have to assume that your wages are going to carry on rising so that you’ll easily be able to afford the interest payments tomorrow. And as we know, wage increases weren’t keeping up with productivity increases at this point. Instead, privatised Keynesianism relied on secured lending — lending backed up by an asset like a house. When you borrow to purchase a house, you’re left with an asset that can produce income and be sold if you can’t make the payments. What’s more, when lots of people invest in the same asset, the price of that asset tends to rise. If everyone wants to buy housing, and is able to access a mortgage, but the housing stock remains fixed, then the price of housing will rise. From the end of the early-1990s recession, the amount of money created and directed into housing increased at a far faster rate than the number of houses for sale, increasing prices.
In place of rising wages, Thatcher may as well have said “let them eat houses”. Financial deregulation and right-to-buy, combined with the pension fund capitalism released by the Big Bang, allowed the Conservatives to transform Britain’s middle earners into mini-capitalists who would benefit from the financialisation of the economy. By providing capital gains to a large swathe of the population, the Conservatives would be creating a class of people who had a material interest in the economy remaining as it was, even if most of the gains from growth were going to the top 1%.
Blowing Bubbles
In October 2018, the record for the most expensive UK home was broken when the penthouse at One Hyde Park was sold for £160m.34 Initially, the identity of the buyer was shrouded in mystery. The property had been purchased through a shell corporation located in the tax haven of Guernsey, where companies aren’t required to disclose their beneficial owners. But a few days later the buyer’s identity was revealed. As it turns out, the developer, multi-millionaire property tycoon Nick Candy, sold the penthouse to himself via Project Grande (Guernsey) Limited — a joint venture between his brother Christian Candy and the former Prime Minister of Qatar — so he could release the equity with an £80m loan from Credit Suisse.
Together, Nick and Christian Candy are worth £1.5bn. In the mid-1990s they got their first break when a family member gave them a £6,000 loan that they used to buy, renovate, and sell a flat in London, making a £50,000 profit. Like many property developers at the time, they used these profits to buy and flip a series of flats in London, riding the wave of the housing boom and making themselves incredibly rich. The brothers managed One Hyde Park, the most expensive development in the world when it was completed in 2016. Today, they are famous for their fabulous wealth, their aggressive tax avoidance and their long list of celebrity clients, including Kylie Minogue and Katy Perry. How was it possible that, over a period of just twenty years, these two brothers turned £6,000 into £1.5bn (that is, £1,500,000,000) just by investing in UK property?
The Candy brothers aren’t alone in making their fortunes on soaring London property prices. In fact, they are only the UK’s 52nd wealthiest property developers, falling far behind names such as the Reuben brothers, the Grosvenors, and the Barclays. 163 of the top 1,000 richest people in the UK made their money in property, making property wealth the biggest single source of wealth on the Sunday Times rich list. But it’s not just the wealthy who have benefitted from rising house prices: everyone who bought a home before the boom of the 1980s has seen a windfall gain. Property wealth is the second most significant source of wealth in the UK after private pensions wealth, worth £4.6trn.35 Prices in London have risen faster than those in other parts of the country, and property wealth now represents almost 50% of the net wealth of people living in London, compared to 26% for those living in the North East.
This increase in house prices began during the 1980s, as part of Thatcher’s push to create a “property-owning democracy”.36 Right-to-Buy legislation, which allowed tenants of social housing to purchase their home at between one- and two-thirds of its market value, was passed in 1980. In 1984, the amount of time a tenant had been living in a flat before they were able to benefit from Right-to-Buy was reduced, and the potential discounts on the property’s value were increased. In the first seven years of the 1980s, 6% of Britain’s social housing stock was sold to private owner-occupiers. But the privatisation of Britain’s social housing stock would not have been enough to create Thatcher’s nation of home owners. People needed mortgages — and that required a change to the country’s financial system. So, Thatcher deregulated the banks.
When banks lend, they create new money.37 This unique state-provided privilege is what makes a bank a bank, differentiating banks from other financial institutions like building societies. If I deposit £100 in, say, a building society, the society could keep £10 of this money and lend £90 to someone else. No new money has been created — it has just been moved from one place to another. Banks, on the other hand, can lend out money without first taking a deposit, because states give them the right to issue loans in the national currency, subject to certain rules. BigBank Inc could lend £90 to a consumer without actually having £90 in deposits: it simply credits the borrower’s account with a new deposit of £90. The amount that banks are able to lend is determined by central bank regulation. The central bank might say that commercial banks must hold a certain amount of highly liquid capital (cash, shareholders’ equity, or anything relatively easy to sell) relative to their loans. Once it has lent the £90 out, the bank might have to find £9 worth of capital to keep within state regulation. But it has not had to borrow the remaining £81 from anyone else — it has simply created it out of thin air. Increases in bank lending can therefore increase the money supply — the total amount of money in circulation.
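A minimal balance-sheet sketch of this process, using hypothetical figures that mirror the example above:

```python
# A stylised model of money creation through lending. Hypothetical figures,
# following the chapter's example of a 10% capital requirement.
class Bank:
    def __init__(self, capital: float):
        self.capital = capital   # shareholders' equity
        self.loans = 0.0         # assets created by lending
        self.deposits = 0.0      # liabilities created at the same moment

    def lend(self, amount: float, capital_requirement: float = 0.10) -> None:
        # The loan and the deposit are created simultaneously: the bank
        # credits the borrower's account rather than handing over saved cash.
        if (self.loans + amount) * capital_requirement > self.capital:
            raise ValueError("not enough capital to support this loan")
        self.loans += amount
        self.deposits += amount   # this deposit is new money

bank = Bank(capital=9.0)
bank.lend(90.0)                  # £9 of capital supports a £90 loan
print(bank.loans, bank.deposits) # £90 of new deposits now circulate
```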
Prior to the 1980s, there were many more restrictions on banks’ ability to create money in this way. Before 1981, the banking “corset” — otherwise known as the supplementary special deposits scheme — required banks to place non-interest-bearing deposits at the Bank of England once their lending grew beyond a certain amount, which restricted lending and therefore the money supply.38 Once restrictions on capital mobility were removed, banks found it much easier to bypass the corset by moving their activities abroad, and so it was scrapped. The removal of restrictions on capital mobility also meant that banks now had access to the big pools of money that had by then emerged at the international level. They found it much easier to borrow — whether from institutional investors, or from other banks — and could therefore use this borrowing to increase their lending. Banks started to play a much greater role in mortgage lending — in 1980, banks were responsible for just 5% of mortgage lending; this had risen to 35% just two years later.
Another important set of reforms to the financial system were the changes made to the UK’s building societies.39 Building societies had been a feature of the British financial system since the eighteenth century, when they were created in the UK’s new industrial towns and cities to build homes for those who could afford them. Older homeowners put their savings in the societies, which were then lent to younger members as mortgages. Unlike banks, they were not able to create money — they could only lend out the money they held in deposits. Building societies continued to grow until, by 1980, they were responsible for 90% of UK mortgage lending, which gave them, in the words of the Bank of England, a “virtual monopoly” on the mortgage market.
In 1986 — the same year as the Big Bang — the Building Societies Act was passed, which aimed to increase competition in the sector by removing the regulation that prevented building societies from operating like normal banks. After the Act was passed, building societies could do pretty much all of the things that banks could do — including create money by issuing credit. Many societies demutualised, buying out their members — who profited handsomely — while new borrowers faced higher interest rates. Eventually many of these former building societies — including Northern Rock — ended up undertaking the kind of sub-prime lending activities that caused the crisis.
Throughout the 1980s, banks and former building societies issued billions of pounds worth of mortgage debt to allow people to purchase their own homes — many used this debt to purchase their council homes. This lending surge swelled the UK’s broad money supply, which rose from around 40% of GDP in 1985 to 85% in 1990, mirrored by an increase in the amount of credit provided by financial institutions.40 There was now a wall of money chasing after the same amount of housing stock — and the inevitable consequence of such a scenario is house price inflation. To illustrate this point, imagine that two couples both want to buy the same house.41 The asking price is £50,000, so couple A goes to the bank and asks for a £50,000 mortgage, which is granted. Couple B then goes to another bank and asks for a £55,000 mortgage to outbid couple A, which is granted, so couple A returns to ask for £60,000. Prices are pushed up based on the amount that banks are willing to lend — and without strict limits on bank lending, this led to an increase in house prices.
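The bidding war reduces to a toy model: the price stops rising only when the banks stop lending. In this minimal Python sketch, the £5,000 bid increments and the notion of a single “lending cap” are hypothetical devices, not features of any real mortgage market.

```python
# A toy model of the bidding war described above. The £5,000 bid
# increment and the "lending cap" (the most any bank will advance)
# are hypothetical devices, not features of a real mortgage market.

def bidding_war(asking_price: int, increment: int, lending_cap: int) -> int:
    """Two buyers outbid each other until the banks stop lending."""
    price = asking_price
    while price + increment <= lending_cap:  # a bank grants the bigger loan
        price += increment                   # the rival couple is outbid
    return price

# With tight lending the house fetches £60,000; loosen the cap and the
# same house fetches twice that.
print(bidding_war(50_000, 5_000, lending_cap=60_000))   # 60000
print(bidding_war(50_000, 5_000, lending_cap=120_000))  # 120000
```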
As finance came to colonise the real estate market, housing was transformed into a speculative asset. Average house prices increased tenfold between 1979 and the 2008 crash, whilst consumer prices increased by just half that amount.42 In London and the south east, the situation was even more extreme. In purchasing a house, one was no longer purchasing a roof over one’s head, one was purchasing a future: a pension, an inheritance for one’s children, and thirty years of continuous mortgage repayments. The boom, of course, ended in a bust. But before the crisis, this debt-fuelled, consumption-driven growth model transformed the nature of Britain’s economy, its society, and its politics.
Thatcher couched her push for home ownership as a progressive bid to turn the country into a nation of responsible homeowners, who wouldn’t rely on the state to support their ambitions for a better life.43 Sensible, savvy consumers would choose to invest their hard-earned savings in housing and pensions, with the promise that these would continuously increase in value. Free markets would carry the nation to prosperity, unencumbered by the overbearing influence of the paternalistic state. If you were too poor, stupid or lazy to take advantage of the fantastic opportunity given by this brave new world, well, that was your own fault. The state’s role would be limited to controlling inflation, which might erode the value of your assets and hard-earned savings, by controlling interest rates.
It’s worth stating at this point just how much of a lie this vision really was. Too often people critique the Thatcherite vision by arguing that, whilst individual freedom is just as important as Thatcher claimed, it is also important to look after the collective. Sometimes individual freedom needs to be curbed in order to control the markets and reduce social ills like inequality and poverty. This argument may be true, but it accepts the neoliberal discourse on its own terms. One cannot understand Thatcherism by looking at Thatcher’s language — one has to understand the aims of her vision by looking at who benefitted from these changes. In doing so, it is easy to see how the language of neoliberalism served to conceal what was really going on: a transfer of society’s resources from those who work for a living to those who own the assets.
The Conservatives transformed British political economy by providing the wealthy with free money — realised in the capital gains they derived from increases in the value of their homes and their pensions. These rising asset prices compensated for falling wages amongst those who were able to access the credit required to purchase these assets. Middle earners were persuaded to support Thatcher’s model on the basis that they too might become wealthy one day. The rest of society — the majority — didn’t feature, other than as inputs to the production process. Thatcher may have talked about freedom, but she created a society based on unfreedom: the non-choice between work at a wage below what one deserves and destitution.
Similarly, despite its monetarist rhetoric, Thatcher’s government never actually tackled inflation or the money supply: the broad money supply increased dramatically over the course of the 1980s because of rising mortgage debt. Instead, it focused on curbing wage inflation by cutting the size of the state and restricting collective bargaining. Asset prices — mainly houses and other financial assets — rose substantially under Thatcher, even as consumer price inflation was brought under control. The ideological battle between individual freedom and collective justice provided a smokescreen that allowed the neoliberals to stratify British society — co-opting middle earners by turning them into mini-capitalists and creating a marginalised class of poorly-paid, precarious, and heavily indebted workers beneath them.
A central plank of the finance-led growth regime has been the replacement of wages with debt and private wealth as the central determinants of many households’ sense of economic prosperity.44 When households fall on hard times, rather than taking their fight to employers, they are much more likely to take out new debt. When planning for the future, those who own homes and have private pensions are more likely to rely on the value of these assets than they are on social security provided by the state. In other words, the financialisation of the household has radically individualised people’s experience of the economy, leaving them to rely on individual financial management rather than collective political mobilisation to improve their standard of living.
Financialisation and Politics
The capitalist societies described by Marx were divided between the owners of the most important resources and those who worked for them. Capitalists would use their political and economic power to force everyone else to work for them for a wage below what they deserved. Landowners and financiers used their control over land and capital to extract wealth from both capitalists and workers. Property was passed down through the generations, and it was all but impossible to transition between the classes. The state existed to protect owners — the franchise was strictly limited, and policy was determined by battles between different classes of owner.
But during the golden age of capitalism, Marx’s analysis of class no longer seemed to fit the experience of the global North. Strong unions meant that most workers were being paid an income approaching the value they produced for capitalists. The extension of the franchise transformed the state, which was now stepping in to provide public services and promote full employment. A new class of professional managers emerged, earning high wages, and often being remunerated in shares as well, undermining the distinction between owners and some workers. Increasing social mobility and high wages produced a society that looked much less stratified than the one analysed by Marx in the nineteenth century. Many resources were owned collectively, meaning that the amount people had to spend on rent was relatively low. And finance was reined in, meaning less was being spent on interest payments.
But under finance-led growth, society has come to look a lot more like the one described by Marx. The wage share has fallen, and the profit share has risen. Within the profit share, the rentier share has also risen. The increase in income accrued from sources like interest and property rents has made financialised capitalism much less productive. When large amounts of income are diverted to economic rents, less money is reinvested in production and more accrues to the owners of already-existing assets. No new jobs are created when I pay my landlord rent or when a corporation pays interest to a bank — income is simply transferred from one place to another. The combination of a falling wage share and a rising rentier share saps demand out of the real economy, as well as increasing financial instability, contradictions that will be analysed later in this book.
The divide between the owners of the means of production and rentiers on the one hand, and those who are forced to work for a living on the other, is the divide between the many and the few — between those who live off work and those who live off wealth. This is the fundamental divide that characterises capitalist societies today. The political salience of this divide may rise and fall depending upon wider political economic conditions, but it never goes away. Even before the overt conflict of the 1970s, the class divisions in British society were a primary feature of politics. The division between owners and workers has become more obvious under finance-led growth as profits and asset prices have risen and wages have stagnated. But as society has become more polarised, this division has seemed to become less politically salient. Thatcher may have managed to physically constrain the resistance to her agenda of privatisation and deregulation in the 1980s, but why did people continue to support politicians advocating similar policies all the way until 2007?
The genius of Thatcherism was to mute people’s awareness of the divide between owners and workers by extending asset ownership to middle earners. The expansion of home ownership and the financialisation of the housing market convinced middle earners to side with owners instead of workers. The Conservatives built a large, stable voter base by creating an alliance between homeowners and the 1%. Middle earners who happened to be alive at the right time were able to buy homes and invest their savings in stock markets, benefitting from capital gains. Bankers and financiers made huge amounts of money through mortgage lending and securitisation whilst middle earners benefitted from rising wealth. The former provided the money, the latter provided the votes. This group by no means represents a majority of British society, but it has emerged as an exceptionally powerful minority.
Today we know that from the 1990s, the UK was sleepwalking into a debt crisis — one that would end in a much bigger crash than that of 1989.45 But at the time, the country was blissfully unaware of the problems that were being stored up for the future. To many people, the avalanche of cheap credit seemed like a gift from the heavens. This boom coincided with the fall of the Berlin Wall and a new era of globalisation, during which cheap consumer goods from all over the world would become more readily available than ever before in history. Working people were able to afford plasma screen TVs, mobile phones, and video game consoles.
But people’s experiences of the long boom differed depending upon their class position. On the one hand, the new property-owning classes were able to release equity from their homes to finance consumption. In this way, middle earners were able to acquire elite identities through wealth, even as wealth became ever more concentrated amongst the top 1%, who themselves became less socially mobile than ever before. On the other hand, those without access to such wealth and capital gains could still buy into the new consumer culture by taking out unsecured credit through credit cards, overdrafts, and payday loans. Corporations also started jumping on the bandwagon by offering consumers low-interest credit to purchase cars and consumer durables.
Over time, the differential experiences of finance-led growth led to a divergence between the economic experience of the property-owning classes and those of everyone else. For the wealthy, debt is a luxury. Access to interest-only or low-deposit mortgages has allowed many families to jump onto the property ladder and watch their wealth increase, transforming their class identity. For others, debt is a curse. Barely able to make ends meet on their low incomes, the poorest in society have found it easier and easier to access emergency loans charging usurious rates of interest. Payday lenders target the most desperate people in society — those with poor credit scores who have fallen on hard times — knowing that they will be unable to access credit anywhere else. A single parking ticket, a broken car or a dental emergency can leave these people bankrupt — or, for those like Jerome, much worse.
The decline of the union movement has only exacerbated these problems. Before financialisation took off, British workers had been bound together by their collective experience of exploitation in the workplace, and their organisation against it in the union movement. Without participation in the labour movement, the experience of exploitation and poverty came to be terrifyingly individualised. This was a process helped along by the changing nature of the labour market, the erosion of the welfare state and the general decline in civic participation and social capital that marked the financialisation of society. Many of those previously employed in mining or manufacturing found themselves perpetually on the dole, chastised by the media as the welfare-dependent, undeserving poor. Their experience of poverty was unique, one infused with overtones of shame, isolation, and anger.
Others found work in poorly-paid, insecure jobs in the emerging service sector, in hospitality or retail. Even the most dedicated unionists found it hard to organise in these sectors, with workers spread across the country, paid partly in tips or commissions, and forced to undertake the kind of psychologically-warping emotional labour that can erode one’s capacity for genuine connection with other human beings. Those who had been granted an education might have been allowed access into the ranks of the civil service, joining the salaried professionals themselves. Most found themselves in debt of one kind or another. This was perhaps the most torturous aspect of the new poverty: the power asymmetry between a debtor and a creditor is far more extreme than that between a worker and a boss. There can be no organising against loan sharks or payday lenders, still less against commercial banks.
At the same time, the state was retreating from providing the kind of social security that had been a hallmark of the post-war era. Risks that had formerly been socialised were privatised, encouraging middle earners to “think like capitalists” in planning and insuring for risks. Private health insurance coverage has increased as wealthier consumers seek out better care than that available on the NHS. Rising tuition fees have also shifted the burden of paying for education onto individuals, who find themselves saddled with debt well into their careers. Many working families were taken in by the “delusion of thrift”, believing that their pensions and properties were increasing in value because of smart investments rather than a generalised environment of asset price inflation. This was, of course, a delusion — one that was quickly shattered in 2007 and whose legacy many families are still dealing with. Many people’s pensions were effectively wiped out in 2008 (only to be revived through QE), some homes were foreclosed upon, and personal bankruptcies soared. Unsurprisingly, this assumption by ordinary households of what were previously socialised risks has led to a pervasive rise in feelings of anxiety and insecurity.
As greater portions of society came to rely on privatised insurance to mitigate personal risk, the socialised risk of the welfare state came to be seen as something for the poor and, increasingly, the lazy. Those who owned property extricated themselves from the welfare system, relying on asset price inflation to insure them against future risks. But this has had profound political consequences, including “alienating those with property from a welfare state for which they pay but from which they derive little benefit”.46 Such a situation allows the welfare state to be redefined as something for the poor, and eventually the lazy and unproductive. Slowly, as state benefits are restricted to an ever-smaller section of the population, support for the welfare state declines and it becomes far easier to cut. This process has been reinforced by the “neoliberal welfare discourse”, which locates the blame for worklessness amongst the unemployed themselves.
The changing relationship between class and politics was made strikingly clear with the rise of New Labour, which Thatcher later reflected on as her greatest achievement. It might appear odd for Thatcher to praise a party that kept hers out of government for more than a decade, but she was, as ever, astute: the rise of New Labour consolidated her grand bargain between elites and the mini-capitalists. The New Labour project was based around the idea that class was no longer politically relevant; that electoral politics could be confined to societal and cultural issues, and debates over how the gains from growth should be distributed. The commanding heights of the economy would be “left to the free market”, under the watchful eye of independent technocrats in central banks and regulatory bodies. But the entire economic model that New Labour had blindly accepted was premised upon the continuous expansion of debt and asset prices, and the Tories managed to hand the reins over just as it was entering its least stable phase. Thatcher described New Labour as her greatest achievement because, just as in the 1950s the Conservative Party couldn’t touch the unions, in the 1990s the Labour Party couldn’t touch the banks.
CHAPTER FOUR THATCHER’S GREATEST ACHIEVEMENT: THE FINANCIALISATION OF THE STATE
The Establishment decided Thatcher’s ideas were safer with a strong Blair government than with a weak Major government. — Tony Benn.
On 26 June 2002, Gordon Brown delivered a speech to City dignitaries assembled at Mansion House. “My Lord Mayor, Mr Governor, my Lords, Aldermen, Mr Recorder, Sheriffs”, he pronounced, “let me at the outset pay tribute to the contribution you and your companies make to the prosperity of Britain”.1 These might sound like strange remarks from the party that had, just over two decades previously, pledged to nationalise the banking system. But in many ways, its close relationship to the City was one of the defining characteristics of New Labour, which consistently deregulated the finance sector. Blair attempted to woo ordinarily hostile investors and executives in the City through his famous “prawn cocktail offensive”. Financiers had always been, and would continue to be, natural supporters of the Conservative Party, but Blair and Brown made significant inroads with the sector during their tenure. The consequences of this offensive were, as the FSA later noted, a total failure to properly regulate financial institutions, which ultimately contributed to the financial crisis.2
Given the power that the City of London Corporation holds within British politics, it is perhaps unsurprising that Blair felt the need to get the institution on side. Some have referred to the City as a state within a state: a shady, arcane institution designed to corrupt British politics and promote the interests of reclusive financiers.3 The City is the only space in the UK over which Parliament has no authority, and its representative in the House of Commons is the only unelected member allowed to enter the chamber.4 Its political architecture continues to be based on the medieval guild system, under which businesses have votes, with larger businesses having greater weight than smaller ones. In 2011, the Bureau of Investigative Journalism revealed that the City had spent more than £92m lobbying politicians and regulators in the wake of the financial crisis to limit new regulation.5 The Bureau was able to link these lobbying efforts with a series of legislative changes, including reductions in bank taxation and regulation.
But whilst the relationship between the political parties and the City may occasionally veer into outright corruption, the influence of the City on British politics is less of an aberration than a reflection of the UK’s political economy.6 In other words, it’s not so much that a small set of financial interests centred in the City of London have “captured” policymaking (though they undoubtedly have); it’s more that the individuals making policy conflate the interests of the City with those of the British state as a whole. Politicians like Blair and Brown weren’t simply vying for access to the City’s lobbying budget, they genuinely believed that deregulating financial markets would help to boost economic growth and tax revenues that could be spent on making society more equal. By taxing the revenues of the big banks, and the salaries of their employees, the British state would be able to provide public services and welfare for those in parts of the country where traditional industries had been destroyed. Globalisation may have harmed British manufacturing, but it could help to provide support for those “left behind” by bolstering the City as a global financial centre.
Whilst finance has always played a central role in British politics, in the 1980s and 1990s the City’s dominance was taken to a whole new level. This was initially catalysed by Thatcher’s policies — from bank deregulation, to Right-to-Buy, to the Big Bang. But Blair and Brown took this process one step further. They developed a complex and arcane regulatory architecture for the City that was easy for insiders to manipulate. The regulatory bodies within this architecture were given a mandate to implement “light touch” regulation on the finance sector, to encourage “innovation” and promote investment.7 Meanwhile, billions of pounds were pumped into the UK’s real estate market, inflating a bubble that would eventually burst in the biggest financial crisis since 1929. The revenues from this model were then used to expand the provision of welfare and public services for those left out of the boom, under the auspices of the private sector, which was given responsibility for delivering public services. In other words, Blair maintained Thatcherite political economy, but sought to make the grossly stratified society that resulted slightly less unfair. However, in expanding the size of the state without challenging the dominance of finance, Blair managed to do something that no other government had achieved: financialise the state itself.
Thatcher’s Greatest Achievement
By the 1990s, high and rising levels of inequality were a defining feature of the British economy. The Conservatives had attempted to naturalise this inequality by claiming that it was the result of market forces in a globalised world.8 Over the long term, they claimed, the efficiency gains from trade would make everyone better off. Of course, the kind of globalisation taking place in the 1990s was not primarily based on trade. Instead, the 1980s marked the start of the era of financial globalisation, which was only ever meant to serve the interests of the 1%.9 Financiers had been pushing behind the scenes for the removal of restrictions on capital mobility for decades, and when their wish was finally granted, it precipitated a global financial boom. With the political foundations of the new world order firmly hidden from sight, politicians were free to claim that rising inequality was a natural state of affairs. A focus on redistribution replaced the Labour Party’s previous “obsession” with ownership — the gains from growth didn’t have to be equally distributed if the state could tax the wealthy and redistribute their income. In other words, rather than attempting to challenge a fundamentally unfair and unstable system, Blair accepted finance-led growth and aimed to make it slightly less unjust.10
And in many ways, he succeeded. As John Hills argues in his survey of inequality in Britain during the Blair years, New Labour’s policies did marginally reduce the large inequities that had resulted from the advent of finance-led growth.11 On average, in the middle of the distribution, income differences narrowed. Child and pensioner poverty fell, and there were notable reductions in geographical inequality in some areas as Blair attempted to keep voters in traditional Labour seats on side. His hallmark focus on education meant that there was a marked reduction in attainment gaps between the wealthiest and the poorest children.
But Hills also points out that, despite the then widespread view that New Labour reduced inequality throughout society, the picture is actually much more complex. Incomes for the top 1% grew extremely quickly — far outpacing income growth for the rest of the population. Meanwhile, the incomes of the very poorest in society grew more slowly than the average, for reasons highlighted in the previous two chapters. The combination of these two trends meant that the incomes of the richest and the poorest in society diverged substantially over Blair’s tenure. Wealth inequality also continued to grow — unsurprising given that rising asset prices are a defining feature of finance-led growth. Hills’ assessment is that New Labour managed to make slight improvements to the highly unequal income distribution handed to them by Thatcher, but that the problem of inequality was much more deeply rooted than Blair and others had assumed, and “less amenable to a one-off fix”.
In fact, rising inequality is an inherent part of finance-led growth. The growth of shareholder value ideology during the 1980s meant that companies were more focused on increasing their profits and distributing the returns to shareholders than paying and retaining their workforce. The rapid growth of the finance sector and related “professional services” industries in the City also meant rising salaries for those at the top. But perhaps the greatest driver of inequality under New Labour was rising asset prices, driven by the billions of pounds worth of new money being pumped into property and stock markets every year.
Blair and Brown had to be seen to be doing something about these issues. For a start, a commitment to making British society fairer was the one thing that differentiated Labour from the Conservatives. But more generally, voters were starting to express real concern with rising inequality. As a result, Blair and Brown had to undertake a balancing act between alleviating the most obvious signs of inequality without undermining the incentives that made Thatcherism work.12
Out of this quagmire emerged a threefold strategy. Firstly, New Labour would adopt Thatcher’s language about welfare — the responsibility for unemployment would be placed firmly on the shoulders of the unemployed.13 The only difference was that workers’ laziness and irresponsibility would be met with a “compassionate” response from the state. The emphasis would be placed on skills acquisition — hence Blair’s famous focus on education as the route out of poverty. Welfare-to-work programmes were introduced, and tax credits were brought in to subsidise low pay and “encourage” people back to work. None of these measures, of course, tackled the structural causes of low pay or unemployment; they served instead to consolidate the division between the deserving and undeserving poor that underlay the Thatcherite ideology. Those who took advantage of these welfare programmes would be seen as deserving, whilst those who did not would be punished.
Secondly, the state would learn to behave more like a private organisation itself, based on the emerging ideology of “new public management” (NPM).14 NPM advocates argued that the best way to run an economy was to subject all areas of economic activity — including state spending — to the discipline of the market. If markets didn’t naturally exist, then they should be created. After all, the lazy, corrupt, and inefficient bureaucrats who staffed the public sector had to be incentivised to behave in the best interests of the taxpayer. Introducing private-sector management techniques would promote public sector “efficiency” and improve “customer service”.15 Middle and upper management were empowered to introduce and police a set of rigid targets to hold civil servants and public sector workers to account. Mirroring the process that had taken place in the private sector after the famous “it’s not how much you pay, but how” paper, senior civil servants started to be remunerated based on performance.
On the one hand, new public management ideology forced the civil service to operate much more like a private business.16 New policies were ruthlessly subjected to techniques like cost–benefit analysis to determine whether or not they would be “profitable” for the state to undertake. Such a process is, of course, meaningless, because states are not businesses. The vast majority of a state’s citizens do not behave like “customers” who will pick another, cheaper state if they don’t like the quality of service they receive. But treating the state like a business ended up benefitting the few who do — the international capitalist class, who can threaten to move their money if they are taxed too much. On the other hand, new public management also had what might be considered an unintended consequence — an increase in public sector bureaucracy. Middle management in the sector has grown substantially, and employees are continuously assessed against useless metrics that serve to create more work for all involved.
The third, and perhaps the most important, element of New Labour’s strategy for tax and spend would be to encourage the private sector to undertake public spending on the state’s behalf. The logic behind the outsourcing agenda and private financing initiatives was a natural extension of new public management thinking — what better way to introduce market discipline into the public sector than to have private companies undertake spending for the state themselves? This would be justified on the basis of “efficiency”, but its true purpose would be to allow private corporations to profit from the necessary redistribution created by the finance-led growth model. New Labour’s promise to the electorate centred on alleviating inequality without killing the goose that laid the golden eggs — finance. The genius of the privatisation agenda was using this expansion in state spending to make the goose even fatter.
PFI: Profits for Investors
Proposals for a tunnel linking the UK and France date back to the nineteenth century.17 In 1802 the French engineer Albert Mathieu-Favier put together a blueprint to dig a tunnel under the English Channel, illuminated by oil lamps to light a path for horse-drawn carriages. A desire to seal off the cliffs of Dover from any European invasion prevented the project from being taken up until 1980, when Thatcher’s newly elected Conservative government agreed to work with the socialist French President François Mitterrand to take forward the proposition. Thatcher had one condition: the project would be financed privately. This was no small ask. At the time, the £5bn Channel Tunnel was the largest infrastructure project ever proposed. Whilst France’s state-owned banks and well-regulated private investors eagerly stepped forward to provide their half of the funding, the City was less keen on the idea. This was something of an embarrassment for a British government intent on proving that it was host to the most powerful financial centre on the planet, and it took interventions by the Bank of England and the government to finally ensure that adequate capital was raised.
But this wasn’t enough to please the banks, which were worried about being exposed to what looked like a hare-brained, politically-motivated white elephant. They demanded that a new body, which became known as Eurotunnel, be created to place some distance between the banks and the construction firms. At this point, the project was becoming incredibly expensive and complicated. The Channel Tunnel Group in the UK and France–Manche in France would invest in Eurotunnel, which would be floated as a public company, before itself financing TransManche Link, which would undertake the actual construction. Eurotunnel would, in turn, gain the concession to run services through the tunnel in coordination with SNCF and British Rail, and would recoup its costs over the long term through “usage charges” paid to it by the train operators.
Almost as soon as construction began, costs began to mount. The engineering problems were almost enough to derail the project on their own, but the real trouble lay with figuring out who amongst the plethora of different actors involved would pick up the extra costs. Adding to the trouble, high interest rates meant that the financing costs of the project were 140% higher than expected — a year-long delay cost an extra £700m in interest charges. By 1995, Eurotunnel was up and running, a year late, and 80% over budget. The company lost £900m in its first year of operation. Three years later, it had undergone three state rescues. In 2003, its interest payments of £320m were almost double its operating profits of £170m.
And yet, when the government decided that it needed to upgrade the rail network that went through the tunnel, it concluded that the project should, once again, be privately financed. HS1 — otherwise known as the Channel Tunnel Rail Link — also turned out to be a disaster. Yet again, a consortium was created to raise the finance needed for the project. Yet again, overoptimistic assumptions about future revenues meant that it was unable to find the funding it needed. And yet again, the government stepped in to bail out the private consortium and save the project. The Public Accounts Committee found that the project has left taxpayers “saddled with £4.8bn worth of debt”.18
As private financing has been extended into ever more areas of public spending, public investment has collapsed, falling to just 2.6% of GDP in 2018 — well below the OECD average of 3.2%.19 A recent report on private financing initiatives from the National Audit Office revealed that most of these projects have been entirely unsuitable for private financing, and that some projects are costing the public 40% more than would have been the case had public money been used directly.20 Public borrowing is always, other than in extreme cases in which states are deemed uncreditworthy, cheaper than private borrowing because it is incredibly difficult for states to default. Even when they do, it is either because they have borrowed in a foreign currency, like Argentina today, or because they don’t have control over their monetary policy, like Greece.
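The cost gap compounds over the life of a project. As a rough sketch, the Python below compares the same £1bn project financed at an assumed 3% public borrowing rate and a 6% private rate over thirty years. The rates and term are illustrative assumptions rather than figures from the NAO report, though they happen to produce a gap close to the 40% cited above.

```python
# A rough sketch of why private financing costs more over the life of a
# project. The £1bn principal, 3% public rate, 6% private rate, and
# 30-year term are all illustrative assumptions, not NAO figures.

def total_repayment(principal: float, rate: float, years: int) -> float:
    """Total paid over the life of a level-payment (annuity) loan."""
    annual_payment = principal * rate / (1 - (1 + rate) ** -years)
    return annual_payment * years

public = total_repayment(1_000_000_000, 0.03, 30)
private = total_repayment(1_000_000_000, 0.06, 30)
print(f"Publicly financed:  £{public:,.0f}")
print(f"Privately financed: £{private:,.0f}")
print(f"Extra cost of private finance: {private / public - 1:.0%}")  # ~42%
```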
So why did successive governments continue to press ahead with PFI? Supporters argued that PFI transferred risk from the taxpayer onto the private sector.21 If the contractors delivered on time and on budget they would get paid — if they didn’t, they would lose out and their shareholders would suffer. This was supposed to introduce market discipline into the provision of public contracts. Again, the ideological justification fell far short of the reality. The private sector prefers to operate in the absence of competition. So, in exchange for entertaining the government’s new publicity stunt, the companies involved made sure that the contracts were written in such a way as to ensure that, whatever happened, they would get their money. This meant that private companies were undertaking spending on the state’s behalf without incurring any risk whatsoever.
The second justification was even more spurious. Having inherited the idea that the state functioned like a household — and that the role of the prime minister was similar to that of a good housewife — New Labour had an incentive to ensure that public spending did not reach what looked like unsustainable levels.22 This analogy was always ridiculous, especially when it comes to investment spending. If the government borrows to invest in infrastructure projects that expand the productive potential of the economy, GDP will rise, tax revenues will follow and, over the long term, the project will pay for itself. Whilst the New Labour government undoubtedly knew this, they also knew that the returns wouldn’t be recouped for many years, whilst the impact on government debt would be immediately obvious. New Labour had to avoid looking like it was going back to the bad old days of socialism, and this is where PFI came in. Private financing allowed New Labour to shift the immediate cost of borrowing off the government’s books and onto those of the private sector, even though the cost would ultimately fall on the state itself.
Private financing is another avenue through which the British state has attempted to implement a regime of privatised Keynesianism.23 Combined with the increases in household debt described in the last chapter, PFI and other outsourcing initiatives would allow for the further displacement of public spending with private debt. Except under this scheme, the private debt would be held by wealthy shareholders rather than households, and it would be backed up by an implicit government guarantee. Private corporations would be able to borrow on financial markets without taking any risk, as the state would always be there to step in and pay back the debt. This meant implementing the logic of Keynesianism — that states should borrow to invest to mute the ups and downs of the business cycle — whilst skimming some cash off the top for the private sector. In other words, state-sponsored rentierism.
State-guaranteed private borrowing creates the problem of moral hazard, a situation in which economic actors are shielded from the negative consequences of their actions. Before 2007, the banks knew that if they ran into trouble the government would always be there to bail them out — they could take huge risks today, without having to face the consequences tomorrow. This problem of moral hazard is what underlay the collapse of PFI giant Carillion.24 The firm was accepting government contracts at very low prices — less than the amount it needed to deliver the work — and eventually found itself unable to deliver its contracts and pay its shareholders. Rather than admitting it was in trouble, the company increased the amount it was paying out to shareholders and started to take on new government contracts to cover the costs of the old ones — effectively throwing good money after bad. It did so betting, no doubt, that the state would step in to rescue the company were it to encounter financial difficulties. But when Carillion collapsed in 2018, the government did not step in to help — perhaps because of the public outrage at the incredible irresponsibility of the firm.
When the auditors came in to manage Carillion’s bankruptcy, they found that the company had just £29m in cash and owed £1.2bn to the banks, meaning that it didn’t even have enough money to pass through administration before entering liquidation. Carillion had become a giant, state-sponsored Ponzi scheme, siphoning off money from the taxpayer and channelling it into the pockets of wealthy shareholders. Whilst many of those shareholders who did not sell at the first signs of trouble have now lost out, the real losers have been the contractors and workers hired by Carillion, many of whom have found themselves out of pocket. Today, billions of pounds worth of taxpayers’ money is being funnelled into inefficient, financialised outsourcing giants like Carillion, only to enrich executives and shareholders, whilst leaving taxpayers to foot the bill.
The demise of Carillion epitomised the failure of New Labour’s experiments with private financing. But PFI wasn’t the only route through which public spending has become financialised — the rise of outsourcing more broadly was also to blame. Government spending can be divided into investment spending, which requires a big outlay up front to construct a potentially revenue-generating asset, and current spending, which pays for day-to-day public services provision. Upgrading the UK’s rail network might, for example, require billions of pounds to be spent today for improvements that will be felt tomorrow, whilst paying the salaries of NHS staff requires a continuous payment every year. PFI was meant to keep investment spending off the government’s books by requiring a private company to raise a lot of money up front, which the government could repay over a period of decades, with interest. But New Labour also wanted to bring the private sector into the delivery of day-to-day spending. So, it turned to outsourcing — paying a private company directly for the delivery of a public service. Many of the same firms that were brought in to deliver big infrastructure projects were also used to deliver public services.
Outsourcing has an ambiguous record.25 There are examples of relative success, where public procurement has been used wisely, as well as examples of dramatic failures, with low-quality services being delivered by unscrupulous contractors at a huge cost to the taxpayer. There are arguments for outsourcing government projects when procurement is done well and includes, for example, commitments to use companies with unionised workforces and with high environmental standards. But today, outsourcing is mostly dominated by a few big firms delivering low-quality services whilst skimming money off the top for shareholders and executives. G4S managed the security for the London Olympic Games so badly that the government was forced to bring in the army to support them.26 Serco operates some of the UK’s most brutal detention centres and has even been accused of using inmates as cheap labour.27 Capita is known for gouging many of the UK’s local authorities by delivering low-quality services at eye-watering prices.28 These outsourcing oligopolies have their tentacles spread all over the spending of the British state, from schools and hospitals, to prisons and detention centres.
The steady privatisation of public spending around the world was recently identified by the UN as the source of pervasive human rights abuses.29 The UN’s expert panel claimed that “[g]overnments trade short-term deficits for windfall profits and push financial liabilities on future generations”. Neoliberal governments have relied on privatised public spending in order to alleviate some of the inequality created by the finance-led growth regime, and to mute the ups and downs of the business cycle. They have, however, shied away from returning to the old Keynesian model of promoting full employment, given the implications this would have for power relations between workers and owners. Instead, they have sought to create a model of privatised Keynesianism, which allows executives and shareholders to profit from public spending through monopolistic corporations that pay executives huge sums whilst hiring workers on poorly-paid, precarious and insecure contracts. In other words, privatisation attempts to deal with some of the many contradictions of finance-led growth, whilst maintaining the power relations upon which it rests.
But private financing and outsourcing did not just allow private investors to extract large sums of money from the taxpayer. These innovations were also designed to insulate the private sector from democratic accountability.30 When the public sector provides a poor service, citizens can lobby, campaign, and vote against the politicians in charge. The more democratic and decentralised the state, the more this pressure is felt. But if a private organisation is providing a poor-quality service, to whom does the service user complain? She could try to complain to the organisation itself, but why would senior executives listen to a disgruntled service user when their profits are guaranteed by the state? She might try to influence politicians, but they would just tell her to take it up with the company. Without a real market, in which consumers can respond to poor outcomes by changing providers, private provision of public services insulates the providers from democratic accountability.
Today, our public services deliver lower-quality provision to a smaller number of people at a higher cost, and at much lower levels of efficiency. They are bureaucratic monoliths, managed according to the profit-maximising logic of the free market, without the countervailing competitive pressure that would require them to raise standards. The deterioration in the quality of public services has often been part of a deliberate strategy to encourage middle earners to take up private forms of social insurance, insulating them from the ongoing deterioration of the public sphere. The state is consigned to offering low-quality services to the poor, who are rendered voiceless in the face of the giant bureaucracies in control of many of our public services.
How did neoliberal states get away with such obvious disregard for such a large portion of their citizens? They did what they always do: they claimed that they didn’t have a choice.
The Bond Vigilantes
In 1983, Edward Yardeni, an economist at a major US brokerage house, coined the term “bond vigilantes”.31 These vigilantes, Yardeni claimed, would “watch over” domestic governments’ policies to determine “whether they were good or bad for bond investors”. In other words, in the era of capital mobility, it was up to states to prove to investors that their country was worth investing in. If states were found wanting, the vigilantes would flee, pockets stuffed full of cash. Yardeni’s bond vigilantes are a personification of the logic of market discipline. States that fail to safeguard the value of foreign investors’ capital would face capital flight as investors sold these states’ assets, including their governments’ bonds.
This capital flight would send the value of the country’s currency tumbling, which in import-dependent countries would lead to a rise in inflation and increase the cost of servicing foreign debt. For those countries with fixed exchange rates, it would necessitate cuts to public spending or a humiliating devaluation. The bond vigilantes could also more directly impact a government’s credibility by selling its debt. The higher the demand for a particular state’s government bonds, the lower the yield — the greater investors’ confidence in a country’s ability to pay its debts, the lower that country’s borrowing costs. If investors lost confidence, disaster could ensue: a mass sell-off of a country’s debt could trigger a sharp rise in the cost of debt servicing, potentially catalysing a solvency crisis.
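The mechanics can be sketched with a simplified measure of yield, the “current yield”: the fixed coupon divided by the bond’s market price, ignoring maturity. All the numbers below are hypothetical.

```python
# The inverse price-yield relationship behind the vigilantes' power,
# using the simplified "current yield" (fixed coupon over market price,
# ignoring maturity). The £5 coupon on £100 face value is hypothetical.

def current_yield(annual_coupon: float, market_price: float) -> float:
    return annual_coupon / market_price

# Strong demand bids the price up, so the yield -- the state's effective
# borrowing cost -- falls...
print(f"{current_yield(5, 110):.2%}")  # 4.55%
# ...while a sell-off pushes the price down and the yield up.
print(f"{current_yield(5, 80):.2%}")   # 6.25%
```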
Prior to the liberalisation of international capital markets, most developed countries didn’t have to worry too much about international financial markets’ views on their domestic policy decisions. Investors were constrained in their ability to move their money around the world, for the very reason that large volumes of capital flowing into or out of a country would have made it all but impossible for governments to maintain the exchange rate pegs at the heart of Bretton Woods. But with the removal of restrictions on capital mobility and the rise of the institutional investor, this all changed. Suddenly, a decision on the part of a few big investors to divest from a particular country could spark a crisis. This gave the vigilantes a great deal of power. International investors found themselves able to undermine — and sometimes even bring down — democratically-elected governments that they judged to be unsound economic managers.
Perhaps the best example of this kind of market discipline is the capital flight that befell French President Mitterrand’s government in 1983.32 Mitterrand had been elected in 1981 on a socialist platform that was essentially an extension of the post-war consensus. His 110 propositions for France included commitments to revive growth through a large Keynesian programme of investment, to nationalise key industries, to increase the country’s wealth taxes and to democratise the institutions of the European Union. This, Mitterrand hoped, would lay the groundwork for the “French road to socialism”. He could not have picked a more inopportune moment to advance such an agenda. International finance had been emboldened by the death of Bretton Woods and the birth of neoliberalism in the US and the UK — investors were not about to allow one of the world’s largest economies to fall to the scourge of a renewed socialism.
France — like much of the rest of the global North at the time — was also in the midst of its own economic crisis. International competition was eroding corporate profitability under French social democracy, and, when the oil price spike hit, rising inflation and unemployment brought the economy grinding to a halt. Just as it is today, the French state’s ability to use monetary policy to counteract these pressures was limited by the country’s participation in the European Monetary System (EMS), which required it to peg its currency to the Deutsche Mark. France was also then enduring the effects of the Volcker shock — the interest rate hike pursued by the US Federal Reserve that saw billions of dollars’ worth of capital flow into the US — which placed a strain on economies all over the world. Mitterrand’s nationalisations of French banks were not exactly encouraging international investors to keep their money in the country, and France was also running a trade deficit. These factors all contributed to a mass exodus out of French assets — from bank deposits, to property, to government bonds — and France lost around $5bn in capital flight between February and May 1981. Mitterrand faced a choice between implementing capital controls or giving in to the demands of international finance by implementing a harsh austerity agenda, reneging on his promises of a French road to socialism. In the end, he chose the latter.
This story seems to suggest that, by the 1980s, investors had become powerful enough to force democratically-elected governments to promote their interests — or, as those governments would argue, to abide by the logic of the market. This is what explains the rise of neoliberalism: states had no choice other than to implement “investor-friendly” policies, like reducing taxes, deregulating financial markets, and making credible commitments to respect private property rights and to keep inflation low. But the story is more nuanced. The increasing power of the bond vigilantes benefitted neoliberal states just as much as investors — Thatcher, Reagan, and others who sought to implement their radical economic agenda in the face of popular opposition could credibly claim that there was no alternative to cutting public spending, shrinking the state and deregulating markets. The idea that governments must compete for international investment has now become a central plank of economic discourse, reproduced by the financial and popular press.
The rise of finance came to shape the way the modern state functions, just as it has shaped the functioning of the modern corporation or household. But just as it is unwise to view the financialisation of the corporation as a battle between “good” capitalists and parasitic financial elites, it would be mistaken to view the financialisation of the state as something driven from the outside. Neoliberal politicians were not terrified into submission by the bond vigilantes, they worked with these investors to rebuild the global economy in the interests of global capital, just as they had rebuilt their domestic economies along the same lines.33 The bond vigilantes provided cover. States would deregulate financial markets, making investors more powerful, thereby allowing governments to invoke the logic of market competition to justify their imposition of neoliberal policies on an unwilling populace. By the 1980s, the bond vigilantes had made it possible for politicians like Thatcher and Reagan to claim that there was no alternative to neoliberalism — any attempt at socialist experimentation would be severely punished by the markets, just look at Mitterrand’s France.
Illiberal Technocracy
The bond vigilantes supported a project that aimed to place fiscal policy outside of the realm of political debate. In the era of capital mobility, states would have no choice other than to do as the markets wished. But whilst this discourse contained an element of truth, states that retained control over their own monetary policy still had far more room for manoeuvre than it suggested. Much more had to be done to place economics outside of the realm of politics, and developments in academic economics would provide the perfect justification.
In the 1970s, neoclassical economists took Hicks’ version of Keynesianism and incorporated it into the theoretical framework established by classical economics to create what the economist Joan Robinson described as “bastard Keynesianism”.34 This was an innovation, they claimed, permitted by advancements in mathematics that allowed economists to undertake complex modelling exercises that would reveal the fundamental “laws” of economic activity, based on simplifying assumptions about human behaviour. Human beings were perfectly rational, utility-maximising computational machines who interacted with one another in orderly, predictable ways, producing clear, linear patterns at the macroeconomic level. The best neoclassical economists will tell you that these assumptions are not meant to accurately reflect reality, and that their outcomes cannot easily be translated into policy solutions. The worst will tell you that the assumptions don’t matter if the results are right, and that it isn’t their business what policymakers do with the findings of academics. As is so often the case in the economics profession, the worst won out, and the findings of neoclassical economics seeped into political discourse. The end result was the dissemination of the view that economics could be reduced to a set of neutral economic facts, which could be innocently handed over to policymakers, who would then be able to implement the “optimal” set of policies to maximise growth.
From this point on, the economic success of a particular government would be judged objectively based on technocratic measures such as GDP growth, inflation, and unemployment. These metrics came to dominate the discourse of economics — particularly the almighty metric of GDP. The combination of technocratic neoclassical economics discourse and the hegemony of GDP were the nails in the coffin of political contestation over the economy — from this point forward, economics would be a self-contained, academic subject best left to the “experts”. Of course, what the rise of the expert really meant was the capture of policy-making by the powerful.35 In the absence of any accountability to voters, decisions about macroeconomic policy could be based on the returns such policies would provide to the wealthy.
Perhaps the best example of how rule-by-experts facilitates policy capture has been the move towards central bank independence. Neoclassical economists argued that politicians exhibited an “inflationary bias”, which made them poor economic managers. Failing to consider the long-term implications of their actions, politicians would reduce interest rates and increase spending today in order to boost growth and secure re-election. Ultimately, however, this would damage the economy in the long-run by raising inflation, which would erode consumers’ incomes. The solution was clear: this powerful tool had to be placed on the top shelf, away from the prying hands of the political toddlers focused only on their own electoral prospects.
Some argued that central bank independence was supposed to bring about high interest rates, which would damage industrial capital and promote the interests of finance capital — but under the conditions of financialisation, the situation is much more nuanced. Historically, there has been an assumed dichotomy between the interests of finance capital and those of industrial capital.36 The former has been assumed to prefer high interest rates to maximise the returns on lending, whilst the latter has been assumed to prefer low interest rates to allow it to borrow cheaply. But as firms have become financialised, the interests of these two groups have merged.37 Amongst businesses committed to shareholder value, high profits mean high returns for investors, eroding investors’ commitment to high interest rates. Bankers themselves also tend to rely less on high interest rates to make their profits in modern financial systems. As interest rates fell, banks came to rely on the fees derived from processes like securitisation rather than on interest income itself.
Equally, however, it is not in the interests of asset holders for interest rates to be kept low for too long, as high interest rates are also a guarantee against inflation. Inflation can harm long-term asset-holders because it erodes the value of their assets. If inflation is running at 5% per year and my investments are delivering a nominal return of 4%, my returns are negative in real terms. This might have made financiers conflicted about interest rates — they want high profits, but they don’t want inflation. The triumph of Thatcherism was to ensure that British capitalists could have their cake and eat it. Profits soared with deregulation, privatisation, and tax reductions, but little of this accrued to workers in the form of rising wages. Thatcher’s attack on the unions placed them in a much weaker position, preventing them from demanding pay increases in line with inflation. This meant that any increase in costs would be absorbed by the workforce in the form of shrinking pay packets.
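To make the arithmetic concrete, here is a minimal sketch of the calculation, using the figures from the example above (and the exact Fisher relation rather than the rough subtraction):

```python
# Illustrative sketch: the real return on an investment, using the example
# figures from the paragraph above (5% inflation, 4% nominal return).

def real_return(nominal_rate: float, inflation_rate: float) -> float:
    """Exact Fisher relation: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal_rate) / (1 + inflation_rate) - 1

nominal = 0.04    # 4% nominal return on the investment
inflation = 0.05  # 5% annual inflation

print(f"Real return: {real_return(nominal, inflation):.2%}")
# Real return: -0.95% — negative in real terms, as the text notes.
```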
The guarantee that inflation would be kept relatively low meant that monetary policy could be directed towards inflating asset prices.38 With central bank policy effectively captured by the finance sector, interest rates remained low throughout the 1990s, supporting an expansion in lending and an increase in asset prices. Most commentators agree that low interest rates were a central cause of the dot-com bubble that emerged towards the end of the 1990s, culminating in the crash in the early Noughties. Under financialisation, independent central banks have been able to provide the two macroeconomic conditions that benefitted investors most: low consumer price inflation, and high asset price inflation. Absent any democratic accountability, they could not be blamed for the financial instability this would inevitably cause. In fact, politicians encouraged the financial boom of the 1990s. Regulation was eased and “light touch” organisations like the FSA were set up to supervise the finance sector, often staffed by ex-financiers.39
In many ways, by the 1990s, global capital needed New Labour more than it needed another Thatcher. New Labour succeeded in hiding the stark class divisions that marked British society by the late 1990s, whereas Thatcher’s Conservative Party had made them more obvious. Blair showed himself capable of naturalising the finance-led growth model in a way that Thatcher never could. Class, we were told, no longer mattered. A rising tide would lift all boats. All that was needed was for educated policymakers to pick the “right” policies to maximise economic growth. Whilst the British state continued to pursue economic policies that were in the interests of elites, the battle lines between the elite and everyone else were no longer visible — they had been blurred by rising home ownership and consumer debt. Some argued that the battle lines had ceased to exist. The end stage of capitalism was to produce a classless utopia. It would take the largest financial crisis since 1929 for the class foundations of finance-led growth to be revealed once again.
CHAPTER FIVE
THE CRASH
Stability leads to instability. The more stable things become and the longer things are stable, the more unstable they will be when the crisis hits. — Hyman Minsky
On 15 September 2008, Lehman Brothers, one of America’s largest and oldest banks, filed for bankruptcy. The bank held $600bn worth of assets, making this the largest bankruptcy in American history.1 Financial markets looked on in shock. Just days earlier, the US government had nationalised Fannie Mae and Freddie Mac — two highly subprime-exposed mortgage lenders. The fact that the US government had allowed Lehman Brothers to collapse sparked a worldwide panic. With mortgage default rates skyrocketing, there was no telling how many other banks were exposed to subprime losses on a similar scale to Lehman’s.
The trouble had started the year before, when mortgage defaults had begun to rise in the US. Many mortgages that had been issued in the boom years were flexible: subject to low fixed interest rates for the first few years of the loan, followed by higher ones down the line. People who took out these loans were assured that they would always be able to refinance their mortgage when the teaser rates expired. But at the beginning of 2007, refinancing became more difficult, and many consumers found themselves stuck with high interest payments that they couldn’t afford. House prices levelled off in 2006 and then began to fall. Defaults escalated, and banks began to worry. Had the trouble ended at US mortgages, we might have been left with a US, and perhaps a UK, housing crisis. But by 2007, mortgages were no longer just mortgages. The debt that had been created by the banks in the boom between the 1980s and 2007 had been transformed into the plumbing of the entire global financial system. Every day, millions of dollars’ worth of mortgages were packaged up into securities, traded on financial markets, insured, bet against, and repackaged into a seemingly endless train of financial intermediation.
As the crisis escalated, it was presented as an archetypal financial meltdown, driven by the greed and financial wizardry of the big banks, whose recklessness had brought the global economy to its knees. But whilst the big banks’ relentless desire for returns had escalated the crisis, its causes could be traced back to what was taking place in the real economy: mortgage lending.2 And this was driven by financialisation. The Anglo-American model of finance-led growth — described in this book so far from the British perspective — was uniquely financially unstable, even as policymakers believed that they had mastered boom and bust. The Anglo-American model was premised upon the kind of debt-fuelled asset price inflation that has always resulted in bubbles. The one that burst in 2008 just happened to be the largest, most global, and most complex bubble that has ever been witnessed in economic history.
In this sense, 2008 wasn’t simply a transatlantic banking crisis; it was the structural crisis of financial capitalism, emerging from the inherent contradictions of finance-led growth itself. The political regime of privatised Keynesianism, necessary to mitigate the fall in demand associated with low-wage, rentier capitalism, was always inherently unstable. Bank deregulation had created a one-off rush of cheap money that had inflated a bubble in housing and asset markets. The state allowed this bubble to grow for reasons of political expediency, rather than deflating it in the interests of financial stability. An economy that is creating billions of pounds’ worth of debt used for speculation rather than productive investment is an economy living on borrowed time. And in 2008, that time ran out.
Bubble Economics
As the financial crisis cascaded throughout the global economy, the Queen famously asked a group of economists why no one had seen “it” coming. All over the world, economists were asking themselves the same question. For the previous decade, the profession had been patting itself on the back for having “solved” the major problem at the heart of economic policy: mitigating the ups and downs of the business cycle. By absorbing some of Keynes’ insights on aggregate demand into the classical economic framework, the “neoclassical” economists — as they came to be known — claimed to have built highly accurate macroeconomic models that were able to produce the perfect answer to any policy question. Their success at prediction was, they argued, what underlay the so-called “Great Moderation” that preceded the financial crisis: a period of high growth, low inflation, and relative stability. As it turns out, the Great Moderation was no such thing. As the upswing in asset prices continued, greater amounts of risk built up in the system.3 Part of the reason the financial crisis of 2008 was so big is that the period of exuberance that had preceded it had been so long.
According to Hyman Minsky, “stability is destabilising” — long periods of calm in financial markets encourage behaviours that lead to instability.4 Minsky’s work built on Keynes’ theory that investment is driven by human psychology more than by any objective market rationality. The combination of these psychological factors, and the ability of modern capitalism to create huge amounts of debt, gives rise to financial systems that are fundamentally unstable. Financial markets tend to be characterised by periodic bubbles and panics, which in turn impact the real economy, causing credit crunches and recessions.
Instability results from the psychological factors that drive investment under conditions of uncertainty. Investment decisions are determined by the cost of the investment and the expected returns to be derived from it. Keynes argued that these two variables – costs and expected returns – are governed by different price systems. Keynes’ two price theory – later added to by Minsky – showed that costs associated with an investment — including the costs of financing the investment if the business is borrowing, and the risks associated with that borrowing — are determined by what is going on in the economy now. The other side of the equation — the expected returns derived from the investment — are driven by what businesses think is going to be happening in the economy tomorrow. These expectations are subject to uncertainty — about future economic growth, the potential for bankruptcy, etc. — and are therefore more subject to the caprices of human psychology.
This understanding of the relationship between uncertainty and prices is one of Keynes’ most important theoretical innovations. Human beings are generally quite bad at understanding the nature of uncertainty, often confusing it with risk. But whereas risk is quantifiable, uncertainty is not. Risk measurements can be applied to simple events like rolling a dice but trying to measure uncertainty is like trying to determine whether or not I’ll still own the dice in ten years’ time. I can invest on the basis that the economy has grown for the last several quarters, assessing the probability that this trend will continue, but there is no way I can account for the possibility that a major new invention will be brought to the market or that Earth will be hit by an asteroid. Optimism and pessimism therefore matter when it comes to investment, perhaps even more than the issues traditionally accounted for by economics, like current costs or growth rates. If investors are optimistic, they will not only expect that their future returns will be higher, they may also anticipate their future borrowing costs will fall and judge it quite unlikely that they will go bankrupt. As a result, they are much more likely to invest and to borrow to invest. The important thing to note here is that what drives this investment decision is not so much what is going on in the economy today, but what business owners think is probably going to happen tomorrow — a time horizon over which they can’t claim to have certain knowledge.
On aggregate, these differences in behaviour can lead to the emergence of bubbles. When the economic cycle is on the upswing, the prices of financial assets like stocks and shares start increasing. Investors buy these securities, expecting the good times to continue for the foreseeable future. When lots of investors buy the same asset, the price rises. Think, for example, about the increase in the price of Bitcoin, which was driven by expectations about the crypto-currency’s future value almost entirely divorced from its utility. As investors experience several periods of strong returns, they start borrowing greater sums to invest. Banks also tend to lend more to businesses when the economy is doing well. More money enters the financial system, pushing up asset prices even further and creating a self-reinforcing cycle of optimism-driven asset price inflation.
Eventually, the financial cycle enters a phase of “Ponzi finance”, with investors piling into assets one after another based purely on the speculation-driven price rises of the recent past. Just like a Ponzi scheme which uses new recruits to pay off old lenders, investors end up taking out debt simply to repay interest. This underlies Minsky’s famous insight that “stability is destabilising”: when investors experience an extended period of high returns without any crashes, they become overexuberant about the prospects of future growth and take risks they otherwise might not.
But eventually lending dries up, investment slows, and asset prices start to level off. Investors begin to sense that the party might be coming to an end and either hold off buying or start to sell their assets. Asset prices begin to fall on the back of slowing demand, just as they rose due to rising demand during the upswing. Believing that their assets will continue to fall in value, investors begin to panic sell, catalysing a chain reaction throughout the financial system. In extreme cases, this panic selling can cause prices to fall in the real economy. Falling asset values dampen profitability, reducing investment and wages, and investors’ and households’ wealth, leading to lower spending. Unrestricted lending exacerbates these dynamics by prolonging the upswing and deepening the downturn. Falling profits may require firms to sell off even more assets, or lay off workers, to repay their debts. Those who have used debt to purchase assets during the upswing may find themselves in negative equity — with assets worth less than the total amount of debt they have outstanding. They will put off all but essential purchases in an effort to pay off their debts, reducing demand, but they may still end up going bankrupt.
Historically, these observations have been applied mainly to business investment but the financialisation of the household meant that they could be applied to ordinary consumers too. Before 2007, consumers were borrowing huge amounts to purchase houses, increasing house prices and turning mortgage lending into a speculative game. With house prices rising, and credit more readily available than ever, houses became incredibly valuable financial assets. People began purchasing housing not just because they needed it, but because they expected that it would continuously rise in value. Some bought second homes, third homes, and fourth homes, all financed by debt. People also began to refinance their homes to “release their equity”, allowing them to purchase yet more assets — or even just to pay for holidays and new TVs.
This bubble was so big, and went on for so long, for two main reasons. On the one hand, instability emerged naturally due to changes that had taken place in the real economy. The financialisation of Anglo-American capitalism witnessed in the latter half of the twentieth century led to a falling labour share of national income and a rising rentier share. Rising inequality threatened to dampen demand and reduce growth. Bank deregulation and privatisation concealed these trends by expanding access to credit and asset ownership, allowing some working people to benefit from the increase in asset prices, even as others were left behind. The financialised state, meanwhile, used its control over economic policy and financial regulation to promote the interests of the elites that were doing so well out of the boom. Soon the bubble took on a life of its own. Rising house prices left consumers feeling wealthier, and therefore able to take out even greater amounts of credit, even as their wages declined relative to their productivity. This surge in private debt left both the British and American economies uniquely vulnerable to a crash. But this instability can be traced back to the chronic shortfall in demand that emerged from the disparities naturally created by finance-led growth.
On the other hand, the reason this boom was able to go on for so long was that financial globalisation and bank deregulation dramatically increased the amount of liquidity in the international financial system. Financial globalisation allowed banks and investors to draw on capital that had been stored away in states with lots of savings. Financial deregulation reduced restrictions on lending and allowed banks to use this capital to create more money. International banks developed ever more ingenious ways to evade the restrictions on lending that continued to exist. Mortgages were the dynamite at the centre of the explosive device that caused the economic crisis, but the explosive device itself had been transformed due to the financial innovation seen before the crash.
This transformation had several features. The removal of restrictions on capital mobility led to a wave of financial globalisation associated with significant increases in capital flows. The development of “securitisation” allowed ordinary mortgages to be turned into financial assets that could be sold to investors. The rise of the shadow banking system meant that banks were able to lend more than otherwise would have been possible. Finally, banks’ reliance on market-based finance — i.e. borrowing from other financial institutions rather than state-backed bank deposits — allowed investors from all over the world to get in on the game, but also left global banks uniquely exposed to any changes in lending conditions.
When talking about the financial crisis, commentators have tended to focus on this latter set of issues. But whilst the intricacies of global finance are important in determining the way the crisis happened, it is also critical to bear in mind that these factors merely served to prolong underlying trends that had their roots in the real economy — in the rise of finance-led growth that had led to falling wages, rising inequality, and ever higher levels of private debt. It was the combination of financialisation at the level of the real economy, and the growth of an interconnected, highly leveraged and unstable financial system, that explains the unique depth and breadth of the crisis, as well as much of what has taken place since.
Financial Globalisation
With the collapse of Bretton Woods and the removal of restrictions on capital mobility, capital was now free to flood into nearly every corner of the globe, giving rise to a new era of financial globalisation. Total cross-border capital flows increased from 5% of global GDP in the mid-1990s to 20% in 2007 — three times faster than trade flows.5 Amongst the so-called “advanced economy” group, ownership of foreign assets rose from 68% of GDP in 1980 to 438% in 2007. In other words, by 2007, the foreign assets owned by the advanced economies were worth more than four times the size of all these economies added together.6
Financial globalisation has transformed states’ relationships with the rest of the world.7 According to traditional macroeconomic models, international trade is governed by the same principles of general equilibrium that govern national economies. Exchange rates, interest rates, trade, and financial flows are meant to adjust in order to bring supply and demand for different economies’ goods, services, and assets into balance. When a country runs a current account deficit — when it buys more from its trading partners than it sells to them — domestic currency flows out of the country. This is because the income from the current account has to come in the form of the domestic currency — if a consumer in the USA wants to buy a widget from the UK, they have to convert their dollars to sterling. High supply and low demand for a currency means falling prices. In other words, running a current account deficit means a falling exchange rate — your currency becomes less valuable relative to other currencies. A less valuable currency makes your exports cheaper to international consumers and should therefore increase demand for those exports. Over the long term, countries with current account deficits should experience falling exchange rates, making their exports more competitive, increasing demand for those exports, and reversing the deficit — and vice-versa for surplus countries. The relationship between the current account and the exchange rate is supposed to lead to equilibrium at the global level — no country should be able to run a current account deficit, or surplus, for a long time.
But from around 1990, large imbalances arose at the global level between “creditor” countries with large current account surpluses, and “debtor” countries with large current account deficits. Countries like the US and the UK had large and growing current account deficits, whilst Japan, China, and Germany had big surpluses. Where was the equilibrium? The US and the UK — deficit countries — should have seen large falls in the value of their currencies. China, Germany, and Japan — surplus countries — should have seen increases. Depreciations should have led to export growth in the deficit countries, and appreciations should have led to export falls in the surplus ones.
To understand what was going on, one must understand the relationship between the current account — which measures flows of income — and the financial account — which measures investment flows. If you think of the current account as being like the current account of an individual, then it is mainly composed of income and expenditure. Income from a wage or another source goes in (like income from selling exports) and expenditure goes out (like expenditure from buying imports). But in modern financialised economies these are not the only sources of income available to consumers. They are also likely to have another account that contains, say, a mortgage. This is a form of income — a big cash transfer from a bank that has been used to buy a house — which also entails a certain amount of expenditure in the form of repayments. In the same way, countries are able to “borrow” money from the rest of the world via their financial accounts by selling assets.
The financial account (once known as the capital account) measures flows into and out of UK assets. To stay with the mortgage example, if a UK consumer borrows £500,000 from a foreign bank to buy a house, that will represent a £500,000 inflow via the financial account. This can seem counterintuitive: even though the consumer has borrowed from the rest of the world, they have still received £500,000 now, which counts as a positive sum on the financial account. If a foreign investor built a factory in the UK for £500,000, this would also represent an inflow via the financial account — but just like the loan, it also represents a future liability, because the income the factory generates will flow back to that investor over the long term.
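The accounting can be sketched in a few lines. The figures below are hypothetical, and the sketch assumes the simple textbook convention that the current account and the financial account mirror one another:

```python
# A minimal sketch of the balance-of-payments accounting described above.
# All figures are hypothetical. Convention assumed: CA + FA = 0.

exports = 100   # income in via the current account
imports = 160   # expenditure out via the current account
current_account = exports - imports   # -60: a current account deficit

# The deficit is financed via the financial account, by selling assets
# to (or borrowing from) the rest of the world:
foreign_purchases_of_uk_assets = 75   # e.g. foreign-funded mortgages, gilts, property
uk_purchases_of_foreign_assets = 15
financial_account = foreign_purchases_of_uk_assets - uk_purchases_of_foreign_assets  # +60

assert current_account + financial_account == 0
print(f"Current account: {current_account}, financial account: {financial_account}")
```

A country running a persistent current account deficit, on this view, is simply one that keeps selling assets or issuing liabilities to the rest of the world, which is exactly what the US and the UK were doing before the crisis.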
Before the crisis, the US and the UK were seeing lots of capital flow out of their economies via the current account, meaning an increase in the supply of their currencies on international markets. This should have led to depreciations of their currencies. But demand for sterling and the dollar remained high, because there was high demand for British and American assets. The UK might have been losing sterling via the current account, but international investors were lending it back to us in exchange for our assets via the financial account. The rising value of house prices, and the proliferation of mortgage-backed securities (MBSs), meant that investors from all over the world, just like those in domestic markets, wanted to put their money into Anglo-American financial and housing markets. To do so they had to buy dollars or sterling, which maintained demand for these currencies, even as their current account deficits increased. Households were also able to purchase more imports because rising house prices made them feel wealthier.
All in all, this meant that the current account deficit expanded even as the currency appreciated. This created a self-reinforcing cycle. Rising currency values made British exports seem less competitive and imports seem cheaper. British exporters — especially manufacturers — found it harder to compete on international markets. Between the 1970s and 2007, the share of manufacturing in the British economy fell from 30% to just 10%. These economic changes reinforced financialisation by increasing the relative importance of the finance sector in driving economic growth. By the early Noughties, even if we had wanted to get off the train that was careering towards a cliff edge, we would have been unlikely to be able to do so.
Securitisation, Shadow Banking and Inter-Bank Lending
On its own, rising capital mobility would not have been sufficient to turn several large, but localised, housing bubbles into a global financial crisis. International investors needed assets to invest in — housing alone wasn’t enough. New, giant international banks, based mainly in Wall Street and the City of London, were only too happy to oblige. These banks placed British and American mortgages at the heart of the global financial system by turning them into financial securities that could be traded on financial markets — a process called securitisation.8 The securitisation of Anglo-American mortgage debt was central to both the long pre-crash boom and the swift collapse of the banking system in 2008. The American aspect of this equation was many times larger than the Anglo part, and far more important to the global financial system, but relative to the size of their respective economies, both experienced a surge in securitisation.
The process of securitisation involves turning claims into financial securities. A claim is a contract that entitles the holder to an amount of income at some point in the future. For example, a loan made by a bank to an individual or company is a claim on being paid back at a later date. Financial securities are claims that are traded in financial markets, and include equities (stakes in the ownership of a corporation, also called stocks or shares), fixed-income securities (securities based on underlying agreements to repay a certain amount of money over a certain period of time), and derivatives (bets on the future value of other securities or commodities). For example, a bank with some mortgages on its books may want to sell those mortgages now rather than waiting a few decades for the debt to be repaid. To access the money to which it is now entitled, the bank can turn the mortgage into a security and sell it on to another investor. The price of the security will reflect the underlying value of the loan, subject to interest rates, inflation, risk and other factors.
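As a rough illustration of that last point, the price of a simple fixed-income security is essentially the discounted value of the repayments the underlying loan is expected to generate. The loan terms and discount rate below are hypothetical:

```python
# Illustrative sketch: pricing a simple security backed by a stream of
# mortgage repayments. All figures are hypothetical.

def security_price(annual_payment: float, years: int, discount_rate: float) -> float:
    """Present value of a stream of equal annual payments."""
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

payment = 12_000  # annual repayments backing the security
years = 25        # remaining term of the loan
rate = 0.05       # discount rate: reflects interest rates, inflation, default risk

print(f"Price today: £{security_price(payment, years, rate):,.0f}")
# A higher discount rate (i.e. more perceived risk) means a lower price today.
```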
In the run up to the financial crisis, banks wanted to increase their lending to meet rising demand for credit. They were, however, constrained by regulation that required them to hold a certain proportion of the amount they lent out as cash, shareholders’ equity and certain other liquid assets. If they wanted to lend more, they needed more cash. Banks therefore took the mortgages on their books, placed them “off balance sheet” (in the shadow banking system described below) and securitised them, allowing investors to invest in them. In doing so, they were essentially selling other investors the future income stream that the mortgage would generate, pocketing the cash today, and then lending it to other individuals to create new mortgages. Minsky predicted that this kind of behaviour would come to dominate financial markets, when he wrote “that which can be securitised will be securitised”. In the US, the issuance of residential mortgage-backed securities (RMBSs) peaked at $2trn in 2007. US securities were sold to investors in the rest of the world, which bought them based on the assumption that they were as safe as US government debt, but with higher returns.
But securitisation didn’t just increase banks’ access to liquidity; it also allowed them to disguise the risks they were taking. Once they had lent as much as they could to creditworthy borrowers, banks started to increase their lending by issuing mortgages to customers who might not be able to repay them. The American government supported the emergence of what came to be known as “sub-prime” lending in an attempt to extend mortgages to a wider section of the electorate, just as Thatcher sought to increase home ownership in the UK through right-to-buy and financial deregulation. US federal bodies like Fannie Mae and Freddie Mac – both Government Sponsored Enterprises (GSEs) – would purchase mortgages from the banks and package them up into financial securities, before selling them on financial markets, backed by a state guarantee.9 This created a large and deep market for mortgage-backed securities (MBSs) of varying qualities, and allowed the banks to lend more to less creditworthy consumers, because they would receive an immediate return — and insulate themselves from risk — by selling the mortgage to a GSE.
Eventually, the GSEs started to package up good mortgages with bad ones, using complex mathematical models to get the balance just right. The GSEs would take a bunch of good mortgages and add in just the right number of sub-prime mortgages to allow them to create financial securities that investors (and ratings agencies) would consider risk-free. Imagine baking a cake and adding just enough poison that it won’t kill whoever eats it — the cake is like the security, containing just the right amount of sub-prime to look risk-free. As the housing bubble expanded, more and more sub-prime mortgages were created. As more of these sub-prime mortgages were baked into these securities, the quality of the securitised products deteriorated, culminating in the creation of the collateralised debt obligations (CDOs) that appeared to make even the riskiest mortgages risk-free. At the same time, new financial institutions got into the securitisation game, competing with Fannie Mae and Freddie Mac to parcel up mortgages into securities that could be bought and sold. The so-called “originators” would create mortgages, and either sell them to securitisers, or securitise them themselves.
Armed with the latest mathematical insights, the securitisers were confident that, even if some people started to default on their mortgages, their securities would retain their value. The ratings agencies, who received their revenues from the financial institutions they were supposed to be rating, unsurprisingly agreed, continuing to give US MBSs and CDOs high ratings (similar to those they granted to US government debt) even as the quality of these securities deteriorated. This process was reinforced by the insurance industry. Companies like AIG allowed the owners of these securities to hedge against a potential default by the mortgage-holder by taking out infamous “credit default swaps”. If the value of the security fell, the owners would be due an insurance pay-out. The government, securitisers, ratings agencies, and insurers collaborated to make it seem as though they were making huge amounts of money without having taken any real risk. And as long as house prices kept rising and securities kept being issued, their gamble paid off. But eventually this long period of stability destabilised the entire financial system.
We heard earlier about Keynes’ insights on the difference between risk and uncertainty — and they are central to understanding why securitisation wasn’t as safe as people thought.10 Risk is measurable and quantifiable — simple measures of probability are built on the measurement of risk. We may not know what the outcome will be when we roll a dice, but we can predict that the probability of rolling a 5 is 1/6. But not all events are like rolling a dice. In fact, few events are — especially in a complex system like the economy. In such situations, all we can do is predict the future based on the past, and the future is therefore uncertain — there are too many variables interacting with one another to allow us to predict outcomes with any certainty. Uncertainty is a completely different beast to measurable risk. Unlike risk, uncertainty is unquantifiable — the future is not only filled with known unknowns, but unknown unknowns.
As Frank Knight, an American economist, pointed out almost a century ago, human beings treat uncertainty like risk. We use past experience to extrapolate the likelihood that an event will occur in a particular way. Having invested in one company in a particular industry and received a large return on our investment, we might assume that investing in another business in the same industry will be equally profitable. But we have no way of knowing what will happen to the business, or indeed the industry, in the future — there is too much uncertainty to give a reliable estimate of the probability that the business will provide a good return on investment. In quantifying and mitigating the risks associated with defaults, the securitisers claimed to have created completely risk-free products. But whilst mathematical models can help to mitigate risk, they can’t mitigate uncertainty. Perversely, the exuberance created by the belief that risk had disappeared encouraged investors to take even greater risks based on uncertain assumptions about the future.
This approach to predicting the future wasn’t confined to the banks themselves. Regulators also viewed their role as mitigating future risks, which could be predictably measured from institution to institution.11 The approach to regulation before the crisis largely focused on making sure each bank had enough capital to allow it to withstand a crisis of moderate severity. The Basel Accords, first agreed by the Basel Committee on Banking Supervision as Basel I in 1988 and amended during rounds II and III in 2004 and 2010, aimed to harmonise international regulation on banking by setting minimum capital requirements for banks. Bank capital consists of highly liquid — or easy to sell — assets, like cash and shareholders’ equity. For example, if a bank makes £10m worth of loans, then a capital requirement of 10% will mean it has to hold £1m in the form of highly liquid capital. Capital requirements limit banks’ profits, because they force them to hold some non-profitable but safe assets like cash and shareholders’ equity. But if a bank gets into trouble and investors start to demand their money back, capital requirements ensure that it has enough cash to pay them.
The Basel accords rested on the idea that regulation should serve to measure and mitigate predictable risks — they were not built to deal with a crisis of generalised uncertainty. Regulators could encourage banks to hold a specific amount of capital to mitigate any foreseeable risks, but it is impossible to predict when, where, and what kind of financial crises might arise over the course of the financial cycle. And in an interconnected financial system, regulators should have realised that banks might be subject to unpredictable systemic risks that would affect the entire financial network, not just individual banks.
In fact, the Basel Accords ended up contributing to the crisis. Banks were required to hold different levels of capital against different assets depending on how risky the asset was judged to be. Riskier assets were associated with higher capital requirements. Mortgages and MBSs were judged to be low-risk, so banks had to hold less capital to insure themselves against potential losses. Banks worked out that they could increase their profits under Basel by holding low levels of capital against risky mortgages that provided them with high returns. This “regulatory arbitrage”— opportunities for profit-seeking created by regulation — encouraged banks to hold securities based on mortgage debt, even though they were often far riskier than many other assets.
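The arithmetic of this arbitrage can be sketched simply. The risk weights and capital ratio below are simplified illustrations in the spirit of the Basel framework, not its exact parameters:

```python
# A minimal sketch of risk-weighted capital requirements and the
# "regulatory arbitrage" described above. Parameters are illustrative.

BASE_CAPITAL_RATIO = 0.08  # capital required per £1 of risk-weighted assets

def required_capital(assets: float, risk_weight: float) -> float:
    """Capital a bank must hold against a position under risk-weighting."""
    return assets * risk_weight * BASE_CAPITAL_RATIO

position = 100_000_000  # £100m of assets, held either as corporate loans or as MBSs

corporate_capital = required_capital(position, risk_weight=1.0)  # £8m of capital
mbs_capital = required_capital(position, risk_weight=0.5)        # £4m of capital

# Same return on assets, but half the required capital roughly doubles the
# return on equity — hence the incentive to pile into mortgage-based securities.
print(f"Corporate loans: £{corporate_capital:,.0f}; MBSs: £{mbs_capital:,.0f}")
```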
Capital requirements also fostered the growth of the shadow banking system by encouraging the banks to undertake many activities “off balance sheet” in the less-regulated shadow banking sector. Shadow banks are institutions that lend money without taking deposits guaranteed by the state, and without direct access to central bank funding. Shadow banking is riskier than conventional banking because shadow banks are not supposed to be able to access central bank funds when they get into trouble. And because the state takes less responsibility for activities that take place in the shadow banking system, those activities are subject to less regulation. Basel II encouraged banks to create shadow banking entities “at arm’s length” from the main, regulated banking system. The banks could place riskier assets in the shadow banks, allowing them to disguise their exposure to these assets, without insulating them from the risks associated with this lending. These shadow banks were able to take more risk, and earn higher profits, even though these risks would ultimately be borne by the traditional banks themselves. As regulation on the traditional banking sector increased, the shadow banks — many of which were set up by the banks — increased their market share. Banks’ share of lending in the US fell from almost 100% before 1980 to just 40% in 2007.
Another change that took place in the international financial system before 2008 concerned the way in which banks raise funds.12 The traditional, neoclassical view of banking is that banks simply intermediate between savers and borrowers by receiving deposits and lending these to borrowers. State reserve requirements would determine the amounts that banks were able to lend — and they would lend as much as they were able to under domestic law. For example, if a bank had £10,000 worth of deposits, and regulation required it to keep 10% of its deposits in reserves at the central bank, it could only make £9,000 worth of loans. But this story hasn’t been true of the sophisticated financial systems that have emerged in the global North for decades — in the UK, for example, banks haven’t had any reserve requirements since 1981. Instead, banks lend as much as they can — limited only by demand — and then borrow to meet regulatory requirements. So, a bank might lend as much as it can to borrowers, before borrowing the capital it needs to meet capital requirements from an investor or another bank by the end of the day.
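The contrast between the two funding models can be sketched as follows. The textbook figures come from the example above; the loan demand and funding gap in the modern case are hypothetical:

```python
# Sketch of the two funding models described above.

# Textbook model: deposits come first, and a reserve requirement caps lending.
deposits = 10_000
reserve_requirement = 0.10
max_loans_textbook = deposits * (1 - reserve_requirement)  # £9,000, as in the text

# Modern model: the bank lends as much as demand allows, then borrows whatever
# it needs (from money markets or other banks) to meet regulatory requirements.
loan_demand = 50_000          # hypothetical demand for credit
loans_made = loan_demand      # lending is limited only by demand
funding_gap = loans_made - deposits  # £40,000 borrowed by the end of the day

print(f"Textbook lending cap: £{max_loans_textbook:,.0f}; modern funding gap: £{funding_gap:,.0f}")
```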
One source of funding for the banks in the pre-crisis period came from the so-called “money market funds” (MMFs). Wealthy savers seeking out higher interest rates than were available in traditional bank accounts deposited their cash in MMFs, which were seen as equivalent to normal bank deposits. Investors could take out their cash at any time — the only catch was that MMFs weren’t guaranteed by the state, but this was far from the minds of most investors before 2007. The MMFs would then lend their capital to the banks, often via the shadow banking system, which could offer them a relatively high rate of return. They were joined by the other institutional investors that had emerged in the 1980s, who agglomerated the savings of corporations and wealthy individuals and lent these to the banks.
Another source was the development of the “repo” — short for “repurchase agreement” — markets. Repo transactions allow one investor to loan a security to another investor and buy it back at a later date at a pre-agreed price. Banks would borrow from investors by “repo-ing” an MBS with another investor, before buying it back a few weeks later. Effectively, repo transactions are a form of collateralised loan, with banks borrowing from investors using securities as collateral. In repo-ing the MBS, the bank would take a haircut, meaning it would have to use some of its own money to fund the transaction. This process allowed banks to invest in billions of dollars’ worth of securities using a tiny fraction of their own cash.
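A stylised example (the figures are hypothetical) shows why haircuts permitted such enormous leverage:

```python
# Illustrative sketch of a repo transaction with a haircut.
# A 2% haircut means the bank funds £98 of every £100 of securities with
# borrowed cash and only £2 with its own money.

security_value = 100_000_000  # market value of the MBSs being repo-ed
haircut = 0.02

cash_borrowed = security_value * (1 - haircut)  # advanced by the repo lender
own_funds = security_value * haircut            # the bank's own stake

leverage = security_value / own_funds
print(f"Borrowed: £{cash_borrowed:,.0f}; own funds: £{own_funds:,.0f}; leverage: {leverage:.0f}x")
# 50x leverage: a 2% fall in the security's price wipes out the bank's stake.
```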
During the 2000s, all of the innovations described so far came together to create an incredibly risky and complex matrix at the heart of the international financial system.13 Banks would set up “structured investment vehicles” (SIVs) — shadow banks — and then place assets like mortgages into these SIVs. The SIVs would raise funds by borrowing on the money markets, rather than taking cash from depositors, for example by issuing asset-backed commercial paper (ABCP), a form of short-term corporate bond. The SIVs packaged up the loans into securities and sold some on, often to investors in surplus countries, but kept others — particularly the lower quality mortgages that were harder to sell. Shadow banks would also engage in complex repo transactions using the securities as collateral, relying on the assumption that they would always be able to roll these loans over. The traditional banks that had set up the SIVs were ultimately responsible for any losses made on these assets, meaning that any problems in the SIV would have a knock-on impact on the bank that had set it up and funded it.
Bailout Britain
In the early 2000s, the global economy had emerged from the bursting of the tech bubble stronger than ever. Investors were convinced that economists really had managed to tame the business cycle once and for all. But by 2006, the “goldilocks” economy — as some termed the neither too hot nor too cold economic conditions that prevailed during the early Noughties — had begun to falter. US house prices peaked in 2006, and then started to fall. Similar trends prevailed in the UK.14
Banks had forayed into subprime markets and started to offer mortgages with low or no deposits based on the assumption that house prices would continue rising forever. As a result, when prices started to fall, many homeowners fell into negative equity — meaning that they owed more in mortgage debt than their house was worth. In such a situation, consumers had a choice: keep paying a mortgage worth more than their home, or sell. Those who could opted to sell, some at any price, sending prices tumbling even further.
Falling house prices cascaded through the financial system. The financial securities that banks had been selling rapidly lost their value when the quality of the underlying mortgages was called into question. Many of these assets were held in the shadow banks that were operating at much higher levels of leverage than traditional banks, meaning even a small fall in asset prices could render them insolvent. These shadow banks, and many traditional banks, had also been financing much of their borrowing on international financial markets over very short time horizons. The securities that were tumbling in value had often been used as collateral for this lending. Combined with the general climate of fear and uncertainty as to who was solvent and who wasn’t, banks and their counterparts in the shadow banking system suddenly lost access to funding.
When banks could no longer rely on borrowing from other financial institutions to finance their liabilities, they started to sell their assets. Fire sales of asset-backed securities sent their prices tumbling even further. The repo markets that had developed before the crash, which allowed banks to borrow from one another using debt-based securities as collateral, seized up. Adam Tooze puts it succinctly: “Without valuation these assets could not be used as collateral. Without collateral there was no funding. And if there was no funding all the banks were in trouble, no matter how large their exposure to real estate”. Retail bank runs had been a thing of the past since the introduction of deposit insurance, but what happened in 2007 was essentially a giant bank run, led by other banks, which created a liquidity crisis – the banks didn’t have enough cash to meet their current liabilities. But the fire selling that resulted rapidly turned this liquidity crisis into a solvency crisis – the banks’ debts grew larger than their assets.
The panic quickly spread across the pond to the City of London. Whilst the subprime crisis was mainly driven by US consumers, the resulting panic and falling value of MBSs, CDOs and similar instruments affected securities from all over the world. As these securities fell in value, funding markets seized up, and many UK banks found themselves in the same situation as their US counterparts. British banks were part of the same international financial system as American ones: they were reliant on wholesale funding, and they had been exposed to billions of dollars’ worth of US mortgage debt. But the British banks had also been involved in the securitisation game themselves.
By the end of 2007, mortgage lending in the UK had reached 65% of GDP — just eight percentage points lower than in the US — and British banks issued £227bn worth of residential and commercial mortgage-backed securities in 2008 — 12% of GDP.15 Many of these mortgages had very high loan-to-value ratios (in some cases the loan was worth more than the house itself), as well as the kind of adjustable rates that had become so popular in the US.16 In 2008, the Bank of England’s Financial Stability Report stated that “adverse credit and buy-to-let loans [have] risen from 9% at the end of 2004 to 14% at the end of 2007”. The Bank expressed concern that many of these loans had adjustable rates and that it was becoming more difficult to refinance, meaning borrowers would face rising interest rates. The Bank wrote:
As in the United States, this repayment shock is occurring at the same time as house prices are falling. Those who bought in recent years with high loan to income multiples and/or high LTV ratios will be particularly vulnerable to further shocks to their disposable income, such as higher inflation or unemployment.
In fact, the UK housing market had begun to turn at around the same time as that in the US.17 The subprime market wasn’t as large in the UK, but underwriting standards had been deteriorating for many years. Banks like Northern Rock were issuing mortgages worth much more than the underlying value of the home and securitising them in the same way as their US counterparts, whilst relying on similar funding models. The crisis began in the US — a far larger and more systemically important market, with unique vulnerabilities18 — but the boom would have ended in the UK at some point anyway. 2008 was a crisis of the Anglo-American model, also pursued by states like Iceland and Spain, and now Australia and Canada — not simply a crisis in US mortgage markets. The size, severity, and global nature of the crash undoubtedly resulted from its genesis in US markets, but what 2007 showed is that the model of debt-fuelled asset price inflation is inherently unstable. At some point, the debt has to stop growing. And when the debt stops growing, the entire system breaks down.
When the crash hit, governments around the world looked on in horror.19 When it began, they had treated the financial crisis like any other financial panic — as an issue of liquidity, or access to cash. They assumed that the panic would pass, revealing that the banks were creditworthy. Trillions of dollars’ worth of loans were made available to banks by central banks all over the world. But regulators quickly realised that this wouldn’t be enough. As panic spread through the system and prices tumbled, banks became insolvent, not just illiquid — i.e. they weren’t just facing a cash-flow problem, they were bankrupt. They needed capital — cash, equity, and other high-quality assets. They needed a bailout.
It was Gordon Brown who first realised what was going on. Having spent his holiday reading up on the events surrounding the Wall Street Crash, he realised that the panic selling that had started in 2008 had eroded the value of banks’ assets to such an extent that many were now effectively insolvent. Giving them access to central bank funding would involve throwing more money into a never-ending hole. Some of the UK’s banks — notably RBS and HBOS — had become unimaginably large and overleveraged, only to see the value of their assets plummet overnight. Mervyn King, the governor of the Bank of England, agreed. The problem was capital, not liquidity. In effect, the banks had to be forced to take money from the state in exchange for shares — they had to be nationalised.
On 8 October 2008, the government announced that £500bn would be made available to the banks — some in the form of loans and guarantees to support liquidity, and some in the form of taxpayer investment in exchange for equity. Most of the investment went to the basket case that was the Royal Bank of Scotland, indebted up to its eyeballs after its recent purchase of the Dutch bank ABN AMRO under the reckless leadership of Fred Goodwin. The US was forced to take a similar approach, eventually spending over $200bn on purchasing bank equity, and a further $70bn bailing out the distressed insurer AIG. Socialism for the banks saved the global economy from the Great Depression 2.0.
Aside from the bailouts themselves, what prevented the crash from becoming a new global depression were the coordinated international stimulus packages implemented by the world’s largest economies. Keynesian economics was back in vogue. In most states, automatic stabilisers — the falling tax revenues and rising welfare payments associated with recessions — combined with discretionary fiscal spending — i.e. planned, not automatic, increases in spending — limited the impact of the downturn. The American Recovery and Reinvestment Act in the US — worth over $800bn between 2009 and 2019 — helped to stem job losses by channelling investment into infrastructure, and supported demand by providing financial support to the unemployed.20 Other G20 states followed suit with their own stimulus programmes. But it was China that saved the day. The Chinese stimulus programme — which included measures to stimulate bank lending as well as increases in central and local government spending — was worth almost 20% of GDP in 2009.21 Ongoing expansionary fiscal and monetary policy — far more than exports — has supported high growth rates in China and its major trading partners ever since.
Monetary policy changes pursued by the world’s four major central banks — the Federal Reserve (the Fed), the Bank of England (BoE), the European Central Bank (ECB), and the Bank of Japan (BoJ) — also helped. Interest rates were reduced to historic lows. But with households already heavily indebted, businesses uncertain of the future, and banks unwilling to lend, cutting interest rates wasn’t going to be enough. So, the world’s central banks tried something new: quantitative easing (QE). Since 2009, these four central banks have pumped more than $10trn of digitally-created money into the global financial system by purchasing government bonds, which has pushed up asset prices across the board.22 The Fed’s balance sheet peaked at around $4.5trn in 2015, or a quarter of US GDP — the value of the UK’s programme as a percentage of GDP peaked at a similar level.23 The BoJ’s apparently unending QE programme has seen its assets climb to over $5trn, larger than Japan’s entire economy.24 In many countries, it is hard to see how this expansion in central bank balance sheets will ever be reversed.25
For a time, it looked as though this coordinated action might bring a relatively swift end to the series of overlapping recessions then taking place in the economies of the global North. But then came the Eurozone crisis. Just as Chinese money had flooded into US debt before the crisis, German money, derived from its large current account surplus, flowed into debt booms in the UK and the Eurozone — notably in Ireland and Spain. In Europe’s periphery, states like tiny, overindebted Latvia faced similar problems. The tell-tale signs of finance-led growth — rising debt, housing booms, and rising current account deficits — started to afflict many EU economies. As Tooze points out, several EU countries were staggeringly “overbanked” by 2007 — the liabilities of Ireland’s banks were worth 700% of its GDP. When the crisis hit, Europe’s banks needed bailing out too.
But there were no mechanisms to orchestrate such a bailout at the EU level. Instead, the burden fell on individual economies like Greece, Spain, Portugal, and Ireland to save their bloated financial systems. Unable to print their own currency, states like Greece and Ireland were forced to seek bailouts from the international institutions formerly restricted to bailing out low-income states in the global South. But there was a problem: many of these countries were effectively insolvent. Their debts were too large ever to be repaid. Rather than accept that the debts needed to be written off, and the system transformed, the EU — helped along by the IMF — decided to impose austerity on the struggling economies, immiserating a large portion of Europe’s population. The nationalised banks received easier treatment than the indebted states that had bailed them out: this was socialism for the banks, and ruthless free-market capitalism for everyone else.
In the wake of the Eurozone crisis, it didn’t take long for Keynes to fall out of favour again. The Greek crisis was exacerbated by Greece’s inability to print its own currency due to its membership of the Euro. But the idea that the financial crisis was sparking a new wave of sovereign debt crises, from which no economy would be safe, spread like wildfire. Rather than identifying the root cause of the crisis — the model of finance-led growth constructed in the 1980s — politicians, academics, and commentators around the world seized on the narrative that the recession stemmed from too much government borrowing. Governments, they patiently explained, are like households — they can only spend as much as they earn. If they borrow too much one year, they must save to pay it back the next year. And if they borrowed too much over a short period of time, they would be passing down the burden of those debts to their grandchildren. For the good of future generations, governments around the world would have to tighten their belts. Nowhere — other than in bailed-out Greece — did this go further than in the UK, where the coalition government implemented an austerity programme so harsh that it has been linked to 120,000 deaths over the last decade.26
Having socialised the costs of the banks’ recklessness, financialised states around the world failed to use their control over the banking system to support growth for fear of interfering with the operation of the “free market”. The British state, now a majority owner of several banks, refused to use its control over several large banks to direct lending to productive purposes. Despite the rhetoric about paying down the debt, the government did not even try to sell its shares in the banks at a competitive price, instead selling them at a loss to the taxpayer, even as it asked the British people to foot the bill.27 These decisions were justified using familiar tropes. The market, on this one occasion, had failed. But that didn’t undermine capitalism as a social and economic system; and state ownership of the banks certainly didn’t undermine their commitment to enforcing private property. In fact, the way the bailouts were conducted reinforced the logic of finance-led growth: the state would use its power to give the markets what they wanted, and working people would be forced to pick up the tab.
Transatlantic Banking Crisis or Structural Crisis of Financial Capitalism?
Reading this account on its own could lead one to conclude that what happened in 2007 was simply a transatlantic banking crisis with its origins in the US. It then spread around the world due to a combination of financial globalisation and financial innovations like securitisation. In the aftermath of the crash, this is the view that dominated. It was the parasitical rentiers in the international finance sector that had brought the global economy to its knees. Greedy bankers, out-of-touch economists, and regulators asleep at the wheel all shared the blame in popular readings of the crisis.28 Such accounts undoubtedly deliver an accurate analysis of the events surrounding 2008, but they do not tell the whole story. Finance is not some ethereal activity that sits atop the “real” economy — it has its roots in normal economic activity. International banks may have been playing reckless games with one another, but the source of their profits was lending to households and businesses.
The global financial crisis may have broken out in the US in 2008, but it had its origins in the unique Anglo-American model of finance-led growth pursued since the 1980s. The financialisation of the firm provided an immediate fix to the profitability crisis of the 1970s – a fix built on the repression of wages and productive investment. The states that had encouraged the financialisation of the firm deregulated their banking sectors in order to give households greater access to credit and expand asset ownership. In doing so, they were attempting to disguise the chronic shortfall in demand finance-led growth threatened to create, and to make the system politically sustainable. Rising mortgage lending increased house prices, eventually inflating a bubble that saw the British and American housing markets turned into a giant Ponzi scheme. Banks took this mortgage debt, packaged it up and sold it on international financial markets, disguising the amount of risk they were taking. Capital flooded into the US and the UK to take advantage of the boom, and repressed activity in the rest of the economy. The spark that set the whole thing alight came from the US, but the fallout extended throughout the financialised economies of the global North, and was particularly severe in Britain, whose economy had been buoyed by rising debt and asset prices for decades. Whilst 2008 may look like a transatlantic banking crisis, it was more than this: it was a structural crisis of financialised capitalism.
Understanding the financial crisis therefore requires adopting an historical approach to analysing the evolution of a model that was born forty years earlier. This allows us to recognise that financialisation, as a fix to the contradictions of the previous model, contains its own inherent contradictions. Just as Kalecki helped us to understand the contradictions of social democracy far before the system broke down, economists like Keynes and Minsky also helped us to understand the contradictions of financialised capitalism far before that system collapsed. The fact that these things were predictable means they were endogenous to the functioning of the system – they were inherent features of finance-led growth. And that is the most important message to take from this story. The global financial crisis wasn’t an aberration; it wasn’t a couple of bad years in an otherwise well-functioning economy. It represented a deep-seated crisis, the roots of which lay in the economic model pursued in Anglo-America up to 2007.
Today, just like those living through the crisis decade of the 1970s, we are living in what Gramsci called the interregnum: that moment between the death of the old and the birth of the new. The implications of this insight will be discussed in the next chapter. But if the reader takes only one thing from this book, let it be this: poor regulation, bad economics and greedy bankers all contributed to the particularly explosive events of 2008, but the financial crisis had far deeper roots. A crash — if not necessarily the crash that we got — was woven into the DNA of the economic system that was built in the 1980s. And nothing but wholesale economic transformation will deliver us from its shadow.