grace blakeley
INTRODUCTION
Before 2007, the last time there was a run on a British bank, the Austrian Empire was preparing for war with the Prussians, and the thirty-six states of the USA had just agreed to free the country’s slaves. In 1866, Overend, Gurney and Company — the “bankers’ bank” — found itself in serious financial difficulties.1 Caught up in the euphoria of the industrial revolution, the bank had lent too much to the UK railway industry, fuelling a speculative boom that spread the length and breadth of the country. But when the bubble burst, the bank found itself with a stack of unpayable debts. Overend appealed to the Bank of England for financial assistance, but its pleas fell on deaf ears. Queues started forming outside the bank’s headquarters at 65 Lombard Street, and within a week the “Panic of 1866” had taken hold of the country.
141 years later, the panic of 2007 was just beginning as Northern Rock, the largest mortgage lender in the UK, found itself unable to access funding.2 Northern Rock’s business model was based on the securitisation of mortgage loans — turning mortgages into financial securities that could be traded on capital markets. It borrowed from other financial institutions over short time horizons — often on an overnight basis — and lent long, issuing mortgages that wouldn’t mature for decades. When financial markets started to seize up in 2007, banks stopped lending to one another, and “the Rock” found itself unable to access international capital markets, meaning it couldn’t pay its debts. On 13 September 2007, the news broke that Northern Rock was seeking emergency support from the Bank of England: the first UK bank run since Overend.
Both bank runs resulted from an asset bubble — one in railways, the other in housing. Both Northern Rock and Overend relied on borrowing from financial markets to meet their day-to-day liabilities. Both were eventually forced to appeal to the Bank of England for help. But there were also some critical differences between the two institutions. Overend lent money to companies that were building the UK’s railway networks: the same railway networks that we use to this day. It may have done so on unwise terms, but it had invested in the expansion of the productive capacity of the economy — in our ability to produce things, both then and in the future. Northern Rock was doing no such thing. A former building society, Northern Rock lent consumers money to buy already-existing homes. It had been criticised for approving mortgages with incredibly high “loan-to-value” ratios; on occasion the bank granted mortgages worth 125% of the property’s value.3 Rather than creating assets, Northern Rock was creating debt. And it was doing so on an unsustainable scale.
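To make the arithmetic behind that figure concrete, here is a minimal illustration (the sums are hypothetical, chosen only to show what a 125% loan-to-value ratio implies):

% Illustration only: a hypothetical 125% loan-to-value (LTV) mortgage
\[
\mathrm{LTV} \;=\; \frac{\mathrm{loan}}{\mathrm{property\ value}} \;=\; \frac{125{,}000}{100{,}000} \;=\; 1.25 \;=\; 125\%
\]

A borrower on these terms starts out owing £25,000 more than the home securing the loan is worth: the debt exceeds the asset backing it from day one, which is why such lending creates debt rather than assets.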
The contrast is puzzling. If it was so unproductive, then why was Northern Rock bailed out when Overend, Gurney and Company was allowed to fail? It is true that by 2007 the Bank of England had become the UK’s official lender of last resort, with a responsibility to support ailing banks if their demise might threaten the stability of the financial system. But this raises more questions. How had a small former building society become so important that its demise could have brought the booming British finance sector to its knees? When did the UK’s finance sector become so large, and so powerful, that a single bank could extract billions from the taxpayer under the threat of economic meltdown? In other words, when did finance become such a dominant and dangerous force in our society?
This book argues that, since the 1980s, the UK has entered a new phase of its economic history. Once the workshop of the world, the UK now connects to the global economy mainly through the City of London, a global centre for financial speculation. This transformation has not been slow and steady — it has occurred in fits and starts, as the economy has lurched from one crisis to the next, adapting under the influence of the powerful at each stage. Our current economic model — finance-led growth — can be traced back to the 1980s, when a new system emerged out of the ashes of the post-war social-democratic order. Since then, British politics and economics, like those of the US and a string of other advanced economies, have become “financialised”, with results that were not apparent until the crisis of 2008.
The best-known definition of financialisation is that it involves the “increasing role of financial motives, financial markets, financial actors and financial institutions in the operation of the domestic and international economies”.4 In other words, financialisation means more and bigger financial institutions — from banks, to hedge funds, to pension funds — wielding a much greater influence over other economic actors — from consumers, to businesses, to the state.5 The growth of finance has led to the emergence of a new economic model — financialisation represents a deep, structural change in how the economy works.6
When economists talk about financialisation, they usually point to the United States, which, in absolute terms, is home to the largest finance sector on the planet.7 Whilst this book focuses on the history of finance-led growth from a British perspective, most of its lessons can also be applied to the world’s current superpower. In the run up to the crisis, each and every one of these issues — the financialisation of corporations, households and the state — afflicted the American economy too, though in subtly different ways. In fact, we can speak of a peculiarly Anglo-American growth model, marked by a growing finance sector, a falling wage share of national income, growing household and corporate debt, and a yawning current account deficit.8 Other economies that pursued this model before 2007 include Iceland and Spain, and today Australia and Canada are perhaps its most enthusiastic adopters.
The most obvious indicator of financialisation is the dramatic increase in the size of the finance sector itself. Between 1970 and 2007, the UK’s finance sector grew 1.5% faster than the economy as a whole each year.9 The profits of financial corporations show an even starker trend: between 1948 and 1989, financial intermediation accounted for around 1.5% of total economy profits. This figure had risen to 15% by 2007.10 The share of finance in economic output was, however, dwarfed by the growth in the assets held by the UK banking system: banks’ assets grew fivefold between 1990 and 2007, reaching almost 500% of GDP.11 The UK also boasted one of the biggest shadow banking systems relative to its GDP before the crisis — a trend that has continued to this day.12 Meanwhile, cottage industries of financial lawyers, consultants, and assorted advisors grew up in the glistening towers of the City of London and Canary Wharf. Between 1997 and 2010, the increase in the share of financial and insurance services in UK value-added was greater than the increase in the share of any other broad sector bar the government sector — itself supported by the tax revenues provided by finance.13 Overall, by 2007, the UK had one of the largest finance sectors in the world relative to the real economy.
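To get a feel for how such a growth differential compounds, a back-of-the-envelope calculation helps (reading “1.5% faster each year” as 1.5 percentage points of extra annual growth is an assumption made here for illustration, not a figure taken from the sources cited above):

% Illustration only: compounding a 1.5-percentage-point annual growth differential over 1970-2007
\[
(1.015)^{2007-1970} \;=\; (1.015)^{37} \;\approx\; 1.7
\]

Even a modest-sounding annual differential, sustained for nearly four decades, is enough to multiply the finance sector’s size relative to the rest of the economy by a factor of roughly 1.7.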
But financialisation can’t be reduced to the increasing importance of big banks in the functioning of the economy.14 It’s not as though capitalism has been “taken over” by finance. Instead, every aspect of economic activity has been subtly, and sometimes dramatically, transformed by the rising importance of finance in the economy as a whole. Whereas economic life for the individual was once centred around wages and wage bargaining, now the management of debt has gained importance. Businesses once focused primarily on producing the goods and services for which they had a competitive advantage, but today they are likely to place just as much focus, if not more, on their share price, their dividend regime, their borrowing and the bets they’ve made on exchange rates and interest rates. There was a time when state borrowing was constrained by restrictive monetary policy; today states are not only able to borrow far in excess of what they earn, they are also able to have private corporations undertake spending on their behalf.
Historically, capitalism’s advocates have argued that the system makes everyone better off by creating wealth. Businesses make profits, and they invest these profits in future production. This creates jobs, which raise living standards for the majority of the population. Such a system might lead to rising inequality in the short term but, as entrepreneurs reinvest their profits, eventually this wealth will trickle down to everyone else. Whilst this has always been an optimistic reading of the way capitalism works, during the post-war period it often appeared to reflect reality (at least in the global North). But finance-led growth upsets the channels through which wealth is supposed to trickle down from rich to poor, and it does so in obvious ways. Investment slows, wages fall, and profits — especially financial profits — boom.15
Whilst all capitalist systems are premised upon the monopolisation of the gains of growth by the people who own the assets, under finance-led growth these dynamics become more extreme. Rising private debt might conceal this fact during the upswing of the economic cycle, but when the downturn hits it becomes clear that finance-led growth is based on trickle-up economics, in which the gains of the wealthy come directly at the expense of ordinary people. This is because financialisation involves the extraction of economic rents from the production process — income derived from the ownership of existing assets that doesn’t create anything new. When, for example, a landlord increases a tenant’s rent without having made any changes to the property, this is a simple transfer of wealth from a non-owner to an owner. The landowner cannot use the increase in price to “create” new land that would benefit everyone; he will simply pocket the money for himself. The same can be said for interest payments on debt, which transfer money from people who don’t own capital to people who do. Rising household debt, booming property prices, the enforcement of shareholder value and the financialisation of the state all transfer money from those who don’t own assets to those who do, without creating anything new in the process.
Financialised capitalism may be a uniquely extractive way of organising the economy, but this is not to say that it represents the perversion of an otherwise sound model. Rather, it is a process that has been driven by the logic of capitalism itself. As their economic model has developed, the owners of capital have sought out ever more ingenious ways to maximise returns, with financial extractivism the latest fix. In many ways finance-led growth represents capitalism’s most perfect incarnation — a system in which profits seem to appear out of thin air, even as these gains really represent value extracted from workers, now and in the future.
The Interregnum
The financial crisis was the beginning of the end for finance-led growth. Since 2007, the UK has experienced the longest period of wage stagnation since the Napoleonic wars, whilst American workers have the same purchasing power as they did forty years ago.16 Employment may be high, but work has also become more insecure, and levels of in-work poverty have risen. High levels of employment have also coincided with a stagnation in productivity — the amount of output produced for every hour worked — which has flatlined in both countries since the financial crisis. The rate of investment — by both the public and private sectors in the US and the UK — has fallen since 2008 and remains below its pre-crisis peak.17 In the UK, falling business confidence, volatility in financial markets, and the levelling off of house prices suggest that a recession is just around the corner. In the US, meanwhile, corporate debt is higher as a percentage of GDP than it has ever been. There seems to be a new corporate scandal every week, with overindebted, extractive, monopolistic companies controlling an increasing share of economic output whilst public services crumble. Interest rates around the world were until recently at record lows and most states are only now, a decade on from the crash, starting to unwind quantitative easing. The extra weight placed on monetary policy means that when the next crisis hits there will be little room for manoeuvre.
Economists are at a loss to explain this ongoing malaise. Some have argued that we are living through an era of “secular stagnation” (where secular means long-term). Technological and demographic change mean that the Western world must accustom itself to much lower rates of growth than in the past.18 Others claim that this economic stagnation results from rising government debt, which is a drain on productive economic activity and is scaring off foreign investment.19 Still others argue that this is all down to “economic populism” — governments implementing ill-advised economic policies to please the masses rather than listening to the timeless, objective wisdom of professional economists.20 The lost decade since the financial crisis is living up to that old adage that when you get ten economists in a room, you’ll get eleven opinions. The old guard is unable to explain to people just what on Earth is going on.
The central argument of this book is that, having gorged themselves before the crash, today’s capitalists are running out of things to take. We are currently living through the death throes of finance-led growth. Just like the post-war consensus of the 1970s, the old model is crumbling before our eyes, leaving chaos and destruction in its wake. And just like the post-war consensus, the death of finance-led growth was inevitable and predictable. Marx showed that every kind of capitalist system is subject to its own contradictions: strains that arise from the normal functioning of the economic model — from businesses trying to make money, politicians trying to get votes, and people trying to survive.21 These dynamics have characterised the development of capitalism for centuries. Each and every capitalist model must end in crisis, and moments of crisis are moments of adaptation — moments when, out of the ashes of the old, the new economy can be born.
They are also, as the Italian theorist Antonio Gramsci pointed out, very dangerous moments indeed. Each crisis of capitalism threatens to bring down not just the dominant economic model, but also the institutions that govern politics and society. When people no longer expect to be made better off by the status quo, they withdraw their support for it. The guardians of our governing institutions double down as a result, defending their model even as it fails to deliver gains for the majority of the population. Both sides dig in, leading to battles that can be drawn along surprising lines — with those at the bottom the most likely to lose out.
British society has clearly entered such a phase since the financial crisis. The UK’s 2016 referendum vote to leave the European Union was the biggest upset to British politics in a generation. Voters across the country used the referendum to express their discontent with a status quo that has seen them excluded from the proceeds of economic growth. The 2017 general election that followed the vote delivered a government unable to rule without the conditional support of one of the most regressive parties in British politics — the Democratic Unionist Party (DUP) — and unable to undertake the one task assigned to it — delivering a Brexit deal. In the absence of a growing finance sector, and with rising debt and asset price inflation, inequality has risen, living standards have fallen, and the old neoliberal institutions have struggled to contain, let alone channel, the anger of the majority of the population. A pervasive sense of crisis hangs in the air of British politics. The old paradigm can offer only more of the same, and ongoing austerity and weak growth will only exacerbate the UK’s political and economic problems.
In the US, the election of Donald Trump signals a similar grassroots backlash, even as Trump’s economic policy has served to increase inequality and provide windfalls for finance capital. Socialists within the Democratic Party seem to be profiting from Trump’s failure to address the concerns of the constituency that helped to elect him. In Europe, a new wave of xenophobia is sweeping across the continent, countered only by the steady rise in support for popular, socialist alternatives. Crisis after crisis has afflicted the economies that were once represented as the great success stories of liberal, capitalist development — Brazil, South Africa, Russia, Argentina, Turkey, and so many others are all experiencing political and economic turmoil. The poorest states continue to be left behind. Countries like Mozambique and Ghana, along with many low-income countries, are in deep debt distress.
Meanwhile, the environment is collapsing around us. Climate change is accelerating at rates that will render many parts of the planet uninhabitable in just a few short years. The past four years have been the warmest since records began, and the warmest twenty years have all occurred within the last twenty-two. As our forests are destroyed and our oceans acidified, it will not be long before we reach a series of tipping points when the effects of climate change will accelerate suddenly and unpredictably, rapidly creating the kind of “hothouse Earth” currently only seen in science fiction. And it is not just climate change we have to worry about. We are living through a mass extinction: the last fifty years have seen a 60% fall in vertebrate populations. Insects, particularly those critical for pollinating many plant species, are in terminal decline, and our soils are being eroded faster than they can be replenished. In other words, we are on the verge of ecological Armageddon.
But this moment of extended crisis could also represent a moment of opportunity. Not only are many capitalist economies around the world failing to deliver rising living standards for their most powerful constituencies; the capitalist mode of production is also accelerating the breakdown of all our most important environmental systems. Finance-led growth contributes to these dynamics by creating huge, unsustainable booms, followed by equally massive, wasteful busts. We cannot afford to organise our economies according to the logic of finance-led growth anymore. But our aim should not be to replace it with a new, equally contradictory model. Instead, we must use this moment of crisis as an opportunity to move beyond capitalism entirely. But that means answering a question that, ordinarily, we are not allowed to ask: What comes next?
What is the Alternative?
For a long time, it has been easier to imagine the end of the world than the end of capitalism — by which we mean an economic system based on private ownership of the means of production (the main factors used in the production process) with the aim of profit maximisation, the enforcement of private property rights by the state, and the allocation of resources through the market mechanism. The system may create inequality, unemployment, frequent crises, and environmental degradation but, we have been told, the alternative is far worse. Socialism — a system under which the means of production are owned collectively — has only ever led to death and destruction. Capitalism is the worst way of organising the economy, except for all the others.
Socialism’s opponents seem to believe that the basic conditions for organising a society and an economy have been the same at every moment throughout history. Capitalism emerged naturally because it is the natural way of doing things; socialism has failed because it is not. But, as surprising as it may seem, capitalism has not always existed. For most of human history, societies have been governed based on non-capitalist economic and political institutions. Feudalism only gave way to capitalism because states became powerful enough to disrupt rural power relationships and create a landless working class that could be used in the production process.22 This kind of power was premised upon the existence of complex societies, and the availability of certain technologies, without which experiments in capitalism would have foundered.
In the same way, the technological, economic, and political pre-conditions for the establishment of socialist societies exist today in ways that they never have in history. Large sections of the global economy are governed by rational planning rather than the market — that is, all of the economic activity that takes place within private corporations.23 Huge, international monopolies, many times the size of modern nation states in revenue terms, organise themselves based on a regime of top-down planning, generally using the latest technologies to do so. Neoclassical economists treat the firm as a “black box” and do not see relations within these firms as particularly relevant to economic outcomes. Instead, some might say conveniently, they restrict their analysis to those areas of economic activity governed by market relations. But the management of most firms today makes it quite clear that rational planning is perfectly possible, provided you have the means, and you are working towards the “right” ends.
When it comes to the means, we are living in a phase of human history associated with unparalleled technological development.24 Each of us holds in our pockets a computing device more powerful than the technology that sent the first man into space. We produce endless amounts of data about our habits, behaviours, and preferences that can be agglomerated and used by firms like Amazon to determine how much they should be producing, and of what. But the revolutionary power of these technologies is limited because they are concentrated in the hands of a tiny elite, which is using them to maximise their profits.
This brings us to the second issue, ends. Some say that it doesn’t matter what goes on inside firms as long as they are organised according to the logic of profit maximisation. This ensures that they remain “efficient”, and therefore provides for an optimal allocation of society’s limited resources. Except it doesn’t. Not only do many firms operate far from maximum efficiency (and pay expensive consultants to tell them how they can improve), they produce a host of other social and environmental ills — from inequality to climate change. There is no way that an organisational structure based on incentivising those at the top to extract as much as possible is the most rational — or indeed moral — way to organise production today. And top-down planning with the aim of achieving other ends is just as likely to lead to information and coordination problems.
Complex systems — whether these be firms or entire economies — rely on feedback. They are neither centrally directed, nor perfectly decentralised — they operate on the boundary between chaos and order — the realm of complexity. Such systems are dynamic — they are constantly moving. It is never possible to achieve a static equilibrium because conditions are always in flux. Instead, feedback from different parts of the network helps people to self-organise with the aim of achieving a collectively-determined goal, with some coordination and direction provided from the centre.
Capitalism, on the other hand, operates at the two poles of order and chaos. Within the firm — which neoclassical economists don’t study, but which Marxists do — production is organised through command-and-control, enforced by the threat of “the sack” and supported by various other technologies of control and exploitation. Outside of the firm, the state determines the rules of the game, backed up by the threat of force. These two institutions — firms and states — work together to produce an economic system based on domination, which also provides the appearance of freedom. Because within the market — its boundaries having already been determined by the powerful — economic activity seems almost anarchic. There are booms and busts, firms rise and fall, individuals are encouraged to place themselves in constant competition with one another just to survive. And this entire controlled and chaotic, free and coercive system is governed with one sole aim: maximising profits for those at the top.
Finance-led growth represents the apogee of the logic of capitalism. The owners of capital are able to derive profits without actually producing anything of value. They lend their capital out to other economic actors, who then hand over a portion of their future earnings to financiers, limiting economic growth. The costs of this model are left to future generations in the form of mountains of private debt and unsustainable rates of resource consumption. If the logic of capitalism is based on extraction from people and planet today, then finance-led growth is based on extraction from people and planet today and tomorrow, until the future itself has been stolen.
Climate change, global poverty and the financial crisis are all disasters that have emerged from firms and governments mismanaging the complex systems that they have created in the pursuit of profit. Capitalism has built these systems, and the powerful are trying to contain their complexity using hierarchical, top-down decision-making processes that are unfit for the task. As a result, capitalists are slowly losing control. As Marx put it, modern bourgeois society, which “has conjured up such gigantic means of production and of exchange, is like the sorcerer, who is no longer able to control the powers of the nether world whom he has called up by his spells”.25
There is a better way. Just as feudalism paved the way for capitalism, the development of capitalism is paving the way for socialism. Socialising ownership would ensure that economic growth and development benefit everyone — if everyone has a stake in the economy, then when the economy grows, we all get better off. But it is the democratic aspect of democratic socialism that is truly revolutionary. Rather than organising production based on the profit motive, working people would come together to determine their collective goals and how best to achieve them. Rather than working purely to maximise profits, we would be working to maximise our collective prosperity, which includes the health and happiness of people and planet.
Building the Future
Visions of the future abound. Democratic socialism, cybernetic socialism, fully automated luxury communism — all these utopian dreams are slowly seeping into our collective consciousness and allowing us to imagine a future not governed by the logic of private ownership and the market. But it is not enough simply to imagine a new world: we must develop a strategy to get there. Historical change does not proceed in neat, clearly delineated stages. We cannot wait for capitalism to fail and socialism to replace it. But equally, we cannot force our way towards a socialist society if the technological conditions, economic outputs, and, most importantly, the power relations that would support it are not already starting to emerge. What we need is a plan to get from here to there, based on an analysis of our current situation and the strategic points for intervention it offers.26
And this requires an analysis of how change actually happens. Socialists have long been divided between those who claim that history is driven forward by the objective forces of technological change — a view informed by one reading of Marx — and those who argue that history is driven forward by people coming together to organise and influence events — a view informed by another reading of Marx. One prioritises structures — the overarching political, economic, and technological conditions that shape what happens in the world — whilst the other prioritises agency — the individual and collective actions undertaken by people who are free to shape the conditions of their own existence.
Marx himself brought these ideas together using the notions of “contradiction” and “crisis”.27 Capitalist systems, of whatever kind, have their own inherent contradictions — internal problems which mean that, after a while, they stop working properly. The 2008 financial crisis resulted from the contradictions of finance-led growth — the creation of huge amounts of debt, the growth of the finance sector, and declining wages and capital investment. Capitalist systems can trundle along for decades, their problems getting worse and worse without anyone noticing, until they implode in a moment of crisis. These moments — understood as historical epochs rather than brief time periods — are especially important in determining the course of capitalist development. During a crisis, economic and technological structures loosen their grip over human action. Institutions cease to function, people’s ideas cease to make sense, rifts emerge within dominant factions, material resources are destroyed, and everything becomes more contingent. Possibility expands during moments of crisis: individual and collective action comes to matter much more.
Marx’s theory of history provides us with a unique understanding of our own times, and how we might change them. The contradictions of the social-democratic model created acute tensions in British political economy during the 1970s, and the crisis that ensued provided the perfect political moment for the wealthy to build a new institutional compromise out of the wreckage of the old.28 They took this opportunity and used it to rebalance power in society away from labour and towards capital, institutionalising a new model of growth and giving rise to a period of finance-led growth from the 1980s to 2007.
Finance-led growth was born, and for a while it seemed as though we had chanced upon a uniquely stable economic model. Politicians spent most of the 1990s and early 2000s claiming to have solved the problem of boom and bust. History, they told us, was over.29 Capitalism had won. In fact, for these observers, history had ended almost as soon as capitalism was born. The bourgeois economists, Marx claimed, operate according to the belief that “there has been history, but there is no longer any”.30 There is, they argue, no alternative to capitalism: “things might be bad for you now, but they could be a whole lot worse — just look at Venezuela”. If anything, the masses should be grateful for the benign, enlightened leadership of the ruling class.
The financial crisis shattered this illusion. And yet the ruling classes continued as though nothing had happened. They implemented austerity on the basis of an economic analysis undertaken by those who had failed to predict the crisis, and they ensured that the costs fell mainly on those least able to bear them. Many of the same elites who have governed the global economy for the last forty years remain in power to this day, which is perhaps why so few of the issues that caused the crisis have been dealt with. Debt levels are extraordinarily high, inequality is rising, the environment is collapsing, and policymakers seem less able to get to grips with these issues than ever before. Where is the revolt? Isn’t the financial crisis a paradigmatic example of our collective inability to challenge the deep-rooted logic of the capitalist system?
Yes and no. Ideas, behaviours, and beliefs that are built up over a lifetime cannot be undone overnight. Those raised during the end of history did not see the scales fall from their eyes on the day that Lehman Brothers collapsed. And far from organising in the shadows like the Mont Pelerin Society — the network of right-wing thinkers who sought to undermine social democracy — the left has spent decades in retreat under neoliberalism. Socialist parties, movements, and narratives all faded into the background: many genuinely believed that the centuries-long struggle between labour and capital was over. It took a while for people to realise that the crash had not been a blip; that capitalism was not invulnerable; and that things were only going to get worse, not better. Today, after the extended period of stagnation that followed the crash, we inhabit a revolutionary moment. We live in the shadow of a great event that will come to define the thinking of a generation.31
But unless we are able to contextualise this moment in the long history of capitalist development, we will fail to exploit its full potential. To move beyond capitalism, we must develop an understanding of its structural weaknesses to determine how best to challenge it. By exposing the unseen, unquestioned laws according to which the economy works, Marx demonstrated that history would continue under capitalism: that things could be different. Applying his method to our current moment allows us to understand how the system really works, and how we might go about changing it.
In just over a decade, it will be too late for us to deal with one of the greatest challenges humanity has ever faced, and before that, elites are likely to have reasserted their control by foisting upon us a new order that maintains all the powers of the old. But between now and then lies an extended moment of crisis — a moment of contingency and uncertainty — a moment during which the logic of capitalism has once again been brought into question. A new economy, and a new society, are slowly being born in the minds of those who know that history will never end. It is up to us to bring that new world into being.
CHAPTER ONE
THE GOLDEN AGE OF CAPITALISM
In 1944, the great and the good met in Bretton Woods, New Hampshire, to discuss rebuilding the world economy in the wake of the bloodiest war in history.1 The American delegation, led by Harry Dexter White, had been sent to ensure that the reins of the global economy were handed from the UK to the US in an orderly fashion. The British delegation, led by the famed economist J.M. Keynes, had been sent to retain as much power as conceivably possible without angering the UK’s main creditor, the US, which had emerged as the new global hegemon in the wake of the destruction of Europe. White, a little-known Treasury apparatchik, was a “short and stocky… self-made man from the wrong side of the tracks”. Other delegates recall that he was shy and reserved, though this may have had something to do with the fact that he spent much of the conference in hushed meetings with the delegates from the Soviet Union. Years later, he was accused of being a Russian spy, which he denied before dying from a heart attack. Keynes couldn’t have been more different — a tall, intellectual member of the British establishment, who unabashedly touted his achievements and promoted his own ideas. They were the “odd couple of international economics”.
The conference itself was, by all accounts, a raucous affair. Its wheels were greased with alcohol and fine food — in the small hours of the morning, delegates could be found drunk and cavorting with the “pretty girls” sourced from all over the US. Keynes predicted that the end of the conference would come alongside “acute alcohol poisoning”. The hotel boasted top-of-the-range facilities, including “boot and gun rooms, a furrier and card rooms for the wives, a bowling alley for the kids, a billiard room for the evening”, as well as a preponderance of bars, restaurants and “beautiful women”. The more extravagant, the better — the splendour and superiority of the American way was to be shown at every turn.
It is somewhat ironic that the decadent crowd at Bretton Woods came up with an agreement that would hold back the re-emergence of the gilded age of the inter-war years. Bretton Woods was meant to prevent the outbreak of not only another world war, but also another Wall Street Crash. Keynes argued forcefully that doing so would require reining in what he called the “rentier class”: those who made their money from lending and speculation, rather than the production, sale and distribution of commodities.2 In the late nineteenth and early twentieth centuries, rentiers had become extremely powerful on the back of the rising profits associated with the industrial revolution and increasing trade within the world’s constellation of empires. In the absence of controls on capital mobility, these profits traversed the global economy seeking out the highest returns. Much of this capital was invested in US stock markets, pushing up stock prices and inflating a bubble that eventually popped in 1929.
What the Great Depression started was finished by the Second World War, which saw billions of dollars’ worth of destruction, and increases in taxation to finance states’ war efforts.3 As a result, financial capital emerged from the first half of the twentieth century on the back foot, which made reining in the parasitic rentier class easier. Whilst the negotiators at Bretton Woods were undoubtedly concerned with securing the profitability of their domestic banking industry — not least the emerging power of Wall Street — just one banker was invited to the summit by the US delegation.4
Between the eating, the drinking, and the flirting, delegates at the conference hammered out an historic agreement for a set of institutions that would govern the global economy during the golden age of capitalism. The world’s currencies would be pegged to the dollar at a pre-determined level, supervised by the newly created International Monetary Fund, and the dollar would be pegged to gold. Capital controls were implemented to prevent financiers from engaging in the kind of currency speculation that could cause wild swings in exchange rates. The system of exchange-rate pegging and controls on capital mobility served to hem in those powerful pools of capital that had wreaked such havoc in the global economy in the period before 1929. Bretton Woods was a significant step forward in reining in the rentier class.
But Keynes didn’t get everything he wanted. He was hindered in his battle against international finance by the formidable Dexter White, backed up by the full force of US imperial power. White wished to retain the US dollar as the centre of the international monetary system, whilst Keynes wanted it replaced with a new international currency — the bancor. White emerged victorious, and the US gained the “exorbitant privilege” of controlling the world’s reserve currency.5 In other words, as well as constraining international finance, Bretton Woods also institutionalised American imperialism.6
The Bretton Woods conference marked the dawning of a new era for the global economy. Europe set about the long processes of post-war reconstruction and decolonisation, and the multinational corporations of the world’s newest superpower profited handsomely.7 Trade flows increased after the years of autarky during the war, and a new age of globalisation began. Whilst Bretton Woods provided the international framework for this economic renewal, it was at the level of national economic policy that the transition from pre-war laissez-faire economics was most evident. Keynes was, once again, at the centre of these developments.
In the inter-war period, Keynes had mounted a challenge to the economics profession by developing a theory of economic demand that challenged the central tenet of classical economics — Say’s law, the idea that supply creates its own demand.8 According to Jean-Baptiste Say — a Napoleonic-era French economist — prices in a free market will rise and fall to ensure that the market “clears”, leaving no goods or services unsold once everyone has had the chance to bid. If the market fails to clear — i.e. if businesses have products to sell but no one wants to buy them — it is because something is getting in the way of the price mechanism, like taxes or regulation. The law applied to workers as well as commodities, which reinforced the idea that there could be no such thing as involuntary unemployment. If a worker was unable to find a job, it was because he was setting his wage expectations too high.
This ideology was, of course, at odds with the experiences of those who had lived through the Great Depression. But the classical economists would retort that their field was a science, which paid no heed to the sensibilities of working people. Keynes was able to prove them wrong. His great innovation was to introduce the idea of uncertainty into economic models. When people are uncertain about the future, they may behave in ways that seem irrational — for example, saving when they will receive little return for doing so, or spending far above what they can afford. This is because in the context of uncertainty, people prefer to hold liquid (easy-to-sell) assets — and they tend to prefer to hold the most liquid asset of all: cash. Liquidity preference means that, the higher the levels of uncertainty, the more people save rather than spend.
This kind of uncertainty marks businesses’ behaviour even more than consumers’, and affects their investment decisions. If businesses’ confidence about the future turns, then they are likely to stop investing. These lower levels of investment will result in lower revenues for suppliers, who may have to lay off workers, who will in turn reduce their spending, leading to a fall in economic activity. This kind of self-reinforcing cycle of expectations is what gives rise to the business cycle: the ups and downs of the economy through time. It also shows why, over the short term, Say’s law doesn’t hold — if businesses lack confidence in future economic growth, they may choose not to spend even if they can afford to do so. And as Keynes famously stated, “in the long run we are all dead”.
But Keynes didn’t stop with this theoretical innovation; he also offered solutions to policymakers. Say’s law implies that taxes and regulation distort the normal functioning of the market, and that it is best for everyone when state economic policy is as unobtrusive as possible. But Keynesian economics provides a role for the state as an influencer of expectations, and a backstop for demand. If, for example, business confidence drops and investment falls, the state can counteract the multiplier effect this will cause by increasing its own spending or by cutting interest rates, making borrowing cheaper. If, on the other hand, businesses are investing too much, leading to inflation, the state can cut spending or raise interest rates to mute the upward swing of the business cycle. Managing the business cycle also required reining in the influence of finance, because lending and investment are also pro-cyclical: they rise during the good times and fall during the bad times. If the role of government was to lessen the ups and downs of the business cycle, it had to properly regulate finance, which so often exacerbated them.
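As a rough sketch of the multiplier logic described above (the propensity-to-consume figure is hypothetical, chosen only to illustrate the mechanism, not a value given in the text):

% Illustration only: the simple Keynesian spending multiplier
\[
k \;=\; \frac{1}{1-c}\,, \qquad c = 0.8 \;\Rightarrow\; k = \frac{1}{0.2} = 5
\]

If households spend 80p of every additional pound they receive (c, the marginal propensity to consume), each round of spending triggers a further round worth 0.8 of the last, so £1bn of extra government spending ultimately supports roughly £5bn of additional demand. In a downturn the same chain runs in reverse, which is why Keynes saw the state as a backstop for demand.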
This kind of Keynesian economic management had a significant influence on economic policy in the post-war period. The destruction of the war, the increasing size of the state, and the arrival of Bretton Woods led to something of a rebalancing in the power of labour relative to capital within the states of the global North.9 The rising political power of domestic labour movements led to the widespread take up of Keynes’ ideas, which were, after all, aimed at preventing recessions and unemployment. States and unions often developed close relationships with one another via emerging mass parties representing labour, and many had a centralised collective bargaining process. Taxes on the wealthy and on corporations were high — underpinned by low levels of capital mobility — and societies became much more equal. During this time, many Keynesians believed that they had finally succeeded in taming the excesses of a capitalist system that had caused so much destruction in the preceding decades, which is why this period was termed the golden age of capitalism, following the gilded age of the pre-war years.
In the UK, this period saw the emergence of a new type of political economy, often referred to as the post-war or Keynesian consensus.10 Following the wartime coalition, Labour roundly defeated the Conservatives in the 1945 election and Clement Attlee became prime minister. The new Labour government seized on Keynesianism which had, up to that point, had a limited impact on economic policy: Keynes’ ideas had revolutionised economics, but it took a change in power relations for them to revolutionise the real world. Over the course of the next several decades, inequality fell, wages rose in line with productivity, living standards for the majority rose and both the labour movement and the state apparatus became more powerful relative to capital. The welfare state developed, providing a safety net when the business cycle turned, as well as increasing the social wage and therefore workers’ bargaining power. And whilst the City grew, and retained its strong influence over government, the rentier class — landlords, speculators, and financiers — was much more constrained than it had been before.
The post-war consensus could be enforced because the workers, who stood to benefit from Keynesian management of the economy, had emerged from the war more powerful than ever before, and they organised to make it happen. In this way, the rebalancing of power from capital to labour that came about as a result of the war was institutionalised in the post-war social and economic framework implemented in the 1940s.
How Does Change Happen?
This understanding of historical change — that which is driven by power relations, institutions, and crisis — is based on one reading of Marx’s analysis of history. One reading, because it is a topic upon which Marxists continue to disagree. In particular, there is some disagreement between those who believe Marx prioritised economic structures in his analysis of historical development, and those who believe he prioritised agency. In other words, these groups have different answers to the question: “what matters most when it comes to historical change – economic and technological conditions, or how people respond to these conditions?”
On the first view, technological change leads to changes in people’s working conditions, and this leads to changes in the balance of power within society, and therefore people’s ideas. For example, the advent of mass production made it easier for workers to share political ideas and to organise to resist their exploitation, facilitating the emergence of unions. In this case, the political change naturally follows from the technological change in a way that can appear inevitable. Economic and technological conditions – what Marx referred to as the economic base – determine the balance of power in capitalist societies, and those with the power set about building institutions that reinforce their ideas – what he referred to as the superstructure. The powerful use their control over education, the media, and the law to propagate their narratives, which determine how people make sense of the world. This is how the system remains stable from day to day. But it is all underpinned by an asymmetry of material power – by who has control over force and resources. Taken to extremes, those who view history in this way may claim that human agency doesn’t matter at all – history progresses due to changes in technology, not human decisions.
Others respond that human beings aren’t robots: we have the capacity for free thought and debate, and we make sense of the world in our own ways. They claim that the superstructure has power in its own right: institutions can shape the development of capitalism, making it harsher or kinder, more extractive or less exploitative. And institutions can be shaped by battles that take place in the realm of ideas. These people can often be found arguing that, if a policy is convincing enough, and if we lobby hard enough, we will be able to implement it and change the way capitalism works. For them, it is human action that drives history, not the other way around. For example, the development of social democracy wasn’t just based on changes in technology that made it easier for workers to organise. It was workers who won limits on the working week, sick pay, and eventually even the creation of the welfare state itself; and they did so by organising.
The determinism of the structuralists jars with the utopianism of those who view human agency as the driving force of history, and this tension has dominated debates on the left — and indeed in the social sciences more broadly — for generations. Marx’s own method for dealing with these questions – also the method used in this book – was based on the idea of the dialectic, in which what appear at first as opposing forces merge to determine the direction of historical change. The economic base (the technological basis of production) interacts with the superstructure (ideas, culture, and institutions) to determine what happens and when. Under this view, the nature of technology and the economy provides the overarching context in which human action takes place — these things shape people’s incentives and behaviours in ways that make certain outcomes more likely than others. But they do not determine human action. People, their capacity to organise themselves, and the ideas they hold, still have the capacity to drive and shape history in ways that cannot be determined through an analysis of their economic conditions alone. Men make their own history, but they do not make it as they please.
The relationship between structure and agency becomes particularly important during moments of structural crisis, which naturally emerge in capitalist systems due to their inherent contradictions.11 Capitalism is subject to contradictions that stop it from working properly — from workers not earning enough to purchase the goods capitalists are producing, to the emergence of financial crises driven by investment booms, to the environmental crises associated with the injudicious extraction and use of the planet’s scarce resources. These contradictions are contained by political institutions designed by the powerful to make the system more stable — like the welfare state or financial and environmental regulation. But these institutions do not stop the contradictions from emerging, they only mute their impact. As capitalism develops, its contradictions escalate until they explode in a moment of crisis. These extended periods of crisis are critical in determining how change happens. Moments of crisis are moments when institutions, norms, and discourses break down — it becomes harder for our political, economic, and social systems to function, and much more difficult for people to make sense of the world. Divisions emerge amongst the people with the power, which leave them vulnerable to all sorts of attacks — most revolutions have taken place during moments of crisis. The structural flaws of capitalism lead to crises, and crises are times when agency matters more: it is primarily during these moments that ideas and the movements that champion them can influence the course of history.
And this is exactly what happened in the post-war period. The destruction of the war had changed the balance of power between capital and labour and created an institutional crisis of which the latter could take advantage. Working people used this moment of crisis to organise and institutionalise a new settlement — one that would benefit them. And for a long time, this framework worked. But it could not last forever. As the twentieth century progressed, capital began to strain against the leash that had been placed on it, and the compromise between labour, capital, and the state began to break down. Social democracy, just like any capitalist economic model, was subject to its own inherent contradictions. And its collapse paved the way for something new entirely.
The Rise of Global Finance
On 28 June 1955, G.I. Williamson, the Chief Foreign Manager of the Midland Bank, was called into the Bank of England to discuss what appeared to be some unusual dealings in the foreign exchange markets.12 Midland Bank had been engaging in an activity that, up until 1955, no UK bank had dared to try. It had been taking deposits denominated in US dollars and paying out interest to the holders of these deposits — an activity formerly restricted to US banks regulated by the Federal Reserve. The Bank of England’s “gentlemanly” approach to regulation at the time is well-documented. Bankers were frequently invited to Threadneedle Street — an old, imposing building, in which alumni of Eton, Oxford, and Cambridge were likely to have felt quite comfortable — for a cup of tea and a chat. Occasionally stern words were exchanged, but rarely would any real discord disturb what has been described as the “dream-like” state of the City of London in the golden era of capitalism.
The discussions between Williamson and Cyril Hamilton, a Bank official, were no different. Hamilton summarised the meeting in a memo reassuring his higher-ups that “nothing out of the ordinary had taken place” at Midland and that its foreign exchange activities had been undertaken in the “normal course of business”. In any case, Hamilton reported that “Williamson appreciates that a light warning has been shown”. Quite why a light warning would have been required for proceedings undertaken in the normal course of business was not specified. Perhaps Hamilton had a faint inkling that Midland’s activities represented an entirely new phenomenon that the Bank of England was not quite equipped to manage. It is, however, highly unlikely that he realised he had just given the go-ahead for an innovation that, within two decades, would transform global finance.
The new market in dollars outside of the US, and therefore outside of the jurisdiction of the Federal Reserve, was called the “Eurodollar market”. Usually, when you hold a foreign currency, you can either spend it in a foreign country, deposit it in a foreign bank, or invest it in foreign assets — a British bank wouldn’t generally allow you to deposit euros in your bank account. The Eurodollar markets changed all this by allowing banks to take and pay interest on foreign currency deposits. The term “Eurodollar” is something of a misnomer given that the first dollar deposits outside the US were taken in the UK, but it stuck, and today the prefix “Euro-” is used for any currency held outside its home country; for example, “Euroyen” are Japanese yen held outside Japan. The implications of this system weren’t truly visible until the Eurodollar markets took off in the 1970s. Socialist and newly-wealthy oil-producing states that wanted to hold dollar deposits without depositing them in US banks were able to put their dollars in London instead. London’s Eurodollar markets grew substantially as a result.
The Eurodollar markets undermined Bretton Woods by creating a global system of unregulated capital flows.13 Those investors holding dollars — pretty much everyone, given the use of the dollar as the global reserve currency — could now deposit them into the City of London. These dollars would then be free to float around the global economy at will, unhindered by the strict regulation then imposed on US banks by the Federal Reserve. Billions of dollars had ended up in the unregulated Eurodollar markets by the 1970s, undermining Keynes’ determination to curb the hot money of the rentier class. This gave financiers in the City an almost bottomless pit of dollar reserves to play with. After decades of retrenchment for the former financial centre of the largest empire in the world, the Eurodollar markets gave the City of London a new lease of life.
But the growth of the Eurodollar markets wasn’t the only threat to Bretton Woods that emerged in the 1970s. The increase in international trade that took place in the post-war period benefitted some countries more than others. US corporations, backed by the most powerful state in the world, grew substantially. Many were drafted by the US government to help rebuild Europe, becoming some of the first modern multinational corporations in the process. Between 1955 and 1965, US corporations increased their subsidiaries in Europe threefold.14 As the reconstruction effort took off, they were joined by German and Japanese multinationals, such that by the 1970s there were more, and larger, multinational corporations than ever before.
The growth of the multinational corporation meant that billions of pounds’ worth of capital was flowing around the world within corporations. Toyota, General Electric, and Volkswagen couldn’t afford to keep their subsidiaries across the globe insulated from one another — money had to be moved, even if that meant undermining the monetary architecture of the international economy. Technological change also facilitated direct transfers of capital between different parts of the world. All this meant that, despite the continued existence of capital controls, capital mobility had increased substantially by the 1970s. The combination of the emergence of the Eurodollar markets and the rise of the multinational corporation was beginning to place serious strain on Bretton Woods.
But it was the US government — not the banks — that dealt the final blow to the system that it had helped to create. With the dollar as the reserve currency, the US had gained the “exorbitant privilege” of being able to produce dollars to finance its spending.15 Because everyone needed dollars, the US could spend as much as it liked without the threat of hyper-inflation. The gold peg was supposed to rein in this behaviour: if investors started to think that there were more dollars in circulation than gold to back them up, they might turn up at Fort Knox demanding the weight of their dollars in gold. But this didn’t stop the Americans from printing billions of dollars to fund a wasteful and destructive war in Vietnam. With dollars also leaking out of the US via its growing current account deficit, the global economy was facing a dollar glut by the 1970s. Realising that there were far too many dollars in circulation to keep up the pretence, in 1971 Nixon announced that dollars would no longer be convertible to gold. Bretton Woods was finally over.
Many expected a sharp devaluation of the dollar at this point, but this didn’t happen. In fact, the dollar — strong as ever — continued to be used as the global reserve currency, even in the absence of any link with gold. Finally, the real foundations of Bretton Woods had been exposed: American imperial power. The gold peg established at Bretton Woods was not the source of the dollar’s value; the source of its value was a collective agreement that dollars would be used as the default global currency, much as English had by that point become the default global language. Freed from the need even to pretend that it was covering its increased spending with ever-greater gold reserves, the US Treasury was finally unleashed, with consequences that would not be felt for another three and a half decades.
The end of Bretton Woods represented a profound transformation in the international monetary system. Absent any link with gold or any other commodity, money became nothing more than a promise, created by fiat by the state issuing it. The value of a currency would now be determined by the forces of supply and demand. Rather than having to limit the amount of money they were creating in order to maintain a currency peg, states would be able to create as much money as they liked, accounting only for the threat of inflation. Private banks were also now free to create currency on their behalf in the form of credit, constrained only by domestic regulation. The collapse of Bretton Woods represented the final step away from a system of commodity money, which has been the norm for most of human history, and towards fiat and credit money, which now dominate all other forms of money. The implications of this change would be far more profound than anyone could have seen at the time.16
With the demise of Bretton Woods, capital was finally released from its cage. Many countries continued to maintain capital controls and strict financial regulation. But the glut of dollars that had emerged at the international level needed somewhere to go. Meanwhile, the capital that had been stored up within states like the UK under Bretton Woods was desperate to be released into the global economy. It pushed and strained against the continued existence of capital controls, finding ever more ingenious ways of getting around the system. Finance capital had returned with a vengeance, and it sought to remove all obstacles to its continued growth. But it would take a national crisis for the remnants of the post-war order finally to fall.
The Political Consequences of Social Democracy
Just as Bretton Woods was collapsing, the social democratic model was starting to show signs of strain.17 Bretton Woods created a global economy, with global corporations, global supply chains, and global competition. Eventually, the system became a victim of its own success. Some companies — notably the US multinationals — thrived, but many others found it harder and harder to compete with the rising industries of Germany and Japan. UK corporations in particular found themselves struggling to benefit from the new wave of globalisation, partly because sterling was pegged to the dollar at too high a level, making British exports more expensive to international consumers.18 These firms struggled to cope with increasing international competition, and by the end of the 1960s their profits had been seriously eroded. By the 1970s, the UK was being referred to as the sick man of Europe. From 1973, after an attempt at a European peg was abandoned, sterling fell continuously against the dollar until, in 1976, it dropped below $2 for the first time.19
In this context, one might have thought that the end of Bretton Woods would be good for British capitalists. Freed from the overvalued exchange rate, manufacturers would now finally be able to compete internationally once again. But decades of stagnation cannot be undone overnight. Britain’s manufacturers found that, even with a lower exchange rate, they could not compete with the new multinationals on either quality or cost. The first oil price spike in 1973 drove an increase in inflation, which exceeded 20% in two separate years during the 1970s, peaking at 27% in the year to August 1975. In the absence of strong unions, rising inflation driven by rising costs might not have been such a systemic problem: under other circumstances, bosses would have laid off workers or reduced pay to cut costs. But with the post-war consensus still firmly in place, unions pressed for pay rises that kept pace with inflation. Able to bargain with and make demands on the state, the unions refused to back down.
Nevertheless, as cost pressures mounted, unemployment rose. The state flitted between increasing spending to alleviate unemployment and cutting it to reduce inflation. The oil price spike had created a catch-22 that Keynesian policymakers were not equipped to deal with: stagflation, the combination of high unemployment and high inflation. This was not supposed to happen. Keynesian economics was based on the idea of the Phillips Curve. In the 1960s, economists drew on the work of William Phillips to posit an inverse relationship between inflation and unemployment. According to the models they built, when unemployment was high, inflation was low, and vice versa, implying that states should tolerate moderate levels of inflation in order to promote full employment.20 Governments were supposed to boost spending and reduce interest rates until full employment was reached, at which point they should start to reduce spending and raise interest rates in order to bring down inflation. Effecting this balancing act between inflation and unemployment was seen as the main aim of economic policy throughout the post-war period.
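A stylised way of writing down the relationship the Phillips curve posits runs as follows (this is an illustrative textbook form, not a formula drawn from the debates described here):

$$\pi_t = \pi_t^{e} - \beta\,(u_t - u_n), \qquad \beta > 0$$

where $\pi_t$ is inflation, $\pi_t^{e}$ is expected inflation, $u_t$ is unemployment and $u_n$ is some “natural” rate of unemployment. Read this way, higher unemployment should mean lower inflation and vice versa; stagflation, in which both rise at once, simply cannot happen unless expectations themselves shift.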
But by the 1970s, social democratic management of the economy was failing to bring down either unemployment or inflation — the latter of which was driven by political developments halfway around the world. Increases and decreases in interest rates had done nothing other than create a “stop–go” economy that fluctuated from one set of extremes to another. In this uncharted territory nobody knew what to do. By the time unemployment reached 4% in the early 1970s, it was clear that the state was trying to resolve the issue by tacitly withdrawing its promise to protect full employment in an effort to bring down inflation. But such a strategy posed an existential threat to the UK’s trade unions: the withdrawal of the state’s commitment to full employment would mean losing a powerful ally in their fight against the bosses. They could not afford to go down without a fight — not to mention that their members needed jobs, and pay rises that kept pace with inflation, simply to get by. Industrial action escalated, especially in the industries with the most powerful unions, above all the miners, whose power stemmed from their control over the nation’s energy supply.
Economic turmoil created a political crisis. On the one hand, by the mid-1970s, the Conservatives had roundly failed to turn years of strikes, energy shortages, and stagflation into an electoral advantage. Ted Heath went to the nation and asked them to decide “who governs this country? Us or the miners?”. On the other hand, the Labour government elected in 1974 proved equally unable to end the stalemate. Pursuing a more conciliatory approach, Harold Wilson raised the miners’ wages and attempted to implement a “social contract” between capital and labour, involving a voluntary incomes policy in which the government negotiated pay increases with the unions. But the second oil price spike — which came three years after the UK had sought an emergency loan from the International Monetary Fund — was the nail in the coffin of the social contract. In 1979, with inflation spiralling once again, the unions pushed for a return to free collective bargaining.
The winter of 1978–79 was the coldest since 1962–63, and the combination of industrial action, economic stagnation, and energy shortages led to its being termed the “Winter of Discontent”. A sense of crisis hung in the air. In January 1979, Prime Minister James Callaghan was at a summit in Guadeloupe when he was asked by a journalist about “the mounting chaos in the country”. He responded that he didn’t think others would agree with the journalist’s assessment that the country was in chaos. The following day, the Sun famously ran with the headline: “Crisis, What Crisis?”. By 1979, Britain was at a crossroads: the unions would not back down, and the social democratic state could not afford to confront them. What had happened to the golden age of capitalism?
Looking back, it is quite clear that the 1970s were a turning point for the post-war consensus. Businesses could not afford to continue to tolerate unions’ demands for pay increases in the context of rising international competition and high inflation. But unions could not afford not to demand jobs and pay increases in line with inflation. These problems were structural — they were inherent to the way the system functioned. Economic actors pursuing their own interests — whether businesses trying to increase profits, or workers trying to increase wages — eventually led to the emergence of acute strains that threatened to bring the British economy to the brink of collapse. The contradictions inherent in the social democratic growth model had finally come to the fore, and there were only two potential solutions to the crisis: a victory for the workers, or for capital. Much depended on where the loyalties of the state would lie.
Michał Kalecki — a Polish economist who theorised demand management at the same time as Keynes, and by some accounts before him — had foreseen such problems decades earlier.21 After reaching his conclusions about the capacity of the state to control demand in the economy, he argued that such policies couldn’t work for long because there were “political aspects” of full employment policy that rendered it inherently unstable. The state’s commitment to promote full employment undermined the thing that made capitalism work: the threat of the sack. A policy of full employment would remove the “reserve army” that capitalists relied on to ensure a steady stream of cheap labour. Without desperate workers to exploit, profits would dry up.
The powerful state that had emerged from the Second World War had committed a second sin: it was no longer afraid of the capitalists’ threats to withdraw investment. When the government invests heavily in the economy, and especially when certain industries are nationalised, it becomes much harder for businesses and investors to withdraw their capital when the state does something they don’t like — the option of “capital strike” is removed. Over the long term, the combination of these factors encourages owners of capital to oppose policies that promote full employment, even if those policies also boost consumption and therefore support capitalists’ profits.
Kalecki’s argument is not that social democracy is economically unsustainable, but that it is politically untenable: at some point, a political crisis moment will be reached. He explains:
[U]nder a regime of full employment, the “sack” would cease to play its role as a disciplinary measure. The social position of the boss would be undermined, and the self-assurance and class-consciousness of the working class would grow. Strikes for wage increases and improvements in conditions of work would create political tension. It is true that profits would be higher under a regime of full employment than they are on the average under laissez-faire... But “discipline in the factories” and “political stability” are more appreciated than profits by business leaders.
This is what appears to have happened in the 1960s and 1970s. With high wages, low unemployment, and moderate levels of inflation, the power of the UK’s unions grew. The distributional tension over profits between the bosses and the workers was muted during the early years due to the investment and aid being sent by the US and the increase in global trade facilitated by Bretton Woods. But when things started getting tough — when inflation increased and competition from abroad began to erode profits — these tensions exploded onto the national stage. It was at this point that the political contradictions of social democracy became apparent; when the battle between capital and labour finally became zero-sum.
With profits under pressure, only one thing determined who got the gains from growth: who had the power. Thanks to rising capital mobility and the breakdown of Bretton Woods, the balance of power between capital and labour had changed by the 1970s. Capitalists could threaten to up and leave if they didn’t like the business environment — and though capital controls were still in place, many were finding ingenious ways to move their money anyway. With state support for the labour movement weakening, workers, meanwhile, found themselves facing up to bosses without powerful political allies.
These pressures steadily wore away at the post-war consensus, until they erupted during the crisis of the 1970s. But the old model would not completely collapse until a new one emerged in its place. The political tumult created by the erosion of British social democracy — echoed by the retreat of social democratic movements over much of the global North — provided a long-awaited opportunity for those who had been marginalised during the post-war boom to shape what came next. The left seemed out of answers, but the right saw that their moment had finally arrived.
Never Let a Serious Crisis Go to Waste
After asking voters “who runs the country” and being told “not you”, the humiliated former prime minister Ted Heath was forced to call an election for the leadership of the Conservative Party in 1975. Despite losing the twin elections of 1974, Heath maintained the support of much of the Conservative establishment and newspapers. He was expected to win. But instead he was ousted by a young upstart running on a radical new economic programme that would eventually come to be known as neoliberalism: the theory that human wellbeing is best advanced by liberating the entrepreneurial spirit through free markets, private property rights, and free trade, all supported by a strong state.22 Her name was Margaret Thatcher.
Thatcher’s radical, neoliberal economic agenda had been forged decades earlier in the Swiss village of Mont Pélerin.23 In 1947, a group of economists from all over the world met to develop a new programme that would begin the fightback against the “Marxist and Keynesian planning sweeping the globe”. This was an austere, intellectual affair, in stark contrast to the bawdy conference that had taken place across the Atlantic three years previously. The Mont Pelerin Society, or the MPS — as the group would name themselves — knew that they were politically and intellectually isolated. The credibility of pre-war laissez-faire liberalism had crashed with Wall Street in 1929. The war that had followed these events had empowered the state to levels never previously seen in history, and these states had used their power to constrain the activities of the international financiers who were sponsoring the event.
The MPS objected to any state intervention that stood in the way of free markets. They were deeply offended by the creation of the National Health Service and the introduction of a social safety net. The rise of the unions and the role of the state in supporting collective bargaining were equally significant affronts to neoliberal ideology. But perhaps the most egregious aspect of the post-war consensus was the continued existence of capital controls. Allowing the state to determine where an individual could put their money was seen by some as a threat to human liberty, and by others simply as a barrier to profitability. The alliance between ideologues, desperate to create a world free of totalitarianism where private enterprise thrived, and opportunists, who wanted to undermine a system that was preventing them from making money, marked the Mont Pelerin Society from day one.
This ambiguity is important to understanding how neoliberalism eventually rose to prominence. It is both an internally-coherent intellectual framework and an ideology used to promote the power of the owners of capital in general, and finance capital in particular.24 The work of Hayek, von Mises, and others constituted a serious intellectual enterprise grounded in a particular set of values: namely, a commitment to human freedom, defined by control over one’s property.25 The fact that this gave justification for shrinking the size of the state, removing capital controls, and reducing taxes is what led several prominent international financiers to cover a large portion of the costs for the first meeting. One can see a parallel in the development of Keynesianism and the Labour Party’s adoption of this ideology. On the one hand, Keynes sought to “save capitalism” from its own contradictions, and on the other the Labour Party sought an ideology and set of policies that would allow it to maintain a compromise between the workers, capitalists and the state. In this sense, neoliberalism was no more a conspiratorial plot to take over the global economy than was Keynesianism. Intellectuals will always seek out the powerful to sponsor their ideas, and the powerful will always seek out ideas to justify their interests.
The elite that gathered at Mont Pélerin decided, then and there, that they would devote their time, money, and intellectual resources to bringing down the system of state capitalism which they saw as paving the way to totalitarianism. Their political manifesto — the “Statement of Aims” — included commitments to promote the free initiative and functioning of the market, to prevent encroachments on private property rights, and to establish states and international institutions that would uphold these ideals. The Statement of Aims also claimed that “[t]he group does not aspire to conduct propaganda”. Yet they hatched a plan to translate these principles into an economic policy agenda that would undermine the social democratic consensus all around the world. Their ideas would be thrust into the mainstream through a network of academics, politicians and think tanks who could spread the word about this newer and better way of looking at economics. They had their work cut out for them. The Keynesian political compromise had seen living standards rise, inequality fall, and a strong bargain emerge between organised labour and the nation-state. Arguing for the abolition of the welfare state made the neoliberals look like dangerous radicals not worth taking seriously. For decades, Hayek and his acolytes were left shouting from the sidelines, derided by academics and politicians alike.
But perhaps the social democrats were too complacent. What looks like unparalleled stability can quickly implode under the dynamic, unconstrained forces of global capitalism. The crisis of the 1970s proved that social democracy was no different to any other capitalist system: it contained its own inherent contradictions that would eventually prove its undoing. Those neoliberals following in the wake of the MPS were as shocked by the collapse of the post-war consensus as anyone else. They had spent decades working at the global level, trying to unpick the regulations that underpinned Bretton Woods, but the national social democratic settlement looked stable in comparison. The Seventies changed everything. With the US state having dealt the final blow to Bretton Woods, the neoliberals felt emboldened. They knew that this spelled the beginning of the end for capital controls. Rising capital mobility would stand them in good stead in their battle with the nation state — capital mobility, after all, gives those who own capital the power of veto. Don’t want to pay your taxes? Move your money abroad.
The neoliberals focused their efforts on the British state — the historic centre of global finance, in which the golden age of capitalism already seemed to be coming to a close with the acute crisis of social democracy. The think tanks they had created after Mont Pélerin — the Institute for Economic Affairs and the Adam Smith Institute — started churning out neoliberal propaganda at an impressive rate. They engaged with any politician that was willing to talk to them — and one proved much more open than any other. Neoliberal economists and lobbyists were quick to latch onto Thatcher’s campaign for the leadership of the Conservative Party.26 When she won, they were equally quick to work with her to shape an electoral agenda that would change the course of British history.
Thatcher’s campaign hinged on three promises: to take on the unions, shrink the state, and create a nation of homeowners. Her electoral promises were couched in populist terms: the Conservatives would “restore the health of our economic and social life”, “restore incentives so that hard work pays”, and “support family life by helping people to become home-owners”. This talk of restoration allowed Thatcher to frame what were radical economic policies in the language of traditional conservatism, drawing on people’s fond memories of the post-war consensus. Her attack on the Labour Party portrayed them as the party of the scroungers, living off the hard work of others, and of the thugs, holding the country to ransom. She sought to appeal to traditional Labour voters by claiming that her economic policies would restore full employment, using the famous “Labour isn’t working” posters to convey this message in popular terms. Labour, she claimed, was the party of fringe extremists seeking to bring down British democracy and replace it with Soviet-style totalitarian rule. The Conservatives were the true party of working people — they would lower your taxes and inflation, while securing you a job and a home. It was a powerful message, and the polling shows that Thatcher’s victory came on the back of the switched allegiances of many low-income voters.
This populist rhetoric was, of course, the thin end of the neoliberal wedge. Thatcher knew that there was little public support for the most important elements of the neoliberal agenda, so she hid her commitments to privatisation and deregulation in the small print. In fact, even those policies that Thatcher did advertise — from going to battle with the unions to reducing the size of the state — were no more popular amongst voters in 1979 than they had been in 1974.27 The lesson of Thatcher’s period in opposition is the importance of extended crises in eroding support for the status quo. Even if they weren’t particularly keen on privatisation, people were sick to death of the constant disruption associated with industrial disputes, with the high levels of inflation and unemployment, and with the state’s apparent inability to deal with any of these issues. Many people voted for Thatcher in 1979 because she appeared to be one of the few politicians who was able to make sense of what was going on and provide workable solutions. Even if you didn’t like the Thatcherite agenda, after the Winter of Discontent you might have thought it was worth a try. Milton Friedman — one of the founders of the Mont Pelerin Society — knew this better than anyone. Looking back on the neoliberal victories of the 1980s, he wrote:
Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.
The neoliberals’ aim wasn’t simply to get Thatcher elected. It was to use the moment of crisis provided by the breakdown of the post-war consensus to institutionalise a new model for the British economy — one that increased the power of capital, just as the Keynesian consensus had institutionalised the power of labour. In this sense, the neoliberals had a view of change just as dialectical as that of any Marxist. The contradictions of social democracy would be exposed by a crisis that would bring the economy grinding to a halt. During such a crisis, people and politicians would search for ideas that might provide them with a way out. By building a narrative, developing an electoral coalition, and gaining control of the state, the neoliberals could use the crisis moment to build a new set of institutions that would give them and their backers the kind of lasting power that social democracy had denied them.
This is what the Thatcherite agenda was all about. Neoliberal economists, think tankers and financiers convinced Thatcher — who didn’t need much convincing to begin with — that free markets required a strong state.28 The only way to deal with the communist threat — at home and abroad — was to aggressively take on the power of the labour movement and release the dynamic forces of market competition that would promote efficiency, profitability, and social justice — as well as restoring the owners of capital to their rightful, unchallenged position as the most powerful group in society. Thatcher and her acolytes knew that they had five years to build such a model, but that once it had been built, it would be just as irreversible as the NHS.
The first thing they did was to deal with the only group capable of challenging their hegemony: the unions. Thatcher spent years, and a great deal of political capital, waging war with the UK’s labour movement. The next job was to empower capital in its place. Rather than seeking out an alliance with an ailing national capitalist class focused on mining and manufacturing, Thatcher knew that this required supporting the interests of the burgeoning international capitalist class. The natural allies of such a grouping could be found just down the road from Westminster, in the City of London.
On its own, this victory of capital over labour would not have lasted very long. What the neoliberals needed was an electoral alliance that would render their new system structurally stable. The clue to how this was created can be found in the electoral agenda of the 1979 Conservative government: a small state and property ownership. In place of the alliance between the national capitalist class and the labour movement that governed the post-war consensus, Thatcher would build an alliance between the international capitalist class centred in the City of London and middle earners in the south of England. She secured the support of middle earners by turning them into mini-capitalists through the extension of property ownership and the privatisation of their pension funds. In doing so, she transformed British politics and unleashed a new growth model that lasted over thirty-five years, before collapsing in the biggest financial crisis since 1929.
CHAPTER TWO VULTURE CAPITALISM: THE FINANCIALISATION OF THE CORPORATION
We accept our responsibilities as a corporate citizen in community, national and world affairs; we serve our interests best when we serve the public interest... We acknowledge our obligation as a business institution to help improve the quality of the society we are part of. We want to be in the forefront of those companies which are working to make the world a better place. — Thomas Watson Jr, former chief executive of IBM, 1969.
When Thomas Watson Jr spoke these words, he was reflecting the mood of the times. This statement was typical of Watson, who believed that the best way to secure the long-term profitability of his business was to account for the interests of all of IBM’s stakeholders — workers, managers, shareholders, the state, and society at large.1 He repeatedly maintained that the guarantor of IBM’s success was its commitment to putting its workers first. Under Watson, IBM was responsible for making significant advances in machine learning, developing newer, faster computer processors, and even helping NASA with its space programme. Endicott, New York, a town of around thirteen thousand people in which IBM was headquartered, hosted eleven thousand IBM employees at the firm’s peak.2
But by 2012, IBM’s business model was shaped around quite a different set of goals. The key promise of the 2015 road map was to “[leverage] our strong cash generation to return value to shareholders by reducing shares outstanding”. Its measure of success: delivering operating earnings of at least $20 per share by 2015. Rather than innovating, IBM set out to achieve this mission through mergers and acquisitions.3 Between the end of Watson’s tenure and the present day, employment in Endicott fell from ten thousand to just seven hundred. In contrast, an investor who had bought a thousand IBM shares for $16,000 in 1980 would have seen those shares increase in value twenty-five times: their holding would now be worth almost half a million dollars.
Watson Jr would be unlikely to recognise the IBM that exists today. Gone are the concerns with stakeholders, or even workers. Instead, the corporate culture of one of the greatest technology companies in the world has been reshaped around a single imperative: maximising shareholder value. Describing the transformation of IBM in this way is not meant to imply that Thomas Watson Jr was a particularly saintly individual, or that today’s chief executives are particularly awful; nor that the old IBM model was perfect: clearly, the obsession with the “national interest” suggests a symbiotic relationship between multinational corporations and the US state that has not been a progressive development. But the change in business discourse — from an emphasis on stakeholder value, with workers at the core, to shareholder value, with workers coming last — reveals a deep change in the way corporations are run.
Today’s corporations have become thoroughly financialised, with some looking more like banks than productive enterprises. The financialisation of the non-financial corporation has involved a transfer of society’s resources from workers to shareholders. This transfer of power has resulted both from changes in the political and economic foundations of the global economy and from the rise of a new ideology, which holds that corporations’ sole aim should be to maximise profitability via increasing returns to shareholders. Both ideas and power relations have to change to create any lasting economic change — and the 1980s was a period of transition for both.
Firstly, rising capital mobility and the collapse of the post-war consensus increased the power of big institutional investors. Institutional investors, such as hedge funds and pension funds, control vast pools of money and are able to invest and divest huge sums at will.4 Much of this money was invested in corporations, allowing investors to use their power to control how these corporations were managed. Organisations were restructured to ensure that managers’ sole aim was to make as much money for their shareholders as possible. And the money that went to shareholders was money that wasn’t going to workers or being invested in future production.
Secondly, neoliberalism was sweeping the world by the 1980s, and with it the idea that the ruthless pursuit of profit was the only responsibility of any corporation.5 This translated into a simple imperative for corporate executives: maximise shareholder value.6 The valorisation of profit was cemented as managers’ pay packages were linked to share prices, ensuring that they would faithfully pursue the interests of their shareholders. As neoliberals gained control of many political parties, states actively began to encourage such behaviour. The ideology of shareholder value was institutionalised in a corporate code that reinforces the idea that the function of a business is to maximise its profits, consequences be damned.
The rise of the institutional investor and of shareholder value ideology has had a lasting impact on corporate power in both the US and the UK.7 Most corporations are now structured around the interests of shareholders, with workers’ interests coming last, if they are considered at all.8 As this process has developed, a battle has also emerged between different types of shareholder. Short-term shareholders, like hedge funds, have benefitted to a much greater extent than long-term shareholders, like pension funds.9 Some private executives, intent on maintaining their corporations’ size and power, have sought to protect themselves from hostile takeovers and activist investors. Those that have succeeded have emerged as some of the most powerful monopolies in human history. Meanwhile, any form of resistance to the emergence of this model has been brutally broken. Where unions may once have acted in the interests of workers against managers acting in the interests of shareholders, the former have been eviscerated by states intent on ensuring that businesses are able to make as much money as possible. The corporate culture that has emerged from these changes would be unrecognisable to the CEOs of the 1950s.
Some have argued that this focus on the maximisation of shareholder value represents a perversion of an otherwise benign capitalist system, and that the triumph of the “takers” over the “makers” is a development that we should be trying to somehow reverse.10 But whilst national politics were important in determining how this ideology developed, these changes didn’t just happen: they were driven by much deeper shifts in the way the global economy works. It is hard to imagine how shareholders wouldn’t have used the collapse of Bretton Woods and the rise of financial globalisation to increase their power, even if the political struggles that took place within different states determined how much their power grew relative to other actors. Capitalism wasn’t distorted by the changes of the 1980s; it adapted — and it did so in the interests of the most powerful.
The balance of social forces in the UK ensured that it developed the financialised corporate culture par excellence. By unleashing the power of the City of London, and crushing everything that stood in its way, Thatcher helped to build a highly exploitative, extractive and unequal economic model in the UK: one which endures to this day.
The Big Bang
Once upon a time in the City of London, there lived a noble and chivalrous group of knights in a great big castle called the Stock Exchange.11 At least, that was the story told by John Redwood, then head of the Number 10 Policy Unit. Redwood’s 1984 speech — Tilting at Castles — described the City as it existed back then as an elaborate system of knights, barons, kings, and peasants. The knights — the brokers who worked on the London Stock Exchange — were honest, hard-working, and “competed with each other in high spirits”. The barons — institutional investors like pension funds — weren’t nearly as jolly as the Stock Exchange knights and were forced to send all their money to the Stock Exchange castle, where the real money was made. At the bottom of the pile were the peasants, who subserviently sent their savings to the institutional barons for them to invest. The system worked well for the knights, but not so well for everyone else. Redwood’s speech told the story of how the wise ruler went to the castle to ask the knights to lower their drawbridge and let just a few more people in.
This incredible piece of Orwellian doublespeak describes the fierce battle that took place between the government and traders on the London Stock Exchange over the course of the early 1980s, ending with the deregulation of the City. Before 1986, regulation that dated back decades restricted the kinds of activities that different economic actors and institutions could undertake. Fixed minimum commissions were imposed on certain kinds of trades, making these more expensive; trading took place on the slow, crowded, non-automated Exchange floor; and different types of investors were separated from one another, creating a rigid City hierarchy. This arcane regulation and strict separation between actors gave rise to a system that worked something like an old boys’ club. Nick Shaxson reports that, in this pre-Big Bang world, bankers could show their disapproval of one another by crossing the road and could judge a man’s creditworthiness by the strength of his handshake.12
In the wake of a legal battle between the government and traders, the Big Bang hit the doors of the London Stock Exchange like a battering ram. In a single day, many of the restrictions that maintained the City hierarchy were removed. Fixed commissions were abolished, the separation between those who traded stocks and those who advised investors was eliminated, rapid trading was moved away from the floor of the Exchange and foreign firms were invited into the City. These changes allowed more institutions to enter the stock market and facilitated a wave of mergers and acquisitions, many by foreign banks. By 1987, seventy-five of the three hundred member firms of the London Stock Exchange had been bought up by foreign rivals.13 Technological developments that allowed traders to buy and sell securities in the blink of an eye quickly followed the move of trading away from the Exchange room floor. In just one year, trade times were reduced from an average of ten minutes to ten seconds — a large reduction, but far off the trading times of today, which are measured in milliseconds.14 Trading volumes skyrocketed, reaching $7.4bn just one week after the Big Bang, compared to $4.5bn a week before.15 Many of the partners in the firms that had previously been at the centre of the City old boy’s network took their money and ran: some say that the Big Bang created 1,500 millionaires overnight.16
The Big Bang was helped along by the privatisation drives of the 1980s. In the same year, the UK government launched its famous “Tell Sid” advertising campaign, encouraging people to buy shares in the soon-to-be privatised British Gas. The adverts were centred on people encouraging one another — in the pub, at the shops, or on the street — to jump on the bandwagon before it was too late. The exchange always finished with the now-famous line: “If you see Sid, tell him!” As one commentator puts it, “You couldn’t pass a billboard, switch on the radio or glance at your junk mail and miss it”.17 The government had started with British Aerospace in 1981, Associated British Ports in 1983 and Sealink in 1984, but the sale of British Gas was by far the most ambitious privatisation attempted thus far, and was based on a questionable commercial case. The £32m advertising campaign worked and millions of ordinary Brits signed up to get their part of the nation’s family silver.18 At the time, it was the largest privatisation ever undertaken on the London Stock Exchange.19
Overall, Thatcher privatised more than forty state-owned enterprises. This represented a major challenge to the post-war status quo: in 1979, nationalised industries accounted for 10% of economic output and almost 16% of capital investment.20 By the time she left office, £60bn worth of UK assets had been sold off — often on the cheap.21 Output accounted for by nationalised industries fell to 3% and investment to 5%.22 Employment in nationalised industries fell from almost 10% of total employment to just 2%.23 According to one government minister, “[w]hen we came into office, there were about three million people who owned shares in Britain. By the end of the Thatcher years, there were twelve to fifteen million shareholders”.24 Millions of people were effectively given free money when the state sold off national assets below their value — shockingly, many of them ended up voting Conservative.
Over the longer term, Thatcher’s dreams of boosting individual share ownership proved over-optimistic. She and Redwood claimed that financial liberalisation would allow the peasants — ordinary savers — to get a chunk of the pie by earning money on the stock market. But instead, people ended up handing their savings over to the barons — the institutional investors previously prevented from directly engaging in trades themselves — who were able to extract large fees for managing other people’s money.25 One can think of institutional investors as financial institutions sitting on huge piles of cash that they invest to make the largest possible return. These cash piles can come from ordinary people’s savings, as with pension funds, the savings of the wealthy, as with hedge funds, or even from states, as with sovereign wealth funds. Institutional investors can buy all sorts of financial securities — from bonds, to equities, to derivatives — as well as real assets like property.
In 1963, individuals owned about 55% of publicly listed shares, whilst pension and insurance funds owned 6% and 10% respectively.26 By 1997, individual shareholdings had fallen to 17% of the value of total equity, whilst pension and insurance funds had risen to 22% and 23% respectively. Many international institutional investors also bought up UK equities, meaning foreign ownership of UK corporations also increased. Meanwhile, individual investments were skewed towards the wealthy — some of whom set up hedge funds to manage their own, their close friends’, and their family’s money.
This was all part of the Conservative plan for “pension fund capitalism”.27 In 1988, Thatcher launched private, personal pensions, allowing individuals to save without enrolling in a corporate scheme, which had themselves already amassed vast pools of capital thanks to previous reforms. Initially, this ended in disaster as pensions advisors took advantage of savers’ inexperience to sell them risky financial products. But eventually, private pensions pots and other savings instruments became a central part of the British financial landscape. The creation of private pensions pots would have two linked and propitious effects for the Conservative government. On the one hand, it helped to create a class of “mini-capitalists” with an incentive to support measures that would boost returns in financial markets. Thatcher’s acute grasp of political economy allowed her to build an electoral coalition with a strong material interest in supporting her policies. On the other hand, the move towards pension fund capitalism increased the pool of available savings for financial institutions to plough into whatever investments would deliver the highest returns.
The combination of private pensions pots and large, corporate funds gave private investors a great deal of capital to play with. It is a fairly well-established rule of investing that the more capital you have at your disposal, the higher your returns, not least because a single investor putting enough money into a single security will itself push up the price of that security. When asset managers got their hands on workers’ pension funds, they invested this capital in global financial markets, making huge amounts of money in the process. As one commentator puts it, “‘social security capital’ is now as important as other sources of capital… it is a key element in fuelling the expansion of financial markets”.28 By 1995, one estimate put the global assets of pension funds at almost $12trn, at least £600bn of which came from UK savers, making the UK’s the largest pensions pool in the EU.29
It is not a coincidence that corporations began to be governed according to the logic of maximising shareholder value just as institutional investors from around the world emerged as some of the most powerful actors in the City. Historically, these pools of capital have been important: when they are large, those who control them are able to wield immense amounts of power by determining who gets what.30 The mass-scale channelling of people’s savings into stock markets via pension funds and insurance funds after the end of Bretton Woods, together with the financial deregulation of the 1980s, allowed institutional investors and wealthy individuals from around the world to channel money into the UK’s stock markets, unencumbered by capital controls or restrictions on foreign trading. Hyman Minsky argued that we now live in an age of “money manager capitalism”, in which these pools of capital are some of the most important entities in determining economic activity.31
In this sense, money manager capitalism doesn’t just affect financial markets. By influencing the allocation of capital across the economy, it has affected the behaviour of almost every other economic actor — most clearly, it has transformed the nature of the non-financial corporation.32 Institutional investors’ primary goal is to maximise their returns, as this is how they earn their fees and commissions. These pressures have been passed on to corporations via the stock market: with equities representing a significant chunk of the assets held by money managers, the pressure on corporations to meet shareholder demands for immediate returns increased.33 In some cases, rather than being responsible to a board of directors and a few disorganised shareholders, corporations have been held to ransom by “activist investors” demanding that their capital be used in the most efficient way possible. This change in corporate governance has also been reinforced and embedded by the emergence of a new ideology: shareholder value.
Together, the increasing power of investors and the emergence of an ideology to support this power have led to the financialisation of the non-financial corporation: businesses are increasingly being used as piggy banks for rich shareholders. This is what led Jack Welch, the former CEO of General Electric, to call shareholder value “the dumbest idea in the world”.34 But like many dumb ideas that enrich the powerful, shareholder value took off in the 1980s — and nowhere more so than in the City of London.
Corporate Raiders, Hostile Takeovers, and Activist Investors
Lord Hanson — aka “Lord Moneybags” — is famous for many things.35 He was engaged to Audrey Hepburn, had a fling with Joan Collins, and also happens to be one of the UK’s most notorious corporate raiders. Although he made his money in the new economy, Hanson didn’t exactly come from humble beginnings. Born into a family that made its money during the industrial revolution, he built multiple successful business ventures on the back of his family’s wealth before teaming up with Lord Gordon White to start Hanson Trust in 1964. At its height, Hanson Trust was worth £11bn. Over the course of the 1980s, its share price outperformed the rest of the FTSE100 by a staggering 370%. He was named by Margaret Thatcher as one of the UK’s premier businessmen and, completely unrelatedly, he donated millions of pounds to the Conservatives over the course of his business career. The root of James Hanson’s success was his commitment to the religion of shareholder value. Thatcher admired Hanson not simply because of his political donations, but because she saw Hanson Trust as the future of the new economy, and the close relationship between the two can tell us a lot about what Thatcher was trying to do when she deregulated the City.
Hanson Trust was not built on the back of a great new idea by a brilliant entrepreneur, or some new innovation that promised to revolutionise its industry forever. Its sole aim was to find and buy up “underperforming assets” and make them profitable. Throughout the 1970s, the conglomerate loaded up on debt to buy shares in several large companies seen as “underperforming”, before selling off assets and cutting the payroll to disgorge these companies of cash, which was then used to pay back bondholders and generate gains for shareholders. Hanson Trust quickly gained a reputation as an “asset stripper” before the term was even in wide use.
But Hanson truly made his reputation in the same year as the Big Bang itself. In 1986, Hanson Trust purchased Imperial Tobacco for £2.5bn, accounting for 15% of the value of total mergers and acquisitions activity in that year alone. The Trust quickly sold off £2.3bn worth of Imperial’s assets and distributed the money to bondholders and shareholders. Hanson had aimed to extract assets from the company’s pension fund, but the trustees had managed to close the fund the day before the takeover went through. So instead, he sold off most of Imperial’s subsidiaries — from food producers, to brewers, to a variety of tobacco producers. He was left with a business that made a profit margin of 50%. And this takeover was only one of the more extreme examples of Hanson’s attitude towards acquisitions. Hanson Trust acquired dozens of undervalued companies throughout the Eighties and Nineties, claiming always to put shareholders first, customers second, and employees last. When James Hanson came for your employer, you knew what was coming next.
Initially, raiders like Hanson were derided as extractive parasites on productive economic activity. Hanson, widely reviled by the British media, was compared to a “dealer who bought a load of junk, tarted it up and sold it on as antiques”. In a more ambiguous assessment, the Economist termed him the king of the corporate raiders. When he attempted to take over a famous British brand — ICI chemicals — in 1991, he was faced with “the sort of moral indignation that the British usually reserve for a Tory cabinet minister caught in bed with his secretary”.36 ICI was at the time one of the leading chemical firms in the world, based on strong previous investments, particularly in research and development. There was widespread concern that a Hanson takeover would lead to ICI being stripped to the bone, focused on increasing current cash flow and distributing it to shareholders rather than investing in the long-term future of the business. Faced with significant political opposition, the ICI bid failed. But Hanson’s approach eventually became common business practice.
By the 1990s it was no longer controversial to argue that, when corporations maximised their profits, the economy worked better for everyone.
These arguments ran contrary to the received wisdom in management theory, which held that businesses had responsibilities to a wide variety of stakeholders — workers, consumers, and governments for example. But with the rise of neoliberalism, the argument that — in the words of Milton Friedman — “the social responsibility of business is to increase its profits” gained traction.37 This view assumes that resources are scarce, so when companies use their resources in unproductive ways there are fewer to go around for everyone else. In this sense, doing anything other than maximising profits is wasteful and inefficient.
From here, it is a short leap to arguing that the singular purpose of any corporation should be to maximise shareholder value — with the share price used as the proxy for profitability. Because neoclassical economic theory assumes that equity markets are efficient, it also assumes that current stock prices are an accurate reflection of the long-term profitability of a company. Investors will base their investment decisions on the amount of profit they expect the enterprise to make in the future, and how much of that profit they expect the firm to distribute to shareholders. The argument for shareholder value therefore proceeded from “businesses’ sole aim is to maximise profits” to “boosting the current share price is the best way to do so”.
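The logic can be made explicit with a standard valuation identity (again, an illustrative textbook form rather than a formula drawn from the argument itself). If equity markets are efficient, the price of a share today should equal the discounted sum of the payouts investors expect it to generate:

$$P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[D_t]}{(1+r)^t}$$

where $D_t$ is the expected dividend (or buyback) in year $t$ and $r$ is the return investors require. If prices really did track this sum, raising the share price and raising long-term profitability would amount to the same thing; the problem, as the next paragraphs show, is that they often do not.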
But this nice, neat story is based on some fundamental misconceptions about the way financial markets work — not to mention its questionable assumptions about human behaviour. First and foremost, a firm’s current share price doesn’t always reflect its real long-term value. Keynes was one of the first to point out that the prices of different shares on stock markets are mainly determined by a “beauty contest”: in other words, without perfect information about the inner workings of a firm and without certain knowledge of its future profitability, investors will put their money in the nicest-looking shares.38 One can think of beautiful shares as expensive football players: when a football team buys a new player they cannot be certain that the player will be worth the expense — they will judge the price based on past performance and trends in the rest of the market.
In the same way, an investor can wade into a booming market, see a share that has been performing well — say, Carillion PLC — and purchase it expecting its value to carry on increasing, even if its business model isn’t particularly strong. This creates a self-reinforcing cycle in which the most “beautiful” shares receive more investment, pushing up their price and vindicating investors’ decisions to buy them in the first place. This dynamic can create bubbles: when everyone piles into certain stocks based on the fact that everyone else seems to be making lots of money from them, the price of those stocks comes to reflect people’s expectations about profits, rather than profits themselves.
But taking the neoliberal argument at face value is to miss the point. The reorganisation of the economy that took place in the 1980s had little to do with making the economy work better, and everything to do with changing who the economy worked for. Shareholder value became so dominant precisely because it benefitted those with the power. As a result, it quickly colonised management theory and practice, transforming corporate governance by changing managers’ incentives to ensure that they acted as reliable functionaries for the owners of capital.
Contrary to the arguments of mainstream economists, this political reorganisation of the firm has made firms less efficient when it comes to their use of society’s scarce resources. In 1976, professors Meckling and Jensen published an article arguing that there existed a “principal-agent problem” between the individuals who owned a corporation and those who managed it.39 Those who ran companies — managers, the agents in this context — had every incentive to maximise their own pay packages and engage in “empire building” to increase their power, even if this wasn’t in the long-term interest of the people who owned companies — shareholders, the principals. This created a conflict of interest for managers, who were technically employed by shareholders to run successful, profitable companies, but who were, according to Meckling and Jensen, likely to use their positions to maximise their own wealth and power. According to this view of the world, corrupt, bureaucratic managers were wasting money, reducing businesses’ profits and therefore shrinking the size of the economic pie for everyone in the economy.
The way to solve this, Jensen later argued alongside Kevin Murphy, was to align the interests of managers with those of shareholders. Their immensely popular article — “CEO Incentives: It’s Not How Much You Pay, But How” — argued that in paying CEOs a salary that didn’t reflect the impact they had on the company’s share price, directors were encouraging them to behave like bureaucrats. If instead CEOs were remunerated based on share prices, they would have a greater incentive to act in the best interests of shareholders, and therefore in the best interests of society as a whole. Managers had to be made to act like business owners — ruthlessly pursuing profit at every turn.
Adherence to the flawed ideology of shareholder value has created a set of deep-seated problems for British capitalism.40 As you would expect, the ideology of “shareholder value” encouraged companies to distribute their profits to shareholders rather than reinvesting them or sharing them with their workforce, curtailing long-term profitability to deliver a short-term boost to the share price. Failing to retain and properly remunerate workers also erodes trust between workers and their employers, which can negatively impact productivity.
William Lazonick argued that the rise of shareholder value ideology has led to a transformation in the philosophy of corporate governance — the way in which corporations are run — from “retain and invest” to “downsize and distribute”. In other words, the rise of shareholder value has become a mechanism for redistributing the profits of business away from workers and towards corporate executives and current shareholders. This has, in the words of one commentator, led to “rampant short-termism, excessive share buybacks to the neglect of investment, skyrocketing C-suite compensation and misallocation of resources in the economy”.41 The Economist, meanwhile, has argued that shareholder value has become “a license for bad conduct, including skimping on investment, exorbitant pay, high leverage, silly takeovers, accounting shenanigans and a craze for share buy-backs”.42
In the UK, these trends are clear. The proportion of corporate profits (measured as discretionary cash flow) returned to shareholders increased from just over 25% in 1987 to almost 50% in 2014.43 As well as distributing profits to shareholders, corporations can also increase share prices by buying up their own shares. Data on share buybacks from the Bank of England between 2003 and 2015 showed that in almost every year, companies bought back more of their own shares than they issued.44 Another way to give a quick boost to a company’s share price is to expand the company by buying up another — this was the strategy preferred by corporate raiders like Lord Hanson. Between 1998 and 2005, UK mergers and acquisitions (M&A) activity was worth around 22% of GDP — double that of the US, and more than double that of Germany and France.45 With shareholders placed firmly at the centre of corporate decision making, and managers remunerated based on share prices, long-term investment has fallen. UK companies’ investment in fixed assets fell from around 70% of their disposable incomes in 1987 to 40% in 2008.46
Those firms that pursued the “downsize and distribute” model often ended up taking out debt to do so.47 In what came to be known as the “debt-leveraged buyout”, activist investors would borrow using “junk bonds” — expensive, high-yield debt — to buy out existing shareholders, before selling off chunks of the corporation and using the proceeds to repay bondholders. This makes the hierarchy of finance capitalism obvious: at the top are creditors, followed by shareholders, with workers at the very bottom. Firms came to operate according to the logic of finance-led growth — distributing earnings to shareholders and taking out debt to finance investment and new takeovers. All in all, businesses’ stock of outstanding debt grew from 25% of GDP in 1979 to 101% by 2008.48 As a ratio of profits, UK corporations now owe debts worth around 6.5 times what they earn in profits each year, making them some of the most indebted corporations in the global North.49
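As a rough back-of-the-envelope check (an illustration derived from the figures cited above, not an additional statistic), the two numbers are consistent with one another:

\[
\frac{\text{debt}}{\text{annual profits}} = \frac{\text{debt}/\text{GDP}}{\text{profits}/\text{GDP}} \quad\Longrightarrow\quad \frac{\text{profits}}{\text{GDP}} \approx \frac{101\%}{6.5} \approx 15\%
\]

That is, a corporate debt stock of roughly 101% of GDP, combined with a debt-to-profit ratio of about 6.5, implies annual corporate profits in the region of 15% of GDP; the key point is that the ratio compares a stock of accumulated debt with a yearly flow of profits.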
As well as investing less and taking out more debt, companies have also been reducing workers’ pay and making their employment conditions more precarious. The ratio of CEO pay to the pay of the average worker increased from 20:1 in the 1980s to 149:1 by 2014.50 This has driven up income inequality: the UK’s Gini coefficient — a measure of income inequality in which countries closer to zero are more equal and those closer to one more unequal — rose from 0.26 at the start of the 1980s to 0.34 by the start of the 1990s. In fact, there has been a secular decoupling of productivity (the value of what workers produce) and wages. The total income of an economy can be divided between that which accrues to workers in the form of wages and that which accrues to owners in the form of profits; modelling from the TUC suggests that the wage share of national income has fallen from a peak of 64% in the mid-1970s to around 54% in 2007.51
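To pin down the two measures used here (a stylised sketch of the standard definitions, not the TUC’s or the statisticians’ exact methodology, and with symbols introduced purely for illustration): if national income Y is divided between a total wage bill W and total profits P, the wage share is simply W divided by Y; and one common formulation of the Gini coefficient compares every pair of incomes in the population with mean income.

\[
Y = W + P, \qquad \text{wage share} = \frac{W}{Y}, \qquad G = \frac{\sum_{i}\sum_{j} |y_i - y_j|}{2 n^{2} \bar{y}}
\]

Here \(y_i\) is the income of person \(i\), \(n\) is the number of people and \(\bar{y}\) is mean income; \(G\) runs from 0 (perfect equality) to 1 (one person receives everything).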
Within the profit share of national income, rising interest payments have led to an increase in what has been termed the “rentier share”. Economic rents are income derived from the ownership of a scarce resource over and above what would be necessary to reproduce it. When a landlord increases a tenant’s rent without improving the property, he is simply extracting more income from the tenant without producing anything new — in this sense, economic rents are unproductive transfers from one group to another based on an asymmetry of power. The power to extract economic rents generally depends upon the monopoly ownership of a particular factor of production. Property rents paid to landlords, over and above what is necessary to maintain the property, are economic rents derived from the landlord’s monopoly ownership of a property in a particular location. Banks are often able to charge interest over and above the level necessary to compensate them for the risk they are taking in lending because they have monopolistic — or, more often, oligopolistic — control over money lending. Monopolies can extract monopoly rents by overcharging consumers, and firms can generate commodity rents from their control over a particular resource, like oil or diamonds. Perhaps the most common sources of economic rents in financialised economies are property rents and financial rents. Those on the receiving end of economic rents are known as “rentiers”. Keynes famously called for the “euthanasia of the rentier”, defining a rentier as a “functionless investor” who exploits the “scarcity-value” of capital to generate income.
In 2005, Gerald Epstein made the first attempt to measure the rentier share of national income in OECD economies. Epstein opted for a fairly narrow definition of financial rents, defined as “the income received by the owners of financial firms, plus the returns to holders of financial assets generally”. He was building on Kalecki’s definition of financial rents, which captures the returns financiers are able to generate from their control over lending and investment. Epstein showed that the rentier share in the UK rose between 1970 and 1990, from 5% of GDP to nearly 15%. Similar trends are evident in the US, where the rentier share increased from around 20% to over 40% of GDP over the same period, and in most other advanced economies. So, whilst the profit share as a whole was increasing, within the profit share the amount accruing to rentiers was also rising. This was largely due to rising interest payments, after the dramatic increase in corporate and household debt during the 1980s. The reason Keynes called for the “euthanasia of the rentier” is that rental payments flowing up to the owners of capital act as a drain on demand. Interest paid by businesses represents capital that can’t be used for investment. Economic rents also accrue to the already-wealthy, who are less likely to consume their extra income. This was one of the major drivers of the rising inequality and financial instability evident in the inter-war years. Since the 1980s, the rising rentier share has once again begun to act as a drain on productive economic activity.
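Epstein’s measure can be written out as a simple ratio (a loose formalisation of the definition quoted above, not his exact accounting; the symbols are introduced here only for illustration):

\[
\text{rentier share} = \frac{\Pi_{\text{fin}} + R_{\text{fin}}}{Y}
\]

where \(\Pi_{\text{fin}}\) is the profit of financial firms, \(R_{\text{fin}}\) is the interest and dividend income received by holders of financial assets elsewhere in the economy, and \(Y\) is national income (GDP).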
Even as these problems became obvious over the course of the 1980s, there was no attempt in the UK to constrain shareholder value ideology, perhaps because it was benefitting some of the wealthiest and most powerful people in society. Realising they had opened Pandora’s box, regulators in the US tried to close it again by outlawing corporate raiding strategies, and American firms developed innovative new ways to protect themselves from hostile takeovers. Firms could adopt a “poison pill”, allowing existing shareholders to buy additional shares at a discount, diluting a hostile bidder’s stake and preventing it from gaining an overall majority; they could establish shares with different voting rights; or they could seek out a non-hostile bidder — a White Knight — to buy up the shares being targeted by the hostile party. But in the UK, the stock market crash did nothing to dampen the corporate raiding culture. In fact, the shareholder value ideology was positively encouraged by politicians like Thatcher. The infamous City Code created one of the most permissive takeover regimes in the world.52 It set out that all shareholders must be treated equally, preventing the use of many of the defences outlined above, and prevented management from standing in the way of a takeover agreed by shareholders. In other words, the City Code institutionalised the power of corporate raiders, activist investors, and other short-term shareholders who would come to act as the enforcers of shareholder value ideology.
From Thatcher’s perspective, and from that of her friends in the Mont Pelerin Society, corporate raiders like Hanson were heroes, charging into corporate fortresses and taking on the vested interests of the managers who were hoarding the company’s capital for themselves rather than investing it in the interests of shareholders. Thatcher’s Big Bang and the development of the City Code sought to make it as easy as possible for corporate raiders to “shake up” big, incumbent firms like Imperial Tobacco. Once the shake-up was complete, corporate raiders like Hanson were no longer needed. The Economist wrote in Hanson’s obituary that his company’s focus on the maximisation of shareholder value had become “standard business practice”.
From Downsize and Distribute to Merge and Monopolise
The pursuit of shareholder value made many companies profitable in the short term, but over the long term low rates of investment, high rates of debt, and the declining wage share should have reduced profits. If companies aren’t investing in new assets, like factories or technology, then they won’t be able to take advantage of rising demand for their goods down the line. Taking out debt today and failing to use it for productive investment comes at the expense of profits tomorrow. And paying workers relatively less across the board reduces overall demand for goods and services. As inequality rises, the demand deficit increases, because those on lower incomes spend a higher proportion of their incomes on goods and services, whilst the wealthier tend to save more. Low demand is, in turn, likely to make businesses invest less, decreasing future profits, and therefore wages and employment. But instead of this low-investment, low-wage, low-demand doom-loop, we’ve seen corporate profits rising on average. What’s been going on?
As Jack Welch has pointed out, shareholder value — as interpreted by the corporate raiders of the 1980s — really is the dumbest idea in the world. Many of those companies that cut investment, loaded up on debt, and dished out money to shareholders didn’t last very long. Instead, they have been bought up by bigger corporations in the wave of M&A activity that has taken place since the 1980s. The most successful advocates of shareholder value haven’t been the downsizers and distributors, but a small number of huge firms that have merged and monopolised. Corporations have learnt to adapt to the pressures of finance-led growth by building monopolies, immune from the pressures of competition, activist investors, and even tax and regulation. In fact, many of them have grown so large and make so much money that they are effectively able to act like banks — rather than loading up on debt, they’ve been lending to other companies.53 This is perhaps the most important hangover of the shareholder value ideology and the corporate raiding culture it entailed: a massive increase in the number of monopolies and oligopolies.54
The macroeconomic link between investment and profits appears to have been severed because a few large corporations are dominating the global economy and maximising their profitability by acting as monopolies and failing to pay tax. Clearly, these corporations have not adopted the “downsize and distribute” model of growth — rather, these firms can be seen as having adopted a model of “merge and monopolise”. Monopolies are highly profitable because they are able to benefit from “monopoly rents” — i.e. they are able to charge consumers and other businesses more than they would in a competitive setting. This increases monopolies’ profits at the expense of consumers and other businesses. What’s more, these corporate behemoths tend not to recycle their earnings back into productive investment. Instead, they adopt two related strategies — neither of which is helpful for economic growth. Firstly, they buy other corporations to consolidate their monopoly positions and benefit from the past investment of these firms. Secondly, they invest the profits they generate from their monopolisation of key markets into financial markets — in other words, they act like financiers themselves.
The first trend can be measured by looking at corporate M&A activity over the past several decades. Global M&A activity broke a record in the first half of 2018, when deal volumes increased 65% on the previous year and came in at the highest level since records began.55 This came off the back of forty years of increasing M&A activity — according to one industry body, the value of M&A activity doubled between 1985 and 1989 and increased fivefold between 1989 and 1999. As more “merge and monopolise” activity takes place, the monopolies themselves become ever more powerful. Gaining a greater market share means increasing profitability, which facilitates even greater M&A activity, creating a self-reinforcing cycle that has led to the emergence of the biggest global monopolies in history.
The second trend is that these firms are investing in financial markets. Monopolisation impacts investment in fixed capital because firms find it more profitable to restrict production and invest the proceeds in financial markets.56 They distribute large sums to shareholders, but even that doesn’t exhaust their cash piles. Instead, they reinvest their profits in other assets — making these firms similar to the institutional investors that have been so important to the development of financialisation.57 This trend can be measured by looking at the extent to which corporations’ holdings of financial assets have increased since the 1980s. Financial assets include loans, equities, and bonds — but they also include bank deposits and internal cash piles. Today, the financial assets of British non-financial corporations are 1.2 times the size of total GDP.58 In the US, where most of the global monopolies are based, the trend is even starker.
This pattern is reflected across OECD countries, but the UK is unusual insofar as its corporations are more likely to hold debt securities and bank deposits than other European corporations.59 In this sense, many UK-based corporations are acting a lot like hedge funds or investment banks — they are lending their capital to other corporations or banks in the hope of increasing their profits. UK corporations have actually become net savers since 2002 (think of saving as anything that isn’t spending — so financial investments and deposits both count as “saving”). Huge piles of corporate capital have now joined the cash piles of the big institutional investors to play a significant role in shaping the allocation of resources across society.
The result of both models — “downsize and distribute” and “merge and monopolise” — is the same: more money stuck at the top. By prioritising paying shareholders over remunerating workers or investing in long-term production, the structure and governance of today’s firms helps to increase wealth and income inequality. By hoarding cash and investing it in financial markets, failing to pay tax, overcharging consumers for services, and mistreating their workers, global monopolies are launching a concerted attack on society itself. As these companies grow, they become more powerful than the nation states which are supposed to regulate them.
But these changes did not take place simply because Thatcher deregulated the City, or because firms suddenly made a collective decision that it would be in their interests to maximise shareholder value. The changing corporate culture in the UK reflects broader changes in the global economy. In this sense, whilst it has created a number of severe problems with the functioning of the economy, it doesn’t really make sense to think of the rise of shareholder value as a corruption of a purer form of capitalist accumulation. The reason corporations are now run in the interests of shareholders rather than workers is that shareholder power increased dramatically in the 1980s relative to that of workers, and this power has been consolidated as it has been embedded in new sets of institutions and new ideologies.
But the rise of shareholder power on its own only explains half of the story; shareholders gained power at the expense of workers, who were previously far more central to corporate governance than they are now. Attempting to redistribute power from shareholders to workers would have met with fierce resistance in any company with a strong union. The necessary correlate of the promotion of the shareholder was the attack on the worker, and the best way to attack workers in the 1970s and 1980s was to attack their unions.
This book argues that, since the 1980s, the UK has entered a new phase of its economic history. Once the workshop of the world, the UK now connects to the global economy chiefly through the City of London, a global centre for financial speculation. This transformation has not been slow and steady — it has occurred in fits and starts, as the economy has lurched from one crisis to the next, adapting under the influence of the powerful at each stage. Our current economic model — finance-led growth — can be traced back to the 1980s, when a new system emerged out of the ashes of the post-war social-democratic order. Since then, British politics and economics, as in the US and a string of other advanced economies, have become “financialised”, with results that were not apparent until the crisis of 2008.
The best-known definition of financialisation is that it involves the “increasing role of financial motives, financial markets, financial actors and financial institutions in the operation of the domestic and international economies”.4 In other words, financialisation means more and bigger financial institutions — from banks, to hedge funds, to pension funds — wielding a much greater influence over other economic actors — from consumers, to businesses, to the state.5 The growth of finance has led to the emergence of a new economic model — financialisation represents a deep, structural change in how the economy works.6
When economists talk about financialisation, they usually point to the United States, which, in absolute terms, is home to the largest finance sector on the planet.7 Whilst this book focuses on the history of finance-led growth from a British perspective, most of its lessons can also be applied to the world’s current superpower. In the run up to the crisis, each and every one of these issues — the financialisation of corporations, households and the state — afflicted the American economy too, though in subtly different ways. In fact, we can speak of a peculiarly Anglo-American growth model, marked by a growing finance sector, a falling wage share of national income, growing household and corporate debt, and a yawning current account deficit.8 Other economies that pursued this model before 2007 include Iceland and Spain, and today Australia and Canada are perhaps its most enthusiastic adopters.
The most obvious indicator of financialisation is the dramatic increase in the size of the finance sector itself. Between 1970 and 2007, the UK’s finance sector grew 1.5% faster than the economy as a whole each year.9 The profits of financial corporations show an even starker trend: between 1948 and 1989, financial intermediation accounted for around 1.5% of total economy profits. This figure had risen to 15% by 2007.10 The share of finance in economic output was, however, dwarfed by the growth in the assets held by the UK banking system: banks’ assets grew fivefold between 1990 and 2007, reaching almost 500% of GDP by 2007.11 The UK also boasted one of the biggest shadow banking systems relative to its GDP before the crisis — a trend that has continued to this day.12 Meanwhile, cottage industries of financial lawyers, consultants, and assorted advisors grew up in the glistening towers in the City of London and Canary Wharf. Between 1997 and 2010, the increase in the share of financial and insurance services in UK value-added was greater than the increase in the share of any other broad sector bar the government sector — itself supported by the tax revenues provided by finance.13 Overall, by 2007, the UK had one of the largest finance sectors in the world relative to the real economy.
But financialisation can’t be reduced to the increasing importance of big banks in the functioning of the economy.14 It’s not as though capitalism has been “taken over” by finance. Instead, every aspect of economic activity has been subtly, and sometimes dramatically, transformed by the rising importance of finance in the economy as a whole. Whereas economic life for the individual was once centred around wages and wage bargaining, now the management of debt has gained importance. Businesses once focused primarily on producing the goods and services for which they had a competitive advantage, but today they are likely to place just as much if not more focus on their share price, their dividend regime, their borrowing and the bets they’ve made on exchange rates and interest rates. There was a time when state borrowing was constrained by restrictive monetary policy; today states are not only able to borrow far in excess of what they earn, they are also able to have private corporations undertake spending on their behalf.
Historically, capitalism’s advocates have argued that it makes everyone better off by creating wealth. Businesses make profits, and they invest these profits in future production. This creates jobs, which raise living standards for the majority of the population. Such a system might lead to rising inequality in the short term but, as entrepreneurs reinvest their profits, eventually this wealth will trickle down to everyone else. Whilst this has always been an optimistic reading of the way capitalism works, during the post-war period it often appeared to reflect reality (at least in the global North). But finance-led growth upsets the channels through which wealth is supposed to trickle down from rich to poor, and it does so in obvious ways. Investment slows, wages fall, and profits — especially financial profits — boom.15
Whilst all capitalist systems are premised upon the monopolisation of the gains of growth by the people who own the assets, under finance-led growth these dynamics become more extreme. Rising private debt might conceal this fact during the upswing of the economic cycle, but when the downturn hits it becomes clear that finance-led growth is based on trickle-up economics, in which the gains of the wealthy come directly at the expense of ordinary people. This is because financialisation involves the extraction of economic rents from the production process — income derived from the ownership of existing assets that doesn’t create anything new. When, for example, a landlord increases a tenant’s rent without having made any changes to the property, this is a simple transfer of wealth from a non-owner to an owner. The landowner cannot use the increase in price to “create” new land that would benefit everyone; he will simply pocket the money for himself. The same can be said for interest payments on debt, which transfer money from people who don’t own capital to people who do. Rising household debt, booming property prices, the enforcement of shareholder value and the financialisation of the state all transfer money from those who don’t own assets to those who do, without creating anything new in the process.
Financialised capitalism may be a uniquely extractive way of organising the economy, but this is not to say that it represents the perversion of an otherwise sound model. Rather, it is a process that has been driven by the logic of capitalism itself. As their economic model has developed, the owners of capital have sought out ever more ingenious ways to maximise returns, with financial extractivism the latest fix. In many ways finance-led growth represents capitalism’s most perfect incarnation — a system in which profits seem to appear out of thin air, even as these gains really represent value extracted from workers, now and in the future.
The Interregnum
The financial crisis was the beginning of the end for finance-led growth. Since 2007, the UK has experienced the longest period of wage stagnation since the Napoleonic wars, whilst American workers have the same purchasing power as they did forty years ago.16 Employment may be high, but work has become more insecure, and levels of in-work poverty have risen. High levels of employment have also coincided with a stagnation in productivity — the amount of output produced for every hour worked — which has flatlined in both countries since the financial crisis. The rate of investment — by both the public and private sectors in the US and the UK — has fallen since 2008 and remains below its pre-crisis peak.17 In the UK, falling business confidence, volatility in financial markets, and the levelling off in house prices suggest that a recession is just around the corner. In the US, meanwhile, corporate debt is higher as a percentage of GDP than it has ever been. There seems to be a new corporate scandal every week, with overindebted, extractive, monopolistic companies controlling an increasing share of economic output whilst public services crumble. Interest rates around the world were until recently at record lows, and most states are only now, a decade on from the crash, starting to wind up quantitative easing. The extra weight placed on monetary policy means that when the next crisis hits there will be little room for manoeuvre.
Economists are at a loss to explain this ongoing malaise. Some have argued that we are living through an era of “secular stagnation” (where secular means long-term). Technological and demographic change mean that the Western world must accustom itself to much lower rates of growth than in the past.18 Others claim that this economic stagnation results from rising government debt, which is a drain on productive economic activity and is scaring off foreign investment.19 Still others argue that this is all down to “economic populism” — governments implementing ill-advised economic policies to please the masses rather than listening to the timeless, objective wisdom of professional economists.20 The lost decade since the financial crisis is living up to that old adage that when you get ten economists in a room, you’ll get eleven opinions. The old guard is unable to explain to people just what on Earth is going on.
The central argument of this book is that, having gorged themselves before the crash, today’s capitalists are running out of things to take. We are currently living through the death throes of finance-led growth. Just like the post-war consensus of the 1970s, the old model is crumbling before our eyes, leaving chaos and destruction in its wake. And just like the post-war consensus, the death of finance-led growth was inevitable and predictable. Marx showed that every kind of capitalist system is subject to its own contradictions: strains that arise from the normal functioning of the economic model — from businesses trying to make money, politicians trying to get votes, and people trying to survive.21 These dynamics have characterised the development of capitalism for centuries. Each and every capitalist model must end in crisis, and moments of crisis are moments of adaptation — moments when, out of the ashes of the old, the new economy can be born.
They are also, as the Italian theorist Antonio Gramsci pointed out, very dangerous moments indeed. Each crisis of capitalism doesn’t simply threaten to bring down the dominant economic model, but the institutions that govern politics and society too. When people no longer expect to be made better off by the status quo, they withdraw their support for it. The guardians of our governing institutions double down as a result, defending their model even as it fails to deliver gains for the majority of the population. Both sides dig in, leading to battles that can be drawn along surprising lines — with those at the bottom the most likely to lose out.
British society has clearly entered such a phase since the financial crisis. The UK’s 2016 referendum vote to leave the European Union was the biggest upset to British politics in a generation. Voters across the country used the referendum to express their discontent with a status quo that has seen them excluded from the proceeds of economic growth. The 2017 general election that followed the vote delivered a government unable to rule without the conditional support of one of the most regressive parties in British politics — the Democratic Unionist Party (DUP) — and unable to undertake the one task appointed to it — delivering a Brexit deal. In the absence of a growing finance sector, and with rising debt and asset price inflation, inequality has risen, living standards have fallen, and the old neoliberal institutions have struggled to contain, let alone channel, the anger of the majority of the population. A pervasive sense of crisis hangs in the air of British politics. The old paradigm can offer only more of the same, and ongoing austerity and weak growth will only exacerbate the UK’s political and economic problems.
In the US, the election of Donald Trump signals a similar grassroots backlash, even as Trump’s economic policy has served to increase inequality and provide windfalls for finance capital. Socialists within the Democratic Party seem to be profiting from Trump’s failure to address the concerns of the constituency that helped to elect him. In Europe, a new wave of xenophobia is sweeping across the continent, countered only by the steady rise in support for popular, socialist alternatives. Crisis after crisis has afflicted the economies that were once represented as the great success stories of liberal, capitalist development — Brazil, South Africa, Russia, Argentina, Turkey, and so many others are all experiencing political and economic turmoil. The poorest states continue to be left behind. Countries like Mozambique and Ghana, along with many low-income countries, are in deep debt distress.
Meanwhile, the environment is collapsing around us. Climate change is accelerating at rates that will render many parts of the planet uninhabitable in just a few short years. The past four years have been the warmest since records began, and the warmest twenty years have all occurred within the last twenty-two. As our forests are destroyed and our oceans acidified, it will not be long before we reach a series of tipping points at which the effects of climate change will accelerate suddenly and unpredictably, rapidly creating the kind of “hothouse Earth” currently only seen in science fiction. And it is not just climate change we have to worry about. We are living through a mass extinction: the last fifty years have seen a 60% fall in vertebrate populations. Insects, particularly those critical for pollinating many plant species, are in terminal decline, and our soils are being eroded faster than they can be replenished. In other words, we are on the verge of ecological Armageddon.
But this moment of extended crisis could also represent a moment of opportunity. Not only are many capitalist economies around the world failing to deliver rising living standards for their most powerful constituencies; the capitalist mode of production is also accelerating the breakdown of all our most important environmental systems. Finance-led growth contributes to these dynamics by creating huge, unsustainable booms, followed by equally massive, wasteful busts. We cannot afford to organise our economies according to the logic of finance-led growth anymore. But our aim should not be to replace it with a new, equally contradictory model. Instead, we must use this moment of crisis as an opportunity to move beyond capitalism entirely. But that means answering a question that, ordinarily, we are not allowed to ask: What comes next?
What is the Alternative?
For a long time, it has been easier to imagine the end of the world than the end of capitalism — by which we mean an economic system based on private ownership of the means of production (the main factors used in the production process) with the aim of profit maximisation, the enforcement of private property rights by the state, and the allocation of resources through the market mechanism. The system may create inequality, unemployment, frequent crises, and environmental degradation but, we have been told, the alternative is far worse. Socialism — a system under which the means of production are owned collectively — has only ever led to death and destruction. Capitalism is the worst way of organising the economy, except for all the others.
Socialism’s opponents seem to believe that the basic conditions for organising a society and an economy have been the same at every moment throughout history. Capitalism emerged naturally because it is the natural way of doing things; socialism has failed because it is not. But, as surprising as it may seem, capitalism has not always existed. For most of human history, societies have been governed based on non-capitalist economic and political institutions. Feudalism only gave way to capitalism because states became powerful enough to disrupt rural power relationships and create a landless working class that could be used in the production process.22 This kind of power was premised upon the existence of complex societies, and the availability of certain technologies, without which experiments at capitalism would have foundered.
In the same way, the technological, economic, and political pre-conditions for the establishment of socialist societies exist today in ways that they never have in history. Large sections of the global economy are governed by rational planning rather than the market — that is, all of the economic activity that takes place within private corporations.23 Huge, international monopolies, many times the size of modern nation states in revenue terms, organise themselves based on a regime of top-down planning, generally using the latest technologies to do so. Neoclassical economists treat the firm as a “black box” and do not see relations within these firms as particularly relevant to economic outcomes. Instead, some might say conveniently, they restrict their analysis to those areas of economic activity governed by market relations. But the management of most firms today makes it quite clear that rational planning is perfectly possible, provided you have the means, and you are working towards the “right” ends.
When it comes to the means, we are living in a phase of human history associated with unparalleled technological development.24 Each of us holds in our pockets a computing device more powerful than the technology that sent the first man into space. We produce endless amounts of data about our habits, behaviours, and preferences that can be agglomerated and used by firms like Amazon to determine how much they should be producing, and of what. But the revolutionary power of these technologies is limited because they are concentrated in the hands of a tiny elite, which is using them to maximise their profits.
This brings us to the second issue, ends. Some say that it doesn’t matter what goes on inside firms as long as they are organised according to the logic of profit maximisation. This ensures that they remain “efficient”, and therefore provides for an optimal allocation of society’s limited resources. Except it doesn’t. Not only do many firms operate far from maximum efficiency (and pay expensive consultants to tell them how they can improve), they produce a host of other social and environmental ills — from inequality to climate change. There is no way that an organisational structure based on incentivising those at the top to extract as much as possible is the most rational — or indeed moral — way to organise production today. And top-down planning with the aim of achieving other ends is just as likely to lead to information and coordination problems.
Complex systems — whether these be firms or entire economies — rely on feedback. They are neither centrally directed nor perfectly decentralised; they operate on the boundary between chaos and order, the realm of complexity. Such systems are dynamic — they are constantly moving. It is never possible to achieve a static equilibrium because conditions are always in flux. Instead, feedback from different parts of the network helps people to self-organise with the aim of achieving a collectively determined goal, with some coordination and direction provided from the centre.
Capitalism, on the other hand, operates at the two poles of order and chaos. Within the firm — which neoclassical economists don’t study, but which Marxists do — production is organised through command-and-control, enforced by the threat of “the sack” and supported by various other technologies of control and exploitation. Outside of the firm, the state determines the rules of the game, backed up by the threat of force. These two institutions — firms and states — work together to produce an economic system based on domination, one which also provides the appearance of freedom, because within the market — its boundaries having already been determined by the powerful — economic activity seems almost anarchic. There are booms and busts, firms rise and fall, individuals are encouraged to place themselves in constant competition with one another just to survive. And this entire controlled and chaotic, free and coercive system is governed with one sole aim: maximising profits for those at the top.
Finance-led growth represents the apogee of the logic of capitalism. The owners of capital are able to derive profits without actually producing anything of value. They lend their capital out to other economic actors, who then hand over a portion of their future earnings to financiers, limiting economic growth. The costs of this model are left to future generations in the form of mountains of private debt and unsustainable rates of resource consumption. If the logic of capitalism is based on extraction from people and planet today, then finance-led growth is based on extraction from people and planet today and tomorrow, until the future itself has been stolen.
Climate change, global poverty and the financial crisis are all disasters that have emerged from firms and governments mismanaging the complex systems that they have created in the pursuit of profit. Capitalism has built these systems, and the powerful are trying to contain their complexity using hierarchical, top-down decision-making processes that are unfit for the task. As a result, capitalists are slowly losing control. As Marx put it, modern bourgeois society, which “has conjured up such gigantic means of production and of exchange, is like the sorcerer, who is no longer able to control the powers of the nether world whom he has called up by his spells”.25
There is a better way. Just as feudalism paved the way for capitalism, the development of capitalism is paving the way for socialism. Socialising ownership would ensure that economic growth and development benefit everyone — if everyone has a stake in the economy, then when the economy grows, we all get better off. But it is the democratic aspect of democratic socialism that is truly revolutionary. Rather than organising production based on the profit motive, working people would come together to determine their collective goals and how best to achieve them. Rather than working purely to maximise profits, we would be working to maximise our collective prosperity, which includes the health and happiness of people and planet.
Building the Future
Visions of the future abound. Democratic socialism, cybernetic socialism, fully automated luxury communism — all these utopian dreams are slowly seeping into our collective consciousness and allowing us to imagine a future not governed by the logic of private ownership and the market. But it is not enough simply to imagine a new world: we must develop a strategy to get there. Historical change does not proceed in neat, clearly delineated stages. We cannot wait for capitalism to fail and socialism to replace it. But equally, we cannot force our way towards a socialist society if the technological conditions, economic outputs, and, most importantly, the power relations that would support it are not already starting to emerge. What we need is a plan to get from here to there, based on an analysis of our current situation and the strategic points for intervention it offers.26
And this requires an analysis of how change actually happens. Socialists have long been divided between those who claim that history is driven forward by the objective forces of technological change — a view informed by one reading of Marx — and those who argue that history is driven forward by people coming together to organise and influence events — a view informed by another reading of Marx. One prioritises structures — the overarching political, economic, and technological conditions that shape what happens in the world — whilst the other prioritises agency — the individual and collective actions undertaken by people who are free to shape the conditions of their own existence.
Marx himself brought these ideas together using the notions of “contradiction” and “crisis”.27 Capitalist systems, of whatever kind, have their own inherent contradictions — internal problems which mean that, after a while, they stop working properly. The 2008 financial crisis resulted from the contradictions of finance-led growth — the creation of huge amounts of debt, the growth of the finance sector, and declining wages and capital investment. Capitalist systems can trundle along for decades, their problems getting worse and worse without anyone noticing, until they implode in a moment of crisis. These moments — understood as historical epochs rather than brief time periods — are especially important in determining the course of capitalist development. During a crisis, economic and technological structures loosen their grip over human action. Institutions cease to function, peoples’ ideas cease to make sense, rifts emerge within dominant factions, material resources are destroyed, and everything becomes more contingent. Possibility expands during moments of crisis: individual and collective action comes to matter much more.
Marx’s theory of history provides us with a unique understanding of our own times, and how we might change them. The contradictions of the social-democratic model created acute tensions in British political economy during the 1970s, and the crisis that ensued provided the perfect political moment for the wealthy to build a new institutional compromise out of the wreckage of the old.28 They took this opportunity and used it to rebalance power in society away from labour and towards capital, institutionalising a new model of growth and giving rise to a period of finance-led growth that ran from the 1980s to 2007.
Finance-led growth was born, and for a while it seemed as though we had chanced upon a uniquely stable economic model. Politicians spent most of the 1990s and early 2000s claiming to have solved the problem of boom and bust. History, they told us, was over.29 Capitalism had won. In fact, for these observers, history had ended almost as soon as capitalism was born. The bourgeois economists, Marx claimed, operate according to the belief that “there has been history, but there is no longer any”.30 There is, they argue, no alternative to capitalism: “things might be bad for you now, but they could be a whole lot worse — just look at Venezuela”. If anything, the masses should be grateful for the benign, enlightened leadership of the ruling class.
The financial crisis shattered this illusion. And yet the ruling classes continued as though nothing had happened. They implemented austerity on the basis of an economic analysis undertaken by those who had failed to predict the crisis, and they ensured that the costs fell mainly on those least able to bear them. Many of the same elites who have governed the global economy for the last forty years remain in power to this day, which is perhaps why so few of the issues that caused the crisis have been dealt with. Debt levels are extraordinarily high, inequality is rising, the environment is collapsing, and policymakers seem less able to get to grips with these issues than ever before. Where is the revolt? Isn’t the financial crisis a paradigmatic example of our collective inability to challenge the deep-rooted logic of the capitalist system?
Yes and no. Ideas, behaviours, and beliefs that are built up over a lifetime cannot be undone overnight. Those raised during the end of history did not see the scales fall from their eyes on the day that Lehman Brothers collapsed. And far from organising in the shadows like the Mont Pelerin Society — the network of right-wing thinkers who sought to undermine social democracy — the left has spent decades in retreat under neoliberalism. Socialist parties, movements, and narratives all faded into the background: many genuinely believed that the centuries-long struggle between labour and capital was over. It took a while for people to realise that the crash had not been a blip; that capitalism was not invulnerable; and that things were only going to get worse, not better. Today, after the extended period of stagnation that followed the crash, we inhabit a revolutionary moment. We live in the shadow of a great event that will come to define the thinking of a generation.31
But unless we are able to contextualise this moment in the long history of capitalist development, we will fail to exploit its full potential. To move beyond capitalism, we must develop an understanding of its structural weaknesses to determine how best to challenge it. By exposing the unseen, unquestioned laws according to which the economy works, Marx demonstrated that history would continue under capitalism: that things could be different. Applying his method to our current moment allows us to understand how the system really works, and how we might go about changing it.
In just over a decade, it will be too late for us to deal with one of the greatest challenges humanity has ever faced, and before that, elites are likely to have reasserted their control by foisting upon us a new order that maintains all the powers of the old. But between now and then lies an extended moment of crisis — a moment of contingency and uncertainty — a moment during which the logic of capitalism has once again been brought into question. A new economy, and a new society, is slowly being born in the minds of those who know that history will never end. It is up to us to bring that new world into being.
CHAPTER ONE
THE GOLDEN AGE OF CAPITALISM
In 1944, the great and the good met in Bretton Woods, New Hampshire, to discuss rebuilding the world economy in the wake of the bloodiest war in history.1 The American delegation, led by Harry Dexter White, had been sent to ensure that the reins of the global economy were handed from the UK to the US in an orderly fashion. The British delegation, led by the famed economist J.M. Keynes, had been sent to retain as much power as conceivably possible without angering the UK’s main creditor, the US, which had emerged as the new global hegemon in the wake of the destruction of Europe. White, a little-known Treasury apparatchik, was a “short and stocky… self-made man from the wrong side of the tracks”. Other delegates recall that he was shy and reserved, though this may have had something to do with the fact that he spent much of the conference in hushed meetings with the delegates from the Soviet Union. Years later, he was accused of being a Russian spy, which he denied before dying from a heart attack. Keynes couldn’t have been more different — a tall, intellectual member of the British establishment, who unabashedly touted his achievements and promoted his own ideas. They were the “odd couple of international economics”.
The conference itself was, by all accounts, a raucous affair. Its wheels were greased with alcohol and fine food — in the small hours of the morning, delegates could be found drunk and cavorting with the “pretty girls” sourced from all over the US. Keynes predicted that the end of the conference would come alongside “acute alcohol poisoning”. The hotel boasted top of the range facilities, including “boot and gun rooms, a furrier and card rooms for the wives, a bowling alley for the kids, a billiard room for the evening”, as well as a preponderance of bars, restaurants and “beautiful women”. The more extravagant, the better — the splendour and superiority of the American way was to be shown at every turn.
It is somewhat ironic that the decadent crowd at Bretton Woods came up with an agreement that would hold back the re-emergence of the gilded age of the inter-war years. Bretton Woods was meant to prevent the outbreak of not only another world war, but also another Wall Street Crash. Keynes argued forcefully that doing so would require reining in what he called the “rentier class”: those who made their money from lending and speculation, rather than the production, sale and distribution of commodities.2 In the late nineteenth and early twentieth centuries, rentiers had become extremely powerful on the back of the rising profits associated with the industrial revolution and increasing trade within the world’s constellation of empires. In the absence of controls on capital mobility, these profits traversed the global economy seeking out the highest returns. Much of this capital was invested in US stock markets, pushing up stock prices and inflating a bubble that eventually popped in 1929.
What the Great Depression started was finished by the Second World War, which saw billions of dollars’ worth of destruction, and increases in taxation to finance states’ war efforts.3 As a result, financial capital emerged from the first half of the twentieth century on the back foot, which made reining in the parasitic rentier class easier. Whilst the negotiators at Bretton Woods were undoubtedly concerned with securing the profitability of their domestic banking industry — not least the emerging power of Wall Street — just one banker was invited to the summit by the US delegation.4
Between the eating, the drinking, and the flirting, delegates at the conference hammered out an historic agreement for a set of institutions that would govern the global economy during the golden age of capitalism. The world’s currencies would be pegged to the dollar at a pre-determined level, supervised by the Federal Reserve, and the dollar would be pegged to gold. Capital controls were implemented to prevent financiers from engaging in the kind of currency speculation that could cause wild swings in exchange rates. The system of exchange-rate pegging and controls on capital mobility served to hem in those powerful pools of capital that had wreaked such havoc in the global economy in the period before 1929. Bretton Woods was a significant step forward in reining in the rentier class.
But Keynes didn’t get everything he wanted. He was hindered in his battle against international finance by the formidable White, backed up by the full force of US imperial power. White wished to retain the US dollar as the centre of the international monetary system, whilst Keynes wanted it replaced with a new international currency — the bancor. White emerged victorious, and the US gained the “exorbitant privilege” of controlling the world’s reserve currency.5 In other words, as well as constraining international finance, Bretton Woods also institutionalised American imperialism.6
The Bretton Woods conference marked the dawning of a new era for the global economy. Europe set about the long processes of post-war reconstruction and decolonisation, and the multinational corporations of the world’s newest superpower profited handsomely.7 Trade flows increased after the years of autarky during the war, and a new age of globalisation began. Whilst Bretton Woods provided the international framework for this economic renewal, it was at the level of national economic policy that the transition from pre-war laissez-faire economics was most evident. Keynes was, once again, at the centre of these developments.
In the inter-war period, Keynes had mounted a challenge to the economics profession by developing a theory of economic demand that undermined the central tenet of classical economics — Say’s law, the idea that supply creates its own demand.8 According to Jean-Baptiste Say — a Napoleonic-era French economist — prices in a free market will rise and fall to ensure that the market “clears”, leaving no goods or services unsold once everyone has had the chance to bid. If the market fails to clear — i.e. if businesses have products to sell but no one wants to buy them — it is because something is getting in the way of the price mechanism, like taxes or regulation. The law applied to workers as well as commodities, which reinforced the idea that there could be no such thing as involuntary unemployment. If a worker was unable to find a job, it was because he was setting his wage expectations too high.
This ideology was, of course, at odds with the experiences of those who had lived through the Great Depression. But the classical economists would retort that their field was a science, which paid no heed to the sensibilities of working people. Keynes was able to prove them wrong. His great innovation was to introduce the idea of uncertainty into economic models. When people are uncertain about the future, they may behave in ways that seem irrational — for example, saving when they will receive little return for doing so, or spending far above what they can afford. This is because in the context of uncertainty, people prefer to hold liquid (easy-to-sell) assets — and they tend to prefer to hold the most liquid asset of all: cash. Liquidity preference means that, the higher the levels of uncertainty, the more people save rather than spend.
This kind of uncertainty marks businesses’ behaviour even more than consumers’, and affects their investment decisions. If businesses’ confidence about the future turns, then they are likely to stop investing. These lower levels of investment will result in lower revenues for suppliers, who may have to lay people off; those workers will then reduce their spending, leading to a fall in economic activity. This kind of self-reinforcing cycle of expectations is what gives rise to the business cycle: the ups and downs of the economy through time. It also shows why, over the short term, Say’s law doesn’t hold — if businesses lack confidence in future economic growth, they may choose not to spend even if they can afford to do so. And as Keynes famously stated, “in the long run we are all dead”.
But Keynes didn’t stop with this theoretical innovation; he also offered solutions to policymakers. Say’s law implies that taxes and regulation distort the normal functioning of the market, and that it is best for everyone when state economic policy is as unobtrusive as possible. But Keynesian economics provides a role for the state as an influencer of expectations, and a backstop for demand. If, for example, business confidence drops and investment falls, the state can counteract the multiplier effect this will cause by increasing its own spending or by cutting interest rates, making borrowing cheaper. If, on the other hand, businesses are investing too much, leading to inflation, the state can cut spending or raise interest rates to mute the upward swing of the business cycle. Managing the business cycle also required reining in the influence of finance, because lending and investment are also pro-cyclical: they rise during the good times and fall during the bad times. If the role of government was to lessen the ups and downs of the business cycle, it had to properly regulate finance, which so often exacerbated those ups and downs.
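To give a minimal, textbook-style sketch of the logic (the notation below is the standard modern one, not Keynes’s own): if households spend a fraction $c$ of every extra pound of income they receive, an initial rise in government spending $\Delta G$ ultimately raises output by

$$\Delta Y = \frac{1}{1 - c}\,\Delta G$$

so that, with $c = 0.8$, each additional pound of public spending generates roughly five pounds of demand. The same arithmetic run in reverse shows how a collapse in private investment can drag the rest of the economy down with it.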
This kind of Keynesian economic management had a significant influence on economic policy in the post-war period. The destruction of the war, the increasing size of the state, and the arrival of Bretton Woods led to something of a rebalancing in the power of labour relative to capital within the states of the global North.9 The rising political power of domestic labour movements led to the widespread take-up of Keynes’ ideas, which were, after all, aimed at preventing recessions and unemployment. States and unions often developed close relationships with one another via emerging mass parties representing labour, and many had a centralised collective bargaining process. Taxes on the wealthy and on corporations were high — underpinned by low levels of capital mobility — and societies became much more equal. During this time, many Keynesians believed that they had finally succeeded in taming the excesses of a capitalist system that had caused so much destruction in the preceding decades, which is why this period was termed the golden age of capitalism, following the gilded age of the pre-war years.
In the UK, this period saw the emergence of a new type of political economy, often referred to as the post-war or Keynesian consensus.10 Following the wartime coalition, Labour roundly defeated the Conservatives in the 1945 election and Clement Attlee became prime minister. The new Labour government seized on Keynesianism, which had until that point had only a limited impact on economic policy: Keynes’ ideas had revolutionised economics, but it took a change in power relations for them to revolutionise the real world. Over the course of the next several decades, inequality fell, wages rose in line with productivity, living standards for the majority rose and both the labour movement and the state apparatus became more powerful relative to capital. The welfare state developed, providing a safety net when the business cycle turned, as well as increasing the social wage and therefore workers’ bargaining power. And whilst the City grew, and retained its strong influence over government, the rentier class — landlords, speculators, and financiers — was much more constrained than it had been before.
The post-war consensus could be enforced because the workers, who stood to benefit from Keynesian management of the economy, had emerged from the war more powerful than ever before, and they organised to make it happen. In this way, the rebalancing of power from capital to labour that came about as a result of the war was institutionalised in the post-war social and economic framework implemented in the 1940s.
How Does Change Happen?
This understanding of historical change — one driven by power relations, institutions, and crisis — is based on one reading of Marx’s analysis of history. One reading, because it is a topic upon which Marxists continue to disagree. In particular, there is some disagreement between those who believe Marx prioritised economic structures in his analysis of historical development, and those who believe he prioritised agency. In other words, these groups have different answers to the question: “what matters most when it comes to historical change – economic and technological conditions, or how people respond to these conditions?”
On the first view, technological change leads to changes in people’s working conditions, and this leads to changes in the balance of power within society, and therefore in people’s ideas. For example, the advent of mass production made it easier for workers to share political ideas and to organise to resist their exploitation, facilitating the emergence of unions. In this case, the political change naturally follows from the technological change in a way that can appear inevitable. Economic and technological conditions – what Marx referred to as the economic base – determine the balance of power in capitalist societies, and those with the power set about building institutions that reinforce their ideas – what he referred to as the superstructure. The powerful use their control over education, the media, and the law to spread their narratives, which determine how people make sense of the world. This is how the system remains stable from day to day. But it is all underpinned by an asymmetry of material power – by who has control over force and resources. Taken to extremes, those who view history in this way may claim that human agency doesn’t matter at all – history progresses due to changes in technology, not human decisions.
Others respond that human beings aren’t robots: we have the capacity for free thought and debate, and we make sense of the world in our own ways. They claim that the superstructure has power in its own right – institutions can shape the development of capitalism, they can make it harsher or kinder, more extractive or less exploitative. And institutions can be shaped by battles that take place in the realm of ideas. These people can often be found arguing that, if a policy is convincing enough, and if we lobby hard enough, we will be able to implement it and change the way capitalism works. For them, it is human action that drives history, not the other way around. For example, the development of social democracy wasn’t just based on changes in technology that made it easier for workers to organise. It was workers who won limits on the working week, sick pay, and eventually even the creation of the welfare state itself; and they did so by organising.
The determinism of the structuralists jars with the utopianism of those who view human agency as the driving force of history, and this tension has dominated debates on the left — and indeed in the social sciences more broadly — for generations. Marx’s own method for dealing with these questions – also the method used in this book – was based on the idea of the dialectic, in which what appear at first as opposing forces merge to determine the direction of historical change. The economic base — the technological basis of production — interacts with the superstructure — ideas, culture, and institutions — to determine what happens and when. Under this view, the nature of technology and the economy provides the overarching context in which human action takes place — these things shape people’s incentives and behaviours in ways that make certain outcomes more likely than others. But they do not determine human action. People, their capacity to organise themselves, and the ideas they hold, still have the capacity to drive and shape history in ways that cannot be determined through an analysis of their economic conditions alone. Men make their own history, but they do not make it as they please.
The relationship between structure and agency becomes particularly important during moments of structural crisis, which naturally emerge in capitalist systems due to their inherent contradictions.11 Capitalism is subject to contradictions that stop it from working properly — from workers not earning enough to purchase the goods capitalists are producing, to the emergence of financial crises driven by investment booms, to the environmental crises associated with the injudicious extraction and use of the planet’s scarce resources. These contradictions are contained by political institutions designed by the powerful to make the system more stable — like the welfare state or financial and environmental regulation. But these institutions do not stop the contradictions from emerging, they only mute their impact. As capitalism develops, its contradictions escalate until they explode in a moment of crisis. These extended periods of crisis are critical in determining how change happens. Moments of crisis are moments when institutions, norms, and discourses break down — it becomes harder for our political, economic, and social systems to function, and much more difficult for people to make sense of the world. Divisions emerge amongst the people with the power, which leave them vulnerable to all sorts of attacks — most revolutions have taken place during moments of crisis. The structural flaws of capitalism lead to crises, and crises are times when agency matters more: it is primarily during these moments that ideas and the movements that champion them can influence the course of history.
And this is exactly what happened in the post-war period. The destruction of the war had changed the balance of power between capital and labour and created an institutional crisis of which the latter could take advantage. Working people used this moment of crisis to organise and institutionalise a new settlement — one that would benefit them. And for a long time, this framework worked. But it could not last forever. As the twentieth century progressed, capital began to strain against the leash that had been placed on it, and the compromise between labour, capital, and the state began to break down. Social democracy, just like any capitalist economic model, was subject to its own inherent contradictions. And its collapse paved the way for something new entirely.
The Rise of Global Finance
On 28 June 1955, G.I. Williamson, the Chief Foreign Manager of the Midland Bank, was called into the Bank of England to discuss what appeared to be some unusual dealings in the foreign exchange markets.12 Midland Bank had been engaging in an activity that, up until 1955, no UK bank had dared to try. It had been taking deposits denominated in US dollars and paying out interest to the holders of these deposits — an activity formerly restricted to US banks regulated by the Federal Reserve. The Bank of England’s “gentlemanly” approach to regulation at the time is well-documented. Bankers were frequently invited to Threadneedle Street — the Bank’s old, imposing headquarters, in which alumni of Eton, Oxford, and Cambridge were likely to have felt quite comfortable — for a cup of tea and a chat. Occasionally stern words were exchanged, but rarely would any real discord disturb what has been described as the “dream-like” state of the City of London in the golden era of capitalism.
The discussions between Williamson and Cyril Hamilton, a Bank official, were no different. Hamilton summarised the meeting in a memo reassuring his higher-ups that “nothing out of the ordinary had taken place” at Midland and that its foreign exchange activities had been undertaken in the “normal course of business”. In any case, Hamilton reported that “Williamson appreciates that a light warning has been shown”. Quite why a light warning would have been required for proceedings undertaken in the normal course of business was not specified. Perhaps Hamilton had a faint inkling that Midland’s activities represented an entirely new phenomenon that the Bank of England was not quite equipped to manage. It is, however, highly unlikely that he realised he had just given the go ahead for an innovation that, within two decades, would have transformed global finance.
The new market in dollars outside of the US, and therefore outside of the jurisdiction of the Federal Reserve, was called the “Eurodollar market”. Usually, when you hold a foreign currency, you can either spend it in a foreign country, deposit it in a foreign bank, or invest it in foreign assets — a British bank wouldn’t generally allow you to deposit euros in your bank account. The Eurodollar markets changed all this by allowing banks to take and pay interest on foreign currency deposits. The term “Eurodollar” is something of a misnomer given that the first dollar deposits held outside the US were taken in the UK, but it stuck, and today the prefix “Euro-” is used for any currency held outside its home country; for example, “Euroyen” are Japanese yen held outside Japan. The implications of this system weren’t truly visible until the Eurodollar markets took off in the 1970s. Socialist and newly-wealthy oil-producing states that wanted to hold dollar deposits without depositing them in US banks were able to put their dollars in London instead. London’s Eurodollar markets grew substantially as a result.
The Eurodollar markets undermined Bretton Woods by creating a global system of unregulated capital flows.13 Those investors holding dollars — pretty much everyone, given the use of the dollar as the global reserve currency — could now deposit them into the City of London. These dollars would then be free to float around the global economy at will, unhindered by the strict regulation then imposed on US banks by the Federal Reserve. Billions of dollars had ended up in the unregulated Eurodollar markets by the 1970s, undermining Keynes’ determination to curb the hot money of the rentier class. This gave financiers in the City an almost bottomless pit of dollar reserves to play with. After decades of retrenchment for the former financial centre of the largest empire in the world, the Eurodollar markets gave the City of London a new lease of life.
But the growth of the Eurodollar markets wasn’t the only threat to Bretton Woods that emerged in the 1970s. The increase in international trade that took place in the post-war period benefitted some countries more than others. US corporations, backed by the most powerful state in the world, grew substantially. Many were drafted by the US government to help rebuild Europe, becoming some of the first modern multinational corporations in the process. Between 1955 and 1965, US corporations increased their subsidiaries in Europe threefold.14 As the reconstruction effort took off, they were joined by German and Japanese multinationals, such that by the 1970s there were more, and larger, multinational corporations than ever before.
The growth of the multinational corporation meant that billions of pounds’ worth of capital was flowing around the world within corporations. Toyota, General Electric, and Volkswagen couldn’t afford to keep their subsidiaries across the globe insulated from one another — money had to be moved, even if that meant undermining the monetary architecture of the international economy. Technological change also facilitated direct transfers of capital between different parts of the world. All this meant that, despite the continued existence of capital controls, capital mobility had increased substantially by the 1970s. The combination of the emergence of the Eurodollar markets and the rise of the multinational corporation was beginning to place serious strain on Bretton Woods.
But it was the US government — not the banks — that dealt the final blow to the system that it had helped to create. With the dollar as the reserve currency, the US had gained the “exorbitant privilege” of being able to produce dollars to finance its spending.15 Because everyone needed dollars, the US could spend as much as it liked without the threat of hyperinflation. The gold peg was supposed to rein in this behaviour: if investors started to think that there were more dollars in circulation than gold to back them up, they might turn up at Fort Knox demanding the weight of their dollars in gold. But this didn’t stop the Americans from printing billions of dollars to fund a wasteful and destructive war in Vietnam. With dollars also leaking out of the US via its growing current account deficit, the global economy was facing a dollar glut by the 1970s. Realising that there were far too many dollars in circulation to keep up the pretence, in 1971 Nixon announced that dollars would no longer be convertible to gold. Bretton Woods was finally over.
Many expected a sharp devaluation of the dollar at this point, but this didn’t happen. In fact, the dollar — strong as ever — continued to be used as the global reserve currency, even in the absence of any link with gold. Finally, the real foundation of Bretton Woods had been exposed: American imperial power. The gold peg established at Bretton Woods was not the source of the dollar’s value; the source of its value was a collective agreement that dollars would be used as the default global currency, much as English had by that point become the default global language. Freed from the need even to pretend it was covering its increased spending with ever-greater gold reserves, the US Treasury was finally unleashed, with consequences that would not be felt for three and a half decades.
The end of Bretton Woods represented a profound transformation in the international monetary system. Absent any link with gold or any other commodity, money became nothing more than a promise, created by fiat by the state issuing it. The value of a currency would now be determined by the forces of supply and demand. Rather than having to limit the amount of money they were creating in order to maintain a currency peg, states would be able to create as much money as they liked, accounting only for the threat of inflation. Private banks were also now free to create money on their behalf in the form of credit, constrained only by domestic regulation. The collapse of Bretton Woods represented the final step away from a system of commodity money, which had been the norm for most of human history, and towards fiat and credit money, which now dominate all other forms of money. The implications of this change would be far more profound than anyone could have seen at the time.16
With the demise of Bretton Woods, capital was finally released from its cage. Many countries continued to maintain capital controls and strict financial regulation. But the glut of dollars that had emerged at the international level needed somewhere to go. Meanwhile, the capital that had been stored up within states like the UK under Bretton Woods was desperate to be released into the global economy. It pushed and strained against the continued existence of capital controls, finding ever more ingenious ways of getting around the system. Finance capital had returned with a vengeance, and it sought to remove all obstacles to its continued growth. But it would take a national crisis for the remnants of the post-war order finally to fall.
The Political Consequences of Social Democracy
Just as Bretton Woods was collapsing, the social democratic model was starting to show signs of strain.17 Bretton Woods created a global economy, with global corporations, global supply chains, and global competition. Eventually, the system became a victim of its own success. Some companies — notably the US multinationals — thrived, but many others found it harder and harder to compete with the rising industries of Germany and Japan. UK corporations in particular found themselves struggling to benefit from the new wave of globalisation, partly because sterling was pegged to the dollar at too high a level, making British exports more expensive to international consumers.18 These firms struggled to cope with increasing international competition, and by the end of the 1960s, their profits had been seriously reduced. By the 1970s, the UK was referred to as the sick man of Europe. From 1973, after an attempt at a European peg was abandoned, sterling fell continuously against the dollar until, in 1976, it dropped below $2 for the first time.19
In this context, one might have thought that the end of Bretton Woods would be good for British capitalists. Freed from the overvalued exchange rate, manufacturers would now finally be able to compete internationally once again. But decades of stagnation cannot be undone overnight. Britain’s manufacturers found that, even with a lower exchange rate, they could not compete with the new multinationals on either quality or cost. The first oil price spike in 1973 drove an increase in inflation, which exceeded 20% in two separate years during the 1970s, peaking at 27% in the year to August 1975. In the absence of strong unions, rising inflation driven by rising costs might not have been such a systemic problem. Under other circumstances, bosses would have laid off workers or reduced pay to cut costs. But with the post-war consensus still firmly in place, unions pressed for pay rises that kept pace with inflation. Able to bargain with and make demands on the state, the unions refused to back down.
Nevertheless, as cost pressures mounted, unemployment rose. The state flitted between increasing spending to alleviate unemployment and cutting it to reduce inflation. The oil price spike had created a catch-22 that Keynesian policymakers were not equipped to deal with: stagflation — the combination of high unemployment and high inflation. This was not supposed to happen. Keynesian economics was based on the idea of the Phillips Curve. In the 1960s, economists drew on the work of William Phillips to posit an inverse relationship between inflation and unemployment. According to the models they built, when unemployment was high, inflation was low, and vice versa, implying that states should tolerate moderate levels of inflation in order to promote full employment.20 Governments were supposed to boost spending and reduce interest rates until full employment was reached, at which point they should start to reduce spending and raise interest rates in order to bring down inflation. Effecting this balancing act between inflation and unemployment was seen as the main aim of economic policy throughout the post-war period.
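A minimal sketch of the relationship these models assumed (the notation is a standard textbook simplification, not Phillips’s original wage equation): inflation $\pi$ and unemployment $u$ were taken to move inversely, roughly

$$\pi_t \approx \alpha - \beta\, u_t, \qquad \beta > 0,$$

so that high unemployment should have implied low inflation, and vice versa. Stagflation, with both $\pi$ and $u$ rising at once, sits nowhere on such a curve, which is why it left policymakers without a playbook.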
But by the 1970s, social democratic management of the economy was failing to bring down either unemployment or inflation — the latter of which was driven by political developments halfway around the world. Increases and decreases in interest rates had done nothing other than create a “stop–go” economy that fluctuated from one set of extremes to another. In this uncharted territory nobody knew what to do. By the time unemployment reached 4% in the early 1970s, it was clear that the state was responding by tacitly withdrawing its promise to protect full employment in an effort to bring down inflation. But such a strategy posed an existential threat to the UK’s trade unions: the withdrawal of the state’s commitment to full employment would mean losing a powerful ally in their fight against the bosses. They could not afford to go down without a fight — not to mention, their members required jobs and pay increases in line with inflation to be able to survive. Industrial action escalated, especially in the industries with the most powerful unions – above all the miners, whose power stemmed from their control over the nation’s energy supply.
Economic turmoil created a political crisis. On the one hand, by the mid-1970s, the Conservatives had roundly failed to turn years of strikes, energy shortages, and stagflation into an electoral advantage. Ted Heath went to the nation and asked them to decide “who governs this country? Us or the miners?”. On the other hand, the Labour government elected in 1974 proved equally unable to end the stalemate. Pursuing a more conciliatory approach, Harold Wilson raised the miners’ wages and attempted to implement a “social contract” between capital and labour, involving a voluntary incomes policy in which the government negotiated pay increases with the unions. But the second oil price spike — which came three years after the UK had sought an emergency loan from the International Monetary Fund — was the nail in the coffin of the social contract. In 1979, with inflation spiralling once again, the unions pushed for a return to free collective bargaining.
The winter of 1978–79 was the coldest since 1962–63, and the combination of industrial action, economic stagnation, and energy shortages led to its being termed the “Winter of Discontent”. A sense of crisis hung in the air. In January 1979, Prime Minister James Callaghan was at a summit in Guadeloupe and was asked by a journalist about “the mounting chaos in the country”. He responded that he didn’t think others would agree with the journalist’s assessment that the country was in chaos. The following day, the Sun famously ran with the headline: “Crisis, What Crisis?”. By 1979, Britain was at a crossroads: the unions would not back down, and the social democratic state could not afford to confront them. What had happened to the golden age of capitalism?
Looking back, it is quite clear that the 1970s were a turning point for the post-war consensus. Businesses could not afford to continue to tolerate unions’ demands for pay increases in the context of rising international competition and high inflation. But unions could not afford not to demand jobs and pay increases in line with inflation. These problems were structural — they were inherent to the way the system functioned. Economic actors pursuing their own interests — whether businesses trying to increase profits, or workers trying to increase wages — eventually led to the emergence of acute strains that threatened to bring the British economy to the brink of collapse. The contradictions inherent in the social democratic growth model had finally come to the fore, and there were only two potential solutions to the crisis: a victory for the workers, or for capital. Much depended on where the loyalties of the state would lie.
Michał Kalecki — a Polish economist who theorised demand management at the same time as, and some have said before, Keynes himself — had foreseen such problems decades earlier.21 After reaching his conclusions about the capacity of the state to control demand in the economy, he argued that such policies couldn’t work for long because there were “political aspects” of full employment policy that rendered it inherently unstable. The state’s commitment to promote full employment undermined the thing that made capitalism work: the threat of the sack. A policy of full employment would remove the “reserve army” that capitalists relied on to ensure a steady stream of cheap labour. Without desperate workers to exploit, profits would dry up.
The powerful state that had emerged from the Second World War had committed a second sin: it was no longer afraid of the capitalists’ threats to withdraw investment. When the government invests too much in the economy, and especially when certain industries are nationalised, it becomes much harder for businesses and investors to withdraw their capital when the state does something they don’t like — the option of “capital strike” is removed. Over the long-term, the combination of these factors encourages owners of capital to oppose policies that promote full employment, even if those policies also boost consumption and therefore support capitalists’ profits.
Kalecki’s argument is not that social democracy is economically unsustainable, but that it is politically untenable: at some point, a political crisis moment will be reached. He explains:
[U]nder a regime of full employment, the “sack” would cease to play its role as a disciplinary measure. The social position of the boss would be undermined, and the self-assurance and class-consciousness of the working class would grow. Strikes for wage increases and improvements in conditions of work would create political tension. It is true that profits would be higher under a regime of full employment than they are on the average under laissez-faire... But “discipline in the factories” and “political stability” are more appreciated than profits by business leaders.
This is what appears to have happened in the 1960s and 1970s. With high wages, low unemployment, and moderate levels of inflation, the power of the UK’s unions grew. The distributional tension over profits between the bosses and the workers was muted during the early years due to the investment and aid being sent by the US and the increase in global trade facilitated by Bretton Woods. But when things started getting tough — when inflation increased and competition from abroad began to erode profits — these tensions exploded onto the national stage. It was at this point, when the battle between capital and labour finally became zero-sum, that the political contradictions of social democracy became apparent.
With profits under pressure, only one thing determined who got the gains from growth: who had the power. Thanks to rising capital mobility and the breakdown of Bretton Woods, the balance of power between capital and labour had changed by the 1970s. Capitalists could threaten to up and leave if they didn’t like the business environment — and though capital controls were still in place, many were finding ingenious ways to move their money anyway. With state support for the labour movement weakening, workers, meanwhile, found themselves facing up to bosses without powerful political allies.
These pressures steadily wore away at the post-war consensus, until they erupted during the crisis of the 1970s. But the old model would not completely collapse until a new one emerged in its place. The political tumult created by the erosion of British social democracy — echoed by the retreat of social democratic movements over much of the global North — provided a long-awaited opportunity for those who had been marginalised during the post-war boom to shape what came next. The left seemed out of answers, but the right saw that their moment had finally arrived.
Never Let a Serious Crisis Go to Waste
After asking voters “who runs the country” and being told “not you”, the humiliated former prime minister Ted Heath was forced into organising an election for leadership of the Conservative Party in 1975. Despite losing the twin elections of 1974, Heath maintained the support of much of the Conservative establishment and newspapers. He was expected to win. But instead he was ousted by the young upstart elected on a radical new economic programme that would eventually come to be known as neoliberalism: the theory that human wellbeing is best advanced by liberating the entrepreneurial spirit through free markets, private property rights, and free trade, all supported by a strong state.22 Her name was Margaret Thatcher.
Thatcher’s radical, neoliberal economic agenda had been forged decades earlier in the Swiss village of Mont Pélerin.23 In 1947, a group of economists from all over the world met to develop a new programme that would begin the fightback against the “Marxist and Keynesian planning sweeping the globe”. This was an austere, intellectual affair, in stark contrast to the bawdy conference that had taken place across the Atlantic three years previously. The Mont Pelerin Society, or the MPS — as the group would name themselves — knew that they were politically and intellectually isolated. The credibility of pre-war laissez-faire liberalism had crashed with Wall Street in 1929. The war that had followed these events had empowered states to levels never previously seen in history, and those states had used their power to constrain the activities of the international financiers who were sponsoring the event.
The MPS objected to any state intervention that stood in the way of free markets. They were deeply offended by the creation of the National Health Service and the introduction of a social safety net. The rise of the unions and the role of the state in supporting collective bargaining were equally significant affronts to neoliberal ideology. But perhaps the most egregious aspect of the post-war consensus was the continued existence of capital controls. Allowing the state to determine where an individual could put their money was seen by some as a threat to human liberty, and by others simply as a barrier to profitability. The alliance between ideologues, desperate to create a world free of totalitarianism where private enterprise thrived, and opportunists, who wanted to undermine a system that was preventing them from making money, marked the Mont Pelerin Society from day one.
This ambiguity is important to understanding how neoliberalism eventually rose to prominence. It is both an internally-coherent intellectual framework and an ideology used to promote the power of the owners of capital in general, and finance capital in particular.24 The work of Hayek, von Mises, and others constituted a serious intellectual enterprise grounded in a particular set of values: namely, a commitment to human freedom, defined by control over one’s property.25 The fact that this gave justification for shrinking the size of the state, removing capital controls, and reducing taxes is what led several prominent international financiers to cover a large portion of the costs for the first meeting. One can see a parallel in the development of Keynesianism and the Labour Party’s adoption of this ideology. On the one hand, Keynes sought to “save capitalism” from its own contradictions, and on the other the Labour Party sought an ideology and set of policies that would allow it to maintain a compromise between the workers, capitalists and the state. In this sense, neoliberalism was no more a conspiratorial plot to take over the global economy than was Keynesianism. Intellectuals will always seek out the powerful to sponsor their ideas, and the powerful will always seek out ideas to justify their interests.
The elite that gathered at Mont Pélerin decided, then and there, that they would devote their time, money, and intellectual resources to an effort to bring down the system of state capitalism which they saw as paving the way to totalitarianism. Their political manifesto — the “Statement of Aims” — included commitments to promote the free initiative and functioning of the market, prevent encroachments on private property rights, and establish states and international institutions that would uphold these ideals. The Statement of Aims also claimed that “[t]he group does not aspire to conduct propaganda”. Yet they hatched a plan to translate these principles into an economic policy agenda that would undermine the social democratic consensus all around the world. Their ideas would be thrust into the mainstream through a network of academics, politicians and think tanks who could spread the word about this newer and better way of looking at economics. They had their work cut out for them. The Keynesian political compromise had seen living standards rise, inequality fall, and a strong bargain emerge between organised labour and the nation-state. Arguing for the abolition of the welfare state made the neoliberals look like dangerous radicals not worth taking seriously. For decades, Hayek and his acolytes were left shouting from the sidelines, derided by academics and politicians alike.
But perhaps the social democrats were too complacent. What looks like unparalleled stability can quickly implode under the dynamic, unconstrained forces of global capitalism. The crisis of the 1970s proved that social democracy was no different to any other capitalist system: it contained its own inherent contradictions that would eventually prove its undoing. Those neoliberals following in the wake of the MPS were as shocked by the collapse of the post-war consensus as anyone else. They had spent decades working at the global level, trying to unpick the regulations that underpinned Bretton Woods, but the national social democratic settlement looked stable in comparison. The Seventies changed everything. With the US state having dealt the final blow to Bretton Woods, the neoliberals felt emboldened. They knew that this spelled the beginning of the end for capital controls. Rising capital mobility would stand them in good stead in their battle with the nation state — capital mobility, after all, gives those who own it veto power. Don’t want to pay your taxes? Move your money abroad.
The neoliberals focused their efforts on the British state — the historic centre of global finance, in which the golden age of capitalism already seemed to be coming to a close with the acute crisis of social democracy. The think tanks they had created after Mont Pélerin — the Institute of Economic Affairs and the Adam Smith Institute — started churning out neoliberal propaganda at an impressive rate. They engaged with any politician who was willing to talk to them — and one proved much more open than any other. Neoliberal economists and lobbyists were quick to latch onto Thatcher’s campaign for the leadership of the Conservative Party.26 When she won, they were equally quick to work with her to shape an electoral agenda that would change the course of British history.
Thatcher’s campaign hinged on three promises: to take on the unions, shrink the state, and create a nation of homeowners. Her electoral promises were couched in populist terms: the Conservatives would “restore the health of our economic and social life”, “restore incentives so that hard work pays”, and “support family life by helping people to become home-owners”. This talk of restoration allowed Thatcher to frame what were radical economic policies in the language of traditional conservatism, drawing on people’s fond memories of the post-war consensus. Her attack on the Labour Party portrayed them as the party of the scroungers, living off the hard work of others, and the thugs, holding the country to ransom. She sought to appeal to traditional Labour voters by claiming that her economic policies would restore full employment, using the famous “Labour isn’t working” posters to convey this message in popular terms. Labour, she claimed, was the party of fringe extremists seeking to bring down British democracy and replace it with Soviet-style totalitarian rule. The Conservatives were the true party of working people — they would lower your taxes and inflation, while securing you a job and a home. It was a powerful message, and the polling shows that Thatcher’s victory came on the back of the switched allegiances of many low-income voters.
This populist rhetoric was, of course, the thin end of the neoliberal wedge. Thatcher knew that there was little public support for the most important elements of the neoliberal agenda, so she hid her commitments to privatisation and deregulation in the small print. In fact, even those policies that Thatcher did advertise — from going to battle with the unions to reducing the size of the state — were no more popular amongst voters in 1979 than they had been in 1974.27 The lesson of Thatcher’s period in opposition is the importance of extended crises in eroding support for the status quo. Even if they weren’t particularly keen on privatisation, people were sick to death of the constant disruption associated with industrial disputes, with the high levels of inflation and unemployment, and with the state’s apparent inability to deal with any of these issues. Many people voted for Thatcher in 1979 because she appeared to be one of the few politicians who was able to make sense of what was going on and provide workable solutions. Even if you didn’t like the Thatcherite agenda, after the Winter of Discontent you might have thought it was worth a try. Milton Friedman — one of the founders of the Mont Pelerin Society — knew this better than anyone. Looking back on the neoliberal victories of the 1980s, he wrote:
Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.
The neoliberals’ aim wasn’t simply to get Thatcher elected. It was to use the moment of crisis provided by the breakdown of the post-war consensus to institutionalise a new model for the British economy — one that increased the power of capital, just as the Keynesian consensus had institutionalised the power of labour. In this sense, the neoliberals had a view of change just as dialectical as that of any Marxist. The contradictions of social democracy would be exposed by a crisis that would bring the economy grinding to a halt. During such a crisis, people and politicians would search for ideas that might provide them with a way out. By building a narrative, developing an electoral coalition, and gaining control of the state, the neoliberals could use the crisis moment to build a new set of institutions that would give them and their backers the kind of lasting power that social democracy had denied them.
This is what the Thatcherite agenda was all about. Neoliberal economists, think tankers and financiers convinced Thatcher — who didn’t need much convincing to begin with — that free markets required a strong state.28 The only way to deal with the communist threat — at home and abroad — was to aggressively take on the power of the labour movement and release the dynamic forces of market competition that would promote efficiency, profitability, and social justice — as well as restoring the owners of capital to their rightful, unchallenged position as the most powerful group in society. Thatcher and her acolytes knew that they had five years to build such a model, but that once it had been built, it would be just as irreversible as the NHS.
The first thing they did was to deal with the only group capable of challenging their hegemony: the unions. Thatcher spent years, and a great deal of political capital, waging war with the UK’s labour movement. The next job was to empower capital in its place. Rather than seeking out an alliance with an ailing national capitalist class focused on mining and manufacturing, Thatcher knew that this required supporting the interests of the burgeoning international capitalist class. The natural allies of such a grouping could be found just down the road from Westminster, in the City of London.
On its own, this victory of capital over labour would not have lasted very long. What the neoliberals needed was an electoral alliance that would render their new system structurally stable. The clue to how this was created can be found in the electoral agenda of the 1979 Conservative government: a small state and property ownership. In place of the alliance between the national capitalist class and the labour movement that governed the post-war consensus, Thatcher would build an alliance between the international capitalist class centred in the City of London and middle earners in the south of England. She secured the support of middle earners by turning them into mini-capitalists through the extension of property ownership and the privatisation of their pension funds. In doing so, she transformed British politics and unleashed a new growth model that lasted over thirty-five years, before collapsing in the biggest financial crisis since 1929.
CHAPTER TWO
VULTURE CAPITALISM: THE FINANCIALISATION OF THE CORPORATION
We accept our responsibilities as a corporate citizen in community, national and world affairs; we serve our interests best when we serve the public interest... We acknowledge our obligation as a business institution to help improve the quality of the society we are part of. We want to be in the forefront of those companies which are working to make the world a better place. — Thomas Watson Jr, former chief executive of IBM, 1969.
When Thomas Watson Jr spoke these words, he was reflecting the mood of the times. This statement was typical of Watson, who believed that the best way to secure the long-term profitability of his business was to account for the interests of all of IBM’s stakeholders — workers, managers, shareholders, the state, and society at large.1 He repeatedly maintained that the guarantor of IBM’s success was its commitment to putting its workers first. Under Watson, IBM was responsible for making significant advances in machine learning, developing newer, faster computer processors, and even helping NASA with its space programme. Endicott, New York, a town of around thirteen thousand people in which IBM was headquartered, hosted eleven thousand IBM employees at the firm’s peak.2
But by 2012, IBM’s business model was shaped around quite a different set of goals. The key promise of its 2015 road map was to “[leverage] our strong cash generation to return value to shareholders by reducing shares outstanding”. Its measure of success: increasing earnings to $20 per share by 2015. Rather than innovating, IBM set out to achieve this mission through mergers and acquisitions.3 Between the end of Watson’s tenure and the present day, employment in Endicott fell from ten thousand to just seven hundred. In contrast, an investor who had bought a thousand IBM shares for $16,000 in 1980 would have seen those shares increase in value twenty-five times: their holding would now be worth almost half a million dollars.
Watson Jr would be unlikely to recognise the IBM that exists today. Gone are the concerns with stakeholders, or even workers. Instead, the corporate culture of one of the greatest technology companies in the world has been reshaped around a single imperative: maximising shareholder value. Describing the transformation of IBM in this way is not meant to imply that Thomas Watson Jr was a particularly saintly individual, or that today’s chief executives are particularly awful; nor that the old IBM model was perfect: clearly, the obsession with the “national interest” suggests a symbiotic relationship between multinational corporations and the US state that has not been a progressive development. But the change in business discourse — from an emphasis on stakeholder value, with workers at the core, to shareholder value, with workers coming last — reveals a deep change in the way corporations are run.
Today’s corporations have become thoroughly financialised, with some looking more like banks than productive enterprises. The financialisation of the non-financial corporation has involved a transfer of society’s resources from workers to shareholders. This transfer of power has resulted both from changes in the political and economic foundations of the global economy and from the rise of a new ideology, which holds that corporations’ sole aim should be to maximise profitability via increasing returns to shareholders. Both ideas and power relations have to change to create any lasting economic change — and the 1980s was a period of transition for both.
Firstly, rising capital mobility and the collapse of the post-war consensus increased the power of big institutional investors. Institutional investors — hedge funds, pension funds, and the like — control vast pools of money and are able to invest and divest huge sums at will.4 Much of this money was invested in corporations, allowing investors to use their power to control how these corporations were managed. Organisations were restructured to ensure that managers’ sole aim was to make as much money for their shareholders as possible. And the money that went to shareholders was money that wasn’t going to workers or being invested in future production.
Secondly, neoliberalism was sweeping the world by the 1980s, and with it the idea that the ruthless pursuit of profit was the only responsibility of any corporation.5 This translated into a simple imperative for corporate executives: maximise shareholder value.6 The valorisation of profit was cemented as managers’ pay packages were linked to share prices, ensuring that they would faithfully pursue the interests of their shareholders. As neoliberals gained control of many political parties, states actively began to encourage such behaviour. The ideology of shareholder value was institutionalised in a corporate code that reinforces the idea that the function of a business is to maximise its profits, consequences be damned.
The rise of the institutional investor and shareholder value ideology have had a lasting impact on corporate power in both the US and the UK.7 Most corporations are now structured around the interests of shareholders, with workers’ interests coming last, if they are considered at all.8 As this process has developed, a battle has emerged between different types of shareholder. Short-term shareholders, like hedge funds, have benefitted to a much greater extent than long-term shareholders, like pension funds.9 Some executives, intent on maintaining their corporations’ size and power, have sought to protect themselves from hostile takeovers and activist investors. Those that have succeeded have emerged as the most powerful monopolies in human history. Meanwhile, any form of resistance to the emergence of this model has been brutally broken. Where unions may once have acted in the interests of workers against managers acting in the interests of shareholders, the former have been eviscerated by states intent on ensuring that businesses are able to make as much money as possible. The corporate culture that has emerged from these changes would be unrecognisable to the CEOs of the 1950s.
Some have argued that this focus on the maximisation of shareholder value represents a perversion of an otherwise benign capitalist system, and that the triumph of the “takers” over the “makers” is a development that we should be trying to somehow reverse.10 But whilst national politics were important in determining how this ideology developed, these changes didn’t just happen: they were driven by much deeper shifts in the way the global economy works. It is hard to imagine how shareholders wouldn’t have used the collapse of Bretton Woods and the rise of financial globalisation to increase their power, even if the political struggles that took place within different states determined how much their power grew relative to other actors. Capitalism wasn’t distorted by the changes of the 1980s; it adapted — and it did so in the interests of the most powerful.
The balance of social forces in the UK ensured that it developed the financialised corporate culture par excellence. By unleashing the power of the City of London, and crushing everything that stood in its way, Thatcher helped to build a highly exploitative, extractive and unequal economic model in the UK: one which endures to this day.
The Big Bang
Once upon a time in the City of London, there lived a noble and chivalrous group of knights in a great big castle called the Stock Exchange.11 At least, that was the story told by John Redwood, then head of the Number 10 Policy Unit. Redwood’s 1984 speech — Tilting at Castles — described the City as it existed back then as an elaborate system of knights, barons, kings, and peasants. The knights — the brokers who worked on the London Stock Exchange — were honest, hard-working, and “competed with each other in high spirits”. The barons — institutional investors like pension funds — weren’t nearly as jolly as the Stock Exchange knights and were forced to send all their money to the Stock Exchange castle, where the real money was made. At the bottom of the pile were the peasants, who subserviently sent their savings to the institutional barons for them to invest. The system worked well for the knights, but not so well for everyone else. Redwood’s speech told the story of how the wise ruler went to the castle to ask the knights to lower their drawbridge and let just a few more people in.
This incredible piece of Orwellian doublespeak describes the fierce battle that took place between the government and traders on the London Stock Exchange over the course of the early 1980s, ending with the deregulation of the City. Before 1986, regulation that dated back decades restricted the kinds of activities that different economic actors and institutions could undertake. Fixed minimum commissions were imposed on certain kinds of trades, making these more expensive; trading took place on the slow, crowded, non-automated Exchange floor; and different types of investors were separated from one another, creating a rigid City hierarchy. This arcane regulation and strict separation between actors gave rise to a system that worked something like an old boys’ club. In this pre-Big-Bang world, Nick Shaxson reports, bankers could show their disapproval of one another by crossing the road and could determine a man’s creditworthiness by the strength of his handshake.12
In the wake of a legal battle between the government and traders, the Big Bang hit the doors of the London Stock Exchange like a battering ram. In a single day, many of the restrictions that maintained the City hierarchy were removed. Fixed commissions were abolished, the separation between those who traded stocks and those who advised investors was eliminated, rapid trading was moved away from the floor of the Exchange and foreign firms were invited into the City. These changes allowed more institutions to enter the stock market and facilitated a wave of mergers and acquisitions, many by foreign banks. By 1987, seventy-five of the three hundred member firms of the London Stock Exchange had been bought up by foreign rivals.13 Technological developments that allowed traders to buy and sell securities in the blink of an eye quickly followed the move of trading away from the Exchange room floor. In just one year, trade times were reduced from an average of ten minutes to ten seconds — a large reduction, but far off the trading times of today, which are measured in milliseconds.14 Trading volumes skyrocketed, reaching $7.4bn just one week after the Big Bang, compared to $4.5bn a week before.15 Many of the partners in the firms that had previously been at the centre of the City old boys’ network took their money and ran: some say that the Big Bang created 1,500 millionaires overnight.16
The Big Bang was helped along by the privatisation drives of the 1980s. In the same year, the UK government launched its famous “Tell Sid” advertising campaign, encouraging people to buy shares in the soon-to-be privatised British Gas. The adverts were centred on people encouraging one another — in the pub, at the shops, or on the street — to jump on the bandwagon before it was too late. The exchange always finished with the now-famous line: “If you see Sid, tell him!” As one commentator puts it, “You couldn’t pass a billboard, switch on the radio or glance at your junk mail and miss it”.17 Having started with British Aerospace in 1981, Associated British Ports in 1983 and Sealink in 1984, the government made British Gas by far its most ambitious privatisation yet — and one based on a questionable commercial case. The £32m advertising campaign worked and millions of ordinary Brits signed up to get their part of the nation’s family silver.18 At the time, it was the largest privatisation ever undertaken on the London Stock Exchange.19
Overall, Thatcher privatised more than forty state-owned enterprises. This represented a major challenge to the post-war status quo: in 1979, nationalised industries accounted for 10% of economic output and almost 16% of capital investment.20 By the time she left office, £60bn worth of UK assets had been sold off — often on the cheap.21 Output accounted for by nationalised industries fell to 3% and investment to 5%.22 Employment in nationalised industries fell from almost 10% of total employment to just 2%.23 According to one government minister, "[w]hen we came into office, there were about three million people who owned shares in Britain. By the end of the Thatcher years, there were twelve to fifteen million shareholders".24 Millions of people were effectively given free money when the state sold off national assets below their value — shockingly, many of them ended up voting Conservative.
Over the longer term, Thatcher's dreams of boosting individual share ownership proved over-optimistic. She and Redwood claimed that financial liberalisation would allow the peasants — ordinary savers — to get a chunk of the pie by earning money on the stock market. But instead, people ended up handing their savings over to the barons — the institutional investors previously prevented from directly engaging in trades themselves — who were able to extract large fees for managing other people's money.25 One can think of institutional investors as financial institutions sitting on huge piles of cash that they invest to make the largest possible return. These cash piles can come from ordinary people's savings, as with pension funds, the savings of the wealthy, as with hedge funds, or even from states, as with sovereign wealth funds. Institutional investors can buy all sorts of financial securities — from bonds, to equities, to derivatives — as well as real assets like property.
In 1963, individuals owned about 55% of publicly listed shares, whilst pension and insurance funds owned 6% and 10% respectively.26 By 1997, individual shareholdings had fallen to 17% of the value of total equity, whilst pension and insurance funds had risen to 22% and 23% respectively. Many international institutional investors also bought up UK equities, meaning foreign ownership of UK corporations also increased. Meanwhile, individual investments were skewed towards the wealthy — some of whom set up hedge funds to manage their own, their close friends’, and their family’s money.
This was all part of the Conservative plan for "pension fund capitalism".27 In 1988, Thatcher launched private, personal pensions, allowing individuals to save for retirement without enrolling in corporate schemes, which had themselves already amassed vast pools of capital thanks to previous reforms. Initially, this ended in disaster as pensions advisors took advantage of savers' inexperience to sell them risky financial products. But eventually, private pension pots and other savings instruments became a central part of the British financial landscape. The creation of private pension pots would have two linked and propitious effects for the Conservative government. On the one hand, it helped to create a class of "mini-capitalists" with an incentive to support measures that would boost returns in financial markets. Thatcher's acute grasp of political economy allowed her to build an electoral coalition with a strong material interest in supporting her policies. On the other hand, the move towards pension fund capitalism increased the pool of savings available for financial institutions to plough into whatever investments would deliver the highest returns.
The combination of private pension pots and large, corporate funds gave private investors a great deal of capital to play with. It is a fairly well-established law of investing that the more capital you have at your disposal, the higher your returns, not least because if a single investor puts enough money into a single security, that investment will itself push up the price of the security. When asset managers got their hands on workers' pension funds, they invested this capital in global financial markets, making huge amounts of money in the process. As one commentator puts it, "'social security capital' is now as important as other sources of capital… it is a key element in fuelling the expansion of financial markets".28 By 1995, one estimate put the global assets of pension funds at almost $12trn, at least £600bn of which came from UK savers, making the UK's the largest pension pool in the EU.29
It is not a coincidence that corporations began to be governed according to the logic of maximising shareholder value just as institutional investors from around the world emerged as some of the most powerful actors in the City. Historically, these pools of capital have been important: when they are large, those who control them are able to wield immense amounts of power by determining who gets what.30 The mass channelling of people's savings into stock markets via pension and insurance funds after the end of Bretton Woods and the financial deregulation of the 1980s allowed institutional investors and wealthy individuals from around the world to pour money into the UK's stock markets, unencumbered by capital controls or restrictions on foreign trading. Hyman Minsky argued that we now live in an age of "money manager capitalism", in which these pools of capital are some of the most important entities in determining economic activity.31
In this sense, money manager capitalism doesn’t just affect financial markets. By influencing the allocation of capital across the economy, it has affected the behaviour of almost every other economic actor — most clearly, it has transformed the nature of the non-financial corporation.32 Institutional investors’ primary goal is to maximise their returns as this is how they earn their fees and commissions. These pressures have been passed on to corporations via the stock market: with equities representing a significant chunk of the assets held by money managers, the pressure on corporations to meet shareholder needs for immediate returns increased.33 In some cases, rather than being responsible to a board of directors and a few disorganised shareholders, corporations have been held to ransom by “activist investors” demanding that their capital is used in the most efficient way possible. This change in corporate governance has also been reinforced and embedded by the emergence of a new ideology: shareholder value.
Together, the increasing power of investors and the emergence of an ideology to support this power have led to the financialisation of the non-financial corporation: businesses are increasingly being used as piggy banks for rich shareholders. This, according to Jack Welch, the former CEO of General Electric, makes shareholder value "the dumbest idea in the world".34 But like many dumb ideas that enrich the powerful, shareholder value took off in the 1980s — and nowhere more so than in the City of London.
Corporate Raiders, Hostile Takeovers, and Activist Investors
Lord Hanson — aka “Lord Moneybags” — is famous for many things.35 He was engaged to Audrey Hepburn, had a fling with Joan Collins, and also happens to be one of the UK’s most notorious corporate raiders. Although he made his money in the new economy, Hanson didn’t exactly come from humble beginnings. Born into a family that made its money during the industrial revolution, he built multiple successful business ventures on the back of his family’s wealth before teaming up with Lord Gordon White to start Hanson Trust in 1964. At its height, Hanson Trust was worth £11bn. Over the course of the 1980s, its share price outperformed the rest of the FTSE100 by a staggering 370%. He was named by Margaret Thatcher as one of the UK’s premier businessmen and, completely unrelatedly, he donated millions of pounds to the Conservatives over the course of his business career. The root of James Hanson’s success was his commitment to the religion of shareholder value. Thatcher admired Hanson not simply because of his political donations, but because she saw Hanson Trust as the future of the new economy, and the close relationship between the two can tell us a lot about what Thatcher was trying to do when she deregulated the City.
Hanson Trust was not built on the back of a great new idea by a brilliant entrepreneur, or some new innovation that promised to revolutionise its industry forever. Its sole aim was to find and buy up "underperforming assets" and make them profitable. Throughout the 1970s, the conglomerate loaded up on debt to buy shares in several large companies deemed "underperforming", before selling off assets and cutting the payroll to disgorge cash from these companies, cash that was then used to pay back bondholders and generate gains for shareholders. Hanson Trust quickly gained a reputation as an "asset stripper" before the term was even in wide use.
But Hanson truly made his reputation in the same year as the Big Bang itself. In 1986, Hanson Trust purchased Imperial Tobacco for £2.5bn, accounting for 15% of the value of total mergers and acquisitions activity in that year alone. The Trust quickly sold off £2.3bn worth of Imperial’s assets and distributed the money to bondholders and shareholders. Hanson had aimed to extract assets from the company’s pension fund, but the trustees had managed to close the fund the day before the takeover went through. So instead, he sold off most of Imperial’s subsidiaries — from food producers, to brewers, to a variety of tobacco producers. He was left with a business that made a profit margin of 50%. And this takeover was only one of the more extreme examples of Hanson’s attitude towards acquisitions. Hanson Trust acquired dozens of undervalued companies throughout the Eighties and Nineties, claiming always to put shareholders first, customers second, and employees last. When James Hanson came for your employer, you knew what was coming next.
Initially, raiders like Hanson were derided as extractive parasites on productive economic activity. Hanson, widely reviled by the British media, was compared to a "dealer who bought a load of junk, tarted it up and sold it on as antiques". In a more ambiguous assessment, the Economist termed him the king of the corporate raiders. When he attempted to take over one of Britain's most famous companies — the chemicals giant ICI — in 1991, he was faced with "the sort of moral indignation that the British usually reserve for a Tory cabinet minister caught in bed with his secretary".36 ICI was at the time one of the leading chemical firms in the world, built on strong past investment, particularly in research and development. There was widespread concern that a Hanson takeover would leave ICI stripped to the bone, focused on increasing current cash flow and distributing it to shareholders rather than investing in the long-term future of the business. Faced with significant political opposition, the bid failed. But Hanson's approach eventually became common business practice.
By the 1990s it was no longer controversial to argue that, when corporations maximised their profits, the economy worked better for everyone.
These arguments ran contrary to the received wisdom in management theory, which held that businesses had responsibilities to a wide variety of stakeholders — workers, consumers, and governments for example. But with the rise of neoliberalism, the argument that — in the words of Milton Friedman — “the social responsibility of business is to increase its profits” gained traction.37 This view assumes that resources are scarce, so when companies use their resources in unproductive ways there are fewer to go around for everyone else. In this sense, doing anything other than maximising profits is wasteful and inefficient.
From here, it is a short leap to arguing that the singular purpose of any corporation should be to maximise shareholder value — with the share price used as the proxy for profitability. Because neoclassical economic theory assumes that equity markets are efficient, it also assumes that current stock prices are an accurate reflection of the long-term profitability of a company. Investors will base their investment decisions on the amount of profit they expect the enterprise to make in the future, and how much of that profit they expect the firm to distribute to shareholders. The argument for shareholder value therefore proceeded from "a business's sole aim is to maximise profits", through "the current share price accurately reflects future profits", to "boosting the current share price is the best way to maximise long-term value".
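One way to see what this assumption amounts to is to write it down as the textbook present-value relation (the formula below is a standard expression, not something drawn from the original argument): an efficient market prices a share today at the discounted value of whatever shareholders expect the firm to pay out in future,

$$P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[D_t]}{(1+r)^t},$$

where $P_0$ is the current share price, $\mathbb{E}[D_t]$ is the dividend or buyback shareholders expect to receive in year $t$, and $r$ is the return they require for holding the share. On this view, anything that genuinely raises expected future payouts raises the price today, so maximising the share price and maximising long-run profitability appear to be one and the same thing. The following paragraphs argue that this equivalence breaks down in practice.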
But this nice, neat story is based on some fundamental misconceptions about the way financial markets work — not to mention some questionable assumptions about human behaviour. First and foremost, a firm's current share price doesn't always reflect its real long-term value. Keynes was one of the first to point out that the prices of different shares on stock markets are largely determined by a "beauty contest": in other words, without perfect information about the inner workings of a firm and without certain knowledge of its future profitability, investors will put their money into the shares they expect other investors to find most attractive.38 One can think of beautiful shares as expensive football players: when a football team buys a new player, it cannot be certain that the player will be worth the expense — it will judge the price based on past performance and trends in the rest of the market.
In the same way, an investor can wade into a booming market, see a share that has been performing well — say, Carillion PLC — and purchase it expecting its value to carry on rising, even if the underlying business model isn't particularly strong. This creates a self-reinforcing cycle in which the most "beautiful" shares receive more investment, pushing up their price and vindicating investors' decisions to buy them in the first place. This dynamic can create bubbles: when everyone piles into certain stocks because everyone else seems to be making lots of money from them, the price of those stocks comes to reflect people's expectations about profits, rather than profits themselves.
But taking the neoliberal argument at face value is to miss the point. The reorganisation of the economy that took place in the 1980s had little to do with making the economy work better, and everything to do with changing who the economy worked for. Shareholder value became so dominant precisely because it benefitted those with the power. As a result, it quickly colonised management theory and practice, transforming corporate governance by changing managers’ incentives to ensure that they acted as reliable functionaries for the owners of capital.
Contrary to the arguments of mainstream economists, this political reorganisation of the firm has made firms less efficient when it comes to their use of society's scarce resources. In the late 1970s, professors Jensen and Meckling published an article arguing that there existed a "principal-agent problem" between the individuals who owned a corporation and those who managed it.39 Those who ran companies — the managers, or agents in this context — had every incentive to maximise their own pay packages and engage in "empire building" to increase their power, even if this wasn't in the long-term interest of the people who owned the companies — the shareholders, or principals. This created a conflict of interest for managers, who were technically employed by shareholders to run successful, profitable companies, but who were, according to Jensen and Meckling, likely to use their positions to maximise their own wealth and power. According to this view of the world, corrupt, bureaucratic managers were wasting money, reducing businesses' profits and therefore shrinking the size of the economic pie for everyone.
The way to solve this, Jensen later argued, was to align the interests of managers with those of shareholders. The immensely popular article he co-wrote — "CEO Incentives: It's Not How Much You Pay, But How" — argued that, in paying CEOs a salary that didn't reflect the impact they had on the company's share price, directors were encouraging them to behave like bureaucrats. If instead CEOs were remunerated on the basis of the share price, they would have a greater incentive to act in the best interests of shareholders, and therefore, so the argument went, in the best interests of society as a whole. Managers had to be made to act like business owners — ruthlessly pursuing profit at every turn.
Adherence to the flawed ideology of shareholder value has created a set of deep-seated problems within British capitalism.40 As you would expect, the ideology of "shareholder value" encouraged companies to distribute their profits to shareholders rather than retaining them to reward workers or to fund investment, curtailing long-term profitability in order to deliver a short-term boost to the share price. Failing to retain and properly remunerate workers also erodes trust between workers and their employers, which can in turn damage productivity.
William Lazonick argued that the rise of shareholder value ideology has led to a transformation in the philosophy of corporate governance — the way in which corporations are run — from “retain and invest” to “downsize and distribute”. In other words, the rise of shareholder value has become a mechanism for redistributing the profits of business away from workers and towards corporate executives and current shareholders. This has, in the words of one commentator, led to “rampant short-termism, excessive share buybacks to the neglect of investment, skyrocketing C-suite compensation and misallocation of resources in the economy”.41 Elsewhere the Economist recently argued that shareholder value has become “a license for bad conduct, including skimping on investment, exorbitant pay, high leverage, silly takeovers, accounting shenanigans and a craze for share buy-backs”.42
In the UK, these trends are clear. The proportion of corporate profits (measured as discretionary cash flow) returned to shareholders increased from just over 25% in 1987 to almost 50% in 2014.43 As well as distributing profits to shareholders, corporations can also increase share prices by buying up their own shares. Bank of England data on share buybacks between 2003 and 2015 show that in almost every year companies bought back more of their own shares than they issued in new equity.44 Another way to deliver a quick boost to a company's share price is to expand rapidly by buying up another company — this was the strategy preferred by corporate raiders like Lord Hanson. Between 1998 and 2005, UK mergers and acquisitions (M&A) activity was worth around 22% of GDP — double that of the US, and more than double that of Germany and France.45 With shareholders placed firmly at the centre of corporate decision-making, and managers remunerated on the basis of share prices, long-term investment has fallen. UK companies' investment in fixed assets fell from around 70% of their disposable incomes in 1987 to 40% in 2008.46
Those firms that pursued the downsize and distribute model often ended up taking on debt to do so.47 In what came to be known as the "leveraged buyout", activist investors would issue "junk bonds" — expensive, high-risk debt — to buy out existing shareholders, before selling off chunks of the corporation and using the proceeds to repay bondholders. This makes the hierarchy of finance capitalism obvious — at the top are creditors, followed by shareholders, with workers at the very bottom. Firms came to operate according to the logic of finance-led growth: distributing earnings to shareholders and taking on debt to finance investment and new takeovers. All in all, businesses' stock of outstanding debt grew from 25% of GDP in 1979 to 101% by 2008.48 As a ratio of profits, this means that UK corporations owe 6.5 times more in debt than they earn in profits each year, making them some of the most indebted corporations in the global North.49
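As a rough back-of-envelope check on how those last two figures fit together (the 101% of GDP and the 6.5:1 ratio are the figures cited above; the implied profit share is my own inference rather than a number the author gives):

$$\frac{\text{debt}}{\text{profits}} = \frac{\text{debt}/\text{GDP}}{\text{profits}/\text{GDP}} \approx \frac{1.01}{0.155} \approx 6.5,$$

that is, a corporate debt stock worth 101% of GDP is consistent with owing 6.5 times annual profits if corporate profits run at roughly 15–16% of GDP.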
As well as investing less and taking on more debt, companies have also been squeezing workers' pay and making their employment conditions more precarious. The ratio of CEO pay to the pay of the average worker increased from 20:1 in the 1980s to 149:1 by 2014.50 This has driven up income inequality: the UK's Gini coefficient — a measure of income inequality in which values closer to zero indicate a more equal country and values closer to one a more unequal one — rose from 0.26 at the start of the 1980s to 0.34 by the start of the 1990s. In fact, there has been a secular decoupling of productivity (the value of what workers produce) from wages. The total income of an economy can be divided between that which accrues to workers in the form of wages and that which accrues to owners in the form of profits; modelling from the TUC suggests that the wage share of national income has fallen from a peak of 64% in the mid-1970s to around 54% in 2007.51
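For readers who want the formal versions of the two measures used in this paragraph, the standard definitions (these are textbook definitions rather than part of the original text) are

$$G = 1 - 2\int_0^1 L(p)\,dp \qquad \text{and} \qquad \text{wage share} = \frac{\text{compensation of employees}}{\text{national income}},$$

where $L(p)$ is the Lorenz curve: the share of total income received by the poorest fraction $p$ of the population. A Gini coefficient of 0 means everyone receives the same income; a value of 1 means a single person receives all of it.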
Within the profit share of national income, rising interest payments have led to an increase in what has been termed the "rentier share" of national income. Economic rents are incomes derived from the ownership of a scarce resource, over and above what would be necessary to reproduce it. When a landlord increases a tenant's rent without improving the property, he is simply extracting more income from the tenant without producing anything new — in this sense, economic rents are unproductive transfers from one group to another based on an asymmetry of power. The power to extract economic rents generally depends upon the monopoly ownership of a particular factor of production. Property rents paid to landlords, over and above what is necessary to maintain the property, are economic rents derived from the landlord's monopoly ownership of a property in a particular location. Banks are often able to charge interest over and above the level necessary to compensate them for the risk they take in lending because they have monopolistic — or, more often, oligopolistic — control over money lending. Monopolies can extract monopoly rents by overcharging consumers, and firms can generate commodity rents from their control over a particular resource, like oil or diamonds. Perhaps the most common sources of economic rents in financialised economies are property rents and financial rents. Those on the receiving end of economic rents are known as "rentiers". Keynes famously called for the "euthanasia of the rentier", defining a rentier as a "functionless investor" who exploits the "scarcity-value" of capital to generate income.
In 2005, Gerald Epstein made the first attempt to measure the rentier share of national income in OECD economies. Epstein opted for a fairly narrow definition of financial rents: "the income received by the owners of financial firms, plus the returns to holders of financial assets generally". He was building on Kalecki's definition of financial rents, which captures the returns financiers are able to generate from their control over lending and investment. Epstein showed that the rentier share in the UK had risen between 1970 and 1990, from 5% of GDP to nearly 15%. Similar trends are evident in the US, where the rentier share increased from around 20% to over 40% of GDP over the same period, and in most other advanced economies. So, whilst the profit share as a whole was increasing, the amount accruing to rentiers within it was also rising. This was largely due to rising interest payments following the dramatic increase in corporate and household debt during the 1980s. The reason Keynes called for the "euthanasia of the rentier" is that rental payments flowing up to the owners of capital act as a drain on demand. Interest paid by businesses represents capital that can't be used for investment. Economic rents also accrue to the already-wealthy, who are less likely to consume their extra income. This was one of the major drivers of the rising inequality and financial instability evident in the inter-war years. Since the 1980s, the rising rentier share has once again begun to act as a drain on productive economic activity.
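Schematically, and using my own notation rather than Epstein's exact construction, the measure described above can be written as

$$\text{rentier share} = \frac{\text{profits of financial firms} \; + \; \text{interest, dividends and other returns to holders of financial assets}}{\text{national income}},$$

so the share rises either when financial firms become more profitable or when the flow of interest and dividends to asset holders grows faster than the economy as a whole.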
Even as these problems became obvious over the course of the 1980s, there was no attempt in the UK to constrain shareholder value ideology, perhaps because it was benefitting some of the wealthiest and most powerful people in society. Realising they had opened Pandora's box, regulators in the US tried to close it again by outlawing some corporate raiding strategies. American firms, meanwhile, developed innovative ways to protect themselves from hostile takeovers. A firm could adopt a "poison pill", allowing existing shareholders to buy extra shares at a discount and so dilute the stake of a hostile bidder before it could gain an overall majority; it could establish shares with different voting rights; or it could seek out a non-hostile bidder — a White Knight — to buy up the shares being targeted by the hostile party. But in the UK, even the 1987 stock market crash did nothing to dampen the corporate raiding culture. In fact, the shareholder value ideology was positively encouraged by politicians like Thatcher. The infamous City Code created one of the most permissive takeover regimes in the world.52 It set out that all shareholders must be treated equally, preventing the use of many of the defences outlined above, and it prevented management from standing in the way of a takeover agreed by shareholders. In other words, the City Code institutionalised the power of corporate raiders, activist investors, and other short-term shareholders, who would come to act as the enforcers of shareholder value ideology.
From Thatcher’s perspective, and from that of her friends in the Mont Pelerin Society, corporate raiders like Hanson were heroes, charging into corporate fortresses and taking on the vested interests of the managers who were hoarding the company’s capital for themselves rather than investing it in the interests of shareholders. Thatcher’s Big Bang and the development of the City Code sought to make it as easy as possible for corporate raiders to “shake-up” big, incumbent firms like Imperial Tobacco. Once the shake-up was through, corporate raiders like Hanson were no longer needed. The Economist wrote in Hanson’s obituary that his company’s focus on the maximisation of shareholder value had become “standard business practice”.
From Downsize and Distribute to Merge and Monopolise
The pursuit of shareholder value made many companies profitable in the short term, but over the long term low rates of investment, high rates of debt, and a declining wage share should have reduced profits. If companies aren't investing in new assets, like factories or technology, then they won't be able to take advantage of rising demand for their goods down the line. Taking on debt today and failing to use it for productive investment comes at the expense of profits tomorrow. And paying workers relatively less across the board reduces overall demand for goods and services. As inequality rises, the demand deficit widens, because those on lower incomes spend a higher proportion of their incomes on goods and services, whilst the wealthier tend to save more. Low demand is, in turn, likely to make businesses invest less, decreasing future profits, and therefore wages and employment. But instead of this low-investment, low-wage, low-demand doom-loop, we've seen corporate profits rising on average. What's been going on?
As Jack Welch has pointed out, shareholder value — as interpreted by the corporate raiders of the 1980s — really is the dumbest idea in the world. Many of the companies that cut investment, loaded up on debt, and dished out money to shareholders didn't last very long. Instead, they have been bought up by bigger corporations in the wave of M&A activity that has taken place since the 1980s. The most successful adherents of shareholder value haven't been the downsizers and distributors, but a small number of huge firms that merge and monopolise. Corporations have learnt to adapt to the pressures of finance-led growth by building monopolies, immune from the pressures of competition, activist investors, and even tax and regulation. In fact, many of them have grown so large and make so much money that they are effectively able to act like banks — rather than loading up on debt, they've been lending to other companies.53 This is perhaps the most important hangover of the shareholder value ideology and the corporate raiding culture it entailed: a massive increase in the number of monopolies and oligopolies.54
The macroeconomic link between investment and profits appears to have been severed because a few large corporations are dominating the global economy and maximising their profitability by acting as monopolies and failing to pay tax. Clearly, these corporations have not adopted the “downsize and distribute” model of growth — rather, these firms can be seen as having adopted a model of “merge and monopolise”. Monopolies are highly profitable because they are able to benefit from “monopoly rents” — i.e. they are able to charge consumers and other businesses more than they would in a competitive setting. This increases monopolies’ profits at the expense of consumers and other businesses. What’s more, these corporate behemoths tend not to recycle their earnings back into productive investment. Instead, they adopt two related strategies — neither of which is helpful for economic growth. Firstly, they buy other corporations to consolidate their monopoly positions and benefit from the past investment of these firms. Secondly, they invest the profits they generate from their monopolisation of key markets into financial markets — in other words, they act like financiers themselves.
The first trend can be measured by looking at corporate mergers and acquisitions (M&A) activity over the past several decades. Global M&A activity broke a record in the first half of 2018, when deal volumes increased 65% on the previous year and came in at the highest level since records began.55 This comes off the back of forty years of increasing M&A activity — according to one industry body the value of M&A activity doubled between 1985 and 1989 and increased fivefold between 1989 and 1999. As more “merge and monopolise” activity takes place, the monopolies themselves become ever more powerful. Gaining a greater market share means increasing profitability, which facilitates even greater M&A activity, creating a self-reinforcing cycle that has led to the emergence of the biggest global monopolies in history.
Second, these firms are investing in financial markets. Monopolisation impacts investment in fixed capital because firms find it more profitable to restrict production and invest the proceeds in financial markets.56 They distribute large sums to shareholders, but even that doesn’t exhaust their cash piles. Instead, they reinvest their profits into other assets — making these firms similar to the institutional investors that have been so important to the development of financialisation.57 This trend can be measured by looking at the extent to which corporations’ holdings of financial assets have increased since the 1980s. Financial assets include assets such as loans, equities, and bonds — but they also include bank deposits and internal cash piles. Today, the financial assets of British non-financial corporations are 1.2 times the size of total GDP.58 In the US, where most of the global monopolies are based, the trend is even starker.
This pattern is reflected across OECD countries, but the UK is unusual insofar as its corporations are more likely to hold debt securities and bank deposits than other European corporations.59 In this sense, many UK-based corporations are acting a lot like hedge funds or investment banks — they are lending their capital to other corporations or banks in the hope of increasing their profits. UK corporations have actually become net savers since 2002 (think of saving as anything that isn’t spending — so financial investments and deposits both count as “saving”). Huge piles of corporate capital have now joined the cash piles of the big institutional investors to play a significant role in shaping the allocation of resources across society.
The result of both models — “downsize and distribute” and “merge and monopolise” — is the same: more money stuck at the top. By prioritising paying shareholders over remunerating workers or investing in long-term production, the structure and governance of today’s firms helps to increase wealth and income inequality. By hoarding cash and investing it in financial markets, failing to pay tax, overcharging consumers for services, and mistreating their workers, global monopolies are launching a concerted attack on society itself. As these companies grow, they become more powerful than the nation states which are supposed to regulate them.
But these changes did not take place simply because Thatcher deregulated the City, or because firms suddenly made a collective decision that it would be in their interests to maximise shareholder value. The changing corporate culture in the UK reflects broader changes in the global economy. In this sense, whilst it has created a number of severe problems with the functioning of the economy, it doesn’t really make sense to think of the rise of shareholder value as a corruption of a purer form of capitalist accumulation. The reason corporations are now run in the interests of shareholders rather than workers is that shareholder power increased dramatically in the 1980s relative to that of workers, and this power has been consolidated as it has been embedded in new sets of institutions and new ideologies.
But the rise of shareholder power on its own only explains half of the story; shareholders gained power at the expense of workers, who were previously far more central to corporate governance than they are now. Attempting to redistribute power from shareholders to workers would have met with fierce resistance in any company with a strong union. The necessary correlate of the promotion of the shareholder was the attack on the worker, and the best way to attack workers in the 1970s and 1980s was to attack their unions.