INTRODUCTION
The Counterfeiters, an award-winning German film set in 1940s Europe, opens with the concentration camp survivor Salomon Sorowitsch, played by the Austrian actor Karl Markovics, sitting fully clothed on the beach holding a suitcase full of dollars. The war has just ended, and he intends to put that currency, of dubious provenance, to work on the tables of Monte Carlo. That it is dollars rather than French francs is essential to the authenticity of the scene. In post–World War II Europe it was the dollar, the currency of the only major economy still standing, that people in all countries wanted. Dollars were the only plausible currency that a Holocaust survivor might carry into a casino in 1945.
Fast forward now 50-odd years. In City of Ghosts, a 2002 thriller set in contemporary Cambodia, the hero, a crooked insurance salesman played by Matt Dillon, uses a suitcase full of dollars to ransom his partner and mentor, played by James Caan, who has been kidnapped by business associates. More than half a century of cinematic time has passed and the location is now developing Asia rather than Europe, but the suitcase still contains dollars, not Japanese yen or Chinese renminbi. That any self-respecting kidnapper would expect the ransom to be paid in dollars is so obvious as to go unstated.
The suitcase full of dollars is by now a standard trope of mystery novels and Hollywood screenplays. But this artistic convention reflects a common truth. For more than half a century the dollar has been the world’s monetary lingua franca. When a senator from the Republic of Kalmykia is caught shaking down a Russian airline, he is apprehended with a suitcase containing $300,000 in marked U.S. bills. When Somali pirates ransom a ship, they demand that the ransom money be parachuted to them in dollars. As the Wall Street Journal has put it, “In the black market, the dollar still rules.”1 The fact that nearly three-quarters of all $100 bills circulate outside the United States attests to the dollar’s dominance of this dubious realm.
But what is true of illicit transactions is true equally of legitimate business. The dollar remains far and away the most important currency for invoicing and settling international transactions, including even imports and exports that do not touch U.S. shores. South Korea and Thailand set the prices of more than 80 percent of their trade in dollars despite the fact that only 20 percent of their exports go to American buyers. Fully 70 percent of Australia’s exports are invoiced in dollars despite the fact that fewer than 6 percent are destined for the United States. The principal commodity exchanges quote prices in dollars. Oil is priced in dollars. The dollar is used in 85 percent of all foreign exchange transactions worldwide. It accounts for nearly half of the global stock of international debt securities.2 It is the form in which central banks hold more than 60 percent of their foreign currency reserves.
This situation is more than a bit peculiar. It made sense after World War II when the United States accounted for more than half of the combined economic output of the Great Powers.3 America being far and away the largest importer and main source of trade credit, it made sense for imports and exports to be denominated in dollars. Since the United States was the leading source of foreign capital, it made sense that international financial business was transacted in dollars. And with these same considerations encouraging central banks to stabilize their currencies against the dollar, it made sense that they should hold dollars in reserve in case of a problem in foreign exchange markets.
But what made sense then makes less sense now, when both China and Germany export more than the United States. Today the U.S. share of global exports is only 13 percent. The United States is the source of less than 20 percent of foreign direct investment, down from nearly 85 percent between 1945 and 1980.4
These two changes are both manifestations of the same fact: the United States is less dominant economically than 50 years ago. This fact reflects the progress of other economies, first Europe, then Japan, and most recently emerging markets like China and India, in closing the per capita income gap. Economists refer to this narrowing as catch-up or convergence. It is entirely natural insofar as there is no intrinsic reason that U.S. incomes and levels of labor productivity should be multiples of those in the rest of the world. This process of catch-up is one of the great achievements of the late twentieth and early twenty-first centuries in that it has begun lifting out of poverty the majority of the world’s population. But it also means that the United States accounts for a smaller share of international transactions. And this fact creates an uneasy tension with the peculiar dominance of the dollar.
This dominance is something from which we Americans derive considerable benefit. An American tourist in New Delhi who can pay his cab driver in dollars is spared the inconvenience of having to change money at his hotel. The widespread international use of the dollar is similarly an advantage for American banks and firms. A German company exporting machine tools to China and receiving payment in dollars incurs the additional cost of converting those dollars into euros, the currency it uses to pay its workers and purchase its materials. Not so a U.S. exporter of machine tools. Unlike firms in other countries, the U.S. producer receives payment in the same currency, dollars, that it uses to pay its workers, suppliers, and shareholders.
Similarly, a Swiss bank accepting deposits in francs but making foreign loans in dollars, since that’s what its customers want, has to worry about the risk to its profits if the exchange rate moves.5 That risk can be managed, but doing so is an added cost of business. Our Swiss bank can protect itself by buying a forward contract that converts the receipts on its dollar loan into francs when the loan matures, at a rate agreed when the loan is made. But that additional transaction has an additional cost. American banks that make foreign loans in dollars as well as taking deposits in dollars are spared the expense of having to hedge their foreign currency positions in this way.
A more controversial benefit of the dollar’s international-currency status is the real resources that other countries provide the United States in order to obtain our dollars. It costs only a few cents for the Bureau of Engraving and Printing to produce a $100 bill, but other countries have to pony up $100 of actual goods and services in order to obtain one. (That difference between what it costs the government to print the note and a foreigner to procure it is known as “seignorage” after the right of the medieval lord, or seigneur, to coin money and keep for himself some of the precious metal from which it was made.) About $500 billion of U.S. currency circulates outside the United States, for which foreigners have had to provide the United States with $500 billion of actual goods and services.6
Even more important is that foreign firms and banks hold not just U.S. currency but bills and bonds that are convenient for international transactions and at the same time have the attraction of bearing interest. Foreign central banks hold close to $5 trillion of the bonds of the U.S. treasury and quasi-governmental agencies like Fannie Mae and Freddie Mac. They add to them year after year.
And insofar as foreign banks and firms value the convenience of dollar securities, they are willing to pay more to obtain them. Equivalently, the interest rate they require to hold them is less. This effect is substantial: the interest that the United States must pay on its foreign liabilities is two to three percentage points less than the rate of return on its foreign investments.7 The U.S. can run an external deficit in the amount of this difference, importing more than it exports and consuming more than it produces year after year without becoming more indebted to the rest of the world. Or it can scoop up foreign companies in that amount as the result of the dollar’s singular status as the world’s currency.
This has long been a sore point for foreigners, who see themselves as supporting American living standards and subsidizing American multinationals through the operation of this asymmetric financial system. Charles de Gaulle made the issue a cause célèbre in a series of presidential press conferences in the 1960s. His finance minister, Valéry Giscard d’Estaing, referred to it as America’s “exorbitant privilege.”
Not that this high-flown rhetoric led to changes in the actual existing system. In international finance as in politics, incumbency is an advantage. With other countries doing the bulk of their transactions in dollars, it was impossible for any individual country, even one as critical of America’s exorbitant privilege as France, to move away from the currency. And what was true in the 1960s remained true for the balance of the twentieth century.
But today, in the wake of the most serious financial crisis in 80 years, a crisis born and bred in the United States, there is again widespread criticism of America’s exorbitant privilege. Other countries question whether the United States should have been permitted to run current account deficits approaching 6 percent of GDP in the run-up to the crisis. Emerging markets complain that as their economies expanded and their central banks felt compelled to augment their dollar reserves, they were obliged to provide cheap finance for the U.S. external deficit, like it or not. With cheap foreign finance keeping U.S. interest rates low and enabling American households to live beyond their means, poor households in the developing world ended up subsidizing rich ones in the United States. The cheap finance that other countries provided the U.S. in order to obtain the dollars needed to back an expanding volume of international transactions underwrote the practices that culminated in the crisis. The United States lit the fire, but foreigners were forced by the perverse structure of the system to provide the fuel.
If this was not injustice enough, there is the fact that America’s international financial position was actually strengthened by the crisis. In the course of 2007 the dollar weakened by about 8 percent on the foreign exchange market.8 But since our debts are denominated in our own currency, there was no impact on their dollar value. In contrast, our foreign investments, whether in bonds or factories, became more valuable as the dollar fell.9 The interest and dividends they threw off were worth more when converted back into dollars.
The dollar’s depreciation thereby improved the U.S. external position by almost $450 billion.10 This largely offset the increase in U.S. indebtedness to the rest of the world that would have otherwise resulted from our $660 billion current account deficit. It was almost enough to keep our debts to other countries stable, despite our consuming 6 percent more than we produced. Then in 2008, in the throes of the most serious financial crisis in 80 years, the federal government was able to borrow vast sums at low interest rates because foreigners figured that the dollar was the safest currency to be in at a time of great turmoil. And again in the spring of 2010, when financial volatility spiked, investors fled into the most liquid market, that for U.S. treasury bonds, pushing down the cost of borrowing for the U.S. government and, along with it, the mortgage interest rates available to American households. This is what exorbitant privilege is all about.
But now, as a result of the financial mismanagement that spawned the crisis and growing dissatisfaction with the operation of the international monetary system, the dollar’s singular status is in doubt. The U.S. government has not been a worthy steward of an international currency, its critics complain. It looked the other way while the private sector produced the mother of all financial crises. It ran enormous budget deficits and incurred a gigantic debt. Foreigners have lost faith in the almighty dollar. They are moving away from it as a unit in which to invoice and settle trade, denominate commodity prices, and conduct international financial transactions. The dollar is at risk of losing its exorbitant privilege to the euro, the renminbi, or the bookkeeping claims issued by the International Monetary Fund known as Special Drawing Rights (SDRs).
Or so it is said. It is said by no less an authority than Sarah Palin, who warned on her Facebook page in October 2009 that talk that the Gulf countries might shift to pricing oil in a basket of currencies “weakens the dollar and renews fears about its continued viability as an international reserve currency.”11
That this issue has flashed across the radar screens of politicians who are not exactly renowned for their financial expertise reflects the belief that larger things are at stake. It is thought that widespread international use of a currency confers on its issuer geopolitical and strategic leverage. Because the country’s financial position is stronger, its foreign policy is stronger. Because it pays less on its debts, it is better able to finance foreign operations and exert strategic influence. It does not depend on other people’s money. Instead, it has leverage over other countries that depend on its currency. Compare the nineteenth century, it is said, when Britannia ruled the waves and the pound dominated international financial markets, with the post–World War II period, when sterling lost its dominance and the United States, not Britain, called the foreign-policy shots.
Were all this right, there would have been no reason for me to write this book or for you to read it. In fact, however, much of what passes for conventional wisdom on this subject is wrong. To start, it has cause and effect backward. There may be an association between the economic and military power of a country and the use of its currency by others, but it is a country’s position as a great power that results in the international status of its currency. A currency is attractive because the country issuing it is large, rich, and growing. It is attractive because the country standing behind it is powerful and secure. For both reasons, the economic health of the country issuing the currency is critical for its acquisition and retention of an international role.
But whether its currency is used internationally has at best limited implications for a country’s economic performance and prospects. Seignorage is nice, but it is about number 23 on the list of factors, ranked in descending order of importance, determining the place of the United States in the world. That said, how the country does economically, and whether it avoids policy blunders as serious as those that led to the financial crisis, will determine the dollar’s fate. Sterling lost its position as an international currency because Britain lost its great-power status, not the other way around. And Britain lost its great-power status as a result of homegrown economic problems.
The conventional wisdom about the historical processes resulting in the current state of affairs—that incumbency is an overwhelming advantage in the competition for reserve currency status—is similarly wrong. It is asserted that the pound remained the dominant international currency until after World War II, long after the United States had overtaken Britain as the leading economy, reflecting those self-same advantages of incumbency. In fact, the dollar already rivaled sterling as an international currency in the mid-1920s, only 10 short years after the establishment of the Federal Reserve System. It did so as a result of some very concrete actions by the Fed to promote the dollar’s international role. This fact has very different implications than the conventional wisdom for how and when the Chinese renminbi might come to rival the dollar. It suggests that the challenge may come sooner rather than later.
Finally, the idea that the dollar is now doomed to lose its international currency status is equally wrong. The dollar has its problems, but so do its rivals. The euro is a currency without a state. When the euro area experiences economic and financial problems, as in 2010, there is no powerful executive branch with the power to solve them, only a collection of national governments more inclined to pander to their domestic constituencies. The only euro-area institution capable of quick action is the European Central Bank. And if quick action means printing money to monetize government debts, then this is hardly something that will inspire confidence in and international use of the euro. The renminbi, for its part, is a currency with too much state. Access to China’s financial markets and international use of its currency are limited by strict government controls. The SDR is funny money. It is not, in fact, a currency. It is not used to invoice and settle trade or in private financial transactions. As a result, it is not particularly attractive for use by governments in their own transactions.
The United States, whatever its other failings, is still the largest economy in the world. It has the largest financial markets of any country. Its demographics imply relatively favorable growth prospects.
But the fundamental fallacy behind the notion that the dollar is engaged in a death race with its rivals is the belief that there is room for only one international currency. History suggests otherwise. Aside from the very peculiar second half of the twentieth century, there has always been more than one international currency. There is no reason that a few years from now countries on China’s border could not use the renminbi in their international transactions, while countries in Europe’s neighborhood use the euro, and countries doing business with the United States use the dollar. There is no reason that only one country can have financial markets deep and broad enough to make international use of its currency attractive. There may have been only one country with sufficiently deep financial markets in the second half of the twentieth century, but not because this exclusivity is an intrinsic feature of the global financial system.
The world for which we need to prepare is thus one in which several international currencies coexist. It was with this world in mind that the euro was created. A world of several international currencies is similarly what China is after. China has no interest in “dethroning” the dollar. To the contrary, it has too much invested in the greenback. But preserving its investment in the dollar is entirely compatible with creating a more consequential international role for its own currency. And where the renminbi leads, other emerging market currencies, such as the Indian rupee and Brazilian real, could eventually follow.
Serious economic and financial mismanagement by the United States is the one thing that could precipitate flight from the dollar. And serious mismanagement, recent events remind us, is not something that can be ruled out. We may yet suffer a dollar crash, but only if we bring it on ourselves. The Chinese are not going to do it to us.
But this is to get ahead of the story.
DEBUT
When in 1620 a landing party of English religious dissidents led by William Bradford and Myles Standish came ashore near what is today Provincetown, Massachusetts, they brought with them English money and a custom of expressing values in pounds, shillings, and pence. The colonists were not a wealthy band, and it was not many years before they had expended their English money on supplies from the Old World. Finding a substitute was not easy in a colony without a mint or the permission to establish one, and with England prohibiting the export of coin (the English monarchs husbanding all the precious metal they possessed for fighting expensive wars).
Commodity currency was the obvious alternative. Every schoolchild learns about the colonists’ use of wampum. Native Americans valued the purple and white quahog and whelk shells strung in the form of necklaces and ornamental belts and were willing to part with furs, skins, and other commodities in order to obtain them.1 The colonists with their tools were efficient producers of necklaces and belts. From trade with the natives the use of wampum spread to transactions among the colonists themselves. In 1637 wampum was made legal tender, officially recognized money for paying debts, in the Massachusetts Bay Colony at a rate of six white beads or three purple beads per penny.
But there were only so many snail and clam shells to go around. So the colonists turned to other commodities for use in their barter transactions: corn, codfish, and beaver in the north, tobacco and rice in the south. These items were used in transactions because they were the dominant products of the region. Local governments allowed residents to use corn or tobacco to discharge their tax obligations.2 The next step was to declare that the commodity in question should be accepted not just in public payments but by private parties. Massachusetts made corn legal tender. Connecticut did the same for wheat, Virginia for tobacco.3
Making these commodities legal tender had some awkward consequences. When Virginia gave tobacco legal-tender status, there was an incentive to increase production, of the lowest grades in particular. With more tobacco chasing the same goods, the purchasing power of tobacco declined. Farmers complained of low prices. The House of Burgesses, the representatives of Virginia’s agricultural districts, considered measures to restrict tobacco cultivation but could not agree. In 1682, farmers angry over low crop prices took matters into their own hands, rampaging through the fields and destroying their neighbors’ tobacco plants. The government mustered the militia. The rioters carried out their work under cover of darkness. Order was restored only after several months of police action.
Farm products like tobacco had further disadvantages as media of exchange, stores of value, and means of payment—most obviously bulk, lack of uniformity, and spoilage. There understandably developed a preference for coin. Since the English authorities did not permit the colonies to operate a mint, such coin as circulated had to be imported.4 English coin could be obtained by exporting other merchandise, although London with its prohibitions did not make this easy. Closer at hand, coin could be obtained in the West Indies, silver coins being abundant there as a result of Spain’s prolific Mexican and Peruvian mines. The North American colonists sold dried fish, whale oil, pickled beef, and grain in return for coin. Exportation of many of these products to destinations other than other English colonies and the mother country being prohibited, much of this was smuggled. Coin earned by exporting merchandise was supplemented by that acquired through piracy, an important industry for the seventeenth-century colonists in the established English tradition.5 The pirates spent much of their booty, which included Spanish coin, while on shore leave in the northern colonies.
The most popular coins, weighing 423.7 grains of silver, were known as “pesos.” Valued at eight Spanish reals, they were referred to as “pieces of eight.”6 Dealers in foreign exchange in London referred to them as “dollars” or “Spanish dollars,” the Bohemian town of Joachimsthal having produced a coin of similar size and content known as the Joachimsthaler, or as anglicized the “Joachimsdollar.” In addition, gold johannes were imported from Portugal, louis d’or from France, sequins from Venice. But on the eve of the colonies’ war of independence, Spanish silver coins were the dominant part of the coinage.
Coin was supplemented with bills of credit—paper money issued via public loans.7 It was issued, that is, when the colonists’ English overseers permitted, Parliament prohibiting the practice starting in 1751. This ban was among the economic grievances setting the stage for the American Revolution. No less a figure than Benjamin Franklin objected to it in testimony to the British Parliament in 1766.
ALL ABOUT THE BENJAMINS
The colonies’ war of independence was necessarily improvised, but nowhere more than in the monetary sphere. Delegates to the Continental Congress, not being able to commit their principals, lacked the power to raise taxes. They sought to pay for the war by issuing IOUs, continental bills or “continentals” for short. Bills issued under the authority of the Continental Congress were supplemented by bills issued by each colony. The consequences predictably included bills trading at a confusing variety of different prices, inflation, and the disappearance of gold and silver from circulation.
It took the leaders of the new nation some time to regularize this irregular situation. In 1785 the Congress passed a resolution declaring that the “money unit of the United States of America be one dollar” and that the dollar should be divided according to the decimal system. Thomas Jefferson, having been exposed to the advantages of decimalization in France, insisted on the provision. A resolution adopted in August 1786 then referred to the hundredth part of a dollar as a cent and a tenth part as a dime. It defined the dollar in terms of grains of silver and gold at a ratio of 15.253 to 1.
In September 1786, Congress then ordered the establishment of a mint. In its initial months of operation, only a few one-half-, one-, and two-cent copper coins were produced, minting being an activity with which the locals had little experience. A number of states also engaged in coining. The Constitution, which came into force in March 1789, then asserted the power of Congress to coin money and regulate its value while prohibiting the states from coining money or emitting IOUs that circulated like money.8
The last phase of the birthing process was the Coinage Act of 1792, for which Alexander Hamilton was midwife. Hamilton, one of the three members of President George Washington’s first cabinet, believed fervently in the need to bind the thirteen states together. He saw a uniform currency as an effective form of glue. His Report on the Establishment of a Mint, submitted to the Congress in January 1791, offered detailed proposals for a mint and a uniform coinage to encourage commerce not just within but across the newly independent states.
Hamilton was a proponent of bimetallism, in which gold coins were used for large-value trade, silver coins for petty transactions. In the course of preparing his report, he examined the tables that Sir Isaac Newton prepared in 1717, when as master of the mint Newton had specified the pound’s value in terms of the two metals. The 1792 act based on Hamilton’s report similarly defined the dollar in terms of both gold and silver. It defined smaller denominations using the decimal system, dubbing them the quarter, dime, and cent.
To determine the silver content of the dollar, Hamilton had the Treasury weigh a sample of Spanish dollars. Their average silver content was 371.25 grains, not the official Spanish figure of 377, coins circulating in the United States being clipped and worn. The Americans being nothing if not pragmatic, 371.25 grains was adopted as the official silver content of the dollar. Drawing inspiration once more from Newton’s 1717 report, the ratio of silver to gold was set at fifteen to one.9
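The gold definition implied by these two choices can be checked with a line of arithmetic (my calculation, not spelled out in the text): dividing the dollar’s silver content by the fifteen-to-one ratio gives its content in pure gold.

```latex
\frac{371.25 \ \text{grains of silver per dollar}}{15}
  = 24.75 \ \text{grains of pure gold per dollar}
```

This is consistent with the 1792 act’s $10 gold eagle, defined as 247.5 grains of pure gold.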
Acceptance of the new U.S. dollar was swift because of its resemblance to the Spanish dollars already in circulation. Indeed, Spanish dollars, notably those coined in Mexico, continued to circulate because of the slow progress of the mint. So widely did they circulate that in 1793 Congress recognized the most important ones as legal tender. Many remained in circulation until the middle of the nineteenth century. One enduring legacy of the Spanish coins that constituted the bulk of the circulation is the dollar sign. The sign “$” derives from the peso, the two parallel lines being the vertical portions of “P,” and the “S” indicating the plural. This explains why the “$” symbol is also used in countries whose currency is the peso—in Argentina, for example.
O CANADA
Over the balance of the nineteenth century the dollar had a colorful history, but it was almost entirely a domestic history. Canada was the one place outside the United States where it circulated. The British colonies of Upper and Lower Canada, like their colonial counterparts to the south, had no currency of their own. English, French, and Spanish coins all circulated. In the 1830s one writer complained that the coinage had “more the appearance of the fifteenth than the nineteenth century. All the antiquated cast-off rubbish, in the whole world, finds its way here, and remains. This Colony is literally the Botany Bay for all the condemned coins of other countries; instead of perishing in the crucible, as they ought to do, they are banished to Canada, where they are taken in hand.”10
These coins had legal tender status at values that depended on their gold and silver content. As trade with the newly independent United States expanded, they were increasingly supplemented by dollars. While the merchants of Upper and Lower Canada still did their accounting in pounds, shillings, and pence, their transactions increasingly were in dollars and cents.
In the 1850s, with U.S. coins in widespread use, a groundswell developed to give them official status in Canada. Francis Hincks, the onetime banker and railway speculator who served as prime minister from 1851 to 1854, endorsed the campaign, and in 1853 an act was passed recognizing not just pounds, shillings, and pence but also dollars and cents as units of Canadian currency. Simply shifting to the decimal system would have been easier, but the prospect excited fears that doing so would somehow lead to annexation by the United States. Finally in 1857, suppressing these exaggerated worries, the Canadian Parliament passed an act specifying that the accounts of provincial governments should be expressed in dollars and cents. In 1858 the English Royal Mint stamped the first Canadian silver coins in denominations of 5, 10, and 20 cents.
But even after confederation in 1867 and the issuance of a dominion currency, U.S. dimes, quarters, and half-dollars continued to circulate. The bullion content of these U.S. coins was typically 2.5 percent less than their face value. While they might be accepted at face value by merchants and individuals, banks accepted them only at a discount.
This lack of uniformity was a considerable nuisance. In 1868 the dominion government sought to eliminate it by exporting to New York the U.S. silver coins that had come into its possession. This making only a small dent in the problem, in 1870 it agreed to provide the banks a commission in return for buying up the remaining U.S. coin and to pay the cost of exporting it to New York.11 This was Hincks at work again, his having returned as finance minister in 1869 after a period as imperial governor in Barbados and British Guiana. The dominion government then issued full-bodied silver coins in 25- and 50-cent denominations to replace the U.S. coin that now finally disappeared from circulation. The dollar’s international role north of the border thereby came to an ignominious end.
OTHER PEOPLE’S MONEY
Not only did little U.S. money circulate outside the United States, especially after it was expelled from Canada, but the dollar played virtually no role in financing America’s own import and export trade. Whether an American merchant needed credit to purchase imports or to extend credit to a foreign purchaser of American goods, he secured it not in New York but in London or, less frequently, Paris or Berlin. It followed that this credit was denominated not in dollars but in pounds, francs, or marks.
Why London dominated this business is no mystery. Britain was the first industrial economy and the leading trading nation. With economic growth and development came the growth of financial markets. Already in the mid-nineteenth century Britain had a well-developed banking system. It had the Bank of England, chartered in 1694 to raise money for war with France, which had come to assume the functions of a modern central bank. It had stable money as a result of being on the gold standard.
It had not always been so. Traditionally the Royal Mint had been run by the Company of Moneyers, descended from the medieval guild of coiners, whose members were notorious for self-dealing, corruption, and drunkenness. Practices at the mint had so deteriorated by the end of the seventeenth century that the government took the extraordinary step of appointing the country’s premier scientist, the efficient and scrupulously honest Isaac Newton, as Warden of the Mint. Saddled with financial difficulties, Newton was happy to accept, since the position came with a salary. He addressed the personnel problem. He conducted a detailed study of the coinage. He put Britain on the gold standard in 1717.
By the nineteenth century, London had become the premier financial center. Because it was where members of the British Empire serviced their debts, London had developed efficient clearing mechanisms that could also be used by other countries. Britain was the leading foreign investor. And when one of its banks made a loan to a foreign borrower, that loan was naturally in the form of its own currency, the pound sterling. With so many loans denominated in sterling, it became natural for governments, when borrowing in London, to maintain accounts there in order to conveniently service their debts. These accounts were what subsequently came to be known as “reserves.”
Because Britain was the leading importer of industrial raw materials and food, the most important commodity exchanges—the Manchester Cotton Exchange, the Liverpool Corn Market, and of course the London Gold Market—were located there. Britain was also an efficient provider of trade-related services such as shipping and insurance. All this made London an obvious place to obtain credit for those engaged in international trade. And for reasons of their own convenience, the credit provided by British banks was denominated in sterling. It followed that upwards of 60 percent of world trade was invoiced and settled in British pounds.12
NO CREDIT
When a businessman ships a batch of goods, he needs cash. He goes to his bank with papers showing that he has shipped the goods and what he will be paid in the future. If his credit—and the credit of the buyer—is good, he can get his money immediately rather than having to wait for the goods to arrive in the foreign market and for the buyer’s payment to arrive in the United States. The papers in question are known as “trade acceptances.” In purchasing them at a discount from their face value, a bank is said to “discount” them.
But having to rely on London for trade credit, as U.S. importers and exporters did, made the process positively labyrinthine. Picture the requirements facing a New York coffee roaster importing beans from Brazil.13 The importer first had to go to his bank to obtain a letter of credit specifying the terms of the transaction, the goods to be shipped, and the insurance on the shipment. In issuing the letter, his bank committed to paying out funds when receiving confirmation that the transaction was complete. The bank then sent a copy of the letter to the London bank guaranteeing payment of the bill. It gave the importer the original and a second copy.
The importer next sent the original letter of credit to the Brazilian dealer, authorizing him to draw on the London bank against his shipment of coffee. The dealer shipped the coffee and, with documents attached, presented his draft on the London bank to his Brazilian bank. The willingness of his Brazilian bank to purchase (or “discount”) the draft reflected the expectation that it would be “accepted” by a reputable British bank that would pay out the specified amount of cash when the draft matured.
After discounting the draft, the Brazilian bank sent one duplicate set of documents to the New York bank and another, along with its draft, to its correspondent bank in London. The correspondent could hold the accepted draft until it matured, at which point the correspondent would present it to the accepting bank and be paid, or sell it to another party. In practice other interested parties included not just banks but also business enterprises and individuals seeking to invest in relatively safe short-term assets. When presented with the draft for payment, the accepting bank in London checked it against the letter of credit it had received from New York. Finding everything in order, it sent the papers accompanying the draft back to the New York bank.
At this point the American importer, in order to obtain the bill of lading sent to the New York bank by the London bank as part of the documentation accompanying the draft, signed a trust receipt committing to hold the goods in trust for the bank as its property and to turn over to it the proceeds of his sales as they applied to the acceptance. An accepted bill was generally drawn to mature in 90 days, giving the importer time to sell the shipment. Prior to the draft maturing, the importer delivered the funds to his New York bank, which sent them on to the London bank. The London bank paid the holder of the acceptance on its maturity, and the transaction was complete.
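The chain of steps traced in the preceding paragraphs can be compressed into a short sketch. All parties, amounts, and the discount rate below are illustrative assumptions, not figures from the text; the arithmetic simply shows what “discounting” a 90-day acceptance at face value meant in practice.

```python
# Sketch of the pre-1914 documentary-credit chain described above.
# Every name, amount, and rate here is hypothetical, for illustration only.

FACE = 10_000.00   # face value of the 90-day draft (illustrative)
RATE = 0.06        # annual discount rate charged by the discounting bank (illustrative)
DAYS = 90          # usance of the acceptance

def discount(face, rate, days, basis=360):
    """Price a bill today: face value less simple interest to maturity."""
    return face * (1 - rate * days / basis)

# The sequence of hands the paper passes through, as described in the text.
steps = [
    "New York bank issues a letter of credit; London bank guarantees payment",
    "Importer mails the original letter of credit to the Brazilian dealer",
    "Dealer ships the coffee and draws a 90-day draft on the London bank",
    "Brazilian bank discounts the draft, paying the dealer cash immediately",
    "London correspondent presents the draft; the accepting bank stamps it",
    "The accepted bill is held to maturity or resold to other investors",
    "Importer remits sale proceeds via New York; London bank pays the holder",
]

proceeds = discount(FACE, RATE, DAYS)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step}")
print(f"Dealer receives ${proceeds:,.2f} today on a ${FACE:,.2f} bill")
```

The gap between the discounted proceeds and the face value is the bank’s compensation for waiting out the 90 days, which is why a deep resale market for such paper, like London’s, translated directly into cheaper credit.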
One’s first reaction on encountering this exhaustingly long list of procedures is that the transaction could have been completed more easily had it not required multiple communications with London. American merchants complained of having to pay not just a fee to their New York bank for the letter of credit but also a collection charge to the bank in London. Since London banks preferred lending in sterling, the practice also exposed American merchants to the risk that the sterling-dollar exchange rate would move against them, which was an additional cost of doing business.14
These practices had still further disadvantages for American business. To the extent that finance and commercial services like shipping and insurance came bundled together, American providers of the latter found it more difficult to compete. Familiarity with facilities for providing trade credit similarly made London the obvious place to source other financial services—to underwrite bond issues, for example.
PROMINENT BY ITS ABSENCE
Great Britain was a small windswept island off the northwest coast of Europe. The United States, in contrast, was a continental economy. By 1870 it had pulled ahead of Britain in the production of goods and services. By 1912 it had pulled ahead as an exporter of merchandise.
It was thus anomalous that the United States continued to depend on London for trade finance and that the dollar played no international role. Part of the explanation lay in regulations preventing American banks from branching overseas. Extending credit to foreign merchants required information on their activities, something that British banks, with their far-flung branch networks, were in a position to gather. French, German, and Dutch banks similarly had foreign branch networks. But not so national banks in the United States, which were prohibited from branching not just internationally but even across state lines. In some states they were prohibited from branching at all.15
An exception was the International Banking Corporation, a specialized institution created to engage in foreign banking but which, to prevent it from using this advantage to dominate the domestic market, was prohibited from engaging in banking business in the United States. IBC was organized in 1901 by Marcus Hartley, owner of the Remington Arms Company, to promote and finance the expansion of American trade with Asia, the Spanish-American War having brought the region to his attention. Hartley and his partners copied the structure and raided the personnel of British banks already active in the Far East.16 By 1910 IBC had sixteen branches, mostly in Asia.17
In addition, some states allowed trust companies (bank-like companies that oversaw the affairs of trust funds and estates) to operate foreign branches. Foreign branches made it easier to invest in foreign bonds. But the only trust companies with foreign branches were the Farmers’ Loan and Trust Company, the Trust Company of America, the Guaranty Trust Company, the Empire Trust Company, and the Equitable Trust Company. Farmers’ Trust had two foreign branches, the others just one. Such was the extent of foreign branching by American financial institutions.18
Until the passage of the Federal Reserve Act in 1913, national banks were even prohibited from dealing in trade credit.19 The National Banking Act of 1863 and associated legislation included no provisions authorizing them to do so. And the courts, suspicious of banks encroaching into new areas, ruled that national banks could not engage in the business without express congressional authorization.20
Before putting too much weight on these legal restrictions, it is worth recalling that all the great accepting banks in London were private. The United States also had private banks that did not need state or federal charters and hence were free of regulatory restrictions. These included names like J.P. Morgan and Company, Brown Brothers and Company, and Lazard Frères. In principle, nothing prevented these banks from dealing in acceptances. Many had sister firms and offices across the water to provide market intelligence. J.P. Morgan had Morgan, Grenfell and Company. Lazard Frères had offices in London and Paris.
But even private banks contributed to the finance of U.S. foreign trade only to a very limited extent. Evidently something else made it hard for American banks, even private banks not inhibited by regulatory restrictions, to break into the market.
That something else was a cost disadvantage. London banks had a well-developed population of investors to whom trade acceptances might be resold, which made risks smaller and interest rates lower. With so many investors active on this market, it was possible to buy and sell these instruments without moving prices. To put it in the language of finance, the London market was exceptionally liquid. There was little uncertainty about the price one could obtain when discounting a bill. This encouraged yet more investors to come to London, adding further to the market’s liquidity. It made the decision of whether to ask for a draft on a well-known British house or an unfamiliar American competitor a no-brainer for our Brazilian coffee dealer.
And what worked for individual investors worked for governments and central banks. The liquidity of its market made London an attractive place for governments and central banks to hold reserves. And the more bills on London they substituted for gold—which, its other attractions notwithstanding, bore no interest—the greater was the liquidity of the market. This was the advantage of the incumbent international currency, the so-called “first-mover advantage” that enables it to hang on even when the country issuing it has gone into decline.
But the fact that France and Germany were able to enter the market suggests that Britain’s first-mover advantage was not insurmountable. Other factors must have been holding America back. One handicap was the volatility of its financial markets. By one count, the United States experienced fourteen financial crises in the century preceding World War I, of which 1907 was the worst. Interest rates spiked, and for many borrowers credit became unavailable at any price. This was not a market on which many people, given a choice, would finance their trade.
Then there was the fact that it proved impossible for the United States to keep both gold and silver coins in circulation, given that the market price of the two metals was changing continuously. Even after 1879, when the United States formally went onto the gold standard, its commitment remained uncertain. This was notably true in the 1890s, when the inflationist free-silver movement was given voice by William Jennings Bryan. Our Brazilian coffee dealer would have been reluctant to accept a contract in which he would receive dollars sometime in the future, given the risk that additional silver might be coined and the dollar might depreciate against currencies more firmly tied to gold.
BIDDLE’S FOLLY
Finally there was the fact that the United States had no central bank to stabilize the market. When London banks needed cash, they could raise it by reselling to the Bank of England some of the securities that they had purchased previously. (The practice was known, for self-evident reasons, as “rediscounting” at the Bank.) At the end of the nineteenth century, the Bank of England was the single largest purchaser of bills on the London market, sometimes accounting for the majority of all transactions.21
America had nothing resembling these arrangements. A proto-central bank, the Bank of the United States, had been founded in Philadelphia in 1791. The Bank of the United States was another Alexander Hamilton invention, Hamilton having educated himself about the advantages accruing to Britain from the existence of the Bank of England. Created over the objections of Thomas Jefferson and James Madison, who feared that it would lead to elite control of American finances, the Bank of the United States was the new nation’s largest financial institution and the only one permitted to operate in more than one state. It kept the Treasury Department’s accounts. By refusing to accept the notes of banks that did not pay out the designated amount in gold or silver, it maintained the link between the money stock and supply of precious metal. It provided a check on local monopoly power by offering an alternative to local banks charging exorbitant rates.
These other institutions predictably registered their displeasure when the charter of the Bank of the United States came up for renewal in 1810. They complained that the Bank was less than vigilant in refusing to accept the notes of a non-specie-paying bank when politically influential individuals or its own investors were among its shareholders. Jeffersonian Democrats interpreting the Constitution literally insisted that the Congress had no power to charter a bank. The bill to recharter was defeated.
State banks were thus freed of discipline on their note-issuing activities. The next years saw a massive lending boom fueled by a flood of state banknotes, leading first to inflation and then, inevitably, to a crash. In 1816 this unhappy experience caused the Congress to reverse itself and charter a second Bank of the United States, again with a head office in Philadelphia and again for 20 years.
The policies of the Second Bank attracted little notice under its initial presidents, the unremarkable William Jones and Langdon Cheves. This changed in 1823 when Cheves was succeeded by Nicholas Biddle. Biddle was exceptionally smart and knew it, having completed his studies at Princeton at the age of fifteen and being selected to deliver the valedictory address. His self-confidence was matched only by his commitment to federalism, which traced back to his Princeton days, and by his belief that a strong government needed a strong central bank.
As a young member of the Pennsylvania State Senate, Biddle had fought unsuccessfully to mobilize support for rechartering the First Bank. Now, as the president of the Second Bank, he expanded its operations. He increased its loans and investments. He enlarged its branch network and again used it to discipline other banks. And he made no secret of his contempt for his fellow bankers, most of whom did not measure up to his exalted standards.
This approach did not exactly smooth relations with the country’s other financial institutions, whose owners complained to their elected representatives. Biddle sought to buy congressional support with campaign contributions and bribes, but these proved less effective than they might have been in softer hands.
In 1832, 4 years ahead of schedule and with Biddle’s encouragement, the eventual Whig candidate for president, Henry Clay, introduced into the Senate a bill to recharter the Bank. When Clay’s bill was passed by the Congress, the president, Andrew Jackson, promptly vetoed it. A Tennessean, Jackson was wedded to the increasingly anachronistic Jeffersonian ideal of an agrarian republic. He saw the Bank as favoring an elite circle of bankers and industrialists and favoring the Northeast over the South and West. Jackson was therefore quite happy to make his opposition to the Bank a central issue in his 1832 reelection campaign.
Biddle was confident, given what he took as the lessons of 1811–15, that the issue would be a winner for Clay. The voters, having shorter memories and being less enamored of the Bank’s hard-money policies, proved him wrong. There was also the opposition of the New York financial community, which was not fond of an arrangement that made Philadelphia the seat of financial power. Once reelected, Jackson made clear that the Bank would be rechartered only over his dead body.
In 1836, its federal charter expiring, the second Bank of the United States took out a state charter and became the United States Bank of Pennsylvania. Biddle attempted to establish his state-chartered bank as a platform for building a market in bills of exchange in Philadelphia. But with no equivalent of the Bank of England to backstop the market, even a formidable state bank lacked the resources. Biddle attempted to secure a line of credit from the Bank of England for his operation but, not surprisingly, was rebuffed. At that point the 1836–37 financial crisis put an end to his plan.22
More than three-quarters of a century would pass before the United States again possessed a central bank. Among the consequences was an international monetary system in which the dollar played no role. For central banks and governments, sterling, not the dollar, was “as good as gold.” Not just the French franc, German mark, Swiss franc, and Dutch guilder but even the Italian lira, Belgian franc, and Austrian shilling all ranked ahead of the dollar on the international pecking order on the eve of World War I, despite the fact that the United States was far and away the largest economy.23 Sterling accounted for roughly half of all of the foreign exchange reserves of central banks and governments, the French franc 30 percent, the German mark 15 percent. In addition, small amounts of Dutch guilder and Swedish krona were held as foreign exchange reserves. But not dollars.
ENTER THE FED
After the 1907 financial crisis, concern over the instability of American finance fused with the desire to create a U.S. market in trade credits. The 1907 panic was caused, the experts explained, by the fact that financial transactions in New York were nothing more than stock market speculation, as opposed to the kind of wholesome investments backed by import and export business that dominated in London. Then there was the fact that, in the absence of a central bank, the major financial institutions had been forced to rely on Wall Street’s dominant figure, the supremely confident and supremely rich J. Pierpont Morgan, to organize a rescue. This was not entirely reassuring, since it was unclear whether Morgan or someone like him would be there the next time crisis struck.
This pointed to the need for a permanent mechanism for managing monetary problems. To investigate solutions, a National Monetary Commission was set up in 1908. It included eighteen members of Congress under the chairmanship of the brusque and intimidating senior senator from Rhode Island, Nelson Aldrich. Although Aldrich was descended from an old New England family (his forebears included John Winthrop and Roger Williams), his parents were not rich; he married money rather than inheriting it. Politically, he worked his way up, starting with the Providence City Council. Economically, he put his wife’s money to work by investing in the Providence street railway system. (The two endeavors were clearly not unrelated.) From city council Aldrich moved to the Rhode Island House of Representatives, the U.S. House, and finally, in 1881, the Senate.
By 1910 Aldrich had been in the Senate for close to 30 years. Having risen to the chairmanship of the Finance Committee, he was used to getting his way and not much inclined to defer to his senatorial colleagues. A conservative Republican, he had previously concentrated on securing tariff protection for U.S. manufacturing. But the 1907 crisis convinced Aldrich of the need for a stronger monetary framework, much as the monetary turmoil experienced by the new nation had convinced Hamilton of the need for the Bank of the United States.
The question was what kind of monetary framework. As head of the investigatory commission, Aldrich hired advisors. He consulted experts. He led a mission to Europe to study arrangements there. The trip convinced him of the need for a European-style money market backed by a central bank. The upshot was the Aldrich Plan, proposing the creation of a National Reserve Association at whose center would be a central bank with the power to influence financial conditions and lend to banks in distress.
The author of the Aldrich Plan’s technical provisions, who was to play an important role in the dollar’s subsequent rise to international prominence, was the German-born financier Paul Warburg. Warburg had started his career working for Simon Hauer, an importer and exporter in Hamburg. After further seasoning working for bankers in London and Paris, he moved to the family banking firm of M.M. Warburg and Company.
Warburg was thus intimately familiar with the mechanics of international finance. He was also connected with the higher echelons of American banking. At the end of a round-the-world tour in 1892, he had met the charming and talented Nina Loeb, who just happened to be the daughter of one of the founders of the prominent New York bank Kuhn, Loeb and Co. One thing led to another, and the two were married in 1895. After 7 years in Hamburg, the couple relocated to New York, and Warburg took up a position with his father-in-law’s firm. It was on a visit to Kuhn, Loeb in preparation for his mission to Europe that Aldrich encountered Warburg, who seemed uniquely well informed about European finance. By this time Warburg had become a proponent of an American central bank to support the development of a market in trade acceptances. He in turn was impressed by broad-shouldered Aldrich, whom he took to be the embodiment of monetary reform. Warburg began writing Aldrich about his ideas. Again, one thing led to another.24
Shy and self-effacing, Warburg preferred working out of the public eye. With a thick German accent, he was not a natural public speaker. But the 1907 financial crisis made him a man with a mission. By the end of the year, he was publishing in the New York Times on the need for a European-style central bank to stabilize American financial markets. He was not deterred by letters abusing him for his “un-American views.” By 1908 he was giving speeches on financial crises. He was soon testifying before Congress and serving as head of the National Citizens’ League for the Promotion of Sound Banking, a lobby for monetary reform.
In November 1910, Warburg and Aldrich, together with A. Piatt Andrew, assistant secretary of the treasury and a former assistant professor at Harvard who had served as special assistant to Aldrich’s monetary commission, and three Wall Street titans—Benjamin Strong, head of Bankers Trust; Frank Vanderlip, a onetime financial journalist and former assistant secretary of the treasury newly appointed as president of National City Bank, the largest bank in the country; and Henry Davison, senior partner and in-house fixer at J.P. Morgan & Company—snuck off to Jekyll Island off the Georgia coast to draft a blueprint for a central bank.25 That Andrew’s participation was not known even to his boss, Secretary of the Treasury Franklin MacVeagh, testifies to the boldness of the expedition. J. P. Morgan himself had regularly taken hunting vacations on Jekyll Island, explaining the venue. The six conspirators traveled by private railcar, disguised as duck hunters, to prevent their movements from being traced. To avoid having their identities learned by porters, Vanderlip and Davison adopted the further artifice of referring to one another as Orville and Wilbur.
After the New Year, the Jekyll Island blueprint was announced as the Aldrich Plan. To reassure those fearful of overweening government control, it proposed a more decentralized central banking structure than existed in Europe. It described a National Reserve Association with fifteen regional branches, each with the authority to discount trade acceptances. To ensure what the plan’s authors saw as proper democratic control, it recommended that the directors of each branch be elected by the commercial banks associated with it.
This was not obviously enough to surmount deep-seated popular and congressional concern over concentrated financial power. The notion that small business and the farmer were exploited by big finance still resonated powerfully, as in the days of Andrew Jackson. Attaching Aldrich’s name to the plan also had the unfortunate consequence of exciting those suspicious of a Wall Street money trust. Aldrich’s daughter, Abby, had married John D. Rockefeller Jr., only son of the oil magnate John D. Rockefeller, the single richest person in the country, causing Aldrich to be widely viewed as Rockefeller’s mouthpiece. The governor of New Jersey and presidential hopeful Woodrow Wilson explained that, while he had not read the Aldrich Plan, he disliked anything bearing the senator’s name.
Then there was the fact that Frank Vanderlip, another member of the Jekyll Island traveling party, had already begun to position the institution he headed, National City Bank (forerunner of today’s Citigroup), to capitalize on the opportunities created by monetary reform. Vanderlip established the National City Company, a holding-company affiliate, to buy up state banks and trust companies and engage in activities prohibited to national banks. The prospect of a megabank monopoly excited not just local bankers but also farmers and businessmen long suspicious of big finance. Congressman Charles A. Lindbergh Sr. of Minnesota, a leading critic of the financial establishment, member of the House Committee on Banking and Currency, and father of the famous aviator, introduced a resolution to investigate the money trust. “Wall Street brought on the 1907 panic,” Lindbergh thundered, and “got people to demand currency reform…and, if it dares, [it] will produce another panic to pass the Aldrich central bank plan. We need reform, but not at the hands of Wall Street.”26 Lindbergh had grown up on the Minnesota frontier in the heyday of the Populist revolt against extortionate bankers and railroads. He was, it can be fairly said, obsessed with the money trust. Lindbergh was reported to have read all of the dozen-plus studies published by Aldrich’s National Monetary Commission cover to cover and still found time to pen his own 318-page study of monetary reform.27
It took the better part of 2 years for something resembling the Aldrich Plan to wend its way through the Congress. To quiet the critics and satisfy himself, the bill signed by President Woodrow Wilson provided for a system of regional reserve banks with locally appointed boards, not unlike that in Aldrich’s plan, but supervised by a Federal Reserve Board whose seven members would be selected by the president and not by the bankers.
Lindbergh was not impressed. “This act establishes the most gigantic trust on earth,” he railed. “When the president signs this bill, the invisible government by the Monetary Power will be legalized. The people may not know it immediately, but the day of reckoning is only a few years removed.” That day came, of course, in 1929, although it did not take exactly the form that Lindbergh had in mind.
ACCEPT YOURSELF
The mandate of the new central bank was to provide an “elastic currency.” It was to regulate the supply of credit to prevent disruptive interest rate spikes and market seizures like those of 1907. Among its techniques would be purchasing trade acceptances, the studies of the National Monetary Commission having shown that this was how the Bank of England smoothed rates.
Buying trade acceptances denominated in dollars assumed, of course, a supply of dollar-denominated trade acceptances to be bought. Providing them required American banks to go abroad. The Federal Reserve Act therefore authorized national banks with capital of at least $1 million to establish branches in foreign countries.28 It allowed them to purchase trade acceptances up to a limit of 50 percent of their own funds.
How did this market get up and running, given the cost and reputational advantages possessed by London? The difference now was not just the Federal Reserve Act but also World War I. The war saw a dramatic expansion of U.S. export trade, as America became factory and granary to the world. American multinationals established operations in Latin America and Asia. The United States was transformed from debtor to creditor nation.
The war also disrupted the provision of trade credit in Europe. As governments mobilized for war, capital for trade finance grew scarce. German and British banks turned to New York to accept endorsed bills for their clients’ imports not just from North America but from Latin America and Asia as well. The credit they received was denominated in dollars because this was the currency with which the New York banks were familiar.
But this was not the only reason. Starting in 1915 sterling’s value in terms of gold, still the standard measuring rod, oscillated violently. Contracting today for future payment in a currency whose value was uncertain was unappealing. It was especially unappealing given the existence of an alternative, the dollar, still firmly pegged to gold. Not just American traders but also Brazilian exporters of coffee, and more generally importers and exporters throughout Latin America and Asia, concluded that the dollar was the more attractive unit in which to do business.
American banks, free now to deal in acceptances, scrambled to attract this business. The always expansion-minded National City Bank set up a Foreign Trade Department to provide exporters with information on the foreign demand for U.S. products and the creditworthiness of customers, packaging this advice with its financial services. National City was soon extending some $20 million of trade acceptances annually.29
In January 1916, with American support, the British government succeeded in pegging sterling to the dollar.30 But even if the British authorities succeeded in stabilizing sterling for the moment, this did not create confidence that it would remain stable, given massive wartime budget deficits and the rapid rise of British prices. Predictably, the pound began falling when American support was withdrawn at the end of the war. Within a year it had lost a third of its value, more even than when Napoleon returned from Elba in 1815. All the while the dollar remained pegged to gold. It was not surprising that American importers and exporters saw the dollar as the more attractive unit in which to do business. And what was true of merchants and traders in the United States was true of those in other countries.
National City Bank under Frank Vanderlip was again in the vanguard of U.S. banks expanding abroad. A former financial journalist, Vanderlip was mindful of the power of publicity and moved quickly to advertise his bank’s ambitions. Immediately upon passage of the Federal Reserve Act, he had a questionnaire sent to 5,000 American firms soliciting their views on which foreign markets would benefit from the presence of a National City branch. The Du Pont Company, which, sensing the wartime demand for munitions, had opened a nitrate factory in Chile, replied that it was desirous of South American branches. National City set up a branch in Argentina followed by others in Brazil, Chile, and Cuba. In 1915 it acquired the International Banking Corporation, which it used to set up branches across Europe and Asia. Where the bank did not establish branches outright, it sent representatives to gather market intelligence and solicit business.
Other U.S. banks followed National City into the fray. By the end of 1920, American banking institutions operated 181 branches abroad.31 Of these, 100 were foreign offices of seven banks doing regular banking business in the United States, and 29 of the 100 were offices of National City Bank or its subsidiary, the International Banking Corporation.32
These American banks operating in other countries encouraged importers there to accept drafts in dollars drawn on them by American exporters. Foreigners exporting to the United States could similarly draw in dollars on a U.S. bank instead of drawing drafts in London. Thus, it was not just U.S. importers and exporters who made use of the new acceptance market in New York but also foreign merchants linked to it by the foreign branches of American banks.
IN STRONG HANDS
But the growth of the acceptance market in New York and its progeny, the international use of the dollar, entailed more than the miracle of the market. On their own, American banks were not yet capable of building a dollar acceptance market. Their costs were still too high, reflecting a dearth of other investors to whom to sell their acceptances. With no one else to take the paper, banks were forced to hold the acceptances they originated on their own balance sheets. Doing so was expensive, since the yield to maturity on this paper was often less than what the banks themselves had to pay when borrowing money.
The obstacle was the lack of familiarity of investors with the asset class. And familiarizing them took time. As explained by the new industry’s advocate, the American Acceptance Council (another Paul Warburg creation), the investor “would have to be educated, first as to the nature of a bankers’ acceptance, second as to its attractiveness as an investment, and third, owing to its quality as a doubly secured risk [that it was guaranteed both by the original issuer and the accepting bank], that it would be offered at a lower rate than he had been accustomed to, when buying the best single name commercial paper.”33 Until this was done, dollar acceptance business would remain stunted.
Rather than relying on the invisible hand, the entirely visible hand of Benjamin Strong, now governor of the Federal Reserve Bank of New York, took hold of this problem. In the Hamiltonian tradition, Strong believed in the need for central control of financial affairs. His great-grandfather, also named Benjamin, had served as Alexander Hamilton’s first clerk in the Treasury. The great-grandson grew up in modest circumstances. His father superintended a section of the New York Central Railroad, and Strong himself chose to forgo college for financial reasons. Starting as a clerk (and for that purpose taking a remedial penmanship course to correct his borderline-illegible handwriting), Strong rose through the financial ranks before being tapped by Henry Davison, a country club acquaintance, to work for the newly formed Bankers Trust Company. When during the 1907 financial crisis J. P. Morgan organized the New York banks to rescue their weaker brethren, Morgan turned to Davison to manage the effort, and Davison turned to Strong. Strong’s involvement in those 1907 rescue efforts made him an energetic advocate of financial reform and put him on the road to Jekyll Island.
Like Warburg, who had helped recruit him to the governorship of the New York Fed, Strong saw the need for a trade acceptance market to stabilize America’s finances. Fostering a market in actual merchandise transactions, as opposed to financial speculation, would help to prevent a recurrence of 1907-style financial excesses, he believed. As governor of the New York Fed, Strong also appreciated that the market in trade acceptances in London gave the Bank of England a handle with which to manage credit conditions, a handle that a comparable market in New York would give the Fed. He saw development of this market as enhancing the competitiveness of American industry and expanding the country’s foreign trade. He saw all this as a project that the Federal Reserve System should support.
Following Strong’s lead, the Federal Reserve Board therefore instructed the system’s regional reserve banks to purchase acceptances for their own account.34 The reserve banks purchased acceptances to stabilize and reduce discount rates, and the favorable behavior of discount rates in turn encouraged the growth of the market. In the first half of the 1920s the Federal Reserve Banks were the dealers’ dominant counterparty. In addition, a few other knowledgeable investors were attracted to the market. The main ones were foreign banks, including foreign central banks, with large surplus balances in the United States for whom acceptances quickly became a favored investment. The July 1919 issue of the Federal Reserve Bulletin noted that most of the $10 million acquired by the Dutch central bank on behalf of Dutch sellers of flower bulbs and diamonds purchased by Americans in Holland was invested in bank acceptances.
Slowly dealers specializing in acceptance business appeared on the scene. The largest of them, the International Acceptance Bank, had as its chairman none other than Paul M. Warburg. Warburg’s motivation for launching IAB was to finance German grain imports and help rebuild Germany’s war-torn economy. IAB was also a way for Warburg to help his brother Max, who still ran the family firm in Hamburg. IAB would work hand in glove with M. M. Warburg, giving the latter much-needed business in the straitened circumstances of the 1920s.35 Slowly but surely other banks also created subsidiaries to purchase and sell acceptances and market them to retail investors.
DEBUT OF THE DOLLAR
The growth of this market in trade acceptances finally allowed the dollar to assume a meaningful international role. By the second half of the 1920s more than half of all U.S. imports and exports were financed by bank acceptances denominated in dollars.36 The attractiveness of doing business in New York reflected the fact that the interest rate that importers and exporters had to pay was now as much as a full percentage point lower than in London. Not just those buying and selling goods to the United States but also merchants engaged in trade with other countries flocked to New York. By the end of the 1920s the value of dollar acceptances issued to finance trade between third countries, together with those backed by goods warehoused in foreign countries, approached that of acceptances issued to finance imports into the United States itself.
This trend was part of the growing importance of the United States in international transactions generally. Europe having been devastated by the war, the resource requirements of postwar reconstruction were immense. It followed that the continent looked abroad for finance. A United States flush with funds was the obvious place to look. To governments for whom this was not obvious, Strong drove home the point. He traveled to Europe to negotiate loans. From Poland to Romania he sent emissaries like the Princeton University money doctor Edwin Kemmerer to encourage countries to contract loans in the United States.
In doing so Strong competed with Montagu Norman, his counterpart at the Bank of England, who urged countries to seek assistance for financial stabilization not in the United States but through the League of Nations—of which the United States conveniently was not a member. A League loan in London might help a country stabilize its currency, but it would also encourage it to contract for further borrowing there. Negotiating bilaterally with the United States, in contrast, would lead to borrowing in New York. Although the two men were outwardly very different—where Strong was handsome and self-confident, Norman had the pinched features of a hypochondriac—they were friends and even vacationed together. Strong famously kept interest rates low in 1924–25 to support Norman’s effort to return sterling to the gold standard. But if allied in other causes, they were rivals in this one. Strong used all his leverage to encourage countries to arrange their stabilization loans in New York.
All through the 1920s capital flowed from the United States, where it was abundant, to Europe, where it was scarce. American banks arranged bond issues for European governments and corporations, denominating them in dollars so they could be marketed to American investors. They opened store-fronts to pitch them to retail customers.
This high-pressure salesmanship should have been a warning. As inexperienced U.S. financial institutions rushed into the field, they extended increasingly dubious loans. One is reminded of the scramble of regional banks in the later stages of the boom into the subprime mortgage market. The title of Ilse Mintz’s study Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920–1930 tells the tale.37 Inexperienced U.S. banks enthusiastically underwrote, and their clients enthusiastically subscribed, bonds issued on behalf of German cities for the construction of municipal swimming pools, a form of liquidity that did not directly enhance the capacity to repay. Eighty years later American borrowers got even by selling German banks collateralized debt obligations backed by those same subprime loans.
Lending in Latin America by new entrants like the Chase Securities Company fared little better. A loan to Cuba for a highway spanning the island foundered on the inability of the contractors to complete more than isolated segments of pavement. It didn’t help that, for political reasons, the government felt compelled to commence construction of separate segments in all six provinces. Nor were investors told that the son-in-law of the Cuban president had been hired by the bank’s Cuban branch during the very period when the bank was competing for the privilege of lending to the Cuban government.
When at the end of the 1920s new money to service old debts stopped flowing, the Ponzi-like nature of the scheme was revealed. The majority of the foreign bonds underwritten by American banks lapsed into default.
But these were problems for later. For now the main impact of these flows was to enhance the international role of the dollar. Before the war, the dollar exchange rate had been quoted in fewer financial centers than minor currencies like the Italian lira and the Austrian schilling. Now it was quoted more frequently than all rivals. By the second half of the 1920s, foreign acceptances in dollars exceeded foreign acceptances in sterling by a factor of two to one. By 1924 the dollar accounted for a larger share than the pound of the foreign exchange reserves of central banks and governments.
Incumbency is thought to be a powerful advantage in international currency competition. It is blithely asserted that another quarter of a century, until after World War II, had to pass before the dollar displaced sterling as the dominant international unit. But this supposed fact is not, in fact, a fact. From a standing start in 1914, the dollar had already overtaken sterling by 1925. This should be taken as a caution by those inclined to argue that incumbency gives the dollar formidable advantages today.
To be sure, it took an exceptional shock, World War I, and the market-making efforts of the Fed to effect this changing of the guard. Still, it is not impossible to imagine something analogous today. For the wartime shock to sterling, substitute chronic U.S. budget deficits. And for the efforts of the Fed to establish a market in trade acceptances in New York, substitute the efforts of Chinese officials to establish Shanghai as an international financial center. The renminbi replacing the dollar may not be anyone’s baseline scenario, but it is worth recalling the history of the 1920s before dismissing the possibility.
IT ALL COMES CRASHING DOWN
The financial flowering of the dollar, however, soon was all for naught. The Roaring Twenties gave way to the Great Depression. This mother of all depressions was global. It affected every country. One of its most destructive impacts was on international transactions. And with the decline in international transactions came a decline in the international role of the dollar.
Trade was bound to contract with so vicious a fall in output and spending. But this was not all: seeing spending collapse, governments slapped on tariffs and quotas in a desperate effort to bottle up the remaining demand. Not knowing what else to do, they used trade policy to shift spending toward domestically produced goods. In the United States, farmers who had endured depressed crop prices now allied with light industry along the Eastern Seaboard to push the Smoot-Hawley tariff through Congress. In the UK, the influential economist John Maynard Keynes had trumpeted the advantages of globalization in his 1919 best-seller, The Economic Consequences of the Peace. In 1931, seeing no alternative, he advised the British government to impose an across-the-board tariff in a last-ditch effort to boost spending on domestic goods. The result was the General Tariff of 1932.38 Germany followed with an “equalizing tariff.” The Netherlands abandoned its traditional free trade policy, raising import duties by 25 percent. And so on. Whereas global production of primary products and manufactures fell by 20 percent between 1929 and 1932, the volume of international trade fell by fully 36 percent.39 There was correspondingly less demand for dollars to finance and settle trade.
The implosion of long-term foreign lending was even more dramatic. New long-term foreign loans by U.S. investors, having peaked at $1.2 billion annually in 1927 and 1928, fell to less than $200 million in 1931 and a scant $700,000 in 1932.40 And since the dollars on which foreigners relied to purchase U.S. imports were no longer available, the tendency to hold balances in New York to service such obligations declined commensurately.
What made the Great Depression great, of course, was that it was allowed to destabilize banking systems. Banks that had extended loans not just to foreign governments and corporations but also to American firms, farmers, and municipalities now saw these investments go bad. As bank balance sheets deteriorated, depositors scrambled to withdraw their funds. A first wave of bank runs erupted in the final months of 1930. Most of the affected banks had links to the Nashville-based investment firm Caldwell and Company, which controlled the largest chain of banks in the South. These banks were all owned by Caldwell and Company itself or one of its affiliates, or else they were owned and operated by individuals with personal ties to the founder, Rogers Caldwell. Caldwell was the Michael Milken of his day, having established his firm in 1917 at the tender age of twenty-seven to underwrite the junk bonds of southern municipalities and sell them to retail investors.41 His father, James Caldwell, had come to Nashville in 1870, where he went to work for a wholesale grocery. Finding himself one day unable to complete an order for millet seed (seed used to raise hay for horses), James had bought up the entire supply in the city, cornering the market and doubling his investment. From there it was a small step into insurance and banking. The son similarly moved into banking, and his operations were similarly dubious. Often the main and, indeed, only customers of Caldwell’s banks were the same municipalities whose bonds Caldwell and Company underwrote and sold onward. When those municipalities experienced financial distress in 1930, so did Caldwell’s banks.
But had it not been Caldwell it would have been someone else. The deterioration of economic conditions made banking problems inevitable. By 1931 there were bank runs in all parts of the United States. Nor was the problem limited to America: banking panics erupted in Argentina, Mexico, Austria, Belgium, France, Germany, Hungary, Romania, the Baltic states, Egypt, Turkey, and the UK. Where there were banks, there were panics. Scarcely a part of the world was immune.
In some cases these crises were compounded by the failure of the authorities to act, but in others they were worsened by the very fact that authorities did act. When officials provided banks with emergency assistance, as in Britain, they signaled that they attached higher priority to stabilizing the financial system than stabilizing the currency. British banks, under pressure from their new American competitors, had provided credit to German exporters on concessional terms. As the financial crisis now spread to Germany, Berlin froze repayments. This punched a hole in the balance sheets of the London banks and in Britain’s balance of payments as well. Under other circumstances the Bank of England would have responded to the resulting gold losses by raising interest rates to attract capital from abroad. But it understood that higher rates would make funding their operations more expensive for the banks. So the Bank of England resisted the temptation to tighten. Some observers ascribed the Bank’s failure to defend sterling to the fact that the governor, Montagu Norman, was indisposed. Exhausted by the crisis, he had sailed off for a Canadian holiday. In fact, however, Norman’s seconds at the Bank knew exactly what they were doing. They were consciously choosing the stability of the banks over the stability of sterling.
Investors monitoring the Bank of England had no trouble seeing that currency depreciation was coming. Their self-preservation instincts kicking in, they scrambled to get their money out of the country before sterling’s depreciation eroded the value of their claims. They converted their funds into foreign currency and redeposited them abroad.
Still the Bank of England stuck with its strategy, which was to do nothing. Aside from two small increases in the second half of July, it resisted the pressure to raise interest rates to defend the exchange rate. The decision to abandon the gold standard and allow the pound to depreciate followed, unavoidably, on September 20.
Not everyone was pleased. British tourists disembarking in Manhattan from the White Star Line’s S.S. Homeric were shocked by how few dollars their pounds could buy. “A pound is still a pound in England,” huffed one. “I shall carry my pounds home with me! A bit high this, something of a holdup, what?”42 The response of industry, in contrast, was distinctly positive. “Bryan was right,” as Clark H. Minor, the UK-based president of International General Electric, summarized the lesson, referring to William Jennings Bryan’s campaign against gold.
DOLLAR BACKLASH
Sterling’s devaluation raised questions about whether the dollar was secure, shifting financial pressure to New York. Not just private investors but central banks, with France, Belgium, Switzerland, Sweden, and the Netherlands in the vanguard, rushed to convert their dollars into gold before the moment passed. Conversions started on September 21, the first business day after sterling’s devaluation. After waiting two weeks, the New York Fed raised its discount rate by a full percentage point to defend the dollar. A week later it raised the discount rate a second time, again by a full percentage point.
This was the sharpest increase in rates in such a short period in the history of the Federal Reserve. Not for 47 years, until 1978 and another episode of pronounced dollar weakness, would the Fed again raise rates so far so fast. Although this aggressive action stabilized the dollar exchange rate, the same cannot be said of the banking system. In October alone, 500 banks failed. In the six months from August 1931, nearly 2,000 went under. Such are the effects of raising interest rates in a financial crisis.
With the Fed stoutly defending the dollar, the pound/dollar exchange rate fell from $4.86 to $3.75 in a week. By December 1931 it had reached $3.25. Expensive British exports now became cheap. From $3.25, speculators concluded, the sterling exchange rate could only go up. Accordingly, it mounted a modest recovery. Freed to support the economy, the Bank of England could cut its discount rate to 2 percent, inaugurating the policy known as “cheap money.” Ultimately this was the same escape route chosen by other countries, starting with Britain’s Commonwealth and other trade partners, followed by the United States, which abandoned the gold standard in 1933, and concluding with France, Belgium, the Netherlands, and Switzerland, all members of the “gold bloc,” so named because they continued against all odds to cling to the gold standard before finally abandoning it in 1935–36.
With less trade, less foreign borrowing, and less commitment to defending exchange rates, there was less need for central banks to hold foreign currencies. When governments and central banks sought to influence market outcomes, they were now more likely to do so by tightening controls than by buying and selling foreign exchange. This change in strategy permitted them to drastically reduce their foreign currency holdings. Prior to Britain abandoning gold, the National Bank of Belgium, which held reserves in London, had asked the Bank of England whether there was a danger that sterling might be devalued. The Bank of England had responded that this step was out of the question. Having been burned, the National Bank of Belgium now sold off not just its sterling but also, just in case, its dollars. The Bank of France and others followed suit.
Although the importance of both the dollar and the pound as reserve currencies was diminished by the crisis, the sale of foreign currencies by central banks was disproportionately a sale of dollars. By the end of 1931, dollars accounted for 40 percent of remaining foreign exchange reserves worldwide, but sterling nearly 50 percent.43 This result might seem peculiar, given that the Fed was defending the dollar while the Bank of England was not doing likewise for sterling. But the U.S. depression was deeper and longer. The British economy began recovering in early 1932, but it took another year for activity to bottom out in the United States. The collapse in U.S. trade being even more severe, the volume of acceptance business fell off even more dramatically in New York.
That said, the single most important reason that sterling temporarily regained its lead as an international currency was the practice of the members of the Commonwealth and Empire of holding their reserves in London. For Commonwealth countries like Australia and New Zealand, doing so was more than a matter of economic logic. It was a demonstration of political solidarity. For the Empire it was not even a choice. The colonies did what they were told by the Foreign or Colonial Office. Because the United States lacked the same imperial prerogatives, the dollar did not enjoy the same support.
But with international transactions of all kinds depressed for the balance of the 1930s and with politics dominating economics, it was easy to miss that a changing of the guard had already taken place. It was easy to overlook that the dollar had overtaken sterling as the leading international currency. For anyone uncertain about the situation, however, World War II would clarify it soon enough.
DOMINANCE
For a quarter of a century after World War II, the dollar reigned supreme. Only the United States emerged strengthened from the war. Its economy towered over the world like none other. It accounted for fully half of global industrial production.1 Only its currency was freely traded.
As a result, barely two decades after its debut as an international currency the dollar was the dominant unit in which prices were quoted, trade was invoiced, and transactions were settled worldwide. For foreign central banks and governments the dollar was as good as gold, since the United States stood ready to sell gold at a fixed price of $35 an ounce.2 The Articles of Agreement of the International Monetary Fund, the newly created steward of the international system, acknowledged the currency’s unique status by authorizing countries to define their exchange rates in dollars. Other potential issuers of international currencies lacked either open financial markets, like Germany, or financial stability, like France. The UK lacked both. The dollar was not just the dominant international currency but, outside the British Commonwealth and Empire, effectively the only one.
Central banks still had the option of accumulating gold, but the supply of newly mined gold was limited. There was also the uncomfortable fact that since the Soviet Union and South Africa were the main producers, purchasing gold effectively subsidized two unsavory regimes.
These facts placed the dollar and the United States in a unique position. American consumers and investors could acquire foreign goods and companies without their government having to worry that the dollars used in their purchases would be presented for conversion into gold. Instead those dollars were hoarded by central banks, for which they were the only significant source of additional international reserves. America was able to run a balance-of-payments deficit “without tears,” in the words of the French economist Jacques Rueff. This ability to purchase foreign goods and companies using resources conjured out of thin air was the exorbitant privilege of which French Finance Minister ValĂ©ry Giscard d’Estaing so vociferously complained.
STERLING HANGOVER
For reasons of history if nothing else, the pound remained the dollar’s principal rival. The Commonwealth, Empire, and other members of the sterling area had given the UK an unlimited credit line during the war.3 They supplied Britain and its army with resources and war matĂ©riel, taking British treasury notes as IOUs. Britain and its allies meanwhile ran down their dollar reserves to procure supplies from the United States. By the end of the war the accumulated sterling balances of central banks and governments thus exceeded their dollar balances by a factor of two to one.4
Superficially this created the impression that the pound was still the leading reserve currency. But two-thirds of overseas financial claims on the UK were in the hands of countries belonging to the sterling area.5 Most of its members had accumulated sterling for wartime reasons. They maintained it now only because the controls imposed by Britain prevented them from exchanging it for goods or more useful currencies.
It was widely understood that holding sterling was a losing proposition. In the halcyon days before 1914, Britain’s assets abroad had greatly exceeded its liabilities. There had been no question about the stability of sterling or the security of foreigners’ investments. But now the country’s net external sterling liabilities, at $15 billion, were nearly six times its gold and foreign currency holdings.
If foreigners were allowed to freely sell their claims on Britain, their value would drop like a stone. This danger became acute in 1946 when the United States made the removal of Britain’s currency controls a precondition for extending a loan for British reconstruction. This was the one great wartime failure of John Maynard Keynes. This greatest of British economists had served H.M. Treasury in a variety of wartime capacities and led negotiations with the Americans over the structure of the postwar international monetary system. Supremely self-confident, he believed that the U.S. government would provide its ally with a postwar loan free of onerous conditions once it heard his compelling arguments. Instead, the Americans demanded that Britain remove its controls. Doing so, they believed, would expand the market for U.S. exports.6 For them the resumption of normal peacetime international financial relations was overdue.
Keynes’s failure to head off this demand reflected his limited understanding of American politics. In the British system, a government with a parliamentary majority could do pretty much as it pleased. An enlightened Roosevelt-Truman administration, Keynes reasoned, could similarly push through its chosen policies, enjoying as it did a majority in both houses of Congress. He failed to reckon with the independence of American legislators or their isolationist tendencies. The further one moved from the Eastern Seaboard, the less Americans and their congressional representatives valued their supposed special relationship with Britain. When the administration pushed for concessions for the British, the Congress pushed back.
Keynes’s failure to negotiate better terms may have also reflected his weakened physical state. He was suffering from a bacterial heart infection that subjected him to increasingly serious heart attacks. He tired easily and was frequently incapacitated. That the UK continued to rely on him to represent its interests, despite these problems, testifies to his singular intellectual capacity and stature.
The precise requirement laid down by the Americans was that all sterling now earned by foreigners should be freely usable in merchandise transactions within a year of the loan’s approval by the Congress. When current account convertibility, as this condition was known, was duly restored on July 15, 1947, residents of other countries rushed to convert sterling into dollars to purchase American goods.7 Britain’s reserve losses in the first month exceeded $1 billion. For a country with less than $2.5 billion of gold and foreign exchange, the position was untenable. Anthony Eden, deputy leader of the Conservatives, likened it to “doing the splits over an ever-widening abyss.”
With no choice, Britain slapped controls back on after just five weeks. So much for the idea that a convertible pound sterling might again play a leading international role.
British policymakers became understandably shy about current account convertibility. Not until the end of 1958 would they try again. By then sterling balances, still mainly in the hands of the Commonwealth and Empire, remained at the same level as a decade earlier. Dollar reserves, meanwhile, had more than tripled.8 It was clear which currency countries wanted when they accumulated reserves. It was not the currency of Europe’s sick man.
BATTLE OF ALGIERS
Nor were there other attractive options. The franc had once been an important reserve currency, but it never recovered from the political and financial chaos through which France suffered after World War I. France now also had its quagmire in Algeria. The aftermath of World War II saw bloody independence struggles around the world, but the Algerian conflict was especially violent. The Algerian National Liberation Front fought not just the French army but the rival Algerian National Movement. The army, under attack at home and abroad, split into two factions, with members of one plotting to overthrow the French government. There were massacres of civilians. Cafes in Paris were bombed. Torture was used to extract information from political prisoners.9 Meanwhile successive French governments, each weaker than the last, vacillated.
The culmination in the spring of 1958 was a political crisis in which a cabal of dissident army officers seized control of Algeria to prevent it from being abandoned by an indecisive Paris. Paratroopers from the Algerian corps then landed in Corsica, taking over the island on May 24. They planned next to seize Paris and replace the government with one more firmly committed to control of Algeria. Their generals sent out the coded message to prepare for the invasion. (“The carrots are cooked. The carrots are cooked.”) In Paris, the embattled government uncoiled rolls of barbed wire on the airfields to prevent the paratroops from landing. The public was not reassured.
Seeing their support dissolving, the leaders of the Fourth Republic agreed that the war hero, Charles de Gaulle, should be recalled to power. Only de Gaulle had the authority to put down the rebellion. His personal prestige was greater than that of “any Frenchman since Napoleon,” one expert observed. (Certainly this was de Gaulle’s own view.) The great man returned to Paris from his home village of Colombey-les-Deux-Églises. Granted emergency powers for six months, he brought the army into line.
This political crisis was also a financial crisis. The cost of the war was enormous, and the French central bank was forced to directly finance the government’s budget deficit. Already in 1955–1957 the Bank of France had lost two-thirds of its reserves. The finance ministry responded by tightening import licensing requirements, restricting purchases of foreign securities, and limiting the amount of currency that residents could carry when traveling abroad.
These measures to bottle up the pressure proved inadequate, what with the Bank of France continuously pumping additional money into circulation to finance the budget deficit. On August 12, 1957, the government was forced to devalue. The country’s militant trade unions, seeing the purchasing power of their earnings eroded, were not pleased. Buying them off required sharp wage increases, which dissipated the hoped-for improvement in competitiveness.10 The first devaluation having failed, a second one, this time by 17 percent, followed in barely a year. To minimize embarrassment, de Gaulle waited until the end of December, after the presidential elections in which he had won 78 percent of the electoral college.
This second devaluation, being accompanied by budget-balancing measures, restored external balance.11 But serial devaluation was not the behavior of a major power. It was not the behavior of a grand general. De Gaulle was imperious and preoccupied by the glory of France, not to mention his own glory. Presiding over the December devaluation rankled. That the French president subsequently became preoccupied with dethroning the dollar no doubt reflected his memories of this demoralizing episode.
RELUCTANT POWERS
Germany was the one European country with a history as a reserve center and no significant balance-of-payments problems.12 But memories of the first half of the 1920s, when it had experienced one of the most extreme hyperinflations in recorded history, were still fresh. German officials reacted now in almost Pavlovian fashion to the least whiff of inflation.
The problem was that each time the Bundesbank, West Germany’s newly established central bank, tightened monetary policy with the goal of curbing inflation, its higher interest rates attracted capital from abroad, loosening credit conditions and reigniting inflation fears. To limit inflows, Germany therefore maintained restrictions on purchases of money market instruments by nonresidents.13 Even had foreigners wished to use the deutschmark for international transactions, in other words, they would have been frustrated. Germany relaxed some of its controls in the late 1950s and 1960s but tightened them again in April 1970 and May 1971 in response to renewed capital inflows. None of this made the country an attractive place for foreigners to do financial business.
There was also the absence of competition from rising powers. In the third quarter of the twentieth century, Japan resembled today’s China, an Asian nation growing three times as fast as the United States. By the 1970s it had become the second largest economy in the world, such being the miracle of compound interest. But the yen then, like the renminbi now, played essentially no international role. Neighboring Asia, where Japan sourced materials and sold manufactures, was an obvious place where the yen might have been used to quote prices, settle transactions, and serve as international reserves. But memories of Japanese colonialism and wartime brutality had not faded. Relying on the yen would have sat uneasily with the neighbors. It did not help that the most important of them, Mao’s China, was cut off commercially and financially from the outside world.
And even had there been a desire to use the yen in international transactions, Japanese policymakers would have discouraged the practice. Their priority was export-led growth. They fostered export-oriented manufacturing using a hypercompetitive exchange rate and a battery of tax breaks and subsidies. They enlisted the Export-Import Bank of Japan and the Japan Development Bank, together with their influence over other financial institutions, to channel cheap investment finance to export industries.
Internationalizing the yen would have undermined those policies. Had banks been free to take money out of the country, they could have evaded government direction to lend to domestic producers at artificially low rates. Japanese financial markets had to be placed in an airtight compartment sealed off from the rest of the world. Allowing foreigners to invest in the country, as needed for reserve-currency status, would have subverted the industrial-policy strategy of the Ministry of International Trade and Industry (MITI). And allowing a foreign demand for yen to develop would have put upward pressure on the exchange rate, negating undervaluation as a tool of economic development. In order to internationalize its currency, Japan then, like China now, would have had to abandon its tried-and-true growth model.
Eventually in the 1980s it did. Japanese policymakers sought to transform Tokyo into an international financial center and cultivate an international role for the yen. But their so-called Big Bang reforms, which removed restrictions on domestic and international financial transactions, did not work as planned. The Big Bang allowed large Japanese companies to access the corporate bond market. Seeing that they were losing their corporate clients, the banks scrambled for other customers, whom they found in real estate developers. The Big Bang spawned a massive real estate boom and bust whose consequences took years to clean up (shades of the subprime crisis). As a result, Tokyo never rose above second-tier financial-center status.
There is a lesson here for Chinese policymakers seeking to transform Shanghai into an international financial center and the renminbi into an international currency. Tread cautiously when deregulating your financial markets. Be careful for what you wish for, and be even more careful how you attempt to make that wish a reality.
BANCOR TO THE WORLD
This lack of alternatives meant that the post–World War II international monetary system was dollar based. The problem for other countries, starting with Britain, was how to limit the ability of the United States to manipulate that system to its advantage.
The war had brought Great Britain to the brink of extinction as an independent nation and substantially reduced its economic and financial power. It was clear already before the conclusion of hostilities whose economy would be strong and whose would be weak. If they were going to shape the postwar international monetary system, British officials realized, they would have to rely on the power of ideas rather than the power of their economy.
So they turned again, in 1941, to their leading idea man, Maynard Keynes. Within weeks Keynes had come up with a scheme for a global central bank that he dubbed the Clearing Union. Each country would receive a line of credit denominated in bookkeeping units known as “bancor.” Governments could use those credits to purchase imports. Countries would be prevented from running balance-of-payments deficits indefinitely by the fact that their credits with the Clearing Union were limited. But they would also be discouraged from running chronic balance-of-payments surpluses by provisions requiring them to turn over a portion of any bancor and foreign currencies they earned to the Clearing Union.
While the Keynes Plan referred generically to countries running balance-of-payments surpluses, there was no doubt whom specifically it had in mind. Everyone realized that Keynes’s charges and penalties were targeted at the United States.
American negotiators, led by Harry Dexter White, were smart enough to understand this strategy. The son of immigrant parents—his surname was an Anglicization of Weit—White had risen from modest working-class origins in Boston to the undergraduate program at Stanford, from which he graduated with distinction, and then the Ph.D. program and an assistant professorship at Harvard. In 1934, after an unhappy stint at Lawrence College in Wisconsin, he was hired as an economic analyst at the U.S. Treasury. From there he rose to assistant to the secretary.
White necessarily possessed considerable strength of intellect to rise so far so fast. His Treasury superiors described him as a man of “extraordinary energy and quick intelligence.”14 Keynes acknowledged this capacity in his typical deprecating way, referring to White as “one of the few constructive brains in the Treasury.”15 White was as strong willed—some would say stubborn—as Keynes, if less charming. He was also as well schooled in international monetary matters, having written his dissertation on the French franc between 1880 and 1913, and was more than capable of dealing with Keynes’s more technical arguments.16
White’s own plan for monetary reform, on which he began working after Pearl Harbor, substituted for Keynes’s automatic taxes only the vague possibility of sanctions against a country running chronic external surpluses.17 Keynes had proposed that countries be provided credit lines at the Clearing Union totaling $26 billion—the equivalent today would be $16 trillion, greater than the value of all goods and services produced in the United States. The Americans feared, not without reason, that the financial resources of the Clearing Union would all be used to purchase U.S. goods, forcing America to effectively give them away. White therefore reduced Keynes’s $26 billion to $5 billion.18
Most importantly, the White Plan did away with bancor, proposing instead that the Stabilization Fund (White’s name for the Clearing Union) lend national currencies deposited by governments.19 The United States would provide the single largest share of the Fund’s resources, reflecting its weight in the world economy.
These differences were then hashed out in bilateral negotiations. More accurate would be to say that the two sides eventually agreed to something closely resembling the American proposal, reflecting America’s leverage and Britain’s lack thereof. When American negotiators, in order to bring the export lobby on board, insisted that the Articles of Agreement of the new institution, now called the International Monetary Fund, include the expectation that countries would remove restrictions on the use of their currencies for import and export transactions within five years, Britain had no choice but to agree. When the Americans insisted on eliminating Keynes’s tax on countries running chronic balance-of-payments surpluses and on doing away with bancor, again it had no choice. The main U.S. concession was to raise the total resources of the Fund from $5 billion to $8.5 billion, still far below Keynes’s opening bid of $26 billion. The assent of the other allies and neutrals was obtained in July 1944 at the conclusion of an exhausting two-week conference in the leafy resort town of Bretton Woods, New Hampshire.20
MARSHALLING SUPPORT
But still unclear was how, given the limited availability of credits, countries would obtain the dollars needed to finance imports from the United States. At the end of World War II, Europe and Japan desperately needed imported food and fuel for social stability. They needed capital goods for economic reconstruction. For the time being, the United States provided them through the United Nations Relief and Rehabilitation Administration and the 1946 Anglo-American loan. But these bridging measures were of limited duration.21 And no one knew what would happen when they expired. American policymakers fancied that, with reconstruction and the peacetime reconversion of wartime armaments factories, Europe and Japan would immediately regain the capacity to earn the dollars needed to purchase imports. But in practice, reconversion took time. And exporting manufactures required first importing the capital equipment and other inputs required to produce them. The exports couldn’t come first.
The implications were alarming for Europe and Japan and more broadly for the international system. Countries short of cash might resort to exchange controls and clearing arrangements like those exploited by Germany in the 1930s. If they did, the open trading system that was a U.S. priority would be placed at risk. And from government control of imports, it might be a small step to government control of the economy.
With the intensification of the Cold War, this risk became too great for U.S. policymakers to bear. They responded with the Marshall Plan for Europe and the Dodge Plan for Japan, the first named for the World War II general appointed secretary of state in 1947, the second for the less imposing but no less influential chairman of Detroit Bank who served as special U.S. ambassador to Japan starting in 1948.
The Marshall Plan absorbed 10 percent of the federal budget in its first year. It was an extraordinary act of generosity, and Marshall was just the man to shepherd it through the Congress. The secretary was an exemplary citizen-soldier. His father had fought in the Civil War as a member of the Augusta, Kentucky, Home Guard, and the young Marshall had spent virtually his entire adult life in the military. By 1947 he had become a trusted public figure. He exuded the self-discipline and analytical rigor of a consummate military man. As army chief of staff from 1939 to 1945, Marshall never minced words about the need for personal sacrifice when testifying before Congress. Now, in 1947, he was characteristically blunt about the need for sacrifice to keep Europe in the Western camp.
Dodge, no kin of the automotive family of the same name, was a banker who had worked in Frankfurt and Berlin as financial advisor to the U.S. military government of Generals Dwight Eisenhower and Lucius Clay. Moving to Japan, he briskly applied the lessons he had learned in Germany.
The Marshall and Dodge Plans provided dollars to finance the imported inputs needed to get exports going again, averting the danger that countries would be forced to resort to barter. In this sense the Marshall and Dodge Plans saved the Bretton Woods System and, by implication, the international role of the dollar. Not without reason, some observers have referred to the post–World War II international monetary system not as Bretton Woods but as the “Marshall-Dodge fixed-rate dollar standard.”22
To accumulate reserves, countries must run trade surpluses—something that Europe and Japan were in no position to do in the second half of the 1940s—or else the reserve currency country must lend and invest abroad. After hesitating, the United States provided other countries with dollars through the Marshall and Dodge Plans. China today, like the United States in the 1940s, is running trade surpluses but also seeking to encourage wider international use of its currency. For other countries to get their hands on renminbi, China will therefore have to lend and invest abroad. That this lending and investment is something we are now beginning to see is an indication that Chinese officials know what’s up.
HEDGEHOG’S DILEMMA
The defeated powers Germany and Japan were the most successful at exporting and acquiring dollars. Denied foreign policy ambitions, they focused on growing their economies. Britain still had its empire. France had Algeria, which many Frenchmen regarded as an integral part of the French nation. Both countries had overseas commitments creating budgetary burdens. Both complained of the difficulty of acquiring the dollars needed to purchase foreign goods.
In the 1940s it had been possible to argue that the immensity of U.S. economic power, combined with the severity of postwar economic problems in other countries, made it impossible for them to obtain dollars without American help. Come the 1950s, however, Germany had shown that by investing and cutting costs it was possible to restart the export engine and accumulate all the dollars that might be required. This was something at which France also eventually succeeded by devaluing the franc, balancing its budget, and extricating itself from Algeria. It was something at which Britain only finally succeeded in the 1980s with the advent of Margaret Thatcher.
The point is that, sick men like Britain notwithstanding, by the end of the 1950s the dollar shortage was over. This was not an entirely happy development. The Bretton Woods arrangements had assumed that the dollar was as good as gold. The fact that the stock of foreign-held dollars was now poised to exceed U.S. gold holdings thus posed a threat to the system. It exposed the United States to the equivalent of a bank run if foreign holders all rushed to exchange their dollar claims for gold at the U.S. Treasury’s teller’s window, known colloquially as the “gold window.” American monetary liabilities to foreigners first exceeded U.S. gold reserves in 1960.23 It was no coincidence that the first serious episode of speculation against the dollar was in the second half of that year.
These problems should not have come as a surprise. There was an obvious flaw in a system whose operation rested on the commitment of the United States to provide two reserve assets, gold and dollars, both at a fixed price, but where the supply of one was elastic while the other was not. The Belgian-born economist Robert Triffin had warned of this problem in 1947 in a study for the Federal Reserve Board.24 The short, round-faced Triffin was a hedgehog rather than a fox. He knew this one big thing and wrote of it virtually to the exclusion of all else, as an economist at the Organization for European Economic Cooperation (the forerunner of today’s OECD) and then as professor at Yale University. He did this so single-mindedly that his name became synonymous with the problem.
The Triffin Dilemma was that if the United States refused to provide dollars to other countries, trade would stagnate and growth would be stifled. But if the United States did provide an unlimited supply of dollars, lubricating growth and trade, confidence in its commitment to convert them into gold would be eroded. Eventually there would be a run on U.S. gold stocks, destroying the country’s ability to maintain the $35 gold price. Or the United States might preemptively abandon its obligation to pay out gold at a fixed price. Either way the gold-dollar system was doomed. Triffin’s solution was to create an artificial unit along the lines of Keynes’s bancor that governments would be obliged to accept in international transactions. But, as of the early 1960s, he had few takers.
There is an evident analogy with the situation linking the United States and emerging markets like China and India in the early twenty-first century. The rapidly growing catch-up economies, Europe and Japan in the 1960s, emerging markets today, found themselves accumulating dollars almost despite themselves. Then as now they worried whether those dollars would hold their value. Then as now their worries created the danger of a disorderly scramble out of dollars that might destabilize financial markets.
The main difference today is that there are alternatives to the dollar in the form of the euro and, prospectively, other currencies. This creates at least the possibility of a smooth transition as foreign central banks and governments gradually diversify their reserves. If central banks and governments want to hold more euros, the European Central Bank can supply them.25 Since the euro and the dollar float against one another, this shift can be accompanied by a gradual adjustment in the relative price of the two currencies. The dollar can decline against the euro without threatening the stability of financial markets and the international system.
INTO THE DEEP END OF THE POOL
Not so in the 1960s. With other countries lacking deep and liquid financial markets open to the rest of the world, gold was the only alternative to the dollar. And newly mined gold was in short supply.26
If countries worked together, however, they might buy time. If countries holding dollars agreed not to convert them into gold, the system might be preserved while a permanent solution was sought. But there was an obvious incentive to convert one’s dollars into gold while others were exercising restraint. And since everyone was aware of this possibility, there was a temptation to cheat.
In 1961 the United States sought to address the problem by proposing an arrangement, the Gold Pool, in which other countries agreed to hold onto their dollars and reimburse the United States for half of its gold losses.27 Charles Coombs, vice president of the New York Fed, negotiated it on behalf of the administration. The Gold Pool was a blatantly asymmetric arrangement in which all the transfers went one way. It was an indication of the extent to which the structure of the system had other countries over a barrel.
The Gold Pool was a happy arrangement so long as there was no need to activate it. And through 1964 there was no need owing to large amounts of Soviet and South African gold flowing onto the market.28 But starting in 1965, supplies fell off. The members of the pool now had to sell gold to prevent its dollar price from rising on the London market. They had to reimburse the United States for half its losses. What had been a commitment in theory now had actual implications in practice. Italy began offsetting its contribution to the Gold Pool’s sales in London by converting dollars into gold in the United States. France, never a friend of the dollar, dropped out of the pool in early 1967, a fact disclosed by Paul Fabra, a financial journalist for Le Monde, in what was presumably a strategic leak by de Gaulle’s government.
All this heightened the urgency of a permanent solution. French leaders, more than a little anachronistically, advocated returning to a system in which gold alone was used to settle accounts. In doing so they drew inspiration from the impassioned writings of Jacques Rueff, a long-standing champion of the gold standard. Rueff had worked at the Bank of France in the 1930s, rising to the rank of deputy governor before being dismissed under the Vichy government’s anti-Semitic laws. He had firsthand knowledge of the gold standard’s operation, although how he could have seen France’s unhappy experience then as something to be emulated now is another matter. One explanation is that, as a follower of the Austrian economist Ludwig von Mises, Rueff was an ardent opponent of government interference with the market and viewed the gold standard as a guarantee that governments would not tamper with the monetary system. Another is that he was phobic about inflation, having lived through France’s high inflation in the 1920s. Indeed, the young Rueff had advised the prime minister, Raymond Poincaré, on how to bring that inflation under control. His advice of budget cuts and one last devaluation did the trick.
As an opponent of economic planning, Rueff had been banished to the wilderness following World War II. But when de Gaulle returned to power and the problem became how to rein in inflation, the General called on Rueff to draft a stabilization plan. Once more he recommended budget cuts and one last devaluation, and once more the strategy worked. As a result of this triumph, Rueff acquired de Gaulle’s ear. He also acquired the public’s, which he bent by publishing some eighty-five articles on monetary matters in the course of the 1960s. When de Gaulle attacked the dollar at a press conference in early 1965, castigating the Bretton Woods System as “abusive and dangerous” and arguing that the world should return to a gold-based system, he was channeling Rueff. It did not hurt that Rueff’s arguments resonated with de Gaulle’s insistence that France should not take a back seat to any country, monetarily or otherwise.
But here, in fact, French leaders were engaging in the same kind of wishful thinking to which American policymakers succumbed in 1946. It was not clear, given limited gold production, where under a gold-based system the world would obtain the reserves needed to support an expanding volume of trade and investment. Rueff suggested raising the price of gold, but this ignored the danger that doing so once might create expectations that governments would do so again, encouraging gold hoarding and other destabilizing consequences. Raising the price of gold would reward countries—such as, not entirely coincidentally, France—that had done the most to undermine the system by converting their dollars. Raising the gold price would also create a windfall for the Soviet Union and South Africa. Predictably, Pravda applauded de Gaulle’s comments attacking the dollar.
The French position reflected a peculiar reading of history that ignored the fact that a pure gold-based system had not existed for the better part of a century, neither under the pre–World War I gold standard nor its interwar successor. Under both arrangements central banks had found it necessary to supplement their gold reserves with foreign bonds and bank deposits, including, it should be noted, French bonds and deposits. The French proposal may not have been realistic, but it nonetheless stood in the way of reaching agreement on an alternative.
Germany, not much more realistically, simply sought to preserve the existing system. Anxious to enhance its image as a loyal member of the Western alliance, it prioritized cooperation with the United States. Some German officials were less than enamored of the Americans. Karl Schiller, the moody professor who became economics minister in 1966, objected to the United States exploiting its security leverage and urged following de Gaulle’s example of selling dollars. For most German politicians, however, the security argument for cooperating dominated. Then there was the fact that the Bundesbank was a large holder of dollars as a result of Germany’s chronic surpluses. For the German central bank then, like the Chinese central bank now, any international monetary initiative that downgraded the role and reduced the value of the dollar would have had costly financial consequences.
British policymakers generalized from the decline of sterling; they saw the dollar as next in line. As early as 1962 they proposed supplementing and ultimately replacing national currencies as reserves with a synthetic unit along the lines of Keynes’s bancor. This new unit, they suggested, could be introduced by exchanging it for national currencies already in the possession of central banks. This might have the ancillary benefit, from the British point of view, of removing the overhang of sterling in official hands and eliminating the possibility that it might all be sold off in a sudden panic on news of economic problems. But a financially weak Britain was in no position to drive the debate.
THE VINEYARDS OF INTERNATIONAL FINANCE
The outcome thus hinged on the U.S. position. The problem was that there was no U.S. position. Lacking other ideas, American officials simply restated their commitment to the $35 gold price. They resorted to scattershot tactics to strengthen the U.S. trade accounts. They instituted a Buy American policy for the Defense Department and tied U.S. foreign aid to purchases of American goods. They imposed a tax on foreign interest income and arm-twisted U.S. firms not to invest abroad. The difficulty with these expedients, aside from the fact that they distorted international markets, was that preventing the United States from running current account deficits and investing abroad also prevented other countries from acquiring reserves. It just shifted the world from one horn of the Triffin Dilemma to the other.
Thus, the only real alternative to abandoning the system was to take up Britain’s call for “paper gold.” Already in 1960, in advance of his inauguration, President-Elect Kennedy had appointed a task force to study the dollar problem. Professor Triffin was a member of this task force and did not hesitate to inject his proposals for a synthetic reserve unit.
But Douglas Dillon, the hardheaded ex-banker who served as Kennedy’s treasury secretary, had little patience for Triffin’s ideas. Dillon was former chairman of the investment bank Dillon, Read and son of the firm’s founder, Clarence Dillon. He had moved from banking to diplomacy and from there to policy by virtue of having been a major contributor to Eisenhower’s 1952 presidential campaign.
Eisenhower had first appointed Dillon ambassador to France, a position for which he was qualified mainly by the fact that his family owned the Haut-Brion vineyard. The French were less reassured by this investment in terroir than they were disturbed by Dillon’s lack of fluency in their language. Subsequently Dillon served as undersecretary of economic affairs and of state under Eisenhower, where he distinguished himself. Contrary to the silver-spoon presumption, he was a quick learner and a stickler for detail; by the time his stint in Paris ended in 1957, his rudimentary French had become quite good. So Kennedy plucked him from Eisenhower’s cabinet to reassure the markets and make good on a commitment to appoint a bipartisan cabinet.29 The two men had much in common: both were Harvard graduates, both had been naval officers during World War II, and both were sons of nouveau riche Wall Street wheeler-dealers.30 Dillon assured the president-elect that if he disagreed on an important matter of policy he would resign without causing a row. Kennedy was fully aware that Dillon had been a large contributor to the Nixon campaign. The choice thus spoke volumes about the need for a treasury secretary with investment banking experience and the gravitas to calm the markets.
Under Dillon, U.S. dollar policy had three straightforward elements. First, foreign governments should pay more of the costs of U.S. troops stationed in Europe. Second, the United States should use taxes and regulation more aggressively to support its currency. Third, the Europeans would be arm-twisted not to sell their dollars for gold. As for Triffin’s ambitious academic schemes, the nicest thing Dillon had to say was that they “weren’t very practical.”
With exceptional amounts of Soviet and South African gold flowing onto the market in 1963–1964, this approach sufficed for a holding action. But starting in late 1964 and especially after de Gaulle’s press conference in 1965, confidence began to ebb. Ten industrial countries (dubbing themselves, not very creatively, the Group of Ten) formed a committee to weigh proposals for reforming the system. There was a consensus on the need for change, but not much else. The French proposed issuing paper claims that governments would treat as equivalent to gold. This was essentially an effort to achieve the French objective of raising the price of gold, but through the back door. The maneuver would take place outside the IMF, which the French saw as dominated by the Anglo-Saxons. Other countries proposed instead working through the IMF by expanding the ability of countries to borrow from the Fund and in that way satisfying their need for additional reserves.
But in the absence of agreement, there was an inability to act. The report of the Group of Ten, published in the summer of 1965, concluded only that there existed “a range of views” on what to do.
WHITE HORSE WITH BLACK STRIPES
In April 1965 treasury secretary Dillon was succeeded by his undersecretary, Henry Fowler. The son of an engineer on the Norfolk & Western Railway, the folksy Fowler styled himself a country lawyer, given to drinking root beer at sit-downs with the president. Unlike Dillon, he spoke no foreign language and had little experience in international finance. He did not initially enjoy the respect of his foreign counterparts. On an international swing designed to introduce him to the Europeans, the French finance minister, ValĂ©ry Giscard d’Estaing, pointedly failed to meet him at Orly Airport.
But Fowler quickly acquired definite views. He was skeptical that the gold-dollar system could be maintained. To Giscard and then the other Europeans, he indicated a willingness to discuss international monetary reform. The resulting discussions proceeded on two tracks: one via yet another Group of Ten study group, this one under Otmar Emminger, the no-nonsense vice president of the Bundesbank, the other in the Executive Board of the IMF. Fowler signaled his willingness to contemplate the creation of a new reserve asset. France, finding that its proposal for an increase in the gold price received no support from Germany or other European countries, reluctantly agreed.
In August 1967 finance ministers finally recommended that the IMF be authorized to issue bookkeeping claims called Special Drawing Rights (SDRs for short) to supplement gold and dollar reserves. The term “special drawing rights,” substituted for “reserve drawing rights” at the insistence of the French, supposedly indicated that the new unit was a loan, not a currency. Since it was subject to repayment, the French reassured themselves, it would not be inflationary. Experts like Emminger dismissed the distinction. “What difference does it make?” he asked. “Is a zebra a white horse with black stripes or a black horse with white stripes?”
The SDR was linked to gold at a value equal to one U.S. dollar.31 The new unit would be allocated to IMF members in proportion to their existing rights to borrow from the Fund. Governments would be obliged to accept these bookkeeping claims from other governments and in transactions with the IMF itself. Through the periodic issuance of SDRs, the IMF could now provide countries with the additional reserves they needed to support their expanding trade and payments without adding to the overhang of dollars. Secretary Fowler, for whom the agreement was a personal triumph, hailed it as “the most ambitious and significant effort in the area of international monetary affairs since Bretton Woods.”
There were just two problems. First, SDRs were not very useful, since they were acceptable in transactions only with other governments and the IMF itself. Governments could not use them in transactions with private parties. Second, members holding 85 percent of voting power in the IMF had to agree before any SDRs were issued. France insisted on this provision to protect against what it saw as the danger of excessive liquidity creation. It assumed that with the Europeans voting as a bloc and possessing more than 15 percent of votes in the Fund, they could avert this danger. And different countries for different reasons hesitated to support issuance on a significant scale. France wanted to ensure that issuing SDRs, in relieving the pressure on the dollar, did not also relieve the pressure on the United States to cut its external deficit. Germany worried about the inflationary consequences. Developing countries argued that SDRs should be allocated to countries with the most need, namely themselves. As a result, the amendment to the Articles of Agreement under which SDRs could be created was only formally agreed to in May 1968, and the SDR facility was only finally activated in January 1970. It was too little, too late.
DOMINOES
It was too late because Britain’s chronic balance-of-payments problems had already come to a head. The August 1967 agreement to create the SDR was followed just three months later by a sharp devaluation of the pound. Britain’s troubles resulted from wage increases and the Arab-Israeli War, which led to the closure of the Suez Canal, disrupting international trade and raising the price of oil—and not incidentally leading oil-exporting countries in the Middle East to move funds out of sterling, given that Britain made no secret of its support for Israel. Occurring against the backdrop of a chronically uncompetitive industrial sector in Britain, these events made devaluation unavoidable.
The decision was announced on a cold and foggy Saturday when the markets were closed. Most Britons learned of it courtesy of the BBC, which, enjoying its broadcasting monopoly, was recycling Midnight Lace, a stale Doris Day thriller; it interrupted the film to announce the less than thrilling news of a 14 percent reduction in the currency’s value. “I am quite shocked,” Sir Patrick Hennessy, chairman of Ford Motor Company’s British operations, told the press. “I have personally told my business friends abroad that it would not happen.”32
Sterling now mattered less than in 1931, but it still mattered enough for its devaluation to raise questions about the dollar. Another de Gaulle press conference in which the General alluded to the possibility that sterling’s devaluation might topple the dollar did not help. The price of gold shot up.
Obliged as they were to drive it back down, the remaining members of the Gold Pool sold $1 billion of gold in November and another $1 billion in December. By March U.S. gold losses were running half a billion dollars a day. One Swiss bank reportedly had to strengthen its vault to contain all the privately held gold that was flooding in.
Out of options, on Thursday, March 14, U.S. Treasury officials telephoned their British counterparts, requesting that they shut the London gold market. Sterling may no longer have been a first-class international currency, but one legacy of its earlier status was that London was still the main place where gold was traded. Closing the market required a proclamation by the queen. Although it was almost midnight, Prime Minister Harold Wilson rushed to Buckingham Palace, where he obtained the consent of Queen Elizabeth to close the market.
The United States then called an emergency meeting of Belgium, Britain, Germany, Italy, and the Netherlands, the remaining members of the Gold Pool. France should not have been offended, since it had already terminated its participation in the arrangement. But de Gaulle was characteristically piqued; he pointedly kept the Paris Bourse open while the London gold market was closed.
After two days of tense negotiations, U.S. and European officials agreed to a scheme devised by Italian central bank governor Guido Carli for a “two tier” gold market. Carli’s opinions carried weight; more than two decades earlier, at the precocious age of thirty, he had represented Italy at the Bretton Woods Conference. From there he went on to serve on the Executive Board of the IMF and, in Italy, at the foreign trade ministry, treasury, and central bank. He was widely respected for his candle power. His characteristically clever scheme divided transactions into a market tier on which the price of gold was free to fluctuate and an official tier where central banks would transact with one another at the official $35 price. Central banks were now relieved of having to devote real resources to the futile quest to keep the market price of gold from rising. At the same time, the dollar’s link to gold at the official $35 price, and therefore the entire Bretton Woods apparatus, remained in place.
President Johnson, echoing the comments of his treasury secretary the previous year, hailed the provisions of the two-tier gold market as “the most significant reforms of the international monetary system since Bretton Woods.”33 Not everyone was impressed. Leonid Brezhnev gleefully saw the decision as signaling “the beginning of the devaluation of the United States dollar” and “the possibility of a profound crisis of the capitalist system.”
Others, while not sharing Brezhnev’s sense of schadenfreude, similarly questioned the viability of the arrangement. They understood that governments would be tempted to buy gold from the United States at $35 an ounce and sell it for a higher price on the market. The only thing restraining them was fear of the unknown. If there was a run on U.S. gold reserves, rupturing the link between gold and the dollar, no one knew what kind of U.S. monetary policy would follow. No one could anticipate the implications for the international system.
Fear of the unknown was then trumped by fear of the known. With the election of Richard Nixon to the presidency in 1968, U.S. policies became increasingly unilateral and inflationary. Nixon saw no reason to cooperate with other countries. Rather, he sought to manipulate the system to maximal U.S. advantage and to free American foreign policy from financial constraints. Instead of negotiating, he adopted “bullying tactics” to get other countries to hold dollars.34
Nixon selected former Texas governor John Connally to play “bullyboy on the manicured playing fields of international finance” (Connally’s words). Nixon reached across party lines when appointing Connally as treasury secretary, just as Kennedy had reached across party lines when appointing Dillon. That Connally was a longtime sidekick and onetime campaign manager of Nixon’s predecessor LBJ made the appointment startling. It came into focus when it was learned that Connally, while nominally stumping for Hubert Humphrey in the 1968 presidential campaign, had helped to identify oil and gas titans who might contribute to the Republican candidate.
The appointment reflected the president’s fascination with Connally, who cut the kind of dashing figure to which Nixon himself could never aspire. The former Texas governor was tall and handsome, with wavy white hair. Having been a thespian in school, he could be smooth and articulate. Like any actor, he was a publicity hound. He saw the treasury as a platform from which he could advance his presidential aspirations.
Better even a global than a national stage. Although international finance was not his strong suit—the oil depletion allowance was more like it—in May 1971 Connally turned down a request to testify before Congressman Wilbur Mills’s powerful House Ways and Means Committee in favor of a speech to an international monetary conference in Munich because he thought he could earn political points by attacking the Europeans on their own turf. “Considerations of friendship,” he warned them, were no longer enough for the United States to carry Europe’s water. The dollar problem would have to be solved by European countries assuming more of the U.S. defense burden and opening further to U.S. exports. If they didn’t, Connally continued, they would be subject to whatever policies the U.S. chose to enact.
Nixon’s foreign policy adviser Henry Kissinger warned privately that Connally’s scare tactics might backfire. And as Kissinger had predicted, at the Bank for International Settlements in June, European central bankers objected to both the tone and the content of Connally’s speech. When news of the clash leaked to the market, the ongoing drain of gold from the Treasury accelerated. On August 13 Britain, seeking to move before it was too late, asked the United States to convert some of its dollars into gold. This was the last straw; left with no alternative, the United States suspended the conversion of dollars into gold, blaming “international speculators.” To make this look like an assertion of strength, Nixon dressed it up as a New Economic Program complete with tax cuts and a 90-day wage and price freeze. There was also a temporary 10 percent surcharge on imports designed to ensure that “American products will not be at a disadvantage because of unfair exchange rates.” In other words, the surcharge was intended to ensure, now that exchange rates were going to be adjusted, that they would be adjusted to U.S. advantage.
This abrupt, unilateral action was “hardly designed to win friends, or even to influence people, abroad,” in the words of the investment advisor Peter Bernstein.35 The 10 percent surcharge, in particular, won the United States no friends. But it placed other countries over a barrel. At the next meeting of finance ministers, Connally demanded to know what concessions the Europeans were prepared to offer in return for the United States dropping the surcharge and then, theatrically cupping his hand to his ear, observed, “I don’t hear any suggestions.”
The tactic was effective. With the stick of the surcharge, the United States was able to obtain, at a conference at the Smithsonian Institution in December, a new set of exchange rates that amounted to a significant devaluation of the dollar. The result was packaged as a revaluation of foreign currencies in a not very effective sop to U.S. prestige. One-upping his predecessor’s rhetoric in 1968, Nixon called it “the most significant monetary achievement in the history of the world.”36
But while the exchange rates were now different, the system was otherwise the same. Other currencies were still pegged to the dollar, the only difference now being that the U.S. Treasury no longer stood ready to convert dollars into gold for foreign central banks and governments. Nothing prevented the United States from running whatever policies it chose, a prospect that understandably alarmed countries pegging to its currency.
The danger materialized soon enough. Nixon blamed his defeat in the 1960 presidential election on the Federal Reserve’s tight monetary policy, which had depressed the economy. With the 1972 election approaching, he pressured the Fed under Arthur Burns to pump up the money supply. “Err toward inflation,” Nixon instructed him in a meeting at the White House.37
Burns was not accustomed to unsolicited advice. A former senior Columbia University professor who had tutored, among other students, Alan Greenspan, he was convinced of the appropriateness of the prevailing policy.38 But Nixon was not done. He hinted at legislation that would have allowed him to pack the Federal Reserve Board as FDR had attempted to pack the Supreme Court, and had Charles Colson, subsequently of Watergate fame, plant stories with United Press International about Burns lobbying for a pay increase.39 Burns in fact had only suggested that future Fed chairmen receive higher salaries so that they would be on an even footing with their European counterparts. But this was not the way the story was spun. So Burns goosed the money supply. Inflation accelerated. Pressure on the dollar intensified.
Clearly, something had to be done. So another committee was formed. The Committee of Twenty (with one finance minister or central banker for each of the twenty country groupings represented on the board of the International Monetary Fund) sought to reconcile the desire for exchange rate stability with the need for currencies to move against the dollar. Its proposal for an oxymoronic system of “fixed but adjustable rates” went nowhere. In the spring of 1973, in the midst of its work, another run on the dollar commenced, and the new set of exchange rates so laboriously agreed to at the Smithsonian collapsed.
The Committee of Twenty blithely continued work for another year before abandoning its deliberations. An interesting aspect of that work was the Report of the Technical Group on Indicators, which described how a specific set of indicators (the change in international reserves and the trade deficit or surplus) might be used to introduce “symmetry into the adjustment process.” This was code for the need to compel adjustment by chronic surplus countries, in this case Germany. The discussion paralleled the present debate over whether some kind of international mechanism should be created to compel China to appreciate its currency. It is thus worth recalling that that earlier discussion went nowhere.
QUELLE SURPRISE
None of this—not the devaluation, not the import surcharge, and not the inflation—enhanced the stature of the dollar. “The dollar is regarded all over the world as a sick currency,” read Leonard Silk’s lede in an article in the New York Times, which appeared, not without irony, on July 4, 1973. “Once upon a very recent time,” Time wrote, “only a banana republic would devalue its money twice within 14 months.” Parallels were drawn with sterling’s decline as an international currency. “For someone who spent the 1960s in England,” wrote the academic Emma Rothschild in the New York Times, “the decline of the dollar is like coming home.” Other currencies that were revalued in 1971–1973 were seen as increasingly serious rivals. There were widespread predictions of the dollar’s demise as the dominant unit in international transactions. The conventional wisdom, in other words, sounds remarkably familiar to modern ears.
It was anticipated that rivalry for reserve currency status would grow increasingly intense. With the shift to flexible exchange rates in 1973, it was thought that countries would need fewer reserves. Now that exchange rates were flexible, a shock to the balance of payments could be met by letting currencies adjust. No longer would central banks have to hold the currencies of others in order to intervene in the foreign exchange market.
What followed was therefore a surprise—two surprises, actually. The first one was that there was no decline in the demand for reserves. A series of studies found that countries, after shifting to flexible exchange rates, held the same or even more reserves. The explanation was simple: a floating exchange rate did not mean a freely floating exchange rate. Countries intervened when they concluded that the exchange rate had strayed too far from its fundamental value, and they came to this conclusion not infrequently. Their intervention required reserves—even more reserves given the continued expansion of trade and capital flows.
The second surprise was that there was no shift away from the dollar. Volatility there was in the share of dollars in foreign exchange reserves in the 1970s, but no secular decline. The dollar’s share of total identified international reserves remained close to 80 percent in 1977, as the United States pumped out dollars and the members of the Organization of Petroleum Exporting Countries (OPEC), having jacked up oil prices, parked their earnings in New York.40
STRONG DOLLAR POLICY
It was not the collapse of the dollar peg in the early 1970s so much as the subsequent inflation and mounting unease over the conduct of American monetary policy that precipitated movement away from the dollar. Consumer price inflation rose in every year of the Carter presidency, which did not make holding dollars attractive. Fears of U.S. intentions were fanned by statements by Treasury Secretary Michael Blumenthal in the summer of 1977 that the dollar was too strong, the implication being that the secretary favored depreciation.
When Blumenthal’s “open mouth policy” caused the dollar to sag, the Arabs began muttering about using another currency when setting oil prices. The chastened secretary flew off to the Middle East to reassure them. The Europeans, seeing their currencies rise and their exporters squirm, reacted with fury. Blumenthal, the Frankfurter Allgemeine Zeitung wrote, was playing a “selfish, risky game that shows little responsibility toward the world economy.” With the situation threatening to spiral out of control, Blumenthal reversed course and announced that he believed in a strong dollar. It would be a long time before a U.S. treasury secretary would again be sufficiently courageous—or reckless—to say otherwise.
Arthur Burns, still Fed chairman, had been among those who blew a fuse over what he perceived as Blumenthal’s attempt to debase the currency, leading Blumenthal in turn to lobby against Burns’s reappointment. In this he was too successful: Burns was succeeded in March 1978 by G. William Miller, a slight, likeable Oklahoman whose command of the nuances of monetary policy was less than complete. Miller, the son of a storekeeper, had grown up in a town so small that it had no jail; as he described it, prisoners were simply chained to a log. He was given to telling bad jokes and laughing so hard that he botched the punchline. But he was also a force of nature: he had graduated from the Coast Guard Academy and the law school of the University of California, Berkeley and built a medium-sized textile manufacturer, Textron, into a giant conglomerate that produced Homelite chain saws, Speidel watchbands, and the Bell UH-1 helicopters, or Hueys, that were the workhorses of the Vietnam War.
Miller was a passionate advocate of equal employment for minorities, which may explain his single-minded pursuit of full employment. Basically he thought that the Fed should pursue employment growth to the exclusion of other goals. He denied that monetary policy could be effective in restraining inflation. Fighting inflation was first and foremost the responsibility, he insisted, of other branches of government. In the spring of 1979, with inflation continuing to accelerate, economists as ideologically diverse as the conservative consultant Alan Greenspan and the liberal Brookings Institution fellow Arthur Okun called for tighter money. Miller resisted, fearing that raising rates would squelch employment growth. Blumenthal and Charles Schultze, the head of Carter’s Council of Economic Advisors, their efforts at private persuasion having failed, were driven to leaking their complaints about Miller’s inaction to the press, in turn driving the president to ask them to desist.
Predictably, the dollar resumed its decline. Foreign currencies became so expensive that U.S. troops stationed in Europe had trouble making ends meet. NATO chief Alexander Haig reported that sympathetic West Germans were giving his soldiers care packages of food and cigarettes.
Complaints mounted about U.S. policy and the losses to which it exposed foreign holders of dollars. OPEC again discussed the possibility of pricing oil in another unit. Saudi Arabia and other members of the cartel made noises about moving their reserves into other currencies. Since doing so might weaken the dollar, their noises raised concerns that other countries might move preemptively in order to avoid ending up holding the bag, making talk of a dollar crash self-fulfilling.
Consideration was therefore given in late 1978 to creating a Substitution Account at the IMF through which dollar reserves could be exchanged for SDRs in an orderly fashion. The idea foundered on the question of who would take the losses if the dollar depreciated. If the answer was the IMF, then establishing the account was tantamount to transferring the risk of losses from some IMF members to others—from those holding lots of dollars to those holding few. If the answer was the United States, which would be asked to guarantee the holdings of the account (as the UK had been asked to guarantee the reserves of the sterling area after its 1967 devaluation), then the United States would incur very significant additional obligations. And this clearly was not something that the United States was willing to do.
Support for a Substitution Account evaporated in any case when Paul Volcker replaced Miller at the Fed in August 1979 and his tight-money policies caused the dollar to strengthen.41 As Nixon’s undersecretary of the treasury for international monetary affairs, Volcker had already been involved in two devaluations of the dollar, in 1971 when the Bretton Woods System collapsed, and in 1973 when the Smithsonian Agreement came apart. In 1973, as the Treasury’s most conspicuous secret agent, he had flown 31,000 miles in five days, shuttling between Tokyo, Bonn, and other capitals in a vain effort to salvage the agreement.42 Given his 6-foot, 7-inch frame, the German press immediately identified him on the streets of Bonn and exposed his supposedly secret mission. Volcker had no desire to oversee another unsuccessful currency adjustment. Having been in and out of the Federal Reserve System since 1952, he was Miller’s opposite not just physically but in his knowledge of domestic and international finance. As president of the Federal Reserve Bank of New York and therefore a member of the Federal Open Market Committee, he had already voted twice, in defiance of Miller, for raising interest rates.
For Carter, desperate now to support the dollar and restrain inflation, Volcker was the man. On cue, the Federal Open Market Committee under Tall Paul raised interest rates.43 The dollar recovered, causing talk of a Substitution Account to wither. But the proposal had already hit the rocks over the question of who would bear the exchange risk. The United States, in particular, was unwilling to see discussions continue, fearing pressure for it to guarantee the value of the dollars held in such an account. This is important to recall now that the idea of a Substitution Account through which countries like China might exchange their dollars for SDRs is again in the air.
THE DOLLAR ENDURES
Yet there was no migration away from the dollar. OPEC talked about pricing oil in a basket of currencies but did nothing. Nor was there active movement by central banks and governments into other currencies.44 Only Iran, where the revolution and hostage crisis created high tension with the United States, significantly altered the composition of its reserves.45 In 1977–1980, when there was the most talk about the dollar losing its exorbitant privilege, the main thing accounting for its declining share of global reserves was that other currencies became more valuable as the dollar depreciated, not that central banks sold what they held. The share of dollars then stabilized after Volcker took over at the Fed and the currency strengthened.46
The dollar’s continued dominance surprised many observers then, but it is hardly surprising now. The United States was still the world’s largest economy. It was still the leading trading nation. It still had the deepest financial markets. The deutschmark, its main rival, was the currency of an economy only a fraction of its size. In any case, Germany was not a big supplier of the financial securities attractive to central banks and other foreign investors, because its government budget was balanced and its financial system was bank based. Since the early 1970s the German authorities had required prior approval for sales of domestic fixed-income securities to nonresidents. They had raised reserve requirements on foreign-owned bank deposits to discourage capital inflows that might fuel inflation. When in 1979 Iran threatened to convert its dollar reserves into deutschmarks, the Bundesbank warned it off, fearing that capital inflows would swell the money supply and stoke inflation. It made clear that it would do whatever it took to discourage central banks and governments from accumulating deutschmarks. The United States, as a larger economy, could provide the international reserves required by other countries “without having its economic policy damaged by the fluctuations of capital flows,” the Bundesbankers observed.47
None of this made the deutschmark attractive for international use. The share of foreign exchange reserves in deutschmarks hovered below 15 percent all through the 1980s.
Nor were there other options. The UK, with its history of inflation and subpar growth, was in the early stages of a Thatcher experiment whose ultimate success remained uncertain. France suffered from slow growth, high unemployment, and financial problems that it sought to bottle up by tightening controls on international capital flows, further diminishing international use of its currency.48
The new player was Japan, whose share in global reserves had risen in the 1970s. That Japan was now an important trading nation and that everyone expected the yen to strengthen made it an obvious currency to add. But Japanese bond markets were small. The yen accounted for only 8 percent of total global reserves at its peak, which came in 1991. From there, Japan descended into an economic and financial funk, and the importance of the yen as a reserve and international currency descended with it.
THE MORE THINGS CHANGE
When the IMF economist George Tavlas surveyed this landscape in 1998, he noted that, notwithstanding talk of a tripolar yen-deutschmark-dollar world, the dollar still dominated international transactions.49 Petroleum prices were set in dollars. Other commodity prices were quoted in dollars. Two-thirds of all international bank loans were denominated in dollars. Forty percent of international bond issues marketed to foreign investors were in dollars.50 Dollars still accounted for more than 60 percent of total identified official holdings of foreign exchange. The dollar’s dominance remained an established fact.
This period was also when there was talk of a “new economy” and of whether America’s surging stock market signaled the advent of a cluster of high-tech innovations on which the country was singularly well suited to capitalize. The idea that the United States was set to outperform a rigid Europe and a depressed and deflated Japan bred confidence that the dollar would remain the dominant international currency. And new economy or not, the dollar’s dominance was supported by a lack of alternatives. The greenback was the predominant international currency, if for no other reason than by default.
But the time for celebration would be brief. For already movement was afoot to create what would constitute, for the first time in fully seven decades, a serious rival.
But what is true of illicit transactions is true equally of legitimate business. The dollar remains far and away the most important currency for invoicing and settling international transactions, including even imports and exports that do not touch U.S. shores. South Korea and Thailand set the prices of more than 80 percent of their trade in dollars despite the fact that only 20 percent of their exports go to American buyers. Fully 70 percent of Australia’s exports are invoiced in dollars despite the fact that fewer than 6 percent are destined for the United States. The principal commodity exchanges quote prices in dollars. Oil is priced in dollars. The dollar is used in 85 percent of all foreign exchange transactions worldwide. It accounts for nearly half of the global stock of international debt securities.2 It is the form in which central banks hold more than 60 percent of their foreign currency reserves.
This situation is more than a bit peculiar. It made sense after World War II when the United States accounted for more than half of the combined economic output of the Great Powers.3 America being far and away the largest importer and main source of trade credit, it made sense for imports and exports to be denominated in dollars. Since the United States was the leading source of foreign capital, it made sense that international financial business was transacted in dollars. And with these same considerations encouraging central banks to stabilize their currencies against the dollar, it made sense that they should hold dollars in reserve in case of a problem in foreign exchange markets.
But what made sense then makes less sense now, when both China and Germany export more than the United States. Today the U.S. share of global exports is only 13 percent. The United States is the source of less than 20 percent of foreign direct investment, down from nearly 85 percent between 1945 and 1980.4
These two changes are both manifestations of the same fact: the United States is less dominant economically than 50 years ago. This fact reflects the progress of other economies, first Europe, then Japan, and most recently emerging markets like China and India, in closing the per capita income gap. Economists refer to this narrowing as catch-up or convergence. It is entirely natural insofar as there is no intrinsic reason that U.S. incomes and levels of labor productivity should be multiples of those in the rest of the world. This process of catch-up is one of the great achievements of the late twentieth and early twenty-first centuries in that it has begun lifting out of poverty the majority of the world’s population. But it also means that the United States accounts for a smaller share of international transactions. And this fact creates an uneasy tension with the peculiar dominance of the dollar.
This dominance is something from which we Americans derive considerable benefit. An American tourist in New Delhi who can pay his cab driver in dollars is spared the inconvenience of having to change money at his hotel. The widespread international use of the dollar is similarly an advantage for American banks and firms. A German company exporting machine tools to China and receiving payment in dollars incurs the additional cost of converting those dollars into euros, the currency it uses to pay its workers and purchase its materials. Not so a U.S. exporter of machine tools. Unlike firms in other countries, the U.S. producer receives payment in the same currency, dollars, that it uses to pay its workers, suppliers, and shareholders.
Similarly, a Swiss bank accepting deposits in francs but making foreign loans in dollars, since that’s what its customers want, has to worry about the risk to its profits if the exchange rate moves.5 That risk can be managed, but doing so is an added cost of business. Our Swiss bank can protect itself by buying a forward contract that converts the receipts on its dollar loan into francs when the loan matures, at a rate agreed when the loan is made. But that additional transaction has an additional cost. American banks that make foreign loans in dollars as well as taking deposits in dollars are spared the expense of having to hedge their foreign currency positions in this way.
A more controversial benefit of the dollar’s international-currency status is the real resources that other countries provide the United States in order to obtain our dollars. It costs only a few cents for the Bureau of Engraving and Printing to produce a $100 bill, but other countries have to pony up $100 of actual goods and services in order to obtain one. (That difference between what it costs the government to print the note and a foreigner to procure it is known as “seignorage” after the right of the medieval lord, or seigneur, to coin money and keep for himself some of the precious metal from which it was made.) About $500 billion of U.S. currency circulates outside the United States, for which foreigners have had to provide the United States with $500 billion of actual goods and services.6
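The seignorage arithmetic described above can be made concrete in a few lines. This is a sketch using the figures in the text; the printing cost per note is an assumed round number, not an official figure.

```python
# Illustrative seignorage arithmetic. The ~$500 billion figure is from the
# text; the printing cost per note is an assumed round number.
printing_cost = 0.10   # assumed cost to produce one $100 bill, in dollars
face_value = 100.0     # goods and services a foreigner must give up to obtain it
seignorage_per_bill = face_value - printing_cost

currency_abroad = 500e9  # U.S. currency circulating outside the United States
print(f"Seignorage per $100 bill: ${seignorage_per_bill:.2f}")
print(f"Real resources provided to the U.S.: ${currency_abroad / 1e9:.0f} billion")
```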
Even more important is that foreign firms and banks hold not just U.S. currency but bills and bonds that are convenient for international transactions and at the same time have the attraction of bearing interest. Foreign central banks hold close to $5 trillion of the bonds of the U.S. treasury and quasi-governmental agencies like Fannie Mae and Freddie Mac. They add to them year after year.
And insofar as foreign banks and firms value the convenience of dollar securities, they are willing to pay more to obtain them. Equivalently, the interest rate they require to hold them is less. This effect is substantial: the interest that the United States must pay on its foreign liabilities is two to three percentage points less than the rate of return on its foreign investments.7 The U.S. can run an external deficit in the amount of this difference, importing more than it exports and consuming more than it produces year after year without becoming more indebted to the rest of the world. Or it can scoop up foreign companies in that amount as the result of the dollar’s singular status as the world’s currency.
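The mechanics of this privilege can be sketched with stylized numbers. The asset and liability stocks below are hypothetical; only the two-to-three-point return differential comes from the text. The point is that a country paying less on its liabilities than it earns on its assets can run a trade deficit equal to its net investment income without its net debt growing.

```python
# A stylized sketch, not actual data: how a return differential lets a net
# debtor run an external deficit without becoming more indebted.
assets = 20e12         # hypothetical U.S.-owned foreign assets, in dollars
liabilities = 22e12    # hypothetical foreign-owned claims on the U.S.
r_assets = 0.05        # assumed return earned on foreign investments
r_liabilities = 0.025  # assumed rate paid on liabilities (2.5 points lower)

# Net investment income is positive even though the U.S. is a net debtor.
net_income = assets * r_assets - liabilities * r_liabilities

# The country can import more than it exports by this amount each year
# while keeping its net position stable (valuation changes aside).
sustainable_trade_deficit = net_income
print(f"Net investment income: ${net_income / 1e9:.0f} billion")
```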
This has long been a sore point for foreigners, who see themselves as supporting American living standards and subsidizing American multinationals through the operation of this asymmetric financial system. Charles de Gaulle made the issue a cause célèbre in a series of presidential press conferences in the 1960s. His finance minister, Valéry Giscard d’Estaing, referred to it as America’s “exorbitant privilege.”
Not that this high-flown rhetoric led to changes in the actual existing system. In international finance as in politics, incumbency is an advantage. With other countries doing the bulk of their transactions in dollars, it was impossible for any individual country, even one as critical of America’s exorbitant privilege as France, to move away from the currency. And what was true in the 1960s remained true for the balance of the twentieth century.
But today, in the wake of the most serious financial crisis in 80 years, a crisis born and bred in the United States, there is again widespread criticism of America’s exorbitant privilege. Other countries question whether the United States should have been permitted to run current account deficits approaching 6 percent of GDP in the run-up to the crisis. Emerging markets complain that as their economies expanded and their central banks felt compelled to augment their dollar reserves, they were obliged to provide cheap finance for the U.S. external deficit, like it or not. With cheap foreign finance keeping U.S. interest rates low and enabling American households to live beyond their means, poor households in the developing world ended up subsidizing rich ones in the United States. The cheap finance that other countries provided the U.S. in order to obtain the dollars needed to back an expanding volume of international transactions underwrote the practices that culminated in the crisis. The United States lit the fire, but foreigners were forced by the perverse structure of the system to provide the fuel.
If this was not injustice enough, there is the fact that America’s international financial position was actually strengthened by the crisis. In the course of 2007 the dollar weakened by about 8 percent on the foreign exchange market.8 But since our debts are denominated in our own currency, there was no impact on their dollar value. In contrast, our foreign investments, whether in bonds or factories, became more valuable as the dollar fell.9 The interest and dividends they threw off were worth more when converted back into dollars.
The dollar’s depreciation thereby improved the U.S. external position by almost $450 billion.10 This largely offset the increase in U.S. indebtedness to the rest of the world that would have otherwise resulted from our $660 billion current account deficit. It was almost enough to keep our debts to other countries stable, despite our consuming 6 percent more than we produced. Then in 2008, in the throes of the most serious financial crisis in 80 years, the federal government was able to borrow vast sums at low interest rates because foreigners figured that the dollar was the safest currency to be in at a time of great turmoil. And again in the spring of 2010, when financial volatility spiked, investors fled into the most liquid market, that for U.S. treasury bonds, pushing down the cost of borrowing for the U.S. government and, along with it, the mortgage interest rates available to American households. This is what exorbitant privilege is all about.
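The valuation effect at work here follows from the currency mismatch the text describes: U.S. liabilities are in dollars, while U.S. foreign assets are in foreign currency. A back-of-envelope version, with a hypothetical asset stock chosen only so the 8 percent depreciation reproduces the roughly $450 billion gain cited in the text:

```python
# Back-of-envelope valuation effect. The 8% depreciation and the ~$450B gain
# and $660B deficit are from the text; the asset stock is a hypothetical
# figure chosen to illustrate the mechanism.
foreign_assets = 5.6e12   # hypothetical foreign-currency assets, in dollars
depreciation = 0.08       # the dollar fell about 8% in 2007

# Dollar liabilities are unaffected; foreign-currency assets gain in value.
valuation_gain = foreign_assets * depreciation            # roughly $450 billion
current_account_deficit = 660e9
net_rise_in_indebtedness = current_account_deficit - valuation_gain

print(f"Valuation gain: ${valuation_gain / 1e9:.0f} billion")
print(f"Net rise in indebtedness: ${net_rise_in_indebtedness / 1e9:.0f} billion")
```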
But now, as a result of the financial mismanagement that spawned the crisis and growing dissatisfaction with the operation of the international monetary system, the dollar’s singular status is in doubt. The U.S. government has not been a worthy steward of an international currency, its critics complain. It looked the other way while the private sector produced the mother of all financial crises. It ran enormous budget deficits and incurred a gigantic debt. Foreigners have lost faith in the almighty dollar. They are moving away from it as a unit in which to invoice and settle trade, denominate commodity prices, and conduct international financial transactions. The dollar is at risk of losing its exorbitant privilege to the euro, the renminbi, or the bookkeeping claims issued by the International Monetary Fund known as Special Drawing Rights (SDRs).
Or so it is said. It is said by no less an authority than Sarah Palin, who warned on her Facebook page in October 2009 that talk that the Gulf countries might shift to pricing oil in a basket of currencies “weakens the dollar and renews fears about its continued viability as an international reserve currency.”11
That this issue has flashed across the radar screens of politicians who are not exactly renowned for their financial expertise reflects the belief that larger things are at stake. It is thought that widespread international use of a currency confers on its issuer geopolitical and strategic leverage. Because the country’s financial position is stronger, its foreign policy is stronger. Because it pays less on its debts, it is better able to finance foreign operations and exert strategic influence. It does not depend on other people’s money. Instead, it has leverage over other countries that depend on its currency. Compare the nineteenth century, it is said, when Britannia ruled the waves and the pound dominated international financial markets, with the post–World War II period, when sterling lost its dominance and the United States, not Britain, called the foreign-policy shots.
Were all this right, there would have been no reason for me to write this book or for you to read it. In fact, however, much of what passes for conventional wisdom on this subject is wrong. To start, it has cause and effect backward. There may be an association between the economic and military power of a country and the use of its currency by others, but it is a country’s position as a great power that results in the international status of its currency. A currency is attractive because the country issuing it is large, rich, and growing. It is attractive because the country standing behind it is powerful and secure. For both reasons, the economic health of the country issuing the currency is critical for its acquisition and retention of an international role.
But whether its currency is used internationally has at best limited implications for a country’s economic performance and prospects. Seignorage is nice, but it is about number 23 on the list of factors, ranked in descending order of importance, determining the place of the United States in the world. That said, how the country does economically, and whether it avoids policy blunders as serious as those that led to the financial crisis, will determine the dollar’s fate. Sterling lost its position as an international currency because Britain lost its great-power status, not the other way around. And Britain lost its great-power status as a result of homegrown economic problems.
The conventional wisdom about the historical processes resulting in the current state of affairs—that incumbency is an overwhelming advantage in the competition for reserve currency status—is similarly wrong. It is asserted that the pound remained the dominant international currency until after World War II, long after the United States had overtaken Britain as the leading economy, reflecting those self-same advantages of incumbency. In fact, the dollar already rivaled sterling as an international currency in the mid-1920s, only 10 short years after the establishment of the Federal Reserve System. It did so as a result of some very concrete actions by the Fed to promote the dollar’s international role. This fact has very different implications than the conventional wisdom for how and when the Chinese renminbi might come to rival the dollar. It suggests that the challenge may come sooner rather than later.
Finally, the idea that the dollar is now doomed to lose its international currency status is equally wrong. The dollar has its problems, but so do its rivals. The euro is a currency without a state. When the euro area experiences economic and financial problems, as in 2010, there is no powerful executive branch with the power to solve them, only a collection of national governments more inclined to pander to their domestic constituencies. The only euro-area institution capable of quick action is the European Central Bank. And if quick action means printing money to monetize government debts, then this is hardly something that will inspire confidence in and international use of the euro. The renminbi, for its part, is a currency with too much state. Access to China’s financial markets and international use of its currency are limited by strict government controls. The SDR is funny money. It is not, in fact, a currency. It is not used to invoice and settle trade or in private financial transactions. As a result, it is not particularly attractive for use by governments in their own transactions.
The United States, whatever its other failings, is still the largest economy in the world. It has the largest financial markets of any country. Its demographics imply relatively favorable growth prospects.
But the fundamental fallacy behind the notion that the dollar is engaged in a death race with its rivals is the belief that there is room for only one international currency. History suggests otherwise. Aside from the very peculiar second half of the twentieth century, there has always been more than one international currency. There is no reason that a few years from now countries on China’s border could not use the renminbi in their international transactions, while countries in Europe’s neighborhood use the euro, and countries doing business with the United States use the dollar. There is no reason that only one country can have financial markets deep and broad enough to make international use of its currency attractive. There may have been only one country with sufficiently deep financial markets in the second half of the twentieth century, but not because this exclusivity is an intrinsic feature of the global financial system.
The world for which we need to prepare is thus one in which several international currencies coexist. It was with this world in mind that the euro was created. A world of several international currencies is similarly what China is after. China has no interest in “dethroning” the dollar. To the contrary, it has too much invested in the greenback. But preserving its investment in the dollar is entirely compatible with creating a more consequential international role for its own currency. And where the renminbi leads, other emerging market currencies, such as the Indian rupee and Brazilian real, could eventually follow.
Serious economic and financial mismanagement by the United States is the one thing that could precipitate flight from the dollar. And serious mismanagement, recent events remind us, is not something that can be ruled out. We may yet suffer a dollar crash, but only if we bring it on ourselves. The Chinese are not going to do it to us.
But this is to get ahead of the story.
DEBUT
When in 1620 a landing party of English religious dissidents led by William Bradford and Myles Standish came ashore near what is today Provincetown, Massachusetts, they brought with them English money and a custom of expressing values in pounds, shillings, and pence. The colonists were not a wealthy band, and it was not many years before they had expended their English money on supplies from the Old World. Finding a substitute was not easy in a colony without a mint or the permission to establish one, and with England prohibiting the export of coin (the English monarchs husbanding all the precious metal they possessed for fighting expensive wars).
Commodity currency was the obvious alternative. Every schoolchild learns about the colonists’ use of wampum. Native Americans valued the purple and white quahog and whelk shells strung in the form of necklaces and ornamental belts and were willing to part with furs, skins, and other commodities in order to obtain them.1 The colonists with their tools were efficient producers of necklaces and belts. From trade with the natives the use of wampum spread to transactions among the colonists themselves. In 1637 wampum was made legal tender, officially recognized money for paying debts, in the Massachusetts Bay Colony at a rate of six white beads or three purple beads per penny.
But there were only so many snail and clam shells to go around. So the colonists turned to other commodities for use in their barter transactions: corn, codfish, and beaver in the north, tobacco and rice in the south. These items were used in transactions because they were the dominant products of the region. Local governments allowed residents to use corn or tobacco to discharge their tax obligations.2 The next step was to declare that the commodity in question should be accepted not just in public payments but by private parties. Massachusetts made corn legal tender. Connecticut did the same for wheat, Virginia for tobacco.3
Making these commodities legal tender had some awkward consequences. When Virginia gave tobacco legal-tender status, there was an incentive to increase production, of the lowest grades in particular. With more tobacco chasing the same goods, the purchasing power of tobacco declined. Farmers complained of low prices. The General Assembly of Burgesses, the representatives of Virginia’s agricultural districts, considered measures to restrict tobacco cultivation but could not agree. In 1682, farmers angry over low crop prices took matters into their own hands, rampaging through the fields and destroying their neighbors’ tobacco plants. The government mustered the militia. The rioters carried out their work under cover of darkness. Order was restored only after several months of police action.
Farm products like tobacco had further disadvantages as media of exchange, stores of value, and means of payment—most obviously bulk, lack of uniformity, and spoilage. There understandably developed a preference for coin. Since the English authorities did not permit the colonies to operate a mint, such coin as circulated had to be imported.4 English coin could be obtained by exporting other merchandise, although London with its prohibitions did not make this easy. Closer at hand, coin could be obtained in the West Indies, silver coins being abundant there as a result of Spain’s prolific Mexican and Peruvian mines. The North American colonists sold dried fish, whale oil, pickled beef, and grain in return for coin. Exportation of many of these products to destinations other than other English colonies and the mother country being prohibited, much of this was smuggled. Coin earned by exporting merchandise was supplemented by that acquired through piracy, an important industry for the seventeenth-century colonists in the established English tradition.5 The pirates spent much of their booty, which included Spanish coin, while on shore leave in the northern colonies.
The most popular coins, weighing 423.7 grains of silver, were known as “pesos.” Valued at eight Spanish reals, they were referred to as “pieces of eight.”6 Dealers in foreign exchange in London referred to them as “dollars” or “Spanish dollars,” the Bohemian town of Joachimsthal having produced a coin of similar size and content known as the Joachimsthaler, or as anglicized the “Joachimsdollar.” In addition, gold johannes were imported from Portugal, louis d’or from France, sequins from Venice. But on the eve of the colonies’ war of independence, Spanish silver coins were the dominant part of the coinage.
Coin was supplemented with bills of credit—paper money issued via public loans.7 It was issued, that is, when the colonists’ English overseers permitted, Parliament prohibiting the practice starting in 1751. This ban was among the economic grievances setting the stage for the American Revolution. No less a figure than Benjamin Franklin objected to it in testimony to the British Parliament in 1766.
ALL ABOUT THE BENJAMINS
The colonies’ war of independence was necessarily improvised, but nowhere more than in the monetary sphere. Delegates to the Continental Congress, not being able to commit their principals, lacked the power to raise taxes. They sought to pay for the war by issuing IOUs, continental bills or “continentals” for short. Bills issued under the authority of the Continental Congress were supplemented by bills issued by each colony. The consequences predictably included bills trading at a confusing variety of different prices, inflation, and the disappearance of gold and silver from circulation.
It took the leaders of the new nation some time to regularize this irregular situation. In 1785 the Congress passed a resolution declaring that the “money unit of the United States of America be one dollar” and that the dollar should be divided according to the decimal system. Thomas Jefferson, having been exposed to the advantages of decimalization in France, insisted on the provision. A resolution adopted in August 1786 then referred to the hundredth part of a dollar as a cent and a tenth part as a dime. It defined the dollar in terms of grains of silver and gold at a ratio of 15.253 to 1.
In September 1786, Congress then ordered the establishment of a mint. In its initial months of operation, only a few one-half-, one-, and two-cent copper coins were produced, minting being an activity with which the locals had little experience. A number of states also engaged in coining. The Constitution, which came into force in March 1789, then asserted the power of Congress to coin money and regulate its value while prohibiting the states from coining money or emitting IOUs that circulated like money.8
The last phase of the birthing process was the Coinage Act of 1792, for which Alexander Hamilton was midwife. Hamilton, one of the three members of President George Washington’s first cabinet, believed fervently in the need to bind the thirteen states together. He saw a uniform currency as an effective form of glue. His Report on the Establishment of a Mint, submitted to the Congress in January 1791, offered detailed proposals for a mint and a uniform coinage to encourage commerce not just within but across the newly independent states.
Hamilton was a proponent of bimetallism, in which gold coins were used for large-value trade, silver coins for petty transactions. In the course of preparing his report, he examined the tables that Sir Isaac Newton prepared in 1717, when as master of the mint Newton had specified the pound’s value in terms of the two metals. The 1792 act based on Hamilton’s report similarly defined the dollar in terms of both gold and silver. It defined smaller denominations using the decimal system, dubbing them the quarter, dime, and cent.
To determine the silver content of the dollar, Hamilton had the Treasury weigh a sample of Spanish dollars. Their average silver content was 371.25 grains, not the official Spanish figure of 377, coins circulating in the United States being clipped and worn. The Americans being nothing if not pragmatic, 371.25 grains was adopted as the official silver content of the dollar. Drawing inspiration once more from Newton’s 1717 report, the ratio of silver to gold was set at fifteen to one.9
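The 1792 arithmetic is easy to reconstruct: with the dollar fixed at 371.25 grains of pure silver and fifteen grains of silver valued at one grain of gold, the gold content of the dollar follows directly.

```python
# The Coinage Act arithmetic: silver content fixed by weighing worn Spanish
# dollars, gold content implied by the fifteen-to-one ratio.
silver_grains = 371.25   # official silver content of the dollar, in grains
ratio = 15               # grains of silver valued at one grain of gold
gold_grains = silver_grains / ratio
print(gold_grains)  # 24.75 grains of pure gold per dollar
```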
Acceptance of the new U.S. dollar was swift because of its resemblance to the Spanish dollars already in circulation. Indeed, Spanish dollars, notably those coined in Mexico, continued to circulate because of the slow progress of the mint. So widely did they circulate that in 1793 Congress recognized the most important ones as legal tender. Many remained in circulation until the middle of the nineteenth century. One enduring legacy of the Spanish coins that constituted the bulk of the circulation is the dollar sign. The sign “$” derives from the peso, the two parallel lines being the vertical portions of “P,” and the “S” indicating the plural. This explains why the “$” symbol is also used in countries whose currency is the peso—in Argentina, for example.
O CANADA
Over the balance of the nineteenth century the dollar had a colorful history, but it was almost entirely a domestic history. Canada was the one place outside the United States where it circulated. The British colonies of Upper and Lower Canada, like their colonial counterparts to the south, had no currency of their own. English, French, and Spanish coins all circulated. In the 1830s one writer complained that the coinage had “more the appearance of the fifteenth than the nineteenth century. All the antiquated cast-off rubbish, in the whole world, finds its way here, and remains. This Colony is literally the Botany Bay for all the condemned coins of other countries; instead of perishing in the crucible, as they ought to do, they are banished to Canada, where they are taken in hand.”10
These coins had legal tender status at values that depended on their gold and silver content. As trade with the newly independent United States expanded, they were increasingly supplemented by dollars. While the merchants of Upper and Lower Canada still did their accounting in pounds, shillings, and pence, their transactions increasingly were in dollars and cents.
In the 1850s, with U.S. coins in widespread use, a groundswell developed to give them official status in Canada. Francis Hincks, the onetime banker and railway speculator who served as prime minister from 1851 to 1854, endorsed the campaign, and in 1853 an act was passed recognizing not just pounds, shillings, and pence but also dollars and cents as units of Canadian currency. Simply shifting to the decimal system would have been easier, but the prospect excited fears that doing so would somehow lead to annexation by the United States. Finally in 1857, suppressing these exaggerated worries, the Canadian Parliament passed an act specifying that the accounts of provincial governments should be expressed in dollars and cents. In 1858 the English Royal Mint stamped the first Canadian silver coins in denominations of 5, 10, and 20 cents.
But even after confederation in 1867 and the issuance of a dominion currency, U.S. dimes, quarters, and half-dollars continued to circulate. The bullion content of these U.S. coins was typically 2.5 percent less than their face value. While they might be accepted at face value by merchants and individuals, banks accepted them only at a discount.
This lack of uniformity was a considerable nuisance. In 1868 the dominion government sought to eliminate it by exporting to New York the U.S. silver coins that had come into its possession. This making only a small dent in the problem, in 1870 it agreed to provide the banks a commission in return for buying up the remaining U.S. coin and to pay the cost of exporting it to New York.11 This was Hincks at work again, his having returned as finance minister in 1869 after a period as imperial governor in Barbados and British Guiana. The dominion government then issued full-bodied silver coins in 25- and 50-cent denominations to replace the U.S. coin that now finally disappeared from circulation. The dollar’s international role north of the border thereby came to an ignominious end.
OTHER PEOPLE’S MONEY
Not only did little U.S. money circulate outside the United States, especially after it was expelled from Canada, but the dollar played virtually no role in financing America’s own import and export trade. Whether an American merchant needed credit to purchase imports or to extend credit to a foreign purchaser of American goods, he secured it not in New York but in London or, less frequently, Paris or Berlin. It followed that this credit was denominated not in dollars but in pounds, francs, or marks.
Why London dominated this business is no mystery. Britain was the first industrial economy and the leading trading nation. With economic growth and development came the growth of financial markets. Already in the mid-nineteenth century Britain had a well-developed banking system. It had the Bank of England, chartered in 1694 to raise money for war with France, which had come to assume the functions of a modern central bank. It had stable money as a result of being on the gold standard.
It had not always been so. Traditionally the Royal Mint had been run by the Company of Moneyers, descended from the medieval gild of coiners, whose members were notorious for self-dealing, corruption, and drunkenness. Practices at the mint had so deteriorated by the end of the seventeenth century that the government took the extraordinary step of appointing the country’s premier scientist, the efficient and scrupulously honest Isaac Newton, as Warden of the Mint. Saddled with financial difficulties, Newton was happy to accept, since the position came with a salary. He addressed the personnel problem. He did his detailed study of the coinage. He put Britain on the gold standard in 1717.
By the nineteenth century, London had become the premier financial center. Because it was where members of the British Empire serviced their debts, London had developed efficient clearing mechanisms that could also be used by other countries. Britain was the leading foreign investor. And when one of its banks made a loan to a foreign borrower, that loan was naturally in the form of its own currency, the pound sterling. With so many loans denominated in sterling, it became natural for governments, when borrowing in London, to maintain accounts there in order to conveniently service their debts. These accounts were what subsequently came to be known as “reserves.”
Because Britain was the leading importer of industrial raw materials and food, the most important commodity exchanges—the Manchester Cotton Exchange, the Liverpool Corn Market, and of course the London Gold Market—were located there. Britain was also an efficient provider of trade-related services such as shipping and insurance. All this made London an obvious place to obtain credit for those engaged in international trade. And for reasons of their own convenience, the credit provided by British banks was denominated in sterling. It followed that upwards of 60 percent of world trade was invoiced and settled in British pounds.12
NO CREDIT
When a businessman ships a batch of goods, he needs cash. He goes to his bank with papers showing that he has shipped the goods and what he will be paid in the future. If his credit—and the credit of the buyer—is good, he can get his money immediately rather than having to wait for the goods to arrive in the foreign market and for the buyer’s payment to arrive in the United States. The papers in question are known as “trade acceptances.” In purchasing them at a discount from their face value, a bank is said to “discount” them.
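The arithmetic of discounting is simple. As a minimal sketch (the face value, rate, and tenor below are hypothetical illustrations, not figures from the text), a bank buying an acceptance pays the face value less simple interest for the days remaining to maturity:

```python
def discount_proceeds(face_value, annual_rate, days, basis=360):
    """Price an acceptance bought at a discount from face value.

    Under the bank-discount convention, the bank deducts simple
    interest on the face value for the remaining days to maturity.
    """
    discount = face_value * annual_rate * days / basis
    return face_value - discount

# Hypothetical example: a $10,000 acceptance with 90 days to run,
# discounted at an annual rate of 4 percent.
proceeds = discount_proceeds(10_000, 0.04, 90)
print(proceeds)  # 9900.0
```

The exporter thus receives $9,900 today rather than $10,000 in three months; the $100 discount is the bank’s compensation for advancing the funds and bearing the credit risk.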
But having to rely on London for trade credit, as U.S. importers and exporters did, made the process positively labyrinthine. Picture the requirements facing a New York coffee roaster importing beans from Brazil.13 The importer first had to go to his bank to obtain a letter of credit specifying the terms of the transaction, the goods to be shipped, and the insurance on the shipment. In issuing the letter, his bank committed to paying out funds when receiving confirmation that the transaction was complete. The bank then sent a copy of the letter to the London bank guaranteeing payment of the bill. It gave the importer the original and a second copy.
The importer next sent the original letter of credit to the Brazilian dealer, authorizing him to draw on the London bank against his shipment of coffee. The dealer shipped the coffee and, with documents attached, presented his draft on the London bank to his Brazilian bank. The willingness of his Brazilian bank to purchase (or “discount”) the draft reflected the expectation that it would be “accepted” by a reputable British bank that would pay out the specified amount of cash when the draft matured.
After discounting the draft, the Brazilian bank sent one duplicate set of documents to the New York bank and another, along with its draft, to its correspondent bank in London. The correspondent could hold the accepted draft until it matured, at which point the correspondent would present it to the accepting bank and be paid, or sell it to another party. In practice other interested parties included not just banks but also business enterprises and individuals seeking to invest in relatively safe short-term assets. When presented with the draft for payment, the accepting bank in London checked it against the letter of credit it had received from New York. Finding everything in order, it sent the papers accompanying the draft back to the New York bank.
At this point the American importer, in order to obtain the bill of lading sent to the New York bank by the London bank as part of the documentation accompanying the draft, signed a trust receipt committing to hold the goods in trust for the bank as its property and to turn over to it the proceeds of his sales as they applied to the acceptance. An accepted bill was generally drawn to mature in 90 days, giving the importer time to sell the shipment. Prior to the draft maturing, the importer delivered the funds to his New York bank, which sent them on to the London bank. The London bank paid the holder of the acceptance on its maturity, and the transaction was complete.
One’s first reaction on encountering this exhaustingly long list of procedures is that the transaction could have been completed more easily had it not required multiple communications with London. American merchants complained of having to pay not just a fee to their New York bank for the letter of credit but also a collection charge to the bank in London. Since London banks preferred lending in sterling, the practice also exposed American merchants to the risk that the sterling-dollar exchange rate would move against them, which was an additional cost of doing business.14
These practices had still further disadvantages for American business. To the extent that finance and commercial services like shipping and insurance came bundled together, American providers of the latter found it more difficult to compete. Familiarity with facilities for providing trade credit similarly made London the obvious place to source other financial services—to underwrite bond issues, for example.
PROMINENT BY ITS ABSENCE
Great Britain was a small windswept island off the northwest coast of Europe. The United States, in contrast, was a continental economy. By 1870 it had pulled ahead of Britain in the production of goods and services. By 1912 it had pulled ahead as an exporter of merchandise.
It was thus anomalous that the United States continued to depend on London for trade finance and that the dollar played no international role. Part of the explanation lay in regulations preventing American banks from branching overseas. Extending credit to foreign merchants required information on their activities, something that British banks, with their far-flung branch networks, were in a position to gather. French, German, and Dutch banks similarly had foreign branch networks. But not so national banks in the United States, which were prohibited from branching not just internationally but even across state lines. In some states they were prohibited from branching at all.15
An exception was the International Banking Corporation, a specialized institution created to engage in foreign banking but which, to prevent it from using this advantage to dominate the domestic market, was prohibited from engaging in banking business in the United States. IBC was organized in 1901 by Marcus Hartley, owner of the Remington Arms Company, to promote and finance the expansion of American trade with Asia, the Spanish-American War having brought the region to his attention. Hartley and his partners copied the structure and raided the personnel of British banks already active in the Far East.16 By 1910 IBC had sixteen branches, mostly in Asia.17
In addition, some states allowed trust companies (bank-like companies that oversaw the affairs of trust funds and estates) to operate foreign branches. Foreign branches made it easier to invest in foreign bonds. But the only trust companies with foreign branches were the Farmers’ Loan and Trust Company, the Trust Company of America, the Guaranty Trust Company, the Empire Trust Company, and the Equitable Trust Company. Farmers’ Trust had two foreign branches, the others just one. Such was the extent of foreign branching by American financial institutions.18
Until the passage of the Federal Reserve Act in 1913, national banks were even prohibited from dealing in trade credit.19 The National Banking Act of 1863 and associated legislation included no provisions authorizing them to do so. And the courts, suspicious of banks encroaching into new areas, ruled that national banks could not engage in the business without express congressional authorization.20
Before putting too much weight on these legal restrictions, it is worth recalling that all the great accepting banks in London were private. The United States also had private banks that did not need state or federal charters and hence were free of regulatory restrictions. These included names like J.P. Morgan and Company, Brown Brothers and Company, and Lazard Frères. In principle, nothing prevented these banks from dealing in acceptances. Many had sister firms and offices across the water to provide market intelligence. J.P. Morgan had Morgan, Grenfell and Company. Lazard Frères had offices in London and Paris.
But even private banks contributed to the finance of U.S. foreign trade only to a very limited extent. Evidently something else made it hard for American banks, even private banks not inhibited by regulatory restrictions, to break into the market.
That something else was a cost disadvantage. London banks had a well-developed population of investors to whom trade acceptances might be resold, which reduced risks and lowered interest rates. With so many investors active on this market, it was possible to engage in a large volume of business without moving prices. To put it in the language of finance, the London market was exceptionally liquid. There was little uncertainty about the price one could obtain when discounting a bill. This encouraged yet more investors to come to London, adding further to the market’s liquidity. For our Brazilian coffee dealer, the decision of whether to ask for a draft on a well-known British house or an unfamiliar American competitor was a no-brainer.
And what worked for individual investors worked for governments and central banks. The liquidity of its market made London an attractive place for governments and central banks to hold reserves. And the more bills on London they substituted for gold—which, its other attractions notwithstanding, bore no interest—the greater was the liquidity of the market. This was the advantage of the incumbent international currency, the so-called “first-mover advantage” that enables it to hang on even when the country issuing it has gone into decline.
But the fact that France and Germany were able to enter the market suggests that Britain’s first-mover advantage was not insurmountable. Other factors must have been holding America back. One handicap was the volatility of its financial markets. By one count, the United States experienced fourteen financial crises in the century preceding World War I, of which 1907 was the worst. Interest rates spiked, and for many borrowers credit became unavailable at any price. This was not a market on which many people, given a choice, would finance their trade.
Then there was the fact that it proved impossible for the United States to keep both gold and silver coins in circulation, given that the market price of the two metals was changing continuously. Even after 1879, when the United States formally went onto the gold standard, its commitment remained uncertain. This was notably true in the 1890s, when the inflationist free-silver movement was given voice by William Jennings Bryan. Our Brazilian coffee dealer would have been reluctant to accept a contract in which he would receive dollars sometime in the future, given the risk that additional silver might be coined and the dollar might depreciate against currencies more firmly tied to gold.
BIDDLE’S FOLLY
Finally there was the fact that the United States had no central bank to stabilize the market. When London banks needed cash, they could raise it by reselling to the Bank of England some of the securities that they had purchased previously. (The practice was known, for self-evident reasons, as “rediscounting” at the Bank.) At the end of the nineteenth century, the Bank of England was the single largest purchaser of bills on the London market, sometimes accounting for the majority of all transactions.21
America had nothing resembling these arrangements. A proto-central bank, the Bank of the United States, had been founded in Philadelphia in 1791. The Bank of the United States was another Alexander Hamilton invention, Hamilton having educated himself about the advantages accruing to Britain from the existence of the Bank of England. Created over the objections of Thomas Jefferson and James Madison, who feared that it would lead to elite control of American finances, the Bank of the United States was the new nation’s largest financial institution and the only one permitted to operate in more than one state. It kept the Treasury Department’s accounts. By refusing to accept the notes of banks that did not pay out the designated amount in gold or silver, it maintained the link between the money stock and supply of precious metal. It provided a check on local monopoly power by offering an alternative to local banks charging exorbitant rates.
These other institutions predictably registered their displeasure when the charter of the Bank of the United States came up for renewal in 1810. They complained that the Bank was less than vigilant in refusing to accept the notes of a non-specie-paying bank when politically influential individuals or its own investors were among its shareholders. Jeffersonian Democrats interpreting the Constitution literally insisted that the Congress had no power to charter a bank. The bill to recharter was defeated.
State banks were thus freed of discipline on their note-issuing activities. The next years saw a massive lending boom fueled by a flood of state banknotes, leading first to inflation and then, inevitably, to a crash. In 1816 this unhappy experience caused the Congress to reverse itself and charter a second Bank of the United States, again with a head office in Philadelphia and again for 20 years.
The policies of the Second Bank attracted little notice under its initial presidents, the unremarkable William Jones and Langdon Cheves. This changed in 1823 when Cheves was succeeded by Nicholas Biddle. Biddle was exceptionally smart and knew it, having completed his studies at Princeton at the age of fifteen and been selected to deliver the valedictory address. His self-confidence was matched only by his commitment to federalism, which traced back to his Princeton days, and by his belief that a strong government needed a strong central bank.
As a young member of the Pennsylvania State Senate, Biddle had fought unsuccessfully to mobilize support for rechartering the First Bank. Now, as the president of the Second Bank, he expanded its operations. He increased its loans and investments. He enlarged its branch network and again used it to discipline other banks. And he made no secret of his contempt for his fellow bankers, most of whom did not measure up to his exalted standards.
This approach did not exactly smooth relations with the country’s other financial institutions, whose owners complained to their elected representatives. Biddle sought to buy congressional support with campaign contributions and bribes, but these proved less effective than they might have been in softer hands.
In 1832, 4 years ahead of schedule and with Biddle’s encouragement, the eventual Whig candidate for president, Henry Clay, introduced into the Senate a bill to recharter the Bank. When Clay’s bill was passed by the Congress, the president, Andrew Jackson, promptly vetoed it. A Tennessean, Jackson was wedded to the increasingly anachronistic Jeffersonian ideal of an agrarian republic. He saw the Bank as favoring an elite circle of bankers and industrialists and favoring the Northeast over the South and West. Jackson was therefore quite happy to make his opposition to the Bank a central issue in his 1832 reelection campaign.
Biddle was confident, given what he took as the lessons of 1811–15, that the issue would be a winner for Clay. The voters, having shorter memories and being less enamored of the Bank’s hard-money policies, proved him wrong. There was also the opposition of the New York financial community, which was not fond of an arrangement that made Philadelphia the seat of financial power. Once reelected, Jackson made clear that the Bank would be rechartered only over his dead body.
In 1836, its federal charter expiring, the second Bank of the United States took out a state charter and became the United States Bank of Pennsylvania. Biddle attempted to establish his state-chartered bank as a platform for building a market in bills of exchange in Philadelphia. But with no equivalent of the Bank of England to backstop the market, even a formidable state bank lacked the resources. Biddle attempted to secure a line of credit from the Bank of England for his operation but, not surprisingly, was rebuffed. At that point the 1836–37 financial crisis put an end to his plan.22
More than three-quarters of a century would pass before the United States again possessed a central bank. Among the consequences was an international monetary system in which the dollar played no role. For central banks and governments, sterling, not the dollar, was “as good as gold.” Not just the French franc, German mark, Swiss franc, and Dutch guilder but even the Italian lira, Belgian franc, and Austrian krone all ranked ahead of the dollar on the international pecking order on the eve of World War I, despite the fact that the United States was far and away the largest economy.23 Sterling accounted for roughly half of all of the foreign exchange reserves of central banks and governments, the French franc 30 percent, the German mark 15 percent. In addition, small amounts of Dutch guilder and Swedish krona were held as foreign exchange reserves. But not dollars.
ENTER THE FED
After the 1907 financial crisis, concern over the instability of American finance fused with the desire to create a U.S. market in trade credits. The 1907 panic was caused, the experts explained, by the fact that financial transactions in New York were nothing more than stock market speculation, as opposed to the kind of wholesome investments backed by import and export business that dominated in London. Then there was the fact that, in the absence of a central bank, the major financial institutions had been forced to rely on Wall Street’s dominant figure, the supremely confident and supremely rich J. Pierpont Morgan, to organize a rescue. This was not entirely reassuring, since it was unclear whether Morgan or someone like him would be there the next time crisis struck.
This pointed to the need for a permanent mechanism for managing monetary problems. To investigate solutions, a National Monetary Commission was set up in 1908. It included eighteen members of Congress under the chairmanship of the brusque and intimidating senior senator from Rhode Island, Nelson Aldrich. Although Aldrich was descended from an old New England family (his forebears included John Winthrop and Roger Williams), his parents were not rich; he married money rather than inheriting it. Politically, he worked his way up, starting with the Providence City Council. Economically, he put his wife’s money to work by investing in the Providence street railway system. (The two endeavors were clearly not unrelated.) From city council Aldrich moved to the Rhode Island House of Representatives, the U.S. House, and finally, in 1881, the Senate.
By 1910 Aldrich had been in the Senate for close to 30 years. Having risen to the chairmanship of the Finance Committee, he was used to getting his way and not much inclined to defer to his senatorial colleagues. A conservative Republican, he had previously concentrated on securing tariff protection for U.S. manufacturing. But the 1907 crisis convinced Aldrich of the need for a stronger monetary framework, much as the monetary turmoil experienced by the new nation had convinced Hamilton of the need for the Bank of the United States.
The question was what kind of monetary framework. As head of the investigatory commission, Aldrich hired advisors. He consulted experts. He led a mission to Europe to study arrangements there. The trip convinced him of the need for a European-style money market backed by a central bank. The upshot was the Aldrich Plan, proposing the creation of a National Reserve Association at whose center would be a central bank with the power to influence financial conditions and lend to banks in distress.
The author of the Aldrich Plan’s technical provisions, who was to play an important role in the dollar’s subsequent rise to international prominence, was the German-born financier Paul Warburg. Warburg had started his career working for Simon Hauer, an importer and exporter in Hamburg. After further seasoning working for bankers in London and Paris, he moved to the family banking firm of M.M. Warburg and Company.
Warburg was thus intimately familiar with the mechanics of international finance. He was also connected with the higher echelons of American banking. At the end of a round-the-world tour in 1892, he had met the charming and talented Nina Loeb, who just happened to be the daughter of one of the founders of the prominent New York bank Kuhn, Loeb and Co. One thing led to another, and the two were married in 1895. After 7 years in Hamburg, the couple relocated to New York, and Warburg took up a position with his father-in-law’s firm. It was on a visit to Kuhn, Loeb in preparation for his mission to Europe that Aldrich encountered Warburg, who seemed uniquely well informed about European finance. By this time Warburg had become a proponent of an American central bank to support the development of a market in trade acceptances. He in turn was impressed by broad-shouldered Aldrich, whom he took as the embodiment of monetary reform. Warburg began writing Aldrich about his ideas. Again, one thing led to another.24
Shy and self-effacing, Warburg preferred working out of the public eye. With a thick German accent, he was not a natural public speaker. But the 1907 financial crisis made him a man with a mission. By the end of the year, he was publishing in the New York Times on the need for a European-style central bank to stabilize American financial markets. He was not deterred by letters abusing him for his “un-American views.” By 1908 he was giving speeches on financial crises. He was soon testifying before Congress and serving as head of the National Citizens’ League for the Promotion of Sound Banking, a lobby for monetary reform.
In November 1910, Warburg and Aldrich, together with A. Piatt Andrew, assistant secretary of the treasury and a former assistant professor at Harvard who had served as special assistant to Aldrich’s monetary commission, and three Wall Street titans—Benjamin Strong, head of Bankers Trust; Frank Vanderlip, a onetime financial journalist and former assistant secretary of the treasury newly appointed as president of National City Bank, the largest bank in the country; and Henry Davison, senior partner and in-house fixer at J.P. Morgan & Company—snuck off to Jekyll Island off the Georgia coast to draft a blueprint for a central bank.25 That Andrew’s participation was not known even to his boss, Secretary of the Treasury Franklin MacVeagh, testifies to the boldness of the expedition. J. P. Morgan himself had regularly taken hunting vacations on Jekyll Island, explaining the venue. The six conspirators traveled by private railcar, disguised as duck hunters, to prevent their movements from being traced. To avoid having their identities learned by porters, Vanderlip and Davison adopted the further artifice of referring to one another as Orville and Wilbur.
After the New Year, the Jekyll Island blueprint was announced as the Aldrich Plan. To reassure those fearful of overweening government control, it proposed a more decentralized central banking structure than existed in Europe. It described a National Reserve Association with fifteen regional branches, each with the authority to discount trade acceptances. To ensure what the plan’s authors saw as proper democratic control, it recommended that each branch’s directors be elected by the commercial banks associated with that branch.
This was not obviously enough to surmount deep-seated popular and congressional concern over concentrated financial power. The notion that small business and the farmer were exploited by big finance still resonated powerfully, as in the days of Andrew Jackson. Attaching Aldrich’s name to the plan also had the unfortunate consequence of exciting those suspicious of a Wall Street money trust. Aldrich’s daughter, Abby, had married John D. Rockefeller Jr., only son of the oil magnate John D. Rockefeller, the single richest person in the country, causing Aldrich to be widely viewed as Rockefeller’s mouthpiece. The governor of New Jersey and presidential hopeful Woodrow Wilson explained that, while he had not read the Aldrich Plan, he disliked anything bearing the senator’s name.
Then there was the fact that Frank Vanderlip, another member of the Jekyll Island traveling party, had already begun to position the institution he headed, National City Bank (forerunner of today’s Citigroup), to capitalize on the opportunities created by monetary reform. Vanderlip established the National City Company, a holding-company affiliate, to buy up state banks and trust companies and engage in activities prohibited to a national bank. The prospect of a megabank monopoly alarmed not just local bankers but also farmers and businessmen long suspicious of big finance. Congressman Charles A. Lindbergh Sr. of Minnesota, a leading critic of the financial establishment, member of the House Committee on Banking and Currency, and father of the famous aviator, introduced a resolution to investigate the money trust. “Wall Street brought on the 1907 panic,” Lindbergh thundered, and “got people to demand currency reform…and, if it dares, [it] will produce another panic to pass the Aldrich central bank plan. We need reform, but not at the hands of Wall Street.”26 Lindbergh had grown up on the Minnesota frontier in the heyday of the Populist revolt against extortionate bankers and railroads. He was, it can be fairly said, obsessed with the money trust. Lindbergh was reported to have read all of the dozen-plus studies published by Aldrich’s National Monetary Commission cover to cover and still found time to pen his own 318-page study of monetary reform.27
It took the better part of 2 years for something resembling the Aldrich Plan to wend its way through the Congress. To quiet the critics and satisfy himself, the bill signed by President Woodrow Wilson provided for a system of regional reserve banks with locally appointed boards, not unlike that in Aldrich’s plan, but supervised by a Federal Reserve Board whose seven members would be selected by the president and not by the bankers.
Lindbergh was not impressed. “This act establishes the most gigantic trust on earth,” he railed. “When the president signs this bill, the invisible government by the Monetary Power will be legalized. The people may not know it immediately, but the day of reckoning is only a few years removed.” That day came, of course, in 1929, although it did not take exactly the form that Lindbergh had in mind.
ACCEPT YOURSELF
The mandate of the new central bank was to provide an “elastic currency.” It was to regulate the supply of credit to prevent disruptive interest rate spikes and market seizures like those of 1907. Among its techniques would be purchasing trade acceptances, the studies of the National Monetary Commission having shown that this was how the Bank of England smoothed rates.
Buying trade acceptances denominated in dollars assumed, of course, a supply of dollar-denominated trade acceptances to be bought. Providing them required American banks to go abroad. The Federal Reserve Act therefore authorized national banks with capital of at least $1 million to establish branches in foreign countries.28 It allowed them to purchase trade acceptances up to a limit of 50 percent of their own funds.
How did this market get up and running, given the cost and reputational advantages possessed by London? The difference now was not just the Federal Reserve Act but also World War I. The war saw a dramatic expansion of U.S. export trade, as America became factory and granary to the world. American multinationals established operations in Latin America and Asia. The United States was transformed from debtor to creditor nation.
The war also disrupted the provision of trade credit in Europe. As governments mobilized for war, capital for trade finance grew scarce. German and British banks turned to New York to accept endorsed bills for their clients’ imports not just from North America but from Latin America and Asia as well. The credit they received was denominated in dollars because this was the currency with which the New York banks were familiar.
But this was not the only reason. Starting in 1915 sterling’s value in terms of gold, still the standard measuring rod, oscillated violently. Contracting today for future payment in a currency whose value was uncertain was unappealing. It was especially unappealing given the existence of an alternative, the dollar, still firmly pegged to gold. Not just American traders but also Brazilian exporters of coffee, and more generally importers and exporters throughout Latin America and Asia, concluded that the dollar was the more attractive unit in which to do business.
American banks, free now to deal in acceptances, scrambled to attract this business. The always expansion-minded National City Bank set up a Foreign Trade Department to provide exporters with information on the foreign demand for U.S. products and the creditworthiness of customers, packaging this advice with its financial services. National City was soon extending some $20 million of trade acceptances annually.29
In January 1916, with American support, the British government succeeded in pegging sterling to the dollar.30 But even if the British authorities succeeded in stabilizing sterling for the moment, this did not create confidence that it would remain stable, given massive wartime budget deficits and the rapid rise of British prices. Predictably, the pound began falling when American support was withdrawn at the end of the war. Within a year it had lost a third of its value, more even than when Napoleon returned from Elba in 1815. All the while the dollar remained pegged to gold. It was not surprising that American importers and exporters saw the dollar as the more attractive unit in which to do business. And what was true of merchants and traders in the United States was true of those in other countries.
National City Bank under Frank Vanderlip was again in the vanguard of U.S. banks expanding abroad. Possessing a former financial journalist’s mindfulness of the power of publicity, Vanderlip moved quickly to advertise his bank’s ambitions. Immediately upon passage of the Federal Reserve Act, he had a questionnaire sent to 5,000 American firms soliciting their views on which foreign markets would benefit from the presence of a National City branch. The Du Pont Company, which, sensing the wartime demand for munitions, had opened a nitrate factory in Chile, replied that it was desirous of South American branches. National City set up a branch in Argentina followed by others in Brazil, Chile, and Cuba. In 1915 it acquired the International Banking Corporation, which it used to set up branches across Europe and Asia. Where the bank did not establish branches outright, it sent representatives to gather market intelligence and solicit business.
Other U.S. banks followed National City into the fray. By the end of 1920, American banking institutions operated 181 branches abroad.31 Of those 181 branches, 100 were foreign offices of seven banks doing regular banking business in the United States, and 29 of those 100 branches were foreign offices of National City Bank or its subsidiary, the International Banking Corporation.32
These American banks operating in other countries encouraged importers there to accept drafts in dollars drawn on them by American exporters. Foreigners exporting to the United States could similarly draw in dollars on a U.S. bank instead of drawing drafts in London. Thus, it was not just U.S. importers and exporters who made use of the new acceptance market in New York but also foreign merchants linked to it by the foreign branches of American banks.
IN STRONG HANDS
But the growth of the acceptance market in New York and its progeny, the international use of the dollar, entailed more than the miracle of the market. American banks were not yet capable of building a dollar acceptance market. Their costs were still too high, reflecting a dearth of other investors to whom to sell their acceptances. In their absence, banks were forced to hold the acceptances they originated on their own balance sheets. Doing so was expensive, since the yield to maturity on this paper was often less than what the banks themselves had to pay when borrowing money.
The obstacle was the lack of familiarity of investors with the asset class. And familiarizing them took time. As explained by the new industry’s advocate, the American Acceptance Council (another Paul Warburg creation), the investor “would have to be educated, first as to the nature of a bankers’ acceptance, second as to its attractiveness as an investment, and third, owing to its quality as a doubly secured risk [that it was guaranteed both by the original issuer and the accepting bank], that it would be offered at a lower rate than he had been accustomed to, when buying the best single name commercial paper.”33 Until this was done, dollar acceptance business would remain stunted.
Rather than relying on the invisible hand, the entirely visible hand of Benjamin Strong, now governor of the Federal Reserve Bank of New York, took hold of this problem. In the Hamiltonian tradition, Strong believed in the need for central control of financial affairs. His great-grandfather, also named Benjamin, had served as Alexander Hamilton’s first clerk in the Treasury. The great-grandson grew up in modest circumstances. His father superintended a section of the New York Central Railroad, and Strong himself chose to forgo college for financial reasons. Starting as a clerk (and for that purpose taking a remedial penmanship course to correct his borderline-illegible handwriting), Strong rose through the financial ranks before being tapped by Henry Davison, a country club acquaintance, to work for the newly formed Bankers Trust Company. When during the 1907 financial crisis J. P. Morgan organized the New York banks to rescue their weaker brethren, Morgan turned to Davison to manage the effort, and Davison turned to Strong. Strong’s involvement in those 1907 rescue efforts made him an energetic advocate of financial reform and put him on the road to Jekyll Island.
Like Warburg, who had helped recruit him to the governorship of the New York Fed, Strong saw the need for a trade acceptance market to stabilize America’s finances. Fostering a market in actual merchandise transactions, as opposed to financial speculation, would help to prevent a recurrence of 1907-style financial excesses, he believed. As governor of the New York Fed, Strong also appreciated that the existence of a market in trade acceptances gave the Bank of England a handle with which to manage credit conditions. He saw development of this market as enhancing the competitiveness of American industry and expanding the country’s foreign trade. He saw all this as a project that the Federal Reserve System should support.
Following Strong’s lead, the Federal Reserve Board therefore instructed the system’s regional branches to purchase acceptances for their own account.34 The reserve banks purchased acceptances to stabilize and reduce discount rates, and the favorable behavior of discount rates in turn encouraged the growth of the market. In the first half of the 1920s the Federal Reserve Banks were the dealers’ dominant counterparty. In addition, a few other knowledgeable investors were attracted to the market. The main ones were foreign banks, including foreign central banks, with large surplus balances in the United States for whom acceptances quickly became a favored investment. The July 1919 issue of the Federal Reserve Bulletin noted that most of the $10 million acquired by the Dutch Central Bank on behalf of Dutch sellers of flower bulbs and diamonds purchased by Americans in Holland was invested in bank acceptances.
Slowly dealers specializing in acceptance business appeared on the scene. The largest of them, the International Acceptance Bank, had as its chairman none other than Paul M. Warburg. Warburg’s motivation for launching IAB was to finance German grain imports and help rebuild Germany’s war-torn economy. IAB was also a way for Warburg to help his brother Max, who still ran the family firm in Hamburg. IAB would work hand in glove with M. M. Warburg, giving the latter much-needed business in the straitened circumstances of the 1920s.35 Slowly but surely other banks also created subsidiaries to purchase and sell acceptances and market them to retail investors.
DEBUT OF THE DOLLAR
The growth of this market in trade acceptances finally allowed the dollar to assume a meaningful international role. By the second half of the 1920s more than half of all U.S. imports and exports were financed by bank acceptances denominated in dollars.36 The attractiveness of doing business in New York reflected the fact that the interest rate that importers and exporters had to pay was now as much as a full percentage point lower than in London. Not just those buying and selling goods to the United States but also merchants engaged in trade with other countries flocked to New York. By the end of the 1920s the value of dollar acceptances issued to finance trade between third countries, together with those backed by goods warehoused in foreign countries, approached that of acceptances issued to finance imports into the United States itself.
This trend was part of the growing importance of the United States in international transactions generally. Europe having been devastated by the war, the resource requirements of postwar reconstruction were immense. It followed that the continent looked abroad for finance. A United States flush with funds was the obvious place to look. To governments for whom this was not obvious, Strong drove home the point. He traveled to Europe to negotiate loans. From Poland to Romania he sent emissaries like the Princeton University money doctor Edwin Kemmerer to encourage countries to contract loans in the United States.
In doing so Strong competed with Montagu Norman, his counterpart at the Bank of England, who urged countries to seek assistance for financial stabilization not in the United States but through the League of Nations—of which the United States conveniently was not a member. A League loan in London might help a country stabilize its currency, but it would also encourage it to contract for further borrowing there. Negotiating bilaterally with the United States, in contrast, would lead to borrowing in New York. Although the two men were outwardly very different—where Strong was handsome and self-confident, Norman had the pinched features of a hypochondriac—they were friends and even vacationed together. Strong famously kept interest rates low in 1924–25 to support Norman’s effort to return sterling to the gold standard. But if allied in other causes, they were rivals in this one. Strong used all his leverage to encourage countries to arrange their stabilization loans in New York.
All through the 1920s capital flowed from the United States, where it was abundant, to Europe, where it was scarce. American banks arranged bond issues for European governments and corporations, denominating them in dollars so they could be marketed to American investors. They opened store-fronts to pitch them to retail customers.
This high-pressure salesmanship should have been a warning. As inexperienced U.S. financial institutions rushed into the field, they extended increasingly dubious loans. One is reminded of regional banks scrambling into the subprime mortgage market in the later stages of the recent boom. The title of Ilse Mintz’s study Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920–1930 tells the tale.37 Inexperienced U.S. banks enthusiastically underwrote, and their clients enthusiastically subscribed, bonds issued on behalf of German cities for the construction of municipal swimming pools, a form of liquidity that did not directly enhance the capacity to repay. Eighty years later American borrowers got even by selling German banks collateralized debt obligations backed by those same subprime loans.
Lending in Latin America by new entrants like the Chase Securities Company fared little better. A loan to Cuba for a highway spanning the island foundered on the inability of the contractors to complete more than isolated segments of pavement. It didn’t help that, for political reasons, the government felt compelled to commence construction of separate segments in all five provinces. Investors were in the dark about the fact that the son-in-law of the Cuban president had been hired by the Cuban branch of the American bank during the period that the bank in question competed for the privilege of lending to the Cuban government.
When at the end of the 1920s new money to service old debts stopped flowing, the Ponzi-like nature of the scheme was revealed. The majority of the foreign bonds underwritten by American banks lapsed into default.
But these were problems for later. For now the main impact of these flows was to enhance the international role of the dollar. Before the war, the dollar exchange rate had been quoted in fewer financial centers than minor currencies like the Italian lira and the Austrian schilling. Now it was quoted more frequently than all rivals. By the second half of the 1920s, foreign acceptances in dollars exceeded foreign acceptances in sterling by a factor of two to one. By 1924 the dollar accounted for a larger share than the pound of the foreign exchange reserves of central banks and governments.
Incumbency is thought to be a powerful advantage in international currency competition. It is blithely asserted that another quarter of a century, until after World War II, had to pass before the dollar displaced sterling as the dominant international unit. But this supposed fact is not, in fact, a fact. From a standing start in 1914, the dollar had already overtaken sterling by 1925. This should be taken as a caution by those inclined to argue that incumbency gives the dollar formidable advantages today.
To be sure, it took an exceptional shock, World War I, and the market-making efforts of the Fed to effect this changing of the guard. Still, it is not impossible to imagine something analogous today. For the wartime shock to sterling, substitute chronic U.S. budget deficits. And for the efforts of the Fed to establish a market in trade acceptances in New York, substitute the efforts of Chinese officials to establish Shanghai as an international financial center. The renminbi replacing the dollar may not be anyone’s baseline scenario, but it is worth recalling the history of the 1920s before dismissing the possibility.
IT ALL COMES CRASHING DOWN
The financial flowering of the dollar, however, was soon all for naught. The Roaring Twenties gave way to the Great Depression. This mother of all depressions was global. It affected every country. One of its most destructive impacts was on international transactions. And with the decline in international transactions came a decline in the international role of the dollar.
Trade was bound to contract with so vicious a fall in output and spending. But this was not all: seeing spending collapse, governments slapped on tariffs and quotas in a desperate effort to bottle up the remaining demand. Not knowing what else to do, they used trade policy to shift spending toward domestically produced goods. In the United States, farmers who had endured depressed crop prices now allied with light industry along the Eastern Seaboard to push the Smoot-Hawley tariff through Congress. In the UK, the influential economist John Maynard Keynes had trumpeted the advantages of globalization in his 1919 best-seller, The Economic Consequences of the Peace. In 1931, seeing no alternative, he advised the British government to impose an across-the-board tariff in a last-ditch effort to boost spending on domestic goods. The result was the General Tariff of 1932.38 Germany followed with an “equalizing tariff.” The Netherlands abandoned its traditional free trade policy, raising import duties by 25 percent. And so on. Whereas global production of primary products and manufactures fell by 20 percent between 1929 and 1932, the volume of international trade fell by fully 36 percent.39 There was correspondingly less demand for dollars to finance and settle trade.
The implosion of long-term foreign lending was even more dramatic. New long-term foreign loans by U.S. investors, having peaked at $1.2 billion annually in 1927 and 1928, fell to less than $200 million in 1931 and a scant $700,000 in 1932.40 And since the dollars on which foreigners relied to purchase U.S. imports were no longer available, the tendency to hold balances in New York to service such obligations declined commensurately.
What made the Great Depression great, of course, was that it was allowed to destabilize banking systems. Banks that had extended loans not just to foreign governments and corporations but also to American firms, farmers, and municipalities now saw these investments go bad. As bank balance sheets deteriorated, depositors scrambled to withdraw their funds. A first wave of bank runs erupted in the final months of 1930. Most of the affected banks had links to the Nashville-based investment firm Caldwell and Company, which controlled the largest chain of banks in the South. These banks were all owned by Caldwell and Company itself or one of its affiliates, or else they were owned and operated by individuals with personal ties to the founder, Rogers Caldwell. Caldwell was the Michael Milken of his day, having established his firm in 1917 at the tender age of twenty-seven to underwrite the junk bonds of southern municipalities and sell them to retail investors.41 His father, James Caldwell, had come to Nashville in 1870, where he went to work for a wholesale grocery. Finding himself one day unable to complete an order for millet seed (seed used to raise hay for horses), James had bought up the entire supply in the city, cornering the market and doubling his investment. From there it was a small step into insurance and banking. The son similarly moved into banking, and his operations were similarly dubious. Often the main and, indeed, only customers of Caldwell’s banks were the same municipalities whose bonds Caldwell and Company underwrote and sold onward. When those municipalities experienced financial distress in 1930, so did Caldwell’s banks.
But had it not been Caldwell it would have been someone else. The deterioration of economic conditions made banking problems inevitable. By 1931 there were bank runs in all parts of the United States. Nor was the problem limited to America: banking panics erupted in Argentina, Mexico, Austria, Belgium, France, Germany, Hungary, Romania, the Baltic states, Egypt, Turkey, and the UK. Where there were banks, there were panics. Scarcely a part of the world was immune.
In some cases these crises were compounded by the failure of the authorities to act, but in others they were worsened by the very fact that authorities did act. When officials provided banks with emergency assistance, as in Britain, they signaled that they attached higher priority to stabilizing the financial system than stabilizing the currency. British banks, under pressure from their new American competitors, had provided credit to German exporters on concessional terms. As the financial crisis now spread to Germany, Berlin froze repayments. This punched a hole in the balance sheets of the London banks and in Britain’s balance of payments as well. Under other circumstances the Bank of England would have responded to the resulting gold losses by raising interest rates to attract capital from abroad. But it understood that higher rates would make funding their operations more expensive for the banks. So the Bank of England resisted the temptation to tighten. Some observers ascribed the Bank’s failure to defend sterling to the fact that the governor, Montagu Norman, was indisposed. Exhausted by the crisis, he had sailed off for a Canadian holiday. In fact, however, Norman’s seconds at the Bank knew exactly what they were doing. They were consciously choosing the stability of the banks over the stability of sterling.
Investors monitoring the Bank of England had no trouble seeing that currency depreciation was coming. Their self-preservation instincts kicking in, they scrambled to get their money out of the country before sterling’s depreciation eroded the value of their claims. They converted their funds into foreign currency and redeposited them abroad.
Still the Bank of England stuck with its strategy, which was to do nothing. Aside from two small increases in the second half of July, it resisted the pressure to raise interest rates to defend the exchange rate. The decision to abandon the gold standard and allow the pound to depreciate followed, unavoidably, on September 20.
Not everyone was pleased. British tourists disembarking in Manhattan from the White Star Line’s S.S. Homeric were shocked by how few dollars their pounds could buy. “A pound is still a pound in England,” huffed one. “I shall carry my pounds home with me! A bit high this, something of a holdup, what?”42 The response of industry, in contrast, was distinctly positive. “Bryan was right,” as Clark H. Minor, the UK-based president of International General Electric, summarized the lesson, referring to William Jennings Bryan’s campaign against gold. Minor was not the only one to draw the link; before long, the British Isles were engulfed in a “Britain for Bryan” boom.
DOLLAR BACKLASH
Sterling’s devaluation raised questions about whether the dollar was secure, shifting financial pressure to New York. Not just private investors but central banks, with France, Belgium, Switzerland, Sweden, and the Netherlands in the vanguard, rushed to convert their dollars into gold before the moment passed. Conversions started on September 21, the first business day after sterling’s devaluation. After waiting two weeks, the New York Fed raised its discount rate by a full percentage point to defend the dollar. A week later it raised the discount rate a second time, again by a full percentage point.
This was the sharpest increase in rates in such a short period in the history of the Federal Reserve. Not for 47 years, until 1978 and another episode of pronounced dollar weakness, would the Fed again raise rates so far so fast. Although the dollar exchange rate was stabilized by its aggressive action, the same cannot be said of the banking system. In October alone, 500 banks failed. In the six months from August 1931, nearly 2,000 went under. Such are the effects of raising interest rates in a financial crisis.
With the Fed stoutly defending the dollar, the pound/dollar exchange rate fell from $4.86 to $3.75 in a week. By December 1931 it had reached $3.25. Expensive British exports now became cheap. From $3.25, speculators concluded, the sterling exchange rate could only go up. Accordingly, it mounted a modest recovery. Freed to support the economy, the Bank of England could cut its discount rate to 2 percent, inaugurating the policy known as “cheap money.” Ultimately this was the same escape route chosen by other countries, starting with Britain’s Commonwealth and other trade partners, followed by the United States, which abandoned the gold standard in 1933, and concluding with France, Belgium, the Netherlands, and Switzerland, all members of the “gold bloc,” so named because they continued against all odds to cling to the gold standard before finally abandoning it in 1935–36.
With less trade, less foreign borrowing, and less commitment to defending exchange rates, there was less need for central banks to hold foreign currencies. When governments and central banks sought to influence market outcomes, they were now more likely to do so by tightening controls than by buying and selling foreign exchange. This change in strategy permitted them to drastically reduce their foreign currency holdings. Before Britain abandoned gold, the National Bank of Belgium, which held reserves in London, had asked the Bank of England whether there was a danger that sterling might be devalued. The Bank of England had responded that this step was out of the question. Having been burned, the National Bank of Belgium now sold off not just its sterling but also, just in case, its dollars. The Bank of France and others followed suit.
Although the importance of both the dollar and the pound as reserve currencies was diminished by the crisis, the sale of foreign currencies by central banks was disproportionately a sale of dollars. By the end of 1931, dollars accounted for 40 percent of remaining foreign exchange reserves worldwide, but sterling nearly 50 percent.43 This result might seem peculiar, given that the Fed was defending the dollar while the Bank of England was not doing likewise for sterling. But the U.S. depression was deeper and longer. The British economy began recovering in early 1932, but it took another year for activity to bottom out in the United States. The collapse in U.S. trade being even more severe, the volume of acceptance business fell off even more dramatically in New York.
That said, the single most important reason that sterling temporarily regained its lead as an international currency was the practice of the members of the Commonwealth and Empire of holding their reserves in London. For Commonwealth countries like Australia and New Zealand, doing so was more than a matter of economic logic. It was a demonstration of political solidarity. For the Empire it was not even a choice. The colonies did what they were told by the Foreign or Colonial Office. Because the United States lacked the same imperial prerogatives, the dollar did not enjoy the same support.
But with international transactions of all kinds depressed for the balance of the 1930s and with politics dominating economics, it was easy to miss that a changing of the guard had already taken place. It was easy to overlook that the dollar had overtaken sterling as the leading international currency. For anyone uncertain about the situation, however, World War II would clarify it soon enough.
DOMINANCE
For a quarter of a century after World War II, the dollar reigned supreme. Only the United States emerged strengthened from the war. Its economy towered over the world like none other. It accounted for fully half of global industrial production.1 Only its currency was freely traded.
As a result, barely two decades after its debut as an international currency the dollar was the dominant unit in which prices were quoted, trade was invoiced, and transactions were settled worldwide. For foreign central banks and governments the dollar was as good as gold, since the United States stood ready to sell gold at a fixed price of $35 an ounce.2 The Articles of Agreement of the International Monetary Fund, the newly created steward of the international system, acknowledged the currency’s unique status by authorizing countries to define their exchange rates in dollars. Other potential issuers of international currencies lacked either open financial markets, like Germany, or financial stability, like France. The UK lacked both. The dollar was not just the dominant international currency but, outside the British Commonwealth and Empire, effectively the only one.
Central banks still had the option of accumulating gold, but the supply of newly mined gold was limited. There was also the uncomfortable fact that since the Soviet Union and South Africa were the main producers, purchasing gold effectively subsidized two unsavory regimes.
These facts placed the dollar and the United States in a unique position. American consumers and investors could acquire foreign goods and companies without their government having to worry that the dollars used in their purchases would be presented for conversion into gold. Instead those dollars were hoarded by central banks, for which they were the only significant source of additional international reserves. America was able to run a balance-of-payments deficit “without tears,” in the words of the French economist Jacques Rueff. This ability to purchase foreign goods and companies using resources conjured out of thin air was the exorbitant privilege of which French Finance Minister ValĂ©ry Giscard d’Estaing so vociferously complained.
STERLING HANGOVER
For reasons of history if nothing else, the pound remained the dollar’s principal rival. The Commonwealth, Empire, and other members of the sterling area had given the UK an unlimited credit line during the war.3 They supplied Britain and its army with resources and war matĂ©riel, taking British treasury notes as IOUs. Britain and its allies meanwhile ran down their dollar reserves to procure supplies from the United States. By the end of the war the accumulated sterling balances of central banks and governments thus exceeded their dollar balances by a factor of two to one.4
Superficially this created the impression that the pound was still the leading reserve currency. But two-thirds of overseas financial claims on the UK were in the hands of that small part of the world that comprised the sterling area.5 Most of its members had accumulated sterling for wartime reasons. They maintained it now only because the controls imposed by Britain prevented them from exchanging it for goods or more useful currencies.
It was widely understood that holding sterling was a losing proposition. In the halcyon days before 1914, Britain’s assets abroad had greatly exceeded its liabilities. There had been no question about the stability of sterling or the security of foreigners’ investments. But now the country’s net external sterling liabilities, at $15 billion, were nearly six times its gold and foreign currency holdings.
If foreigners were allowed to freely sell their claims on Britain, their value would drop like a stone. This danger became acute in 1946 when the United States made the removal of Britain’s currency controls a precondition for extending a loan for British reconstruction. This was the one great wartime failure of John Maynard Keynes. This greatest of British economists had served H.M. Treasury in a variety of wartime capacities and led negotiations with the Americans over the structure of the postwar international monetary system. Supremely self-confident, he believed that the U.S. government would provide its ally with a postwar loan free of onerous conditions once it heard his compelling arguments. Instead, the Americans demanded that Britain remove its controls. Doing so, they believed, would expand the market for U.S. exports.6 For them the resumption of normal peacetime international financial relations was overdue.
Keynes’s failure to head off this demand reflected his limited understanding of American politics. In the British system, a government with a parliamentary majority could do pretty much as it pleased. An enlightened Roosevelt-Truman administration, Keynes reasoned, could similarly push through its chosen policies, enjoying as it did a majority in both houses of Congress. He failed to reckon with the independence of American legislators or their isolationist tendencies. The further one moved from the Eastern Seaboard, the less Americans and their congressional representatives valued their supposed special relationship with Britain. When the administration pushed for concessions for the British, the Congress pushed back.
Keynes’s failure to negotiate better terms may have also reflected his weakened physical state. He was suffering from a bacterial heart infection that subjected him to increasingly serious heart attacks. He tired easily and was frequently incapacitated. That the UK continued to rely on him to represent its interests, despite these problems, testifies to his singular intellectual capacity and stature.
The precise requirement laid down by the Americans was that all sterling now earned by foreigners should be freely usable in merchandise transactions within a year of the loan’s approval by the Congress. When current account convertibility, as this condition was known, was duly restored on July 15, 1947, residents of other countries rushed to convert sterling into dollars to purchase American goods.7 Britain’s reserve losses in the first month exceeded $1 billion. For a country with less than $2.5 billion of gold and foreign exchange, the position was untenable. Anthony Eden, deputy leader of the Conservatives, likened it to “doing the splits over an ever-widening abyss.”
With no choice, Britain slapped controls back on after just five weeks. So much for the idea that a convertible pound sterling might again play a leading international role.
British policymakers became understandably shy about current account convertibility. Not until 1959 would they try again. When 1959 arrived, sterling balances, still mainly in the hands of the Commonwealth and Empire, remained at the same level as a decade earlier. Dollar reserves, meanwhile, had more than tripled.8 It was clear which currency countries wanted when they accumulated reserves. It was not the currency of Europe’s sick man.
BATTLE OF ALGIERS
Nor were there other attractive options. The franc had once been an important reserve currency, but it never recovered from the political and financial chaos through which France suffered after World War I. France now also had its quagmire in Algeria. The aftermath of World War II saw bloody independence struggles around the world, but the Algerian conflict was especially violent. The Algerian National Liberation Front fought not just the French army but the rival Algerian National Movement. The army, under attack at home and abroad, split into two factions, with members of one plotting to overthrow the French government. There were massacres of civilians. Cafes in Paris were bombed. Torture was used to extract information from political prisoners.9 Meanwhile successive French governments, each weaker than the last, vacillated.
The culmination in the spring of 1958 was a political crisis in which a cabal of dissident army officers seized control of Algeria to prevent it from being abandoned by an indecisive Paris. Paratroopers from the Algerian corps then landed in Corsica, taking over the island on May 24. They planned next to seize Paris and replace the government with one more firmly committed to control of Algeria. Their generals sent out the coded message to prepare for the invasion. (“The carrots are cooked. The carrots are cooked.”) In Paris, the embattled government uncoiled rolls of barbed wire on the airfields to prevent the paratroops from landing. The public was not reassured.
Seeing their support dissolving, the leaders of the Fourth Republic agreed that the war hero, Charles de Gaulle, should be recalled to power. Only de Gaulle had the authority to put down the rebellion. His personal prestige was greater than that of “any Frenchman since Napoleon,” one expert observed. (Certainly this was de Gaulle’s own view.) The great man returned to Paris from his home village of Colombey-les-Deux-Églises. Granted emergency powers for six months, he brought the army into line.
This political crisis was also a financial crisis. The cost of the war was enormous, and the French central bank was forced to directly finance the government’s budget deficit. Already in 1955–1957 the Bank of France had lost two-thirds of its reserves. The finance ministry responded by tightening import licensing requirements, restricting purchases of foreign securities, and limiting the amount of currency that residents could carry when traveling abroad.
These measures to bottle up the pressure proved inadequate, what with the Bank of France continuously pumping additional money into circulation to finance the budget deficit. On August 12, 1957, the government was forced to devalue. The country’s militant trade unions, seeing the purchasing power of their earnings eroded, were not pleased. Buying them off required sharp wage increases, which dissipated the hoped-for improvement in competitiveness.10 The first devaluation having failed, a second one, this time by 17 percent, followed in barely a year. To minimize embarrassment, de Gaulle waited until the end of December and after the presidential elections, in which he won 78 percent of the electoral college.
This second devaluation, being accompanied by budget-balancing measures, restored external balance.11 But serial devaluation was not the behavior of a major power. It was not the behavior of a grand general. De Gaulle was imperious and preoccupied by the glory of France, not to mention his own glory. Presiding over the December devaluation rankled. That the French president subsequently became preoccupied with dethroning the dollar no doubt reflected his memories of this demoralizing episode.
RELUCTANT POWERS
Germany was the one European country with a history as a reserve center and no significant balance-of-payments problems.12 But memories of the first half of the 1920s, when it had experienced one of the most extreme hyperinflations in recorded history, were still fresh. German officials reacted now in almost Pavlovian fashion to the least whiff of inflation.
The problem was that each time the Bundesbank, West Germany’s newly established central bank, tightened monetary policy with the goal of curbing inflation, its higher interest rates attracted capital from abroad, loosening credit conditions and reigniting inflation fears. To limit inflows, Germany therefore maintained restrictions on purchases of money market instruments by nonresidents.13 Even had foreigners wished to use the deutschmark for international transactions, in other words, they would have been frustrated. Germany relaxed some of its controls in the late 1950s and 1960s but tightened them again in April 1970 and May 1971 in response to renewed capital inflows. None of this made the country an attractive place for foreigners to do financial business.
There was also the absence of competition from rising powers. In the third quarter of the twentieth century, Japan resembled today’s China, an Asian nation growing three times as fast as the United States. By the 1970s it had become the second largest economy in the world, such being the miracle of compound interest. But the yen then, like the renminbi now, played essentially no international role. Neighboring Asia, where Japan sourced materials and sold manufactures, was an obvious place where the yen might have been used to quote prices, settle transactions, and serve as international reserves. But memories of Japanese colonialism and wartime brutality had not faded. Relying on the yen would have sat uneasily with the neighbors. It did not help that the most important of them, Mao’s China, was cut off commercially and financially from the outside world.
And even had there been a desire to use the yen in international transactions, Japanese policymakers would have discouraged the practice. Their priority was export-led growth. They fostered export-oriented manufacturing using a hypercompetitive exchange rate and a battery of tax breaks and subsidies. They enlisted the Export-Import Bank of Japan and the Japan Development Bank, together with their influence over other financial institutions, to channel cheap investment finance to export industries.
Internationalizing the yen would have undermined those policies. Had banks been free to take money out of the country, they could have evaded government direction to lend to domestic producers at artificially low rates. Japanese financial markets had to be placed in an airtight compartment sealed off from the rest of the world. Allowing foreigners to invest in the country, as needed for reserve-currency status, would have subverted the industrial-policy strategy of the Ministry of International Trade and Industry (MITI). And allowing a foreign demand for yen to develop would have put upward pressure on the exchange rate, negating undervaluation as a tool of economic development. In order to internationalize its currency, Japan then, like China now, would have had to abandon its tried-and-true growth model.
Eventually in the 1980s it did. Japanese policymakers sought to transform Tokyo into an international financial center and cultivate an international role for the yen. But their so-called Big Bang reforms, which removed restrictions on domestic and international financial transactions, did not work as planned. The Big Bang allowed large Japanese companies to access the corporate bond market. Seeing that they were losing their corporate clients, the banks scrambled for other customers, whom they found in real estate developers. The Big Bang spawned a massive real estate boom and bust whose consequences took years to clean up (shades of the subprime crisis). As a result, Tokyo never rose above second-tier financial-center status.
There is a lesson here for Chinese policymakers seeking to transform Shanghai into an international financial center and the renminbi into an international currency. Tread cautiously when deregulating your financial markets. Be careful what you wish for, and be even more careful how you attempt to make that wish a reality.
BANCOR TO THE WORLD
This lack of alternatives meant that the post–World War II international monetary system was dollar based. The problem for other countries, starting with Britain, was how to limit the ability of the United States to manipulate that system to its advantage.
The war had brought Great Britain to the brink of extinction as an independent nation and substantially reduced its economic and financial power. It was clear already before the conclusion of hostilities whose economy would be strong and whose would be weak. If they were going to shape the postwar international monetary system, British officials realized, they would have to rely on the power of ideas rather than the power of their economy.
So they turned again, in 1941, to their leading idea man, Maynard Keynes. Within weeks Keynes had come up with a scheme for a global central bank that he dubbed the Clearing Union. Each country would receive a line of credit denominated in bookkeeping units known as “bancor.” Governments could use those credits to purchase imports. Countries would be prevented from running balance-of-payments deficits indefinitely by the fact that their credits with the Clearing Union were limited. But they would also be discouraged from running chronic balance-of-payments surpluses by provisions requiring them to turn over a portion of any bancor and foreign currencies they earned to the Clearing Union.
While the Keynes Plan referred generically to countries running balance-of-payments surpluses, there was no doubt whom specifically it had in mind. Everyone realized that Keynes’s charges and penalties were targeted at the United States.
American negotiators, led by Harry Dexter White, were smart enough to understand this strategy. The son of immigrant parents—his surname was an Anglicization of Weit—White had risen from modest working-class origins in Boston to the undergraduate program at Stanford, from which he graduated with distinction, and then the Ph.D. program and an assistant professorship at Harvard. In 1934, after an unhappy stint at Lawrence College in Wisconsin, he was hired as an economic analyst at the U.S. Treasury. From there he rose to assistant to the secretary.
White necessarily possessed considerable strength of intellect to rise so far so fast. His Treasury superiors described him as a man of “extraordinary energy and quick intelligence.”14 Keynes acknowledged this capacity in his typical deprecating way, referring to White as “one of the few constructive brains in the Treasury.”15 White was as strong willed—some would say stubborn—as Keynes, if less charming. He was also as well schooled in international monetary matters, having written his dissertation on the French franc between 1880 and 1913, and was more than capable of dealing with Keynes’s more technical arguments.16
White’s own plan for monetary reform, on which he began working after Pearl Harbor, substituted for Keynes’s automatic taxes only the vague possibility of sanctions against a country running chronic external surpluses.17 Keynes had proposed that countries be provided credit lines at the Clearing Union totaling $26 billion—the equivalent today would be $16 trillion, greater than the value of all goods and services produced in the United States. The Americans feared, not without reason, that the financial resources of the Clearing Union would all be used to purchase U.S. goods, forcing America to effectively give them away. White therefore reduced Keynes’s $26 billion to $5 billion.18
Most importantly, the White Plan did away with bancor, proposing instead that the Stabilization Fund (White’s name for the Clearing Union) lend national currencies deposited by governments.19 The United States would provide the single largest share of the Fund’s resources, reflecting its weight in the world economy.
These differences were then hashed out in bilateral negotiations. More accurate would be to say that the two sides eventually agreed to something closely resembling the American proposal, reflecting America’s leverage and Britain’s lack thereof. When American negotiators, in order to bring the export lobby on board, insisted that the Articles of Agreement of the new institution, now called the International Monetary Fund, include the expectation that countries would remove restrictions on the use of their currencies for import and export transactions within five years, Britain had no choice but to agree. When the Americans insisted on eliminating Keynes’s tax on countries running chronic balance-of-payments surpluses and on doing away with bancor, again it had no choice. The main U.S. concession was to raise the total resources of the Fund from $5 billion to $8.5 billion, still far below Keynes’s opening bid of $26 billion. The assent of the other allies and neutrals was obtained in July 1944 at the conclusion of an exhausting two-week conference in the leafy resort town of Bretton Woods, New Hampshire.20
MARSHALLING SUPPORT
But still unclear was how, given the limited availability of credits, countries would obtain the dollars needed to finance imports from the United States. At the end of World War II, Europe and Japan desperately needed imported food and fuel for social stability. They needed capital goods for economic reconstruction. For the time being, the United States provided them through the United Nations Relief and Rehabilitation Administration and the 1946 Anglo-American loan. But these bridging measures were of limited duration.21 And no one knew what would happen when they expired. American policymakers fancied that, with reconstruction and the peacetime reconversion of wartime armaments factories, Europe and Japan would immediately regain the capacity to earn the dollars needed to purchase imports. But in practice, reconversion took time. And exporting manufactures required first importing the capital equipment and other inputs required to produce them. The exports couldn’t come first.
The implications were alarming for Europe and Japan and more broadly for the international system. Countries short of cash might resort to exchange controls and clearing arrangements like those exploited by Germany in the 1930s. If they did, the open trading system that was a U.S. priority would be placed at risk. And from government control of imports, it might be a small step to government control of the economy.
With the intensification of the Cold War, this risk became too great for U.S. policymakers to bear. They responded with the Marshall Plan for Europe and the Dodge Plan for Japan, the first named for the World War II general appointed secretary of state in 1947, the second for the less imposing but no less influential chairman of Detroit Bank who served as special U.S. ambassador to Japan starting in 1948.
The Marshall Plan absorbed 10 percent of the federal budget in its first year. It was an extraordinary act of generosity, and Marshall was just the man to shepherd it through the Congress. The secretary was an exemplary citizen-soldier. His father had fought in the Civil War as a member of the Augusta, Kentucky, Home Guard, and the young Marshall had spent virtually his entire adult life in the military. By 1947 he had become a trusted public figure. He exuded the self-discipline and analytical rigor of a consummate military man. As army chief of staff from 1939 to 1945, Marshall never minced words about the need for personal sacrifice when testifying before Congress. Now, in 1947, he was characteristically blunt about the need for sacrifice to keep Europe in the Western camp.
Dodge, no kin of the automotive family of the same name, was a banker who had worked in Frankfurt and Berlin as financial advisor to the U.S. military government of Generals Dwight Eisenhower and Lucius Clay. Moving to Japan, he briskly applied the lessons he had learned in Germany.
The Marshall and Dodge Plans provided dollars to finance the imported inputs needed to get exports going again, averting the danger that countries would be forced to resort to barter. In this sense the Marshall and Dodge Plans saved the Bretton Woods System and, by implication, the international role of the dollar. Not without reason, some observers have referred to the post–World War II international monetary system not as Bretton Woods but as the “Marshall-Dodge fixed-rate dollar standard.”22
To accumulate reserves, countries must run trade surpluses—something that Europe and Japan were in no position to do in the second half of the 1940s—or else the reserve currency country must lend and invest abroad. After hesitating, the United States provided other countries with dollars through the Marshall and Dodge Plans. China today, like the United States in the 1940s, is running trade surpluses but also seeking to encourage wider international use of its currency. For other countries to get their hands on renminbi, China will therefore have to lend and invest abroad. That this lending and investment is something we are now beginning to see is an indication that Chinese officials know what’s up.
HEDGEHOG’S DILEMMA
The defeated powers Germany and Japan were the most successful at exporting and acquiring dollars. Denied foreign policy ambitions, they focused on growing their economies. Britain still had its empire. France had Algeria, which many Frenchmen regarded as an integral part of the French nation. Both countries had overseas commitments creating budgetary burdens. Both complained of the difficulty of acquiring the dollars needed to purchase foreign goods.
In the 1940s it had been possible to argue that the immensity of U.S. economic power, combined with the severity of postwar economic problems in other countries, made it impossible for them to obtain dollars without American help. Come the 1950s, however, Germany had shown that by investing and cutting costs it was possible to restart the export engine and accumulate all the dollars that might be required. This was something at which France also eventually succeeded by devaluing the franc, balancing its budget, and extricating itself from Algeria. It was something at which Britain only finally succeeded in the 1980s with the advent of Margaret Thatcher.
The point is that, sick men like Britain notwithstanding, by the end of the 1950s the dollar shortage was over. This was not an entirely happy development. The Bretton Woods arrangements had assumed that the dollar was as good as gold. The fact that the stock of foreign-held dollars was now poised to exceed U.S. gold holdings thus posed a threat to the system. It exposed the United States to the equivalent of a bank run if foreign holders all rushed to exchange their dollar claims for gold at the U.S. Treasury’s teller’s window, known colloquially as the “gold window.” American monetary liabilities to foreigners first exceeded U.S. gold reserves in 1960.23 It was no coincidence that the first serious episode of speculation against the dollar was in the second half of that year.
These problems should not have come as a surprise. There was an obvious flaw in a system whose operation rested on the commitment of the United States to provide two reserve assets, gold and dollars, both at a fixed price, but where the supply of one was elastic while the other was not. The Belgian-born economist Robert Triffin had warned of this problem in 1947 in a study for the Federal Reserve Board.24 The short, round-faced Triffin was a hedgehog rather than a fox. He knew this one big thing and wrote of it virtually to the exclusion of all else, as an economist at the Organization for European Economic Cooperation (the forerunner of today’s OECD) and then as professor at Yale University. He did this so single-mindedly that his name became synonymous with the problem.
The Triffin Dilemma was that if the United States refused to provide dollars to other countries, trade would stagnate and growth would be stifled. But if the United States did provide an unlimited supply of dollars, lubricating growth and trade, confidence in its commitment to convert them into gold would be eroded. Eventually there would be a run on U.S. gold stocks, destroying the country’s ability to maintain the $35 gold price. Or the United States might preemptively abandon its obligation to pay out gold at a fixed price. Either way the gold-dollar system was doomed. Triffin’s solution was to create an artificial unit along the lines of Keynes’s bancor that governments would be obliged to accept in international transactions. But, as of the early 1960s, he had few takers.
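The mechanics of the dilemma can be sketched with a few lines of arithmetic. The numbers below are illustrative assumptions, not the historical figures: the point is simply that with the gold price pegged at $35 an ounce and the gold stock essentially fixed, any steady growth of foreign-held dollar claims must eventually push the gold "coverage ratio" below one, at which point a simultaneous rush to the gold window cannot be honored.

```python
# Stylized Triffin Dilemma: fixed gold stock vs. growing dollar claims.
# All quantities are hypothetical, chosen only to illustrate the logic.
GOLD_PRICE = 35.0  # dollars per ounce, fixed under Bretton Woods

def coverage_ratio(gold_ounces, foreign_dollar_claims):
    """Value of the U.S. gold stock relative to foreign-held dollar claims.
    Below 1.0, not all claims could be converted into gold at the fixed price."""
    return (gold_ounces * GOLD_PRICE) / foreign_dollar_claims

gold = 500e6    # ounces of monetary gold (assumed, essentially inelastic)
claims = 12e9   # dollars held abroad (assumed starting point)

# The U.S. supplies reserves to a world economy whose trade grows
# faster than new gold is mined: claims compound, gold does not.
for year in range(1950, 1961):
    claims *= 1.08  # assumed 8% annual growth in foreign dollar holdings

print(coverage_ratio(gold, claims) < 1.0)  # the "bank run" condition arrives
```

Either horn of the dilemma follows directly: hold `claims` down and the world is starved of reserves; let them grow and the ratio inevitably crosses the threshold.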
There is an evident analogy with the situation linking the United States and emerging markets like China and India in the early twenty-first century. The rapidly growing catch-up economies, Europe and Japan in the 1960s, emerging markets today, found themselves accumulating dollars almost despite themselves. Then as now they worried whether those dollars would hold their value. Then as now their worries created the danger of a disorderly scramble out of dollars that might destabilize financial markets.
The main difference today is that there are alternatives to the dollar in the form of the euro and, prospectively, other currencies. This creates at least the possibility of a smooth transition as foreign central banks and governments gradually diversify their reserves. If central banks and governments want to hold more euros, the European Central Bank can supply them.25 Since the euro and the dollar float against one another, this shift can be accompanied by a gradual adjustment in the relative price of the two currencies. The dollar can decline against the euro without threatening the stability of financial markets and the international system.
INTO THE DEEP END OF THE POOL
Not so in the 1960s. With other countries lacking deep and liquid financial markets open to the rest of the world, gold was the only alternative to the dollar. And newly mined gold was in short supply.26
If countries worked together, however, they might buy time. If countries holding dollars agreed not to convert them into gold, the system might be preserved while a permanent solution was sought. But there was an obvious incentive to convert one’s dollars into gold while others were exercising restraint. And since everyone was aware of this possibility, there was a temptation to cheat.
In 1961 the United States sought to address the problem by proposing an arrangement, the Gold Pool, in which other countries agreed to hold onto their dollars and reimburse the United States for half of its gold losses.27 Charles Coombs, vice president of the New York Fed, negotiated it on behalf of the administration. The Gold Pool was a blatantly asymmetric arrangement in which all the transfers went one way. It was an indication of the extent to which the structure of the system had other countries over a barrel.
The Gold Pool was a happy arrangement so long as there was no need to activate it. And through 1964 there was no need owing to large amounts of Soviet and South African gold flowing onto the market.28 But starting in 1965, supplies fell off. The members of the pool now had to sell gold to prevent its dollar price from rising on the London market. They had to reimburse the United States for half its losses. What had been a commitment in theory now had actual implications in practice. Italy began offsetting its contribution to the Gold Pool’s sales in London by converting dollars into gold in the United States. France, never a friend of the dollar, dropped out of the pool in early 1967, a fact disclosed by Paul Fabra, a financial journalist for Le Monde, in what was presumably a strategic leak by de Gaulle’s government.
All this heightened the urgency of a permanent solution. French leaders, more than a little anachronistically, advocated returning to a system in which gold alone was used to settle accounts. In doing so they drew inspiration from the impassioned writings of Jacques Rueff, a long-standing champion of the gold standard. Rueff had worked at the Bank of France in the 1930s, rising to the rank of deputy governor before being dismissed under the Vichy government’s anti-Semitic laws. He had firsthand knowledge of the gold standard’s operation, although how he could have seen France’s unhappy experience then as something to be emulated now is another matter. One explanation is that, as a follower of the Austrian economist Ludwig von Mises, Rueff was an ardent opponent of government interference with the market and viewed the gold standard as a guarantee that governments would not tamper with the monetary system. Another is that he was phobic about inflation, having lived through France’s high inflation in the 1920s. Indeed, the young Rueff had advised the prime minister, Raymond PoincarĂ©, on how to bring that inflation under control. His advice of budget cuts and one last devaluation did the trick.
As an opponent of economic planning, Rueff had been banished to the wilderness following World War II. But when de Gaulle returned to power and the problem became how to rein in inflation, the General called on Rueff to draft a stabilization plan. Once more he recommended budget cuts and one last devaluation, and once more the strategy worked. As a result of this triumph, Rueff acquired de Gaulle’s ear. He also acquired the public’s, which he bent by publishing some eighty-five articles on monetary matters in the course of the 1960s. When de Gaulle attacked the dollar at a press conference in early 1965, castigating the Bretton Woods System as “abusive and dangerous” and arguing that the world should return to a gold-based system, he was channeling Rueff. It did not hurt that Rueff’s arguments resonated with de Gaulle’s insistence that France should not take a back seat to any country, monetarily or otherwise.
But here, in fact, French leaders were engaging in the same kind of wishful thinking to which American policymakers succumbed in 1946. It was not clear, given limited gold production, where under a gold-based system the world would obtain the reserves needed to support an expanding volume of trade and investment. Rueff suggested raising the price of gold, but this ignored the danger that doing so once might create expectations that governments would do so again, encouraging gold hoarding and other destabilizing consequences. Raising the price of gold would reward countries—such as, not entirely coincidentally, France—that had done the most to undermine the system by converting their dollars. Raising the gold price would also create a windfall for the Soviet Union and South Africa. Predictably, Pravda applauded de Gaulle’s comments attacking the dollar.
The French position reflected a peculiar reading of history that ignored the fact that a pure gold-based system had not existed for the better part of a century, neither under the pre–World War I gold standard nor its interwar successor. Under both arrangements central banks had found it necessary to supplement their gold reserves with foreign bonds and bank deposits, including, it should be noted, French bonds and deposits. The French proposal may not have been realistic, but it nonetheless stood in the way of reaching agreement on an alternative.
Germany, not much more realistically, simply sought to preserve the existing system. Anxious to enhance its image as a loyal member of the Western alliance, it prioritized cooperation with the United States. Some German officials were less than enamored of the Americans. Karl Schiller, the moody professor who became economics minister in 1966, objected to the United States exploiting its security leverage and urged following de Gaulle’s example of selling dollars. For most German politicians, however, the security argument for cooperating dominated. Then there was the fact that the Bundesbank was a large holder of dollars as a result of Germany’s chronic surpluses. For the German central bank then, like the Chinese central bank now, any international monetary initiative that downgraded the role and reduced the value of the dollar would have had costly financial consequences.
British policymakers generalized from the decline of sterling; they saw the dollar as next in line. As early as 1962 they proposed supplementing and ultimately replacing national currencies as reserves with a synthetic unit along the lines of Keynes’s bancor. This new unit, they suggested, could be introduced by exchanging it for national currencies already in the possession of central banks. This might have the ancillary benefit, from the British point of view, of removing the overhang of sterling in official hands and eliminating the possibility that it might all be sold off in a sudden panic on news of economic problems. But a financially weak Britain was in no position to drive the debate.
THE VINEYARDS OF INTERNATIONAL FINANCE
The outcome thus hinged on the U.S. position. The problem was that there was no U.S. position. Lacking other ideas, American officials simply restated their commitment to the $35 gold price. They resorted to scattershot tactics to strengthen the U.S. trade accounts. They instituted a Buy American policy for the Defense Department and tied U.S. foreign aid to purchases of American goods. They imposed a tax on foreign interest income and arm-twisted U.S. firms not to invest abroad. The difficulty with these expedients, aside from the fact that they distorted international markets, was that preventing the United States from running current account deficits and investing abroad also prevented other countries from acquiring reserves. It just shifted the world from one horn of the Triffin Dilemma to the other.
Thus, the only real alternative to abandoning the system was to take up Britain’s call for “paper gold.” Already in 1960, in advance of his inauguration, President-Elect Kennedy had appointed a task force to study the dollar problem. Professor Triffin was a member of this task force and did not hesitate to inject his proposals for a synthetic reserve unit.
But Douglas Dillon, the hardheaded ex-banker who served as Kennedy’s treasury secretary, had little patience for Triffin’s ideas. Dillon was former chairman of the investment bank Dillon, Read and son of the firm’s founder, Clarence Dillon. He had moved from banking to diplomacy and from there to policy by virtue of having been a major contributor to Eisenhower’s 1952 presidential campaign.
Eisenhower had first appointed Dillon ambassador to France, a position for which he was qualified mainly by the fact that his family owned the Haut-Brion vineyard. The French were less reassured by this investment in terroir than they were disturbed by Dillon’s lack of fluency in their language. Subsequently Dillon served as undersecretary of economic affairs and of state under Eisenhower, where he distinguished himself. Contrary to the silver-spoon presumption (by the time his stint in Paris ended in 1957, his rudimentary French had become quite good), he was a quick learner and a stickler for detail. So Kennedy plucked him from Eisenhower’s cabinet to reassure the markets and make good on a commitment to appoint a bipartisan cabinet.29 The two men had much in common: both were Harvard graduates, both had been naval officers during World War II, and both were sons of nouveau riche Wall Street wheeler-dealers.30 Dillon assured the president-elect that if he disagreed on an important matter of policy he would resign without causing a row. Kennedy was fully aware that Dillon had been a large contributor to the Nixon campaign. The choice thus spoke volumes about the need for a treasury secretary with investment banking experience and the gravitas to calm the markets.
Under Dillon, U.S. dollar policy had three straightforward elements. First, foreign governments should pay more of the costs of U.S. troops stationed in Europe. Second, the United States should use taxes and regulation more aggressively to support its currency. Third, the Europeans would be arm-twisted not to sell their dollars for gold. As for Triffin’s ambitious academic schemes, the nicest thing Dillon had to say was that they “weren’t very practical.”
With exceptional amounts of Soviet and South African gold flowing onto the market in 1963–1964, this approach sufficed for a holding action. But starting in late 1964 and especially after de Gaulle’s press conference in 1965, confidence began to ebb. Ten industrial countries (dubbing themselves, not very creatively, the Group of Ten) formed a committee to weigh proposals for reforming the system. There was a consensus on the need for change, but not much else. The French proposed issuing paper claims that governments would treat as equivalent to gold. This was essentially an effort to achieve the French objective of raising the price of gold, but through the back door. The maneuver would take place outside the IMF, which the French saw as dominated by the Anglo-Saxons. Other countries proposed instead working through the IMF by expanding the ability of countries to borrow from the Fund and in that way satisfying their need for additional reserves.
But in the absence of agreement, there was an inability to act. The report of the Group of Ten, published in the summer of 1965, concluded only that there existed “a range of views” on what to do.
WHITE HORSE WITH BLACK STRIPES
In April 1965 treasury secretary Dillon was succeeded by his undersecretary, Henry Fowler. The son of an engineer on the Norfolk & Western Railway, the folksy Fowler styled himself a country lawyer, given to drinking root beer at sit-downs with the president. Unlike Dillon, he spoke no foreign language and had little experience in international finance. He did not initially enjoy the respect of his foreign counterparts. On an international swing designed to introduce him to the Europeans, the French finance minister, ValĂ©ry Giscard d’Estaing, pointedly failed to meet him at Orly Airport.
But Fowler quickly acquired definite views. He was skeptical that the gold-dollar system could be maintained. To Giscard and then the other Europeans, he indicated a willingness to discuss international monetary reform. The resulting discussions proceeded on two tracks: one via yet another Group of Ten study group, this one under Otmar Emminger, the no-nonsense vice president of the Bundesbank, the other in the Executive Board of the IMF. Fowler signaled his willingness to contemplate the creation of a new reserve asset. France, finding that its proposal for an increase in the gold price received no support from Germany or other European countries, reluctantly agreed.
In August 1967 finance ministers finally recommended that the IMF be authorized to issue bookkeeping claims called Special Drawing Rights (SDRs for short) to supplement gold and dollar reserves. The term “special drawing rights,” substituted for “reserve drawing rights” at the insistence of the French, supposedly indicated that the new unit was a loan, not a currency. Since it was subject to repayment, the French reassured themselves, it would not be inflationary. Experts like Emminger dismissed the distinction. “What difference does it make?” he asked. “Is a zebra a white horse with black stripes or a black horse with white stripes?”
The SDR was linked to gold at a value equal to one U.S. dollar.31 The new unit would be allocated to IMF members in proportion to their existing rights to borrow from the Fund. Governments would be obliged to accept these bookkeeping claims from other governments and in transactions with the IMF itself. Through the periodic issuance of SDRs, the IMF could now provide countries with the additional reserves they needed to support their expanding trade and payments without adding to the overhang of dollars. Secretary Fowler, for whom the agreement was a personal triumph, hailed it as “the most ambitious and significant effort in the area of international monetary affairs since Bretton Woods.”
There were just two problems. First, SDRs were not very useful, since they were acceptable in transactions only with other governments and the IMF itself. Governments could not use them in transactions with private parties. Second, members holding 85 percent of voting power in the IMF had to agree before any SDRs were issued. France insisted on this provision to protect against what it saw as the danger of excessive liquidity creation. It assumed that with the Europeans voting as a bloc and possessing more than 15 percent of votes in the Fund, they could avert this danger. And different countries for different reasons hesitated to support issuance on a significant scale. France wanted to ensure that issuing SDRs, in relieving the pressure on the dollar, did not also relieve the pressure on the United States to cut its external deficit. Germany worried about the inflationary consequences. Developing countries argued that SDRs should be allocated to countries with the most need, namely themselves. As a result, the amendment to the Articles of Agreement under which SDRs could be created was only formally agreed to in May 1968, and the SDR facility was only finally activated in January 1970. It was too little, too late.
DOMINOS
It was too late because Britain’s chronic balance-of-payments problems had already come to a head. The August 1967 agreement to create the SDR was followed just three months later by a sharp devaluation of the pound. Britain’s troubles resulted from wage increases and the Arab-Israeli War, which led to the closure of the Suez Canal, disrupting international trade and raising the price of oil—and not incidentally leading oil-exporting countries in the Middle East to move funds out of sterling, given that Britain made no secret of its support for Israel. Occurring against the backdrop of a chronically uncompetitive industrial sector in Britain, these events made devaluation unavoidable.
The decision was announced on a cold and foggy Saturday when the markets were closed. Most Britons learned of it courtesy of the BBC, which, enjoying its broadcasting monopoly, was recycling Midnight Lace, a stale Doris Day thriller, which it interrupted to announce the less than thrilling news of a 14 percent reduction in the currency’s value. “I am quite shocked,” Sir Patrick Hennessy, chairman of Ford Motor Company’s British operations, told the press. “I have personally told my business friends abroad that it would not happen.”32
Sterling now mattered less than in 1931, but it still mattered enough for its devaluation to raise questions about the dollar. Another de Gaulle press conference in which the General alluded to the possibility that sterling’s devaluation might topple the dollar did not help. The price of gold shot up.
Obliged as they were to drive it back down, the remaining members of the Gold Pool sold $1 billion of gold in November and another $1 billion in December. By March U.S. gold losses were running half a billion dollars a day. One Swiss bank reportedly had to strengthen its vault to contain all the privately held gold that was flooding in.
Out of options, on Thursday, March 14, U.S. Treasury officials telephoned their British counterparts, requesting that they shut the London gold market. Sterling may no longer have been a first-class international currency, but one legacy of its earlier status was that London was still the main place where gold was traded. Closing the market required a proclamation by the queen. Although it was almost midnight, Prime Minister Harold Wilson rushed to Buckingham Palace, where he obtained the consent of Queen Elizabeth to close the market.
The United States then called an emergency meeting of Belgium, Britain, Germany, Italy, and the Netherlands, the remaining members of the Gold Pool. France should not have been offended, since it had already terminated its participation in the arrangement. But de Gaulle was characteristically piqued; he pointedly kept the Paris Bourse open while the London gold market was closed.
After two days of tense negotiations, U.S. and European officials agreed to a scheme devised by Italian central bank governor Guido Carli for a “two tier” gold market. Carli’s opinions carried weight; more than two decades earlier, at the precocious age of thirty, he had represented Italy at the Bretton Woods Conference. From there he went on to serve on the Executive Board of the IMF and, in Italy, at the foreign trade ministry, treasury, and central bank. He was widely respected for his candle power. His characteristically clever scheme divided transactions into a market tier on which the price of gold was free to fluctuate and an official tier where central banks would transact with one another at the official $35 price. Central banks were now relieved of having to devote real resources to the futile quest to keep the market price of gold from rising. At the same time, the dollar’s link to gold at the official $35 price, and therefore the entire Bretton Woods apparatus, remained in place.
President Johnson, echoing the comments of his treasury secretary the previous year, hailed the provisions of the two-tier gold market as “the most significant reforms of the international monetary system since Bretton Woods.”33 Not everyone was impressed. Leonid Brezhnev gleefully saw the decision as signaling “the beginning of the devaluation of the United States dollar” and “the possibility of a profound crisis of the capitalist system.”
Others, while not sharing Brezhnev’s sense of schadenfreude, similarly questioned the viability of the arrangement. They understood that governments would be tempted to buy gold from the United States at $35 an ounce and sell it for a higher price on the market. The only thing restraining them was fear of the unknown. If there was a run on U.S. gold reserves, rupturing the link between gold and the dollar, no one knew what kind of U.S. monetary policy would follow. No one could anticipate the implications for the international system.
Fear of the unknown was then trumped by fear of the known. With the election of Richard Nixon to the presidency in 1968, U.S. policies became increasingly unilateral and inflationary. Nixon saw no reason to cooperate with other countries. Rather, he sought to manipulate the system to maximal U.S. advantage and to free American foreign policy from financial constraints. Instead of negotiating, he adopted “bullying tactics” to get other countries to hold dollars.34
Nixon selected former Texas governor John Connally to play “bullyboy on the manicured playing fields of international finance” (Connally’s words). Nixon reached across party lines when appointing Connally as treasury secretary, just as Kennedy had reached across party lines when appointing Dillon. That Connally was a longtime sidekick and onetime campaign manager of Nixon’s predecessor LBJ made the appointment startling. It came into focus when it was learned that Connally, while nominally stumping for Hubert Humphrey in the 1968 presidential campaign, had helped to identify oil and gas titans who might contribute to the Republican candidate.
The appointment reflected the president’s fascination with Connally, who cut the kind of dashing figure to which Nixon himself could never aspire. The former Texas governor was tall and handsome, with wavy white hair. Having been a thespian in school, he could be smooth and articulate. Like any actor, he was a publicity hound. He saw the treasury as a platform from which he could advance his presidential aspirations.
Better even a global than a national stage. Although international finance was not his strong suit—the oil depletion allowance was more like it—in May 1971 Connally turned down a request to testify before Congressman Wilbur Mills’s powerful House Ways and Means Committee in favor of a speech to an international monetary conference in Munich because he thought he could earn political points by attacking the Europeans on their own turf. “Considerations of friendship,” he warned them, were no longer enough for the United States to carry Europe’s water. The dollar problem would have to be solved by European countries assuming more of the U.S. defense burden and opening further to U.S. exports. If they didn’t, Connally continued, they would be subject to whatever policies the U.S. chose to enact.
Nixon’s foreign policy adviser Henry Kissinger warned privately that Connally’s scare tactics might backfire. And as Kissinger had predicted, at the Bank for International Settlements in June, European central bankers objected to both the tone and the content of Connally’s speech. When news of the clash leaked to the market, the ongoing drain of gold from the Treasury accelerated. On August 13 Britain, seeking to move before it was too late, asked the United States to convert some of its dollars into gold. This was the last straw; left with no alternative, the United States suspended the conversion of dollars into gold, blaming “international speculators.” To make this look like an assertion of strength, Nixon dressed it up as a New Economic Program complete with tax cuts and a 90-day wage and price freeze. There was also a temporary 10 percent surcharge on imports designed to ensure that “American products will not be at a disadvantage because of unfair exchange rates.” In other words, the surcharge was intended to ensure, now that exchange rates were going to be adjusted, that they would be adjusted to U.S. advantage.
This abrupt, unilateral action was “hardly designed to win friends, or even to influence people, abroad,” in the words of the investment advisor Peter Bernstein.35 The 10 percent surcharge, in particular, won the United States no friends. But it placed other countries over a barrel. At the next meeting of finance ministers, Connally demanded to know what concessions the Europeans were prepared to offer in return for the United States dropping the surcharge and then, theatrically cupping his hand to his ear, observed, “I don’t hear any suggestions.”
The tactic was effective. With the stick of the surcharge, the United States was able to obtain, at a conference at the Smithsonian Institution in December, a new set of exchange rates that amounted to a significant devaluation of the dollar. The result was packaged as a revaluation of foreign currencies in a not very effective sop to U.S. prestige. One-upping his predecessor’s rhetoric in 1968, Nixon called it “the most significant monetary achievement in the history of the world.”36
But while the exchange rates were now different, the system was otherwise the same. Other currencies were still pegged to the dollar, the only difference now being that the U.S. Treasury no longer stood ready to convert dollars into gold for foreign central banks and governments. Nothing prevented the United States from running whatever policies it chose, a prospect that understandably alarmed countries pegging to its currency.
The danger materialized soon enough. Nixon blamed his defeat in the 1960 presidential election on the Federal Reserve’s tight monetary policy, which had depressed the economy. With the 1972 election approaching, he pressured the Fed under Arthur Burns to pump up the money supply. “Err toward inflation,” Nixon instructed him in a meeting at the White House.37
Burns was not accustomed to unsolicited advice. Formerly a very senior professor at Columbia University, where he had tutored, among other students, Alan Greenspan, he was convinced of the appropriateness of the prevailing policy.38 But Nixon was not done. He hinted at legislation that would have allowed him to pack the Federal Reserve Board as FDR had attempted to pack the Supreme Court, and had Charles Colson, subsequently of Watergate fame, plant stories with United Press International about Burns lobbying for a pay increase.39 Burns in fact had only suggested that future Fed chairmen receive higher salaries so that they would be on an even footing with their European counterparts. But this was not the way the story was spun. So Burns goosed the money supply. Inflation accelerated. Pressure on the dollar intensified.
Clearly, something had to be done. So another committee was formed. The Committee of Twenty (with one finance minister or central banker for each of the twenty country groupings represented on the board of the International Monetary Fund) sought to reconcile the desire for exchange rate stability with the need for currencies to move against the dollar. Its proposal for an oxymoronic system of “fixed but adjustable rates” went nowhere. In the spring of 1973, in the midst of its work, another run on the dollar commenced, and the new set of exchange rates so laboriously agreed to at the Smithsonian collapsed.
The Committee of Twenty blithely continued work for another year before abandoning its deliberations. An interesting aspect of that work was the Report of the Technical Group on Indicators, which described how a specific set of indicators (the change in international reserves and the trade deficit or surplus) might be used to introduce “symmetry into the adjustment process.” This was code for the need to compel adjustment by chronic surplus countries, in this case Germany. The discussion paralleled the present debate over whether some kind of international mechanism should be created to compel China to appreciate its currency. It is thus worth recalling that that earlier discussion went nowhere.
QUELLE SURPRISE
None of this—not the devaluation, not the import surcharge, and not the inflation—enhanced the stature of the dollar. “The dollar is regarded all over the world as a sick currency,” read Leonard Silk’s lede in an article in the New York Times, which appeared, not without irony, on July 4, 1973. “Once upon a very recent time,” Time wrote, “only a banana republic would devalue its money twice within 14 months.” Parallels were drawn with sterling’s decline as an international currency. “For someone who spent the 1960s in England,” wrote the academic Emma Rothschild in the New York Times, “the decline of the dollar is like coming home.” Other currencies that were revalued in 1971–1973 were seen as increasingly serious rivals. There were widespread predictions of the dollar’s demise as the dominant unit in international transactions. The conventional wisdom, in other words, sounds remarkably familiar to modern ears.
It was anticipated that rivalry for reserve currency status would grow increasingly intense. With the shift to flexible exchange rates in 1973, it was thought that countries would need fewer reserves. Now that exchange rates were flexible, a shock to the balance of payments could be met by letting currencies adjust. No longer would central banks have to hold the currencies of others in order to intervene in the foreign exchange market.
What followed was therefore a surprise—two surprises, actually. The first one was that there was no decline in the demand for reserves. A series of studies found that countries shifting to flexible exchange rates held the same or even more reserves. The explanation was simple: a floating exchange rate did not mean a freely floating exchange rate. Countries intervened when they concluded that the exchange rate had strayed too far from its fundamental value, and they came to this conclusion not infrequently. Their intervention required reserves—even more reserves given the continued expansion of trade and capital flows.
The second surprise was that there was no shift away from the dollar. Volatility there was in the share of dollars in foreign exchange reserves in the 1970s, but no secular decline. The dollar’s share of total identified international reserves remained close to 80 percent in 1977, as the United States pumped out dollars and the members of the Organization of Petroleum Exporting Countries (OPEC), having jacked up oil prices, parked their earnings in New York.40
STRONG DOLLAR POLICY
It was not the collapse of the dollar peg in the early 1970s so much as the subsequent inflation and mounting unease over the conduct of American monetary policy that precipitated movement away from the dollar. Consumer price inflation rose in every year of the Carter presidency, which did not make holding dollars attractive. Fears of U.S. intentions were fanned by statements by Treasury Secretary Michael Blumenthal in the summer of 1977 that the dollar was overstrong, the implication being that the secretary favored depreciation.
When Blumenthal’s “open mouth policy” caused the dollar to sag, the Arabs began muttering about using another currency when setting oil prices. The chastened secretary flew off to the Middle East to reassure them. The Europeans, seeing their currencies rise and their exporters squirm, reacted with fury. Blumenthal, the Frankfurter Allgemeine Zeitung wrote, was playing a “selfish, risky game that shows little responsibility toward the world economy.” With the situation threatening to spiral out of control, Blumenthal reversed course and announced that he believed in a strong dollar. It would be a long time before a U.S. treasury secretary would again be sufficiently courageous—or reckless—to say otherwise.
Arthur Burns, still Fed chairman, had been among those who blew a fuse over what he perceived as Blumenthal’s attempt to debase the currency, leading Blumenthal in turn to lobby against Burns’s reappointment. In this he was too successful: Burns was succeeded in March 1978 by G. William Miller, a slight, likeable Oklahoman whose command of the nuances of monetary policy was less than complete. Miller, the son of a storekeeper, had grown up in a town so small that it had no jail; as he described it, prisoners were simply chained to a log. He was given to telling bad jokes and laughing so hard that he botched the punchline. But he was also a force of nature: he had graduated from the Coast Guard Academy and the law school of the University of California, Berkeley and built a medium-sized textile manufacturer, Textron, into a giant conglomerate that produced Homelite chain saws, Speidel watchbands, and the Bell UH-1 helicopters, or Hueys, that were the workhorses of the Vietnam War.
Miller was a passionate advocate of equal employment for minorities, which may explain his single-minded pursuit of full employment. Basically he thought that the Fed should pursue employment growth to the exclusion of other goals. He denied that monetary policy could be effective in restraining inflation. Fighting inflation was first and foremost the responsibility, he insisted, of other branches of government. In the spring of 1979, with inflation continuing to accelerate, economists as ideologically diverse as the conservative consultant Alan Greenspan and the liberal Brookings Institution fellow Arthur Okun called for tighter money. Miller resisted, fearing that raising rates would squelch employment growth. Blumenthal and Charles Schultze, the head of Carter’s Council of Economic Advisors, their efforts at private persuasion having failed, were driven to leaking their complaints about Miller’s inaction to the press, in turn driving the president to ask them to desist.
Predictably, the dollar resumed its decline. Foreign currencies became so expensive that U.S. troops stationed in Europe had trouble making ends meet. NATO chief Alexander Haig reported that sympathetic West Germans were giving his soldiers care packages of food and cigarettes.
Complaints mounted about U.S. policy and the losses to which it exposed foreign holders of dollars. OPEC again discussed the possibility of pricing oil in another unit. Saudi Arabia and other members of the cartel made noises about moving their reserves into other currencies. Since doing so might weaken the dollar, their noises raised concerns that other countries might move preemptively in order to avoid ending up holding the bag, making talk of a dollar crash self-fulfilling.
Consideration was therefore given in late 1978 to creating a Substitution Account at the IMF through which dollar reserves could be exchanged for SDRs in an orderly fashion. The idea foundered on the question of who would take the losses if the dollar depreciated. If the answer was the IMF, then establishing the account was tantamount to transferring the risk of losses from some IMF members to others—from those holding lots of dollars to those holding few. If the answer was the United States, which would be asked to guarantee the holdings of the account (as the UK had been asked to guarantee the reserves of the sterling area after its 1967 devaluation), then the United States would incur very significant additional obligations. And this clearly was not something that the United States was willing to do.
Support for a Substitution Account evaporated in any case when Paul Volcker replaced Miller at the Fed in August 1979 and his tight-money policies caused the dollar to strengthen.41 As Nixon’s undersecretary of the treasury for international monetary affairs, Volcker had already been involved in two devaluations of the dollar, in 1971 when the Bretton Woods System collapsed, and in 1973 when the Smithsonian Agreement came apart. In 1973, as the Treasury’s most conspicuous secret agent, he had flown 31,000 miles in five days, shuttling between Tokyo, Bonn, and other capitals in a vain effort to salvage the agreement.42 Given his 6-foot, 7-inch frame, the German press immediately identified him on the streets of Bonn and exposed his supposedly secret mission. Volcker had no desire to oversee another unsuccessful currency adjustment. Having been in and out of the Federal Reserve System since 1952, he was Miller’s opposite not just physically but in his knowledge of domestic and international finance. As president of the Federal Reserve Bank of New York and therefore a member of the Federal Open Market Committee, he had already voted twice, in defiance of Miller, for raising interest rates.
For Carter, desperate now to support the dollar and restrain inflation, Volcker was the man. On cue, the Federal Open Market Committee under Tall Paul raised interest rates.43 The dollar recovered, causing talk of a Substitution Account to wither. But the proposal had already hit the rocks over the question of who would bear the exchange risk. The United States, in particular, was unwilling to see discussions continue, fearing pressure for it to guarantee the value of the dollars held in such an account. This is important to recall now that the idea of a Substitution Account through which countries like China might exchange their dollars for SDRs is again in the air.
THE DOLLAR ENDURES
Yet there was no migration away from the dollar. OPEC talked about pricing oil in a basket of currencies but did nothing. Nor was there active movement by central banks and governments into other currencies.44 Only Iran, where the revolution and hostage crisis created high tension with the United States, significantly altered the composition of its reserves.45 In 1977–1980, when there was the most talk about the dollar losing its exorbitant privilege, the main thing accounting for its declining share of global reserves was that other currencies became more valuable as the dollar depreciated, not that central banks sold what they held. The share of dollars then stabilized after Volcker took over at the Fed and the currency strengthened.46
The dollar’s continued dominance surprised many observers then, but it is hardly surprising now. The United States was still the world’s largest economy. It was still the leading trading nation. It still had the deepest financial markets. The deutschmark, its main rival, was the currency of an economy only a fraction its size. Germany was not a big supplier of the financial securities that were attractive to central banks and other foreign investors in any case, because its government budget was balanced and its financial system was bank based. Since the early 1970s the German authorities had required prior approval for sales of domestic fixed-income securities to nonresidents. They had raised reserve requirements on foreign-owned bank deposits to discourage capital inflows that might fuel inflation. When in 1979 Iran threatened to convert its dollar reserves into deutschmarks, the Bundesbank warned it off, fearing that capital inflows would swell the money supply and stoke inflation. It made clear that it would do whatever it took to discourage central banks and governments from accumulating deutschmarks. The United States, as a larger economy, could provide the international reserves required by other countries “without having its economic policy damaged by the fluctuations of capital flows,” the Bundesbankers observed.47
None of this made the deutschmark attractive for international use. The share of foreign exchange reserves in deutschmarks hovered below 15 percent all through the 1980s.
Nor were there other options. The UK, with its history of inflation and subpar growth, was in the early stages of a Thatcher experiment whose ultimate success remained uncertain. France suffered from slow growth, high unemployment, and financial problems that it sought to bottle up by tightening controls on international capital flows, further diminishing international use of its currency.48
The new player was Japan, whose share in global reserves had risen in the 1970s. That Japan was now an important trading nation and that everyone expected the yen to strengthen made it an obvious currency to add. But Japanese bond markets were small. The yen accounted for only 8 percent of total global reserves at its peak, which came in 1991. From there, Japan descended into an economic and financial funk, and the importance of the yen as a reserve and international currency descended with it.
THE MORE THINGS CHANGE
When the IMF economist George Tavlas surveyed this landscape in 1998, he noted that, notwithstanding talk of a tripolar yen-deutschmark-dollar world, the dollar still dominated international transactions.49 Petroleum prices were set in dollars. Other commodity prices were quoted in dollars. Two-thirds of all international bank loans were denominated in dollars. Forty percent of international bond issues marketed to foreign investors were in dollars.50 Dollars still accounted for more than 60 percent of total identified official holdings of foreign exchange. The dollar’s dominance remained an established fact.
This period was also when there was talk of a “new economy” and of whether America’s surging stock market signaled the advent of a cluster of high-tech innovations on which the country was singularly well suited to capitalize. The idea that the United States was set to outperform a rigid Europe and a depressed and deflated Japan bred confidence that the dollar would remain the dominant international currency. And new economy or not, the dollar’s dominance was supported by a lack of alternatives. The greenback was the predominant international currency, if for no other reason than by default.
But the time for celebration would be brief. For already movement was afoot to create what would constitute, for the first time in fully seven decades, a serious rival.