the quest
RUSSIA RETURNS
On the night of December 25, 1991, Soviet president Mikhail Gorbachev went on national television to make a startling announcement—one that would have been almost unimaginable even a year or two earlier: “I hereby discontinue my activities at the post of the President of the Union of Soviet Socialist Republics.” And, he added, the Soviet Union would shortly cease to exist.
“We have a lot of everything—land, oil and gas and other natural resources—and there was talent and intellect in abundance,” he continued. “However, we were living much worse than people in the industrialized countries were living and we were increasingly lagging behind them.” He had tried to implement reforms but he had run out of time. A few months earlier, diehard communists had tried to stage a coup but failed. The coup had, however, set in motion the final disintegration. “The old system fell apart even before the new system began to work,” he said.
“Of course,” he added, “there were mistakes made that could have been avoided, and many of the things that we did could have been done better.” But he would not give up hope. “Some day our common efforts will bear fruit and our nations will live in a prosperous, democratic society.” He concluded simply, “I wish everyone all the best.”1
With that, he faded out into the ether and uncertainty of the night.
His whole speech had taken just twelve minutes. That was it. After seven decades, communism was finished in the land in which it had been born.
Six days later, on December 31, the USSR, the Union of Soviet Socialist Republics, formally ceased to exist. Mikhail Gorbachev, the last president of the Soviet Union, handed over the “football”—the suitcase with the codes to activate the Soviet nuclear arsenal—to Boris Yeltsin, the first president of the Russian Federation. There was no ringing of bells, no honking of horns, to mark this great transition. Just a stunned and muted—and disbelieving—response. The Soviet Union, a global superpower, was gone. The successors would be fifteen states, ranging in size from the huge Russian Federation to tiny Estonia. Russia was, by far, the first among equals: it was the legatee of the old Soviet Union; it inherited not only the nuclear codes, but the ministries and the debts of the USSR. What had been the closed Soviet Union was now, to one degree or another, open to the world. That, among other things, would redraw the map of world oil.
Among the tens of millions who had watched Gorbachev’s television farewell on December 25 was Valery Graifer. To Graifer, the collapse of the Soviet Union was nothing less than “a catastrophe, a real catastrophe.” For half a decade, he had been at the very center of the Soviet oil and gas industry. He had led the giant West Siberia operation, the last great industrial achievement of the Soviet system. Graifer had been sent there in the mid-1980s, when production had begun faltering, to restore output and push it higher. Under him, West Siberia had reached 8 million barrels per day—almost rivaling Saudi Arabia’s total output. The scale of the enterprise was enormous: some 450,000 people ultimately reported up to him. And yet West Siberia was part of an even bigger Soviet industry. “It was one big oil family throughout all the republics of the Soviet Union,” he later said. “If anyone had told me that this family was about to collapse, I would have laughed.” But the shock of the collapse wore off, and within a year he had launched a technology company to serve whatever would be the new oil industry of independent Russia. “We had a tough time,” he said. “But I saw that life goes on.”2
“THINGS ARE BAD WITH BREAD”
One of the lasting ironies of the Soviet Union was that while the communist system was almost synonymous with force-paced industrialization, its economy in its final decades was so heavily dependent on vast natural resources—oil and gas in particular.
The economic system that Joseph Stalin had imposed on the Soviet Union was grounded in central planning, five-year plans, and self-sufficiency—what Stalin called “socialism in one country.” The USSR was largely shut off from the world economy. It was only in the 1960s that the Soviet Union reemerged on the world market as a significant exporter of oil and then, in the 1970s, of natural gas. “Crude oil along with other natural resources were,” as one Russian oil leader later said, “nearly the single existing link of the Soviet Union to the world” for “earning the hard currency so desperately needed by this largely isolated country.”3
By the end of the 1960s, the Soviet economy was showing signs of decay and incapacity to maintain economic growth. But, as a significant oil exporter, it received a huge windfall from the 1973 October War and the Arab oil embargo: the quadrupling of oil prices. The economy further benefitted in the early 1980s when oil prices doubled in response to the Iranian Revolution. This surge in oil revenues helped keep the enfeebled Soviet economy going for another decade, enabling the country to finance its superpower military status and meet other urgent needs.
At the top of the list of these needs were the food imports required, because of its endemic agricultural crisis, in order to avert acute shortages, even famine, and social instability. Sometimes the threat of food shortages was so imminent that Soviet premier Alexei Kosygin would call the head of oil and gas production and tell him, “Things are bad with bread. Give me three million tons [of oil] over the plan.”
Economist Yegor Gaidar, acting Russian prime minister in 1992, summed up the impact of these oil price increases: “The hard currency from oil exports stopped the growing food supply crisis, increased the import of equipment and consumer goods, ensured a financial basis for the arms race and the achievement of nuclear parity with the United States and permitted the realization of such risky foreign policy actions as the war in Afghanistan.”4
The increase in prices also allowed the Soviet Union to go on without reforming its economy or altering its foreign policy. Trapped by its own inertia, the Soviet leadership failed to give serious consideration to the thought that oil prices might fall someday, let alone prepare for such an eventuality.
IS THE WORLD RUNNING OUT OF OIL?
Since the beginning of the twenty-first century, a fear has come to pervade the prospects for oil and also feeds anxieties about overall global stability. This fear, that the world is running out of oil, comes with a name: peak oil. It argues that the world is near or at the point of maximum output, and that an inexorable decline has already begun, or is soon to set in. The consequences, it is said, will be grim: “An unprecedented crisis is just over the horizon,” writes one advocate of the peak oil theory. “There will be chaos in the oil industry, in governments and in national economies.” Another warns of consequences including “war, starvation, economic recession, possibly even the extinction of homo sapiens.” The date of the peak has tended to move forward. It was supposed to arrive by Thanksgiving 2005. Then the “unbridgeable supply demand gap” was expected to open up “after 2007.” Then it would arrive in 2011. Now some say “there is a significant risk of a peak before 2020.”1
The peak oil theory embodies an “end of technology/end of opportunity” perspective: that there will be no more significant innovation in oil production, nor significant new resources that can be developed.
The peak may be the best-known image of future supply. But there is another, more appropriate, way to visualize the course of supply: as a plateau. The world has decades of further production growth before flattening out into a plateau—perhaps sometime around midcentury—at which time a more gradual decline will begin.
ABOVEGROUND RISKS
To be sure, there’s hardly a shortfall of risks in the years ahead. Developing the resources to meet the requirements of a growing world is a very big and expensive challenge. The International Energy Agency estimates that new development will require as much as $8 trillion over the next quarter century. Projects will grow larger and more complex and there is no shortage of geological challenges.2
But many of the most decisive risks will be what are called “above ground.” The list is long, and they are economic, political, and military: What policies do governments make, what terms do they require, how do they implement their choices, and what is the quality and timeliness of decision making? Do countries provide companies with access to develop resources and do companies gain a license to operate? What is happening to costs in the oil field? What is the relationship between state-owned national oil companies and the traditional international oil companies, and between importing and exporting countries? How stable is a country, and how big are threats from civil war, corruption, and crime? What are the relations between central governments and regions and provinces? What are the threats of war and turmoil in different parts of the world? How vulnerable is the supply system to terrorism?
All of these are significant and sober questions. How they play out—and interact—will do much to determine future levels of production. But these are issues not of physical resources but of what happens above ground.
Moreover, decision making on the basis of a peak oil view can create risks of its own. Ali Larijani, the speaker of Iran’s parliament, declared that Iran needs its nuclear program because “fossil fuels are coming to an end. We know the expiration date of our reserves.” Such an expectation is surprising coming from a country with the world’s second-largest conventional natural gas reserves and among the world’s largest oil reserves.3
This peak oil theory may seem new. In fact, it has been around for a long time. This is not the first time that the world has run out of oil. It is the fifth. And this time too, as with the previous episodes, the peak theory presumes limited technological innovation and that economics does not really matter.
ALTERNATING CURRENTS
Electricity underpins modern civilization. This fundamental truth is often expressed in terms of “keeping the lights on,” which is appropriate, as lighting was electricity’s first major market and remains a necessity. But today that phrase is also a metaphor for its pervasiveness and essentiality. Electricity delivers a precision unmatched by any other form of energy; it is also almost infinitely versatile in how it can be used.
Consider what would not work and would not happen without electric power. Obviously, no refrigerators, no air-conditioning, no television, no elevators. It is essential for every kind of industrial processing. The new digital world relies on electricity’s precision to drive everything that runs on microprocessors—computers, telephones, smart phones, medical equipment, espresso machines. Electricity makes possible and integrates the real-time networks of communications, finance, and trade that shape the world economy. And its importance only grows, as most new energy-consuming devices require electricity.1
Electricity may be all-pervasive. But it is also mostly taken for granted, much more so than oil. After all, gasoline usage requires the conscious activity once or twice a week of pulling into the filling station and filling up. To tap into electricity, all one needs to do is flip a switch. When people think about power, it’s usually only when the monthly bill arrives or on those infrequent times when the lights are suddenly extinguished either by a storm or some breakdown in the delivery system.
All this electrification did indeed begin with a flip of a switch.
THE WIZARD OF MENLO PARK
On the afternoon of September 4, 1882, the polymathic inventor Thomas Edison was in the Wall Street offices of the nation’s most powerful banker, J. P. Morgan. At 3:00 p.m., Edison threw the switch. “They’re on!” a Morgan director exclaimed, as a hundred lightbulbs lit up, filling the room with their light.2
Nearby, at the same moment, 52 bulbs went on in the offices of the New York Times, which proclaimed the new electric light “soft,” and “graceful to the eye . . . without a particle of flicker to make the head ache.” The current for these bulbs flowed underground, through wires and tubes, from a coal-fired electric generating plant that Edison had built a few blocks away, on Pearl Street, partly financed by J. P. Morgan, to serve one square mile of lower Manhattan. With that, the age of electricity had begun.
The Pearl Street station was the first central generating plant in the United States. It was also a major engineering challenge for Edison and his organization; it required the building of six huge “dynamos,” or generators, which, at 27 tons each, were nicknamed “Jumbos” after the huge elephant from Africa with which the circus showman P. T. Barnum was then touring America.
Another landmark event in electric power occurred a few months later, on January 18, 1883. That was the first electricity bill ever—dispatched to the Ansonia Brass and Copper Company, for the historic sum of $50.44.3
It had required a decade of intense, almost round-the-clock work by Thomas Edison and his team to get to that electric moment on Pearl Street. Still only in his midthirties at the time, Edison had already made himself America’s most celebrated inventor with his breakthroughs on the telegraph and the phonograph. He was also said to be the most famous American in the rest of the world. Edison was to establish the record for the greatest number of American patents ever issued to one person—a total of 1,093. Much later, well into the twentieth century, newspaper and magazine polls continued to select him as America’s “greatest” and “most useful citizen.”
Edison was largely self-taught; he had only a couple of years of formal schooling, plus six years as an itinerant telegrapher, making such achievements even more remarkable. His partial deafness made him somewhat isolated and self-centered, but also gave him an unusual capacity for concentration and creativity. He proceeded by experiment, reasoning, and sheer determination, and, as he once said, “by methods which I could not explain.” He had set up a research laboratory in Menlo Park, New Jersey, with the ambitious aim, as he put it, of making an invention factory that would deliver “a minor invention every ten days and a big thing every six months or so.”4
“THE SUBDIVISION OF LIGHT”
That was not so easy, as he found when he homed in on electricity. He wanted to replace the then-prevalent gas-fired lamp. What he also wanted to do, in his own words, was to “subdivide” light; that is, deliver electric light not just over a few large streetlights as was then possible, but make it “subdivided so that it could be brought into private homes.”
Many scoffed at Edison’s grand ambition. Experts appointed by the British Parliament dismissed Edison’s research as “good enough for our transatlantic friends” but “unworthy of the attention of practical or scientific men.”
To prove them wrong and successfully subdivide light, Edison would have to create an entire system—not just the lightbulb but also the means to generate electricity and distribute it across a city. “Edison’s genius,” one scholar has written, “lay in his ability to direct a process involving problem identification, solution as idea, research and development, and introduction into use.” His aim was not just to invent a better lightbulb (there had already been 20 or so of one kind or another) but to introduce an entire system of lighting—and to do so on a commercial basis, and as quickly as possible.5
The inventor had to start somewhere, which meant starting with the lightbulb. The challenge, for a practical bulb, was to find a filament that, when electricity flowed through it, would give off a pleasing light but that also could last not just one hour but for many hours. After experimenting with a wide variety of possible sources—including hairs from the beards of two of his employees—he came up with a series of carbon filaments, first made from cotton thread and then from cardboard and then bamboo, that passed the test.
Years of acrimonious and expensive litigation followed among Edison and other competing lightbulb inventors over who had infringed whose patents. The U.S. Court of Appeals finally resolved the legal fight in the United States in 1892. In Britain, however, the court upheld competing patents by the English scientist Joseph Wilson Swan. Rather than fight Swan, Edison established a joint venture with him to manufacture lightbulbs in Britain.
To create an entire system required considerable funding. Although not called such at the time, one of the other inventions that could be credited to Edison and his investors was venture capital. For what he developed in Menlo Park, New Jersey, was a forerunner of the venture capital industry that would grow, coincidentally, around another Menlo Park—this one in Silicon Valley in California. As an Edison biographer has observed, it was his melding of the “laboratory and business enterprise that enabled him to succeed.”6
Costs were a constant problem, and as they increased, so did the pressures. The price of copper, needed for the wires, kept going up. “It is very expensive experimenting,” Edison moaned at one point. The rising costs strained his relations with his investors, leading him to complain, “Capital is timid.”
But he did keep happy his lead investor—J. P. Morgan—by wiring up Morgan’s Italianate mansion on Madison Avenue in the East 30s in New York City with 385 bulbs. That required the installation of a steam engine and electric generators in a specially dug cellar under the mansion. The clanging noise irritated not only the neighbors but also Mrs. Morgan. Moreover, the system required a technician to be on duty from 3:00 p.m. to 11:00 p.m. every day, which was not exactly efficient. Making matters worse, one night Edison’s wiring set J. P. Morgan’s library on fire. But, through it all, Morgan remained phlegmatic, with his eye on the objective. “I hope the Edison Company appreciates the value of my house as an experimental station,” the banker dryly remarked.7
GLACIAL CHANGE
On the morning of August 17, 1856, as the first sunlight revealed the pure white cone of a distant peak, John Tyndall left the hotel not far from the little resort town of Interlaken in Switzerland and set out by himself, making his way through a gorge toward a mountain. He finally reached his destination, the edge of a glacier. He was overcome by what he encountered—“a savage magnificence such as I had not previously beheld.” And then, sweating with great exertion but propelled by a growing rapture, he worked his way up onto the glacier itself. He was totally alone in the white emptiness.
The sheer isolation on the ice stunned him. The silence was broken only intermittently, by the “gusts of the wind, or by the weird rattle of the debris which fell at intervals from the melting ice.” Suddenly, a giant cascading roar shook the sky. He froze with fear. He then realized what it was—an avalanche. He fixed his eyes “upon a white slope some thousands of feet above” and watched, transfixed, as the distant ice gave way and tumbled down. Once again, it was eerily quiet. But then, a moment later, another thundering avalanche shook the sky.1
“A SENTIMENT OF WONDER”
It had been seven years earlier, in 1849, that Tyndall had caught his first glimpse of a glacier. This occurred on his first visit to Switzerland, while he was still doing graduate studies in chemistry in Germany. But it was not until this trip in 1856 that Tyndall—by then already launched on a course that would eventually rank him as one of the great British scientists of the nineteenth century—came back to Switzerland for the specific purpose of studying glaciers. The consequences would ultimately have a decisive impact on the understanding of climate.
Over those weeks that followed his arrival in Interlaken in 1856, Tyndall was overwhelmed again and again by what he beheld—the vastness of the ice, massive and monumental and deeply mysterious. He felt, he said, a “sentiment of wonder approaching to awe.” The glaciers captured his imagination. They also became an obsession, repeatedly drawing him back to Switzerland, to scale them, to explore them, to try to understand them—and to risk his life on them.
Born in Ireland, the son of a constable and sometime shoemaker, Tyndall had originally come to England to work as a surveyor. But in 1848, distressed at his inability to get a proper scientific education in Britain, he took all his savings, such as they were, and set off for Germany to study with the chemist Robert Bunsen (of Bunsen burner fame). There he assimilated to his core what he called “the language of experiment.” Returning to Britain, he would gain recognition for his scientific work, and then go on to establish himself as a towering figure at the Royal Institution. Among his many accomplishments, he would provide the answer to the basic question of why the sky is blue.2
Yet it was to Switzerland that he returned, sometimes almost yearly, to trek through the high altitudes, investigate the terrain, and, yoking on ropes, claw his way up the sides of mountains and on to his beloved glaciers. One year he almost ascended to the top of the Matterhorn, which would have made him the first man to surmount it. But then a sudden violent storm erupted, and his guides held him back from risking the last few hundred feet.
Tyndall grasped something fundamental about the glaciers. They were not stationary. They were not frozen in time. They moved. He described one valley where he “observed upon the rocks and mountains the action of ancient glaciers which once filled the valley to the height of more than a thousand feet above its present level.” But now the glaciers were gone. That, thereafter, became one of his principal scientific preoccupations—how glaciers moved and migrated, how they grew and how they shrank.3
Tyndall’s fascination with glaciers was rooted in the conviction held by a handful of nineteenth-century scientists that Swiss glaciers were the key to determining whether there had once been an Ice Age. And, if so, why had it ended? And, more frightening, might it come back? That in turn led Tyndall to ask questions about temperature and about that narrow belt of gases that girds the world—the atmosphere. His quest for answers would lead him to a fundamental breakthrough that would explain how the atmosphere works. For this Tyndall ranks as one of the key links in the chain of scientists stretching from the late eighteenth century until today who are responsible for providing the modern understanding of climate.
But how did climate change go from a subject of scientific inquiry that engaged a few scientists like Tyndall to one of the dominating energy issues of our age? That is a question profoundly important to the energy future.
THE NEW ENERGY QUESTION
Traditionally, energy issues have revolved around questions about price, availability, security—and pollution. The picture has been further complicated by the decisions governments make about the distribution of energy and money and access to resources, and by the risks of geopolitical clash over those resources.
But now energy policies of all kinds are being reshaped by the issue of climate change and global warming. In response, some seek to transform, radically, the energy system in order to drastically reduce the amount of carbon dioxide and other greenhouse gases that are released when coal, oil, and natural gas—and wood and other combustibles—are burned to generate energy.
This is an awesome challenge. For today over 80 percent of America’s energy—and that of the world—is supplied by the combustion of fossil fuels. Put simply: the industrial civilization that has evolved over two and a half centuries rests on a hydrocarbon foundation.
REBIRTH OF RENEWABLES
It was the first and only press conference ever held on top of the White House. On June 20, 1979, President Carter, along with his wife, Rosalynn, tramped up onto the roof, entourage and press in tow, in order to dedicate a solar hot-water heater system. “No one can ever embargo the sun,” Carter declared. He put the system’s cost at $28,000 but quickly added that the investment would pay for itself in seven to ten years, given high energy prices. “A generation from now,” he said, this solar heater could be “a small part of one of the greatest and most exciting adventures ever undertaken by the American people . . . harnessing the power of the sun.” Or, he said, it could be “a curiosity, a museum piece.”
And there, standing on the White House roof, he set a grand goal: that the United States would get 20 percent of its energy from solar by the year 2000. He promised to spend $1 billion over the next year to get the initiative going.1
By the time of Carter’s 1979 press conference, the idea that the world needed to transition to what was then called solar energy (and later renewables) had already become a clear trend in energy thinking. The Arab oil embargo earlier that decade, and the then unfolding Iranian Revolution, brought not only disruption in petroleum supplies but also grave fears about the future of world oil. All that combined with a sharpening environmental consciousness to make solar and renewable energy the natural solution. It was clean and it provided stability. And it would never run out. In Washington, incentives were wheeled into place to jump-start a renewable industry. Research dollars started to flow. Technologists, big companies, small companies, entrepreneurs, activists, and enthusiasts were all getting into the solar game.
But nothing like 20 percent happened. Instead what followed this initial burst of enthusiasm were decades of disappointment, disillusionment, bankruptcies, and sheer stagnation. It was only in the late 1990s that the industry, by then established in Japan and Germany with strong government support, began to revive in the United States, and only around 2004–5 that it started to gain real scale. Even as late as 2010, renewables accounted for only 8 percent of the U.S. energy supply—about the same share it had in 1980. Remove two items—hydropower (which has been constant for many years) and biomass (primarily ethanol)—and renewables in 2009 constituted less than 1.5 percent of the total U.S. energy supply. Much the same holds true around the world.
Yet today renewables are reenergized to become a growing part of energy supply, embraced as a key solution to the triple challenges of energy supply, security, and climate change. China’s President Hu Jintao said that China must “seize preemptive opportunities in the new round of the global energy revolution.” The European Union has gone further, with a 20 percent renewable goal for 2020. “I want us to be the greenest government ever,” declared British prime minister David Cameron, promising “the most dramatic change in our energy policy since the advent of nuclear energy.” In 2011, German Chancellor Angela Merkel set a new target for Germany—to move renewables’ share of electricity from 17 percent in 2011 to 35 percent by 2020.
More than any other president before him, Barack Obama has invested his administration in remaking the energy system and driving it toward a renewable foundation. Indeed, he has raised the stakes in renewable energy to the level of national destiny. “The nation that leads the world in creating new energy sources,” he said, “will be the nation that leads the twenty-first-century global economy.” Both companies and investors now see renewables as a large and growing part of the huge global energy market.2
Yet reaching the higher targets will be no easy achievement given the scale and complexity of the energy system that supplies the world’s economy. Today the future of renewables is still determined primarily at the level of policy and politics. They are, mostly, not competitive with conventional energy, although costs have come down substantially over the years. A global price on carbon, whether in the form of a carbon tax or a cap-and-trade system, would further improve the competitive economics of renewables against conventional energy.
Still, renewables are set, after a twenty-five-year hiatus, to become a significant and growing part of the energy mix. It is almost as though a time chasm has closed, compressing the decades and conjoining the late 1970s with the second decade of the twenty-first century.
CARBOHYDRATE MAN
The researcher was sitting in his office in Cambridge, Massachusetts, on a sleepy May afternoon in 1978 when the phone rang. “Admiral Rickover is on the line,” said the assistant’s voice. In a moment the admiral himself came on. He had just read an article by the researcher, and he had a message he wanted to deliver.
“Wood—fuel of the future. Wood!” he declared in the manner of one not used to being contradicted. “Fuel of the future!”
And with not much more than that, the Father of the Nuclear Navy—and the progenitor of nuclear power—abruptly hung up.
What Rickover was pointing at that afternoon was the potential for biological energy and biomass: energy generated from plant matter and other sources, and not by fossil fuels or uranium. The nation had just gone through an oil crisis and was on the edge of another. Now the man who had created the nuclear navy in record time was announcing that the future was about “growing” fuels.
Today legions of scientists, farmers, entrepreneurs, agribusiness managers, and venture capitalists use words like “ethanol,” “cellulosic,” and “biomass” rather than “wood.” But they share Rickover’s vision of growing fuel.
The best-known agricultural fuel is ethanol: ethyl alcohol made, in the first instance, from corn or sugar. In terms of technology, it’s hardly different from brewing beer or making rum. Beyond this is the “holy grail”: cellulosic ethanol, ethanol fermented and distilled on a massive scale from agricultural or urban waste or specially designed crops. Another agricultural fuel is biodiesel, made from soybeans or palm oil or even from the leftover grease from fast-food restaurants. Some argue that the still-better choices would be other biofuels, such as butanol. And then there are algae, which function like little natural refineries.
THE BIOFUEL VISION
Whatever approaches prevail, biofuels suggest the possibility of a new era, characterized by the application of biology and biotech and understanding of the genome—the full DNA sequence of an organism—to the production of energy. The rise of the biofuels brings a new entrant into energy: the life scientist. Only in the last decade has biology begun to be applied systematically to energy.
Over this same period biofuels have generated an enormous political groundswell in the United States, starting of course with the traditional advocates: farmers and their political allies who have always looked to ethanol as a way to diversify agricultural markets, generate additional revenues, and contribute to farm income and rural development. But there are new supporters: environmentalists (at least some), automobile companies, Silicon Valley billionaires, Hollywood moguls, along with national security specialists, who want to reduce oil imports because of worries about the Middle East and the geopolitical power of oil. More recently, they have all been joined by formidable new players: the U.S. Navy and Air Force, which are promoting biofuels development to improve combat capabilities and increase flexibility—and to diversify away from oil. The air force is experimenting with green jet fuel. The navy has set a goal that half of its liquid fuels be biofuels by 2020 and has laid out a vision of the “Great Green Fleet.”
This broad-based political support has generated an impressive array of programs, subsidies, incentives, and federal and state mandates meant to jump-start the biofuels industry in the United States. The most compelling is the requirement that the amount of biofuels blended with transportation fuel must almost triple from somewhat below 1 million barrels per day in 2011 to 2.35 mbd by 2022. This could be the equivalent of about 20 percent of all motor fuel in the United States. It is like adding to world supply another Venezuela or Nigeria. The push to biofuels has been global. The European Union mandates at least 10 percent renewable energy, including biofuels, in the transport sector of each member state by 2020. India has proposed an ambitious 20 percent target for biofuels blending by 2017. But the champion is Brazil, where 60 percent of automotive motor fuel today is already ethanol.
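To make the scale of that mandate concrete, here is a minimal back-of-the-envelope sketch in Python. The 2.35 mbd target and the starting level of somewhat below 1 million barrels per day come from the text above; the figure of roughly 12 mbd for total U.S. motor fuel demand (gasoline plus on-road diesel) is an assumption introduced only for illustration, not a sourced number.

# Back-of-the-envelope sketch of the U.S. biofuels mandate's scale (illustrative only).
# The 2.35 mbd target and the ~0.9 mbd 2011 level are taken from the text above;
# the ~12 mbd total for U.S. motor fuel is an assumed figure, not a sourced one.

mandate_2022_mbd = 2.35          # mandated biofuels blending by 2022 (from the text)
blending_2011_mbd = 0.9          # "somewhat below 1 million barrels per day" in 2011
assumed_motor_fuel_mbd = 12.0    # assumption: total U.S. motor fuel demand, gasoline plus diesel

growth_factor = mandate_2022_mbd / blending_2011_mbd
share_of_motor_fuel = mandate_2022_mbd / assumed_motor_fuel_mbd

print(f"Growth, 2011 to 2022: about {growth_factor:.1f}x, i.e., almost a tripling")
print(f"Share of assumed motor fuel demand: about {share_of_motor_fuel:.0%}")

Under these assumptions the script prints a growth factor of about 2.6 and a share of roughly 20 percent, consistent with the figures in the paragraph above.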
In the biofuels vision, the process that produces fossil fuels—compressing organic matter into oil at tremendous pressure and heat deep below the earth’s surface over hundreds of millions of years—could be foreshortened into a cycle measured in seasons. A larger and larger share of the world’s transportation fuels would be cultivated, rather than drilled for. Hydrocarbon Man—the quintessential embodiment of the twentieth century, the century of oil—would increasingly give way over the twenty-first century to Carbohydrate Man. If this vision eventuates and biofuels do take away significant market share from traditional oil-based fuels over the next few decades, the results would reset global economics and politics. And agri-dollars would come to compete with petro-dollars.
Significant growth of ethanol use has already been registered. Today the amount of ethanol blended into gasoline is close to 900,000 barrels per day, which in volume terms is almost 10 percent of total U.S. gasoline consumption (including the blended ethanol). However, ethanol, on a volume basis, has only about two thirds the energy value of conventional gasoline, and so on an energy basis, today’s ethanol consumption is the energy equivalent of 600,000 barrels per day of gasoline.
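The volume-to-energy conversion in the paragraph above can be reproduced directly. This is a minimal sketch using only the figures given in the text: 900,000 barrels per day of ethanol and an energy content of roughly two thirds that of gasoline per barrel.

# Convert ethanol's blended volume into its gasoline energy equivalent,
# using only the figures given in the text above.

ethanol_volume_bpd = 900_000   # ethanol blended into gasoline, barrels per day (from the text)
energy_ratio = 2 / 3           # ethanol's energy per barrel relative to gasoline (from the text)

gasoline_equivalent_bpd = ethanol_volume_bpd * energy_ratio
print(f"Gasoline-equivalent: about {gasoline_equivalent_bpd:,.0f} barrels per day")
# Prints roughly 600,000 barrels per day, matching the figure in the text.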
Ethanol’s share in the United States is likely to grow over the next few years, although it must first contend with a “wall” on the amount of ethanol that can be blended with gasoline for use in all gas-powered vehicles. The fear is that greater concentrations of ethanol could harm engines not designed to run on biofuels.
There is also E85 fuel, which contains between 70 percent and 85 percent ethanol, but it can only be used in flex-fuel vehicles that can switch between oil- and ethanol-based fuels, or in all-ethanol vehicles specifically designed to accommodate this type of fuel. Currently such vehicles total only about 3 percent of the U.S. car fleet.
All this may strike many as new. But it isn’t, not by any means.