When Stanford University energy economist Danny Cullenward looks at California’s policies on climate change, he sees a potential time bomb.
The state wants to slash greenhouse gas emissions so deeply in the coming years that oil refineries and other industries could face skyrocketing costs to comply with regulations, driving up gasoline prices until the system loses political support. If that happens, an effort touted as an international model for fighting global warming could collapse.
Not everyone agrees with Cullenward’s assessment, but it reflects how experts, officials and lawmakers are starting to reckon with the state’s steep ambitions and the understanding that its current policies may no longer be adequate. Although California has been gliding toward its initial goal of reducing emissions to 1990 levels by 2020, it must cut an additional 40% by 2030 under a law signed by Gov. Jerry Brown last year.
“It’s going to take bold proposals to get us to where we need to be,” said Cullenward, who has helped shape legislation in the Capitol.
Getting the details right means the difference between California burnishing its role as an incubator for innovation and proving itself a canary in the coal mine, and lawmakers are sorting through a flood of ideas this year. One proposal would accelerate the adoption of renewable energy and eventually phase out all fossil fuels for generating electricity. Some advocates want a regional power grid to share clean energy across state lines. Everyone is looking for ways to turn climate policies into jobs in their local communities.
“We’ve already decided as a state and as a Legislature that we want to dramatically reduce pollution and move forward toward a clean energy future,” Senate leader Kevin de León (D-Los Angeles) said. “That debate is over. Now we’re deciding how to get there.”
Those conversations, however, could prove just as contentious as previous debates, expose divisions among environmentalists and force lawmakers to make difficult decisions about squeezing a state with the world’s sixth-largest economy into a dramatically smaller emissions footprint.
“Everything is a huge question mark,” said Rob Lapsley, president of the California Business Roundtable, which represents the state’s largest corporations.
Front and center is the haggling over extending the cap-and-trade program, which requires companies to buy permits to release greenhouse gases into the atmosphere. Permits can also be traded on a secondary market. It’s the only system of its kind in the country, and it faces steep legal challenges that can only be fully resolved with new legislation to keep it operating.
Brown wants to settle the issue next month, and there’s wide consensus around keeping the program in some form. Oil companies that once launched an expensive ballot campaign to block it are now negotiating its extension — albeit on terms that are friendlier to their industry — and even some Republicans are on board with the idea.
But there are still disagreements over how to move forward, some of which were highlighted with the recent release of a proposal Cullenward worked on with one of his Stanford colleagues, environmental law professor Michael Wara.
The legislation, SB 775, which was written by Sen. Bob Wieckowski (D-Fremont) and backed by De León, would create a higher minimum price for emission permits and increase it annually to provide a steeper incentive for companies to clean up their operations.
There would also be a firm ceiling on how high prices could climb to guard against sticker shock as permits become more valuable while the state ratchets down emissions to meet its 2030 goal.
The legislation would make another significant change. Instead of sending cap-and-trade revenue only to projects intended to reduce emissions, such as subsidies for electric cars or affordable housing near mass transit, a chunk of the money would be distributed to Californians — much like a tax rebate.
Although it’s not yet clear how the rebates would function, the proposal is an acknowledgement that costs for gasoline and electricity are likely to rise, and lawmakers want to help insulate voters from the effects. The state’s transition toward low-emission technology could prove expensive over time, requiring the purchase of millions of electric vehicles and shuttering natural gas operations in favor of new solar plants.
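The price mechanism SB 775 describes — a minimum permit price that rises every year, capped by a hard ceiling — can be sketched as a simple "price collar." The dollar figures and the 5% annual escalator below are hypothetical, chosen only to illustrate the mechanism, not taken from the bill:

```python
def collared_price(market_price, years_elapsed,
                   floor0=20.0, ceiling0=40.0, escalator=0.05):
    """Clamp a market permit price between an annually rising floor and ceiling.

    All dollar figures and the escalator rate are illustrative assumptions,
    not values from SB 775.
    """
    floor = floor0 * (1 + escalator) ** years_elapsed
    ceiling = ceiling0 * (1 + escalator) ** years_elapsed
    # The traded price can never fall below the floor (guaranteeing an
    # incentive to cut emissions) or exceed the ceiling (guarding against
    # the sticker shock the article describes).
    return max(floor, min(market_price, ceiling))
```

Because both bounds escalate each year, the minimum incentive to decarbonize steadily strengthens even if demand for permits is weak, while households are shielded from runaway prices at the top.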
“This is a massive infrastructure replacement program for California,” said Snuller Price, senior partner at E3, an energy efficiency consulting firm that has worked with state regulators. “We’re swapping out all of our things.”
Wara said California needs different policies to meet its new target.
“It’s a totally different animal. We need to acknowledge that,” he said. “It’s going to a level of [emissions] reductions that no one has ever achieved.”
The idea has been embraced by some policy wonks — “state of the art,” one writer proclaimed — but others see peril in this approach. No longer would companies be able to finance offsets — green projects intended to lower emissions anywhere in the country — to meet their obligations under the program, cutting the flow of cash from California industries to environmental efforts nationwide.
The legislation also includes a modification to the program that some environmental advocates fear would make it harder to ensure the state meets its goals. Under the new proposal, the state would sell an unlimited number of permits if prices reach the ceiling, rather than restricting how many are available as the program does now.
That adjustment would make cap and trade function more like a tax, said Nathaniel Keohane, vice president at the Environmental Defense Fund, which is critical of the proposal and doesn’t see the same threat of price spikes.
“It’s a fundamental, philosophical thing,” he said.
De León wants to accelerate the process of reducing emissions from generating electricity. He launched new legislation, SB 100, to require the state to use renewable sources such as wind and solar for 60% of its power by 2030, up from the current target of 50%. By 2045, the use of fossil fuels such as coal and natural gas to generate electricity would no longer be allowed.
It’s a big climb — about 20% of the state’s electricity came from renewable sources in 2015, the latest figures available. The proposal has been embraced by labor groups who see jobs in building new infrastructure, but some are skeptical.
Brent Newell, legal director at the Center on Race, Poverty and the Environment, doesn’t want to see incentives for producing bio-gas from cow manure at industrial-scale dairies, which are a source of air and water pollution.
Although the legislation is “pointing the compass” in the right direction, he said, “that’s not clean energy.”
Reaching the point where fossil fuels aren’t used to keep the lights on will require new approaches to California’s electricity grid. Renewable sources can be difficult to manage because it’s impossible to control when the sun shines or the wind blows. The challenge is finding ways to soak up electricity when there’s too much, such as charging batteries or pumping water into reservoirs, and then releasing it when needed.
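The balancing act described above — soaking up surplus renewable electricity and releasing it when needed — can be sketched as a greedy storage-dispatch loop. The numbers in the example are made up, and real dispatch would account for power limits and round-trip losses, which this sketch ignores:

```python
def dispatch_storage(net_load, capacity, state=0.0):
    """Greedy battery dispatch over a series of hours.

    net_load: demand minus renewable output each hour (negative = surplus).
    Charges with surplus energy, discharges to cover deficits. Losses and
    charge/discharge rate limits are ignored for simplicity.
    Returns the residual load the grid must still meet each hour.
    """
    residual = []
    for load in net_load:
        if load < 0:                        # surplus renewables: charge
            absorb = min(-load, capacity - state)
            state += absorb
            residual.append(load + absorb)  # leftover surplus is curtailed
        else:                               # deficit: discharge
            supply = min(load, state)
            state -= supply
            residual.append(load - supply)
    return residual

# Two sunny hours of surplus, then two hours of deficit:
dispatch_storage([-3.0, -2.0, 4.0, 2.0], capacity=4.0)
# → [0.0, -1.0, 0.0, 2.0]  (1 unit curtailed in hour 2; 2 units unmet in hour 4)
```

The unmet load in the final hour is the gap that a larger battery, a reservoir, or a regional grid connection would have to fill.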
Another approach involves integrating California’s grid with other states, providing a wider market for excess solar energy that’s produced here on sunny days and allowing more wind energy to flow in from turbines elsewhere in the region.
“There’s a huge amount of efficiency to be gained,” said Don Furman, who directs the Fix the Grid campaign.
The idea would require California to share control of the electricity grid with other states, which unnerves some lawmakers and advocates. Unions also fear changes that would make building energy projects more attractive outside of California.
Debates over these issues are drawing the most attention in the Capitol, but other proposals are bubbling up as well, a sign that many lawmakers want to get involved in the issue.
One measure would make it easier for low-income Californians to access solar power. Another would create a system for tracking electricity consumption to help pinpoint areas for more efficiency.
“A lot of small steps create big momentum,” said Lauren Navarro, a senior policy manager at the Environmental Defense Fund. “These are pieces of what it takes to get to a clean-energy economy.”
A steep 5km ramp corkscrews down from the mouth of a tunnel (pictured above) into the bowels of the Earth. At the bottom, a yellow rig is drilling boreholes into the rock face, preparing it for blasting. The air is chilly, but within a few years, it may feel more like a Finnish sauna. Buried in holes in the floor will be copper canisters, 5.2 metres long, containing the remains of some of the world’s most radioactive nuclear waste. When the drilling is finished, in a century or so, 3,250 canisters each containing half a tonne of spent fuel will be buried in up to 70km of tunnels. Then the entire area will be sealed to make it safe for posterity.
The hundred-year timescale already means this is a megaproject. But that is just the beginning. The radioactive isotopes of plutonium used in nuclear-power plants must be stored for tens of thousands of years before they are safe. Finland aims to isolate its stockpile in the Onkalo repository, a burial chamber beneath the small forested island of Olkiluoto, home to one of its two nuclear-power plants, for at least 100,000 years.
In geological terms, that is a heartbeat; Finland’s bedrock is 1.9bn years old. But in human terms, 4,000 generations are almost inconceivable. As Mika Pohjonen, the managing director of Posiva, the utility-owned Finnish company overseeing the project, says, no one knows whether humans, creatures (or machines) will rule the Earth above by then—let alone whether they will be able to read today’s safety manuals. A hundred thousand years ago, Finland was under an ice sheet and Homo sapiens had not yet reached Europe.
Posiva has commissioned studies on the possibility that in the intervening millennia the area could be inundated by rising seas caused by global warming, or buried beneath a few kilometres of ice once more. Scientists have studied Greenland as an analogue to ice-capped Finland. The firm’s assurance to future generations is that if, in tens of thousands of years, a future Finn digs a 400-metre-deep well and draws water contaminated with 21st-century nuclear waste, it will be safe to drink.
But Posiva’s immediate priority is to create disposal caverns far enough from rock fissures and groundwater that Finland’s nuclear authorities allow it to start moving the canisters to their tomb in the early 2020s. “This is drilling with silk gloves on,” Mr Pohjonen says, as the machine pounds the rock with a deafening roar. “It has to be done gently.”
Nuclear authorities around the world are watching with interest because in the past two years Finland has become the first country to license and start building a final repository for highly radioactive waste fuel from nuclear reactors. Experts at the International Atomic Energy Agency (IAEA), a global body, say other countries, such as Sweden and France, are close behind. In America, Donald Trump’s administration has included a budget request for $120m to restart construction of a high-level waste repository at Yucca Mountain in Nevada, chosen in 1987 but stalled since 2010.
The disposal of nuclear fuel is among the most intractable of infrastructure projects. And there are already 266,000 tonnes of it in storage around the world, about 70,000 tonnes more than there were a decade ago. As Markku Lehtonen, a Finnish academic at the University of Sussex, puts it, the costs are high; the benefits are about avoiding harm rather than adding value; and evaluation is not about assessing risk, but about dealing with “uncertainty, ambiguity and ignorance” over a protracted timescale. Not everyone is convinced that permanent disposal is urgent, either. Some argue that semi-cooled fuel could be kept in cement dry-storage casks, as much is in America, for generations until technologies are developed to handle it. A blue-ribbon commission in America in 2012 mentioned the benefits of keeping spent fuel in storage for a longer time in order to keep the options open. But it also said that final storage was essential.
For all the countries committed to burial, Finland represents an overdue step in the right direction. It offers two lessons. The first is to find a relatively stable geological area, and reliable storage technology. The second is to build a broad consensus that the waste can be handled and disposed of responsibly. Like other Nordic success stories, it will be hard to replicate. “Finland has a kind of unique institutional context: a high trust in experts and representative democracy,” says Matti Kojo, of Finland’s Tampere University. “You cannot just copy a model from Finland.”
Under solid ground
The geological part, though the timespan is greatest, is probably the least tricky. Finland began the search for a site in 1983, shortly after it began generating nuclear power, and chose Olkiluoto after reviewing 100 areas. It has mapped faults and fissures in the bedrock, and sited the repository in a seismic “quiet zone”. It says it will avoid burying canisters close to potential pressure points, to minimise the danger that rock movements would crush or tear the canisters and cause radioactive leakage. Finland’s Radiation and Nuclear Safety Authority (STUK) called Posiva’s analysis of the bedrock and groundwater “state of the art”.
Ismo Aaltonen, Posiva’s chief geologist, says that earthquakes cannot be ruled out, especially if the bedrock shifts upwards in the melting period after a future ice age. Olkiluoto is still rising as it rebounds from the pressure of the last one, which ended more than 10,000 years ago. Close to the repository’s entrance, he points to scratch marks on the rocks—“footprints of the last ice age” left by the retreating ice cap. But whether in crystalline granite, as in Finland and Sweden, or clay, as in France, or volcanic rock, as in Yucca Mountain, nuclear experts are confident that deep geological disposal can be safe. “There is a great deal of evidence that we can find many sites in the world with adequate geological properties for the required safety,” says Stefan Mayer, a waste-disposal expert at the IAEA.
Technology is the next hurdle. As well as 400-500 metres of bedrock between the canisters and the surface, there will be several man-made layers: steel, copper, water-absorbent bentonite clay around the canisters, and bentonite plugs sealing the caverns and, eventually, the access tunnel.
A model in the visitors’ centre, with moving parts that replicate all this in miniature, makes the whole set-up look safer than Fort Knox. Posiva says it has modelled copper deposits in ancient rocks to assess the likelihood of corrosion. STUK, however, says the potential for the copper to deteriorate needs more study. Some academics, including Mr Kojo, worry that the Finnish media have underplayed concerns about copper corrosion, compared with other countries with similar “multi-barrier” protection systems.
The trickiest challenge, though, is to build broader societal consent. Finland appears to have succeeded by starting early and sticking to its timetable. The decision to find a site and start disposing of nuclear waste in the 2020s was taken 40 years ago. In 1994 its parliament banned the import and export of spent nuclear fuel, which increased the pressure to find a home-grown solution. Few other countries have demonstrated the same determination. The good news is that, because waste needs to be cooled in tanks for 30-50 years before being disposed of, emerging nuclear powerhouses such as China have time to prepare.
Finns’ trust in their nuclear industry has remained high, despite accidents elsewhere, such as those at Chernobyl in 1986 and Fukushima in 2011. Finland’s four nuclear reactors operate at among the world’s highest utilisation rates, and supply 26% of its electricity. Its two nuclear utilities, TVO and Fortum, which co-own Posiva, are themselves part of an electricity system in which Finnish industries and many municipalities have a stake, bolstering public support. The Onkalo repository is situated next door to TVO’s two working Olkiluoto reactors, which means people nearby are—in the phrase of academics—“nuclearised”, that is, convinced of the benefits of nuclear power. Surveys suggest positive attitudes to nuclear power nationally exceed negative ones.
Finns’ trust in government as a whole is high. Vesa Lakaniemi, the mayor of the 9,300-strong municipality of Eurajoki in which Olkiluoto lies (who once did a summer job at TVO), says it did not take much to persuade locals to support the site. Income from the nuclear industry gives them slightly lower taxes, good public services and a restored mansion for the elderly. They trust the waste will be handled safely and transparently. “It’s Finnish design. Finnish rock is solid rock. Regulation is strict everywhere in the world but Finnish people do these things very well,” he says.
Faith in the future
Some academics worry that Finland is taking waste disposal too much on faith. Any mishap could erode trust in an instant, as happened in Japan, another “high-trust” society, after the Fukushima disaster. TVO admits that negative attitudes towards nuclear power have risen as the construction of its third reactor at Olkiluoto has been plagued by delays, cost overruns and squabbles with the French-German contractors. The experience has shown that STUK tolerates no shortcuts, but some fear that its relationship with Posiva sometimes appears too close. Sweden and France have moved towards licensing repositories with far more criticism from NGOs and the media, suggesting more robust engagement.
Other countries, including America and France, follow principles of reversibility or retrievability, meaning they can reverse the disposal process while it is under way or retrieve waste after burial, if technologies and social attitudes change. Finland’s model is more closed; it would take a huge amount of digging to recover the waste once it has been sealed. But analysts say there is no single correct approach. Britain, for instance, has done things by the book but still failed to find a place for a repository.
Finally, there is the matter of cost. Finland’s nuclear-waste kitty, collected from the utilities, currently stands at €2.5bn ($2.7bn). By the time the repository is sealed, the total cost is expected to be €3.5bn. That is reassuringly modest for a 100-year project, partly reflecting the fact that Finland’s nuclear industry, even when the planned total of five reactors are up and running, is relatively small. Other countries have higher costs, and less discipline. Yucca Mountain, for instance, was once estimated to cost $96bn to complete. In 2012 America had $27bn in its disposal fund, collected from ratepayers, none of which has gone towards nuclear-waste management.
It may be hard to replicate Finland’s exact model, but its sense of responsibility is seen as an inspiration. When visiting the Finnish repository, authorities from elsewhere, be they American, Chinese, Australian, Japanese or British, learn that safeguarding the future is not just a question of seismology, technology, sociology and cash. It is also an ethical one.
Scientists are investigating whether releasing tons of particulates into the atmosphere might be good for the planet. Not everyone thinks this is a good idea.
For the past few years, the Harvard professor David Keith has been sketching this vision: Ten Gulfstream jets, outfitted with special engines that allow them to fly safely in the stratosphere at an altitude of 70,000 feet, take off from a runway near the Equator. Their cargo includes thousands of pounds of a chemical compound — liquid sulfur, let’s suppose — that can be sprayed as a gas from the aircraft. It is not a one-time event; the flights take place throughout the year, dispersing a load that amounts to 25,000 tons. If things go right, the gas converts to an aerosol of particles that remain aloft and scatter sunlight for two years. The payoff? A slowing of the earth’s warming — for as long as the Gulfstream flights continue.
Keith argues that such a project, usually known as solar geoengineering, is technologically feasible and — with a back-of-the-envelope cost of under $1 billion annually — ought to be fairly cheap from a cost-benefit perspective, considering the economic damages potentially forestalled: It might do good for a world unable to cut carbon-dioxide emissions enough to prevent further temperature increases later this century.
What surprised me, then, as Keith paced around his Harvard office one morning in early March, was his listing all the reasons humans might not want to hack the environment. “Actually, I’m writing a paper on this right now,” he said. Most of his thoughts were related to the possible dangers of trying to engineer our way out of a climate problem of nearly unimaginable scientific, political and moral complexity. Solar geoengineering might lead to what some economists call “lock-in,” referring to the momentum that a new technology, even one with serious flaws, can assume after it gains a foothold in the market. The qwerty keyboard is one commonly cited example; the internal combustion engine is another. Once we start putting sulfate particles in the atmosphere, he mused, would we really be able to stop?
Another concern, he said, is “just the ethics about messing with nature.” Tall, wiry and kinetic, with thinning hair and a thick beard that gives him the look of the backcountry skier he is, Keith proudly showed me the framed badge that his father, a biologist, wore when he attended the landmark United Nations Conference on the Human Environment in Stockholm in 1972. Now 53, Keith has taken more wilderness trips — hiking, rock climbing, canoeing — than he can properly recall, and for their recent honeymoon, he and his wife were dropped off by helicopter 60 miles from the nearest road in northern British Columbia. “It was quite rainy,” he told me, “and that ended up making it even better.” So the prospect of intentionally changing the climate, he confessed, is not just unpleasant — “it initially struck me as nuts.”
It still strikes him as a moral hazard, to use a term he borrows from economics. A planet cooled by an umbrella of aerosol particles — an umbrella that works by reflecting back into space, say, 1 percent of the sun’s incoming energy — might give societies less incentive to adopt greener technologies and radically cut carbon emissions. That would be disastrous, Keith said. The whole point of geoengineering is not to give us license to forget about the buildup of CO₂. It’s to lessen the ill effects of the buildup and give us time to transition to cleaner energy.
Beyond these conceivable dangers, though, a more fundamental problem lurks: Solar geoengineering simply might not work. It has been a subject of intense debate among climate scientists for roughly a decade. But most of what we know about its potential effects derives from either computer simulations or studies on volcanic eruptions like that of Mount Pinatubo in 1991, which generated millions of tons of sunlight-scattering particulates and might have cooled the planet by as much as 0.5 degrees Celsius, or nearly 1 degree Fahrenheit. The lack of solid evidence for solar geoengineering’s efficacy informs Keith’s thinking about what we should do next. Actively tinkering with our environment — fueling up the Gulfstream jets and trying to cool things down — is not something he intends to try anytime soon, if ever. But conducting research is another matter.
A decade ago, when Keith was among the few American scientists to advocate starting a geoengineering research program, he was often treated at science conferences as an outlier. “People would sort of inch away or, really, tell me I shouldn’t be doing this,” he said. Geoengineering was seen as a scientific taboo and Keith its dark visionary. “The preconception was that I was some kind of Dr. Strangelove figure,” he told me — “which I didn’t like.”
Attitudes appear to have changed over the past few years, at least in part because of the continuing academic debates and computer-modeling studies. The National Academy of Sciences endorsed the pursuit of solar geoengineering research in 2015, a stance also taken in a later report by the Obama administration. A few influential environmental groups, like the Natural Resources Defense Council and the Environmental Defense Fund, now favor research.
In the meantime, Keith’s own work at Harvard has progressed. This month, he is helping to start Harvard’s Solar Geoengineering Research Program, a broad endeavor that begins with $7 million in funding and intends to reach $20 million over seven years. One backer is the Hewlett Foundation; another is Bill Gates, whom Keith regularly advises on climate change. Keith is planning to conduct a field experiment early next year by putting particles into the stratosphere over Tucson.
The new Harvard program is not merely intent on getting its concepts out of the lab and into the field, though; a large share of its money will also be directed to physical and social scientists at the university, who will evaluate solar geoengineering’s environmental dangers — and be willing to challenge its ethics and practicality. Keith told me, “It’s really important that we have a big chunk of the research go to groups whose job will be to find all the ways that it won’t work.” In other words, the technology that Keith has long believed could help us ease our predicament — “the nuclear option” for climate, as one opponent described it to me, to be considered only when all else has failed — will finally be investigated to see whether it is a reasonable idea. At the same time, it will be examined under the premise that it may in fact be a very, very bad one.
Climate change already presents a demoralizing array of challenges — melting ice sheets and species extinctions — but the ultimate severity of its impacts depends greatly on how drastically technology and societies can change over the next few decades. The growth of solar and wind power in recent years, along with an apparent decrease in coal use, suggests that the global community will succeed in curtailing CO₂ emissions. Still, that may not happen nearly fast enough to avert some dangerous consequences. As Keith likes to point out, simply reducing emissions doesn’t reverse global warming. In fact, even if annual global CO₂ emissions decrease somewhat, the total atmospheric CO₂ may continue to increase, because the gas is so slow to dissipate. We may still be living with damaging amounts of atmospheric carbon dioxide a half-century from now, with calamitous repercussions. The last time atmospheric CO₂ levels were as elevated as they are today, three million years ago, sea levels were most likely 45 feet higher, and giant camels roamed above the Arctic Circle.
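Keith's point — that falling annual emissions do not imply falling atmospheric concentrations — is a stock-and-flow distinction, and a toy model makes it concrete. Every number below (the initial stock, the emission rate, the 2% annual decline, the 0.5% natural sink) is illustrative, not a real carbon-cycle value:

```python
def co2_stock(years, stock=850.0, emissions=10.0,
              emissions_decline=0.02, sink_rate=0.005):
    """Track a hypothetical atmospheric CO2 stock over time.

    Annual emissions shrink by 2% a year, while a slow natural sink
    removes 0.5% of the stock each year. All values are made up to
    illustrate the stock-vs-flow argument, not to model the real climate.
    """
    path = []
    for _ in range(years):
        stock += emissions - sink_rate * stock  # inflow still exceeds outflow
        emissions *= (1 - emissions_decline)
        path.append(stock)
    return path
```

Run this for a few decades and the stock keeps climbing even though the flow of emissions falls every single year, because the inflow still outpaces the slow natural removal — which is the sense in which reducing emissions doesn't reverse warming.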
Recently, I met with Daniel Schrag, who is the head of the Harvard University Center for the Environment, an interdisciplinary teaching and research department. Schrag, who helped recruit Keith to Harvard, painted a bleak picture of our odds of keeping global temperatures from rising beyond levels considered safe by many climate scientists. When you evaluate the time scales involved in actually switching our energy systems to cleaner fuels, Schrag told me, “the really depressing thing is you start to understand why any of these kinds of projections — for 2030 or 2050 — are absurd.” He went on: “Are they impossible? No. I want to give people hope, too. I’d love to make this happen. And we have made a lot of progress on some things, on solar, on wind. But the reality is we haven’t even started doing the hard stuff.”
Schrag described any kind of geoengineering as “at best an imperfect solution that is operationally extremely challenging.” Yet to Schrag and Keith, the political and technical difficulties associated with a rapid transition to a zero-carbon-emissions world make it sensible to look into geoengineering research. There are, however, a number of different plans for how to actually do it — including the fantastical (pumping seawater onto Antarctica to combat sea-level rise) and the impractical (fertilizing oceans with iron to foster the growth of algae, which would absorb more CO₂). Some proposals involve taking carbon out of the air, using either immense plant farms or absorption machines. (Keith is involved with such sequestration technology, which faces significant hurdles in terms of cost and feasibility.) Another possible approach would inject salt crystals into clouds over the ocean to brighten them and cool targeted areas, like the dying Great Barrier Reef. Still, the feeling among Keith and his colleagues is that aerosols sprayed into the atmosphere might be the most economically and technologically viable approach of all — and might yield the most powerful global effect.
It is not a new idea. In 2000, Keith published a long academic paper on the history of weather and climate modification, noting that an Institute of Rainmaking was established in Leningrad in 1932 and that American engineers began a cloud-seeding campaign in Vietnam a few decades later. A report issued in 1965 by President Lyndon B. Johnson’s administration called attention to the dangers of increasing concentrations of CO₂ and, anticipating Keith’s research, speculated that a logical response might be to change the albedo, or reflectivity, of the earth. To Keith’s knowledge, though, there have been only two actual field experiments so far. One, by a Russian scientist in 2009, released aerosols into the lower atmosphere via helicopter and appears to have generated no useful data. “It was a stunt,” Keith says. Another was a modest attempt at cloud brightening a few years ago by a team at the Scripps Institution of Oceanography at the University of California, San Diego.
Downstairs from Keith’s Harvard office, there is a lab cluttered with students fiddling with pipettes and arcane scientific instruments. When I visited in early March, Zhen Dai, a graduate student who works with Keith, was engaged with a tabletop apparatus, a maze of tubes and pumps and sensors, meant to study how chemical compounds interact with the stratosphere. For the moment, Keith’s group is leaning toward beginning its field experiments with ice crystals and calcium carbonate — limestone — that has been milled to particles a half-micron in diameter, or less than 1/100th the width of a human hair. They may eventually try a sulfur compound too. The experiment is called Scopex, which stands for Stratospheric Controlled Perturbation Experiment. An instrument that can disperse an aerosol of particles — say, several ounces of limestone dust — will be housed in a gondola that hangs beneath a balloon that ascends to 70,000 feet. The whole custom-built contraption, whose two small propellers will be steered from the ground, will also include a variety of sensors to collect data on any aerosol plume. Keith’s group will measure the sunlight-scattering properties of the plume and evaluate how its particles interact with atmospheric gases, especially ozone. The resulting data will be used by computer models to try to predict larger-scale effects.
But whether a scientist should be deliberately putting foreign substances into the atmosphere, even for a small experiment like this, is a delicate question. There is also the difficulty of deciding on how big the atmospheric plumes should get. When does an experiment become an actual trial run? Ultimately, how will the scientists know if geoengineering really works without scaling it up all the way?
Keith cites precedents for his thinking: a company that scatters cremation ashes from a high-altitude balloon, and jet engines, whose exhaust contains sulfates. But the crux of the problem that Harvard’s Solar Geoengineering Research Program wrestles with is intentionality. Frank Keutsch, a professor of atmospheric sciences at Harvard who is designing and running the Scopex experiments with Keith, told me: “This effort with David is very different from all my other work, because for those other field experiments, we’ve tried to measure the atmosphere and look at processes that are already there. You’re not actually changing nature.” But in this case, Keutsch agrees, they will be.
During one of our conversations, Keith suggested that I try to flip my thinking for a moment. “What if humanity had never gotten into fossil fuels,” he posed, “and the world had gone directly to generating energy from solar or wind power?” But then, he added, what if in this imaginary cleaner world there was a big natural seep of a heat-trapping gas from within the earth? Such events have happened before. “It would have all the same consequences that we’re worried about now, except that it’s not us doing the CO₂ emissions,” Keith said. In that case, the reaction to using geoengineering to cool the planet might be one of relief and enthusiasm.
In other words, decoupling mankind’s actions — the “sin,” as Keith put it, of burning fossil fuels — from our present dilemma can demonstrate the value of climate intervention. “No matter what, if we emit CO₂, we are hurting future generations,” Keith said. “And it may or may not be true that doing some solar geo would over all be a wise thing to do, but we don’t know yet. That’s the reason to do research.”
There are risks, undeniably — some small, others potentially large and terrifying. David Santillo, a senior scientist at Greenpeace, told me that some modeling studies suggest that putting aerosols in the atmosphere, which might alter local climates and rain patterns and would certainly affect the amount of sunlight hitting the earth, could have a significant impact on biodiversity. “There’s a lot more we can do in theoretical terms and in modeling terms,” Santillo said of the Harvard experiments, “before anyone should go out and do this kind of proof-of-concept work.” Alan Robock, a professor of atmospheric sciences at Rutgers, has compiled an exhaustive list of possible dangers. He thinks that small-scale projects like the Scopex experiment could be useful, but that we don’t know the impacts of large-scale geoengineering on agriculture or whether it might deplete the ozone layer (as volcanic eruptions do). Robock’s list goes on from there: Solar geoengineering would probably reduce solar-electricity generation. It would do nothing to reduce the increasing acidification of the oceans, caused by seawater absorbing carbon dioxide. A real prospect exists, too, that if solar geoengineering efforts were to stop abruptly for any reason, the world could face a rapid warming even more dangerous than what’s happening now — perhaps too fast for any ecological adaptation.
Keith is well aware of Robock’s concerns. He also makes the distinction that advocating research is not the same as advocating geoengineering. But the line can blur. Keith struck me as having a fair measure of optimism that his research can yield insights into materials and processes that can reduce the impacts of global warming while averting huge risks. For instance, he is already encouraged by computer models that suggest the Arctic ice cap, which has shrunk this year to the smallest size observed during the satellite era, could regrow under cooler conditions brought on by light-scattering aerosols. He also believes that the most common accusation directed against geoengineering — that it might disrupt precipitation patterns and lead to widespread droughts — will prove largely unfounded.
But Keith is not trained as an atmospheric scientist; he’s a hands-on physicist-engineer who likes to take machinery apart. There are deep unknowns here. Keutsch, for one, seems uncertain about what he will discover when the group actually tries spraying particulates high above the earth. The reduction of sunlight could adversely affect the earth’s water cycle, for example. “It really is unclear to me if this approach is feasible,” he says, “and at this point we know far too little about the risks. But if we want to know whether it works, we have to find out.”
Finally, what if something goes wrong either in research or in deployment? David Battisti, an atmospheric scientist at the University of Washington, told me, “It’s not obvious to me that we can reduce the uncertainty to anywhere near a tolerable level — that is, to the level that there won’t be unintended consequences that are really serious.” While Battisti thought Keith’s small Scopex experiment posed little danger — “The atmosphere will restore itself,” he said — he noted that the whole point of the Harvard researchers’ work is to determine whether solar geoengineering could be done “forever,” on a large-scale, round-the-clock basis. When I asked Battisti if he had issues with going deeper into geoengineering research, as opposed to geoengineering itself, he said: “Name a technology humans have developed that they haven’t used. I can’t think of any. So we can work on this for sure. But we are in this dilemma: Once we do develop this technology, it will be tempting to use it.”
Suppose Keith’s research shows that solar geoengineering works. What then? The world would need to agree where to set the global thermostat. If there is no consensus, could developed nations impose a geoengineering regimen on poorer nations? Then again, if the technology works, it would arguably be unethical not to use it, because the world’s poorest populations, facing drought and rising seas, may suffer the worst effects of a changing climate.
In recent months, a group under the auspices of the Carnegie Council in New York, led by Janos Pasztor, a former United Nations climate official, has begun to work through the thorny international issues of governance and ethics. Pasztor told me that this effort will most likely take four years. And it is not lost on him — or anyone I spoke with in Keith’s Harvard group — that the idea of engineering our environment is taking hold as we are contemplating the engineering of ourselves through novel gene-editing technologies. “They both have an effect on shaping the pathway where human beings are now and where they will be,” says Sheila Jasanoff, a professor of science and technology studies at Harvard who sometimes collaborates with Keith. Jasanoff also points out that each technology potentially enables rogue agents to act without societal consent.
This is a widespread concern. We might reach a point at which some countries pursue geoengineering, and nothing — neither costs nor treaties nor current technologies — can stop them. Pasztor sketched out another possibility to me: “You could even have a nightmare scenario, where a country decides to do geoengineering and another country decides to do counter-geoengineering.” Such a countermeasure could take the form of an intentional release of a heat-trapping gas far more potent than CO₂, like a hydrochlorofluorocarbon. One of Schrag’s main concerns, in fact, is that geoengineering a lower global temperature might preserve ecosystems and limit sea-level rise while producing irreconcilable geopolitical frictions. “One thing I can’t figure out,” he told me, “is how do you protect the Greenland ice sheet and still have Russia have access to its northern ports, which they really like?” Either Greenland and Siberia will melt, or perhaps both can stay frozen. You probably can’t split the difference.
For the moment, and perhaps for 10 or 20 years more, these are mere hypotheticals. But the impacts of climate change were once hypotheticals, too. Now they’ve become possibilities and probabilities. And yet, as Tom Ackerman, an atmospheric scientist at the University of Washington, said at a recent discussion among policy makers that I attended in Washington: “We are doing an experiment now that we don’t understand.” He was not talking about geoengineering; he was observing that the uncertainty about the potential risks of geoengineering can obscure the fact that there is uncertainty, too, about the escalating disasters that may soon result from climate change.
His comment reminded me of a claim made more than a half-century ago, long before the buildup of CO₂ in the atmosphere had become the central environmental and economic problem of our time. Two scientists, Roger Revelle and Hans Suess, wrote in a scientific paper, “Human beings are now carrying out a large-scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.”
If anything could sway a fence-sitter to consider whether geoengineering research makes sense, perhaps it is this. The fact is, we are living through a test already.
Editor’s Note: In 2008 professors Keith, Schrag and Battisti participated in Novim’s first scientific study – “Geoengineering the Atmosphere”
The fate of reform measures hangs on ballot language written by the state attorney general, usually a Democrat elected with strong union support.
More than 20 times in the last 15 years, political leaders looking to control California’s fast-growing public pension costs have tried to put reform initiatives before the voters.
None of the proposals has made it onto the ballot.
Often, advocates could not raise enough money for signature gathering, advertising and other costs of an initiative campaign. Some of the most promising efforts, however, ran into a different kind of obstacle: an official summary, written by the state attorney general, that described the initiative in terms likely to alienate voters. Facing bleak prospects at the polls, the sponsors abandoned the campaigns.
Taxpayer advocates contend that the attorneys general — Democrats elected with robust support from organized labor — put a finger on the scale, distilling the initiatives in language that echoed labor’s rhetoric.
Labor leaders say the summaries were neutral and accurate, and that the problem lay with the initiatives — which, they contend, would have diluted benefits already promised to public employees.
The attorney general’s title and summary, which appear on petitions and in the official voter guide, can powerfully shape attitudes toward a ballot measure. The language has emerged as a battleground between those seeking to overhaul California’s public retirement system and those determined to defend it.
“It’s the one thing every voter will see, and it’s the last thing every voter will see,” said Thomas W. Hiltachk, a lawyer who specializes in California initiatives and has run campaigns in support of Republican ballot measures. “Whether you have a well-funded campaign or an underfunded campaign, those words are critically important.”
Retirement benefits are the fastest-growing expense in many municipal budgets. In Los Angeles and other cities, they account for 20% or more of general fund spending. The burden has pushed some cities to the edge of bankruptcy.
Yet a string of court rulings, known collectively as “the California Rule,” has posed a formidable barrier to change. It prohibits cuts in pension benefits already granted or promised. Under the rule, pensions are considered binding contracts protected by the state Constitution.
For that reason, many of the cost-saving measures passed by the Legislature in recent years, including later retirement ages and smaller monthly pension checks, did not affect employees already on the payroll. They applied only to newly hired workers. As a result, the savings will not kick in for many years.
Pension reform advocates say that achieving real relief in the near term will require reductions in benefits already granted to current employees. Because of the California Rule, that can be done only by amending the Constitution. And that requires a ballot initiative.
A wide majority of California voters surveyed have favored changing the pension system to save money. Support drops sharply when the change is framed as reducing benefits for teachers, police and firefighters.
That’s why the attorney general’s choice of words is so important. By law, the title and summary “shall be true and impartial” and not likely to “create prejudice for or against the proposed measure.”
Disputes over the language have figured prominently in several major reform attempts. The most recent, in 2013-14, was led by then-San Jose Mayor Chuck Reed and former San Diego City Councilman Carl DeMaio.
Reed, a Democrat, and DeMaio, a Republican, proposed a constitutional amendment to alter the California Rule by targeting future benefits of current employees. Workers would keep retirement benefits they had earned, but future benefits would no longer be guaranteed; they would be determined through collective bargaining or public referendum.
A survey conducted for labor groups opposed to the initiative found that majority support for pension reform collapsed if it was described as “eliminating police, firefighters, and other public employees’ vested pension benefits” or “eliminating state constitutional protections.”
The word “eliminate” “fosters a visceral negative response from voters,” according to a memo by the labor coalition’s Washington pollsters.
The Sacramento Bee published an article about the memo in December 2013. Three weeks later, then-Atty. Gen. Kamala Harris issued her summary of the initiative.
It said the Reed-DeMaio measure “eliminates constitutional protections for vested pension and retiree healthcare benefits for current public employees, including teachers, nurses, and peace officers, for future work performed.”
Reed and DeMaio sued the attorney general, accusing her of modeling her ballot language on the labor survey. The suit suggested an alternative summary: “Amends California constitution to allow government employers to negotiate with government employees to modify pension and retiree healthcare benefits for future work performed.”
The court sided with Harris, ruling that Reed and DeMaio had not proved the summary was false or misleading and that the attorney general is afforded “considerable latitude” in crafting the language.
Reed and DeMaio dropped the initiative in March 2014 after concluding that it was unlikely to win with Harris’ ballot language.
“I personally didn’t think she would be so obviously, egregiously negative,” said Reed, now special counsel at Hopkins & Carley, a Silicon Valley law firm.
Harris had been elected attorney general in 2010 with strong financial support from labor: more than $600,000 in donations to her campaign and to independent expenditure committees, according to the National Institute on Money in State Politics. She raised a total of $7.5 million that year.
Harris received an additional $400,000 from labor for her 2014 reelection effort, and she collected $73,102 from public employee unions in her successful $14-million campaign for the U.S. Senate last year.
Harris did not respond to requests to be interviewed for this article.
Steve Maviglio, a spokesman for Californians for Retirement Security, the labor coalition that opposed the initiative, said the campaign contributions to Harris don’t prove anything. He said the labor survey indicated that the initiative would lose “regardless of how the ballot language is written.”
Recent pension initiatives, Maviglio said, have simply been too extreme for voters to support, and blaming the attorney general’s language for their failure is “just a lame excuse for their political malpractice.”
A former senior advisor to Harris said the attorney general was keenly aware of how the title and summary could affect a ballot measure’s prospects.
Staff attorneys typically drafted multiple versions after consulting both advocates for and opponents of a particular initiative, the former advisor said. Staff lawyers would weigh the pros and cons of each, and Harris would approve the final wording.
Regarding the Reed-DeMaio initiative, the former advisor said the similarity between the attorney general’s summary and the labor memo reflected shared values, not a quid pro quo.
It was the second time Harris approved summary language that proponents of pension reform regarded as unfair.
California Pension Reform, a Republican-led advocacy group, proposed an initiative for the 2012 ballot that would have reduced benefits for both current and newly hired public workers. It called for imposing caps on how much government employers could contribute toward workers’ retirements.
The attorney general’s summary stated that the initiative “eliminates constitutional protections for current and future public employees’ vested pension benefits.”
California Pension Reform dropped the initiative, asserting that the “false and misleading title and summary make it nearly impossible to pass.” At the time, Harris’ office rejected the criticism, saying the title and summary accurately described “the initiative’s chief points and purposes.”
Dan Pellissier, president of the advocacy group and a former aide to Assembly Republicans, said there wasn’t enough time to challenge the attorney general in court and still collect enough signatures to meet the ballot deadline. He said the summary was unfair because it stated as fact that pension benefits are constitutionally protected when the issue is in dispute.
One of Harris’ predecessors, Democratic Atty. Gen. Bill Lockyer, was accused of writing politically charged language for a pension measure in 2005. The initiative, proposed by then-Gov. Arnold Schwarzenegger, would have given future state workers 401(k)-style retirement accounts instead of traditional pensions.
Schwarzenegger said in his State of the State Address that year that California’s pension obligations had risen from $160 million in 2000 to $2.6 billion, “threatening our state.”
But the Republican governor abandoned the initiative in April 2005, after Lockyer’s office issued a title and summary that said the measure would eliminate death and disability benefits for future public employees.
Schwarzenegger’s initiative did not mention death benefits. But the governor’s advisors appeared to have overlooked a key detail: death and disability benefits were tied to guaranteed pensions. Newly hired civil servants, who wouldn’t have such pensions, wouldn’t have the associated benefits either, unless they were provided separately.
Opponents of the measure seized on the issue and mobilized widows of slain police officers to speak out against Schwarzenegger’s proposal.
Schwarzenegger said at the time that he would never eliminate police death benefits, and that Lockyer had misinterpreted the initiative.
The governor’s communications director, Rob Stutzman, suggested that the attorney general was trying to curry favor with labor unions to mount a possible bid for governor. Lockyer received more than $1.5 million in campaign contributions from public employee unions during his two terms as attorney general.
Lockyer, now a lawyer with the firm Brown Rudnick in Orange County, said his staff’s analysis of the Schwarzenegger initiative was correct. “They complained about it, but it was a lot of political whining,” he said.
Jon Coupal, president of the Howard Jarvis Taxpayers Assn., which backed Schwarzenegger’s proposal, disagreed. He said nothing in the initiative would have prevented death and disability benefits from continuing. “They created ambiguity out of whole cloth,” he said.
Reed and other proponents of pension reform plan to put a new measure on the ballot next year. If they do, the title and summary will be written by California’s new attorney general, former U.S. Rep. Xavier Becerra, a Democrat from Los Angeles.
Becerra was nominated to serve the remainder of Harris’ term after she was sworn in as a U.S. senator in January. During his confirmation hearing, Becerra was asked about the attorney general’s obligation to write neutral summaries for ballot measures.
“I understand the importance of a word,” he said, adding: “The words I get to issue on behalf of the people of this state will be the words that are operative for everyone.”
After his confirmation, during his first news conference as attorney general, Becerra addressed the issue again. He said he recognized the need for “fiscally sound” pension policies, but added that his father was a retired union construction worker, with a pension.
“Do I want to see someone like my father be told that he’s not going to get what he bargained for?” he said. “You drive on the roads that my dad built. I think anyone who works hard deserves to get what they bargained for.”
Judy Lin is a reporter at CALmatters, a nonprofit journalism venture in Sacramento.
In “Snowpiercer,” the delirious sci-fi thriller by the Korean director Bong Joon-ho, an attempt to engineer the climate and stop global warming goes horribly wrong. The planet freezes. Only the passengers on a train endlessly circumnavigating the globe survive. Those in first class eat sushi and quaff wine. People in steerage eat cockroach protein bars.
Scientists must start looking into this. Seriously.
News about the climate has become alarming over the last few months. In December, startled scientists revealed that temperatures in some parts of the Arctic had spiked more than 35 degrees Fahrenheit above their historical averages. In March, others reported that sea ice in the Arctic had dropped to its lowest level on record. A warming ocean has already killed large chunks of Australia’s Great Barrier Reef.
Let’s get real. The odds that these processes could be slowed, let alone stopped, by deploying more solar panels and wind turbines looked long even before President Trump’s election. They look longer still now that Mr. Trump has gone to work undermining President Barack Obama’s strategy to reduce greenhouse gas emissions.
That is where engineering the climate comes in. Last month, scholars from the physical and social sciences who are interested in climate change gathered in Washington to discuss approaches like cooling the planet by shooting aerosols into the stratosphere or whitening clouds to reflect sunlight back into space, techniques that may prove indispensable to preventing the disastrous consequences of warming.
Aerosols could be loaded into military jets, to be sprayed into the atmosphere at high altitude. Clouds at sea could be made more reflective by spraying them with a fine saline mist, drawn from the ocean.
The world’s immediate priority may be to reduce greenhouse gas emissions to meet and hopefully exceed the promises made at the climate summit meeting in Paris in December 2015. But as Janos Pasztor, who heads the Carnegie Climate Geoengineering Governance Initiative, told me, “The reality is that we may need more tools even if we achieve these goals.”
The carbon dioxide that humanity has pumped into the atmosphere is already producing faster, deeper changes to the world’s climate and ecosystems than were expected not long ago. Barring some technology that could pull it out at a reasonable cost — a long shot for the foreseeable future, according to many scientists — it will stay there for a long time, warming the atmosphere further for decades to come.
The world is not cutting emissions fast enough to prevent global temperatures from spiking into dangerous territory. Unchecked, that warming could slash crop yields and decimate food production in many parts of the world, flood coastal cities while parching large swaths of the globe, and kill perhaps millions of mostly poor people from heat stress alone.
Solving the climate imperative will require cutting greenhouse gas emissions down to zero, ideally in this century, and probably sucking some out. But solar geoengineering could prove a critical complement to mitigation, giving humanity time to develop the political will and the technologies to achieve the needed decarbonization.
With Mr. Trump pushing the United States, the world’s second-largest emitter after China, away from its mitigation commitments, geoengineering looks even more compelling.
“If the United States starts going backwards or not going forward fast enough in terms of emissions reductions, then more and more people will start talking about these options,” said Mr. Pasztor, a former United Nations assistant secretary general on climate change.
While many of the scholars gathered in Washington expressed misgivings about deploying geoengineering technologies, there was a near-universal consensus on the need to invest more in research — not only into the power to cool the atmosphere but also into the potential side effects on the atmosphere’s chemistry and on weather patterns in different world regions.
While it is known that solar radiation management can cool the atmosphere, fears that field research would look too much like deployment have so far limited research pretty much to computer modeling of its effects and small-scale experiments in the lab.
Critically, the academics noted, the research agenda must include an open, international debate about the governance structures necessary to deploy a technology that, at a stroke, would affect every society and natural system in the world. In other words, geoengineering needs to be addressed not as science fiction, but as a potential part of the future just a few decades down the road.
“Today it is still a taboo, but it is a taboo that is crumbling,” said David Keith, a noted Harvard physicist who was an organizer of the conclave.
Arguments against geoengineering are in some ways akin to those made against genetically modified organisms and so-called Frankenfood. It amounts to messing with nature. But there are more practical causes for concern about the deployment of such a radical technology. How would it affect the ozone in the stratosphere? How would it change patterns of precipitation?
Moreover, how could the world agree on the deployment of a technology that will have different impacts on different countries? How could the world balance the global benefit of a cooling atmosphere against a huge disruption of the monsoon on the Indian subcontinent? Who would make the call? Would the United States agree to this kind of thing if it brought drought to the Midwest? Would Russia let it happen if it froze over its northern ports?
Geoengineering would be cheap enough that even a middle-income country could deploy it unilaterally. Some scientists have estimated that solar radiation management could cool the earth quickly for as little as $5 billion per year or so. What if the Trump administration decided to focus American efforts to combat climate change on geoengineering alone?
That wouldn’t work, in the end. If greenhouse gases were not removed from the atmosphere, the world would heat up in a snap as soon as the aerosol injections were turned off. Still, the temptation to combat climate change on the cheap while continuing to exploit fossil fuels could be hard to resist for a president who promised to revive coal and has shown little interest in global diplomacy.
As Scott Barrett, an environmental economist from Columbia University who was at the meeting in Washington, noted, “The biggest challenge posed by geoengineering is unlikely to be technical, but rather involve the way we govern the use of this unprecedented technology.”
These ethical considerations should be taken into account in any research program into managing the rays of the sun. Perhaps researchers should refrain from taking money from an American administration that denies climate science, to avoid delegitimizing the technology in the eyes of the rest of the world.
People should keep in mind the warning by Alan Robock, a Rutgers University climatologist, who argued that the worst case from the deployment of geoengineering technologies might be nuclear war.
But it would be a mistake to halt research into this new technological tool. Geoengineering might ultimately prove to be a bad idea for a variety of reasons. But only further research can tell us that.
The best way to think of the options ahead is as offering a balance of risks. On one plate sit whatever pitfalls geoengineering might bring. They might be preferable to the prospect of radical climate change. To return to delirious sci-fi fantasies: the trade-off won’t necessarily be between cockroach protein bars and some happy future of cheap, renewable energy. It is more likely to pit cockroach treats against some dystopian, broiling world.
There’s certainly a lot of exciting work happening in science journalism lately, whether it’s the surge in science blogging or new publications like STAT, Undark, and Nautilus.
The news media is still struggling financially, however, and the industry is undergoing some major shifts. For some outlets, that’s led to shrinking or even disappearing science desks. Journalists are a scrappy bunch, though, always adjusting to keep the news business alive and kicking. Philanthropic resources have become an important part of that retooling.
The latest move by a funder to give a jolt to journalism comes from the Howard Hughes Medical Institute, one of the largest private funders of academic science research. HHMI also has a large science education program, which gave $86 million in fiscal year 2016.
That funding has backed work like creating science resources for educators, providing research opportunities to college students, and engaging the public. The last area includes a film production arm, and collaborations with outlets like NOVA, the New York Times, and Science Friday. In fiscal year 2014, HHMI funded nonprofit news outlet The Conversation with $500,000 for science journalism, according to tax forms.
Now, a collaboration between HHMI and the AP is backing two year-long projects to bolster science coverage. One is a series of stories, profiles, videos and graphics about genetic medicine; the second supports multimedia coverage aimed at putting scientific evidence in the context of subjects like the environment and public health.
The funds will increase the number of journalists on the AP’s team and the number of stories the service can publish. While the announcement states that HHMI will offer expert background information and educational materials, the AP says it will retain editorial control over what gets published.
While we’ve seen some funders back journalism in response to attacks from the Trump administration, HHMI’s Sean B. Carroll, vice president of science education, says this was in the works well before the election. The institute takes a longer view in its science education work, and this collaboration is more in response to larger struggles in journalism, he said via email.
“The pressures on newsrooms have led to a widespread reduction of science journalists. As the world’s largest news gathering organization, AP is perfectly positioned to provide its vast client base with more and deeper science reporting,” Carroll said.
The AP is a large news cooperative, so there’s potential for stories the funder is facilitating to have greater reach.
Generally speaking, there are a couple kinds of journalism funders out there—those supporting the field on basic principle (Omidyar, Knight) and those enhancing coverage of a particular topic that’s been neglected for whatever reason. We’ve seen a lot of nonprofit environmental coverage, for example. RWJF backs health journalism in a big way. Gates is a major backer of education reporting. And on the science beat, the Sloan Foundation supports various media, including popular books like Hidden Figures and shows like Radiolab.
Media philanthropy isn’t new, but as it’s grown, funders are still feeling out some of the ground rules and best practices. There’s been controversy regarding what kind of influence funders have on the coverage of news outlets.
My only criticism of this particular initiative is that HHMI is not disclosing the amount of funding involved. I don’t believe there is anything sinister behind that decision, and HHMI staff say they generally refrain from sharing financials. Both partners publicized the collaboration and cited motivation to increase attention to and understanding of science. But one of the guiding principles in journalism philanthropy should always be a high level of transparency, so we’d like to see all the cards on the table when it comes to this kind of grant.
In the years before California’s civil engineers got around to confining the Sacramento River, it often spilled over its banks, inundating huge swaths of the Central Valley. Sometimes the floodwater would stand for a hundred days at a time. The botanist William Henry Brewer, writing in 1862, after a season of torrential rains, described the valley as “a lake extending from the mountains on one side to the coast range hills on the other.” The water was so deep, he reported, that cargo steamers could navigate it. “Nearly every house and farm over this immense region is gone,” Brewer wrote. “America has never before seen such desolation by flood as this has been, and seldom has the Old World seen the like.” Half a century later, to solve the problem, California built a number of flood-control systems, including the Sacramento Weir, a series of forty-eight hand-operated gates placed strategically along the Sacramento and American Rivers. When the waters rose, they would now be shunted into an unpopulated expanse known as the Yolo Bypass, a floodplain roughly equivalent in size to twenty Central Parks.
This winter, for the first time in a decade, and after five years of a crippling statewide drought, the Yolo Bypass is submerged again. Situated at the heart of the Pacific Flyway, a great migratory corridor stretching from Alaska to the tip of South America, the area teems with sandhill cranes, California brown pelicans, and dozens of other bird species. But its estuarine tranquillity is deceptive. In the past five months—the wettest since record-keeping began, in 1895—California has experienced widespread hydrological chaos. In January, after a series of heavy rainstorms, water managers activated the Sacramento Weir, filling the Yolo Bypass. In February, emergency releases from Anderson Lake Dam, in Santa Clara County, flooded hundreds of homes in San Jose. The rain also caused landslides near Big Sur, washing out several roads and bridges and leaving about four hundred people stranded. But it was the near-failure of the dam at Lake Oroville, three and a half hours north of San Francisco, that made the scale of the crisis clear. Oroville is the state’s second-largest reservoir but arguably its most important; it feeds the California Aqueduct, which supplies drinking water to twenty-five million residents across greater Los Angeles and irrigates millions of acres of Central Valley farmland.
Less than a year ago, Lake Oroville was a vivid symbol of the state’s prolonged drought. Aerial images showed a landscape of spider-webbed mudflats and desiccated tributaries as the reservoir fell to levels not seen in almost forty years. Starting in January, though, the lake rapidly filled to capacity. To prevent the water from breaching the dam, engineers began discharging it at a rate of 2.7 billion gallons per hour—about the same flow as at Niagara Falls. The frothing cascade, with its countless bubbles acting as tiny jackhammers, hollowed out a cavernous pit in the concrete spillway. The engineers diverted the flow to an earthen emergency spillway, but the torrent rapidly chewed away at that, too. With the integrity of the dam under threat, close to two hundred thousand local residents were evacuated. (A total failure of the structure, according to one water manager, would have sent a thirty-foot wave tearing through communities downstream.)
Ask most Californians, however, and they’ll tell you that the chaos is in service of a greater good. As of last week, according to the National Drought Mitigation Center, more than three-quarters of the state is out of the drought, with barely one per cent falling into the “severe” category—almost the reverse of the situation at this time last year. Already in 2017, many parts of California have received more than twice their average annual precipitation. The numbers would seem to paint a picture of watery salvation. But Peter Gleick, the chief scientist at the Oakland-based Pacific Institute, told me that one year of heavy precipitation, even a record-breaking one, will not undo the most serious repercussion of the drought: a severe deficit of groundwater. For years, Central Valley farmers have drawn liberally from the region’s aquifers to compensate for reduced supplies from canals and aqueducts. When a large enough volume of groundwater is pumped away, the land can slump like a punctured air mattress. Areas along the valley’s western edge have sunk by nearly thirty feet since the nineteen-twenties, and in some places the local infrastructure—roads, bridges, even the California Aqueduct itself—is at risk. Farmers and municipalities have responded by digging deeper wells, but such measures seem only to be delaying the inevitable. In Tulare County, south of Fresno, where groundwater overdraft has been particularly severe, the number of reported well failures has continued to climb, almost quadrupling since 2014, in spite of last year’s above-average precipitation and this year’s deluge.
Climate change is a significant contributor to the problem. As the Stanford climatologist Noah Diffenbaugh noted in 2015, California’s reservoirs, aqueducts, and canals are vestiges of a cooler, less drought-prone past. The state’s model of water storage is snowpack-dependent, meaning that it works properly only when the bulk of the water in the system is locked up in mountain snow. These days, though, more precipitation falls as rain than as snow, placing stress on the reservoirs. And even though this year has seen record snowpack—a hundred and eighty-five per cent of the average, as of March 1st—California has also experienced dozens of so-called rain-on-snow events, which further hasten the melting. Meanwhile, warmer temperatures are projected to shift the snow line to higher altitudes, dramatically shrinking the over-all size of the state’s snow reservoir. At current rates of warming, the Sierra Nevada could lose a quarter of its snowpack by the middle of the century, according to the California Department of Water Resources.
Sudden swings between drought and flood have been part of California’s climatic history for a long time, Diffenbaugh told me, but those swings now stand to become more extreme. “This is exactly what climate scientists have predicted for at least the last thirty years,” he said. The solution, Gleick said, is to prioritize the aquifers—and quickly, because severe land subsidence can permanently eliminate storage space. “Unless a massive effort is made to both reduce overdraft and to artificially enhance recharge rates, California’s groundwater will continue to decline,” he wrote in an e-mail. Not only are there fewer regulatory hurdles involved in underground water-banking than, say, permitting a new reservoir or desalination plant, but the costs of groundwater storage are far lower than these other options, scientists at Stanford’s Bill Lane Center for the American West recently found.
Last Friday, the state’s Department of Water Resources reopened the patched-up concrete spillway at Lake Oroville. Hundreds of millions of dollars’ worth of repairs remain, but water managers must make room in the reservoir for the spring melt. At the moment, there is no large-scale engineering system that would allow the huge surge of surface water currently flowing across California to be delivered to the Central Valley’s aquifers. And more to the point, perhaps, the state lacks the sorts of regulations that would make such a system viable; a law passed in 2014 requires that government agencies “achieve sustainability” in how they apportion groundwater, but not until 2040. Ultimately, Gleick told me, California won’t pursue artificial recharge until it can keep better track of who is using what. “It’s like putting money into a bank account that anyone else can withdraw,” he said. “Until it’s monitored, no one will make a deposit.”
‘Virtual power plants’ would store renewable energy in batteries by day and redistribute it when demand surges after sunset
California utilities including PG&E Corp., Edison International and Sempra Energy are testing new ways to network solar panels, battery storage, two-way communication devices and software to create “virtual power plants” that manage green power and feed it into the power grid as needed.
The Golden State is ramping up renewable energy as it pledges to be a bulwark against the Trump administration’s pro-fossil fuel policies. But first, it has to figure out what to do with all the excess power it generates when the sun is shining and the wind is blowing.
California’s solar farms create so much power during daylight hours that they often drive real-time wholesale prices in the state to zero. Meanwhile, the need for electricity can spike after sunset, sometimes sending real-time prices as high as $1,000 a megawatt-hour.
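The price spread described above, from near zero at midday to as much as $1,000 a megawatt-hour after sunset, is exactly the opportunity a storage operator would arbitrage. A back-of-the-envelope sketch using the article’s price extremes; the round-trip efficiency is an assumed illustrative figure, not something the article reports:

```python
# Back-of-the-envelope arbitrage value for a battery, using the price
# extremes cited in the article. The round-trip efficiency is an
# assumed illustrative figure.
CHARGE_PRICE = 0.0        # $/MWh, midday wholesale price driven to zero
DISCHARGE_PRICE = 1000.0  # $/MWh, real-time price spike after sunset
ROUND_TRIP_EFFICIENCY = 0.85  # assumed lithium-ion round-trip efficiency

def daily_arbitrage_revenue(capacity_mwh: float) -> float:
    """Revenue from one full charge/discharge cycle at the extreme prices."""
    energy_sold = capacity_mwh * ROUND_TRIP_EFFICIENCY
    return energy_sold * DISCHARGE_PRICE - capacity_mwh * CHARGE_PRICE

print(daily_arbitrage_revenue(1.0))  # 850.0 dollars for a 1 MWh battery
```

On a typical day the spread is far smaller, so actual revenues would sit well below this best-case number.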
Utility companies are looking to correct that supply-demand mismatch and ease the strain on the electric grid as California considers retiring its last nuclear plant in 2025 while nearly doubling the power it gets from renewable sources to 50% by 2030.
Last month, power company AES Corp. flipped the switch on a bank of 400,000 lithium-ion batteries it installed in Escondido, Calif., for Sempra Energy. Sempra’s San Diego utility plans to use the batteries, made by Samsung SDI Co. Ltd., to smooth out power flows on its grid.
Tesla Inc. is supplying batteries to a Los Angeles-area network serving Edison International that, when finished in 2020, would be the world’s largest of its kind, according to the developer, Advanced Microgrid Solutions. The network would spread across more than 100 office buildings and industrial properties.
When the Edison utility needs more electricity on its system, the batteries would be able to deliver 360 megawatt-hours of extra power to the buildings and the grid, enough to power 20,000 homes for a day, on short notice. At other times, the batteries would help firms hosting the arrays to cut their utility bills, said Susan Kennedy, chief executive of Advanced Microgrid Solutions, which is developing the project.
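As a plausibility check on those figures, 360 megawatt-hours shared across 20,000 homes for one day implies a per-home consumption that can be compared against typical usage:

```python
# Plausibility check: does 360 MWh really equal a day's power for
# 20,000 homes? Convert to kWh per home per day.
total_mwh = 360
homes = 20_000

kwh_per_home_per_day = total_mwh * 1000 / homes
print(kwh_per_home_per_day)  # 18.0 kWh per home per day
```

Eighteen kilowatt-hours a day is roughly in line with average California household electricity use, so the article’s equivalence holds up.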
“It will show how you can use communication and control technology to make a bunch of distributed energy assets act like one big one,” said J.B. Straubel, Tesla’s chief technical officer.
The companies declined to say how much the project would cost.
PG&E plans to use clean energy to replace the 2,200-megawatt Diablo Canyon nuclear power plant, which it is proposing to shut down in 2025. The San Francisco utility, which plans to invest about $1 billion through 2020 to modernize its grid, is testing batteries, software and other technologies.
“We are rethinking the grid and how it operates,” said Steve Malnight, PG&E’s senior vice president of strategy and policy.
Virtual power plants remain a considerably more expensive option than building a traditional power plant to meet peak demand.
Stored power from lithium-ion batteries can do the work of a natural-gas peaker plant at an average cost of between $285 and $581 a megawatt-hour, according to a December report by Lazard Ltd. In contrast, electricity from a new gas peaker plant costs between $155 and $227 a megawatt-hour, according to Lazard.
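Taking the midpoints of the two Lazard ranges gives a quick sense of the size of that cost gap:

```python
# Midpoints of the Lazard cost ranges quoted in the article, in $/MWh.
battery_range = (285, 581)     # lithium-ion storage doing peaker duty
gas_peaker_range = (155, 227)  # a new natural-gas peaker plant

battery_mid = sum(battery_range) / 2  # 433.0 $/MWh
gas_mid = sum(gas_peaker_range) / 2   # 191.0 $/MWh
premium = battery_mid / gas_mid
print(round(premium, 2))  # 2.27: batteries cost a bit over twice as much
```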
But some of the equipment barely existed five years ago. As prices for technologies such as battery storage fall, utilities should be able to adopt more of them, said Michael Picker, president of the state Public Utilities Commission.
California currently has to sell excess solar power at low prices or give it away to utilities in Arizona and other states, through a real-time power market run by California’s Independent System Operator, which oversees the state grid.
Sometimes, offering the excess power at low prices isn’t enough and prices go negative, as a way for power suppliers to encourage other utilities to take power they can’t use. That happened on 178 days last year.
Utilities in Colorado, New York and other states that plan to get a higher percentage of their power from renewables are also experimenting with virtual power-plant technology.
Consolidated Edison Inc. is using solar panels, batteries and power conservation technologies at several dozen New York City buildings to reduce peak demand by as much as 52 megawatts. Because of the $200 million project, the utility can postpone installing more than $1 billion of conventional power equipment for another 20 years, said Matthew Ketschke, a Con Edison vice president.
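The deferral logic can be sketched with a simple present-value calculation; the 7% discount rate here is purely illustrative, not a figure from Con Edison:

```python
# Present value of Con Edison's deferred $1 billion outlay. The 7%
# discount rate is an illustrative assumption, not a company figure.
def present_value(amount: float, years: int, rate: float) -> float:
    """Discount a future cash outlay back to today's dollars."""
    return amount / (1 + rate) ** years

deferred_cost_pv = present_value(1_000_000_000, 20, 0.07)
print(round(deferred_cost_pv / 1e6))  # ~258 (million dollars)
```

Even before counting the battery project’s other benefits, pushing a $1 billion outlay 20 years into the future shrinks its present cost to roughly a quarter of its face value.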
Virtual power plants alone, however, may not solve problems created by boosting intermittent renewable energy.
In Arizona, regulators want to double the state’s renewable energy target to 30% by 2030. But some utilities worry that adding more solar power on top of California’s already-robust supply could be costly and wasteful, even with battery storage.
“Storage may help you within the day, but a battery isn’t designed to store energy from March until it’s needed in June,” said Jeff Guldner, senior vice president of public policy at Arizona Public Service Co. in Phoenix.
Since former NASA engineer Kirk Sorensen revived forgotten molten salt reactor (MSR) technology in the 2000s, interest in MSRs has been growing quickly. Since 2011, four separate companies in North America have announced plans for MSRs: Flibe Energy (started by Sorensen himself), Transatomic Power (started by two recent MIT graduates), Terrestrial Energy (based in Canada, which recently partnered with the Department of Energy’s Oak Ridge National Laboratory), and Martingale, Inc., which recently made public the design for its ThorCon MSR.
In addition, there is now renewed interest in MSRs in Japan, Russia, France and China, with China also announcing that MSR technology is one of its “five innovation centers that will unite the country’s leading talents for research in advanced science and technology fields, according to the Chinese Academy of Sciences.”
Why this sudden interest in a nuclear technology that dates back to the 1950s? The answer lies in both the phenomenal safety of MSRs and their potential to help solve many of today’s energy-related problems, from climate change to energy poverty to the intermittency of wind and solar power. In fact, MSRs operate so safely that they may alleviate public fears about nuclear energy. Before looking at the potential of MSRs, though, it is useful to first take a high-level look at what they are and how they work.
What is a Molten Salt Reactor?
A molten salt reactor (MSR) is a type of nuclear reactor that uses liquid fuel instead of the solid fuel rods used in conventional nuclear reactors. Using liquid fuel provides many advantages in safety and simplicity of design.
The figure above shows one type of MSR design. As shown towards the left, the reactor contains “fuel salt”, which is fuel (such as uranium-235) dissolved in a mixture of molten fluoride salts. After a fission chain reaction starts in the reactor, the rate of fission stabilizes once the fuel salt reaches around 700 degrees Celsius. If the reactor gets hotter than 700 degrees, the resulting expansion of the fuel salt pushes some of the fuel into the circulation loop; this, in turn, decreases the fission rate (since fission cannot be maintained in the loop), causing the fuel to cool.
Unlike conventional reactors, the rate of fission in an MSR is inherently stable. Nonetheless, should the fuel salt become too hot to operate safely, a freeze plug (made of salts kept solid by a cooling fan) below the reactor will melt and the liquid content of the reactor will flow down into emergency dump tanks where it cannot continue to fission and can cool safely.
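The self-stabilizing behavior described above is a negative feedback loop, which can be illustrated with a toy simulation; every constant here is invented for illustration and bears no relation to real reactor parameters:

```python
# Toy model of the negative temperature feedback: above the ~700 C
# operating point the fuel salt expands, pushing fuel out of the core
# and cutting the fission rate; below it, the reverse holds. All
# constants are made up for illustration, not real reactor parameters.
SETPOINT = 700.0  # deg C, the stable operating temperature from the article
ALPHA = 0.02      # fractional power change per deg C of deviation (invented)
HEATING = 10.0    # deg C of heating per step at nominal power (invented)

def step(temp: float) -> float:
    """One time step: fission heating minus constant heat removal."""
    fission_heating = HEATING * (1 - ALPHA * (temp - SETPOINT))
    heat_removal = HEATING  # coolant loop removes nominal power
    return temp + fission_heating - heat_removal

temp = 760.0  # start from an overheated state
for _ in range(50):
    temp = step(temp)
print(round(temp, 1))  # settles back to 700.0
```

The deviation from the setpoint shrinks geometrically each step, which is the sense in which the fission rate is "inherently stable."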
The control rods at the top of the reactor provide further control of the rate of fission by absorbing neutrons that might otherwise cause a fission reaction. A test in the 1960s showed that an MSR can continue to run safely without operator intervention even after intentional removal of a control rod during full operation.
The fuel salt is circulated through a heat exchanger where it is cooled by another molten salt loop that is free of radioactive fuel and fission products. The heat from this second loop can be used to do work, such as heating water to turn a steam turbine to generate electricity.
The fuel salt is also circulated through a chemical processing plant. This plant is used to both remove undesired fission products and add more fuel to the reactor.
Why Molten Salt Reactors?
MSRs are a huge departure from the conventional reactors most people are familiar with. Key features include:
MSRs are walk-away safe. They cannot melt down the way conventional reactors can because they are molten by design. An operator cannot even force an MSR to overheat. If for some reason an MSR were to overheat, the heat would melt a freeze plug at the bottom of the reactor vessel and the liquid fuel salts would drain into the emergency cooling tanks, where they would cool and solidify. Neither operator intervention nor emergency backup power is needed for this to happen.
Even a human-engineered breach (such as a terrorist attack) of an MSR cannot cause any significant release of radioactivity. The fuel salts in MSRs work at normal atmospheric pressure, so a breach of the reactor containment vessel would simply leak out the liquid fuel, which would then solidify as it cooled. (By comparison, a breach of a conventional reactor leads to the highly pressurized and radioactive water coolant spewing into the atmosphere and potentially leaking into surrounding bodies of water.) Additionally, radioactive byproducts of fission like iodine-131, cesium-134 and cesium-137 (such as those released into the atmosphere and ocean by the Fukushima meltdown) remain physically bound to the hardened fuel salt and do not leave the reactor site.
A solution to nuclear waste and stockpiles of plutonium
Conventional reactors use solid ceramic fuel rods containing enriched uranium. The fission of uranium in the fuel releases gases, such as xenon, which cause the fuel rods to crack. This cracking, in turn, makes it necessary to remove and replace the fuel rods well before most of the actinides (elements, such as uranium, that remain radioactive for thousands of years) have fissioned. This is why nuclear waste is radioactive for a very long time.
However, the actinides that remain in the cracked fuel rods are still an excellent source of fuel for reactors. France, for example, recycles the waste instead of burying it so that these actinides can be placed in new fuel rods and used to make more electricity.
Because MSRs use liquid fuel, released gases simply bubble up, typically to an off-gas unit in the coolant loop (not shown in the figure), where they can be removed. Since the liquid fuel is unaffected by the release of gas, the fuel can be left in the reactor until almost all the actinides have fissioned, leaving only elements that are radioactive for a relatively short time (300 years or less). The result is that MSRs have no long-term issue with regard to nuclear waste.
Not only do MSRs lack a long-term waste issue, they can be used to dispose of current stockpiles of nuclear waste by using those stockpiles as fuel. Even stockpiles of plutonium can be disposed of this way. In fact, conventional reactors typically use only 3-to-5% of the available energy in their fuel rods before the rods must be replaced because of cracking. MSRs can use up most of the rest of the available fuel in these rods to make electricity.
Note: The reason that conventional reactors can’t use up all the actinides in their fuel rods is a bit more complex than what is described above. The neutrons in conventional reactors only move fast enough to cause enriched uranium to fission. Fissioning nearly all of the actinides also requires much faster-moving neutrons, which can be achieved in both MSRs and fast-neutron solid-fuel reactors, such as the GE Hitachi PRISM.
Abundant energy cheaper than energy from coal
How do we get all 7 billion people on the planet (perhaps 9 billion by 2050) to agree to drastically cut their CO2 emissions? The answer: make it in their immediate self-interest by providing cheap CO2-free energy, energy cheaper than they can get by burning coal.
MSRs can be made cheaply because they are simple compared to conventional reactors, which require large pressurized containment domes and many engineered (rather than inherent) redundant safety systems. Having far fewer parts than conventional reactors, MSRs are inherently cheaper. This simplicity also allows MSRs to be small, which in turn makes them ideal for factory-based mass production (unlike conventional reactors). The cost efficiencies associated with mass production further drive down the cost and can make the ramp-up of nuclear power much faster.
Load following solar and wind power
A significant limitation of solar and wind power is their intermittency and unreliability. Currently these issues are dealt with in the U.S. by quickly firing up natural gas plants to load-follow solar and wind power. In other words, gas plants must ramp up quickly when power from wind and sun is scarce, and ramp down quickly when the sun is shining or the wind is blowing. Unfortunately, this is an inefficient way to burn natural gas, which can result in almost as much CO2 output from gas plants ramping up and down as from running them continuously. And, of course, continued use of natural gas requires continued fracking. (Although many hope that a grid-level energy storage technology will someday negate the need for natural gas plants, no economical energy storage is on the horizon.)
Unlike conventional nuclear reactors, MSRs are good candidates for CO2-free load following of solar and wind power. This is because slowing down the fission rate causes a temporary buildup of neutron-absorbing xenon gas. When conventional reactors ramp down, they must wait several days before restarting while the xenon decays. This extra xenon is not a problem for MSRs because of their off-gas system, which allows immediate removal of xenon; hence, no delay is needed after ramping an MSR up or down.
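The "several days" restart delay is consistent with the half-life of xenon-135, the dominant neutron absorber, which is about 9.1 hours. This simplified calculation ignores the continued production of xenon from iodine-135 decay, which in practice lengthens the wait:

```python
import math

# Time for a xenon-135 transient to decay to 1% of its peak, given a
# half-life of about 9.1 hours. Simplified: ignores continued xenon
# production from iodine-135 decay, which lengthens the real wait.
XE135_HALF_LIFE_H = 9.1

def decay_time_hours(remaining_fraction: float) -> float:
    """Hours for an exponentially decaying quantity to reach the fraction."""
    return XE135_HALF_LIFE_H * math.log2(1 / remaining_fraction)

days = decay_time_hours(0.01) / 24
print(round(days, 1))  # about 2.5 days, consistent with "several days"
```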
Note that conventional reactors can be designed to load follow, but typically haven’t been for economic reasons (because more profit can be made by running conventional reactors at full power for base load applications).
Abundant energy for millions of years
Although it is sometimes claimed that nuclear power is not sustainable, the truth is that there is enough nuclear fuel on earth to provide humanity with abundant energy for millions of years. MSRs can run on uranium and existing stockpiles of plutonium and nuclear waste. A variant of an MSR, the liquid fluoride thorium reactor (LFTR), will be able to use abundant thorium as a fuel. In addition, breeder reactors (which include some types of MSRs) make it possible to use uranium-238 as fuel, which makes up 99.3% of all natural uranium. Conventional reactors use only uranium-235, which makes up a mere 0.7% of natural uranium.
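The implication of those isotopic abundances can be made concrete; the percentages used here are the standard values for natural uranium (about 0.7% fissile uranium-235, with nearly all of the rest fertile uranium-238):

```python
# How much more of mined uranium becomes usable if breeders burn U-238.
# Standard isotopic abundances of natural uranium.
u235_fraction = 0.007  # fissile U-235, ~0.7% of natural uranium
u238_fraction = 0.993  # fertile U-238, nearly all the rest

utilization_gain = (u235_fraction + u238_fraction) / u235_fraction
print(round(utilization_gain))  # ~143x more of the ore is usable as fuel
```

That roughly hundredfold gain in fuel utilization is what underwrites the "millions of years" claim.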
Replaces fossil fuels where wind and solar are problematic
MSR technology has potential far beyond generating electricity cheaply and without emitting CO2. For example, MSRs could be used to replace fossil fuels in high-heat industrial processes such as water desalination and the production of cement and aluminum. (In the U.S., industrial processes account for a little over 5% of greenhouse gases.) MSRs can even provide high heat for cheap production of feedstock for synthetic, CO2-free liquid fuels.
MSRs could also be used to power large container ships, which currently run on diesel. The 15 largest of these ships produce as much sulfur-oxide pollution every day as all of the cars on the planet.
Weapons Proliferation Concerns
No nuclear reactor can be made proliferation-proof, but MSRs have some significant advantages in proliferation resistance. First, the waste from MSRs is not useful for nuclear weapons, since MSRs fission almost all actinides. Second, MSRs can use up existing stockpiles of nuclear waste from conventional reactors as well as existing stockpiles of plutonium, making these materials unavailable for use in nuclear weapons.
A Very Brief History of MSR Technology
MSRs were first developed in the U.S. in the 1950s for use in a nuclear-powered bomber aircraft (the idea being that the bomber could remain in the air indefinitely). Though a small experimental reactor ran successfully, the program was canceled when it became clear that in-air refueling of bombers was viable.
Under the supervision of Alvin Weinberg in the 1960s, Oak Ridge National Laboratory built an experimental MSR that ran successfully for four years. Weinberg realized early on that MSRs were the ideal type of reactor for civilian use because they cannot melt down. He was eventually fired by the Nixon administration for this advocacy.
In the 2000s, then-NASA engineer Kirk Sorensen, tasked with working out how to power a station on the moon, found that MSRs were the best solution. He also realized that MSRs are a great solution on earth. His tireless advocacy for MSRs has generated much interest.
The Intergovernmental Panel on Climate Change, the International Energy Agency, the United Nations, the Obama Administration and even over 70% of climate scientists agree that we must ramp up nuclear power if we are going to succeed in dealing with climate change. Because of its exceptional safety and low cost, perhaps MSR technology is a nuclear technology that almost everyone can embrace.
Corrections, additions and clarifications (January 25, 2015):
“cesium-137 and iodine-141” changed to: “iodine-131, cesium-134 and cesium-137”.
Added the following to section on load following: “Note that conventional reactors can be designed to load follow, but typically haven’t been for economic reasons (because more profit can be made by running conventional reactors at full power for base load applications).”
Added the following to section on using nuclear waste and plutonium as fuel: “Note: The reason that conventional reactors can’t use up all actinides in their fuel rods is a bit more complex than what is described above. The neutrons in conventional reactors only move fast enough to cause enriched uranium to fission. Fissioning most all of the actinides also requires much faster moving neutrons, which can be achieved in both MSRs and solid-fuel reactors, such as the GE Hitachi PRISM.“
Changed sentence on breeder reactors to: “In addition, breeder reactors (which include some types of MSRs) make it possible to use uranium-238 as fuel, which makes up 99.3% of all natural uranium. Conventional reactors use only uranium-235, which makes up a mere 0.7% of natural uranium.”
Added the following sentence to “Replaces fossil fuels where wind and solar are problematic”: “MSRs can even provide high heat for cheap production of feed-stock for synthetic, CO2-free liquid fuels.”
President Trump has dismissed the science behind climate change as “a hoax” while positioning himself as a champion of coal. The three largest American coal producers are taking a different tack.
Seeking to shore up their struggling industry, the coal producers are voicing greater concern about greenhouse gas emissions. Their goal is to frame a new image for coal as a contributor, not an obstacle, to a clean-energy future — an image intended to foster their legislative agenda.
Executives of the three companies — Cloud Peak Energy, Peabody Energy and Arch Coal — are going so far as to make common cause with some of their harshest critics, including the Natural Resources Defense Council and the Clean Air Task Force. Together, they are lobbying for a tax bill to expand government subsidies to reduce the environmental impact of coal burning.
The technology they are promoting is carbon capture and sequestration — an expensive and, up to now, unwieldy method of trapping carbon dioxide emitted from coal-fired power plants before the gas can blanket the atmosphere and warm the planet.
“We can’t turn back time,” said Richard Reavey, vice president for government and public affairs at Cloud Peak Energy. “We have to accept that there are reasonable concerns about carbon dioxide and climate, and something has to be done about it. It’s a political reality, it’s a social reality, and it has to be dealt with.”
The coal executives say the steady gains of renewable energy — along with robust environmental regulations in recent years, many of which they still oppose — are not sufficient to stabilize the climate and still meet energy needs in the years to come. They reason that coal and other fossil fuels will still dominate the fuel mix for the next several decades, and that only capturing carbon from coal-fired and gas-fired power plants can meaningfully shift the world to a low-carbon future. Their argument is backed, at least in part, by many world energy experts and environmentalists.
A similar, at least partial metamorphosis has taken place in the oil and gas and utility industries in recent years with mixed results, although there has been progress in expanding the deployment of renewables like wind and solar for power and in the capture of methane in oil fields to stem a powerful greenhouse gas. The coal executives argue that given the same incentives and subsidies as renewables, carbon capture and sequestration can also take off.
Support among coal executives for capturing carbon at power plants is not entirely new, but their increasingly vocal acknowledgment of climate science in support of the technology is a far cry from many of the views expressed in recent years.
“We need a low-carbon fossil solution,” said Deck S. Slone, senior vice president for strategy and public policy at Arch Coal. “The political landscape is always shifting and carbon concerns are certainly not going away. We think there is a solution out there in the form of technology that is an answer to the climate challenge and that quite frankly will be good for our business long term.”
Coal executives remain strongly opposed to the Obama administration’s blueprint for reducing dependence on coal for power, known as the Clean Power Plan, which is being contested in the courts. But they say that any rollback of Obama regulatory policies by the new administration may not be enough to keep utilities from switching from coal to low-cost natural gas and renewables, and that only assurances of government support for carbon capture and sequestration can give utilities certainty that coal has a long-term future and encourage them to retrofit old power plants to be cleaner burning.
Last year, total United States coal production was 18 percent lower than in 2015 and was the lowest level since 1978. Many companies were forced into bankruptcy. With gas prices rising in recent months, coal made a modest rebound at the end of last year, especially in the Powder River Basin of Montana and Wyoming, where the production economics are generally best.
Vic Svec, a Peabody senior vice president, said that his company was looking to make “a fresh start” as it comes out of bankruptcy, and that part of that fresh start was recognizing that fossil fuels “contribute to greenhouse gas emissions and concern regarding these emissions has become part of the global, societal regulatory landscape.” He added, “There is a market for low-carbon energy sources, and we want to be part of that future.”
Environmentalists say they believe that the coal industry, having dealt with a sharp downturn in recent years and facing an aggressive investor divestment movement, may be shifting its views on climate change more for its own business interests than any newfound love for the environment.
“To the extent that they are saying things that seem much more rational than in the past,” said David Hawkins, director of the climate program at the Natural Resources Defense Council, “they are trying to persuade skeptical investors that coal has a future.” Nevertheless, he added that his group was willing to work with the companies, even while it was suing them in court on other issues, “if they are willing to join in properly crafted legislation.”
The carbon legislation, introduced last year, would increase the federal tax credit for capture and permanent sequestration to $50 per ton of carbon dioxide, from $20. It would also expand by more than a third the credits available for carbon dioxide injected into declining oil fields to coax out more production. That method, already popular in West Texas and supported by the oil and gas industry, gives utilities that deploy the technology an added revenue stream.
When introduced, the measure had broad support from senators as varied as Sheldon Whitehouse, a Rhode Island liberal who is active on climate issues, and Mitch McConnell, the Republican leader from Kentucky, who is one of the strongest backers of the coal industry in Congress. Proponents are preparing to reintroduce the legislation, and coal executives say they hope the Trump administration will get on board.
Senator Heidi Heitkamp, Democrat of North Dakota, who is a leading sponsor of the legislation and a former director of a coal gasification company, said she had seen a shift in the stance of coal executives. “I see people at the table who weren’t at the table before,” she said. “As long as they see that the issue of CO2 is not going to go away, they are going to roll up their sleeves and try to find a way that works for the utility industry and the coal industry.”
One obstacle to the bill could be cost. Supporters have asked the Joint Committee on Taxation to evaluate the legislation’s effect on the federal budget but have yet to hear back.
Opponents say it would merely extend the life of the coal industry.
“For 40 years, I’ve been told clean coal is right around the corner, just give us another few subsidies,” said Dan Becker, director of the Safe Climate Campaign, an environmental group. “Carbon capture and sequestration may work someday in the distant future, but right now it barely works on a technical level. It’s way far away from working on a cost-effectiveness level.”
There are only a handful of commercial-scale operations for carbon capture and sequestration globally. But coal executives say proper permitting and legal protections, along with the tax credits, could bring a surge in construction in the United States within a decade. And as the technology improves and implementation becomes less expensive, the United States could export the technology and make coal-fired power cleaner around the world.
But developing commercial-scale carbon capture has been bedeviled by cost overruns and long delays. The operations are not only expensive to build but also require a lot of power, making plants less efficient. The federal government canceled one such project, called FutureGen, after the Obama administration had granted it more than $1 billion.
Still, coal executives are staking much of their futures on the technology.
“We’re confident,” Mr. Svec of Peabody said, “that it needs to be a part of any serious effort toward reducing greenhouse gases from industrial sources.”
Correction: February 27, 2017 An earlier version of this article misstated the amount of a proposed increase in a federal tax credit for capture and sequestration of carbon dioxide, as well as the current amount. Legislation would raise the credit to $50 per ton (not $20), from the current $20 (not $10). The article also misstated a former role of Senator Heidi Heitkamp at a coal-gasification company. She was a director, not an executive.