nuclear-news

The News That Matters about the Nuclear Industry

Oh dear! Transatomic Power has been making false claims about Generation IV nuclear reactors

It’s interesting the way that dubious nuclear enterprises like to put a young woman at the top. Is this to make the nuclear image look young and trendy? Or is it so that she can cop the flak when it all goes wrong?

Below – Leslie Dewan – CEO of Transatomic Power

Nuclear Energy Startup Transatomic Backtracks on Key Promises. The company, backed by Peter Thiel’s Founders Fund, revised inflated assertions about its advanced reactor design after growing concerns prompted an MIT review. MIT Technology Review, by James Temple, February 24, 2017. Nuclear energy startup Transatomic Power has backed away from bold claims for its advanced reactor technology after an informal review by MIT professors highlighted serious errors in the company’s calculations, MIT Technology Review has learned.

The Cambridge, Massachusetts-based company, founded in 2011 by a pair of MIT students in the Nuclear Science & Engineering department, asserted that its molten salt reactor design could run on spent nuclear fuel from conventional reactors and generate energy far more efficiently than they do. In a white paper published in March 2014, the company proclaimed its reactor “can generate up to 75 times more electricity per ton of mined uranium than a light-water reactor.”

Those lofty claims helped it raise millions in venture capital, secure a series of glowing media profiles (including in this publication), and draw a rock-star lineup of technical advisors. But in a paper on its site dated November 2016, the company downgraded “75 times” to “more than twice.” In addition, it now specifies that the design “does not reduce existing stockpiles of spent nuclear fuel,” or use them as its fuel source. Recycling nuclear waste, which poses tricky storage and proliferation challenges, was a key initial promise of the company and captured considerable attention.

“In early 2016, we realized there was a problem with our initial analysis and started working to correct the error,” cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review.

The dramatic revisions followed an analysis in late 2015 by Kord Smith, a nuclear science and engineering professor at MIT and an expert in the physics of nuclear reactors.

At that point, there were growing doubts in the field about the company’s claims and at least some worries that any inflated claims could tarnish the reputation of MIT’s nuclear department, which has been closely associated with the company. Transatomic also has a three-year research agreement with the department, according to earlier press releases.

In reviewing the company’s white paper, Smith noticed immediate red flags. He relayed his concerns to his department head and the company, and subsequently conducted an informal review with two other professors.

“I said this is obviously incorrect based on basic physics,” Smith says. He asked the company to run a test, which ended up confirming that “their claims were completely untrue,” Smith says.

He notes that promising to increase the reactor’s fuel efficiency by 75 times is the rough equivalent of saying that, in a single step, you’d developed a car that could get 2,500 miles per gallon.
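
As a quick sanity check of that analogy, the arithmetic can be sketched in a few lines. This is only an illustration: the 33 miles-per-gallon baseline is our assumption for a typical car, not a figure from the MIT review.

```python
# Back-of-envelope check of Smith's car analogy.
# Assumption: a typical car gets about 33 miles per gallon.
baseline_mpg = 33
claimed_improvement = 75   # Transatomic's original "75 times" claim

equivalent_mpg = baseline_mpg * claimed_improvement
print(equivalent_mpg)      # -> 2475, roughly the "2,500 mpg" car Smith describes
```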

Ultimately, the company redid its analysis, and produced and posted a new white paper………

The company has raised at least $4.5 million from Peter Thiel’s Founders Fund, Acadia Woods Partners, and Daniel Aegerter of Armada Investment AG. Venture capital veteran Ray Rothrock serves as chairman of the company.

Founders Fund didn’t immediately respond to an inquiry…… https://www.technologyreview.com/s/603731/nuclear-energy-startup-transatomic-backtracks-on-key-promises/

February 25, 2017 Posted by | Reference, spinbuster, technology, USA | Leave a comment

Air pollution might have masked mid-20th Century sea ice loss

Air pollution may have masked mid-20th Century sea ice loss https://www.sciencedaily.com/releases/2017/02/170223124327.htm February 23, 2017

Humans may have been altering Arctic sea ice longer than previously thought, according to researchers studying the effects of air pollution on sea ice growth in the mid-20th Century. The new results challenge the perception that Arctic sea ice extent was unperturbed by human-caused climate change until the 1970s.

Scientists have observed Arctic sea ice loss since the mid-1970s and some climate model simulations have shown the region was losing sea ice as far back as 1950. In a new study, recently recovered Russian observations show an increase in sea ice from 1950 to 1975 as large as the subsequent decrease in sea ice observed from 1975 to 2005. The new observations of mid-century sea ice expansion led the researchers behind the new study to search for the cause.

The new study supports the idea that air pollution is to blame for the observed Arctic sea ice expansion. Particles of air pollution that come primarily from the burning of fossil fuels may have temporarily hidden the effects of global warming in the third quarter of the 20th Century in the eastern Arctic, the researchers say.

These particles, called sulfate aerosols, reflect sunlight back into space and cool the surface. This cooling effect may have disguised the influence of global warming on Arctic sea ice and may have resulted in sea ice growth recorded by Russian aerial surveys in the region from 1950 through 1975, according to the new research.

“The cooling impact from increasing aerosols more than masked the warming impact from increasing greenhouse gases,” said John Fyfe, a senior scientist at Environment and Climate Change Canada in Victoria and a co-author of the new study accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.
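
The logic of “more than masked” is a simple competition between a positive and a negative forcing. The sketch below is purely illustrative: both numbers are invented placeholders chosen to show the sign arithmetic, since the study itself relied on full climate-model simulations rather than anything this simple.

```python
# Toy illustration of aerosol cooling outweighing greenhouse warming.
# Both forcing values are assumed placeholders, not figures from the study.
ghg_forcing = 0.5        # assumed warming influence of greenhouse gases, W/m^2
sulfate_forcing = -1.0   # assumed cooling from sunlight-reflecting sulfates, W/m^2

net_forcing = ghg_forcing + sulfate_forcing
print(net_forcing)       # -> -0.5 W/m^2: net cooling, consistent with the
                         #    sea ice growth recorded from 1950 to 1975
```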

To test the aerosol idea, researchers used computer modeling to simulate sulfate aerosols in the Arctic from 1950 through 1975. Concentrations of sulfate aerosols were especially high during these years before regulations like the Clean Air Act limited sulfur dioxide emissions that produce sulfate aerosols.

The study’s authors then matched the sulfate aerosol simulations to Russian observational data that suggested a substantial amount of sea ice growth during those years in the eastern Arctic. The resulting simulations show the cooling contribution of aerosols offset the ongoing warming effect of increasing greenhouse gases over the mid-twentieth century in that part of the Arctic. This would explain the expansion of the Arctic sea ice cover in those years, according to the new study.

Aerosols spend only days or weeks in the atmosphere so their effects are short-lived. The weak aerosol cooling effect diminished after 1980, following the enactment of clean air regulations. In the absence of this cooling effect, the warming effect of long-lived greenhouse gases like carbon dioxide has prevailed, leading to Arctic sea ice loss, according to the study’s authors.

The new study helps sort out the swings in Arctic sea ice cover that have been observed over the last 75 years, which is important for a better understanding of sea ice behavior and for predicting its behavior in the future, according to Fyfe.

The new study’s use of both observations and modeling is a good way to attribute the Arctic sea ice growth to sulfate aerosols, said Cecilia Bitz, a sea ice researcher at the University of Washington in Seattle who has also looked into the effects of aerosols on Arctic ice. The sea ice record prior to satellite images is “very sparse,” added Bitz, who was not involved in the new study.

Bitz also points out that some aerosols may have encouraged sea ice to retreat. Black carbon, for instance, is a pollutant from forest fires and other wood and fossil fuel burning that can darken ice and cause it to melt faster when the sun is up — the opposite effect of sulfates. Also, black carbon emissions in some parts of the Arctic are still quite common, she said.


Story Source:

Materials provided by American Geophysical Union.

February 25, 2017 Posted by | climate change, oceans, Reference, ARCTIC | Leave a comment

The link between nuclear power stations and cancer rates

A link between cancer rates and nuclear plants? http://www.pottsmerc.com/article/MP/20170221/NEWS/170229937 Joseph Mangano, Executive Director, Radiation and Public Health Project, 02/21/17. Since the two nuclear reactors at Limerick began operating in the 1980s, the question of whether toxic radiation releases affected local cancer rates has persisted.

February 24, 2017 Posted by | health, Reference, USA | Leave a comment

The Fukushima Daiichi nuclear power complex is a continuing, permanent catastrophe

HELEN CALDICOTT: The Fukushima nuclear meltdown continues unabated https://independentaustralia.net/politics/politics-display/helen-caldicott-the-fukushima-nuclear-meltdown-continues-unabated,10019 3 February 2017. Dr Helen Caldicott explains the recent robot photos taken of Fukushima’s Daiichi nuclear reactors: radiation levels have not peaked, and toxic waste has continued to spill into the Pacific Ocean — but only now has the damage been photographed.

RECENT reporting of a huge radiation measurement at Unit 2 in the Fukushima Daiichi reactor complex does not signify that there is a peak in radiation in the reactor building.

All that it indicates is that, for the first time, the Japanese have been able to measure the intense radiation given off by the molten fuel, as each previous attempt has led to failure because the radiation is so intense the robotic parts were functionally destroyed.

The radiation measurement was 530 sieverts, or 53,000 rems (roentgen equivalent man). The dose at which half an exposed population would die is 250 to 500 rems, so this is a massive measurement. It is quite likely that, had the robot been able to penetrate deeper into the inner cavern containing the molten corium, the measurement would have been much greater.
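
The unit conversion and the scale comparison in that paragraph are easy to verify. A minimal check, using the standard conversion factor (1 sievert = 100 rem) and the article’s own 250-500 rem half-lethal dose range:

```python
# Verify the sievert-to-rem conversion and the scale of the reading,
# using only the standard factor and the figures quoted in the article.
measured_sv = 530
measured_rem = measured_sv * 100      # 1 Sv = 100 rem
print(measured_rem)                   # -> 53000 rem, as stated above

ld50_low, ld50_high = 250, 500        # article's half-lethal dose range, rem
print(measured_rem / ld50_high)       # -> 106.0
print(measured_rem / ld50_low)        # -> 212.0: one to two hundred times
                                      #    a half-lethal dose
```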

These facts illustrate why it will be almost impossible to “decommission” units 1, 2 and 3, as no human could ever be exposed to such extreme radiation. This means that Fukushima Daiichi will remain a diabolical blot upon Japan and the world for the rest of time, sitting as it does on active earthquake zones.

What the photos taken by the robot did reveal was that some of the structural supports of Unit 2 have been damaged. It is also true that all four buildings were structurally damaged by the original earthquake some five years ago and by the subsequent hydrogen explosions so, should there be an earthquake greater than seven on the Richter scale, it is very possible that one or more of these structures could collapse, leading to a massive release of radiation as the building fell on the molten core beneath. But units 1, 2 and 3 also contain cooling pools with very radioactive fuel rods — numbering 392 in Unit 1, 615 in Unit 2, and 566 in Unit 3; if an earthquake were to breach a pool, the gamma rays would be so intense that the site would have to be permanently evacuated. The fuel from Unit 4 and its cooling pool has been removed.

But there is more to fear.

The reactor complex was built adjacent to a mountain range and millions of gallons of water emanate from the mountains daily beneath the reactor complex, causing some of the earth below the reactor buildings to partially liquefy. As the water flows beneath the damaged reactors, it immerses the three molten cores and becomes extremely radioactive as it continues its journey into the adjacent Pacific Ocean.

Every day since the accident began, 300 to 400 tons of water have poured into the Pacific, where numerous isotopes – including cesium-137, cesium-134, strontium-90, tritium, plutonium, americium and up to 100 more – enter the ocean and bio-concentrate by orders of magnitude at each step of the food chain — algae, crustaceans, little fish, big fish, then us.
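
To see what “orders of magnitude at each step” implies, here is a toy calculation. The tenfold factor per trophic level is an assumption chosen for illustration, not a measured coefficient:

```python
# Toy bio-concentration sketch: an assumed tenfold increase at each
# step of the food chain named in the article.
food_chain = ["algae", "crustaceans", "little fish", "big fish"]
factor_per_step = 10

concentration = 1.0   # relative concentration in seawater
for organism in food_chain:
    concentration *= factor_per_step
    print(organism, concentration)
# In this sketch, big fish carry ~10,000x the seawater concentration.
```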

Fish swim thousands of miles, and tuna, salmon and other species found on the American west coast now contain some of these radioactive elements, which are tasteless, odourless and invisible. Entering the human body by ingestion, they concentrate in various organs, irradiating adjacent cells for many years. The cancer cycle is initiated by a single mutation in a single regulatory gene in a single cell, and the incubation time for cancer is anywhere from 2 to 90 years. And no cancer identifies its origin.

We could be catching radioactive fish in Australia or the fish that are imported could contain radioactive isotopes, but unless they are consistently tested we will never know.

As well as the mountain water reaching the Pacific Ocean, since the accident TEPCO has daily pumped over 300 tons of sea water into the damaged reactors to keep them cool. It becomes intensely radioactive and is pumped out again and stored in over 1,200 huge storage tanks scattered over the Daiichi site. These tanks could not withstand a large earthquake and could rupture, releasing their contents into the ocean.

But even if that does not happen, TEPCO is rapidly running out of storage space and is trying to convince the local fishermen that it would be okay to empty the tanks into the sea. The bremsstrahlung radiation (similar to X-rays) given off by these tanks is quite high – measuring 10 millirems – presenting a danger to the workers. There are over 4,000 workers on site each day, many recruited by the Yakuza (the Japanese mafia), including men who are homeless, drug addicts and the mentally unstable.

There’s another problem. Because the molten cores are continuously generating hydrogen, which is explosive, TEPCO has been pumping nitrogen into the reactors to dilute the hydrogen dangers.

Vast areas of Japan are now contaminated, including some areas of Tokyo, which are so radioactive that roadside soil measuring 7,000 becquerels (Bq) per kilo would qualify to be buried in a radioactive waste facility in the U.S.

As previously explained, these radioactive elements concentrate in the food chain. The Fukushima Prefecture has always been a food bowl for Japan and, although much of the rice, vegetables and fruit now grown here is radioactive, there is a big push to sell this food both in the Japanese market and overseas. Taiwan has banned the sale of Japanese food, but Australia and the U.S. have not.

Prime Minister Abe recently passed a law under which any reporter who told the truth about the situation could be gaoled for ten years. In addition, doctors who tell their patients their disease could be radiation-related will not be paid, so there is an immense cover-up in Japan, as well as in the global media.

The Prefectural Oversight Committee for Fukushima Health is only looking at thyroid cancer among the population, and by June 2016, 172 people who were under the age of 18 at the time of the accident had developed, or were suspected of having, thyroid cancer; the normal incidence in this population is 1 to 2 per million.

However, other cancers and leukemia that are caused by radiation are not being routinely documented, nor are congenital malformations, which were, and are, still rife among the exposed Chernobyl population.

Bottom line, these reactors will never be cleaned up nor decommissioned because such a task is not humanly possible. Hence, they will continue to pour water into the Pacific for the rest of time and threaten Japan and the northern hemisphere with massive releases of radiation should there be another large earthquake.

February 22, 2017 Posted by | Fukushima continuing, Reference | Leave a comment

Radioactive contamination persists in former uranium mining site: the reason why

This cycling in the aquifer may result in the persistent plumes of uranium contamination found in groundwater, something that wasn’t captured by earlier modeling efforts.

Study helps explain why uranium persists in groundwater at former mining sites      https://www.sciencedaily.com/releases/2017/02/170202163234.htm

February 2, 2017

Source:
SLAC National Accelerator Laboratory
Summary:
A recent study helps describe how uranium cycles through the environment at former uranium mining sites and why it can be difficult to remove.

Decades after a uranium mine is shuttered, the radioactive element can still persist in groundwater at the site, despite cleanup efforts.

A recent study led by scientists at the Department of Energy’s SLAC National Accelerator Laboratory helps describe how the contaminant cycles through the environment at former uranium mining sites and why it can be difficult to remove. Contrary to assumptions that have been used for modeling uranium behavior, researchers found the contaminant binds to organic matter in sediments. The findings provide more accurate information for monitoring and remediation at the sites.

The results were published in the Proceedings of the National Academy of Sciences.

In 2014, researchers at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) began collaborating with the DOE Office of Legacy Management, which handles contaminated sites associated with the legacy of DOE’s nuclear energy and weapons production activities. Through projects associated with the Uranium Mill Tailings Radiation Control Act, the DOE remediated 22 sites in Colorado, Wyoming and New Mexico where uranium had been extracted and processed during the 1940s to 1970s.

Uranium was removed from the sites as part of the cleanup process, and the former mines and waste piles were capped more than two decades ago. Remaining uranium deep in the subsurface under the capped waste piles was expected to leave these sites due to natural groundwater flow. However, uranium has persisted at elevated levels in nearby groundwater much longer than predicted by scientific modeling………

“For the most part, uranium contamination has only been looked at in very simple model systems in laboratories,” Bone says. “One big advancement is that we are now looking at uranium in its native environmental form in sediments. These dynamics are complicated, and this research will allow us to make field-relevant modeling predictions.”

In an earlier study, the SLAC team discovered that uranium accumulates in the low-oxygen sediments near one of the waste sites in the upper Colorado River basin. These deposits contain high levels of organic matter — such as plant debris and bacterial communities.

During this latest study, the researchers found the dominant form of uranium in the sediments, known as tetravalent uranium, binds to organic matter and clays in the sediments. This makes it more likely to persist at the sites. The result conflicted with current models used to predict movement and longevity of uranium in sediments, which assumed that it formed an insoluble mineral called uraninite.

Different chemical forms of the element vary widely in how mobile they are — how readily they move around — in water, says Sharon Bone, lead author on the paper and a postdoctoral researcher at SSRL, a DOE Office of Science User Facility.

Since the uranium is bound to organic matter in sediments, it is immobile under certain conditions. Tetravalent uranium may become mobile when the water table drops and oxygen from the air enters spaces in the sediment that were formerly filled with water, particularly if the uranium is bound to organic matter in sediments rather than being stored in insoluble minerals.

“Either you want the uranium to be soluble and completely flushed out by the groundwater, or you just want the uranium to remain in the sediments and stay out of the groundwater,” Bone says. “But under fluctuating seasonal conditions, neither happens completely.”

This cycling in the aquifer may result in the persistent plumes of uranium contamination found in groundwater, something that wasn’t captured by earlier modeling efforts.

February 6, 2017 Posted by | environment, Reference, USA, water | Leave a comment

Global warming might accelerate due to “El Tío” (the uncle), the warm phase of the Interdecadal Pacific Oscillation

Meet El Niño’s cranky uncle that could send global warming into hyperdrive, The Conversation, by climate researchers at the University of Melbourne, the Met Office Hadley Centre and CSIRO, February 6, 2017

You’ve probably heard about El Niño, the climate system that brings dry and often hotter weather to Australia over summer.

You might also know that climate change is likely to intensify drought conditions, which is one of the reasons climate scientists keep talking about the desperate need to reduce greenhouse gas emissions, and the damaging consequences if we don’t.

El Niño is driven by changes in the Pacific Ocean, and shifts around with its opposite, La Niña, every 2-7 years, in a cycle known as the El Niño Southern Oscillation or ENSO.

But that’s only part of the story. There’s another important piece of nature’s puzzle in the Pacific Ocean that isn’t often discussed.

It’s called the Interdecadal Pacific Oscillation, or IPO, a name coined by a study which examined how Australia’s rainfall, temperature, river flow and crop yields changed over decades.

Since El Niño means “the boy” in Spanish, and La Niña “the girl”, we could call the warm phase of the IPO “El Tío” (the uncle) and the negative phase “La Tía” (the auntie).

These erratic relatives are hard to predict. El Tío and La Tía phases have been compared to a stumbling drunk. And honestly, can anyone predict what a drunk uncle will say at a family gathering?

What is El Tío?

Like ENSO, the IPO is related to the movement of warm water around the Pacific Ocean. Begrudgingly, it shifts its enormous backside around the great Pacific bathtub every 10-30 years, much longer than the 2-7 years of ENSO.

The IPO’s pattern is similar to ENSO, which has led climate scientists to think that the two are strongly linked. But the IPO operates on much longer timescales.

We don’t yet have conclusive knowledge of whether the IPO is a specific climate mechanism, and there is a strong school of thought which proposes that it is a combination of several different mechanisms in the ocean and the atmosphere.

Despite these mysteries, we know that the IPO had an influence on the global warming “hiatus” – the apparent slowdown in global temperature increases over the early 2000s……….

Since about the year 2000, some of the excess heat trapped by greenhouse gases has been getting buried in the deep Pacific Ocean, leading to a slowdown in global warming over about the last 15 years. It appears as though we have a kind auntie, La Tía perhaps, who has been cushioning the blow of global warming. For the time being, anyway.

The flip side of our kind auntie is our bad-tempered uncle, El Tío. He is partly responsible for periods of accelerated warming, like the period from the late 1970s to the late 1990s.

The IPO has been in its “kind auntie” phase for well over a decade now. But the IPO could be about to flip over to El Tío. If that happens, it is not good news for global temperatures – they will accelerate upwards……….

More work needs to be done to predict the next shift in the IPO and its effect on climate change. This is the topic of a new set of experiments that will be part of the next round of climate model comparisons.

With further model development and new observations of the deep ocean available since 2005, scientists will be able to more easily answer some of these important questions.

Whatever the case, cranky old El Tío is waiting just around the corner. His big stick is poised, ready to give us a massive hiding: a swift rise in global temperatures over the coming decades. https://theconversation.com/meet-el-ninos-cranky-uncle-that-could-send-global-warming-into-hyperdrive-72360

February 6, 2017 Posted by | 2 WORLD, climate change, Reference | Leave a comment

A faulty concept of “acceptable risk” – by America’s Nuclear Regulatory Commission

the nuclear manufacturers — Westinghouse and General Electric — refuse to participate in any project unless they are guaranteed to be free of any liability for any offsite accident consequences. If they believed the NRC risk calculations, they would have no difficulty in accepting the litigation risk — but they obviously don’t. In short, the organizations most highly knowledgeable about nuclear safety don’t trust the NRC’s probabilistic calculations………
A definition of risk that placed greater emphasis on avoiding large-consequence events would be more in line with the common sense of the public whom the NRC is supposed to be protecting. If nuclear power is to have any long-term future, it will have to go beyond even that level of protection….Just as the nuclear manufacturers don’t want to bet their companies on calculations of nuclear safety, neither do people at large want to bet their cities and countrysides.

When 10,000 square miles of contamination is an acceptable risk: The NRC’s faulty concept, Bulletin of the Atomic Scientists, 9 January 2017, Victor Gilinsky. In making safety decisions, the Nuclear Regulatory Commission uses accident probability calculations that are much more optimistic than anything that nuclear manufacturers like General Electric and Westinghouse actually believe. The result is weak public protection. A good example is the NRC commissioners’ rejection in 2014 of a proposal to limit the possible severe consequences of spent fuel pool fires in nuclear power plants because the proposal’s cost, however modest, exceeded the value of the expected reduction in “risk.”

Spent fuel pools are where highly radioactive (and thus thermally hot) used reactor fuel is stored after it is removed from the reactor core. If a pool loses its water supply, the spent fuel can overheat and eventually burn, releasing large quantities of radioactivity.  The spent fuel pool issue gained prominence after the 2011 Fukushima accident. For a time during the accident the dominant concern was that spent fuel in Fukushima’s damaged Unit 4 pool might catch fire. It didn’t happen, but it could have multiplied the effects of the catastrophic Fukushima accident manyfold. The NRC staff told the commissioners in 2014 that a worst-case spent fuel pool fire in a US plant like those at Fukushima—of which there are nearly three dozen—could release 25 times more long-lasting radioactivity than escaped from the Fukushima reactor vessels, and perhaps even more. Such a release could render 10,000 square miles uninhabitable and (around the Pennsylvania nuclear plant the staff chose as an example) could require the evacuation of 4 million persons.

The specific proposal before the commissioners was to limit the amount of radioactive spent fuel in a pool and thus to reduce the consequences of a fire by a factor of ten. This would be accomplished by speeding up the transfer of radioactive spent (used) fuel from the pool into “dry cask” storage. The plant owners have to do this eventually, but earlier transfers increase the cost. The commissioners saw their role as deciding whether the safety benefit—the reduction in risk—warranted this cost increase.

In fact, they weren’t deciding anything. The commissioners lent an air of official seriousness to the proceeding, but the decision making was on autopilot. It involved calculating the average risk (R) of an accident by multiplying two numbers, the accident’s probability (P) and its consequence (C). If P is sufficiently small, the average risk (or P times C) will be negligible no matter how large the consequence. And, therefore, the possible reduction in risk will hardly be worth any expenditure. That is how it worked in the 2014 case of a possible spent fuel fire, and that is how it has worked in most cases involving protection against severe accidents.

Actually, most cases don’t get this far. The commission has a threshold for the staff to investigate a safety issue posed by a hypothetical accident. If the estimated probability of “prompt” deaths offsite is below 2 in 1 million per year, the NRC staff need not investigate further. This involves a kind of Catch-22. The NRC assumes effective evacuation of the surrounding area in the event of an accident, so there aren’t people to be irradiated, and even substantial accidents don’t exceed the commission’s threshold……..

Consider the implications of NRC’s risk definition for the risk of long-term land contamination: The NRC staff’s projection of about 10,000 square miles, when multiplied by the staff-estimated accident probability, becomes an annual risk of about one-thousandth of a square mile, or less than an acre per year. Since valuable farmland runs at several thousand dollars per acre, the NRC conclusion is that any safety improvement that costs more than that isn’t worthwhile in terms of saving land. Similarly, the risk of displacing persons becomes about half a person displaced per year, perhaps at a cost of tens of thousands of dollars, and so, again, per NRC logic, it is not worth spending more than that to avoid long-term evacuations to protect against severe spent fuel pool fires. This isn’t the conclusion most people would arrive at for themselves or their home towns.
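
That arithmetic can be reconstructed directly from the formula described above. In the sketch below, the one-in-ten-million annual accident probability is our assumption, chosen only because it reproduces the staff’s “less than an acre per year” figure; the NRC’s actual estimate is not quoted in the article:

```python
# Reconstructing the expected-value (R = P * C) arithmetic criticized here.
# Assumption: an annual accident probability of 1 in 10 million, picked to
# reproduce the "less than an acre per year" result.
consequence_sq_miles = 10_000     # staff's projected contaminated area
probability_per_year = 1e-7       # assumed annual accident probability

risk_sq_miles = probability_per_year * consequence_sq_miles   # R = P * C
acres_per_year = risk_sq_miles * 640    # 640 acres per square mile
print(risk_sq_miles)    # -> 0.001 square miles per year
print(acres_per_year)   # -> 0.64: "less than an acre per year"

# At a few thousand dollars per acre of farmland, this logic prices the
# benefit of extra protection at only thousands of dollars per year.
```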

There are several things wrong with the NRC’s cost-benefit approach to nuclear safety. To begin with, neither factor in the risk formula—probability times consequence—can be calculated with any accuracy. For example, the consequences of an accident requiring the long-term, possibly permanent, evacuation of 4 million will surely not be limited to the expense of such an evacuation. It would, for example, almost certainly spell the end of nuclear power use in the United States and likely in many countries, with huge economic consequences. …….

Nor is the situation much better when it comes to estimating the accident probability. As there is little data on large accidents, the accident probability is a calculated number. The NRC staff relies increasingly on elaborate calculations that model the various failure modes of a nuclear plant. For outsiders, or for that matter the NRC commissioners themselves, the result essentially comes out of a black box. …..

Which brings us to a deep flaw in NRC’s safety methodology—its reliance on the average risk as the figure of merit. It is by no means the only possible measure of risk. We know that in many statistical situations the average is not the best choice to characterize the data.  It works where there are well-established data on both probabilities and consequences as, for example, in considering measures to reduce auto accidents. It doesn’t make sense for high consequence/low probability events, for one thing, because the numbers are so poorly known. Also, using average risk doesn’t reflect what most people—the people the NRC is supposed to be protecting—want to achieve. They don’t want to risk losing a city, no matter what the calculated probabilities. That is how the nuclear manufacturers—Westinghouse and General Electric—see it, too. They refuse to participate in any project unless they are guaranteed to be free of any liability for any offsite accident consequences. If they believed the NRC risk calculations, they would have no difficulty in accepting the litigation risk—but they obviously don’t. In short, the organizations most highly knowledgeable about nuclear safety don’t trust the NRC’s probabilistic calculations………

Any change in the NRC’s approach to nuclear risk must come from the outside; the agency has too much invested in the current approach for internal reform to have a chance. When a witness at the 2014 Commission meeting on spent fuel pool fires, Clark University professor Gordon Thompson, questioned using the average risk as the figure of merit, only one commissioner took notice and that was to ridicule the notion. The commissioners should have paid more attention.

A definition of risk that placed greater emphasis on avoiding large-consequence events would be more in line with the common sense of the public whom the NRC is supposed to be protecting. If nuclear power is to have any long-term future, it will have to go beyond even that level of protection. A 2012 report of the American Society of Mechanical Engineers, a group heavily involved with the nuclear industry, called for a major step-up in nuclear safety and warned that severe accident impacts on people’s lives were “wholly inconsistent with an economically viable and socially acceptable use of nuclear energy.” Just as the nuclear manufacturers don’t want to bet their companies on calculations of nuclear safety, neither do people at large want to bet their cities and countrysides. http://thebulletin.org/when-10000-square-miles-contamination-acceptable-risk-nrc%E2%80%99s-faulty-concept10459

February 3, 2017 Posted by | Reference, safety, USA | Leave a comment

Thorium nuclear power was a commercial failure – nothing to do with nuclear weapons, as pro-nukers pretend

Thorium Reactors: Fact and Fiction, Skeptoid. These next-generation reactors have attracted a nearly cultish following. Is it justified? By Brian Dunning, Skeptoid Podcast #555, January 24, 2017

Podcast transcript: “………True or False? Thorium reactors were never commercially developed because they can’t produce bomb material.

This is mostly false, although it’s become one of the most common myths about thorium reactors. There are other very good reasons why uranium-fueled reactors were developed commercially instead of thorium-fueled reactors. If something smells like a conspiracy theory, you’re always wise to take a second, closer look.

When we make weapons-grade Pu-239 for nuclear weapons, we use special production reactors designed to burn natural uranium, and run them for only about three months, to avoid contaminating the product with Pu-240. Only a very few reactors were ever built that can both do that and generate electricity. The rest of the world’s electricity-generating reactors could have been of any design that was wanted. So why weren’t thorium reactors designed instead? We did have some test thorium-fueled reactors built and running in the 1960s. The real reason has more to do with the additional complexity, design challenges, and expense of molten salt breeder reactors (MSBRs).

In 1972, the US Atomic Energy Commission published a report on the state of MSBR reactors. Here’s a snippet of what was found:

A number of factors can be identified which tend to limit further industrial involvement at this time, namely:

  • The existing major industrial and utility commitments to the LWR, HTGR, and LMFBR.
  • The lack of incentive for industrial investment in supplying fuel cycle services, such as those required for solid fuel reactors.
  • The overwhelming manufacturing and operating experience with solid fuel reactors in contrast with the very limited involvement with fluid fueled reactors.
  • The less advanced state of MSBR technology and the lack of demonstrated solutions to the major technical problems associated with the MSBR concept.

In short, the technology was just too complicated, and it never became mature enough.

It is, however, mostly true that, if we’re going to use a commercial reactor to get plutonium for a bomb, recycling spent fuel from a uranium reactor is easier, and you can get proper weapons-grade plutonium this way. It is possible to get reactor-grade plutonium from a thorium reactor that can be made into a bomb — one was successfully tested in 1962 — but it’s a much lower yield bomb and it’s much harder to get the plutonium.

The short answer is that reduced weapons proliferation is not the strongest argument for switching from uranium fuel to thorium fuel for power generation. Neither reactor type is what’s typically designed and used for bomb production. Those already exist, and will continue to provide all the plutonium that governments are ever likely to need for that purpose.

There’s every reason to take fossil fuels completely out of our system; we have such absurdly better options. If you’re like me and want to see this approach be a multi-pronged one, one that major energy companies, smaller community providers, and individual homeowners can all embrace, then advocate for nukes. You don’t need to specify thorium or liquid fuel or breeders; they’re already the wave of the future — a future which, I hope, will be clean, bright, and bountiful.  https://skeptoid.com/episodes/4555

February 1, 2017 Posted by | 2 WORLD, Reference, thorium | 1 Comment

Joint Statements on Climate Change from National Academies of Science Around the World

climate-change
Skeptical Science 27 January 2017  This is a re-post from Significant Figures by Peter Gleick

National academies of sciences from around the world have published formal statements and declarations acknowledging the state of climate science, the fact that climate is changing, the compelling evidence that humans are responsible, and the need to debate and implement strategies to reduce emissions of greenhouse gases. Not a single national science academy disputes or denies the scientific consensus around human-caused climate change. A few examples of joint academy statements since 2000 on climate are listed here. Many national academies have, in addition, published their own reports and studies on climate issues. These are not included here.

The Science of Climate Change (Statement of 17 National Science Academies, 2001)

http://science.sciencemag.org/content/292/5520/1261

Following the release of the third in the ongoing series of international reviews of climate science conducted by the Intergovernmental Panel on Climate Change (IPCC), seventeen national science academies issued a joint statement, entitled “The Science of Climate Change,” acknowledging the IPCC study to be the scientific consensus on climate change science.

The seventeen signatories were:

  • Australian Academy of Sciences
  • Royal Flemish Academy of Belgium for Sciences and the Arts
  • Brazilian Academy of Sciences
  • Royal Society of Canada
  • Caribbean Academy of Sciences
  • Chinese Academy of Sciences
  • French Academy of Sciences
  • German Academy of Sciences, Leopoldina
  • Indian National Science Academy
  • Indonesian Academy of Sciences
  • Royal Irish Academy
  • Accademia Nazionale dei Lincei (Italy)
  • Academy of Sciences Malaysia
  • Academy Council of the Royal Society of New Zealand
  • Royal Swedish Academy of Sciences
  • Turkish Academy of Sciences
  • Royal Society (UK)

Joint science academies’ statement: Global response to climate change (Statement of 11 National Science Academies, 2005)

http://nationalacademies.org/onpi/06072005.pdf

Eleven national science academies, including all the largest emitters of greenhouse gases, signed a statement that the scientific understanding of climate change was sufficiently strong to justify prompt action. The statement explicitly endorsed the IPCC consensus and stated:

“…there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001). This warming has already led to changes in the Earth’s climate.”………

Joint science academies’ statement on Growth and responsibility: sustainability, energy efficiency and climate protection (Statement of 13 National Science Academies, 2007)

http://www.pik-potsdam.de/aktuelles/nachrichten/dateien/G8_Academies%20Declaration.pdf……..

A joint statement on sustainability, energy efficiency, and climate change (Statement of 13 individual National Science Academies and the African Academy of Sciences, 2007)

http://www.interacademies.net/File.aspx?id=4825………..

 

Zmian klimatu, globalnego ocieplenia i ich alarmujących skutków: “Climate change, global warming and their alarming consequences” (Statement of the Polish Academy of Sciences, December 2007)

http://bit.ly/2jwgtNL………

 

Joint Science Academies’ Statement: Climate Change Adaptation and the Transition to a Low Carbon Society (Statement of 13 National Academies of Sciences, June 2008)

http://www.nationalacademies.org/includes/climatechangestatement.pdf……..

 

Climate change and the transformation of energy technologies for a low carbon future (Statement of 13 National Academies of Sciences, May 2009)

http://www.leopoldina.org/en/press/press-releases/press-release/press/713/………

Health Effects of Climate Change (Statement of the Inter Academy Medical Panel/42 National Academies of Sciences, 2010)

http://www.leopoldina.org/de/publikationen/detailansicht/publication/health-effects-of-climate-change-2010/………

Climate Change: Evidence and Causes (Joint Statement of the Royal Society and the U.S. National Academy of Sciences, February 2014)

http://nas-sites.org/americasclimatechoices/events/a-discussion-on-climate-change-evidence-and-causes/……..

Position de l’Académie sur les Changements Climatiques (“The Academy’s Position on Climate Change”) (Statement of the Académie Royale des Sciences, des Lettres & des Beaux-Arts de Belgique, November 12, 2014)

https://t.co/SZT9VvU8vx………

 

U.K. Science Communiqué on Climate Change (Joint Statement of the Royal Society and member organizations, July 2015)

https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF…….

 

Facing critical decisions on climate change (Joint Statement of the European Academies Science Advisory Council and its 29 members, 2015)

http://www.leopoldina.org/de/publikationen/detailansicht/publication/facing-critical-decisions-on-climate-change-in-2015/

Facing critical decisions on climate change in 2015

The science of climate change reported by the IPCC Fourth Assessment (2007) and Fifth Assessment (2014) has been thoroughly evaluated by numerous national academies (e.g. Royal Society/National Academy of Sciences, Royal Swedish Academy of Sciences) and by international bodies. Advances in science and technology have increased our knowledge of how to mitigate climate change, uncertainties in the scientific analysis continue to be addressed, co-benefits of mitigation to health have been revealed, and new business opportunities have been found. EASAC remains concerned, however, that progress in turning this substantial evidence base into an international policy response has so far failed to match the full magnitude and urgency of the problem.

Even if emissions of GHG stopped altogether, existing concentrations of GHG in the atmosphere would continue to exert a warming effect for a long time. Whatever measures are put in place to reduce the intensity of global human-induced climate forcing, adaptation will be necessary to build resilience to the risks already emerging as a result of climate change…

Signatories/Members of the European Academies Science Advisory Council

  • Academia Europaea
  • All European Academies (ALLEA)
  • The Austrian Academy of Sciences
  • The Royal Academies for Science and the Arts of Belgium
  • The Bulgarian Academy of Sciences
  • The Croatian Academy of Sciences and Arts
  • The Czech Academy of Sciences
  • The Royal Danish Academy of Sciences and Letters
  • The Estonian Academy of Sciences
  • The Council of Finnish Academies
  • The German Academy of Sciences Leopoldina
  • The Academy of Athens
  • The Hungarian Academy of Sciences
  • The Royal Irish Academy
  • The Accademia Nazionale dei Lincei
  • The Latvian Academy of Sciences
  • The Lithuanian Academy of Sciences
  • The Royal Netherlands Academy of Arts and Sciences
  • The Norwegian Academy of Science and Letters
  • The Polish Academy of Sciences
  • The Academy of Sciences of Lisbon
  • The Romanian Academy
  • The Slovak Academy of Sciences
  • The Slovenian Academy of Sciences and Arts
  • The Spanish Royal Academy of Sciences
  • The Royal Swedish Academy of Sciences
  • The Swiss Academies of Arts and Sciences
  • The Royal Society
  • The Federation of European Academies of Medicine (FEAM) (Observer)

 

[This list is not a complete summary of the many individual or joint statements of national academies of sciences. Please send additions and corrections to pgleick@pacinst.org.] https://www.skepticalscience.com/joint-statements-on-climate-change-from-nas-around-world.html

January 30, 2017 Posted by | 2 WORLD, climate change, Reference | Leave a comment

Testing a deep borehole as a potential way to bury highly radioactive nuclear trash

Christina Macpherson’s websites & blogs

It is a good idea to at least test the feasibility of deep boreholes. As one resident said, “Something must be done with the wastes”. There is no obligation on that community to agree to actually accept high-level nuclear waste – only to host the testing of the deep borehole concept.

The whole project would really make sense if it were combined with a definite plan to STOP MAKING TOXIC RADIOACTIVE WASTES, by closing down all nuclear reactors. This could be done, with genuine good will, and planning for compensation and transition to other employment for workers in the nuclear industry.

New Mexico town steps up for nuclear borehole project, LMT Online, January 15, 2017 “……. The U.S. Energy Department, Quay County and two energy development companies say the nation’s latest nuclear waste experiment could inject as much as $40 million into the county’s economy. Nara Visa residents just have to agree to let the companies drill a three-mile-deep borehole — seven times deeper than the Waste Isolation Pilot Plant in Carlsbad — into the crystalline, granite crust of the earth a few miles outside of town, on land currently occupied by fat, black cattle.

Right now, the project is pegged as a scientific experiment. The Energy Department says no nuclear waste will be placed in the test borehole.

The ultimate goal is to find a permanent place to dispose of the ever-growing and deadly stockpile of spent nuclear fuel rods and high-level radioactive waste collected at nuclear reactors and nuclear weapons laboratories nationwide.

Until this year, no town in the U.S. had agreed to the proposal. But when the Quay County Commission approved the plan in October, it put Nara Visa on track to become the first.

About seven miles outside Nara Visa, there is a small, gravel roadside park where semi-truck drivers pull off U.S. 54 to sleep. Below the earth, the granite is devoid of oil but just right for deep drilling.

These 10 acres belong to Louis and Elaine James, who’ve agreed to lease it to the government………

As far as the nuclear waste component is concerned, Louis James, 69, said, “I have more of a problem with it sitting over at Pantex 100 miles away than I do with it being under the ground, because you know it will get you if they ever attack those spots.” He was referring to the Pantex Plant, a nuclear weapons assembly facility outside Amarillo, Texas……

The test hole planned for the James’ property is meant to be just 8 1/2 inches wide but would go deep below ground, first through the water table and a mile through sediment before hitting the top of a crystalline rock layer. From there, the hole would be drilled another two miles into the Earth. This is the layer where nuclear waste would be stored, then sealed off with a steel casing and concrete to protect the environment and water in the mile span separating the waste from the land’s surface.

Utah-based DOSECC Exploration Services LLC and Enercon Federal Services, Inc., based in Atlanta, are developing the Nara Visa proposal and are one of four groups that have been granted the go-ahead from the Energy Department for Phase 1 of the project. This is referred to as “community buy-in,” gaining not only public approval but also support for the project, and securing the land for the borehole site.

If DOSECC and Enercon win this bid, they will get $35 million over a five-year period to drill the first hole. The Energy Department will grant an additional $50 million to drill a second, wider borehole if the first is successful……

State Rep. Dennis Roch said that after meeting with the companies, he felt confident there was “no connection between this viability test and the ultimate decision of where to dispose of nuclear waste way down the road.”…….

The Nara Visa site would only be permitted for drilling, he added. Nuclear waste storage would require an entirely different permitting and regulatory process…….

WIPP, after being closed for nearly three years following the radiation leak, began depositing waste below ground for the first time in December. But the stagnation of waste disposal at these facilities left the Energy Department scrambling for alternatives, and in 2012, deep boreholes resurfaced as a potential alternative, an idea that was first floated in the 1950s.

To store all of the waste sitting at 77 U.S. facilities, the Energy Department needs to drill 950 boreholes at an estimated $20 million per hole, or $71 billion for the entire project, including transportation, environmental reclamation, monitoring and site characterization, according to the 2010 Sandia study. In contrast, Yucca Mountain was estimated to cost $96 billion.
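
The quoted figures imply a rough split between drilling and everything else. Here is a sketch using only the numbers reported above; the breakdown itself is inferred, not stated in the article:

```python
# Cost arithmetic from the 2010 Sandia figures as reported above.
holes = 950
cost_per_hole = 20e6              # estimated drilling cost per hole, dollars

drilling_total = holes * cost_per_hole
print(drilling_total / 1e9)       # -> 19.0: ~$19 billion for drilling alone

project_total = 71e9              # quoted total, incl. transportation,
                                  # reclamation, monitoring, characterization
print((project_total - drilling_total) / 1e9)   # -> 52.0: ~$52 billion implied
                                                #    for the non-drilling items
print(project_total < 96e9)       # -> True: below the Yucca Mountain estimate
```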

Each hole is expected to contain 400 vertically stacked fuel pods. Unlike the costly steel drums used to pack waste headed to WIPP, the waste would not require specialized containers; it would instead be stored in its spent fuel form or in glass. Multiple boreholes could be drilled just over 200 meters apart to avoid thermal reactions.

Though the Sandia study said boreholes could be used for nuclear reactor waste, Mast from Enercon said he believes the Energy Department is only looking at boreholes for waste from nuclear weapons development.

To actually begin placing nuclear waste in the boreholes will require an amendment to the Nuclear Waste Policy Act.

Before the proposal reaches that stage, Greg Mello, director of the watchdog Los Alamos Study Group, says the government should be more transparent about exactly what type of high-level nuclear waste would go in the holes: spent fuel rods, nuclear weapons waste or down-blended plutonium. ….. http://www.lmtonline.com/news/article/New-Mexico-town-steps-up-for-nuclear-borehole-10858853.php

January 16, 2017 Posted by | Reference, USA, wastes | 1 Comment

Analysing the pros and cons of tax-payer subsidies for nuclear power

Nuclear power producers want government-mandated long-term contracts or other mechanisms that require customers to buy power from their troubled units at prices far higher than they would pay otherwise.

In California and in Nebraska, utilities plan to replace nuclear plants that are closing early for economic reasons almost entirely with electricity from carbon-free sources. Such transitions are achievable in most systems as long as the shutdowns are planned in advance to be carbon-free.

We should not rely further on the unfulfilled prophesies that nuclear lobbyists have deployed so expensively for so long.

Should troubled nuclear reactors be subsidized? http://bangordailynews.com/2017/01/13/the-point/compete-or-suckle-should-troubled-nuclear-reactors-be-subsidized/ By Peter Bradford, The Conversation

Since the 1950s, U.S. nuclear power has commanded immense taxpayer and consumer subsidy based on promises of economic and environmental benefits. Many of these promises are unfulfilled, but new ones take their place and more subsidies follow.

Today, the nuclear industry claims that keeping all operating reactors running for many years, no matter how uneconomic they become, is essential in order to reach U.S. climate change targets.

Economics have always challenged U.S. reactors. After more than 100 construction cancellations and cost overruns costing up to $5 billion apiece, Forbes magazine in 1985 called nuclear power “the greatest managerial disaster in business history … only the blind, or the biased, can now think that most of the money [$265 billion by 1990] has been well spent.” U.S. Atomic Energy Commission Chair Lewis Strauss’ 1954 promise that electric power would be “too cheap to meter” is today used to mock nuclear economics, not commend them.

As late as 1972, the Atomic Energy Commission forecast that the U.S. would have 1,000 power reactors by the year 2000. Today, we have 100 operating power reactors, down from a peak of 112 in 1990. Since 2012, power plant owners have retired five units and announced plans to close nine more. Four new reactors are likely to come on line. Without strenuous government intervention, almost all of the rest will close by midcentury. Because these recent closures have been abrupt and unplanned, the replacement power has come in substantial part from natural gas, causing a dismaying uptick in greenhouse gas emissions.

The nuclear industry, led by the forlornly named lobbying group Nuclear Matters, still obtains large subsidies for new reactor designs that cannot possibly compete at today’s prices. But its main function now is to save operating reactors from closure brought on by their own rising costs, by the absence of a U.S. policy on greenhouse gas emissions and by competition from less expensive natural gas, carbon-free renewables and more efficient energy use.

Only billions more dollars in subsidies and the retarding of rapid deployment of cheaper technologies can save these reactors. Only fresh claims of unique social benefit can justify such steps.

When I served on the U.S. Nuclear Regulatory Commission from 1977 through 1982, it issued more licenses than in any comparable period since. Arguments that the U.S. couldn’t avoid dependence on Middle Eastern oil and keep the lights on without a vast increase in nuclear power were standard fare then and throughout my 20 years chairing the New York and Maine utility regulatory commissions. In fact, we attained these goals without the additional reactors, a lesson to remember in the face of claims that all of today’s nuclear plants are needed to ward off climate change.

Nuclear power in competitive electricity markets

During nuclear power’s growth years in the 1960s and 1970s, almost all electric utility rate regulation was based on recovering the money necessary to build and run power plants and the accompanying infrastructure. But in the 1990s, many states broke up the electric utility monopoly model.

Now a majority of U.S. power generation is sold in competitive markets. Companies profit by producing the cheapest electricity or providing services that avoid the need for electricity.

To justify their current subsidy demands, nuclear advocates assert three propositions. First, they contend that power markets undervalue nuclear plants because they do not compensate reactors for avoiding carbon emissions or for other attributes such as diversifying the fuel supply or running more than 90 percent of the time.

Second, they assert that other low-carbon sources cannot fill the gap because the wind doesn’t always blow and the sun doesn’t always shine. So power grids will use fossil-fired generators for more hours if nuclear plants close.

Finally, nuclear power supporters argue that these intermittent sources receive substantial subsidies while nuclear energy does not, thereby enabling renewables to underbid nuclear even if their costs are higher.

Nuclear power producers want government-mandated long-term contracts or other mechanisms that require customers to buy power from their troubled units at prices far higher than they would pay otherwise.

Providing such open-ended support will negate several major energy trends that currently benefit customers and the environment. First, power markets have been working reliably and effectively. A large variety of cheaper, more efficient technologies for producing and saving energy, as well as managing the grid more cheaply and cleanly, have been developed. Energy storage, which can enhance the round-the-clock capability of some renewables, is progressing faster than had been expected, and it is now being bid into several power markets — notably the market serving Pennsylvania, New Jersey and Maryland.

Long-term subsidies for uneconomic nuclear plants also will crowd out penetration of these markets by energy efficiency and renewables. This is the path New York has taken by committing at least $7.6 billion in above-market payments to three of its six plants to assure that they operate through 2029.

Nuclear power versus other carbon-free fuels

While power markets do indeed undervalue low-carbon fuels, all of the other premises underlying the nuclear industry approach are flawed. In California and in Nebraska, utilities plan to replace nuclear plants that are closing early for economic reasons almost entirely with electricity from carbon-free sources. Such transitions are achievable in most systems as long as the shutdowns are planned far enough in advance for carbon-free replacements to be put in place.

In California, these replacement resources, which include renewables, storage, transmission enhancements and energy efficiency measures, will for the most part be procured through competitive processes. Indeed, any state where a utility threatens to close a plant can run an auction to ascertain whether there are sufficient low-carbon resources available to replace the unit within a particular time frame. Only then will regulators know whether, how much and for how long they should support nuclear units.

If New York had taken this approach, each of the struggling nuclear units could have bid to provide power in such an auction. They might well have succeeded for the immediate future, but some or all would probably not have won after that.

Closing the noncompetitive plants would be a clear benefit to the New York economy. This is why a large coalition of big customers, alternative energy providers and environmental groups opposed the long-term subsidy plan.

The industry’s final argument — that renewables are subsidized and nuclear is not — ignores overwhelming history. All carbon-free energy sources together have not received remotely as much government support as has flowed to nuclear power.

Nuclear energy’s essential components — reactors and enriched uranium fuel — were developed at taxpayer expense. Private utilities were paid to build nuclear reactors in the 1950s and early 1960s, and received subsidized fuel. According to a study by the Union of Concerned Scientists, total subsidies paid and offered to nuclear plants between 1960 and 2024 generally exceed the value of the power that they produced.

The U.S. government also has pledged to dispose of nuclear power’s most hazardous wastes — a promise that has never been made to any other industry. By 2020, taxpayers will have paid some $21 billion to store those wastes at power plant sites.

Furthermore, under the 1957 Price-Anderson Act, each plant owner’s accident liability is limited to some $300 million per year, even though the Fukushima disaster showed that nuclear accident costs can exceed $100 billion. If private companies that own U.S. nuclear power plants had been responsible for accident liability, they would not have built reactors. The same is almost certainly true of responsibility for spent fuel disposal.
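To put those two figures on a common scale, here is a rough illustrative division using the numbers as stated above (a simplification: the actual Price-Anderson scheme layers private insurance and retrospective premiums, which this sketch ignores):

$$
\frac{\text{Fukushima-scale accident cost}}{\text{annual liability cap}} \approx \frac{\$100\ \text{billion}}{\$300\ \text{million per year}} \approx 333\ \text{years}
$$

In other words, at the capped rate it would take more than three centuries of maximum payments to cover a single accident of that scale, which is the gap the paragraph describes.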

Finally, as part of the transition to competition in the 1990s, state governments were persuaded to make customers pay off some $70 billion in excessive nuclear costs. Today, the same nuclear power providers are asking to be rescued from the same market forces for a second time.

Christopher Crane, the president and CEO of Exelon, which owns the nation’s largest nuclear fleet, preaches temperance from a bar stool when he disparages renewable energy subsidies by asserting, “I’ve talked for years about the unintended consequences of policies that incentivize technologies versus outcomes.”

But he’s right about unintended and unfortunate consequences. We should not rely further on the unfulfilled prophecies that nuclear lobbyists have deployed so expensively for so long. It’s time to take Crane at his word by using our power markets, adjusted to price greenhouse gas emissions, to prioritize our low-carbon outcome over his technology.

Peter Bradford is a former chair of the Maine Public Utilities Commission and a former U.S. Nuclear Regulatory commissioner. He also is on the board of the Union of Concerned Scientists. This piece was originally published on TheConversation.com.

January 14, 2017 Posted by | business and costs, politics, Reference, USA | Leave a comment

Background to shutdown of Indian Point nuclear power plant

Indian Point nuclear plant, which the government of New York would prefer to close. Photo: Ricky Flores/The Journal News

An engineer’s perspective on the Indian Point shutdown http://enformable.com/2017/01/an-engineers-perspective-on-the-indian-point-shutdown/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+Enformable+%28Enformable%29 Author: , 11 Jan 17

The good—the very good—energy news is that the Indian Point nuclear power plants 26 miles north of New York City will be closed in the next few years under an agreement reached between New York State and the plants’ owner, Entergy.

New York Governor Andrew Cuomo has long been calling for the plants to be shut down because, as the New York Times related in its story on the pact, they pose “too great a risk to New York City.” Environmental and safe-energy organizations have been highly active for decades in working for the shutdown of the plants. Under the agreement, one Indian Point plant will shut down by April 2020, the second by April 2021.

They will join the many nuclear power plants in the U.S. that their owners have in recent years decided to close or have announced will be shut down within a few years.

This comes in the face of nuclear power plant accidents—most recently the ongoing Fukushima nuclear disaster in Japan—and of competition from less expensive power, including safe, renewable solar and wind energy.

Last year the Fort Calhoun nuclear plant in Nebraska closed, following the shutdowns of Kewaunee in Wisconsin, Vermont Yankee in Vermont, Crystal River 3 in Florida and both San Onofre 2 and 3 in California. Nuclear plant operators say they will close Palisades in Michigan next year, then Oyster Creek in New Jersey and Pilgrim in Massachusetts in 2019, and California’s Diablo Canyon 1 in 2024 and Diablo Canyon 2 in 2025.

This brings the number of nuclear plants down to a few more than 90—a far cry from President Richard Nixon’s scheme to have 1,000 nuclear plants in the U.S. by the year 2000.

But the bad—the very bad—energy news is that many promoters of nuclear power in industry and government are still pushing and, most importantly, that the transition team of incoming President Donald Trump has been “asking for ways to keep nuclear power alive,” as Bloomberg News reported last month.

As I was reading the first reports on the Indian Point agreement last week, I received a phone call from an engineer who has been in the nuclear industry for more than 30 years—with his view of the situation.

The engineer, who has been employed at nuclear plants and by a major nuclear plant manufacturer, wanted to relate that even with the Indian Point news—“and I’d keep my fingers crossed that there is no disaster involving those aged Indian Point plants in those next three or four years”—nuclear power remains a “ticking time bomb.” Concerned about retaliation, he asked that his name not be published.

Here is some of the information he passed on—a story of experiences of an engineer in the nuclear power industry for more than three decades and his warnings and expectations.

THE SECRETIVE INPO REPORT SYSTEM

Several months after the accident at the Three Mile Island nuclear plant in Pennsylvania in March 1979, the nuclear industry set up the Institute of Nuclear Power Operations (INPO) based in Atlanta, Georgia. The idea was to have a nuclear industry group that “would share information” on problems and incidents at nuclear power plants, he said.

If there is a problem at one nuclear power plant, through an INPO report it is communicated to other nuclear plant operators. Thus the various plant operators could “cross-reference” happenings at other plants and determine if they might apply to them.

The reports are “coded by color,” explained the engineer. Those which are “green” involve an incident or condition that might or might not indicate a wider problem. A “yellow” report is on an occurrence “that could cause significant problems down the road.” A “red” report is the most serious and represents “a problem that could have led to a core meltdown”—and could be present widely among nuclear plants and for which action needs to be taken immediately.

The engineer said he has read more than 100 “Code Red” reports. What they reflect, he said, is that “we’ve been very, very lucky so far!”

If the general public could see these “red” reports, its view of nuclear power would turn strongly negative, said the engineer.

But INPO prevents this. Because the institute was “created and solely funded by the nuclear industry,” its reports “are not covered by the U.S. Freedom of Information Act and are regarded as highly secretive.” The reports should be required to be made public, said the engineer. “It’s high time the country wakes up to the dangers we undergo with nuclear power plants.”

THE NRC INSPECTION FARCE

The U.S. Nuclear Regulatory Commission (NRC) is supposed to be the federal watchdog over nuclear power plants, and it frequently boasts of having “two resident inspectors” at each nuclear power plant in the nation, he noted.

However, explained the engineer, “the NRC inspectors are not allowed to go into the plant on their own. They have to be escorted. There can be no surprise inspections.” Indeed, the only inspections that can be made are those that come after the NRC inspectors “get permission from upper management at the plant.”

The inspectors “have to contact upper management and say they want to inspect an area. The word is then passed down from management that inspectors are coming—so ‘clean up’ whatever the situation is.”

“The inspectors’ hands are tied,” said the engineer.

THE 60- AND NOW 80-YEAR OPERATING DELUSION

When nuclear power plants were first designed decades ago, explained the engineer, the extent of their mechanical life was established at 40 years. The engineer is highly familiar with these calculations, having worked for a leading manufacturer of nuclear plants, General Electric.

The components in nuclear plants, particularly their steel parts, “have an inherent working shelf life,” said the engineer.

In determining the 40-year total operating time, the engineer said, the calculations included the wear and tear of refueling cycles, emergency shutdowns and the “nuclear embrittlement from radioactivity that impacts on the nuclear reactor vessel itself including the head bolts and other related piping, and what the entire system can handle. Further, the reactor vessel is the one component in a nuclear plant that can never be replaced because it becomes so hot with radioactivity. If a reactor vessel cracks, there is no way of repairing it and any certainty of containment of radioactivity is not guaranteed.”

Thus the U.S. government limited the operating licenses it issued for all nuclear power plants to 40 years. However, in recent times the NRC has “rubber-stamped license extensions” of an additional 20 years now to more than 85 of the nuclear plants in the country—permitting them to run for 60 years. Moreover, a push is now on, led by nuclear plant owners Exelon and Dominion, to have the NRC grant license extensions of 20 additional years—to let nuclear plants run for 80 years.

Exelon, the owner of the largest number of nuclear plants in the U.S., last year announced it would ask the NRC to extend the operating licenses of its two Peach Bottom plants in Pennsylvania to 80 years. Dominion declared earlier that it would seek NRC approval to run its two Surry nuclear power plants in Virginia for 80 years.

“That a nuclear plant can run for 60 years or 80 years is wishful thinking,” said the engineer. “The industry has thrown out the window all the data developed about the lifetime of a nuclear plant. It would ignore the standards to benefit their wallets, for greed, with total disregard for the country’s safety.”

The engineer went on that since “Day One” of nuclear power, because of the danger of the technology, “they’ve been playing Russian roulette—putting one bullet in the chamber and hoping that it would not fire.” By going to 60 years and now possibly to 80 years, “they’re putting all the bullets in every chamber—and taking out only one and pulling the trigger.”

Further, the NRC has not only been letting nuclear plants operate longer but also “uprating” them—allowing them to run “hotter and harder” to generate more electricity and, ostensibly, more profit. “Catastrophe is being invited,” said the engineer.

THE CARBON-FREE MYTH

A big argument of nuclear promoters in a period of global warming and climate change is that “reactors aren’t putting greenhouse gases out into the atmosphere,” noted the engineer.

But this “completely ignores” the “nuclear chain”—the cycle of the nuclear power process that begins with the mining of uranium and continues with milling, enrichment and fabrication of nuclear fuel, “and all of this is carbon intensive.” Greenhouse gases are also discharged in producing the steel and concrete used in nuclear plants, in the transportation required, and in the construction of the plants themselves.

“It comes back to a net gain of zero,” said the engineer.

Meanwhile, “we have so many ways of generating electric power that are far more truly carbon-free.”

THE BOTTOM LINE

“The bottom line,” said the engineer, “is that radioactivity is the deadliest material which exists on the face of this planet—and we have no way of controlling it once it is out. With radioactivity, you can’t see it, smell it, touch it or hear it—and you can’t clean it up. There is nothing with which we can suck up radiation.”

Once in the atmosphere—once having been emitted from a nuclear plant through routine operation or in an accident—“that radiation is out there killing living tissue whether it be plant, animal or human life and causing illness and death.”

What about the claim by the nuclear industry and promoters of nuclear power within the federal government of a “new generation” of nuclear power plants that would be safer? The only difference, said the engineer, is that it might be a “different kind of gun—but it will have the same bullets: radioactivity that kills.”

The engineer said, “I’d like to see every nuclear plant shut down—yesterday.”

In announcing the agreement on the closing of Indian Point, Governor Cuomo described it as a “ticking time bomb.” There are more of them. Nuclear power overall remains, as the experienced engineer from the nuclear industry said, a “ticking time bomb.”

And every nuclear power plant needs to be shut down.

January 13, 2017 Posted by | climate change, politics, Reference, safety | Leave a comment

The danger of plutonium being released at Naval Base Kitsap-Bangor in the United States

Puget Sound’s ticking nuclear time bomb, Crosscut by , 10 Jan 17  “……“Command and Control” shows what can happen when the weapons built to protect us threaten to destroy us, and it speaks directly to Puget Sound citizens: Locally, we face a similar threat in Hood Canal with the largest concentration of deployed nuclear weapons in the United States at Naval Base Kitsap-Bangor.

An accident at Bangor involving nuclear weapons occurred in November 2003 when a ladder penetrated a nuclear nose cone during a routine missile offloading at the Explosives Handling Wharf. All missile-handling operations at the Strategic Weapons Facility Pacific (SWFPAC) were stopped for nine weeks until Bangor could be recertified for handling nuclear weapons. Three top commanders were fired but the public was never informed until information was leaked to the media in March 2004.

The Navy never publicly admitted that the 2003 accident occurred, and it failed to report the accident at the time to county or state authorities. Public responses from governmental officials were generally expressions of surprise and disappointment.

Such an explosion likely would not cause a nuclear detonation. Instead, plutonium from the approximately 108 nuclear warheads on one submarine could be spread by the wind…… http://crosscut.com/2017/01/nuclear-accidents-bangor-accident-command-and-control/

January 11, 2017 Posted by | - plutonium, Reference, safety, USA, weapons and war | Leave a comment

How the public pays and pays to keep the nuclear industry alive

Nuclear Energy Dangerous to Your Wallet, Not Only the Environment  http://www.counterpunch.org/2016/01/01/nuclear-energy-dangerous-to-your-wallet-not-only-the-environment/ Pete Dolack writes the Systemic Disorder blog and has been an activist with several groups. His book, It’s Not Over: Learning From the Socialist Experiment, is available from Zero Books.

Quite an insult: Subsidies prop up an industry that points a dagger at the heart of the communities wherever it operates. The building of nuclear power plants drastically slowed after the disasters at Three Mile Island and Chernobyl, so it is at a minimum reckless that the latest attempt to resuscitate nuclear power pushes forward heedless of Fukushima’s discharge of radioactive materials into the air, soil and ocean.

There are no definitive statistics on the amount of subsidies enjoyed by nuclear power providers — in part because there are so many different types of subsidies — but it amounts to a figure, whether we calculate in dollars, euros or pounds, in the hundreds of billions. Quite a result for an industry whose boosters, at its dawn a half-century ago, declared that it would provide energy “too cheap to meter.”

Taxpayers are not finished footing the bill for the industry, however. There is the matter of disposing of radioactive waste (a cost often borne by governments rather than energy companies) and of fresh subsidies being granted for new nuclear power plants. None of this is unprecedented — government handouts have been the industry’s rule from its inception. A paper written by Mark Cooper, a senior economic analyst for the Vermont Law School Institute for Energy and the Environment, notes the industry’s lack of economic viability from the start:

“In the late 1950s the vendors of nuclear reactors knew that their technology was untested and that nuclear safety issues had not been resolved, so they made it clear to policymakers in Washington that they would not build reactors if the Federal government did not shield them from the full liability of accidents.” [page iv]

Nor have the economics of nuclear energy become rational today. A Union of Concerned Scientists paper, Nuclear Power: Still Not Viable Without Subsidies, states:

“Despite the profoundly poor investment experience with taxpayer subsidies to nuclear plants over the past 50 years, the objectives of these new subsidies are precisely the same as the earlier subsidies: to reduce the private cost of capital for new nuclear reactors and to shift the long-term, often multi-generational risks of the nuclear fuel cycle away from investors. And once again, these subsidies to new reactors—whether publicly or privately owned—could end up exceeding the value of the power produced.” [page 3]

The many ways of counting subsidies

Among the goodies routinely given away, according to the Concerned Scientists, are: Continue reading

January 9, 2017 Posted by | business and costs, politics, Reference, USA | Leave a comment

USA’s EPA (Nuclear Industry Protection Agency) confirms dramatic increase in radiation will be permitted in drinking water

RADICAL DRINKING WATER RADIATION RISE CONFIRMED IN EPA PLAN http://www.peer.org/news/news-releases/radical-drinking-water-radiation-rise-confirmed-in-epa-plan.html EPA Hid Planned Exposure Levels 1,000s of Times Safe Drinking Water Act Limits. PEER, Dec 22, 2016, Washington, DC — In the last days of the Obama Administration, the U.S. Environmental Protection Agency is about to dramatically increase allowable public exposure to radioactivity to levels thousands of times above the maximum limits of the Safe Drinking Water Act, according to documents the agency surrendered in a federal lawsuit brought by Public Employees for Environmental Responsibility (PEER). These radical rollbacks cover the “intermediate period” following a radiation release and could last for up to several years. This plan is in its final stage of approval.

The documents indicate that the plan’s rationale is rooted in public relations, not public health. Following Japan’s Fukushima meltdown in 2011, EPA’s claims that no radioactivity could reach the U.S. at levels of concern were contradicted by its own rainwater measurements showing contamination from Fukushima throughout the U.S. well above Safe Drinking Water Act limits. In reaction, EPA prepared new limits 1000s of times higher than even the Fukushima rainwater because “EPA experienced major difficulties conveying to the public that the detected levels…were not of immediate concern for public health.”

When EPA published for public comment the proposed “Protective Action Guides,” it hid proposed new concentrations for all but four of the 110 radionuclides covered, and refused to reveal how much they were above Safe Drinking Water Act limits. It took a lawsuit to get EPA to release documents showing that –

  • The proposed PAGs for two radionuclides (Cobalt-60 and Calcium-45) are more than 10,000 times Safe Drinking Water Act limits. Others are hundreds or thousands of times higher;
  • According to EPA’s own internal analysis, some concentrations are high enough to deliver a lifetime permissible dose in a single day (see the arithmetic sketch after this list). Scores of other radionuclides would be allowed at levels that would produce a lifetime dose in a week or a month;
  • The levels proposed by the Obama EPA are higher than what the Bush EPA tried to adopt–also in its final days. That plan was ultimately withdrawn; and
  • EPA hid the proposed increases from the public so as to “avoid confusion,” intending to release the higher concentrations only after the proposal was adopted. The documents also reveal that EPA’s radiation division even hid the new concentrations from other divisions of EPA that were critical of the proposal, requiring repeated efforts to get them to even be disclosed internally.
“To cover its embarrassment after being caught dissembling about Fukushima fallout on American soil, EPA is pursuing a justification for assuming a radioactive fetal position even in cases of ultra-high contamination,” stated PEER Executive Director Jeff Ruch, noting that New York Attorney General Eric Schneiderman has called for the PAGs to be withdrawn on both public health and legal grounds. “The Safe Drinking Water Act is a federal law; it cannot be nullified or neutered by regulatory ‘guidance.’”
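As a rough illustration of the scale implied by the second finding above (a sketch only; the 70-year lifetime is a conventional assumption for dose calculations, not a figure taken from the PEER documents):

$$
\frac{D_{\text{lifetime}} / 1\ \text{day}}{D_{\text{lifetime}} / 70\ \text{years}} = 70 \times 365 = 25{,}550
$$

That is, a concentration delivering a lifetime permissible dose $D_{\text{lifetime}}$ in a single day corresponds to roughly 25,000 times the dose rate of the same dose spread over a 70-year lifetime.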

Despite claims of transparency, EPA solicited public comment on its plan even as it hid the bulk of the plan’s effects. Nonetheless, more than 60,000 people filed comments in opposition.

“The Dr. Strangelove wing of EPA does not want this information shared with many of its own experts, let alone the public,” added Ruch, noting that PEER had to file a Freedom of Information Act lawsuit to force release of exposure limits. “This is a matter of public health that should be promulgated in broad daylight rather than slimed through in the witching hours of a departing administration.”

January 6, 2017 Posted by | politics, Reference, USA, water | 1 Comment