Mark Bittman Wrong On Gas: How New York Times Columnist Misunderstands Shale Revolution


In last week’s Opinion Pages of the New York Times, columnist Mark Bittman argues that natural gas has little potential as a bridge to zero-carbon energy and that gas should play a limited role in the country’s energy strategy. He concludes by urging the nation to dismantle its existing energy infrastructure and fill the gap with wind, solar, and other renewables. Here we examine Bittman’s arguments, along with the studies and literature he employs to back them up, and find them wanting.

Natural gas and nuclear have done more than any other energy sources to displace coal, saving the United States 54 billion tonnes of carbon dioxide emissions since 1950. In the past five years, natural gas alone has displaced coal and driven the country’s power sector emissions down 20 percent, leading to immense environmental and human health benefits. What follows is a response to Mark Bittman’s dreary diagnosis of natural gas.

Gas is killing coal across the country. In the last five years, coal’s share of electricity has declined from 50 to 38 percent, while gas’ share has increased from 22 to 30 percent. This shift has produced a massive nationwide reduction in carbon dioxide emissions and enormous environmental and health benefits. As we document in a recent report, coal is far worse than gas on virtually every health and environmental metric: it causes many more deaths, emits far more pollutants, and does much greater ecosystem damage than gas. The wide-ranging benefits of gas’ assault on coal should be celebrated, not obfuscated.

Gas is a bridge fuel to zero-carbon energy if we want it to be. Studies that actually model natural gas as a bridge fuel find natural gas could help stabilize atmospheric carbon dioxide concentrations and play a significant role in limiting the atmospheric carbon dioxide concentration to 550 parts per million, provided that zero-carbon energy sources such as nuclear power, carbon capture-equipped gas plants, and renewables are simultaneously deployed.

Claiming that our situation is too dire for bridge fuels, Bittman cites a Climate Progress article that relies on an International Energy Agency (IEA) special report titled “Are We Entering a Golden Age of Gas?” The report, however, fails to model a gas-bridge scenario, instead only showing what would happen if natural gas increased while growth in zero-carbon energy remained at business-as-usual levels. By definition, if gas were employed as a bridge fuel it would not be expected to stabilize atmospheric concentrations of carbon dioxide single-handedly, but would be deployed alongside other forms of low-carbon energy.

Renewables need gas. One of natural gas’ most important strengths as a bridge technology is its ability to support the continued expansion and deployment of wind, solar, and other zero-carbon energy. By providing backup and firming capacity, the expansion of gas-fired power plants can accelerate the integration of intermittent power into existing electricity grids. Bittman concludes by calling for a “huge push to real renewables” and suggests “[dismantling] the existing infrastructure, starting with coal and nuclear,” but this is tantamount to destroying the very infrastructure wind and solar need.

Gas is a historic driver of emissions reductions. Since 1950, two energy sources – nuclear power and natural gas – have dramatically reduced the carbon intensity of our economy. Had the energy from these two sources been supplied by dirtier coal, there would be an additional 54 billion tonnes of carbon dioxide in the atmosphere today. For comparison, the entire world’s energy sector emitted 35 billion tonnes of carbon dioxide in 2012. That 54-billion-tonne figure is 36 times more carbon dioxide displacement than that achieved by all non-hydro renewables. Rather than obsess over methane leakage rates, we should celebrate these gains.
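To make that accounting concrete, here is a minimal back-of-envelope sketch of the displacement arithmetic; the emission factors are assumed round numbers for illustration, not the inputs behind the 54-billion-tonne figure:

```python
# Back-of-envelope CO2 displacement: avoided emissions = generation shifted
# away from coal times the difference in emission factors. All factors are
# assumed round numbers, not the report's actual inputs.

COAL_T_PER_MWH = 1.00   # tonnes CO2 per MWh, typical coal plant (assumed)
GAS_T_PER_MWH = 0.45    # tonnes CO2 per MWh, combined-cycle gas (assumed)
NUKE_T_PER_MWH = 0.0    # negligible operating emissions

def avoided_co2_tonnes(mwh_displaced, replacement_t_per_mwh):
    """Tonnes of CO2 avoided when coal generation is displaced."""
    return mwh_displaced * (COAL_T_PER_MWH - replacement_t_per_mwh)

# Example: 1,000 TWh (1e9 MWh) of would-be coal generation met instead by:
print(avoided_co2_tonnes(1e9, GAS_T_PER_MWH) / 1e9)   # gas: ~0.55 billion tonnes avoided
print(avoided_co2_tonnes(1e9, NUKE_T_PER_MWH) / 1e9)  # nuclear: ~1.0 billion tonnes avoided
```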

Consensus shows 2 percent or less methane leakage. By relying on a few sources, including a dubious study by researchers at Cornell University, and invoking anecdotal evidence, Bittman obscures a wider consensus of studies showing that methane leakage from shale gas development is around 2 percent or less, giving gas a significant climate advantage over coal. The latest data from the US Environmental Protection Agency’s Greenhouse Gas Inventory pegs fugitive methane emissions at 1.5 percent of total production and estimates that methane leakage is on the decline. The most comprehensive study of methane leakage from shale gas to date, published last week, estimates a 0.42 percent leakage rate for shale gas production.
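For readers who want to see how a leakage rate feeds into the gas-versus-coal comparison, a rough sketch follows; the GWP value and per-MWh factors are illustrative assumptions, not values from the studies cited above:

```python
# How a leakage rate enters the gas-vs-coal climate comparison, in tonnes of
# CO2-equivalent per MWh. All inputs are illustrative assumptions, not the
# cited studies' values, and coal's own methane emissions are ignored.

GWP_CH4 = 25               # 100-year global warming potential of methane (assumed)
GAS_CO2_PER_MWH = 0.45     # combustion CO2, combined-cycle gas (assumed)
COAL_CO2_PER_MWH = 1.00    # combustion CO2, typical coal plant (assumed)
CH4_BURNED_PER_MWH = 0.15  # tonnes of methane consumed per MWh of gas power (assumed)

def gas_co2e_per_mwh(leak_rate):
    """Combustion CO2 plus leaked methane (as CO2e), per delivered MWh.

    leak_rate is leaked methane as a fraction of total methane withdrawn.
    """
    leaked = CH4_BURNED_PER_MWH * leak_rate / (1.0 - leak_rate)
    return GAS_CO2_PER_MWH + leaked * GWP_CH4

for rate in (0.0042, 0.015, 0.02):
    print(f"{rate:.2%} leakage -> {gas_co2e_per_mwh(rate):.2f} t CO2e/MWh "
          f"(coal: {COAL_CO2_PER_MWH:.2f})")
# Even at 2% leakage, gas stays well below coal on this 100-year basis.
```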

High-impact, low-cost intervention could contain “super-emitters.” Rather than being an argument against the climate benefit of natural gas, as Bittman uses it, the existence of a small group of “super-emitters” suggests that methane leakage is not a diffuse issue, but one that could be isolated and addressed in a low-cost manner. 

With gas, it pays to prevent emissions. Even without regulation, the likelihood of intervention is high because there are large financial incentives for developers to limit emissions from their wells. As the president of development at Southwestern Energy told Bittman, it is cost-effective for developers to minimize fugitive emissions, since all leaked methane is money lost to gas producers.

There will also be a rapid shift in the wake of EPA’s natural gas emission standards, which roll out in 2015 and require all gas developers to install technologies on wells that reduce their emissions by 95 percent or more. EPA’s emissions standards are expected to deliver 1.7 million tons of methane savings annually – the equivalent, from a GHG emissions standpoint, of removing 4 to 8 million cars from the road each year.
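A back-of-envelope check shows that equivalence is plausible. The GWP and per-car figures below are our assumptions (EPA’s exact conversion factors are not given here), and tons are treated as metric tonnes:

```python
# Rough cross-check on the cars equivalence claim.

CH4_SAVED_T = 1.7e6        # tonnes of methane avoided per year (from the article)
GWP_CH4 = 21               # 100-yr GWP used in EPA inventories of that era (assumed)
CAR_T_CO2E_PER_YEAR = 4.7  # typical passenger vehicle, tonnes CO2e/yr (assumed)

cars = CH4_SAVED_T * GWP_CH4 / CAR_T_CO2E_PER_YEAR
print(f"{cars / 1e6:.1f} million cars")  # ~7.6 million, near the top of the 4-8 range
```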

Energy transitions take time, and do not occur by dismantling current systems. Bittman’s casual dismissal of natural gas and nuclear power reveals a lack of understanding about how energy systems change. The US energy system has been changing gradually for hundreds of years, getting cleaner and more diverse over time. Rarely have energy systems undergone a “dismantling of the existing infrastructure”; instead they evolve under social, economic, and technological forces. Gas abets, rather than hinders, this necessary evolution toward cleaner sources of energy.

Photo Credit: Mark Bittman and Shale Gas/shutterstock


Clarifying the Confusion – Storage and Cost Effectiveness

by Chris Edgette, Senior Director, California Energy Storage Alliance and Charlie Barnhart, Postdoctoral Scholar, Global Climate and Energy Project, Stanford University

Is storage cost-effective? Misinterpretations of a Stanford study[1] on the energetic performance of energy storage, recently published in Energy and Environmental Science, appear to have added to the confusion surrounding this topic. To clarify a few misconceptions, we (the California Energy Storage Alliance, or ‘CESA’) teamed with Charles Barnhart, the lead author of the Stanford study, to set the record straight.

The Stanford Study compared the energetic costs of energy storage resources to the energetic losses due to curtailment of wind and solar generation. Considerations of other key benefits delivered by real-world deployment of storage assets were beyond the scope of the study. Just like energetic performance, the benefits listed below add to the environmental, societal, and economic value of storage.

[Chart 1: Energy intensity of various forms of storage charged by wind or solar, compared with natural gas peakers (Barnhart 2013)]

1.     Energy offset: Energy discharged from a grid storage resource is likely to offset production by traditional fossil generators.  Chart 1 compares the energy intensity of various forms of storage charged by wind or solar to the energy intensity of natural gas peakers.

Energy intensity is the lifecycle cost of energy production per unit of energy delivered to society. Lower intensity values mean that less energy is invested for each unit of energy delivered. On average, wind or solar resources coupled with storage are less energy intense than peaking power plants.

[Chart 2: Energy intensity of peakers compared with various forms of storage charged by curtailed renewable energy (Barnhart 2013)]
Chart 2 shows an intensity comparison between peakers and various forms of storage charged by curtailed renewable energy.

The difference in this chart is that it was assumed that the energy used to charge the storage device would otherwise have been curtailed.  Thus, the energy input value for the storage device is considered to be zero.  Recent Long Term Procurement Planning studies by E3 have shown that renewable curtailment is likely in the next decade. Until the power grid achieves adequate flexibility to accommodate wind and solar power, curtailed energy presents energetic and market opportunities.
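A toy calculation makes the difference between Charts 1 and 2 concrete. The numbers are invented for illustration and are not the Stanford study’s data:

```python
# Toy version of the energy-intensity comparison (illustrative numbers only).

def energy_intensity(embodied, charging, delivered):
    """Lifecycle energy invested per unit of energy delivered (lower is better)."""
    return (embodied + charging) / delivered

# Hypothetical battery: 500 GJ to manufacture, 10,000 GJ delivered over its
# life, ~80% round-trip efficiency (so 12,500 GJ of charging energy needed).
embodied, delivered, charging = 500.0, 10_000.0, 12_500.0

print(energy_intensity(embodied, charging, delivered))  # 1.30: grid charging counted in full
# Chart 2's premise: the charging energy would otherwise have been curtailed
# (wasted), so its energetic cost to society is counted as zero.
print(energy_intensity(embodied, 0.0, delivered))       # 0.05: charged on curtailed energy
```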

2.     Time value: As noted in the study, “The value of available energy depends on time, location and need.”  This dispatchability of energy storage provides operational benefits.  Time value is key to determining the cost effectiveness and environmental benefit of energy storage on the grid.

3.     Greenhouse Gas (GHG) Production: The study’s energetic evaluation did not account for GHG or pollution differences between resources.  A gas peaker releases CO2 and other GHGs during combustion, so the peaker will produce more GHGs per kilowatt hour than a solar or wind farm, even at comparable energy intensities.

4.     Recycling: The study does not account for recycling of storage systems or components.

5.     Additional Benefits: In addition to the wind and solar energy-shifting applications addressed by the Stanford Study, grid-connected energy storage resources can provide many additional benefits for the electric power system, society and the environment:

a.      Spinning reserve: Backup capacity the system operator holds in reserve to replace a generator or transmission resource that drops offline.  While most storage resources can provide this benefit with minimal standby loss, a fossil generator must expend fuel in preparation for a reserve event.

b.     Frequency regulation: Regulation responds rapidly to changing grid loads and generation.  Fast energy storage responds to regulation needs 2-3 times faster than traditional fossil resources.

c.      Peaking capacity: Energy storage peaking capacity avoids costly startups of gas peaker plants.

d.     Transmission and distribution upgrade deferral: Energy storage can smooth customer load or renewable generation at key times, allowing utilities to cost effectively defer or avoid expensive system upgrades.

e.      Transmission congestion relief: Transmission congestion causes high locational pricing and potential curtailment of load.  Energy storage can alleviate congestion and reduce electricity prices.

f.       Voltage support: Energy storage can provide voltage support to the grid instead of specific grid infrastructure.

g.      Locational benefits: Energy storage can be installed in locations where traditional generation might not be feasible, increasing the value of delivered energy while reducing pollution in urban centers.

h.     Reliability: Energy Storage may provide backup power to increase grid security and reliability.


Commercialized energy storage devices operating on the grid today provide a variety of benefits.  For example, Xtreme Power’s and Duke Energy’s 36 MW Notrees project provides ramping control and time shifting, while AES Energy’s 32 MW Laurel Mountain project provides wind ramping and regulation services. When evaluating the cost-effectiveness of energy storage, these services must be considered – particularly for system planning and resource procurement.

Based upon the above, we would like to emphasize several points that should be accounted for in discussions related to the study:

1.     Stored energy from wind resources has a lower energy intensity than most fossil generators.

2.     Even at comparable intensities, solar resources will produce energy without the GHG impacts of fossil generators.

3.     Time of Delivery effects increase the environmental and societal benefit attributable to storage.

4.     Quantifying a storage system’s ability to provide grid services was outside of the study’s purview, but could point to environmental and societal benefits.

5.     Storage technologies have varying energetic performances, but longer cycle lives will lead to improved energetic cost-benefit ratios for all storage technologies.

In conclusion, the Stanford Study builds our understanding of the broad, societal-scale energetic performance of storage paired with renewables by providing useful energy performance metrics. It should not be taken out of an energetic context.  The study’s authors at Stanford and CESA agree: energy storage will play a crucial role in cleaner, more sustainable electric power systems.

Charles Barnhart, the Study’s lead author, and his colleagues at Stanford are working on additional research studies that will take into account many of the above points.  Their work should further expand our understanding of the energy value of energy storage on the grid, while pointing to deployment strategies that optimize the economic and environmental benefit of energy storage.  We look forward to seeing and discussing the results as this groundbreaking modeling effort moves forward.


Smaller Nuclear Energy Plants Can Be Beautiful, Despite Opposition


Dr. Edwin Lyman, Senior Scientist, Global Security Program, Union of Concerned Scientists, has published a paper titled Small Isn’t Always Beautiful: Safety, Security, and Cost Concerns about Small Modular Reactors that poses many questions about the development of SMRs. I’ve seen Dr. Lyman taking notes and asking questions at a number of SMR-related conferences; this paper is apparently one of the products of that effort.

Since I have been advocating for smaller nuclear power plants since 1991 and helping to develop one particular brand of SMR since 2010, I read his report with a great deal of interest.

I will address specific concerns and disagreements with his analysis – along with pointing out the subjects on which we agree – but first I want to state my general impression. This paper seems to be one more example that supports my long-standing assumption that the UCS is fundamentally opposed to the use of nuclear energy. Despite its frequent protestations that it is not antinuclear, the UCS is predictably quick with an automatic negative reaction to any attempt by nuclear energy technologists to improve the technology’s viability in the competitive energy market.

The next time I see Dr. Lyman I should ask him directly what kind of power systems he would like to see us building today and in the near future. Based on his dislike of both small and large nuclear power plants, I assume that he and his employer favor coal and natural gas. (Wind and solar are incapable of the task of supplying the kind of reliable, industrial strength power that most members of our society expect to be available 8760 hours per year.)

Dr. Lyman is partially correct in asserting that there are challenges associated with producing smaller reactors that can be operated profitably without reducing safety margins, but he is giving the wrong impression by implying that the task requires cutting corners or that the obstacles are so great that the effort is not worth government investment and encouragement. Let me explain the basis for my dissenting professional opinion.

In addition to my first-hand experience in operating and maintaining submarine nuclear propulsion plants, which are far smaller than commercial power stations, one of the main reasons that I became interested in designing and building smaller commercial reactors was the experience of reading I. C. Bupp’s Light Water: How the Nuclear Dream Dissolved. That book, sharply critical of the nuclear industry of the 1970s and 1980s, pointed out how the quest for “economy of scale” had driven reactor designers to create systems that were several times larger than any in operation at the time that they did the design work.

Bupp showed how the rapid scaling of designs from Gen I to Gen II depended on unproven assumptions and models rather than full-scale testing of the structures, systems and components. His work, and the early work of the UCS, led to the expensive and time-consuming loss-of-fluid test (LOFT) large break experiments that eventually proved that the emergency core cooling systems would work to keep cores from being damaged and releasing harmful radiation. Here is a key quote from NUREG/IA-0028, which is titled Review of LOFT Large Break Experiments.

The principal finding from the large break experiments is that, for the degrees of severity in initial and boundary conditions, the measured fuel cladding temperatures remained well below the peak cladding licensing limit temperatures.

Even though I knew that the experimental results eventually validated the codes and showed that safe large reactors could be built, I recognized that there were a number of diseconomies of scale that combined to reduce the assumed economic benefits of ever larger nuclear power plants. I also realized that the focus on very large power plants pushed nuclear fission energy into a tiny niche of the overall energy market; it is never good for any product to be dependent on a single type of customer. I began my research into smaller reactors with the idea that it was possible to change the prevailing paradigm that “bigger is cheaper”.

My research on the topic of scale economies has spanned the past 22 years. It has taken some unusual paths that included a three-year stint as the general manager for a small manufacturing company. The company that I managed, J&M Industries, Inc., produced a wide variety of products with a large range of quality requirements and production run volumes. We provided custom product development services that helped inventors produce low volume prototypes of new ideas. We manufactured packaged products for the consumer market, bulk products for the medical market, and high quality marine components for other companies whose products included large luxury yachts. That experience gave me direct understanding of the economies associated with requirements, material variations, order timing, inventory, delivery, and various sizes of production runs.

Our company was in a cost-competitive market; the only way to make money was to have a complete understanding of the many components of product cost so that prices could be set at a competitive, but profitable, level. The owner of the factory had learned that lesson the hard way. He had developed a couple of simple rules of thumb; I built product costing models based on his experience and refined them over the years we worked together. One of my most important takeaways from that job was that the cost of anything includes far more variables than most people want to think about.

There is little academic doubt about the scaling equations associated with electricity production machinery; if you assume that smaller systems are just scaled-down versions of larger systems, cost does not scale linearly with output, so a half-size plant costs considerably more than half as much. The key to producing economical smaller systems is to take advantage of opportunities to simplify the system design and to eliminate systems and components that are no longer necessary because they were initially added to overcome a scale diseconomy. Of course, it is also possible to break the scaling laws with a complete redesign that puts the technology on a different cost curve altogether.
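A quick sketch of that scaling argument, using the classic “six-tenths rule”; the exponent and reference costs are assumptions for illustration, not figures from this post:

```python
# Power-law scaling sketch: cost ~ (P/P_ref)^exponent, with the classic
# "six-tenths rule" exponent. All numbers are assumed for illustration.

def capital_cost(power_mwe, ref_cost_usd=6e9, ref_power_mwe=1000.0, exponent=0.6):
    """Scale capital cost down from a reference plant."""
    return ref_cost_usd * (power_mwe / ref_power_mwe) ** exponent

for p in (1000, 500, 180):
    per_kwe = capital_cost(p) / (p * 1000)
    print(f"{p:>4} MWe: ${per_kwe:,.0f}/kWe")
# 1000 MWe: $6,000/kWe; 500 MWe: ~$7,900/kWe; 180 MWe: ~$11,900/kWe.
# That per-kW gap is what design simplification, component elimination,
# or a redesign onto a different cost curve has to close.
```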

If, for example, a reactor designer chooses to create an integral design in which the reactor core, steam generator, and pressurizer are directly connected in a single vessel, that designer can eliminate the cost of connecting pipes, isolation valves, controls and indications for the isolation valves, supports for the piping and valves, routine inspections of the piping, and the delays associated with fitting and welding large piping on a construction site.

Another thing to realize about scale economies is that they are not unique to nuclear energy; they apply to all competing energy systems. That means smaller nuclear plants do not compete in cost against the very largest nuclear or coal-fired power plants; they must compete against the other options for providing reliable power in 100-500 MWe chunks.

Dr. Lyman seems quite adamant in his assertion that smaller reactors should get no credit for their enhanced safety and smaller cores, and that they should be forced to continue planning for a 10-mile emergency planning zone (EPZ). He states that any reactor greater than 250 MWth will contain enough radioactive material to produce a release that would require evacuation out to and beyond 10 miles.

However, that assertion requires the physically incredible assumption that the entire core is somehow vaporized and distributed into the atmosphere. Based on the results of the State-of-the-Art Reactor Consequence Analyses (SOARCA), I believe it is time to reevaluate the need for a 10-mile EPZ for our existing fleet of reactors. Though the report’s executive summary contains a lot of words apparently designed to obscure the key result, here is the important paragraph:

The unmitigated versions of the scenarios analyzed in SOARCA have lower risk of early fatalities than calculated in the 1982 Siting Study SST1 case. SOARCA’s analyses show essentially zero risk of early fatalities. Early fatality risk was calculated to be ~10⁻¹⁴ for the unmitigated Surry ISLOCA (for the area within 1 mile of Surry’s exclusion area boundary) and zero for all other SOARCA scenarios.

(Emphasis added.)

There is certainly no public health reason for asserting that the already obsolete EPZ requirements should be applied to newer, safer, and smaller power plants.

Of course, applying the obsolete EPZ requirement to all reactors, regardless of size and design features, would support the apparent UCS mission of forcing nuclear energy to be uncompetitive with its preferred sources of power. It would limit the ability of smaller reactors to be considered as emission-free replacements for older, smaller coal stations that are often located within the boundaries of smaller cities in the United States and abroad. It would also limit the ability to capture more value and thermal efficiency from a nuclear power plant by locating it near an industrial facility or campus/district heating system that could make use of the waste heat that is an inevitable part of a Rankine cycle steam plant. Lyman and UCS apparently do not want any nuclear cogeneration facilities, despite their potential advantages for the climate.

I also disagree with Dr. Lyman’s assertion that the security force requirements should be the same for a modern SMR as they are for large light water reactors that were designed and built more than 30 years ago. Facility vulnerabilities can be mitigated in creative ways when starting with a clean sheet of paper compared to retrofitting security onto an existing site that was not fundamentally designed to resist attack. I have had the pleasure of working with one of the leading site security experts in the country and have seen far more of the details of his security plans than Dr. Lyman. I am confident that he and his team will be able to convince the most skeptical regulators that his design will adequately protect the plant.

Unfortunately, site security is one of those areas where there must be a point at which the public agrees to turn over its information rights to trusted agents – like government regulators. It is not possible to allow the public to have access to all of the details since the bad guys are part of “the public.”

Dr. Lyman’s discussion of the importance of security costs (pages 14 and 15 of his report) contains a glaring analytical error. He makes the following statement:

The nuclear industry’s preoccupation with reducing security staffing is somewhat surprising. Even though security labor costs are significant, they are far from being a dominant contributor to overall O&M costs. Security staffing costs range from 15 to 25 percent of total O&M costs.

He follows that statement up by showing that an armed guard force of 120 people is a fairly small portion of the O&M cost for a large nuclear plant.

In total, considering the number of shifts per week, a typical reactor site would need approximately 120 security officers. For comparison, typical total plant staffing is between 400 and 700 personnel per site, so the security force is roughly 20 to 30 percent of the total workforce.

That statement ignores the fact that holding the security staff at a constant number while reducing all other staffing through system simplification and improved control systems makes the portion of O&M devoted to security increase. SMR designers are designing their systems for staffs that are considerably smaller than the 280-580 non-security personnel per site that Dr. Lyman assumes is typical.
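The arithmetic is simple enough to show directly; the guard force is Lyman’s 120, while the shrinking non-security headcounts are assumed for illustration:

```python
# Fixed guard force vs shrinking plant staff (headcount shares only;
# salary weighting ignored). Non-security staffing levels are assumed.

SECURITY_OFFICERS = 120  # held constant, per Lyman's figures

for other_staff in (580, 400, 200, 100):
    share = SECURITY_OFFICERS / (SECURITY_OFFICERS + other_staff)
    print(f"{other_staff:>3} non-security staff -> security is {share:.0%} of the workforce")
# 580 -> 17%, 400 -> 23%, 200 -> 38%, 100 -> 55%
```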

I agree with some of Dr. Lyman’s points. Without redesign, smaller systems have a cost disadvantage relative to larger systems. Going smaller is not a magic bullet that will suddenly make nuclear energy competitive. The investment required to build a manufacturing infrastructure that enables series production techniques to overcome scale disadvantages might be a real hurdle that prevents SMRs from ever becoming competitive.

Aside: There is a solution to that challenge in the United States that should be more fully explored. Contrary to popular assumption, the US has been building smaller nuclear power plants with some regularity for the past 60 years. The infrastructure to manufacture and assemble complete power plants exists, but much of it is off limits to commercial endeavors. The assumptions that cause that to be true should be open to questioning in today’s political environment. End Aside.

Dr. Lyman expresses well-justified skepticism about the benefits of building nuclear plants underground. It seems to me that the choice of burying power stations comes with at least as many additional risks as the ones it eliminates. The key cited advantage of going underground is reducing the plant’s vulnerability to aircraft impact. I personally think that the Greg Jaczko-initiated Aircraft Impact rule should be discarded as being an unnecessary barrier to building new nuclear power stations. Almost every other component of our national infrastructure is more vulnerable to attack by jet airplanes than an above ground nuclear power station.

Dr. Lyman correctly points out that going underground raises questions about access in emergencies and resistance to flooding. Building plants below grade also adds enough site specific design and construction requirements to negate most, if not all, of the projected advantage of manufacturing the nuclear portion of the power plant in a factory. It is a construction truism that the deeper the foundation, the more difficult the site assessment and the more expensive the construction.

The height of the tall units that all of the integral light water reactor designers are proposing is a fundamental part of their natural circulation, passive safety design, but putting them almost completely underground requires installing foundations that are deeper than the foundations for the world’s tallest buildings. Building a power plant starting at the bottom of a very deep hole adds many hours to the construction process and requires the use of some of the world’s largest cranes, especially if the power plant is to be built of large, heavy modules.

In my opinion, encouraging SMR designers to choose to bury their plants underground may be part of a strategy to add enough cost and make the construction timeline long enough to prevent smaller reactors from competing with other, more immediately lucrative power plants that burn the hydrocarbon products of some of the world’s most politically powerful and wealthy corporations.

The bottom line is that, although it is not automatically true, small can be beautiful in nuclear power plants. Smaller plants can be built with greater predictability, improved safety, and increased reliability, and can serve a larger number of potential customers. The Department of Energy is on the right track with its program to support small reactor design and licensing, though the 5-year, $452 million (total) program is incredibly tiny compared with the technology’s potential and with the investments DOE is making in technologies like technically unproven carbon capture and sequestration, large-scale solar thermal power stations, and wind turbine deployment.

The post Smaller nuclear power plants can be beautiful, despite the opposition of the UCS appeared first on Atomic Insights.

Photo Credit: Small Nuclear Energy Plants/shutterstock


New Saab Production Begins, EVs Coming Next Year


Published on September 28th, 2013 | by Jo Borrás



Fans of quirky Swedish design have a lot to celebrate this week, as Saab has risen from the ashes of GM’s bankruptcy and begun production of its 9-3 sedan once again! The first new Saabs began rolling off the company’s Trollhättan assembly line last week, and news of progress on the company’s EV front came over the weekend. You can read about both news items below, in articles that originally appeared on our sister site, Gas 2.


New Saab Begins Production


After years of rumors and speculations of the will they/won’t they variety, a brand-new Saab 9-3 has – finally! – managed to roll down the assembly line! Don’t be fooled by the fact that this new Saab looks just like the 2009 models the company was building when it was spun off from GM’s bankruptcy, however. This car features all-new components designed by Saab engineers and manufactured in Trollhättan, Sweden.

Saab, now owned by the National Electric Vehicle Sweden company, promised its new cars would reach production in 18 months. That was in September of 2012, so they’re about 6 months ahead of schedule. That on-schedule performance puts NEVS-owned Saab in a decidedly different league than faux car-makers like Detroit Electric and Elio Motors, who’ve spent more time justifying delays than they have building cars. Don’t take my word for that, though; check out the well-appointed assembly line and experienced Saab assembly workers in the photo gallery below, and start getting excited.

Saab’s back, baby! All we need now is a new Saab 900 revival and we’ll really be in business!

Sources | Photos: Saabs United, via WorldCarFans.


Saab ePower

The first new Saab of the NEVS era rolled off the company’s Trollhättan assembly line in Sweden last week, getting the ball rolling on one of the biggest post-bailout era automotive success stories. Inside EVs is reporting that these first new-era Saabs, which are gasoline-powered and not the upcoming “Saab ePower” electric model, are being built for sale to government agencies in China and to get any bugs in the new assembly process “worked out” before commercial sales of the EVs restart.

Meanwhile, the core of the new Saab ePower line – the NEVS lithium iron phosphate battery packs – is reportedly wrapping up development in Japan and on track for delivery to Saab’s Swedish assembly line early next year, in time for NEVS and Saab to start putting electric 9-3s and, hopefully, 9-5s on the road for the 2015 model year.

We’ll have more updates on the new Saab ePower 9-3’s production timeline, and on whether Saab will pull the trigger on electric/hybrid concepts like the Saab Phoenix coupe or a new Saab 900 Phoenix model, in the coming weeks and months. Stay tuned!

Source | Photos: Saab, via Inside EVs.




About the Author

I've been involved in motorsports and tuning since 1997, and write for a number of blogs in the Important Media network. You can find me on Twitter, Skype (jo.borras) or Google+.




DOE Pumps $33 Million Into EV Batteries and Other Green Car Tech


Published on September 5th, 2013 | by Tina Casey


The Department of Energy has just come out with a new round of $33 million in funding to improve the performance of EV batteries, along with other projects to reduce vehicle weight, improve efficiency and increase EV battery range. We’re not surprised to see sticky-tape expert 3M among the list of awardees, since the company has been up to its elbows in clean tech projects including solar energy and fuel cells.

3M’s slice of the funding pie is a big one, coming to just over $3 million, and with that in hand the company hopes to propel its long-running EV battery research into the big time.

New DOE Funding For Advanced EV Batteries

The new round of funding includes 38 EV battery and fuel efficiency projects (a complete pdf list is here), but for now let’s zero in on 3M. The company has received significant federal funding for EV battery research since at least 1993, and it looks like all that hard work is about to pay off.

Here’s the Energy Department’s description of the 3M project:

…a new high energy electrochemical couple for automotive applications that exceeds energy requirements for PEV [plug-in electric vehicle] applications that couples a high capacity core shell cathode, advanced electrolyte, and advanced stable silicon alloy composite anode with a novel conductive polymer binder.

The anode is especially interesting because last year 3M announced a patent for its silicon anode compositions for lithium-ion batteries, which the company claims can increase capacity by more than 40 percent in combination with other advanced components developed by the company. 3M previously received a matching grant of $4.6 million from DOE for the research.

In addition to the 3M grant, the new round of DOE funding is going to a number of federal laboratories and academic institutions as well as to another familiar name in the green tech sector, GE, which will get $1.7 million to improve electric drive performance.


Overall, the new round of funding supports the Obama Administration’s EV Everywhere initiative, which aims to make EV ownership just as convenient and affordable as owning a conventional car (3M is also a charter member of the workplace EV charging component of EV Everywhere).

3M And Green Tech

As for 3M’s other sustainability-related projects, a couple of recent examples are a DOE grant of $3 million for advanced fuel cell R&D, and a $7.33 million partnership with the National Renewable Energy Laboratory to accelerate the development of low cost thin film solar and concentrated solar technologies.

It’s also worth noting that 3M is working on reducing the cost of compressed natural gas tanks for vehicles, though for obvious reasons (fracking comes to mind) natural gas is not one of our favorite alternative fuels, unless you’re talking about renewable biogas.

Follow me on Twitter and Google+.




About the Author

Tina Casey specializes in military and corporate sustainability, advanced technology, emerging materials, biofuels, and water and wastewater issues. Tina’s articles are reposted frequently on Reuters, Scientific American, and many other sites. You can also follow her on Twitter @TinaMCasey and Google+.




US Cities Could Churn Out Renewable Hydrogen From Wastewater


Published on September 13th, 2013 | by Tina Casey


We’ve been giving hydrogen fuel cells the sustainability stinkeye because of the huge amount of energy needed to manufacture hydrogen, which typically involves natural gas, which brings us around to the impacts of fracking including water contamination, fugitive greenhouse gas emissions and even earthquakes. However, a new fuel cell demonstration project from Lawrence Livermore National Laboratory might be enough to shut us up. The $1.75 million project, in partnership with the Florida company Chemergy Inc., aims to reclaim hydrogen from municipal wastewater, aka sewage.

Renewable Hydrogen From Wastewater

The renewable hydrogen fuel cell project is partly funded by the California Energy Commission as well as Chemergy. Its partners also include the US Department of Energy, the Department of Defense Construction Engineering Research Laboratory (didn’t know we had one of those, did you?), and the Bay Area Biosolids to Energy Coalition, which includes 19 municipal wastewater authorities in the San Francisco Bay area. A wastewater treatment plant operated by a coalition member, the Delta Diablo Sanitation District in Antioch, was selected to be the testbed.

Raw wastewater is typically more than 99 percent water, so the two-step system developed by Chemergy comes in after the treatment process has reduced wastewater to wet biosolids.

Chemergy’s low-temperature thermochemical process extracts a hydrogen-bearing compound from the biosolids, along with reclaimed heat and carbon dioxide. The compound is then decomposed to form hydrogen.

The renewable hydrogen will go to fuel cells developed by DOE and the Defense Department. The expectation is that within a year, the fuel cell system will process about one ton of wet biosolids daily and reach a capacity of up to 30 kilowatts. The electricity will be used to power some of the treatment plant’s operations.

Based on Livermore’s energy equivalency calculation of one kilogram of hydrogen per gallon of gasoline, the system is expected to produce hydrogen at a competitive price of $2.00 per kilogram.
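In gasoline terms, that works out as follows; the 2013 pump price is our assumption, not Livermore’s:

```python
# The projected hydrogen price expressed in gasoline terms.

H2_PRICE_USD_PER_KG = 2.00      # projected hydrogen price (from the article)
KG_H2_PER_GALLON_EQUIV = 1.0    # Livermore's energy-equivalency figure
GASOLINE_USD_PER_GALLON = 3.50  # rough 2013 US average pump price (assumed)

gge_price = H2_PRICE_USD_PER_KG * KG_H2_PER_GALLON_EQUIV
print(f"${gge_price:.2f} per gasoline-gallon equivalent "
      f"vs ${GASOLINE_USD_PER_GALLON:.2f} at the pump")
```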

As for the multiplicity of partners involved in this relatively modest project ($1.75 million doesn’t buy much these days), Chemergy’s expertise is in the chemical conversion of wastewater to hydrogen, but in order for the system to function efficiently from soup to nuts you also have to factor in the durability and safety issues involved in hydrogen storage and use, which is where the experts at Livermore come in. The lab has been partnering closely with the departments of Energy and Defense on advanced fuel cell technology.

Meet Your Friendly Neighborhood Sewage Treatment Plant

Aside from recovering a clean, renewable fuel from wastewater, the fuel cell system also cuts down on the amount of wastewater byproducts that need to be transported off site for disposal. In addition to saving money, that cuts down on greenhouse gas emissions related to transportation and disposal.


That’s just the tip of the iceberg as far as resource recovery from wastewater treatment plants goes. Other examples that are already becoming commonplace are renewable methane gas recovery (both for stationary use and as a vehicle fuel) as well as natural soil enhancer from dewatered biosolids.

Also in the works are bioplastics and liquid biofuel from reclaimed grease.

With their huge, sprawling infrastructure, municipal wastewater treatment facilities also have potential for hosting other forms of renewable energy including photovoltaic installations and hydrokinetic turbines.

Follow me on Twitter and Google+.








NREL Releases New Roadmap To Reducing Solar PV “Soft Costs” By 2020


Published on September 27th, 2013 | by Guest Contributor

Originally published by the National Renewable Energy Laboratory

The Energy Department’s (DOE) National Renewable Energy Laboratory (NREL) recently issued a new report, “Non-Hardware (‘Soft’) Cost-Reduction Roadmap for Residential and Small Commercial Solar Photovoltaics, 2013–2020” (PDF), funded by DOE’s SunShot Initiative and written by NREL and Rocky Mountain Institute (RMI). The report builds on NREL’s ongoing soft-cost benchmarking analysis and charts a path to achieve SunShot soft-cost targets of $0.65/W for residential systems and $0.44/W for commercial systems by 2020.

Non-hardware costs – also referred to as soft, balance-of-system, or business process costs – include permitting, inspection, interconnection, overhead, installation labor, customer acquisition, and financing. The report also highlights that certain processes often categorized as soft costs, such as permitting and interconnection, may not appear significant when measured in dollars-per-watt, but are costly in that they pose significant market barriers that slow PV deployment.

“Regardless of the specific path taken to achieve the SunShot targets, the concerted efforts of numerous photovoltaic (PV) market stakeholders will be required,” NREL Solar Technology Markets and Policy Analyst Kristen Ardani said. “This report illustrates how the required participation of each type varies substantially by soft-cost-reduction category while noting that roles and responsibilities will be complementary and evolve over time.”

“Soft costs are the majority of costs for residential solar and a large minority for commercial PV projects. They have remained stubbornly high in recent years despite impressive hardware-cost reductions,” said Jon Creyts, program director at Rocky Mountain Institute. “Aggressive soft-cost-reduction pathways must be developed to achieve the SunShot Initiative’s PV price targets.”

Soft costs account for more than 50 percent of total installed residential solar costs and more than 40 percent of commercial solar costs. The report covers strategies for overcoming market barriers and decreasing costs across four key areas: customer acquisition; permitting, inspection, and interconnection; installation labor; and financing. The report identifies residential installation labor as well as permitting, inspection, and interconnection as facing the most uncertain near-term paths toward roadmap targets.

The roadmap also leverages proven methodologies adapted from the semiconductor and silicon PV industries, and offers comprehensive findings from market analysis and interviews with solar industry soft-cost experts – including financiers, analysts, utility representatives, residential and commercial PV installers, software engineers, and industry organizations – all to identify specific cost-reduction opportunities.

“This report represents the first quantitative, national roadmap that targets soft-cost-reduction opportunities,” said Minh Le, director of DOE’s Solar Energy Technologies Office. “This roadmap and future refinements are necessary to determine the path forward to reduce the largest cost in residential solar installations. We need to be persistent in identifying the levers of change and where the big challenges persist.”

For example, the report identifies ways to decrease residential customer acquisition expenditures by using software tools to reduce total time spent on site, designing templates to reduce system design costs, and leveraging consumer-targeting strategies to increase the number of leads generated.

This report is the first of a series that will track soft-cost reductions and quantify the impacts of innovations. Future work will elaborate and refine soft-cost benchmarks, cost-reduction strategies, and the distinctions among the nation’s geographically diverse PV markets, with the goal of tracking – and helping enable – progress toward the Energy Department’s SunShot cost-reduction targets. In addition, NREL and RMI are facilitating an industry-driven roadmapping initiative, using working groups to lower PV soft costs and overcome market barriers. The kickoff meeting will be held immediately before Solar Power International on Oct. 21 in Chicago. To find out more about this kickoff meeting, contact NREL’s Kristen Ardani, kristen.ardani@nrel.gov, and RMI’s Dan Seif, dseif@rmi.org.

NREL is the U.S. Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for the Energy Department by The Alliance for Sustainable Energy, LLC.








Ocean Warming, Sea Level Rise And Polar Ice Melt Speed Up, Surface Warming Follows

Decadal surface-air temperature (°C) via average of datasets maintained by the HadCRU, NOAA and NASA.

“Global Warming Has Accelerated In Past 15 Years, New Study Of Oceans Confirms,” as we reported back in March. And “Greenland Ice Melt Up Nearly Five-Fold Since Mid-1990s, Antarctica’s Ice Loss Up 50% In Past Decade,” as we reported last November. Another study that month found “sea level rising 60% faster than projected.”

And yet much of the media believes climate change isn’t what gets measured and reported by scientists, but is somehow a dialectic or a debate between scientists and deniers. So while 2010 was the hottest year on record and the 2000s the hottest decade on record, we are subject to nonsensically framed stories like this one from CBS, headlined “Controversy over U.N. report on climate change as warming appears to slow.”

The drama-driven junkies of the MSM apparently think that the most newsworthy thing in the once-every-several-years literature review by hundreds of the world’s leading scientists is that people who make a living denying climate science … wait for it … deny climate science. That CBS story actually begins, “Climatologists and climate-change deniers agree on at least one thing this week: everyone is awaiting the landmark U.N. report on climate change that will be presented at next week’s meeting of the Intergovernmental Panel on Climate Change (IPCC).” Stop the presses! No, please, stop the damn presses already if you are an editor or reporter who thinks deniers deserve equal billing with scientists.

Because the media keeps making the same faux pas about the faux pause, scientists and science writers have had to debunk it repeatedly. Anyone in the media who insists on buying into the false dialectic MUST read the new piece at Real Climate by climatologist Stefan Rahmstorf, the Mother Jones piece by Chris Mooney, this piece by Tamino, and almost anything at Skeptical Science (such as this or this). Also, Peter Sinclair has a great video on this.

Let me extract the key points and figures. Back in July, scientist Dana Nuccitelli summarized a new study, “Distinctive climate signals in reanalysis of global ocean heat content”:

  • Completely contrary to the popular contrarian myth, global warming has accelerated, with more overall global warming in the past 15 years than the prior 15 years. This is because about 90% of overall global warming goes into heating the oceans, and the oceans have been warming dramatically.
  • As suspected, much of the ‘missing heat’ Kevin Trenberth previously talked about has been found in the deep oceans. Consistent with the results of Nuccitelli et al. (2012), this study finds that 30% of the ocean warming over the past decade has occurred in the deeper oceans below 700 meters, which they note is unprecedented over at least the past half century.
  • Some recent studies have concluded based on the slowed global surface warming over the past decade that the sensitivity of the climate to the increased greenhouse effect is somewhat lower than the IPCC best estimate. Those studies are fundamentally flawed because they do not account for the warming of the deep oceans.
  • The slowed surface air warming over the past decade has lulled many people into a false and unwarranted sense of security.

For more on the myth of a low climate sensitivity (or the myth that climate sensitivity is the same as projected future warming), see this post. In reality, the best science says that the Earth’s actual sensitivity to carbon pollution is probably on the high side.

The bottom line is provided by Rahmstorf at RealClimate:

The heat content of the oceans is growing and growing. That means that the greenhouse effect has not taken a pause and the cold sun is not noticeably slowing global warming….

The increase in the amount of heat in the oceans amounts to 17 × 10²² Joules over the last 30 years. That is so much energy it is equivalent to exploding a Hiroshima bomb every second in the ocean for thirty years.
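That comparison is easy to sanity-check. Assuming a Hiroshima yield of roughly 15 kilotons of TNT (~6.3 × 10¹³ J, our assumption), the quoted heat gain works out to a few bombs’ worth per second, the same order of magnitude as Rahmstorf’s figure:

```python
# Order-of-magnitude check on the Hiroshima comparison.

OCEAN_HEAT_GAIN_J = 17e22                      # over the last 30 years (from the quote)
SECONDS_IN_30_YEARS = 30 * 365.25 * 24 * 3600  # ~9.5e8 seconds
HIROSHIMA_YIELD_J = 6.3e13                     # ~15 kt TNT (assumed)

heating_rate_w = OCEAN_HEAT_GAIN_J / SECONDS_IN_30_YEARS
print(heating_rate_w / HIROSHIMA_YIELD_J)  # ~2.8 bombs' worth per second
```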

Before discussing the explosive ocean heat data, Tamino and Rahmstorf make an important point about recent warming. Here is the plot of the NASA temperature data:

[Chart: NASA GISS global surface temperature trend]

“Since 1975, global average surface air temperature has increased at a rate of 0.17 deg.C/decade,” Tamino notes. “But the rate of increase hasn’t been perfectly constant over that entire time span. As a matter of fact, there’s a 15-year time span during which the rate is notably different. Fifteen whole years!!!”

Rahmstorf replied to one journalist who asked whether there’s a real slowdown or “Do the IPCC authors feel pressured to write about it just because skeptics are making so much noise about it?”

You’d have to ask them but it is quite possible. I think a lot of the interest in this topic in the science community has been triggered by the public debate about it. If you look at the 15-year period up to 2006, the warming trend was almost twice as high as normal (namely 0.3 °C per decade) but nobody cared (you can see a graph with this trend line here [or above]. We published a paper in Science in 2007 where we noted this large trend, and as the first explanation for it we named natural variability. There is a certain asymmetry in that 15 years of high trend don’t raise much interest, whilst 15 years of low trend do. The reason is that interest groups strongly push the latter.

What’s surprising is not that deniers and confusionists keep pushing their denial and confusion – that is, after all, their job – but that much of the mainstream media keeps buying what they are selling.

Let’s return to the speed up in ocean heat content:

Change in heat content in upper 2000 meters (6500 feet) of world’s oceans. Source: NOAA

Rahmstorf, who is Head of Earth System Analysis at the Potsdam Institute for Climate Impact Research, explains:

The amount of heat stored in the oceans is one of the most important diagnostics for global warming, because about 90% of the additional heat is stored there (you can read more about this in the last IPCC report from 2007). The atmosphere stores only about 2% because of its small heat capacity. The surface (including the continental ice masses) can only absorb heat slowly because it is a poor heat conductor. Thus, heat absorbed by the oceans accounts for almost all of the planet’s radiative imbalance.

Here’s a graphic illustration of that via Skeptical Science:


A visual depiction of how much global warming heat is going into the various components of the climate system for the period 1993 to 2003, calculated from IPCC AR4 5.2.2.3

What does that mean for our understanding of climate change? Again, here’s Rahmstorf:

If the oceans are warming up, this implies that the Earth must absorb more solar energy than it emits longwave radiation into space. This is the only possible heat source. That’s simply the first law of thermodynamics, conservation of energy. This conservation law is why physicists are so interested in looking at the energy balance of anything. Because we understand the energy balance of our Earth, we also know that global warming is caused by greenhouse gases – which have caused the largest imbalance in the radiative energy budget over the last century.

If the greenhouse effect (that checks the exit of longwave radiation from Earth into space) or the amount of absorbed sunlight diminished, one would see a slowing in the heat uptake of the oceans. The measurements show that this is not the case.

Rahmstorf also notes that “Completely independently of this oceanographic data, a simple correlation analysis (Foster and Rahmstorf ERL 2011) showed that the flatter warming trend of the last 10 years was mostly a result of natural variability, namely the recently more frequent appearance of cold La Niña events in the tropical Pacific and a small contribution from decreasing solar activity.”

You can see the effect of La Niña “directly in the following figure, without any statistical analysis”:

Annual global temperature (El Niño years in red and La Niña years in blue)

As you can see, “both the red El Niño years and the blue La Niña years are getting warmer, but given that we have lately experienced a cluster of La Niña years the overall warming trend over the last ten years is slower.” This is the noise “associated with natural variability, not a change in the signal of global warming.”

And, as it turns out, just last month a major study published in Nature confirmed that “the slowing rise in global temperatures during recent years has been a result of prevalent La Niña periods in the tropical Pacific.” The abstract of that study explains:

Our results show that the current hiatus is part of natural climate variability tied specifically to a La Niña-like decadal cooling.

Thus there are, as Rahmstorf notes, “at least three independent lines of evidence that confirm we are not dealing with a slowdown in the global warming trend, but rather with progressive global warming with superimposed natural variability.”

And let’s not forget another key indicator of accelerating warming – the accelerating melting of the great ice sheets as documented in the most comprehensive analysis of satellite altimetry, interferometry, and gravimetry data sets to date:

[Chart: ice sheet mass loss and contributions to sea level rise]

Warming of the whole globe (as opposed to the thin surface layer) has sped up. When the rate of surface warming returns to the trendline, I wonder if the media will report that global warming has accelerated.

The post Faux Pause: Ocean Warming, Sea Level Rise And Polar Ice Melt Speed Up, Surface Warming To Follow appeared first on ThinkProgress.


Creating an Equitable Grid: Should We Be Worried?

The future of reliable energy is in jeopardy – for the poor. New proposals to upgrade the American grid fail to account for the needs of low-income consumers. Historically, equitable grids, which fairly distribute power to all consumers, were the best choice economically. For much of the twentieth century, electric utilities benefited from increasing demand because it justified building larger, more efficient power plants that ultimately decreased the cost of electricity for everyone. In fact, as long as there was excess capacity, it was often cost-effective to maintain service even to consumers who could make only partial payments. Without explicit policy intervention, this system generally reduced rather than exacerbated social inequalities.

Today, despite the existence of more efficient technology, energy poverty persists in America. In 2012, 15% of U.S. households were below the poverty line and paid a disproportionate share of their income to adequately heat or cool their homes. In some cases, families must decide between heating their home and buying food. In many cases, these families are in poverty because of health problems, which may be exacerbated by both heat waves and below-freezing temperatures. Beyond the health risks, such families are more likely to rely on unsafe heating sources such as lanterns, which create fire hazards.


The elderly are particularly vulnerable when in energy poverty because of health complications.
Image Credit: Josh Westrich/zefa/corbis

Despite this need, most proposals for revolutionizing the electric grid focus on improving reliability and increasing renewable generation. For example, Utility 2.0 is a pilot program to be implemented in Maryland in response to concerns about grid reliability and consumer engagement. Although the political motivation is unclear, this pilot program has ambitious plans to align utility compensation with consumer priorities. It focuses on increasing consumer access to information in order to install smart technology and participate in real-time pricing programs – neither of which is likely to reduce energy poverty. Another project, America’s Power Plan, includes policy recommendations that focus on performance-based compensation, reducing investor uncertainty, and increasing renewables. While this type of change is certainly needed, little mention is made of how these policies will impact disadvantaged communities. These proposals would benefit from guiding principles to ensure that the needs of low-income consumers will be met.

Research indicates that implementing many market-based programs today could disadvantage the poor. For example, analysis of a ComEd data set suggests that real-time pricing programs would increase the electricity bills of lower-income households, because those households have less ability to shift their electricity usage and tend to use less energy overall. Other work indicates that an unexpected consequence of solar panel subsidies is a redistribution of wealth from the poor to the rich: in Arizona, all ratepayers pay for solar subsidies, yet only the wealthy can afford to install the systems. Since lower-income individuals tend to rent rather than own their homes, they have fewer opportunities to reap the benefits of such renewable energy subsidies. The result is that the poor ultimately have less access to reliable power.
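To make the finding concrete, here is a minimal sketch, in Python, of why real-time pricing can penalize households that cannot shift load. All tariff numbers and load profiles below are invented for illustration; they are not from the ComEd data set.

# Hypothetical illustration of why real-time pricing (RTP) can raise
# bills for households that cannot shift load. All numbers are invented
# for this sketch; they are not ComEd tariff data.

FLAT_RATE = 0.10                          # $/kWh, assumed flat tariff
RTP = {"peak": 0.18, "off_peak": 0.06}    # assumed real-time prices

def bill(peak_kwh, off_peak_kwh, rtp=False):
    """Monthly bill under a flat rate or a two-period RTP tariff."""
    if rtp:
        return peak_kwh * RTP["peak"] + off_peak_kwh * RTP["off_peak"]
    return (peak_kwh + off_peak_kwh) * FLAT_RATE

# A flexible household shifts usage off-peak; a constrained household
# (occupants home all day, no smart appliances) cannot.
flexible    = {"peak_kwh": 100, "off_peak_kwh": 300}
constrained = {"peak_kwh": 250, "off_peak_kwh": 150}

for name, load in [("flexible", flexible), ("constrained", constrained)]:
    print(f"{name:>11}: flat ${bill(**load):.2f} vs RTP ${bill(**load, rtp=True):.2f}")

Under these assumed prices, the flexible household's bill falls from $40.00 to $36.00 under real-time pricing, while the constrained household's rises from $40.00 to $54.00, even though both use the same total energy.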

We must envision an electricity system that responds to the current environmental crisis and reduces carbon emissions. But we cannot do so in a way that increases social inequality. If new market-based programs are to improve the quality of life for all of society, they must include safeguards against energy poverty. A next-generation utility should move us forward, not backward.

Authored by:

Casey Canfield

As a member of the Energy and Behavior group at Carnegie Mellon University, my research focuses on using behavioral science to improve the effectiveness of programs in the electricity industry. My first project focused on how to design electricity information displays to improve consumer understanding. I'm now studying the cybersecurity mental models of grid operators to improve training and ...

See complete profile

Read More

Solar Energy in the Bush

I am sorry to inform you all that I have just spent an awesome 7 days in the Australian outback. Although I was incommunicado for a full week, I couldn't escape solar; in fact, I was surrounded by it!

Going bush is a tradition we all too often neglect, and thanks to my in-laws, we got a fantastic taste of it last week at an amazing property called Barkala Farm, not far from where some of our in-laws settled several generations ago. The 9,000-acre farm has a long history, but for almost 40 years it has been owned and developed by a committed and wonderful family who still run it today. Although they run cattle and specialise in horse riding, bush walks and bird watching, they are also famous for their very substantial pottery-making facility, and it's all run by solar.

With an extended family group of almost 20 people, we hired a wonderful straw-bale/stone/mud-brick/slab hut that was recently rebuilt and tucked away at the end of a gully. Surrounded by grazing horses, you look out over craggy outcrops and innumerable trails and forest, characteristic of the Pilliga Scrub, which at 3,000 km2 is the largest continuous semi-arid woodland in temperate north-central New South Wales and the biggest untouched remnant in the state. If you haven't witnessed the Pilliga, you must see it.

We saw wildlife galore, and the kids were fascinated by giant bush cockroaches, emus, wild goats and kangaroos. We gazed at endless moonlit skies every night while we cooked on our open fire, saw historic graffiti from early explorers and found masses of Aboriginal tools. A sunset trail ride was a highlight; nothing beats slowly plodding through the bush with the wildlife almost oblivious, and I even avoided being thrown off when my trusty steed got spooked (just).

I got to take my boys on their first mountain bike ride down a 3km winding bush trail, complete with creek crossings, sand, rocks and some wild goats thrown in for good measure. I retraced the route at dawn on my Zero, getting deeper up the trail on a wonderful misty morning and deftly discovering some great new country in elegant silence.

[Image: the Zero electric motorcycle on a Pilliga bush trail]

Back to solar: I wrote in a recent post about the challenges we discovered trying to sort out refrigeration for such a big mob. The farm has progressively upgraded its power generation over the years as demand has risen, starting almost 21 years ago with a great little PV system, which now runs the hut we stayed in (state-of-the-art 55W BP panels, no less). Not long ago, they also upgraded the main facility's power system, which had become 100% diesel due to the load from the pottery kilns and blacksmith workshops, adding a fantastic 40kW PV system.

Much to my family's chagrin, I spent a fair bit of time poking around with the son of the owner, who really knows his stuff. He delighted in the fact that most of the time they simply "skim" energy off the top to keep the place running, barely discharging the batteries during the day and carefully managing load at night. Scanning around, LED lights, soft-start motors and a bunch of other energy-saving measures were in place. Interestingly, the one challenge they still face is a certain combination of high loads under which the generator takes too long to kick in and sync up; so far he hasn't quite got the set-up right to stop it cycling on and off as the intermittent load comes and goes and the battery voltage rapidly rises and falls. Off-grid power is inherently more complex, and more subject to such idiosyncrasies, than grid-connect power, and it reminds me of the high level of support such systems require, even when the customers have their own "load sense"; it is not an off-the-shelf, sell-and-forget solution.
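The cycling he describes is a classic control-tuning problem, usually tamed with hysteresis and a minimum run time in the generator start logic. Here is a minimal sketch in Python; the voltage thresholds and timings are my assumptions for illustration, not the farm's actual settings.

# Sketch of hysteresis-based generator start/stop control for an
# off-grid system. Thresholds and timings are illustrative assumptions;
# real systems tune these to the battery bank and loads.

START_VOLTS = 47.0   # start the genset when the 48V bank sags below this
STOP_VOLTS  = 51.0   # stop only once the bank recovers well above it
MIN_RUN_S   = 900    # minimum run time, so brief load spikes cannot
                     # cycle the genset on and off

class GensetController:
    def __init__(self):
        self.running = False
        self.run_seconds = 0

    def step(self, battery_volts, dt=1):
        """Advance the controller by dt seconds given a voltage reading."""
        if self.running:
            self.run_seconds += dt
            # The wide gap between start and stop voltages (hysteresis),
            # plus the minimum run time, damps rapid on/off cycling.
            if battery_volts >= STOP_VOLTS and self.run_seconds >= MIN_RUN_S:
                self.running = False
        elif battery_volts <= START_VOLTS:
            self.running = True
            self.run_seconds = 0
        return self.running

The design point is that the controller never reacts to a single voltage dip or spike; it waits for the bank to cross well-separated thresholds and keeps the generator on long enough to do useful charging.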

They'll get it right soon, I'm sure, and they have good partners and suppliers.

Our little 1.5kW system ran the hut with aplomb, handling the family-induced surge in demand really well, even with two days of rain and cloud. The battery capacity was obviously well designed for such events, and with a 5kW inverter there was plenty of grunt for the electric frypan that got used once or twice (for warming side dishes to accompany the beast we worked our way through) and for recharging my Zero once.

Being remote from the main system, I'm really glad we didn't turn up with a cool room, because the hut's system wouldn't have coped with the lousy efficiency of standard units. Instead, we got by spreading the cooling load out. The hut has a 200L fridge, but to bolster it we used our camper van's 40L fridge (run by its own solar) and plugged our 50L Waeco into the house, knowing it was really efficient at around 30W. To back that up we had a small esky and a massive 300L esky, which held all the necessary refreshments for big strong men, dusty from days on the track, whittling, surviving in the bush, protecting the womenfolk and other such necessary pursuits. The big esky had 18 bags of ice that got us through the entire week at beer-chilling temperature, the saving grace being a cool place to hide it and cold nights.
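For the curious, the refrigeration arithmetic stacks up roughly like this. Only the ~30W Waeco figure comes from above; the house fridge draw, duty cycle and sun-hours are my assumptions for a rough sketch.

# Back-of-envelope daily energy budget for the hut's cooling load.
# Only the ~30W Waeco figure is from the text; everything else is an
# assumption for illustration.

house_fridge_w = 80     # assumed average draw of the 200L fridge
waeco_w        = 30     # the efficient 50L Waeco quoted above
duty_cycle     = 0.5    # assumed fraction of the day compressors run

cooling_wh = (house_fridge_w + waeco_w) * duty_cycle * 24
print(f"Cooling load: ~{cooling_wh:.0f} Wh/day")   # ~1320 Wh/day

pv_wh = 1500 * 3        # 1.5kW array at an assumed 3 sun-hours (cloudy)
print(f"PV yield:     ~{pv_wh} Wh/day")            # ~4500 Wh/day

Even on the cloudy days, the assumed numbers leave comfortable headroom, which is why spreading the load across the camper's own solar fridge and the ice-filled eskies worked so well.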

[Image: the farm's PV array in the Pilliga]

We also loaded up and did a day's driving while we were there, paying our respects to past ancestors, enjoying a sulphur bath at the nearby hot springs and visiting the stunning Dandri Gorge. What struck me most, though, was passing through the small towns of Pilliga, Gwabegar and Baradine and the larger centres of Wee Waa and Coonabarabran. These are not highly prosperous towns as a rule, and some are a shadow of their former selves. It's tough country filled with tough folks, true outback survivors living simple lives. And you know what many of them had on their roofs? Grid-connected solar systems, by the bucketload. In fact, after a while my young boys, usually eagle-eyed for solar installs, just gave up commenting because we were all getting bored with the rolling commentary.

Large rural enterprises and struggling outback towns using solar are the opposite of “middle class welfare”; they are living proof that solar really works at a community level and makes incredible sense in our big wide land.

The post Solar in the bush appeared first on Solar Business Services.

Authored by:

Nigel Morris

Nigel Morris has been involved in the PV industry for almost 20 years and is the founder of SolarBusinessServices, one of Australia’s leading PV consultancies. He began his PV career as the manufacturing manager with one of Australia’s pioneers in renewable energy and during his 5 years there, was a system designer, manufacturer, installer, salesman and company director. In 1997 he moved ...

See complete profile

Read More

Tour Of Perovo Solar Power Station (CT Exclusive — Interviews, Videos, Pictures)


Published on September 25th, 2013 | by Zachary Shahan


Following up on the teaser videos I shared last week, below are some videos with a lot more information about the Perovo Solar Power Station in Crimea, Ukraine. The very basic facts, which you'll also hear in the interviews, are as follows:

  • 6th-largest solar PV power plant in the world at the moment;
  • largest solar PV power plant in the world when it was completed in 2011;
  • took 9 months to complete;
  • developed by Activ Solar*, an Austria-based solar developer that has so far focused on developing utility-scale solar power plants in Ukraine.

If you’ve ever been curious about some of the details of large solar PV power plants, I think these videos should provide you with much of the info you crave. For more, we could always get you some answers from the folks over at Activ Solar. Also, I’ll publish a 45-minute video interview with Activ Solar’s COO in the coming days. Anyway, for now, check out the videos! (Numerous pictures are also included at the bottom.)

This first video is the longest and gives a fairly detailed overview of the Perovo Solar Power Station and some of the neat technical details there (try to ignore the horrible camera work in parts):

Here’s a view from further out (again, I will try to do better with the camera work in the future):

From the same spot, here’s a bit on why these utility-scale solar power plants are often built in 20 MW phases:


This next one is about a pretty interesting tidbit. It features some ancient graves that have been protected by the Ukrainian government. Right after this video was recorded, I found out the full story here. Apparently, an archaeologist some time back was exploring one of these graves, and he and his whole team died. No one could determine the cause of the deaths, so the government decided to protect the graves as "sacred" (or something like that). There are several such graves fenced off in different parts of these solar fields. Here's the video for a look at one of them:

Here's another quite interesting tidbit from the young man who drove me through a portion of Perovo Solar Park on a 4-wheeler. In this one, he notes that there was a blackout last winter across much of Crimea, but that Simferopol's lights stayed on thanks to this solar power plant:

Next are just a few fun videos. This first one features the “Have a Sunny Day!” sign at the entrance to the 40 MW section of the Perovo Solar Power Station:

And these next two are the videos I shared previously, in which I was riding on the back of a 4-wheeler through a segment of the solar farm. I absolutely love these videos and have even made several of my non-cleantech friends watch them. Check 'em out if you haven't seen them:

Lastly, for one final smile, here’s a Ukrainian leader in energy efficiency (who was also on this tour with me) commenting on the local landscape:

Below are a bunch of photos of the Perovo Solar Power Station and my tour there. Some were taken by me and some were taken by an Activ Solar employee. If you’d like a higher resolution of any of these or if you’d like to know who to credit for any of them, just let me know!

*Activ Solar supported my trip to and around Ukraine, but the trip came with no stipulations as far as content on CleanTechnica. I simply asked the questions that interested me and am sharing the content that I think is valuable to a broad, global audience. Aside from Activ Solar, greencubator and Alternativa also supported my trip to and around Ukraine.




About the Author

Zachary Shahan is the director of CleanTechnica, the most popular cleantech-focused website in the world, and Planetsave, a world-leading green and science news site. He has been covering green news of various sorts since 2008, and he has been especially focused on solar energy, electric vehicles, and wind energy for the past four years or so. Aside from his work on CleanTechnica and Planetsave, he's the Network Manager for their parent organization, Important Media, and he's the Owner/Founder of Solar Love, EV Obsession, and Bikocity. To connect with Zach on some of your favorite social networks, go to ZacharyShahan.com and click on the relevant buttons.



Read More

Leading Edge Designs on the Grid for Southern California Schools


By Hazel A. Tamano

The smart grid technology industry, which spans energy-efficient lighting, solar energy, wind energy, geothermal energy, hydroelectricity, marine energy, and biomass, generated $33 billion in global revenue as of 2012, and it's growing. Smart grid technologies are promising not only for what they add to the energy systems already in place, but also because they deliver a cleaner and more efficient environment at reduced cost. According to Bob Lockhart of Navigant Research, "smart grid technologies improve the reliability and efficiency of the power grid via the application of modern IT capabilities alongside, or in place of, existing utility assets and networks."

As demand for a more energy-efficient planet becomes the norm rather than a trend, LEDs (light-emitting diodes) are becoming the energy-efficient light of choice. Leading Edge Designs*, a direct manufacturer and distributor of LEDs, is contributing to the smart grid in more ways than simply being green. It has started the "Brighter Days Program" to assist economically distressed school districts in the Los Angeles and San Diego metro areas. Because of state- and local-level budget cuts, schools often resist smart grid technologies; the funds simply are not there. To reduce the upfront cost for schools to go greener, Leading Edge Designs is setting aside 5% of every purchase made by clients in this program to be applied to the cost of LED lights for participating Southern California schools. Its commercial customers will also receive 5% off orders placed by year-end.

This is a great program because it reduces upfront costs for cash-strapped school districts while locking in maintenance and energy savings for years to come. Bill Motsko, CEO of Leading Edge Designs, says, "this is a great program because, with corporate participation, we can reduce upfront costs, and the savings on the school's operating budget per year will have a notable effect. This will enable school districts to refocus and redirect these dollars as needed for projects such as enhancing education programs, purchasing new equipment, helping to keep teachers employed, or helping to fund music and sport programs." For example, LED lighting can save one school enough in energy and maintenance expenses in a single year to pay, perhaps, a teacher's salary.
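As a rough sanity check on that claim, here is a back-of-envelope sketch in Python. Every figure is an illustrative assumption, not a Leading Edge Designs number.

# Rough check on the "pay a teacher's salary" claim. Every number here
# is an illustrative assumption, not a Leading Edge Designs figure.

fixtures       = 1500      # assumed fixture count for a mid-size campus
watts_saved    = 60        # assumed savings per fixture (e.g. 100W -> 40W)
hours_per_year = 3000      # assumed annual operating hours
rate_per_kwh   = 0.15      # assumed commercial electricity rate, $/kWh
maint_savings  = 10_000    # assumed annual relamping and labor savings, $

energy_savings = fixtures * watts_saved * hours_per_year / 1000 * rate_per_kwh
print(f"Energy savings: ${energy_savings:,.0f}/yr")                   # $40,500/yr
print(f"Total savings:  ${energy_savings + maint_savings:,.0f}/yr")   # $50,500/yr

Under these assumptions the annual savings land around $50,000, which is indeed in the neighborhood of an entry-level teacher's salary.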

Smart grid advancements such as LEDs are not only green; they stop us wasting money powering inefficient, outdated systems. Reducing energy use through better technology delivers roughly 4-5 times more value per dollar than producing new green energy, meaning a kilowatt-hour saved typically costs far less than a kilowatt-hour generated. Energy conservation also lowers demand on an already "stressed" infrastructure.
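One way to see the "reduction beats production" point is to compare the cost of saving a kilowatt-hour with the cost of generating one. The sketch below uses assumed, illustrative costs, not measured figures.

# Compare the cost of a kWh saved (via an LED retrofit) with the cost
# of a kWh generated. All costs are assumed for illustration.

retrofit_cost  = 75.0      # $ per fixture, assumed installed cost
kwh_saved_life = 3000      # assumed kWh saved over the fixture's life
cost_saved_kwh = retrofit_cost / kwh_saved_life      # $0.025/kWh

lcoe_new_gen = 0.10        # $/kWh, assumed cost of new generation

print(f"Saved:     ${cost_saved_kwh:.3f}/kWh")
print(f"Generated: ${lcoe_new_gen:.3f}/kWh")
print(f"Ratio:     {lcoe_new_gen / cost_saved_kwh:.0f}x cheaper to save")

With these assumptions, saving energy is about four times cheaper than generating it, consistent with the 4-5x figure above.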

Leading Edge Designs specializes in commercial-grade new and retrofit LEDs, so its products fit the cause. Retrofit LEDs are great for schools because they can use existing fixtures and wiring, delivering the benefits without adding another capital expense. School lighting projects may include theatres, administration and facilities buildings, classrooms, stadiums, sports fields and courts, parking lots, and other indoor and outdoor lighting.

Leading Edge Designs is a San Diego green-tech company that solves clients' lighting challenges by designing, fabricating and delivering LED lighting products built on the latest technologies to maximize efficiency and value. It also uses recycled materials and designs its products to be recyclable after their useful life, which typically ranges from 11 to 22+ years. Because lighting technology has advanced, yielding greater efficiency and longer product lifespans, buying bulbs off the shelf no longer provides the best value. Tailoring products to specific applications lets customers fine-tune their lighting, achieving better-quality light where and when it is needed, realizing significant energy savings and lowering maintenance costs. Leading Edge Designs brings this customized approach to its clients while remaining cost-competitive and delivering the highest quality possible.

According to the U.S. Department of Energy, "if all U.S. businesses and institutions conducted cost-effective energy upgrades, they could reduce their average energy use by 25%. The total cost of this work would be more than $100 billion, which would be offset as a result of lower energy bills." Lighting is a major component of creating energy-efficient buildings, and Leading Edge Designs, as a model green business, is helping companies save 50-80% or more of their lighting energy costs compared with traditional lighting. This is a dramatic savings for organizations with limited funding, and it will benefit the businesses and agencies that install these lights for years to come.

*This article was supported by Leading Edge Designs.

References:
1. Bureau of Labor Statistics, Green Goods and Services Program, www.bls.gov/ggs, July 2013
2. CleanTechnica, www.cleantechnica.com, July 2013
3. Clean Tech San Diego, www.cleantechsandiego.org, July 2013
4. Earth Easy for Sustainable Living, www.eartheasy.com, July 2013
5. LA Cleantech Cluster, www.laedc.org/cleantech, July 2013
6. Leading Edge Designs, www.led-ltd.us, July 2013
7. The Smart Grid, www.smartgrid.gov, July 2013

Image Credit: Leading Edge Designs




About the Author

This author is many, many people all at once. In other words, we publish a number of guest posts from experts in a large variety of fields, and this is our contributor account for those special people. :D



Read More