Siva Power's Thin Film Cost Target of 28 Cents per Watt Is Very Ambitious. But Not Impossible

Siva Power, the CIGS thin film company that re-emerged last year with a new team and approach to the market, is out with an ambitious cost roadmap for its modules.

Before getting into the details of Siva's plan, it's worth remembering one important number related to CIGS thin film: twenty.

That's the approximate number of companies attempting to scale the technology that have either gone bankrupt, gone silent, shifted strategy or have been acquired since 2009. Very few of the venture capitalists who shoveled billions of dollars into the technology ever saw their money again.

Brad Mattson, the CEO of Siva Power, said he's learned from the mistakes of companies that tried to scale too quickly or overestimated their ability to reduce costs. In 2011, when Siva Power was still called Solexant, Mattson took the helm at the company, ditched plans to build a 100-megawatt factory, and focused on R&D. After years of experimenting with different materials and production processes, Siva eventually settled on co-evaporated CIGS on large glass substrates.

"Historically, co-evaporation has yielded the highest overall efficiency of all the CIGS deposition processes" compared to roll-to-roll, sputtering and other techniques, said Shyam Mehta, GTM Research's lead upstream solar analyst.

Mattson called the technology "a gift of physics" that offers the highest thin film efficiencies and the fastest production process. With $60 million in venture funding, Siva plans to build a 300-megawatt plant and eventually produce modules for 28 cents per watt over the next four years, assuming the facility is ever actually built.

Below is the company's newly released cost roadmap, which Mehta notes is "essentially a bunch of numbers in a spreadsheet." Siva doesn't even have its pilot line fully built yet.

[Chart: Siva Power's module cost roadmap. Source: PRNewsFoto/Siva Power]

The projections factor in labor, energy and water requirements, equipment needs, materials and overhead. Siva says its first 300-megawatt production line will produce CIGS modules at 40 cents per watt. After another two years of operation, the company believes it can get all-in costs down to 28 cents per watt.

The Department of Energy has set a production cost target of 50 cents per watt by 2030. In March, Jinko Solar reported that it was producing crystalline silicon modules for 48 cents per watt. According to GTM Research's analysis, top Chinese producers could be making solar modules for 36 cents per watt by 2017.

Mattson told GTM's Eric Wesoff that he believes those numbers are "completely unsustainable" because they don't factor in large subsidies from the Chinese government.

So is Siva's plan "sustainable," considering that the company hasn't even built out its first pilot production facility? Mattson continues to talk about the "gigawatt era" in solar. But only one CIGS company, Solar Frontier, has gotten there so far, and Siva isn't anywhere close.

"It is not outside the realm of possibility, assuming they do hit their goals for scale, efficiency and manufacturing yield, which are undoubtedly ambitious," said Mehta.

The most obvious issue is timing. Due to technical problems related to deposition, CIGS thin film producers have historically underestimated their time to scale. "History has shown that almost all thin film startups have taken significantly longer to scale up" than projected, said Mehta.

The biggest technical challenge for Siva will be ensuring uniform deposition on the large glass substrates while keeping production levels high. (Applied Materials also used large glass substrates for amorphous silicon thin film. That ended poorly, mostly due to efficiency issues.)

"This is where it's fair to be skeptical, even though Siva's team is world-class," said Mehta.

For now, Siva's cost structure will sit in a spreadsheet untested until the firm builds out its first production line.

Greentech Media (GTM) produces industry-leading news, research, and conferences in the business-to-business greentech market. Our coverage areas include solar, smart grid, energy efficiency, wind, and other non-incumbent energy markets. For more information, visit greentechmedia.com, follow us on Twitter: @greentechmedia, or like us on Facebook: facebook.com/greentechmedia.

Authored by:

Stephen Lacey

Stephen Lacey is a Senior Editor at Greentech Media, where he focuses primarily on energy efficiency. He has extensive experience reporting on the business and politics of cleantech. He was formerly Deputy Editor of Climate Progress, a climate and energy blog based at the Center for American Progress. He was also an editor/producer with Renewable Energy World. He received his B.A. in ...


Read More

What Big Oil Isn't Telling You About California's Clean Energy Law

Simon Mui, Scientist, Clean Vehicles and Fuels, San Francisco

We’ve seen this play before: the oil industry is once again trying to scare California into giving it a free pass on pollution.

Last week, Big Oil once again cranked up its campaign to block the next critical step in California’s widely supported and successful clean energy law (AB 32), backing last-minute radical changes to a bill by Assemblymember Henry Perea (D-Fresno) that would exempt the oil industry from carbon pollution limits already in effect for California’s other major emitters.

Transportation is the single-largest source of pollution in the state, representing 40 percent of emissions. By playing by different rules, the oil industry would extend its free rein to pollute our air, harm our health, and threaten our most vulnerable communities. California’s ability to reduce its carbon pollution to meet AB 32 requirements would also be threatened.

How is Big Oil trying to convince legislators to vote for a bill to let them pollute freely? By attempting to scare them using the same playbook they’ve employed for decades: issuing doom-and-gloom warnings of fuel cost impacts while hiding behind thinly-veiled front groups such as “CARE” and “FedUpAtThePump.”

What Big Oil doesn’t want you to know

California’s enormous progress toward a cleaner energy future should not be jeopardized on the basis of Big Oil’s misleading math, which touts the oil industry’s costs to clean up its carbon pollution while excluding the enormous consumer fuel savings from AB 32. Here’s what the oil industry ISN’T telling you about our state’s clean energy law.

  1. Thanks to AB 32, our vehicles are becoming far more efficient, we’ve got more ways to run them, like electricity, and our public transit options are increasing. Bottom line: less money out of our pockets for transportation fuels. NRDC analysis shows that with AB 32 the average household will save over $380 in fuel expenditures in 2015, compared to a scenario without California’s clean transportation measures. Those savings are on top of the oil industry’s costs of acquiring pollution permits. And these fuel savings grow to $850 by 2020, and a whopping $1,560 annually by 2025.
  2. AB 32 is putting downward pressure on fuel prices over time by significantly reducing California’s fuel demand, expanding competition, and giving consumers more transportation choices. The program is estimated to slash fuel demand by 14% in 2015, growing to a 22% reduction by 2020, compared to a scenario without California’s clean transportation measures.
  3. Money spent by the oil industry to purchase pollution permits will be invested right back into our economy to lower consumer fuel expenditures through more efficient vehicles and more walkable and transit-friendly neighborhoods. State law requires that those investments be targeted in low-income communities that suffer disproportionately from the impacts of pollution. In addition, two bills with widespread support, the “Charge Ahead California Initiative” (SB 1275, de León) and the “Clean Truck, Bus, and Off-Road Technology Program” (SB 1204, Lara and Pavley), would place zero and near-zero emission vehicles in those impacted communities, incentivize replacement of old gas-guzzling clunkers, and help families gain better access to financing for efficient vehicles that will further lower monthly fuel and auto costs.
  4. Oil companies want to blame others for price spikes while they quietly rake in massive profits, but we know the truth. The main drivers of changes in gasoline prices are volatility in global oil markets, refinery outages and planned maintenance, and seasonal gasoline demand. For example, in August of 2012, the Chevron Richmond fire contributed to a jump of 30 cents per gallon and sent thousands of residents to hospitals and health clinics. A subsequent outage in ExxonMobil’s Torrance refinery contributed to a jump of 50 cents in October 2012. Both incidents led to windfall profits for the oil industry while hurting our pocketbooks. The best way for policymakers to protect California from this gas-price roller coaster is through supporting AB 32 clean transportation measures.

Don’t be fooled

Let’s not be fooled. Big Oil’s campaign against AB32’s pollution limits has nothing to do with concerns about consumers or low-income families. It’s about protecting their industry’s market share and profits, blocking competition, and exempting themselves from making investments to cut their pollution.

Thankfully, more than 30 California legislators have already seen through the charade, noting in a letter in response to Assemblymember Perea’s proposal:

"AB 32 is rooted in the principle that business as usual is unsustainable when it comes to reliance on fossil fuels. California’s most disadvantaged communities...are already bearing the brunt of the impacts: a historic drought, wildfires of unprecedented strength and 12 million people breathing air that does not meet federal health standards."

"Inaction is not an option. If we are serious about reducing fuel costs and righting the public health wrongs facing our constituents, we must wean ourselves off fossil fuels by investing in cleaner transportation alternatives. A fundamental redesign of AB 32 that allows oil companies to play by different rules than other industries would not only unacceptably delay action to reduce dangerous climate pollution, but could also disadvantage those industries that have already made investments to comply with the law."

Over the coming months, my colleagues and many other Californians, all of whom are part of the large majority supporting AB 32, will be working to ensure the oil industry’s latest canard won’t forestall our clean energy progress.

* For further details on the estimates used for the figures, see Technical Notes.pdf.

Read More

Opportunity to Use Science to Establish Radiation Standards

EPA Radiation Standards

The Environmental Protection Agency (EPA) has issued an Advanced Notice of Proposed Rulemaking (ANPR) to solicit comments from the general public and affected stakeholders about 40 CFR 190, Environmental Radiation Protection Standards for Nuclear Power Operations.

The comment period closes on August 3, 2014. The ANPR page includes links to summary webinars provided to the public during the spring of 2014, with both presentation slides and recorded audio, including questions and answers. This is an important opportunity for members of the public, nuclear energy professionals, nuclear technical societies, and companies involved in various aspects of the nuclear fuel cycle to provide comments about the current regulations and recommendations for improvements.

The existing version of 40 CFR 190, issued on January 13, 1977 during the last week of the Ford Administration, established a limit of 0.25 mSv/year whole-body dose and 0.75 mSv/year to the thyroid for any member of the general public from radiation coming from any part of the nuclear fuel cycle, with the exception of uranium mining and long-term waste disposal. Those two activities are covered under different regulations. Naturally occurring radioactive material is not covered by 40 CFR 190, nor are exposures from medical procedures.

40 CFR 190 also specifies annual emissions limits for the entire fuel cycle for three specific radionuclides for each gigawatt-year of nuclear-generated electricity: krypton-85 (50,000 curies), iodine-129 (5 millicuries), and Pu-239 and other alpha emitters with half-lives greater than one year (0.5 millicuries).

It is important to clarify the way that the US federal government assigns responsibilities for radiation protection standards. The Nuclear Regulatory Commission (NRC) has the responsibility for regulating individual facilities and for establishing radiation protection standards for workers, but the EPA has a role and an office of radiation protection as well.

The Atomic Energy Act of 1954 initially assigned all regulation relating to nuclear energy and radiation to the Atomic Energy Commission. However, as part of the President’s Reorganization Plan No. 3 of October 1970, President Nixon transferred responsibility for establishing generally applicable environmental radiation protection standards from the Atomic Energy Commission (AEC) to the newly formed Environmental Protection Agency (EPA).

…to the extent that such functions of the Commission consist of establishing generally applicable environmental standards for the protection of the general environment from radioactive material. As used herein, standards mean limits on radiation exposures or levels or concentrations or quantities of radioactive material, in the general environment outside the boundaries of locations under the control of persons possessing or using radioactive material.

(Final Environmental Impact Statement, Environmental Radiation Protection Requirements for Normal Operations of Activities in the Uranium Fuel Cycle, p. 18)

Before the transfer of environmental radiation responsibilities from the AEC to the EPA and until the EPA issued the new rule in 1977, the annual radiation dose limit for a member of the general public from nuclear fuel cycle operations was 5 mSv, 20 times higher than the EPA’s limit.

The AEC had conservatively assigned a limit of 1/10th of the 50 mSv/year applied to occupational radiation workers, which it had, in turn, conservatively chosen to provide a high level of worker protection from the potential negative health effects of atomic radiation.

The AEC’s occupational limit of 50 mSv was less than 1/10th of the previously applied “tolerance dose” of 2 mSv/day, which worked out to an annual limit of approximately 700 mSv.

Aside: After more than 100 years of human experience working with radiation and radioactive materials, there is still no data proving negative health effects for people whose exposures have been maintained within the above tolerance dose, which was initially established for radiology workers in 1934. End Aside.

From the 1934 tolerance dose to the EPA limit specified in 1977 and still in effect, requirements were tightened by a factor of 2800. The claimed basis for that large conservatism was the lack of data at low doses, leading to uncertainty about radiation health effects on humans.

The only measured human health effects were determined from the acute doses greater than 100 mSv received by the lowest exposed portion of the population of atomic bomb survivors. Based on data from the Life Span Study of atomic bomb victims, which supported a linear relationship between dose and effect, the National Academy of Sciences committee on the Biological Effects of Ionizing Radiation (BEIR) recommended a conservative assumption that the linear relationship continued to exist all the way down to a zero dose, zero effect origin.

For the radionuclide emissions limits, the EPA chose numbers that stretch the linear no-threshold dose assumption by applying it to extremely small doses spread to a very large population.

The Kr-85 standard is illustrative of this stretching. It took several hours of digging through the 240-page final environmental impact statement and the nearly 400-page collection of comments and responses to determine exactly what dose the EPA was seeking to limit and how much it thought the industry should spend to achieve that protection.

The EPA determined that allowing the industry to continue its established practice of venting Kr-85 and allowing that inert gas to disperse posed an unacceptable risk to the world’s population.

It calculated that if no effort was made to contain Kr-85, and the US industry grew to its projected 1000 GW of electricity production by 2000, an industry with full recycling would release enough radioactive Kr-85 gas to cause about 100 cases of cancer/year.

The EPA’s calculation was based on a world population of 5 billion people exposed to an average of 0.0004 mSv/year per individual.
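For context, the arithmetic implied by those two figures is a simple collective-dose calculation under the linear no-threshold assumption. The risk coefficient used below, roughly 0.05 fatal cancers per person-sievert, is the commonly cited nominal value rather than a number quoted from the EIS itself:

$$5\times10^{9}\ \text{people}\times 0.0004\ \tfrac{\text{mSv}}{\text{yr}} = 2{,}000\ \tfrac{\text{person-Sv}}{\text{yr}},\qquad 2{,}000\ \tfrac{\text{person-Sv}}{\text{yr}}\times 0.05\ \tfrac{\text{cancers}}{\text{person-Sv}}\approx 100\ \tfrac{\text{cancers}}{\text{yr}}$$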

At the time that the analysis was performed, the Barnwell nuclear fuel reprocessing facility was under construction and nearly complete. It had not been designed to contain Kr-85. The facility owners provided an estimate to the EPA that retrofitting a cryogenic capture and storage capability for krypton-85 would cost $44.6 million.

The EPA finessed the exceedingly large cost for tiny benefit by saying that the estimated cost for the Barnwell facility was not representative of what it would cost other facilities that were designed to optimize the cost of Kr-85 capture. It based that assertion on the fact that Exxon Nuclear Fuels was in a conceptual design phase for a reprocessing facility and had determined that it might be able to include Kr-85 capture for less than half of the Barnwell estimate.

GE, the company that built the Midwest Fuel Recovery Plant in Morris, Illinois, provided several comments to the EPA, including one about the poor cost-benefit ratio of attempting to impose controls on Kr-85.

Comment: The model used to determine the total population dose should have a cutoff point (generally considered to be less than 0.01 mSv/year) below which the radiation dose to individuals is small enough to be ignored.
…
In particular, holdup of krypton-85 is not justified since the average total body dose rate by the year 2000 is expected to be only 0.0004 mSv/year.

Response: Radiation doses caused by man’s activities are additive to the natural radiation background of about 0.8-1.0 mSv/year [note: the actual level at that time, as indicated by other parts of the documents, was 0.6-3.0 mSv/yr] whole-body dose to which everyone is exposed. It is extremely unlikely that there is an abrupt discontinuity in the dose-effect relationship, whatever its shape or slope, at the dose level represented by the natural background, that would be required to justify a conclusion that some small additional radiation dose caused by man’s activities can be considered harmless and may be reasonably ignored.

For this reason, it is appropriate to sum small doses delivered to large population groups to determine the integrated population dose. The integrated population dose may then be used to calculate potential health effects to assist in making judgments on the risk resulting from radioactive effluent releases from uranium fuel cycle facilities, and the reasonableness of costs that would be incurred to mitigate this risk.

Existing Kr-85 rules are thus based on collective doses and a calculation of risks that is now specifically discouraged by both national (NCRP) and international (ICRP) radiation protection bodies. They are also based on the assumption of a full-recycle fuel system and ten times as much nuclear power generating capacity as exists in the US today.

There are many more facets of the existing rule that are worthy of comment, but one more worth mentioning today is the concluding paragraph of the underlying policy for radiation protection, which is found on the last page of the final environmental impact statement.

The linear hypothesis by itself precludes the development of acceptable levels of risk based solely on health considerations. Therefore, in establishing radiation protection positions, the Agency will weigh not only the health impact, but also social, economic, and other considerations associated with the activities addressed.

In 1977, there was no consideration given to the fact that any power that was not generated using a uranium or thorium fuel cycle had a good chance of being generated by a power source producing a much higher level of carbon dioxide. In fact, the EPA in 1977 had not even begun to consider that CO2 was a problem. That “other consideration” must play a role in any future decision making about radiation limits or emission limits for radioactive noble gases.


Note: Dose rates from the original documents have been converted into SI units.

The post Opportunity to use science to establish radiation standards appeared first on Atomic Insights.

Photo Credit: EPA, Science, and Radiation/shutterstock

Read More

NY High Court's Local Ban Decision is No Basis for Greenlighting Fracking

NY Fracking Governor Choices

Kate Sinding, Senior Attorney, New York City

When the New York State Court of Appeals ruled last week that municipalities have the right to use their zoning codes to ban fracking, my reaction was one of intense relief and celebration. But I immediately became concerned that the pro-fracking elements would try to salvage something from their defeat by claiming that the decision somehow provided a justification for Governor Cuomo to greenlight fracking in the state. A column over the weekend by Fred LeBrun in the Albany Times Union made pretty much just the point I had feared. But we still think Governor Cuomo will do the right thing: Decide based on science. Because the court’s decision doesn’t change the basic fact that the limited science that does exist on fracking’s health impacts raises serious reasons for concern.

On June 30th, the state’s highest court ruled jointly on two lawsuits, one brought by an oil company and the other by a dairy farmer. In those cases, NRDC’s Community Fracking Defense Project filed an amicus curiae (friend of the court) brief. We supported the towns of Dryden and Middlefield, who argued that state law does not pre-empt them from reflecting what they believe to be the will of their constituents. The people of their towns made plain their fear of the potential harmful environmental and health impacts of fracking. So they enacted zoning amendments that prohibit use of this risky drilling technique within their borders.

And the court found that they had the legal right to do that, which should have been reason for unmitigated joy. But I suspected that the oil and gas industry, with so much at stake, would try to use this decision to its advantage. What I worried was that the oil companies would make an argument something like this: “OK, let’s look at the bright side of this decision. Now that towns can opt out of fracking, Governor Cuomo should approve it statewide, then let each town decide for itself. Some will say to us, go away. Their loss, really. Others will rightly decide that they need the jobs and the cash for leases, and say: Welcome to our town.” This argument is seductive, but utterly specious.

In his column, LeBrun offers a version of that line of thinking. “Among the many charms of the Court of Appeals decision is that it gives the governor political cover to break the impasse over hydrofracking and get on with it,” he wrote. “New York can look forward to the toughest regulations in the country, and only a lunatic would now see a slippery slope by allowing a closely monitored pilot project. For now, the best of all possible outcomes is for hydrofracking to occur only where local communities want it, and be watched like a hawk.”

Happily, the governor has shown no inclination to simply “get on with it” on fracking. Rather than make this crucial decision in a calculatedly political way, he is wisely allowing the science to unfold at the deliberate pace that good policy demands. That’s good news, because the court’s decision changed nothing about the state of the science and the need to ensure all New Yorkers are protected, even if their town boards haven't voted for a ban.

The state’s Department of Environmental Conservation and the Department of Health have been looking carefully at fracking for several years. The Health Department is evidently still some distance from concluding its examination of the potential health impacts. Last month, I wrote about an upstate senator who suggested that the decision will come early in 2015. But we fully expect the governor not to decide anything until that work is done. We have always maintained that sound science should decide the outcome, not politics. That's not a "lunatic" position; it's a pro-public health one.

Not surprisingly, the extraction companies reacted harshly to the court’s decision. It’s not difficult to understand why. Fearing the possibility that the state will eventually issue permits for fracking, more and more municipalities (nearly 180 at last count) have decided to block it by using zoning, which is a community’s power to shape land use, reasonably balancing the interests of land owners and the needs of the broader community. So they felt the need to stop this tsunami of zoning changes before it swept the state. That led to the lawsuits.

But the court made clear that the entire community, made up not only of those who own large, drillable expanses of land but also of those who don’t, can decide, through its local elected representatives, whether fracking is a good idea or not. It’s an excellent decision, and I encourage you to read my NRDC colleague Daniel Raichel’s blog for a brief and astute analysis.

As the fracking-now folks adopt the argument I mentioned above (the what-the-heck-governor-approve-fracking-and-let-towns-decide-one-by-one argument), our role as advocates is to push back and say: Not so fast! Our governor is way too smart to be snookered by that obvious ploy. The state still has a big role in deciding on the safety or lack of safety of fracking, whether local municipalities ban it or not.

The governor should, and we strongly believe he will, stand fast and do what he has been doing: Wait patiently for the science, while resisting calls to simply “get on with it” in deciding this issue.

Photo Credit: New York Fracking Policy/shutterstock

Read More

Energy Efficiency Myths, Busted!

About the panel

Scott Edward Anderson is a consultant, blogger, and media commentator who blogs at The Green Skeptic. More »

Christine Hertzog is a consultant, author, and a professional explainer focused on Smart Grid. More »

Elias Hinckley is a strategic advisor on energy finance and energy policy to investors, energy companies and governments More »

Gary Hunt is an Executive-in-Residence at Deloitte Investments with extensive experience in the energy & utility industries. More »

Jesse Jenkins is a graduate student and researcher at MIT with expertise in energy technology, policy, and innovation. More »

Kelly Klima is a Research Scientist at the Department of Engineering and Public Policy of Carnegie Mellon University. More »

Jim Pierobon helps trade associations/NGOs, government agencies and companies communicate about cleaner energy solutions. More »

Geoffrey Styles is Managing Director of GSW Strategy Group, LLC and an award-winning blogger. More »

Read More

Making a Wire-Free Future

wireless energy concept

This diagram shows how WiTricity Corp.'s highly resonant coupling works. When a highly resonant transmitting copper coil, connected to an AC power source (top, left), is tuned to the same frequency as a highly resonant receiving copper coil (bottom, left), the two coils exchange energy efficiently over distances via the magnetic field (right). Courtesy of WiTricity Corp.

WiTricity’s wireless charging technology is coming soon to mobile devices, electric cars, and more.

Rob Matheson | MIT News Office

More than a century ago, engineer and inventor Nikola Tesla proposed a global system of wireless transmission of electricity, or wireless power. But one key obstacle to realizing this ambitious vision has always been the inefficiency of transferring power over long distances.

Near the end of the last decade, however, a team of MIT researchers led by Professor of Physics Marin Soljacic took definitive steps toward more practical wireless charging. First, in 2007, the team wirelessly lit a 60-watt light bulb from eight feet away using two large copper coils, with similarly tuned resonant frequencies, that transferred energy from one to the other over the magnetic field. Then, in 2010, they shrunk the coils down and significantly increased the efficiency of the system, noting future applications in consumer products.

Now, this “wireless electricity” (or “WiTricity”) technology, licensed through the researchers’ startup, WiTricity Corp., is coming to mobile devices, electric vehicles, and potentially a host of other applications.

The aim is to forge toward a “wire-free world,” says Soljacic. Primarily, this means consumers need not carry wires and power bricks. But it could also lead to benefits such as smaller batteries and less hardware, which would lower costs for manufacturers and consumers.

“It’s probably a dream of any professor at MIT to help change the world for a better place,” says Soljacic, a WiTricity co-founder who now serves on its board of directors. “We believe wireless charging has a potential to do that.”

He is not alone. Last month, WiTricity signed a licensing agreement with Intel to integrate WiTricity technology into computing devices powered by Intel. Back in December, Toyota licensed WiTricity technology for a future line of electric cars. Several more publicized and unpublicized companies have recently joined in the licensing parade for this technology, including Thoratec for its implantable ventricular assist devices, and TDK for wireless electric vehicle-charging systems. There’s even talk of a helmet powered wirelessly via backpack, specifically for military applications.

At present, WiTricity technology charges devices at around 6 to 12 inches with roughly 95 percent efficiency: 12 watts for mobile devices and up to 6.6 kilowatts for cars. But, with growing research and development, the company is increasing distance, scale, and efficiency. It has also developed repeaters: passive devices that extend the distance of the power transfer. These can be made in a wide variety of shapes and can be embedded in a carpet to “hop” the power across a room.

iphone wireless charger

WiTricity Corp. recently unveiled a design for a smartphone and wireless charger powered by its technology. The charger can charge two phones simultaneously, and can be placed on top of a table or mounted underneath a table or desk. Courtesy of WiTricity Corp.

The WiTricity technology can charge an electric car, with the vehicle parked about a foot above the transmitting pad. Courtesy of WiTricity Corp.

Stronger coupling

Similar wireless charging technologies have been around for some time. For instance, traditional induction charging, which uses an electromagnetic field to transfer energy between two coils, is used in transformers and wireless toothbrushes. In the past two years, there’s also been an increase in wireless cell phone charging pads based on induction.  

“These work well, but only over very short distances, so they’re nearly touching,” Soljacic says. “They become dramatically inefficient when the distance increases.”

Lasers can also move energy between two points, such as two satellites. But this requires an uninterrupted, continuous path between the transmitter and the receiver, which “is obviously not ideal for consumer products,” Soljacic says.

WiTricity’s system of transmitters and receivers with magnetic coils, on the other hand, “efficiently transfers power over longer distances,” says CEO Alex Gruzen ’84, SM ’86. “It can also charge through materials such as wood or granite, allow freedom to move the devices around, and charge several devices at once.”

To make the system more efficient, WiTricity tunes the coils to achieve a strong, highly resonant electromagnetic coupling. This is similar to a tuning fork vibrating when exposed to a sound of the right frequency, or a radio antenna tuning into a single station out of hundreds.

The concept took shape in the early 2000s, when Soljacic awoke at 3 a.m. to the beeping of his cell phone running out of battery life. Frustrated, and standing half awake, he contemplated ways to harness power from all around to charge the phone.

At the time, he was working on various photonics projects (lasers, solar cells, and optical fiber) that all involved a phenomenon called resonant coupling. “The underlying physics could be easily applied to power transfer,” he says.

A new category of magnetic resonance

Seeing use for consumer devices, Soljacic and a team of five MIT researchers, including physics professors Peter Fisher and John Joannopoulos, published a proof-of-concept experiment in Science in 2007, and founded WiTricity that same year.

In the experiment, the researchers used two copper coils, about two feet across, each a self-resonant system. One transmitting coil was connected to an AC power supply, while the other was connected to a 60-watt light bulb.

The transmitter emanated a magnetic field, oscillating at megahertz frequencies, which the receiver matched, ensuring a strong coupling between the units and weak interaction with the rest of the environment, including nonmetallic materials and humans. In fact, they demonstrated that they could light the bulb, at roughly 45 percent efficiency, with all six researchers standing in between the two coils.

Gruzen uses the following analogy: A room is packed with 100 wine glasses, each filled with a different level of wine to ensure a different resonant frequency. “If an opera singer belts out a note inside that room, the glass with the corresponding frequency accumulates enough energy to shatter, but none of the other glasses will resonate enough to break,” he says.

A 2010 paper published in Applied Physics Letters by Soljacic and colleagues made another breakthrough: They found that when adding more receiver coils, power transfer efficiency climbs by more than 10 percent. In that experiment, they used larger transmitting coils, but receiving coils that were only a foot across, resulting in a power output of 50 watts from several feet away.

“This enabled the development of a whole new category of magnetic resonance,” Gruzen says. From there, the company focused on finding the optimum design of the coils and electrical control systems for commercial applications.

Wireless charging: An expectation

These days, Gruzen sees wireless charging as analogous to the evolution of a similar technology, WiFi, which he witnessed in the early 2000s as senior vice president of the global notebook business at Hewlett Packard.

At the time, WiFi capabilities were rarely implemented into laptops; this didn’t change until companies began bringing wireless Internet access into hotel lobbies, libraries, airports, and other public places.

Now, having established a standard for wireless charging of consumer devices with the A4WP (Alliance for Wireless Power) known as Rezence, WiTricity aims to be the driving force behind wireless charging. Soon, Gruzen says, it will be an expectation â€" much like WiFi.

“You can have a charging surface wherever you go, from a kitchen counter to your workplace to airport lounges and hotel lobbies,” he says. “In this future, you’re not worried about carrying cords. Casual access to topping off power in your devices just becomes an expected thing. This is where we’re going.”

With an expected rise of wireless charging, one promising future application Soljacic sees is in medical devices, especially implanted ventricular assist devices (or “heart pumps”) that support blood flow. Currently, a patient who has experienced a heart attack or weakening of the heart has wires running from the implant to a charger, which means risk of infection.

“In our case, a patient could lie on the bed and, while he or she is sleeping, our technology could charge the device from a distance,” Soljacic says. “We expect to have much more of these embedded electronic devices in people over the next decade or so.”

Reprinted with permission of MIT News

Read More

Energy Quote of the Day: 'Texas & North Dakota Now Account for Almost Half of Total US Oil Production'

Prices Help Drive Increase of Midwest Oil Exploration

The EIA’s latest Short-Term Energy Outlook is out, and US crude oil output continues to soar, while natural gas prices are expected to climb back toward the $5 per million BTU level this year before pulling back slightly in 2015.

EIA Administrator Adam Sieminski highlighted just how prolific US crude oil production has been in recent years:

“Texas and North Dakota now account for almost half of total U.S. oil production, as monthly oil output in Texas recently topped 3 million barrels per day for the first time since 1977 and North Dakota’s oil production hit a record 1 million barrels per day.” - Adam Sieminski

Put in context, total liquids production in Kuwait and Iraq individually was about 3.1 mmb/d in 2013, according to the 2014 BP Statistical Review of World Energy. 

Other highlights from the report include rapid natural gas inventory replacement, which has been putting downward pressure on prices over the past few weeks; even so, increasing consumption is expected to boost prices this year compared with last year. However, those price increases are expected to reduce power-sector gas consumption this year, after which lower prices and coal plant retirements could increase gas consumption in the power sector in 2015.

Read the full report here.

Authored by:

Jared Anderson

Jared Anderson, Managing Editor at Breaking Energy, covered international oil and natural gas market fundamentals as an Analyst then Senior Analyst in the Research & Advisory division at Energy Intelligence Group. Earlier in his career, Jared spent several years working in the environmental consulting industry. He holds a Master's degree in international relations with a focus on energy from ...


Read More

Top 5 EU Energy Issues: All You Need to Know for the Italian Presidency [VIDEO]


Read More

Seeking Consensus on the Internalized Costs of Energy Storage via Batteries

storage and battery internal cost

Important!

Please keep this discussion focussed by following the guidelines at the bottom of this article. In particular, all comments comparing energy options like nuclear and renewables are off-topic.

What is meant by "internalized costs"?

Internalized costs are the costs which can be accurately accounted for in our current systems. In energy production, these costs typically consist of capital costs, financing costs, operation and maintenance costs, and exploration costs. Some energy options incur these costs in various stages such as extraction, transportation and refinement. Profits and taxes are excluded wherever possible in order to isolate the pure cost of production.

Internalized costs of energy storage

This article will cover two battery-based energy storage solutions: standard batteries and flow batteries. Three currently mature energy storage technologies (backup thermal power, pumped hydro storage and compressed air energy storage) were covered in a previous article, while synfuels will be covered in the next article.

General comments on battery storage economics

Before we get started, some general comments on battery storage economics are in order. As discussed in the previous article, the most important factors influencing the economics of specialized energy storage technologies are the capital costs and the capacity utilization. Capacity utilization is an especially important issue in energy storage because of a trade-off between capacity utilization and the spread between the price at which the storage facility can buy and sell electricity. At higher capacity utilizations, the initial capital investment will be better utilized, but the spread between the buying and selling price will also shrink.

Germany currently offers a good example of the type of buy-sell spreads available in a system with substantial intermittent renewable energy penetration. As shown in the graph below, a buy-sell spread of about €20/MWh is available for roughly 20% of the average day, while spreads of €50/MWh are only available on isolated occasions.

[Figure: German wholesale electricity prices over several weeks, showing the daily, weekday/weekend and high-wind/low-wind price spreads discussed below]

An important feature distinguishing batteries from other energy storage technologies is that storage capacity (kWh) is generally the economically limiting factor instead of output capacity (kW). This implies that a limited battery storage capacity must be utilized at as high a frequency and discharge depth as possible, while facilities like pumped hydro where storage capacity is not such a limiting factor are free to cycle over longer timespans.

The figure above illustrates this issue. As can be seen, significant spreads exist between weeks with high wind output and low wind output, as well as between weekdays and weekends. These spreads are not economically accessible to battery technologies, which should be cycled very frequently (at least once per day) to utilize the limited storage capacity more economically. In contrast, a pumped hydro facility with a week or more worth of storage capacity can take advantage of these spreads.

In addition to cycle frequency, cycle depth is also an important parameter in battery storage. Since the availability of high frequency spreads will vary significantly from one day to the next depending on fluctuations in renewables output and local electricity demand on weekly and seasonal timescales, the economically viable depth of discharge will also vary significantly. For example, batteries could be useful in Germany over summer when solar PV creates a reasonably reliable daily cycle, but will be of very limited use in winter when solar PV output is minimal and more unpredictable wind power dominates.

[Figure: battery cycle life as a function of depth of discharge]

Another factor to take into consideration is that depth of discharge is often an important determinant of battery lifetime: shallower cycles can significantly prolong battery life (see above). In addition, battery lifetime is not only measured in cycles, but also in years. For example, reported Li-ion battery lifetimes range from 1,000-10,000 cycles and 5-15 years. At one cycle per day, 10,000 cycles would take more than 27 years to complete, implying that age-related degradation would probably have rendered the battery unusable long before the cycle lifetime is over.

Finally, it must be acknowledged that there exists substantial uncertainty regarding the economics of pre-commercial energy storage technologies like batteries. Numbers utilized in this article are guided by data available from the reviews of Duke University, the IEA and DNV. Various literature sources were also consulted to confirm that data in these reports is reasonable (Mahlia, Chen and Gonzalez).

Standard batteries

Batteries are often the first thing that comes to mind when considering energy storage. Standard batteries are especially attractive to advocates of distributed renewable energy because they can be deployed on small scale.

Even though Li-ion batteries are making all the headlines, most deep-cycle batteries for renewable energy application are still based on mature lead-acid technology. These batteries have the advantage of low up-front costs (~$150-200/kWh at most wholesalers), but have relatively short lifetimes, relatively high temperature-sensitivity, significant maintenance requirements and significant waste-handling challenges.

The breakeven electricity price spread for lead-acid batteries is given below as a function of the average depth of discharge and the capital costs. Other assumptions include a 2000-cycle service life with no degradation, 75% round-trip efficiency, an average of 1 cycle per day, balance of system costs of $150/kWh, O&M costs of $20/kWh/yr, an electricity buying price of $30/MWh and a 5% discount rate. Balance of system and O&M costs are not often considered but, just as is the case with solar PV, will probably become a very important factor as battery prices fall. These costs are taken on the lower edges of the ranges given in the Duke University review. The Excel spreadsheet used to create this figure can be accessed here.
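To make the structure of that calculation concrete, here is a minimal sketch (in Python) of one way such a breakeven spread can be estimated from the assumptions listed above: capital and balance-of-system costs are annualized with a capital recovery factor, and total annual costs are divided by the energy actually sold. The spreadsheet linked above remains the authoritative source; it may treat lifetimes, losses and balance-of-system amortization differently, so this sketch will not reproduce the figures exactly.

```python
# Minimal sketch (not the article's actual spreadsheet) of a breakeven-spread
# estimate for a battery, per kWh of nominal storage capacity. Capital and
# balance-of-system (BOS) costs are annualized with a capital recovery factor;
# the breakeven selling price is the one at which annual revenue equals annual cost.

def breakeven_spread(capital_per_kwh, bos_per_kwh, om_per_kwh_yr,
                     cycle_life, cycles_per_day, round_trip_eff,
                     depth_of_discharge, buy_price_per_mwh, discount_rate):
    """Breakeven sell-minus-buy electricity price spread in $/MWh."""
    cycles_per_year = cycles_per_day * 365
    life_years = cycle_life / cycles_per_year        # cycle-limited life; no calendar cap here

    # Capital recovery factor: converts the up-front cost into an equivalent annual cost.
    r, n = discount_rate, life_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    annual_capital = (capital_per_kwh + bos_per_kwh) * crf

    # Energy bought and sold per year, in MWh per kWh of storage capacity.
    mwh_bought = cycles_per_year * depth_of_discharge / 1000.0
    mwh_sold = mwh_bought * round_trip_eff           # round-trip losses charged against output

    annual_cost = annual_capital + om_per_kwh_yr + mwh_bought * buy_price_per_mwh
    breakeven_sell_price = annual_cost / mwh_sold
    return breakeven_sell_price - buy_price_per_mwh


# Lead-acid example using the assumptions listed above, at 50% depth of discharge:
# $150/kWh cells, $150/kWh BOS, $20/kWh/yr O&M, 2000 cycles, 1 cycle/day,
# 75% round-trip efficiency, $30/MWh buying price, 5% discount rate.
print(round(breakeven_spread(150, 150, 20, 2000, 1, 0.75, 0.50, 30, 0.05)))
```

With those inputs, this sketch returns a spread on the order of $600/MWh, which already illustrates why the spreads of $50-100/MWh available in wholesale markets fall far short.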

[Figure: breakeven electricity price spread for lead-acid batteries as a function of average depth of discharge and capital cost]

Given that most suppliers recommend a maximum depth of discharge of around 50%, it is clear from the above figure why deployment of lead-acid batteries for energy storage is very limited. Even under the lowest cost assumption, a 25% average depth of discharge requires an enormous breakeven spread of $800/MWh.

Li-ion batteries are not yet commonly available as solar backup options. One online supplier sells these batteries for around $1000/kWh, which is much more expensive than lead-acid batteries. Tesla-manufactured batteries offered by SolarCity also appear to be in that price range. In exchange, Li-ion batteries offer longer lifetimes, lower maintenance requirements and higher round-trip efficiencies. The above figure is repeated for Li-ion batteries below under the assumptions of a 5000-cycle service life with no degradation, 90% round-trip efficiency, an average of 1 cycle per day, balance of system costs of $100/kWh, O&M costs of $10/kWh/yr, an electricity buying price of $30/MWh and a 5% discount rate.

[Figure: breakeven electricity price spread for Li-ion batteries as a function of average depth of discharge and capital cost]

The figure shows that, thanks to the longer lifetime, lower maintenance costs and higher round-trip efficiencies, Li-ion batteries at $400/kWh have slightly better economics than lead-acid batteries at $100/kWh. Thus, if deep cycle Li-ion batteries for energy storage applications come down to $400/kWh, the choice between Li-ion and lead-acid will depend primarily on the locally applicable discount rate.

It is clear from the two figures above that battery storage is still about an order of magnitude away from being economically viable given the price spreads available in wholesale electricity markets (around $50-100/MWh). However, sufficient subsidization could make batteries a viable option for early adopters in countries where household electricity prices are exceedingly high and feed-in tariffs are being reduced to limit deployment. For example, German households currently pay around $400/MWh for electricity and receive around $180/MWh as a feed-in tariff for solar power fed back into the grid. Households can therefore avoid up to $220/MWh by storing more solar energy for self-consumption instead of selling it back to the grid.

This spread will further increase in the future, but will likely remain too small to drive significant deployment for the foreseeable future in the absence of subsidies which are substantially more lucrative than those currently in place.

Flow batteries

Flow batteries, Vanadium Redox Flow Batteries (VRB) in particular, are attractive due to their very long lifetimes even under consistently high discharge depths, their good scalability, and their flexibility in managing power and storage capacity separately. They are generally not suitable for small-scale applications, however, and are therefore targeted more towards grid-scale energy storage. Drawbacks include fairly average round-trip efficiencies and significant O&M costs.

The breakeven spread for VRBs is given below as a function of the capital costs and depth of discharge. Other assumptions include a 14000-cycle service life with no degradation, 75% round-trip efficiency, an average of 1 cycle per day, balance of system costs of $150/kWh, O&M costs of $30/kWh/yr, an electricity buying price of $30/MWh and a 5% discount rate. The Excel spreadsheet used to create this figure can be accessed here.
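For comparison, the breakeven_spread sketch introduced earlier in this article can be run against the Li-ion and VRB assumption sets. The module name below is hypothetical, and the VRB capital cost is a purely illustrative placeholder, since the article treats VRB capital cost only as an axis of the figure:

```python
# Assumes the breakeven_spread() sketch from earlier in this article has been saved
# as storage_breakeven.py (a hypothetical file name).
from storage_breakeven import breakeven_spread

# Li-ion, per the article's assumptions: 5000 cycles, 90% round-trip efficiency,
# $100/kWh BOS, $10/kWh/yr O&M, $30/MWh buying price, 5% discount rate,
# evaluated here at $400/kWh capital cost and 50% depth of discharge.
li_ion = breakeven_spread(400, 100, 10, 5000, 1, 0.90, 0.50, 30, 0.05)

# VRB, per the article's assumptions: 14000 cycles, 75% round-trip efficiency,
# $150/kWh BOS, $30/kWh/yr O&M, $30/MWh buying price, 5% discount rate.
# The $300/kWh capital cost and 80% depth of discharge are illustrative only.
vrb = breakeven_spread(300, 150, 30, 14000, 1, 0.75, 0.80, 30, 0.05)

print(f"Li-ion breakeven spread: ~${li_ion:.0f}/MWh")
print(f"VRB breakeven spread:    ~${vrb:.0f}/MWh")
```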

[Figure: breakeven electricity price spread for vanadium redox flow batteries as a function of capital cost and depth of discharge]

When considering that VRBs can be discharged to 95% without any significant ill effects on lifetime, the figure above starts to look somewhat more promising. Naturally, the average depth of discharge achieved in practice will be much lower than 95%, but this will still improve the economics of VRBs relative to lead-acid and Li-ion batteries, which should not be discharged beyond 50%. That being said, however, VRBs remain several times more expensive than the pumped hydro storage analysed in the previous article, even under the most optimistic cost assumptions.

Commenting

In order to assist in finding the consensus view on the internalized costs of energy storage, please follow these simple commenting guidelines:

Three types of comments are welcome, each introduced by a keyword:

  1. DATA: Please give your opinion on any of the numbers presented in the article. Of particular interest is good data on the capital, BOS and O&M costs of various battery types. Each DATA comment will be weighted by the number of "likes" when the data is ultimately processed.
  2. REBUTTAL: If you strongly disagree with an existing DATA comment, please write a short rebuttal. The "likes" received by a REBUTTAL comment will subtract from the "likes" of the DATA comment. A REBUTTAL comment can once again be rebutted to reduce its weighting.
  3. CORRECTION: If you see a serious error in the numbers presented in the above analysis, please correct me so that I can correct the article.

Miscellaneous guidelines:

  • Make sure your comment gives only one piece of information (use multiple comments for multiple pieces of information).
  • Keep things short.
  • Please try to be as objective as at all possible. For this process to work, we all need to be in the mindset of dialectic instead of debate.
  • Externalities, potential technological breakthroughs and other energy options are off-topic.

Many comments are welcome. More data = greater accuracy.

Read More

Federal Budget Stunts like Blocking Enforcement of Light Bulb Standards Put U.S. Jobs at Risk

Elizabeth Noll, Energy Efficiency Advocate, Washington, D.C.

As the U.S. House begins debate over funding for the nation’s energy and water programs, it’s clear that the House Republican energy plan relies on political stunts and polluter-backed policy. The Republican-backed federal spending bill is set to become a vehicle for a number of misguided legislative maneuvers wholly unrelated to the budget and aimed at blocking programs that help Americans save energy while cutting harmful pollution.

A prime example is the amendment expected from Rep. Michael Burgess (R-TX) targeting energy-saving light bulbs. Originally designed by Energy and Commerce Committee Chairman Fred Upton (R-MI) and signed into law six years ago by President George W. Bush, energy efficiency standards for everyday bulbs have been phased in over the last two and a half years, giving them the first major technology update since the days of Thomas Edison.

As a result, an energy-saving light bulb that complies with federal standards is now available for every socket in America.

When all of the old energy-wasting incandescent bulbs are replaced with these upgraded versions, the standards will save consumers $13 billion per year.

Despite these benefits, some Republicans want to use back-door spending bill amendments to undo these standards and win points with Tea Party ideologues who somehow think reducing energy waste and providing consumers more choice is anti-American.

The Burgess amendment, similar to what he has proposed in the past, would once again prohibit funding for any enforcement of the light bulb standards. This would open the door for foreign companies to sell substandard versions in America. That is one reason all major U.S. lighting companies, including GE, Philips, and Sylvania, support the technology-neutral efficiency standards and oppose legislation that would interfere with the U.S. Department of Energy’s ability to implement or enforce them. These companies have made major investments and thoroughly upgraded and retooled their supply chains to comply with the law.

Today, NRDC is releasing a new fact sheet that further underscores there is no justification for this kind of legislation. Preventing DOE from enforcing the standards against imported noncompliant, inefficient bulbs won’t do anything other than sacrifice pollution reduction, help foreign producers, and cause U.S. companies to lose sales and suffer from a competitive disadvantage. It also will put jobs at risk in America, where some of the energy-saving bulbs are being assembled and manufactured.

In short, Rep. Burgess’ rider should be called the “American Job Export and Energy Waste Act.”    

Ceiling Fans and Buildings, Too?

Sadly, the backward policy ideas don’t stop with light bulbs. Rep. Charlie Dent (R-PA) added an amendment in committee that would halt a congressionally mandated review of ceiling fan energy efficiency. In March of last year, DOE began the process of reviewing the standard, but this amendment would stop all activity. Antics such as this end any opportunity for stakeholder participation in the standard-setting process, as well as any chance to save energy and consumers’ money.

Also buried in the spending bill is language aimed at seriously weakening DOE’s ability to support updated building energy codes. Buildings are the single largest energy end-use in the United States, accounting for about 40 percent of the total. Strong building codes to cut energy waste are critical to ensuring that the energy efficiency opportunity is harnessed when it is cheapest and easiest to implement.

While often overlooked, efficiency standards yield big benefits and represent a key component of the president’s climate action plan, which includes a goal of reducing carbon pollution by a cumulative 3 billion metric tons by 2030 through standards for appliances and federal buildings.

Efficiency is the cheapest, cleanest and fastest way to reduce carbon pollution, save money on energy bills, drive innovation in manufacturing, and create jobs, not to mention the significant health benefits, like avoided child asthma attacks and premature deaths, that come from burning less fossil fuel.

House appropriators need to stop wasting time pushing ideologically driven policy reversals that cripple these important programs. Instead, they should get back to the business of funding programs that advance what Americans broadly support: energy efficiency.

Read More

Energy Politics is Often More About Location than Party

The Morning Consult has an intriguing article titled Map: State Energy Influence in Washington. The article briefly describes how states that produce various energy fuels have senators and congressmen that migrate to committees that affect the industry in their home territories.

It also mentions the curiously large influence that Massachusetts has on energy policy making. Even without any congressmen or senators on key committees and without any large energy corporations in the state, it is the home of the Secretary of Energy, the EPA Administrator, and the acting Federal Energy Regulatory Commission (FERC) chairman.

Aside: Someday I will get around to describing how Massachusetts evolved from being recognized as the most enthusiastic nuclear energy pioneering state to one of the more antinuclear states in the union. My current research traces the beginning of that transition to the 1960 decision of Senator John F. Kennedy to select Senator Lyndon B. Johnson as a running mate, partially to gain access to his natural gas industry campaign contributors. End Aside.

Here is a quote from the article that stimulated me to add a comment:

States also benefit from having representatives in important leadership positions that don’t relate primarily to energy. Senate Majority Leader Harry Reid, D-Nevada, for example, has used his influence to redirect a nuclear waste storage site away from his home state and push to appoint a FERC chairman from his region.

Here is the comment I submitted, which may or may not appear on the original site.

Interesting analysis, but you minimized the influence that Senator Reid has had on recent energy policy decisions.

Here are the specific actions that Senator Reid has taken to hamstring the continued operation and future development of nuclear energy.

He personally intervened and delayed numerous judge confirmations in order to install one of his staff members as a commissioner on the Nuclear Regulatory Commission.

He then included a demand for the promotion of that staff member to the Chairmanship of the NRC as part of the price for his support of Senator Obama’s bid for the presidency.

During his 7 years and 2 months on the NRC, Greg Jaczko, the former Reid staff member, led the effort to write a new rule requiring nuclear power plants to be able to withstand a direct attack by a large aircraft, as well as new interpretations of the enforcement of fire protection rules.

He used the events at Fukushima to take total control of the NRC for about half a year and implemented several negative policies unilaterally.

His illegal action, taken under Senator Reid’s direction, to halt the completion of the licensing review for the Yucca Mountain project not only deflected that project from being completed in Nevada, but also called into question a whole series of actions under the category of “waste confidence.”

Eliminating “waste confidence” led to a two-year moratorium on both new nuclear power plant licenses and extensions of existing licenses. It has also ensured that states whose statutes prohibit new nuclear plants until the waste issue is settled at the federal level will continue waiting.

After Jaczko was asked to resign because of the “chilled work environment” that he established as Chairman, Senator Reid once again dictated the selection of the new Chairman, a woman who had no experience in managing any agency or even a work group of more than a few academics.

The serving members of the Commission were passed over as the new member was directly appointed as Chairman.

The new Chairman has led an effort to require most of the existing nuclear plants in the US to conduct expensive, resource-consuming evaluations of their seismic design, even though there has never been a nuclear plant whose seismically qualified, safety-related structures, systems and components were damaged by an earthquake, even one that exceeded the plant’s design basis.

Energy politics is financially important. Halting or even delaying nuclear energy projects invariably increases the sales revenues of all other energy sources by restricting the overall supply of energy. In each case, a specific fuel supplier usually benefits more directly, to the tune of a million or more dollars in additional revenue for every day that a 1,000 MWe nuclear plant does not operate.
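As a rough, back-of-the-envelope illustration of that last figure, here is a minimal sketch; the wholesale price used below is an assumed round number, not something stated in the original comment.

```python
# Back-of-the-envelope check on the daily replacement-revenue figure above.
capacity_mw = 1000            # size of the idle nuclear plant cited in the text
hours_per_day = 24
assumed_price_per_mwh = 42    # assumed wholesale price in $/MWh; varies by market

daily_mwh = capacity_mw * hours_per_day                  # 24,000 MWh of replacement energy
replacement_revenue = daily_mwh * assumed_price_per_mwh  # roughly $1 million per day
print(f"${replacement_revenue:,.0f} per day")            # $1,008,000 per day
```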

Rod Adams
Publisher, Atomic Insights

The post Energy politics is often more about location than party appeared first on Atomic Insights.


Authored by:

Rod Adams

Rod Adams gained his nuclear knowledge as a submarine engineer officer and as the founder of a company that tried to develop a market for small, modular reactors from 1993-1999. He began publishing Atomic Insights in 1995 and began producing The Atomic Show Podcast in March 2006. Following his Navy career and a three year stint with a commerical nuclear power plant design firm, he began ...



Field Mobility Crucial for Wind Energy Projects


Renewable energy projects and infrastructure can be located great distances from a company’s headquarters or satellite offices. Supervisors, technicians and other workers in the field need technology that allows them to maximize productivity even though they are far from their desks, and tools that help them work as efficiently as possible.

The use of smartphones and tablets by field agents is crucial for companies engaged in renewable energy projects. Mobile technology increases the productivity of workers inspecting, maintaining or repairing wind turbines, solar panels and other renewable energy infrastructure.

Mobile technology streamlines the entire process of assigning work, inspecting infrastructure, performing maintenance and repair, and reporting and storing findings. It improves workflows by giving workers in the field and their supervisors in the office real-time access to work assignments and status reports on infrastructure. It eliminates bottlenecks and compresses cycle times. In short, it saves money, which in turn lowers the cost of electricity produced from alternative energy. Ultimately, this technology contributes to reduced reliance on fossil fuels to meet the world's energy needs.

The value of mobile technology is greatest when the organization has deployed a Web database in which all project and organizational information is centrally stored. Workers in the field then have online access to the organization’s software through their smartphones or tablets.

They can receive instructions from the office to travel to a location where problems are being experienced or where an inspection is required. Some systems can be programmed to provide automatic notifications of the need to conduct an inspection in a certain location. Through these programmed notifications, the organization can map out regular inspections of their infrastructure systematically. Work crews can receive inspection assignments through their mobile devices, eliminating the possibility that a duplicate inspection will be done by another crew.
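To make the idea of programmed notifications and duplicate-free assignment concrete, here is a minimal sketch assuming a simple central record of inspection intervals and crew assignments; the class, method and field names are illustrative, not taken from any particular product.

```python
from datetime import date, timedelta

# Illustrative sketch: recurring inspections tracked centrally and assigned
# to exactly one crew, so duplicate visits are avoided.
class InspectionScheduler:
    def __init__(self, interval_days=90):
        self.interval = timedelta(days=interval_days)
        self.last_inspected = {}   # asset_id -> date of last inspection
        self.assigned = {}         # asset_id -> crew_id currently working it

    def due_assets(self, today, assets):
        """Assets whose inspection interval has elapsed and are not yet assigned."""
        return [a for a in assets
                if a not in self.assigned
                and today - self.last_inspected.get(a, date.min) >= self.interval]

    def assign(self, asset_id, crew_id):
        """Send the work order to one crew; other crews see the asset as taken."""
        if asset_id in self.assigned:
            raise ValueError(f"{asset_id} is already assigned to {self.assigned[asset_id]}")
        self.assigned[asset_id] = crew_id

    def complete(self, asset_id, inspected_on):
        """Record the field report and restart the interval clock."""
        self.last_inspected[asset_id] = inspected_on
        self.assigned.pop(asset_id, None)

# Example: notify and assign due inspections for two turbines.
scheduler = InspectionScheduler(interval_days=90)
for asset in scheduler.due_assets(date.today(), ["T-001", "T-002"]):
    scheduler.assign(asset, crew_id="crew-7")
```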

Mobile technology is especially helpful in inspecting, maintaining and repairing wind turbines, both on land and offshore. Turbines typically are 100 feet or more in height.  When a worker has climbed to the area of the turbine gearbox, it is necessary to record information on the status of the equipment. By using a mobile device connected to the central database, the worker can immediately upload the current condition of the turbine. He can also upload notes. There is no need to write down information that must be compiled again later for the project file.

The field worker assigned to the turbine can communicate with one or more supervisors through mobile devices, receiving instructions on how to proceed. That prevents the need to write down information and make a report later. Instructions can be relayed in real time, and work time in the field can be minimized and utilized more efficiently.

The field worker can also make real-time reports on necessary repairs and order replacement parts. Using a mobile device, he can take pictures of any damage and upload those pictures into the central database. The capability to report the turbine status and order parts can save hours, possibly days in the completion of the repair.

Some systems also provide offline or detached capability that makes it possible to update information even when Internet connectivity is lost or unreliable. The system syncs the new infrastructure information once connectivity is restored. This capability is especially useful in inspecting and maintaining offshore wind projects or onshore wind farms in remote locations.
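As a rough sketch of the offline/detached pattern just described, the following illustrates a local queue that is flushed once connectivity returns; the file name and the is_online/push_to_server callables are placeholder assumptions rather than any real product’s API.

```python
import json
import os

QUEUE_FILE = "pending_updates.json"   # placeholder local file on the field device

def record_update(update, is_online, push_to_server):
    """Send a turbine status update now if connected, otherwise queue it locally."""
    if is_online():
        push_to_server(update)
    else:
        queue = _load_queue()
        queue.append(update)
        _save_queue(queue)

def sync_pending(is_online, push_to_server):
    """Flush every queued update to the central database once connectivity returns."""
    if not is_online():
        return 0
    queue = _load_queue()
    for update in queue:
        push_to_server(update)
    _save_queue([])
    return len(queue)

def _load_queue():
    if not os.path.exists(QUEUE_FILE):
        return []
    with open(QUEUE_FILE) as f:
        return json.load(f)

def _save_queue(queue):
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)
```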

Mobile technology can be utilized in other ways in the process of operation of a wind farm. With some systems, field agents can obtain landowner, lease and royalty payment information through the mobile device. The worker in the field can use the device to calculate the current lease or royalty payment owed to a landowner.

Ensuring timely and accurate lease and royalty payments is a significant pain point for the wind industry. There are many variables involved in meeting these ongoing payments. Without effective technology, wind companies must perform manual calculations that are labor intensive, with multiple staff members involved each month. Despite this commitment of company resources, payments can still be late or wrong, creating even more problems for the company.

To address these many factors, wind companies have resorted to various means with varying degrees of success. Many work from spreadsheets. Some still work from paper. In either case, the information must be input manually through each payment cycle. Many calculations must be done quickly, but the information on spreadsheets can be erroneous or inconsistent. It is complicated and time-consuming to track all of the triggers that determine the timeline for individual payments.

Tight deadlines and the manual tweaking of so many adjustments can result in errors or slow the payment process. Missing payments or issuing incorrect amounts means major headaches for the wind company and makes landowners unhappy.

Web-based software improves the accuracy and efficiency of meeting payment obligations by providing lease management, automatic payment reminders, payment tracking, royalty calculation, and payment scheduling capabilities. The system automates the process of meeting development, construction, landowner compensation and other types of payments, despite the complexities of these calculations and the varied timelines.
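For illustration only, a simplified version of the kind of royalty calculation and payment reminder such software automates might look like the sketch below; the rent, royalty rate, dates and landowner name are made-up example values, not industry figures.

```python
from datetime import date

def monthly_payment(turbines_on_parcel, parcel_revenue, per_turbine_rent,
                    royalty_rate, minimum_payment):
    """Pay fixed rent plus a royalty on the parcel's revenue, subject to a floor."""
    rent = turbines_on_parcel * per_turbine_rent
    royalty = parcel_revenue * royalty_rate
    return max(rent + royalty, minimum_payment)

def upcoming_reminders(leases, today, lead_days=10):
    """Leases whose payment due date falls within the reminder window."""
    return [lease for lease in leases
            if 0 <= (lease["due_date"] - today).days <= lead_days]

# Example with made-up numbers: 3 turbines at $500 rent each, a 4% royalty on
# $12,000 of parcel revenue, and a $2,000 minimum -> the $2,000 floor applies.
print(monthly_payment(3, 12000, 500, 0.04, 2000))           # 2000
leases = [{"landowner": "Example Farm LLC", "due_date": date(2014, 8, 1)}]
print(upcoming_reminders(leases, today=date(2014, 7, 25)))  # due within 10 days
```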

With new technological capabilities, including mobile technology, organizations worldwide that are involved in alternative energy have the ability to add to the growing electric capacity from clean, renewable resources.


Authored by:

Dan Liggett

Dan writes about the land services, infrastructure assets and energy industries. He is an award-winning writer and editor with an extensive background in newspaper media, public relations and media relations. In addition to technology, Dan has worked in higher education and public transportation. He is a graduate of Ohio University with a Bachelor of Science in Journalism. He also attended ...

