June 2017

The Importance Of Accurate Performance Predictions

By Gwendalyn Bender & Francesca Davidson

Developing a utility-scale solar project requires a large upfront investment, which makes accurately predicting how much power it will produce a critical component of the process. The financial risks of either over- or underperforming expectations are substantial. Underperform and you risk defaulting on your loan. Overperform and you may discover you have overbuilt your plant, investing significantly more money (on equipment, maintenance and other ongoing project costs) than was actually required to meet the obligations of your power purchase agreement.

Weather, as the fuel of a project, is the greatest source of performance variability. Although it is impossible to predict with 100% accuracy what weather will be like at a site throughout a project’s entire lifetime, there’s a great deal solar developers can do in the pre-construction phase to reduce project uncertainty as much as possible. Solar resource and other weather components, such as precipitation, wind speed and temperature, directly impact power generation, and finding reliable weather information is essential for estimating system production and profits.

Developers benefit from reducing energy estimate uncertainty because investors reward projects with low uncertainty with much more favorable financing terms. Although each project is impacted differently by uncertainty, research from pyranometer manufacturer Kipp & Zonen has determined that during project financing, for every 1% in resource uncertainty reduction, developers save $20,000 on average through improved financial terms.

Unfortunately, it is not uncommon to see errors in solar resource estimates equating to 2% to 5% reductions in energy produced. All of these percentages may seem small, but the associated financial losses add up in the long term. Consider a 10 MW PV project. If you underperform by 5%, in essence, you paid for a 10 MW plant but, in reality, ended up with a 9.5 MW plant. This would result in millions in lost revenue over the asset lifetime.
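The scale of that loss is easy to sketch. The calculation below uses assumed values for capacity factor, power purchase agreement (PPA) price and asset lifetime (none of which come from the article) to show how a 5% shortfall on a 10 MW plant compounds over time:

```python
# Hypothetical illustration of revenue lost to a 5% resource overestimate.
# Capacity factor, PPA price and lifetime are assumed values, not figures
# from the article.
capacity_mw = 10
capacity_factor = 0.20        # assumed net capacity factor
ppa_price_per_mwh = 50.0      # assumed PPA price, $/MWh
lifetime_years = 25           # assumed asset lifetime
underperformance = 0.05       # 5% shortfall versus the estimate

expected_mwh_per_year = capacity_mw * capacity_factor * 8760
lost_mwh_per_year = expected_mwh_per_year * underperformance
lost_revenue = lost_mwh_per_year * ppa_price_per_mwh * lifetime_years
print(f"Lost revenue over {lifetime_years} years: ${lost_revenue:,.0f}")
```

Under these assumptions, the shortfall exceeds $1 million over the asset lifetime; higher PPA prices or capacity factors push it higher still.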

As solar capacity grows, more and more operational projects are coming online and performing outside the range of initial expectations. In response, the industry is beginning to wake up to the issue of resource assessment and call into question how initial solar energy estimates are calculated.

Best practices for solar assessment

Adopting a standard development and solar resource assessment process helps conserve time and money at each project phase and ultimately reduces the project’s overall level of uncertainty. To appropriately balance resource risk while controlling costs, solar developers need to ensure that they are using the most reliable weather data warranted by the size of the installation and its project phase.

During the initial prospecting stage, a developer will typically search for potential sites by conducting GIS analysis, comparing data layers from various paid and public sources, such as information on transmission, energy pricing, solar irradiance, temperature and wind speed. At this early phase, evaluating sites based on annual or monthly average weather information is a reasonable starting point. One source is the Global Atlas for Renewable Energy hosted by the International Renewable Energy Agency, which includes free global resource and weather information.

Once a specific site has been selected, it is important to find a more accurate, long-term record of solar resource data with information at hourly intervals. This is partly due to the high uncertainty of publicly available sources. Developers also need long-term information because it best indicates how a project will perform over its lifetime. Data at an hourly timestep is required for most industry-specific software programs used to model utility-scale energy output and calculate the one-year P90 values needed for financing. A one-year P90 value indicates the production value the annual energy output will exceed 90% of the time.
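The P90 concept can be illustrated with a toy calculation. The annual energy values below are invented placeholders; in practice, each value would come from running an energy model against one year of a long-term hourly record:

```python
import math

# Hypothetical annual energy outputs (GWh) from modeling each year of a
# long-term record. These numbers are made-up placeholders.
annual_energy_gwh = [21.4, 22.1, 19.8, 20.6, 23.0, 21.9, 20.1, 22.5, 19.5, 21.0]

def p90(values):
    """Value exceeded 90% of the time, i.e. the 10th percentile."""
    ordered = sorted(values)
    # Simple nearest-rank estimate; commercial tools typically use
    # interpolation or a fitted probability distribution instead.
    rank = max(0, math.ceil(0.10 * len(ordered)) - 1)
    return ordered[rank]

print(p90(annual_energy_gwh))
```

With ten simulated years, the P90 lands on the lowest-producing year in the sample, which is why long records matter: short records rarely capture the poor years that drive the P90.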

Some developers will source monthly average resource data for this purpose and use the data directly in energy modeling software, allowing the software program to interpolate the hourly information. Recently, we at Vaisala conducted a study across several projects to see how this monthly methodology varied from using long-term, hourly average data.

When modeling a fixed-tilt PV system at locations with very stable climates, we found that the differences in energy calculations between using monthly average versus hourly average data could be as low as 0.5% to 1.5%. However, at locations with higher resource variability, and particularly when modeling tracking systems, the energy calculation differences could reach 2% to 5% between monthly and hourly resource data sources. Based on this experience, a good rule of thumb for utility-scale plants is to obtain long-term, hourly data from a high-quality source for any project where you would use a utility-scale solar software program to model energy production (typically 1 MW or larger).

In the not-so-distant past, the industry’s only source for hourly data was typical meteorological year, or TMY. These datasets provide a one-year, hourly record of “typical” solar irradiance and meteorological values at a specific location in a simple file format. TMY datasets are freely available from the U.S. National Renewable Energy Laboratory (NREL) and commonly used at this early phase in the resource assessment process.

However, these datasets are not designed to show extremes, and NREL explicitly warns against using them to predict weather or energy production. Because low solar resource periods are actually screened out of TMY files, using these datasets can put your project at risk of loan default because it may not produce enough energy during these periods to make debt payments. For this reason, a TMY dataset alone is not appropriate for energy estimation at large-scale projects.

Due to the scarcity of direct observation networks and the short-term nature of their records, satellite processing methodologies, which generate long-term, hourly datasets of surface irradiance at a project location, have become the standard in pre-construction energy assessment practices for utility-scale development.

Evaluating long-term satellite data sources

Today, satellite-derived datasets are available from a number of providers, all of which use the same foundational satellite information but vary in their inputs and methods of calculating surface irradiance. For this reason, error and uncertainty can vary significantly between different sources.

Public information typically has high uncertainty. For example, NASA’s global dataset has a 20% uncertainty and NREL’s North American datasets have 5% to 12% uncertainty, depending on the version. Other free datasets that are available in various software packages can have 10% to 12% uncertainty, but high-quality data from a paid provider usually cuts resource uncertainty in half. For example, most proprietary solar datasets offer 5% or less uncertainty across the globe.

Another factor to consider when selecting a weather data source is how current it is. Has the dataset been kept up to date, or does it only offer values through 2010? This is important because aerosols in the atmosphere, such as pollution, influence power performance substantially and have increased dramatically in many parts of the world, such as China and India, over the past 10 years. This has been discussed in a few different reports and is illustrated in the graph below, which shows how the doubling of aerosols in the atmosphere since 2006 at this location near Hyderabad, India, has created a downward trend for irradiance.

Finally, a developer must evaluate how the dataset was validated and how accurate it is in the area where the project is located. To demonstrate accuracy, most data providers have compared their datasets against direct observations from publicly available ground stations. Additionally, in some cases, data providers have also used these ground stations to calibrate or enhance the accuracy of their solar resource information. A fair and unbiased verification study should reserve at least a subset of ground station data exclusively for validation purposes to provide its users with an accurate estimate of how the data will perform at their project locations. As a user of the data, it should be clear to you which validation points are independent and which are not.
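Such a validation typically reports two basic statistics: mean bias (systematic over- or underestimation) and root-mean-square error (overall scatter). A minimal sketch, using invented irradiance values rather than real measurements:

```python
from math import sqrt

# Hypothetical daily irradiance (kWh/m2/day) at one held-out ground station.
ground = [5.1, 4.8, 6.0, 5.5, 4.9]     # observed values (invented)
satellite = [5.3, 4.7, 6.2, 5.6, 5.1]  # satellite-derived values (invented)

errors = [s - g for s, g in zip(satellite, ground)]
bias = sum(errors) / len(errors)                        # systematic offset
rmse = sqrt(sum(e * e for e in errors) / len(errors))   # overall scatter
print(f"bias={bias:+.2f}, rmse={rmse:.2f}")
```

A positive bias here would indicate the satellite dataset tends to overestimate the resource at this station; an honest validation computes these statistics only on stations that were never used to calibrate the dataset.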

With solar development expanding worldwide, the availability of accurate and consistent data across the globe is becoming increasingly important. When selecting a data source, be sure to check that it has been validated against ground stations in your project’s region. All solar analysts can agree that local conditions, such as pollution, dust or seasonal variation, have a great influence over solar resources – and, thus, your project’s future power generation.

These regional differences are often better captured in satellite data using a different aerosol or turbidity input or by employing a different irradiance model. For this reason, we actively maintain and update five different versions of our global dataset to give developers and financiers greater confidence and a more thorough understanding of local resource variability across the globe. However, today there are many available solar resource datasets that project developers can compare to select the one that works best for their project area.

Ideally, at this initial evaluation phase, utility-scale developers will collect long-term, hourly time series of weather data from the same provider they intend to use at the financing phase. This helps avoid the unpleasant surprise further down the development road map of changing your resource data source only to get a dramatically different number when calculating energy estimates.

Further reducing resource uncertainty

Beyond collecting high-quality satellite data to account for solar resource variability, seeking further uncertainty reduction can be critical in a number of circumstances. For example, in situations when a comparison of multiple solar resource data sources shows a wide spread between the answers, additional action to reduce project uncertainty may be required.

Most so-called “mega” solar projects (20 MW or larger) aim to achieve energy uncertainty levels between 6% and 9% because, as stated previously, a few percentage points can have a large effect on financing terms. For example, if a 20 MW project has a P90 of 40.9 GWh/year at 6% uncertainty, the P90 is 37.8 GWh/year at 7.5% uncertainty, and at 9% uncertainty, it drops to 36.2 GWh/year. This means a 3-percentage-point decrease in uncertainty yields roughly a 13% increase in the P90 energy estimate, which is typically the value used for financing in order to ensure debt repayment.
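The direction of this effect can be sketched under a simple Gaussian assumption, where P90 = P50 × (1 − 1.282σ). The P50 below is an assumed value, and the article's figures come from a fuller analysis, so this sketch will not reproduce them exactly:

```python
# Sketch of how the P90 shrinks as total energy uncertainty grows, under a
# simple Gaussian assumption. The P50 is an assumed value for illustration.
P50_GWH = 44.0   # assumed median annual energy for a 20 MW plant
Z_90 = 1.282     # one-sided z-score for the 90th percentile

for sigma in (0.06, 0.075, 0.09):
    p90 = P50_GWH * (1 - Z_90 * sigma)
    print(f"uncertainty {sigma:.1%}: P90 = {p90:.1f} GWh/yr")
```

Because the P50 is fixed by the resource and plant design, tightening the uncertainty is the only lever that raises the bankable P90 figure.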

Cases where a direct improvement in uncertainty can be tied to better project financial terms – for example, debt-financed projects – are good candidates for installing a privately owned ground station at the site. Although these measurements generally provide a record of conditions only over a short period of time (six months to two years), ground observations capture micro-scale features affecting power performance that satellite-derived datasets frequently miss.

Direct measurements can also be combined with long-term satellite data using a technique called model output statistics, which produces a corrected record of solar resource. This methodology is the gold standard in solar resource assessment and has proven to reduce resource uncertainty, one of the largest factors in energy uncertainty, by 50%.
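The core of this measure-correlate-predict idea can be sketched as a simple linear correction: fit a relationship between satellite and ground values over the concurrent period, then apply it to the long-term satellite record. Real model output statistics methods are considerably more sophisticated; all data below are invented:

```python
# Minimal sketch of the measure-correlate-predict idea behind model output
# statistics. All values are invented placeholders.
concurrent_sat = [4.0, 5.0, 6.0, 5.5, 4.5]     # satellite, overlap period
concurrent_ground = [4.2, 5.1, 6.3, 5.7, 4.6]  # ground station, same period

# Ordinary least-squares fit: ground = slope * satellite + intercept
n = len(concurrent_sat)
mean_x = sum(concurrent_sat) / n
mean_y = sum(concurrent_ground) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(concurrent_sat, concurrent_ground))
         / sum((x - mean_x) ** 2 for x in concurrent_sat))
intercept = mean_y - slope * mean_x

# Apply the correction to the full long-term satellite record.
long_term_sat = [4.8, 5.2, 3.9, 6.1]
corrected = [slope * x + intercept for x in long_term_sat]
print(f"slope={slope:.3f}, intercept={intercept:+.3f}")
```

The corrected series keeps the length and interannual variability of the satellite record while inheriting the local accuracy of the ground measurements, which is what drives the uncertainty reduction.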

Keep in mind, however, that the success of this approach depends on the design and reliability of the ground station equipment. Ideally, your system should measure irradiance, temperature, wind and precipitation. Irradiance and temperature data help you correct the long-term record from satellite-derived sources, which ultimately increases the accuracy of your energy estimates. On-site wind data is helpful for gust studies to determine the engineering specifications for racking, particularly if the project uses a tracking system to maximize energy. Precipitation data is useful for soiling and maintenance planning. Direct soiling information can be a nice addition, especially in dusty locations, but station maintenance and data management are even more important.

Poor maintenance can quickly turn first-class equipment into second-class equipment due to subpar readings. Additionally, a poorly maintained, low-quality system may cause more trouble than it is worth and result in added time and cost associated with quality controlling the observations and screening out erroneous readings.

Developers that are planning to own and operate the facility also gain substantial ongoing benefits from ground stations. For example, historical information, such as precipitation, can be used to budget maintenance costs. The station also provides real-time condition monitoring, which supports accurate energy forecasting, proactive performance reconciliation, and informed decisions about operations and maintenance activities.

Factoring in the probability of extreme weather is another aspect developers may need to consider. For example, when evaluating insurance options, it is important to analyze the likelihood of equipment damage in areas prone to lightning, high wind gusts or hailstorms. Heavy snow fall and haze from volcanic eruptions or wildfires can also impair energy production and may need to be accounted for in energy estimates during the resource assessment process.

Building trust in solar through performance

With solar power growing rapidly worldwide, it is imperative for solar projects to perform as close to expectation as possible, not only to build trust within the financial community, but also to enhance the reputation of the sector with the general public. As we have seen all too often in the early days of utility-scale renewable energy, underperforming projects often attract negative media attention, making banks reluctant to support future solar facilities or causing them to take a more conservative stance, penalizing future projects with a high cost of capital.

As demonstrated here, the resource assessment process involves a number of key considerations, and its accuracy depends on the quality of the solar resource and weather data selected. To preserve the solar industry’s positive reputation, both developers and financiers must demand that solar resource assessments be executed with the care, precision and responsibility that the job requires. Otherwise, we may pay for the low-cost solutions and cut corners of today with the negative headlines and high investment costs of tomorrow.    


Gwendalyn Bender is head of solar services at Vaisala, an environmental and industrial measurement company, and Francesca Davidson is the company’s energy communications expert.


Copyright © 2022 Zackin Publications Inc. All rights reserved.
