‘The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.’ (Bertrand Russell)
Our job at Gridcognition is to help our customers plan and optimise distributed energy projects.
The energy environment is changing rapidly, and energy projects have inherent uncertainty from multiple sources. The leaders investing in the future decentralised energy system need to be able to make good decisions about future risks and opportunities with some understanding of the level of uncertainty they might face.
Given good quality input data we can model how well a Distributed Energy Resource (DER) project could have performed with a high degree of accuracy. But to extrapolate this into the future, so as to predict how well a prospective DER project will perform, we must account for the fact that any input data, however good, can never exactly represent what actually happens in subsequent years. Instead of providing a single project outcome we want to provide a range of possible outcomes.
Uncertainty in the predicted outcomes of DER projects can typically be traced back to two broad sources: load shape uncertainties and forecasting uncertainties.
The former is due to the intrinsic variability in any site load shape, that is, how a site consumes (and produces) energy minute by minute and hour by hour, including the variability in things like solar PV system yield. It is a feature of any DER project – even just a business-as-usual scenario where you do nothing!
The latter (forecasting uncertainty) is only a feature of more advanced DER projects where steerable assets, like battery storage systems or electric vehicle chargers, are more actively controlled. There is then an additional source of uncertainty arising from the fact that the control systems for these assets don’t have perfect foresight; they can only act based on imperfect forecasts.
Load shape uncertainty
To model a DER project for an existing site we usually start with historical load data. This is used to accurately calculate the site’s energy usage and the corresponding energy charges. For a greenfield site we would need to estimate this data, and for a pure DER project, like a solar farm, we would need to generate this data, based on historical irradiance data for example. Either way we start with some fixed input data that we need to extrapolate into the future.
This data won’t just repeat itself over the ten years or so we may want to model. In reality there will be many variations due to differences in site usage, local weather patterns etc. What we need to do is reflect these variations in our modelling so we can provide a range of possible outcomes rather than a single data point.
A neat way of accomplishing this is to use a bootstrapping approach (https://en.wikipedia.org/wiki/Bootstrapping_(statistics)). We take our historical data and randomly sample from it to generate a set of possibilities for what the data might look like in the future. Then we calculate the project outcomes for each of our bootstraps to estimate what the range of outcomes looks like. The nice feature of this approach is that we don’t have to do any complicated modelling or make any assumptions about our historical data. It’s a data-driven approach and will automatically capture the patterns and variability inherent to our input data.
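As a sketch of what this looks like in practice, the snippet below resamples whole days with replacement, a simple block bootstrap that preserves the intraday shape of the profile while capturing day-to-day variability. This is illustrative rather than our production code; the half-hourly resolution, function names, and synthetic data are all assumptions.

```python
import numpy as np

def bootstrap_load_shapes(load, intervals_per_day=48, n_bootstraps=100, seed=0):
    """Generate bootstrap load shapes by resampling whole days with replacement.

    Resampling at the day level (a block bootstrap) keeps each day's intraday
    shape intact while shuffling which days occur, capturing day-to-day
    variability without any explicit model of the load.
    """
    rng = np.random.default_rng(seed)
    days = load.reshape(-1, intervals_per_day)       # one row per day
    n_days = days.shape[0]
    picks = rng.integers(0, n_days, size=(n_bootstraps, n_days))
    return days[picks].reshape(n_bootstraps, -1)     # (n_bootstraps, n_intervals)

# Example: a year of half-hourly data with a repeating daily pattern plus noise
rng = np.random.default_rng(1)
base_day = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, 48))
load = np.tile(base_day, 365) + rng.normal(0, 5, 365 * 48)
boots = bootstrap_load_shapes(load, n_bootstraps=20)
print(boots.shape)  # (20, 17520)
```

Resampling whole days (rather than individual intervals) matters here: it keeps physically meaningful structure like overnight troughs and midday solar peaks intact in every bootstrap.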
We can see this in action in the following example: a 99 kW solar PV system added to a typical office building profile in Brisbane.
The thick blue trace in the upper plot shows the raw load shape for the site after adding the modelled solar PV system. This is based on historical load data for the site and historical irradiance data for the solar. The thinner, grey traces are the bootstraps. They clearly demonstrate the same patterns as the original data but include a lot of variation. We’ve also separated out the generation of the solar PV system in the bottom plot. As expected, this is a major contributor to the variation in the overall site load shape.
Running all of the bootstraps through the rest of our project modelling process we find a range of commercial outcomes for the savings expected in the second year of this project.
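In sketch form, summarising the bootstrap outcomes into a range can be as simple as taking percentiles of the per-bootstrap results; the percentile levels and savings figures below are purely illustrative.

```python
import numpy as np

def outcome_band(outcomes, lower=5, upper=95):
    """Summarise per-bootstrap outcomes as a central estimate plus a band."""
    outcomes = np.asarray(outcomes, dtype=float)
    return {
        "median": float(np.median(outcomes)),
        "low": float(np.percentile(outcomes, lower)),
        "high": float(np.percentile(outcomes, upper)),
    }

# Hypothetical year-two savings ($) from eight bootstraps
savings = [10200, 9800, 10500, 9900, 10100, 10300, 9700, 10400]
band = outcome_band(savings)
print(band)
```

The band here plays the same role as the grey uncertainty bands in the plots: a range of plausible outcomes rather than a single point.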
The solid bars show the outcomes for the raw load data for each project participant, colour coded for different value flows, so retail energy, network, wholesale etc. The grey bands around the bars show the range of outcomes we found from our bootstraps. Despite what looked like a large amount of variability in the load data, the range of outcomes is, perhaps surprisingly, narrow. This is a reflection of the fact that the long-term variability of our load and irradiance data is actually quite low; the typical monthly energy usage and maximum demand values, which are what this site is charged for, don’t actually vary too much. For sites with regular energy usage patterns and not-too-crazy network tariffs this is usually what we see for pure solar PV projects.
Forecasting uncertainty
When distributed energy projects integrate multiple steerable assets with intelligent control systems, we need to consider forecasting uncertainty. While solar PV systems, EV charging systems, and loads are all steerable, battery storage systems provide the best example of how we deal with forecasting uncertainty.
Batteries aren’t passive assets. They must react to the site’s load data, on-site generation, and price signals from tariffs and markets to actively optimise the overall load shape for a particular commercial or environmental outcome. To do this the battery control system needs to know what the site load and/or market price is going to do hours or even days into the future. Any such forecast is inevitably going to come with some uncertainty.
A comprehensive estimate of how this forecast uncertainty might impact the project outcomes would require reconstructing the battery control algorithm, including any forecast algorithms it employs, and playing back the historical load and market data through it (or, better yet, bootstraps of this data). Unfortunately, this becomes very expensive to do at scale, not to mention the fact that battery control software vendors aren’t transparent about their forecasting algorithms.
Fortunately, we can reasonably expect most forecasting algorithms to have a well-behaved error profile. This means we can approximate the forecasting uncertainty by randomly perturbing our load and market data. We then optimise the battery based on this perturbed data and apply the results back to the unperturbed data. Effectively the battery is being optimised against slightly incorrect data, just like in real life.
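In sketch form the approach is: perturb the data, optimise against the perturbed data, then settle against the actuals. The greedy peak-shaver below is just a stand-in for a real battery optimiser, and all the names, sizes, and the 10% noise level are illustrative assumptions.

```python
import numpy as np

def perturb(series, fraction, rng):
    """Apply independent multiplicative noise to each interval."""
    return series * (1 + rng.uniform(-fraction, fraction, size=series.shape))

def dispatch_for_peak_shaving(load, power_kw, energy_kwh, dt_h=0.5):
    """Greedy discharge above a target threshold (a toy stand-in for a real optimiser)."""
    target = load.max() - power_kw   # aim to shave the top of the peak
    soc = energy_kwh                 # available stored energy, kWh
    dispatch = np.zeros_like(load)
    for i, l in enumerate(load):
        if l > target and soc > 0:
            d = min(l - target, power_kw, soc / dt_h)
            dispatch[i] = d
            soc -= d * dt_h
    return dispatch

rng = np.random.default_rng(0)
load = np.array([100.0, 120.0, 180.0, 150.0, 110.0])   # actual load, kW

# Optimise against the forecast (perturbed) load...
forecast = perturb(load, 0.10, rng)
dispatch = dispatch_for_peak_shaving(forecast, power_kw=60, energy_kwh=60)

# ...but evaluate the outcome against the actual load
net_perfect = load - dispatch_for_peak_shaving(load, power_kw=60, energy_kwh=60)
net_forecast = load - dispatch
print(net_perfect.max(), net_forecast.max())
```

With perfect foresight the toy battery flattens the peak exactly; dispatching against the noisy forecast leaves residual lumps, so the resulting peak demand is at least as high, mirroring the behaviour described below.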
An example is shown in the following plot.
The top trace shows a week of load data for our Brisbane office, with a peak demand of 183.6 kW. The middle traces (purple is the battery state of charge) show the results of adding a 232 kWh, 116 kW battery to the site, optimised for the site’s tariffs (in this case a time-of-use energy charge and an anytime demand charge). We see that the overall load shape is nicely flattened to lower the peak demand to 145.2 kW. The bottom traces show the impact of a 10% load forecast uncertainty. The battery has been optimised against a load shape that has been randomly perturbed in each interval by 10%. As a result, the overall load shape is no longer nice and flat but instead retains a few lumps and bumps where the forecast values don’t match the actuals. The peak demand is still lower, at 169.8 kW, but not as low as in the previous case.
The impact on the financial outcomes for year two of this project is shown in the following plot on the left.
As before, the solid bars show the outcomes for the raw load data, and also without any load forecast uncertainty, i.e. for perfect foresight. The grey uncertainty bands show the range of outcomes we found from our bootstraps, but now with the effect of load forecast uncertainty included too. The uncertainty bands are much larger than before and are driven mostly by the uncertainty in the demand charges (part of the red, network charges). They are also no longer approximately symmetric, with our perfect foresight model sitting near the top of the range for the customer’s outcome.
We expect to see larger uncertainty bands like this because actively targeting demand charges is a much more variable process. While the rewards are often substantial, they can be entirely negated if you miss the peak demand event, typically just a single interval in the entire month (or even year). This can happen if the forecast is too low, so the battery is not discharging hard enough, or if the forecast in earlier intervals is too high, so the battery doesn’t have enough energy by the time the actual peak demand event occurs.
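A toy calculation makes the point, using the peak demand figures from the example above and a hypothetical demand rate; because an anytime demand charge is billed on the single highest interval, being idle during that one interval erases most of the saving.

```python
# Anytime demand charge: billed on the single highest interval in the month.
rate_per_kw = 15.0            # $/kW/month (hypothetical rate, for illustration)

baseline_peak = 183.6          # kW, site with no battery (from the example above)
perfect_foresight_peak = 145.2 # kW, battery dispatched with perfect foresight
missed_peak = 180.0            # kW, battery nearly idle during the one true peak interval

saving_perfect = (baseline_peak - perfect_foresight_peak) * rate_per_kw
saving_missed = (baseline_peak - missed_peak) * rate_per_kw
print(saving_perfect, saving_missed)  # most of the saving vanishes in the missed case
```

Under these assumed numbers the demand saving drops by an order of magnitude even though the battery behaved well in every other interval of the month.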
There is also a structural difference between the two sources of uncertainty. Sampling uncertainty will generally be symmetric, as random samples can contain both days that are better and days that are worse for the outcome we’re optimising for (e.g. days with higher or lower solar yield). Forecasting uncertainty, by contrast, is one-sided: optimising against an inaccurate forecast will always be worse than optimising against an accurate one. Hence a perfect foresight model will tend to sit towards the top of the uncertainty band.
Exactly the same approach we applied to load forecast uncertainty can be applied to price forecast uncertainty. The plot on the right shows an alternative scenario where the battery is optimised against the wholesale energy spot price instead of the site’s retail and network tariffs. The spot price has been perturbed by 20% before the optimisation process, then cashflows were calculated using the unperturbed market data. Once again, we see a wider uncertainty band with our perfect foresight model sitting towards the top.
The level of load and price uncertainty is a tunable parameter in our models, which we are continuing to calibrate, and which advanced users of our software platform can control to reflect the forecasting accuracy of the control systems they might implement in their projects.
We’ve discussed the two main sources of uncertainty in the predicted outcomes of distributed energy projects here: those arising from load shape uncertainties and those arising from forecasting uncertainties. We have also discussed some of our techniques for quantifying uncertainty so as to provide project participants with a better understanding of the risks and benefits of their projects.
Sitting above these techniques for managing uncertainty, Gridcognition provides users with the ability to define and compare multiple project scenarios.
Scenarios can be used to explore different project design options (site selection, asset mix, etc), different commercial models (power-purchase agreement, equipment leases, etc), and different future pricing assumptions (equipment costs, market prices, tariffs, etc). By simulating and optimising projects across multiple scenarios, with sophisticated quantification of uncertainty within each scenario, we can provide more confidence to our customers.
In the future, we will be expanding the capability of the Gridcognition platform to track the actual performance of deployed assets so we can accurately report on the delivered commercial and environmental value, and re-optimise and re-forecast the expected future performance of the project. This tracking data will also enable us to continue to calibrate our planning models including the way we quantify and represent project uncertainty.
As more and more steerable energy assets are deployed, and we can collect more data on the real-world performance of the control systems for these assets, we will be expanding Gridcognition to provide more confidence and more certainty to distributed energy projects, to help accelerate the transition to a decarbonised and decentralised energy future.