Shifting the curve – analysing the effects of energy efficiency projects
At Gridcognition we’ve mostly been focused on optimising distributed energy assets, including solar, storage and EV charging. We thought we’d use the Hackathon at last week’s Virtual Retreat to explore another kind of distributed energy resource: demand management.
Demand management is one of the best tools customers have to reduce energy costs and emissions. It covers energy efficiency, load shifting and peak load management. Specific tactics can include simply lowering consumption by turning off lights, changing HVAC set points or avoiding inefficient appliances; or more nuanced approaches like moving activity to different times of the day to avoid peak charges, or even flattening a site’s load to mitigate demand charges.
Unlike distributed energy resources like solar, storage and EV charging, which often follow a set of rules and so allow for a common modelling approach, this kind of energy management is much more bespoke. Customers can change their energy use in almost any way imaginable and this will be different from one site to the next.
To facilitate this kind of project we needed a way for customers to interact with their energy data in a more hands-on way. At the same time we wanted customers to see the financial fruits of their efforts immediately rather than having to wait for a modelling process to run its course.
Just for fun… we also decided to make a game out of it, giving each user a score for their energy management strategy so they can compete with their colleagues to come up with the best strategy overall.
Anyone who’s spent any time looking at a commercial energy bill will know they can be complex beasts. They typically have many components and some of these components, particularly demand charges, have rather baroque rules behind them.
In our regular modelling, we calculate all of this stuff completely faithfully to ensure accuracy, but in this case, speed is more of the essence. As such, we needed a way of distilling each bill component down to its essentials so they could all be calculated quickly and simply.
Fortunately, most of this work had already been done. When modelling batteries one faces a similar problem, in that the full tariff structure needs to be encoded into the objective function of one’s favourite optimisation algorithm. By repurposing the code we use to feed tariffs to our battery modelling and making a few alterations we were able to construct a very fast and reasonably accurate billing engine. After the initial, one-off step of encoding the full tariff structure, bills can be calculated extremely quickly and in one fell swoop via a couple of pandas groupby operations.
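To illustrate the idea, here is a minimal sketch of that kind of billing engine. The tariff structure, rates and time-of-use windows below are invented for illustration; the point is that once each bill component is distilled to a rate per interval, the whole bill falls out of a vectorised pandas groupby rather than an interval-by-interval loop.

```python
import pandas as pd

# Hypothetical simplified tariff: a flat rate per time-of-use period.
# Real tariffs (especially demand charges) need more structure, but the
# principle is the same: pre-label each interval, then aggregate once.
RATES = {"peak": 0.45, "off_peak": 0.18}  # $/kWh, illustrative values


def calculate_energy_bill(intervals: pd.DataFrame) -> pd.Series:
    """intervals: DataFrame indexed by timestamp with a 'kwh' column.
    Returns the dollar cost per time-of-use period."""
    hours = intervals.index.hour
    # Label each interval as peak (7am-9pm here) or off-peak.
    period = pd.Series(
        ["peak" if 7 <= h < 21 else "off_peak" for h in hours],
        index=intervals.index,
    )
    # One groupby gives kWh per period; multiplying by the rates
    # produces the bill components in a single vectorised pass.
    kwh_by_period = intervals["kwh"].groupby(period).sum()
    return kwh_by_period * pd.Series(RATES)


# One day of half-hourly data at a constant 1 kWh per interval.
idx = pd.date_range("2021-01-01", periods=48, freq="30min")
load = pd.DataFrame({"kwh": [1.0] * 48}, index=idx)
bill = calculate_energy_bill(load)
```

Because the expensive step (encoding the tariff into per-interval labels and rates) happens once up front, re-billing an edited load shape is just a re-run of the cheap groupby.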
I should admit that getting a prototype of a super-fast billing service built was a selfish motivation for suggesting this particular challenge for the hackathon. The concept will have several applications in the core product down the line, such as quantifying financial uncertainty and rapidly exploring large DER parameter spaces.
Over to the Software Team…
Given the limited time, the key focus was to pick technologies that were fully featured and required minimal boilerplate.
ChartJS – https://www.chartjs.org/ a standard JS graphing library that is frontend-framework agnostic
The main reason we chose this library was that it has a plugin that allows you to click and drag data points (https://github.com/chrispahm/chartjs-plugin-dragdata). This isn’t a standard feature for graphing libraries and we knew it would be very difficult to build ourselves. It’s also a relatively high-level library.
Vue – https://vuejs.org/ Useful for quickly building dynamic Single Page Apps (SPA).
Vuetify – https://vuetifyjs.com/ a fully featured Vue UI library
Probably the most complete open-source UI library. We have been impressed with the number and quality of its UI components, and we know the team behind it works incredibly hard to achieve this. It would be a shame not to try this UI library with Vue.
Netlify – https://www.netlify.com/ modern web app cloud hosting
One of the leaders in modern cloud hosting, Netlify makes deploying your web app insanely simple. Literally, just a few clicks and you have a fully automated CI/CD pipeline deploying your Vue app. It is sensibly configured and optimised by default, deploying our app in under 2 minutes, all for free. Other leading players in this growing space are Vercel, Deta, Cloudflare Workers and GitHub Pages.
Vuex – https://vuex.vuejs.org/ the official state management library for Vue
A simple but powerful way to manage your application state. Practically it allows components to communicate with other components regardless of the component hierarchy by exposing a global data store.
Prisma + NodeJS + Heroku to provide a persistent data source
We needed a way to persist data and make it available to all users. The classical way to do this is with a SQL database, and that is what we did. Using Prisma as the ORM and NodeJS as the server, we created a persistent data layer and deployed it on Heroku, a PaaS. We chose this stack because it has worked well for us before, and our backend is really simple: just three tiny endpoints. Ideally for a hackathon you want a service that takes just the schema, generates the CRUD endpoints and hosts them in the cloud. Graphcool (now deprecated) used to do exactly that; BlitzJS, RedwoodJS and Hasura can provide something similar.
Flask – https://flask.palletsprojects.com/en/1.1.x/ a lightweight Python web framework
The modelling and financial calculations were written in Python. We needed to expose these to the Vue app, so we chose Flask, a simple, lightweight Python web framework. James had previous experience building and deploying a Flask app, so we went with that.
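As a rough sketch of what that layer looks like, here is a tiny Flask app in that spirit. The route name and payload shape are hypothetical, not the actual hackathon API; the real endpoint would call the fast billing engine rather than just summing kWh.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


# Hypothetical endpoint: the frontend posts edited interval data and
# gets financial results back. Names and payloads are illustrative.
@app.route("/api/cost-stack", methods=["POST"])
def cost_stack():
    payload = request.get_json()  # e.g. {"site": "Perth Office", "kwh": [...]}
    total_kwh = sum(payload.get("kwh", []))
    # In the real app this would run the billing engine over the
    # edited intervals and return the full cost breakdown.
    return jsonify({"site": payload.get("site"), "total_kwh": total_kwh})


if __name__ == "__main__":
    app.run(port=5000)
```

A single small app like this is easy to stand up in a weekend, which is exactly what a hackathon rewards.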
Below is a visual representation of the technologies discussed. What’s interesting here is that we are using three different cloud providers, because each excels at certain aspects. This model works well for a hackathon, and maybe a startup, but there are obvious challenges for a larger organisation, such as fine-grained access control and region requirements.
To get the data it needs, the frontend makes calls to two different backends: one to the Flask server, which returns financial and modelling data (the modelling service), and one to the NodeJS server, which returns user and scoring data (the scoring service). We could have combined the two backends into one, but splitting them out allowed us to build and deploy our code faster. Breaking them apart is also logical for the domain, as there is no shared code between them.
Selecting Site Data
There are three predefined sites with interval data associated with them. We make an assumption about the location of each site, hence prefixing the site name with its city, e.g. Perth Office. This is significant for determining the relevant tariffs.
Editing load shape
The interval data is presented as several graphs – the load shape. Each graph represents a day of the week and is calculated by averaging all the data for that particular day, subject to the monthly filter.
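That averaging step can be sketched in a few lines of pandas. The function name and interface here are illustrative, not the app’s actual code: group the interval data by day-of-week and time-of-day, optionally restricted to the selected months, and each column of the result is one of the seven curves.

```python
import pandas as pd


def weekly_load_shape(intervals: pd.DataFrame, months=None) -> pd.DataFrame:
    """Average load per (day-of-week, time-of-day) slot, optionally
    filtered to a subset of months (1-12). intervals: DataFrame indexed
    by timestamp with a 'kwh' column."""
    data = intervals
    if months is not None:
        data = data[data.index.month.isin(months)]
    # Average every "Monday 09:00"-style slot across the filtered period.
    grouped = data["kwh"].groupby(
        [data.index.dayofweek, data.index.time]
    ).mean()
    grouped.index.names = ["dayofweek", "time"]
    # Unstack so each column is a day of the week -- one curve per graph.
    return grouped.unstack("dayofweek")


# Example: a month of hourly data, filtered to January.
idx = pd.date_range("2021-01-01", periods=31 * 24, freq="60min")
load = pd.DataFrame({"kwh": 1.5}, index=idx)
shape = weekly_load_shape(load, months=[1])
```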
You can shift your view of the data by selecting certain months. For example, if you wanted to look at weekly energy consumption for the summer months, you would select them as shown below. You can see that the average weekly energy consumption is higher than for the rest of the year, represented by the grey baseline curve.
You can make fine-grained changes to the data by selecting the days and hours you want to change and clicking and dragging the load profile.
Clicking “Apply Rule” will apply your change to the dataset; the change is summarised in the rule table on the right. You can make additional edits by repeating the process outlined above.
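Conceptually, a rule is just a selection of days and hours plus an adjustment. A minimal sketch (the rule representation is an assumption; a dragged point would translate into a scale factor relative to the baseline curve):

```python
import pandas as pd


def apply_rule(intervals: pd.DataFrame, days, hours, scale) -> pd.DataFrame:
    """Apply a load-editing rule: scale consumption on the selected
    days-of-week (0=Monday) and hours of the day. Returns an edited
    copy, leaving the baseline dataset untouched."""
    edited = intervals.copy()
    mask = edited.index.dayofweek.isin(days) & edited.index.hour.isin(hours)
    edited.loc[mask, "kwh"] *= scale
    return edited


# Example: halve consumption on Monday mornings between 9am and 11am.
idx = pd.date_range("2021-01-04", periods=48, freq="60min")  # Mon + Tue
baseline = pd.DataFrame({"kwh": 1.0}, index=idx)
edited = apply_rule(baseline, days=[0], hours=[9, 10], scale=0.5)
```

Keeping rules as small, composable edits like this is what makes the rule table on the right possible: each row is one call, and re-applying them in order reproduces the edited dataset.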
The financial result of your changes to the interval data is provided by the cost stack, which shows a breakdown of the costs and compares it to the baseline (the original dataset).
Once you have applied one or more rules to the dataset you can view your results. This produces an overall score, which is used to rank the players on the scoreboard. The score is dollar savings per kWh moved, and represents how efficiently you reduced your energy use.
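The scoring arithmetic is straightforward. In this sketch, “kWh moved” is interpreted as the total absolute per-interval change in consumption – an assumption on our part, since the post doesn’t spell out the exact definition:

```python
def energy_management_score(baseline_cost: float, edited_cost: float,
                            baseline_kwh: list, edited_kwh: list) -> float:
    """Dollar savings per kWh moved. 'kWh moved' is taken here as the
    total absolute per-interval change in consumption (an assumption;
    the exact definition may differ in the app)."""
    moved = sum(abs(b - e) for b, e in zip(baseline_kwh, edited_kwh))
    if moved == 0:
        return 0.0  # no edits made, so no score
    savings = baseline_cost - edited_cost
    return savings / moved
```

Normalising by kWh moved rewards small, well-targeted edits over blunt across-the-board cuts, which is what makes the leaderboard interesting.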
There are seven independent graphs representing the average weekly load shape. From a user’s perspective the graphs are continuous, so when they make an edit it should be reflected in all seven. That means when the user clicks and drags a point we have to listen to that event and manually update the other six graphs in real time. This is a very expensive operation due to the sheer number of events, calculations and renders going on; you will notice a slight delay in the other graphs when changing the curve, but fortunately it isn’t large enough to annoy the user. I suspect Vue handles state changes quite efficiently and intuitively. I believe that if we had tried to do something similar in React it would have been very difficult, and we would have constantly hit the infinite re-render issue (https://stackoverflow.com/questions/48497358/reactjs-maximum-update-depth-exceeded-error) that is so common in React but not in Vue.
When working with large datasets and dynamic graphs we have to keep efficiency in mind, otherwise interacting with the graphs becomes frustrating. The ideal situation from the frontend’s perspective would be to query the backend for all the data every time the user interacts with the graph, so that the code for filtering and editing the data lives in one place. However, we cannot do this because there is a substantial time lag when requesting data from a server. This means we have to keep our own in-memory store of the dataset and perform the operations client-side, which results in duplicated logic between the frontend and the backend and further complicates the frontend code.
We like the idea of being able to play around with your interval data and quickly find out the financials. You can instantly get the answer to questions like ‘how much would I save if I reduced my energy consumption at night?’. We can take what we have with the interactive graphs and allow users to upload their own interval data instead of choosing from a set list provided by us. They can then make edits to their data and find out the financial impact of those changes.