The Early Innings Of Big Data On The Grid
“We’re a superpower, but we have a third world grid,” said Bill Richardson, Bill Clinton’s Secretary of Energy, in the wake of the Northeast Blackout of 2003. Richardson wanted more transmission lines and stricter national reliability standards to prevent another massive power outage. As it turned out, however, the blackout – triggered by an overloaded transmission line in Ohio sagging into the branches of a tree – could have been contained locally but for a bug in the software running on the local distribution system. In the decade since Richardson spoke, we have learned that software can do a lot more than detect overloaded lines.
The physical infrastructure projects that Richardson wanted are necessary, but they cost tens of billions of dollars annually and will take years to implement. In the short term, enter big data. Today, there are more than 43 million smart meters on the grid. The granular data these meters provide – reading load every 15 minutes, or in some cases more often – allows for cheaper, higher-return software programs on the grid. On an individual basis, a statistician can forecast demand peaks that may overwhelm a grid operator’s capacity, or an electrical engineer can find an unaccounted-for load that may represent power theft. The problem is doing these analyses at the massive scale of the electrical grid. The US power grid is by some calculations “the largest machine in the world,” composed of over 2.7 million miles of power lines.
I spent this past summer as an intern on the Analytics team at AutoGrid Systems, Inc., a start-up founded in 2012 to build tools that solve this problem. The company uses the vast processing power of cloud servers to analyze this data, replacing the rules of thumb utilities use to estimate future energy demand from historical data with sophisticated prediction algorithms. It builds its algorithms for distributed computing, so that as more data comes off the grid, AutoGrid can add a proportional number of servers and continue analyzing that data at the same speed. Costs therefore grow linearly or slower with size, thanks to efficiencies at scale, and performance does not suffer. AutoGrid’s energy data platform can already make over one million energy forecasts in ten minutes.
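To make the forecasting problem concrete, here is a minimal sketch of one of the simplest possible approaches: predicting tomorrow's peak from the daily peaks in a week of 15-minute interval readings. This is illustrative only; AutoGrid's actual algorithms are far more sophisticated, and the load figures below are hypothetical.

```python
def peak_forecast(readings_kw, interval_min=15, days=7):
    """Forecast tomorrow's peak load (kW) as the average of the daily
    peaks observed over the past `days` days of interval readings."""
    per_day = (24 * 60) // interval_min          # 96 readings per day at 15 min
    daily_peaks = []
    for d in range(days):
        day = readings_kw[d * per_day:(d + 1) * per_day]
        daily_peaks.append(max(day))
    return sum(daily_peaks) / len(daily_peaks)

# Hypothetical week of data: a flat 2 kW base load with a 5 kW evening peak.
week = ([2.0] * 72 + [5.0] * 8 + [2.0] * 16) * 7
print(peak_forecast(week))   # 5.0
```

The point of doing this with real algorithms at scale is that the same loop must run for millions of meters at once, which is where the distributed architecture described above comes in.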
The company grew out of Amit Narayan’s work as Director of Smart Grid Research in Modeling & Simulation at Stanford. AutoGrid’s flagship software, DROMS, allows utilities to forecast demand peaks and manage demand response events by sending alerts to their engineers. Its second major software application, ECO, alerts large building owners when they are at risk of raising their peak load for the month so they can shift load away from that time, saving money on the “demand charge” that many utilities charge based on the monthly peak.
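The economics behind ECO's alerts are easy to see with a toy calculation. The sketch below uses an assumed demand rate and hypothetical load figures, not any real utility's tariff: because the demand charge is billed on the single highest interval of the month, shaving one brief spike can cut the bill substantially.

```python
DEMAND_RATE = 15.0   # assumed tariff: $ per kW of monthly peak demand

def demand_charge(interval_loads_kw):
    """Demand charge billed on the highest interval reading of the month."""
    return max(interval_loads_kw) * DEMAND_RATE

# A month of 15-minute intervals: 300 kW baseline with one brief 450 kW spike.
month = [300.0] * 2876 + [450.0] * 4
print(demand_charge(month))            # 6750.0

# Shifting that spike down to the 300 kW baseline cuts the bill by a third.
shifted = [300.0] * 2880
print(demand_charge(shifted))          # 4500.0
```

One hour of high load out of a 30-day month sets the entire charge, which is why a timely alert to the building manager is worth real money.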
Big data could do more. My experience this summer was that AutoGrid’s data scientists could think of a nearly unending list of problems big data software could solve; the limit on how quickly we offered new functions was our own bandwidth to develop software, not a shortage of good ideas. One example: in much of the world, a significant portion of power is stolen by amateur grid engineers who route circuits around the meter or build whole new extensions to the grid. I worked in Dar es Salaam, Tanzania, for a year, and, anecdotally, this was a readily available career path for the national utility’s engineers after they left the company. Bloomberg Businessweek estimates the cost of energy theft at $17 billion per year in India alone. A machine learning model could be trained on existing cases of power theft to detect the irregularities in power flow that represent an unaccounted-for load, then scour the grid for likely points of theft, dispatching “revenue assurance” teams to investigate the largest and most likely.
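The core of the theft-detection idea in the paragraph above can be sketched in a few lines: compare the energy supplied at a distribution transformer with the sum of its downstream meter readings, then rank customers by how suspicious their consumption history looks. A production system would use a trained ML model over many features; the two heuristics, meter names, and kWh figures below are purely illustrative.

```python
def loss_ratio(supplied_kwh, metered_kwh):
    """Fraction of energy supplied to a feeder that no meter accounts for."""
    return (supplied_kwh - sum(metered_kwh)) / supplied_kwh

def rank_suspects(meters):
    """Rank meters by how far current consumption fell versus their own
    history; a sharp drop alongside high feeder losses is one classic
    signature of a circuit routed around the meter."""
    scores = {}
    for name, (history_kwh, current_kwh) in meters.items():
        baseline = sum(history_kwh) / len(history_kwh)
        scores[name] = (baseline - current_kwh) / baseline
    return sorted(scores, key=scores.get, reverse=True)

feeder = {"A": ([400, 410, 395], 405),   # steady consumption
          "B": ([380, 390, 400], 120),   # sudden ~70% drop: suspicious
          "C": ([150, 160, 155], 150)}
print(loss_ratio(1000, [405, 120, 150]))   # 0.325: a third unaccounted for
print(rank_suspects(feeder)[0])            # B
```

In practice, labeled theft cases would replace these hand-written rules, but the output is the same: a ranked list telling revenue assurance teams where to look first.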
Other data ideas being explored around the industry include advanced maintenance detection, more sophisticated billing, software to optimize consumer response to real-time electricity pricing if and when it arrives, and smart devices that fold energy efficiency into broader performance gains. (To use an example from one of my courses at Stanford that can only be described as adorable: an oven whose algorithms both respond to demand response signals and vary temperature and humidity to bake the perfect chocolate chip cookie.)
Along with AutoGrid, a host of other companies have thrown their hats into the big-data-for-energy ring. Pure-play analytics companies include C3, Gridium, and GridPoint. Meanwhile, companies that already sell an energy product – Nest Labs (owned by Google) with its thermostats, OPower with its improved electricity bills – have built sophisticated data science teams to analyze the data they already collect. To feed these companies data, Silver Spring Networks (SSN) and other advanced metering companies are building and installing smart meters that read energy use many times per day, along with software that connects them into “smart networks” on the grid. Larger, older engineering companies – GE, Siemens, Oracle, and the like – already provide most of the hardware that smart grid software works with. They may build (or acquire) data science teams to add software to their offerings, or they may partner with the existing software providers.
As this industry emerges, major questions remain about what it will look like at maturity: will variations in energy use come from turning off electrical appliances, or from on-site storage systems, currently being pushed by companies like Stem and Primus Power? Will companies like AutoGrid function by sending building managers messages advising them to reduce load, which is called “behavioral demand response,” or by actually controlling building equipment remotely, a capability known as “direct load control”? Will companies like Nest that offer a combined hardware-software solution win out, or are open platforms like AutoGrid’s, which work with any hardware, more efficient?
On top of this, privacy questions are visible on the horizon, closing fast. With granular enough electricity data, an analytics team may be able to back-calculate everything from the type of equipment operating on a tech company’s campus to the times of day the members of a family enter and leave their house. My (completely non-data-driven) opinion is that the resistance to smart metering that has sprung up, in the form of websites like stopsmartmeters.org, is overblown; online tracking provides, and will always provide, far more detailed and troubling insight into our personal lives than energy use data can. Still, we as an industry must build technical safeguards against misuse of customers’ energy data.
My one summer interning in the industry leaves me woefully unqualified to predict the future of the grid. In my stead, AutoGrid’s Director of Marketing and Ecosystem Partnerships, Sandra Kwak, offers a compelling vision of what all of this adds up to. When asked about the purpose of AutoGrid, in an abstract sense, she says, “We are contributing to the democratization of energy. More transparency for utilities and consumers allows us all to transform our relationship with energy. We will all have much more choice in making tradeoffs between the timing and pricing of our energy use, and the associated CO2 emissions.”
Cover image "blue grid" by Torley Olmstead is licensed under CC BY-SA 2.0.