
  • Imbalance at Risk: A Key Tool in Managing Power Market Imbalances

    Value at Risk (VaR) is a well-known and commonly used metric for measuring the financial risk in an asset position in market-based industries. The power markets are no different, and VaR is well established in the electricity sector as a measure of risk in financial and physical portfolios. In these markets VaR has traditionally focused on day-ahead market exposure - largely because this is where most market exposure is located. But as power producers and retailers become more exposed to the imbalance markets, there is an increasing need to both measure and manage imbalance risk.

    Imbalance at Risk

    At Optimeering, we use the concept of Imbalance at Risk (IaR) to do this. Simply put, IaR measures your risk of losses in the imbalance market, arising from your physical portfolio imbalances. In the past, this risk has been small and largely ignored - especially since over time imbalance losses have often later been compensated for by imbalance gains. However, with increasing volatility in the imbalance markets, ignoring this risk is getting, well, increasingly risky.

    IaR allows you to quickly quantify imbalance loss exposure. We start by choosing a level of statistical certainty (e.g. 5%). The IaR is then the imbalance loss or cost that you have a 5% chance of exceeding over the time period in question. So, for example, we could calculate the IaR for a wind portfolio for tomorrow. If this came out at €25,000, you would expect a 5% chance that your imbalance costs for tomorrow would exceed €25,000.

    Calculating IaR

    IaR can be calculated in a number of ways. Perhaps the simplest is to base the calculation on historical imbalance price movements and distributions. However, this has a number of weaknesses, not least the fact that the imbalance prices and volatilities we have experienced over the past year or two are substantially different from historical levels. It also fails to take account of market information we can use to set our expectations around the near-term distribution of imbalance prices. Using such information - for example from an accurate imbalance price distribution forecast - can provide more insight than a vanilla historical-price-based measure. Forecast IaR (FIaR) can be a powerful tool for managing short-term imbalance risk.

    What time period should we look at?

    A quick answer? Multiple time frames. IaR calculated over the medium term - months or years - can give you an indication of your overall financial exposure, and take account of the fact that your imbalance gains may largely offset your losses over time. This is especially true if your imbalance volumes are not correlated with imbalance prices. However, this is not always the case - we should not expect, for example, that imbalances in a large wind portfolio in a given price area are uncorrelated with imbalance price levels and volatility in the same area. Short-term time horizons - for example same-day, or next-day - are useful when it comes to active management of these risks. Why? Well, let's say we are able to measure FIaR for today using a good forecast. By actively managing this exposure in those days and hours where IaR is large enough, we can minimise losses without (hopefully) impacting gains, with a consequent improvement in bottom-line portfolio return.

    [Example FIaR visualisation]
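    To make the definition concrete, here is a minimal sketch of how an IaR figure could be computed from a sample of forecast or simulated imbalance costs. The function name, the synthetic cost distribution and the 5% level are illustrative assumptions, not our production methodology.

    ```python
    import numpy as np

    def imbalance_at_risk(cost_samples, level=0.05):
        """IaR at the given exceedance level: the imbalance cost that is
        exceeded with probability `level` (e.g. 5%).

        cost_samples: simulated or forecast imbalance costs for the period,
        with positive values representing losses.
        """
        # The cost exceeded with probability `level` is the (1 - level)
        # quantile of the cost distribution.
        return np.quantile(cost_samples, 1.0 - level)

    # Illustrative example: 10,000 synthetic next-day imbalance costs (EUR)
    rng = np.random.default_rng(42)
    costs = rng.normal(loc=5_000, scale=12_000, size=10_000)

    iar_5pct = imbalance_at_risk(costs, level=0.05)
    print(f"5% IaR: EUR {iar_5pct:,.0f}")  # roughly EUR 25,000 for this synthetic distribution
    ```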
    Using IaR to manage imbalances

    One example strategy is to use short-term IaR to select hours or periods in which to use the intraday market to close out an expected imbalance. For example, if IaR in a given hour exceeds a threshold, the intraday market could be used to close out or reduce the size of the imbalance position. An extension of this is to look at IaR over a number of hours (e.g. the front 12 hours). If IaR again exceeds a threshold, the intraday market over the coming 12 hours could be actively used to reduce IaR. This approach could involve, for example, intraday trades in hours where imbalance exposure is low, but that overall result in a reduction in IaR over the whole period. Using IaR in this way can be an effective method for managing imbalance risk, by focusing trading or mitigation activity on those hours and periods where exposure is highest. It lends itself to both manual trading workflows (as you only need to focus on a few hours or periods) as well as automated imbalance management workflows. A sketch of the threshold rule follows the summary below.

    Summary

    Imbalance risk is getting real. More and more power market actors are experiencing periods where imbalance costs have hurt - and that is bottom-line pain that everyone wants to avoid. IaR, as an extension of the familiar VaR methodology, is a simple yet effective way of measuring and managing imbalance risks, and it lends itself to both manual and automatic imbalance management workflows. If you are interested in finding out more about forecasting imbalance markets, calculating IaR and FIaR, and using them to manage your imbalance risk, contact us at Optimeering.
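    Returning to the threshold rule described above: the sketch below flags hours where a hypothetical hourly FIaR forecast exceeds a risk tolerance, marking them as candidates for closing out the expected imbalance intraday. All names, values and the threshold itself are illustrative assumptions.

    ```python
    # Minimal sketch of threshold-based imbalance management, assuming we
    # already have an hourly FIaR forecast (EUR) and an expected imbalance (MWh).

    HOURLY_IAR_THRESHOLD_EUR = 10_000  # illustrative risk tolerance

    def hours_to_hedge(hourly_fiar, expected_imbalance_mwh):
        """Return the hours where forecast IaR exceeds the threshold,
        together with the volume that could be closed out intraday."""
        candidates = []
        for hour, fiar in enumerate(hourly_fiar):
            if fiar > HOURLY_IAR_THRESHOLD_EUR:
                candidates.append((hour, expected_imbalance_mwh[hour]))
        return candidates

    # Example: the front 12 hours (synthetic numbers)
    fiar = [2_000, 3_500, 1_200, 15_000, 22_000, 4_000,
            1_000, 800, 18_500, 2_200, 900, 1_500]
    imbalance = [5, -8, 2, 40, 55, -10, 3, 1, -35, 6, -2, 4]

    for hour, volume in hours_to_hedge(fiar, imbalance):
        print(f"Hour {hour}: consider closing out {volume} MWh intraday")
    ```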

  • Optimeering raises new growth capital

    Optimeering AS has raised NOK 18 million from existing shareholder Lyse Vekst AS and new shareholders Hafslund Invest AS and Farvatn Venture AS. Optimeering delivers SaaS-based advanced AI solutions to power market actors. The company has existing customer relationships with leading power market actors in the Nordics and a healthy pipeline of prospective new customers. Optimeering has eight employees, established revenues and is growing fast. The proceeds from the new share issue will be used to support the continued growth and development of Optimeering into a pan-European market leader. The first step towards this goal will be to expand our sales and solution development teams over the coming months.

    "We are very pleased to welcome Hafslund Invest and Farvatn Venture as new shareholders, and for the continued support of Lyse Vekst. We are excited to welcome our investors onboard to help us with our mission of enabling a 100% renewable electricity industry using the power of AI," comments Optimeering founder and CEO Gavin Bell.

    "We are very happy to join forces with Optimeering and look forward to supporting their accelerating growth," comments Investment Manager Iris Auran of Hafslund Invest.

    "Optimeering has the smartest team in the industry delivering innovative solutions for a sustainable power industry. We are super excited about what they will achieve over the next few years," comments Investment Manager Karina Halstensen Birkelund of Farvatn Venture.

    "We invested in Optimeering in 2020 and today's new share issue confirms our belief in the company's solutions and ambition," comments Investment Manager Ane Christophersen of Lyse Vekst AS.

    For further comments: Gavin Jon Bell, Optimeering founder and CEO, +47 950 27 979

  • The information advantages of multiple forecasts: why many models are better than one

    Note: this post assumes some familiarity with the Nordic RK ("Regulerkraft" or "regulating energy") power market. For those without such a background who are interested in learning more, we would direct you to [Regulerkraftmarkedet | Statnett](https://www.statnett.no/for-aktorer-i-kraftbransjen/systemansvaret/kraftmarkedet/reservemarkeder/tertiarreserver/regulerkraftmarkedet/) (in Norwegian).

    Our experience in using ML for operational forecasting has shown us that no single ML forecasting model is great at forecasting all market behaviour in all time periods. Why? Well, in developing and training such a model, you have to make trade-offs. A single model needs to perform well in all time periods - otherwise its unreliability would make it difficult to use consistently. More often than not, trying to develop a single model means you have to trade off accuracy in one aspect or situation against accuracy in another. This gives models that are just OK - never really poor, but never able to perform well.

    Instead, our approach - and what we deliver in our fab:app forecasts - is to use multiple targeted or specialised models, where each one is designed to focus on one aspect of the market in question. So, for example, for our Nordic RK market forecasts, we have one ensemble model that forecasts the price distribution in the upcoming hours, and another set of models that focuses on large RK-spot price spreads, among others.

    To illustrate the benefits of this, we can examine the fab:rk Nordic price forecasts for NO2 from the morning of May 13, 2022. At 6am, our quantile price model forecast RK prices to most likely be distributed around the day-ahead price level, with the median forecast at or close to spot. This is what occurred until 13:00, when the market became up-regulated with a price of 174 €/MWh. Such a price movement was not impossible according to the quantile forecast; however, the quantile forecast in and of itself provided little indication that this might occur.

    So, what can we do about this? This is where our LPI - Large Price Index - models come into play. These models focus solely on indicating the likelihood of large RK price spreads to spot. An index value of 0 indicates very low likelihood, whilst a value of 10 indicates high likelihood. We provide two LPIs, each produced by its own specialised models: a base, and a conditional. The base LPI gives an indication of the overall likelihood that the market will be up (or down) regulated AND that the RK-spot spread will be large; the conditional gives the likelihood of a large RK-spot spread _if_ the market is up (or down) regulated. The "Up" base LPI answers the question: what is the likelihood that the market is up-regulated AND the RK-spot spread is high? The conditional "Up" LPI answers the question: if the market ends up being up-regulated, what is the chance that the RK-spot spread is high?

    Let's have a look at both LPIs, also produced at 6am on May 13th - firstly the base up LPI, and secondly the conditional up LPI. The base up LPI forecast very low chances of up-regulation and large deviations in the morning hours, followed by an increased but still moderate likelihood from 12 noon. Similarly, the conditional up LPI forecast a low likelihood of large RK-spot spreads in the morning hours even if the market were up-regulated. From 12 noon, however, the likelihood was that, if the market were up-regulated, the RK-spot spread would be substantial. This is indeed what occurred.
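    The two indices are linked by basic probability: the joint likelihood of up-regulation with a large spread equals the probability of up-regulation times the conditional likelihood of a large spread given up-regulation. A minimal sketch, assuming - purely for illustration - that each index maps a probability linearly onto the 0-10 scale (this is not the actual fab:rk index construction):

    ```python
    def to_index(probability):
        """Map a probability (0-1) onto a 0-10 index.
        Illustrative linear scaling only."""
        return round(10 * probability, 1)

    # P(up-regulated AND large spread) = P(up-regulated) * P(large spread | up-regulated)
    p_up = 0.6              # assumed chance the hour is up-regulated
    p_large_given_up = 0.8  # assumed chance of a large RK-spot spread if up-regulated

    base_lpi = to_index(p_up * p_large_given_up)  # joint likelihood -> 4.8
    conditional_lpi = to_index(p_large_given_up)  # conditional likelihood -> 8.0

    print(f"Base up LPI: {base_lpi}, Conditional up LPI: {conditional_lpi}")
    ```

    Note how a moderate base index can coexist with a high conditional index - exactly the May 13th afternoon pattern described above.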
    This illustrates very clearly the advantages of multiple, specialised models over a single forecast. Alone, the quantile model provided little distinction between the morning hours and those in the afternoon. Adding the LPI forecasts provided a strong separation between the two time periods, and indicated a substantial risk in the afternoon of high RK prices should the market require up-regulation. Providing this type of additional subtlety in a forecast is essential if the forecast is to be consistently valuable. By using specialised models designed for specific and different forecasting tasks, we can gain benefits and insights that single models are unable to provide. And by designing models specifically for these tasks, we ensure they are as good as they can be at them.

  • Forecasting in power markets: beating the naive forecast

    In our previous post, we examined the naive forecast, and in particular its performance for forecasting market outturns (such as market prices). We also compared a simple but "typical" machine learning model (in this case a feed-forward neural network) to the naive forecast - and saw that such models often struggle to perform better than the naive. If you haven't read it, check it out here.

    The obvious follow-up question for anyone interested in generating forecasts that add value is therefore: how do we beat the naive forecast? A very similar, but perhaps even more useful, question is: how do we add value over the naive forecast? Broadly speaking, we have four possibilities.

    1. Use forward-looking information in the model. Unsurprisingly, adding future information related to the period you are forecasting can improve forecast accuracy. This can include forecasts of underlying drivers (e.g. weather forecasts), or future values (such as forward curve data) that have already been calculated. Be aware though of the danger here - if such forward information is itself based on (or is no better than) naive forecasts, then your modified forecast will still essentially be performing naive forecasting. As always, be careful of apparent complexity that delivers little value. As an example of value added via forward-looking data, we tested adding a wind forecast feature to our simple NN model from our previous post. This improves overall model performance and - in some periods at least - results in forecasts that predict the timing of price movements, rather than lagging these changes. Looking at the performance metrics for this new "augmented" model confirms this.

    2. Better model architecture. More advanced model architectures than the simple feed-forward networks discussed so far may be able to extract additional predictive power out of the data, both historical and forward-looking. However, a common challenge in using these more complex models for power market forecasting is the lack of data. Taking our example of daily price forecasting, we have only 365 data points per year. Ten years of data gives only 3652 or 3653 data points. Liquid traded power markets have existed in many countries for a maximum of only 20 years or so - giving, at best, around 7000 data points. And that ignores the fact that these markets have changed dramatically over this time, driven by such things as changes in market rules, new drivers such as CO2 prices, and increases in renewable generation. Patterns in the data that once held true may no longer do so, and features that are important today may not have existed even a few years ago. Thus in practice we often have very limited data sets with which to build and test models - often much too short for advanced models to be trained and used successfully. You can train a complex model architecture on limited data, but you run a real risk of overfitting (among other things). You do not want your model to simply learn the training data set exactly. Such a model may be able to replicate the training data OK, but that's all - any market situation it has not seen exactly in the past can result in very poor predictive accuracy.

    3. Better feature development from historical data. An alternative to improved architecture is to use domain insight to propose and develop more complex derived features from the data, within a simpler model architecture. However, this is often easier said than done, and often requires deep domain knowledge to identify such patterns. This is one reason automatic model-fitting methodologies, or models built by non-domain experts, can perform poorly in practice. To illustrate this approach, we constructed several data features based on the tendency for power prices to "mean revert". These features were then added to the original simple feed-forward neural network model presented in our previous blog post. The idea is that this enables the model to "learn" the mean-reversion tendency in prices and apply it when constructing forecasts. This simple change provides a small but measurable improvement in model accuracy over the simple network from earlier. In our experience, such "expertise" features can often compensate for the lack of training data in many power applications (see the sketch after this list).

    4. Go beyond point forecasts. Forecasting future distributions of outcomes, or providing estimates of forecast confidence, can add substantial value, even if the underlying forecast (or "expected value forecast") performs similarly to the naive forecast. The change in "distribution spread" (as, say, prediction intervals or confidence intervals) over time, for example, provides a clear indication of forecast accuracy as it changes, and aids forecast interpretation. We plan to address this issue in more detail in a later post - "Beyond the point forecast: how to drive value via predicting distributions."
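    To illustrate point 3, here is a minimal sketch of the kind of mean-reversion features that could be derived from a daily price history and fed to a simple model. The window lengths and feature definitions are illustrative assumptions, not the features used in our models.

    ```python
    import numpy as np
    import pandas as pd

    def mean_reversion_features(prices: pd.Series) -> pd.DataFrame:
        """Derive simple features capturing the tendency of prices to
        revert towards a recent mean (illustrative 30-day window)."""
        feats = pd.DataFrame(index=prices.index)
        rolling_mean = prices.rolling(30).mean()
        # How far the price sits above/below its recent mean, in absolute
        # terms and in units of recent volatility.
        feats["gap_to_30d_mean"] = prices - rolling_mean
        feats["gap_in_stdevs"] = feats["gap_to_30d_mean"] / prices.rolling(30).std()
        # Run length of consecutive days above the recent mean.
        above = (prices > rolling_mean).astype(int)
        feats["days_above_mean"] = above.groupby((above == 0).cumsum()).cumsum()
        return feats

    # Example usage on synthetic daily prices
    idx = pd.date_range("2021-01-01", periods=200, freq="D")
    rng = np.random.default_rng(0)
    prices = pd.Series(40 + 0.2 * rng.normal(0, 5, 200).cumsum(), index=idx)
    print(mean_reversion_features(prices).tail())
    ```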
    Benchmarking

    Finally, no matter what model you use, you should always benchmark your forecasts against the naive forecast. Ideally you should also benchmark against "modified naive" forecasts such as moving average, exponential smoothing, and simple pattern (e.g. seasonality) models. This may be no more complex than defining a set of standard models and metrics, fitting these to each quantity you will forecast, and comparing to your more advanced model(s). It is important to define these metrics so they reflect your use case. For example, consider two price forecast models. The first has a lower mean absolute error, but occasionally its forecast error is substantial. The second has a higher mean absolute error, but rarely if ever has very large forecast errors. The second model may be preferable, as large errors can lead to very poor decisions and substantial losses, whereas small errors may not. A sketch of such a benchmark follows the summary below.

    Summary

    Performing better than a naive or simple model can often be surprisingly difficult in power market forecasting. This is not a bad thing - it is evidence, among other things, that the markets are well functioning overall, and that there are few or no obvious patterns that are not already known and exploited. Forecasting with machine learning in particular is also challenged by the limited amount of data available. However, it is possible, via good model design and careful feature (data) development and selection, to develop models that perform better and are more robust than naive or simple "modified naive" models. All it takes (!) is the combination of knowledgeable modellers and analysts, readily available data, a well-defined workflow and a modelling system that makes it easy to develop, test and deploy... a subject that we will come back to later.
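    As a concrete illustration of the benchmarking point above, here is a minimal sketch comparing a candidate forecast against the naive benchmark on both average and worst-case error, reflecting the use-case discussion (all data synthetic):

    ```python
    import numpy as np

    def benchmark(actual, candidate, naive):
        """Compare a candidate forecast to the naive benchmark on mean
        absolute error and on the largest single error."""
        metrics = {}
        for name, forecast in [("candidate", candidate), ("naive", naive)]:
            err = np.abs(np.asarray(actual) - np.asarray(forecast))
            metrics[name] = {"MAE": err.mean(), "max_error": err.max()}
        return metrics

    # Illustrative daily prices; the naive forecast is the previous day's value
    actual = np.array([42.0, 45.0, 44.0, 60.0, 58.0])
    naive = np.array([41.0, 42.0, 45.0, 44.0, 60.0])
    candidate = np.array([43.0, 44.5, 45.0, 52.0, 59.0])

    for name, m in benchmark(actual, candidate, naive).items():
        print(f"{name}: MAE={m['MAE']:.2f}, max error={m['max_error']:.2f}")
    ```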

  • Forecasting in power markets: how good is the naive forecast?

    In time series forecasting, the naive forecast - where the forecast for all future periods is set equal to the value from the current period - is the simplest of all forecasting methods. It generally gets a bad rap, due in large part to this simplicity: how can such a basic methodology be any good? In many applications, its reputation is justified by - at best - average performance, especially when forecasting many periods ahead. Forecasting inventories (such as reservoir storage levels) is one obvious example: forecasting inventory levels in 6 months to be equal to today's leaves much information on the table that could easily be used to produce a better, more accurate forecast.

    However, in a number of applications the naive forecast performs surprisingly well. One example is market prices, of which power market prices are a case in point. Indeed, more complex forecasting methods often struggle to do better (and sometimes do even worse) than the naive in these cases. Why might this be? A (partial) answer lies in the markets themselves. In well-functioning markets, the market price at any time represents the summation of all information available to the market. It can be thought of as an aggregate market "view" of what the price is, given such "underlying" information. For the next period, assuming relatively moderate changes in the underlying information, we may reasonably expect relatively moderate changes in market prices. In such a case, the naive forecast can be a fairly good short-term predictor.

    A Simple Example

    To illustrate this, let's look at the day-ahead power price in the Nordic markets. To make things a little simpler, we drop the weekend data and build a simple naive model to predict the next weekday's system spot price:

    forecast(t+1) = observed(t)

    Thus, for example, the forecast for Thursday is Wednesday's price, and for Monday it is the previous Friday's price. The predictions for our test dataset (May-August 2021) are shown below. We can test this against a simple feed-forward neural network model, trained on a set of historical price features that exhibit some (auto)correlation with the current price. The forecasts from this model for our test data set are also given below. There is very little difference in the forecasts of the two models - indeed, the NN model has essentially learned a naive forecast with occasional small modifications. We can see this in more detail by examining a set of forecast metrics for the two models.

    This is not an atypical result in time-series forecasting using standardised machine learning: models based on recent historical data and historical data patterns end up learning what is essentially a simple ARIMA-type model. The naive forecast is of course already a simple autoregressive model, and thus performs fairly similarly to these simple ML alternatives. This highlights a very common issue with many ML time-series forecasting models (in power markets and elsewhere): they appear fairly accurate, but in reality offer little more (or even nothing more) than the naive forecast. And it raises the obvious next question: how do we beat the naive forecast? We will tackle this in our next post.
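    For the curious, the weekday-skipping naive model above can be sketched in a few lines on a business-day series (synthetic data, for illustration only):

    ```python
    import numpy as np
    import pandas as pd

    # A business-day series automatically skips weekends, so Monday's naive
    # forecast becomes the previous Friday's observed price.
    idx = pd.bdate_range("2021-05-03", periods=10)  # weekdays only
    rng = np.random.default_rng(1)
    prices = pd.Series((40 + rng.normal(0, 3, 10)).round(2), index=idx)

    # forecast(t+1) = observed(t): shift by one position in the weekday index
    naive_forecast = prices.shift(1)

    print(pd.DataFrame({"observed": prices, "naive_forecast": naive_forecast}))
    ```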

  • Optimeering successfully closes early round financing

    We have just successfully closed our "late seed" round of financing, and are really excited about the opportunities this gives us as a growing organisation. We are very pleased to welcome our two investors, Lyse Energi and Sysco, to the Optimeering family!

  • Optimeering awarded the Smart Grid Center's Innovation Prize for 2020

    We are very pleased and humbled to be awarded the Norwegian Smart Grid Center's Innovation Prize for 2020, for our work together with Statnett and NTNU on Impala for real-time grid imbalance prediction and mitigation! A special and warm congratulations to the Impala team, and especially Karan Kathuria from Optimeering - you guys did a fantastic job, especially given the high calibre of the other finalists. Read more here (in Norwegian).

  • Impala research paper published

    We recently published a research paper describing the Impala algorithm for forecasting intra-hour power system imbalances, together with our collaborating researchers from NTNU in Trondheim. Check it out here.
