AI is maximizing Solar for my EV — Part2

An AI application predicting next day’s solar production for a given solar array and configuring EV charger accordingly.

pascal boudalier
7 min read · May 25, 2024

See Part1, Part3

Training.

The model is based on an LSTM (Long Short Term Memory) network with an attention layer. This is a mature neural network architecture well suited to learning from time series.
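To make this concrete, here is a minimal sketch of such an architecture in Keras. The layer sizes, the feature count, and the idea of combining a classification head and a regression head in a single model are illustrative assumptions, not the project's exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative shapes: 24 hourly time steps, 5 weather/production features per step.
TIMESTEPS, FEATURES = 24, 5
NUM_CLASSES = 4   # production buckets, e.g. <8, 8-17, 17-25, >25 kWh

inputs = layers.Input(shape=(TIMESTEPS, FEATURES))
# The LSTM returns the full sequence so the attention layer can weigh every time step.
x = layers.LSTM(64, return_sequences=True)(inputs)
# Self-attention over the LSTM outputs.
attn = layers.Attention()([x, x])
x = layers.GlobalAveragePooling1D()(attn)

class_out = layers.Dense(NUM_CLASSES, activation="softmax", name="class")(x)  # classification head
reg_out = layers.Dense(1, name="kwh")(x)                                      # regression head (kWh)

model = models.Model(inputs, [class_out, reg_out])
model.compile(
    optimizer="adam",
    loss={"class": "sparse_categorical_crossentropy", "kwh": "mse"},
    metrics={"class": "accuracy", "kwh": ["mae"]},
)
model.summary()
```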

On a modern laptop (Nvidia 4070 GPU), training with 24,000 sequences takes anywhere between 5 and 10 minutes.

Thanks to the Nvidia Jetson Nano’s GPU, the model is also regularly retrained on the edge (see later).

See also this notebook to train on Google’s Colab.

The big question: does this work?

How is my model doing? How correct are its predictions?

To answer those questions, I shall test the model on a test set: a small portion of the data (10%) that was NOT used during training, and therefore never seen by the model.

A dedicated test set is the only way to get a sense of what will happen once the model is deployed in the wild.
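As a sketch (assuming the sequences live in NumPy arrays; file names are hypothetical), holding out the most recent 10% chronologically keeps future data out of training:

```python
import numpy as np

# Hypothetical files holding the input sequences and next-day production targets.
X = np.load("sequences.npy")   # shape: (num_sequences, timesteps, features)
y = np.load("targets.npy")

# Keep the most recent 10% as a test set the model never sees during training.
split = int(len(X) * 0.9)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```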

So, what are the results?

When we predict classes, we typically look at accuracy, i.e. the % of correct predictions.

.predict(): 84% accuracy

The model makes the right classification 84% of the time.

When we predict a number, we rather look at errors, i.e. the difference between the prediction and the ground truth.

  • mae (mean absolute error): the average of the absolute errors (absolute so that positive errors do not cancel out negative ones)
  • rmse (root mean square error): square all the errors, average them, then take the square root. The “rmse” penalizes large errors more than “mae” does (see the short sketch after the results below).
mae : 1.482 kWh
rmse: 3.547 kWh
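For reference, both metrics take only a few lines of NumPy (a generic sketch, not the project's code):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean of absolute errors, so positive and negative errors do not cancel out.
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Square the errors, average them, then take the square root.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

example = mae(np.array([10.0, 20.0]), np.array([11.0, 18.0]))  # -> 1.5 kWh
```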

The prediction is on average “1.5 kWh away” from the truth (mae). With a production range of 0 to 30kWh, that’s ~5%.

Is this good? Is this bad?

I'll let you judge. My thoughts are summarized in the next paragraph.

At this point, I can say that the classification accuracy (84%) is definitely in a different league from picking a class at random (25%). The model has learned something.

We will see later whether we can improve predictive capability, and how.

Looking closer at the classification results.

A typical way to drill down into classification results is the confusion matrix, which provides a per-class view of accuracy.

Columns represent the predictions and rows the truth. The diagonal (top-left to bottom-right) corresponds to all instances of correct predictions (i.e. predicted = truth). All cells outside the diagonal are instances of incorrect predictions. Each cell contains the number of predictions of that kind, computed on the test set.
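Such a matrix can be produced with scikit-learn. In the sketch below, `y_test_cls` and `y_pred_cls` are hypothetical arrays of true and predicted production classes, and the class boundaries are illustrative:

```python
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

labels = ["<8 kWh", "8-17 kWh", "17-25 kWh", ">25 kWh"]   # illustrative class boundaries
cm = confusion_matrix(y_test_cls, y_pred_cls, labels=labels)

# Rows are the truth, columns are the predictions; the diagonal holds the correct predictions.
ConfusionMatrixDisplay(cm, display_labels=labels).plot()
```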

A few thoughts:

The model works very well for both very high production (>25kWh) and very low production (<8kWh):

  • It never makes stupid mistakes: it never ever predicts <8 kWh when the reality is > 25kWh (or vice versa)
  • It can be trusted: it is correct 88% of the time when predicting >25kWh and 87% of the time when predicting <8kWh.

It does not perform as well for the “mid” classes, but remains accurate more than 75% of the time in those classes.

Let’s remember our use case: I want to use prediction to decide how much grid energy to put overnight in my car.

The worst case scenario for me is an optimistic model, where the actual production is often lower than the prediction. In such cases the overnight charge is underestimated.

On the other hand, a conservative model, where the reality is higher than the prediction, is OK. In such cases, I have just underestimated the opportunity to charge for free the next day.

Let’s look at the cases where the model is correct or conservative:

correct or higher 91.179%

This means that more than 90% of the time, I can rely on the model to optimize my use of grid energy.
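For completeness, this figure falls straight out of the confusion matrix: with classes ordered from lowest to highest production, the “correct or conservative” cases are the diagonal plus the lower triangle (a sketch, assuming the matrix `cm` from above):

```python
import numpy as np

# cm: confusion matrix with rows = truth, columns = predictions,
# classes ordered from lowest to highest production.
correct_or_conservative = np.tril(cm).sum() / cm.sum()   # prediction <= truth
print(f"correct or higher {correct_or_conservative:.3%}")
```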

That works for me.

Looking closer at the regression results.

We can look at how the errors are distributed.

Positive and negative errors
Absolute errors
50% of absolute errors < 0.20 kWh
81.9% of absolute errors < 2.00 kWh
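These figures come straight from the distribution of absolute errors on the test set; a minimal sketch (variable names are hypothetical):

```python
import numpy as np

# Hypothetical arrays of predicted and measured next-day production, in kWh.
abs_errors = np.abs(y_pred_kwh - y_test_kwh)

print(f"median absolute error  : {np.median(abs_errors):.2f} kWh")
print(f"share of errors < 2 kWh: {np.mean(abs_errors < 2.0):.1%}")
```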

In summary, with regression, I am on average 1.5 kWh away from the ground truth (mae), and much closer to it (0.2kWh) 50% of the time.

That works for me as well.

Let’s use the application.

There is almost nothing for the user to do.

Every day, at sunset, the application (running on the Nvidia Jetson Nano) generates a prediction for the next day.

The user is informed of the result via push notification, and the mobile app’s prediction tab is updated.

prediction for tomorrow

The EV will automatically be charged overnight with the amount of energy corresponding to the prediction (assuming the prediction's confidence is above a given threshold).

The relation “prediction => energy” is configured in the mobile app’s charger tab.

The EV will be charged for 3 hours overnight if the prediction is between 8 and 17 kWh

The prediction’s confidence is a value between 0 and 1, and is returned by the deep learning model. 1 means the neural network is “fully confident” about its prediction.
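A sketch of this decision logic is shown below; the charge durations, kWh ranges, and the 0.7 confidence threshold are illustrative values, since the real mapping is whatever the user configures in the mobile app:

```python
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7   # assumption: minimum confidence required to act automatically

# Predicted production range (kWh) -> overnight grid charge duration (hours), example values.
CHARGE_HOURS = [
    ((0, 8), 5),     # little solar expected: charge a lot from the grid
    ((8, 17), 3),    # the example from the charger tab above
    ((17, 25), 1),
    ((25, 999), 0),  # plenty of solar expected: no overnight grid charge
]

def overnight_charge_hours(predicted_kwh: float, confidence: float) -> Optional[float]:
    """Return the charge duration in hours, or None to leave the decision to the user."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    for (low, high), hours in CHARGE_HOURS:
        if low <= predicted_kwh < high:
            return hours
    return None
```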

Using the mobile app, the user can override this automatic behavior to force the overnight charge on/off.

The manufacturer of my EV charger offers “some” level of programmatic access. I shall not elaborate on this aspect, as it is not the purpose of this article. The EV charging Python module can be swapped out to adapt to another brand.

How do I know it is working?

Even if I tested the model's accuracy after training, how do I know it is not going off the rails once deployed in the wild and exposed every day to data it has never seen? Actually, data nobody has ever seen, like... the future.

In other words, how do I verify that the model's predictive capability does not degrade over time?

For that, I need to perform an ongoing evaluation of the model.

Systematic postmortem for recent predictions.

Every day, the system automatically performs a postmortem by comparing what it has recently predicted to what actually happened.

This postmortem result is provided to the user as a push notification, and is available in the mobile app's prediction tab.

Postmortem for last 6 days

Postmortems are recorded as a rolling list for the last 6 days. A simple color code makes it possible to check at a glance what happened.
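A sketch of such a rolling postmortem (the 2 kWh tolerance and the status labels are illustrative; the app uses a color code):

```python
from collections import deque

ROLLING_DAYS = 6
postmortems = deque(maxlen=ROLLING_DAYS)   # automatically drops entries older than 6 days

def daily_postmortem(predicted_kwh: float, actual_kwh: float, tolerance: float = 2.0) -> str:
    """Compare yesterday's prediction to the measured production and record the outcome."""
    error = actual_kwh - predicted_kwh
    if abs(error) <= tolerance:
        status = "correct"
    elif error > 0:
        status = "conservative"   # reality higher than predicted: acceptable
    else:
        status = "optimistic"     # reality lower than predicted: the bad case
    postmortems.append({"predicted": predicted_kwh, "actual": actual_kwh, "status": status})
    return status
```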

Average accuracy since the application was installed is also provided.

Regular testing on all new data.

Every day, I get … a new daily set of meteorological and solar production data, which the model has never seen.

Every so often (every week or every month) I run this batch of “unseen” data through the model.

The model's average accuracy on this “unseen” data is presented in the mobile app's model tab.

This complements the 6-day postmortem view with an average view since the last training.
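With Keras, this periodic check is essentially one call to `model.evaluate()` on the batch of data accumulated since the last training (a sketch; file and variable names are hypothetical, and a single-output model is assumed for simplicity):

```python
import tensorflow as tf

# X_new / y_new: hypothetical arrays holding the sequences collected since the last training.
model = tf.keras.models.load_model("solar_model.keras")
results = model.evaluate(X_new, y_new, verbose=0, return_dict=True)
print(results)   # e.g. {'loss': ..., 'accuracy': ...}
```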

Testing on unseen data

If the predictive performance were to degrade, it would be time to go back to the drawing board.

In addition, this tab shows some key characteristics of the model currently being used (size, metrics).

Current model

Improvement forever.

This “unseen data” is also an opportunity to retrain the model on a larger data set.

Every so often (every month, every quarter), the model is retrained. The expectation is that it will improve over time since it has access to more training data.

This continuous retraining will run “forever”, without any user intervention.
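The retraining step itself can be as simple as reloading the current model and continuing to fit it on the enlarged data set (a sketch; epochs, batch size and file names are illustrative):

```python
import tensorflow as tf

# X_all / y_all: hypothetical arrays now including the data collected since the previous training.
model = tf.keras.models.load_model("solar_model.keras")
model.fit(X_all, y_all, epochs=20, batch_size=64, validation_split=0.1)
model.save("solar_model.keras")   # the application picks up the refreshed model
```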

Thanks to the Nvidia Jetson Nano's GPU, retraining is possible at the edge. It runs for ~1 hour. It is also possible to retrain on a Raspberry Pi (it takes ~4 hours).

That’s it.

As I did for another project (solar2heater, see below), I plan to write an update after one year of operation, to see how the model improved, and whether it saved me some money!

Want more details? Part3 describes the journey I followed to get to the best possible model.

Want more solar stuff? Please check Solar2heater and a quantified benefit analysis after running it for one year.


pascal boudalier

Tinkering with Raspberry PI, ESP32, RiscV, Solar, LifePo4, IoT, Zigbee, energy harvesting, Python, MicroPython, Keras, Tensorflow, tflite, TPU. Ex Intel and HP