# AI is maximizing Solar for my EV — Part 3

## An AI application that predicts the next day’s solar production for a given solar array and configures the EV charger accordingly.

See Part 1 and Part 2.

# Search for the best hyperparameters.

When designing a deep neural network, one has to decide how deep the network is (i.e. how many layers), how many neurons to use per layer, etc.

It is a balancing act. If the network is too small it will not learn. If it is too large, it will memorize rather than learn to generalize.

Those numbers (hyperparameters) can come from experience, intuition, black magic…

I started with the above, but was able to get a slightly better, more accurate model using KerasTuner.

KerasTuner automates the search for the best hyperparameters. Just define the search space, launch KerasTuner, and it will come back later with the hyperparameter combination leading to the best results.

```python
units = hp.Int("nb units", min_value=128, max_value=384, step=128)
num_layers = hp.Int("nb layers", min_value=1, max_value=2, step=1)
nb_dense = hp.Int("nb dense", min_value=0, max_value=128, step=128)
dropout_value = hp.Choice("dropout_value", values=[0.0, 0.3])
```

In the example above, there are 3 possible values for the number of units per LSTM layer, 2 for the number of LSTM layers, 2 for the number of neurons in the final classifier, and 2 for the use of dropout. So a total of 3x2x2x2 = 24 combinations for KerasTuner to analyze.
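The 24-combination count can be verified by enumerating the search space directly. A standalone sketch (plain Python, not KerasTuner code; the value lists simply mirror the `hp` definitions above):

```python
from itertools import product

# Discrete values implied by the hp.Int / hp.Choice definitions above
nb_units = [128, 256, 384]      # min 128, max 384, step 128
nb_layers = [1, 2]              # min 1, max 2, step 1
nb_dense = [0, 128]             # min 0, max 128, step 128
dropout_values = [0.0, 0.3]

combinations = list(product(nb_units, nb_layers, nb_dense, dropout_values))
print(len(combinations))  # 3 x 2 x 2 x 2 = 24
```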

KerasTuner does not just brutally try all combinations: it offers several search strategies, including random search, Hyperband (a tournament-style algorithm), and Bayesian optimization.

After a while, KerasTuner comes back:

```
Showing 3 best trials
Objective(name="val_mae", direction="min")

Trial 16 summary
Hyperparameters:
nb units: 384
nb layers: 1
nb dense: 0
dropout_value: 0.3
Score: 1.6457581520080566
...
```

# Search for the best inputs.

KerasTuner is used for hyperparameters, but what about the network’s inputs? For example: how many days in a sequence? Should wind direction be one of the input features? Should we use 4 classes or 5? Etc.

Solar2ev allows to explore those questions.

A simple configuration file defines the individual parameters to explore:

```python
days_in_seq_h = [3, 4, 5]
sampling_h = [1, 2]
# exploring the 2 variables above means 3x2 = 6 training runs
```

Solar2ev will fully train all combinations and report the results in a .xls file.

I typically sort the .xls by column to look at a particular metric.

ALL combinations of the search space are fully trained (it is more brutal than clever), so depending on the depth of the search space, expect a few hours of execution.
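The exhaustive sweep can be pictured as a product over the configured value lists. A simplified sketch (the dictionaries below just record which combination would be trained; in the real tool each one triggers a full training run):

```python
from itertools import product

# Values as they might appear in the configuration file
days_in_seq_h = [3, 4, 5]
sampling_h = [1, 2]

results = []
for days_in_seq, sampling in product(days_in_seq_h, sampling_h):
    # A full training run would happen here; we only record the combination.
    results.append({"days_in_seq": days_in_seq, "sampling": sampling})

print(len(results))  # 3 x 2 = 6 full trainings
```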

In particular, I can use this brutal search to explore different types of network inputs, including multi-heads:

```python
# one head, 5 values
('temp', 'pressure', 'production', 'sin_month', 'cos_month')

# two heads, 1st one with 5 values, 2nd one with 4 values
[
    ('temp', 'production', 'sin_month', 'cos_month', 'pressure'),
    ('wind', 'direction', 'sin_month', 'cos_month'),
]
```

With the 2 heads configured above, the network will learn separately from the sequence of (temp, production, pressure, month) and the sequence of (wind, direction, month).

In my case, using multiple heads did not improve much on a network already fully optimized with KerasTuner on a single head.
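The `sin_month` / `cos_month` features used above are presumably a cyclical encoding of the month, so that December and January end up close in feature space instead of 11 apart. A minimal sketch of such an encoding (the function name is illustrative):

```python
import math

def encode_month(month: int) -> tuple[float, float]:
    """Map month 1..12 onto a point on the unit circle."""
    angle = 2 * math.pi * (month - 1) / 12
    return math.sin(angle), math.cos(angle)

# December (12) and January (1) are neighbors on the circle,
# unlike their raw values 12 and 1.
sin_dec, cos_dec = encode_month(12)
sin_jan, cos_jan = encode_month(1)
```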

# Not enough training data? Synthetic data to the rescue.

My solar installation is 3 years old, so I only have access to 3 years of historical data.

There is nothing I can do about that… or is there?

Solar2ev can use the Synthetic Data Vault (SDV) to generate synthetic data, which mimics real data and can be used to extend the training set.

Using synthetic data is an option when training data is scarce, difficult or expensive to get or cannot be disclosed for confidentiality reasons.

Generating synthetic data is a two step process:

- First, train SDV on real data; SDV will learn the characteristics of the real data.
- Then ask SDV to create new synthetic data, as much as I want!

For instance:

- I grab two years of real data (2x365x24 consecutive hours).
- I tell SDV that the first 365 days are “a typical year”, and the next 365 days are “another typical year”.
- I train SDV on this real data.
- Then I ask SDV to give me synthetic “years”, i.e. something that looks like a typical year (I can ask for as many as I want).

If I ask for 2 years of synthetic data, I can rerun my training process as if I had 5 years of history.

I explored various options to use those 2 years of synthetic data:

- Retrain from scratch using 5 years of data, real data first, or synthetic data first.
- Train the model using 3 years of real data, and then continue training using the 2 years of synthetic data (or vice versa). Optionally freeze part of the model when continuing training to only retrain the classifier.
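The freezing option in the last bullet can be sketched with Keras's `trainable` flag. The architecture and layer name below are illustrative stand-ins, not the project's actual model:

```python
from tensorflow import keras

# Stand-in for the already-trained network
model = keras.Sequential([
    keras.layers.LSTM(384),
    keras.layers.Dense(1, name="classifier"),
])

# Freeze everything except the final classifier, then continue
# training (e.g. on the synthetic data) with a fresh compile.
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer="adam", loss="mse", metrics=["mae"])
trainable = [layer.name for layer in model.layers if layer.trainable]
print(trainable)  # only the classifier remains trainable
```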

```shell
# run with -h to get all options
.\synthetic_train.py -h
```

What is the result?

I could get a few percent improvement on MAE compared to a model already tuned by KerasTuner. Not a lot, but I'll take it!

Was all this SDV work worth a few percent improvement?

Of course it was: I learned about SDV, and it is a cool trick to know.