Short-term load forecasting (STLF) plays a critical role in efficient energy management and grid operations. Spatiotemporal Graph Neural Networks (STGNNs) have shown promise in capturing spatial and temporal dependencies within power grid systems.
This benchmark evaluates the performance of various STGNN architectures on STLF tasks using a subset of the Low Carbon London dataset and a common set of error metrics. The aim is to establish a foundation for comparing models and guiding future research.
We use the following datasets for the benchmark:
Electricity Load Dataset
- Description: Hourly energy consumption data for different nodes in a power grid.
- Source: Low Carbon London dataset.
- Data directory: DataLCL_228houses_with_timeslot_temperature.csv
- Format: CSV with columns: time, $smart_meter_id (228 values).
- Normalization: Load and weather data are normalized using Z-score normalization.
- Temporal Binning: Aggregate data into 15-minute or hourly bins as required.
- Graph Construction:
  - Nodes: Each household.
  - Edges: Based on a correlation threshold, or learnable parameters during training.
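The preprocessing steps above can be sketched as follows. This is a minimal illustration, not the repository's implementation: the function name `preprocess`, the wide-frame layout (one column per household), and the correlation threshold of 0.7 are all assumptions for the example.

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, freq: str = "h", corr_threshold: float = 0.7):
    """Sketch of the preprocessing pipeline (layout and names assumed).

    df: wide frame indexed by timestamp, one column per household load series.
    Returns the normalized frame and a binary adjacency matrix.
    """
    # Temporal binning: aggregate readings into hourly (or 15-minute) bins.
    binned = df.resample(freq).mean()

    # Z-score normalization per household.
    normed = (binned - binned.mean()) / binned.std()

    # Graph construction: nodes are households; an edge exists where the
    # absolute Pearson correlation between two load series exceeds the threshold.
    corr = normed.corr().to_numpy()
    adj = (np.abs(corr) >= corr_threshold).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return normed, adj
```

The same adjacency could instead be left out entirely for models that learn the graph during training (see the table below).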
| Models | Predefined Graph | Learnable Graph | TTS | T&S |
|---|---|---|---|---|
| GRUGCN | ✅ | ✅ | | |
| GCGRU | ✅ | ✅ | | |
| T-GCN | ✅ | ✅ | | |
| AGCRN | ✅ | ✅ | | |
| GraphWavenet | ✅ | ✅ | | |
| FC-GNN | ✅ | ✅ | | |
| BP-GNN | ✅ | ✅ | | |
We report the following error metrics:
- Mean Absolute Error (MAE)
- Root Mean Squared Error (RMSE)
- Mean Absolute Percentage Error (MAPE)
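The three metrics can be computed directly with NumPy; a small sketch (the `eps` guard against zero loads is our addition, not necessarily what the benchmark code does):

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean Absolute Error
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root Mean Squared Error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-8) -> float:
    # Mean Absolute Percentage Error; eps avoids division by (near-)zero loads
    return float(np.mean(np.abs((y_true - y_pred) / (y_true + eps)))) * 100.0
```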
We compare against the following baselines:
- SeasonalNaive: Uses the value of the previous day (same hour) as the forecast for the next day.
- VAR: Vector Auto-Regression.
- GRU: Gated Recurrent Unit network.
- Transformer: Attention-based sequence model.
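The SeasonalNaive baseline described above is a one-liner; a sketch (function name and array layout are our assumptions):

```python
import numpy as np

def seasonal_naive(series: np.ndarray, season: int = 24) -> np.ndarray:
    """Forecast each step with the value observed one season earlier.

    series: 1-D array of hourly loads; season=24 gives the daily cycle.
    Returns forecasts aligned with the actuals series[season:].
    """
    return series[:-season]
```

Evaluation then compares `seasonal_naive(series)` against the targets `series[season:]` with the metrics above.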
Below are example placeholders for training commands:

```shell
python SpatioTemporal_TS_with_Graph.py <MODEL_NAME> \
    <EXP> \
    --method <METHOD> \
    --window <WINDOW> \
    --hidden_dimension <HIDDEN_DIMENSION> \
    --learning_rate <LEARNING_RATE> \
    --batch_size <BATCH_SIZE>
```

- `<EXP>`: experiment id used to save the forecasts on the test set.
- `--method`: similarity function used to generate the graph; one of `euclidean`, `dtw`, `pearson`, `correntropy`.
- `--window`: length of the historical input window.
- `--hidden_dimension`: hidden size of the neural network (see the model architectures in `custome_models`).

For hyperparameter tuning:

```shell
python hyperparameter_tuning.py <MODEL>   # see --help for the possible parameters for <MODEL>
```