LFLLM: A Large Language Model for Load Forecasting
  • Guolong Liu,
  • Yan Bai,
  • Keen Wen,
  • Xinlei Wang,
  • Yanli Liu,
  • Gaoqi Liang,
  • Junhua Zhao,
  • Zhao Yang Dong
Corresponding Author: Guolong Liu ([email protected])

Abstract

Short-term load forecasting (STLF) plays a pivotal role in ensuring the operational efficiency, reliability, and economic viability of power systems. Traditional forecasting models struggle with the nonlinearity and complexity of power consumption patterns, especially as renewable energy sources are increasingly integrated. To address these limitations, a Transformer-based large language model (LLM), dubbed LFLLM, is proposed for STLF across various voltage levels in power systems. This paper introduces an efficient training method based on Parameter-Efficient Fine-Tuning (PEFT) to tackle the challenge of training LLMs with massive numbers of parameters, thereby preserving the model's strong learning ability. To assess the robustness and reliability of LFLLM on a wider range of load forecasting tasks, its zero-shot learning ability is also evaluated. Extensive experiments show that LFLLM achieves superior forecasting accuracy at different voltage levels, fine-grained predictive capability at different sampling frequencies, and remarkable zero-shot learning ability in diverse scenarios, underscoring its potential for practical applications in smart grids.
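For readers unfamiliar with PEFT, the sketch below illustrates the general idea the abstract refers to: training only small low-rank adapter weights while the LLM backbone stays frozen, which keeps fine-tuning tractable despite the backbone's massive parameter count. The abstract does not disclose LFLLM's backbone or adapter configuration, so the GPT-2 model, the LoRA hyperparameters, and the `LoadForecaster` input/output head here are hypothetical stand-ins built on the Hugging Face `transformers` and `peft` libraries, not the authors' implementation.

```python
# Minimal illustrative sketch of PEFT (LoRA) for an LLM-based load forecaster.
# All architectural choices below are assumptions, not LFLLM's actual design.
import torch
import torch.nn as nn
from transformers import GPT2Model
from peft import LoraConfig, get_peft_model


class LoadForecaster(nn.Module):
    """Hypothetical STLF wrapper: project a window of past load readings
    into the LLM's embedding space, run the frozen-plus-LoRA backbone,
    and regress the next horizon of load values."""

    def __init__(self, context_len=96, horizon=24, d_model=768):
        super().__init__()
        # Map each scalar load reading to a token-sized embedding.
        self.input_proj = nn.Linear(1, d_model)
        backbone = GPT2Model.from_pretrained("gpt2")  # placeholder backbone
        lora_cfg = LoraConfig(
            r=8, lora_alpha=16, lora_dropout=0.1,
            target_modules=["c_attn"],  # GPT-2's fused attention projections
        )
        # Only the low-rank adapters are trainable; the base weights are
        # frozen, which is the core of parameter-efficient fine-tuning.
        self.backbone = get_peft_model(backbone, lora_cfg)
        self.head = nn.Linear(d_model, horizon)  # forecast `horizon` steps

    def forward(self, load_window):  # load_window: (batch, context_len)
        x = self.input_proj(load_window.unsqueeze(-1))  # (B, L, d_model)
        h = self.backbone(inputs_embeds=x).last_hidden_state
        return self.head(h[:, -1, :])  # (B, horizon)


model = LoadForecaster()
batch = torch.randn(4, 96)   # 4 windows of 96 past load readings
print(model(batch).shape)    # torch.Size([4, 24])
```

In this setup only the LoRA matrices and the two small linear layers receive gradients, so the trainable parameter count is a small fraction of the backbone's, which is what makes fine-tuning a large model for a downstream task like STLF practical.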
Submitted to TechRxiv: 02 Jan 2024
Published in TechRxiv: 08 Jan 2024