TY - JOUR
T1 - Optimal continuous control of refrigerator for electricity cost minimization—Hierarchical reinforcement learning approach
AU - Kim, Bongseok
AU - An, Jihwan
AU - Sim, Min K.
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/12
Y1 - 2023/12
N2 - A refrigerator is a commonly used household appliance; however, limited research has focused on optimizing its temperature control policy under Time-of-Use (ToU) electricity pricing. This paper introduces a novel framework that utilizes hierarchical reinforcement learning (HRL) to control the intensity of refrigerator motors. The objective is to achieve both temperature regulation and cost savings under ToU pricing and stochastic usage patterns. The problem is tackled by two HRL agents. The high-level agent is responsible for determining the temperature reference based on ToU pricing, while the low-level agent adjusts the motor intensity to meet this temperature reference. To tackle non-stationarity in HRL, the high-level agent employs hindsight action transition and reward function approximation, while the low-level agent employs hindsight goal transition. In experimental evaluation, the proposed method outperforms conventional control methods and standard reinforcement learning approaches, achieving the lowest total cost and a significant cost reduction of 5%–24%.
AB - A refrigerator is a commonly used household appliance; however, limited research has focused on optimizing its temperature control policy under Time-of-Use (ToU) electricity pricing. This paper introduces a novel framework that utilizes hierarchical reinforcement learning (HRL) to control the intensity of refrigerator motors. The objective is to achieve both temperature regulation and cost savings under ToU pricing and stochastic usage patterns. The problem is tackled by two HRL agents. The high-level agent is responsible for determining the temperature reference based on ToU pricing, while the low-level agent adjusts the motor intensity to meet this temperature reference. To tackle non-stationarity in HRL, the high-level agent employs hindsight action transition and reward function approximation, while the low-level agent employs hindsight goal transition. In experimental evaluation, the proposed method outperforms conventional control methods and standard reinforcement learning approaches, achieving the lowest total cost and a significant cost reduction of 5%–24%.
KW - Energy management
KW - Hierarchical reinforcement learning
KW - Optimal continuous control
KW - Refrigerator
KW - Time-of-use
UR - https://www.scopus.com/pages/publications/85173238869
U2 - 10.1016/j.segan.2023.101177
DO - 10.1016/j.segan.2023.101177
M3 - Article
AN - SCOPUS:85173238869
SN - 2352-4677
VL - 36
JO - Sustainable Energy, Grids and Networks
JF - Sustainable Energy, Grids and Networks
M1 - 101177
ER -