DC Field | Value | Language
dc.contributor.author | Angano, Walter |
dc.contributor.author | Musau, Peter M. |
dc.contributor.author | Wekesa, Cyrus W. |
dc.date.accessioned | 2022-11-17T08:05:27Z |
dc.date.available | 2022-11-17T08:05:27Z |
dc.date.issued | 2021 |
dc.identifier.citation | 2021 IEEE PES/IAS PowerAfrica | en_US
dc.identifier.isbn | 978-1-6654-0311-5 |
dc.identifier.uri | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9543244 |
dc.identifier.uri | http://repository.seku.ac.ke/handle/123456789/6964 |
dc.description | DOI: 10.1109/PowerAfrica52236.2021.9543244 | en_US
dc.description.abstract | Growth in energy demand must be met either through wired solutions, such as investment in new or expansion of existing generation, transmission and distribution systems, or through non-wired solutions such as Demand Response (DR). This paper proposes a Q-learning algorithm, an off-policy Reinforcement Learning technique, to implement DR in a residential energy system under a static Time of Use (ToU) tariff structure, to improve its learning speed by introducing a knowledge base that updates fuzzy logic rules based on consumer satisfaction feedback, and to minimize dissatisfaction error. Testing was done on a physical system by deploying the algorithm in MATLAB and interfacing it with the physical environment over serial communication with an Arduino Uno. A load curve generated from appliance and ToU data was used to test the algorithm. The designed algorithm reduced electricity cost by 11% and improved the learning speed of its agent within 500 episodes. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | Demand Response | en_US
dc.subject | Q-Learning | en_US
dc.subject | Reinforcement Learning | en_US
dc.subject | Smart Home Energy Management System | en_US
dc.subject | Time of Use | en_US
dc.title | Design and testing of a demand response Q-learning algorithm for a smart home energy management system | en_US
dc.type | Article | en_US
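
The abstract above describes an off-policy Q-learning agent that schedules residential load under a static ToU tariff while penalizing consumer dissatisfaction. The following is a minimal, hypothetical sketch of that idea for a single deferrable appliance: the tariff values, appliance energy, preferred hour, and dissatisfaction weight are all assumptions made for illustration, and the paper's fuzzy-logic knowledge base, consumer feedback loop, and MATLAB/Arduino interfacing are not reproduced here.

```python
import numpy as np

# --- All values below are illustrative assumptions, not figures from the paper ---
TOU = np.array([0.10] * 7 + [0.20] * 16 + [0.10])  # hypothetical two-rate hourly ToU tariff
APPLIANCE_KWH = 1.5            # assumed energy per run of one deferrable appliance
PREFERRED_HOUR = 19            # assumed consumer-preferred start hour
COMFORT_WEIGHT = 0.05          # assumed weight of the dissatisfaction penalty vs. cost

N_HOURS = 24                   # state and action spaces: hour of day
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_HOURS, N_HOURS))
rng = np.random.default_rng(0)

def reward(start_hour: int) -> float:
    """Negative electricity cost minus a dissatisfaction term that grows
    with the shift away from the consumer's preferred start hour."""
    cost = TOU[start_hour] * APPLIANCE_KWH
    dissatisfaction = COMFORT_WEIGHT * abs(start_hour - PREFERRED_HOUR)
    return -(cost + dissatisfaction)

for episode in range(500):                 # the paper reports convergence within 500 episodes
    state = int(rng.integers(N_HOURS))     # current hour of day
    if rng.random() < EPSILON:             # epsilon-greedy exploration
        action = int(rng.integers(N_HOURS))
    else:
        action = int(Q[state].argmax())
    r = reward(action)
    next_state = action                    # scheduling moves the system to the chosen hour
    # Off-policy Q-learning update
    Q[state, action] += ALPHA * (r + GAMMA * Q[next_state].max() - Q[state, action])

print("Greedy start hour per current hour:", Q.argmax(axis=1))
```

In the paper the dissatisfaction signal is shaped by fuzzy-logic rules updated from consumer satisfaction feedback; the simple linear distance used above stands in for that term only to keep the sketch self-contained.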