Reinforcement learning (RL) is a branch of machine learning used to solve sequential decision problems (1). It relies on an RL algorithm learning correct actions through trial and error, using feedback from its own actions and experiences. It is analogous to playing chess, where each player makes moves, or “actions,” based on the configuration of the chess board, referred to as the “state” in RL. Each action changes the state of the chess board and thus dictates the next action. In RL, the algorithm is trained to identify a sequence of actions, known as a “policy,” that maximizes the chances of winning by giving the algorithm a “reward” for a win. The goal is to train the algorithm to identify a policy that maximizes the reward.
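To make these terms concrete, the following is a minimal illustrative sketch (not taken from any of the cited works) of the state–action–reward–policy loop, using tabular Q-learning on a toy five-state problem in which only the rightmost state yields a reward.

```python
# Minimal sketch of the core RL loop: observe a state, choose an action,
# receive a reward, and update the policy. The environment is a toy
# 5-state chain; only reaching the rightmost state gives a reward.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: +1 reward only when the rightmost state is reached."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: trial-and-error feedback refines the value estimates.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("Learned policy (1 = move right):", policy)
```

After training, the greedy policy simply moves right in every state, the action sequence that maximizes the reward in this toy problem.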
RL has seen remarkable success in robotics and computer games (2–6). Its emergence in medicine is, however, recent and mostly limited to computer simulations (7, 8). There are many potential applications of RL in kidney health and diseases (Figure 1). For example, RL can be used to individualize dialysis dosing and the management of intradialytic hypotension. It can also be used to individualize therapies for chronic kidney disease and its complications, such as anemia and mineral bone disease, as well as the use of medications to slow the progression of chronic kidney disease. Additionally, RL can be used to individualize the management of acute kidney injury, especially among critically ill patients. Acute kidney injury requires complex management of fluid balance, electrolytes, and hemodynamic support. RL can be used to learn optimal dosing of medications and fluids based on each patient's individual characteristics and response to treatment, as sketched below.
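The following is a hedged, hypothetical sketch of how a dosing problem might be framed as an RL task, reusing the same tabular Q-learning approach as above. The "patient" is a toy simulator, and the states, dose levels, dynamics, and rewards are invented purely for illustration; they do not come from the cited studies or from any clinical model.

```python
# Hypothetical framing of a dosing problem as RL (all dynamics invented for
# illustration): the state is a discretized marker level (0 = low, 1 = in
# target range, 2 = high), the action is a dose level, and the reward favors
# keeping the marker in the target range.
import random

STATES = [0, 1, 2]            # low / target / high marker level (hypothetical)
DOSES = [0, 1, 2]             # no dose / standard dose / high dose (hypothetical)
Q = {(s, d): 0.0 for s in STATES for d in DOSES}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def simulate_response(state, dose):
    """Toy patient model: a high dose pushes the marker up, no dose lets it drift down."""
    drift = (1 if dose == 2 else 0) - (1 if dose == 0 else 0)
    noise = random.choice([-1, 0, 0, 1])
    nxt = min(max(state + drift + noise, 0), 2)
    reward = 1.0 if nxt == 1 else -1.0     # reward staying in the target range
    return nxt, reward

for episode in range(2000):
    state = random.choice(STATES)
    for _ in range(20):                    # one short simulated treatment course
        dose = random.choice(DOSES) if random.random() < epsilon \
            else max(DOSES, key=lambda d: Q[(state, d)])
        nxt, reward = simulate_response(state, dose)
        best_next = max(Q[(nxt, d)] for d in DOSES)
        Q[(state, dose)] += alpha * (reward + gamma * best_next - Q[(state, dose)])
        state = nxt

for s, label in zip(STATES, ["low", "target", "high"]):
    best = max(DOSES, key=lambda d: Q[(s, d)])
    print(f"Marker {label}: suggested dose level {best}")
```

In this toy setup, the learned policy increases the dose when the marker is low, withholds it when the marker is high, and maintains the standard dose in the target range; a real clinical application would require observational data, a far richer state description, and careful validation.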
In conclusion, RL is a relatively nascent branch of machine learning that has the potential to revolutionize the management of patients with kidney diseases by individualizing treatment strategies and developing decision support tools for clinicians.
References
1. Sutton R, Barto A. Reinforcement Learning: An Introduction. Second edition. MIT Press; 2018. https://mitpress.mit.edu/9780262039246/reinforcement-learning/
2. Yen GG, Hickey TW. Reinforcement learning algorithms for robotic navigation in dynamic environments. ISA Trans 2004; 43:217–230. doi: 10.1016/s0019-0578(07)60032-9
3. Smart WD, Kaelbling LP. Effective reinforcement learning for mobile robots. Proceedings 2002 IEEE International Conference on Robotics and Automation, 2002; 4:3404–3410. doi: 10.1109/ROBOT.2002.1014237
4. Hundt A, et al. “Good robot!”: Efficient reinforcement learning for multi-step visual tasks with sim to real transfer. arXiv, September 2020. https://ui.adsabs.harvard.edu/abs/2019arXiv190911730H/abstract
5. Mnih V, et al. Playing Atari with deep reinforcement learning. arXiv, December 2013. https://ui.adsabs.harvard.edu/abs/2013arXiv1312.5602M/abstract
6. Mnih V, et al. Human-level control through deep reinforcement learning. Nature 2015; 518:529–533. doi: 10.1038/nature14236
7. Nemati S, et al. Optimal medication dosing from suboptimal clinical examples: A deep reinforcement learning approach. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:2978–2981. doi: 10.1109/EMBC.2016.7591355
8. Raghu A, et al. Deep reinforcement learning for sepsis treatment. arXiv, November 2017. https://ui.adsabs.harvard.edu/abs/2017arXiv171109602R