Zehfroosh, Ashkan and Tanner, Herbert G. (2022) A Hybrid PAC Reinforcement Learning Algorithm for Human-Robot Interaction. Frontiers in Robotics and AI, 9. ISSN 2296-9144
pubmed-zip/versions/1/package-entries/frobt-09-797213/frobt-09-797213.pdf - Published Version
Abstract
This paper offers a new hybrid probably approximately correct (PAC) reinforcement learning (RL) algorithm for Markov decision processes (MDPs) that intelligently retains favorable features of both model-based and model-free methodologies. The designed algorithm, referred to as the Dyna-Delayed Q-learning (DDQ) algorithm, combines the model-free Delayed Q-learning and model-based R-max algorithms while outperforming both in most cases. The paper includes a PAC analysis of the DDQ algorithm and a derivation of its sample complexity. Numerical results are provided to support the claim regarding the new algorithm’s sample efficiency compared with its parent algorithms, as well as with the best-known PAC model-free and model-based algorithms, in application. A real-world experimental implementation of DDQ in the context of pediatric motor rehabilitation facilitated by infant-robot interaction highlights the potential benefits of the reported method.
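To make the hybrid idea concrete, the sketch below shows one way a tabular agent might combine Delayed Q-learning-style batched, optimistic updates with R-max-style model learning and "known" state-action bookkeeping. This is an illustrative simplification under assumed parameter names (`m_known`, `m_delay`, `eps1`) and an assumed combination rule; it omits Delayed Q-learning's LEARN-flag logic and is not the authors' actual DDQ update rule, whose details and PAC guarantees are given in the paper.

```python
import numpy as np

class HybridPACAgentSketch:
    """Illustrative sketch (not the paper's DDQ): a tabular agent keeping both a
    Delayed-Q-style optimistic value table and an R-max-style learned model,
    refining Q(s, a) with the model once (s, a) has been visited enough times."""

    def __init__(self, n_states, n_actions, gamma=0.95,
                 m_known=10, m_delay=10, eps1=0.1):
        self.nS, self.nA, self.gamma = n_states, n_actions, gamma
        self.m_known = m_known          # R-max style: visits before (s, a) counts as "known"
        self.m_delay = m_delay          # Delayed-Q style: samples per batched update
        self.eps1 = eps1                # Delayed-Q style: minimum improvement per update
        v_max = 1.0 / (1.0 - gamma)     # optimistic initialization (rewards assumed in [0, 1])
        self.Q = np.full((n_states, n_actions), v_max)   # model-free value estimate
        self.U = np.zeros((n_states, n_actions))         # accumulated update targets
        self.l = np.zeros((n_states, n_actions), int)    # samples since last attempted update
        self.counts = np.zeros((n_states, n_actions), int)
        self.r_sum = np.zeros((n_states, n_actions))
        self.trans = np.zeros((n_states, n_actions, n_states))

    def act(self, s):
        # Greedy action with respect to the optimistic value estimate.
        return int(np.argmax(self.Q[s]))

    def observe(self, s, a, r, s2):
        # --- model-based (R-max style) bookkeeping ---
        self.counts[s, a] += 1
        self.r_sum[s, a] += r
        self.trans[s, a, s2] += 1

        # --- model-free (Delayed Q-learning style) batched update ---
        self.U[s, a] += r + self.gamma * self.Q[s2].max()
        self.l[s, a] += 1
        if self.l[s, a] == self.m_delay:
            target = self.U[s, a] / self.m_delay
            if self.Q[s, a] - target >= 2 * self.eps1:   # apply only sufficiently large updates
                self.Q[s, a] = target + self.eps1
            self.U[s, a] = 0.0
            self.l[s, a] = 0

        # --- once (s, a) is "known", refine Q[s, a] with the learned model ---
        if self.counts[s, a] >= self.m_known:
            p = self.trans[s, a] / self.counts[s, a]
            r_hat = self.r_sum[s, a] / self.counts[s, a]
            backup = r_hat + self.gamma * p @ self.Q.max(axis=1)
            self.Q[s, a] = min(self.Q[s, a], backup)     # keep the tighter optimistic estimate
```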
| Item Type | Article |
|---|---|
| Subjects | Eprints AP open Archive > Mathematical Science |
| Depositing User | Unnamed user with email admin@eprints.apopenarchive.com |
| Date Deposited | 23 Jun 2023 07:41 |
| Last Modified | 20 Nov 2023 05:15 |
| URI | http://asian.go4sending.com/id/eprint/766 |