Reinforcement learning-based mobile robot navigation


Altuntas N., Imal E., Emanet N., Öztürk C. N.

Turkish Journal of Electrical Engineering and Computer Sciences, vol.24, no.3, pp.1747-1767, 2016 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 24 Issue: 3
  • Publication Date: 2016
  • DOI Number: 10.3906/elk-1311-129
  • Journal Name: Turkish Journal of Electrical Engineering and Computer Sciences
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, TR DİZİN (ULAKBİM)
  • Page Numbers: pp.1747-1767
  • Keywords: Reinforcement learning, temporal difference, eligibility traces, Sarsa, Q-learning, mobile robot navigation, obstacle avoidance
  • Istanbul Gelisim University Affiliated: No

Abstract

In recent decades, reinforcement learning (RL) has been widely used in research fields ranging from psychology to computer science. As the optimal control problem has become a popular subject of research, the infeasibility of sampling all possibilities in continuous-state problems and the absence of an explicit teacher make RL algorithms preferable to supervised learning in the machine learning area. In this study, a system is proposed to solve mobile robot navigation using the two most popular RL algorithms, Sarsa(λ) and Q(λ). The proposed system, developed in MATLAB, uses state and action sets defined in a novel way to increase performance. The system can guide the mobile robot to a desired goal while avoiding obstacles, with a high success rate in both simulated and real environments. Additionally, it is possible to observe the effects of the initial parameters used by the RL methods, e.g., λ, on learning, and to compare the performances of the Sarsa(λ) and Q(λ) algorithms.
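To illustrate the kind of algorithm the abstract refers to, below is a minimal, hypothetical sketch of tabular Sarsa(λ) with accumulating eligibility traces on a toy 1-D corridor task. The environment, state/action sets, and all parameter values here are illustrative assumptions for exposition; they are not the paper's MATLAB system or its navigation state/action design.

```python
import numpy as np

# Toy environment (an assumption, not the paper's setup): states 0..N-1 along
# a corridor, goal at the rightmost state, actions {0: left, 1: right}.
N_STATES = 8
ACTIONS = [0, 1]
GOAL = N_STATES - 1

def step(s, a):
    """Move one cell; reward 1.0 on reaching the goal, else 0.0."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

def epsilon_greedy(Q, s, eps, rng):
    """Explore with probability eps, otherwise act greedily on Q."""
    if rng.random() < eps:
        return int(rng.choice(ACTIONS))
    return int(np.argmax(Q[s]))

def sarsa_lambda(episodes=200, alpha=0.1, gamma=0.9, lam=0.8, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        E = np.zeros_like(Q)              # eligibility traces, reset per episode
        s = 0
        a = epsilon_greedy(Q, s, eps, rng)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(Q, s2, eps, rng)
            # On-policy TD error: bootstraps from the action actually taken next
            delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
            E[s, a] += 1.0                # accumulating trace
            Q += alpha * delta * E        # update all recently visited pairs
            E *= gamma * lam              # decay traces by gamma * lambda
            s, a = s2, a2
    return Q

Q = sarsa_lambda()
```

The λ parameter controls how far temporal-difference credit flows back along the trajectory: λ = 0 recovers one-step Sarsa, while larger λ propagates the goal reward to earlier state-action pairs faster, which is one of the effects the abstract says the system lets one observe. Swapping the TD error to bootstrap from `max(Q[s2])` (with trace cut-offs on exploratory actions) would give the off-policy Q(λ) variant.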