Robot control interaction with cloud-assisted analysis control

Document Type: Research Paper

Authors

Department of Electronics and Communications, College of Engineering, University of Baghdad, Baghdad, Iraq

Abstract

Autonomous path planning with obstacle avoidance in an unknown dynamic environment demands large computing capabilities and is a difficult challenge for a mobile robot. This research addresses the challenge by combining a deep Q-network (DQN) with cloud computing. First, a DQN is created and trained to predict the state-action value function of the mobile robot. The information extracted from the raw RGB image (the image pixels) captured from the surroundings is fed into the DQN on a cloud computing platform, which offloads the algorithm's high computational complexity. Finally, the action-selection policy picks the current optimal action for the mobile robot. To validate the DQN algorithm, we trained the robot in a dynamic environment with both a simple and a complex case. The simulation results show that in the simple case the DQN technique converges to a path with fewer steps and a higher average reward than in the complex case, finding a collision-free path with an accuracy rate of 89%; when the environment becomes more complex, the accuracy rate drops to 70%.
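The abstract describes the pipeline only at a high level. As a minimal illustrative sketch, not the authors' implementation, the following Python/PyTorch fragment shows the two components named above: a convolutional DQN that maps a raw RGB observation to Q-values, and an epsilon-greedy action-selection policy that picks the current action. The network shape, the 84x84 input size, the action count, and all identifiers are assumptions made for illustration; in the paper's architecture, the forward pass is the part that would run on the cloud platform.

```python
# Minimal sketch of a DQN with epsilon-greedy action selection, assuming
# an 84x84 RGB observation and a small discrete action set. All names and
# layer sizes are illustrative, not taken from the paper.
import random
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Small convolutional network mapping an RGB observation to Q-values."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # For an 84x84 input: conv1 -> 20x20, conv2 -> 9x9, so 32*9*9 features.
        self.head = nn.Sequential(
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

def select_action(net: DQN, obs: torch.Tensor, epsilon: float, n_actions: int) -> int:
    """Epsilon-greedy policy: random action with probability epsilon,
    otherwise the action with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(net(obs.unsqueeze(0)).argmax(dim=1).item())

# Example usage with a placeholder RGB frame:
net = DQN(n_actions=4)
frame = torch.rand(3, 84, 84)  # stand-in for a camera observation
action = select_action(net, frame, epsilon=0.1, n_actions=4)
```

In the cloud-assisted setting the paper describes, the robot would send the captured frame to the cloud, the forward pass and action selection would execute there, and only the chosen action index would be returned to the robot, keeping the heavy computation off the resource-constrained platform.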

Keywords

Volume 13, Issue 2, July 2022, Pages 1789-1794
  • Received: 17 February 2022
  • Revised: 19 March 2022
  • Accepted: 29 April 2022