TechRxiv
manuscript_TechRxiv.pdf (8.47 MB)

Learning to Navigate Through Reinforcement Across the Sim2Real Gap

preprint
posted on 2022-06-29, 12:47, authored by Rana Azzam, Mohamad Chehadeh, Oussama Abdul Hay, Igor Boiko, Yahya Zweiri

Amid recent advances in robotics and machine learning, unmanned aerial vehicles (UAVs) have proliferated across a wide range of applications. Consequently, the operation of UAVs in populated environments has become progressively inevitable, calling for stringent safety and security measures. In this work, we develop a deep reinforcement learning-based UAV navigation approach that blends decision making with behavioral intelligence. In particular, a reinforcement learning (RL) agent is trained to instruct the UAV on how to accomplish a goal-oriented task while ensuring the safety of the UAV and its surroundings. Upon arriving at the goal position, the RL agent slows the UAV down in preparation for landing. The safety of the UAV and the environment is attained through a robust collision avoidance capability, embedded into the RL-based navigation system, that accounts for both static and dynamic obstacles in the environment. Training is carried out exclusively in simulation, where a high-fidelity UAV controller model is used to perform the simulated maneuvers. The proposed approach was tested in simulation and then shown to transfer directly to reality without explicit sim2real transfer techniques. Experimental results demonstrated the agent's ability to accomplish the navigation task with a 90% success rate.
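The abstract describes three behaviors shaped into the agent: progress toward a goal, slowing down on arrival for landing, and penalizing proximity to obstacles. A minimal reward-shaping sketch of that combination is shown below; all constants, names, and thresholds are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical reward shaping for a goal-oriented UAV navigation agent.
# Constants and the function signature are assumptions for illustration,
# not taken from the paper.

GOAL_RADIUS = 0.5      # metres: distance at which the goal counts as reached
SAFE_DISTANCE = 1.0    # metres: minimum clearance to any obstacle

def step_reward(prev_dist, dist, obstacle_dist, speed):
    """Reward for one control step.

    prev_dist     -- distance to the goal before the step
    dist          -- distance to the goal after the step
    obstacle_dist -- distance to the nearest (static or dynamic) obstacle
    speed         -- current UAV speed, used to encourage slowing near the goal
    """
    if obstacle_dist < SAFE_DISTANCE:
        return -100.0                      # collision / near-miss: large penalty
    reward = prev_dist - dist              # dense reward: progress toward goal
    if dist < GOAL_RADIUS:
        reward += 100.0 - 10.0 * speed     # reach bonus, reduced if arriving fast
    return reward
```

A dense progress term like this keeps the gradient signal informative early in training, while the speed-dependent reach bonus is one way to encode the "slow down for landing" behavior the abstract mentions.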

History

Email Address of Submitting Author

rana.azzam@ku.ac.ae

Submitting Author's Institution

Khalifa University of Science and Technology

Submitting Author's Country

United Arab Emirates
