Archive ouverte HAL
Conference paper · Year: 2021

Model-aided Deep Reinforcement Learning for Sample-efficient UAV Trajectory Design in IoT Networks

Omid Esrafilian
  • Role: Author
  • PersonId: 1098073
Harald Bayerlein
  • Role: Author
  • PersonId: 1098074
David Gesbert

Abstract

Deep Reinforcement Learning (DRL) is gaining attention as a potential approach to designing trajectories for autonomous unmanned aerial vehicles (UAVs) used as flying access points for cellular or Internet of Things (IoT) connectivity. DRL solutions offer the advantage of on-the-go learning and hence rely on very little prior contextual information. A corresponding drawback, however, lies in the need for many learning episodes, which severely restricts the applicability of such an approach in real-world time- and energy-constrained missions. Here, we propose a model-aided deep Q-learning approach that, in contrast to previous work, considerably reduces the need for extensive training data samples, while still achieving the overarching goal of DRL, i.e., to guide a battery-limited UAV towards an efficient data-harvesting trajectory, without prior knowledge of wireless channel characteristics and with limited knowledge of wireless node locations. The key idea consists in using a small subset of nodes as anchors (i.e., with known locations) and learning a model of the propagation environment while implicitly estimating the positions of regular nodes. Interaction with the model allows us to train a deep Q-network (DQN) to approximate the optimal UAV control policy. We show that, in comparison with standard DRL approaches, the proposed model-aided approach requires at least one order of magnitude fewer training data samples to reach identical data collection performance, hence offering a first step towards making DRL a viable solution to the problem.
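The two-stage idea in the abstract can be illustrated with a minimal sketch: first fit a channel model from anchor-node measurements, then train a Q-learning agent purely against that learned model, so no additional real-world flight samples are consumed. Everything here is an assumption for illustration: the grid world, the node positions, the toy pathloss form gain/(1 + d²), and the tabular Q-table standing in for the paper's deep Q-network.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 5                                  # hypothetical 5x5 flight grid
NODES = [(1, 3), (3, 1), (4, 3)]          # regular IoT nodes (illustrative positions)
ANCHORS = [(0, 2), (2, 4)]                # anchor nodes with known positions
TRUE_GAIN = 2.0                           # ground-truth channel gain, unknown to the agent

def rate(uav, node, gain):
    """Toy data rate: gain over (1 + squared distance)."""
    d2 = (uav[0] - node[0]) ** 2 + (uav[1] - node[1]) ** 2
    return gain / (1.0 + d2)

# Step 1: estimate the channel gain by least squares from noisy measurements
# taken at the anchor nodes (the "model-aided" part).
num = den = 0.0
for _ in range(20):
    p = (int(rng.integers(GRID)), int(rng.integers(GRID)))  # random probe position
    for a in ANCHORS:
        y = rate(p, a, TRUE_GAIN) + rng.normal(0.0, 0.01)   # noisy measured rate
        x = 1.0 / (1.0 + (p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2)
        num += x * y
        den += x * x
gain_hat = num / den                       # fitted gain, close to TRUE_GAIN

# Step 2: Q-learning (tabular stand-in for the paper's DQN) trained entirely
# on the learned model, so it consumes no extra real-environment samples.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = np.zeros((GRID, GRID, len(ACTIONS)))
for _ in range(300):
    s = (0, 0)
    for _t in range(15):                   # fixed flight-time budget per episode
        if rng.random() < 0.2:             # epsilon-greedy exploration
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[s[0], s[1]]))
        dx, dy = ACTIONS[a]
        ns = (min(max(s[0] + dx, 0), GRID - 1), min(max(s[1] + dy, 0), GRID - 1))
        r = sum(rate(ns, n, gain_hat) for n in NODES)   # reward from the model
        Q[s[0], s[1], a] += 0.1 * (r + 0.9 * Q[ns[0], ns[1]].max() - Q[s[0], s[1], a])
        s = ns

print(f"estimated gain: {gain_hat:.2f} (true {TRUE_GAIN})")
```

In the paper, the model additionally estimates the positions of the regular (non-anchor) nodes and the Q-function is a neural network; the sketch only shows why training against a fitted model reduces the number of real interaction samples.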
Main file
2104.10403.pdf (257.13 KB) — Download
Origin: Files produced by the author(s)

Dates and versions

hal-03219126 , version 1 (06-05-2021)

Identifiers

Cite

Omid Esrafilian, Harald Bayerlein, David Gesbert. Model-aided Deep Reinforcement Learning for Sample-efficient UAV Trajectory Design in IoT Networks. GLOBECOM 2021 (IEEE Global Communications Conference), Dec 2021, Madrid, Spain. ⟨10.1109/GLOBECOM46510.2021.9685774⟩. ⟨hal-03219126⟩
167 views
44 downloads

