
Model-aided Deep Reinforcement Learning for Sample-efficient UAV Trajectory Design in IoT Networks

Abstract : Deep Reinforcement Learning (DRL) is gaining attention as a potential approach to designing trajectories for autonomous unmanned aerial vehicles (UAVs) used as flying access points in the context of cellular or Internet of Things (IoT) connectivity. DRL solutions offer the advantage of on-the-go learning, hence relying on very little prior contextual information. A corresponding drawback, however, lies in the need for many learning episodes, which severely restricts the applicability of such approaches in real-world time- and energy-constrained missions. Here, we propose a model-aided deep Q-learning approach that, in contrast to previous work, considerably reduces the need for extensive training data samples, while still achieving the overarching goal of DRL, i.e., to guide a battery-limited UAV towards an efficient data harvesting trajectory, without prior knowledge of wireless channel characteristics and with only limited knowledge of wireless node locations. The key idea consists in using a small subset of nodes as anchors (i.e., with known locations) and learning a model of the propagation environment while implicitly estimating the positions of the regular nodes. Interaction with the model allows us to train a deep Q-network (DQN) to approximate the optimal UAV control policy. We show that, in comparison with standard DRL approaches, the proposed model-aided approach requires at least one order of magnitude fewer training data samples to reach identical data collection performance, hence offering a first step towards making DRL a viable solution to the problem.
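
The abstract describes training the DQN by interacting with a learned surrogate of the environment (a propagation model together with estimated node positions) instead of collecting every transition on a real, battery-limited flight. The sketch below, in PyTorch, illustrates only that general idea; the surrogate dynamics, reward, network architecture, class and function names, and all hyperparameters are placeholder assumptions and are not taken from the paper.

```python
# Minimal sketch of model-aided deep Q-learning: a DQN is trained on rollouts
# from a learned surrogate environment rather than on real UAV flights.
# All names and numbers below are illustrative assumptions, not from the paper.
import random
import torch
import torch.nn as nn
import torch.optim as optim

class SurrogateEnv:
    """Stand-in for the learned propagation/position model (hypothetical interface)."""
    def __init__(self, state_dim=4, n_actions=5):
        self.state_dim, self.n_actions = state_dim, n_actions

    def reset(self):
        return torch.zeros(self.state_dim)

    def step(self, state, action):
        # Placeholder dynamics: a real model would predict channel gain and
        # collected data from the estimated node positions and learned channel.
        next_state = state + 0.01 * torch.randn(self.state_dim)
        reward = float(torch.randn(()))      # surrogate data-collection reward
        done = random.random() < 0.05        # surrogate episode termination
        return next_state, reward, done

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

env = SurrogateEnv()
q = QNet(env.state_dim, env.n_actions)
opt = optim.Adam(q.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

for episode in range(100):                   # rollouts happen in the model,
    state = env.reset()                      # not on costly real flights
    for t in range(200):                     # cap surrogate rollout length
        with torch.no_grad():
            qvals = q(state)
        action = random.randrange(env.n_actions) if random.random() < eps \
                 else int(qvals.argmax())
        next_state, reward, done = env.step(state, action)
        # One-step TD target (no replay buffer / target network, for brevity)
        target = reward + (0.0 if done else gamma * float(q(next_state).max()))
        loss = (q(state)[action] - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        if done:
            break
        state = next_state
```

In this sketch, only the interaction loop changes relative to standard DQN training: transitions come from the surrogate model, which is the mechanism the abstract credits for the order-of-magnitude reduction in required real data samples.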
Document type : Conference papers

https://hal.archives-ouvertes.fr/hal-03219126
Contributor : Centre de Documentation Eurecom
Submitted on : Thursday, May 6, 2021 - 10:56:44 AM
Last modification on : Monday, May 10, 2021 - 2:19:53 PM
Long-term archiving on : Saturday, August 7, 2021 - 6:36:12 PM

File

2104.10403.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03219126, version 1

Citation

Omid Esrafilian, Harald Bayerlein, David Gesbert. Model-aided Deep Reinforcement Learning for Sample-efficient UAV Trajectory Design in IoT Networks. GLOBECOM 2021 (IEEE Global Communications Conference), Dec 2021, Madrid, Spain. ⟨hal-03219126⟩


Metrics

Record views : 285
File downloads : 30