
Details

Author(s) / Contributors
Title
Cooperative Deep Reinforcement Learning for Dynamic Pollution Plume Monitoring using a Drone Fleet
Is Part of
  • IEEE internet of things journal, 2024-03, Vol.11 (5), p.1-1
Place / Publisher
Piscataway: IEEE
Year of Publication
2024
Source
IEEE Electronic Library Online
Descriptions / Notes
  • Monitoring pollution plumes is a key issue, given the harmful effects they cause. The dynamics of these plumes, which may be significant due to meteorological conditions, make their study difficult. Real-time monitoring that yields an accurate map of the pollution dispersion is valuable for mitigating risks. In this work, we consider a fleet of cooperative drones carrying pollution sensors and operating to assess a pollution plume. The plume is assumed to follow a Gaussian Process (GP) with varying parameters. For this use case, we propose an efficient approach to characterize the plume spatially and temporally while optimizing the path planning of the drones. In our approach, drones are guided by a Deep Reinforcement Learning (DRL) model, the Categorical Deep Q-Network (Categorical DQN), to maximize plume coverage under budget constraints. Specifically, we develop a scalable Independent Q-Learning (IQL) scheme that shares team rewards based on each drone's deployment relevance and thereby ensures cooperation. We evaluate the performance of the plume parameter estimation as well as the maps generated by GP regression. By testing our framework on several plume scenarios, we show that it offers good results in terms of both estimation quality and run-time efficiency.
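The abstract's mapping step relies on Gaussian Process regression over drone sensor readings. The following is a minimal, self-contained sketch of that idea, not the paper's implementation: an RBF kernel and zero-mean GP are assumed, and the sensor locations, readings, and hyperparameters are hypothetical.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 2-D locations.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, **kern):
    # Posterior mean and variance of a zero-mean GP given noisy readings.
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, **kern)
    Kss = rbf_kernel(X_test, X_test, **kern)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical pollution readings taken by drones at four 2-D locations.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.9, 0.4, 0.5, 0.1])
# Predict concentration at an unvisited point; the posterior variance is
# the kind of uncertainty signal a planner could use to direct coverage.
mean, var = gp_predict(X, y, np.array([[0.5, 0.5]]))
```

In the paper's setting the posterior uncertainty from such a regression would feed the DRL planner; here it is only computed, not acted upon.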
