A Deep Q-Learning-Based Adaptive Traffic Light Control System for Urban Safety
Is part of
2022 4th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), 2022, p.2430-2435
Place / Publisher
IEEE
Year of publication
2022
Link to full text
Source
IEEE Electronic Library Online
Descriptions/Notes
Traffic congestion is a significant and recurring issue in today's urbanised world, caused by an increase in the number of vehicles. While vehicle density fluctuates on temporally short and geographically small scales, an efficient traffic signaling system helps avoid traffic congestion. An inefficient traffic system can lead to congestion and delays, resulting in high pollution and fuel wastage. Deep Reinforcement Learning (DRL) provides an excellent approach to problems involving complex relations such as traffic flow and congestion. Recent developments in Deep Neural Networks (DNN) further enhance the learning capabilities of an agent with complex real-time data. The paper presents an intelligent Traffic Light Control System (TLCS) built on a Deep Q-Learning (DQL) model that accurately represents the problem's components: agents, environment, and actions. The proposed model aims to minimize the traffic queue length and the delay in terms of waiting time. The model is implemented using Simulation of Urban MObility (SUMO) for traffic generation in an urban scenario. The performance of the proposed model is compared with a traditional traffic light control system. The simulation results show that the proposed DQL-based model can significantly reduce the delay compared with the traditional model.
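The abstract describes a Q-learning formulation with an agent, an environment, and actions, rewarded for reducing queues and waiting times. As a rough illustration of that formulation, the sketch below uses a *tabular* Q-learning agent on a toy single-intersection environment rather than the paper's deep Q-network and SUMO simulation; the state discretization (queue length per approach plus the current green phase), the two actions (keep/switch phase), the arrival and discharge rates, and all hyperparameters are hypothetical simplifications, not details from the paper.

```python
import random

ACTIONS = [0, 1]  # 0 = keep current green phase, 1 = switch phase

def step(state, action):
    """Toy intersection: queues grow randomly; the green approach drains.
    Returns (next_state, reward). Reward is the negative total queue,
    a crude stand-in for the paper's waiting-time objective."""
    ns_queue, ew_queue, phase = state
    if action == 1:
        phase = 1 - phase  # switch green to the other approach
    # random arrivals, capped so the state space stays tiny
    ns_queue = min(5, ns_queue + random.randint(0, 1))
    ew_queue = min(5, ew_queue + random.randint(0, 1))
    # the approach holding the green discharges vehicles
    if phase == 0:
        ns_queue = max(0, ns_queue - 2)
    else:
        ew_queue = max(0, ew_queue - 2)
    return (ns_queue, ew_queue, phase), -(ns_queue + ew_queue)

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Epsilon-greedy tabular Q-learning; a DQN would replace the
    table q with a neural network mapping states to action values."""
    q = {}  # Q-table: (state, action) -> estimated return
    for _ in range(episodes):
        state = (0, 0, 0)
        for _ in range(50):  # steps per episode
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            # standard Q-learning temporal-difference update
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

if __name__ == "__main__":
    random.seed(0)
    q_table = train()
    print("learned state-action pairs:", len(q_table))
```

In the paper's deep variant, the lookup table would be replaced by a DNN trained on experience collected from SUMO, which is what allows the method to scale beyond the tiny discretized state space used here.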