Traffic Signal Control using Dueling Double Deep Q-Networks for Urban Mobility Optimization

  • Unique Paper ID: 185136
  • Pages: 540–543
Abstract

Traffic management is the process of regulating vehicle flow at intersections to ensure smooth mobility and road safety. With rapid urbanization, traffic volumes have surged, causing congestion, delays, excessive fuel consumption, and higher emissions, highlighting the need for intelligent traffic control systems. This study aims to develop a dynamic traffic signal optimization framework using Deep Reinforcement Learning (DRL), specifically the Dueling Double Deep Q-Network (D3QN). The proposed model interacts with the SUMO simulation environment, processing traffic states such as queue lengths, waiting times, and phase durations to learn optimal signal control strategies. Experimental results show that the D3QN-based agent reduces average waiting time by up to 35%, lowers queue lengths by 28%, and decreases emissions by 22% compared to traditional fixed-time controllers. These findings demonstrate that the proposed approach not only enhances intersection efficiency but also contributes toward sustainable and adaptive traffic management solutions for smart cities.
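The two mechanisms named in the abstract can be sketched in a few lines: the dueling head combines a state value V(s) with per-action advantages A(s, a), and the double-DQN update selects the next action with the online network but evaluates it with the target network. This is a minimal illustrative sketch with made-up numbers; the phase count, Q-values, and function names are assumptions, not the paper's actual network or parameters.

```python
# Hedged sketch of the dueling aggregation and double-DQN target update
# described in the abstract; all numeric values below are illustrative.

def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN: the online network picks the next action (argmax),
    the target network evaluates it, reducing overestimation bias."""
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

# Example with 4 hypothetical signal phases as actions:
q = dueling_q(value=1.0, advantages=[0.5, -0.5, 0.0, 0.0])
print(q)  # [1.5, 0.5, 1.0, 1.0]

# Reward could be, e.g., negative cumulative waiting time at the junction.
target = double_dqn_target(reward=-2.0, gamma=0.99,
                           q_online_next=[1.2, 0.8, 0.3, 0.1],
                           q_target_next=[1.0, 0.9, 0.4, 0.2])
print(round(target, 2))  # -1.01
```

In a full agent these Q-values would come from a neural network fed with the queue-length, waiting-time, and phase-duration state the abstract describes, with SUMO (via its TraCI interface) supplying observations and rewards.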

Copyright & License

Copyright © 2026. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{185136,
  author   = {Navaratna Deepak Kurdekar and Geeta R. Bharamagoudar},
  title    = {Traffic Signal Control using Dueling Double Deep Q-Networks for Urban Mobility Optimization},
  journal  = {International Journal of Innovative Research in Technology},
  year     = {2025},
  month    = {October},
  volume   = {12},
  number   = {5},
  pages    = {540--543},
  issn     = {2349-6002},
  url      = {https://ijirt.org/article?manuscript=185136},
  abstract = {Traffic management is the process of regulating vehicle flow at intersections to ensure smooth mobility and road safety. With rapid urbanization, traffic volumes have surged, causing congestion, delays, excessive fuel consumption, and higher emissions, highlighting the need for intelligent traffic control systems. This study aims to develop a dynamic traffic signal optimization framework using Deep Reinforcement Learning (DRL), specifically the Dueling Double Deep Q-Network (D3QN). The proposed model interacts with the SUMO simulation environment, processing traffic states such as queue lengths, waiting times, and phase durations to learn optimal signal control strategies. Experimental results show that the D3QN-based agent reduces average waiting time by up to 35%, lowers queue lengths by 28%, and decreases emissions by 22% compared to traditional fixed-time controllers. These findings demonstrate that the proposed approach not only enhances intersection efficiency but also contributes toward sustainable and adaptive traffic management solutions for smart cities.},
  keywords = {Traffic Management, Deep Reinforcement Learning, Dueling Double Deep Q-Network (D3QN), Traffic Signal Optimization, SUMO Simulation, Smart Cities}
}
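The entry above can be used directly in a LaTeX document. This is a minimal sketch assuming the entry is saved in a file named `references.bib` (the filename is an assumption, not part of the source):

```latex
\documentclass{article}
\begin{document}
% Cite by the entry's key, exactly as it appears in the BibTeX block.
As shown in \cite{185136}, D3QN-based control can reduce waiting times.

\bibliographystyle{plain}
\bibliography{references}  % references.bib contains the entry above
\end{document}
```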

Cite This Article

Kurdekar, N. D., & Bharamagoudar, G. R. (2025). Traffic Signal Control using Dueling Double Deep Q-Networks for Urban Mobility Optimization. International Journal of Innovative Research in Technology (IJIRT), 12(5), 540–543.
