Real-Time Traffic Signal Optimization using Deep Reinforcement Learning in SUMO Simulated Environments

  • Unique Paper ID: 184770
  • PageNo: 3383-3387
  • Abstract:

Traffic management refers to the coordination and control of vehicular flow to reduce congestion, delays, and environmental impact. In rapidly urbanizing cities, inefficient traffic signals lead to increased travel times, fuel consumption, and emissions, necessitating intelligent, real-time optimization strategies. This research proposes a Deep Reinforcement Learning-based system using the Dueling Double Deep Q-Network (D3QN) to dynamically manage traffic signals at urban intersections. Real-time data such as vehicle counts, queue lengths, and waiting times are collected via virtual detectors and processed into a state vector for the learning agent. The SUMO (Simulation of Urban Mobility) platform is employed to simulate realistic traffic conditions and interface with the RL model through the TraCI API. Simulation results indicate that the proposed system reduces average vehicle delay by 32%, decreases queue length by 28%, and improves throughput by 22% compared to traditional fixed-time control. This approach demonstrates high scalability and adaptability, offering a promising solution for smart city traffic management and sustainable urban mobility.
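The abstract describes two concrete mechanisms: flattening per-lane detector readings (vehicle counts, queue lengths, waiting times) into a state vector, and scoring signal phases with a dueling Q-network, where Q(s,a) = V(s) + A(s,a) − mean(A). The sketch below illustrates both in plain Python. It is not the authors' code: the function names (`build_state`, `dueling_q`, `select_action`) and the example numbers are hypothetical, and in a real deployment the lane statistics would come from SUMO via TraCI calls rather than hard-coded tuples.

```python
import random

def build_state(lane_stats):
    """Flatten per-lane (vehicle count, queue length, waiting time)
    detector readings into a single state vector for the agent."""
    state = []
    for count, queue, wait in lane_stats:
        state.extend([count, queue, wait])
    return state

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def select_action(q_values, epsilon=0.0, rng=random):
    """Epsilon-greedy choice over signal phases (greedy when epsilon=0)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

if __name__ == "__main__":
    # Four approach lanes: (vehicle count, queue length, waiting time in s)
    lanes = [(12, 5, 34.0), (8, 2, 11.5), (15, 9, 58.2), (6, 1, 7.3)]
    state = build_state(lanes)  # 12-dimensional state vector
    # Toy head outputs: scalar state value plus one advantage per phase
    q = dueling_q(value=1.0, advantages=[0.5, -0.5, 1.5, -1.5])
    print(len(state), q, select_action(q))
```

In the paper's setting, `value` and `advantages` would be the two output heads of the trained D3QN evaluated on `state`, and the chosen action index would map to a signal phase applied through TraCI.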

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{184770,
        author = {Bhavana D and Savitha C K and Prajna M R and Balapradeep K N},
        title = {Real-Time Traffic Signal Optimization using Deep Reinforcement Learning in SUMO Simulated Environments},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {4},
        pages = {3383-3387},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=184770},
        abstract = {Traffic management refers to the coordination and control of vehicular flow to reduce congestion, delays, and environmental impact. In rapidly urbanizing cities, inefficient traffic signals lead to increased travel times, fuel consumption, and emissions, necessitating intelligent, real-time optimization strategies. This research proposes a Deep Reinforcement Learning-based system using the Dueling Double Deep Q-Network (D3QN) to dynamically manage traffic signals at urban intersections. Real-time data such as vehicle counts, queue lengths, and waiting times are collected via virtual detectors and processed into a state vector for the learning agent. The SUMO (Simulation of Urban Mobility) platform is employed to simulate realistic traffic conditions and interface with the RL model through the TraCI API. Simulation results indicate that the proposed system reduces average vehicle delay by 32%, decreases queue length by 28%, and improves throughput by 22% compared to traditional fixed-time control. This approach demonstrates high scalability and adaptability, offering a promising solution for smart city traffic management and sustainable urban mobility.},
        keywords = {Dueling Double DQN, Traffic Signal Optimization, SUMO, Urban Mobility, Real-Time Control},
        month = {September},
        }

Cite This Article

Bhavana, D., Savitha, C. K., Prajna, M. R., & Balapradeep, K. N. (2025). Real-Time Traffic Signal Optimization using Deep Reinforcement Learning in SUMO Simulated Environments. International Journal of Innovative Research in Technology (IJIRT), 12(4), 3383–3387.
