Multi-Agent Reinforcement Learning for Autonomous Vehicle Coordination in Smart Cities
DOI: https://doi.org/10.63345/rsk2h608
Keywords: Multi-Agent Reinforcement Learning (MARL), Autonomous Vehicles, Smart Cities, Traffic Optimization, Deep Learning, Intelligent Transportation Systems (ITS)
Abstract
The rapid adoption of autonomous vehicles (AVs) in smart cities has introduced new challenges
in coordinating multiple AVs to ensure efficient, safe, and congestion-free urban mobility.
Traditional rule-based or centralized traffic management approaches often fail to adapt to
dynamic, real-time traffic scenarios, leading to inefficiencies and increased congestion. Multi-Agent Reinforcement Learning (MARL) offers a decentralized, adaptive, and scalable solution
where multiple AVs act as independent agents, learning optimal driving policies through
interaction with their environment.
This study presents a MARL-based coordination framework integrating Deep Q-Networks
(DQN), Proximal Policy Optimization (PPO), and Multi-Agent Deep Deterministic Policy
Gradient (MADDPG) to optimize traffic flow, minimize delays, and enhance overall urban
mobility. Using simulation environments such as SUMO (Simulation of Urban MObility) and
CARLA, we evaluate the proposed system across various real-world traffic scenarios. Results
demonstrate that MARL significantly improves travel efficiency, reduces collision rates, and
enhances overall AV cooperation compared to conventional rule-based traffic management
systems. Furthermore, the research highlights key challenges, including scalability,
communication overhead, and real-time decision-making constraints, while offering insights into
future advancements in AV coordination.
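To make the coordination loop concrete, the sketch below shows how decentralized AV agents might interact with a SUMO simulation through its TraCI Python API. It is a minimal illustration using independent tabular Q-learning over discretized local observations, not the DQN/PPO/MADDPG learners evaluated in this study; the scenario file name, state discretization, action set, and reward shaping are illustrative assumptions only.

```python
# Illustrative sketch only: independent Q-learning agents adjusting AV speeds
# through SUMO's TraCI interface. The config file, state buckets, and reward
# are hypothetical placeholders, not the framework described in the paper.
import random
from collections import defaultdict

import traci  # requires SUMO and its Python tools on PYTHONPATH

ACTIONS = [-1.0, 0.0, 1.0]          # decelerate, hold, accelerate (m/s delta)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# One Q-table per agent (vehicle), keyed by discretized state.
q_tables = defaultdict(lambda: defaultdict(lambda: [0.0] * len(ACTIONS)))

def observe(veh_id):
    """Discretize an AV's local state: own speed bucket and gap-to-leader bucket."""
    speed_bucket = int(traci.vehicle.getSpeed(veh_id) // 5)
    leader = traci.vehicle.getLeader(veh_id, 100.0)  # (leader_id, gap) or None
    gap_bucket = int(leader[1] // 10) if leader else -1
    return (speed_bucket, gap_bucket)

def reward(veh_id):
    """Hypothetical reward: favor throughput (speed), penalize near-standstill."""
    speed = traci.vehicle.getSpeed(veh_id)
    return speed - (5.0 if speed < 0.1 else 0.0)

traci.start(["sumo", "-c", "city_scenario.sumocfg"])  # hypothetical scenario file
prev = {}  # per-agent (state, action) from the previous step
while traci.simulation.getMinExpectedNumber() > 0:
    for veh_id in traci.vehicle.getIDList():
        state = observe(veh_id)
        # Update the Q-value for the transition that ended in this state.
        if veh_id in prev:
            s_prev, a_prev = prev[veh_id]
            q = q_tables[veh_id][s_prev]
            target = reward(veh_id) + GAMMA * max(q_tables[veh_id][state])
            q[a_prev] += ALPHA * (target - q[a_prev])
        # Epsilon-greedy choice over discrete speed adjustments.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: q_tables[veh_id][state][a])
        new_speed = max(0.0, traci.vehicle.getSpeed(veh_id) + ACTIONS[action])
        traci.vehicle.setSpeed(veh_id, new_speed)
        prev[veh_id] = (state, action)
    traci.simulationStep()
traci.close()
```

In the full framework, the tabular update would be replaced by the corresponding deep value or policy networks, and MADDPG in particular would add centralized critics during training while keeping execution decentralized.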
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.