hig.se Publications
Aslani, Mohammad
Publications (3 of 3)
Aslani, M., Seipel, S. & Wiering, M. (2018). Continuous residual reinforcement learning for traffic signal control optimization. Canadian journal of civil engineering (Print), 45(8), 690-702
Continuous residual reinforcement learning for traffic signal control optimization
2018 (English). In: Canadian journal of civil engineering (Print), ISSN 0315-1468, E-ISSN 1208-6029, Vol. 45, no. 8, p. 690-702. Article in journal (Refereed). Published.
Abstract [en]

Traffic signal control can naturally be regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems, owing to its large state space. A straightforward way to address this challenge is to control traffic signals with continuous reinforcement learning. Although such methods have been successful in traffic signal control, they may become unstable and fail to converge to near-optimal solutions. We develop adaptive traffic signal controllers based on continuous residual reinforcement learning (CRL-TSC), which is more stable. The effect of three feature functions is empirically investigated in a microscopic traffic simulation. Furthermore, the effects of departing streets, additional actions, and the use of the spatial distribution of vehicles on the performance of CRL-TSCs are assessed. The results show that the best CRL-TSC setup reduces average travel time by 15% in comparison to an optimized fixed-time controller.
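As a rough illustration of the residual-learning idea named in the abstract, the sketch below shows a Baird-style residual update with linear function approximation over radial-basis-function features. The feature function, state encoding, and every parameter value are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of a residual value update with linear function approximation.
# All features, states, and constants here are hypothetical.
import numpy as np

def rbf_features(state, centers, width=5.0):
    """Radial-basis-function features over a continuous traffic state
    (e.g. queue lengths per approach); one possible feature function."""
    return np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * width ** 2))

def residual_update(w, phi_s, phi_s_next, reward,
                    alpha=0.01, gamma=0.95, beta=0.5):
    """One residual update of the weight vector w.

    beta blends the direct (TD) gradient with the residual gradient that
    also differentiates through the next-state value; beta = 0 recovers
    plain TD learning, beta = 1 the pure residual-gradient method."""
    delta = reward + gamma * (phi_s_next @ w) - (phi_s @ w)  # TD error
    grad = phi_s - beta * gamma * phi_s_next
    return w + alpha * delta * grad

# Toy usage: a single transition with hypothetical numbers.
centers = np.random.uniform(0.0, 50.0, size=(32, 2))  # assumed RBF centers
w = np.zeros(32)
s, s_next, r = np.array([12.0, 4.0]), np.array([9.0, 5.0]), -13.0
w = residual_update(w, rbf_features(s, centers),
                    rbf_features(s_next, centers), r)
```

Blending the two gradients trades off the fast but potentially divergent TD update against the provably convergent but slower residual-gradient update, which is the stability argument the abstract makes.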

Place, publisher, year, edition, pages
NRC Research Press, 2018
Keywords
continuous state reinforcement learning, adaptive traffic signal control, microscopic traffic simulation
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:hig:diva-27845 (URN), 10.1139/cjce-2017-0408 (DOI), 000440632100009 (ISI), 2-s2.0-85051122432 (Scopus ID)
Available from: 2018-09-05. Created: 2018-09-05. Last updated: 2018-09-05. Bibliographically approved.
Aslani, M., Mesgari, M. S., Seipel, S. & Wiering, M. (2018). Developing adaptive traffic signal control by actor-critic and direct exploration methods. Proceedings of the Institution of Civil Engineers: Transport, Article ID jtran.17.00085.
Developing adaptive traffic signal control by actor-critic and direct exploration methods
2018 (English). In: Proceedings of the Institution of Civil Engineers: Transport, ISSN 0965-092X, E-ISSN 1751-7710, article id jtran.17.00085. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Designing efficient traffic signal controllers has always been an important concern in traffic engineering, owing to the complex and uncertain nature of traffic environments. Within such a context, reinforcement learning has been one of the most successful methods thanks to its adaptability and online learning ability. Reinforcement learning gives traffic signals the ability to automatically determine the ideal behaviour for achieving their objective: alleviating traffic congestion. Traffic signals based on reinforcement learning can learn and react flexibly to different traffic situations without needing a predefined model of the environment. In this research, the actor-critic method is used for adaptive traffic signal control (ATSC-AC); actor-critic combines the advantages of actor-only and critic-only methods. One of the most important issues in reinforcement learning is the trade-off between exploring the traffic environment and exploiting the knowledge already obtained. To tackle this challenge, two direct exploration methods are adapted to traffic signal control and compared with two indirect exploration methods. The results reveal that ATSC-ACs based on direct exploration perform best, consistently outperforming a fixed-time controller and reducing average travel time by 21%.
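To make the exploration trade-off concrete, here is a minimal tabular actor-critic sketch that contrasts an indirect strategy (epsilon-greedy) with one plausible direct strategy: a visit-count bonus that steers the agent toward rarely tried signal phases. The state discretization, the bonus form, and all parameters are assumptions for illustration; they are not the specific direct exploration methods evaluated in the paper.

```python
# Tabular actor-critic with two interchangeable exploration strategies.
# Sizes, constants, and the count-based bonus are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 100, 4             # assumed discretized traffic states
theta = np.zeros((n_states, n_actions))  # actor: action preferences
v = np.zeros(n_states)                   # critic: state values
counts = np.ones((n_states, n_actions))  # visit counts for the direct bonus

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def select_action(s, direct=True, epsilon=0.1, kappa=0.5):
    prefs = theta[s].copy()
    if direct:
        # Direct exploration: an explicit bonus favours under-tried actions,
        # instead of exploring uniformly at random.
        prefs += kappa / np.sqrt(counts[s])
        return int(rng.choice(n_actions, p=softmax(prefs)))
    # Indirect exploration: with probability epsilon, act uniformly at random.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(prefs))

def actor_critic_step(s, a, r, s_next, alpha_v=0.1, alpha_p=0.01, gamma=0.95):
    delta = r + gamma * v[s_next] - v[s]   # TD error drives both updates
    v[s] += alpha_v * delta                # critic update
    pi = softmax(theta[s])                 # log-softmax policy gradient
    grad = -pi
    grad[a] += 1.0
    theta[s] += alpha_p * delta * grad     # actor update
    counts[s, a] += 1
```

The point of the contrast is that the indirect strategy spends its exploration budget blindly, while the direct strategy targets the parts of the state-action space the controller knows least about.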

Keywords
communications & control systems; traffic engineering; transport management
National Category
Civil Engineering; Computer and Information Sciences
Identifiers
urn:nbn:se:hig:diva-28332 (URN), 10.1680/jtran.17.00085 (DOI)
Available from: 2018-10-16. Created: 2018-10-16. Last updated: 2018-11-09. Bibliographically approved.
Aslani, M., Seipel, S., Mesgari, M. S. & Wiering, M. A. (2018). Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran. Advanced Engineering Informatics, 38, 639-655
Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran
2018 (English). In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 38, p. 639-655. Article in journal (Refereed). Published.
Abstract [en]

Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods because of the high variability and complexity of traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control thanks to its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism faces several challenges; among these, system disturbances and large state-action spaces are considered in this research.

The contribution of the present work rests on three features: (a) evaluating the robustness of different RLTSCs against system disturbances, including incidents, jaywalking, and sensor noise; (b) handling a high-dimensional state-action space both by employing different continuous state RL algorithms and by reducing the state-action space in order to improve the performance and learning speed of the system; and (c) presenting a detailed empirical study of traffic signal control in downtown Tehran through seven RL algorithms: discrete state Q-learning(λ), SARSA(λ), and actor-critic(λ), and continuous state Q-learning(λ), SARSA(λ), actor-critic(λ), and residual actor-critic(λ).

In this research, a real-world microscopic traffic simulation of downtown Tehran is first carried out; four experiments are then performed to find the RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic(λ) performs best. Moreover, the best RLTSC reduces average travel time by 22% in the presence of high system disturbances, compared with an optimized fixed-time controller.
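Since all seven algorithms in the study are (λ) variants, the mechanism they share is the eligibility trace. Below is a hedged sketch of tabular SARSA(λ) showing how the trace spreads each TD error backwards over recently visited state-action pairs; the environment interface and every constant are assumptions, and the paper's continuous state variants would replace the table with function approximation.

```python
# Tabular SARSA(lambda) with accumulating eligibility traces.
# `env` is assumed to expose reset() -> state and
# step(action) -> (next_state, reward, done); both are hypothetical.
import numpy as np

def sarsa_lambda_episode(env, Q, alpha=0.1, gamma=0.95, lam=0.8, epsilon=0.1):
    rng = np.random.default_rng()

    def policy(s):
        # Epsilon-greedy action selection over the current estimates.
        if rng.random() < epsilon:
            return int(rng.integers(Q.shape[1]))
        return int(np.argmax(Q[s]))

    e = np.zeros_like(Q)          # eligibility traces, one per (state, action)
    s = env.reset()
    a = policy(s)
    done = False
    while not done:
        s_next, r, done = env.step(a)
        a_next = policy(s_next)
        target = r if done else r + gamma * Q[s_next, a_next]
        delta = target - Q[s, a]  # TD error for this step
        e[s, a] += 1.0            # mark the current pair as eligible
        Q += alpha * delta * e    # credit every recently visited pair at once
        e *= gamma * lam          # traces fade with temporal distance
        s, a = s_next, a_next
    return Q
```

With λ = 0 this collapses to one-step SARSA; larger λ propagates credit further back along the trajectory, which typically speeds learning in slowly changing environments such as traffic networks.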

Keywords
Reinforcement learning, System disturbances, Traffic signal control, Microscopic traffic simulation
National Category
Computer and Information Sciences; Civil Engineering
Identifiers
urn:nbn:se:hig:diva-28333 (URN), 10.1016/j.aei.2018.08.002 (DOI), 000454378700047 (ISI), 2-s2.0-85054427837 (Scopus ID)
Available from: 2018-10-16. Created: 2018-10-16. Last updated: 2019-01-28. Bibliographically approved.