
(03) Developing a humanitarian logistics framework using a reinforcement learning technique, Dr. Khalid AlQumaizi, Prof. Ashit Dutta, Dr. Sultan Alshehri

Dr. Khalid I AlQumaizi
Assistant Professor of Family Medicine
College of Medicine, AlMaarefa University, Ad Diriyah
Riyadh 13713
Kingdom of Saudi Arabia
Email: kqumaizi@mcst.edu.sa
Prof. Ashit Kumar Dutta
Department of Computer Science and Information Systems
College of Applied Sciences, AlMaarefa University, Ad Diriyah
Riyadh 13713
Kingdom of Saudi Arabia
Email: adotta@mcst.edu.sa
Dr. Sultan Alshehri
Department of Pharmaceutical Sciences
College of Pharmacy, AlMaarefa University, Ad Diriyah 13713
Kingdom of Saudi Arabia
Email: sshehri.c@mcst.edu.sa

DOI: 10.47556/J.WJEMSD.19.3-4.2023.3

PURPOSE: Humanitarian logistics (HL) refers to the co-ordination of relief efforts so that disaster victims have timely access to essential goods. Large-scale catastrophes often create severe resource deficits, making it difficult to allocate limited resources across affected locations and hampering emergency logistics operations. The primary objective of disaster relief is to save lives, alleviate victims’ suffering, and protect human dignity in the face of overwhelming odds; this is only possible when logistical support reaches catastrophe victims. The central aims of HL are therefore the preservation of life and the reduction of post-disaster suffering. Disasters such as earthquakes and tsunamis demand the prompt and adequate delivery of emergency relief supplies, yet an effective optimisation model for allocating resources in HL is lacking. This paper therefore presents a reinforcement learning-based framework for optimising resource allocation in HL.

DESIGN/METHODOLOGY/APPROACH: The authors employ the State Action Reward State Action (SARSA) algorithm to reduce the complexity of resource allocation. In addition, a transportation plan is developed to support victims at remote locations. The proposed algorithm is evaluated against an exact dynamic programming approach and a heuristic algorithm to determine which yields the highest-quality solution.
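To make the approach concrete, the sketch below applies a SARSA update to a toy sequential-allocation problem: a fixed supply of relief units is allocated location by location, and the reward is the demand actually covered at each step. The environment (number of locations, demands, supply budget, reward definition) and all hyperparameters are illustrative assumptions, not the formulation used in the paper.

```python
# A minimal SARSA sketch for a toy relief-allocation task.
# The environment, state/action encoding, and reward are assumptions
# made for illustration only; they are not the paper's model.
import random
from collections import defaultdict

N_LOCATIONS = 4        # affected locations served in sequence (assumed)
SUPPLY = 10            # total relief units available per episode (assumed)
DEMAND = [3, 5, 2, 4]  # per-location demand (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def actions(remaining):
    """Feasible allocations: 0..remaining units for the current location."""
    return list(range(remaining + 1))

def choose(state, remaining):
    """Epsilon-greedy action selection over feasible allocations."""
    acts = actions(remaining)
    if random.random() < EPSILON:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(state, a)])

def run_episode():
    remaining = SUPPLY
    state = (0, remaining)              # (location index, remaining supply)
    action = choose(state, remaining)
    for loc in range(N_LOCATIONS):
        reward = min(action, DEMAND[loc])   # demand covered at this location
        remaining -= action
        done = loc == N_LOCATIONS - 1
        next_state = (loc + 1, remaining)
        next_action = 0 if done else choose(next_state, remaining)
        # SARSA update: bootstrap on the action actually taken next (on-policy)
        target = reward + (0.0 if done else GAMMA * Q[(next_state, next_action)])
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state, action = next_state, next_action

for _ in range(5000):
    run_episode()
```

Because SARSA is on-policy, the update bootstraps on the action the behaviour policy actually selects next rather than on the greedy action.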
FINDINGS: The experimental results show that the proposed algorithm outperforms state-of-the-art methods in both efficiency and accuracy. In addition, with further training the Q-learning algorithm can deliver near-optimal, and in some cases optimal, solutions.
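The Q-learning comparison mentioned in the findings differs from SARSA only in the bootstrap term. Assuming the same toy setup as the sketch above (the illustrative `Q`, `actions`, `ALPHA`, and `GAMMA`), the off-policy update for one allocation step would look like this:

```python
# Q-learning update for one allocation step (reusing the toy setup above).
# Unlike SARSA, the bootstrap maxes over all feasible next actions
# (off-policy) instead of using the action the policy actually takes next.
def q_learning_update(Q, state, action, reward, next_state, remaining, done):
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions(remaining))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```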
KEYWORDS: Humanitarian Logistics; Q-learning; SARSA; Emergency Relief Supplies; Reinforcement Learning.

WJEMSD V19 N3-4 2023 AlQumaizi_Dutta-Alshehri.pdf