Show simple item record

dc.contributor.author: Konovalenko, Anna
dc.contributor.author: Hvattum, Lars Magnus
dc.date.accessioned: 2024-10-07T09:17:19Z
dc.date.available: 2024-10-07T09:17:19Z
dc.date.created: 2024-10-02T11:56:04Z
dc.date.issued: 2024
dc.identifier.citation: Logistics. 2024, 8 (4), 96 [en_US]
dc.identifier.issn: 2305-6290
dc.identifier.uri: https://hdl.handle.net/11250/3156615
dc.description.abstract:
Background: The dynamic vehicle routing problem (DVRP) is a complex optimization problem that is crucial for applications such as last-mile delivery. Our goal is to develop an application that can make real-time decisions to maximize total performance while adapting to the dynamic nature of incoming orders. We formulate the DVRP as a vehicle routing problem in which new customer requests arrive dynamically, requiring immediate acceptance or rejection decisions.
Methods: This study leverages reinforcement learning (RL), a machine learning paradigm that operates via feedback-driven decisions, to tackle the DVRP. We present a detailed RL formulation and systematically investigate the impact of various state-space components on algorithm performance. Our approach involves incrementally modifying the state space: analyzing the impact of individual components, applying data transformation methods, and incorporating derived features.
Results: Our findings demonstrate that a carefully designed state space in the DVRP formulation significantly improves RL performance. Notably, incorporating derived features and selectively applying feature transformations enhanced the model's decision-making capabilities. The combination of all enhancements led to a statistically significant improvement over the basic state formulation.
Conclusions: This research provides insights into RL modeling for DVRPs, highlighting the importance of state-space design. The proposed approach offers a flexible framework applicable to different DVRP variants, with potential for validation using real-world data. [en_US]
dc.language.iso: eng [en_US]
dc.rights: Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Optimizing a Dynamic Vehicle Routing Problem with Deep Reinforcement Learning: Analyzing State-Space Components [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.source.volume: 8 [en_US]
dc.source.journal: Logistics [en_US]
dc.source.issue: 4 [en_US]
dc.identifier.doi: 10.3390/logistics8040096
dc.identifier.cristin: 2308684
dc.source.articlenumber: 96 [en_US]
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
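
The abstract above describes an RL formulation in which each incoming request triggers an immediate accept/reject decision, and in which the state space mixes raw components, derived features, and feature transformations. The minimal Python sketch below illustrates what such a state encoding could look like; it is not taken from the article, and every class name, feature, and normalization choice is an assumption made purely for illustration.

# Illustrative sketch only (not the authors' code): a hypothetical state
# encoding for the accept/reject decision described in the abstract.
from dataclasses import dataclass
import math


@dataclass
class VehicleState:
    x: float                  # current vehicle position
    y: float
    remaining_capacity: float
    time: float               # current time within the planning horizon
    horizon: float            # length of the planning horizon


@dataclass
class Request:
    x: float                  # customer location
    y: float
    demand: float
    deadline: float           # latest acceptable service time


def encode_state(vehicle: VehicleState, request: Request) -> list:
    """Flat state vector combining raw components, derived features, and a
    simple normalization, mirroring the three kinds of state-space
    modifications mentioned in the abstract."""
    distance = math.hypot(request.x - vehicle.x, request.y - vehicle.y)
    slack = request.deadline - vehicle.time                    # derived: time slack
    return [
        vehicle.x, vehicle.y,                                  # raw vehicle position
        request.x, request.y,                                  # raw request position
        request.demand / max(vehicle.remaining_capacity, 1e-6),  # derived: relative load
        distance,                                              # derived: travel distance to request
        max(slack, 0.0) / vehicle.horizon,                     # transformed: normalized slack
        vehicle.time / vehicle.horizon,                        # transformed: normalized clock
    ]


if __name__ == "__main__":
    v = VehicleState(x=0.0, y=0.0, remaining_capacity=10.0, time=3.0, horizon=8.0)
    r = Request(x=4.0, y=3.0, demand=2.0, deadline=7.0)
    print(encode_state(v, r))  # this vector would feed an accept/reject policy network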

