POLARIS - Performance analysis and Optimization of LARge Infrastructure and Systems
Preprints, Working Papers - Year: 2023

Learning Optimal Admission Control in Partially Observable Queueing Networks

Abstract

We present an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, and optimality refers to the infinite-horizon average holding/rejection cost. While reinforcement learning in Partially Observable Markov Decision Processes (POMDPs) is prohibitively expensive in general, we show that our algorithm has a regret that depends only sub-linearly on S, the maximal number of jobs in the network. In particular, in contrast with existing regret analyses, our regret bound does not depend on the diameter of the underlying Markov Decision Process (MDP), which in most queueing systems is at least exponential in S. The novelty of our approach is to leverage Norton's equivalent theorem for closed product-form queueing networks together with an efficient reinforcement learning algorithm for MDPs with the structure of birth-and-death processes.
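The paper's learning algorithm itself is not reproduced here. As a toy illustration of the underlying control problem, the sketch below simulates an M/M/1-type queue (a birth-and-death process) under a simple fixed threshold admission policy and estimates the average holding-plus-rejection cost. All parameter values, the function name, and the threshold policy are hypothetical illustrations, not the paper's method.

```python
import random

def simulate_threshold_policy(lam, mu, threshold, holding_cost, rejection_cost,
                              horizon, seed=0):
    """Simulate an M/M/1 queue under a fixed threshold admission policy.

    Jobs arrive at rate `lam` and are served at rate `mu`; an arriving job
    is admitted only if fewer than `threshold` jobs are present.
    Returns the time-averaged cost (holding cost accrued per job per unit
    time, plus a one-off rejection cost per rejected job).
    """
    rng = random.Random(seed)
    t, n = 0.0, 0                 # current time, number of jobs in system
    total_cost = 0.0
    while t < horizon:
        arr = rng.expovariate(lam)                       # time to next arrival
        dep = rng.expovariate(mu) if n > 0 else float("inf")  # next departure
        dt = min(arr, dep, horizon - t)
        total_cost += holding_cost * n * dt              # accrue holding cost
        t += dt
        if t >= horizon:
            break
        if arr < dep:             # arrival event
            if n < threshold:
                n += 1            # admit the job
            else:
                total_cost += rejection_cost             # reject, pay one-off cost
        else:                     # departure event
            n -= 1
    return total_cost / horizon

# Heavily loaded queue: a tight threshold trades holding cost for rejection cost.
costs = {k: simulate_threshold_policy(0.9, 1.0, k, 1.0, 5.0, 10_000)
         for k in (1, 3, 10)}
```

An RL approach in the paper's setting would have to choose such admission decisions online, observing only arrival and departure times rather than the queue lengths this sketch uses.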
Main file: questa.pdf (1.07 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04170992, version 1 (25-07-2023)
hal-04170992, version 2 (22-02-2024)

Licence

Attribution

Identifiers

  • HAL Id: hal-04170992, version 1

Cite

Jonatha Anselmi, Bruno Gaujal, Louis-Sébastien Rebuffi. Learning Optimal Admission Control in Partially Observable Queueing Networks. 2023. ⟨hal-04170992v1⟩