March 14, 2025


[2409.19975] Exploiting Adjacent Similarity in Multi-Armed Bandit Tasks via Transfer of Reward Samples


Authors: Rahul N R and one other author

Abstract: We consider a sequential multi-task problem, where each task is modeled as a stochastic multi-armed bandit with K arms. We assume the bandit tasks are adjacently similar in the sense that the difference between the mean rewards of the arms for any two consecutive tasks is bounded by a parameter. We propose two UCB-based algorithms (one assuming the parameter is known, the other not) that transfer reward samples from preceding tasks to reduce the overall regret across all tasks. Our analysis shows that transferring samples reduces the regret compared to the no-transfer case. Empirical results show that our algorithms outperform both the standard UCB algorithm without transfer and a naive transfer algorithm.
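The abstract does not spell out the transfer rule, so the following is only a rough illustration: a minimal Python sketch of UCB1 run over a sequence of adjacently similar tasks, naively pooling all of the preceding task's reward samples into the next task's arm statistics. This corresponds more closely to the naive transfer baseline the authors compare against than to their calibrated algorithms; all names (ucb_with_transfer, eps, the Gaussian reward model) are illustrative assumptions, not the paper's notation.

import numpy as np

def ucb_with_transfer(arm_means, horizon, prior_samples=None, rng=None):
    """Run UCB1 on one bandit task, optionally seeding each arm's
    statistics with reward samples transferred from the preceding task.
    Naive pooling, for illustration only (not the paper's algorithm)."""
    if rng is None:
        rng = np.random.default_rng()
    K = len(arm_means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    if prior_samples is not None:
        # Transfer: fold the previous task's samples into this task's stats.
        for a in range(K):
            counts[a] += len(prior_samples[a])
            sums[a] += sum(prior_samples[a])
    samples = [[] for _ in range(K)]  # rewards collected on this task
    regret = 0.0
    best = max(arm_means)
    for t in range(1, horizon + 1):
        if np.any(counts == 0):
            a = int(np.argmin(counts))  # play each unseen arm once
        else:
            # UCB index; note t counts only this task's rounds, while
            # counts may include transferred samples (a design choice).
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(ucb))
        r = rng.normal(arm_means[a], 1.0)  # stochastic reward
        counts[a] += 1
        sums[a] += r
        samples[a].append(r)
        regret += best - arm_means[a]
    return regret, samples

# Adjacently similar task sequence: consecutive mean-reward vectors
# differ by at most eps per arm (the paper's similarity parameter).
eps, K, tasks, horizon = 0.05, 5, 10, 2000
rng = np.random.default_rng(0)
means = rng.uniform(0, 1, K)
prior, total = None, 0.0
for _ in range(tasks):
    reg, prior = ucb_with_transfer(means, horizon, prior, rng)
    total += reg
    means = np.clip(means + rng.uniform(-eps, eps, K), 0, 1)
print(f"cumulative regret with naive transfer: {total:.1f}")

Because transferred samples pre-fill every arm's count, each new task skips the forced round-robin initialization and starts exploiting immediately; when eps is small this helps, but when consecutive tasks drift more, stale samples can bias the indices, which is why the paper's algorithms calibrate how much to transfer.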

Submission history

From: Rahul N R
[v1] Mon, 30 Sep 2024 06:03:22 UTC (868 KB)
[v2] Wed, 12 Mar 2025 18:15:36 UTC (869 KB)


