Simplifying Deep Temporal Difference Learning, by Matteo Gallici and 6 other authors
Abstract: Q-learning played a foundational role in the field of reinforcement learning (RL). However, TD algorithms that use off-policy data, such as Q-learning, or nonlinear function approximation, such as deep neural networks, require several additional tricks to stabilise training, primarily a large replay buffer and target networks. Unfortunately, the delayed updating of frozen network parameters in the target network harms sample efficiency, and the large replay buffer introduces memory and implementation overheads. In this paper, we investigate whether it is possible to accelerate and simplify off-policy TD training while maintaining its stability. Our key theoretical result demonstrates for the first time that regularisation techniques such as LayerNorm can yield provably convergent TD algorithms without the need for a target network or replay buffer, even with off-policy data. Empirically, we find that online, parallelised sampling enabled by vectorised environments stabilises training without the need for a large replay buffer. Motivated by these findings, we propose PQN, our simplified deep online Q-learning algorithm. Surprisingly, this simple algorithm is competitive with more complex methods such as Rainbow in Atari, PPO-RNN in Craftax, and QMix in Smax, and can be up to 50x faster than traditional DQN without sacrificing sample efficiency. In an era where PPO has become the go-to RL algorithm, PQN re-establishes off-policy Q-learning as a viable alternative.
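To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of an online Q-learning update in the spirit described by the abstract: a LayerNorm-regularised Q-network trained directly on a batch of transitions from vectorised environments, with no target network and no replay buffer. The PyTorch framing, network sizes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of the idea behind PQN as described in the abstract:
# online Q-learning with a LayerNorm-regularised network, bootstrapping
# from the same (online) network, trained on batches gathered from
# parallel environments instead of a replay buffer.
# All names, sizes, and hyperparameters are illustrative, not the paper's code.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 256):
        super().__init__()
        # LayerNorm after each hidden layer is the regularisation the paper
        # argues can stabilise TD without a target network.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def td_update(q_net, optimiser, obs, actions, rewards, next_obs, dones, gamma=0.99):
    """One online Q-learning step on a batch of transitions collected in
    parallel from vectorised environments (no replay buffer)."""
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the same online network -- no frozen target copy.
        next_q = q_net(next_obs).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.mse_loss(q_values, targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

In this sketch, stability is meant to come from the LayerNorm layers and from the diversity of the parallel batch, rather than from a target network or a large buffer, mirroring the paper's claim at a high level only.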
Submission history
From: Mattie Fellows
[v1] Fri, 5 Jul 2024 18:49:07 UTC (19,498 KB)
[v2] Wed, 23 Oct 2024 12:27:12 UTC (13,815 KB)
[v3] Tue, 4 Mar 2025 17:00:31 UTC (123,008 KB)
[v4] Fri, 14 Mar 2025 18:51:52 UTC (11,852 KB)