Negative Update Intervals in Deep Multi-Agent Reinforcement Learning

Published in Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2019

Recommended citation: Palmer, G., Savani, R., & Tuyls, K. (2019, May). Negative Update Intervals in Deep Multi-Agent Reinforcement Learning. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 43-51). http://www.ifaamas.org/Proceedings/aamas2019/pdfs/p43.pdf

In Multi-Agent Reinforcement Learning (MA-RL), independent cooperative learners must overcome a number of pathologies in order to learn optimal joint policies. However, addressing one pathology often leaves approaches vulnerable to others. For instance, hysteretic Q-learning addresses miscoordination while leaving agents vulnerable to misleading stochastic rewards. Other methods, such as *leniency*, have proven more robust when dealing with multiple pathologies simultaneously. However, leniency has predominantly been studied within the context of strategic-form games (bimatrix games) and fully observable Markov games consisting of a small number of probabilistic state transitions. This raises the question of whether these findings scale to more complex domains. To investigate this, we implement a temporally extended version of the Climb Game, within which agents must overcome multiple pathologies *simultaneously*, including relative overgeneralisation, stochasticity, and the alter-exploration and moving-target problems, while learning from a large observation space. We find that existing lenient and hysteretic approaches fail to consistently learn near-optimal joint policies in this environment. To address these pathologies we introduce *Negative Update Intervals-DDQN (NUI-DDQN)*, a Deep MA-RL algorithm which discards episodes yielding cumulative rewards outside the range of expanding intervals. NUI-DDQN consistently gravitates towards optimal joint policies in deterministic and stochastic reward settings of our environment, overcoming the outlined pathologies.
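To make the episode-filtering idea concrete, here is a minimal Python sketch, not the authors' implementation: the class name `NegativeUpdateIntervalBuffer`, its fields, and the `raise_lower_bound` schedule are hypothetical illustrations of the core rule described above, namely storing an episode's transitions for replay only when its cumulative reward falls inside a maintained interval whose lower bound is raised as better returns are observed.

```python
import random
from collections import deque


class NegativeUpdateIntervalBuffer:
    """Hypothetical sketch of negative-update-interval episode filtering.

    Episodes whose cumulative reward falls below the current lower bound
    r_min are discarded rather than stored for replay, so misleading
    low-return outcomes (e.g. caused by a partner's exploration) do not
    drive negative updates.
    """

    def __init__(self, capacity=100_000):
        self.replay = deque(maxlen=capacity)  # transitions kept for DDQN updates
        self.r_min = float("-inf")            # lower bound of the interval
        self.r_max = float("-inf")            # best cumulative reward seen so far

    def add_episode(self, transitions, episode_return):
        self.r_max = max(self.r_max, episode_return)
        if episode_return < self.r_min:
            return  # outside the interval: discard the whole episode
        self.replay.extend(transitions)

    def raise_lower_bound(self, slack):
        # Assumed schedule: keep r_min within `slack` of the best return
        # seen so far, never lowering it. The paper's exact interval
        # dynamics differ; this only illustrates the filtering principle.
        self.r_min = max(self.r_min, self.r_max - slack)

    def sample(self, batch_size):
        # Uniformly sample stored transitions for a DDQN learning step.
        return random.sample(list(self.replay), min(batch_size, len(self.replay)))
```

In practice a learner could keep one such buffer per action, routing each episode by the action it represents, so that each interval reflects that action's own return distribution.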

Download paper: http://www.ifaamas.org/Proceedings/aamas2019/pdfs/p43.pdf