DPhil student Patrick Chang, OMI Director Álvaro Cartea, and OMI Associate Member José Penalva have published their latest research (full paper here):

Title: Algorithmic Collusion in Electronic Markets: The Impact of Tick Size

Abstract: We characterise the stochastic interaction of independent learning algorithms as a deterministic system of ordinary differential equations and use it to understand the long-term behaviour of the algorithms in a repeated game. In a symmetric bimatrix repeated game, we prove that the dynamics of many learning algorithms converge to a pure-strategy Nash equilibrium of the stage game. In contrast, we prove that competition between Q-learning algorithms does not always lead to a Nash equilibrium of the stage game. We apply these results to study how the size of the tick in a limit order book facilitates or obstructs tacit collusion among algorithms that compete to provide liquidity. We characterise the set of pure-strategy Nash equilibria of the market making stage game with a discrete action space (e.g., the price grid of a limit order book) and its relation to the Bertrand–Nash equilibrium of the game with a continuous action space (i.e., an idealised limit order book with a zero tick size). We derive the bounds that define the set of Nash equilibria of the stage game when the action space is discrete and show that these bounds converge to the Bertrand–Nash equilibrium as the tick size of the limit order book tends to zero. For all the algorithms considered, our findings show that a large tick size obstructs competition; a smaller tick size lowers trading costs for liquidity takers, but slows convergence to a rest point. For the algorithms that are theoretically guaranteed to reach a Nash equilibrium, there is no assurance that the equilibrium reached is the most competitive outcome. Indeed, we show that tacit collusion can and does arise. However, the excess profits are bounded by the range of possible Nash equilibria, which shrinks with the tick size. Finally, for Q-learning, many of the outcomes are sub-optimal for both the market makers and the liquidity takers.
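
To give a flavour of the setting the abstract describes, below is a minimal simulation sketch: two independent, stateless epsilon-greedy Q-learning agents repeatedly quote prices on a discrete grid in a Bertrand-style stage game with marginal cost normalised to zero. The grid spacing plays the role of the tick size. The payoff rule, parameter values, and grid are illustrative assumptions, not the paper's specification; the sketch only shows the kind of experiment in which quotes can settle above the competitive one-tick level.

```python
# Sketch: two independent Q-learners compete in a repeated
# Bertrand-style pricing game on a discrete price grid.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

tick = 0.1                            # assumed tick size (grid spacing)
n_levels = 10
prices = tick * np.arange(1, n_levels + 1)   # discrete action space

alpha, gamma, eps = 0.1, 0.95, 0.1    # assumed learning parameters
episodes = 100_000

def stage_rewards(i: int, j: int) -> tuple[float, float]:
    """Bertrand-style payoffs: the cheaper quote captures the flow,
    ties split it; profit per unit is the quoted price (zero cost)."""
    pi, pj = prices[i], prices[j]
    if pi < pj:
        return pi, 0.0
    if pj < pi:
        return 0.0, pj
    return pi / 2, pj / 2

# Stateless Q-learning: each agent keeps one Q-value per own action.
Q = [np.zeros(n_levels), np.zeros(n_levels)]

for _ in range(episodes):
    acts = []
    for q in Q:
        if rng.random() < eps:        # epsilon-greedy exploration
            acts.append(int(rng.integers(n_levels)))
        else:
            acts.append(int(np.argmax(q)))
    r = stage_rewards(acts[0], acts[1])
    for k in range(2):                # standard Q-learning update
        a = acts[k]
        Q[k][a] += alpha * (r[k] + gamma * Q[k].max() - Q[k][a])

quotes = [float(prices[int(np.argmax(q))]) for q in Q]
print(f"rest-point quotes: {quotes}; competitive benchmark: {tick:.2f}")
```

In this toy setup, the competitive (Bertrand-Nash-like) outcome is quoting at the lowest grid price of one tick; runs in which both learners settle above it illustrate the supra-competitive rest points the paper calls tacit collusion, with the possible gap bounded by the grid, consistent with the abstract's claim that the range of equilibria shrinks with the tick size.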