OMI students Álvaro Arroyo, Felix Drinkall and Peer Nagy presented their research to the OMI and Man Group on 7th October 2022. See below for the full recordings.


Presentation by Álvaro Arroyo

Title: Deep Survival Functions in the Limit Order Book

Abstract: Whether to execute a trade with a limit order or a market order is an important problem in optimal execution strategies. At the crux of this decision lies an appropriate estimate of the fill probability of a limit order over time at different levels of the limit order book (LOB), which can give insights into its optimal placement. To this end, we propose a deep learning method to estimate the fill times of limit orders at different levels of the LOB. Unlike previous approaches, which rely on strong heuristics and unrealistic assumptions, we compare several data-driven methods tailored to survival analysis from time-series data and benchmark all methods using proper scoring rules.
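As a rough illustration of this survival-analysis framing, here is a minimal sketch assuming a discrete-time formulation, not the presented model: a network maps limit-order features to per-interval hazard probabilities, the fill-time survival curve follows as a cumulative product, and a Brier-type proper scoring rule evaluates it. All names, hyperparameters and the time binning are illustrative assumptions.

```python
# Hypothetical discrete-time deep survival sketch (not the presented model).
import torch
import torch.nn as nn

class DeepSurvivalNet(nn.Module):
    def __init__(self, n_features: int, n_time_bins: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_time_bins),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hazard h_k = P(fill in bin k | not filled before bin k).
        hazard = torch.sigmoid(self.net(x))
        # Survival S_k = prod_{j<=k} (1 - h_j): probability the order
        # remains unfilled at the end of bin k.
        return torch.cumprod(1.0 - hazard, dim=-1)

def brier_at_horizon(survival: torch.Tensor,
                     unfilled_at_k: torch.Tensor, k: int) -> torch.Tensor:
    # Proper scoring: squared error between the predicted survival
    # probability at bin k and the observed unfilled indicator (0/1).
    return torch.mean((survival[:, k] - unfilled_at_k.float()) ** 2)
```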

Presentation by Felix Drinkall

Title: Forecasting Changes in Corporate Credit Ratings Using SEC Filings

Abstract: Corporate credit ratings indicate a company’s ability to service its debt obligations, providing a measure of the company’s financial health. A change in credit ratings can affect the cost of raising capital and, in both virtuous and vicious cycles, can have a direct impact on the future profitability and size of a company. As an investor, being able to predict when a company is likely to be upgraded or downgraded by the rating agencies gives insight into the long-term direction of the company. My presentation will outline the current literature on credit rating forecasts and will motivate the use of text-derived features from forward-looking sections of SEC filings to help improve forecasting accuracy.
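To illustrate the general idea, the snippet below is a hedged sketch, not the presented method: it turns text from forward-looking filing sections into bag-of-words features for a rating-change classifier. The example documents, label scheme and model choice are all assumptions.

```python
# Illustrative sketch: text features from forward-looking sections of SEC
# filings feeding a rating-change classifier. Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: one document per filing (e.g. an outlook section)
# and a label per filing: -1 downgrade, +1 upgrade.
documents = [
    "We expect revenue growth to slow as leverage increases ...",
    "The company anticipates stable cash flows and reduced debt ...",
]
labels = [-1, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(documents, labels)
print(model.predict(["Management expects refinancing risk to rise ..."]))
```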

Presentation by Peer Nagy

Title: Reinforcement Learning for Trading Signal Execution in Limit Order Book Markets

Abstract: Successful quantitative trading strategies work by generating price signals with a small, but positive, correlation with future prices. The higher the trading frequency and strategy turnover, the more critical the execution component of the strategy becomes, as it translates the signal into concrete trades. Based on the ABIDES limit order book simulator, we built a reinforcement learning gym environment using the LOBSTER dataset on NASDAQ cash equities, simulating a realistic trading environment by replaying limit order book messages. We use Deep Duelling Double Q-learning with the APEX (distributed prioritized experience replay) algorithm to train a trading agent, which observes the limit order book state and a short-term directional forecast, to maximise the trading return. Performance is evaluated using artificial stochastic price signals with varying levels of noise, as well as signals generated from raw limit order book data using the DeepLOB model.
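As an illustrative sketch of the kind of environment the abstract describes (not the actual ABIDES/LOBSTER implementation), the snippet below shows a minimal gym-style environment that replays recorded book states and exposes them, together with a directional signal, as the agent's observation. All class names, the feature layout and the reward definition are assumptions.

```python
# Hypothetical minimal replay environment; the real system replays raw
# LOB messages through a full simulator rather than precomputed snapshots.
import numpy as np
import gym
from gym import spaces

class LOBReplayEnv(gym.Env):
    """Replays recorded LOB snapshots; actions are {sell, hold, buy}."""

    def __init__(self, snapshots: np.ndarray, signal: np.ndarray):
        super().__init__()
        self.snapshots = snapshots        # (T, n_features) book states
        self.signal = signal              # (T,) directional forecast
        self.t = 0
        self.action_space = spaces.Discrete(3)  # 0 sell, 1 hold, 2 buy
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf,
            shape=(snapshots.shape[1] + 1,), dtype=np.float32,
        )

    def _obs(self):
        # Observation = book state plus the directional signal.
        return np.append(self.snapshots[self.t],
                         self.signal[self.t]).astype(np.float32)

    def reset(self):
        self.t = 0
        return self._obs()

    def step(self, action):
        # Reward: position (-1/0/+1) times the next mid-price move,
        # assuming the mid-price sits in feature column 0.
        position = action - 1
        mid_now = self.snapshots[self.t, 0]
        self.t += 1
        done = self.t >= len(self.snapshots) - 1
        reward = position * (self.snapshots[self.t, 0] - mid_now)
        return self._obs(), float(reward), done, {}
```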