Please use this identifier when citing or linking to this work: https://hdl.handle.net/1946/48676
This thesis explores the application of reinforcement learning (RL) strategies to market making in illiquid markets. Traditional market-making approaches often rely on static, rule-based strategies, which can struggle in illiquid environments. The study implements three RL algorithms: Deep Q-Networks (DQN), Advantage Actor-Critic (A2C), and Proximal Policy Optimization (PPO). Their performance is evaluated in a simulated stock market environment under both liquid and illiquid market conditions. The findings show that DQN outperforms the other two algorithms in both settings. The research highlights the challenges RL models face in illiquid markets, which are characterized by higher volatility, wider bid-ask spreads, fewer trades, and a greater risk of adverse selection. The results contribute to the growing field of RL in financial markets by showing how these algorithms can be adapted to improve market making in challenging environments. The study also emphasizes the importance of understanding market conditions when deploying algorithmic trading strategies.
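The comparison described in the abstract can be illustrated, in spirit, with a minimal sketch along the following lines. Everything here is an illustrative assumption rather than the thesis's actual code: the toy `ToyMarketMakingEnv` environment, its fill probabilities, volatility, and reward terms are invented for the example, and stable-baselines3 is assumed only as one common source of DQN, A2C, and PPO implementations.

```python
# Minimal sketch (not the thesis code): compare DQN, A2C and PPO on a toy
# market-making environment with a liquid and an illiquid regime.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import A2C, DQN, PPO
from stable_baselines3.common.evaluation import evaluate_policy


class ToyMarketMakingEnv(gym.Env):
    """Hypothetical market maker: each step, choose a half-spread level;
    wider quotes earn more per fill but fill less often, especially in
    the illiquid regime (all dynamics are assumptions for illustration)."""

    def __init__(self, illiquid: bool = False, horizon: int = 200):
        super().__init__()
        self.illiquid = illiquid
        self.horizon = horizon
        # Three discrete half-spread levels (DQN requires a discrete action space).
        self.action_space = spaces.Discrete(3)
        # Observation: [current inventory, last mid-price return].
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)

    def _obs(self):
        return np.array([self.inventory, self.last_ret], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.inventory, self.last_ret = 0, 0.0, 0.0
        return self._obs(), {}

    def step(self, action):
        half_spread = 0.01 * (action + 1)
        # Illiquid regime: fewer fills and a more volatile mid price (assumed).
        fill_prob = (0.2 if self.illiquid else 0.6) / (action + 1)
        vol = 0.03 if self.illiquid else 0.01
        self.last_ret = self.np_random.normal(0.0, vol)
        reward = 0.0
        if self.np_random.random() < fill_prob:  # buy side fills
            self.inventory += 1.0
            reward += half_spread
        if self.np_random.random() < fill_prob:  # sell side fills
            self.inventory -= 1.0
            reward += half_spread
        # Mark inventory to market and penalize inventory risk.
        reward += self.inventory * self.last_ret - 0.001 * self.inventory**2
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon, False, {}


for illiquid in (False, True):
    env = ToyMarketMakingEnv(illiquid=illiquid)
    for algo in (DQN, A2C, PPO):
        model = algo("MlpPolicy", env, verbose=0)
        model.learn(total_timesteps=20_000)
        mean_r, _ = evaluate_policy(model, env, n_eval_episodes=20)
        print(f"illiquid={illiquid} {algo.__name__}: mean reward {mean_r:.3f}")
```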
Filename | Size | Access | Description | File type
---|---|---|---|---
MScThesis_UlfarSnaefeld_FINAL.pdf | 1.19 MB | Open | Full text | PDF