TechRxiv
XinJin20210522.pdf (1.13 MB)

Can we imitate stock price behavior to reinforcement learn option price?

preprint
posted on 28.05.2021, 06:27 by Xin Jin
This paper presents a framework for imitating the price behavior of the underlying stock in order to reinforcement learn the option price. We use accessible features of equities pricing data to construct a non-deterministic Markov decision process that models stock price behavior as driven by the principal investor's decision making. However, the low signal-to-noise ratio and instability inherent in equity markets make it difficult both to determine the state transition (price change) that follows an action (the principal investor's decision) and to choose an action given the current state (spot price). To overcome these challenges, we use a Bayesian deep neural network to compute the predictive distribution of the state transition induced by an action. In addition, instead of exploring a state-action relationship to formulate a policy, we learn an episode-based visible-hidden state-action relationship that probabilistically imitates the principal investor's successive decision making. Our algorithm then maps the imitated decisions to simulated stock price paths via the Bayesian deep neural network. Finally, the optimal option price is reinforcement learned by maximizing the cumulative risk-adjusted return of a dynamically hedged portfolio over the simulated price paths of the underlying.
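The pricing loop the abstract describes (simulate paths of the underlying, hedge an option position over them, and choose the price that maximizes a risk-adjusted return) can be sketched in miniature. Everything below is an illustrative assumption, not the authors' method: geometric Brownian motion stands in for the paper's imitation-learned price paths, a crude static hedge for dynamic hedging, a grid search for reinforcement learning, and a mean-variance criterion for the risk-adjusted return.

```python
import numpy as np

rng = np.random.default_rng(0)

S0 = 100.0  # spot price at inception (shared by simulation and hedging)

def simulate_paths(mu=0.0, sigma=0.2, steps=50, n_paths=2000, dt=1 / 252):
    """Toy stand-in for imitation-learned paths: plain GBM (an assumption)."""
    z = rng.standard_normal((n_paths, steps))
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(log_ret, axis=1))

def hedged_pnl(paths, strike, option_price, hedge_ratio=0.5):
    """P&L of selling a call at `option_price` with a static share hedge.

    The paper re-hedges dynamically along each path; holding a fixed
    `hedge_ratio` shares is a simplification for this sketch.
    """
    payoff = np.maximum(paths[:, -1] - strike, 0.0)
    stock_pnl = hedge_ratio * (paths[:, -1] - S0)
    return option_price - payoff + stock_pnl

def risk_adjusted_return(pnl, risk_aversion=0.5):
    """Mean-variance criterion: one common risk adjustment (an assumption)."""
    return pnl.mean() - risk_aversion * pnl.std()

# "Learn" the option price as the smallest premium whose risk-adjusted
# hedged return is non-negative; grid search stands in for RL here.
paths = simulate_paths()
candidates = np.linspace(0.0, 20.0, 201)
scores = np.array(
    [risk_adjusted_return(hedged_pnl(paths, 100.0, p)) for p in candidates]
)
best = candidates[int(np.argmax(scores >= 0.0))]
print(f"break-even option price (toy sketch): {best:.2f}")
```

Note that the risk-adjusted return here is strictly increasing in the premium (shifting P&L by a constant moves the mean but not the spread), so the break-even premium is well defined; the paper's objective, by contrast, is maximized over hedging decisions along imitation-learned paths rather than searched over a price grid.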

History

Email Address of Submitting Author

felixxinjin@gmail.com

ORCID of Submitting Author

0000-0003-1155-2890

Submitting Author's Institution

Bank of Montreal

Submitting Author's Country

Canada