
Reinforcement Learning in Trading – Part V

QuantInsti


See Part I, Part II, Part III and Part IV to get started.

Bellman Equation

Q(s, a_i) = R(s, a_i) + γ · max over a′ of Q(s′, a′)

In this equation, s is the state, A is the set of actions available at time t, and a_i is a specific action from that set. R is the reward table. Q is the state-action table, which is constantly updated as we learn more about our system through experience. γ is the discount factor, which weights the best Q-value achievable in the next state s′.

We will start with the Q-value for the Hold action on 30 July.

  1. The first part is the reward for taking that action. As seen in the R-table, it is 0.
  2. Let us assume that γ = 0.98. The maximum Q-value over the Sell and Hold actions on the next day, i.e. 31 July, is 1.09.
  3. Thus, the Q-value for the Hold action on 30 July is 0 + 0.98 × 1.09 ≈ 1.068.

In this way, we will fill the values for the other rows of the Hold column to complete the Q table.

Date         Sell   Hold
23-07-2020   0.95   0.966
24-07-2020   0.95   0.985
27-07-2020   0.98   1.005
28-07-2020   0.96   1.026
29-07-2020   0.98   1.047
30-07-2020   0.99   1.068
31-07-2020   1.09   1.090
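The backward fill described above can be sketched in a few lines of Python. All of the numbers come from the tables in the article: the Sell column is given, the R-table pays 0 for Hold, and γ = 0.98; the last day's Hold Q-value is taken as 1.09.

```python
# Backward fill of the Hold column of the Q-table using the Bellman
# update. Inputs are the numbers from the article's tables.

gamma = 0.98  # discount factor assumed in the worked example

dates = ["23-07-2020", "24-07-2020", "27-07-2020", "28-07-2020",
         "29-07-2020", "30-07-2020", "31-07-2020"]
q_sell = [0.95, 0.95, 0.98, 0.96, 0.98, 0.99, 1.09]  # Sell column (given)
reward_hold = 0.0                                     # R-table: Hold pays 0

# The last day's Hold Q-value is given as 1.09 in the table.
q_hold = [0.0] * len(dates)
q_hold[-1] = 1.09

# Bellman update: Q(s, Hold) = R + gamma * max over next-day actions
for t in range(len(dates) - 2, -1, -1):
    q_hold[t] = reward_hold + gamma * max(q_sell[t + 1], q_hold[t + 1])

for d, s, h in zip(dates, q_sell, q_hold):
    print(f"{d}  Sell={s:.2f}  Hold={h:.3f}")
```

Rounded to three decimals, the computed Hold column reproduces the table above.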

The RL model will now select the Hold action, since it has the higher Q-value. This is the intuition behind the Q-table, and the process of updating it is called Q-learning. Of course, we took a scenario with limited actions and states. In reality, the state space is large, so building a Q-table would be both time-consuming and resource-intensive.

To overcome this problem, you can use deep neural networks, known as Deep Q-Networks (DQN). A DQN learns the Q-table from past experience: given a state as input, it outputs a Q-value for each action, and we select the action with the maximum Q-value.
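As a minimal sketch of the idea, the snippet below replaces the Q-table with a tiny two-layer network that maps a state vector to one Q-value per action, then acts greedily via argmax. The layer sizes and random weights are purely illustrative (a trained DQN would learn them), and the training loop itself is omitted.

```python
# Inference step of a Q-network: state in, one Q-value per action out.
# Weights are random stand-ins for a trained network; sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, HIDDEN, N_ACTIONS = 4, 16, 2   # actions: 0 = Sell, 1 = Hold

W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: state vector -> vector of Q-values, one per action."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

state = rng.normal(size=STATE_DIM)         # e.g. recent price features
q = q_values(state)
action = int(np.argmax(q))                 # greedy action selection
print("Q-values:", q, "-> action:", ["Sell", "Hold"][action])
```

In a full DQN, the network would be trained on stored (state, action, reward, next state) experiences so that its outputs converge toward the Bellman targets computed in the table example above.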

Stay tuned for the next installment to learn how to train artificial neural networks.

Visit QuantInsti to download practical code: https://blog.quantinsti.com/reinforcement-learning-trading/.

Disclosure: Interactive Brokers

Information posted on IBKR Traders’ Insight that is provided by third-parties and not by Interactive Brokers does NOT constitute a recommendation by Interactive Brokers that you should contract for the services of that third party. Third-party participants who contribute to IBKR Traders’ Insight are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.

This material is from QuantInsti and is being posted with permission from QuantInsti. The views expressed in this material are solely those of the author and/or QuantInsti and IBKR is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.
