
Towards Better Keras Modeling – Part IV

See the previous installment in this series for Setup and Download.

TL;DR: essentially, a talos workflow involves (1) creating a dict of parameter values to evaluate, (2) defining your keras model within a build function as you may already do, but with a few small modifications in format, and (3) running a "Scan" method.

Warning: the code below can take 60+ minutes to execute. It took me about 80 minutes on a Tesla K80 GPU. If you want to replicate the results but are concerned about compute time, speed things up by:

  1. reducing/changing parameter values – especially epochs and batch_size,
  2. setting the grid_downsample value to something less than 1.00, which runs a random subset of the possible combinations, or
  3. adjusting the early_stopper settings (namely, reducing the patience value so training stops more quickly after the low-water-mark is reached).
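For reference, the three speed-up knobs might look like the following (these values are hypothetical; tune them to your hardware):

```python
# Hypothetical "fast" settings for the three knobs above.
p_overrides = {
    'epochs': [25],          # (1) fewer epochs per trial
    'batch_size': [10000],   # (1) larger batches -> fewer weight updates
}
grid_downsample = 0.25       # (2) try a random 25% of all combinations
patience = 5                 # (3) stop sooner once validation loss plateaus
print(p_overrides, grid_downsample, patience)
```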

from keras.models import Sequential
from keras.layers import Dropout, Dense
from keras.callbacks import TensorBoard
from talos.model.early_stopper import early_stopper
import talos as ta

# track performance on tensorboard
tensorboard = TensorBoard(log_dir='./logs',
                          histogram_freq=0,
                          batch_size=5000,
                          write_graph=False,
                          write_images=False)

# (1) Define dict of parameters to try
p = {'first_neuron': [10, 40, 160, 640, 1280],
     'hidden_neuron': [10, 40, 160],
     'hidden_layers': [0, 1, 2, 4],
     'batch_size': [1000, 5000, 10000],
     'optimizer': ['adam'],
     'kernel_initializer': ['uniform'],  # 'normal'
     'epochs': [50],
     'dropout': [0.0, 0.25, 0.5],
     'last_activation': ['sigmoid']}
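As a quick sanity check on runtime, you can count how many combinations this dict implies: the size of the full grid is just the product of the list lengths.

```python
# Size of the full grid = product of the number of values per parameter.
p = {'first_neuron': [10, 40, 160, 640, 1280],
     'hidden_neuron': [10, 40, 160],
     'hidden_layers': [0, 1, 2, 4],
     'batch_size': [1000, 5000, 10000],
     'optimizer': ['adam'],
     'kernel_initializer': ['uniform'],
     'epochs': [50],
     'dropout': [0.0, 0.25, 0.5],
     'last_activation': ['sigmoid']}

total = 1
for values in p.values():
    total *= len(values)

print(total)              # 540 combinations in the full grid
print(int(total * 0.50))  # 270 with grid_downsample=0.50
```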

# (2) create a function which constructs a compiled keras model object
def numerai_model(x_train, y_train, x_val, y_val, params):
    print(params)

    model = Sequential()

    ## initial layer
    model.add(Dense(params['first_neuron'], input_dim=x_train.shape[1],
                    activation='relu',
                    kernel_initializer=params['kernel_initializer']))
    model.add(Dropout(params['dropout']))

    ## hidden layers
    for i in range(params['hidden_layers']):
        print(f"adding layer {i+1}")
        model.add(Dense(params['hidden_neuron'], activation='relu',
                        kernel_initializer=params['kernel_initializer']))
        model.add(Dropout(params['dropout']))

    ## final layer
    model.add(Dense(1, activation=params['last_activation'],
                    kernel_initializer=params['kernel_initializer']))

    model.compile(loss='binary_crossentropy',
                  optimizer=params['optimizer'],
                  metrics=['acc'])

    history = model.fit(x_train, y_train,
                        validation_data=[x_val, y_val],
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        callbacks=[tensorboard,
                                   early_stopper(params['epochs'], patience=10)],
                        verbose=0)
    return history, model

# (3) Run a "Scan" using the params and function created above

t = ta.Scan(x=X_train.values,
            y=y_train.values,
            model=numerai_model,
            params=p,
            grid_downsample=0.50,
            dataset_name='numerai_example',
            experiment_no='1')

Done! We now have results from 270 unique network configurations. Talos automatically saves them to a file named [dataset_name]_[experiment_no].csv, which we can refer to offline. Hint: if you re-run, make sure to increment experiment_no so you don't overwrite past results.

Visit The Alpha Scientist blog to download the complete code:
https://alphascientist.com/hyperparameter_optimization_with_talos.html

The Alpha Scientist blog – Chad is a full-time quantitative trader who has been working on data analytics since before it was cool. He has long balanced his interest in computer science (MS in EE/CS from MIT) with a fascination in markets (CFA designation in 2009). Prior to becoming a full-time quant, he built analytics products and managed teams at software companies across Silicon Valley. If you’ve found this post useful, please follow @data2alpha on Twitter and forward to a friend or colleague who may also find this topic interesting. https://alphascientist.com/

Disclosure: Interactive Brokers

Information posted on IBKR Traders’ Insight that is provided by third-parties and not by Interactive Brokers does NOT constitute a recommendation by Interactive Brokers that you should contract for the services of that third party. Third-party participants who contribute to IBKR Traders’ Insight are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.

This material is from The Alpha Scientist and is being posted with permission from The Alpha Scientist. The views expressed in this material are solely those of the author and/or The Alpha Scientist and IBKR is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.
