The Modeling Results
To start with, I build an ANN with densely connected layers as the baseline model against which to compare the others.
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop

# Baseline: flatten the lookback window and feed it to a dense stack.
model_ann = Sequential()
model_ann.add(layers.Flatten(input_shape=(lookback, data_u.shape[-1])))
model_ann.add(layers.Dense(32, activation='relu'))
model_ann.add(layers.Dropout(0.3))
model_ann.add(layers.Dense(1, activation='sigmoid'))  # binary win/loss output
model_ann.summary()
Then, I compile the model and record the fitting process.
model_ann.compile(optimizer=RMSprop(lr=1e-2),
                  loss='binary_crossentropy',
                  metrics=['acc'])
history = model_ann.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps)
To check the performance on the validation dataset, I plot the loss curve.
import matplotlib.pyplot as plt

history_dic = history.history
loss_ = history_dic['loss']
val_loss_ = history_dic['val_loss']
epochs = range(1, 21)

plt.plot(epochs, loss_, 'bo', label='training loss')
plt.plot(epochs, val_loss_, 'r', label='validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
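Since I plot the same curves for each of the models below, the plotting logic can be wrapped in a small helper. This is a sketch of my own; the name plot_loss is not part of the original code:

def plot_loss(history, n_epochs=20):
    # Plot training vs. validation loss from a Keras History object.
    history_dic = history.history
    epochs = range(1, n_epochs + 1)
    plt.plot(epochs, history_dic['loss'], 'bo', label='training loss')
    plt.plot(epochs, history_dic['val_loss'], 'r', label='validation loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()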
As expected, the model starts overfitting after several epochs. To evaluate it objectively, I apply it to the test set, which gives an accuracy of 60%.
scores = model_ann.evaluate_generator(test_generator, test_steps)
print("Accuracy = ", scores[1], " Loss = ", scores[0])
Next, I implement an RNN by using one LSTM layer followed by two densely connected layers.
# One LSTM layer followed by two densely connected layers.
model_rnn = Sequential()
model_rnn.add(layers.LSTM(32, dropout=0.2, recurrent_dropout=0.2,
                          input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
The model is trained in the same way as the ANN above.
model_rnn.compile(optimizer=RMSprop(lr=1e-2),
                  loss='binary_crossentropy',
                  metrics=['acc'])
history = model_rnn.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps)
The training and validation loss curves are shown below.
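They can be reproduced with the plot_loss helper sketched earlier; the same one-line call applies to each of the later models as well:

plot_loss(history)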
Overfitting is less severe here than with the ANN. Evaluating the model on the test data yields an accuracy of 62.5%. Although this is better than the densely connected ANN, the improvement is tiny.
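The test evaluation mirrors the one used for the ANN; a minimal sketch, assuming the same test generator (the later GRU and bidirectional variants can be evaluated with the same call):

scores = model_rnn.evaluate_generator(test_generator, test_steps)
print("Accuracy = ", scores[1], " Loss = ", scores[0])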
To gain better performance, I increase the complexity of the model by adding one more recurrent layer. To keep the computational cost down, however, I replace the LSTM layer with a Gated Recurrent Unit (GRU) layer. The model is shown below.
# Two stacked GRU layers; the first returns full sequences
# so the second can consume them.
model_rnn = Sequential()
model_rnn.add(layers.GRU(32, dropout=0.2, recurrent_dropout=0.2,
                         return_sequences=True,
                         input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.GRU(64, activation='relu',
                         dropout=0.2, recurrent_dropout=0.2))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
Again, the training and validation loss curves are shown below.
No serious overfitting appears in the plot. Even though the test accuracy has increased to 64%, the improvement is still tiny, and I begin to doubt whether an RNN can do the job.
Still, I make one last attempt by increasing the model's complexity further: I make the recurrent layers bidirectional.
# Same stacked GRU architecture, with both recurrent layers
# wrapped as bidirectional.
model_rnn = Sequential()
model_rnn.add(layers.Bidirectional(layers.GRU(32, dropout=0.2,
                                              recurrent_dropout=0.2,
                                              return_sequences=True),
                                   input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.Bidirectional(layers.GRU(64, activation='relu',
                                              dropout=0.2,
                                              recurrent_dropout=0.2)))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
This time, the training and validation loss curves are as below.
Before the model starts overfitting, there is little difference between its validation loss and that of the previous model, and the test accuracy is 64% as well.
Having explored all of the models above, I come to suspect that RNNs may not be a good fit for the NBA game result prediction problem. There are certainly dozens of hyperparameters that could still be tuned, but the difference between the ANN and the RNNs is simply too small.