Explore Overfitting and Underfitting
As always, the code in this example will use the tf.keras API; see the TensorFlow Keras guide to learn more.
In both of the previous examples (classifying movie reviews and predicting housing prices), we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.
In other words, our model would overfit the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the training set, what we really want is to develop models that generalize well to test data (or data they haven't seen before).
The opposite of overfitting is underfitting. Underfitting occurs when there is still room for improvement on the test data. It can happen for a number of reasons: the model is not powerful enough, it is over-regularized, or it has simply not been trained long enough. It means the network has not learned the relevant patterns in the training data.
If you train for too long, though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs is a useful skill, as we'll explore below.
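One practical way to strike that balance is early stopping: halt training once the validation loss stops improving. Below is a minimal sketch (not part of the original notebook) using keras.callbacks.EarlyStopping; the `model` variable, the data arrays, and the `patience` value are placeholders to be adapted to your setup:

# Hypothetical sketch: stop when val_loss has not improved for 2 epochs.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)

history = model.fit(train_data, train_labels,
                    epochs=20,
                    batch_size=512,
                    validation_data=(test_data, test_labels),
                    callbacks=[early_stop],
                    verbose=2)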
To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is not possible, the next-best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
In this notebook, we'll explore two common regularization techniques (weight regularization and dropout) and use them to improve our IMDB movie review classification notebook.
In [1]:
import tensorflow as tf
from tensorflow import keras

import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
1.13.1
Download the IMDB dataset
Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.
Multi-hot encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean, for instance, turning the sequence [3, 5] into a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones.
In [2]:
NUM_WORDS = 10000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

def multi_hot_sequences(sequences, dimension):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set specific indices of results[i] to 1s
    return results

train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
In [3]:
plt.plot(train_data[0])
Out[3]:
[<matplotlib.lines.Line2D at 0x2318f5c0>]
Demonstrate overfitting
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can therefore easily learn a perfect dictionary-like mapping between training samples and their targets — a mapping without any generalization power — which is useless when making predictions on previously unseen data.
Always keep this in mind: deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting the training data. There is a balance between "too much capacity" and "not enough capacity".
Unfortunately, there is no magic formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment with a series of different architectures.
To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.
We'll create a simple baseline model using only Dense layers, then create smaller and larger versions, and compare them.
Create a baseline model
In [4]:
baseline_model = keras.Sequential([
    # `input_shape` is only required here so that `.summary` works.
    keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

baseline_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy', 'binary_crossentropy'])

baseline_model.summary()
WARNING:tensorflow:From e:\program files\python37\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 16)                160016
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________
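The parameter counts in the summary follow directly from the layer shapes: a Dense layer has inputs × units weights plus units biases, so the layers above contribute 10000 × 16 + 16 = 160016, 16 × 16 + 16 = 272, and 16 × 1 + 1 = 17 parameters respectively.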
In [5]:
baseline_history = baseline_model.fit(train_data,
                                      train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)
Train on 25000 samples, validate on 25000 samples
WARNING:tensorflow:From e:\program files\python37\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/20 - 6s - loss: 0.5216 - acc: 0.7562 - binary_crossentropy: 0.5216 - val_loss: 0.3669 - val_acc: 0.8697 - val_binary_crossentropy: 0.3669
Epoch 2/20 - 3s - loss: 0.2672 - acc: 0.9091 - binary_crossentropy: 0.2672 - val_loss: 0.2868 - val_acc: 0.8883 - val_binary_crossentropy: 0.2868
Epoch 3/20 - 4s - loss: 0.1909 - acc: 0.9336 - binary_crossentropy: 0.1909 - val_loss: 0.2874 - val_acc: 0.8845 - val_binary_crossentropy: 0.2874
Epoch 4/20 - 4s - loss: 0.1545 - acc: 0.9468 - binary_crossentropy: 0.1545 - val_loss: 0.3123 - val_acc: 0.8774 - val_binary_crossentropy: 0.3123
Epoch 5/20 - 4s - loss: 0.1281 - acc: 0.9572 - binary_crossentropy: 0.1281 - val_loss: 0.3270 - val_acc: 0.8758 - val_binary_crossentropy: 0.3270
Epoch 6/20 - 4s - loss: 0.1076 - acc: 0.9658 - binary_crossentropy: 0.1076 - val_loss: 0.3542 - val_acc: 0.8732 - val_binary_crossentropy: 0.3542
Epoch 7/20 - 4s - loss: 0.0908 - acc: 0.9717 - binary_crossentropy: 0.0908 - val_loss: 0.3841 - val_acc: 0.8702 - val_binary_crossentropy: 0.3841
Epoch 8/20 - 4s - loss: 0.0766 - acc: 0.9785 - binary_crossentropy: 0.0766 - val_loss: 0.4187 - val_acc: 0.8662 - val_binary_crossentropy: 0.4187
Epoch 9/20 - 4s - loss: 0.0618 - acc: 0.9841 - binary_crossentropy: 0.0618 - val_loss: 0.4531 - val_acc: 0.8635 - val_binary_crossentropy: 0.4531
Epoch 10/20 - 4s - loss: 0.0511 - acc: 0.9879 - binary_crossentropy: 0.0511 - val_loss: 0.4954 - val_acc: 0.8609 - val_binary_crossentropy: 0.4954
Epoch 11/20 - 4s - loss: 0.0389 - acc: 0.9929 - binary_crossentropy: 0.0389 - val_loss: 0.5310 - val_acc: 0.8584 - val_binary_crossentropy: 0.5310
Epoch 12/20 - 4s - loss: 0.0299 - acc: 0.9956 - binary_crossentropy: 0.0299 - val_loss: 0.5702 - val_acc: 0.8574 - val_binary_crossentropy: 0.5702
Epoch 13/20 - 4s - loss: 0.0231 - acc: 0.9976 - binary_crossentropy: 0.0231 - val_loss: 0.6117 - val_acc: 0.8553 - val_binary_crossentropy: 0.6117
Epoch 14/20 - 4s - loss: 0.0175 - acc: 0.9987 - binary_crossentropy: 0.0175 - val_loss: 0.6467 - val_acc: 0.8542 - val_binary_crossentropy: 0.6467
Epoch 15/20 - 4s - loss: 0.0139 - acc: 0.9991 - binary_crossentropy: 0.0139 - val_loss: 0.6658 - val_acc: 0.8544 - val_binary_crossentropy: 0.6658
Epoch 16/20 - 4s - loss: 0.0111 - acc: 0.9995 - binary_crossentropy: 0.0111 - val_loss: 0.7014 - val_acc: 0.8538 - val_binary_crossentropy: 0.7014
Epoch 17/20 - 4s - loss: 0.0091 - acc: 0.9995 - binary_crossentropy: 0.0091 - val_loss: 0.7202 - val_acc: 0.8535 - val_binary_crossentropy: 0.7202
Epoch 18/20 - 4s - loss: 0.0074 - acc: 0.9996 - binary_crossentropy: 0.0074 - val_loss: 0.7477 - val_acc: 0.8531 - val_binary_crossentropy: 0.7477
Epoch 19/20 - 4s - loss: 0.0062 - acc: 0.9996 - binary_crossentropy: 0.0062 - val_loss: 0.7680 - val_acc: 0.8518 - val_binary_crossentropy: 0.7680
Epoch 20/20 - 4s - loss: 0.0053 - acc: 0.9996 - binary_crossentropy: 0.0053 - val_loss: 0.7896 - val_acc: 0.8513 - val_binary_crossentropy: 0.7896
Create a smaller model
Let's create a model with fewer hidden units and compare it against the baseline model that we just created:
In [6]:
smaller_model = keras.Sequential([
    keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(4, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

smaller_model.compile(optimizer='adam',
                      loss='binary_crossentropy',
                      metrics=['accuracy', 'binary_crossentropy'])

smaller_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_3 (Dense)              (None, 4)                 40004
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 20
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 5
=================================================================
Total params: 40,029
Trainable params: 40,029
Non-trainable params: 0
_________________________________________________________________
Train the model using the same data:
In [7]:
smaller_history = smaller_model.fit(train_data,
                                    train_labels,
                                    epochs=20,
                                    batch_size=512,
                                    validation_data=(test_data, test_labels),
                                    verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20 - 4s - loss: 0.6087 - acc: 0.6926 - binary_crossentropy: 0.6087 - val_loss: 0.5521 - val_acc: 0.7424 - val_binary_crossentropy: 0.5521
Epoch 2/20 - 3s - loss: 0.4927 - acc: 0.8387 - binary_crossentropy: 0.4927 - val_loss: 0.4727 - val_acc: 0.8355 - val_binary_crossentropy: 0.4727
Epoch 3/20 - 3s - loss: 0.3925 - acc: 0.9019 - binary_crossentropy: 0.3925 - val_loss: 0.3872 - val_acc: 0.8710 - val_binary_crossentropy: 0.3872
Epoch 4/20 - 3s - loss: 0.2983 - acc: 0.9252 - binary_crossentropy: 0.2983 - val_loss: 0.3294 - val_acc: 0.8816 - val_binary_crossentropy: 0.3294
Epoch 5/20 - 3s - loss: 0.2389 - acc: 0.9338 - binary_crossentropy: 0.2389 - val_loss: 0.3037 - val_acc: 0.8839 - val_binary_crossentropy: 0.3037
Epoch 6/20 - 3s - loss: 0.2031 - acc: 0.9402 - binary_crossentropy: 0.2031 - val_loss: 0.2909 - val_acc: 0.8873 - val_binary_crossentropy: 0.2909
Epoch 7/20 - 3s - loss: 0.1773 - acc: 0.9476 - binary_crossentropy: 0.1773 - val_loss: 0.2887 - val_acc: 0.8859 - val_binary_crossentropy: 0.2887
Epoch 8/20 - 3s - loss: 0.1583 - acc: 0.9533 - binary_crossentropy: 0.1583 - val_loss: 0.2916 - val_acc: 0.8838 - val_binary_crossentropy: 0.2916
Epoch 9/20 - 3s - loss: 0.1430 - acc: 0.9593 - binary_crossentropy: 0.1430 - val_loss: 0.3027 - val_acc: 0.8788 - val_binary_crossentropy: 0.3027
Epoch 10/20 - 3s - loss: 0.1305 - acc: 0.9648 - binary_crossentropy: 0.1305 - val_loss: 0.3060 - val_acc: 0.8795 - val_binary_crossentropy: 0.3060
Epoch 11/20 - 3s - loss: 0.1193 - acc: 0.9682 - binary_crossentropy: 0.1193 - val_loss: 0.3150 - val_acc: 0.8773 - val_binary_crossentropy: 0.3150
Epoch 12/20 - 3s - loss: 0.1099 - acc: 0.9717 - binary_crossentropy: 0.1099 - val_loss: 0.3239 - val_acc: 0.8758 - val_binary_crossentropy: 0.3239
Epoch 13/20 - 3s - loss: 0.1012 - acc: 0.9747 - binary_crossentropy: 0.1012 - val_loss: 0.3355 - val_acc: 0.8727 - val_binary_crossentropy: 0.3355
Epoch 14/20 - 3s - loss: 0.0936 - acc: 0.9772 - binary_crossentropy: 0.0936 - val_loss: 0.3476 - val_acc: 0.8708 - val_binary_crossentropy: 0.3476
Epoch 15/20 - 3s - loss: 0.0871 - acc: 0.9796 - binary_crossentropy: 0.0871 - val_loss: 0.3579 - val_acc: 0.8702 - val_binary_crossentropy: 0.3579
Epoch 16/20 - 3s - loss: 0.0810 - acc: 0.9821 - binary_crossentropy: 0.0810 - val_loss: 0.3678 - val_acc: 0.8689 - val_binary_crossentropy: 0.3678
Epoch 17/20 - 3s - loss: 0.0748 - acc: 0.9849 - binary_crossentropy: 0.0748 - val_loss: 0.3796 - val_acc: 0.8680 - val_binary_crossentropy: 0.3796
Epoch 18/20 - 3s - loss: 0.0697 - acc: 0.9865 - binary_crossentropy: 0.0697 - val_loss: 0.3962 - val_acc: 0.8671 - val_binary_crossentropy: 0.3962
Epoch 19/20 - 3s - loss: 0.0651 - acc: 0.9881 - binary_crossentropy: 0.0651 - val_loss: 0.4038 - val_acc: 0.8668 - val_binary_crossentropy: 0.4038
Epoch 20/20 - 3s - loss: 0.0608 - acc: 0.9897 - binary_crossentropy: 0.0608 - val_loss: 0.4175 - val_acc: 0.8659 - val_binary_crossentropy: 0.4175
Create a bigger model
As an exercise, you can create an even larger model and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
In [8]:
bigger_model = keras.models.Sequential([
    keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(512, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

bigger_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy','binary_crossentropy'])

bigger_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_6 (Dense)              (None, 512)               5120512
_________________________________________________________________
dense_7 (Dense)              (None, 512)               262656
_________________________________________________________________
dense_8 (Dense)              (None, 1)                 513
=================================================================
Total params: 5,383,681
Trainable params: 5,383,681
Non-trainable params: 0
_________________________________________________________________
And, again, train the model using the same data:
In [9]:
bigger_history = bigger_model.fit(train_data,
                                  train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20 - 9s - loss: 0.3443 - acc: 0.8530 - binary_crossentropy: 0.3443 - val_loss: 0.3148 - val_acc: 0.8696 - val_binary_crossentropy: 0.3148
Epoch 2/20 - 9s - loss: 0.1417 - acc: 0.9484 - binary_crossentropy: 0.1417 - val_loss: 0.3314 - val_acc: 0.8744 - val_binary_crossentropy: 0.3314
Epoch 3/20 - 9s - loss: 0.0441 - acc: 0.9874 - binary_crossentropy: 0.0441 - val_loss: 0.4434 - val_acc: 0.8685 - val_binary_crossentropy: 0.4434
Epoch 4/20 - 9s - loss: 0.0069 - acc: 0.9989 - binary_crossentropy: 0.0069 - val_loss: 0.5873 - val_acc: 0.8682 - val_binary_crossentropy: 0.5873
Epoch 5/20 - 9s - loss: 9.7864e-04 - acc: 1.0000 - binary_crossentropy: 9.7864e-04 - val_loss: 0.7093 - val_acc: 0.8656 - val_binary_crossentropy: 0.7093
Epoch 6/20 - 9s - loss: 8.7528e-04 - acc: 1.0000 - binary_crossentropy: 8.7528e-04 - val_loss: 0.7232 - val_acc: 0.8691 - val_binary_crossentropy: 0.7232
Epoch 7/20 - 9s - loss: 1.4544e-04 - acc: 1.0000 - binary_crossentropy: 1.4544e-04 - val_loss: 0.7518 - val_acc: 0.8686 - val_binary_crossentropy: 0.7518
Epoch 8/20 - 9s - loss: 9.5470e-05 - acc: 1.0000 - binary_crossentropy: 9.5470e-05 - val_loss: 0.7742 - val_acc: 0.8687 - val_binary_crossentropy: 0.7742
Epoch 9/20 - 9s - loss: 7.0958e-05 - acc: 1.0000 - binary_crossentropy: 7.0958e-05 - val_loss: 0.7910 - val_acc: 0.8686 - val_binary_crossentropy: 0.7910
Epoch 10/20 - 9s - loss: 5.5851e-05 - acc: 1.0000 - binary_crossentropy: 5.5851e-05 - val_loss: 0.8047 - val_acc: 0.8684 - val_binary_crossentropy: 0.8047
Epoch 11/20 - 9s - loss: 4.5268e-05 - acc: 1.0000 - binary_crossentropy: 4.5268e-05 - val_loss: 0.8176 - val_acc: 0.8686 - val_binary_crossentropy: 0.8176
Epoch 12/20 - 9s - loss: 3.7788e-05 - acc: 1.0000 - binary_crossentropy: 3.7788e-05 - val_loss: 0.8276 - val_acc: 0.8685 - val_binary_crossentropy: 0.8276
Epoch 13/20 - 9s - loss: 3.1885e-05 - acc: 1.0000 - binary_crossentropy: 3.1885e-05 - val_loss: 0.8364 - val_acc: 0.8684 - val_binary_crossentropy: 0.8364
Epoch 14/20 - 9s - loss: 2.7393e-05 - acc: 1.0000 - binary_crossentropy: 2.7393e-05 - val_loss: 0.8452 - val_acc: 0.8685 - val_binary_crossentropy: 0.8452
Epoch 15/20 - 9s - loss: 2.3753e-05 - acc: 1.0000 - binary_crossentropy: 2.3753e-05 - val_loss: 0.8530 - val_acc: 0.8686 - val_binary_crossentropy: 0.8530
Epoch 16/20 - 9s - loss: 2.0851e-05 - acc: 1.0000 - binary_crossentropy: 2.0851e-05 - val_loss: 0.8606 - val_acc: 0.8686 - val_binary_crossentropy: 0.8606
Epoch 17/20 - 9s - loss: 1.8398e-05 - acc: 1.0000 - binary_crossentropy: 1.8398e-05 - val_loss: 0.8673 - val_acc: 0.8688 - val_binary_crossentropy: 0.8673
Epoch 18/20 - 9s - loss: 1.6352e-05 - acc: 1.0000 - binary_crossentropy: 1.6352e-05 - val_loss: 0.8737 - val_acc: 0.8688 - val_binary_crossentropy: 0.8737
Epoch 19/20 - 9s - loss: 1.4625e-05 - acc: 1.0000 - binary_crossentropy: 1.4625e-05 - val_loss: 0.8793 - val_acc: 0.8687 - val_binary_crossentropy: 0.8793
Epoch 20/20 - 9s - loss: 1.3164e-05 - acc: 1.0000 - binary_crossentropy: 1.3164e-05 - val_loss: 0.8852 - val_acc: 0.8686 - val_binary_crossentropy: 0.8852
Plot the training and validation loss
The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4), and its performance degrades much more slowly once it starts overfitting.
In [10]:
def plot_history(histories, key='binary_crossentropy'):
    plt.figure(figsize=(16,10))

    for name, history in histories:
        val = plt.plot(history.epoch, history.history['val_'+key],
                       '--', label=name.title()+' Val')
        plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
                 label=name.title()+' Train')

    plt.xlabel('Epochs')
    plt.ylabel(key.replace('_',' ').title())
    plt.legend()

    plt.xlim([0,max(history.epoch)])

plot_history([('baseline', baseline_history),
              ('smaller', smaller_history),
              ('bigger', bigger_history)])
Notice that the larger network begins overfitting almost immediately, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss).
Strategies
Add weight regularization
You may be familiar with Occam's Razor principle: given two explanations for the same phenomenon, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus, a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
- L1 regularization, where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).
- L2 regularization, where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization. (Both flavors appear in the sketch after this list.)
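For reference, here is a minimal sketch (not part of the original notebook) of the regularizer constructors tf.keras provides for both flavors, plus a combined variant; only the L2 form is used in the model below:

# L1 penalty: proportional to the sum of |w| over the layer's kernel weights
keras.regularizers.l1(0.001)
# L2 penalty: proportional to the sum of w**2 (this is what we use below)
keras.regularizers.l2(0.001)
# Both penalties applied together
keras.regularizers.l1_l2(l1=0.001, l2=0.001)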
In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
In [11]:
l2_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

l2_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_model_history = l2_model.fit(train_data, train_labels,
                                epochs=20,
                                batch_size=512,
                                validation_data=(test_data, test_labels),
                                verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20 - 4s - loss: 0.5023 - acc: 0.8133 - binary_crossentropy: 0.4605 - val_loss: 0.3661 - val_acc: 0.8791 - val_binary_crossentropy: 0.3226
Epoch 2/20 - 4s - loss: 0.2924 - acc: 0.9104 - binary_crossentropy: 0.2456 - val_loss: 0.3344 - val_acc: 0.8860 - val_binary_crossentropy: 0.2851
Epoch 3/20 - 4s - loss: 0.2439 - acc: 0.9316 - binary_crossentropy: 0.1925 - val_loss: 0.3387 - val_acc: 0.8843 - val_binary_crossentropy: 0.2858
Epoch 4/20 - 4s - loss: 0.2226 - acc: 0.9418 - binary_crossentropy: 0.1682 - val_loss: 0.3571 - val_acc: 0.8789 - val_binary_crossentropy: 0.3014
Epoch 5/20 - 4s - loss: 0.2073 - acc: 0.9495 - binary_crossentropy: 0.1506 - val_loss: 0.3691 - val_acc: 0.8771 - val_binary_crossentropy: 0.3116
Epoch 6/20 - 4s - loss: 0.1957 - acc: 0.9548 - binary_crossentropy: 0.1373 - val_loss: 0.3820 - val_acc: 0.8749 - val_binary_crossentropy: 0.3231
Epoch 7/20 - 4s - loss: 0.1889 - acc: 0.9574 - binary_crossentropy: 0.1291 - val_loss: 0.4039 - val_acc: 0.8707 - val_binary_crossentropy: 0.3435
Epoch 8/20 - 4s - loss: 0.1829 - acc: 0.9592 - binary_crossentropy: 0.1217 - val_loss: 0.4151 - val_acc: 0.8696 - val_binary_crossentropy: 0.3534
Epoch 9/20 - 3s - loss: 0.1775 - acc: 0.9600 - binary_crossentropy: 0.1151 - val_loss: 0.4341 - val_acc: 0.8649 - val_binary_crossentropy: 0.3711
Epoch 10/20 - 4s - loss: 0.1731 - acc: 0.9632 - binary_crossentropy: 0.1097 - val_loss: 0.4419 - val_acc: 0.8667 - val_binary_crossentropy: 0.3780
Epoch 11/20 - 4s - loss: 0.1685 - acc: 0.9654 - binary_crossentropy: 0.1041 - val_loss: 0.4562 - val_acc: 0.8639 - val_binary_crossentropy: 0.3913
Epoch 12/20 - 4s - loss: 0.1655 - acc: 0.9653 - binary_crossentropy: 0.1001 - val_loss: 0.4730 - val_acc: 0.8622 - val_binary_crossentropy: 0.4073
Epoch 13/20 - 4s - loss: 0.1610 - acc: 0.9692 - binary_crossentropy: 0.0949 - val_loss: 0.4828 - val_acc: 0.8611 - val_binary_crossentropy: 0.4165
Epoch 14/20 - 4s - loss: 0.1590 - acc: 0.9686 - binary_crossentropy: 0.0922 - val_loss: 0.4988 - val_acc: 0.8588 - val_binary_crossentropy: 0.4318
Epoch 15/20 - 4s - loss: 0.1588 - acc: 0.9690 - binary_crossentropy: 0.0915 - val_loss: 0.5141 - val_acc: 0.8592 - val_binary_crossentropy: 0.4462
Epoch 16/20 - 4s - loss: 0.1571 - acc: 0.9701 - binary_crossentropy: 0.0884 - val_loss: 0.5248 - val_acc: 0.8558 - val_binary_crossentropy: 0.4557
Epoch 17/20 - 4s - loss: 0.1541 - acc: 0.9708 - binary_crossentropy: 0.0852 - val_loss: 0.5294 - val_acc: 0.8572 - val_binary_crossentropy: 0.4604
Epoch 18/20 - 4s - loss: 0.1520 - acc: 0.9723 - binary_crossentropy: 0.0825 - val_loss: 0.5622 - val_acc: 0.8521 - val_binary_crossentropy: 0.4924
Epoch 19/20 - 4s - loss: 0.1512 - acc: 0.9713 - binary_crossentropy: 0.0812 - val_loss: 0.5532 - val_acc: 0.8546 - val_binary_crossentropy: 0.4828
Epoch 20/20 - 4s - loss: 0.1441 - acc: 0.9766 - binary_crossentropy: 0.0736 - val_loss: 0.5628 - val_acc: 0.8539 - val_binary_crossentropy: 0.4925
l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value**2 to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.
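You can see this penalty directly in the log above: the gap between loss and binary_crossentropy at each epoch is the regularization term. Below is a sketch (an assumption, not part of the original notebook) of how you might recompute it by hand from the trained weights of the two regularized layers:

# Hypothetical check: recompute the L2 penalty from the trained kernels.
# layer.get_weights()[0] is the kernel matrix; biases are not regularized here.
penalty = sum(0.001 * np.sum(w ** 2)
              for w in (l2_model.layers[0].get_weights()[0],
                        l2_model.layers[1].get_weights()[0]))
print(penalty)  # should roughly match loss - binary_crossentropy above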
Here's the impact of our L2 regularization penalty:
In [12]:
plot_history([('baseline', baseline_history),
              ('l2', l2_model_history)])
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.
Add dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out; instead, the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.
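A minimal numpy sketch of those mechanics, using the example vector above (illustrative only; it is not how tf.keras implements the Dropout layer internally, which uses "inverted dropout" and scales activations up by 1/(1 - rate) during training instead, so no test-time change is needed):

layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5

# Training: zero out roughly `rate` of the features at random.
mask = np.random.rand(*layer_output.shape) >= rate
train_output = layer_output * mask       # e.g. [0. , 0.5, 1.3, 0. , 1.1]

# Classic dropout at test time: keep all units, scale by the keep probability.
test_output = layer_output * (1 - rate)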
In tf.keras, you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it.
Let's add two Dropout layers to our IMDB network and see how well they do at reducing overfitting:
In [13]:
dpt_model = keras.models.Sequential([
    keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

dpt_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy','binary_crossentropy'])

dpt_model_history = dpt_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
WARNING:tensorflow:From e:\program files\python37\lib\site-packages\tensorflow\python\keras\layers\core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Train on 25000 samples, validate on 25000 samples
Epoch 1/20 - 4s - loss: 0.6432 - acc: 0.6127 - binary_crossentropy: 0.6432 - val_loss: 0.5211 - val_acc: 0.8422 - val_binary_crossentropy: 0.5211
Epoch 2/20 - 4s - loss: 0.4937 - acc: 0.7782 - binary_crossentropy: 0.4937 - val_loss: 0.3720 - val_acc: 0.8755 - val_binary_crossentropy: 0.3720
Epoch 3/20 - 4s - loss: 0.3972 - acc: 0.8453 - binary_crossentropy: 0.3972 - val_loss: 0.3092 - val_acc: 0.8852 - val_binary_crossentropy: 0.3092
Epoch 4/20 - 4s - loss: 0.3319 - acc: 0.8764 - binary_crossentropy: 0.3319 - val_loss: 0.2802 - val_acc: 0.8889 - val_binary_crossentropy: 0.2802
Epoch 5/20 - 4s - loss: 0.2871 - acc: 0.8955 - binary_crossentropy: 0.2871 - val_loss: 0.2767 - val_acc: 0.8889 - val_binary_crossentropy: 0.2767
Epoch 6/20 - 4s - loss: 0.2460 - acc: 0.9126 - binary_crossentropy: 0.2460 - val_loss: 0.2813 - val_acc: 0.8855 - val_binary_crossentropy: 0.2813
Epoch 7/20 - 4s - loss: 0.2198 - acc: 0.9270 - binary_crossentropy: 0.2198 - val_loss: 0.2924 - val_acc: 0.8866 - val_binary_crossentropy: 0.2924
Epoch 8/20 - 4s - loss: 0.1932 - acc: 0.9340 - binary_crossentropy: 0.1932 - val_loss: 0.3089 - val_acc: 0.8845 - val_binary_crossentropy: 0.3089
Epoch 9/20 - 3s - loss: 0.1771 - acc: 0.9396 - binary_crossentropy: 0.1771 - val_loss: 0.3187 - val_acc: 0.8839 - val_binary_crossentropy: 0.3187
Epoch 10/20 - 4s - loss: 0.1570 - acc: 0.9470 - binary_crossentropy: 0.1570 - val_loss: 0.3355 - val_acc: 0.8828 - val_binary_crossentropy: 0.3355
Epoch 11/20 - 4s - loss: 0.1457 - acc: 0.9509 - binary_crossentropy: 0.1457 - val_loss: 0.3532 - val_acc: 0.8817 - val_binary_crossentropy: 0.3532
Epoch 12/20 - 3s - loss: 0.1365 - acc: 0.9525 - binary_crossentropy: 0.1365 - val_loss: 0.3798 - val_acc: 0.8804 - val_binary_crossentropy: 0.3798
Epoch 13/20 - 4s - loss: 0.1249 - acc: 0.9565 - binary_crossentropy: 0.1249 - val_loss: 0.3950 - val_acc: 0.8792 - val_binary_crossentropy: 0.3950
Epoch 14/20 - 4s - loss: 0.1106 - acc: 0.9617 - binary_crossentropy: 0.1106 - val_loss: 0.4083 - val_acc: 0.8783 - val_binary_crossentropy: 0.4083
Epoch 15/20 - 4s - loss: 0.1090 - acc: 0.9612 - binary_crossentropy: 0.1090 - val_loss: 0.4413 - val_acc: 0.8788 - val_binary_crossentropy: 0.4413
Epoch 16/20 - 3s - loss: 0.0996 - acc: 0.9652 - binary_crossentropy: 0.0996 - val_loss: 0.4641 - val_acc: 0.8777 - val_binary_crossentropy: 0.4641
Epoch 17/20 - 4s - loss: 0.0969 - acc: 0.9648 - binary_crossentropy: 0.0969 - val_loss: 0.4753 - val_acc: 0.8779 - val_binary_crossentropy: 0.4753
Epoch 18/20 - 4s - loss: 0.0940 - acc: 0.9666 - binary_crossentropy: 0.0940 - val_loss: 0.4847 - val_acc: 0.8747 - val_binary_crossentropy: 0.4847
Epoch 19/20 - 4s - loss: 0.0873 - acc: 0.9686 - binary_crossentropy: 0.0873 - val_loss: 0.5048 - val_acc: 0.8759 - val_binary_crossentropy: 0.5048
Epoch 20/20 - 3s - loss: 0.0807 - acc: 0.9699 - binary_crossentropy: 0.0807 - val_loss: 0.5233 - val_acc: 0.8755 - val_binary_crossentropy: 0.5233
In [14]:
plot_history([('baseline', baseline_history),
              ('dropout', dpt_model_history)])
Adding dropout is a clear improvement over the baseline model.
To recap, here are the most common ways to prevent overfitting in neural networks:
- Get more training data.
- Reduce the capacity of the network.
- Add weight regularization.
- Add dropout.
Two important approaches not covered in this guide are data augmentation and batch normalization.