Deep Learning: IMDB Binary Classification

Date: 2022-07-25

This article walks through the IMDB binary classification problem in deep learning, covering a worked example, practical tips, the key concepts, and points to watch out for.

The IMDB Classification Problem

Overview

The IMDB dataset consists of 50,000 reviews from the Internet Movie Database (IMDB), split into 25,000 training samples and 25,000 test samples. In both the training set and the test set, positive and negative reviews appear in a 1:1 ratio.

Code

# Load the data
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
num_words=10000)
# num_words=10000 keeps only the 10,000 most frequent words
# train_labels contains 0s and 1s: 0 means a negative review, 1 means a positive review
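# Quick sanity check of the overview's claims (illustrative, not part of the original code):
print(len(train_data), len(test_data))   # 25000 25000
print(train_labels.mean())               # 0.5 -> positive and negative reviews are balanced 1:1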
word_index = imdb.get_word_index()  # get the word index as a dictionary
# word_index maps each word to an integer index (ranked by frequency), not to a raw count
reverse_word_index = dict(
    [(value, key) for (key, value) in word_index.items()])  # swap keys and values so indices map back to words
decoded_review = ' '.join(
    [reverse_word_index.get(i - 3, '?') for i in train_data[0]])  # decode the first review back into words
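# Note: indices 0, 1 and 2 are reserved by imdb.load_data for "padding", "start of
# sequence" and "unknown", which is why each index is shifted back by 3 when decoding.
# Illustrative peek at the result (not in the original run):
print(decoded_review[:80])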
# Data preprocessing
import numpy as np  # NumPy for array operations
# Define a function that turns integer sequences into binary (multi-hot) vectors
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))  # all-zeros matrix of shape (number of sequences, dimension)
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set the positions of the word indices that occur to 1
    return results
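# Illustrative example of the encoding (not in the original): with dimension=5,
# the sequence [1, 3, 3] becomes a vector with ones at positions 1 and 3.
print(vectorize_sequences([[1, 3, 3]], dimension=5))   # [[0. 1. 0. 1. 0.]]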
# Vectorize the training and test data
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
# Each sample is now a vector of 0s and 1s
# Vectorize the labels as well
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# Define the model
from keras import models
from keras import layers
model = models.Sequential()  # a Sequential model; the functional API could be used instead
# First layer: 16 hidden units with the relu (rectified linear unit) activation
# input_shape gives the shape of each input sample
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
# Second layer
model.add(layers.Dense(16, activation='relu'))
# Output layer: a single unit with a sigmoid activation, producing a probability
model.add(layers.Dense(1, activation='sigmoid'))
# Configure the optimizer and loss function
# The optimizer is the method by which the model's parameters are updated during learning
# Keras also lets you configure or customize the optimizer (see the sketch after compile below)
model.compile(optimizer='rmsprop',
            loss='binary_crossentropy',
            metrics=['accuracy'])
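# The string arguments above are shorthand. Equivalently, the optimizer can be passed
# as a configured object, which is how a non-default learning rate would be set.
# Minimal sketch: 0.001 is RMSprop's default, so this compile is equivalent to the one above
# (the argument is named lr in the Keras version used here; newer versions call it learning_rate):
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])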
# Validation set
# Set aside the first 10,000 samples of the training data as a validation set
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
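# Shape check (illustrative): 15,000 samples remain for training, 10,000 for validation
print(partial_x_train.shape, x_val.shape)   # (15000, 10000) (10000, 10000)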
# Train the model
# 20 epochs with mini-batches of 512 samples, validating on the validation set each epoch
# The per-epoch results are stored in history
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,  # mini-batches of 512 samples
                    validation_data=(x_val, y_val))
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 144us/step - loss: 0.4933 - accuracy: 0.7881 - val_loss: 0.3651 - val_accuracy: 0.8767
Epoch 2/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.2931 - accuracy: 0.9055 - val_loss: 0.3168 - val_accuracy: 0.8757
Epoch 3/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.2155 - accuracy: 0.9327 - val_loss: 0.2854 - val_accuracy: 0.8873
Epoch 4/20
15000/15000 [==============================] - 1s 74us/step - loss: 0.1712 - accuracy: 0.9447 - val_loss: 0.2828 - val_accuracy: 0.8878
Epoch 5/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.1441 - accuracy: 0.9526 - val_loss: 0.2787 - val_accuracy: 0.8876
Epoch 6/20
15000/15000 [==============================] - 1s 74us/step - loss: 0.1137 - accuracy: 0.9668 - val_loss: 0.3240 - val_accuracy: 0.8800
Epoch 7/20
15000/15000 [==============================] - 1s 76us/step - loss: 0.0952 - accuracy: 0.9717 - val_loss: 0.3106 - val_accuracy: 0.8839
Epoch 8/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.0763 - accuracy: 0.9791 - val_loss: 0.3339 - val_accuracy: 0.8776
Epoch 9/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.0628 - accuracy: 0.9829 - val_loss: 0.3982 - val_accuracy: 0.8658
Epoch 10/20
15000/15000 [==============================] - 1s 85us/step - loss: 0.0524 - accuracy: 0.9867 - val_loss: 0.3790 - val_accuracy: 0.8765
Epoch 11/20
15000/15000 [==============================] - 1s 88us/step - loss: 0.0403 - accuracy: 0.9913 - val_loss: 0.4047 - val_accuracy: 0.8754
Epoch 12/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.0326 - accuracy: 0.9931 - val_loss: 0.4363 - val_accuracy: 0.8762
Epoch 13/20
15000/15000 [==============================] - 1s 74us/step - loss: 0.0259 - accuracy: 0.9952 - val_loss: 0.4784 - val_accuracy: 0.8669
Epoch 14/20
15000/15000 [==============================] - 1s 75us/step - loss: 0.0205 - accuracy: 0.9971 - val_loss: 0.4990 - val_accuracy: 0.8712
Epoch 15/20
15000/15000 [==============================] - 1s 76us/step - loss: 0.0197 - accuracy: 0.9953 - val_loss: 0.5286 - val_accuracy: 0.8702
Epoch 16/20
15000/15000 [==============================] - 1s 84us/step - loss: 0.0096 - accuracy: 0.9995 - val_loss: 0.5725 - val_accuracy: 0.8637
Epoch 17/20
15000/15000 [==============================] - 1s 88us/step - loss: 0.0116 - accuracy: 0.9981 - val_loss: 0.5989 - val_accuracy: 0.8659
Epoch 18/20
15000/15000 [==============================] - 1s 80us/step - loss: 0.0052 - accuracy: 0.9999 - val_loss: 0.6340 - val_accuracy: 0.8684
Epoch 19/20
15000/15000 [==============================] - 1s 83us/step - loss: 0.0095 - accuracy: 0.9977 - val_loss: 0.6588 - val_accuracy: 0.8659
Epoch 20/20
15000/15000 [==============================] - 1s 77us/step - loss: 0.0030 - accuracy: 0.9999 - val_loss: 0.6966 - val_accuracy: 0.8628
# The History object has a history dictionary holding the metrics recorded at each epoch
history_dict = history.history
history_dict.keys()  # the recorded metric names
dict_keys(['val_loss', 'val_accuracy', 'loss', 'accuracy'])
# Plot the training and validation loss
import matplotlib.pyplot as plt  # matplotlib for plotting
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)  # epoch numbers

# Plot
plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
# Title
plt.title('Training and validation loss')
# Axis labels
plt.xlabel('Epochs')
plt.ylabel('Loss')
# Legend
plt.legend()
plt.show()

As training proceeds, the training loss keeps decreasing, but the validation loss stops improving after the first few epochs and then rises.

# Plot the training and validation accuracy
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
# Title
plt.title('Training and validation accuracy')
# Axis labels
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
# Legend
plt.legend()
plt.show()

Taken together, the two plots show:

  • On the training set, loss and accuracy keep improving as training continues
  • On the validation set, performance peaks after a few epochs and then degrades
  • This is overfitting; one way to stop training automatically is sketched below
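
A common way to stop training automatically once the validation loss stops improving is Keras's EarlyStopping callback. The article does not use it (it simply retrains for 4 epochs below); a minimal sketch would be:

from keras.callbacks import EarlyStopping
stop_early = EarlyStopping(monitor='val_loss', patience=2)  # stop after 2 epochs with no improvement
history = model.fit(partial_x_train, partial_y_train,
                    epochs=20, batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[stop_early])
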
# Train a fresh model from scratch
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
            loss='binary_crossentropy',
            metrics=['accuracy'])
# Same architecture and compilation as above
# This time there is no validation split: train on the full training set for 4 epochs, then evaluate on the test set
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
Epoch 1/4
25000/25000 [==============================] - 1s 52us/step - loss: 0.4484 - accuracy: 0.8310
Epoch 2/4
25000/25000 [==============================] - 1s 39us/step - loss: 0.2597 - accuracy: 0.9085
Epoch 3/4
25000/25000 [==============================] - 1s 39us/step - loss: 0.2001 - accuracy: 0.9288
Epoch 4/4
25000/25000 [==============================] - 1s 39us/step - loss: 0.1673 - accuracy: 0.9394
25000/25000 [==============================] - 2s 61us/step
results
[0.29232519361495973, 0.8848000168800354]

The final accuracy on the test set is about 0.88.
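To use the trained model, predict returns the probability that each review is positive. A brief illustrative call (not shown in the original run):

predictions = model.predict(x_test)
print(predictions[:3])  # values close to 1 mean the model predicts a positive review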

Closing Remarks

The IMDB dataset ships with Keras and is already cleanly preprocessed. Working with your own data would involve more steps, such as tokenizing and encoding the raw text.
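For a rough idea of what that extra work looks like, here is a hedged sketch of encoding a new raw review with the same word_index and +3 offset used above (the review text and simple whitespace tokenization are assumptions for illustration only):

new_review = "this movie was great"
encoded = [word_index[w] + 3 if w in word_index and word_index[w] + 3 < 10000 else 2
           for w in new_review.lower().split()]  # out-of-vocabulary words map to index 2, as load_data does
x_new = vectorize_sequences([encoded])
print(model.predict(x_new))  # probability that the review is positive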

peace & love
