Easy-to-Understand TensorFlow in Practice (Convolutional Neural Networks)

Date: 2022-07-22
This article introduces a hands-on TensorFlow implementation of a convolutional neural network, covering a worked example, practical tips, a summary of the key concepts, and points to note; it is offered as a reference for readers who need it.

Convolutional Neural Network Implementation in TensorFlow

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

Import the dataset

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./data/", one_hot=False)

Extracting ./data/train-images-idx3-ubyte.gz
Extracting ./data/train-labels-idx1-ubyte.gz
Extracting ./data/t10k-images-idx3-ubyte.gz
Extracting ./data/t10k-labels-idx1-ubyte.gz
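
Note: the tensorflow.examples.tutorials.mnist module is part of the older TensorFlow 1.x tutorials and may not be present in newer installations. If it is unavailable, a roughly equivalent preparation via tf.keras.datasets is sketched below (the variable names are illustrative and not from the original code):

# Sketch: load MNIST through tf.keras.datasets instead of the tutorials package.
# Produces float32 images flattened to 784 values in [0, 1] and integer labels,
# matching what the Estimator input functions below expect.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype(np.float32) / 255.0
x_test = x_test.reshape(-1, 784).astype(np.float32) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)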

Parameter settings

# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128

# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.25 # Dropout, probability to drop a unit

Define the CNN model

# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):

    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # TF Estimator input is a dict, in case of multiple inputs
        x = x_dict['images']

        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match picture format [Height x Width x Channel]
        # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])

        # Convolution Layer with 32 filters and a kernel size of 5
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)

        # Convolution Layer with 64 filters and a kernel size of 3
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)

        # Flatten the data to a 1-D vector for the fully connected layer
        fc1 = tf.contrib.layers.flatten(conv2)

        # Fully connected layer
        fc1 = tf.layers.dense(fc1, 1024)
        # Apply Dropout (if is_training is False, dropout is not applied)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)

        # Output layer, class prediction
        out = tf.layers.dense(fc1, n_classes)

    return out
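
For reference, the tensor shapes flowing through conv_net (with the default padding='valid' of tf.layers.conv2d and tf.layers.max_pooling2d) work out as follows; this trace is added here for clarity and is not part of the original code.

# Shape trace for a batch of MNIST images through conv_net (padding='valid'):
# input            : [batch, 28, 28, 1]
# conv1 (32, 5x5)  : [batch, 24, 24, 32]   (28 - 5 + 1 = 24)
# max_pool 2x2 / 2 : [batch, 12, 12, 32]
# conv2 (64, 3x3)  : [batch, 10, 10, 64]   (12 - 3 + 1 = 10)
# max_pool 2x2 / 2 : [batch, 5, 5, 64]
# flatten          : [batch, 1600]         (5 * 5 * 64)
# fc1 (dense)      : [batch, 1024]
# out (dense)      : [batch, n_classes]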

Supplement: differences among the tf.nn, tf.layers, and tf.contrib modules [1]

The tf.nn, tf.layers, and tf.contrib modules overlap in functionality, especially for convolution operations. In practice we can choose among the different modules as needed, and sometimes they can be mixed. A brief description of the three modules follows:

  • tf.nn: provides support for neural-network-related operations, including convolution (conv), pooling, normalization, losses, classification ops, embedding, RNN, and evaluation.
  • tf.layers: mainly provides higher-level neural-network layers, mostly convolution-related; it can be seen as a further wrapper around tf.nn, which is more low-level (a comparison sketch follows this list).
  • tf.contrib: tf.contrib.layers provides high-level operations for building the computation graph, such as network layers, regularization, and summary operations; however, tf.contrib contains unstable and experimental code, and its API may change in the future. The level of abstraction increases from the first of these modules to the last.
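
To make the difference in abstraction level concrete, here is a minimal sketch (not from the original article) of the same convolution written with tf.nn and with tf.layers; with tf.nn the filter variable must be created by hand, while tf.layers creates the kernel and bias internally.

# Low-level tf.nn: the filter variable is created and managed manually (bias omitted for brevity).
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
w = tf.get_variable('w', shape=[5, 5, 1, 32])
conv_nn = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))

# Higher-level tf.layers: kernel and bias variables are created inside the call.
conv_layers = tf.layers.conv2d(x, filters=32, kernel_size=5,
                               padding='same', activation=tf.nn.relu)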

Supplement: the TensorFlow layers module [2]

Convolution

The layers module offers several convolution methods, such as conv1d(), conv2d(), and conv3d() for one-, two-, and three-dimensional convolution, conv2d_transpose() and conv3d_transpose() for two- and three-dimensional transposed convolution (deconvolution), and separable_conv2d() for two-dimensional depthwise separable convolution. They are defined in tensorflow/python/layers/convolutional.py and share similar usage; conv2d() is described here as an example.

conv2d(
    inputs,
    filters,
    kernel_size,
    strides=(1, 1),
    padding='valid',
    data_format='channels_last',
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=tf.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None
)

The parameters are as follows:

  • inputs: required; the input data to operate on.
  • filters: required; an integer giving the number of output channels (output_channels).
  • kernel_size: required; the kernel size, either a single number (used for both height and width) or a list of length 2 (height, width).
  • strides: optional, default (1, 1); the convolution stride, either a single number (both height and width) or a list of length 2 (height, width).
  • padding: optional, default 'valid'; the padding mode, either 'valid' or 'same' (case-insensitive).
  • data_format: optional, default 'channels_last'; either 'channels_last' or 'channels_first', describing the layout of the input data. With 'channels_last' the input shape is (batch, height, width, channels); with 'channels_first' it is (batch, channels, height, width).
  • dilation_rate: optional, default (1, 1); the dilation rate of the convolution. With a dilation rate of 2, gaps are inserted inside the kernel, so a 3×3 kernel effectively covers a 5×5 area.
  • activation: optional, default None; None means a linear activation.
  • use_bias: optional, default True; whether to use a bias.
  • kernel_initializer: optional, default None; the weight initialization method. If None, the default Xavier initializer is used.
  • bias_initializer: optional, defaults to zero initialization; the bias initialization method.
  • kernel_regularizer: optional, default None; the regularizer applied to the weights.
  • bias_regularizer: optional, default None; the regularizer applied to the bias.
  • activity_regularizer: optional, default None; the regularizer applied to the output.
  • kernel_constraint: optional, default None; the constraint applied to the weights.
  • bias_constraint: optional, default None; the constraint applied to the bias.
  • trainable: optional, default True; a boolean. If True, the variables are added to GraphKeys.TRAINABLE_VARIABLES.
  • name: optional, default None; the name of the convolution layer.
  • reuse: optional, default None; a boolean. If True, variables of a layer with the same name are reused.
  • Return value: the Tensor after convolution.

Note that only the input data, the number of output channels, and the kernel size need to be given; a usage sketch combining conv2d() and max_pooling2d() follows the pooling parameter list below.

Pooling

The layers module provides several pooling methods with similar usage: max_pooling1d(), max_pooling2d(), max_pooling3d(), average_pooling1d(), average_pooling2d(), and average_pooling3d(), i.e. one-, two-, and three-dimensional max and average pooling. They are all defined in tensorflow/python/layers/pooling.py; max_pooling2d() is described here as an example.

max_pooling2d(
    inputs,
    pool_size,
    strides,
    padding='valid',
    data_format='channels_last',
    name=None
)

The parameters are as follows:

  • inputs: required; the input to be pooled, which must be 4-dimensional.
  • pool_size: required; the pooling window size, either a single number (both height and width) or a list of length 2 (height, width).
  • strides: required; the pooling stride, either a single number (both height and width) or a list of length 2 (height, width).
  • padding: optional, default 'valid'; the padding method, either 'valid' or 'same' (case-insensitive).
  • data_format: optional, default 'channels_last'; either 'channels_last' or 'channels_first', describing the layout of the input data. With 'channels_last' the input shape is (batch, height, width, channels); with 'channels_first' it is (batch, channels, height, width).
  • name: optional, default None; the name of the pooling layer.
  • Return value: the Tensor after pooling.
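
A minimal usage sketch of conv2d() and max_pooling2d() together (illustrative only; the shape comments assume a 28×28 single-channel input and padding='same'):

# Only inputs, filters and kernel_size are required for conv2d();
# inputs, pool_size and strides are required for max_pooling2d().
images = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv = tf.layers.conv2d(images, filters=32, kernel_size=5,
                        padding='same', activation=tf.nn.relu)  # -> [None, 28, 28, 32]
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)    # -> [None, 14, 14, 32]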

Dropout

Dropout refers to temporarily removing units from the network with a certain probability during training, which helps prevent overfitting. The layers module provides the dropout() method for this, defined in tensorflow/python/layers/core.py; a usage sketch follows the parameter list below.

dropout(
    inputs,
    rate=0.5,
    noise_shape=None,
    seed=None,
    training=False,
    name=None
)

The parameters are as follows:

  • inputs: required; the input data.
  • rate: optional, default 0.5; the dropout rate. Setting it to 0.1 means 10% of the units are dropped.
  • noise_shape: optional, default None; a 1-D int32 Tensor giving the shape of the dropout mask, which is multiplied with inputs. For example, if inputs has shape (batch_size, timesteps, features) and we want the dropout mask to be the same across all timesteps, we can set noise_shape=[batch_size, 1, features].
  • seed: optional, default None; the seed for the random number generator.
  • training: optional, default False; a boolean indicating whether the layer is in training mode.
  • name: optional, default None; the name of the dropout layer.
  • Return value: the Tensor after the dropout layer.
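
A brief sketch of the two points that matter most in practice: units are only dropped when training=True, and noise_shape can tie the mask across a dimension (the shapes below are illustrative, not from the original):

# Dropout is only applied when training=True; at inference the input passes through unchanged.
fc = tf.placeholder(tf.float32, [None, 1024])
is_training = tf.placeholder(tf.bool)
fc_drop = tf.layers.dropout(fc, rate=0.25, training=is_training)

# noise_shape example: share one mask across all timesteps of a sequence.
seq = tf.placeholder(tf.float32, [32, 10, 64])   # (batch_size, timesteps, features)
seq_drop = tf.layers.dropout(seq, rate=0.5,
                             noise_shape=[32, 1, 64], training=True)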

Define the model function

# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):

    # Build the neural network
    # Because Dropout has different behavior at training and prediction time, we
    # need to create 2 distinct computation graphs that still share the same weights.
    logits_train = conv_net(features, num_classes, dropout, reuse=False, is_training=True)
    logits_test = conv_net(features, num_classes, dropout, reuse=True, is_training=False)

    # Predictions
    pred_classes = tf.argmax(logits_test, axis=1)
    pred_probas = tf.nn.softmax(logits_test)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # TF Estimators requires to return a EstimatorSpec, that specify
    # the different ops for training, evaluating, ...
    estim_specs = tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=pred_classes,
      loss=loss_op,
      train_op=train_op,
      eval_metric_ops={'accuracy': acc_op})

    return estim_specs
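
Note that model_fn computes pred_probas but only returns pred_classes in PREDICT mode. If the class probabilities are needed as well, the early-return branch could hand back a dict of predictions instead; a sketch of that variant (not the original code) is:

# Sketch: inside model_fn, the PREDICT branch could return both classes and probabilities.
if mode == tf.estimator.ModeKeys.PREDICT:
    predictions = {'class': pred_classes, 'prob': pred_probas}
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)

Each element yielded by model.predict() would then be a dict with 'class' and 'prob' keys.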

Build the Estimator

# Build the Estimator
model = tf.estimator.Estimator(model_fn)

INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75
INFO:tensorflow:Using config: {'_model_dir': 'C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000001F84714B780>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
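
As the warning above shows, no model_dir was passed, so the Estimator writes its checkpoints to a temporary folder. To keep them across runs, a directory can be given explicitly; the path below is illustrative:

# Sketch: persist checkpoints and summaries in a fixed directory instead of a temp folder.
model = tf.estimator.Estimator(model_fn, model_dir='./mnist_convnet_model')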

Define the input function

# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
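
For reference, tf.estimator.inputs.numpy_input_fn wraps the NumPy arrays in an input pipeline that shuffles, repeats, and batches them; a roughly equivalent input function written with tf.data is sketched below (an illustration, not the original code):

# Sketch: a tf.data-based input function roughly equivalent to numpy_input_fn above.
def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'images': mnist.train.images}, mnist.train.labels))
    # repeat() with no argument matches num_epochs=None (loop over the data indefinitely).
    dataset = dataset.shuffle(10000).repeat().batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()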

Train the model

# Train the Model
model.train(input_fn, steps=num_steps)

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75\model.ckpt.
INFO:tensorflow:loss = 2.310159, step = 1
INFO:tensorflow:global_step/sec: 7.94691
INFO:tensorflow:loss = 0.15775274, step = 101 (12.585 sec)
INFO:tensorflow:global_step/sec: 7.43979
INFO:tensorflow:loss = 0.051440004, step = 201 (13.440 sec)
INFO:tensorflow:global_step/sec: 8.26849
INFO:tensorflow:loss = 0.07565387, step = 301 (12.095 sec)
INFO:tensorflow:global_step/sec: 8.47324
INFO:tensorflow:loss = 0.043410238, step = 401 (11.802 sec)
INFO:tensorflow:global_step/sec: 7.94311
INFO:tensorflow:loss = 0.048961755, step = 501 (12.590 sec)
INFO:tensorflow:global_step/sec: 8.58757
INFO:tensorflow:loss = 0.024859685, step = 601 (11.645 sec)
INFO:tensorflow:global_step/sec: 8.39987
INFO:tensorflow:loss = 0.07183821, step = 701 (11.904 sec)
INFO:tensorflow:global_step/sec: 8.6733
INFO:tensorflow:loss = 0.007703744, step = 801 (11.530 sec)
INFO:tensorflow:global_step/sec: 8.25551
INFO:tensorflow:loss = 0.02502199, step = 901 (12.113 sec)
INFO:tensorflow:global_step/sec: 7.98054
INFO:tensorflow:loss = 0.019118268, step = 1001 (12.563 sec)
INFO:tensorflow:global_step/sec: 8.3921
INFO:tensorflow:loss = 0.009793495, step = 1101 (11.884 sec)
INFO:tensorflow:global_step/sec: 7.6179
INFO:tensorflow:loss = 0.08203622, step = 1201 (13.127 sec)
INFO:tensorflow:global_step/sec: 8.35142
INFO:tensorflow:loss = 0.03721855, step = 1301 (11.975 sec)
INFO:tensorflow:global_step/sec: 8.33818
INFO:tensorflow:loss = 0.025231175, step = 1401 (11.992 sec)
INFO:tensorflow:global_step/sec: 8.6748
INFO:tensorflow:loss = 0.026730753, step = 1501 (11.528 sec)
INFO:tensorflow:global_step/sec: 8.43105
INFO:tensorflow:loss = 0.008975061, step = 1601 (11.862 sec)
INFO:tensorflow:global_step/sec: 8.46893
INFO:tensorflow:loss = 0.011308375, step = 1701 (11.807 sec)
INFO:tensorflow:global_step/sec: 8.34723
INFO:tensorflow:loss = 0.007505517, step = 1801 (11.980 sec)
INFO:tensorflow:global_step/sec: 8.38929
INFO:tensorflow:loss = 0.021354698, step = 1901 (11.920 sec)
INFO:tensorflow:Saving checkpoints for 2000 into C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75\model.ckpt.
INFO:tensorflow:Loss for final step: 0.011493968.
<tensorflow.python.estimator.estimator.Estimator at 0x1f84570c710>

Evaluate the model

# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
model.evaluate(input_fn)

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-04-11-09:41:50
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75\model.ckpt-2000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-04-11-09:41:53
INFO:tensorflow:Saving dict for global step 2000: accuracy = 0.9868, global_step = 2000, loss = 0.043212146
{'accuracy': 0.9868, 'global_step': 2000, 'loss': 0.043212146}

Test the model

# Predict single images
n_images = 1
# Get images from test set
test_images = mnist.test.images[:n_images]
# Prepare the input data
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': test_images}, shuffle=False)
# Use the model to predict the images class
preds = list(model.predict(input_fn))

# Display
for i in range(n_images):
    plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
    plt.show()
    print("Model prediction:", preds[i])

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from C:\Users\xywang\AppData\Local\Temp\tmp8i1k3w75\model.ckpt-2000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Model prediction: 7

References

[1] tf API 研读1:tf.nn,tf.layers, tf.contrib概述 (https://blog.csdn.net/u014365862/article/details/77833481)

[2] TensorFlow layers模块用法 (https://cuiqingcai.com/5715.html)