Building a Convolutional Neural Network in only 40 lines of code
The simplest CNN possible using Keras and TensorFlow
Convolutional Neural Networks can be confusing and intimidating for anyone getting into deep learning. In this article, I will show that building a simple CNN is actually quite easy.
First, we need to import the modules.
import numpy as np
import pandas as pd
import tensorflow as tf
from pathlib import Path  # used below to build the image paths
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
Keras is a high-level Python neural-network library that runs on top of TensorFlow. Its simple architecture, readability, and overall ease of use make it one of the most popular libraries for deep learning in Python.
For this article, I used the “10 monkey species” dataset, available on Kaggle here: https://www.kaggle.com/slothkong/10-monkey-species . It contains 1098 training images and 272 validation images, split into 10 classes of monkeys.
Before we begin, make sure the images are stored properly. The structure seen below is necessary for the flow_from_directory function (which we will get to soon) to work.
Directory
- Training_images
  - Class_1
    - image_1
    - image_2
  - Class_2
    - image_1
- Validation_images
  - Class_1
    - image_1
...
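If you want to double-check your folder layout before training, a quick sketch like the one below prints how many images each class folder contains (assuming the Kaggle paths used later in this article and that the images are JPEGs):

from pathlib import Path

train_dir = Path('../input/10-monkey-species/training/training/')

# Count the images found in each class folder
for class_dir in sorted(train_dir.iterdir()):
    if class_dir.is_dir():
        print(class_dir.name, len(list(class_dir.glob('*.jpg'))))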
Now we begin. First, we have to specify the paths of the images, as well as their target size.
#Path
train_dir = Path('../input/10-monkey-species/training/training/')
test_dir = Path('../input/10-monkey-species/validation/validation/')
#Images target size
target_size = (100,100)
channels = 3 #RGB
We then create our generators, which let us do data augmentation and rescale the images. Be careful to augment only the training images, but rescale everything. Rescaling is necessary because putting every image in the same [0, 1] range means each one contributes more evenly during training.
As a clarification, be mindful that ImageDataGenerator does not create new image files. Rather, it applies random transformations to the existing training images on the fly, so the model is trained on a bigger variety of samples (the short sketch after the generator code below shows what this looks like). This helps avoid overfitting and makes the model more apt to predict the class of monkeys it has never seen.
#Data augmentation
train_generator = ImageDataGenerator(rescale=1/255,
rotation_range=40,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
valid_generator = ImageDataGenerator(rescale = 1/255)
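To see what these random transformations actually look like, here is a rough sketch that runs a single training image through train_generator a few times and plots the results. The file name is hypothetical, and matplotlib is assumed to be installed:

import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load one training image (hypothetical file name) and wrap it in a batch
img = img_to_array(load_img('../input/10-monkey-species/training/training/n0/some_image.jpg', target_size=(100, 100)))
batch = img.reshape((1,) + img.shape)

# Plot four random augmentations of the same image
fig, axes = plt.subplots(1, 4)
for ax, out in zip(axes, train_generator.flow(batch, batch_size=1)):
    ax.imshow(out[0])  # already rescaled to [0, 1]
    ax.axis('off')
plt.show()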
We can then import the images. The flow_from_directory function finds the images in their specified path and resizes them to their target size.
epochs = 20
batch_size = 64
#Finds images, transforms them
train_data = train_generator.flow_from_directory(train_dir, target_size=target_size, batch_size=batch_size, class_mode='categorical')
test_data = valid_generator.flow_from_directory(test_dir, target_size=target_size, batch_size=batch_size, class_mode='categorical', shuffle=False)
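It is worth knowing that flow_from_directory infers the class labels from the folder names. The mapping it builds can be inspected directly, which is handy later when turning predicted indices back into class names:

# Mapping from class folder name to label index
print(train_data.class_indices)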
Epochs and batch size are two very important hyperparameters. One epoch is done when the entire dataset has passed through the network. The batch size is the number of images processed before the model is updated. Adjusting those parameters drastically changes the speed and length of the training.
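To make that concrete, with 1098 training images and a batch size of 64, each epoch performs 17 weight updates (the final partial batch is dropped when flooring), which matches the 17/17 steps in the training log further down.

train_samples = 1098
batch_size = 64
print(train_samples // batch_size)  # 17 weight updates per epoch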
We can then build the model itself. Here, I create an extremely simple model: one convolution layer and one pooling layer, followed by a flattening step and the output layer. Obviously, a more complex model would help improve performance, and adding layers is extremely simple, but for this tutorial I will leave it at that.
#Number of images we have
train_samples = train_data.samples
valid_samples = test_data.samples

#Building the model
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, channels)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())  #flatten the feature maps into a vector
model.add(layers.Dense(10, activation='softmax'))
#Compile model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
The first convolutional layer requires an input shape, i.e. the shape of the images. The Flatten layer then turns the 2D feature maps into a 1D vector so the dense output layer can process them. The last layer of this CNN uses the softmax activation function, which is appropriate when we have multiple classes (we have 10 here), as it lets the model output the probability that an image belongs to each class. Finally, model.compile specifies how the model will learn during training.
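As a toy illustration of softmax (made-up logits, not actual model outputs), note how the outputs form a probability distribution over the classes:

import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # hypothetical raw scores for 3 classes
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs)        # [0.659 0.242 0.099]
print(probs.sum())  # 1.0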
We can then fit the model and evaluate performance on the validation set. (In recent versions of Keras, model.fit accepts generators directly; the older fit_generator is deprecated.)
#Fit the model
model.fit(train_data,
          steps_per_epoch=train_samples // batch_size,
          validation_data=test_data,
          validation_steps=valid_samples // batch_size,
          epochs=epochs)
This super simple model achieves an accuracy of almost 63% on the validation set after only 20 epochs!
Epoch 20/20 17/17 [===============================] - 81s 5s/step - loss: 0.6802 - accuracy: 0.7395 - val_loss: 1.4856 - val_accuracy: 0.6287
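Once training is done, you can also score the model on the validation generator and extract per-image predictions. Here is a minimal sketch, assuming the objects defined above are still in scope; since shuffle=False was set on test_data, the predictions line up with test_data.filenames:

# Overall validation loss and accuracy
val_loss, val_acc = model.evaluate(test_data)

# Predicted class index for each validation image
predictions = model.predict(test_data).argmax(axis=1)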
In practice, it would make sense to have many more epochs and to let the model train for as long as it improves.
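One common way to do exactly that is an EarlyStopping callback, sketched below: set a generous upper bound on epochs and stop once the validation loss stops improving, keeping the best weights seen so far.

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

model.fit(train_data,
          validation_data=test_data,
          epochs=100,  # upper bound; training stops earlier if val_loss stalls
          callbacks=[early_stop])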
There you have it, a Convolutional Neural Network, from start to finish, in only 40 lines of code. Not so intimidating anymore, is it? Thanks a lot for reading!