Doctor AI Diagnoses Pneumonia


Source: Deep Learning on Medium


Imports

Let's load some important libraries:

from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

Getting to know the data

Let's get to know the data by viewing two sample images: one in normal condition and one with pneumonia.

img_name = 'NORMAL2-IM-0588-0001.jpeg'
img_normal = load_img('../input/chest_xray/chest_xray/train/NORMAL/' + img_name)
print('NORMAL')
plt.imshow(img_normal)
plt.show()

img_name = 'person63_bacteria_306.jpeg'
img_pneumonia = load_img('../input/chest_xray/chest_xray/train/PNEUMONIA/' + img_name)
print('PNEUMONIA')
plt.imshow(img_pneumonia)
plt.show()

Preparing data to feed into the model

Next, we set some important variables, such as the image dimensions, number of epochs, and batch size:

img_width, img_height = 150, 150
nb_train_samples = 5217
nb_validation_samples = 17
epochs = 20
batch_size = 16

The image width and height are both 150 pixels. There will be 5217 samples to train on and 17 samples to validate with (we will generate more data via augmentation later). Validation data is used to evaluate the loss during training (as opposed to test data, which is used to evaluate the metric after training). Training will run for 20 epochs, in batches of 16 images.
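As a quick sanity check on what these numbers imply (an illustrative calculation, not from the original post), we can work out how many batches each epoch will cover:

# Rough arithmetic implied by the settings above
steps_per_epoch = nb_train_samples // batch_size        # 5217 // 16 = 326 batches per epoch
validation_steps = nb_validation_samples // batch_size  # 17 // 16 = 1 batch per epoch
print(steps_per_epoch, validation_steps)

These are the values we would later pass to the training loop.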

Specifying the directories for images:

train_data_dir = '../input/chest_xray/chest_xray/train'
validation_data_dir = '../input/chest_xray/chest_xray/val'
test_data_dir = '../input/chest_xray/chest_xray/test'
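Since os is already imported, an optional sanity check can confirm the sample counts mentioned above. This snippet is an illustration, assuming the standard Kaggle chest_xray directory layout:

# Optional sanity check (not in the original post): count images in each split
for split_dir in (train_data_dir, validation_data_dir, test_data_dir):
    n_images = sum(len(files) for _, _, files in os.walk(split_dir))
    print(split_dir, n_images)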

Lastly, we set the input shape, whose layout depends on whether the Keras backend expects the color channels first or last:

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

Because the images are loaded in color, each pixel has three separate color values, hence the depth of 3. If the images were black-and-white, like the MNIST dataset, the depth would be 1.
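To see this concretely (a small illustrative check, not part of the original walkthrough), we can resize one of the sample images to the model's input size and inspect the resulting array shape:

from keras.preprocessing.image import img_to_array

# Resize to the model's input size, then convert to a numpy array
arr = img_to_array(img_normal.resize((img_width, img_height)))
print(arr.shape)  # (150, 150, 3) under the default 'channels_last' format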

Creating the Model

The model follows a standard CNN recipe: several repetitions of a convolution layer, an activation layer, and a pooling layer, followed by a flattening step and a standard dense layer. A dropout layer is added near the end for extra regularization, followed by a final one-unit dense layer with a sigmoid activation.

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

For more on Keras layers and what they do, check out this article.

We can get information on the layers by calling model.layers.

We can also get an idea of what the inputs and outputs should be with model.input and model.output.

[Figures in original: output of model.input and output of model.output]
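For instance, these calls reproduce those views (the exact tensor names vary by Keras version), and model.summary() adds a layer-by-layer overview:

print(model.input)    # a placeholder tensor of shape (None, 150, 150, 3)
print(model.output)   # a sigmoid tensor of shape (None, 1)
model.summary()       # table of layers, output shapes, and parameter counts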

Next, we must compile the model with a loss function, an optimizer, and a metric. Here the loss function is binary cross-entropy, the standard choice for two-class problems. The optimizer is rmsprop, which works well on image tasks where the classification depends on small variations in the image. The code to compile is below:

model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])

Data Augmentation

There are only 17 images for validation, so how will we get more data? The answer: data augmentation. We can use data augmentation to effectively expand the training set by randomly transforming images on the fly; the validation and test images will only be rescaled.

train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)

The validation and test images get only the rescaling, with no augmentation:

test_datagen = ImageDataGenerator(rescale=1. / 255)

The following code uses flow_from_directory to apply the data generator directly to the images in the training directory:

train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
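As a quick check (an illustrative snippet, not in the original), we can inspect the label mapping and pull one augmented batch from the generator:

print(train_generator.class_indices)  # typically {'NORMAL': 0, 'PNEUMONIA': 1}

# Draw one batch of augmented images and their binary labels
x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)   # (16, 150, 150, 3) (16,)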

The same approach creates the generator for validation:

validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')

And this one for the test set:

test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
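With the generators in place, training would typically tie all of these pieces together. The following is a minimal sketch using the variables defined above and the Keras 2.x fit_generator API, rather than the author's exact training code:

# A minimal training sketch (assumes the Keras 2.x generator API)
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

# Evaluate on the held-out test set after training
test_loss, test_acc = model.evaluate_generator(test_generator)
print(test_loss, test_acc)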
