Applying Artificial Intelligence techniques in the development of a web-app for the detection of Covid-19 in X-ray images
Disclaimer
The study developed here for the automatic detection of COVID-19 in X-ray images is strictly for educational purposes. The final application is not intended to be a reliable and accurate diagnostic system for the diagnosis of COVID-19 in humans, as it has not been evaluated professionally or academically.
Introduction
Covid-19 is a pandemic disease caused by a virus (the SARS-CoV-2 coronavirus), which has already infected millions of people, causing the death of hundreds of thousands in a few months.
According to the World Health Organization (WHO), most patients with COVID-19 (about 80%) may be asymptomatic, and about 20% of cases may require hospital care because they have difficulty breathing. Of those cases, approximately 5% may need support for the treatment of respiratory failure (ventilatory support), a situation that can collapse Intensive Care facilities. Fast methods to test who carries the virus are key to combating the pandemic.
What is the coronavirus?
Coronavirus is a family of viruses that cause respiratory infections. The new coronavirus agent was discovered at the end of 2019, after cases registered in China. It causes a disease called coronavirus disease (COVID-19).
Human coronaviruses were isolated for the first time in 1937. However, it was only in 1965 that the virus was described as a coronavirus, because of its profile under the microscope, which resembles a crown. In the video below, you can see an atomic-level 3D model of the SARS-CoV-2 virus:
Why X-rays?
Recently, several promising efforts have been observed in the application of machine learning to aid the diagnosis of COVID-19 based on Computed Tomography (CT). Despite these methods’ success, the fact remains that COVID-19 is an infection that is vigorously spreading in communities of all sizes, especially the neediest.
X-ray machines are cheaper, more straightforward, and faster to operate, and are therefore more accessible than CTs to healthcare professionals working in more impoverished or more remote regions.
Objective
One of the significant challenges in combating Covid-19 is testing for the presence of the virus in people. Thus, the objective of this project is to automatically detect the virus that causes Covid-19 in patients with Pneumonia (and even in asymptomatic or otherwise healthy people), using scanned chest X-ray images. These images are pre-processed and used to train Convolutional Neural Network (CNN) models.
CNN-type networks generally need an extensive dataset to function. Still, in this project a technique known as "Transfer Learning" is applied, which is very useful when the dataset is small (as is the case for images of patients with confirmed Covid-19).
Two classification models are developed:
- Detection of Covid-19 versus patients with normal chest X-ray results
- Detection of Covid-19 versus patients with Pneumonia
As defined in the paper COVID-19 Image Data Collection, all types of Pneumonia other than the one caused by the Covid-19 virus are treated in this work simply as "Pneumonia" (and classified with the pneumo label).
The models are trained with the tools, libraries, and resources of TensorFlow 2.0 (with Keras), an open-source platform for Machine Learning or, more precisely, Deep Learning. The final models are the foundation of a web application (web-app), developed in Flask, for testing in situations close to reality.
The diagram below provides us with a basic idea of how the final application works:
From the scanned image of a chest X-ray (User_A.png), stored locally on the web-app user's computer, the application decides whether the image belongs to a person contaminated by the virus or not (Model Prediction: [POSITIVE] or [NEGATIVE]). In both situations, the application reports the accuracy of the prediction (Model Accuracy: X%). To avoid mistakes, both the name of the original file and its image are shown to the user. A new copy of the image is stored locally, with its name extended by the prediction label and the accuracy value.
The work is divided into 4 parts:
- Environment setup, data acquisition, cleaning and preparation
- Model 1 Training (Covid/Normal)
- Model 2 Training (Covid/Pneumo)
- Development and testing of a Web App for the detection of Covid-19 in X-ray images
Inspiration
The inspiration for this project was the creation of a proof of concept based on the XRayCovid-19 project developed by UFRRJ (Federal Rural University of Rio de Janeiro, Brazil). The UFRRJ XRayCovid-19 is an ongoing project that uses artificial intelligence to aid health systems in the COVID-19 diagnostic process. The tool is characterized by ease of use, efficient response time, and effective results, characteristics that I hope to extend to the web-app developed in Part 4 of this tutorial. Below is a screenshot of one of the diagnostic results (one of the Covid-19 dataset 1 images was used):
The scientific basis for the work developed by the University can be seen in the paper by Chowdhury et al. 2020, Can AI help in screening Viral and COVID-19 pneumonia?
Another exciting work that also inspired this project, in addition to being used to compare the results of the models, is Chester: A Web Delivered Locally Computed Chest X-ray Disease Prediction System, developed by researchers at the University of Montreal. Chester is a free, simple prototype that can be used by medical professionals to get a feel for how Deep Learning tools can aid in the diagnosis of chest X-rays. The system was designed to be a second opinion, in which a user can process an image to confirm or assist in a diagnosis.
The current version of Chester (2.0) was trained with more than 106 thousand images using a DenseNet-121 convolutional network. The web app does not detect Covid-19, which is one of the researchers' goals for future versions. Below is a screenshot of one of the diagnostic results (one of the Covid-19 dataset 1 images was used):
In the following link, you can access Chester or even download the app for offline use.
Thanks
This work was initially based on the excellent tutorial published by Dr. Adrian Rosebrock, which I strongly recommend reading in depth. Besides, I would like to thank Nell Trevor, who, building on Dr. Rosebrock's work, went further by providing ideas on how to test the resulting model. In the following link, Nell made available, via the PythonAnywhere.com website, a web application for real tests of Covid-19 in X-ray images: Covid-19 predictor API.
Part 1 — Environment setup and data preparation
The dataset
The first challenge in training a model to detect any type of information from images is the quantity of data (or images) to be used. In principle, the greater the number of images, the better the final model, which is a problem for this Covid-19 detection project, since there are not many publicly available images (remember that this pandemic is only a few months old). However, studies like the one by Hall et al., Finding COVID-19 from Chest X-rays using Deep Learning on a Small Dataset, show that promising results can be obtained with only a few hundred images using Transfer Learning techniques.
Two models are trained as explained in the introduction; therefore, 3 sets of data are needed:
- Set of X-ray images confirmed with Covid-19
- Set of X-ray images of regular (“normal”) patients (without disease)
- Set of X-ray images showing Pneumonia, but not caused by Covid-19
For this purpose, two data sets are downloaded:
Dataset 1: Image set with COVID-19. Joseph Paul Cohen, Paul Morrison, and Lan Dao, COVID-19 image data collection, arXiv:2003.11597, 2020.
Public and open dataset of X-ray and computed tomography images of patients positive or suspected for COVID-19 or other viral and bacterial pneumonias (MERS, SARS, and ARDS). Data are collected from public sources, as well as through indirect collection from hospitals and doctors (project approved by the University of Montreal Ethics Committee #CERSES-20-058-D). All images and data are available in the following GitHub repository.
Dataset 2: Chest X-ray images with Pneumonia and normal cases.
Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, v2.
Set of validated images (OCT and chest X-rays) classified as normal or with some type of Pneumonia, for use in Deep Learning processes. The images are divided into a training set and an independent patient test set. The data are available on the website: https://data.mendeley.com/datasets/rscbjbr9sj/2
Types of chest X-rays
In the datasets, three types of images can be found: PA, AP, and lateral (L). Lateral images are self-evident, but what is the difference between the AP and PA views? In simple words, during an X-ray procedure, when the X-ray beam passes from the posterior part of the body to the anterior, it is called a PA (Posterior-Anterior) view. In the AP view, the direction is the opposite.
Usually, an X-ray is taken in the AP view for any part of the body. An important exception is precisely the chest X-ray, where the PA view is preferred over the AP. But if the patient is very ill and cannot hold the position, an AP-type chest X-ray may be taken.
Since the vast majority of chest X-rays are PA views, this is the view chosen to train the models.
Defining the environment for training DL models
The ideal is to start with a new Python environment. To do this, using the Terminal, define a working directory (for example, X-Ray_Covid_development) and, once there, create a Python environment (for example, TF_2_Py_3_7):
mkdir X-Ray_Covid_development
cd X-Ray_Covid_development
conda create --name TF_2_Py_3_7 python=3.7 -y
conda activate TF_2_Py_3_7
Once inside the environment, install TensorFlow 2.0:
pip install --upgrade pip
pip install tensorflow
From this point, install the other libraries necessary for training the model. For example:
conda install -c anaconda numpy
conda install -c anaconda pandas
conda install -c anaconda scikit-learn
conda install -c conda-forge matplotlib
conda install -c anaconda pillow
conda install -c conda-forge opencv
conda install -c conda-forge imutils
Create the necessary subdirectories:
notebooks
10_dataset
 |_ covid  [here goes the dataset for training model 1]
 |_ normal [here goes the dataset for training model 1]
20_dataset
 |_ covid  [here goes the dataset for training model 2]
 |_ pneumo [here goes the dataset for training model 2]
input
 |_ 10_Covid_Images
 |   |_ metadata.csv [goes here]
 |   |_ images [Covid-19 images go here]
 |_ 20_Chest_Xray
     |_ test
     |   |_ NORMAL    [images go here]
     |   |_ PNEUMONIA [images go here]
     |_ train
         |_ NORMAL    [images go here]
         |_ PNEUMONIA [images go here]
model
dataset_validation
 |_ covid_validation               [images go here]
 |_ non_covid_pneumonia_validation [images go here]
 |_ normal_validation              [images go here]
Data Download
Download dataset 1 (Covid-19), and save the file metadata.csv under /input/10_Covid_Images/ and the images under /input/10_Covid_Images/images/.
Download dataset 2 (Pneumo and Normal), and save the images under /input/20_Chest_Xray/ (keep the original test and train structure).
Part 2 — Model 1 — Covid/Normal
Data Preparation
- Download the Notebook: 10_Xray_Normal_Covid19_Model_1_Training_Tests.ipynb from my GitHub and store it in the subdirectory /notebooks.
- Once inside the Notebook, Import the libraries and run the support functions.
Building the Covid label dataset
From the input dataset (/input/10_Covid_Images/), we create the dataset used for training model 1, which classifies images with the covid and normal labels.
input_dataset_path = '../input/10_Covid_Images'
The metadata.csv file provides information about the images stored in the /images/ folder.
csvPath = os.path.sep.join([input_dataset_path, "metadata.csv"])
df = pd.read_csv(csvPath)
df.shape
The metadata.csv file has 354 rows and 28 columns, meaning the subdirectory /images/ contains 354 images. Let's analyze some of its columns to learn more about these images.
According to df.modality, there are 310 X-ray images and 44 CT (tomography) images. The CT images are discarded, and the findings column shows that the 310 X-ray images are subdivided into:
COVID-19          235
Streptococcus      17
SARS               16
Pneumocystis       15
COVID-19, ARDS     12
E.Coli              4
ARDS                4
No Finding          2
Chlamydophila       2
Legionella          2
Klebsiella          1
Looking at the 235 confirmed images for COVID-19 in terms of visualization, we have:
PA               142
AP                39
AP Supine         33
L                 20
AP semi erect      1
As commented in the introduction, only the 142 PA-type (Posterior-Anterior) images are used for model training, as they are the most common view found in chest radiographs (final dataframe: xray_cv).
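For reference, a minimal sketch of this filtering step (the column names finding and view are assumptions based on the public metadata.csv; adjust them if the file differs):

# Sketch of the filtering described above (column names assumed)
xray = df[df.modality == 'X-ray']                 # drop the 44 CT images
covid = xray[xray.finding == 'COVID-19']          # the 235 confirmed COVID-19 X-rays
xray_cv = covid[covid.view == 'PA'].reset_index(drop=True)
print(xray_cv.shape)                              # expected: 142 rows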
The xray_cv.patientid column shows that the 142 images belong to 96 unique patients, meaning that in some cases the same patient had more than one radiograph taken. Since all images are used for training (we are interested in the content of the image), this information is not taken into account.
From xray_cv.date, it is observed that the 8 most recent images were taken in March 2020. These images are set aside in a list, removed from model training, and used later for validation of the final model.
imgs_march = [
    '2966893D-5DDF-4B68-9E2B-4979D5956C8E.jpeg',
    '6C94A287-C059-46A0-8600-AFB95F4727B7.jpeg',
    'F2DE909F-E19C-4900-92F5-8F435B031AC6.jpeg',
    'F4341CE7-73C9-45C6-99C8-8567A5484B63.jpeg',
    'E63574A7-4188-4C8D-8D17-9D67A18A1AFA.jpeg',
    '31BA3780-2323-493F-8AED-62081B9C383B.jpeg',
    '7C69C012-7479-493F-8722-ABC29C60A2DD.jpeg',
    'B2D20576-00B7-4519-A415-72DE29C90C34.jpeg'
]
The next step is to build the dataframe pointing to the training dataset (xray_cv_train), which should reference 134 images (all input Covid images, except the ones separated for later validation):
xray_cv_train = xray_cv[~xray_cv.filename.isin(imgs_march)]
xray_cv_train.reset_index(drop=True, inplace=True)
while the final validation dataframe (xray_cv_val) has 8 images:
xray_cv_val = xray_cv[xray_cv.filename.isin(imgs_march)]
xray_cv_val.reset_index(drop=True, inplace=True)
Creating the folders for COVID training images and later validation
It is important to remember that in the previous step, we only created dataframes with information taken from the original metadata.csv file. We know which images we want stored in the final folders, and now we need to "physically" copy the actual images (in their digitized formats) into the correct subdirectories (folders).
For this, we will use the load_image_folder() support function, which, given a metadata file, copies the images referenced in it from one folder to another:
def load_image_folder(df_metadata, col_img_name,
                      input_dataset_path, output_dataset_path):
    img_number = 0
    # loop over the rows of the COVID-19 data frame
    for (i, row) in df_metadata.iterrows():
        imagePath = os.path.sep.join([input_dataset_path, row[col_img_name]])
        if not os.path.exists(imagePath):
            print('image not found')
            continue
        filename = row[col_img_name].split(os.path.sep)[-1]
        outputPath = os.path.sep.join([f"{output_dataset_path}", filename])
        shutil.copy2(imagePath, outputPath)
        img_number += 1
    print('{} selected Images on folder {}:'.format(img_number, output_dataset_path))
With the instructions below, the 134 selected images will be copied to the folder ../10_dataset/covid/.
input_dataset_path = '../input/10_Covid_Images/images'
output_dataset_path = '../10_dataset/covid'
dataset = xray_cv_train
col_img_name = 'filename'

load_image_folder(dataset, col_img_name,
                  input_dataset_path, output_dataset_path)
Creating folders for normal images (validation and training)
In the case of dataset 2 (normal and pneumonia images), no metadata file is provided. Thus, you only have to copy the images from the input folder to the destination folder. For this, we will use the load_image_folder_direct() support function, which copies a number of images (selected randomly) from one folder to another:
def load_image_folder_direct(input_dataset_path, output_dataset_path,
                             img_num_select):
    pathlist = Path(input_dataset_path).glob('**/*.*')
    nof_samples = img_num_select
    rc = []
    # reservoir sampling: keep a uniformly random subset of nof_samples paths
    for k, path in enumerate(pathlist):
        if k < nof_samples:
            rc.append(str(path))  # because path is not a string
        else:
            i = random.randint(0, k)
            if i < nof_samples:
                rc[i] = str(path)
    # copy the sampled images only after the whole folder has been scanned
    for path in rc:
        shutil.copy2(path, output_dataset_path)
    print('{} selected Images on folder {}:'.format(len(rc), output_dataset_path))
Repeating the same procedure for the images in the folder ../input/20_Chest_Xray/train/NORMAL, we randomly copy, for training, the same number of images used for the Covid images (len(xray_cv_train), or 134 images). With this, the dataset for training the model is balanced.
input_dataset_path = '../input/20_Chest_Xray/train/NORMAL'
output_dataset_path = '../10_dataset/normal'
img_num_select = len(xray_cv_train)

load_image_folder_direct(input_dataset_path, output_dataset_path,
                         img_num_select)
In the same way, we separate 20 random images for later use in model validation.
input_dataset_path = '../input/20_Chest_Xray/train/NORMAL'
output_dataset_path = '../dataset_validation/normal_validation'
img_num_select = 20

load_image_folder_direct(input_dataset_path, output_dataset_path,
                         img_num_select)
Although we are not training the model with images showing symptoms of Pneumonia (not caused by Covid-19), it is interesting to see how the final model behaves with them. Thus, we also separate 20 of these images for later validation.
input_dataset_path = '../input/20_Chest_Xray/train/PNEUMONIA'
output_dataset_path = '../dataset_validation/non_covid_pneumonia_validation'
img_num_select = 20

load_image_folder_direct(input_dataset_path, output_dataset_path,
                         img_num_select)
Below, the images show how the folders should be configured at the end of this step (on my Mac, anyway). The numbers marked in red show the respective quantities of X-ray images contained in the folders.
Plotting datasets for quick visual verification
As the number of images in the folders is not large, it is possible to check them visually. For this, the plots_from_files() support function is used:
def plots_from_files(imspaths, figsize=(10, 5), rows=1,
                     titles=None, maintitle=None):
    """Plot the images in a grid"""
    f = plt.figure(figsize=figsize)
    if maintitle is not None:
        plt.suptitle(maintitle, fontsize=10)
    for i in range(len(imspaths)):
        sp = f.add_subplot(rows, ceildiv(len(imspaths), rows), i + 1)
        sp.axis('Off')
        if titles is not None:
            sp.set_title(titles[i], fontsize=16)
        img = plt.imread(imspaths[i])
        plt.imshow(img)

def ceildiv(a, b):
    return -(-a // b)
Then, the path of the dataset to be used in training (dataset_path) and the lists with the names of the images to be viewed are defined:
dataset_path = '../10_dataset'

normal_images = list(paths.list_images(f"{dataset_path}/normal"))
covid_images = list(paths.list_images(f"{dataset_path}/covid"))
With this, calling the support functions for visualization, the images are shown:
plots_from_files(covid_images, rows=10, maintitle="Covid-19 X-ray images")
plots_from_files(normal_images, rows=10, maintitle="Normal X-ray images")
Generally speaking, the images look good.
Choice of a pre-trained Convolutional Neural Network model
The training of the model is carried out with the images previously defined, but over a model already pre-trained from the TF / Keras library, applying the technique known as “Transfer Learning”.
Transfer Learning is a machine learning method in which a model developed for one task is reused as a starting point for a model in a second task. For more information, see Jason Brownlee’s excellent article, A Gentle Introduction to Transfer Learning for Deep Learning
Keras Applications is the module of the Keras Deep Learning library that provides model definitions and pre-trained weights for several popular architectures, such as VGG16, ResNet50v2, ResNet101v2, Xception, and MobileNet. The following link shows these options: Keras Applications.
The pre-trained model used here is the VGG16, developed by the Visual Geometry Group (VGG) of Oxford and described in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". Besides being very popular for image classification models thanks to its publicly available weights, this was also the model suggested by Dr. Adrian Rosebrock in his tutorial.
The ideal would be to benchmark several models (for example, ResNet50v2, ResNet101v2) or even create a specific one (like the model suggested in the paper by Zhang et al., COVID-19 Screening on Chest X-ray Images Using Deep Learning based Anomaly Detection). But as the final objective of this work is just a proof of concept, we only explore the VGG16.
The VGG16 is a convolutional neural network (CNN) architecture that, despite having been developed in 2014, is still considered today as one of the best architectures to work with image classification.
One of the characteristics of the VGG16 architecture is that, instead of having a large number of hyperparameters, it concentrates on convolution layers using 3x3 filters (kernels) with a stride of 1, followed by 2x2 max-pooling layers. This pattern of convolution and max-pooling layers is repeated consistently throughout the architecture. At the end, the architecture has 2 FC (fully connected) layers, followed by a softmax activation for the output.
The 16 in VGG16 refers to the architecture's 16 layers with weights (w). This network is extensive; using all the original 16 layers, it has almost 140 million trained parameters. In our case, where the last two layers (FC1 and FC2) are trained locally, the total number of parameters is just over 15 million, with around 590,000 parameters trained locally (while the rest are "frozen").
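As a quick sanity check on those numbers, the base architecture can be inspected directly; a minimal sketch (weights=None skips downloading the ImageNet weights, and summary() prints the parameter counts mentioned above):

from tensorflow.keras.applications import VGG16

# Full VGG16 with its original classification top: ~138M parameters
VGG16(weights=None, include_top=True).summary()

# Convolutional base only (what is reused below): ~14.7M parameters
VGG16(weights=None, include_top=False, input_shape=(224, 224, 3)).summary()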
The first point to note is that the first layer of the VGG16 architecture works with images of 224x224x3 pixels, so we must ensure that the X-ray images to be trained also have these dimensions, since they feed the "first layer" of the convolutional network. Thus, when loading the model with the original weights (weights="imagenet"), we should also leave out the top layers of the model (include_top=False), which are replaced by our own layers (headModel).
baseModel = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))
Next, we must define the hyperparameters that are used for the training (in the comments below some possible values to be tested to improve the “accuracy” of the model):
INIT_LR = 1e-3        # [0.0001]
EPOCHS = 10           # [20]
BS = 8                # [16, 32]
NODES_DENSE0 = 64     # [128]
DROPOUT = 0.5         # [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
MAXPOOL_SIZE = (4, 4) # [(2,2), (3,3)]
ROTATION_DEG = 15     # [10]
SPLIT = 0.2           # [0.1]
and then build our model, which is added to the base model:
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=MAXPOOL_SIZE)(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(NODES_DENSE0, activation="relu")(headModel)
headModel = Dropout(DROPOUT)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
The headModel model is placed over the base model, becoming the part of the model that is actually trained (determining the optimal weights).
model = Model(inputs=baseModel.input, outputs=headModel)
It is important to remember that a pre-trained CNN model such as the VGG16 was trained with thousands of images to classify generic categories (such as dogs, cats, cars, and people). What we need to do now is customize it for our needs (classifying X-ray images). Theoretically, the first layers of the model simplify parts of the image, identifying shapes within them. These initial features are very generic (such as lines, circles, and squares), so we don't want to train them again. We want to train only the last layers of the network, together with the newly added layers.
The following loop, performed over all layers in the base model, freezes them so that they are not updated during the first training process.
for layer in baseModel.layers:
    layer.trainable = False
And at this point, the model is ready to be trained, but first, we must prepare the data (images) for the model's training.
Data pre-processing
Let’s start by creating a list with the names (and paths) where the images are stored:
imagePaths = list(paths.list_images(dataset_path))
Then for each of the images in the list, we must:
- Extract the image label (in this case, covid or normal )
- Convert the image channels from BGR (CV2 default) to RGB
- Resize images to 224 x 224 (default for VGG16)
data = []
labels = []

for imagePath in imagePaths:
    label = imagePath.split(os.path.sep)[-2]
    image = cv2.imread(imagePath)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    data.append(image)
    labels.append(label)
Data and labels are converted to arrays, and each pixel's intensity, which ranges from 0 to 255, is scaled to the range 0 to 1, facilitating training.
data = np.array(data) / 255.0 labels = np.array(labels)
The labels will be encoded numerically using the one-hot encoding technique.
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
At this point, the training dataset is divided into training and testing subsets (80% for training and 20% for testing):
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=SPLIT, stratify=labels, random_state=42)
And last but not least, we should apply data augmentation techniques.
Augmentation
As suggested by Chowdhury et al. in their paper, three augmentation strategies (rotation, scaling, and translation) can be used to generate additional training images for COVID-19, helping to prevent overfitting.
With the TF/Keras image pre-processing library (ImageDataGenerator), several image parameters can be changed, for example:
trainAug = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
Initially, only a maximum image rotation of 15 degrees is applied, to evaluate the results. It is observed that the X-ray images, in general, are well aligned, with little variation in rotation.
trainAug = ImageDataGenerator(rotation_range=ROTATION_DEG, fill_mode="nearest")
At this point, both the model and the data are defined and prepared for compilation and training.
Model Building and Training
The compilation performs the actual construction of the model that we previously defined, adding some extra elements, such as the loss function, the optimizer, and the metrics.
For network training, we use a loss function that calculates the difference between the values predicted by the network and the actual values of the training data. The loss values, together with an optimizer algorithm (such as Adam), guide the adjustments made to the weights within the network. These hyperparameters help the network training converge, bringing the loss values as close to zero as possible.
We also specify the learning rate of the optimizer (lr). In this case, the lr is defined as 1e-3 (0.001). If, during training, an increase in "bouncing" is noticed, meaning that the model cannot converge, we should decrease the learning rate so that we can approach the global minimum.
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
              metrics=["accuracy"])
Let’s train the model:
H = model.fit(
    trainAug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)
The result already looks quite interesting, reaching an accuracy on the validation data of 92%! Let's plot the accuracy charts:
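The charts are generated from the History object returned by model.fit(); a minimal sketch, assuming the TF 2.x history keys (accuracy/val_accuracy):

# Sketch: plot the training curves stored in H (TF 2.x key names assumed)
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.show()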
Evaluate the trained model:
And look at the Confusion Matrix:
[[27  0]
 [ 4 23]]

acc: 0.9259
sensitivity: 1.0000
specificity: 0.8519
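For reference, a minimal sketch of how these metrics can be computed (it assumes covid is the first class produced by the LabelBinarizer, which holds here since the labels sort alphabetically):

from sklearn.metrics import classification_report, confusion_matrix

# Predict on the test split and reduce the probabilities to class indices
predIdxs = np.argmax(model.predict(testX, batch_size=BS), axis=1)
print(classification_report(testY.argmax(axis=1), predIdxs,
                            target_names=lb.classes_))

# Confusion matrix and derived metrics (covid assumed as the first class)
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
acc = (cm[0, 0] + cm[1, 1]) / cm.sum()
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
print(cm)
print("acc: {:.4f}".format(acc))
print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))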
From the model trained with the hyperparameters initially chosen, we obtained:
- 100% sensitivity, which means that of the patients who have COVID-19 (i.e., True Positives), we could accurately identify them as "positive for COVID-19" 100% of the time.
- 85% specificity, which means that of the patients who do not have COVID-19 (i.e., True Negatives), we could accurately identify them as "COVID-19 negative" only 85% of the time.
The result was not satisfactory, as 15% of patients who do not have Covid would be misdiagnosed. Let's try to fine-tune the model, changing some of the hyperparameters:
INIT_LR = 0.0001      # was 1e-3
EPOCHS = 20           # was 10
BS = 16               # was 8
NODES_DENSE0 = 128    # was 64
DROPOUT = 0.5
MAXPOOL_SIZE = (2, 2) # was (4, 4)
ROTATION_DEG = 15
SPLIT = 0.2
As a result we have:
              precision    recall  f1-score   support

       covid       0.93      1.00      0.96        27
      normal       1.00      0.93      0.96        27

    accuracy                           0.96        54
   macro avg       0.97      0.96      0.96        54
weighted avg       0.97      0.96      0.96        54
And the Confusion Matrix:
[[27  0]
 [ 2 25]]

acc: 0.9630
sensitivity: 1.0000
specificity: 0.9259
A much better result! Now, with 93% specificity, of the patients who do not have COVID-19 (that is, True Negatives), we could accurately identify them as "COVID-19 negative" 93% of the time, while still identifying 100% of the True Positives.
For now, the result looks promising. Let's save the model, testing it on the images that were left out of training for validation (the 8 Covid-19 images from March 2020 and the 20 chosen at random from the input dataset).
model.save("../model/covid_normal_model.h5")
Testing the model in real images (validation)
First, let’s retrieve the model and show the final architecture to check that everything is in order:
new_model = load_model('../model/covid_normal_model.h5')

# Show the model architecture
new_model.summary()
The model looks good, matching the 16-layer structure of VGG16. Note that there are 590,210 trainable parameters, which is the sum of the last two layers (dense_2 and dense_3) added on top of the pre-trained model and its 14.7M frozen parameters.
Let’s validate the model loaded in the test dataset:
[INFO] evaluating network...
              precision    recall  f1-score   support

       covid       0.93      1.00      0.96        27
      normal       1.00      0.93      0.96        27

    accuracy                           0.96        54
   macro avg       0.97      0.96      0.96        54
weighted avg       0.97      0.96      0.96        54
Perfect, we arrived at the same result as before, meaning the trained model was saved and loaded correctly. Now let's validate the model with the 8 Covid images saved previously. For this, we use one more support function, developed for individual image tests: test_rx_image_for_Covid19().
def test_rx_image_for_Covid19(imagePath):
    img = cv2.imread(imagePath)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = np.expand_dims(img, axis=0)
    img = np.array(img) / 255.0

    pred = new_model.predict(img)
    pred_neg = round(pred[0][1]*100)
    pred_pos = round(pred[0][0]*100)

    print('\n X-Ray Covid-19 Detection using AI - MJRovai')
    print(' [WARNING] - Only for didactic purposes')
    if np.argmax(pred, axis=1)[0] == 1:
        plt.title('\nPrediction: [NEGATIVE] with prob: {}% \nNo Covid-19\n'.format(
            pred_neg), fontsize=12)
    else:
        plt.title('\nPrediction: [POSITIVE] with prob: {}% \nPneumonia by Covid-19 Detected\n'.format(
            pred_pos), fontsize=12)

    img_out = plt.imread(imagePath)
    plt.imshow(img_out)
    plt.savefig('../Image_Prediction/Image_Prediction.png')
    return pred_pos
On the Notebook, this function would show the following result:
By changing the imagePath value for the remaining 7 images, we obtain the following result:
All images returned POSITIVE, confirming the 100% sensitivity.
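Rather than editing imagePath by hand, a simple loop can run all eight validation images at once; a sketch, assuming the covid_val_images list built below from the covid_validation folder:

# Sketch: run the individual test over all Covid-19 validation images
for imagePath in covid_val_images:
    score = test_rx_image_for_Covid19(imagePath)
    print('{}: {}%'.format(imagePath.split(os.path.sep)[-1], score))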
Let's now test the 20 images separated for validation and labeled as NORMAL. The first one in the Notebook should look like this:
Testing one by one, it is possible to confirm the predictions, but since we have many more images, let's use another function to test a group of images all at once: test_rx_image_for_Covid19_batch(img_lst).
Testing images in batch
Let’s create lists of the images contained in the validation folders:
validation_path = '../dataset_validation'

normal_val_images = list(paths.list_images(
    f"{validation_path}/normal_validation"))
non_covid_pneumonia_validation_images = list(paths.list_images(
    f"{validation_path}/non_covid_pneumonia_validation"))
covid_val_images = list(paths.list_images(
    f"{validation_path}/covid_validation"))
The test_rx_image_for_Covid19_batch(img_lst) function can be seen below:
def test_rx_image_for_Covid19_batch(img_lst):
    neg_cnt = 0
    pos_cnt = 0
    predictions_score = []
    for img in img_lst:
        # test_rx_image_for_Covid19_2() is a silent variant of the individual
        # test function, defined in the notebook, that also updates the counters
        pred, neg_cnt, pos_cnt = test_rx_image_for_Covid19_2(img, neg_cnt, pos_cnt)
        predictions_score.append(pred)
    print('{} positive detected in a total of {} images'.format(
        pos_cnt, (pos_cnt + neg_cnt)))
    return predictions_score, neg_cnt, pos_cnt
Applying the function to the 20 images that we had previously separated:
img_lst = normal_val_images
normal_predictions_score, normal_neg_cnt, normal_pos_cnt = \
    test_rx_image_for_Covid19_batch(img_lst)
normal_predictions_score
We observe that all 20 were diagnosed as negative, with the following scores (remembering that the model returns values close to "1" for "positive"):
0.25851375, 0.025379542, 0.005824779, 0.0047603976, 0.042225637,
0.025087152, 0.035508618, 0.009078974, 0.014746706, 0.06489486,
0.003134642, 0.004970203, 0.15801577, 0.006775451, 0.0032735346,
0.007105667, 0.001369465, 0.005155371, 0.029973848, 0.014993184
In only 2 cases was the confidence of the negative prediction below 90% (scores of 0.26 and 0.16).
Since we now have a function to apply the model in batch, remember that the input dataset /input/20_Chest_Xray/ has two groups, /train and /test. Only part of the images in /train was used for training, and none of the /test images was ever seen by the model:
input
 |_ 10_Covid_Images
 |   |_ metadata.csv
 |   |_ images [used to train model 1]
 |_ 20_Chest_Xray
     |_ test
     |   |_ NORMAL
     |   |_ PNEUMONIA
     |_ train
         |_ NORMAL [used to train model 1]
         |_ PNEUMONIA
We can then take advantage of this and test all the new images in this folder. First, we create the image lists:
validation_path = '../input/20_Chest_Xray/test'

normal_test_val_images = list(paths.list_images(f"{validation_path}/NORMAL"))
print("Normal Xray Images: ", len(normal_test_val_images))

pneumo_test_val_images = list(paths.list_images(f"{validation_path}/PNEUMONIA"))
print("Pneumo Xray Images: ", len(pneumo_test_val_images))
There are 234 previously unseen images diagnosed as normal (plus 390 more with Pneumonia not caused by Covid-19). Applying the batch test function, 24 out of the 234 images came back as false positives (approximately 10%). Let's see how the model output values were distributed, remembering that the values returned by the function are calculated as:
pred = new_model.predict(image)
pred_pos = round(pred[0][0] * 100)
The average of the prediction values is 0.15, heavily concentrated near zero (the median is only 0.043). Interestingly, most of the false positives are close to 0.5, with a few outliers above 0.6.
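A minimal sketch of how this batch run and its summary statistics can be reproduced (it assumes the batch helper returns the raw 0-to-1 scores, as the list shown earlier suggests):

# Sketch: score the unseen NORMAL test images and summarize the distribution
scores, neg_cnt, pos_cnt = test_rx_image_for_Covid19_batch(normal_test_val_images)

scores = pd.Series(scores)
print("mean: {:.3f}  median: {:.3f}".format(scores.mean(), scores.median()))
scores.hist(bins=20)  # most of the mass near zero; false positives near 0.5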
In addition to improving the model, it would also be worth studying the images that generated the false positives, as they may reflect a technical characteristic of the way the data were captured.
Testing with Pneumonia images not caused by Covid
Since the input dataset also has X-ray images of patients with Pneumonia, but not caused by Covid, let’s apply model 1 (Covid / Normal) to see what the result is:
The result was tremendously bad: out of 390 images, 185 came back as false positives. And observing the distribution of the results, there is a peak close to 80%; that is, the model was confidently wrong!
This result is not technically surprising, as the model was never trained with images of patients with Pneumonia.
Anyway, this turns out to be a big problem, since I imagine a specialist could differentiate with the naked eye whether a patient has Pneumonia or not. Still, it would perhaps be harder to differentiate whether the Pneumonia was caused by the Covid-19 virus (SARS-CoV-2), some other virus, or even a bacterium.
The model would be more useful differentiating patients with Pneumonia caused by Covid-19 from patients whose Pneumonia was caused by other viruses or bacteria. For that, another model is trained, now with images of patients who contracted Covid-19 and patients who contracted Pneumonia not caused by the Covid-19 virus.
Part 3 — Model 2 — Covid/Pneumo
Data Preparation
- Download the Notebook: 20_Xray_Pneumo_Covid19_Model_2_Training_Tests.ipynb from my GitHub and store it in the subdirectory /notebooks.
- Import the libraries that are used and run the support functions.
The Covid image dataset used for model 2 is the same one used for model 1; only now it is stored in a different folder.
dataset_path = '../20_dataset'
The Pneumonia images will be copied from the folder /input/20_Chest_Xray/train/PNEUMONIA/ and stored in /20_dataset/pneumo/. The function used is the same as before:
input_dataset_path = '../input/20_Chest_Xray/train/PNEUMONIA'
output_dataset_path = '../20_dataset/pneumo'
img_num_select = len(xray_cv_train)  # same number of samples as the Covid data

load_image_folder_direct(input_dataset_path, output_dataset_path,
                         img_num_select)
With this, we call the support functions for visualization, inspecting the result obtained:
pneumo_images = list(paths.list_images(f"{dataset_path}/pneumo"))
covid_images = list(paths.list_images(f"{dataset_path}/covid"))

plots_from_files(covid_images, rows=10, maintitle="Covid-19 X-ray images")
plots_from_files(pneumo_images, rows=10, maintitle="Pneumonia X-ray images")
Generally speaking, the images look good.
Choice of pre-trained CNN model and its hyperparameters
The pre-trained model used is the VGG16, the same one used for the model 1 training:
baseModel = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))
Next, we must define the hyperparameters used for training. We start with the same ones used after the final tuning of model 1:
INIT_LR = 0.0001
EPOCHS = 20
BS = 16
NODES_DENSE0 = 128
DROPOUT = 0.5
MAXPOOL_SIZE = (2, 2)
ROTATION_DEG = 15
SPLIT = 0.2
and then, build our model that will be added to the base model:
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=MAXPOOL_SIZE)(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(NODES_DENSE0, activation="relu")(headModel)
headModel = Dropout(DROPOUT)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
The headModel model is placed on top of the base model, becoming the real model used for training.
model = Model(inputs=baseModel.input, outputs=headModel)
The following loop, performed over all layers in the base model, will “freeze” them so that they are not updated during the first training process.
for layer in baseModel.layers:
    layer.trainable = False
At this point, the model is ready to be trained, but we should first prepare the data (images) for the model.
Data Pre-processing
Let’s start by creating a list with the names (and paths) where the images are stored and perform the same pre-processing as model 1:
imagePaths = list(paths.list_images(dataset_path))

data = []
labels = []

for imagePath in imagePaths:
    label = imagePath.split(os.path.sep)[-2]
    image = cv2.imread(imagePath)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    data.append(image)
    labels.append(label)

data = np.array(data) / 255.0
labels = np.array(labels)
The labels are encoded numerically using the one-hot encoding technique.
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
At this point, we will divide the training dataset into training and testing (80% for train and 20% for test):
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=SPLIT, stratify=labels, random_state=42)
And last but not least, we will apply the techniques of data augmentation.
trainAug = ImageDataGenerator(rotation_range=ROTATION_DEG, fill_mode="nearest")
At this point, both the model and the data are defined and prepared for compilation and training.
Compilation and Training of model 2
Compilation:
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
              metrics=["accuracy"])
Training:
H = model.fit(
    trainAug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)
With 20 epochs and the initial parameters, the result looks very interesting, reaching an accuracy on the validation data of 100%! Let's plot the accuracy charts, evaluate the trained model, and look at the confusion matrix:
              precision    recall  f1-score   support

       covid       0.96      1.00      0.98        27
      pneumo       1.00      0.96      0.98        27

    accuracy                           0.98        54
   macro avg       0.98      0.98      0.98        54
weighted avg       0.98      0.98      0.98        54
Confusion Matrix:
[[27  0]
 [ 1 26]]

acc: 0.9815
sensitivity: 1.0000
specificity: 0.9630
With the model trained (with the hyperparameters initially chosen), we obtained:
- 100% sensitivity, which means that of the patients who have COVID-19 (i.e., True Positives), we could accurately identify them as "positive for COVID-19" 100% of the time.
- 96% specificity, which means that of the patients who do not have COVID-19 (i.e., True Negatives), we could accurately identify them as "COVID-19 negative" 96% of the time.
The result is quite satisfactory, as only 4% of patients who do not have Covid would be misdiagnosed. But since in this case the correct classification between patients with Pneumonia and patients with Covid-19 is what matters most, at least a few more adjustments to the hyperparameters are worth trying, training the model again.
The first thing I tried was to lower the initial lr a little, and it was a disaster. I returned to the original value.
I also reduced the data split, slightly increasing the number of Covid images in training, and changed the maximum rotation angle to 10 degrees, as suggested in the papers related to the original dataset:
INIT_LR = 0.0001
EPOCHS = 20
BS = 16
NODES_DENSE0 = 128
DROPOUT = 0.5
MAXPOOL_SIZE = (2, 2)
ROTATION_DEG = 10
SPLIT = 0.1
As a result, we have:
              precision    recall  f1-score   support

       covid       1.00      1.00      1.00        13
      pneumo       1.00      1.00      1.00        14

    accuracy                           1.00        27
   macro avg       1.00      1.00      1.00        27
weighted avg       1.00      1.00      1.00        27
And the Confusion Matrix:
[[13  0]
 [ 0 14]]

acc: 1.0000
sensitivity: 1.0000
specificity: 1.0000
The result looks better than before, but we used very little test data! Let's save the model and test it with batches of a larger number of images, as before.
model.save("../model/covid_pneumo_model.h5")
There are 390 images labeled as Pneumonia not caused by Covid-19. Applying the batch testing function, only 3 of the 390 images came back as false positives (approximately 0.8%). Also, the average prediction value is 0.04, heavily concentrated near zero (the median is only 0.02).
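A sketch of that batch run, assuming the same batch helper from the model 1 notebook is available and wired to the model 2 weights:

# Sketch: evaluate model 2 on the unseen PNEUMONIA test images
covid_pneumo_model = load_model('../model/covid_pneumo_model.h5')

validation_path = '../input/20_Chest_Xray/test'
pneumo_test_val_images = list(paths.list_images(f"{validation_path}/PNEUMONIA"))

scores, neg_cnt, pos_cnt = test_rx_image_for_Covid19_batch(pneumo_test_val_images)
print('{} false positives out of {}'.format(pos_cnt, len(pneumo_test_val_images)))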
The overall result is even better than what was observed with the previous model. Interestingly, almost all results are within the first 3 quartiles, with very few outliers having more than 20% error.
In this case, it would also be worthwhile to study the images that generated the false positives (only 3), as they too may reflect a technical characteristic of the way the data were captured.
Testing with images of patients considered normal (healthy)
Since the input dataset also has X-ray images of normal patients (never used in training), let's apply model 2 (Covid/Pneumo) to see what the result is:
In this case, the result was not as bad as seen in the model 1 tests, as out of 234 images, 45 presented false positives (19%).
Well, the ideal would be to use the correct model for each case, but if only one should be used, model 2 is the right choice.
Note: In a last attempt at alternatives, doing a benchmark, I tried to vary the augmentation parameters, as suggested by Chowdhury et al., and to my surprise, the results were no better (they are at the end of the Notebook).
Part 4 — Web App for Covid-19 detection in X-ray images
Testing the Python Stand Alone script
For the development of the web-app, we use Flask, a web micro-framework written in Python. It is classified as a micro-framework because it does not require particular tools or libraries to function.
Also, we need only a few libraries and the functions related to testing an image individually. So, let's initially work on a "clean" notebook, where tests are performed using model 2, already trained and saved.
- Load from my GitHub, the notebook: 30_AI_Xray_Covid19_Pneumo_Detection_Application.ipynb
- Now import only the libraries needed to test the model created in the previous Notebook.
import numpy as np
import cv2
from tensorflow.keras.models import load_model
- Then execute the support function for loading and testing the image:
def test_rx_image_for_Covid19_2(model, imagePath):
    img = cv2.imread(imagePath)
    img_out = img
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = np.expand_dims(img, axis=0)
    img = np.array(img) / 255.0

    pred = model.predict(img)
    pred_neg = round(pred[0][1]*100)
    pred_pos = round(pred[0][0]*100)

    if np.argmax(pred, axis=1)[0] == 1:
        prediction = 'NEGATIVE'
        prob = pred_neg
    else:
        prediction = 'POSITIVE'
        prob = pred_pos

    cv2.imwrite('../Image_Prediction/Image_Prediction.png', img_out)
    return prediction, prob
- Load the trained model:
covid_pneumo_model = load_model('../model/covid_pneumo_model.h5')
- Then, load some images from the validation subdirectory and confirm that everything is OK:
imagePath = '../dataset_validation/covid_validation/6C94A287-C059-46A0-8600-AFB95F4727B7.jpeg'
test_rx_image_for_Covid19_2(covid_pneumo_model, imagePath)
The result should be: (‘POSITIVE’, 96.0)
imagePath = '../dataset_validation/normal_validation/IM-0177-0001.jpeg'
test_rx_image_for_Covid19_2(covid_pneumo_model, imagePath)
The result should be: (‘NEGATIVE’, 99.0)
imagePath = '../dataset_validation/non_covid_pneumonia_validation/person63_bacteria_306.jpeg'
test_rx_image_for_Covid19_2(covid_pneumo_model, imagePath)
The result should be: (‘NEGATIVE’, 98.0)
So far, all development was done in a Jupyter Notebook. Now we should do a final test running the code as a Python script in the development directory created initially, for example, with the name covidXrayApp_test.py.
# Import Libraries and Setup
import numpy as np
import cv2
from tensorflow.keras.models import load_model

# Turn off info and warning messages
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Support Functions
def test_rx_image_for_Covid19_2(model, imagePath):
    img = cv2.imread(imagePath)
    img_out = img
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = np.expand_dims(img, axis=0)
    img = np.array(img) / 255.0

    pred = model.predict(img)
    pred_neg = round(pred[0][1]*100)
    pred_pos = round(pred[0][0]*100)

    if np.argmax(pred, axis=1)[0] == 1:
        prediction = 'NEGATIVE'
        prob = pred_neg
    else:
        prediction = 'POSITIVE'
        prob = pred_pos

    cv2.imwrite('./Image_Prediction/Image_Prediction.png', img_out)
    return prediction, prob

# Load model
covid_pneumo_model = load_model('./model/covid_pneumo_model.h5')

# Execute test
imagePath = './dataset_validation/covid_validation/6C94A287-C059-46A0-8600-AFB95F4727B7.jpeg'
prediction, prob = test_rx_image_for_Covid19_2(covid_pneumo_model, imagePath)
print(prediction, prob)
Let’s test the script directly on the Terminal:
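From the working directory, the call and its expected output look like this (the probability should match the notebook result shown earlier):

$ python covidXrayApp_test.py
POSITIVE 96.0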
Perfect, everything works "stand-alone", outside the Notebook.
Creating an environment to run the app in Flask
The first step is to start with a new Python environment. For this, using the Terminal, define a working directory (for example, covid19XrayWebApp) and, once there, create a Python environment (for example, covid19xraywebapp):
mkdir covid19XrayWebApp
cd covid19XrayWebApp
conda create --name covid19xraywebapp python=3.7.6 -y
conda activate covid19xraywebapp
Once inside the environment, install Flask and all the necessary libraries to run the application:
conda install -c anaconda flask
conda install -c anaconda requests
conda install -c anaconda numpy
conda install -c conda-forge matplotlib
conda install -c anaconda pillow
conda install -c conda-forge opencv
pip install --upgrade pip
pip install tensorflow
pip install gunicorn
Create the necessary sub-directories:
covid19XrayWebApp
 |_ app.py    [the Flask application]
 |_ model     [the trained and saved model goes here]
 |_ templates [the .html file goes here]
 |_ static    [the .css file and static images go here]
     |_ xray_analysis [the output image after analysis]
     |_ xray_img      [the input x-ray image]
Copy the files from my GitHub and store them in the newly created directories like this:
- The Python application responsible for the "backend" execution on the server is called app.py and must be at the root of the main directory (a minimal sketch of its structure is shown after this list)
- In /templates, the index.html file should be stored, which will be the "face" of the application, or the "front-end"
- In /static will be the style.css file, responsible for formatting the front-end (index.html), as well as static images such as the logo, icon, etc.
- Also under /static are the subdirectories that receive the images to be analyzed, as well as the results of the analyses (in fact, the same image saved under a new name that contains its original name plus the diagnosis and the accuracy percentage).
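For orientation, here is a minimal, hypothetical sketch of what such an app.py can look like. The real file is on my GitHub; the route names, template fields, and file handling below are illustrative assumptions, not the actual implementation:

# Hypothetical sketch of app.py; see GitHub for the real implementation.
# It reuses test_rx_image_for_Covid19_2() from the stand-alone script above.
import os
import shutil
from flask import Flask, request, render_template
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model('./model/covid_pneumo_model.h5')

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        # save the uploaded X-ray image under /static/xray_img
        f = request.files['file']
        img_path = os.path.join('static', 'xray_img', f.filename)
        f.save(img_path)

        # run the same helper used in the stand-alone script
        prediction, prob = test_rx_image_for_Covid19_2(model, img_path)

        # store a renamed copy with the result embedded in the file name,
        # e.g. POSITIVE_Prob_96_Name_User_A.png under /static/xray_analysis
        result_name = '{}_Prob_{}_Name_{}.png'.format(prediction, prob, f.filename)
        shutil.copy2(img_path, os.path.join('static', 'xray_analysis', result_name))

        return render_template('index.html', prediction=prediction,
                               prob=prob, image=result_name)
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)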
Once all the files are installed in their proper places, the working directory looks something like this:
Starting the Web App on local network
Once the files are installed in their proper folders, run app.py, the "engine" of our web-app, responsible for receiving an image stored somewhere on the user's computer (no matter where):
python app.py
At the Terminal, we can observe:
In your browser, enter the address shown at the Terminal:
And the app will be running in your local network:
Testing the web-app with real images
We can start by choosing one of the X-ray images showing Covid that was already used for validation during development.
- By pressing the [Browse] button in the app, your computer’s file manager opens
- Select an image and press [Open] (in the case of my Mac's Finder window)
- The file name appears as selected in the app.
- Press [Submit Query] in the app.
- The image is displayed at the bottom of the app, along with the image diagnosis and its accuracy value.
- The image is stored in the folder /static/xray_analysis with the following structure: [Result]_Prob_[XX]_Name_[FILENAME].png
Below is the sequence of steps:
Repeating the test for one of the images with Pneumonia, but without Covid-19:
Next Steps
As discussed in the introduction, this project is a proof of concept to demonstrate the feasibility of detecting the virus that causes Covid-19 in X-ray images. For the project to be used in real cases, several steps must still be accomplished. Here are some suggestions:
- Validate the entire project with professionals from the Health area
- Develop a benchmark to find the best pre-trained model
- Train the model using images obtained from patients, preferably from the same region where the application would be used
- Get a more extensive set of patient images with Covid-19
- Vary the model's hyperparameters
- Test the feasibility of training a model with 3 classes (Normal, Covid, and Pneumonia)
- Apply to the images tested by the application the same capture and digitization procedure used with the training images
- Change the app, allowing the selection of which model is more appropriated to be used (model 1 or 2)
- Put the web-app into production on platforms like Heroku.com or pythonanywhere.com
Conclusion
As always, I hope this article can help others find their way in the beautiful world of Data Science! And more than ever, I hope that this article could inspire professionals in the areas of Deep Learning and Health to work together, putting in production models that can help in the fight against this pandemic.
All the codes used in this article are available for download on my GitHub: covid19Xray .
Regards from the South of the World!
See you in my next article!
Thank you
Marcelo