An Offbeat Approach to Brain Tumor Classification using Computer Vision
Computer Vision plays a crucial role in the field of Medical Science, and this study of applied Computer Vision in Medical Science is broadly known as Medical Imaging. Computer Vision is achieved by deploying Machine Learning or Deep Learning methodologies, or both (hybrid), into production.
In this article, I am going to throw light on one such Machine Learning methodology that uses a Deep Learning block, making it a hybrid model for brain tumor classification.
Hybrid Model Development for Brain Tumor Classification
- The Dataset: The Brain MRI Images Dataset, available on Kaggle, is used for model development (download). The dataset contains 253 MRI images of the brain. Sample images are shown below.
- 3-Layer Feed-Forward Convolutional Neural Network for Image Feature Extraction: The proposed CNN architecture is shown below:
This CNN is forward-passed only once (no backpropagation) for feature extraction. This is the Deep Learning block discussed above.
- Support Vector Machine (SVM) with RBF Kernel: An SVM with an RBF kernel is instantiated and trained as the predictive model. This ML block is developed using Scikit-Learn.
Implementation of the Hybrid CNN-SVM Model
The CNN model is developed using the Keras framework:
# Importing all necessary libraries
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# CNN Model Development
classifier = Sequential()
# CONVOLUTION (1st Layer)
classifier.add(Convolution2D(32,(3,3),strides = (3,3),input_shape=(1000,1000,3),activation='relu'))
# Max-Pooling for 1st Convolutional Layer
classifier.add(MaxPooling2D(pool_size=(2,2)))
# CONVOLUTION (2nd Layer)
classifier.add(Convolution2D(32,(3,3),strides = (3,3), activation = 'relu'))
# Max-Pooling for 2nd Convolutional Layer
classifier.add(MaxPooling2D(pool_size=(2,2)))
# CONVOLUTION (3rd Layer)
classifier.add(Convolution2D(32,(3,3),strides = (3,3), activation = 'relu'))
# Max-Pooling for 3rd Convolutional Layer
classifier.add(MaxPooling2D(pool_size=(2,2)))
# FLATTENING
classifier.add(Flatten())
Delving deeper into the instantiated CNN model:
classifier.summary()
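For orientation, the shapes in the summary can be worked out by hand: each 3×3 convolution with stride 3 shrinks the spatial size by roughly a factor of three, and each 2×2 max-pool halves it. A sketch of the expected output (indicative, not verbatim Keras output):
# Expected shapes, derived by hand from the architecture above:
# Conv2D       -> (None, 333, 333, 32)     896 params
# MaxPooling2D -> (None, 166, 166, 32)
# Conv2D       -> (None, 55, 55, 32)     9,248 params
# MaxPooling2D -> (None, 27, 27, 32)
# Conv2D       -> (None, 9, 9, 32)       9,248 params
# MaxPooling2D -> (None, 4, 4, 32)
# Flatten      -> (None, 512)
So the flattened output is a 512-dimensional feature vector per image, which is why the feature matrix built below has 512 columns.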
Next, the CNN model is loaded with pre-saved weights obtained through earlier experimentation with the whole methodology (the remaining steps are discussed below). The weights are available as a Hierarchical Data Format (HDF5) file in the link mentioned below:
classifier.load_weights("Brain_Tumor_PCA.h5")
Image Feature Extraction using the Instantiated CNN Model
# Importing the necessary libraries
import numpy as np
import cv2
import os
# initializing the feature matrix: 253 images x 512 features
# (512 = 4 x 4 x 32, the length of the CNN's flattened output)
X = np.ones((253, 512))
# image loading and feature extraction: each image is resized,
# scaled to [0, 1], and forward-passed through the CNN; the
# 512-dimensional output becomes one row of X
i = 0
for folder in ['.../MRI_IMAGES/train/yes', '.../MRI_IMAGES/train/no',
               '.../MRI_IMAGES/test/yes', '.../MRI_IMAGES/test/no']:
    os.chdir(folder)
    for filename in os.listdir(folder):
        img = cv2.imread(filename)
        img = cv2.resize(img, (1000, 1000))
        img = np.divide(img, 255)
        img = img.reshape(1, 1000, 1000, 3)
        X[i] = classifier.predict(img)
        i = i + 1

# Preparing the Actual Labels
y = np.concatenate((np.ones(121), np.zeros(79), np.ones(34), np.zeros(19)))
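As a quick sanity check (an illustrative snippet, not from the original walkthrough), the four label groups should line up with the 253 extracted feature rows:
# Illustrative sanity check: 121 + 79 + 34 + 19 = 253 labels,
# one per extracted feature row.
assert X.shape[0] == y.shape[0] == 253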
Dimensionality Reduction using Principal Component Analysis
# Importing necessary libraries
from sklearn.decomposition import PCA

pca = PCA(n_components = 2)
pca.fit(X.T)
Z = pca.components_.T
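Note that PCA is fitted here on the transposed matrix X.T, so pca.components_ has shape (2, 253) and Z ends up holding two values per image. A more conventional reduction, shown below purely as a hedged alternative (not the article's code), would project each 512-dimensional feature vector onto its top two principal components:
# Conventional alternative (a sketch, not the article's method):
# fit PCA on the (253, 512) matrix and project each image onto
# the top two principal components, giving a (253, 2) embedding.
Z_alt = PCA(n_components = 2).fit_transform(X)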
Scatter Plot Visualization of the Dataset
# Importing the necessary libraries
import matplotlib.pyplot as plt

plt.scatter(Z.T[0], Z.T[1], c = y, s = 10, marker = 'x')
plt.title('Scatter Plot Visualization (VIOLET -> Non-Tumorous, YELLOW -> Tumorous)')
plt.xlabel("F1_PCA")
plt.ylabel("F2_PCA")
So, from the scatter plot, it is evident that no ML algorithm that linearly separates this feature space is applicable here.
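This is precisely what motivates the RBF kernel used below: it measures similarity as K(x, x') = exp(-γ‖x - x'‖²), implicitly mapping the two PCA features into a higher-dimensional space where a boundary that is non-linear in the original coordinates can become linear.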
Splitting the Dataset into Training and Test Sets
# Importing necessary libraries
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(Z, y, test_size = 0.1, random_state = 1234)
SVM (RBF Kernel) Predictive Model Development and Grid-Search Tuning for selecting the best hyper-parameter, C (the penalty parameter in SVMs)
# Importing necessary libraries
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
# MODEL INSTANTIATION
model = SVC(kernel = 'rbf')
parameters = {'C':[0.1,1,10,100,1000,10000,100000]}
grid_search = GridSearchCV(param_grid = parameters, estimator = model, verbose = 3)
# MODEL TRAINING AND GRID-SEARCH TUNING
grid_search = grid_search.fit(X_train,y_train)
Getting the best Hyper-parameter i.e., C, the penalty parameter
print(grid_search.best_params_)
And, the ‘gamma’ parameter of the SVM (RBF kernel) defaults to 1/number_of_features in the Scikit-Learn version used here. So, with the two PCA features, gamma = 0.5. (Newer Scikit-Learn releases default to gamma='scale' instead.)
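To keep that behavior reproducible across versions, gamma can be pinned explicitly at instantiation; a minimal sketch, assuming the same two-feature input:
# Minimal sketch: pin gamma = 1/n_features = 0.5 explicitly, since
# newer Scikit-Learn versions default to gamma='scale' instead.
model = SVC(kernel = 'rbf', gamma = 0.5)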
Model Performance Analysis
print("Validation Accuracy:",grid_search.score(X_test,y_test)) print("Training Accuracy: ",grid_search.score(X_train, y_train))
# Importing the necessary library
from sklearn.metrics import classification_report

print(classification_report(y_test, grid_search.predict(X_test)))
# Importing the necessary libraries
import matplotlib.pyplot as plt
%matplotlib inline
import itertools
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

plt.figure()
plot_confusion_matrix(confusion_matrix(y_test, grid_search.predict(X_test)),
                      classes=[0, 1], normalize=True,
                      title='Confusion Matrix')
# Importing the necessary modules
from sklearn.metrics import roc_curve, auc

y_roc = np.array(y_test)
fpr, tpr, thresholds = roc_curve(y_roc, grid_search.decision_function(X_test))
roc_auc = auc(fpr, tpr)
print("Area under the ROC curve : %f" % roc_auc)
# Importing the necessary libraries
import pylab as pl

pl.clf()
pl.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
pl.plot([0, 1], [0, 1], 'k--')
pl.xlim([0.0, 1.0])
pl.ylim([0.0, 1.0])
pl.xlabel('False Positive Rate')
pl.ylabel('True Positive Rate')
pl.legend(loc="lower right")
pl.show()
Logic Justification by Decision Boundary
Logic justification is essential for any AI model, whether based on Machine Learning or Deep Learning, and this methodology is no exception. In Deep Learning models used in Medical Imaging, logic justification is done with Grad-CAM visualizations, which are later verified by doctors; hence pure Deep Learning models enjoy an upper hand.
Here, the methodology is based on a Support Vector Machine, a pattern-recognition Machine Learning algorithm grounded in N-dimensional feature-space geometry. So, decision-boundary visualization is the best way of justifying the logic of the whole methodology.
x_min, x_max = X_train[:, 0].min(), X_train[:, 0].max()
y_min, y_max = X_train[:, 1].min(), X_train[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.001),
                     np.arange(y_min, y_max, 0.001))
h = grid_search.predict(np.c_[xx.ravel(), yy.ravel()])
h = h.reshape(xx.shape)
plt.contourf(xx, yy, h)
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, s = 10, marker = 'o', edgecolor = 'k')
plt.title('Scatter Plot Visualization of Training Set (VIOLET -> Non-Tumorous, YELLOW -> Tumorous)')
plt.xlabel('F1_PCA')
plt.ylabel('F2_PCA')
x_min, x_max = X_test[:, 0].min(), X_test[:, 0].max()
y_min, y_max = X_test[:, 1].min(), X_test[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.001),
                     np.arange(y_min, y_max, 0.001))
h = grid_search.predict(np.c_[xx.ravel(), yy.ravel()])
h = h.reshape(xx.shape)
plt.contourf(xx, yy, h)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_test, s = 10, marker = 'o', edgecolor = 'k')
plt.title('Scatter Plot Visualization of Test Set (VIOLET -> Non-Tumorous, YELLOW -> Tumorous)')
plt.xlabel('F1_PCA')
plt.ylabel('F2_PCA')
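In both plots, the contourf background shows the class the tuned SVM predicts at each grid point, so the border between the two shaded regions is exactly the non-linear decision boundary learned by the RBF kernel, visually confirming that the model's logic follows the geometry of the data.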