Hire the author: Priti S

Introduction

“I’ll dare to make a bold statement that the COVID-19 pandemic is the most disruptive and devastating period of our lives.”

Its main ambiguity lies in the fact that it affects different people in different ways, and the situation becomes even more challenging with every passing day. Moreover, many established medical approaches have so far proven ineffective against the novel coronavirus. Its severity stems from the fact that it is a viral infection: viruses are microorganisms that can hijack a host organism by inserting their genetic material into the host’s cells.

Artificial Intelligence is a disruptive technology. It has the power to change the world and could take over much of the medical field in the near future. Its efficacy is evident from its past success in providing top-notch solutions to many medical problems. But building a robust model with high precision and flexibility comes at a cost: techniques such as boosting, bagging, model stacking, and the use of billions of parameters make the model difficult to understand. It becomes hard to interpret how the input variables affect the model’s outcomes.

Hence, Explainable AI comes to the rescue: it explains and adds meaning to the model’s decisions. Explainer models can help decode the black-box nature of AI models.

Glossary

CNN – Convolutional Neural Network, a powerful type of deep neural network widely used in image recognition, image segmentation, computer vision, and many other image-related tasks. The input to these networks is images.

XAI – Short for Explainable AI, which contrasts with the traditional black-box nature of AI models. It produces AI solutions that can be understood by humans.

LIME – It stands for Local Interpretable Model-Agnostic Explanations. It is an algorithm that can faithfully explain the predictions of an AI model by approximating it locally with an interpretable explainer model.

Procedure

Step 1: The Dataset

The dataset for diagnosis and interpretation requires X-ray images from both COVID-19-affected and non-COVID-19-affected patients, so images from the following sources were combined for this project.

  • The open-source dataset from Dr. Joseph Paul Cohen, a postdoctoral fellow at the University of Montreal, which contains chest X-ray images of patients suffering from COVID-19.
  • The Kaggle Chest X-Ray Images (Pneumonia) dataset, which provided the X-ray images of people without COVID-19.

Step 2: Model Architecture

A CNN has been used to train the model on the prepared dataset, so that it can predict whether an X-ray image comes from a COVID-19 patient or not.

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# Convolutional blocks: convolutions followed by max pooling and dropout
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# Classifier head: flatten the feature maps and output a single probability
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=keras.losses.binary_crossentropy, optimizer='adam', metrics=['accuracy'])

It is a Sequential model with four convolutional layers, using 32, 128, 64, and 128 filters respectively so that the network can learn a richer set of features. The lower layers detect features in small regions of the image, and as we move deeper into the network, the receptive field of the layers grows. The input shape in the first hidden layer is specified as 224 x 224 pixels with three channels.

The convolutional layers are followed by Max Pooling layers, which downsample the feature maps by keeping only the strongest activation in each window. The network also has Dropout layers, which help prevent the neural network from overfitting.

The activation function used is ReLU, as it mitigates the vanishing gradient problem. A Flatten layer is added before the Dense layers to convert the feature maps into a one-dimensional array. The sigmoid activation is used for the output layer of the network because this is a binary classification problem.

Now we need to compile the model. Binary cross-entropy is used as the loss function, Adam as the optimizer, and accuracy as the metric for evaluating model performance.
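
As a quick sanity check on the architecture described above, the layer output shapes and parameter counts can be printed (this call is not shown in the original write-up):

# Print each layer's output shape and parameter count
model.summary()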

Step 3: Model Training

Augmenting the Image Data

Data Augmentation in Keras is supported by the “ImageDataGenerator” class and is applied to improve the model’s ability to generalize. A rescale parameter multiplies every pixel value, transforming it from the [0, 255] range to [0, 1], so that all images are treated on the same scale and training remains numerically stable.

from keras.preprocessing import image

train_datagen = image.ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = image.ImageDataGenerator(rescale=1./255)

Generate Training Data

The training data is prepared by reading the images directly from the directory and creating the batched train_generator and validation_generator, which also lets Keras identify the classes automatically from the folder names. The images are resized to 224 x 224 and the batch size is fixed at 32.

train_generator = train_datagen.flow_from_directory(
    '/content/drive/MyDrive/CovidDataset/Train',
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    '/content/drive/MyDrive/CovidDataset/Val',
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')

Model Fitting

hist = model.fit_generator(
    train_generator,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=2)

Step 4: Model Evaluation

The next step is to measure model performance. Since the evaluation metric passed to model.compile is accuracy, evaluate_generator reports the model’s loss and accuracy on a given generator.
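
A minimal sketch of this evaluation step, assuming the train_generator and validation_generator defined earlier (the exact call is not shown in the original write-up):

# Evaluate loss and accuracy on the training and validation generators
train_loss, train_acc = model.evaluate_generator(train_generator)
val_loss, val_acc = model.evaluate_generator(validation_generator)
print(f"Train: loss={train_loss:.2f}, accuracy={train_acc:.2%}")
print(f"Val:   loss={val_loss:.2f}, accuracy={val_acc:.2%}")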

The training set yielded a loss of 0.08 and an accuracy of 96.43%, while the validation set gave a loss of 0.05 and an accuracy of 98.33%. Since the difference between the training and validation loss is very small, this is a good sign that our model is not overfitting.

Loss/Accuracy vs Epoch

Looking at the figure above, we can see that both the training and validation loss decrease as the number of epochs increases, while the accuracy of both sets increases.
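
A plot like the one above can be reproduced from the History object returned by fit_generator; a minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

# Plot loss and accuracy per epoch; older Keras versions use 'acc'/'val_acc' instead of 'accuracy'/'val_accuracy'
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(hist.history['loss'], label='train loss')
ax1.plot(hist.history['val_loss'], label='val loss')
ax2.plot(hist.history['accuracy'], label='train accuracy')
ax2.plot(hist.history['val_accuracy'], label='val accuracy')
ax1.set_xlabel('epoch'); ax2.set_xlabel('epoch')
ax1.legend(); ax2.legend()
plt.show()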

Step 5: Image Prediction with the Model

To check whether the model can make correct predictions and classify the images into their expected classes, the model.predict and model.predict_classes functions have been used. The former generates a prediction score, and the latter predicts the class of the input image.
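
Since the images batch is not constructed in the snippets above, here is one way it might be assembled; the file name and class folder below are hypothetical:

import numpy as np
from keras.preprocessing import image as keras_image

# Hypothetical example: load one validation X-ray and rescale it the same way as the generators
img = keras_image.load_img('/content/drive/MyDrive/CovidDataset/Val/Covid/sample.jpg', target_size=(224, 224))
images = np.expand_dims(keras_image.img_to_array(img) / 255.0, axis=0)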

preds = model.predict(images)                      # prediction score for each image, in [0, 1]
prediction_class = model.predict_classes(images)   # thresholded class label (0 or 1) for each image

As the model can accurately predict the different classes, let’s now look at which features the model focuses on when distinguishing between the images.

Step 6: Using LIME to Interpret the Model

Install LIME

LIME helps in understanding the decisions made by a black-box model. First, we need to install lime and then import the lime_image module to work on explaining the classifier.

pip install lime

from lime import lime_image

Create an Explainer Object

Next, we create an explainer object using the LimeImageExplainer() function. This object is capable of explaining predictions on image data.

explainer = lime_image.LimeImageExplainer()

Now we can use the explainer object to explain the model’s prediction for any particular image, using the explain_instance() function. First, it generates neighborhood data by randomly perturbing features of the instance; then locally weighted linear models are fit on this neighborhood data to explain each class. The top_labels parameter is the number of labels that we want our explainer object to report; for our model, its value is 2.

explanation = explainer.explain_instance(images[0].astype('double'), model.predict,
                                         top_labels=2, hide_color=0, num_samples=1000)

Visualizing the Explanations

Further, we can visualize the interpretations from the explanation object using the get_image_and_mask() function, which explains the topmost labels from the explanation object. For the first image, the parameters positive_only and hide_rest are set to True, so only the superpixels that contribute positively to the prediction are displayed, and the unexplained parts are grayed out. For the second image, those two parameters are set to False, so the top features, whether they contribute positively or negatively, are displayed along with the rest of the image. The num_features parameter is the number of features to display, in order of importance.

temp_1, mask_1 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
temp_2, mask_2 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
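
The masked outputs can then be rendered with matplotlib and skimage’s mark_boundaries; a minimal sketch of how such a figure might be produced (the plotting code is not part of the original write-up):

import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries

# Left: only the positively contributing superpixels; right: positive and negative regions on the full image
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(mark_boundaries(temp_1, mask_1))  # assumes pixel values are already scaled to [0, 1]
ax2.imshow(mark_boundaries(temp_2, mask_2))
ax1.axis('off'); ax2.axis('off')
plt.show()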

The figure above shows the positive features in green and the negative features in red. No background noise contributes to the prediction, so we can place more trust in the model.

Learning Tools

Some of the important references that were useful in developing this project are listed below:

  1. Convolutional Neural Network for Image Classification
  2. Deep Learning based detection and analysis of Covid-19 on chest X-Ray images
  3. Interpretable Machine Learning by Christoph Molnar
  4. LIME package
  5. Interpretable Machine Learning for image classification with LIME

Learning Strategy

One should have basic proficiency in Python and Deep Learning and know how to work with neural networks, since a CNN builds on the same ideas as a standard deep neural network. Having some basic knowledge of how to build a model and fit it to a dataset will be helpful. A learner should also read up on Explainable AI before starting this project.

Reflective Analysis

We are in the midst of the COVID-19 pandemic, with total cases rising rapidly all over the world. As a result, the workload of medical practitioners has increased. In such a scenario, AI technologies, which have much to offer the medical field, can be used to reduce the pressure on medical staff. Beyond relying on the accuracy of the model’s predictions, it is also important to understand the decisions a model makes in order to trust it completely.

The custom model achieves very good accuracy, so pre-trained models were not used. Using XAI, we can easily inspect the regions the model identifies as affected, so this method can help provide better and quicker treatment.

The GitHub link for this project is available here.

Thank you!
