
Explainable Deep Learning in Breast Cancer Prediction

Source: https://towardsdatascience.com/explainable-deep-learning-in-breast-cancer-prediction-ae36c638d2a4

Understanding Convolutional Neural Network Prediction Results in Healthcare


Advanced machine learning models (e.g., Random Forest, deep learning models, etc.) are generally considered not explainable [1][2]. As described in [1][2][3][4], such models largely remain black boxes, and in healthcare, understanding the reasons behind their prediction results is very important for assessing trust when a doctor plans to take action to treat a disease (e.g., cancer) based on a prediction. In [2], I used the Wisconsin Breast Cancer Diagnosis (WBCD) tabular dataset to show how to use the Local Interpretable Model-agnostic Explanations (LIME) method to explain the prediction results of a Random Forest model in breast cancer diagnosis.

In this article, I use the Kaggle Breast Cancer Histology Images (BCHI) dataset [5] to demonstrate how to use LIME to explain the image prediction results of a 2D Convolutional Neural Network (ConvNet) for the Invasive Ductal Carcinoma (IDC) breast cancer diagnosis.

1. Preparing Breast Cancer Histology Images Dataset

The BCHI dataset [5] can be downloaded from Kaggle. As described in [5], it consists of 5,547 50x50-pixel RGB digital images of H&E-stained breast histopathology samples, labeled as either IDC or non-IDC: 2,788 IDC images and 2,759 non-IDC images. The images have already been converted into Numpy arrays and stored in the file X.npy; the corresponding labels are stored in the file Y.npy, also in Numpy array format.

1.1 Loading Data

Once the X.npy and Y.npy files have been downloaded into a local computer, they can be loaded into memory as Numpy arrays as follows:

import numpy as np

X = np.load('./data/X.npy') # images
Y = np.load('./data/Y.npy') # labels (0 = non-IDC, 1 = IDC)
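
As a quick sanity check (this snippet is not part of the original article), the array shapes and the class balance can be verified right after loading:

print(X.shape)                          # expected: (5547, 50, 50, 3)
print(Y.shape)                          # labels, one per image
print((Y == 1).sum(), (Y == 0).sum())   # expected: 2788 IDC vs. 2759 non-IDC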

The following are two of the data samples; the image on the left is labeled as 0 (non-IDC) and the image on the right is labeled as 1 (IDC).


Figure 1. Two samples: The left one is labeled as 0 (non-IDC) and the right one is labeled as 1 (IDC).

1.2 Shuffling Data

In the original dataset files, all the samples labeled as 0 (non-IDC) appear before the samples labeled as 1 (IDC). To avoid this artificial ordering pattern, the dataset is randomly shuffled as follows:

indices = np.arange(Y.shape[0])
np.random.shuffle(indices)   # shuffle the index array in place
X = X[indices]
Y = Y[indices]
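
The shuffle above is not seeded, so every run produces a different ordering. For reproducible experiments, a seeded permutation can be used instead; this is a minimal sketch, not code from the original notebook:

rng = np.random.RandomState(42)        # fixed seed for reproducibility
indices = rng.permutation(Y.shape[0])
X = X[indices]
Y = Y[indices]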

1.3 Transforming Dataset

The pixel values of the IDC images are in the range [0, 255], while a typical deep learning model works best when its input values are in the range [0, 1] or [-1, 1]. The class Scale below transforms the pixel values of the IDC images into the range [0, 1].

from sklearn.base import BaseEstimator, TransformerMixin

class Scale(BaseEstimator, TransformerMixin):
    def __init__(self):
        pass

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X1 = X.copy()
        X1 = X1 / 255.0   # scale pixel values from [0, 255] to [0, 1]
        return X1
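
Scale is applied automatically inside the pipeline built later, but it can also be used on its own; a minimal usage sketch, assuming X has been loaded as above:

X_scaled = Scale().fit_transform(X)
print(X_scaled.min(), X_scaled.max())   # values should now lie in [0, 1]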

1.4 Dividing Dataset for Model Training and Testing

The dataset is divided into three parts: 80% for model training and validation (1,000 samples of that 80% are held out for validation, the rest are used for training), and 20% for model testing.

from sklearn.model_selection import train_test_split

X_train_raw, X_test_raw, y_train_raw, y_test_raw = train_test_split(X, Y, test_size=0.2)

X_train = X_train_raw.copy()
X_val   = X_train[:1000]
X_train = X_train[1000:]
X_test  = X_test_raw.copy()

y_train = y_train_raw.copy()
y_val   = y_train[:1000]
y_train = y_train[1000:]
y_test  = y_test_raw.copy()
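
A quick check of the resulting shapes (not shown in the original article) confirms the split:

print(X_train.shape, X_val.shape, X_test.shape)   # roughly 3,438 / 1,000 / 1,109 samples
print(y_train.shape, y_val.shape, y_test.shape)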

2. Training 2D ConvNet Model

The BCHI dataset [5] consists of images, so a 2D ConvNet model is selected for IDC prediction.

2.1 Creating 2D ConvNet

Similar to [5], the function getKerasCNNModel() below creates a 2D ConvNet for the IDC image classification.

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def getKerasCNNModel():
    img_rows, img_cols = X_train.shape[1], X_train.shape[2]
    input_shape = (img_rows, img_cols, 3)

    model = Sequential()
    model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))   # single sigmoid output: IDC vs. non-IDC

    model.compile(loss=keras.losses.binary_crossentropy,
                  optimizer=keras.optimizers.RMSprop(),
                  metrics=['accuracy'])

    return model
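
The constants BATCH_SIZE and EPOCH_SIZE are defined elsewhere in the original notebook; the values below are assumptions for illustration only, and model.summary() can be used to inspect the resulting architecture:

BATCH_SIZE = 128   # assumed batch size (not specified in this article)
EPOCH_SIZE = 10    # assumed number of epochs (not specified in this article)

model = getKerasCNNModel()
model.summary()    # prints the layer-by-layer architecture and parameter counts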

2.2 Creating Pipeline Component

The class KerasCNN below wraps the 2D ConvNet model as a scikit-learn pipeline component so that it can be combined with other data preprocessing components, such as Scale, into a pipeline.

class KerasCNN(BaseEstimator, TransformerMixin):
    def __init__(self, X_val=None, y_val=None):
        self._model      = getKerasCNNModel()
        self._batch_size = BATCH_SIZE
        self._epochs     = EPOCH_SIZE
        self._X_val      = X_val / 255.0
        self._y_val      = y_val
    
    def fit(self, X, y):  
        self.history = self._model.fit(X, y,
                        batch_size=self._batch_size,
                        verbose=1,
                        epochs=self._epochs,
                        validation_data=(self._X_val, self._y_val))
        return self
    
    def transform(self, X):
        return X

    def predict_proba(self, X):
        y_pred = self._model.predict(X)
        return y_pred

    def evaluate(self, X, y):
        return self._model.evaluate(X, y)

3. Explaining Model Prediction Results

As described before, I use LIME to explain the ConvNet model prediction results in this article.

3.1 Setting Up a Pipeline

Similar to [1][2], I build a pipeline that wraps the ConvNet model for integration with the LIME API.

from sklearn.pipeline import Pipeline

simple_cnn_pipeline = Pipeline([
    ('scale', Scale()),
    ('CNN', KerasCNN(X_val=X_val, y_val=y_val))
])

3.2 Training the ConvNet Model

The ConvNet model is trained as follows so that it can be called by LIME for model prediction later on.

simple_cnn_pipeline.fit(X_train, y_train)
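
After training, the wrapped Keras model can be evaluated on the held-out test set (a small sketch; the images are scaled manually here because evaluate() calls the Keras model directly and therefore bypasses the pipeline's Scale step):

cnn_step = simple_cnn_pipeline.named_steps['CNN']
test_loss, test_acc = cnn_step.evaluate(X_test / 255.0, y_test)
print('Test loss: %.4f, test accuracy: %.4f' % (test_loss, test_acc))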

3.3 Selecting LIME Explainer

As described in [1][2], the LIME method supports different types of machine learning model explainers for different types of datasets such as image, text, tabular data, etc. The LIME image explainer is selected in this article because the dataset consists of images.

The 2D image segmentation algorithm Quickshift is used for generating LIME super pixels (i.e., segments) [1].

from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

explainer = lime_image.LimeImageExplainer()
segmenter = SegmentationAlgorithm('quickshift', kernel_size=1, max_dist=200, ratio=0.2)
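
Before generating explanations, it can be useful to check how many super pixels Quickshift produces on one of the 50x50 images, since kernel_size, max_dist, and ratio control the segmentation granularity; a quick sketch, assuming X_test is available:

segments = segmenter(X_test[0] / 255.0)                    # 2D array of super-pixel labels
print('Number of super pixels:', len(np.unique(segments)))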

3.4 Explaining Model Prediction

Once the ConvNet model has been trained, given an original IDC image, the explain_instance() method of the LIME image explainer can be called to generate an explanation of the model prediction.

An explanation of an image prediction consists of a template image and a corresponding mask image. These images can be used to explain a ConvNet model prediction result in different ways.


Figure 2. The predictions for the above two samples are to be explained. The left image is predicted as negative (IDC: 0) and the right image is predicted as positive (IDC: 1) by the ConvNet model.
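
The article does not show how the two sample images are selected; one plausible way (the variable names IDC_1_sample and IDC_0_sample and the use of the test set here are my assumptions) is to take one raw test image of each label:

# Raw [0, 255] images are passed to LIME because the pipeline's Scale step
# performs the normalization inside predict_proba.
IDC_1_sample = X_test[y_test.ravel() == 1][0]
IDC_0_sample = X_test[y_test.ravel() == 0][0]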

Explanation 1: Prediction of Positive IDC (IDC: 1)

Figure 3 shows a positive IDC image for explaining model prediction via LIME.


Figure 3. IDC_1_sample: Prediction of a positive IDC sample to be explained

The code below generates an explanation object explanation_1 of the model prediction for the image IDC_1_sample (IDC: 1) in Figure 3.

In this explanation, white color is used to indicate the portion of the image that supports the model prediction (IDC: 1).

explanation_1 = explainer.explain_instance(IDC_1_sample, 
                classifier_fn = simple_cnn_pipeline.predict_proba, 
                top_labels=2, 
                hide_color=0, 
                num_samples=10000,
                segmentation_fn=segmenter)

Once the explanation of the model prediction is obtained, its get_image_and_mask() method can be called to obtain the template image and the corresponding mask image (super pixels):

from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

temp, mask = explanation_1.get_image_and_mask(explanation_1.top_labels[0],
                                              positive_only=True,
                                              num_features=25,
                                              hide_rest=True)
plt.imshow(mark_boundaries(temp, mask))

Figure 4 shows the hidden portion of the given IDC image in gray. The white portion of the image indicates the area of the given IDC image that supports the model prediction of positive IDC.


Figure 4: Explanation of the model prediction of a positive IDC in Figure 3, hiding the original image details. White indicates the area that supports the model prediction. The gray part either does not support or is irrelevant to the model prediction.

The code below shows, in yellow, the boundary of the area of the IDC image that supports the model prediction of positive IDC (see Figure 5).

temp, mask = explanation_1.get_image_and_mask(explanation_1.top_labels[0], 
                                            positive_only=True, 
                                            num_features=25, 
                                            hide_rest=False)
plt.imshow(mark_boundaries(temp, mask))


Figure 5: Explanation of the model prediction of a positive IDC in Figure 3, with the original image details. Yellow indicates the boundary of the white area in Figure 4 that supports the model prediction. The gray area either does not support or is not relevant to the prediction.

Explanation 2: Prediction of non-IDC (IDC: 0)

Figure 6 shows a non-IDC image for explaining model prediction via LIME.


Figure 6. IDC_0_sample: Prediction of a negative IDC sample to be explained

The code below generates an explanation object explanation_2 of the model prediction for the image IDC_0_sample in Figure 6. In this explanation, white color is used to indicate the portion of the image that supports the model prediction of non-IDC.

explanation_2 = explainer.explain_instance(IDC_0_sample, 
                                         classifier_fn = simple_cnn_pipeline.predict_proba, 
                                         top_labels=2, 
                                         hide_color=0, 
                                         num_samples=10000,
                                         segmentation_fn=segmenter
                                        )

Once the explanation of the model prediction is obtained, its get_image_and_mask() method can be called to obtain the template image and the corresponding mask image (super pixels):

temp, mask = explanation_2.get_image_and_mask(explanation_2.top_labels[0], 
                                            positive_only=True, 
                                            num_features=30, 
                                            hide_rest=True)
plt.imshow(mark_boundaries(temp, mask))

Figure 7 shows the hidden area of the non-IDC image in gray. The white portion of the image indicates the area of the given non-IDC image that supports the model prediction of non-IDC.


Figure 7: Explanation of the model prediction of a negative IDC (IDC: 0) of Figure 6, hiding the original image details. White indicates the area that supports the model prediction. The gray part either does not support or is irrelevant to the model prediction.

The code below shows, in yellow, the boundary of the area of the IDC image that supports the model prediction of non-IDC (see Figure 8).

temp, mask = explanation_2.get_image_and_mask(explanation_2.top_labels[0], 
                                            positive_only=True, 
                                            num_features=30, 
                                            hide_rest=False)
plt.imshow(mark_boundaries(temp, mask))


Figure 8. Explanation of the model prediction of non-IDC (IDC: 0) of Figure 6, with the original image details. Yellow indicates the boundary of the white area in Figure 7 that supports the model prediction. The gray area either does not support or is not relevant to the prediction.

Conclusion

In this article, I used the Kaggle BCHI dataset [5] to show how to use the LIME image explainer [3] to explain the IDC image prediction results of a 2D ConvNet model in IDC breast cancer diagnosis. Explanations of the model predictions for both IDC and non-IDC samples were generated by setting the number of super pixels/features (i.e., the num_features parameter of get_image_and_mask()) to 25 and 30, respectively.

I observed that the explanation results are sensitive to the choice of the number of super pixels/features, and domain knowledge is required to tune this parameter to obtain an appropriate explanation of the model prediction. The quality of the input data (images in this case) is also very important for a reasonable result, and accuracy can be improved by adding more samples.
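
To see this sensitivity concretely, the mask can be regenerated for several num_features values and compared side by side; a small sketch reusing explanation_1 from above:

import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, n in zip(axes, [10, 25, 50]):
    temp, mask = explanation_1.get_image_and_mask(explanation_1.top_labels[0],
                                                  positive_only=True,
                                                  num_features=n,
                                                  hide_rest=False)
    ax.imshow(mark_boundaries(temp, mask))
    ax.set_title('num_features = %d' % n)
plt.show()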

A Jupyter notebook with all the source code used in this article is available on GitHub [6].

References

[1] M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier

[2] Y. Huang, Explainable Machine Learning for Healthcare

[3] LIME tutorial on image classification

[4] Interpretable Machine Learning, A Guide for Making Black Box Models Explainable

[5] Predicting IDC in Breast Cancer Histology Images

[6] Y. Huang, Jupyter notebook

