Understand Network Predictions Using LIME

This example shows how to use locally interpretable model-agnostic explanations (LIME) to understand why a deep neural network makes a classification decision.

Deep neural networks are very complex and their decisions can be hard to interpret. The LIME technique approximates the classification behavior of a deep neural network using a simpler, more interpretable model, such as a regression tree. Interpreting the decisions of this simpler model provides insight into the decisions of the neural network [1]. The simple model is used to determine the importance of features of the input data, as a proxy for the importance of the features to the deep neural network.

When a particular feature is very important to a deep network's classification decision, removing that feature significantly affects the classification score. That feature is therefore important to the simple model too.

Deep Learning Toolbox provides the imageLIME function to compute maps of the feature importance determined by the LIME technique. The LIME algorithm for images works by the following steps (a minimal code sketch of these steps follows the list):

  • Segmenting an image into features.

  • Generating many synthetic images by randomly including or excluding features. Excluded features have every pixel replaced with the value of the image average, so they no longer contain information useful for the network.

  • Classifying the synthetic images with the deep network.

  • Fitting a simpler regression model using the presence or absence of image features for each synthetic image as binary regression predictors for the scores of the target class. The model approximates the behavior of the complex deep neural network in the region of the observation.

  • Computing the importance of features using the simple model, and converting this feature importance into a map that indicates the parts of the image that are most important to the model.
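
The following sketch shows a minimal version of these steps, for illustration only; in practice, the imageLIME function used in the rest of this example performs them for you. The sketch assumes a classification network net, an image img already resized to the network input size, and a target class index classIdx. It also assumes Image Processing Toolbox (for imresize) and Statistics and Machine Learning Toolbox (for fitrtree and predictorImportance) are available.

% Minimal LIME-for-images sketch (illustrative only).
% Assumed inputs: net, img (resized to the network input size), classIdx.
gridSize = 7;                     % segment into a 7-by-7 grid of squares
numFeatures = gridSize^2;
numSamples = 500;

% Step 1: segment the image into features (a simple grid here).
featureMap = imresize(reshape(1:numFeatures,gridSize,gridSize), ...
    [size(img,1) size(img,2)],"nearest");

% Step 2: generate synthetic images. Each row of X randomly includes
% (true) or excludes (false) each feature; pixels of excluded features
% are replaced with the average pixel value of the image.
meanPixel = mean(img,[1 2]);
X = rand(numSamples,numFeatures) > 0.5;
y = zeros(numSamples,1);

for i = 1:numSamples
    syntheticImg = img;
    keepMask = ismember(featureMap,find(X(i,:)));
    for c = 1:size(img,3)
        channel = syntheticImg(:,:,c);
        channel(~keepMask) = meanPixel(c);
        syntheticImg(:,:,c) = channel;
    end
    % Step 3: classify the synthetic image and record the target score.
    scores = predict(net,syntheticImg);
    y(i) = scores(classIdx);
end

% Step 4: fit a simple regression tree that approximates the network
% near the original observation.
tree = fitrtree(double(X),y);

% Step 5: convert the feature importance of the simple model into a
% map over the image pixels.
importance = predictorImportance(tree);
map = zeros(size(featureMap));
for f = 1:numFeatures
    map(featureMap == f) = importance(f);
end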

You can compare results from the LIME technique to other explainability techniques, such as occlusion sensitivity or Grad-CAM.

Load Pretrained Network and Image

Load the pretrained network GoogLeNet.

net = googlenet;

Extract the image input size and the output classes of the network.

inputSize = net.Layers(1).InputSize(1:2);
classes = net.Layers(end).Classes;

Load the image. The image is of a retriever called Sherlock. Resize the image to the network input size.

img = imread("sherlock.jpg");
img = imresize(img,inputSize);

Classify the image, and display the three classes with the highest classification score in the image title.

[YPred,scores] = classify(net,img);
[~,topIdx] = maxk(scores,3);
topScores = scores(topIdx);
topClasses = classes(topIdx);
imshow(img)
titleString = compose("%s (%.2f)",topClasses,topScores');
title(sprintf(join(titleString,"; ")));

GoogLeNet classifies Sherlock as a golden retriever. Understandably, the network also assigns a high probability to the Labrador retriever class. You can use imageLIME to understand which parts of the image the network is using to make these classification decisions.

Identify Areas of an Image the Network Uses for Classification

You can use LIME to find out which parts of the image are important for a class. First, look at the predicted class of golden retriever. What parts of the image suggest this class?

By default, imageLIME identifies features in the input image by segmenting the image into superpixels. This method of segmentation requires Image Processing Toolbox; however, if you do not have Image Processing Toolbox, you can use the option "Segmentation","grid" to segment the image into square features.
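
For example, a call of the following form uses grid segmentation (a sketch only; the grid option is demonstrated in full later in this example):

map = imageLIME(net,img,YPred,"Segmentation","grid");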

Use the imageLIME function to map the importance of different superpixel features. By default, the simple model is a regression tree.

map = imageLIME(net,img,YPred);

Display the image of Sherlock with the LIME map overlaid.

figure
imshow(img,'InitialMagnification',150)
hold on
imagesc(map,'AlphaData',0.5)
colormap jet
colorbar
title(sprintf("Image LIME (%s)", ...
    YPred))
hold off

The map shows which areas of the image are important to the classification of golden retriever. Red areas of the map have a higher importance: when these areas are removed, the score for the golden retriever class goes down. The network focuses on the dog's face and ear to make its prediction of golden retriever. This is consistent with other explainability techniques like occlusion sensitivity or Grad-CAM.

Compare to Results of a Different Class

GoogLeNet predicts a score of 55% for the golden retriever class and 40% for the Labrador retriever class. These classes are very similar. You can determine which parts of the dog are most important for each class by comparing the LIME maps computed for each class.

Using the same settings, compute the LIME map for the Labrador retriever class.

secondClass = topClasses(2);
map = imageLIME(net,img,secondClass);
figure
imshow(img,'InitialMagnification',150)
hold on
imagesc(map,'AlphaData',0.5)
colormap jet
colorbar
title(sprintf("Image LIME (%s)",secondClass))
hold off

For the Labrador retriever class, the network focuses more on the dog's nose and eyes than on its ear. While both maps highlight the dog's forehead, the network associates the dog's ear and neck with the golden retriever class, and the dog's eye and nose with the Labrador retriever class.
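
To make the comparison easier, you can also display the two maps side by side. The following sketch assumes net, img, YPred, and secondClass from the previous steps; because imageLIME generates its synthetic images randomly, recomputed maps can differ slightly from those shown above.

% Recompute both maps and show them side by side.
mapGolden = imageLIME(net,img,YPred);
mapLabrador = imageLIME(net,img,secondClass);

figure
tiledlayout(1,2)
nexttile
imshow(img)
hold on
imagesc(mapGolden,'AlphaData',0.5)
title(sprintf("Image LIME (%s)",YPred))
hold off
nexttile
imshow(img)
hold on
imagesc(mapLabrador,'AlphaData',0.5)
title(sprintf("Image LIME (%s)",secondClass))
hold off
colormap jet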

Compare LIME with Grad-CAM

Other image interpretability techniques such as Grad-CAM upsample the resulting map to produce a smooth heatmap of the important areas of the image. You can produce similar-looking maps with imageLIME by calculating the importance of square or rectangular features and upsampling the resulting map.

To segment the image into a grid of square features instead of irregular superpixels, use the "Segmentation","grid" name-value pair. Upsample the computed map to match the image resolution using bicubic interpolation by setting "OutputUpsampling","bicubic".

To increase the resolution of the initially computed map, increase the number of features to 100 by specifying the "NumFeatures",100 name-value pair. Because the image is square, this produces a 10-by-10 grid of features.

The LIME technique generates synthetic images based on the original observation by randomly choosing some features and replacing all the pixels in those features with the average image pixel, effectively removing those features. Increase the number of random samples to 6000 by setting "NumSamples",6000. When you increase the number of features, increasing the number of samples usually gives better results.

By default, the imageLIME function uses a regression tree as its simple model. Instead, fit a linear regression model with lasso regression by setting "Model","linear".

map = imagelime(net,img,"golden retriever", ...
    "segmentation","grid",...
    "outputupsampling","bicubic",...
    "numfeatures",100,...
    "numsamples",6000,...
    "model","linear");
imshow(img,'initialmagnification', 150)
hold on
imagesc(map,'alphadata',0.5)
colormap jet
title(sprintf("image lime (%s - linear model)", ...
    ypred))
hold off

Similar to the gradient map computed by Grad-CAM, the LIME technique also strongly identifies the dog's ear as significant to the prediction of golden retriever.
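
To reproduce this comparison yourself, you can compute a Grad-CAM map with the gradCAM function (available in Deep Learning Toolbox since R2021a) and display it in the same way. A minimal sketch, assuming net, img, and YPred from earlier:

% Compute the Grad-CAM map for the predicted class and overlay it.
gradcamMap = gradCAM(net,img,YPred);

figure
imshow(img,'InitialMagnification',150)
hold on
imagesc(gradcamMap,'AlphaData',0.5)
colormap jet
colorbar
title(sprintf("Grad-CAM (%s)",YPred))
hold off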

Display Only the Most Important Features

LIME results are often plotted by showing only the few most important features. When you use the imageLIME function, you can also obtain a map of the features used in the computation and the calculated importance of each feature. Use these results to determine the four most important superpixel features and display only those features in an image.

Compute the LIME map and obtain the feature map and the calculated importance of each feature.

[map,featureMap,featureImportance] = imageLIME(net,img,YPred);

Find the indices of the top four features.

numTopFeatures = 4;
[~,idx] = maxk(featureImportance,numTopFeatures);

Next, mask out the image using the feature map so that only pixels in the four most important superpixels are visible. Display the masked image.

mask = ismember(featureMap,idx);
maskedImg = uint8(mask).*img;
figure
imshow(maskedImg)
title(sprintf("Image LIME (%s - Top %i Features)", ...
    YPred,numTopFeatures))

References

[1] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. San Francisco, CA, USA: ACM, 2016. https://doi.org/10.1145/2939672.2939778.
