Deep Learning Visualization Methods
Deep learning networks are often described as "black boxes" because the reason that a network makes a certain decision is not always obvious. Increasingly, deep learning networks are being used in domains from medical treatment to loan applications, so understanding why a network makes a particular decision is crucial.
You can use interpretability techniques to translate network behavior into output that a person can interpret. This interpretable output can then answer questions about the predictions of a network. Interpretability techniques have many applications, for example, verification, debugging, learning, assessing bias, and model selection.
You can apply interpretability techniques after network training, or build them into the network. The advantage of post-training methods is that you do not have to spend time constructing an interpretable deep learning network. This topic focuses on post-training methods that use test images to explain the predictions of a network trained on image data.
Visualization methods are a type of interpretability technique that explain network predictions using visual representations of what a network is looking at. There are many techniques for visualizing network behavior, such as heat maps, saliency maps, feature importance maps, and low-dimensional projections.
Visualization Methods
Interpretability techniques have varying characteristics; which method you use depends on the interpretation you want and the network you have trained. Methods can be local, investigating network behavior for a specific input only, or global, investigating network behavior across an entire data set.
Each visualization method has a specific approach that determines the output it produces. A common distinction between methods is whether they are gradient based or perturbation based. Gradient-based methods backpropagate the signal from the output back toward the input. Perturbation-based methods perturb the input to the network and consider the effect of the perturbation on the prediction. Another approach to interpretability involves mapping or approximating the complex network model to a more interpretable space. For example, some methods approximate the network predictions using a simpler, more interpretable model. Other methods use dimension reduction techniques to reduce high-dimensional activations to an interpretable 2-D or 3-D space.
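To make the perturbation-based idea concrete, the following minimal sketch occludes patches of an image and records how much the class score drops. It assumes a trained classification network `net` whose `predict` method returns class scores (for example, a SeriesNetwork or DAGNetwork), an image `X` already resized to the network input size, and the class index `classIdx` of interest; these names and the mask settings are illustrative, and the occlusionSensitivity function provides a tuned implementation of the same idea.

```matlab
% Minimal perturbation-based (occlusion) sketch. "net", "X", and "classIdx"
% are assumed to exist already; the mask size and stride are illustrative
% values that typically need tuning.
maskSize = 32;                          % side length of the occluding square
stride   = 16;                          % step between mask positions
baseScores = predict(net, single(X));   % class scores for the unperturbed image
baseScore  = baseScores(classIdx);

[h, w, ~] = size(X);
rows = 1:stride:(h - maskSize + 1);
cols = 1:stride:(w - maskSize + 1);
scoreMap = zeros(numel(rows), numel(cols));

for i = 1:numel(rows)
    for j = 1:numel(cols)
        Xocc = X;
        % Replace one patch with gray, a typical occluding value
        Xocc(rows(i):rows(i)+maskSize-1, cols(j):cols(j)+maskSize-1, :) = 128;
        scores = predict(net, single(Xocc));
        % A large drop in the class score marks an important region
        scoreMap(i, j) = baseScore - scores(classIdx);
    end
end

imagesc(scoreMap)
colorbar
```

A gradient-based method, by contrast, computes how the class score changes with respect to each input pixel in a single backward pass, which is why gradient-based maps are typically much faster to compute than perturbation-based ones.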
The following table compares visualization interpretability techniques for deep learning models for image classification. A short code sketch after the table illustrates the calling pattern that several of these methods share.
Deep Learning Visualization Methods for Image Classification
Method | Locality | Approach | Resolution | Requires Tuning | Description
---|---|---|---|---|---
Activations | Local | Activation visualization | Low | No | Visualizing activations is a simple way of understanding network behavior. Most convolutional neural networks learn to detect features such as color and edges in their first convolutional layers. In deeper convolutional layers, the network learns to detect more complicated features.
CAM | Local | Gradient-based class activation heat map | Low | No | Class activation mapping (CAM) is a simple technique for generating visual explanations of the predictions of convolutional neural networks [1]. CAM uses the global average pooling layer in a convolutional neural network to generate a map that highlights which parts of an image the network is using with respect to a particular class label.
Grad-CAM | Local | Gradient-based class activation heat map | Low | No | Gradient-weighted class activation mapping (Grad-CAM) is a generalization of the CAM method that uses the gradient of the classification score with respect to the convolutional features determined by the network to understand which parts of an observation are most important for classification [2]. The places where the gradient is large are the places where the final score depends most on the data. Grad-CAM gives similar results to CAM without the architecture restrictions of CAM. For more information, see Grad-CAM Reveals the Why Behind Deep Learning Decisions.
Occlusion sensitivity | Local | Perturbation-based heat map | Low to medium | Yes | Occlusion sensitivity measures network sensitivity to small perturbations in the input data. The method perturbs small areas of the input by replacing them with an occluding mask, typically a gray square. As the mask moves across the image, the technique measures the change in probability score for a given class. You can use occlusion sensitivity to highlight which parts of the image are most important to the classification. To get the best results from occlusion sensitivity, you must choose an appropriate size and stride for the occluding mask.
LIME | Local | Perturbation-based proxy model, feature importance | Low to high | Yes | The LIME technique approximates the classification behavior of a deep learning network using a simpler, more interpretable model, such as a linear model or a regression tree [3]. The simple model determines the importance of features of the input data as a proxy for the importance of the features to the deep learning network. For more information, see Understand Network Predictions Using LIME.
Gradient attribution | Local | Gradient-based saliency map | High | No | Gradient attribution methods provide pixel-resolution maps that show which pixels are most important to the network classification decision [4][5]. These methods compute the gradient of the class score with respect to the input pixels. Intuitively, the maps show which pixels most affect the class score when changed. Gradient attribution maps are the same size as the input image, so they have high resolution, but they tend to be noisier because a well-trained deep network is not strongly dependent on the exact value of specific pixels.
Deep dream | Global | Gradient-based activation maximization | Low to high | Yes | Deep dream is a feature visualization technique that synthesizes images that strongly activate network layers [6]. By visualizing these images, you can highlight the image features learned by a network. These images are useful for understanding and diagnosing network behavior.
t-SNE | Global | Dimension reduction | N/A | No | t-SNE is a dimension reduction technique that preserves distances, so that points near each other in the high-dimensional representation are also near each other in the low-dimensional representation [7]. You can use t-SNE to visualize how deep learning networks change the representation of input data as it passes through the network layers. (Requires Statistics and Machine Learning Toolbox.)
Maximal and minimal activating images | Global | Gradient-based activation maximization | N/A | No | Visualizing images that strongly or weakly activate the network for each class is a simple way of understanding your network. Images that strongly activate highlight what the network considers a "typical" image of that class. Images that weakly activate can help you discover why your network makes incorrect classification predictions.
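Several of the heat-map methods in the table share a similar calling pattern. The following hedged sketch applies Grad-CAM, occlusion sensitivity, and LIME to the same image and overlays the resulting maps; it assumes a pretrained network such as squeezenet and uses the peppers.png image shipped with MATLAB, so adapt the network, image, and class of interest to your own task.

```matlab
% Compare three heat-map methods on one image (illustrative sketch).
net = squeezenet;                               % example pretrained network
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(imread('peppers.png'), inputSize); % example image shipped with MATLAB
label = classify(net, X);                       % class to explain

mapGrad = gradCAM(net, X, label);               % gradient-based
mapOcc  = occlusionSensitivity(net, X, label);  % perturbation-based
mapLime = imageLIME(net, X, label);             % proxy-model based

% Overlay each map on the input image
maps   = {mapGrad, mapOcc, mapLime};
titles = {'Grad-CAM', 'Occlusion Sensitivity', 'LIME'};
figure
for k = 1:3
    subplot(1, 3, k)
    imshow(X)
    hold on
    imagesc(maps{k}, 'AlphaData', 0.5)
    colormap jet
    title(titles{k})
    hold off
end
```

The perturbation-based functions also accept options that control how the input is perturbed, which is what the "Requires Tuning" column in the table refers to.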
To explore applying these methods interactively using an app, see the GitHub® repository.
Interpretability Methods for Nonimage Data
Many interpretability methods focus on interpreting image classification or regression networks. Interpreting nonimage data is often more challenging because of the nonvisual nature of the data. You can use Grad-CAM to visualize the classification decisions of a 1-D convolutional network trained on time series data. To explore the activations of an LSTM network, use the activations and tsne (Statistics and Machine Learning Toolbox) functions. To explore the behavior of a network trained on tabular features, use the lime (Statistics and Machine Learning Toolbox) and shapley (Statistics and Machine Learning Toolbox) functions. For more information about interpreting machine learning models, see Interpret Machine Learning Models (Statistics and Machine Learning Toolbox).
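As a sketch of the tabular workflow, the following assumes a classification model mdl trained on a predictor table Tbl (for example, with fitcensemble) and explains the prediction for one query row with the lime and shapley functions; the model, data, and number of important predictors are placeholders for your own.

```matlab
% Explain one prediction of a model trained on tabular features
% (illustrative sketch; "mdl" and "Tbl" are assumed to exist already).
queryPoint = Tbl(1, :);                            % observation to explain

% LIME: fit a simple interpretable model around the query point
limeExplainer = lime(mdl, Tbl);
limeExplainer = fit(limeExplainer, queryPoint, 4); % use 4 important predictors
figure
plot(limeExplainer)

% Shapley values: contribution of each predictor to this prediction
shapleyExplainer = shapley(mdl, Tbl);
shapleyExplainer = fit(shapleyExplainer, queryPoint);
figure
plot(shapleyExplainer)
```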
References
[1] Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning Deep Features for Discriminative Localization." In 2016 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2921–2929. Las Vegas: IEEE, 2016.
[2] Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization." In 2017 Proceedings of the IEEE Conference on Computer Vision: 618–626. Venice, Italy: IEEE, 2017. https://doi.org/10.1109/iccv.2017.74.
[3] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016): 1135–1144. New York, NY: Association for Computing Machinery, 2016. https://doi.org/10.1145/2939672.2939778.
[4] Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps." Preprint, submitted April 19, 2014. https://arxiv.org/abs/1312.6034.
[5] Tomsett, Richard, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, and Alun Preece. "Sanity Checks for Saliency Metrics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 2020): 6021–6029. https://doi.org/10.1609/aaai.v34i04.6064.
[6] TensorFlow. "DeepDreaming with TensorFlow." https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb.
[7] van der Maaten, Laurens, and Geoffrey Hinton. "Visualizing Data Using t-SNE." Journal of Machine Learning Research 9 (2008): 2579–2605.
Related Topics
- Grad-CAM Reveals the Why Behind Deep Learning Decisions
- Understand Network Predictions Using LIME
- Interpret Machine Learning Models (Statistics and Machine Learning Toolbox)