Deep Learning Visualization Methods

Deep learning networks are often described as "black boxes" because the reason that a network makes a certain decision is not always obvious. Increasingly, deep learning networks are being used in domains ranging from medical treatment to loan applications, so understanding why a network makes a particular decision is crucial.

You can use interpretability techniques to translate network behavior into output that a person can interpret. This interpretable output can then answer questions about the predictions of a network. Interpretability techniques have many applications, for example, verification, debugging, learning, assessing bias, and model selection.

You can apply interpretability techniques after network training, or build them into the network. The advantage of post-training methods is that you do not have to spend time constructing an interpretable deep learning network. This topic focuses on post-training methods that use test images to explain the predictions of a network trained on image data.

Visualization methods are a type of interpretability technique that explain network predictions using visual representations of what a network is looking at. There are many techniques for visualizing network behavior, such as heat maps, saliency maps, feature importance maps, and low-dimensional projections.

Workflow for taking a trained network and a set of test images and producing interpretable output.

Visualization Methods

Interpretability techniques have varying characteristics; which method you use depends on the interpretation you want and the network you have trained. Methods can be local, investigating network behavior only for a specific input, or global, investigating network behavior across an entire data set.

Each visualization method has a specific approach that determines the output it produces. A common distinction between methods is whether they are gradient based or perturbation based. Gradient-based methods backpropagate the signal from the output back toward the input. Perturbation-based methods perturb the input to the network and consider the effect of the perturbation on the prediction. Another approach to interpretability involves mapping or approximating the complex network model to a more interpretable space. For example, some methods approximate the network predictions using a simpler, more interpretable model. Other methods use dimension reduction techniques to reduce high-dimensional activations to an interpretable 2-D or 3-D space.

The sections below compare visualization interpretability techniques for deep learning models for image classification.

Deep Learning Visualization Methods for Image Classification

For each method, the entries below give an example visualization, the locality, approach, resolution, whether the method requires tuning, the function that implements it (where one exists), and a description.
Activations

Example visualization of activations on an image of a dog. The eyes and nose of the dog appear white and the rest of the image is black.

Locality: Local. Approach: Activation visualization. Resolution: Low. Requires tuning: No.

Visualizing activations is a simple way of understanding network behavior. Most convolutional neural networks learn to detect features like color and edges in their first convolutional layers. In deeper convolutional layers, the network learns to detect more complicated features.

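As a minimal sketch of this approach, the following MATLAB code extracts and displays the channel activations of an early convolutional layer. It assumes the pretrained SqueezeNet network is available; the layer name "conv1" and the example image peppers.png are specific to this illustration.

    % Load a pretrained network and resize an example image to the network input size.
    net = squeezenet;
    inputSize = net.Layers(1).InputSize;
    im = imresize(imread("peppers.png"),inputSize(1:2));

    % Extract the activations of an early convolutional layer ("conv1" in SqueezeNet).
    act = activations(net,im,"conv1");

    % Rescale the activations to [0,1] and view every channel as a montage.
    act = mat2gray(act);
    montage(reshape(act,size(act,1),size(act,2),1,[]))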

CAM

Example visualization of CAM heat map on an image of a dog. The map highlights the head of the dog.

Function: None. Locality: Local. Approach: Gradient-based class activation heat map. Resolution: Low. Requires tuning: No.

Class activation mapping (CAM) is a simple technique for generating visual explanations of the predictions of convolutional neural networks [1]. CAM uses the global average pooling layer in a convolutional neural network to generate a map that highlights which parts of an image the network is using with respect to a particular class label.

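The following sketch shows the core CAM computation, assuming a network net that ends with global average pooling followed by a fully connected layer, and a test image im already resized to the network input size. The layer names featureLayer and fcLayer are placeholders for the final convolutional feature layer and the fully connected layer of your own network.

    % Activations of the final convolutional feature map (H-by-W-by-C).
    featureMap = activations(net,im,featureLayer);

    % Weights of the fully connected layer that follows global average pooling.
    fcWeights = net.Layers(strcmp({net.Layers.Name},fcLayer)).Weights;

    % Index of the predicted class.
    [~,classIdx] = max(predict(net,im));

    % Weight each feature channel by the fully connected weight for the
    % predicted class, sum over channels, and upsample to the image size.
    cam = sum(featureMap .* reshape(fcWeights(classIdx,:),1,1,[]),3);
    cam = imresize(cam,[size(im,1) size(im,2)]);

    % Overlay the class activation map on the image.
    imshow(im)
    hold on
    imagesc(cam,"AlphaData",0.5)
    colormap jet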

Grad-CAM

Example visualization of Grad-CAM heat map on an image of a dog. The map highlights the ear of the dog.

Locality: Local. Approach: Gradient-based class activation heat map. Resolution: Low. Requires tuning: No.

Gradient-weighted class activation mapping (Grad-CAM) is a generalization of the CAM method that uses the gradient of the classification score with respect to the convolutional features determined by the network to understand which parts of an observation are most important for classification [2]. The places where the gradient is large are the places where the final score depends most on the data.

Grad-CAM gives similar results to CAM without the architecture restrictions of CAM.

For more information, see Grad-CAM Reveals the Why Behind Deep Learning Decisions.
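A minimal sketch using the gradCAM function in Deep Learning Toolbox, assuming a pretrained classification network net and a test image im already resized to the network input size:

    % Classify the image and compute the Grad-CAM map for the predicted label.
    label = classify(net,im);
    scoreMap = gradCAM(net,im,label);

    % Overlay the map on the image.
    imshow(im)
    hold on
    imagesc(scoreMap,"AlphaData",0.5)
    colormap jet
    colorbar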

Occlusion sensitivity

Example visualization of occlusion sensitivity heat map on an image of a dog. The map highlights the ear and body of the dog.

Locality: Local. Approach: Perturbation-based heat map. Resolution: Low to medium. Requires tuning: Yes.

Occlusion sensitivity measures network sensitivity to small perturbations in the input data. The method perturbs small areas of the input by replacing them with an occluding mask, typically a gray square. As the mask moves across the image, the technique measures the change in probability score for a given class. You can use occlusion sensitivity to highlight which parts of the image are most important to the classification.

To get the best results from occlusion sensitivity, you must choose the right values for the MaskSize and Stride options. This tuning provides the flexibility to examine the input features at different length scales.

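A minimal sketch using the occlusionSensitivity function in Deep Learning Toolbox; net and im are assumed as above, and the MaskSize and Stride values are illustrative rather than recommended settings.

    % Compute the occlusion sensitivity map for the predicted class.
    label = classify(net,im);
    scoreMap = occlusionSensitivity(net,im,label, ...
        "MaskSize",45,"Stride",22);

    % Overlay the map on the image.
    imshow(im)
    hold on
    imagesc(scoreMap,"AlphaData",0.5)
    colormap jet

Larger mask sizes and strides probe coarse structures quickly; smaller values resolve finer features at a higher computational cost.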

LIME

Example visualization of the LIME technique on an image of a dog. The image highlights segments of the ear and head of the dog.

Locality: Local. Approach: Perturbation-based, proxy model, feature importance. Resolution: Low to high. Requires tuning: Yes.

The LIME technique approximates the classification behavior of a deep learning network using a simpler, more interpretable model, such as a linear model or a regression tree [3]. The simple model determines the importance of features of the input data, as a proxy for the importance of the features to the deep learning network.

For more information, see Understand Network Predictions Using LIME.
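A minimal sketch using the imageLIME function in Deep Learning Toolbox; net and im are assumed as above, and the NumFeatures and NumSamples values are illustrative.

    % Explain the predicted class using LIME over superpixel features.
    label = classify(net,im);
    scoreMap = imageLIME(net,im,label,"NumFeatures",64,"NumSamples",2048);

    % Overlay the importance map on the image.
    imshow(im)
    hold on
    imagesc(scoreMap,"AlphaData",0.5)
    colormap jet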

Gradient attribution

Example visualization of the gradient attribution technique on an image of a dog. The image shows highlighted pixels around the eyes and nose of the dog.

Function: None. Locality: Local. Approach: Gradient-based saliency map. Resolution: High. Requires tuning: No.

Gradient attribution methods provide pixel-resolution maps showing which pixels are most important to the network classification decisions [4][5]. These methods compute the gradient of the class score with respect to the input pixels. Intuitively, the maps show which pixels most affect the class score when changed.

The gradient attribution methods produce maps the same size as the input image. Therefore, gradient attribution maps have a high resolution, but they tend to be much noisier, as a well-trained deep network is not strongly dependent on the exact value of specific pixels.

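There is no dedicated function for gradient attribution, but you can compute the input gradients with automatic differentiation. The sketch below assumes the trained network has already been converted to a dlnetwork object dlnet (for example, by removing its output classification layer) and that im is a test image of the correct input size.

    % Convert the image to a formatted dlarray (spatial, spatial, channel, batch).
    dlIm = dlarray(single(im),"SSCB");

    % Index of the highest-scoring class.
    [~,classIdx] = max(extractdata(predict(dlnet,dlIm)));

    % Gradient of that class score with respect to the input pixels.
    grad = dlfeval(@inputGradient,dlnet,dlIm,classIdx);
    grad = extractdata(grad);

    % Display the pixel-wise saliency as the maximum absolute gradient over channels.
    imagesc(max(abs(grad),[],3))
    colormap gray

    function grad = inputGradient(dlnet,dlX,classIdx)
        % Trace the forward pass and differentiate the class score
        % with respect to the input image.
        scores = predict(dlnet,dlX);
        grad = dlgradient(scores(classIdx),dlX);
    end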

Deep dream

Example visualization of the deep dream technique.

Locality: Global. Approach: Gradient-based activation maximization. Resolution: Low to high. Requires tuning: Yes.

Deep dream is a feature visualization technique that synthesizes images that strongly activate network layers [6]. By visualizing these images, you can highlight the image features learned by a network. These images are useful for understanding and diagnosing network behavior.

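A minimal sketch using the deepDreamImage function in Deep Learning Toolbox, assuming a pretrained network net. The layer name is a placeholder for a deep layer of your own network, and the channel and iteration settings are illustrative.

    % Synthesize images that strongly activate the first four channels of a deep layer.
    layer = "conv5";                 % placeholder layer name
    channels = 1:4;
    I = deepDreamImage(net,layer,channels, ...
        "PyramidLevels",2,"NumIterations",20);
    montage(I)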

t-SNE

Example visualization of the t-SNE technique showing a graph with 12 clusters of points in 10 different colors.

Function: tsne (Statistics and Machine Learning Toolbox). Locality: Global. Approach: Dimension reduction. Resolution: N/A. Requires tuning: No.

t-SNE is a dimension reduction technique that preserves distances so that points near each other in the high-dimensional representation are also near each other in the low-dimensional representation [7]. You can use t-SNE to visualize how deep learning networks change the representation of input data as it passes through the network layers.

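A minimal sketch that projects deep-layer activations of a set of test images into 2-D using the tsne function (Statistics and Machine Learning Toolbox). It assumes a pretrained network net, an imageDatastore imdsTest whose images match the network input size, and a placeholder layer name "pool5" for a deep layer of the network.

    % Extract activations for every test image as rows of a matrix.
    acts = activations(net,imdsTest,"pool5","OutputAs","rows");

    % Reduce the activations to two dimensions with t-SNE.
    Y = tsne(acts);

    % Plot the embedding, colored by the true class labels.
    gscatter(Y(:,1),Y(:,2),imdsTest.Labels)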

Maximal and minimal activating images

Four images of sushi with high scores for the class sushi.

Function: None. Locality: Global. Approach: Gradient-based activation maximization. Resolution: N/A. Requires tuning: No.

Visualizing images that strongly or weakly activate the network for each class is a simple way of understanding your network. Images that strongly activate the network highlight what the network thinks a "typical" image from that class looks like. Images that weakly activate the network can help you to discover why your network makes incorrect classification predictions.

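A minimal sketch that finds the test images with the highest and lowest scores for one class, assuming a pretrained classification network net, an imageDatastore imdsTest whose images match the network input size, and a class name className that appears among the network classes.

    % Predict class scores for every test image (numImages-by-numClasses).
    scores = predict(net,imdsTest);
    classIdx = find(net.Layers(end).Classes == className);

    % Indices of the most strongly and most weakly activating images.
    [~,idxMax] = maxk(scores(:,classIdx),4);
    [~,idxMin] = mink(scores(:,classIdx),4);

    % Display the "typical" and the weakly activating images for the class.
    figure, montage(imdsTest.Files(idxMax))
    figure, montage(imdsTest.Files(idxMin))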

To explore applying these methods interactively using an app, see the Understanding Network Predictions for Image Classification (UNPIC) GitHub® repository.

Interpretability Methods for Nonimage Data

Many interpretability methods focus on interpreting image classification or regression networks. Interpreting nonimage data is often more challenging because of the nonvisual nature of the data. You can use Grad-CAM to visualize the classification decisions of a 1-D convolutional network trained on time series data. To explore the activations of an LSTM network, use the activations and tsne (Statistics and Machine Learning Toolbox) functions. To explore the behavior of a network trained on tabular features, use the lime (Statistics and Machine Learning Toolbox) and shapley (Statistics and Machine Learning Toolbox) functions. For more information about interpreting machine learning models, see Interpret Machine Learning Models (Statistics and Machine Learning Toolbox).

References

[1] Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning Deep Features for Discriminative Localization." In 2016 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2921–2929. Las Vegas: IEEE, 2016.

[2] Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization." In 2017 Proceedings of the IEEE Conference on Computer Vision: 618–626. Venice, Italy: IEEE, 2017. https://doi.org/10.1109/iccv.2017.74.

[3] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016): 1135–1144. New York, NY: Association for Computing Machinery, 2016. https://doi.org/10.1145/2939672.2939778.

[4] Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps." Preprint, submitted April 19, 2014. https://arxiv.org/abs/1312.6034.

[5] Tomsett, Richard, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, and Alun Preece. "Sanity Checks for Saliency Metrics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 2020): 6021–6029. https://doi.org/10.1609/aaai.v34i04.6064.

[6] TensorFlow. "DeepDreaming with TensorFlow." https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb.

[7] van der Maaten, Laurens, and Geoffrey Hinton. "Visualizing Data Using t-SNE." Journal of Machine Learning Research 9 (2008): 2579–2605.
