Semantic Segmentation of Multispectral Images Using Deep Learning
This example shows how to perform semantic segmentation of a multispectral image with seven channels using U-Net.
Semantic segmentation involves labeling each pixel in an image with a class. One application of semantic segmentation is tracking deforestation, which is the change in forest cover over time. Environmental agencies track deforestation to assess and quantify the environmental and ecological health of a region.
Deep-learning-based semantic segmentation can yield a precise measurement of vegetation cover from high-resolution aerial photographs. One challenge is differentiating classes with similar visual characteristics, such as trying to classify a green pixel as grass, shrubbery, or tree. To increase classification accuracy, some data sets contain multispectral images that provide additional information about each pixel. For example, the Hamlin Beach State Park data set supplements the color images with three near-infrared channels that provide a clearer separation of the classes.
This example first shows you how to perform semantic segmentation using a pretrained U-Net and then use the segmentation results to calculate the extent of vegetation cover. Then, you can optionally train a U-Net network on the Hamlin Beach State Park data set using a patch-based training methodology.
Download Dataset
This example uses a high-resolution multispectral data set to train the network [1]. The image set was captured using a drone over the Hamlin Beach State Park, NY. The data contains labeled training, validation, and test sets, with 18 object class labels. The size of the data file is 3.0 GB.
Download the MAT-file version of the data set using the downloadHamlinBeachMSIData helper function. This function is attached to the example as a supporting file. Specify dataDir as the desired location of the data.
dataDir = fullfile(tempdir,"rit18_data");
downloadHamlinBeachMSIData(dataDir);
Load the data set.
load(fullfile(dataDir,"rit18_data.mat"));
whos train_data val_data test_data
  Name             Size                     Bytes  Class     Attributes

  test_data      7x12446x7654          1333663576  uint16
  train_data     7x9393x5642            741934284  uint16
  val_data       7x8833x6918            855493716  uint16
The multispectral image data is arranged as numChannels-by-width-by-height arrays. However, in MATLAB®, multichannel images are arranged as width-by-height-by-numChannels arrays. To reshape the data so that the channels are in the third dimension, use the switchChannelsToThirdPlane helper function. This function is attached to the example as a supporting file.
train_data = switchChannelsToThirdPlane(train_data);
val_data = switchChannelsToThirdPlane(val_data);
test_data = switchChannelsToThirdPlane(test_data);
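The helper function is supplied with the example, but its core operation likely amounts to a single permute call. A minimal sketch, assuming the helper does nothing beyond reordering the array dimensions:

function out = switchChannelsToThirdPlane(in)
% Move the channel dimension from the first plane to the third plane,
% converting a numChannels-by-width-by-height array into a
% width-by-height-by-numChannels array.
out = permute(in,[2 3 1]);
end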
Confirm that the data has the correct structure.
whos train_data val_data test_data
  Name               Size                    Bytes  Class     Attributes

  test_data      12446x7654x7           1333663576  uint16
  train_data      9393x5642x7            741934284  uint16
  val_data        8833x6918x7            855493716  uint16
Save the training data as a MAT file and the training labels as a PNG file. This facilitates loading the training data using an imageDatastore and a pixelLabelDatastore during training.
save("train_data.mat","train_data"); imwrite(train_labels,"train_labels.png");
Visualize Multispectral Data
In this data set, the RGB color channels are the 3rd, 2nd, and 1st image channels. Display the color component of the training, validation, and test images as a montage. To make the images appear brighter on the screen, equalize their histograms by using the histeq function.
figure
montage(...
    {histeq(train_data(:,:,[3 2 1])), ...
    histeq(val_data(:,:,[3 2 1])), ...
    histeq(test_data(:,:,[3 2 1]))}, ...
    BorderSize=10,BackgroundColor="white")
title("RGB Component of Training, Validation, and Test Image (Left to Right)")
Display the last three histogram-equalized channels of the training data as a montage. These channels correspond to the near-infrared bands and highlight different components of the image based on their heat signatures. For example, the trees near the center of the second channel image show more detail than the trees in the other two channels.
figure
montage(...
    {histeq(train_data(:,:,4)),histeq(train_data(:,:,5)),histeq(train_data(:,:,6))}, ...
    BorderSize=10,BackgroundColor="white")
title("Training Image IR Channels 1, 2, and 3 (Left to Right)")
Channel 7 is a mask that indicates the valid segmentation region. Display the mask for the training, validation, and test images.
figure
montage(...
    {train_data(:,:,7),val_data(:,:,7),test_data(:,:,7)}, ...
    BorderSize=10,BackgroundColor="white")
title("Mask of Training, Validation, and Test Image (Left to Right)")
Visualize Ground Truth Labels
The labeled images contain the ground truth data for the segmentation, with each pixel assigned to one of the 18 classes. Get a list of the classes with their corresponding IDs.
disp(classes)
0. Other Class/Image Border
1. Road Markings
2. Tree
3. Building
4. Vehicle (Car, Truck, or Bus)
5. Person
6. Lifeguard Chair
7. Picnic Table
8. Black Wood Panel
9. White Wood Panel
10. Orange Landing Pad
11. Water Buoy
12. Rocks
13. Other Vegetation
14. Grass
15. Sand
16. Water (Lake)
17. Water (Pond)
18. Asphalt (Parking Lot/Walkway)
Create a vector of class names.
classNames = ["RoadMarkings","Tree","Building","Vehicle","Person", ...
    "LifeguardChair","PicnicTable","BlackWoodPanel", ...
    "WhiteWoodPanel","OrangeLandingPad","Buoy","Rocks", ...
    "LowLevelVegetation","Grass_Lawn","Sand_Beach", ...
    "Water_Lake","Water_Pond","Asphalt"];
Overlay the labels on the histogram-equalized RGB training image. Add a color bar to the image.
cmap = jet(numel(classNames));
B = labeloverlay(histeq(train_data(:,:,4:6)),train_labels,Transparency=0.8,Colormap=cmap);

figure
imshow(B)
title("Training Labels")
N = numel(classNames);
ticks = 1/(N*2):1/N:1;
colorbar(TickLabels=cellstr(classNames),Ticks=ticks,TickLength=0,TickLabelInterpreter="none");
colormap(cmap)
Perform Semantic Segmentation
Download a pretrained U-Net network.
trainedUnet_url = "https://www.mathworks.com/supportfiles/vision/data/multispectralUnet.mat";
downloadTrainedNetwork(trainedUnet_url,dataDir);
load(fullfile(dataDir,"multispectralUnet.mat"));
To perform the semantic segmentation on the trained network, use the segmentMultispectralImage helper function with the validation data. This function is attached to the example as a supporting file. The segmentMultispectralImage function performs segmentation on image patches using the semanticseg function. Processing patches is required because the size of the image prevents processing the entire image at once.
predictPatchSize = [1024 1024];
segmentedImage = segmentMultispectralImage(val_data,net,predictPatchSize);
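The helper function is attached to the example, but the patch-based approach can be sketched as a simple loop over tiles: segment each patch with semanticseg and write the numeric labels back into a full-size label image. This is a minimal sketch, not the example's actual implementation; it ignores edge effects between patches, assumes the network is fully convolutional so it can accept patches larger than the training tile size, and assumes the seventh (mask) channel should be dropped before prediction.

function out = segmentImageInPatches(im,net,patchSize)
% Segment a large multichannel image one patch at a time.
[h,w,~] = size(im);
out = zeros([h w],"uint8");
for r = 1:patchSize(1):h
    for c = 1:patchSize(2):w
        rows = r:min(r+patchSize(1)-1,h);
        cols = c:min(c+patchSize(2)-1,w);
        patch = im(rows,cols,1:6);               % drop the mask channel
        out(rows,cols) = semanticseg(patch,net, ...
            OutputType="uint8");                 % numeric class label IDs
    end
end
end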
To extract only the valid portion of the segmentation, multiply the segmented image by the mask channel of the validation data.
segmentedImage = uint8(val_data(:,:,7)~=0) .* segmentedImage;

figure
imshow(segmentedImage,[])
title("Segmented Image")
The output of semantic segmentation is noisy. Perform postprocessing to remove noise and stray pixels. Use the medfilt2 function to remove salt-and-pepper noise from the segmentation. Visualize the segmented image with the noise removed.
segmentedImage = medfilt2(segmentedImage,[7,7]);
imshow(segmentedImage,[]);
title("Segmented Image with Noise Removed")
Overlay the segmented image on the histogram-equalized RGB validation image.
B = labeloverlay(histeq(val_data(:,:,[3 2 1])),segmentedImage,Transparency=0.8,Colormap=cmap);

figure
imshow(B)
title("Labeled Segmented Image")
colorbar(TickLabels=cellstr(classNames),Ticks=ticks,TickLength=0,TickLabelInterpreter="none");
colormap(cmap)
Calculate Extent of Vegetation Cover
The semantic segmentation results can be used to answer pertinent ecological questions. For example, what percentage of land area is covered by vegetation? To answer this question, find the number of pixels labeled vegetation. The label IDs 2 ("Tree"), 13 ("LowLevelVegetation"), and 14 ("Grass_Lawn") are the vegetation classes. Also find the total number of valid pixels by summing the pixels in the ROI of the mask image.
vegetationClassIds = uint8([2,13,14]);
vegetationPixels = ismember(segmentedImage(:),vegetationClassIds);
validPixels = (segmentedImage~=0);

numVegetationPixels = sum(vegetationPixels(:));
numValidPixels = sum(validPixels(:));
Calculate the percentage of vegetation cover by dividing the number of vegetation pixels by the number of valid pixels.
percentVegetationCover = (numVegetationPixels/numValidPixels)*100;
fprintf("the percentage of vegetation cover is %3.2f%%.",percentvegetationcover);
The percentage of vegetation cover is 51.72%.
The rest of the example shows you how to train U-Net on the Hamlin Beach data set.
Create Random Patch Extraction Datastore for Training
Use a random patch extraction datastore to feed the training data to the network. This datastore extracts multiple corresponding random patches from an image datastore and pixel label datastore that contain ground truth images and pixel label data. Patching is a common technique to prevent running out of memory for large images and to effectively increase the amount of available training data.
Begin by loading the training images from "train_data.mat" in an imageDatastore. Because the MAT file format is a nonstandard image format, you must use a MAT file reader to enable reading the image data. You can use the helper MAT file reader, matRead6Channels, that extracts the first six channels from the training data and omits the last channel containing the mask. This function is attached to the example as a supporting file.
imds = imageDatastore("train_data.mat",FileExtensions=".mat",ReadFcn=@matRead6Channels);
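The reader is supplied with the example, but its job is small: load the array stored in the MAT file and return all but the mask channel. A minimal sketch, assuming the file contains a single variable holding the width-by-height-by-7 image:

function data = matRead6Channels(filename)
% Read the multispectral image from a MAT file and keep channels 1-6,
% omitting the seventh (mask) channel.
s = load(filename);
f = fieldnames(s);
data = s.(f{1});          % the single image variable in the file
data = data(:,:,1:6);     % omit the mask channel
end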
Create a pixelLabelDatastore to store the label patches containing the 18 labeled regions.
pixelLabelIds = 1:18;
pxds = pixelLabelDatastore("train_labels.png",classNames,pixelLabelIds);
Create a randomPatchExtractionDatastore from the image datastore and the pixel label datastore. Each mini-batch contains 16 patches of size 256-by-256 pixels. One thousand mini-batches are extracted at each iteration of the epoch.
dsTrain = randomPatchExtractionDatastore(imds,pxds,[256,256],PatchesPerImage=16000);
The random patch extraction datastore dsTrain provides mini-batches of data to the network at each iteration of the epoch. Preview the datastore to explore the data.
inputBatch = preview(dsTrain);
disp(inputBatch)
        InputImage        ResponsePixelLabelImage
    __________________    _______________________

    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
    {256×256×6 uint16}     {256×256 categorical}
Create U-Net Network Layers
This example uses a variation of the U-Net network. In U-Net, the initial series of convolutional layers are interspersed with max pooling layers, successively decreasing the resolution of the input image. These layers are followed by a series of convolutional layers interspersed with upsampling operators, successively increasing the resolution of the input image [2]. The name U-Net comes from the fact that the network can be drawn with a symmetric shape like the letter U.
This example modifies the U-Net to use zero-padding in the convolutions, so that the input and the output to the convolutions have the same size. Use the helper function, createUnet, to create a U-Net with a few preselected hyperparameters. This function is attached to the example as a supporting file.
inputTileSize = [256,256,6];
lgraph = createUnet(inputTileSize);
disp(lgraph.Layers)
  58×1 Layer array with layers:

     1   'ImageInputLayer'                        Image Input                  256×256×6 images with 'zerocenter' normalization
     2   'Encoder-Section-1-Conv-1'               2-D Convolution              64 3×3×6 convolutions with stride [1 1] and padding [1 1 1 1]
     3   'Encoder-Section-1-ReLU-1'               ReLU                         ReLU
     4   'Encoder-Section-1-Conv-2'               2-D Convolution              64 3×3×64 convolutions with stride [1 1] and padding [1 1 1 1]
     5   'Encoder-Section-1-ReLU-2'               ReLU                         ReLU
     6   'Encoder-Section-1-MaxPool'              2-D Max Pooling              2×2 max pooling with stride [2 2] and padding [0 0 0 0]
     7   'Encoder-Section-2-Conv-1'               2-D Convolution              128 3×3×64 convolutions with stride [1 1] and padding [1 1 1 1]
     8   'Encoder-Section-2-ReLU-1'               ReLU                         ReLU
     9   'Encoder-Section-2-Conv-2'               2-D Convolution              128 3×3×128 convolutions with stride [1 1] and padding [1 1 1 1]
    10   'Encoder-Section-2-ReLU-2'               ReLU                         ReLU
    11   'Encoder-Section-2-MaxPool'              2-D Max Pooling              2×2 max pooling with stride [2 2] and padding [0 0 0 0]
    12   'Encoder-Section-3-Conv-1'               2-D Convolution              256 3×3×128 convolutions with stride [1 1] and padding [1 1 1 1]
    13   'Encoder-Section-3-ReLU-1'               ReLU                         ReLU
    14   'Encoder-Section-3-Conv-2'               2-D Convolution              256 3×3×256 convolutions with stride [1 1] and padding [1 1 1 1]
    15   'Encoder-Section-3-ReLU-2'               ReLU                         ReLU
    16   'Encoder-Section-3-MaxPool'              2-D Max Pooling              2×2 max pooling with stride [2 2] and padding [0 0 0 0]
    17   'Encoder-Section-4-Conv-1'               2-D Convolution              512 3×3×256 convolutions with stride [1 1] and padding [1 1 1 1]
    18   'Encoder-Section-4-ReLU-1'               ReLU                         ReLU
    19   'Encoder-Section-4-Conv-2'               2-D Convolution              512 3×3×512 convolutions with stride [1 1] and padding [1 1 1 1]
    20   'Encoder-Section-4-ReLU-2'               ReLU                         ReLU
    21   'Encoder-Section-4-DropOut'              Dropout                      50% dropout
    22   'Encoder-Section-4-MaxPool'              2-D Max Pooling              2×2 max pooling with stride [2 2] and padding [0 0 0 0]
    23   'Mid-Conv-1'                             2-D Convolution              1024 3×3×512 convolutions with stride [1 1] and padding [1 1 1 1]
    24   'Mid-ReLU-1'                             ReLU                         ReLU
    25   'Mid-Conv-2'                             2-D Convolution              1024 3×3×1024 convolutions with stride [1 1] and padding [1 1 1 1]
    26   'Mid-ReLU-2'                             ReLU                         ReLU
    27   'Mid-DropOut'                            Dropout                      50% dropout
    28   'Decoder-Section-1-UpConv'               2-D Transposed Convolution   512 2×2×1024 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
    29   'Decoder-Section-1-UpReLU'               ReLU                         ReLU
    30   'Decoder-Section-1-DepthConcatenation'   Depth concatenation          Depth concatenation of 2 inputs
    31   'Decoder-Section-1-Conv-1'               2-D Convolution              512 3×3×1024 convolutions with stride [1 1] and padding [1 1 1 1]
    32   'Decoder-Section-1-ReLU-1'               ReLU                         ReLU
    33   'Decoder-Section-1-Conv-2'               2-D Convolution              512 3×3×512 convolutions with stride [1 1] and padding [1 1 1 1]
    34   'Decoder-Section-1-ReLU-2'               ReLU                         ReLU
    35   'Decoder-Section-2-UpConv'               2-D Transposed Convolution   256 2×2×512 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
    36   'Decoder-Section-2-UpReLU'               ReLU                         ReLU
    37   'Decoder-Section-2-DepthConcatenation'   Depth concatenation          Depth concatenation of 2 inputs
    38   'Decoder-Section-2-Conv-1'               2-D Convolution              256 3×3×512 convolutions with stride [1 1] and padding [1 1 1 1]
    39   'Decoder-Section-2-ReLU-1'               ReLU                         ReLU
    40   'Decoder-Section-2-Conv-2'               2-D Convolution              256 3×3×256 convolutions with stride [1 1] and padding [1 1 1 1]
    41   'Decoder-Section-2-ReLU-2'               ReLU                         ReLU
    42   'Decoder-Section-3-UpConv'               2-D Transposed Convolution   128 2×2×256 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
    43   'Decoder-Section-3-UpReLU'               ReLU                         ReLU
    44   'Decoder-Section-3-DepthConcatenation'   Depth concatenation          Depth concatenation of 2 inputs
    45   'Decoder-Section-3-Conv-1'               2-D Convolution              128 3×3×256 convolutions with stride [1 1] and padding [1 1 1 1]
    46   'Decoder-Section-3-ReLU-1'               ReLU                         ReLU
    47   'Decoder-Section-3-Conv-2'               2-D Convolution              128 3×3×128 convolutions with stride [1 1] and padding [1 1 1 1]
    48   'Decoder-Section-3-ReLU-2'               ReLU                         ReLU
    49   'Decoder-Section-4-UpConv'               2-D Transposed Convolution   64 2×2×128 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
    50   'Decoder-Section-4-UpReLU'               ReLU                         ReLU
    51   'Decoder-Section-4-DepthConcatenation'   Depth concatenation          Depth concatenation of 2 inputs
    52   'Decoder-Section-4-Conv-1'               2-D Convolution              64 3×3×128 convolutions with stride [1 1] and padding [1 1 1 1]
    53   'Decoder-Section-4-ReLU-1'               ReLU                         ReLU
    54   'Decoder-Section-4-Conv-2'               2-D Convolution              64 3×3×64 convolutions with stride [1 1] and padding [1 1 1 1]
    55   'Decoder-Section-4-ReLU-2'               ReLU                         ReLU
    56   'Final-ConvolutionLayer'                 2-D Convolution              18 1×1×64 convolutions with stride [1 1] and padding [0 0 0 0]
    57   'Softmax-Layer'                          Softmax                      softmax
    58   'Segmentation-Layer'                     Pixel Classification Layer   Cross-entropy loss
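If you have Computer Vision Toolbox, a comparable same-padding encoder-decoder network can also be generated with the built-in unetLayers function instead of the example's createUnet helper. This is an alternative sketch only; the resulting layer names and weight initialization differ from the listing above.

% Alternative: build a same-padding U-Net with the built-in generator.
% EncoderDepth=4 matches the four encoder/decoder sections shown above.
inputTileSize = [256,256,6];
numClasses = 18;
lgraphAlt = unetLayers(inputTileSize,numClasses,EncoderDepth=4);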
Select Training Options
Train the network using stochastic gradient descent with momentum (SGDM) optimization. Specify the hyperparameter settings for SGDM by using the trainingOptions (Deep Learning Toolbox) function.
Training a deep network is time-consuming. Accelerate the training by specifying a high learning rate. However, this can cause the gradients of the network to explode or grow uncontrollably, preventing the network from training successfully. To keep the gradients in a meaningful range, enable gradient clipping by specifying "GradientThreshold" as 0.05, and specify "GradientThresholdMethod" to use the L2-norm of the gradients.
initialLearningRate = 0.05;
maxEpochs = 150;
miniBatchSize = 16;
l2reg = 0.0001;

options = trainingOptions("sgdm",...
    InitialLearnRate=initialLearningRate, ...
    Momentum=0.9,...
    L2Regularization=l2reg,...
    MaxEpochs=maxEpochs,...
    MiniBatchSize=miniBatchSize,...
    LearnRateSchedule="piecewise",...
    Shuffle="every-epoch",...
    GradientThresholdMethod="l2norm",...
    GradientThreshold=0.05, ...
    Plots="training-progress", ...
    VerboseFrequency=20);
Train the Network or Download Pretrained Network
To train the network, set the doTraining variable in the following code to true. Train the model by using the trainNetwork (Deep Learning Toolbox) function.
Train on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU. For more information, see GPU Computing Requirements (Parallel Computing Toolbox). Training takes about 20 hours on an NVIDIA Titan X.
doTraining = false;
if doTraining
    net = trainNetwork(dsTrain,lgraph,options);
    modelDateTime = string(datetime("now",Format="yyyy-MM-dd-HH-mm-ss"));
    save(fullfile(dataDir,"multispectralUnet-"+modelDateTime+".mat"),"net");
end
Evaluate Segmentation Accuracy
Segment the validation data.
segmentedImage = segmentMultispectralImage(val_data,net,predictPatchSize);
Save the segmented image and ground truth labels as PNG files. The example uses these files to calculate accuracy metrics.
imwrite(segmentedImage,"results.png");
imwrite(val_labels,"gtruth.png");
Load the segmentation results and ground truth using pixelLabelDatastore.
pxdsResults = pixelLabelDatastore("results.png",classNames,pixelLabelIds);
pxdsTruth = pixelLabelDatastore("gtruth.png",classNames,pixelLabelIds);
Measure the global accuracy of the semantic segmentation by using the evaluateSemanticSegmentation function.
ssm = evaluateSemanticSegmentation(pxdsResults,pxdsTruth,Metrics="global-accuracy");
Evaluating semantic segmentation results
----------------------------------------
* Selected metrics: global accuracy.
* Processed 1 images.
* Finalizing... Done.
* Data set metrics:

    GlobalAccuracy
    ______________

       0.90411
The global accuracy score indicates that just over 90% of the pixels are classified correctly.
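Global accuracy can hide poor performance on small classes, so you may also want class-level metrics. The following is a possible extension (not part of the original example) that requests per-class accuracy and intersection over union (IoU) from the same datastores:

% Request additional metrics; the returned object also contains
% per-class tables such as ssm.ClassMetrics.
ssm = evaluateSemanticSegmentation(pxdsResults,pxdsTruth, ...
    Metrics=["global-accuracy","accuracy","iou"]);
disp(ssm.ClassMetrics)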
References
[1] Kemker, R., C. Salvaggio, and C. Kanan. "High-Resolution Multispectral Dataset for Semantic Segmentation." CoRR, abs/1703.01918. 2017.
[2] Ronneberger, O., P. Fischer, and T. Brox. "U-Net: Convolutional Networks for Biomedical Image Segmentation." CoRR, abs/1505.04597. 2015.
[3] Kemker, Ronald, Carl Salvaggio, and Christopher Kanan. "Algorithms for Semantic Segmentation of Multispectral Remote Sensing Imagery Using Deep Learning." ISPRS Journal of Photogrammetry and Remote Sensing, Deep Learning RS Data, 145 (November 1, 2018): 60-77. https://doi.org/10.1016/j.isprsjprs.2018.04.014.
Related Topics
- Semantic Segmentation Using Deep Learning