
Brain MRI Segmentation Using Pretrained 3-D U-Net

This example shows how to segment a brain MRI using a deep neural network.

Segmentation of brain scans enables the visualization of individual brain structures. Brain segmentation is also commonly used for quantitative volumetric and shape analyses to characterize healthy and diseased populations. Manual segmentation by clinical experts is considered the highest standard in segmentation. However, the process is extremely time-consuming and not practical for labeling large data sets. Additionally, labeling requires expertise in neuroanatomy and is prone to errors and limitations in interrater and intrarater reproducibility. Trained segmentation algorithms, such as convolutional neural networks, have the potential to automate the labeling of large clinical data sets.

In this example, you use the pretrained SynthSeg neural network [1], a 3-D U-Net for brain MRI segmentation. SynthSeg can segment brain scans of any contrast and resolution without retraining or fine-tuning. SynthSeg is also robust to a wide array of subject populations, from young and healthy to aging and diseased subjects, and to a wide array of scan conditions, such as white matter lesions, with or without preprocessing, including bias field corruption, skull stripping, intensity normalization, and template registration.

Figure: Comparison of the predicted segmentation map and the ground truth segmentation map.

Download Brain MRI and Label Data

This example uses a subset of the CANDI data set [2], [3]. The subset consists of a brain MRI volume and the corresponding ground truth label volume for one patient. Both files are in the NIfTI file format. The total size of the data files is approximately 2.5 MB.

Run this code to download the data set from the MathWorks® website and unzip the downloaded folder.

zipFile = matlab.internal.examples.downloadSupportFile("image","data/brainSegData.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)

The dataDir folder contains the downloaded and unzipped data set.

dataDir = fullfile(filepath,"brainSegData");

Load Pretrained Network

This example uses a pretrained TensorFlow-Keras convolutional neural network. Download the pretrained network from the MathWorks® website by using the helper function downloadTrainedNetwork. The helper function is attached to this example as a supporting file. The size of the pretrained network is approximately 51 MB.

trainedBrainCANDINetwork_url = "https://www.mathworks.com/supportfiles/" + ...
    "image/data/trainedBrainSynthSegNetwork.h5";
downloadTrainedNetwork(trainedBrainCANDINetwork_url,dataDir);

Load Test Data

Read the metadata from the brain MRI volume by using the niftiinfo function. Read the brain MRI volume by using the niftiread function.

imFile = fullfile(dataDir,"anat.nii.gz");
metaData = niftiinfo(imFile);
vol = niftiread(metaData);

In this example, you segment the brain into 32 classes corresponding to anatomical structures. Read the names and numeric identifiers for each class label by using the getBrainCANDISegmentationLabels helper function. The helper function is attached to this example as a supporting file.

labelDirs = fullfile(dataDir,"groundTruth");
[classNames,labelIDs] = getBrainCANDISegmentationLabels;

Preprocess Test Data

Preprocess the MRI volume by using the preprocessBrainCANDIData helper function. The helper function is attached to this example as a supporting file. The helper function performs these steps:

  • Resampling — If resample is true, resample the data to an isotropic voxel size of 1-by-1-by-1 mm. By default, resample is false and the function does not perform resampling. To test the pretrained network on images that do not have an isotropic voxel size, set resample to true.

  • Alignment — Rotate the volume to a standardized RAS orientation.

  • Cropping — Crop the volume to a maximum size of 192 voxels in each dimension.

  • Normalization — Normalize the intensity values of the volume to the range [0, 1], which improves contrast. A minimal sketch of this step appears after this list.
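
The normalization step can be illustrated with a short min-max rescaling sketch. Whether the helper function implements it exactly this way is an assumption; the built-in rescale function performs the same mapping to [0, 1].

% Minimal min-max normalization sketch (an assumption about the helper's
% exact implementation); rescale(vol) is an equivalent built-in alternative.
v = double(vol);
volNorm = (v - min(v(:))) ./ (max(v(:)) - min(v(:)));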

resample = false;
cropSize = 192;
[volProc,cropIdx,imSize] = preprocessBrainCANDIData(vol,metaData,cropSize,resample);
inputSize = size(volProc);

Convert the preprocessed MRI volume into a formatted deep learning array with the "SSSCB" (spatial, spatial, spatial, channel, batch) format by using the dlarray (Deep Learning Toolbox) function.

volDL = dlarray(volProc,"SSSCB");

Define Network Architecture

Import the network layers from the downloaded model file of the pretrained network by using the importKerasLayers (Deep Learning Toolbox) function. The importKerasLayers function requires the Deep Learning Toolbox™ Converter for TensorFlow Models support package. If this support package is not installed, then importKerasLayers provides a download link. Specify ImportWeights as true to import the layers with the weights from the same HDF5 file. The function returns a LayerGraph (Deep Learning Toolbox) object.

The Keras network contains some layers that Deep Learning Toolbox™ does not support. The importKerasLayers function displays a warning and replaces the unsupported layers with placeholder layers.

modelFile = fullfile(dataDir,"trainedBrainSynthSegNetwork.h5");
lgraph = importKerasLayers(modelFile,ImportWeights=true,ImageInputSize=inputSize);
Warning: Imported layers have no output layer because the model does not specify a loss function. Add an output layer, or use the 'OutputLayerType' name-value argument when you call importKerasLayers.
Warning: Unable to import some Keras layers, because they are not supported by Deep Learning Toolbox. They have been replaced by placeholder layers. To find these layers, call the function findPlaceholderLayers on the returned object.

To replace the placeholder layers in the imported network, first identify the names of the layers to replace. Find the placeholder layers by using the findPlaceholderLayers (Deep Learning Toolbox) function.

placeholderLayers = findPlaceholderLayers(lgraph)
placeholderLayers = 
  PlaceholderLayer with properties:

                  Name: 'unet_prediction'
    KerasConfiguration: [1×1 struct]
               Weights: []
           InputLabels: {''}
          OutputLabels: {''}

   Learnable Parameters
    No properties.

   State Parameters
    No properties.

Define a built-in layer with the same configuration as the imported Keras layer. Because the unet_prediction placeholder corresponds to a softmax activation, create a softmax layer.

sf = softmaxLayer;

Replace the placeholder layer with the new layer by using the replaceLayer (Deep Learning Toolbox) function.

lgraph = replaceLayer(lgraph,"unet_prediction",sf);

Convert the network to a dlnetwork (Deep Learning Toolbox) object.

net = dlnetwork(lgraph);

Display the updated layer graph information.

layerGraph(net)
ans = 
  LayerGraph with properties:

     InputNames: {'unet_input'}
    OutputNames: {1×0 cell}
         Layers: [60×1 nnet.cnn.layer.Layer]
    Connections: [63×2 table]

Predict Using Test Data

Predict Network Output

Predict the segmentation output for the preprocessed MRI volume. The segmentation output predictIm contains 32 channels corresponding to the segmentation label classes, such as "background", "leftCerebralCortex", and "rightThalamus". The predictIm output assigns confidence scores to each voxel for every class. The confidence scores reflect the likelihood of the voxel being part of the corresponding class. This prediction is different from the final semantic segmentation output, which assigns each voxel to exactly one class.

predictIm = predict(net,volDL);
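
To see how these per-class scores relate to a label map, you can take the class with the highest score at each voxel. This is only a quick sketch that skips the postprocessing described later; the channel dimension index follows from the "SSSCB" format.

% Quick sketch: voxelwise argmax over the 32 class channels (dim 4 in
% the "SSSCB" format). This skips smoothing and other postprocessing.
scores = extractdata(predictIm);     % numeric array of confidence scores
[~,rawLabels] = max(scores,[],4);    % index of the best class per voxel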

Test Time Augmentation

This example uses test time augmentation to improve segmentation accuracy. In general, augmentation applies random transformations to an image to increase the variability of a data set. You can use augmentation before network training to increase the size of the training data set. Test time augmentation applies random transformations to test images to create multiple versions of the test image. You can then pass each version of the test image to the network for prediction. The network calculates the overall segmentation result as the average prediction for all versions of the test image. Test time augmentation improves segmentation accuracy by averaging out random errors in the individual network predictions.

By default, this example flips the MRI volume in the left-right direction, resulting in a flipped volume flippedData. The network output for the flipped volume is flipPredictIm. Set flipVal to false to skip the test time augmentation and speed up prediction.

flipVal = true;
if flipVal
    flippedData = fliplr(volProc);
    flippedData = flip(flippedData,2);
    flippedData = flip(flippedData,1);
    flippedData = dlarray(flippedData,"SSSCB");
    flipPredictIm = predict(net,flippedData);
else
    flipPredictIm = [];
end
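
The averaging of the two predictions happens inside the postprocessing helper in the next section. As a rough, hypothetical sketch of the idea only: a real implementation must also swap the left and right class channels of a left-right flipped prediction, which the helper handles.

% Hypothetical sketch of averaging the two predictions. The flipped
% prediction is flipped back before averaging; left/right class channels
% would also need to be swapped, which postprocessBrainCANDIData handles.
if flipVal
    avgScores = (extractdata(predictIm) + ...
        flip(extractdata(flipPredictIm),1))/2;
end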

Postprocess Segmentation Prediction

To get the final segmentation maps, postprocess the network output by using the postprocessBrainCANDIData helper function. The helper function is attached to this example as a supporting file. The postprocessBrainCANDIData function performs these steps:

  • Smoothing — Apply a 3-D Gaussian smoothing filter to reduce noise in the predicted segmentation masks (see the sketch after this list).

  • Morphological filtering — Keep only the largest connected component of the predicted segmentation masks to remove additional noise.

  • Segmentation — Assign each voxel to the label class with the greatest confidence score for that voxel.

  • Resizing — Resize the segmentation map to the original input volume size. Resizing the label image allows you to visualize the labels as an overlay on the grayscale MRI volume.

  • Alignment — Rotate the segmentation map back to the orientation of the original input MRI volume.
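
As a rough illustration of the first two steps, this sketch smooths the scores for one class and keeps the largest connected component of its binary mask. The class index, threshold, and variable names are hypothetical; the actual helper processes all 32 classes.

% Hypothetical sketch of the smoothing and morphological filtering steps
% for a single class (class 2 chosen arbitrarily; threshold is assumed).
classScores = imgaussfilt3(scores(:,:,:,2),1);   % 3-D Gaussian smoothing
mask = classScores > 0.5;                        % assumed binary threshold
cc = bwconncomp(mask);                           % 3-D connected components
[~,idx] = max(cellfun(@numel,cc.PixelIdxList));  % largest component index
cleanMask = false(size(mask));
cleanMask(cc.PixelIdxList{idx}) = true;          % keep largest component only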

The final segmentation result, predictedSegMaps, is a 3-D categorical array the same size as the original input volume. Each element corresponds to one voxel and has one categorical label.

predictedSegMaps = postprocessBrainCANDIData(predictIm,flipPredictIm,imSize, ...
    cropIdx,metaData,classNames,labelIDs);

Overlay a slice from the predicted segmentation map on a corresponding slice from the input volume by using the labeloverlay function. Include all the brain structure labels except the background label.

sliceIdx = 80;
testSlice = rescale(vol(:,:,sliceIdx));
predSegMap = predictedSegMaps(:,:,sliceIdx);
B = labeloverlay(testSlice,predSegMap,"IncludedLabels",2:32);
figure
montage({testSlice,B})

Quantify Segmentation Accuracy

Measure the segmentation accuracy by comparing the predicted segmentation labels with the ground truth labels drawn by clinical experts.

Create a pixelLabelDatastore (Computer Vision Toolbox) object to store the labels. Because the NIfTI file format is a nonstandard image format, you must use a NIfTI file reader to read the pixel label data. You can use the helper NIfTI file reader, niftiReader, defined at the bottom of this example.

pxds = pixelLabelDatastore(labelDirs,classNames,labelIDs,FileExtensions=".gz", ...
    ReadFcn=@niftiReader);

Read the ground truth labels from the pixel label datastore.

groundTruthLabel = read(pxds);
groundTruthLabel = groundTruthLabel{1};

Measure the segmentation accuracy by using the dice function. This function computes the Dice index between the predicted and ground truth segmentations.

diceResult = zeros(length(classNames),1);
for j = 1:length(classNames)
    diceResult(j) = dice(groundTruthLabel==classNames(j), ...
        predictedSegMaps==classNames(j));
end

Calculate the average Dice index across all labels for the MRI volume.

meanDiceScore = mean(diceResult);
disp("Average Dice score across all labels = " + num2str(meanDiceScore))
Average Dice score across all labels = 0.80793

Visualize statistics about the Dice indices across all the label classes as a box chart. The middle blue line in the plot shows the median Dice index. The upper and lower bounds of the blue box indicate the 25th and 75th percentiles, respectively. The black whiskers extend to the most extreme data points that are not outliers.

figure
boxchart(diceResult)
title("Dice Accuracy")
xticklabels("All Label Classes")
ylabel("Dice Coefficient")

Supporting Functions

The niftiReader helper function reads a NIfTI file in a datastore.

function data = niftiReader(filename)
    data = niftiread(filename);   % read the NIfTI voxel data
    data = uint8(data);           % convert the label values to uint8
end

References

[1] Billot, Benjamin, Douglas N. Greve, Oula Puonti, Axel Thielscher, Koen Van Leemput, Bruce Fischl, Adrian V. Dalca, and Juan Eugenio Iglesias. "SynthSeg: Domain Randomisation for Segmentation of Brain Scans of Any Contrast and Resolution." arXiv:2107.09559 [cs, eess], December 21, 2021.

[2] "NITRC: CANDI Share: Schizophrenia Bulletin 2008: Tool/Resource Info." Accessed October 17, 2022.

[3] Frazier, J. A., S. M. Hodge, J. L. Breeze, A. J. Giuliano, J. E. Terry, C. M. Moore, D. N. Kennedy, et al. "Diagnostic and Sex Effects on Limbic Volumes in Early-Onset Bipolar Disorder and Schizophrenia." Schizophrenia Bulletin 34, no. 1 (October 27, 2007): 37–46.
