
Compress Network for Estimating Battery State of Charge

Since R2023b

This example shows how to compress a neural network for predicting the state of charge of a battery using projection and principal component analysis.

Battery state of charge (SOC) is the level of charge of an electric battery relative to its capacity, measured as a percentage. SOC is critical information for the vehicle energy management system and must be estimated accurately to ensure reliable and affordable electrified vehicles (xEVs). However, because of the nonlinear, temperature-, health-, and SOC-dependent behavior of Li-ion batteries, SOC estimation is still a significant automotive engineering challenge. Traditional approaches to this problem, such as electrochemical models, usually require precise parameters and knowledge of the battery composition as well as its physical response. In contrast, using neural networks is a data-driven approach that requires minimal knowledge of the battery or its nonlinear behavior [1].

The compressNetworkUsingProjection function compresses a network by projecting layers into smaller parameter subspaces. For optimal initialization of the projected network, the function projects the learnable parameters of projectable layers into a subspace that maintains the highest variance in neuron activations. After you compress a neural network using projection, you can fine-tune the network to increase its accuracy.
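
For reference, the compress-then-fine-tune pattern has a small API surface. The following minimal sketch is not part of the example's main workflow; it uses placeholder names (a trained dlnetwork net, a minibatchqueue mbq of training predictors, and training data and options for fine-tuning) that the rest of this example defines in concrete form, and an explained variance goal chosen only for illustration.

% Minimal sketch of the projection workflow (placeholder names, not part of this example).
npca = neuronPCA(net,mbq);                                  % one-off PCA analysis of activations
netProjected = compressNetworkUsingProjection(net,npca, ...
    ExplainedVarianceGoal=0.998);                           % compress by projection
netProjected = trainnet(XTrain,YTrain,netProjected, ...
    "mean-squared-error",options);                          % fine-tune the compressed network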

In this example, you:

  1. Train a recurrent neural network to predict the state of charge of a Li-ion battery, given time series data representing features of the battery.

  2. Compress the network using projection.

  3. Fine-tune the compressed network.

  4. Generate library-free C++ code for making predictions using the original network and the compressed, fine-tuned network.

  5. Compare the size and the performance of the original network and the compressed, fine-tuned network.

Compressing a network reduces the size of the network in memory and speeds up inference. You can apply the compression and fine-tuning steps to the network trained in the Predict Battery State of Charge Using Deep Learning example.

Download Data

Each file in the LG_HG2_prepared_dataset_McMasterUniversity_Jan_2020 data set contains a time series X of five predictors (voltage, current, temperature, average voltage, and average current) and a time series Y of one target (SOC). Each file represents data collected at a different ambient temperature. The predictors have been normalized to the range [0,1].

Specify the URL from which to download the data set. Alternatively, you can download the data set manually from the Mendeley Data repository listed in reference [1].

url = "https://data.mendeley.com/public-files/datasets/cp3473x7xv/files/ad7ac5c9-2b9e-458a-a91f-6f3da449bdfb/file_downloaded";

Set downloadFolder to the location where you want to download the ZIP file and outputFolder to the location where you want to extract it.

downloadFolder = tempdir;
outputFolder = fullfile(downloadFolder,"LGHG2@n10C_to_25degC");

Download and extract the LG_HG2_prepared_dataset_McMasterUniversity_Jan_2020 data set.

if ~exist(outputFolder,"dir")
    fprintf("Downloading LGHG2@n10C_to_25degC.zip (56 MB) ... ")
    filename = fullfile(downloadFolder,"LGHG2@n10C_to_25degC.zip");
    websave(filename,url);
    unzip(filename,outputFolder)
end
Downloading LGHG2@n10C_to_25degC.zip (56 MB) ... 

Prepare Training and Validation Data

Set the subsequence length to 500 and the number of input features to 3. Because this example uses an LSTM network that can learn long-term trends, the remaining two features in the data set (the average voltage and the average current) are not required.

chunkSize = 500;
numFeatures = 3;

Use the setupData function, provided as a supporting function at the end of this example, to prepare the training and validation data by splitting the sequences into subsequences of length 500.

trainingFile = fullfile(outputFolder,"Train","TRAIN_LGHG2@n10degC_to_25degC_Norm_5Inputs.mat");
[XTrain,YTrain] = setupData(trainingFile,chunkSize,numFeatures);
validationFile = fullfile(outputFolder,"Validation","01_TEST_LGHG2@n10degC_Norm_(05_Inputs).mat");
[XVal,YVal] = setupData(validationFile,chunkSize,numFeatures);

Visualize one of the observations in the training data set by plotting the target SOC and the corresponding predictors.

responseToPreview = 30;
figure
plot(YTrain{responseToPreview})
hold on
plot(XTrain{responseToPreview}(1,:))
plot(XTrain{responseToPreview}(2,:))
plot(XTrain{responseToPreview}(3,:))
legend(["SOC" "Voltage" "Current" "Temperature"])
xlabel("Sample")
hold off

Define Network Architecture

Define the following LSTM network, which predicts the battery state of charge. This example uses an LSTM network instead of a network with three hidden layers to demonstrate the effect of projection on a larger network. All of the steps in this example can also be applied to the network trained in the Predict Battery State of Charge Using Deep Learning example.

  • For the sequence input, specify a sequence input layer with an input size matching the number of features. Rescale the input to be in the range [-1,1] by setting the Normalization option to "rescale-symmetric".

  • To learn long-term dependencies in the sequence data, include two LSTM layers with 128 and 64 hidden units, respectively.

  • To reduce overfitting, include two dropout layers with a dropout probability of 0.2.

  • Include a fully connected layer with a size that matches the size of the output. To bound the output in the interval [0,1], include a sigmoid layer.

numHiddenUnits = 128;
numResponses = 1;
dropoutProb = 0.2;
layers = [ ...
    sequenceInputLayer(numFeatures,Normalization="rescale-symmetric")
    lstmLayer(numHiddenUnits)
    dropoutLayer(dropoutProb)
    lstmLayer(numHiddenUnits/2)
    dropoutLayer(dropoutProb)
    fullyConnectedLayer(numResponses)
    sigmoidLayer];

Specify the training options.

  • Train for 100 epochs with mini-batches of size 64 using the "adam" solver.

  • Specify an initial learning rate of 0.01, a learning rate drop period of 30, and a learning rate drop factor of 0.1.

  • To prevent the gradients from exploding, set the gradient threshold to 1.

  • Shuffle the training data every epoch.

  • Specify the validation data.

  • To avoid having to rearrange the training data, specify the input and target data formats as "CTB" (channel, time, batch).

  • Return the network with the lowest validation loss.

  • Display the training progress and suppress the verbose output.

options = trainingOptions("adam", ...
    MiniBatchSize=64, ...
    MaxEpochs=100, ...
    InitialLearnRate=1e-2, ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropPeriod=30, ...
    LearnRateDropFactor=0.1, ...
    GradientThreshold=1, ...
    Shuffle="every-epoch", ...
    ValidationData={XVal,YVal}, ...
    ValidationFrequency=50, ...
    InputDataFormats="CTB", ...
    TargetDataFormats="CTB", ...
    OutputNetwork="best-validation-loss", ...
    Plots="training-progress", ...
    Verbose=false);

Train Network

Train the network using the trainnet function. Specify the loss function as mean squared error.

recurrentNet = trainnet(XTrain,YTrain,layers,"mean-squared-error",options);

Test Network

Evaluate the performance of the network on the test data set and compare the network predictions to the measured values.

testFile = fullfile(outputFolder,"Test","01_TEST_LGHG2@n10degC_Norm_(05_Inputs).mat");
s = load(testFile);
XTest = s.X(1:numFeatures,:);
XTest = dlarray(XTest,"CT");
YPredOriginal = predict(recurrentNet,XTest);
YTest = s.Y;
rmseOriginal = rmse(YTest,extractdata(YPredOriginal))
rmseOriginal = single
    0.0336

Plot the predictions and the measured values.

figure
plot(YPredOriginal);
hold on
plot(YTest,'k--',LineWidth=2);
hold off
xlabel("Sample")
ylabel("Y")
legend("SOC estimated using original network","SOC ground truth",Location="best");

Inspect the number of learnables in the network using the numLearnables function. The numLearnables function is provided as a supporting function at the end of this example.

learnables = numLearnables(recurrentNet)
learnables = 117057

Save the trained network.

save("recurrentNet.mat","recurrentNet");

Explore Compression Levels

The compressNetworkUsingProjection function uses principal component analysis (PCA) to identify the subspace of learnable parameters that results in the highest variance in neuron activations by analyzing the network activations using a data set of training data. This analysis requires only the predictors of the training data to compute the network activations. It does not require the training targets.

The PCA step can be computationally intensive. If you expect to compress the same network multiple times (for example, when exploring different levels of compression), then perform the PCA step first and reuse the resulting neuronPCA object.

Create a mini-batch queue containing the training data.

  • Specify a mini-batch size of 64.

  • Specify that the output data has the format "CTB" (channel, time, batch).

  • Preprocess the mini-batches by concatenating the sequences over the third dimension.

mbSize = 64;
mbq = minibatchqueue( ...
    arrayDatastore(XTrain,OutputType="same",ReadSize=mbSize), ...
    MiniBatchSize=mbSize, ...
    MiniBatchFormat="CTB", ...
    MiniBatchFcn=@(X) cat(3,X{:}));

Create the neuronPCA object. To view information about the steps of the neuron PCA algorithm, set the VerbosityLevel option to "steps".

npca = neuronPCA(recurrentNet,mbq,VerbosityLevel="steps");
Using solver mode "direct".
Computing covariance matrices for activations connected to: "lstm_1/in","lstm_1/out","lstm_2/in","lstm_2/out","fc/in","fc/out"
Computing eigenvalues and eigenvectors for activations connected to: "lstm_1/in","lstm_1/out","lstm_2/in","lstm_2/out","fc/in","fc/out"
neuronPCA analyzed 3 layers: "lstm_1","lstm_2","fc"

View the properties of the neuronPCA object.

npca
npca = 
  neuronPCA with properties:
                  LayerNames: ["lstm_1"    "lstm_2"    "fc"]
      ExplainedVarianceRange: [0 1]
    LearnablesReductionRange: [0.6358 0.9770]
            InputEigenvalues: {[3×1 double]  [128×1 double]  [64×1 double]}
           InputEigenvectors: {[3×3 double]  [128×128 double]  [64×64 double]}
           OutputEigenvalues: {[128×1 double]  [64×1 double]  [6.3770]}
          OutputEigenvectors: {[128×128 double]  [64×64 double]  [1]}

The explained variance of a network details how well the space of network activations can capture the underlying features of the data. To explore different amounts of compression, iterate over different values of the ExplainedVarianceGoal option of the compressNetworkUsingProjection function and compare the results.

numValues = 10;
explainedVarGoal = 1 - logspace(-3,0,numValues);
explainedVariance = zeros(1,numValues);
learnablesReduction = zeros(1,numValues);
accuracy = zeros(1,numValues);
XValCompression = dlarray(cat(3,XVal{:}),"CTB");
YValCompression = cat(3,YVal{:});

for i = 1:numValues
    varianceGoal = explainedVarGoal(i);
    [trialNetwork,info] = compressNetworkUsingProjection(recurrentNet,npca, ...
        ExplainedVarianceGoal=varianceGoal, ...
        VerbosityLevel="off");
    explainedVariance(i) = info.ExplainedVariance;
    learnablesReduction(i) = info.LearnablesReduction;
    YPredProjected = predict(trialNetwork,XValCompression);
    YPredProjected = extractdata(YPredProjected);
    accuracy(i) = rmse(YValCompression,YPredProjected,"all");
end

Plot the RMSE and the explained variance of the compressed networks against the learnable reduction.

figure
tiledlayout("flow")
nexttile
plot(learnablesReduction,accuracy,'+-')
ylabel("RMSE")
title("Effect of Different Compression Levels")
nexttile
plot(learnablesReduction,explainedVariance,'+-')
ylim([0.8 1])
ylabel("Explained Variance")
xlabel("Learnable Reduction")

The graph shows that an increase in learnable reduction has a corresponding increase in RMSE (decrease in accuracy). A learnable reduction value of around 94% is a good compromise between the amount of compression and the RMSE.
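
If you prefer to choose the compression level programmatically rather than by reading the graph, one option is to select the largest learnable reduction whose validation RMSE stays close to the best RMSE observed in the sweep. The following is a minimal sketch, not part of the original example; the 10% tolerance is an arbitrary assumption that you can adjust to your accuracy requirements.

% Sketch: pick the largest learnable reduction whose validation RMSE is within
% 10% of the best RMSE observed in the sweep (tolerance is an assumed value).
tolerance = 0.1;
candidates = accuracy <= (1 + tolerance)*min(accuracy);
targetReduction = max(learnablesReduction(candidates))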

Compress Network Using Projection

Compress the network with a learnable reduction goal of 94% using the compressNetworkUsingProjection function and the neuronPCA object. To ensure that the projected network supports library-free code generation, specify that the projected layers are unpacked.

recurrentNetProjected = compressNetworkUsingProjection(recurrentNet,npca, ...
    LearnablesReductionGoal=0.94,UnpackProjectedLayers=true);
Compressed network has 94.8% fewer learnable parameters.
Projection compressed 3 layers: "lstm_1","lstm_2","fc"

Inspect the number of learnables in the projected network.

learnablesProjected = numLearnables(recurrentNetProjected)
learnablesProjected = 6092

Evaluate the projected network's performance on the test data set.

YPredProjected = predict(recurrentNetProjected,XTest);
rmseProjected = rmse(YTest,extractdata(YPredProjected))
rmseProjected = single
    0.0595

Compressing the network increases the root-mean-square error of the predictions.

Fine-Tune Compressed Network

You can improve the accuracy of the compressed network by retraining it.

Reduce the number of training epochs and the number of epochs between drops in the learning rate.

options.MaxEpochs = options.MaxEpochs/2;
options.LearnRateDropPeriod = options.LearnRateDropPeriod/2;

Train the network using the trainnet function. Specify the loss function as mean squared error.

recurrentNetProjected = trainnet(XTrain,YTrain,recurrentNetProjected,"mean-squared-error",options);

Evaluate the fine-tuned projected network's performance on the test data set.

YPredProjected = predict(recurrentNetProjected,XTest);
rmseProjected = rmse(YTest,extractdata(YPredProjected))
rmseProjected = single
    0.0349

Save the fine-tuned projected network.

save("recurrentNetProjected.mat","recurrentNetProjected");

Generate C++ Code

Generate C++ code based on the original network and the fine-tuned compressed network.

Create Entry-Point Function for Code Generation

An entry-point function is a top-level MATLAB function from which you generate code. Write an entry-point function in MATLAB that:

  • Uses the coder.loadDeepLearningNetwork function to load a deep learning model and to construct and set up a network for code generation. For more information, see Load Pretrained Networks for Code Generation (GPU Coder).

  • Calls the predict function to predict the responses.

The entry-point functions recurrentNetPredict.m and recurrentNetProjectedPredict.m are provided as supporting files with this example. To access these files, open the example as a live script.

Inspect the entry-point functions.

type recurrentNetPredict.m
function y = recurrentNetPredict(x)
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork("recurrentNet.mat");
end
y = predict(net,x);
end

type recurrentNetProjectedPredict.m
function y = recurrentNetProjectedPredict(x)
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork("recurrentNetProjected.mat");
end
y = predict(net,x);
end

Generate Code

To configure build settings, such as the output file name, location, and type, create a coder configuration object. To create the object, use the coder.config function and specify that the output should be a MEX file.

cfg = coder.config("mex");

Set the language to use in the generated code to C++.

cfg.TargetLang = "C++";

To generate code that does not use any third-party libraries, set the target deep learning library to none.

cfg.DeepLearningConfig = coder.DeepLearningConfig("none");

Create example values that define the size and class of the input to the generated code.

matrixInput = coder.typeof(single(XTest),size(XTest),[false false]);
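
The input type above fixes both dimensions to the size of XTest. If you want the generated MEX function to accept sequences of any length, you can instead mark the time dimension as variable-size. This is a variation on the example, not part of the original code, and assumes your deployment workflow supports variable-size inputs.

% Variation: allow any sequence length at run time by making the second (time)
% dimension variable-size with an unbounded upper limit.
matrixInputVar = coder.typeof(single(XTest),[numFeatures Inf],[false true]);

You can then pass matrixInputVar to codegen in place of matrixInput.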

Generate code for the original network and the fine-tuned compressed network. The MEX files recurrentNetPredict_mex and recurrentNetProjectedPredict_mex are created in your current folder.

codegen -config cfg recurrentNetPredict -args {matrixInput}
Code generation successful.
codegen -config cfg recurrentNetProjectedPredict -args {matrixInput}
Code generation successful.

You can view the resulting code generation report by clicking View report in the MATLAB Command Window. The report opens in the Report Viewer window. If the code generator detects errors or warnings during code generation, the report describes the issues and provides links to the problematic MATLAB code.

Run the Generated Code

Run the generated code. To ensure that the generated code performs as expected, check that the root-mean-square errors are unchanged.

YPredOriginal = recurrentNetPredict_mex(single(XTest));
rmseOriginal = rmse(YTest,extractdata(YPredOriginal))
rmseOriginal = single
    0.0336
YPredProjected = recurrentNetProjectedPredict_mex(single(XTest));
rmseProjected = rmse(YTest,extractdata(YPredProjected))
rmseProjected = single
    0.0349

Compare Original and Compressed Networks

Plot the predictions from each network and the measured values.

figure
plot(YPredOriginal);
hold on
plot(YPredProjected);
plot(YTest,'k--',LineWidth=2);
hold off
xlabel("Sample")
ylabel("Y")
legend("SOC estimated using original network","SOC estimated using projected network","SOC ground truth",Location="best");

Compare the error, model size, and inference time of the networks.

% Plot the root-mean-square errors.
figure
tiledlayout(1,3)
nexttile
bar([rmseOriginal rmseProjected])
xticklabels(["Original" "Fine-Tuned Projected"])
ylabel("RMSE")

% Plot the numbers of learnables.
nexttile
bar([learnables learnablesProjected])
xticklabels(["Original" "Fine-Tuned Projected"])
ylabel("Number of Learnables")

% Calculate and plot the inference time using the generated code.
originalNNCompTime = timeit(@() recurrentNetPredict_mex(single(XTest)));
projectedNNCompTime = timeit(@() recurrentNetProjectedPredict_mex(single(XTest)));
nexttile
bar([originalNNCompTime projectedNNCompTime])
xticklabels(["Original" "Fine-Tuned Projected"])
ylabel("Desktop Codegen Inference Time (s)")

Compared to the original network, the projected network is significantly smaller and has a shorter inference time, while incurring only a minor reduction in prediction accuracy.

You can use networks compressed using projection inside Simulink® models. Because the MKL-DNN library has no equivalent of projected LSTM or projected GRU layers, networks that contain these layers cannot use MKL-DNN code generation to speed up simulation. However, you can take advantage of the reduced network size and reduced inference time when you deploy the generated code to hardware or run software-in-the-loop (SIL) or processor-in-the-loop (PIL) simulations of your model.
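
For deployment to hardware, you typically generate a standalone static library or executable rather than a MEX function. The following sketch is not part of the original example and only outlines one possible configuration, assuming the entry-point function and input type defined earlier; the details depend on your target toolchain.

% Sketch: generate a standalone static library with no third-party
% deep learning dependencies, for integration into an embedded project.
cfgLib = coder.config("lib");
cfgLib.TargetLang = "C++";
cfgLib.DeepLearningConfig = coder.DeepLearningConfig("none");
codegen -config cfgLib recurrentNetProjectedPredict -args {matrixInput}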

Supporting Functions

numLearnables

The numLearnables function takes a network as input and returns the total number of learnable parameters in that network.

function n = numLearnables(net)
    n = 0;
    for i = 1:size(net.Learnables,1)
        n = n + numel(net.Learnables.Value{i});
    end
end

setupData

The setupData function loads a structure stored in the MAT file filename, extracts the first numFeatures features of the sequence data and the target values, and splits the data into subsequences of length chunkSize.

function [X,Y] = setupData(filename,chunkSize,numFeatures)
    s = load(filename);
    nSamples = length(s.Y);
    nElems = floor(nSamples/chunkSize);
    X = cell(nElems,1);
    Y = cell(nElems,1);

    for ii = 1:nElems
        idxStart = 1 + (ii-1)*chunkSize;
        idxEnd = ii*chunkSize;
        X{ii} = s.X(1:numFeatures,idxStart:idxEnd);
        Y{ii} = s.Y(idxStart:idxEnd);
    end
end

References

[1] Kollmeyer, Phillip, Carlos Vidal, Mina Naguib, and Michael Skells. “LG 18650HG2 Li-Ion Battery Data and Example Deep Neural Network xEV SOC Estimator Script.” Mendeley, March 5, 2020.
