
CSI Feedback with Autoencoders

This example shows how to use an autoencoder neural network to compress downlink channel state information (CSI) over a clustered delay line (CDL) channel. CSI feedback is in the form of a raw channel estimate array.

Introduction

In conventional 5G radio networks, CSI parameters are quantities related to the state of a channel that are extracted from the channel estimate array. The CSI feedback includes several parameters, such as the channel quality indication (CQI), the precoding matrix indices (PMI) with different codebook sets, and the rank indicator (RI). The user equipment (UE) uses the CSI reference signal (CSI-RS) to measure and compute the CSI parameters, then reports them to the access network node (gNB) as feedback. Upon receiving the CSI parameters, the gNB schedules downlink data transmissions with attributes such as modulation scheme, code rate, number of transmission layers, and MIMO precoding. This figure shows an overview of CSI-RS transmission, CSI feedback, and the transmission of downlink data that is scheduled based on the CSI parameters.

The UE processes the channel estimate to reduce the amount of CSI feedback data. As an alternative approach, the UE compresses and feeds back the channel estimate array. After receipt, the gNB decompresses and processes the channel estimate to determine downlink data link parameters. The compression and decompression can be achieved using an autoencoder neural network [1, 2]. This approach eliminates the use of the existing quantized codebooks and can improve overall system performance.

This example uses a 5G downlink channel with these system parameters.

txAntennaSize = [2 2 2 1 1]; % rows, columns, polarizations, panels
rxAntennaSize = [2 1 1 1 1]; % rows, columns, polarizations, panels
rmsDelaySpread = 300e-9;     % s
maxDoppler = 5;              % Hz
nSizeGrid = 52;              % Number of resource blocks (RB)
                             % 12 subcarriers per RB
subcarrierSpacing = 15;      % 15, 30, 60, 120 kHz
numTrainingChEst = 15000;
% Carrier definition
carrier = nrCarrierConfig;
carrier.NSizeGrid = nSizeGrid;
carrier.SubcarrierSpacing = subcarrierSpacing
carrier = 
  nrCarrierConfig with properties:
              NCellID: 1
    SubcarrierSpacing: 15
         CyclicPrefix: 'normal'
            NSizeGrid: 52
           NStartGrid: 0
                NSlot: 0
               NFrame: 0
   Read-only properties:
       SymbolsPerSlot: 14
     SlotsPerSubframe: 1
        SlotsPerFrame: 10
autoEncOpt.NumSubcarriers = carrier.NSizeGrid*12;
autoEncOpt.NumSymbols = carrier.SymbolsPerSlot;
autoEncOpt.NumTxAntennas = prod(txAntennaSize);
autoEncOpt.NumRxAntennas = prod(rxAntennaSize);

Generate and Preprocess Data

The first step of designing an AI-based system is to prepare training and testing data. For this example, generate simulated channel estimates and preprocess the data. Use 5G Toolbox functions to configure a CDL-C channel.

waveInfo = nrOFDMInfo(carrier);
samplesPerSlot = ...
  sum(waveInfo.SymbolLengths(1:waveInfo.SymbolsPerSlot));
channel = nrCDLChannel;
channel.DelayProfile = 'CDL-C';
channel.DelaySpread = rmsDelaySpread;       % s
channel.MaximumDopplerShift = maxDoppler;   % Hz
channel.RandomStream = "Global stream";
channel.TransmitAntennaArray.Size = txAntennaSize;
channel.ReceiveAntennaArray.Size = rxAntennaSize;
channel.ChannelFiltering = false;           % No filtering for
                                            % perfect estimate
channel.NumTimeSamples = samplesPerSlot;    % 1 slot worth of samples
channel.SampleRate = waveInfo.SampleRate;

Simulate Channel

Run the channel and get the perfect channel estimate, Hest.

[pathGains,sampleTimes] = channel();
pathFilters = getPathFilters(channel);
offset = nrPerfectTimingEstimate(pathGains,pathFilters);
Hest = nrPerfectChannelEstimate(carrier,pathGains,pathFilters, ...
  offset,sampleTimes);

The channel estimate matrix is an [Nsubcarriers Nsymbols Nrx Ntx] array for each slot.

[nSub,nSym,nRx,nTx] = size(Hest)
nSub = 624
nSym = 14
nRx = 2
nTx = 8

Plot the channel response. The upper left plot shows the channel frequency response as a function of time (symbols) for receive antenna 1 and transmit antenna 1. The lower left plot shows the channel frequency response as a function of transmit antennas for symbol 1 and receive antenna 1. The upper right plot shows the channel frequency response for all receive antennas for symbol 1 and transmit antenna 1. The lower right plot shows the change in channel magnitude response as a function of transmit antennas for all receive antennas for subcarrier 400 and symbol 1.

plotChannelResponse(Hest)

Preprocess Channel Estimate

Preprocess the channel estimate to reduce the size and convert it to a real-valued array. This figure shows the channel estimate reduction preprocess.

Channel estimate preprocessing

Assume that the channel coherence time is much larger than the slot time. Average the channel estimate over one slot to obtain a [Nsubcarriers 1 Nrx Ntx] array.

HMean = mean(Hest,2);

To enable operation on subcarriers and Tx antennas, move the Tx and Rx antenna dimensions to the second and third dimensions, respectively.

HMean = permute(HMean,[1 4 3 2]);

To obtain the delay-angle representation of the channel, apply a 2-D discrete Fourier transform (DFT) over the subcarriers and Tx antennas for each Rx antenna and slot. To demonstrate the workflow and reduce runtime, this subsection processes Rx channel 1 only.

HDft2 = fft2(HMean(:,:,1));

Since the multipath delay in the channel is limited, truncate the delay dimension to remove values that do not carry information. The sampling period on the delay dimension is Tdelay = 1/(Nsubcarriers*Fss), where Fss is the subcarrier spacing. The expected RMS delay spread in delay samples is τRMS/Tdelay, where τRMS is the RMS delay spread of the channel in seconds.

Tdelay = 1/(autoEncOpt.NumSubcarriers*carrier.SubcarrierSpacing*1e3);
rmsTauSamples = channel.DelaySpread / Tdelay;
maxTruncationFactor = floor(autoEncOpt.NumSubcarriers / rmsTauSamples);

Truncate the channel estimate to an even number of samples that is 10 times the expected RMS delay spread. Increasing the truncationFactor value can decrease the performance loss due to preprocessing. However, doing so increases the neural network complexity, the number of required training data points, and the training time. A neural network with more learnable parameters might not converge to a better solution.

truncationFactor = 10;
maxDelay = round((channel.DelaySpread/Tdelay)*truncationFactor/2)*2
maxDelay = 28
autoEncOpt.MaxDelay = maxDelay;
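As a cross-check, the same delay-sampling arithmetic can be reproduced in a few lines of Python; the variable names are illustrative, and the values are the ones used in this example.

```python
# Recompute the delay-sampling arithmetic from the text (values from this example)
n_subcarriers = 52 * 12          # 624 subcarriers
scs_hz = 15e3                    # 15 kHz subcarrier spacing
rms_delay_spread = 300e-9        # seconds

t_delay = 1 / (n_subcarriers * scs_hz)        # delay-domain sampling period
rms_tau_samples = rms_delay_spread / t_delay  # RMS delay spread in delay samples
truncation_factor = 10
# Truncate to an even number of samples, 10x the expected RMS delay spread
max_delay = round(rms_tau_samples * truncation_factor / 2) * 2
print(max_delay)  # 28
```

With 624 subcarriers at 15 kHz spacing, the RMS delay spread of 300 ns is about 2.8 delay samples, so the truncated length works out to 28.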

Calculate the truncation indices and truncate the channel estimate.

midPoint = floor(nSub/2);
lowerEdge = midPoint - (nSub-maxDelay)/2 + 1;
upperEdge = midPoint + (nSub-maxDelay)/2;
HTemp = HDft2([1:lowerEdge-1 upperEdge+1:end],:);

To get back to the subcarriers-Tx antennas domain, apply a 2-D inverse discrete Fourier transform (IDFT) to the truncated array [2]. This process effectively decimates the channel estimate in the subcarrier axis.

HTrunc = ifft2(HTemp);

Separate the real and imaginary parts of the channel estimate to obtain a [Ndelay Ntx 2] array.

HTruncReal = zeros(maxDelay,nTx,2);
HTruncReal(:,:,1) = real(HTrunc);
HTruncReal(:,:,2) = imag(HTrunc); %#ok

Plot the channel estimate signal through the preprocessing steps. Images are scaled to help visualization.

plotPreprocessingSteps(HMean(:,:,1),HDft2,HTemp,HTrunc,nSub,nTx, ...
  maxDelay)
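The preprocessing chain above (2-D DFT, delay truncation, 2-D IDFT, real/imaginary split) can be sketched in NumPy on random stand-in data; this is only a shape-level mirror of the MATLAB steps, with the dimensions taken from this example.

```python
import numpy as np

# Dimensions from this example: 624 subcarriers, 8 Tx antennas, 28 delay samples
n_sub, n_tx, max_delay = 624, 8, 28

rng = np.random.default_rng(0)
# Random stand-in for the slot-averaged channel estimate of one Rx antenna
h_mean = rng.standard_normal((n_sub, n_tx)) + 1j*rng.standard_normal((n_sub, n_tx))

# 2-D DFT over subcarriers and Tx antennas -> delay-angle domain
h_dft2 = np.fft.fft2(h_mean)

# Keep the first and last max_delay/2 delay samples (low delays wrap around
# to both ends of the DFT output)
keep = max_delay // 2
h_trunc_da = np.concatenate([h_dft2[:keep, :], h_dft2[-keep:, :]], axis=0)

# 2-D inverse DFT back to a decimated subcarrier-antenna representation
h_trunc = np.fft.ifft2(h_trunc_da)

# Separate real and imaginary parts -> [max_delay, n_tx, 2] real array
h_trunc_real = np.stack([h_trunc.real, h_trunc.imag], axis=-1)
print(h_trunc_real.shape)  # (28, 8, 2)
```

The concatenation of the first 14 and last 14 delay rows matches the MATLAB truncation indices, which keep rows 1:lowerEdge-1 and upperEdge+1:end.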

Prepare Data in Bulk

The helperCSINetTrainingData function generates numTrainingChEst preprocessed [Ndelay Ntx 2] channel estimates by using the process described in this section. The function saves each [Ndelay Ntx 2] channel estimate as an individual file in the dataDir directory with the prefix trainingDataFilePrefix. If Parallel Computing Toolbox is available, the helperCSINetTrainingData function parallelizes data generation. Data generation takes less than three minutes on a PC with an Intel® Xeon® W-2133 CPU @ 3.60 GHz, running in parallel on six workers.

dataDir = fullfile(exRoot(),"Data");
trainingDataFilePrefix = "nr_channel_est";
if validateTrainingFiles(dataDir,trainingDataFilePrefix, ...
    numTrainingChEst,autoEncOpt,channel,carrier) == false
  disp("Starting training data generation")
  tic
  autoEncOpt.Normalization = false;  % Do not normalize data yet
  
  helperCSINetTrainingData(dataDir,trainingDataFilePrefix, ...
    numTrainingChEst,carrier,channel,autoEncOpt);
  t = seconds(toc);
  t.Format = "hh:mm:ss";
  disp(string(t) + " - Finished training data generation")
end
Starting training data generation
6 workers running
00:00:12 -  8% completed
00:00:23 - 16% completed
00:00:35 - 24% completed
00:00:46 - 32% completed
00:00:58 - 40% completed
00:01:09 - 48% completed
00:01:21 - 56% completed
00:01:32 - 64% completed
00:01:44 - 72% completed
00:01:56 - 80% completed
00:02:07 - 88% completed
00:02:19 - 96% completed
00:02:26 - Finished training data generation

Create a signalDatastore object to access the data. The signal datastore uses individual files for each data point.

sds = signalDatastore( ...
  fullfile(dataDir,"processed",trainingDataFilePrefix+"_*"));

Load the data into memory, calculate the mean value and standard deviation, and then use the mean and standard deviation values to normalize the data.

HtruncRealCell = readall(sds);
HtruncReal = cat(4,HtruncRealCell{:});
meanVal = mean(HtruncReal,'all')
meanVal = single
    -0.0236
stdVal = std(HtruncReal,[],'all')
stdVal = single
    16.0657

Separate the data into training, validation, and test sets. Also, normalize the data to achieve zero mean and a target standard deviation of 0.0212, which restricts most of the data to the range of [-0.5 0.5].

N = size(HtruncReal, 4);
numTrain = floor(N*10/15)
numTrain = 10000
numVal = floor(N*3/15)
numVal = 3000
numTest = floor(N*2/15)
numTest = 2000
targetStd = 0.0212;
HTReal = (HtruncReal(:,:,:,1:numTrain)-meanVal) ...
  /stdVal*targetStd+0.5;
HVReal = (HtruncReal(:,:,:,numTrain+(1:numVal))-meanVal) ...
  /stdVal*targetStd+0.5;
HTestReal = (HtruncReal(:,:,:,numTrain+numVal+(1:numTest))-meanVal) ...
  /stdVal*targetStd+0.5;
autoEncOpt.MeanVal = meanVal;
autoEncOpt.StdValue = stdVal;
autoEncOpt.TargetSTDValue = targetStd; %#ok
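A NumPy sketch of the same normalization, on random stand-in data with roughly the mean and standard deviation reported above; the 0.5 shift centers the data in the [0, 1] range produced by the network's sigmoid layers.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in samples with roughly the reported mean (-0.02) and std (16)
data = rng.normal(-0.02, 16.0, size=100_000)

mean_val = data.mean()
std_val = data.std()
target_std = 0.0212

# Zero mean, scale to the target standard deviation, then shift by 0.5
normalized = (data - mean_val) / std_val * target_std + 0.5
print(normalized.mean(), normalized.std())
```

Because the normalization uses the sample mean and standard deviation, the result has mean 0.5 and standard deviation 0.0212 up to floating-point error.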

Define and Train Neural Network Model

The second step of designing an AI-based system is to define and train the neural network model.

Define Neural Network

This example uses a modified version of the autoencoder neural network proposed in [1].

inputSize = [maxDelay nTx 2];  % Third dimension is real and imaginary parts
nLinear = prod(inputSize);
nEncoded = 64;
autoencoderLGraph = layerGraph([ ...
    % Encoder
    imageInputLayer(inputSize,"Name","Htrunc", ...
      "Normalization","none")
    convolution2dLayer([3 3],2,"Padding","same","Name","Enc_Conv")
    batchNormalizationLayer("Epsilon",0.001,"MeanDecay",0.99, ...
      "VarianceDecay",0.99,"Name","Enc_BN")
    leakyReluLayer(0.3,"Name","Enc_leakyRelu")
    flattenLayer("Name","Enc_flatten")
    fullyConnectedLayer(nEncoded,"Name","Enc_FC")
    sigmoidLayer("Name","Enc_Sigmoid")
    % Decoder
    fullyConnectedLayer(nLinear,"Name","Dec_FC")
    functionLayer(@(x)dlarray(reshape(x,maxDelay,nTx,2,[]),'SSCB'), ...
      "Formattable",true,"Acceleratable",true,"Name","Dec_Reshape")
    ]);
autoencoderLGraph = ...
  helperCSINetAddResidualLayers(autoencoderLGraph, "Dec_Reshape");
autoencoderLGraph = addLayers(autoencoderLGraph, ...
    [convolution2dLayer([3 3],2,"Padding","same","Name","Dec_Conv") ...
    sigmoidLayer("Name","Dec_Sigmoid") ...
    regressionLayer("Name","Dec_Output")]);
autoencoderLGraph = ...
  connectLayers(autoencoderLGraph,"leakyRelu_2_3","Dec_Conv");
figure
plot(autoencoderLGraph)
title('CSI Compression Autoencoder')

Train Neural Network

Set the training options for the autoencoder neural network and train the network using the trainNetwork (Deep Learning Toolbox) function. Training takes less than 15 minutes on an AMD EPYC 7262 3.2 GHz 8C/16T with 8 NVIDIA RTX A5000 GPUs with ExecutionEnvironment set to 'multi-gpu'. Set trainNow to false to load the pretrained network. Note that the saved network works for the following settings. If you change any of these settings, set trainNow to true.

txAntennaSize = [2 2 2 1 1]; % rows, columns, polarizations, panels
rxAntennaSize = [2 1 1 1 1]; % rows, columns, polarizations, panels
rmsDelaySpread = 300e-9;     % s
maxDoppler = 5;              % Hz
nSizeGrid = 52;              % Number of resource blocks (RB)
                             % 12 subcarriers per RB
subcarrierSpacing = 15;
trainNow = false;
miniBatchSize = 1000;
options = trainingOptions("adam", ...
    InitialLearnRate=0.0074, ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropPeriod=112, ...
    LearnRateDropFactor=0.6085, ...
    Epsilon=1e-7, ...
    MaxEpochs=1000, ...
    MiniBatchSize=miniBatchSize, ...
    Shuffle="every-epoch", ...
    ValidationData={HVReal,HVReal}, ...
    ValidationFrequency=20, ...
    Verbose=false, ...
    ValidationPatience=20, ...
    OutputNetwork="best-validation-loss", ...
    ExecutionEnvironment="auto", ...
    Plots='training-progress') %#ok
options = 
  TrainingOptionsADAM with properties:
             GradientDecayFactor: 0.9000
      SquaredGradientDecayFactor: 0.9990
                         Epsilon: 1.0000e-07
                InitialLearnRate: 0.0074
               LearnRateSchedule: 'piecewise'
             LearnRateDropFactor: 0.6085
             LearnRateDropPeriod: 112
                L2Regularization: 1.0000e-04
         GradientThresholdMethod: 'l2norm'
               GradientThreshold: Inf
                       MaxEpochs: 1000
                   MiniBatchSize: 1000
                         Verbose: 0
                VerboseFrequency: 50
                  ValidationData: {[28×8×2×3000 single]  [28×8×2×3000 single]}
             ValidationFrequency: 20
              ValidationPatience: 20
                         Shuffle: 'every-epoch'
                  CheckpointPath: ''
             CheckpointFrequency: 1
         CheckpointFrequencyUnit: 'epoch'
            ExecutionEnvironment: 'auto'
                      WorkerLoad: []
                       OutputFcn: []
                           Plots: 'training-progress'
                  SequenceLength: 'longest'
            SequencePaddingValue: 0
        SequencePaddingDirection: 'right'
            DispatchInBackground: 0
         ResetInputNormalization: 1
    BatchNormalizationStatistics: 'population'
                   OutputNetwork: 'best-validation-loss'
if trainNow
  [net,trainInfo] = ...
    trainNetwork(HTReal,HTReal,autoencoderLGraph,options); %#ok
  save("csiTrainedNetwork_" ...
    + string(datetime("now","Format","dd_MM_HH_mm")), ...
    'net','trainInfo','options','autoEncOpt')
else
  helperCSINetDownloadData()
  autoEncOptCached = autoEncOpt;
  load("csiTrainedNetwork",'net','trainInfo','options','autoEncOpt')
  if autoEncOpt.NumSubcarriers ~= autoEncOptCached.NumSubcarriers ...
      || autoEncOpt.NumSymbols ~= autoEncOptCached.NumSymbols ...
      || autoEncOpt.NumTxAntennas ~= autoEncOptCached.NumTxAntennas ...
      || autoEncOpt.NumRxAntennas ~= autoEncOptCached.NumRxAntennas ...
      || autoEncOpt.MaxDelay ~= autoEncOptCached.MaxDelay
    error("CSIExample:Mismatch", ...
      "Saved network does not match settings. Set trainNow to true.")
  end
end
Files already exist. Skipping download and extract.

Test Trained Network

Use the predict (Deep Learning Toolbox) function to process the test data.

HTestRealHat = predict(net,HTestReal);

Calculate the correlation and normalized mean squared error (NMSE) between the input and output of the autoencoder network. The correlation is defined as

ρ = E{ (1/N) Σₙ₌₁ᴺ |ĥₙᴴ hₙ| / (‖ĥₙ‖₂ ‖hₙ‖₂) }

where hₙ is the channel estimate at the input of the autoencoder and ĥₙ is the channel estimate at the output of the autoencoder. NMSE is defined as

NMSE = E{ ‖H − Ĥ‖₂² / ‖H‖₂² }

where H is the channel estimate at the input of the autoencoder and Ĥ is the channel estimate at the output of the autoencoder.
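For reference, both metrics can be written compactly in NumPy; the function names here are illustrative and are not the example's helper functions.

```python
import numpy as np

def correlation(h_in, h_out):
    """Correlation rho between complex channel estimates (1.0 means identical shape)."""
    num = abs(np.vdot(h_in, h_out))   # |h_in^H h_out|; vdot conjugates and flattens
    return num / (np.linalg.norm(h_in) * np.linalg.norm(h_out))

def nmse_db(h_in, h_out):
    """Normalized mean squared error in dB."""
    mse = np.mean(np.abs(h_in - h_out)**2)
    return 10*np.log10(mse / np.mean(np.abs(h_in)**2))

# Toy check: a lightly perturbed copy should give rho near 1 and a large
# negative NMSE
rng = np.random.default_rng(0)
h = rng.standard_normal((624, 8)) + 1j*rng.standard_normal((624, 8))
h_hat = h + 0.01*(rng.standard_normal(h.shape) + 1j*rng.standard_normal(h.shape))
print(correlation(h, h_hat), nmse_db(h, h_hat))
```

These match the MATLAB loop below term by term: `np.vdot` computes sum(conj(in).*out), and the Frobenius norms correspond to n1 and n2.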

rho = zeros(numTest,1);
nmse = zeros(numTest,1);
for n=1:numTest
    in = HTestReal(:,:,1,n) + 1i*(HTestReal(:,:,2,n));
    out = HTestRealHat(:,:,1,n) + 1i*(HTestRealHat(:,:,2,n));
    % Calculate correlation
    n1 = sqrt(sum(conj(in).*in,'all'));
    n2 = sqrt(sum(conj(out).*out,'all'));
    aa = abs(sum(conj(in).*out,'all'));
    rho(n) = aa / (n1*n2);
    % Calculate NMSE
    mse = mean(abs(in-out).^2,'all');
    nmse(n) = 10*log10(mse / mean(abs(in).^2,'all'));
end
figure
tiledlayout(2,1)
nexttile
histogram(rho,"Normalization","probability")
grid on
title(sprintf("Autoencoder Correlation (Mean \\rho = %1.5f)", ...
  mean(rho)))
xlabel("\rho"); ylabel("PDF")
nexttile
histogram(nmse,"Normalization","probability")
grid on
title(sprintf("Autoencoder NMSE (Mean NMSE = %1.2f dB)",mean(nmse)))
xlabel("NMSE (dB)"); ylabel("PDF")

End-to-End CSI Feedback System

This figure shows the end-to-end processing of channel estimates for CSI feedback. The UE uses the CSI-RS signal to estimate the channel response for one slot, Hest. The preprocessed channel estimate, Htr, is encoded by using the encoder portion of the autoencoder to produce a 1-by-Nenc compressed array. The compressed array is decompressed by the decoder portion of the autoencoder to obtain Ĥtr. Postprocessing Ĥtr produces Ĥest.

End-to-end CSI compression

To obtain the encoded array, split the autoencoder into two parts: the encoder network and the decoder network.

[encNet,decNet] = helperCSINetSplitEncoderDecoder(net,"Enc_Sigmoid");
plotNetwork(net,encNet,decNet)

Generate channel estimates.

nSlots = 100;
Hest = helperCSINetChannelEstimate(nSlots,carrier,channel);

Encode and decode the channel estimates with Normalization set to true.

autoEncOpt.Normalization = true;
codeword = helperCSINetEncode(encNet, Hest, autoEncOpt);
Hhat = helperCSINetDecode(decNet, codeword, autoEncOpt);

Calculate the correlation and NMSE for the end-to-end CSI feedback system.

H = squeeze(mean(Hest,2));
rhoE2E = zeros(nRx,nSlots);
nmseE2E = zeros(nRx,nSlots);
for rx=1:nRx
    for n=1:nSlots
        out = Hhat(:,rx,:,n);
        in = H(:,rx,:,n);
        rhoE2E(rx,n) = helperCSINetCorrelation(in,out);
        nmseE2E(rx,n) = helperNMSE(in,out);
    end
end
figure
tiledlayout(2,1)
nexttile
histogram(rhoE2E,"Normalization","probability")
grid on
title(sprintf("End-to-End Correlation (Mean \\rho = %1.5f)", ...
  mean(rhoE2E,'all')))
xlabel("\rho"); ylabel("PDF")
nexttile
histogram(nmseE2E,"Normalization","probability")
grid on
title(sprintf("End-to-End NMSE (Mean NMSE = %1.2f dB)", ...
  mean(nmseE2E,'all')))
xlabel("NMSE (dB)"); ylabel("PDF")

Effect of Quantized Codewords

Practical systems require quantizing the encoded codeword by using a small number of bits. Simulate the effect of quantization across the range of [2, 10] bits. The results show that 6 bits are enough to closely approximate the single-precision performance.

CSI compression with autoencoder and quantization

maxVal = 1;
minVal = -1;
idxBits = 1;
nBitsVec = 2:10;
rhoQ = zeros(nRx,nSlots,length(nBitsVec));
nmseQ = zeros(nRx,nSlots,length(nBitsVec));
for numBits = nBitsVec
    disp("Running for " + numBits + " bit quantization")
    % Quantize between 0:2^n-1 to get bits
    qCodeword = uencode(double(codeword*2-1), numBits);
    % Get back the floating point, quantized numbers
    codewordRx = (single(udecode(qCodeword,numBits))+1)/2;
    Hhat = helperCSINetDecode(decNet, codewordRx, autoEncOpt);
    H = squeeze(mean(Hest,2));
    for rx=1:nRx
        for n=1:nSlots
            out = Hhat(:,rx,:,n);
            in = H(:,rx,:,n);
            rhoQ(rx,n,idxBits) = helperCSINetCorrelation(in,out);
            nmseQ(rx,n,idxBits) = helperNMSE(in,out);
        end
    end
    idxBits = idxBits + 1;
end
Running for 2 bit quantization
Running for 3 bit quantization
Running for 4 bit quantization
Running for 5 bit quantization
Running for 6 bit quantization
Running for 7 bit quantization
Running for 8 bit quantization
Running for 9 bit quantization
Running for 10 bit quantization
figure
tiledlayout(2,1)
nexttile
plot(nBitsVec,squeeze(mean(rhoQ,[1 2])),'*-')
title("Correlation (Codeword-" + size(codeword,3) + ")")
xlabel("Number of Quantization Bits"); ylabel("\rho")
grid on
nexttile
plot(nBitsVec,squeeze(mean(nmseQ,[1 2])),'*-')
title("NMSE (Codeword-" + size(codeword,3) + ")")
xlabel("Number of Quantization Bits"); ylabel("NMSE (dB)")
grid on
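For readers without access to uencode/udecode, a rough NumPy approximation of the quantize/dequantize pair is sketched below. It uses lower-edge reconstruction on the saturated [-1, 1] range; the exact MATLAB rounding and saturation behavior may differ slightly.

```python
import numpy as np

def uencode(x, nbits):
    """Approximate MATLAB uencode: saturate to [-1, 1], map to 0..2**nbits-1."""
    levels = 2**nbits
    x = np.clip(x, -1.0, 1.0)
    return np.minimum(np.floor((x + 1.0)/2.0 * levels), levels - 1).astype(int)

def udecode(q, nbits):
    """Approximate MATLAB udecode: map integer codes back into [-1, 1)."""
    levels = 2**nbits
    return q.astype(float)/levels * 2.0 - 1.0

# Toy check: quantization error shrinks as the bit count grows
rng = np.random.default_rng(2)
codeword = rng.uniform(0, 1, 64)   # stand-in for a sigmoid codeword in [0, 1]
errs = {}
for nbits in (2, 6):
    q = uencode(codeword*2 - 1, nbits)          # map [0,1] -> [-1,1], quantize
    codeword_rx = (udecode(q, nbits) + 1)/2     # dequantize, map back to [0,1]
    errs[nbits] = np.max(np.abs(codeword - codeword_rx))
```

With this scheme the worst-case error on the [0, 1] codeword is one quantization step, 2**-nbits, which is why 6 bits (step ≈ 0.016) already tracks the single-precision results closely.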

Further Exploration

The autoencoder is able to compress a [624 8] single-precision complex channel estimate array into a [64 1] single-precision array with a mean correlation factor of 0.99 and an NMSE of -16 dB. Using 6-bit quantization requires only 384 bits of CSI feedback data, which equates to a compression ratio of approximately 800:1.

display("Compression ratio is " + (624*8*32*2)/(64*6) + ":" + 1)
    "Compression ratio is 832:1"

Investigate the effect of truncationFactor on the system performance. Vary the 5G system parameters, channel parameters, and number of encoded symbols, and then find the optimum values for the defined channel.

A related 5G Toolbox example shows how to use channel state information (CSI) feedback to adjust the physical downlink shared channel (PDSCH) parameters and measure throughput. Replace the CSI feedback algorithm with the CSI compression autoencoder and compare performance.

Helper Functions

Explore the helper functions to see the detailed implementation of the system.

Training data generation

Network definition and manipulation

CSI processing

Performance measurement

Appendix: Optimize Hyperparameters with Experiment Manager

Use the Experiment Manager app to find the optimal parameters. CSITrainingProject.mlproj is a preconfigured project. Extract the project.

if ~exist("CSITrainingProject","dir")
  projRoot = helperCSINetExtractProject();
else
  projRoot = fullfile(exRoot(),"CSITrainingProject");
end

To open the project, start the Experiment Manager app and open the following file.

disp(fullfile(".","CSITrainingProject","CSITrainingProject.prj"))
.\CSITrainingProject\CSITrainingProject.prj

The Optimize Hyperparameters experiment uses Bayesian optimization with hyperparameter search ranges specified as in the following figure. The experiment setup function and the custom metric function are included in the project.

The optimal parameters are 0.0074 for the initial learning rate, 112 iterations for the learning rate drop period, and 0.6085 for the learning rate drop factor. After finding the optimal hyperparameters, train the network with the same parameters multiple times to find the best trained network. Increase the maximum number of iterations by a factor of two.

The sixth trial produced the best NMSE. This example uses this trained network as the saved network.

Configuring Batch Mode

When the execution mode is set to Batch Sequential or Batch Simultaneous, training data must be accessible to the workers in a location defined by the dataDir variable in the Prepare Data in Bulk section. Set dataDir to a network location that is accessible by the workers. For more information, see the Experiment Manager documentation (Deep Learning Toolbox).

Local Functions

function plotChannelResponse(Hest)
%plotChannelResponse Plot channel response
figure
tiledlayout(2,2)
nexttile
waterfall(abs(Hest(:,:,1,1))')
xlabel("Subcarriers");
ylabel("Symbols");
zlabel("Channel Magnitude")
view(15,30)
colormap("cool")
title("Rx=1, Tx=1")
nexttile
plot(squeeze(abs(Hest(:,1,:,1))))
grid on
xlabel("Subcarriers");
ylabel("Channel Magnitude")
legend("Rx 1", "Rx 2")
title("Symbol=1, Tx=1")
nexttile
waterfall(squeeze(abs(Hest(:,1,1,:)))')
view(-45,75)
grid on
xlabel("Subcarriers");
ylabel("Tx");
zlabel("Channel Magnitude")
title("Symbol=1, Rx=1")
nexttile
nSubcarriers = size(Hest,1);
subcarrier = randi(nSubcarriers);
plot(squeeze(abs(Hest(subcarrier,1,:,:)))')
grid on
xlabel("Tx");
ylabel("Channel Magnitude")
legend("Rx 1", "Rx 2")
title("Subcarrier=" + subcarrier + ", Symbol=1")
end
function valid = validateTrainingFiles(dataDir,filePrefix,expN, ...
  opt,channel,carrier)
%validateTrainingFiles Validate training data files
%   V = validateTrainingFiles(DIR,PRE,N,OPT,CH,CR) checks the DIR directory
%   for training data files with a prefix of PRE. It checks if there are
%   N*OPT.NumRxAntennas files, channel configuration is same as CH, and
%   carrier configuration is same as CR.
valid = true;
files = dir(fullfile(dataDir,filePrefix+"*"));
if isempty(files)
  valid = false;
  return
end
if exist(fullfile(dataDir,"info.mat"),"file")
  infoStr = load(fullfile(dataDir,"info.mat"));
  if ~isequal(get(infoStr.channel),get(channel)) ...
      || ~isequal(infoStr.carrier,carrier)
    valid = false;
  end
else
  valid = false;
end
if valid
  valid = (expN == (length(files)*opt.NumRxAntennas));
  % Check size of Hest in the files
  load(fullfile(files(1).folder,files(1).name),'H')
  if ~isequal(size(H),[opt.NumSubcarriers opt.NumSymbols ...
      opt.NumRxAntennas opt.NumTxAntennas])
    valid = false;
  end
end
if ~valid
  disp("Removing invalid data directory: " + files(1).folder)
  rmdir(files(1).folder,'s')
end
end
function plotNetwork(net,encNet,decNet)
%plotNetwork Plot autoencoder network
%   plotNetwork(NET,ENC,DEC) plots the full autoencoder network together
%   with encoder and decoder networks.
fig = figure;
t1 = tiledlayout(1,2,'TileSpacing','Compact');
t2 = tiledlayout(t1,1,1,'TileSpacing','Tight');
t3 = tiledlayout(t1,2,1,'TileSpacing','Tight');
t3.Layout.Tile = 2;
nexttile(t2)
plot(net)
title("Autoencoder")
nexttile(t3)
plot(encNet)
title("Encoder")
nexttile(t3)
plot(decNet)
title("Decoder")
pos = fig.Position;
pos(3) = pos(3) + 200;
pos(4) = pos(4) + 300;
pos(2) = pos(2) - 300;
fig.Position = pos;
end
function plotPreprocessingSteps(Hmean,Hdft2,Htemp,Htrunc, ...
  nSub,nTx,maxDelay)
%plotPreprocessingSteps Plot preprocessing workflow
hfig = figure;
hfig.Position(3) = hfig.Position(3)*2;
subplot(2,5,[1 6])
himg = imagesc(abs(Hmean));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
xlabel(sprintf('Tx\nAntennas\n(%d)',nTx));
ylabel(sprintf('Subcarriers\n(%d)',nSub));
title("Measured")
subplot(2,5,[2 7])
himg = image(abs(Hdft2));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
title("2-D DFT")
xlabel(sprintf('Tx\nAngle\n(%d)',nTx));
ylabel(sprintf('Delay Samples\n(%d)',nSub));
subplot(2,5,[3 8])
himg = image(abs(Htemp));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.Position(4) = himg.Parent.Position(4)*10*maxDelay/nSub;
himg.Parent.Position(2) = (1 - himg.Parent.Position(4)) / 2;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
xlabel(sprintf('Tx\nAngle\n(%d)',nTx));
ylabel(sprintf('Delay Samples\n(%d)',maxDelay));
title("Truncated")
subplot(2,5,[4 9])
himg = imagesc(abs(Htrunc));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.Position(4) = himg.Parent.Position(4)*10*maxDelay/nSub;
himg.Parent.Position(2) = (1 - himg.Parent.Position(4)) / 2;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
xlabel(sprintf('Tx\nAntennas\n(%d)',nTx));
ylabel(sprintf('Subcarriers\n(%d)',maxDelay));
title("2-D IDFT")
subplot(2,5,5)
himg = imagesc(real(Htrunc));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.Position(4) = himg.Parent.Position(4)*10*maxDelay/nSub;
himg.Parent.Position(2) = himg.Parent.Position(2) + 0.18;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
xlabel(sprintf('Tx\nAntennas\n(%d)',nTx));
ylabel(sprintf('Subcarriers\n(%d)',maxDelay));
title("Real")
subplot(2,5,10)
himg = imagesc(imag(Htrunc));
himg.Parent.YDir = "normal";
himg.Parent.Position(3) = 0.05;
himg.Parent.Position(4) = himg.Parent.Position(4)*10*maxDelay/nSub;
himg.Parent.Position(2) = himg.Parent.Position(2) + 0.18;
himg.Parent.XTick = ''; himg.Parent.YTick = '';
xlabel(sprintf('Tx\nAntennas\n(%d)',nTx));
ylabel(sprintf('Subcarriers\n(%d)',maxDelay));
title("Imaginary")
end
function rootDir = exRoot()
%exRoot Example root directory
rootDir = fileparts(which("helperCSINetLayerGraph"));
end

References

[1] Wen, Chao-Kai, Wan-Ting Shih, and Shi Jin. “Deep Learning for Massive MIMO CSI Feedback.” IEEE Wireless Communications Letters 7, no. 5 (October 2018): 748-51. https://doi.org/10.1109/LWC.2018.2818160.

[2] Zimaglia, Elisa, Daniel G. Riviello, Roberto Garello, and Roberto Fantini. “A Novel Deep Learning Approach to CSI Feedback Reporting for NR 5G Cellular Systems.” In 2020 IEEE Microwave Theory and Techniques in Wireless Communications (MTTW), 47-52. Riga, Latvia: IEEE, 2020. https://doi.org/10.1109/MTTW51045.2020.9245055.
