
Process Big Data in the Cloud

This example shows how to access a large data set in the cloud and process it in a cloud cluster using MATLAB capabilities for big data.

Learn how to:

  • Access a publicly available large data set on Amazon Cloud.

  • Find and select an interesting subset of this data set.

  • Use datastores, tall arrays, and Parallel Computing Toolbox to process this subset in less than 20 minutes.

The public data set in this example is part of the Wind Integration National Dataset Toolkit, or WIND Toolkit [1], [2], [3], [4].

Requirements

To run this example, you must set up access to a cluster in Amazon AWS. In MATLAB, you can create clusters in Amazon AWS directly from the MATLAB desktop. On the Home tab, in the Parallel menu, select Create and Manage Clusters. In the Cluster Profile Manager, click Create Cloud Cluster. Alternatively, you can use MathWorks Cloud Center to create and access compute clusters in Amazon AWS.

Set Up Access to Remote Data

The data set used in this example is the Techno-Economic WIND Toolkit. It contains 2 TB (terabytes) of data for wind power estimates and forecasts, along with atmospheric variables, from 2007 to 2013 within the continental U.S.

The Techno-Economic WIND Toolkit is available via Amazon Web Services, in the location s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data. It contains two data sets:

  • s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data/met_data - Metrology data

  • s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data/fcst_data - Forecast data

To work with remote data in Amazon S3, you must define environment variables for your AWS credentials. For more information on setting up access to remote data, see Work with Remote Data. In the following code, replace YOUR_AWS_ACCESS_KEY_ID and YOUR_AWS_SECRET_ACCESS_KEY with your own Amazon AWS credentials. If you are using temporary AWS security credentials, also set the environment variable AWS_SESSION_TOKEN.

setenv("aws_access_key_id","your_aws_access_key_id");
setenv("aws_secret_access_key","your_aws_secret_access_key");

This data set requires you to specify its geographic region, and so you must set the corresponding environment variable.

setenv("aws_default_region","us-west-2");

To give the workers in your cluster access to the remote data, add these environment variable names to the EnvironmentVariables property of your cluster profile. To edit the properties of your cluster profile, use the Cluster Profile Manager, in Parallel > Create and Manage Clusters.
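
As an alternative to editing the profile in the Cluster Profile Manager, you can set the property programmatically, as in this minimal sketch. It assumes a cluster profile named MyAWSCluster; adjust the name to match your own profile.

% Copy the AWS credential environment variables from the client to the workers.
% Include 'AWS_SESSION_TOKEN' as well if you use temporary credentials.
c = parcluster("MyAWSCluster");
c.EnvironmentVariables = {'AWS_ACCESS_KEY_ID','AWS_SECRET_ACCESS_KEY','AWS_DEFAULT_REGION'};
saveProfile(c);   % save the change back to the cluster profile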

Find Subset of Big Data

The 2 TB data set is quite large. This example shows you how to find a subset of the data set that you want to analyze. The example focuses on data for the state of Massachusetts.

First, obtain the IDs that identify the metrological stations in Massachusetts, and determine the files that contain their metrological information. Metadata for each station is in a file named three_tier_site_metadata.csv. Because this data is small and fits in memory, you can access it from the MATLAB client with readtable. You can use the readtable function to access open data in S3 buckets directly, without needing to write special code.

tMetadata = readtable("s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data/three_tier_site_metadata.csv",...
    "ReadVariableNames",true,"TextType","string");

To find out which states are listed in this data set, use unique.

states = unique(tMetadata.state)
states = 50×1 string array
    ""
    "Alabama"
    "Arizona"
    "Arkansas"
    "California"
    "Colorado"
    "Connecticut"
    "Delaware"
    "District of Columbia"
    "Florida"
    "Georgia"
    "Idaho"
    "Illinois"
    "Indiana"
    "Iowa"
    "Kansas"
    "Kentucky"
    "Louisiana"
    "Maine"
    "Maryland"
    "Massachusetts"
    "Michigan"
    "Minnesota"
    "Mississippi"
    "Missouri"
    "Montana"
    "Nebraska"
    "Nevada"
    "New Hampshire"
    "New Jersey"
    "New Mexico"
    "New York"
    "North Carolina"
    "North Dakota"
    "Ohio"
    "Oklahoma"
    "Oregon"
    "Pennsylvania"
    "Rhode Island"
    "South Carolina"
    "South Dakota"
    "Tennessee"
    "Texas"
    "Utah"
    "Vermont"
    "Virginia"
    "Washington"
    "West Virginia"
    "Wisconsin"
    "Wyoming"

Identify which stations are located in the state of Massachusetts.

index = tMetadata.state == "Massachusetts";
siteId = tMetadata{index,"site_id"};

The data for a given station is contained in a file that follows this naming convention: s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data/met_data/folder/site_id.nc, where folder is the nearest integer less than or equal to site_id/500. Using this convention, compose a file location for each station.

folder = floor(siteId/500);
fileLocations = compose("s3://nrel-pds-wtk/wtk-techno-economic/pywtk-data/met_data/%d/%d.nc",folder,siteId);

Process Big Data

You can use datastores and tall arrays to access and process data that does not fit in memory. When performing big data computations, MATLAB accesses smaller portions of the remote data as needed, so you do not need to download the entire data set at once. With tall arrays, MATLAB automatically breaks the data into smaller blocks that fit in memory for processing.

If you have Parallel Computing Toolbox, MATLAB can process the many blocks in parallel. The parallelization enables you to run an analysis on a single desktop with local workers, or scale up to a cluster for more resources. When you use a cluster in the same cloud service as the data, the data stays in the cloud and you benefit from improved data transfer times. Keeping the data in the cloud is also more cost-effective. This example ran in less than 20 minutes using 18 workers on a c4.8xlarge machine in Amazon AWS.
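
If you do not have access to a cloud cluster, the following sketch starts a pool of local workers on your desktop instead. This is not part of the original workflow, and the remote data is then transferred from S3 to your machine as it is processed, which is slower.

% Optional alternative: process the data with local workers (default profile).
p = parpool;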

If you use a parallel pool in a cluster, MATLAB processes this data using workers in the cluster. Create a parallel pool in the cluster. In the following code, replace MyAWSCluster with the name of your own cluster profile. Attach the script to the pool, because the parallel workers need to access a helper function in it.

p = parpool("MyAWSCluster");
Starting parallel pool (parpool) using the 'MyAWSCluster' profile ...
Connected to 18 workers.
addAttachedFiles(p,mfilename("fullpath"));

Create a datastore with the metrology data for the stations in Massachusetts. The data is in the form of Network Common Data Form (NetCDF) files, and you must use a custom read function to interpret them. In this example, this function is named ncReader and reads the NetCDF data into timetables. You can explore its contents at the end of this script.

dsMetrology = fileDatastore(fileLocations,"ReadFcn",@ncReader,"UniformRead",true);
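
Optionally, to check that the custom read function interprets the NetCDF files as expected, you can preview the datastore before building the tall array. This step is not part of the original example; preview reads a small portion of the first file using ncReader.

% Inspect the data returned for the first station file
tSample = preview(dsMetrology);
head(tSample)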

Create a tall timetable with the metrology data from the datastore.

ttMetrology = tall(dsMetrology)
ttMetrology =
  M×6 tall timetable
            time            wind_speed    wind_direction    power     density    temperature    pressure
    ____________________    __________    ______________    ______    _______    ___________    ________
    01-Jan-2007 00:00:00       5.905          189.35        3.3254    1.2374       269.74        97963  
    01-Jan-2007 00:05:00      5.8898          188.77        3.2988    1.2376       269.73        97959  
    01-Jan-2007 00:10:00      5.9447          187.85         3.396    1.2376       269.71        97960  
    01-Jan-2007 00:15:00      6.0362          187.05        3.5574    1.2376       269.68        97961  
    01-Jan-2007 00:20:00      6.1156          186.49        3.6973    1.2375       269.83        97958  
    01-Jan-2007 00:25:00      6.2133          185.71        3.8698    1.2376       270.03        97952  
    01-Jan-2007 00:30:00      6.3232          184.29        4.0812    1.2379       270.19        97955  
    01-Jan-2007 00:35:00      6.4331          182.51        4.3382    1.2382        270.3        97957  
             :                  :               :             :          :            :            :
             :                  :               :             :          :            :            :

Get the mean temperature per month using groupsummary, and sort the resulting tall table. For performance, MATLAB defers most tall operations until the data is needed. In this case, plotting the data triggers evaluation of the deferred calculations.

meanTemperature = groupsummary(ttMetrology,"time","month","mean","temperature");
meanTemperature = sortrows(meanTemperature);
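
If you prefer to evaluate the deferred calculations explicitly rather than letting the plot trigger them, you can gather the grouped summary into client memory first. This optional step is not part of the original example; the monthly summary is small, so gathering it is safe.

% Optional: run the deferred tall computations now and bring the small
% summary table into memory on the client.
meanTemperature = gather(meanTemperature);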

Plot the results.

figure;
plot(meanTemperature.mean_temperature,"*-");
ylim([260 300]);
xlim([1 12*7+1]);
xticks(1:12:12*7+1);
xticklabels(["2007","2008","2009","2010","2011","2012","2013","2014"]);
title("Average Temperature in Massachusetts 2007-2013");
xlabel("Year");
ylabel("Temperature (K)")

Many MATLAB functions support tall arrays, so you can perform a variety of calculations on big data sets using familiar syntax.
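
For instance, this brief sketch computes the overall maximum wind speed from the same tall timetable. It is not part of the original example, but it shows the same deferred-evaluation pattern: the computation runs only when you call gather.

% Maximum wind speed across the whole Massachusetts subset
maxWindSpeed = gather(max(ttMetrology.wind_speed));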

Define Custom Read Function

The data in the Techno-Economic WIND Toolkit is saved in NetCDF files. Define a custom read function to read its data into a timetable.

function t = ncReader(filename)
% NCREADER Read a NetCDF file (.nc), extract its data set, and return a timetable
% Get information about the NetCDF data source
fileInfo = ncinfo(filename);
% Extract variable names and datatypes
varNames = string({fileInfo.Variables.Name});
varTypes = string({fileInfo.Variables.Datatype});
% Transform variable names into valid names for table variables
if any(startsWith(varNames,["4","6"]))
    strVarNames = replace(varNames,["4","6"],["four","six"]);
else
    strVarNames = varNames;
end
% Extract the length of each variable
fileLength = fileInfo.Dimensions.Length;
% Extract the initial timestamp and sample period, and create the time axis
tAttributes = struct2table(fileInfo.Attributes);
startTime = datetime(cell2mat(tAttributes.Value(contains(tAttributes.Name,"start_time"))),"ConvertFrom","epochtime");
samplePeriod = seconds(cell2mat(tAttributes.Value(contains(tAttributes.Name,"sample_period"))));
% Create the output timetable
numVars = numel(strVarNames);
tableSize = [fileLength numVars];
t = timetable('Size',tableSize,'VariableTypes',varTypes,'VariableNames',strVarNames,'TimeStep',samplePeriod,'StartTime',startTime);
% Fill in the timetable with the variable data
for k = 1:numVars
    t(:,k) = table(ncread(filename,varNames{k}));
end
end

References

[1] Draxl, C., B. M. Hodge, A. Clifton, and J. McCaa. Overview and Meteorological Validation of the Wind Integration National Dataset Toolkit (Technical Report, NREL/TP-5000-61740). Golden, CO: National Renewable Energy Laboratory, 2015.

[2] Draxl, C., B. M. Hodge, A. Clifton, and J. McCaa. "The Wind Integration National Dataset (WIND) Toolkit." Applied Energy. Vol. 151, 2015, pp. 355-366.

[3] King, J., A. Clifton, and B. M. Hodge. Validation of Power Output for the WIND Toolkit (Technical Report, NREL/TP-5D00-61714). Golden, CO: National Renewable Energy Laboratory, 2014.

[4] Lieberman-Cribbin, W., C. Draxl, and A. Clifton. Guide to Using the WIND Toolkit Validation Code (Technical Report, NREL/TP-5000-62595). Golden, CO: National Renewable Energy Laboratory, 2014.
