Signal Processing and Machine Learning Techniques for Sensor Data Analytics
An increasing number of applications require the joint use of signal processing and machine learning techniques on time series and sensor data. MATLAB® can accelerate the development of data analytics and sensor processing systems by providing a full range of modeling and design capabilities within a single environment.
In this webinar, we present an example of a classification system able to identify the physical activity that a human subject is engaged in, based solely on the accelerometer signals generated by his or her smartphone.
We introduce common signal processing methods in MATLAB (including digital filtering and frequency-domain analysis) that help extract descriptive features from raw waveforms, and we show how parallel computing can accelerate the processing of large datasets. We then discuss how to explore and test different classification algorithms (such as decision trees, support vector machines, or neural networks) both programmatically and interactively.
Finally, we demonstrate the use of automatic C/C++ code generation from MATLAB to deploy a streaming classification algorithm for embedded sensor analytics.
Recorded: 21 Mar 2018
Welcome to this webinar on signal processing and machine learning techniques for sensor data analytics using MATLAB. My name is Gabriele Bunkheila, and I'm part of the product management team here at MathWorks. My background is in signal processing, which is also the area where I've spent most of my career here, helping engineers and scientists apply MATLAB to their own challenges.
In this webinar, I'm going to discuss the application of some standard MATLAB techniques for research problems and product design workflows that require the joint use of machine learning—for example, clustering or classification methods—on signals or time series, meaning sampled numeric values varying over time.
As we'll see, much of the complexity of these problems stems from the need for a fair bit of domain knowledge in both machine learning and signal processing, and that often poses a challenge. Whether you're unfamiliar with signal processing, machine learning, both, or neither, I hope the session will help you understand how MATLAB can greatly accelerate the algorithm design workflow for these problems.
We are also seeing a growing industry trend of pushing machine learning algorithms, along with signal processing, onto embedded devices and closer to the actual signal sensors. Here, I'll refer to that class of applications as sensor data analytics or embedded analytics. Because the development of such products poses additional engineering challenges, I thought it'd be useful to allocate some time at the end of the session to review the additional support that MATLAB provides for those types of workflows.
So here's a coarse list of what we'll look at. We'll spend quite some time revisiting how common signal processing methods can be applied both to preparing or preprocessing signals and to extracting descriptive features. I'll show how to select, train, and test classification or pattern recognition algorithms in MATLAB, including some simple approaches to scale up performance for computationally intensive problems. Doing this will give us an opportunity to explore a lot of basic MATLAB features, from interactive apps to core language constructs, that enable and accelerate algorithm development for these workflows.
And finally, we'll discuss some MATLAB capabilities for the design of online predictive systems, including the design and simulation of DSP functionality and the generation of source C code from predictive algorithms to have them running on embedded architectures.
Now, I'll come back to the slides later, but I'm going to spend most of this hour discussing a practical example running in MATLAB. What you're looking at here are the three components of the accelerometer signal captured using a smartphone. The signals are generated by a subject wearing a smartphone in a fixed position on their body and engaging in different physical activities. We are running an algorithm as if it operated on live signals, but in this case, we're using labeled recorded data for validation.
So we get to know the ground truth, but we are also trying to automatically understand what activity the subject is doing, purely based on signal processing and machine learning methods. And as you can see, most of the time we're successfully guessing that activity.
Now, I'd like to make the simple point that while this is just a simple example working with accelerometer data, the techniques that I'll discuss are relevant to a wide spectrum of applications and to most types of sampled signals or time series.
To make my point, I've collected a short list of examples that I came across personally while working with MATLAB users, which, broadly speaking, use the same types of techniques. These already capture use cases in a number of different industries, like electronics, aerospace, defense, finance, and automotive. And again, this is just a random list of examples; the relevance of the techniques that we're discussing here is much wider.
I'll take advantage of one last slide before going back to MATLAB to review the different pieces of the example that we've just seen. We take the three components of the sampled acceleration coming from a smartphone, and we predict the physical activity of the subject as a choice between six different options or classes: walking, walking upstairs, walking downstairs, sitting, standing, and laying.
The prediction is done through a classification algorithm. Classification describes a popular class of machine learning algorithms. The key idea is guessing or predicting the class of a new sample, in this case a signal buffer, based on previous knowledge of similar data. The way it works is that first, the algorithm is trained with a large set of known or labeled cases, optimizing its free parameters to identify those known cases as accurately as possible.
Once trained, it can be run on unknown new data to formulate a guess or prediction about the most likely class for that new data. In general, the training phase is a lot more data- and computationally intensive than the test or runtime phase. So for embedded applications, it is not uncommon to run the training phase on a host system or computer cluster, and only deploy a fixed, pretrained algorithm onto the embedded product.
Regardless of training or runtime use, it is very uncommon for classifiers to be able to work on raw waveforms like these. In practice, for each signal segment or buffer, one needs a way to extract a finite set of measurements, often called features, from the raw waveforms. The bottom line in choosing what features to use is that they should capture the similarities between signals in the same class and the differences between those in different ones. We'll spend most of the time left showing how MATLAB can be used to design these two big algorithmic steps, starting right from the initial exploratory phase.
Now, as a quick note, a key part of working through a similar task or project is the availability of a reference data set. That would be a collection of signal recordings acquired in a controlled experiment and carefully labeled, so that signals are known and associated with the right activity. For this example, I'm borrowing a nice data set made available by two research groups, respectively from Spain and Italy, and available at the address in this slide.
I hope that the general problem is clear enough by now, so let me go back to MATLAB. In the following, I'm going to assume that you're familiar with the basics of MATLAB, including things like scripts and functions and basic plotting and visualization.
To walk you through this example, I'm going to use a script formed of a number of code cells that can be executed independently. I'll skip the very first cell, which I used to launch my completed application at the very beginning. Executing this next cell loads a portion of my data set and plots it.
Let's not care too much right now about how I loaded the data or produced this plot, and let's take a look at the data that we have available. We have a vector, acc, containing the samples of the vertical acceleration for a subject over a period of time. The data set itself holds 3D acceleration recordings for 30 different subjects. The acceleration was sampled at 50 samples per second; that's the meaning of the sampling frequency variable, fs. We also have the time vector, t, with the same length as acc, which is good.
t and acc can be plotted one against the other, and the plot shows us that for this subject, we have around 8 minutes' worth of samples. Note that time here is regularly spaced. In some applications, that may not be the case, or samples may be missing. But keep in mind that MATLAB has plenty of techniques to regularize and preprocess those types of signals.
The other obvious thing to notice is the other long vector, actid, shorthand for activity ID, which is telling us what activity each data sample corresponds to, as an integer between 1 and 6. We can interpret those integers by looking at the remaining labels variable: so 1 is walking, 2 walking upstairs, 3 walking downstairs, and so on.
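As a rough illustration, here is a minimal sketch of what inspecting and plotting this data might look like; the variable names acc, t, fs, and actid come from the webinar, while the plotting details are assumptions.

```matlab
% A minimal sketch, assuming acc and actid are already in the workspace
% after loading the example data set.
fs = 50;                        % sampling frequency: 50 samples per second
t  = (0:numel(acc)-1).'/fs;     % time vector, same length as acc
plot(t, acc)
xlabel('Time (s)'), ylabel('Vertical acceleration (m/s^2)')
```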
So the plot here looks very similar to our final objective, which is identifying the activity given a portion of signal. But remember, this is known, labeled data; we're only visualizing available information. What we want to do is design a method that can learn from this information to guess the activity on new data, without previous knowledge.
So how could we do that? As a first attempt, we'll try to use intuitive approaches. For example, in this plot you can already see that the acceleration waveform does indeed look different depending on the activity. You can see that almost all activities have an average of around 10 (that's g, or 9.8), while one is more around 0. And guess what? That's laying, because the body has a different orientation with respect to the gravitational field.
There are then three of these that look fairly static—no surprise, that's sitting, standing, and laying—while the other three appear to oscillate up and down a lot more. So one could start by doing something very simple: just use some statistical measurements on a portion of consecutive samples, regardless of how they're distributed in time.
For example, looking at the distributions of the walking and laying samples respectively, one can see that just computing the mean value and comparing it to a threshold—you see 5 here—would give us a pretty good chance of telling the difference between the two. Similar considerations apply between, say, walking and standing, but in this case we'd probably want to measure the standard deviation and compare it to something like 1 or 2 meters per second squared.
But what if we had to work out the difference between plain walking and walking upstairs? In this case, the mean and the standard deviation, or the width of the distribution, look very similar. Here, what one should really consider is some more advanced analysis of how values vary over time, looking at things like, say, the rate or the shape of the oscillations.
An intuitive reason for that may be that people move faster when they walk downstairs, say, compared to when they walk upstairs, or that the types of movements differ for different activities. This is precisely where signal processing methods start to be part of the picture.
Before going there, let me make a point in passing on what I've just done. I've just casually drawn three histogram plots and quickly discussed their meaning. Even this task alone could be a fairly hard one if you had to do it by hand from scratch. But these types of things are available in MATLAB as single functions. So although I'm using a pre-edited function of mine to put those two plots in a figure and give them the right colors, inside it a single call to the histogram function is doing all the hard work for me.
histogram, as opposed to the old hist, was introduced in release R2014b of MATLAB, and it provides a new, more efficient way of plotting histograms.
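As a hedged sketch of the kind of comparison just described: the normalization choice and the threshold of 5 are illustrative, and acc and actid are assumed from earlier.

```matlab
% Compare the distributions of two activities and apply a simple mean
% threshold. Activity IDs: 1 = walking, 6 = laying.
figure, hold on
histogram(acc(actid == 1), 'Normalization', 'probability')   % walking
histogram(acc(actid == 6), 'Normalization', 'probability')   % laying
legend('Walking', 'Laying'), xlabel('Acceleration (m/s^2)')

% A mean near g (~9.8) rather than near 0 suggests the subject is not
% laying; comparing against a threshold of ~5 separates the two classes.
isNotLaying = mean(acc(actid == 1)) > 5;
```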
Now, to analyze variations over time, we want to focus on the acceleration caused by the body movements. It's reasonable to assume that the body movements produce faster variations, while the gravity contribution is almost constant. If I have two contributions that are blended together into a single signal and I want to separate them out, then a technique that often applies is digital filtering.
In this case, for example, we want to keep only the oscillations quicker than about one cycle per second, say, which is a rough figure for the average number of steps per second, and discard contributions with slower oscillations. Using the right jargon, that requires designing, and applying to our data, an appropriate high-pass filter. I'll repeat these ideas as we go through the process.
Now, designing and applying a digital filter is hard unless you have the right tools. The design phase in particular requires quite a bit of math and a lot of domain-specific knowledge. In MATLAB, there are many different ways in which one could design a digital filter. For example, one may choose to do it entirely programmatically, which means only using MATLAB commands, or through built-in apps.
Let's first take a look at what the latter would look like. Using an app is generally a great idea when you approach a problem for the first time. To do that, I go to the Apps tab of the MATLAB toolstrip, and I scroll down to the Signal Processing and Communications app group. In here, I will pick the Filter Design and Analysis Tool. For more advanced filter design, you may also want to try the Filter Builder app.
The Filter Design and Analysis Tool comes with several sections. For example, this filter specifications pane will help us specify the right requirements for our filter. Down here to the left is where we start to define what we're looking to achieve. In this case, I'll select a high-pass filter, but you can see that a lot of other choices are also possible.
Down here, there's a more technical choice. If you know about digital filters, you'll probably know what FIR and IIR mean, and the design methods listed here will probably resonate quite a bit. Here, I'll skip the details and just choose one of these IIR options. Moving to the right here, all I need to do is capture my requirements using the specification pane above.
The things I have to say include: we're using a sampling frequency of 50 Hz. We want to keep unaltered (that is, attenuate by a factor of 1, or 0 dB) all signal components oscillating more quickly than once per second, or 1 Hz; let's be generous and say 0.8 Hz. Then everything to the left of this other value, Fstop, is attenuated by at least a given number of dB. I'll set this to, say, 0.4 Hz and, correspondingly, this Astop to 60 dB. This means that all oscillations slower than 0.4 times per second will be made 1,000 times smaller by the filter.
Finally, by pressing Design, the tool does all the work for us, and we end up with a filter that satisfies our requirements. We have a set of analysis tools available right within this app to verify that the filter is behaving as expected.
For example, right now we're looking at what's called the magnitude response as a function of frequency. If I need to confirm this is honoring the specifications, I can overlay a specification mask. Or if I wanted to understand the transient behavior, at the press of a button I can quickly visualize things like the step or the impulse response.
Once my filter is designed, what I really want to do is use it in MATLAB, applying it to my signal. For that, I can choose between two types of approaches: I can export the filter into my MATLAB workspace as one or more variables, or I can generate some MATLAB code that realizes all that I've just done interactively through a programmatic script.
The code that you see here has just been generated automatically, as the header of this function is telling us. However, I could have just as well decided to use similar commands independently. Having this generated automatically for me can also help me gain some insight, so that the next time around I could more quickly design my filter programmatically. But more importantly, this now gives me a quick way of realizing the filter from my own code just by using this function call.
I'm now going to discard this new function, because I have a previously saved version already available in my working folder, called hpfilter. Going back to my script, you can see that I'm creating a filter via my preset function using one line of code, and in the next line I'm applying the filter to my vertical acceleration. That creates a new signal, where we hope to find only the contributions due to the body movements. If I execute this section, I'm also plotting the new filtered signal against the original one.
In the plot, we can see that the new signal is now all centered on 0, as desired. Some transient behavior due to the filter is present, which is perfectly normal.
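For reference, here is a hedged sketch of what an equivalent programmatic design might look like using designfilt from the Signal Processing Toolbox. This is not necessarily the webinar's exact hpfilter code, and the passband ripple value is an assumption, since it isn't stated.

```matlab
% A minimal sketch of a programmatic high-pass design mirroring the
% specifications entered in the app.
hpf = designfilt('highpassiir', ...
    'StopbandFrequency',   0.4, ...   % Fstop: 0.4 Hz
    'PassbandFrequency',   0.8, ...   % Fpass: 0.8 Hz
    'StopbandAttenuation', 60,  ...   % Astop: 60 dB
    'PassbandRipple',      0.1, ...   % Apass: assumed value
    'SampleRate',          50);       % fs: 50 Hz

ab = filter(hpf, acc);   % body acceleration: gravity contribution removed
```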
Now, let me focus on one activity at a time. How can I do that? There's a very effective MATLAB feature called logical indexing that I can use to that end. Look at this: say I want to isolate this walking portion of my signal. The activity type is stored in the vector actid, and I check here when actid is equal to 1. And because I have two walking portions here, I only look at times smaller than 250 seconds.
The result is a vector of the same length as my signal that I can use to select only the samples that honor those criteria. And here is our walking segment. We can zoom in and confirm that the signal oscillates fairly regularly, almost periodically. Now the question is, how can I measure how quickly this is oscillating, or find some parameters to quantify the shape or fingerprint of these oscillations?
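A minimal sketch of the logical indexing pattern just described, assuming t, actid, and the filtered signal ab from earlier:

```matlab
% Build a logical vector that is true only for walking samples (actid == 1)
% recorded before t = 250 s, then use it to select the matching samples.
isWalking = (actid == 1) & (t < 250);
plot(t(isWalking), ab(isWalking))      % only the selected walking segment
```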
A good answer would be: by looking at a spectral representation of my signal, or, as some would say, by computing its FFT. Much better than the FFT, which is a pretty low-level operation, the right phrase here is power spectral density, which may use the FFT along with a few other bits and pieces. Again, my best bet is to focus on my objective and see if MATLAB can simply do that for me, as is the case.
Out of the many functions available to estimate the power spectral density of our signal, here I'm using the Welch method, which is pretty popular. In one line of code, I have my spectrum. On the x-axis, I have the frequency from 0 to 1/2 of my sampling frequency, which was 50 Hz. And on the y-axis, I have dB per hertz, or power density. The regions where the values in this plot are higher are likely to carry the information that I'm after.
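A hedged sketch of that one-line estimate, using pwelch's default window and overlap settings on the walking segment from above:

```matlab
% Welch power spectral density estimate of the filtered walking segment.
[pxx, f] = pwelch(ab(isWalking), [], [], [], fs);
plot(f, 10*log10(pxx))    % frequency axis runs from 0 to fs/2 = 25 Hz
xlabel('Frequency (Hz)'), ylabel('Power/frequency (dB/Hz)')
```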
For our signals, this pattern of peaks between 0 and 10 Hz with higher energy holds a lot of measurable information. If you ever sat in a signal theory class, you'll remember the spectra of signals that are periodic or almost periodic. We can see a fundamental frequency, roughly around 1 Hz, and a number of harmonics at positions that are multiples of that frequency.
As for extracting information from this, the distance in frequency between these peaks is the rate of the time-domain oscillations, and the relative amplitudes of the peaks describe the shape of the oscillations, a bit like the timbre in musical signals. To validate this, I'll also show you the spectrum for walking on top of the one for walking upstairs, in a range between 0 and 10 Hz.
Here, walking upstairs produces slower and smoother movements. Because the movements are slower, these peaks are all pushed to the left. And the smoothness in the time domain causes the peaks to the right of the fundamental to decrease very quickly, indicating softer time-domain transitions. If this sounds unfamiliar, think about the spectrum of a pure sine wave, which has a single peak, compared to that of a square wave, which is full of high-frequency harmonics.
Once we've established that spectral peaks carry information, we'd like a programmatic way of measuring their height and position. So here's the next question: how do you identify peaks in a curve? Contrary to what some may think, that's not trivial. The Signal Processing Toolbox comes to the rescue with a function called findpeaks that is built to do just that.
If we use it while providing no other information but our raw spectral density, it will return the complete set of local peaks found in my plot. But if we put some more effort into defining what we want (for example, how many peaks it should return, what peak prominence we require, or what minimum distance we expect between nearby peaks), then the results are much more encouraging. And in just a few lines of code, we now have a programmatic measurement approach that can be automated, and that is highly descriptive of our signal characteristics.
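A sketch of such a constrained call; the specific threshold values are illustrative, since the webinar doesn't state the exact numbers.

```matlab
% Unconstrained call: returns every local peak in the raw PSD curve.
[pksAll, locsAll] = findpeaks(10*log10(pxx), f);

% Constrained call: at most 6 peaks, clearly emerging, well separated.
[pks, locs] = findpeaks(10*log10(pxx), f, ...
    'NPeaks', 6, ...               % how many peaks to return
    'MinPeakProminence', 10, ...   % required prominence, in dB
    'MinPeakDistance', 0.5);       % minimum separation, in Hz
```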
In the example that I showed you at the beginning, I was using a couple more signal processing measurements to extract other features. But I think by now you get the general spirit of an exploratory approach to extracting features from signals. What I did at the end of this phase was to collect all the useful measurements identified into a function, so that for each new signal segment available, I'm able to automatically produce the collection of all my measurements, or features, that describe it.
Let me show it to you quickly. For every new buffer of acceleration samples in the three directions, here I am computing the mean, filtering out the gravity contribution, computing the RMS, measuring the spectral peaks, as well as a couple of other things. If I look at the spectral features subfunction, you can recognize the functions pwelch and findpeaks from a few minutes ago.
In total, this function returns 66 highly descriptive features for every new signal buffer it is passed as an input. What I really like about it is that if I measure the net number of code lines, excluding comments and empty lines, that sums up to only 54. That's 54 lines of code for 66 features, or less than a single line per feature, which I find indicative of how concise the MATLAB language is, to the advantage of both understanding and productivity.
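The webinar's actual 54-line function isn't reproduced here, but a skeleton of its shape might look like the following; the feature list shown is a reduced, illustrative subset of the 66 features.

```matlab
function feat = extractSignalFeatures(accBuffer, fs, hpf)
% A minimal sketch: accBuffer is an N-by-3 buffer of raw accelerations,
% hpf a pre-designed high-pass filter. The real function computes more
% measurements than shown here.
feat = [];
for k = 1:3
    x  = accBuffer(:, k);
    xb = filter(hpf, x);                       % remove gravity contribution
    [pxx, f] = pwelch(xb, [], [], [], fs);     % spectral estimate

    % Fixed-length peak features: pad in case fewer than 6 peaks are found.
    pk = zeros(1, 6); lc = zeros(1, 6);
    [pks, locs] = findpeaks(10*log10(pxx), f, 'NPeaks', 6);
    pk(1:numel(pks)) = pks;  lc(1:numel(locs)) = locs;

    feat = [feat, mean(x), rms(xb), pk, lc];   %#ok<AGROW>
end
end
```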
With that, I think we can now say that we're halfway through our exploratory workflow. We've put in place a method to extract a finite set of features for every given segment of signal. We now need to design a classifier able to learn how to associate measurements, or sets of 66 features in this case, with a class: a choice of activity among six available options.
To work with a classifier, we first need to map all our data into the new feature-based representation. Let me open this other script to show you quickly what I mean. Imagine we had first reorganized our data (say, eight minutes of samples times 30 subjects) into a large number of small buffers of equal length, say, 128 samples. What we do now is that for every one of those buffers, we call our feature extraction function to compute our 66 features. And we end up with a new feature data set with as many columns as the number of features and as many rows as there are available buffers.
Because classification algorithms often need a lot of data to learn, extracting features from an entire data set can take a very long time. And if along this exploratory phase one decides to use different features, then the whole operation needs starting over. Let me show you what I mean here on a small scale.
Let's reduce the number of data buffers here to a mere 600 and run this. I start a timer before the loop and stop it right afterwards. We can monitor the progress as my 600 data buffers are converted to features one after the other, and the process terminates roughly 17 seconds later. Let the number of buffers grow, and this time will grow linearly with it.
Now, think about this: the computations in each cycle of the for loop are all independent of each other. So if we had more computational resources available, we could start to think about distributing the burden across the available computing nodes. I suspect most of you would think that that would be a hard task, so let me challenge that perception.
What I'll do here is change my for keyword to parfor, make sure I have a parallel pool enabled, then run my loop again. The buffers are now processed asynchronously by a pool of four MATLAB worker sessions running in the background, and I'm finished in a fraction of the original time.
The actual performance gain will change depending on the particular problem. The bottom line is that because I have Parallel Computing Toolbox installed on my machine, I was able to open locally a number of MATLAB workers equal to the number of available cores on my machine (I have four cores here). But with external resources like a cluster, the number can be driven up at will. And then I was able to distribute independent iterations of a long for loop simply by changing for into parfor, as in parallel for.
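A minimal sketch of that change, assuming the buffers are stored in an N-by-3-by-B array (an assumption) and using the hypothetical extractSignalFeatures from before:

```matlab
% Serial version would read: for i = 1:nBuffers ... end
nBuffers = size(allBuffers, 3);
features = zeros(nBuffers, 66);   % 66 features per buffer in the webinar
tic
parfor i = 1:nBuffers
    % Each iteration is independent, so parfor can farm them out to the
    % worker pool; features(i,:) is a "sliced" output variable.
    features(i, :) = extractSignalFeatures(allBuffers(:, :, i), fs, hpf);
end
toc
```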
Once we're done, we can save our feature data set and go back to where we left our problem. We left it at the stage where we needed to select a classification algorithm, and now we have the data ready to go ahead. When you need a classifier, you have a choice among a large number of different types of algorithms. The MATLAB documentation provides some guidance on which types are best suited to which problems, but the whole process of trial and error can be intimidating, especially if you're not familiar with machine learning in general, or in particular with a reasonable number of classification algorithms.
To address that, starting from release R2015a, MATLAB has a new app called the Classification Learner. You can pick it from the Apps tab, but let me just load it with preset feature data and open the app by running the command classificationLearner right from my script.
To start, I load my data and pick this option on the right-hand side here to leave out a fraction of my data set for validation. Note that before loading the data in my script, I had also arranged it as a MATLAB table. That will allow the tool to associate names with my features and display some simple statistics for each of them. My data also included the actid activity label for each available feature vector.
As I click Import Data on the right, I get a simple visual of my data points in a 2D feature space. I can choose which of my 66 features to use for x and which for y, and get a feel for how well my data samples are separable. At this point, we simply start selecting different classifier algorithms from this catalog and train them one by one on our data set using this button.
You don't really need to know what these algorithms are, how they work, or what parameters they need, because the tool sets them all smartly for you. And if you want, you can change them by using this Advanced button. As I hope you can see, when the training completes, the tool displays an accuracy summary beside each of the selected options and highlights in green the choice with the best accuracy.
At this stage, you may also need to understand a bit more closely the performance of the classifier, and this app has a few diagnostic options available right from within it, like, for example, the confusion matrix, which shows how well our predictions map to the actual known values in the data set. For example, a full green diagonal here, with no instances outside it, would indicate 100% prediction accuracy.
As in many other cases with MATLAB apps, you can then turn your interactive exploratory work into a bit of code to automate the same steps programmatically. In our script, we're using a preset version that comes from exactly the same workflow. What's interesting here is a three-line code pattern consisting of choosing the settings for the classifier, training it—note the fit keyword in here—and running it on new data to return the predicted class.
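A hedged sketch of that three-step pattern, shown here with a multiclass SVM purely as an illustrative choice (the webinar doesn't specify which classifier its generated code used); featTrain, actidTrain, and featTest are assumed variables.

```matlab
% 1) Choose the settings for the classifier (illustrative choice).
template = templateSVM('KernelFunction', 'polynomial');
% 2) Train it; note the fitc* naming convention.
model = fitcecoc(featTrain, actidTrain, 'Learners', template);
% 3) Run it on new data to return the predicted class.
predicted = predict(model, featTest);
```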
We can use this generated function right from within our script to return a trained classifier and use it on new, unknown feature vectors. I won't do that just yet, because there's something more that I need to mention. The Classification Learner provides an intuitive way to access a good number of conventional classifiers that ship with the Statistics and Machine Learning Toolbox.
An example of an alternative way to address this problem is neural networks. Again, in this case, designing and training a network from scratch would be very complicated, but the Neural Network Toolbox provides apps and functions to get started quickly and design a functional network with only a few lines of code.
In the interest of time, and to take a different perspective, let me just share how one could use a programmatic approach in this case to do the same job. Here I initialize a pattern recognition network with 80 neurons in a single hidden layer with a single line of code. Then I train it and return the predicted classes on the test set in just a couple more lines.
If you ever came across the theory of neural networks, then you'll know that the complexity of the math underneath these operations is considerable. Just think about using backpropagation for a [inaudible] network architecture and all the optimization options that you may need to consider for your cost function. In this case, most of this well-established algorithm is just available to use, so you can focus on solving a specific problem. When I execute this code, I get an interface to monitor the training progress, also confirming the architecture of the network: 80 neurons in the hidden layer, 66 inputs as the number of features, and 6 output classes.
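A minimal sketch of those few lines, assuming featTrain and featTest hold features one row per buffer and actidTrain the integer labels; ind2vec converts the labels into the one-hot targets that patternnet expects.

```matlab
% Initialize a pattern recognition network: one hidden layer, 80 neurons.
net = patternnet(80);

% Train it; inputs are arranged one observation per column, targets as
% one-hot columns built from the integer activity IDs.
net = train(net, featTrain.', full(ind2vec(actidTrain.')));

% Run it on the test set and take the most likely class for each buffer.
scores = net(featTest.');
[~, predicted] = max(scores, [], 1);
```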
When we're done, the trained neural network is available in my workspace. And again, through a programmatic approach, I can use it to run the prediction on the whole test set portion of my data set. As we did before interactively, I can generate diagnostics programmatically, as in the case of this confusion matrix.
This reports around 92% accuracy, which is pretty good, along with a detailed view of how all the predicted classes match the already known values. As an example here, we can notice that a lot of the sitting instances were confused for standing, and vice versa. So that would be an area for improvement in our algorithm.
Now, let me take a step back and review what we have achieved. We were able to train and use a classifier operating on high-quality features extracted with signal processing methods. We tackled a problem that required significant domain expertise in the two domains of signal processing and machine learning, which had no single way of being addressed and could have taken a long time to solve. Instead, this only took a few iterations. I used a few different apps and algorithms that were readily available to use, without needing to open any complicated book or program any math from scratch. A remarkable result was a signal processing function able to extract 66 features in only 54 lines of code.
Now, I'd like to spend the last 10 minutes or so of our session considering a few common engineering challenges slightly beyond the algorithm exploration phase that we've discussed until now. Imagine that our final objective was to run our predictions on signals coming from a new acceleration sensor. In this case, we used an existing data set, and we didn't ask ourselves many questions about how that had been collected.
In general, getting hold of relevant data may well come to be the first problem on our list. I very often talk to engineers who assume that to acquire real-world signals and to explore your algorithms you need two different tools, and who end up spending quite some time transitioning data between MATLAB and some external data acquisition software. But it turns out that MATLAB can directly connect to a number of sensors and data acquisition devices, and using that connectivity can further accelerate your discovery cycles.
Because our example uses accelerometers and data from a smartphone, I thought I'd also include a reference to two free support packages downloadable from the MathWorks website, which allow you to stream sensor signals from iOS and Android devices directly into MATLAB.
Now, thinking about the end of a development workflow, imagine that your shiny MATLAB algorithm had to be implemented on a real-time system; for example, on an embedded device close to the accelerometer itself. In this case, not only will the final real-time software probably have to be rewritten in C or C++, but the actual functionality of the algorithm will have to be readapted for the final product.
The machine learning portion will probably need to be simpler. It's common, for example, for embedded classifiers to be pretrained offline and to be implemented in a lightweight version that only does online prediction. The signal processing will probably vary even more significantly. For example, filters that work on signals streaming from sensors will continuously accept new samples and update their internal states accordingly.
If the original MATLAB model didn't take these effects into account, then it's possible that the final implementation will never match the performance of the original simulation, potentially compromising the success of the actual end product.
The good news is that MATLAB is not only relevant to the initial signal analysis and algorithm exploration phase; it can also be used to simulate real-time systems and generate embeddable source C code. It's beyond the scope of this webinar to cover these aspects in detail, but let me give you an idea of what's possible in this area.
The quickest point to cover is the deployment of the classifier. When we trained and tested our neural network classifier, everything had been done through a network object that we'd called net. This has a wealth of functionality attached to it, and the actual code for the math used for prediction may be quite hard to find. But from my object net, I can run this genFunction method and generate a simple prediction function that only models what needs to happen in real time, using just basic constructs.
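A sketch of that step; the generated file name is an assumption, and the 'MatrixOnly' option asks for a standalone version that uses only plain matrix operations.

```matlab
% Generate a lightweight, standalone prediction function from the trained
% network object. The file name predictActivityNN.m is illustrative.
genFunction(net, 'predictActivityNN', 'MatrixOnly', 'yes')

% The generated function maps a feature matrix (one observation per
% column) to class scores using only basic MATLAB constructs.
scores = predictActivityNN(featTest.');
```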
Let's now take a look at the modeling of the digital signal processing. On the left here, in extractSignalFeatures.m, I have the 54-line feature extraction function that we reviewed before. Most signal processing functions used here come from the Signal Processing Toolbox. These were extremely valuable during our exploration phase, and they are the best choice for data analysis tasks. But they are not intended to model the behavior of a real-time system, and that's not what we had in mind when we put this code together in the first place.
Look, for example, at how we filter our signals. My first consideration really is just a side note, but it may help us get into the right mindset. Here we compute the filter coefficients for every new signal portion, even if they are always the same. We take in the full sequence of entries at once, and we assume we're operating on a pretty long one. As we do that, every time we assume we're starting with a filter with a clean history, with zero internal states.
Now, for comparison, let me look at another way to model this process that has a real-time implementation in mind, in this other function, featuresFromBuffer.m. Most of the signal processing in this right-hand side function comes from the DSP System Toolbox. These objects have been developed with system design and simulation in mind. They may be less practical to use for signal analysis, but they can be used to accurately model real-time DSP systems.
If we just look at the filter for comparison, here it's a filter object with a notion of internal structure. And as you could see by simply creating one in MATLAB, one could even get more accurate behavior by capturing the complete data type specifications, for example, for a fixed-point implementation.
The object keeps hold of its internal state and is declared as persistent, so exiting and re-entering the function here will find it again in its previous state. So really, if required, it could even take in one sample at a time, and it would still operate as expected.
As for the coefficients, they're computed only once, the first time this function is called, when this persistent variable is initialized. Then they're just used at runtime by calling the step method on every new buffer of data. As a side effect, this filter runs very efficiently, as it's initialized just once, and from the second time around it only executes the computations strictly necessary to process the input. These attributes make it ideal for use with streamed signals in the context of system design and simulation. A first advantage of this new system model, as we may call it now, is that by simulating it, we can verify early on the design of our algorithms for an embedded system and check that the behavior is as expected.
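A hedged sketch of that streaming pattern, using a dsp.HighpassFilter System object as an illustrative stand-in for the webinar's filter (the exact object it uses isn't named):

```matlab
function y = filterBodyAcceleration(x, fs)
% A minimal sketch of a streaming filter with persistent internal state.
persistent hp
if isempty(hp)
    % Coefficients are computed only once, on the first call.
    hp = dsp.HighpassFilter('SampleRate', fs, ...
        'StopbandFrequency', 0.4, 'PassbandFrequency', 0.8, ...
        'StopbandAttenuation', 60);
end
% States persist across calls, so the function can process one buffer
% (or even one sample) at a time and still behave like one long filter.
y = step(hp, x);
end
```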
I'm sure this visualization now looks familiar. This is what I showed you right at the beginning of this session to introduce our example. The dynamic simulation here offers a different perspective. For example, I can check the stability of the prediction in the transitions from one activity to the other. Or, in another application, I may need to analyze the signal as I would on an oscilloscope, for example using triggers and markers.
This here is a time scope, but other types of visualizations are also possible, including, obviously, in the frequency domain with a spectrum analyzer. But we won't look at that right now.
If we take a look at the code that produces the simulation, we'll meet again a lot of the programming practices that we've seen in the new feature extraction function. So, for example, we use a while loop with new data processed at every iteration. This scope object here is what we're using for continuous online visualization. Within the while loop, we just keep pushing in more data using the same simple construct, the step method, that we've already seen earlier.
At the beginning of this loop, we use a file reader object in a similar way, to incrementally advance through a data file without needing to load a potentially huge file into memory or to do any complex indexing into the source data. We simply pass the file name at the beginning and get a new frame of samples at every iteration.
Here I also used a buffer to help me operate on a longer data window than the system may be receiving in a single iteration, all wrapped in a separate object to hide away the indexing and used through the same step interface. And right in the middle of the loop, you can see the prediction function that we are simulating, complete with our new DSP models and the lightweight neural network classifier.
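Pulling those pieces together, a hedged sketch of such a simulation loop might look like the following; the file name, frame sizes, and the prediction function name are assumptions, with dsp.AsyncBuffer and dsp.TimeScope used as illustrative DSP System Toolbox objects.

```matlab
% Stream a recorded signal from disk, re-buffer it into analysis windows,
% predict the activity, and visualize continuously.
reader = dsp.MatFileReader('Filename', 'recordedAcc.mat', ...
    'VariableName', 'acc', 'SamplesPerFrame', 64);   % incremental reads
buf   = dsp.AsyncBuffer;                             % hides the indexing
scope = dsp.TimeScope('SampleRate', 50, 'TimeSpan', 10);

while ~isDone(reader)
    frame = reader();                 % next 64 samples from the file
    write(buf, frame);                % push into the re-buffering object
    if buf.NumUnreadSamples >= 128
        window = read(buf, 128, 64);  % 128-sample window, 64 overlap
        activity = predictActivityFromSignalBuffer(window);  % hypothetical
        scope(window);                % continuous online visualization
    end
end
```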
Beyond the ability to simulate our system online, a second substantial advantage of having real-time models of both the DSP processing components and the deployed neural network is that we can now automatically generate source C or C++ code from them. That can be used in an embedded product, in an embedded prototype, or simply as a reference to share with a downstream software engineering team.
There would be a lot more to say on this, including the ability, say, to directly generate fixed-point or target-optimized C. But I'll just show you the general idea of how that works. Although you could also go through this workflow through a dedicated built-in app, the general idea is that with this simple command, codegen, we can turn our MATLAB function, predictActivityFromSignalBuffer, into a fully equivalent open C function with no libraries attached. The generated C is fully open. In this case, I put no effort into optimizing the generated code, but a lot of features are there to do that, including the ability to generate fixed-point code.
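A sketch of that command; the example input in -args is an assumption (a 128-by-3 double buffer), which MATLAB Coder uses to infer input sizes and types.

```matlab
% Generate library C code plus an HTML report from the streaming
% prediction function.
codegen predictActivityFromSignalBuffer -args {zeros(128,3)} -config:lib -report
```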
Okay, I think we've seen in action all that I was planning to show you. Now let me go back to my slides. I'll take a step back and review what we've done, along with the capabilities and tools that I've used in the various parts of this presentation.
Signal analysis was the first area where the use of de facto standard built-in functions saved us a lot of time, and Signal Processing Toolbox is where all these useful functions came from. Just imagine having to implement all these formulae from scratch, let alone looking them up and trying to understand them. Parallel Computing Toolbox let us distribute computationally intensive for loops simply by changing a for loop to a parfor loop. Additional parallel computing options are available to scale up the framework that we used to larger architectures, including computer clusters and the cloud.
The Statistics and Machine Learning Toolbox not only allowed us to test a good number of classifiers, but also to quickly explore and compare different options interactively in the Classification Learner app. I feel that sped up our discovery cycle considerably. After exploring a few conventional classifiers, we also used the Neural Network Toolbox to create a common network topology used for pattern recognition, train it, and test it. We also generated a lightweight pretrained version of that network, which captured the runtime computations using basic MATLAB constructs fully supported by the C code generation engine.
With the fundamentals of our signal processing algorithms already consolidated, we used a number of objects from the DSP System Toolbox to model the real-time implementation of our algorithms. We ran an online simulation of our system design using objects that facilitate the streaming of data from long signals stored on disk, and we used scopes that are optimized to handle the continuous visualization of streamed signals, similarly to how one would visualize a real-world signal with a benchtop instrument.
As a side effect, our online modeling efforts made our algorithms a lot more efficient to execute in simulation, and also made them ready to generate C or C++ source code that could be directly deployed onto an embedded processor. And that's where we used MATLAB Coder.
MATLAB Coder is a code generation engine that can turn MATLAB algorithms into fully open C or C++ source code. There would be a lot to say about what MATLAB Coder can do, especially for generating embeddable source code, so I thought I'd refer you to a great introductory webinar available in prerecorded format on our website, called "MATLAB to C Made Easy."
On that, we've come to the end of this webinar on signal processing and machine learning techniques for sensor data analytics. I hope it's been useful in highlighting some MATLAB capabilities that you weren't yet familiar with. I will aim to make the code that I used available in the coming few weeks, so you can review the example at your own pace.
If you were to forget everything that I touched on today, I hope at least you take away the following three key ideas. First of all, our open-ended project was made possible by having available an extensive range of built-in functions, both for signal processing and machine learning. That allowed us to experiment quickly with different options without having to implement any math from scratch.
The complementary part of the picture was the MATLAB environment itself, from the basic visualization capabilities to the built-in apps that generate reusable code, making constant use of a language that makes it easy to let advanced things happen within a few lines of code.
Finally, I took you on a tour through a set of MATLAB capabilities for transitioning abstract ideas to real-time algorithm implementations. We turned signal processing algorithms into detailed DSP system models that could be simulated over time, and from those, we generated source C code that could be compiled on an embedded platform.