Compute output, error, and weights of LMS adaptive filter
Description
The dsp.LMSFilter System object™ implements an adaptive finite impulse response (FIR) filter that converges an input signal to the desired signal using one of the following algorithms:
LMS
Normalized LMS
Sign-Data LMS
Sign-Error LMS
Sign-Sign LMS
For more details on each of these methods, see Algorithms.
The filter adapts its weights until the error between the primary input signal and the desired signal is minimal. The mean square of this error (MSE) is computed using the msesim function. The predicted version of the MSE is determined using a Wiener filter in the msepred function. The maxstep function computes the maximum adaptation step size, which controls the speed of convergence.
For an overview of the adaptive filter methodology and the most common applications of adaptive filters, see .
To filter a signal using an adaptive FIR filter:
1. Create the dsp.LMSFilter object and set its properties.
2. Call the object with arguments, as if it were a function.
To learn more about how System objects work, see What Are System Objects?
Under specific conditions, this System object also supports SIMD code generation. For details, see Code Generation.
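For instance, a minimal sketch of this two-step workflow (using hypothetical random test signals, where the unknown system is assumed to be a lowpass FIR filter) might look like this:
x = randn(1000,1);                                 % input signal
unknownSys = dsp.FIRFilter('Numerator',fir1(12,0.25));
d = unknownSys(x) + 0.01*randn(1000,1);            % desired signal: system output plus noise
lms = dsp.LMSFilter('Length',13,'StepSize',0.05);  % 1. create the object and set its properties
[y,err,wts] = lms(x,d);                            % 2. call the object with arguments, like a function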
Creation
Description
lms = dsp.LMSFilter returns an LMS filter object, lms, that computes the filtered output, filter error, and filter weights for a given input and desired signal using the least mean squares (LMS) algorithm.
lms = dsp.LMSFilter(Name,Value) returns an LMS filter object with each specified property set to the specified value. Enclose each property name in single quotes. You can use this syntax with the previous input argument.
Properties
Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.
If a property is tunable, you can change its value at any time.
For more information on changing property values, see .
Method — Method to calculate filter weights
'LMS' (default) | 'Normalized LMS' | 'Sign-Data LMS' | 'Sign-Error LMS' | 'Sign-Sign LMS'

Method to calculate the filter weights, specified as one of the following:
'LMS' –– Solves the Wiener-Hopf equation and finds the filter coefficients for an adaptive filter.
'Normalized LMS' –– Normalized variation of the LMS algorithm.
'Sign-Data LMS' –– Correction to the filter weights at each iteration depends on the sign of the input x.
'Sign-Error LMS' –– Correction applied to the current filter weights for each successive iteration depends on the sign of the error, err.
'Sign-Sign LMS' –– Correction applied to the current filter weights for each successive iteration depends on both the sign of x and the sign of err.
For more details on the algorithms, see Algorithms.
Length — Length of FIR filter weights vector
32 (default) | positive integer

Length of the FIR filter weights vector, specified as a positive integer.
Example: 64
Example: 16
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
StepSizeSource — Method to specify adaptation step size
'Property' (default) | 'Input port'

Method to specify the adaptation step size, specified as one of the following:
'Property' –– The StepSize property specifies the size of each adaptation step.
'Input port' –– Specify the adaptation step size as one of the inputs to the object.
StepSize — Adaptation step size
0.1 (default) | nonnegative scalar

Adaptation step size factor, specified as a nonnegative scalar. For convergence of the normalized LMS method, the step size must be greater than 0 and less than 2.
A small step size ensures a small steady-state error between the output y and the desired signal d. A small step size also decreases the convergence speed of the filter. To improve the convergence speed, increase the step size. Note that if the step size is large, the filter can become unstable. To compute the maximum step size the filter can accept without becoming unstable, use the maxstep function.
Tunable: Yes
Dependencies
This property applies when you set StepSizeSource to 'Property'.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
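For example, a sketch (assuming an input signal x and a desired signal d already exist in the workspace) that uses maxstep to bound the step size before running the filter:
lms = dsp.LMSFilter('Length',32);
[mumax,mumaxmse] = maxstep(lms,x);   % largest step sizes that keep the filter stable for x
lms.StepSize = 0.05*mumaxmse;        % pick a conservative fraction of the bound (StepSize is tunable)
[y,err,wts] = lms(x,d);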
LeakageFactor — Leakage factor used in leaky LMS method
1 (default) | scalar in the range [0 1]

Leakage factor used when implementing the leaky LMS method, specified as a scalar in the range [0 1]. When the value equals 1, there is no leakage in the adapting method. When the value is less than 1, the filter implements a leaky LMS method.
Example: 0.5
Tunable: Yes
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
InitialConditions — Initial conditions of filter weights
0 (default) | scalar | vector

Initial conditions of the filter weights, specified as a scalar or a vector of length equal to the value of the Length property. When the input is real, the value of this property must be real.
Example: 0
Example: [1 3 1 2 7 8 9 0 2 2 8 2]
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Complex Number Support: Yes
AdaptInputPort — Flag to adapt filter weights
false (default) | true

Flag to adapt the filter weights, specified as one of the following:
false –– The object continuously updates the filter weights.
true –– An adaptation control input is provided to the object when you call its algorithm. If the value of this input is nonzero, the object continuously updates the filter weights. If the value of this input is zero, the filter weights remain at their current values.
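For example, a sketch (assuming column vectors x and d already exist) that adapts on the first call and freezes the weights on the second:
lms = dsp.LMSFilter('Length',13,'StepSize',0.1,'AdaptInputPort',true);
[y1,e1,w1] = lms(x,d,1);   % nonzero adaptation control: weights update
[y2,e2,w2] = lms(x,d,0);   % zero adaptation control: weights stay at their current values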
WeightsResetInputPort — Flag to reset filter weights
false (default) | true

Flag to reset the filter weights, specified as one of the following:
false –– The object does not reset the weights.
true –– A reset control input is provided to the object when you call its algorithm. This setting enables the WeightsResetCondition property. The object resets the filter weights based on the value of the WeightsResetCondition property and the reset input provided to the object algorithm.
WeightsResetCondition — Event to reset filter weights
'Non-zero' (default) | 'Rising edge' | 'Falling edge' | 'Either edge'

Event that triggers the reset of the filter weights, specified as one of the following. The object resets the filter weights whenever a reset event is detected in its reset input.
'Non-zero' –– Triggers a reset operation at each sample when the reset input is not zero.
'Rising edge' –– Triggers a reset operation when the reset input does one of the following:
  Rises from a negative value to either a positive value or zero.
  Rises from zero to a positive value, where the rise is not a continuation of a rise from a negative value to zero.
'Falling edge' –– Triggers a reset operation when the reset input does one of the following:
  Falls from a positive value to a negative value or zero.
  Falls from zero to a negative value, where the fall is not a continuation of a fall from a positive value to zero.
'Either edge' –– Triggers a reset operation when the reset input is a rising edge or a falling edge.
The object resets the filter weights based on the value of this property and the reset input r provided to the object algorithm.
Dependencies
This property applies when you set the WeightsResetInputPort property to true.
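For example, a sketch of the reset behavior (assuming column vectors x and d already exist, and using the default 'Non-zero' reset condition):
lms = dsp.LMSFilter('Length',13,'StepSize',0.1,'WeightsResetInputPort',true);
[y1,e1,w1] = lms(x,d,0);   % reset input is zero: the weights keep adapting
[y2,e2,w2] = lms(x,d,1);   % nonzero reset input: the weights reset to InitialConditions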
WeightsOutput — Method to output adapted filter weights
'Last' (default) | 'None' | 'All'

Method to output the adapted filter weights, specified as one of the following:
'Last' (default) — The object returns a column vector of weights corresponding to the last sample of the data frame. The length of the weights vector is the value of the Length property.
'All' — The object returns a FrameLength-by-Length matrix of weights. The matrix corresponds to the full sample-by-sample history of weights for all FrameLength samples of the input values. Each row in the matrix corresponds to a set of LMS filter weights calculated for the corresponding input sample.
'None' — This setting disables the weights output.
RoundingMethod — Rounding method for fixed-point operations
'Floor' (default) | 'Ceiling' | 'Convergent' | 'Nearest' | 'Round' | 'Simplest' | 'Zero'

Rounding mode for fixed-point operations. For more details, see rounding mode.

OverflowAction — Overflow action for fixed-point operations
'Wrap' (default) | 'Saturate'

Overflow action for fixed-point operations, specified as one of the following:
'Wrap' –– The object wraps the result of its fixed-point operations.
'Saturate' –– The object saturates the result of its fixed-point operations.
For more details on overflow actions, see overflow mode for fixed-point operations.
StepSizeDataType — Step size word length and fraction length settings
'Same word length as first input' (default) | 'Custom'

Step size word length and fraction length settings, specified as one of the following:
'Same word length as first input' –– The object specifies the word length of the step size to be the same as that of the first input. The fraction length is computed to get the best possible precision.
'Custom' –– The step size data type is specified as a custom numeric type through the CustomStepSizeDataType property.
For more information on the step size data type this object uses, see the Fixed Point section.

CustomStepSizeDataType — Word and fraction lengths of step size
numerictype([],16,15) (default)

Word and fraction lengths of the step size, specified as an autosigned numeric type with a word length of 16 and a fraction length of 15.
Example: numerictype([],32)
Dependencies
This property applies under either of the following conditions:
The StepSizeSource property is set to 'Property' and the StepSizeDataType property is set to 'Custom'.
The StepSizeSource property is set to 'Input port'.
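As a sketch (assuming Fixed-Point Designer™ is available and the inputs are signed fi objects), a custom step-size data type might be configured as follows:
x = fi(randn(256,1),1,16,15);          % signed fixed-point input
d = fi(randn(256,1),1,16,15);          % desired signal with the same word length
lms = dsp.LMSFilter('Length',16,'StepSize',0.05, ...
    'StepSizeDataType','Custom', ...
    'CustomStepSizeDataType',numerictype([],32,30));
[y,err,wts] = lms(x,d);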
LeakageFactorDataType — Leakage factor word length and fraction length settings
'Same word length as first input' (default) | 'Custom'

Leakage factor word length and fraction length settings, specified as one of the following:
'Same word length as first input' –– The object specifies the word length of the leakage factor to be the same as that of the first input. The fraction length is computed to get the best possible precision.
'Custom' –– The leakage factor data type is specified as a custom numeric type through the CustomLeakageFactorDataType property.
For more information on the leakage factor data type this object uses, see the Fixed Point section.

CustomLeakageFactorDataType — Word and fraction lengths of the leakage factor
numerictype([],16,15) (default)

Word and fraction lengths of the leakage factor, specified as an autosigned numeric type with a word length of 16 and a fraction length of 15.
Example: numerictype([],32)
Dependencies
This property applies when you set the LeakageFactorDataType property to 'Custom'.
WeightsDataType — Weights word length and fraction length settings
'Same as first input' (default) | 'Custom'

Weights word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the filter weights to be the same as that of the first input.
'Custom' –– The data type of the filter weights is specified as a custom numeric type through the CustomWeightsDataType property.
For more information on the filter weights data type this object uses, see the Fixed Point section.

CustomWeightsDataType — Word and fraction lengths of filter weights
numerictype([],16,15) (default)

Word and fraction lengths of the filter weights, specified as an autosigned numeric type with a word length of 16 and a fraction length of 15.
Example: numerictype([],32,20)
Dependencies
This property applies when you set the WeightsDataType property to 'Custom'.
EnergyProductDataType — Energy product word length and fraction length settings
'Same as first input' (default) | 'Custom'

Energy product word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the energy product to be the same as that of the first input.
'Custom' –– The data type of the energy product is specified as a custom numeric type through the CustomEnergyProductDataType property.
For more information on the energy product data type this object uses, see the Fixed Point section.
Dependencies
This property applies when you set the Method property to 'Normalized LMS'.

CustomEnergyProductDataType — Word and fraction lengths of energy product
numerictype([],32,20) (default)

Word and fraction lengths of the energy product, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the Method property to 'Normalized LMS' and the EnergyProductDataType property to 'Custom'.
EnergyAccumulatorDataType — Energy accumulator word length and fraction length settings
'Same as first input' (default) | 'Custom'

Energy accumulator word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the energy accumulator to be the same as that of the first input.
'Custom' –– The data type of the energy accumulator is specified as a custom numeric type through the CustomEnergyAccumulatorDataType property.
For more information on the energy accumulator data type this object uses, see the Fixed Point section.
Dependencies
This property applies when you set the Method property to 'Normalized LMS'.

CustomEnergyAccumulatorDataType — Word and fraction lengths of energy accumulator
numerictype([],32,20) (default)

Word and fraction lengths of the energy accumulator, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the Method property to 'Normalized LMS' and the EnergyAccumulatorDataType property to 'Custom'.
ConvolutionProductDataType — Convolution product word length and fraction length settings
'Same as first input' (default) | 'Custom'

Convolution product word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the convolution product to be the same as that of the first input.
'Custom' –– The data type of the convolution product is specified as a custom numeric type through the CustomConvolutionProductDataType property.
For more information on the convolution product data type this object uses, see the Fixed Point section.

CustomConvolutionProductDataType — Word and fraction lengths of convolution product
numerictype([],32,20) (default)

Word and fraction lengths of the convolution product, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the ConvolutionProductDataType property to 'Custom'.
ConvolutionAccumulatorDataType — Convolution accumulator word length and fraction length settings
'Same as first input' (default) | 'Custom'

Convolution accumulator word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the convolution accumulator to be the same as that of the first input.
'Custom' –– The data type of the convolution accumulator is specified as a custom numeric type through the CustomConvolutionAccumulatorDataType property.
For more information on the convolution accumulator data type this object uses, see the Fixed Point section.

CustomConvolutionAccumulatorDataType — Word and fraction lengths of convolution accumulator
numerictype([],32,20) (default)

Word and fraction lengths of the convolution accumulator, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the ConvolutionAccumulatorDataType property to 'Custom'.
StepSizeErrorProductDataType — Step size error product word length and fraction length settings
'Same as first input' (default) | 'Custom'

Step size error product word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the data type of the step size error product to be the same as that of the first input.
'Custom' –– The data type of the step size error product is specified as a custom numeric type through the CustomStepSizeErrorProductDataType property.
For more information on the step size error product data type this object uses, see the Fixed Point section.

CustomStepSizeErrorProductDataType — Word and fraction lengths of step size error product
numerictype([],32,20) (default)

Word and fraction lengths of the step size error product, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the StepSizeErrorProductDataType property to 'Custom'.
WeightsUpdateProductDataType — Filter weights update product word length and fraction length settings
'Same as first input' (default) | 'Custom'

Word and fraction length settings of the filter weights update product, specified as one of the following:
'Same as first input' –– The object specifies the data type of the filter weights update product to be the same as that of the first input.
'Custom' –– The data type of the filter weights update product is specified as a custom numeric type through the CustomWeightsUpdateProductDataType property.
For more information on the filter weights update product data type this object uses, see the Fixed Point section.

CustomWeightsUpdateProductDataType — Word and fraction lengths of filter weights update product
numerictype([],32,20) (default)

Word and fraction lengths of the filter weights update product, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the WeightsUpdateProductDataType property to 'Custom'.
QuotientDataType — Quotient word length and fraction length settings
'Same as first input' (default) | 'Custom'

Quotient word length and fraction length settings, specified as one of the following:
'Same as first input' –– The object specifies the quotient data type to be the same as that of the first input.
'Custom' –– The quotient data type is specified as a custom numeric type through the CustomQuotientDataType property.
For more information on the quotient data type this object uses, see the Fixed Point section.
Dependencies
This property applies when you set the Method property to 'Normalized LMS'.

CustomQuotientDataType — Word and fraction lengths of quotient
numerictype([],32,20) (default)

Word and fraction lengths of the quotient, specified as an autosigned numeric type with a word length of 32 and a fraction length of 20.
Dependencies
This property applies when you set the Method property to 'Normalized LMS' and the QuotientDataType property to 'Custom'.
Usage
Syntax
[y,err,wts] = lms(x,d)
[y,err] = lms(x,d)
[___] = lms(x,d,mu)
[___] = lms(x,d,a)
[___] = lms(x,d,r)
Description
[y,err,wts] = lms(x,d) filters the input signal, x, using d as the desired signal, and returns the filtered output in y, the filter error in err, and the estimated filter weights in wts. The LMS filter object estimates the filter weights needed to minimize the error between the output signal and the desired signal.
[y,err] = lms(x,d) filters the input signal, x, using d as the desired signal, and returns the filtered output in y and the filter error in err when the WeightsOutput property is set to 'None'.
[___] = lms(x,d,mu) filters the input signal, x, using d as the desired signal and mu as the step size, when the StepSizeSource property is set to 'Input port'. These inputs can be used with either of the previous sets of outputs.
[___] = lms(x,d,a) filters the input signal, x, using d as the desired signal and a as the adaptation control when the AdaptInputPort property is set to true. When a is nonzero, the System object continuously updates the filter weights. When a is zero, the filter weights remain constant.
[___] = lms(x,d,r) filters the input signal, x, using d as the desired signal and r as a reset signal when the WeightsResetInputPort property is set to true. The WeightsResetCondition property can be used to set the reset trigger condition. If a reset event occurs, the System object resets the filter weights to their initial values.
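For example, a sketch that supplies the step size as an input (assuming the StepSizeSource property is 'Input port' and using hypothetical test signals):
lms = dsp.LMSFilter('Length',32,'StepSizeSource','Input port');
x = randn(500,1);                                  % input signal
d = filter(fir1(31,0.4),1,x) + 0.01*randn(500,1);  % hypothetical desired signal
mu = 0.005;                                        % step size supplied at the call
[y,err,wts] = lms(x,d,mu);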
Input Arguments
x — Data input
scalar | column vector

The signal to be filtered by the LMS filter. The input, x, and the desired signal, d, must have the same size, data type, and complexity. If the input is fixed point, the data type must be signed and must have the same word length as the desired signal.
The input, x, can be a variable-size signal. You can change the number of elements in the column vector even when the object is locked. The System object locks when you call the object to run its algorithm.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
Complex Number Support: Yes
d — Desired signal
scalar | column vector

The LMS filter adapts its filter weights, wts, to minimize the error, err, and converge the input signal x to the desired signal d as closely as possible.
The input, x, and the desired signal, d, must have the same size, data type, and complexity. If the desired signal is fixed point, the data type must be signed and must have the same word length as the data input.
The input, d, can be a variable-size signal. You can change the number of elements in the column vector even when the object is locked. The System object locks when you call the object to run its algorithm.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
Complex Number Support: Yes
mu — Step size
nonnegative scalar

Adaptation step size factor, specified as a nonnegative scalar. For convergence of the normalized LMS method, the step size should be greater than 0 and less than 2. The data type of the step size input must match the data type of x and d. If the data type is fixed point, the data type must be signed.
A small step size ensures a small steady-state error between the output y and the desired signal d. A small step size also decreases the convergence speed of the filter. To improve the convergence speed, increase the step size. Note that if the step size is large, the filter can become unstable. To compute the maximum step size the filter can accept without becoming unstable, use the maxstep function.
Dependencies
This input is required when the StepSizeSource property is set to 'Input port'.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
a — Adaptation control
scalar

Adaptation control input that controls how the filter weights are updated. If the value of this input is nonzero, the object continuously updates the filter weights. If the value of this input is zero, the filter weights remain at their current values.
Dependencies
This input is required when the AdaptInputPort property is set to true.
Data Types: single | double | int8 | int16 | int32 | uint8 | uint16 | uint32 | logical
r — Reset signal
scalar

Reset signal that resets the filter weights based on the value of the WeightsResetCondition property.
Dependencies
This input is required when the WeightsResetInputPort property is set to true.
Data Types: single | double | int8 | int16 | int32 | uint8 | uint16 | uint32 | logical
Output Arguments
y — Filtered output
scalar | column vector

Filtered output, returned as a scalar or a column vector. The object adapts its filter weights to converge the input signal x to match the desired signal d. The filter outputs the converged signal.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
Complex Number Support: Yes
err — Difference between output and desired signal
scalar | column vector

Difference between the output signal y and the desired signal d, returned as a scalar or a column vector. The data type of err matches the data type of y. The objective of the adaptive filter is to minimize this error. The object adapts its weights to converge toward the optimal filter weights that produce an output signal that matches closely with the desired signal. For more details on how err is computed, see Algorithms.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
wts — Adaptive filter weights
scalar | column vector

Adaptive filter weights, returned as a scalar or a column vector of length specified by the value of the Length property.
Data Types: single | double | int8 | int16 | int32 | int64 | fi
Object Functions
To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:
release(obj)

Specific to dsp.LMSFilter
maxstep — Maximum step size for LMS adaptive filter convergence
msepred — Predicted mean squared error for LMS adaptive filter
msesim — Estimated mean squared error for adaptive filters

Common to All System Objects
step — Run System object algorithm
release — Release resources and allow changes to System object property values and input characteristics
reset — Reset internal states of System object
Examples
Predict Mean Squared Error for LMS Filter
The mean squared error (MSE) measures the average of the squares of the errors between the desired signal and the primary input signal to the adaptive filter. Reducing this error converges the primary input to the desired signal. Determine the predicted value of the MSE and the simulated value of the MSE at each time instant using the msepred and msesim functions. Compare these MSE values with each other and with respect to the minimum MSE and steady-state MSE values. In addition, compute the sum of the squares of the coefficient errors given by the trace of the coefficient covariance matrix.
Initialization
Create a dsp.FIRFilter System object™ that represents the unknown system. Pass the signal, x, to the FIR filter. The desired signal, d, is the sum of the output of the unknown system (FIR filter) and an additive noise signal, n.
num = fir1(31,0.5);
fir = dsp.FIRFilter('Numerator',num);
iir = dsp.IIRFilter('Numerator',sqrt(0.75), ...
    'Denominator',[1 -0.5]);
x = iir(sign(randn(2000,25)));
n = 0.1*randn(size(x));
d = fir(x) + n;
LMS Filter
Create a dsp.LMSFilter System object to create a filter that adapts to output the desired signal. Set the length of the adaptive filter to 32 taps, the step size to 0.008, and the decimation factor for analysis and simulation to 5. The variable simmse represents the simulated MSE between the output of the unknown system, d, and the output of the adaptive filter. The variable mse gives the corresponding predicted value.
l = 32;
mu = 0.008;
m = 5;
lms = dsp.LMSFilter('Length',l,'StepSize',mu);
[mmse,emse,meanw,mse,tracek] = msepred(lms,x,d,m);
[simmse,meanwsim,wsim,traceksim] = msesim(lms,x,d,m);
Plot the MSE Results
Compare the values of the simulated MSE, predicted MSE, minimum MSE, and final MSE. The final MSE value is given by the sum of the minimum MSE and the excess MSE.
nn = m:m:size(x,1);
semilogy(nn,simmse,[0 size(x,1)],[(emse+mmse) (emse+mmse)], ...
    nn,mse,[0 size(x,1)],[mmse mmse])
title('Mean Squared Error Performance')
axis([0 size(x,1) 0.001 10])
legend('MSE (Sim.)','Final MSE','MSE','Min. MSE')
xlabel('Time Index')
ylabel('Squared Error Value')
The predicted MSE follows the same trajectory as the simulated MSE. Both trajectories converge to the steady-state (final) MSE.
Plot the Coefficient Trajectories
meanwsim is the mean value of the simulated coefficients given by msesim. meanw is the mean value of the predicted coefficients given by msepred.
Compare the simulated and predicted mean values of the LMS filter coefficients 12, 13, 14, and 15.
plot(nn,meanwsim(:,12),'b',nn,meanw(:,12),'r',nn, ...
    meanwsim(:,13:15),'b',nn,meanw(:,13:15),'r')
plottitle = {'Average Coefficient Trajectories for'; ...
    'W(12), W(13), W(14), and W(15)'}
plottitle = 2x1 cell
    {'Average Coefficient Trajectories for'}
    {'W(12), W(13), W(14), and W(15)'      }
title(plottitle)
legend('Simulation','Theory')
xlabel('Time Index')
ylabel('Coefficient Value')
In steady state, both trajectories converge.
Sum of Squared Coefficient Errors
Compare the sum of the squared coefficient errors given by msepred and msesim. These values are given by the trace of the coefficient covariance matrix.
semilogy(nn,traceksim,nn,tracek,'r')
title('Sum-of-Squared Coefficient Errors')
axis([0 size(x,1) 0.0001 1])
legend('Simulation','Theory')
xlabel('Time Index')
ylabel('Squared Error Value')
Compute Maximum Step Size of LMS Adaptive Filter
The maxstep function computes the maximum step size of the adaptive filter. This step size keeps the filter stable at the maximum possible speed of convergence. Create the primary input signal, x, by passing a signed random signal to an IIR filter. The signal x contains 50 frames of 2000 samples each. Create an LMS filter with 32 taps and a step size of 0.1.
x = zeros(2000,50);
iirfilter = dsp.IIRFilter('Numerator',sqrt(0.75), ...
    'Denominator',[1 -0.5]);
for k = 1:size(x,2)
    x(:,k) = iirfilter(sign(randn(size(x,1),1)));
end
mu = 0.1;
lmsfilter = dsp.LMSFilter('Length',32, ...
    'StepSize',mu);
Compute the maximum adaptation step size and the maximum step size in the mean-squared sense using the maxstep function.
[mumax,mumaxmse] = maxstep(lmsfilter,x)
mumax = 0.0625
mumaxmse = 0.0536
System Identification of FIR Filter Using LMS Algorithm
System identification is the process of identifying the coefficients of an unknown system using an adaptive filter. The main components involved are:
The adaptive filter algorithm. In this example, set the Method property of dsp.LMSFilter to 'LMS' to choose the LMS adaptive filter algorithm.
An unknown system or process to adapt to. In this example, the filter designed by the fircband function is the unknown system.
Appropriate input data to exercise the adaptation process. For the generic LMS model, these are the desired signal d and the input signal x.
The objective of the adaptive filter is to minimize the error signal between the output of the adaptive filter and the output of the unknown system (the system to be identified). Once the error signal is minimized, the adapted filter resembles the unknown system, and the coefficients of both filters match closely.
Unknown System
Create a dsp.FIRFilter object that represents the system to be identified. Use the fircband function to design the filter coefficients. The designed filter is a lowpass filter constrained to 0.2 ripple in the stopband.
filt = dsp.FIRFilter;
filt.Numerator = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2], ...
    {'w' 'c'});
Pass the signal x to the FIR filter. The desired signal d is the sum of the output of the unknown system (FIR filter) and an additive noise signal n.
x = 0.1*randn(250,1);
n = 0.01*randn(250,1);
d = filt(x) + n;
Adaptive Filter
With the unknown filter designed and the desired signal in place, create and apply the adaptive LMS filter object to identify the unknown filter.
Preparing the adaptive filter object requires starting values for the estimates of the filter coefficients and the LMS step size (mu). You can start with some set of nonzero values as estimates for the filter coefficients; this example uses zeros for the 13 initial filter weights. Set the InitialConditions property of dsp.LMSFilter to the desired initial values of the filter weights. For the step size, 0.8 is a good compromise between being large enough to converge well within 250 iterations (250 input sample points) and small enough to create an accurate estimate of the unknown filter.
Create a dsp.LMSFilter object to represent an adaptive filter that uses the LMS adaptive algorithm. Set the length of the adaptive filter to 13 taps and the step size to 0.8.
mu = 0.8;
lms = dsp.LMSFilter(13,'StepSize',mu)
lms =
  dsp.LMSFilter with properties:
                   Method: 'LMS'
                   Length: 13
           StepSizeSource: 'Property'
                 StepSize: 0.8000
            LeakageFactor: 1
        InitialConditions: 0
           AdaptInputPort: false
    WeightsResetInputPort: false
            WeightsOutput: 'Last'
  Show all properties
Pass the primary input signal x and the desired signal d to the LMS filter. Run the adaptive filter to determine the unknown system. The output y of the adaptive filter is the signal that converges toward the desired signal d, thereby minimizing the error e between the two signals.
Plot the results. The output signal does not match the desired signal as expected, making the error between the two nontrivial.
[y,e,w] = lms(x,d);
plot(1:250,[d,y,e])
title('System Identification of an FIR Filter')
legend('Desired','Output','Error')
xlabel('Time Index')
ylabel('Signal Value')
Compare the Weights
The weights vector w represents the coefficients of the LMS filter that is adapted to resemble the unknown system (FIR filter). To confirm the convergence, compare the numerator of the FIR filter and the estimated weights of the adaptive filter.
The estimated filter weights do not closely match the actual filter weights, confirming the results seen in the previous signal plot.
stem([(filt.Numerator).' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual filter weights','Estimated filter weights', ...
    'Location','NorthEast')
Changing the Step Size
As an experiment, change the step size to 0.2. Repeating the example with mu = 0.2 results in the following stem plot. The filters do not converge, and the estimated weights are not good approximations of the actual weights.
mu = 0.2;
lms = dsp.LMSFilter(13,'StepSize',mu);
[~,~,w] = lms(x,d);
stem([(filt.Numerator).' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual filter weights','Estimated filter weights', ...
    'Location','NorthEast')
Increase the Number of Data Samples
Increase the frame size of the desired signal. Even though this increases the computation involved, the LMS algorithm now has more data that can be used for adaptation. With 1000 samples of signal data and a step size of 0.2, the coefficients are aligned closer than before, indicating an improved convergence.
release(filt);
x = 0.1*randn(1000,1);
n = 0.01*randn(1000,1);
d = filt(x) + n;
[y,e,w] = lms(x,d);
stem([(filt.Numerator).' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual filter weights','Estimated filter weights', ...
    'Location','NorthEast')
Increase the number of data samples further by passing the data through the filter over multiple iterations. Run the algorithm on 4000 samples of data, passed to the LMS algorithm in batches of 1000 samples over 4 iterations.
Compare the filter weights. The weights of the LMS filter match the weights of the FIR filter very closely, indicating a good convergence.
release(filt);
n = 0.01*randn(1000,1);
for index = 1:4
    x = 0.1*randn(1000,1);
    d = filt(x) + n;
    [y,e,w] = lms(x,d);
end
stem([(filt.Numerator).' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual filter weights','Estimated filter weights', ...
    'Location','NorthEast')
The output signal matches the desired signal very closely, making the error between the two close to zero.
plot(1:1000,[d,y,e])
title('System Identification of an FIR Filter')
legend('Desired','Output','Error')
xlabel('Time Index')
ylabel('Signal Value')
System Identification of FIR Filter Using Normalized LMS Algorithm
To improve the convergence performance of the LMS algorithm, the normalized variant (NLMS) uses an adaptive step size based on the signal power. As the input signal power changes, the algorithm calculates the input power and adjusts the step size to maintain an appropriate value. The step size changes with time, and as a result, the normalized algorithm converges faster with fewer samples in many cases. For input signals that change slowly over time, the normalized LMS algorithm can be a more efficient LMS approach.
For an example that uses the LMS approach, see System Identification of FIR Filter Using LMS Algorithm.
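The per-sample normalized update can be sketched as follows. This is a simplified reference loop (assuming column vectors x and d of equal length), not the object's exact internal implementation; eps guards against division by zero.
L  = 13;                                % filter length
mu = 0.2;                               % normalized step size, 0 < mu < 2
w  = zeros(L,1);                        % filter weights
u  = zeros(L,1);                        % regressor of the L most recent input samples
for k = 1:numel(x)
    u = [x(k); u(1:end-1)];             % shift in the newest input sample
    y = w.'*u;                          % filter output
    e = d(k) - y;                       % error
    w = w + (mu/(u.'*u + eps))*e*u;     % normalized LMS weight update
end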
Unknown System
Create a dsp.FIRFilter object that represents the system to be identified. Use the fircband function to design the filter coefficients. The designed filter is a lowpass filter constrained to 0.2 ripple in the stopband.
filt = dsp.FIRFilter;
filt.Numerator = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2], ...
    {'w' 'c'});
Pass the signal x to the FIR filter. The desired signal d is the sum of the output of the unknown system (FIR filter) and an additive noise signal n.
x = 0.1*randn(1000,1);
n = 0.001*randn(1000,1);
d = filt(x) + n;
Adaptive Filter
To use the normalized LMS algorithm variation, set the Method property of dsp.LMSFilter to 'Normalized LMS'. Set the length of the adaptive filter to 13 taps and the step size to 0.2.
mu = 0.2;
lms = dsp.LMSFilter(13,'StepSize',mu,'Method', ...
    'Normalized LMS');
Pass the primary input signal x and the desired signal d to the LMS filter.
[y,e,w] = lms(x,d);
The output y of the adaptive filter is the signal that converges toward the desired signal d, thereby minimizing the error e between the two signals.
plot(1:1000,[d,y,e])
title('System Identification by Normalized LMS Algorithm')
legend('Desired','Output','Error')
xlabel('Time Index')
ylabel('Signal Value')
Compare the Adapted Filter to the Unknown System
The weights vector w represents the coefficients of the LMS filter that is adapted to resemble the unknown system (FIR filter). To confirm the convergence, compare the numerator of the FIR filter and the estimated weights of the adaptive filter.
stem([(filt.Numerator).' w])
title('System Identification by Normalized LMS Algorithm')
legend('Actual filter weights','Estimated filter weights', ...
    'Location','NorthEast')
Compare Convergence Performance Between LMS Algorithm and Normalized LMS Algorithm
An adaptive filter adapts its filter coefficients to match the coefficients of an unknown system. The objective is to minimize the error signal between the output of the unknown system and the output of the adaptive filter. When these two outputs converge and match closely for the same input, the coefficients are said to match closely, and the adaptive filter at this state resembles the unknown system. This example compares the rate at which this convergence happens for the normalized LMS (NLMS) algorithm and the LMS algorithm with no normalization.
Unknown System
Create a dsp.FIRFilter object that represents the unknown system. Pass the signal x as an input to the unknown system. The desired signal d is the sum of the output of the unknown system (FIR filter) and an additive noise signal n.
filt = dsp.FIRFilter;
filt.Numerator = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2], ...
    {'w' 'c'});
x = 0.1*randn(1000,1);
n = 0.001*randn(1000,1);
d = filt(x) + n;
Adaptive Filter
Create two dsp.LMSFilter objects, with one set to the LMS algorithm and the other set to the normalized LMS algorithm. Choose an adaptation step size of 0.2 and set the length of the adaptive filter to 13 taps.
mu = 0.2;
lms_nonnormalized = dsp.LMSFilter(13,'StepSize',mu, ...
    'Method','LMS');
lms_normalized = dsp.LMSFilter(13,'StepSize',mu, ...
    'Method','Normalized LMS');
Pass the primary input signal x and the desired signal d to both variations of the LMS algorithm. The variables e1 and e2 represent the error between the desired signal and the output of the normalized and nonnormalized filters, respectively.
[~,e1,~] = lms_normalized(x,d);
[~,e2,~] = lms_nonnormalized(x,d);
Plot the error signals for both variations. The error signal for the NLMS variant converges to zero much faster than the error signal for the LMS variant. The normalized version adapts in far fewer iterations to a result almost as good as the nonnormalized version.
plot([e1,e2]);
title('Comparing the LMS and NLMS Convergence Performance');
legend('NLMS derived filter weights', ...
    'LMS derived filter weights','Location','NorthEast');
xlabel('Time Index')
ylabel('Signal Value')
Cancel Noise Using LMS Filter
Cancel additive noise, n, added to an unknown system using an LMS adaptive filter. The LMS filter adapts its coefficients until its transfer function matches the transfer function of the unknown system as closely as possible. The difference between the output of the adaptive filter and the output of the unknown system is the error signal, e. Minimizing this error signal is the objective of the adaptive filter.
The unknown system and the LMS filter process the same input signal, x, and produce outputs d and y, respectively. If the coefficients of the adaptive filter match the coefficients of the unknown system, the error, e, in effect represents the additive noise.
Create a dsp.FIRFilter System object to represent the unknown system. Create a dsp.LMSFilter object, and set the length to 11 taps and the step size to 0.05. Create a sine wave to represent the noise added to the unknown system. View the signals in a time scope.
framesize = 100;
niter = 10;
lmsfilt2 = dsp.LMSFilter('Length',11,'Method','Normalized LMS', ...
    'StepSize',0.05);
firfilt2 = dsp.FIRFilter('Numerator',fir1(10,[.5, .75]));
sinewave = dsp.SineWave('Frequency',0.01, ...
    'SampleRate',1,'SamplesPerFrame',framesize);
scope = timescope('TimeUnits','seconds', ...
    'YLimits',[-3 3],'BufferLength',2*framesize*niter, ...
    'ShowLegend',true,'ChannelNames', ...
    {'Noisy signal','Error signal'});
Create a random input signal, x, and pass the signal to the FIR filter. Add a sine wave to the output of the FIR filter to generate the noisy signal, d. The signal d is the output of the unknown system. Pass the noisy signal and the primary input signal to the LMS filter. View the noisy signal and the error signal in the time scope.
for k = 1:niter
    x = randn(framesize,1);
    d = firfilt2(x) + sinewave();
    [y,e,w] = lmsfilt2(x,d);
    scope([d,e])
end
release(scope)
The error signal, e, is the sinusoidal noise added to the unknown system. Minimizing the error signal minimizes the noise added to the system.
Noise Cancellation Using Sign-Data LMS Algorithm
When the amount of computation required to derive an adaptive filter drives your development process, the sign-data variant of the LMS (SDLMS) algorithm might be a very good choice, as demonstrated in this example.
In the standard and normalized variations of the LMS adaptive filter, the coefficients for the adapting filter arise from the mean square error between the desired signal and the output signal from the unknown system. The sign-data algorithm changes the mean square error calculation by using the sign of the input data to change the filter coefficients.
When the input is positive, the new coefficients are the previous coefficients plus the error multiplied by the step size µ. If the input is negative, the new coefficients are the previous coefficients minus the error multiplied by µ (note the sign change).
When the input is zero, the new coefficients are the same as the previous set.
In vector form, the sign-data LMS algorithm is

    w(k+1) = w(k) + mu*e(k)*sign(x(k)),

where

    e(k) = d(k) - w'(k)*x(k),

with vector w(k) containing the weights applied to the filter coefficients and vector x(k) containing the input data. The vector e(k) is the error between the desired signal and the filtered signal. The objective of the SDLMS algorithm is to minimize this error. The step size is represented by mu.
With a smaller mu, the correction to the filter weights gets smaller for each sample, and the SDLMS error falls more slowly. A larger mu changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select mu within the following practical bounds:

    0 < mu < 1/(N*InputSignalPower),

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computing.
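A simplified per-sample reference loop for the three sign variants is sketched below. It illustrates the update rules above (assuming column vectors x and d, a filter length L, and a step size mu); it is not the object's internal implementation.
w = zeros(L,1);                        % filter weights
u = zeros(L,1);                        % regressor of the L most recent input samples
for k = 1:numel(x)
    u = [x(k); u(1:end-1)];            % shift in the newest input sample
    e = d(k) - w.'*u;                  % error for this sample
    w = w + mu*e*sign(u);              % sign-data LMS update
    % w = w + mu*sign(e)*u;            % sign-error LMS update
    % w = w + mu*sign(e)*sign(u);      % sign-sign LMS update
end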
Note: How you set the initial conditions of the sign-data algorithm profoundly influences the effectiveness of the adaptation process. Because the algorithm essentially quantizes the input signal, the algorithm can become unstable easily.
A series of large input values, coupled with the quantization process, can result in the error growing beyond all bounds. Restrain the tendency of the sign-data algorithm to get out of control by choosing a small step size and setting the initial conditions for the algorithm to nonzero positive and negative values.
In this noise cancellation example, set the Method property of dsp.LMSFilter to 'Sign-Data LMS'. This example requires two input data sets:
Data containing a signal corrupted by noise. In the block diagram, this is the desired signal d. The noise cancellation process removes the noise from the signal.
Data containing random noise. In the block diagram, this is the input x. The signal x is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.
For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.
signal = sin(2*pi*0.055*(0:1000-1)');
Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter and then add the filtered noise to the signal.
noise = randn(1000,1);
filt = dsp.FIRFilter;
filt.Numerator = fir1(11,0.4);
fnoise = filt(noise);
d = signal + fnoise;
fnoise is the correlated noise and d is now the desired input to the sign-data algorithm.
To prepare the dsp.LMSFilter object for processing, set the initial conditions of the filter weights and mu (StepSize). As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path.
In System Identification of FIR Filter Using LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros. In most cases that approach does not work for the sign-data algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.
For this example, start with the coefficients used in the noise filter (filt.Numerator), and modify them slightly so the algorithm has to adapt.
coeffs = (filt.Numerator).' - 0.01; % Set the filter initial conditions.
mu = 0.05;                          % Set the step size for algorithm updating.
With the required input arguments for dsp.LMSFilter prepared, construct the LMS filter object, run the adaptation, and view the results.
lms = dsp.LMSFilter(12,'Method','Sign-Data LMS', ...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
l = 200;
plot(0:l-1,signal(1:l),0:l-1,e(1:l));
title('Noise Cancellation by the Sign-Data Algorithm');
legend('Actual signal','Result of noise cancellation', ...
    'Location','NorthEast');
xlabel('Time Index')
ylabel('Signal Values')
When dsp.LMSFilter runs, it uses far fewer multiplication operations than either of the standard LMS algorithms. Also, performing the sign-data adaptation requires only bit shifting for the multiplication when the step size is a power of two.
Although the performance of the sign-data algorithm as shown in this plot is quite good, the sign-data algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the processed signal is a very good match to the input signal, but the algorithm could very easily grow without bound rather than achieve good performance.
Changing the weight initial conditions (InitialConditions) and mu (StepSize), or even the lowpass filter you used to create the correlated noise, can cause noise cancellation to fail.
Noise Cancellation Using Sign-Error LMS Algorithm
In the standard and normalized variations of the LMS adaptive filter, the coefficients for the adapting filter arise from calculating the mean square error between the desired signal and the output signal from the unknown system, and applying the result to the current filter coefficients. The sign-error LMS (SELMS) algorithm replaces the mean square error calculation by using the sign of the error to modify the filter coefficients.
When the error is positive, the new coefficients are the previous coefficients plus the input multiplied by the step size mu. If the error is negative, the new coefficients are the previous coefficients minus the input multiplied by mu (note the sign change). When the input is zero, the new coefficients are the same as the previous set.
In vector form, the sign-error LMS algorithm is

    w(k+1) = w(k) + mu*sign(e(k))*x(k),

where

    e(k) = d(k) - w'(k)*x(k),

with vector w(k) containing the weights applied to the filter coefficients and vector x(k) containing the input data. The vector e(k) is the error between the desired signal and the filtered signal. The objective of the SELMS algorithm is to minimize this error.
With a smaller mu, the correction to the filter weights gets smaller for each sample and the SELMS error falls more slowly. A larger mu changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select mu within the following practical bounds:

    0 < mu < 1/(N*InputSignalPower),

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computation.
Note: How you set the initial conditions of the sign-error algorithm profoundly influences the effectiveness of the adaptation process. Because the algorithm essentially quantizes the error signal, the algorithm can become unstable easily.
A series of large error values, coupled with the quantization process, can result in the error growing beyond all bounds. Restrain the tendency of the sign-error algorithm to become unstable by choosing a small step size and setting the initial conditions for the algorithm to nonzero positive and negative values.
In this noise cancellation example, set the Method property of dsp.LMSFilter to 'Sign-Error LMS'. This example requires two input data sets:
Data containing a signal corrupted by noise. In the block diagram, this is the desired signal d. The noise cancellation process removes the noise from the signal.
Data containing random noise. In the block diagram, this is the input x. The signal x is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.
For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.
signal = sin(2*pi*0.055*(0:1000-1)');
Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter and then add the filtered noise to the signal.
noise = randn(1000,1);
filt = dsp.FIRFilter;
filt.Numerator = fir1(11,0.4);
fnoise = filt(noise);
d = signal + fnoise;
fnoise is the correlated noise and d is now the desired input to the sign-error algorithm.
To prepare the dsp.LMSFilter object for processing, set the initial conditions of the filter weights (InitialConditions) and mu (StepSize). As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path.
In System Identification of FIR Filter Using LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros. In most cases that approach does not work for the sign-error algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.
For this example, start with the coefficients used in the noise filter (filt.Numerator) and modify them slightly so the algorithm has to adapt.
coeffs = (filt.Numerator).' - 0.01; % Set the filter initial conditions.
mu = 0.05;                          % Set the step size for algorithm updating.
With the required input arguments for dsp.LMSFilter prepared, run the adaptation and view the results.
lms = dsp.LMSFilter(12,'Method','Sign-Error LMS', ...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
l = 200;
plot(0:l-1,signal(1:l),0:l-1,e(1:l));
title('Noise Cancellation Performance by the Sign-Error LMS Algorithm');
legend('Actual signal','Error after noise reduction', ...
    'Location','NorthEast')
xlabel('Time Index')
ylabel('Signal Value')
When the sign-error LMS algorithm runs, it uses far fewer multiplication operations than either of the standard LMS algorithms. Also, performing the sign-error adaptation requires only bit shifts when the step size is a power of two.
Although the performance of the sign-error algorithm as shown in this plot is quite good, the sign-error algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the adapted signal is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.
Changing the weight initial conditions (InitialConditions) and mu (StepSize), or even the lowpass filter you used to create the correlated noise, can cause noise cancellation to fail and the algorithm to become useless.
Noise Cancellation Using Sign-Sign LMS Algorithm
The sign-sign LMS (SSLMS) algorithm replaces the mean square error calculation by using the signs of both the input data and the error to change the filter coefficients. Each coefficient changes by the step size mu, with the direction of the change determined by the sign of the error and the sign of the corresponding input sample. When the input is zero, the new coefficients are the same as the previous set.
In essence, the algorithm quantizes both the error and the input by applying the sign operator to them.
In vector form, the sign-sign LMS algorithm is

    w(k+1) = w(k) + mu*sign(e(k))*sign(x(k)),

where

    e(k) = d(k) - w'(k)*x(k).

Vector w(k) contains the weights applied to the filter coefficients and vector x(k) contains the input data. The vector e(k) is the error between the desired signal and the filtered signal. The objective of the SSLMS algorithm is to minimize this error.
With a smaller mu, the correction to the filter weights gets smaller for each sample and the SSLMS error falls more slowly. A larger mu changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select mu within the following practical bounds:

    0 < mu < 1/(N*InputSignalPower),

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computation.
Note: How you set the initial conditions of the sign-sign algorithm profoundly influences the effectiveness of the adaptation process. Because the algorithm essentially quantizes both the input signal and the error signal, the algorithm can become unstable easily.
A series of large error values, coupled with the quantization process, can result in the error growing beyond all bounds. Restrain the tendency of the sign-sign algorithm to become unstable by choosing a small step size and setting the initial conditions for the algorithm to nonzero positive and negative values.
In this noise cancellation example, set the Method property of dsp.LMSFilter to 'Sign-Sign LMS'. This example requires two input data sets:
Data containing a signal corrupted by noise. In the block diagram, this is the desired signal d. The noise cancellation process removes the noise from the signal.
Data containing random noise. In the block diagram, this is the input x. The signal x is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.
For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.
signal = sin(2*pi*0.055*(0:1000-1)');
Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, then add the filtered noise to the signal.
noise = randn(1000,1);
filt = dsp.FIRFilter;
filt.Numerator = fir1(11,0.4);
fnoise = filt(noise);
d = signal + fnoise;
fnoise is the correlated noise and d is now the desired input to the sign-sign algorithm.
to prepare the dsp.lmsfilter
object for processing, set the initial conditions of the filter weights (initialconditions
) and mu
(stepsize
). as noted earlier in this section, the values you set for coeffs
and mu
determine whether the adaptive filter can remove the noise from the signal path. in system identification of fir filter using lms algorithm, you constructed a default filter that sets the filter coefficients to zeros. usually that approach does not work for the sign-sign algorithm.
the closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively. for this example, you start with the coefficients used in the noise filter (filt.numerator
), and modify them slightly so the algorithm has to adapt.
coeffs = (filt.numerator).' -0.01; % set the filter initial conditions.
mu = 0.05;
with the required input arguments for dsp.lmsfilter prepared, run the adaptation and view the results.
lms = dsp.LMSFilter(12,'Method','Sign-sign LMS', ...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
L = 200;
plot(0:L-1,signal(1:L),0:L-1,e(1:L))
title('Noise cancellation performance by the sign-sign LMS algorithm')
legend('Actual signal','Error after noise reduction', ...
    'Location','northeast')
xlabel('Time index')
ylabel('Signal value')
when dsp.lmsfilter runs, it uses far fewer multiplication operations than either of the standard lms algorithms. also, performing the sign-sign adaptation requires only bit-shifting multiplies when the step size is a power of two.
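as an illustration of that last point (not part of the object's implementation), scaling a fixed-point quantity by a power-of-two step size reduces to an arithmetic shift. the values and names in this sketch are arbitrary.

```matlab
% illustrative only: multiplying by a power-of-two step size is a bit shift
mu       = 2^-5;                      % step size chosen as a power of two
corrTerm = int32(4096);               % an example fixed-point correction term
viaMul   = int32(double(corrTerm)*mu);    % scale by mu using a multiply
viaShift = bitshift(corrTerm,-5);         % same scaling as an arithmetic right shift by 5
isequal(viaMul,viaShift)              % returns true for this value
```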
although the performance of the sign-sign algorithm as shown in this plot is quite good, the sign-sign algorithm is much less stable than the standard lms variations. in this noise cancellation example, the adapted signal is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.
changing the weight initial conditions (initialconditions) and mu (stepsize), or even the lowpass filter you used to create the correlated noise, can cause noise cancellation to fail and the algorithm to become useless.
access full history of lms filter weights
note: this example runs only in r2017a or later. if you are using a release earlier than r2017a, the object does not output a full sample-by-sample history of filter weights.
initialize the dsp.lmsfilter system object and set the weightsoutput property to 'all'. this setting enables the lms filter to output a matrix of weights with dimensions [framelength length], corresponding to the full sample-by-sample history of weights for all framelength samples of input values.
framesize = 15000;
lmsfilt3 = dsp.LMSFilter('Length',63,'Method','LMS', ...
    'StepSize',0.001,'LeakageFactor',0.99999, ...
    'WeightsOutput','All');   % full weights history
w_actual = fir1(64,[0.5 0.75]);
firfilt3 = dsp.FIRFilter('Numerator',w_actual);
sinewave = dsp.SineWave('Frequency',0.01, ...
    'SampleRate',1,'SamplesPerFrame',framesize);
scope = timescope('TimeUnits','seconds', ...
    'YLimits',[-0.25 0.75],'BufferLength',2*framesize, ...
    'ShowLegend',true,'ChannelNames', ...
    {'Coeff 33 estimate','Coeff 34 estimate','Coeff 35 estimate', ...
     'Coeff 33 actual','Coeff 34 actual','Coeff 35 actual'});
run one frame and output the full adaptive weights history, w.
x = randn(framesize,1);        % input signal
d = firfilt3(x) + sinewave();  % desired signal: filtered input plus a sine wave
[~,~,w] = lmsfilt3(x,d);
each row in w is a set of weights estimated for the respective input sample. each column in w gives the complete history of a specific weight. plot the actual weight and the entire history of the 33rd, 34th, and 35th weights. in the plot, you can see that the estimated weight output eventually converges with the actual weight as the adaptive filter receives input samples and continues to adapt.
idxbeg = 33;
idxend = 35;
scope([w(:,idxbeg:idxend), repmat(w_actual(idxbeg:idxend),framesize,1)])
more about
fixed point
the following diagrams show the data types used within the dsp.lmsfilter object for fixed-point signals. the table summarizes the definitions of the variables used in the diagrams:
variable | definition |
---|---|
u | input vector |
w | vector of filter weights |
µ | step size |
e | error |
q | quotient used in the normalized lms weight update |
product u'u | product data type in the energy calculation diagram |
accumulator u'u | accumulator data type in the energy calculation diagram |
product w'u | product data type in the convolution diagram |
accumulator w'u | accumulator data type in the convolution diagram |
product µe | product data type in the product of step size and error diagram |
product (weight update) | product and accumulator data type in the weight update diagram ¹ |

¹ the accumulator data type for this quantity is automatically set to be the same as the product data type. the minimum, maximum, and overflow information for this accumulator is logged as part of the product information. autoscaling treats this product and accumulator as one data type.
you can set the data types of the weights, products, quotient, and accumulators through the system object properties. fixed-point inputs, outputs, and system object properties must have the following characteristics:
the input signal and the desired signal must have the same word length, but their fraction lengths can differ.
the step size and leakage factor must have the same word length, but their fraction lengths can differ.
the output signal and the error signal have the same word length and the same fraction length as the desired signal.
the quotient and the product outputs of the u'u, w'u, step-size-and-error, and weight-update operations must have the same word length, but their fraction lengths can differ.
the accumulator data type of the u'u and w'u operations must have the same word length, but their fraction lengths can differ.
the output of the multiplier is in the product output data type if at least one of the inputs to the multiplier is real. if both of the inputs to the multiplier are complex, the result of the multiplication is in the accumulator data type. for details on the complex multiplication performed, see .
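as a brief, hedged illustration of these rules, the sketch below calls the filter with fixed-point input and desired signals that share a word length but use different fraction lengths. the word lengths, fraction lengths, and filter settings are arbitrary choices for this sketch, and fi requires fixed-point designer.

```matlab
% illustrative fixed-point call; word lengths, fraction lengths, and filter
% settings are arbitrary choices for this sketch
x = fi(0.1*randn(256,1),1,16,14);   % input: signed, 16-bit word, 14-bit fraction
d = fi(0.1*randn(256,1),1,16,12);   % desired: same word length, different fraction length
lmsfi = dsp.LMSFilter(8,'StepSize',0.002);
[y,e,w] = lmsfi(x,d);               % y and e take the word and fraction length of d
```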
algorithms
the lms filter algorithm is defined by the following equations:

$$y(n) = \mathbf{w}^{T}(n-1)\,\mathbf{u}(n)$$

$$e(n) = d(n) - y(n)$$

$$\mathbf{w}(n) = \alpha\,\mathbf{w}(n-1) + f\big(\mathbf{u}(n), e(n), \mu\big)$$

the various lms adaptive filter algorithms available in this system object are defined by the weight-update term $f\big(\mathbf{u}(n), e(n), \mu\big)$:

lms –– solves the wiener-hopf equation and finds the filter coefficients for an adaptive filter.

$$f\big(\mathbf{u}(n), e(n), \mu\big) = \mu\,e(n)\,\mathbf{u}^{*}(n)$$

normalized lms –– normalized variation of the lms algorithm.

$$f\big(\mathbf{u}(n), e(n), \mu\big) = \mu\,e(n)\,\frac{\mathbf{u}^{*}(n)}{\varepsilon + \mathbf{u}^{H}(n)\,\mathbf{u}(n)}$$

in normalized lms, to overcome potential numerical instability in the update of the weights, a small positive constant ε has been added in the denominator. for double-precision floating-point input, ε is the output of the eps function. for single-precision floating-point input, ε is the output of eps("single"). for fixed-point input, ε is 0.

sign-data lms –– correction to the filter weights at each iteration depends on the sign of the input u(n).

$$f\big(\mathbf{u}(n), e(n), \mu\big) = \mu\,e(n)\,\operatorname{sign}\big(\mathbf{u}(n)\big)$$

where u(n) is real.

sign-error lms –– correction applied to the current filter weights for each successive iteration depends on the sign of the error, e(n).

$$f\big(\mathbf{u}(n), e(n), \mu\big) = \mu\,\operatorname{sign}\big(e(n)\big)\,\mathbf{u}^{*}(n)$$

sign-sign lms –– correction applied to the current filter weights for each successive iteration depends on both the sign of u(n) and the sign of e(n).

$$f\big(\mathbf{u}(n), e(n), \mu\big) = \mu\,\operatorname{sign}\big(e(n)\big)\,\operatorname{sign}\big(\mathbf{u}(n)\big)$$

where u(n) is real.
the variables are as follows:
variable | description |
---|---|
n | the current time index |
u(n) | the vector of buffered input samples at step n |
u*(n) | the complex conjugate of the vector of buffered input samples at step n |
w(n) | the vector of filter weight estimates at step n |
y(n) | the filtered output at step n |
e(n) | the estimation error at step n |
d(n) | the desired response at step n |
µ | the adaptation step size |
α | the leakage factor (0 < α ≤ 1) |
ε | a constant that corrects any potential numerical instability that occurs during the update of weights |
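the following matlab loop is a hedged sketch of the normalized lms recursion written out sample by sample, using the symbols from the table above. the signal lengths, filter order, and step size are arbitrary choices, and the loop is not the object's actual implementation.

```matlab
% sketch of the normalized lms recursion, sample by sample (not the object's code)
rng(0)
nTaps = 8;  mu = 0.5;  alpha = 1;            % filter length, step size, leakage factor
x = randn(500,1);                            % input samples
d = filter(fir1(nTaps-1,0.4),1,x);           % desired response d(n)
w = zeros(nTaps,1);                          % weight estimates w(n)
u = zeros(nTaps,1);                          % buffered input samples u(n)
y = zeros(500,1);  e = zeros(500,1);
epsConst = eps;                              % small constant added to the denominator
for n = 1:numel(x)
    u = [x(n); u(1:end-1)];                  % update the input buffer
    y(n) = w.'*u;                            % filtered output, y(n) = w'(n-1)u(n)
    e(n) = d(n) - y(n);                      % estimation error
    w = alpha*w + mu*e(n)*u/(epsConst + u'*u);   % normalized lms weight update
end
```

under these assumptions, the error e should decay as the weights converge toward the fir1 coefficients; the dsp.lmsfilter object with method set to 'normalized lms' performs the same recursion, together with the data-type handling described under fixed point.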
references
[1] hayes, m.h. statistical digital signal processing and modeling. new york: john wiley & sons, 1996.
extended capabilities
c/c++ code generation
generate c and c++ code using matlab® coder™.
usage notes and limitations:
see (matlab coder).
the dsp.lmsfilter system object supports simd code generation using intel avx2 technology under these conditions:
method is set to 'lms' or 'normalized lms'.
weightsoutput is set to 'none' or 'last'.
input signal is real-valued.
input signal has a data type of single or double.
the simd technology significantly improves the performance of the generated code.
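as a hedged sketch of a setup that meets these conditions, the entry-point function below is hypothetical; its name, signal sizes, and the instructionsetextensions configuration setting are assumptions for illustration, so check your matlab coder release for the exact options.

```matlab
% hypothetical entry-point function (save as lmsStep.m); names and sizes are
% illustrative, not taken from the documentation
function [y,e] = lmsStep(x,d) %#codegen
persistent lmsObj
if isempty(lmsObj)
    % real-valued double input, 'lms' method, no weights output: meets the simd conditions
    lmsObj = dsp.LMSFilter(32,'Method','LMS','WeightsOutput','None');
end
[y,e] = lmsObj(x,d);
end
```

a possible code generation command, assuming the instructionsetextensions option is available in your release:

```matlab
cfg = coder.config('lib');
cfg.InstructionSetExtensions = 'AVX2';   % assumed option name; requests avx2 simd
codegen lmsStep -config cfg -args {zeros(1024,1),zeros(1024,1)} -report
```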
version history
introduced in r2012a
see also
functions
objects
dsp.frequencydomainadaptivefilter | dsp.adaptivelatticefilter | dsp.affineprojectionfilter | dsp.fasttransversalfilter | dsp.rlsfilter
blocks
topics