Dummy Variables
This topic provides an introduction to dummy variables, describes how the software creates them for classification and regression problems, and shows how you can create dummy variables by using the dummyvar function.
What Are Dummy Variables?
When you perform classification and regression analysis, you often need to include both continuous (quantitative) and categorical (qualitative) predictor variables. However, you should not include a categorical variable as a numeric array. Numeric arrays have both order and magnitude. A categorical variable can have order (for example, an ordinal variable), but it does not have magnitude. Using a numeric array implies a known “distance” between the categories. The appropriate way to include categorical predictors is as dummy variables. To define dummy variables, use indicator variables that have the values 0 and 1.
The software chooses one of four schemes to define dummy variables based on the type of analysis, as described in the next sections. For example, suppose you have a categorical variable with three categories: cool, cooler, and coolest.
Full Dummy Variables
Represent the categorical variable with three categories using three dummy variables, one variable for each category. x0 is a dummy variable that has the value 1 for cool, and 0 otherwise. x1 is a dummy variable that has the value 1 for cooler, and 0 otherwise. x2 is a dummy variable that has the value 1 for coolest, and 0 otherwise.
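For example, you can produce this full coding with the dummyvar function, which is described later in this topic. The sample data below is only an illustration.
temp = categorical({'cool';'coolest';'cooler';'cool'});
D = dummyvar(temp)   % one 0/1 column per category, in ascending alphabetical order: cool, cooler, coolest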
Dummy Variables with Reference Group
Represent the categorical variable with three categories using two dummy variables with a reference group. You can distinguish cool, cooler, and coolest using only x1 and x2, without x0. Observations for cool have 0s for both dummy variables. The category represented by all 0s is the reference group.
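As a minimal sketch, assuming you start from the full dummy matrix returned by dummyvar, you can obtain this coding by dropping the column of the reference category so that its observations contain only 0s.
temp = categorical({'cool';'coolest';'cooler';'cool'});
D = dummyvar(temp);    % full coding: columns for cool, cooler, coolest
Dref = D(:,2:end)      % drop the cool column; cool becomes the reference group (rows of 0s)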
Dummy Variables for Ordered Categorical Variable
Assume the mathematical ordering of the categories is cool < cooler < coolest. This coding scheme uses 1 and –1 values, and uses more 1s for higher categories, to indicate the ordering. x1 is a dummy variable that has the value 1 for cooler and coolest, and –1 for cool. x2 is a dummy variable that has the value 1 for coolest, and –1 otherwise.
You can indicate that a categorical variable has mathematical ordering by using the 'Ordinal' name-value pair argument of the categorical function.
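The following lines reproduce this coding manually for illustration only; they are not a toolbox function. They assume the three categories above and rely on relational operators, which ordinal categorical arrays support.
temp = categorical({'cool';'coolest';'cooler'}, ...
    {'cool','cooler','coolest'},'Ordinal',true);
x1 = 2*(temp >= 'cooler') - 1;    % 1 for cooler and coolest, -1 for cool
x2 = 2*(temp >= 'coolest') - 1;   % 1 for coolest, -1 otherwise
X = [x1 x2]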
Dummy Variables Created with Effects Coding
Effects coding uses 1, 0, and –1 to create dummy variables. Instead of using 0 values to represent a reference group, as in Dummy Variables with Reference Group, effects coding uses –1 to represent the last category.
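As a rough illustration (again, not a toolbox function), you can build effects-coded dummy variables from the full coding by dropping the last category's column and assigning –1 to the rows of that category.
temp = categorical({'cool';'cooler';'coolest';'cooler'});
D = dummyvar(temp);       % full coding: cool, cooler, coolest
E = D(:,1:end-1);         % keep columns for all but the last category
E(D(:,end)==1,:) = -1     % rows in the last category (coolest) get -1 in every column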
Creating Dummy Variables
Automatic Creation of Dummy Variables
Statistics and Machine Learning Toolbox™ offers several classification and regression fitting functions that accept categorical predictors. Some fitting functions create dummy variables to handle categorical predictors.
The following is the default behavior of the fitting functions in identifying categorical predictors.
If the predictor data is in a table, the functions assume that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. The fitting functions that use decision trees assume ordered categorical vectors to be continuous variables.
If the predictor data is a matrix, the functions assume that all predictors are continuous.
To identify any other predictors as categorical predictors, specify them by using the 'CategoricalPredictors' or 'CategoricalVars' name-value pair argument.
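For example, if the predictor data is a matrix, you can mark a column as categorical when you call a fitting function such as fitcsvm. The data below is hypothetical.
rng default
X = [randn(20,1) randi(3,20,1)];   % column 2 holds integer category codes
Y = categorical(randi(2,20,1));    % binary class labels
Mdl = fitcsvm(X,Y,'CategoricalPredictors',2);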
The fitting functions handle the identified categorical predictors as follows:
fitclinear, fitcnet, fitcsvm, fitrgp, fitrlinear, and fitrsvm use two different schemes to create dummy variables, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, the functions use Full Dummy Variables. For an ordered categorical variable, the functions use Dummy Variables for Ordered Categorical Variable.
Parametric regression fitting functions (for example, fitlm and fitglm) use Dummy Variables with Reference Group. When the functions include the dummy variables, the estimated coefficients of the dummy variables are relative to the reference group.
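For instance, the following sketch uses the carsmall sample data set (assuming it is available on your installation). fitlm expands the categorical Origin variable into dummy variables with a reference group, so the Origin coefficients in the display are relative to the reference category.
load carsmall
tbl = table(MPG,Weight,categorical(cellstr(Origin)), ...
    'VariableNames',{'MPG','Weight','Origin'});
mdl = fitlm(tbl,'MPG ~ Weight + Origin')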
Functions such as fitlm and fitglm allow you to specify the scheme for creating dummy variables by using the 'DummyVarCoding' name-value pair argument. The functions support three schemes: full dummy variables ('DummyVarCoding','full'), dummy variables with reference group ('DummyVarCoding','reference'), and dummy variables created with effects coding ('DummyVarCoding','effects'). Note that these functions do not offer a name-value pair argument for specifying categorical variables.
Other fitting functions that accept categorical predictors use algorithms that can handle categorical predictors without creating dummy variables.
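Continuing the hypothetical carsmall sketch above, you can request a different coding scheme as follows.
load carsmall
tbl = table(MPG,Weight,categorical(cellstr(Origin)), ...
    'VariableNames',{'MPG','Weight','Origin'});
mdlEffects = fitlm(tbl,'MPG ~ Weight + Origin','DummyVarCoding','effects');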
Manual Creation of Dummy Variables
This example shows how to create your own dummy variable design matrix by using the dummyvar function. This function accepts grouping variables and returns a matrix containing zeros and ones, whose columns are dummy variables for the grouping variables.
Create a column vector of categorical data specifying gender.
gender = categorical({'male';'female';'female';'male';'female'});
Create dummy variables for gender.
dv = dummyvar(gender)
dv = 5×2
0 1
1 0
1 0
0 1
1 0
dv has five rows, corresponding to the number of rows in gender, and two columns for the unique groups, female and male. Column order corresponds to the order of the levels in gender. For categorical arrays, the default order is ascending alphabetical. You can check the order by using the categories function.
categories(gender)
ans = 2x1 cell
{'female'}
{'male' }
To use the dummy variables in a regression model, you must either delete a column (to create a reference group) or fit a regression model with no intercept term. For the gender example, you need only one dummy variable to represent two genders. Notice what happens if you add an intercept term to the complete design matrix dv.
x = [ones(5,1) dv]
x = 5×3
1 0 1
1 1 0
1 1 0
1 0 1
1 1 0
rank(x)
ans = 2
The design matrix with an intercept term is not of full rank and is not invertible. Because of this linear dependence, use only c – 1 indicator variables to represent a categorical variable with c categories in a regression model with an intercept term.
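Continuing this example, one way to proceed is to keep only c – 1 = 1 dummy column together with the intercept, which restores full column rank. The variable name xref is arbitrary.
xref = [ones(5,1) dv(:,2)];   % keep only the male indicator; female is the reference group
rank(xref)                    % returns 2, equal to the number of columns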
See Also