The quality of data and the amount of useful information it contains are key factors that determine how well a machine learning algorithm can learn. Therefore, it is critical that we encode categorical variables correctly before we feed data into a machine learning algorithm.
In this article, we will use simple yet effective examples to explain how to deal with categorical data when training machine learning algorithms, and how to map ordinal and nominal feature values to integer representations.
The article is an excerpt from the book Python Machine Learning - Third Edition by Sebastian Raschka and Vahid Mirjalili. This book is a comprehensive guide to machine learning and deep learning with Python. It acts as both a clear step-by-step tutorial and a reference you’ll keep coming back to as you build your machine learning systems.
It is not uncommon that real-world datasets contain one or more categorical feature columns. When we are talking about categorical data, we have to further distinguish between nominal and ordinal features. Ordinal features can be understood as categorical values that can be sorted or ordered.
For example, t-shirt size would be an ordinal feature, because we can define an order XL > L > M. In contrast, nominal features don't imply any order and, to continue with the previous example, we could think of t-shirt color as a nominal feature since it typically doesn't make sense to say that, for example, red is larger than blue.
Categorical data encoding with pandas
Before we explore different techniques to handle such categorical data, let's create a new DataFrame to illustrate the problem:
>>> import pandas as pd
>>> df = pd.DataFrame([
...     ['green', 'M', 10.1, 'class1'],
...     ['red', 'L', 13.5, 'class2'],
...     ['blue', 'XL', 15.3, 'class1']])
>>> df.columns = ['color', 'size', 'price', 'classlabel']
>>> df
   color size  price classlabel
0  green    M   10.1     class1
1    red    L   13.5     class2
2   blue   XL   15.3     class1
As we can see in the preceding output, the newly created DataFrame contains a nominal feature (color), an ordinal feature (size), and a numerical feature (price) column. The class labels (assuming that we created a dataset for a supervised learning task) are stored in the last column.
To make sure that the learning algorithm interprets the ordinal features correctly, we need to convert the categorical string values into integers. Unfortunately, there is no convenient function that can automatically derive the correct order of the labels of our size feature, so we have to define the mapping manually. In the following simple example, let's assume that we know the numerical difference between features, for example, XL = L + 1 = M + 2:
>>> size_mapping = {
...     'XL': 3,
...     'L': 2,
...     'M': 1}
>>> df['size'] = df['size'].map(size_mapping)
>>> df
   color  size  price classlabel
0  green     1   10.1     class1
1    red     2   13.5     class2
2   blue     3   15.3     class1
If we want to transform the integer values back to the original string representation at a later stage, we can simply define a reverse-mapping dictionary inv_size_mapping = {v: k for k, v in size_mapping.items()} that can then be used via the pandas map method on the transformed feature column, similar to the size_mapping dictionary that we used previously. We can use it as follows:
>>> inv_size_mapping = {v: k for k, v in size_mapping.items()}
>>> df['size'].map(inv_size_mapping)
0     M
1     L
2    XL
Name: size, dtype: object
Many machine learning libraries require that class labels are encoded as integer values. Although most estimators for classification in scikit-learn convert class labels to integers internally, it is considered good practice to provide class labels as integer arrays to avoid technical glitches. To encode the class labels, we can use an approach similar to the mapping of ordinal features discussed previously. We need to remember that class labels are not ordinal, and it doesn't matter which integer number we assign to a particular string label. Thus, we can simply enumerate the class labels, starting at 0:
>>> import numpy as np
>>> class_mapping = {label: idx for idx, label in
...                  enumerate(np.unique(df['classlabel']))}
>>> class_mapping
{'class1': 0, 'class2': 1}
Next, we can use the mapping dictionary to transform the class labels into integers:
>>> df['classlabel'] = df['classlabel'].map(class_mapping)
>>> df
   color  size  price  classlabel
0  green     1   10.1           0
1    red     2   13.5           1
2   blue     3   15.3           0
We can reverse the key-value pairs in the mapping dictionary as follows to map the converted class labels back to the original string representation:
>>> inv_class_mapping = {v: k for k, v in class_mapping.items()}
>>> df['classlabel'] = df['classlabel'].map(inv_class_mapping)
>>> df
   color  size  price classlabel
0  green     1   10.1     class1
1    red     2   13.5     class2
2   blue     3   15.3     class1
Alternatively, there is a convenient LabelEncoder class directly implemented in scikit-learn to achieve this:
>>> from sklearn.preprocessing import LabelEncoder
>>> class_le = LabelEncoder()
>>> y = class_le.fit_transform(df['classlabel'].values)
>>> y
array([0, 1, 0])
Note that the fit_transform method is just a shortcut for calling fit and transform separately, and we can use the inverse_transform method to transform the integer class labels back into their original string representation:
>>> class_le.inverse_transform(y)
array(['class1', 'class2', 'class1'], dtype=object)
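For illustration, here is a minimal sketch of the same label encoding written as two separate fit and transform calls, which is equivalent to the fit_transform shortcut used above:

>>> class_le = LabelEncoder()
>>> class_le.fit(df['classlabel'].values)        # learn the label-to-integer mapping
LabelEncoder()
>>> y = class_le.transform(df['classlabel'].values)  # apply the learned mapping
>>> y
array([0, 1, 0])

This two-step form is useful when we want to fit the encoder on one dataset (for example, the training set) and apply the identical mapping to other data later.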
Earlier, we used a simple dictionary-mapping approach to convert the ordinal size feature into integers. Since scikit-learn's estimators for classification treat class labels as categorical data that does not imply any order (nominal), we used the convenient LabelEncoder to encode the string labels into integers. It may appear that we could use a similar approach to transform the nominal color column of our dataset, as follows:
>>> X = df[['color', 'size', 'price']].values
>>> color_le = LabelEncoder()
>>> X[:, 0] = color_le.fit_transform(X[:, 0])
>>> X
array([[1, 1, 10.1],
       [2, 2, 13.5],
       [0, 3, 15.3]], dtype=object)
After executing the preceding code, the first column of the NumPy array X now holds the new color values, which are encoded as follows:
- blue = 0
- green = 1
- red = 2
If we stop at this point and feed the array to our classifier, we will make one of the most common mistakes in dealing with categorical data. Can you spot the problem? Although the color values don't come in any particular order, a learning algorithm will now assume that green is larger than blue, and red is larger than green. Although this assumption is incorrect, the algorithm could still produce useful results. However, those results would not be optimal.
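To make the pitfall concrete, consider a hypothetical distance-based learner such as k-nearest neighbors: with the integer encoding above, it would treat red as being twice as far from blue as green is, even though no such ordering or distance exists between the colors:

>>> blue, green, red = 0, 1, 2          # integer codes assigned above
>>> abs(red - blue), abs(green - blue)  # spurious "distances" between colors
(2, 1)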
A common workaround for this problem is to use a technique called one-hot encoding. The idea behind this approach is to create a new dummy feature for each unique value in the nominal feature column. Here, we would convert the color feature into three new features: blue, green, and red. Binary values can then be used to indicate the particular color of an example; for example, a blue example can be encoded as blue=1, green=0, red=0. To perform this transformation, we can use the OneHotEncoder that is implemented in scikit-learn's preprocessing module:
>>> from sklearn.preprocessing import OneHotEncoder
>>> X = df[['color', 'size', 'price']].values
>>> color_ohe = OneHotEncoder()
>>> color_ohe.fit_transform(X[:, 0].reshape(-1, 1)).toarray()
array([[0., 1., 0.],
       [0., 0., 1.],
       [1., 0., 0.]])
Note that we applied the OneHotEncoder to a single column (X[:, 0].reshape(-1, 1)) only, to avoid modifying the other two columns in the array as well. If we want to selectively transform columns in a multi-feature array, we can use the ColumnTransformer, which accepts a list of (name, transformer, column(s)) tuples as follows:
>>> from sklearn.compose import ColumnTransformer
>>> X = df[['color', 'size', 'price']].values
>>> c_transf = ColumnTransformer([
...     ('onehot', OneHotEncoder(), [0]),
...     ('nothing', 'passthrough', [1, 2])
... ])
>>> c_transf.fit_transform(X).astype(float)
array([[0.0, 1.0, 0.0, 1, 10.1],
       [0.0, 0.0, 1.0, 2, 13.5],
       [1.0, 0.0, 0.0, 3, 15.3]])
In the preceding code example, we specified that we only want to modify the first column and leave the other two columns untouched via the 'passthrough' argument.
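A useful side effect of fitting such a transformer is that the already fitted c_transf object can be reused to encode new data consistently. The following is a minimal sketch that applies it to a hypothetical, previously unseen t-shirt example (not part of our dataset):

>>> X_new = [['green', 2, 12.0]]   # hypothetical new example: color, size, price
>>> c_transf.transform(X_new).astype(float)
array([[ 0.,  1.,  0.,  2., 12.]])

Because the color categories were learned during fitting, green is mapped to the same dummy columns as before, which is exactly what we need when preprocessing test or production data.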
An even more convenient way to create those dummy features via one-hot encoding is to use the get_dummies method implemented in pandas. Applied to a DataFrame, the get_dummies method will only convert string columns and leave all other columns unchanged:
>>> pd.get_dummies(df[['price', 'color', 'size']])
   price  size  color_blue  color_green  color_red
0   10.1     1           0            1          0
1   13.5     2           0            0          1
2   15.3     3           1            0          0
When we are using one-hot encoding on datasets, we have to keep in mind that it introduces multicollinearity, which can be an issue for certain methods (for instance, methods that require matrix inversion). If features are highly correlated, matrices are computationally difficult to invert, which can lead to numerically unstable estimates. To reduce the correlation among variables, we can simply remove one feature column from the one-hot encoded array. Note that we do not lose any important information by removing a feature column; for example, if we remove the column color_blue, the feature information is still preserved, since observing color_green=0 and color_red=0 implies that the example must be blue.
If we use the get_dummies function, we can drop the first column by passing a True argument to the drop_first parameter, as shown in the following code example:
>>> pd.get_dummies(df[['price', 'color', 'size']],
...                drop_first=True)
   price  size  color_green  color_red
0   10.1     1            1          0
1   13.5     2            0          1
2   15.3     3            0          0
In order to drop a redundant column via the OneHotEncoder, we need to set drop='first' and set categories='auto' as follows:
>>> color_ohe = OneHotEncoder(categories='auto', drop='first')
>>> c_transf = ColumnTransformer([
...     ('onehot', color_ohe, [0]),
...     ('nothing', 'passthrough', [1, 2])
... ])
>>> c_transf.fit_transform(X).astype(float)
array([[ 1. ,  0. ,  1. , 10.1],
       [ 0. ,  1. ,  2. , 13.5],
       [ 0. ,  0. ,  3. , 15.3]])
In this article, we have gone through some of the methods for dealing with categorical data in datasets. We distinguished between nominal and ordinal features, and demonstrated with examples how each can be handled.
To harness the power of the latest Python open source libraries in machine learning, check out the book Python Machine Learning - Third Edition, written by Sebastian Raschka and Vahid Mirjalili.