Terence Parr and Jeremy Howard
Copyright © 2018-2019 Terence Parr. All rights reserved.
“Coming up with features is difficult, time-consuming, requires expert domain knowledge. When working on applications of learning, we spend a lot of time tuning the features.” — Andrew Ng in a Stanford plenary talk on AI, 2011
Creating a good model is more about feature engineering than it is about choosing the right model; well, assuming your go-to model is a good one, like a Random Forest. Feature engineering means improving, acquiring, and even synthesizing features that are strong predictors of your model's target variable. Synthesizing features means deriving new features from existing features or injecting features from other data sources. For example, we could synthesize the name of an apartment's New York City neighborhood from its latitude and longitude. It doesn't matter how sophisticated our model is if we don't give it something useful to chew on. If there is no relationship to discover, because the features are not predictive, no machine learning model is going to give accurate predictions.
So far we've dropped nonnumeric features, such as apartment description strings and manager ID categorical values, because machine learning models can only learn from numeric values. But, there is potentially predictive power that we could exploit in these string values. In this chapter, we're going to learn the basics of feature engineering in order to squeeze a bit more juice out of the nonnumeric features found in the New York City apartment rent data set.
Most of the predictive power for rent price comes from apartment location, number of bedrooms, and number of bathrooms, so we shouldn't expect a massive boost in model performance. The primary goal of this chapter is to learn the techniques for use on real problems in your daily work. Or, if you've entered a Kaggle competition, the difference between the top spot and position 1000 is often razor thin, so even a small advantage from feature engineering could be useful in that context.
2 See Section 3.2.1 Loading and sniffing the training data for instructions on downloading JSON rent data from Kaggle and creating the CSV files.
1 Don't forget the notebooks aggregating the code snippets from the various chapters.
In order to measure any improvements in model accuracy, let's get a baseline using just the cleaned up numeric features from rent.csv in the data directory underneath where we started Jupyter.2 As we did in Chapter 5 Exploring and Denoising Your Data Set, let's load the rent data, strip extreme prices, and remove apartments not in New York City:1
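A sketch of that clean-up (the price cutoffs and New York City bounding box here are illustrative; Chapter 5 has the exact values we used):

```python
import pandas as pd

df = pd.read_csv("data/rent.csv")

# Strip extreme prices and keep only apartments located within New York City
df = df[(df['price'] > 1_000) & (df['price'] < 10_000)]
df = df[(df['latitude'] > 40.55) & (df['latitude'] < 40.94) &
        (df['longitude'] > -74.1) & (df['longitude'] < -73.67)]
```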
Now train an RF using just the numeric features and print the out-of-bag (OOB) score:
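A minimal sketch, assuming the numeric features are bathrooms, bedrooms, latitude, and longitude and a forest of 100 trees:

```python
from sklearn.ensemble import RandomForestRegressor

numfeatures = ['bathrooms', 'bedrooms', 'latitude', 'longitude']
X, y = df[numfeatures], df['price']

rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True)
rf.fit(X, y)
print(rf.oob_score_)
```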
0.8677576247438816
The 0.868 OOB score is pretty good as it's close to 1.0. (Recall that a score of 0 indicates our model does no better than simply guessing the average rent price for all apartments and 1.0 means a perfect predictor of rent price.) Let's also get an idea of how much work the RF has to do in order to capture the relationship between those features and rent price. The rfpimp package provides a few simple measures we can use: the total number of nodes in all decision trees of the forest and the height (in nodes) of the typical tree.
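The rfpimp helpers report these for us; an equivalent sketch using plain sklearn tree attributes looks like this:

```python
import numpy as np

# Sum node counts over all trees and take the median of the per-tree depths
n_nodes = sum(t.tree_.node_count for t in rf.estimators_)
median_height = np.median([t.tree_.max_depth for t in rf.estimators_])
print(f"{n_nodes:,d} tree nodes and {median_height} median tree height")
```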
2,432,836 tree nodes and 35.0 median tree height
The tree height matters because it is the length of the path followed by the RF prediction mechanism, and so tree height affects prediction speed.
Let's also get a baseline for feature importances so that, as we introduce new features, we can get a sense of their predictive power. Longitude and latitude are really one meta-feature called “location,” so we can combine those two using the features parameter to the importances() function. We're going to ask for feature importances a lot in this chapter so let's make a handy function:
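Here's a sketch of such a helper built on rfpimp; the function name showimp and the grouping logic are our choices here:

```python
from rfpimp import importances, plot_importances

def showimp(rf, X, y):
    # Treat latitude/longitude as a single "location" meta-feature;
    # every other column is its own feature
    features = [['latitude', 'longitude']] + \
               [f for f in X.columns if f not in ('latitude', 'longitude')]
    I = importances(rf, X, y, features=features)
    plot_importances(I)
```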
Now that we have a good benchmark for model and feature performance, let's try to improve our model by converting some existing nonnumeric features into numeric features.
The interest_level feature is a categorical variable that seems to encode interest in an apartment, no doubt taken from webpage activity logs. It's a categorical variable because it takes on values from a finite set of choices: low, medium, and high. More specifically, interest_level is an ordinal categorical variable, which means that the values can be ordered even if they are not actual numbers. Looking at the count of each ordinal value is a good way to get an overview of an ordinal feature:
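```python
df['interest_level'].value_counts()
```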
low       33270
medium    11203
high       3827
Name: interest_level, dtype: int64
Ordinal variables are the easiest to convert from strings to numbers because we can simply assign a different integer for each possible ordinal value. One way to do the conversion is to use the Pandas map() function and a dictionary argument:
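For example, assuming the ordering low < medium < high:

```python
# Replace the ordinal strings with integers that preserve their natural order
df['interest_level'] = df['interest_level'].map({'low': 1, 'medium': 2, 'high': 3})
df['interest_level'].value_counts()
```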
1    33270
2    11203
3     3827
Name: interest_level, dtype: int64
The key here is to ensure that the numbers we use for each category maintain the same ordering so that, for example, medium's value of 2 is bigger than low's value of 1. We could also have chosen {'low':10,'medium':20,'high':30} as the encoding because RFs care about the order and not the scale of the features.
Let's see how an RF model performs using the numeric features and this newly converted interest_level feature:
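Here's a sketch of the kind of helper we'll reuse for the remaining experiments (the exact function in the notebooks may differ); it fits an RF and reports the OOB score and tree complexity:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def evaluate(X, y):
    rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True)
    rf.fit(X, y)
    n = sum(t.tree_.node_count for t in rf.estimators_)
    h = np.median([t.tree_.max_depth for t in rf.estimators_])
    print(f"OOB R^2 {rf.oob_score_:.5f} using {n:,d} tree nodes with {h} median tree height")
    return rf

rf = evaluate(df[numfeatures + ['interest_level']], df['price'])
```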
OOB R^2 0.87037 using 3,025,158 tree nodes with 36.0 median tree height
That 0.870 score is only a little bit better than our baseline of 0.868, but it's still useful. As you approach an R^2 of 1.0, it gets harder and harder to nudge model performance. The difference in scores actually represents a few percentage points of the remaining accuracy, so we can be happy with this bump. Also, remember that a single OOB score is a fairly blunt metric that aggregates the performance of the model across roughly 50,000 records. It's possible that a certain type of apartment got a really big accuracy boost. The only cost of this accuracy boost is an increase in the number of tree nodes; the typical decision tree height in the forest remains about the same as our baseline, so prediction using this model should be just as fast as the baseline.
Ordinal categorical variables are simple to encode numerically. Unfortunately, there's another kind of categorical variable called a nominal variable for which there is no meaningful order between the category values. For example, in this data set, columns manager_id, building_id, and display_address are nominal features. Without an order between categories, it's hard to encode nominal variables as numbers in a meaningful way, particularly when there are very many category values:
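```python
(df['manager_id'].nunique(), df['building_id'].nunique(), df['display_address'].nunique())
```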
(3409, 7417, 8692)
So-called high-cardinality (many-valued) categorical variables such as manager_id, building_id, and display_address are best handled using more advanced techniques, such as embeddings or mean encoding, as we'll see in Section 6.5 Target encoding categorical variables, but we'll introduce two simple encoding approaches here that are often useful.
The first technique is called label encoding and simply converts each category to a numeric value, as we did before, while ignoring the fact that the categories are not really ordered. Sometimes an RF can get some predictive power from features encoded in this way but typically requires larger trees in the forest.
To label encode any categorical variable, convert the column to an ordered categorical variable and then convert the strings to the associated categorical code (which is computed automatically by Pandas):
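A sketch for manager_id; we write the codes into a new column (mgr_label is a name chosen here) so the original string column remains available for the later sections:

```python
# Convert to an ordered Pandas categorical, then take the integer category codes;
# missing values get code -1, so we add 1 to shift everything to 0 or above
mgr_cat = df['manager_id'].astype('category').cat.as_ordered()
df['mgr_label'] = mgr_cat.cat.codes + 1
```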
The category codes start with 0 and the code for “not a number” (nan) is -1. To bring everything into the range 0 or above, we add one to the category code. (Sklearn has equivalent functionality in its OrdinalEncoder transformer, but it can't handle object columns with both integers and strings, and it raises an error for missing values represented as np.nan.)
Let's see if this new feature improves performance over the baseline model:
OOB R^2 0.86583 using 3,120,336 tree nodes with 37.5 median tree height
Unfortunately, the score is roughly the same as the baseline's score and it increases the tree height. Not a good trade.
Let's try another encoding approach called frequency encoding and apply it to the manager_id feature. Frequency encoding converts categories to the frequencies with which they appear in the training set. For example, here are the category value counts for the top 5 manager_ids:
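```python
df['manager_id'].value_counts().head(5)
```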
e6472c7237327dd3903b3d6f6a94515a    2509
6e5c10246156ae5bdcd9b487ca99d96a     695
8f5a9c893f6d602f4953fcc0b8e6e9b4     404
62b685cc0d876c3a1a51d63a0d6a8082     396
cb87dadbca78fad02b388dc9e8f25a5b     370
Name: manager_id, dtype: int64
The idea behind this transformation is that there might be predictive power in the number of apartments managed by a particular manager.
Function value_counts() gives us the encoding and from there we can use map() to transform the manager_id into a new column called mgr_apt_count:
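A sketch of that transformation:

```python
# Map each manager_id to the number of apartments that manager lists
counts = df['manager_id'].value_counts()
df['mgr_apt_count'] = df['manager_id'].map(counts)
```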
Again, we could divide the counts by len(df) to get true frequencies, but that just scales the column and won't affect predictive power.
Let's see how a model does with both of these categorical variables encoded:
OOB R^2 0.86434 using 4,569,236 tree nodes with 41.0 median tree height
The model accuracy does not improve and the complexity of the trees goes up, so we should avoid introducing these features. The feature importance plot confirms that these features are unimportant.
We can conclude that, for this data set, label encoding and frequency encoding of high-cardinality categorical variables are not helpful. These encoding techniques could, however, be useful on other data sets, so it's worth learning them. We'll learn about another categorical variable encoding technique called “one-hot encoding” in [chp:onehot], but we avoid one-hot encoding here because it would add thousands of new columns to our data set.
The apartment data set also has some variables that are both nonnumeric and noncategorical: description and features. Such arbitrary strings have no obvious numerical encoding, but we can extract bits of information from them to create new features. For example, an apartment with parking, a doorman, a dishwasher, and so on might fetch a higher price, so let's synthesize some Boolean features derived from the string features.
First, let's normalize the string columns to be lowercase and convert any missing values (represented as “not a number” np.nan values) to empty strings; otherwise, applying split() to the columns will fail because split() does not apply to floating-point numbers (np.nan).
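A sketch, assuming both columns arrive as plain strings in the CSV:

```python
# Lowercase the free-text columns and replace missing values (np.nan)
# with empty strings so that string methods such as split() don't choke
for colname in ('description', 'features'):
    df[colname] = df[colname].fillna('').str.lower()
```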
Now we can create new columns by applying contains() to the string columns, once for each new column:
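For example (the particular amenity keywords are assumptions, following the text):

```python
# One Boolean column per amenity keyword found in the features string
for amenity in ('parking', 'doorman', 'dishwasher', 'laundry', 'fitness'):
    df[amenity] = df['features'].str.contains(amenity)
```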
Another trick is to count the number of words in a string:
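For example, for the description column (num_desc_words is a name chosen here):

```python
# Number of whitespace-separated words in the free-text description
df['num_desc_words'] = df['description'].str.split().apply(len)
```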
There's not much we can do with the list of photo URLs in the photos string column, but the number of photos might be slightly predictive of price, so let's create a feature for that as well:
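A sketch, assuming photos is stored in the CSV as a single string holding the list of URLs:

```python
# Counting 'http' occurrences approximates the number of photo URLs listed
df['num_photos'] = df['photos'].fillna('').str.count('http')
```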
Let's see how our new features affect performance above the baseline numerical features:
OOB R^2 0.86242 using 4,767,582 tree nodes with 43.0 median tree height
It's disappointing that the accuracy does not improve with these features and that the complexity of the model is higher, but this does provide useful marketing information. Apartment features such as parking, laundry, and dishwashers are attractive but there's little evidence that people are willing to pay more for them (except maybe for having a doorman).
If you grew up with lots of siblings, you're likely familiar with the notion of waiting to use the bathroom in the morning. Perhaps there is some predictive power in the ratio of bedrooms to bathrooms, so let's try synthesizing a new column that is the ratio of two numeric columns:
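A sketch; adding 1 to the denominator (our choice here) avoids dividing by zero for listings with no bathrooms:

```python
df['beds_to_baths'] = df['bedrooms'] / (df['bathrooms'] + 1)  # +1 avoids division by zero
rf = evaluate(df[numfeatures + ['beds_to_baths']], df['price'])
```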
OOB R^2 0.86831 using 2,433,878 tree nodes with 35.0 median tree height
Combining numerical columns is a useful technique to keep in mind, but unfortunately this combination doesn't affect our model significantly here. Perhaps we'd have better luck combining a numeric column with the target, such as the ratio of bedrooms to price:
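For illustration (we'll see in a moment why this is a mistake):

```python
# WARNING: beds_per_price leaks the target (price) into the feature set
df['beds_per_price'] = df['bedrooms'] / df['price']
rf = evaluate(df[numfeatures + ['beds_per_price']], df['price'])
```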
OOB R^2 0.98601 using 1,313,390 tree nodes with 31.0 median tree height
Wow! That's almost a perfect score, which should trigger an alarm that it's too good to be true. In fact, we do have an error, but it's not a code bug. We have effectively copied the price column into the feature set, kind of like studying for a quiz by looking at the answers. This is a form of data leakage, which is a general term for the use of features that directly or indirectly hint at the target variable.
To illustrate how this leakage causes overfitting, let's split out 20% of the data as a validation set and compute the beds_per_price feature for the training set:
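A sketch using sklearn's train_test_split:

```python
from sklearn.model_selection import train_test_split

# Hold out 20% as a validation set; compute the leaky feature on training data only
df_train, df_test = train_test_split(df, test_size=0.20)
df_train = df_train.copy()
df_train['beds_per_price'] = df_train['bedrooms'] / df_train['price']
```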
While we do have price information for the validation set in df_test, we can't use it to compute beds_per_price for the validation set. In production, the model will not have the validation set prices, and so df['beds_per_price'] must be created only from the training set. (If we had apartment prices in practice, we wouldn't need the model.) To avoid this common pitfall, it's helpful to imagine computing features for, and sending validation records to, the model one by one, rather than all at once.
Once we have the beds_per_price feature for the training set, we can compute a dictionary mapping bedrooms to the beds_per_price feature. Then, we can synthesize the beds_per_price in the validation set using map() on the bedrooms column:
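A sketch; bpmap maps each bedroom count to the average beds_per_price seen in the training set (filling unseen bedroom counts with 0 is an arbitrary choice here):

```python
bpmap = df_train.groupby('bedrooms')['beds_per_price'].mean().to_dict()

df_test = df_test.copy()
df_test['beds_per_price'] = df_test['bedrooms'].map(bpmap).fillna(0)
df_test['beds_per_price'].head()
```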
2125     0.000332
47172    0.000667
5584     0.000000
860      0.000294
49025    0.000332
Name: beds_per_price, dtype: float64
The fillna() code deals with the situation where the validation set has a number of bedrooms that is not in the training set; for example, sometimes the validation set has an apartment with 7 bedrooms, but there is no bedroom key equal to 7 in the bpmap.
Now that we have beds_per_price for both training and validation sets, we can train an RF model using just the training set and evaluate its performance using just the validation set:
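A sketch of that train/validate cycle:

```python
features = numfeatures + ['beds_per_price']
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
rf.fit(df_train[features], df_train['price'])
print(rf.score(df_test[features], df_test['price']))  # R^2 on the validation set
```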
Validation R^2 -0.41233, 1,169,770 nodes, 31.0 median height
An R^2 of -0.412 on the validation set is terrible and indicates that the model lacks generality. In this situation, overfitting means that the model has put too much emphasis on the beds_per_price feature, which is strongly predictive of price in the training set but not the validation set. (The feature was computed using just data from the training set.) We get a whiff of overfitting even in the complexity of the model, which has half the number of nodes of a model trained just on the numeric features. It's not always an error to derive features from the target; we just have to be more careful.
Creating features that incorporate information about the target variable is called target encoding and is often used to derive features from categorical variables to great effect. One of the most common target encodings is called mean encoding, which replaces each category value with the average target value associated with that category. For example, building managers in our apartment data set might rent apartments in certain price ranges. The manager IDs by themselves carry little predictive power but converting IDs to the average rent price for apartments they manage could be predictive. Similarly, certain buildings might have more expensive apartments than others. The average rent price per building is easy enough to get with Pandas by grouping the data by building_id and asking for the mean:
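```python
# Average rent price per building, indexed by building_id
# (bldg_avg_price is just a name chosen here)
bldg_avg_price = df.groupby('building_id')['price'].mean()
```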
Unfortunately, as we saw in the last section, it's easy to overfit models when incorporating target information. Preventing overfitting is nontrivial and it's best to rely on a vetted library, such as the category_encoders package from the scikit-learn-contrib collection. (To prevent overfitting, the idea is to compute the mean from a subset of the training data targets for each category.) You can install category_encoders with pip on the command line:
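```
pip install category_encoders
```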
Here's how to use the TargetEncoder object to encode three categorical variables from the data set and get an OOB score:
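A sketch, leaving the smoothing details to the library's defaults:

```python
from category_encoders import TargetEncoder

catfeatures = ['manager_id', 'building_id', 'display_address']
enc = TargetEncoder(cols=catfeatures)

# Replace each category with a (smoothed) mean of the target for that category
X = enc.fit_transform(df[numfeatures + catfeatures], df['price'])
rf = evaluate(X, df['price'])
```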
OOB R^2 0.87236 using 2,746,024 tree nodes with 39.0 median tree height
That score is a bit better than the baseline of 0.868 for numeric-only features. Given the tendency to overfit the model with target encoding, however, it's a good idea to test the model with a validation set. Let's split out 20% as a validation set and get a baseline for numeric features:
0.845264 score 2,126,222 tree nodes and 35.0 median tree height
The validation score and the OOB score are very similar, confirming that OOB scores are an excellent approximation to validation scores. With that baseline, let's see what happens when we properly target encode the validation set, that is, using only data from the training set (warning: this takes several minutes to execute):
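A sketch of the proper procedure: fit the encoder on the training set only, then transform both sets with it:

```python
enc = TargetEncoder(cols=catfeatures)
enc.fit(df_train[numfeatures + catfeatures], df_train['price'])
X_train = enc.transform(df_train[numfeatures + catfeatures])
X_test = enc.transform(df_test[numfeatures + catfeatures])

rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
rf.fit(X_train, df_train['price'])
print(rf.score(X_test, df_test['price']))  # validation R^2
```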
0.8488 score 2,378,442 tree nodes and 38.0 median tree height
The validation score for numeric plus target-encoded features (0.849) is only marginally better than the validation score for numeric-only features (0.845), a smaller gain than the OOB scores suggest. The model finds the target-encoded features strongly predictive of the training set prices, which tempts it to overemphasize those features at the expense of generality. The feature importance graph provides evidence of this overemphasis because it shows the target-encoded building_id feature as the most important.
While not beneficial for this data set, target encoding is reported to be useful by many practitioners and Kaggle competition winners. It's worth knowing about this technique and learning to apply it properly (computing validation set features using data only from the training set).
When we've exhausted our bag of tricks deriving features from a given data set, sometimes it's fruitful to inject features derived from external data sources.
Our data set has longitude and latitude coordinates, but a more obvious price predictor would be a categorical variable identifying the neighborhood because some neighborhoods are more desirable than others. Given the trouble we've seen with categorical variables above, though, a numeric feature would be more useful. Instead of identifying the neighborhood, let's use proximity to highly desirable neighborhoods as a numeric feature. Forbes magazine has an article, The Top 10 New York City Neighborhoods to Live In, According to the Locals, from which we can get neighborhood names. Then, using a mapping website, we can estimate the longitude and latitude of those neighborhoods and record them like this:
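Here's an illustrative subset with approximate (latitude, longitude) centers; the full list and exact coordinates come from the Forbes article and your own mapping estimates:

```python
hoods = {
    "hells":      (40.7622, -73.9924),  # Hell's Kitchen
    "astoria":    (40.7796, -73.9215),
    "Evillage":   (40.7264, -73.9818),  # East Village
    "Wvillage":   (40.7347, -74.0059),  # West Village
    "LowerEast":  (40.7150, -73.9843),  # Lower East Side
    "UpperEast":  (40.7736, -73.9566),  # Upper East Side
}
```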
To synthesize new features, we compute the so-called Manhattan distance (also called L1 distance) from each apartment to each neighborhood center, which measures the number of blocks one must walk to reach the neighborhood. In contrast, the Euclidean distance would measure distance as the crow flies, cutting through the middle of buildings.
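A sketch of the distance computation, adding one new column per neighborhood:

```python
for hood, (hlat, hlon) in hoods.items():
    # L1 (Manhattan) distance in degrees from each apartment to the neighborhood center
    df['hood_' + hood] = (df['latitude'] - hlat).abs() + (df['longitude'] - hlon).abs()
```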
Training a model on the numeric features and these new neighborhood features gives us a decent bump in performance:
OOB R^2 0.87213 using 2,412,662 tree nodes with 43.0 median tree height
An OOB score of 0.872 is noticeably better than the baseline 0.868 for numeric features. The tree height increased significantly, but the number of nodes is about the same. The model works a little bit harder to associate similar apartments with similar rent prices, but it's worth the extra complexity for the performance boost.
The new proximity features effectively triangulate an apartment relative to desirable neighborhoods, which means we probably don't need apartment longitude and latitude anymore. Dropping longitude and latitude and retraining a model shows a similar OOB score and shallower trees:
0.8700 score 2,421,776 tree nodes and 41.0 median tree height
Injecting outside data is definitely worth trying as a general rule. As another example, consider predicting sales at a store or website. We've found that injecting columns indicating paydays or national holidays often helps predict fluctuations in sales volume that are not explained well with existing features.
We've explored a lot of common feature engineering techniques for categorical variables in this chapter, so let's put them all together to see how a combined model performs:
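One plausible combination (the exact feature set in the notebooks may differ):

```python
hoodcols = [c for c in df.columns if c.startswith('hood_')]
allfeatures = (numfeatures +
               ['interest_level', 'beds_to_baths', 'num_desc_words', 'num_photos',
                'parking', 'doorman', 'dishwasher', 'laundry', 'fitness'] +
               hoodcols)
rf = evaluate(df[allfeatures], df['price'])
```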
OOB R^2 0.87876 using 4,842,474 tree nodes with 44.0 median tree height
While OOB score 0.879 is not too much larger than our numeric-only model baseline of 0.868 in absolute terms, it means we've squeezed an extra 8.321% relative accuracy out of our data (computed using ((oob_combined-oob_baseline) / (1-oob_baseline))*100).
Something to notice is that the RF model does not get confused with a combination of all of these features, which is one of the reasons we recommend RFs. RFs simply ignore features without much predictive power. The feature importance graph to the right indicates what the model finds predictive. For model accuracy reasons, it's a good idea to keep all features, but we might want to remove unimportant features to simplify the model for interpretation purposes (when explaining model behavior).
To summarize, we've seen several feature engineering techniques in this chapter: encoding ordinal categorical variables as ordered integers; label encoding and frequency encoding nominal variables; extracting Boolean and count features from free-text strings; combining numeric columns into ratios; target (mean) encoding categorical variables, taking care to avoid data leakage by computing encodings only from the training set; and injecting features from external data sources, such as proximity to desirable neighborhoods. None of these alone gave a huge boost on this data set, but together they squeezed out a meaningful chunk of the remaining accuracy, and the same techniques frequently pay off on other data sets.