AI Features

Univariate Feature Selection

Learn about univariate feature selection, a technique for testing features one by one against the response variable.

What it does and doesn’t do

In this chapter, we have learned techniques for going through features one by one to see whether they have predictive power. This is a good first step, and if you already have features that are very predictive of the outcome variable, you may not need to spend much more time considering features before modeling. However, there are drawbacks to univariate feature selection. In particular, it does not consider the interactions between features. For example, what if the credit default rate is very high specifically for people with both a certain education level and a certain range of credit limit?
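To make this drawback concrete, here is a minimal sketch with synthetic data (the names x1, x2, and the XOR-style outcome are illustrative, not from the credit dataset): two binary features that are each useless on their own, but whose interaction perfectly determines the outcome. A univariate F-test scores the interaction feature far higher than either raw feature.

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
n = 1000

# Two illustrative binary features, e.g. a "certain education level" flag
# and a "certain credit limit range" flag
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)

# The outcome depends only on the combination (XOR-like): knowing either
# feature alone tells you nothing about y
y = x1 ^ x2

# Score each feature individually, including an explicit interaction column
X = np.column_stack([x1, x2, x1 * x2])
F_scores, p_values = f_classif(X, y)
```

Because the univariate test looks at each column in isolation, only the explicitly constructed interaction column (`x1 * x2`) receives a large F-score; screening just the raw features would discard both.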

Also, the methods we used here capture only the linear effects of features. If a feature is more predictive after some type of transformation, such as a polynomial or logarithmic transformation, or binning (discretization), linear techniques of univariate feature selection may not be effective. Interactions and transformations are examples of feature engineering: creating new features, in these cases from existing ones. The shortcomings of linear feature selection methods can be remedied by non-linear modeling techniques, including decision trees and methods based on them, which we will examine later. But there is still value in looking for simple relationships that linear methods of univariate feature selection can find, and doing so is quick.
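The transformation point can also be sketched with synthetic data (the lognormal feature and noise level here are illustrative assumptions): when the response depends on the logarithm of a feature, the Pearson correlation with the raw feature understates the relationship, while the correlation with the log-transformed feature recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)

# A skewed, strictly positive feature (lognormal, as an illustration)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

# The response depends on log(x), not on x directly
y = np.log(x) + rng.normal(scale=0.3, size=2000)

# Linear (Pearson) correlation before and after the log transformation
r_raw = np.corrcoef(x, y)[0, 1]
r_log = np.corrcoef(np.log(x), y)[0, 1]
```

Here `r_log` is noticeably larger than `r_raw`: the linear measure only sees the full strength of the relationship once the feature has been transformed onto the scale the response actually depends on.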

Understanding logistic regression and the sigmoid function

In this section, we will open the “black box” of logistic regression all the way: we will gain a comprehensive understanding of how it works. We’ll start off by introducing a new programming concept: functions. At the same time, we’ll learn about a mathematical function, the sigmoid function, which plays a key role in logistic regression.
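As a small preview tying the two ideas together, here is a sketch of the sigmoid function written as a Python function (using NumPy so it works on arrays as well as scalars; the exact definition used later in the chapter is the standard logistic sigmoid):

```python
import numpy as np

def sigmoid(x):
    """Map any real number to the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

sigmoid(0)  # 0.5, the midpoint of the sigmoid
```

Large positive inputs give outputs close to 1 and large negative inputs give outputs close to 0, which is what lets logistic regression turn an unbounded linear combination of features into a probability.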

Python functions

In the most basic sense, a ...