Introduction: Decision Trees and Random Forests
Get introduced to our topic for this chapter: tree-based machine learning models.
Overview
In the last two chapters, we gained a thorough understanding of how logistic regression works, along with plenty of hands-on experience using the scikit-learn package in Python to build logistic regression models.
In this chapter, we'll shift our focus to another type of machine learning model that has taken data science by storm in recent years: tree-based models. After learning about individual decision trees, you'll see how models made up of many trees, called random forests, can reduce the overfitting associated with individual trees. After reading this chapter, you will be able to train decision trees for machine learning purposes, visualize trained decision trees, and train random forests and visualize their results.
Decision trees for machine learning
We will now introduce a powerful type of predictive model that takes a completely different approach from logistic regression: decision trees. Decision trees, and the models built from them, are some of the most performant models available today for general machine learning applications. The concept of using a tree of decisions is simple, which makes decision tree models easy to interpret. However, a common criticism of decision trees is that they overfit the training data. To remedy this issue, researchers have developed ensemble methods, such as random forests, that combine many decision trees so that they work together and make better predictions than any individual tree could.
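As a brief preview of what this chapter builds toward, the sketch below trains a single decision tree and a random forest with scikit-learn and compares their test accuracy. This is a minimal illustration, not the chapter's own example: the dataset (scikit-learn's built-in breast cancer data) and the hyperparameter values are assumptions chosen only to make the snippet self-contained and runnable.

```python
# Minimal sketch: a single decision tree vs. a random forest in scikit-learn.
# The dataset and hyperparameters are illustrative assumptions, not the
# chapter's own case study.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Load a small built-in binary classification dataset and hold out a test set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A single, fully grown decision tree tends to fit the training data very
# closely, which often hurts its performance on unseen data.
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)

# A random forest averages the predictions of many randomized trees, which
# usually generalizes better than any individual tree.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

print(f"Decision tree test accuracy: {tree.score(X_test, y_test):.3f}")
print(f"Random forest test accuracy: {forest.score(X_test, y_test):.3f}")
```

On most train/test splits of a dataset like this, the forest's test accuracy matches or exceeds the single tree's, which is exactly the overfitting-reduction effect this chapter will explain in detail.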