Non-Linear Machine Learning Algorithms – Decision Trees


Introduction
In this Machine Learning Blog Series so far, I have discussed the Logistic Regression and Linear Regression Machine Learning algorithms. Both of these algorithms are linear; in this blog I am going to introduce an example of a non-linear algorithm: Decision Trees.

What is a Decision Tree?
[Figure: a simple decision tree that classifies an example animal based on its features]
Above is an example of a simple decision tree. Each internal node represents an input variable (x) and a split point on that variable, and each leaf represents an output value (y) which is used to make a prediction. The tree above takes an example animal as its input; depending on the features of that animal, a branch is selected, and this process is repeated until a leaf is reached.
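To illustrate this walk from root to leaf, a tree like the one above can be written as nested conditionals. The animals and features below are made up for the example, not taken from the figure:

```python
# A hypothetical animal-classifier tree, sketched as nested conditionals.
# Each `if` is an internal node (a split on an input variable);
# each `return` is a leaf (the predicted output).
def classify_animal(has_feathers: bool, can_fly: bool, has_fins: bool) -> str:
    if has_feathers:                      # first split
        return "Hawk" if can_fly else "Penguin"
    return "Dolphin" if has_fins else "Bear"

print(classify_animal(has_feathers=True, can_fly=False, has_fins=False))
# prints "Penguin"
```

A learned decision tree is exactly this structure, except the splits are chosen automatically from data rather than written by hand.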

Creating a Tree from Data
A tree can be derived from training data using recursive binary splitting. Different split points are tried and evaluated using a cost function: all input variables and all possible split points are considered, and the best split is chosen greedily.

The most common cost function is the mean squared error. For example, a tree is created with random splits, a supervised training set is fed through the tree, and an output (y) is returned for each example. The squared error is then calculated for each member of the training set:

(training set output − y)²

The mean squared error across the entire training set is calculated. Another random tree is then created and the process repeats. After n trees have been created, the best tree, i.e. the tree with the smallest mean squared error, is returned.
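As a minimal sketch of the cost-function evaluation above, the snippet below searches every candidate split point on a single numeric input variable and scores each split by the total squared error of the two resulting groups. The function names (`best_split`, `total_squared_error`) are my own for this example, not from a particular library:

```python
def total_squared_error(values):
    """Sum of squared differences from the group mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(xs, ys):
    """Greedily pick the split point on x that minimises squared error."""
    best_point, best_cost = None, float("inf")
    for point in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x < point]
        right = [y for x, y in zip(xs, ys) if x >= point]
        if not left or not right:
            continue  # a valid split must leave data on both sides
        cost = total_squared_error(left) + total_squared_error(right)
        if cost < best_cost:
            best_point, best_cost = point, cost
    return best_point, best_cost

# Two clearly separated groups: the best split lands between them.
print(best_split([1, 2, 3, 10, 11, 12], [1, 1, 1, 10, 10, 10]))
# prints (10, 0.0)
```

Applied recursively to each resulting group, this greedy search is what grows the full tree.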

Pruning the Tree
Pruning a decision tree refers to removing nodes, replacing a sub-tree with a single leaf, to improve the performance and readability of the tree.

The quickest and simplest way of pruning a tree is to work through each node and determine the effect of removing it: remove the node, rerun the training set through the tree, and if the error has decreased or stayed the same, leave the node out; if the error has increased, keep the node.
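The pruning pass above can be sketched as follows. This is a hedged illustration, not a reference implementation: it assumes a hypothetical tree format of nested dicts, where a leaf holds a `"value"` and an internal node holds a `"feature"` index, a `"threshold"`, and `"left"`/`"right"` children:

```python
def predict(node, x):
    """Walk the tree from the root to a leaf for one example x."""
    if "value" in node:
        return node["value"]
    branch = "left" if x[node["feature"]] < node["threshold"] else "right"
    return predict(node[branch], x)

def error(node, data):
    """Total squared error of the (sub)tree on (x, y) pairs."""
    return sum((predict(node, x) - y) ** 2 for x, y in data)

def prune(node, data):
    """Collapse a sub-tree to a leaf whenever that does not increase the error."""
    if "value" in node or not data:
        return node
    # Route the data down each branch and prune the children bottom-up.
    left_data = [(x, y) for x, y in data if x[node["feature"]] < node["threshold"]]
    right_data = [(x, y) for x, y in data if x[node["feature"]] >= node["threshold"]]
    node["left"] = prune(node["left"], left_data)
    node["right"] = prune(node["right"], right_data)
    # Candidate: replace this sub-tree with a leaf predicting the mean target.
    leaf = {"value": sum(y for _, y in data) / len(data)}
    if error(leaf, data) <= error(node, data):
        return leaf
    return node
```

Here a redundant sub-tree (one whose leaves all predict the same value, for instance) is collapsed, while any node whose removal would increase the error is kept.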

Conclusion
This blog is a continuation of the Technical Machine Learning Series and introduced the idea of non-linear Machine Learning algorithms. I gave a simple overview of decision trees: how they work, how they are created, and how they are pruned.

Contact Alan and the team to find out more about our Software Development services.

Alan Lehane, Software Developer

Alan has been working with Aspira for 4 years as a Software Developer, specialising in Data Analytics and Machine Learning. He has provided a wide variety of services to Aspira’s clients including Software Development, Test Automation, Data Analysis and Machine Learning.
