Information gain calculator

This online calculator computes information gain, the change in information entropy from a prior state to a state that takes some information as given.

The online calculator below parses the set of training examples, then computes the information gain for each attribute/feature. If you are unsure what it is all about, or you want to see the formulas, read the explanation below the calculator.

Note: Training examples should be entered as a CSV list, with a semicolon used as the separator. The first row is considered to be a row of labels: first the attribute/feature labels, then the class label. All other rows are examples. The default data in this calculator is the famous "Play Tennis" decision tree example.
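
For illustration only, the input might look like the lines below. The attribute names, column order, and values follow the "Play Tennis" example discussed further down and may not match the calculator's default data exactly.

Outlook;Humidity;Windy;Temperature;Play
Sunny;High;False;Hot;No
Overcast;High;False;Hot;Yes
Rain;High;True;Mild;No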

Information gain and decision trees

Information gain is a metric that is particularly useful in building decision trees. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (the decision taken after computing all the attributes). The paths from root to leaf represent classification rules.1

Let's look at the calculator's default data.

Attributes to be analyzed are:

  • Outlook: Sunny/Overcast/Rain
  • Humidity: High/Normal
  • Windy: True/False
  • Temperature: Hot/Mild/Cool

Class label is:

  • Play: Yes/No

So, by analyzing the attributes one by one, the algorithm should effectively answer the question: "Should we play tennis?" Thus, in order to perform as few steps as possible, we need to choose the best decision attribute at each step – the one that gives us the maximum information.

How do we measure the information that each attribute can give us? One way is to measure the reduction in entropy, and this is exactly what the information gain metric does.

Let's get back to the example. In our training set, we have five examples labelled "No" and nine examples labelled "Yes". According to the well-known Shannon entropy formula, the current entropy is

H=-\frac{5}{14} \log_2\frac{5}{14} - \frac{9}{14} \log_2\frac{9}{14} = 0.94
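
As a quick check, here is a minimal Python sketch (not the calculator's own code) that computes this value from the class counts:

from math import log2

def entropy(counts):
    # Shannon entropy, in bits, of a distribution given by absolute class counts
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

print(f"{entropy([5, 9]):.3f}")  # prints 0.940 for 5 "No" and 9 "Yes" examples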

Now, let's imagine we want to classify an example. We decide to test the "Windy" attribute first. Technically, we are performing a split on the "Windy" attribute.

If the value of the "Windy" attribute is "True", we are left with six examples. Three of them have "Yes" as the Play label, and three of them have "No" as the Play label.
Their entropy is

H=-\frac{3}{6} \log_2\frac{3}{6} - \frac{3}{6} \log_2\frac{3}{6} = 1

So, if our example under test has "True" as the "Windy" attribute, we are left with more uncertainty than before.

Now, if the value of the "Windy" attribute is "False", we are left with eight examples. Six of them have "Yes" as the Play label, and two of them have "No" as the Play label.
Their entropy is

H=-\frac{6}{8} \log_2\frac{6}{8} - \frac{2}{8} \log_2\frac{2}{8} = 0.81

This is, of course, better than our initial 0.94 bits of entropy (if we are lucky enough to get "False" in our example under test).

In order to estimate the entropy reduction in general, we need to average it using the probabilities of getting the "True" and "False" attribute values. We have six examples with a "True" value of the "Windy" attribute and eight examples with a "False" value. So, the average entropy after the split would be

H_{Windy}=\frac{6}{14} H_{Windy=True} + \frac{8}{14} H_{Windy=False} = 0.429+0.463=0.892

Thus, our initial entropy is 0.94, and the average entropy after the split on the "Windy" attribute is 0.892. Hence, the information gain, i.e. the reduction in entropy, is

IG=H-H_{Windy}=0.048
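
The whole split on "Windy" can be reproduced with the same kind of sketch; the entropy helper from above is repeated so that the snippet runs on its own, and again this is an illustration rather than the calculator's code:

from math import log2

def entropy(counts):
    # Shannon entropy, in bits, of a distribution given by absolute class counts
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

h_before = entropy([9, 5])                 # ≈ 0.940, the full training set
h_true = entropy([3, 3])                   # 1.0, Windy = True: 3 "Yes", 3 "No"
h_false = entropy([6, 2])                  # ≈ 0.811, Windy = False: 6 "Yes", 2 "No"
h_windy = 6/14 * h_true + 8/14 * h_false   # ≈ 0.892, average entropy after the split
print(f"{h_before - h_windy:.3f}")         # prints 0.048, the information gain of "Windy"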

The general formula for the information gain for the attribute a is

IG(T,a)=\mathrm {H} (T)-\mathrm {H} (T|a),

where
T - the set of training examples, each of the form (\textbf{x},y) = (x_1, x_2, x_3, ..., x_k, y), where x_a\in vals(a) is the value of the a^{\text{th}} attribute or feature of the example and y is the corresponding class label,
\mathrm {H} (T|a) - the entropy of T conditioned on a (the conditional entropy).

The conditional entropy formula is

\mathrm{H}(T|a)=\sum_{v\in vals(a)}\frac{|S_{a}(v)|}{|T|}\cdot \mathrm{H}\left(S_{a}(v)\right),

where
S_{a}(v) - the set of training examples of T for which attribute a is equal to v.
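
Both formulas translate almost line for line into Python. The sketch below assumes, purely for illustration, that the training set is a list of (x, y) pairs where x is a dict mapping attribute names to values and y is the class label; this is not the calculator's internal representation.

from collections import Counter
from math import log2

def label_entropy(labels):
    # H(T): Shannon entropy, in bits, of the class labels
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in Counter(labels).values())

def conditional_entropy(examples, a):
    # H(T|a): entropies of the subsets S_a(v), weighted by |S_a(v)| / |T|
    total = len(examples)
    result = 0.0
    for v in {x[a] for x, _ in examples}:
        subset_labels = [y for x, y in examples if x[a] == v]
        result += len(subset_labels) / total * label_entropy(subset_labels)
    return result

def information_gain(examples, a):
    # IG(T, a) = H(T) - H(T|a)
    return label_entropy([y for _, y in examples]) - conditional_entropy(examples, a)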

Using this approach, we can find the information gain for each of the attributes and find that the "Outlook" attribute gives us the greatest information gain, 0.247 bits. Now we can conclude that the first split on the "Windy" attribute was a really bad idea, and that the given training examples suggest we should test the "Outlook" attribute first.
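
Choosing the root split then amounts to taking the attribute with the largest information gain. Reusing the information_gain sketch above, and assuming (again, only for illustration) that examples holds the parsed training set and that the attribute names match the ones used in this article:

attributes = ["Outlook", "Humidity", "Windy", "Temperature"]
gains = {a: information_gain(examples, a) for a in attributes}
best = max(gains, key=gains.get)  # "Outlook", with IG = 0.247 bits for the default data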

On a final note: you might wonder why we need a decision tree if we can just provide the decision for each combination of attributes. Of course we can, but even for this small example, the total number of combinations is 3*2*2*3=36. On the other hand, we used just a subset of the combinations (14 examples) to train our algorithm (by building a decision tree), and now it can classify all the other combinations without our help. That's the point of machine learning.
