RapidMiner Studio documentation for version 9.2
Rule Induction (RapidMiner Studio Core)
Synopsis

This operator learns a pruned set of rules with respect to the information gain from the given ExampleSet.
The Rule Induction operator works similarly to the propositional rule learner named 'Repeated Incremental Pruning to Produce Error Reduction' (RIPPER, Cohen 1995). Starting with the less prevalent classes, the algorithm iteratively grows and prunes rules until there are no positive examples left or the error rate is greater than 50%.
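The control flow just described can be sketched as follows. This is a minimal illustration, not RapidMiner's implementation: `grow` and `prune` are supplied by the caller (a trivial stand-in is shown), and all names are made up for the example.

```python
from collections import Counter

def covers(rule, example):
    """A rule is a conjunction of (attribute, value) tests."""
    return all(example[a] == v for a, v in rule)

def ripper_outer_loop(examples, labels, grow, prune):
    """Sketch of the grow-and-prune loop: classes are processed from least
    to most prevalent, and rule learning for a class stops when no positive
    examples remain or the pruned rule's error rate exceeds 50%."""
    rules = []
    data = list(zip(examples, labels))
    for cls, _ in sorted(Counter(labels).items(), key=lambda kv: kv[1]):
        while any(lab == cls for _, lab in data):
            rule = prune(grow(data, cls), data, cls)
            covered = [(e, lab) for e, lab in data if covers(rule, e)]
            # Stop for this class once the rule is worse than chance.
            if not covered or sum(lab != cls for _, lab in covered) / len(covered) > 0.5:
                break
            rules.append((rule, cls))
            # Remove the examples the new rule covers, then continue.
            data = [(e, lab) for e, lab in data if not covers(rule, e)]
    return rules

def toy_grow(data, cls):
    # Illustrative stand-in: pick the single purest test. Real RIPPER grows
    # a conjunction condition by condition using information gain.
    tests = {(a, v) for e, _ in data for (a, v) in e.items()}
    def purity(test):
        sub = [lab for e, lab in data if e[test[0]] == test[1]]
        return sum(lab == cls for lab in sub) / len(sub)
    return [max(tests, key=purity)]

toy_prune = lambda rule, data, cls: rule  # no-op pruning for the demo

examples = [{"outlook": "sunny"}, {"outlook": "sunny"}, {"outlook": "rain"}]
labels = ["no", "no", "yes"]
print(ripper_outer_loop(examples, labels, toy_grow, toy_prune))
# [([('outlook', 'rain')], 'yes'), ([('outlook', 'sunny')], 'no')]
```

Note how the least prevalent class ('yes') is handled first, and how each learned rule removes its covered examples before the next rule is grown.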
In the growing phase, conditions are greedily added to each rule until the rule is perfect (i.e. 100% accurate). The procedure tries every possible value of each attribute and selects the condition with the highest information gain.
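The gain calculation for one candidate condition can be sketched as below. This is only an illustration of the idea (the function and attribute names are invented, not RapidMiner internals): the gain of a test is the drop in label entropy after splitting the examples into those the test covers and the rest.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    if not labels:
        return 0.0
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(examples, labels, attribute, value):
    """Gain of the candidate condition `attribute == value`.

    `examples` is a list of dicts mapping attribute names to nominal values.
    """
    covered = [lab for ex, lab in zip(examples, labels) if ex[attribute] == value]
    rest = [lab for ex, lab in zip(examples, labels) if ex[attribute] != value]
    n = len(labels)
    remainder = (len(covered) / n) * entropy(covered) + (len(rest) / n) * entropy(rest)
    return entropy(labels) - remainder

# A condition that separates the classes perfectly yields the full entropy as gain.
examples = [{"outlook": "sunny"}, {"outlook": "sunny"},
            {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(examples, labels, "outlook", "sunny"))  # 1.0
```

The growing step would evaluate this gain for every (attribute, value) pair and append the best test to the rule.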
In the pruning phase, any final sequence of antecedents of each rule is pruned with the pruning metric p/(p+n).
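The metric p/(p+n) is simply the fraction of examples covered by the rule that are positive. A minimal sketch of pruning final antecedents with it (all names here are illustrative assumptions, not RapidMiner's API):

```python
def prune_metric(rule, examples, labels, positive_class):
    """p/(p+n): fraction of covered examples that belong to the positive class.

    `rule` is a list of (attribute, value) conditions, joined conjunctively.
    """
    covered = [lab for ex, lab in zip(examples, labels)
               if all(ex[a] == v for a, v in rule)]
    if not covered:
        return 0.0
    p = sum(1 for lab in covered if lab == positive_class)
    return p / len(covered)

def prune(rule, examples, labels, positive_class):
    """Drop final conditions as long as doing so does not lower p/(p+n)."""
    best = list(rule)
    best_score = prune_metric(best, examples, labels, positive_class)
    while len(best) > 1:
        candidate = best[:-1]
        score = prune_metric(candidate, examples, labels, positive_class)
        if score >= best_score:
            best, best_score = candidate, score
        else:
            break
    return best

# The final condition on "b" adds nothing on this pruning set, so it is dropped.
examples = [{"a": "x", "b": "y"}, {"a": "x", "b": "z"}, {"a": "w", "b": "y"}]
labels = ["yes", "yes", "no"]
print(prune([("a", "x"), ("b", "y")], examples, labels, "yes"))  # [('a', 'x')]
```

Pruning on a held-out split (controlled by the sample ratio parameter below) is what keeps the grown, 100%-accurate rules from overfitting.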
Rule Set learners are often compared to Decision Tree learners. Rule Sets have the advantage that they are easy to understand, can be represented in first-order logic (making them easy to implement in languages like Prolog), and prior knowledge can be added to them easily. The major disadvantages of Rule Sets were that they scaled poorly with training set size and had problems with noisy data. The RIPPER algorithm (which this operator implements) largely overcomes these disadvantages. The major problem with Decision Trees is overfitting, i.e. the model works very well on the training set but does not perform well on the validation set. Reduced Error Pruning (REP) is a technique that tries to overcome overfitting. Through successive improvements and enhancements, REP evolved into IREP, then IREP*, and finally RIPPER.
Pruning in decision trees is a technique in which leaf nodes that do not add to the discriminative power of the tree are removed. This converts an over-specific or over-fitted tree into a more general form in order to enhance its predictive power on unseen data sets. A similar concept of pruning applies to Rule Sets.
- training set (IOObject)
This input port expects an ExampleSet. It is the output of the Discretize by Frequency operator in the attached Example Process. The output of other operators can also be used as input.
- model (Decision Rule Model)
The Rule Model is delivered from this output port. This model can now be applied to unseen data sets.
- example set (IOObject)
The ExampleSet that was given as input is passed through this port without any modifications. This is usually used to reuse the same ExampleSet in further operators or to view the ExampleSet in the Results Workspace.
- criterion: This parameter specifies the criterion for selecting attributes and numerical splits. It can have one of the following values:
- information_gain: The entropy of all the attributes is calculated, and the attribute with the minimum entropy is selected for the split. This method has a bias towards selecting attributes with a large number of values.
- accuracy: The attribute that maximizes the accuracy of the whole Rule Set is selected for the split.
- sample_ratio: This parameter specifies the ratio of the training data used for growing and pruning. Range: real
- pureness: This parameter specifies the desired pureness, i.e. the minimum ratio of the major class in a covered subset in order to consider the subset pure. Range: real
- minimal_prune_benefit: This parameter specifies the minimum amount by which the benefit of pruning must exceed the unpruned benefit in order for pruning to take place. Range: real
- use_local_random_seed: This parameter indicates if a local random seed should be used for randomization. Range: boolean
- local_random_seed: This parameter specifies the local random seed. It is only available if the use local random seed parameter is set to true. Range: integer
Introduction to the Rule Induction operator
The 'Golf' data set is loaded using the Retrieve operator. The Discretize by Frequency operator is applied to it to convert the numerical attributes into nominal attributes. This is done because rule learners usually perform well on nominal attributes. The number of bins parameter of the Discretize by Frequency operator is set to 3; all other parameters are used with default values. A breakpoint is inserted here so that you can have a look at the ExampleSet before the Rule Induction operator is applied. The Rule Induction operator is applied next, with all parameters at their default values. The resulting model is connected to the result port of the process. The Rule Set (RuleModel) can be seen in the Results Workspace after execution of the process.
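The discretization step in the process above works by equal-frequency binning. A minimal sketch of the core idea (RapidMiner's Discretize by Frequency additionally handles ties and produces named ranges; the function name and sample values here are only illustrative):

```python
def discretize_by_frequency(values, number_of_bins=3):
    """Equal-frequency binning: rank the values, then cut the ranking into
    `number_of_bins` chunks of (roughly) equal size and map each value to
    the index of its chunk."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    size = len(values) / number_of_bins
    for rank, i in enumerate(order):
        bins[i] = min(int(rank / size), number_of_bins - 1)
    return bins

# Twelve temperature-like readings fall into three bins of four values each.
temps = [64, 65, 68, 69, 70, 71, 72, 75, 80, 81, 83, 85]
print(discretize_by_frequency(temps))
# [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
```

Each bin then becomes one nominal value, which is the form the Rule Induction operator handles best.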