You are viewing the RapidMiner Studio documentation for version 9.5.
Performance (RapidMiner Studio Core)
Synopsis
This operator is used for performance evaluation. It delivers a list of performance criteria values. These performance criteria are determined automatically to fit the type of the learning task.
In contrast to other performance evaluation operators like the Performance (Classification) operator, the Performance (Binominal Classification) operator or the Performance (Regression) operator, this operator can be used for all types of learning tasks. It automatically determines the learning task type and calculates the most common criteria for that type. For more sophisticated performance calculations, you should use the operators mentioned above. If none of them meets your requirements, you can use the Performance (User-Based) operator, which allows you to define your own performance measure.
The following criteria are added for binominal classification tasks:
- AUC (optimistic)
- AUC (neutral)
- AUC (pessimistic)
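The three AUC variants differ only in how ties between a positive and a negative confidence score are counted. The sketch below illustrates this interpretation in Python; the function name is illustrative, and the exact tie handling used internally by RapidMiner is an assumption here:

```python
def auc_variants(labels, scores):
    """Pairwise-comparison AUC for a binominal task.

    labels: 1 for the positive class, 0 for the negative class.
    scores: predicted confidence for the positive class.
    The variants differ only in how positive/negative score ties count.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    n = len(pos) * len(neg)  # number of positive/negative pairs
    wins = sum(1 for p in pos for q in neg if p > q)
    ties = sum(1 for p in pos for q in neg if p == q)
    return {
        "optimistic": (wins + ties) / n,        # ties counted as correct
        "neutral": (wins + 0.5 * ties) / n,     # standard AUC
        "pessimistic": wins / n,                # ties counted as incorrect
    }
```

With no ties all three values coincide; the spread between optimistic and pessimistic indicates how many tied confidence values the model produced.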
The following criteria are added for polynominal classification tasks:
- Kappa statistic
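The Kappa statistic measures agreement between label and prediction beyond what chance alone would produce. A minimal sketch of Cohen's kappa in Python (the function name and list-based inputs are illustrative, not part of RapidMiner):

```python
def kappa(labels, predictions):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    classes = sorted(set(labels) | set(predictions))
    n = len(labels)
    # observed agreement: fraction of examples where prediction equals label
    p_o = sum(1 for l, p in zip(labels, predictions) if l == p) / n
    # chance agreement: sum over classes of (label share * prediction share)
    p_e = sum((labels.count(c) / n) * (predictions.count(c) / n) for c in classes)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0 means the agreement is no better than chance; 1 means perfect agreement.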
The following criteria are added for regression tasks:
- Root Mean Squared Error
- Mean Squared Error
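Both regression criteria can be computed directly from the label and prediction values. A minimal Python sketch (helper names are illustrative):

```python
import math

def mse(labels, predictions):
    """Mean Squared Error: average of the squared residuals."""
    return sum((l - p) ** 2 for l, p in zip(labels, predictions)) / len(labels)

def rmse(labels, predictions):
    """Root Mean Squared Error: square root of the MSE, in label units."""
    return math.sqrt(mse(labels, predictions))
```

RMSE is often preferred for reporting because it is expressed in the same units as the label attribute.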
- labelled data (IOObject)
This input port expects a labelled ExampleSet. The Apply Model operator, for example, provides labelled data. Make sure that the ExampleSet has both a label attribute and a prediction attribute. See the Set Role operator for more details.
- performance (Performance Vector)
This is an optional input port. It expects a Performance Vector.
- performance (Performance Vector)
This port delivers a Performance Vector (call it the output performance vector), i.e. a list of performance criteria values. The output performance vector contains the criteria calculated by this Performance operator (the calculated performance vector). If a Performance Vector was also fed into the performance input port (the input performance vector), its criteria are added to the output performance vector as well. If the input and calculated performance vectors share a criterion but with different values, the value from the calculated performance vector is delivered through the output port. This concept is easy to follow in the attached Example Process.
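The merge behaviour described above can be modelled as a dictionary update in which calculated criteria override input criteria of the same name. A hypothetical Python sketch, not RapidMiner's actual implementation:

```python
def merge_performance_vectors(input_pv, calculated_pv):
    """Combine the criteria of an input and a calculated performance vector.

    On a name clash the calculated value wins, mirroring the port
    behaviour described above.
    """
    merged = dict(input_pv)        # criteria fed in at the performance port
    merged.update(calculated_pv)   # calculated criteria override duplicates
    return merged
```

This is exactly what happens in the Example Process below: the input vector's 100% accuracy is replaced by the calculated 71.43%, while its classification error criterion survives into the output.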
- example set (IOObject)
The ExampleSet that was given as input is passed without changing to the output through this port. This is usually used to reuse the same ExampleSet in further operators or to view the ExampleSet in the Results Workspace.
- use_example_weights
This parameter allows example weights to be used for performance calculations where possible. It has no effect unless the ExampleSet has an attribute with the weight role. Several operators are available that assign weights, e.g. the Generate Weights operator. Please study the Set Role operator for more information regarding the weight role. Range: boolean
Assessing the performance of a prediction
This process is composed of two Subprocess operators and one Performance operator. Double-click on the first Subprocess operator to see the operators within it. The first subprocess, 'Subprocess (labeled data provider)', loads the 'Golf' data set using the Retrieve operator and learns a classification model using the k-NN operator. The learnt model is then applied to the 'Golf-Testset' data set using the Apply Model operator, and the Generate Weights operator adds an attribute with the weight role. Thus, this subprocess provides a labelled ExampleSet with a weight attribute. A breakpoint is inserted after this subprocess to show this ExampleSet. It is provided at the labelled data input port of the Performance operator in the main process.
The second Subprocess operator, 'Subprocess (performance vector provider)', loads the 'Golf' data set using the Retrieve operator and learns a classification model using the k-NN operator. The learnt model is then applied to the 'Golf' data set using the Apply Model operator, and the Performance (Classification) operator is applied to the labelled data to produce a Performance Vector. A breakpoint is inserted after this subprocess to show this Performance Vector. Note that this model was trained and tested on the same data set (the 'Golf' data set), so its accuracy is 100%. Thus this subprocess provides a Performance Vector with 100% accuracy and 0.00% classification error. This Performance Vector is connected to the performance input port of the Performance operator in the main process.
When you run the process, you will first see an ExampleSet, the output of the first Subprocess operator. Press the Run button again and you will see a Performance Vector, the output of the second Subprocess operator. Press the Run button once more and you will see various criteria in the criterion selector window in the Results Workspace. These include classification error, accuracy, precision, recall, AUC (optimistic), AUC and AUC (pessimistic). Now select accuracy from the criterion selector window; its value is 71.43%. By contrast, the accuracy of the input Performance Vector provided by the second subprocess was 100%. The accuracy of the final Performance Vector is 71.43% instead of 100% because when the input Performance Vector and the calculated Performance Vector share a criterion with different values, the value of the calculated Performance Vector is delivered through the output port. Also note that the classification error criterion appears in the criteria list only because of the Performance Vector provided at the performance input port. Disable the second Subprocess operator and run the same process again: the classification error criterion no longer appears. This confirms that when a Performance Vector is fed into the performance input port, its criteria are added to the output Performance Vector.
Accuracy is calculated as the percentage of correct predictions over the total number of examples. A correct prediction is an example whose prediction attribute value equals its label attribute value. If you look at the ExampleSet in the Results Workspace, you can see that there are 14 examples in this data set, 10 of which are correct predictions, i.e. their label and prediction attributes have the same values. This is why the accuracy is 71.43% (10 x 100 / 14 = 71.43%). Now run the same process again, but this time set the use example weights parameter to true. Check the results again. They have changed because the weight of each example was taken into account this time; the accuracy is now 68.89%. If you divide the total weight of the correct predictions by the total weight of all examples, you get the same answer (0.6889 x 100 / 1 = 68.89%). In this Example Process, using weights reduced the accuracy, but this is not always the case.
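The plain and weighted accuracy calculations above can be sketched in a few lines of Python; the function name and arguments are illustrative, not part of RapidMiner:

```python
def accuracy(labels, predictions, weights=None):
    """Fraction of correct predictions; weighted if weights are given.

    With no weights, every example counts equally (weight 1.0), which
    reduces to correct / total. With weights, it is the total weight of
    the correct predictions divided by the total weight of all examples.
    """
    if weights is None:
        weights = [1.0] * len(labels)
    correct = sum(w for l, p, w in zip(labels, predictions, weights) if l == p)
    return correct / sum(weights)
```

On a weighted ExampleSet the two results differ whenever misclassified examples carry above- or below-average weight, which is exactly why the Example Process reports 71.43% unweighted but 68.89% weighted.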
Note: This Example Process is just for highlighting different perspectives of the Performance operator. It may not be very useful in real scenarios.