Technical Reports

HPL-2009-359


Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement

Forman, George; Scholz, Martin
HP Laboratories


Keyword(s): AUC, F-measure, machine learning, ten-fold cross-validation, classification performance measurement, high class imbalance, class skew, experiment protocol

Abstract: Cross-validation is a mainstay for measuring performance and progress in machine learning. There are subtle differences in how exactly to compute accuracy, F-measure and Area Under the ROC Curve (AUC) in cross-validation studies. However, these details are not discussed in the literature, and incompatible methods are used by various papers and software packages, leading to inconsistency across the research literature. Anomalies in the performance calculations for particular folds and situations go undiscovered when they are buried in results aggregated over many folds and datasets, since no one ever inspects the intermediate performance measurements. This research note clarifies and illustrates the differences, and it provides guidance on how best to measure classification performance under cross-validation. In particular, there are several divergent methods in use for computing F-measure, which is often recommended as a performance measure under class imbalance, e.g., for text classification domains and in one-vs.-all reductions of datasets having many classes. We show by experiment that all but one of these computation methods lead to biased measurements, especially under high class imbalance. This paper is of particular interest to those designing machine learning software libraries and to researchers focused on high class imbalance.
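To make the ambiguity concrete, the sketch below (not taken from the report; the function names, fold counts, and toy data are illustrative assumptions) contrasts two common ways of computing F-measure under k-fold cross-validation: averaging the per-fold F-measures versus pooling the true-positive, false-positive, and false-negative counts across all folds before computing a single F-measure. On skewed data, where some folds contain few or no positives, the two can disagree noticeably; the report's experiments examine which such strategies are biased.

# Illustrative sketch (assumption, not code from the report): two divergent
# ways of computing F-measure over k cross-validation folds.

def f_measure(tp, fp, fn):
    # Standard F1 = 2*TP / (2*TP + FP + FN); returns 0.0 for an
    # all-zero contingency to avoid division by zero.
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

# Hypothetical per-fold (tp, fp, fn) counts for a class-imbalanced dataset;
# the second fold happens to catch no true positives at all.
folds = [(3, 1, 0), (0, 0, 2), (2, 0, 1), (1, 2, 1), (3, 1, 1)]

# Method 1: compute F-measure per fold, then average the fold scores.
# Degenerate folds (no positives found) drag the average down.
f_avg = sum(f_measure(*fold) for fold in folds) / len(folds)

# Method 2: merge the counts across all folds, then compute F-measure once.
tp = sum(fold[0] for fold in folds)
fp = sum(fold[1] for fold in folds)
fn = sum(fold[2] for fold in folds)
f_merge = f_measure(tp, fp, fn)

print(f"F_avg   = {f_avg:.3f}")    # 0.561 on this toy data
print(f"F_merge = {f_merge:.3f}")  # 0.667 on this toy data

The gap between the two numbers on even this tiny example shows why aggregated cross-validation results can hide fold-level anomalies: the choice of aggregation method alone changes the reported score.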

9 Pages

Additional publication information: Published in ACM SIGKDD Explorations Newsletter, Volume 12, Issue 1, June 2010.

External Posting Date: November 21, 2009 [Fulltext]. Approved for External Publication
Internal Posting Date: November 21, 2009 [Fulltext]
