Choose Your Words Carefully: An Empirical Study of Feature Selection Metrics for Text Classification
Forman, George
HPL-2002-88R2
Keyword(s): supervised machine learning; document categorization; support vector machines; binormal separation; residual failure analysis
Abstract: Good feature selection is essential for text classification, both to make it tractable for machine learning and to improve classification performance. This study benchmarks the performance of twelve feature selection metrics across 229 text classification problems drawn from Reuters, OHSUMED, TREC, and other collections, using Support Vector Machines. The results are analyzed for various objectives. For best accuracy, F-measure, or recall, the findings reveal an outstanding new feature selection metric, "Bi-Normal Separation" (BNS). For precision alone, however, Information Gain (IG) was superior. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner who seeks to choose one or two metrics to try that are most likely to perform best on the single dataset at hand. This analysis determined, for example, that IG and Chi-Squared have correlated failures for precision, and that pairing IG with BNS is a better choice.
Notes: Copyright Springer-Verlag. Published in and presented at the 13th European Conference on Machine Learning (ECML '02) / 6th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD '02), 19-23 August 2002, Helsinki, Finland.
12 Pages
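
For illustration, below is a minimal Python sketch of the Bi-Normal Separation score named in the abstract, assuming the definition given in the full paper: BNS = |F^-1(tpr) - F^-1(fpr)|, where F^-1 is the inverse cumulative distribution function of the standard normal, tpr = tp/pos, and fpr = fp/neg for a candidate word. The function name and the clipping bound used to avoid infinite z-scores at rates of exactly 0 or 1 are illustrative choices, not values prescribed by the paper.

    # Sketch of the Bi-Normal Separation (BNS) feature selection score,
    # assuming BNS = |F^-1(tpr) - F^-1(fpr)| with F^-1 the standard normal
    # inverse CDF. The eps clipping value is an illustrative assumption.
    from scipy.stats import norm

    def bns_score(tp: int, fp: int, pos: int, neg: int, eps: float = 0.0005) -> float:
        """BNS score for one binary feature.

        tp  -- positive documents containing the feature
        fp  -- negative documents containing the feature
        pos -- total number of positive documents
        neg -- total number of negative documents
        """
        # Clip the rates away from 0 and 1 so the inverse normal CDF stays finite.
        tpr = min(max(tp / pos, eps), 1 - eps)
        fpr = min(max(fp / neg, eps), 1 - eps)
        return abs(norm.ppf(tpr) - norm.ppf(fpr))

    # Example: a word occurring in 80 of 100 positive docs and 50 of 900 negatives.
    print(bns_score(tp=80, fp=50, pos=100, neg=900))  # higher score = more useful feature

In a feature selection pass, such a score would be computed for every candidate word and the top-k words kept as the document representation for the classifier.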