Abstract

Feature selection techniques can be evaluated based on either model performance or the stability (robustness) of the technique. The ideal situation is to choose a feature selection technique that is robust to change, while also ensuring that models built with the selected features perform well. One domain where feature selection is especially important is software defect prediction, where large numbers of metrics collected from previous software projects are used to help engineers focus their efforts on the most faulty modules. This study presents a comprehensive empirical examination of seven filter-based feature ranking techniques (rankers) applied to nine real-world software measurement datasets of different sizes. Experimental results demonstrate that the signal-to-noise ranker performed moderately in terms of robustness and was the best ranker in terms of model performance. The study also shows that although Relief was the most stable feature selection technique, it performed significantly worse than the other rankers in terms of model performance.
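
For readers unfamiliar with filter-based ranking, the sketch below illustrates the general idea using the signal-to-noise criterion mentioned above: each software metric is scored by how well it separates defective from non-defective modules, and the top-scoring metrics are kept. This is a minimal illustration assuming binary class labels and the common signal-to-noise formulation (mean difference over summed class standard deviations); it is not necessarily the exact implementation or preprocessing used in the study.

```python
import numpy as np

def signal_to_noise_ranking(X, y):
    """Rank features of a binary-class dataset by signal-to-noise ratio.

    X : (n_samples, n_features) array of software metrics
    y : (n_samples,) labels, 1 = defective module, 0 = non-defective
    Returns feature indices ordered from most to least discriminative.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    pos, neg = X[y == 1], X[y == 0]
    eps = 1e-12  # guard against zero-variance features
    # S2N = |mean(pos) - mean(neg)| / (std(pos) + std(neg))
    s2n = np.abs(pos.mean(axis=0) - neg.mean(axis=0)) / (
        pos.std(axis=0) + neg.std(axis=0) + eps
    )
    return np.argsort(s2n)[::-1]

# Hypothetical usage: keep only the top-k ranked metrics before modeling
# top_k_features = signal_to_noise_ranking(X_train, y_train)[:5]
```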

Disciplines

Computer Engineering | Computer Sciences | Engineering | Physical Sciences and Mathematics
