Binary JAYA Algorithm with Adaptive Mutation for Feature Selection


Awadallah M. A., Al-Betar M. A., Hammouri A. I., Alomari O. A.

Arabian Journal for Science and Engineering, vol. 45, no. 12, pp. 10875-10890, 2020 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 45 Issue: 12
  • Publication Date: 2020
  • Doi Number: 10.1007/s13369-020-04871-2
  • Journal Name: Arabian Journal for Science and Engineering
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Communication Abstracts, Metadex, Pollution Abstracts, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp.10875-10890
  • Keywords: Feature selection, JAYA algorithm, Machine learning, Metaheuristic, Optimization
  • Istanbul Gelisim University Affiliated: Yes

Abstract

© 2020, King Fahd University of Petroleum & Minerals. In this paper, the JAYA metaheuristic algorithm is adapted for feature selection. Feature selection is a typical problem in the machine learning and data mining domains, concerned with determining a subset of highly discriminative features from irrelevant, noisy, redundant, and high-dimensional feature sets. The JAYA algorithm was originally proposed for continuous optimization; owing to the binary nature of the feature selection problem, it is adapted here using a sinusoidal (i.e., S-shaped) transfer function. Furthermore, a mutation operator controlled by an adaptive mutation rate (Rm) parameter is utilized to maintain diversity during the search. The proposed binary JAYA algorithm with adaptive mutation is called the BJAM algorithm. The performance of BJAM is tested on 22 real-world benchmark datasets that vary in the number of features and the number of instances. Four measures are used for performance analysis: classification accuracy, number of selected features, fitness value, and computational time. First, the binary JAYA (BJA) algorithm is compared with the proposed BJAM algorithm to show the effect of the mutation operator on convergence behavior. The results produced by BJAM are then compared against those yielded by ten state-of-the-art methods. The proposed BJAM algorithm outperforms the comparative methods on 7 of the 22 datasets in terms of classification accuracy. These results indicate that BJAM is an efficient algorithm for problems in the feature selection domain and a promising direction for further work.
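The abstract describes two ingredients: an S-shaped transfer function that converts the continuous JAYA update into bit-selection probabilities, and a bit-flip mutation governed by an adaptive rate Rm. The sketch below illustrates one such iteration under stated assumptions: it uses the standard continuous JAYA move and a sigmoid as the S-shaped transfer function, and it assumes a linearly decreasing Rm schedule with placeholder bounds (`rm_max`, `rm_min`); the paper's exact transfer function, schedule, and parameter values are not given in this abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def s_shape(v):
    # Sigmoid used as an illustrative S-shaped transfer function:
    # maps a continuous JAYA step to a probability of setting the bit to 1.
    return 1.0 / (1.0 + np.exp(-v))

def bjam_update(pop, fitness, t, t_max, rm_max=0.5, rm_min=0.01):
    """One BJAM-style iteration (sketch): JAYA move -> S-shape
    binarization -> adaptive bit-flip mutation. The linearly
    decreasing Rm schedule and its bounds are assumptions."""
    n, d = pop.shape
    best = pop[np.argmin(fitness)]    # minimization assumed
    worst = pop[np.argmax(fitness)]
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    # Continuous JAYA step: move toward the best, away from the worst.
    v = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    # Binarize via the S-shaped transfer function.
    new_pop = (rng.random((n, d)) < s_shape(v)).astype(int)
    # Adaptive mutation rate Rm, decreasing as the search progresses.
    rm = rm_max - (rm_max - rm_min) * t / t_max
    flip = rng.random((n, d)) < rm
    new_pop[flip] = 1 - new_pop[flip]  # bit-flip mutation
    return new_pop

# Toy run with a placeholder fitness (number of selected features);
# a real feature-selection fitness would combine classifier accuracy
# and subset size.
pop = rng.integers(0, 2, size=(6, 10))
fit = pop.sum(axis=1)
pop = bjam_update(pop, fit, t=1, t_max=100)
print(pop.shape)
```

In a full run, each row of `pop` is a feature mask evaluated by training a classifier on the selected columns; the mutation step counteracts the premature convergence that the abstract's BJA-versus-BJAM comparison examines.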