Feature subset selection is a preprocessing step in machine learning that serves several purposes: reducing the dimensionality of a dataset, decreasing the computational time required for classification, and enhancing the classification accuracy of a classifier by removing redundant, misleading, or erroneous features. This paper presents a new feature selection and weighting method built on the decomposition-based evolutionary multi-objective algorithm MOEA/D. Features are selected and weighted (scaled) simultaneously so that the data points are projected into a space where the distance between points of non-identical classes is increased, making them easier to classify. The inter-class and intra-class distances are optimized simultaneously with MOEA/D to obtain the optimal feature subset and the scaling factors associated with it. Finally, k-NN (k-nearest neighbor) is used to classify the data points with the reduced, weighted feature set. The proposed algorithm is tested on several practical datasets from well-known repositories such as UCI and LIBSVM, and the results are compared with those of state-of-the-art algorithms to demonstrate its superiority.
• Presents a simultaneous feature selection and weighting method.
• Use of a penalty to reduce the number of selected features.
• Use of the very competitive MOEA/D as the core optimizer.
• Best compromise solution to obtain the best feature selection and weighting vector.
• Evaluation on UCI and LIBSVM datasets.
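The two conflicting objectives and the final k-NN step described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the objective definitions (centroid-based intra-class and inter-class distances), the function names, and the use of plain Euclidean k-NN are all assumptions; the paper's MOEA/D search over weight vectors is not reproduced here.

```python
import numpy as np

def objectives(X, y, w):
    """Evaluate a candidate weight vector w (a zero entry drops a feature).
    Returns (intra, inter): intra-class distance to minimize and
    inter-class distance to maximize. Centroid-based definitions are
    an assumption for illustration, not the paper's formulas."""
    Xw = X * w  # scale/select features
    classes = np.unique(y)
    centroids = np.array([Xw[y == c].mean(axis=0) for c in classes])
    # intra-class: mean distance of points to their own class centroid
    intra = np.mean([np.linalg.norm(Xw[y == c] - centroids[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    # inter-class: mean pairwise distance between class centroids
    pairs = [(i, j) for i in range(len(classes))
             for j in range(i + 1, len(classes))]
    inter = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                     for i, j in pairs])
    return intra, inter

def knn_predict(X_train, y_train, X_test, w, k=3):
    """Classify test points with k-NN in the weighted feature space."""
    preds = []
    for x in X_test * w:
        d = np.linalg.norm(X_train * w - x, axis=1)
        neighbors = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(neighbors, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

In a MOEA/D setting, each individual would encode a weight vector `w`, the pair `(intra, -inter)` would serve as the decomposed objectives, and the best compromise solution from the resulting Pareto front would supply the weights passed to `knn_predict`.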