Feature selection aims to select a subset of features from high-dimensional data according to a predefined selection criterion. Sparse learning has proven to be a powerful technique in feature selection. The sparse regularizer, a key component of sparse learning, has been studied for several years. Although convex regularizers have been used in many works, in some cases nonconvex regularizers outperform convex ones. To make the selection of relevant features more effective, we propose a novel nonconvex sparse metric on matrices as the sparsity regularization in this paper. The new nonconvex regularizer can be written as the difference of the <inline-formula> <tex-math notation="LaTeX">\ell _{2,1} </tex-math></inline-formula> norm and the Frobenius (<inline-formula> <tex-math notation="LaTeX">\ell _{2,2} </tex-math></inline-formula>) norm, and is named the <inline-formula> <tex-math notation="LaTeX">\ell _{2,1-2} </tex-math></inline-formula> regularizer. To solve the resulting nonconvex formulation, we design an iterative algorithm within the framework of the ConCave-Convex Procedure (CCCP) and prove its strong global convergence. An adapted alternating direction method of multipliers is embedded to solve the sequence of convex subproblems in the CCCP efficiently. Using the scaled cluster indicators of data points as pseudolabels, we also apply <inline-formula> <tex-math notation="LaTeX">\ell _{2,1-2} </tex-math></inline-formula> to the unsupervised case. To the best of our knowledge, this is the first work to consider nonconvex regularization for matrices in the unsupervised learning scenario. Numerical experiments are performed on real-world data sets to demonstrate the effectiveness of the proposed method.
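The regularizer described above is simple to evaluate: it is the row-wise <tex-math notation="LaTeX">\ell _{2,1}</tex-math> norm of a weight matrix minus its Frobenius norm. The following NumPy sketch (an illustrative helper, not code from the paper) computes this quantity and shows why it favors row sparsity: when only one row of the matrix is nonzero, the two norms coincide and the metric vanishes, while spreading the same energy over many rows makes it strictly positive.

```python
import numpy as np

def l21_minus_l22(W):
    """Illustrative l_{2,1-2} metric: l_{2,1} norm minus Frobenius norm.

    l_{2,1}(W) = sum of the Euclidean norms of the rows of W;
    l_{2,2}(W) = Frobenius norm of W (sqrt of the sum of squared entries).
    """
    row_norms = np.linalg.norm(W, axis=1)  # per-row l2 norms
    l21 = row_norms.sum()
    fro = np.linalg.norm(W)                # Frobenius norm
    return l21 - fro

# One nonzero row: l_{2,1} equals the Frobenius norm, so the metric is 0.
W_sparse = np.array([[3.0, 4.0], [0.0, 0.0], [0.0, 0.0]])
# Same structure spread over three rows: the metric is strictly positive.
W_dense = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
print(l21_minus_l22(W_sparse))  # 0.0
print(l21_minus_l22(W_dense))   # 3 - sqrt(3)
```

In a sparse-learning objective, penalizing this difference therefore pushes the energy of the weight matrix into a few rows, which corresponds to selecting a small subset of features.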