Ordinal regression is a special kind of machine learning problem that aims to label patterns on an ordinal scale. Because ordering information is ubiquitous in practical applications, ordinal regression has received much attention and appears in a great variety of settings. Meanwhile, the Kernel Extreme Learning Machine (KELM), the extension of the Extreme Learning Machine to the kernel-learning framework, has shown its strength in many machine learning tasks. Nevertheless, existing KELM methods have paid little attention to ordinal regression, especially in large-scale situations. In this paper, we consider the kernel technique and the ordinal scale of the labels jointly, and propose a new KELM model for ordinal regression that exploits a quadratic cost-sensitive encoding scheme. To make training more efficient in big-data scenarios, a fast algorithm is designed based on low-rank approximation. The incomplete Cholesky factorization and the Sherman–Morrison–Woodbury formula are used together to avoid computing the inverse of the kernel matrix. The time complexity of the new algorithm is thus linear in the number of training instances, which makes it suitable for large-scale settings. Numerical experiments on multiple public datasets validate the effectiveness and efficiency of the proposed methods.
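The computational idea behind the fast algorithm can be sketched as follows: with an incomplete Cholesky factorization K ≈ GGᵀ (G of size n×k, k ≪ n), the Sherman–Morrison–Woodbury identity turns the n×n solve (K + I/C)⁻¹T into a k×k solve, so training costs O(nk²) instead of O(n³). The Python sketch below is illustrative only, assuming a plain RBF kernel and a ridge-style KELM objective; the paper's quadratic cost-sensitive ordinal encoding is not reproduced, and all function names and parameters (gamma, C, max_rank) are hypothetical.

```python
import numpy as np

def rbf_kernel_column(X, i, gamma):
    """Column i of the RBF kernel matrix, evaluated on demand (no full K stored)."""
    diff = X - X[i]
    return np.exp(-gamma * np.sum(diff * diff, axis=1))

def incomplete_cholesky(X, gamma, max_rank, tol=1e-8):
    """Pivoted incomplete Cholesky: returns G (n x k) with K ~= G @ G.T,
    never forming the full n x n kernel matrix."""
    n = X.shape[0]
    G = np.zeros((n, max_rank))
    d = np.ones(n)                          # RBF diagonal: k(x, x) = 1
    for j in range(max_rank):
        i = int(np.argmax(d))               # pivot on largest residual diagonal
        if d[i] <= tol:                     # residual energy exhausted, stop early
            return G[:, :j]
        col = rbf_kernel_column(X, i, gamma)
        G[:, j] = (col - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

def kelm_fit_lowrank(X, T, gamma, C, max_rank):
    """Approximate (K + I/C)^{-1} T via the Sherman-Morrison-Woodbury identity
    applied to the low-rank factor G, at O(n k^2) cost instead of O(n^3)."""
    G = incomplete_cholesky(X, gamma, max_rank)
    lam = 1.0 / C
    k = G.shape[1]
    # SMW: (lam*I + G G^T)^{-1} T = (T - G (lam*I_k + G^T G)^{-1} G^T T) / lam
    inner = lam * np.eye(k) + G.T @ G       # only a k x k system is solved
    alpha = (T - G @ np.linalg.solve(inner, G.T @ T)) / lam
    return alpha

def kelm_predict(X_train, X_test, alpha, gamma):
    """Prediction f(x) = sum_i alpha_i k(x, x_i)."""
    d2 = (np.sum(X_test ** 2, axis=1)[:, None]
          + np.sum(X_train ** 2, axis=1)[None, :]
          - 2.0 * X_test @ X_train.T)
    return np.exp(-gamma * d2) @ alpha

# Toy usage with a plain regression target (the ordinal encoding is omitted here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
t = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
alpha = kelm_fit_lowrank(X, t, gamma=0.5, C=10.0, max_rank=50)
pred = kelm_predict(X, X[:5], alpha, gamma=0.5)
```

Since the inner system is only k×k and G is built column by column from on-demand kernel evaluations, both the time and memory footprint stay linear in the number of training instances, which is the property the abstract claims for the proposed algorithm.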