Details

Author(s) / Contributors
Title
Learning Cross-Media Joint Representation With Sparse and Semisupervised Regularization
Is part of
  • IEEE Transactions on Circuits and Systems for Video Technology, 2014-06, Vol. 24 (6), p. 965-978
Place / Publisher
New York, NY: IEEE
Year of publication
2014
Source
IEEE Xplore
Descriptions/Notes
  • Cross-media retrieval has become a key problem in both research and application: users can search for results across all media types (text, image, audio, video, and 3-D) by submitting a query of any media type. Measuring content similarity across different media is the key challenge. Existing cross-media retrieval methods usually model the pairwise correlation or the semantic information separately. In fact, these two kinds of information are complementary, and optimizing them simultaneously can further improve accuracy. In this paper, we propose a novel feature learning algorithm for cross-media data, called joint representation learning (JRL), which jointly explores the correlation and semantic information in a unified optimization framework. JRL integrates sparse and semisupervised regularization for different media types into one unified optimization problem, whereas existing feature learning methods generally focus on a single media type. On one hand, JRL learns sparse projection matrices for the different media simultaneously, so the different media can be aligned with each other, which makes the method robust to noise. On the other hand, both labeled and unlabeled data of the different media types are exploited: unlabeled examples increase the diversity of the training data and boost the performance of joint representation learning. Furthermore, JRL not only reduces the dimension of the original features but also incorporates the cross-media correlation into the final representation, which further improves the performance of both cross-media and single-media retrieval. Experiments on two datasets with up to five media types show the effectiveness of the proposed approach compared with state-of-the-art methods.

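To make the abstract's core ingredient concrete (per-media sparse projection matrices that map different media into a shared space), here is a minimal Python sketch. It is not the authors' JRL optimization, which couples correlation, semantic, and semisupervised graph regularization for all media types in one problem; it only illustrates the simpler idea of fitting an L1-regularized (sparse) projection per media type onto a common semantic label space by proximal gradient descent, so that image and text features become directly comparable. All names, data, and parameter values below are hypothetical.

# Illustrative sketch only, not the JRL algorithm from the paper: learn one
# sparse linear projection per media type onto a shared label space, so that
# an image query can be matched against text items in that space.
import numpy as np

def soft_threshold(M, t):
    """Entrywise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sparse_projection(X, Y, lam=0.05, lr=None, iters=1000):
    """Approximately minimize ||X @ P - Y||_F^2 + lam * ||P||_1 over P (ISTA)."""
    n, d = X.shape
    if lr is None:
        # safe step size from the Lipschitz constant of the smooth part
        lr = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)
    P = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ P - Y)              # gradient of the squared loss
        P = soft_threshold(P - lr * grad, lr * lam)  # proximal (sparsifying) step
    return P

if __name__ == "__main__":
    # Hypothetical toy data: 300 samples of two media types sharing 5 classes.
    rng = np.random.default_rng(0)
    n, c = 300, 5
    labels = rng.integers(0, c, n)
    Y = np.eye(c)[labels]                            # shared label indicator matrix
    X_img = rng.standard_normal((n, 128)) + Y @ rng.standard_normal((c, 128))
    X_txt = rng.standard_normal((n, 64)) + Y @ rng.standard_normal((c, 64))

    P_img = sparse_projection(X_img, Y)              # image features -> label space
    P_txt = sparse_projection(X_txt, Y)              # text features  -> label space

    # Both media now live in the same space, so an image query can rank text
    # items by inner-product similarity (a crude stand-in for retrieval).
    q = X_img[:1] @ P_img
    scores = (X_txt @ P_txt) @ q.T
    print("best-matching text item:", int(np.argmax(scores)))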