
Details

Author(s) / Contributors
Title
Antipodal-points-aware dual-decoding network for robotic visual grasp detection oriented to multi-object clutter scenes
Is part of
  • Expert systems with applications, 2023-11, Vol.230, p.120545, Article 120545
Place / Publisher
Elsevier Ltd
Year of publication
2023
Source
Alma/SFX Local Collection
Descriptions / Notes
  • It is challenging for robots to detect grasps with high accuracy and efficiency in multi-object clutter scenes, especially scenes with objects of large-scale differences. Effective grasping representation, full utilization of data, and formulation of grasping strategies are critical to solving the problem. To this end, this paper proposes an antipodal-points grasping representation model. Based on this, the Antipodal-Points-aware Dual-decoding Network (APDNet) is presented for grasp detection in multi-object scenes. APDNet employs an encoding–decoding architecture. A shared encoding strategy based on an Adaptive Gated Fusion Module (AGFM) is proposed in the encoder to fuse RGB-D multimodal data. Two decoding branches, namely StartpointNet and EndpointNet, are presented to detect antipodal points. To better focus on objects at different scales in multi-object scenes, a global multi-view cumulative attention mechanism, called Global Accumulative Attention Mechanism (GAAM), is also designed for StartpointNet. The proposed method is comprehensively validated and compared on a public dataset and a real robot platform. On the GraspNet-1Billion dataset, the proposed method achieves 30.7%, 26.4%, and 12.7% accuracy at a speed of 88.4 FPS for seen, unseen, and novel objects, respectively. On the AUBO robot platform, the detection and grasp success rates are 100.0% and 95.0% in single-object scenes and 97.0% and 90.3% in multi-object scenes, respectively. It is demonstrated that the proposed method exhibits state-of-the-art performance with well-balanced accuracy and efficiency.
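
  • The abstract describes an encoder–decoder network that fuses RGB-D features through a gated module and decodes the two antipodal grasp points in separate branches. The following is a minimal sketch of that dual-decoding idea, assuming a PyTorch implementation; the module names (GatedFusion, DualDecodingGraspNet), layer sizes, and per-pixel heatmap outputs are illustrative assumptions and not the authors' APDNet code.

# Sketch of the dual-decoding antipodal-points idea from the abstract.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Fuses RGB and depth features with a learned per-channel gate
    (stand-in for the Adaptive Gated Fusion Module described above)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        return g * rgb_feat + (1.0 - g) * depth_feat


class DualDecodingGraspNet(nn.Module):
    """Shared RGB-D encoding followed by two decoding branches that
    predict per-pixel heatmaps for the start and end antipodal points."""

    def __init__(self, feat_channels: int = 32):
        super().__init__()
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, feat_channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.fusion = GatedFusion(feat_channels)
        # One lightweight decoding head per antipodal point.
        self.startpoint_head = nn.Conv2d(feat_channels, 1, kernel_size=1)
        self.endpoint_head = nn.Conv2d(feat_channels, 1, kernel_size=1)

    def forward(self, rgb, depth):
        fused = self.fusion(self.rgb_encoder(rgb), self.depth_encoder(depth))
        start_map = torch.sigmoid(self.startpoint_head(fused))
        end_map = torch.sigmoid(self.endpoint_head(fused))
        return start_map, end_map


if __name__ == "__main__":
    net = DualDecodingGraspNet()
    rgb = torch.randn(1, 3, 224, 224)    # RGB image
    depth = torch.randn(1, 1, 224, 224)  # aligned depth map
    start_map, end_map = net(rgb, depth)
    print(start_map.shape, end_map.shape)  # torch.Size([1, 1, 224, 224]) each

    Splitting the prediction into two heatmaps mirrors the StartpointNet/EndpointNet branches named in the abstract; the paper's GAAM attention for StartpointNet is omitted here for brevity.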
Language
English
Identifiers
ISSN: 0957-4174
eISSN: 1873-6793
DOI: 10.1016/j.eswa.2023.120545
Title ID: cdi_crossref_primary_10_1016_j_eswa_2023_120545
