
Details

Author(s) / Contributors
Kaichun Mo; Leonidas Guibas; Mustafa Mukadam; Abhinav Gupta; Shubham Tulsiani
Title
Where2Act: From Pixels to Actions for Articulated 3D Objects
Is Part Of
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.6793-6803
Place / Publisher
IEEE
Year of Publication
2021
Source
IEEE/IET Electronic Library
Descriptions / Notes
  • One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal: we extract highly localized actionable information related to elementary actions, such as pushing or pulling, for articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that, given image and depth data, predict the set of actions possible at each pixel and the regions over articulated parts that are likely to move under the applied force. We propose a learning-from-interaction framework with an online data-sampling strategy that allows us to train the network in simulation (SAPIEN) and to generalize across categories. See the project website for the code and data release.
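
    The abstract describes a fully convolutional mapping from RGB-D pixels to
    per-action scores plus a prediction of which regions are likely to move.
    The minimal PyTorch sketch below illustrates only that input/output
    interface; it is not the authors' released implementation, and every
    module name, layer size, and the two-action setup (push/pull) are
    assumptions made here for illustration.

        # Hypothetical sketch of a per-pixel actionability predictor.
        # Not the Where2Act architecture; see the paper's website for the real code.
        import torch
        import torch.nn as nn

        class PerPixelActionability(nn.Module):
            # Maps an RGB-D image (4 channels) to one logit per primitive action
            # at every pixel, plus a per-pixel "likely to move" logit.
            def __init__(self, num_actions=2, hidden=32):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(4, hidden, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
                )
                self.action_head = nn.Conv2d(hidden, num_actions, kernel_size=1)
                self.movable_head = nn.Conv2d(hidden, 1, kernel_size=1)

            def forward(self, rgbd):
                feat = self.encoder(rgbd)                 # (B, hidden, H, W)
                return self.action_head(feat), self.movable_head(feat)

        model = PerPixelActionability(num_actions=2)      # e.g. push, pull
        rgbd = torch.randn(1, 4, 128, 128)                # RGB + depth channels
        action_logits, movable_logits = model(rgbd)       # both per-pixel maps
        action_probs = torch.sigmoid(action_logits)       # scores in [0, 1]

    In the paper's framework such scores would be supervised by interaction
    outcomes sampled online in the SAPIEN simulator; this untrained stub only
    shows the shapes involved.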
Language
English
Identifiers
eISSN: 2380-7504
DOI: 10.1109/ICCV48922.2021.00674
Title ID: cdi_ieee_primary_9710965
