Learning to Predict Part Mobility from a Single Static Snapshot
Journal Article
Abstract
We introduce a method for learning a model of part mobility in 3D objects. Our method not only enables understanding the dynamic functionalities of one or more parts of a 3D object, but also allows the mobility functions to be applied to static 3D models. Specifically, the learned part mobility model can predict mobilities for the parts of a 3D object given as a single static snapshot reflecting the spatial configuration of the object's parts in 3D space, and can transfer the mobility from relevant units in the training data. The training data consists of a set of mobility units of different motion types. Each unit is composed of a pair of 3D object parts (one moving and one reference part), along with usage examples consisting of a few snapshots capturing different motion states of the unit. Taking advantage of the linearity exhibited by most part motions in everyday objects, and utilizing a set of part-relation descriptors, we define a mapping from static snapshots to dynamic units. This mapping employs a motion-dependent snapshot-to-unit distance obtained via metric learning. We show that our learning scheme leads to accurate motion prediction from single static snapshots and allows proper motion transfer. We also demonstrate other applications such as motion-driven object detection and motion hierarchy construction.
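The core pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptor, the per-unit metric matrices, and all function names are assumptions made for clarity. The paper uses richer part-relation descriptors and learns each motion-dependent metric from training snapshots; here we simply show how a learned Mahalanobis-style metric would turn a query snapshot into a nearest-unit (and hence motion-type) prediction.

```python
import numpy as np

def relation_descriptor(moving_pts, reference_pts):
    """Hypothetical part-relation descriptor for a (moving, reference) pair.

    Concatenates the centroid offset between the two parts with the ratio
    of their axis-aligned extents. Stand-in for the paper's descriptors.
    """
    offset = moving_pts.mean(axis=0) - reference_pts.mean(axis=0)
    extent_ratio = np.ptp(moving_pts, axis=0) / (np.ptp(reference_pts, axis=0) + 1e-9)
    return np.concatenate([offset, extent_ratio])

def snapshot_to_unit_distance(snapshot_desc, unit_descs, M):
    """Motion-dependent distance from one snapshot to a mobility unit.

    Uses a Mahalanobis-style metric with a (learned, here given) PSD matrix M,
    taking the minimum over the unit's example snapshots so a query close to
    any observed motion state of the unit scores as close to the unit.
    """
    diffs = unit_descs - snapshot_desc            # one row per example snapshot
    sq = np.einsum('ij,jk,ik->i', diffs, M, diffs)  # d_i^T M d_i for each row
    return float(np.sqrt(sq.min()))

def predict_unit(snapshot_desc, units, metrics):
    """Return the training unit whose learned metric places the query closest."""
    return min(units, key=lambda u: snapshot_to_unit_distance(
        snapshot_desc, units[u], metrics[u]))
```

With identity metrics and toy 2D descriptors, a query snapshot is simply assigned to the unit with the nearest example; a learned `M` would instead stretch the descriptor space per motion type so that descriptor dimensions irrelevant to that motion are down-weighted.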
BibTeX
@article{hu17icon3,
author = {Ruizhen Hu and Wenchao Li and Oliver van Kaick and Ariel Shamir and Hao Zhang and Hui Huang},
title = {Learning to Predict Part Mobility from a Single Static Snapshot},
journal = {ACM Trans. on Graphics (Proc. SIGGRAPH Asia)},
volume = {36},
number = {6},
pages = {227:1--227:13},
year = 2017,
}