CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement

Chengwen Zhang*1,2, Yun Liu*1,3,4, Ruofan Xing1, Bingda Tang1, Li Yi1,3,4
1Institute for Interdisciplinary Information Sciences, Tsinghua University, 2Beijing University of Posts and Telecommunications, 3Shanghai Artificial Intelligence Laboratory, 4Shanghai Qi Zhi Institute
*Equal Contribution

CORE4D Dataset Overview

Abstract

Understanding how humans cooperatively rearrange household objects is critical for VR/AR and human-robot interaction. However, in-depth studies on modeling these behaviors are scarce due to the lack of relevant datasets. We fill this gap by presenting CORE4D, a novel large-scale 4D human-object-human interaction dataset focusing on collaborative object rearrangement, which encompasses diverse compositions of object geometries, collaboration modes, and 3D scenes. Starting from 1K human-object-human motion sequences captured in the real world, we enrich CORE4D with an iterative collaboration retargeting strategy that augments the captured motions to a variety of novel objects. Leveraging this approach, CORE4D comprises 11K collaboration sequences in total, spanning 3K real and virtual object shapes. Benefiting from the extensive motion patterns in CORE4D, we benchmark two tasks aimed at generating human-object interactions: human-object motion forecasting and interaction synthesis. Extensive experiments demonstrate the effectiveness of our collaboration retargeting strategy and indicate that CORE4D poses new challenges to existing human-object interaction generation methodologies.
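To make the motion forecasting benchmark mentioned above concrete, the sketch below shows one common way such a task is set up: slicing each motion sequence into an observed history and the future frames a model must predict. The function name, window lengths, and the flat per-frame feature representation are illustrative assumptions, not the dataset's actual benchmark protocol.

```python
import numpy as np

def make_forecasting_windows(seq, past=15, future=15, stride=1):
    """Slice one motion sequence into (observed, target) pairs.

    seq: array of shape (T, D), one row of pose/object parameters per
         frame (D depends on the chosen representation; illustrative).
    Returns stacked observed histories and the future frames to forecast.
    """
    inputs, targets = [], []
    for t in range(0, seq.shape[0] - past - future + 1, stride):
        inputs.append(seq[t : t + past])                    # observed history
        targets.append(seq[t + past : t + past + future])   # frames to predict
    return np.stack(inputs), np.stack(targets)

# Example: a 120-frame sequence with 64-dim per-frame features
# yields 91 sliding windows of 15 observed + 15 future frames.
obs, fut = make_forecasting_windows(np.zeros((120, 64)))
print(obs.shape, fut.shape)  # (91, 15, 64) (91, 15, 64)
```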

Data Update Records

2024/8/17: Uploaded V2 of CORE4D-Real, including updated human motions in "CORE4D_Real_human_object_motions_v2"
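For readers who want to inspect the V2 release named above, here is a minimal browsing sketch. The directory name comes from the update record, but the per-sequence .npz layout is an assumption for illustration; consult the dataset documentation for the actual file format.

```python
from pathlib import Path
import numpy as np

# Directory named in the 2024/8/17 update record.
root = Path("CORE4D_Real_human_object_motions_v2")

# Hypothetical layout: one .npz archive per motion sequence.
# Printing the keys reveals whatever fields each file actually stores.
for seq_file in sorted(root.glob("*.npz")):
    data = np.load(seq_file, allow_pickle=True)
    print(seq_file.name, list(data.keys()))
```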

2024/5/31: Uploaded CORE4D-V1

Contact Us

If you have any questions or suggestions, please contact Chengwen Zhang (zcwoctopus@gmail.com), Yun Liu (yun-liu22@mails.tsinghua.edu.cn), or Li Yi (ericyi@mail.tsinghua.edu.cn).