HumanML3D Dataset

Description

HumanML3D is a 3D human motion-language dataset built from a combination of the HumanAct12 and AMASS datasets. It covers a broad range of human actions such as daily activities (e.g., 'walking', 'jumping'), sports (e.g., 'swimming', 'playing golf'), acrobatics (e.g., 'cartwheel') and artistry (e.g., 'dancing'). Overall, the HumanML3D dataset consists of 14,616 motions and 44,970 descriptions composed of 5,371 distinct words. The total length of the motions amounts to 28.59 hours. The average motion length is 7.1 seconds, while the average description length is 12 words.
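The statistics above can be reproduced from the released files. The sketch below is a minimal example, assuming the directory layout of the official HumanML3D release (new_joint_vecs/ with per-frame motion features at 20 fps, texts/ with one '#'-separated caption line per description); adjust the root path and frame rate if your copy differs.

```python
from pathlib import Path
import numpy as np

# Assumed layout of the official HumanML3D release (adjust if yours differs):
#   HumanML3D/new_joint_vecs/<id>.npy  -- motion features, one row per frame
#   HumanML3D/texts/<id>.txt           -- one '#'-separated caption line per description
ROOT = Path("HumanML3D")
FPS = 20.0  # assumed frame rate of the released motion features

motion_seconds, caption_lengths = [], []
for vec_path in (ROOT / "new_joint_vecs").glob("*.npy"):
    motion_seconds.append(np.load(vec_path).shape[0] / FPS)
    txt_path = ROOT / "texts" / (vec_path.stem + ".txt")
    for line in txt_path.read_text().splitlines():
        caption = line.split("#")[0]  # plain caption precedes the tagged copy
        caption_lengths.append(len(caption.split()))

print(f"motions: {len(motion_seconds)}, descriptions: {len(caption_lengths)}")
print(f"total motion length: {sum(motion_seconds) / 3600:.2f} h")
print(f"avg motion length:   {np.mean(motion_seconds):.1f} s")
print(f"avg description:     {np.mean(caption_lengths):.1f} words")
```

Run over the full release, the printed totals should come out close to the figures quoted in the description (14,616 motions, 44,970 descriptions, 28.59 hours, ~7.1 s per motion, ~12 words per description).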

