This repo is the official implementation of the ICCV 2023 paper "COOP: Decoupling and Coupling of Whole-Body Grasping Pose Generation".
COOP is a novel framework for synthesizing life-like whole-body poses that cover the widest range of human grasping capabilities.
For more details, please refer to the paper COOP: Decoupling and Coupling of Whole-Body Grasping Pose Generation.
To install the dependencies, please follow the steps below:

- Clone this repository:

```bash
git clone https://github.com/zhengyanzhao1997/COOP
cd COOP
```

- Install the dependencies with the following command:

```bash
pip install -r requirements.txt
```
- We have released the checkpoints of BNet and HNet in the folders shown below:

```
COOP
├── src
│   ├── models
│   ├── Bnet
│   └── Hnet
```
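As a quick sanity check that the released checkpoints are readable, you can list and load them with PyTorch. This is only a sketch; the glob pattern and the assumption that the files are standard `torch` checkpoints are assumptions, not something the repo documents.

```python
import glob
import torch

# Look for checkpoint files anywhere under src/ (the exact layout may differ).
ckpt_paths = glob.glob("src/**/*.pt", recursive=True) + glob.glob("src/**/*.pth", recursive=True)

for ckpt_path in ckpt_paths:
    # Assumes standard torch checkpoints; map to CPU so no GPU is needed.
    state = torch.load(ckpt_path, map_location="cpu")
    preview = list(state.keys())[:5] if isinstance(state, dict) else type(state)
    print(ckpt_path, "->", preview)
```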
- Download the body model following the steps in the SMPLX repo.
- Download the hand model following the steps in the MANO repo.
- Put the SMPLX and MANO models in the folders shown below:
```
COOP
├── model
│   ├── smplx
│   └── mano
```
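To verify that the body and hand models are in place, the snippet below loads them with the `smplx` PyPI package. This is a minimal sketch under the assumption of the default model file names (e.g. `SMPLX_NEUTRAL.npz`, `MANO_RIGHT.pkl`); COOP's own code may construct the models differently.

```python
import smplx

# The "model" folder from the layout above, seen from the repo root.
MODEL_DIR = "model"

# Whole-body SMPL-X model; smplx.create() appends the "smplx" subfolder itself.
body_model = smplx.create(MODEL_DIR, model_type="smplx", gender="neutral", use_pca=False)

# Right-hand MANO model; smplx.create() appends the "mano" subfolder itself.
hand_model = smplx.create(MODEL_DIR, model_type="mano", is_rhand=True, use_pca=False)

print(body_model)
print(hand_model)
```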
To run inference:

```bash
cd src
python infer/infer.py --rh-work-dir ../model/Hnet/ --body-work-dir ../model/Bnet/ --data-file ../data --save-file [your save path]
```
You can also specify the test object name, the subject shape, and the position of the test object:

```bash
python ... --object apple --subject s1 --x 0.4 --z 0.4 --y 0.4
```
The visualization result (.html) and the generated mesh file will be saved to your save path.
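The .html visualization can be opened directly in a browser. For the meshes, a rough way to inspect them is with a generic library such as `trimesh` (the `.obj`/`.ply` extensions below are an assumption about the output format):

```python
import glob
import trimesh

SAVE_DIR = "your/save/path"  # the value passed to --save-file above

# Load any mesh files the inference script produced (formats assumed).
mesh_paths = glob.glob(f"{SAVE_DIR}/**/*.obj", recursive=True) + glob.glob(f"{SAVE_DIR}/**/*.ply", recursive=True)

for mesh_path in mesh_paths:
    mesh = trimesh.load(mesh_path, force="mesh")
    print(mesh_path, "-> vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
    # mesh.show()  # uncomment for an interactive viewer (requires pyglet)
```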
coming soon
```bibtex
@InProceedings{Zheng_2023_ICCV,
    author    = {Zheng, Yanzhao and Shi, Yunzhou and Cui, Yuhao and Zhao, Zhongzhou and Luo, Zhiling and Zhou, Wei},
    title     = {COOP: Decoupling and Coupling of Whole-Body Grasping Pose Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {2163-2173}
}
```