Source: https://github.com/DeNA/Chainer_Realtime_Multi-Person_Pose_Estimation

Chainer_Realtime_Multi-Person_Pose_Estimation

This is an implementation of Realtime Multi-Person Pose Estimation with Chainer. The original project is here.

README in Japanese (日本語版 README)

Results

(Figure: people.png and its estimation result, people_result.png)

This project is licensed under the terms of the license file included in the repository.

Requirements

  • Python 3.0+
  • Chainer 2.0+
  • NumPy
  • Matplotlib
  • OpenCV

Convert Caffe model to Chainer model

The authors of the original implementation provide trained Caffe models from which you can extract the model weights. Execute the following commands to download the trained models and convert them to npz files:

cd models
wget http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/coco/pose_iter_440000.caffemodel
wget http://posefs1.perception.cs.cmu.edu/OpenPose/models/face/pose_iter_116000.caffemodel
wget http://posefs1.perception.cs.cmu.edu/OpenPose/models/hand/pose_iter_102000.caffemodel
python convert_model.py posenet pose_iter_440000.caffemodel coco_posenet.npz
python convert_model.py facenet pose_iter_116000.caffemodel facenet.npz
python convert_model.py handnet pose_iter_102000.caffemodel handnet.npz
cd ..
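
The converted weights are plain NumPy `.npz` archives mapping parameter names to arrays, so they can be inspected without Chainer. A minimal sketch (the archive and the `conv1_1/W`-style parameter names below are stand-ins built on the fly for illustration; in practice you would pass `models/coco_posenet.npz`):

```python
import os
import tempfile

import numpy as np

# Build a tiny stand-in .npz so the example is self-contained;
# real archives come from convert_model.py.
tmp = os.path.join(tempfile.mkdtemp(), "coco_posenet.npz")
np.savez(tmp, **{"conv1_1/W": np.zeros((64, 3, 3, 3), dtype=np.float32),
                 "conv1_1/b": np.zeros(64, dtype=np.float32)})

def list_weights(npz_path):
    """Return {parameter name: shape} for every array in the archive."""
    with np.load(npz_path) as weights:
        return {name: weights[name].shape for name in weights.files}

shapes = list_weights(tmp)
print(shapes)  # {'conv1_1/W': (64, 3, 3, 3), 'conv1_1/b': (64,)}
```

Listing the shapes this way is a quick sanity check that the conversion produced the layers you expect.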

Test using the trained model

Execute the following command, passing the weight parameter file and an image file as arguments, to estimate poses. The resulting image will be saved as result.png.

python pose_detector.py posenet models/coco_posenet.npz --img data/person.png

If you have a GPU, use the --gpu option.

python pose_detector.py posenet models/coco_posenet.npz --img data/person.png --gpu 0
(Figure: person.png and its estimation result, person_result.png)

Similarly, execute the following command for face estimation. The resulting image will be saved as result.png.

python face_detector.py facenet models/facenet.npz --img data/face.png
(Figure: face.png and its estimation result, face_result.png)

Similarly, execute the following command for hand estimation. The resulting image will be saved as result.png.

python hand_detector.py handnet models/handnet.npz --img data/hand.png
(Figure: hand.png and its estimation result, hand_result.png)

Finally, you can detect poses, faces, and hands all at once by executing the following command. The resulting image will be saved as result.png.

python demo.py --img data/dinner.png
(Figure: dinner.png and its estimation result, dinner_result.png)

If you have a web camera, you can execute the following commands to run the real-time demonstration mode with the camera activated. Quit with the q key.

Real-time pose estimation:

python camera_pose_demo.py

Real-time face estimation:

python camera_face_demo.py

Train your model

This is the training procedure using the COCO 2017 dataset.

Download COCO 2017 dataset

bash getData.sh

If you have already downloaded the dataset yourself, skip this step and change coco_dir in entity.py to the path of the downloaded dataset.

Setup COCO API

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI/
make
python setup.py install
cd ../../

Download VGG-19 pretrained model

wget -P models http://www.robots.ox.ac.uk/%7Evgg/software/very_deep/caffe/VGG_ILSVRC_19_layers.caffemodel

Generate and save image masks

Mask images are created to filter out regions containing people who were not labeled with any keypoints. The vis option can be used to visualize the mask generated for each image.

python gen_ignore_mask.py
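
The idea behind these masks is to mark the regions of unlabeled people so the loss ignores them during training. A minimal NumPy sketch of the principle (the `segmentation`/`num_keypoints` field names below are illustrative, not the repo's exact annotation format):

```python
import numpy as np

def make_ignore_mask(img_shape, annotations):
    """Build a binary mask that is True where the loss should be ignored.

    annotations: list of dicts, each with a boolean 'segmentation' array
    (same HxW as the image) and a 'num_keypoints' count. People with zero
    labeled keypoints are masked out. (Field names are hypothetical.)
    """
    mask = np.zeros(img_shape, dtype=bool)
    for ann in annotations:
        if ann["num_keypoints"] == 0:    # person has no labeled keypoints
            mask |= ann["segmentation"]  # ignore this region in the loss
    return mask

h, w = 4, 4
seg = np.zeros((h, w), dtype=bool)
seg[1:3, 1:3] = True  # a 2x2 region occupied by an unlabeled person
anns = [{"segmentation": seg, "num_keypoints": 0},
        {"segmentation": ~seg, "num_keypoints": 5}]
mask = make_ignore_mask((h, w), anns)
print(mask.sum())  # 4: only the unlabeled person's region is masked
```

Without such a mask, the network would be penalized for correctly detecting people who simply lack ground-truth keypoints.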

Check data generator

Execute the following command to inspect training samples randomly produced by the data generator. Confirm that the PAFs, heatmaps, and masks are correctly overlaid on the cropped images.

python coco_data_loader.py
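
For reference, the heatmaps the generator produces are confidence maps with a Gaussian peak at each keypoint (per the original paper). A minimal sketch of one such map (the function name and `sigma` default are illustrative, not the repo's code):

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=1.0):
    """Confidence map of shape (h, w) peaking at keypoint (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

hm = gaussian_heatmap(5, 5, cx=2, cy=2)
print(hm[2, 2])     # 1.0 exactly at the keypoint
print(hm.argmax())  # 12 == 2*5 + 2, the flattened peak index
```

Values decay smoothly with distance from the keypoint, which gives the regression target a tolerant neighborhood instead of a single hot pixel.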

Train with COCO dataset

Every 1000 iterations, the current weight parameters are saved to a weight file such as model_iter_1000.

python train_coco_pose_estimation.py --gpu 0
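
The periodic-snapshot behavior can be sketched as follows; this is a minimal NumPy stand-in to illustrate the naming scheme, not the repo's actual saving code (which presumably uses Chainer's serializers):

```python
import os
import tempfile

import numpy as np

SNAPSHOT_INTERVAL = 1000  # matches the README's save cadence

def maybe_snapshot(params, iteration, out_dir):
    """Save weights as model_iter_<N> every SNAPSHOT_INTERVAL iterations."""
    if iteration % SNAPSHOT_INTERVAL == 0:
        path = os.path.join(out_dir, f"model_iter_{iteration}")
        np.savez(path, **params)  # np.savez appends the .npz extension
        return path + ".npz"
    return None

out = tempfile.mkdtemp()
params = {"W": np.ones((2, 2))}
saved = [maybe_snapshot(params, i, out) for i in range(1, 2001)]
files = [p for p in saved if p]
print([os.path.basename(f) for f in files])
# ['model_iter_1000.npz', 'model_iter_2000.npz']
```

Keeping intermediate snapshots lets you resume an interrupted run or compare checkpoints on a validation set.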

Test using your own trained model

Execute the following command, passing your own trained weight parameter file and an image file as arguments, to run inference. The resulting image will be saved as result.png.

python pose_detector.py posenet model_iter_1000 --img data/person.png

Citation

Please cite the original paper in your publications if it helps your research:

@InProceedings{cao2017realtime,
  title     = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  author    = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2017}
}
