
Learning Category-Specific Mesh Reconstruction from Image Collections

Angjoo Kanazawa*, Shubham Tulsiani*, Alexei A. Efros, Jitendra Malik

University of California, Berkeley

Project Page · Teaser image

Requirements

  • Python 2.7
  • PyTorch (tested on version 0.3.0.post4; a quick way to verify both requirements is sketched below)
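Before creating the virtualenv, it can help to confirm that a Python 2.7 interpreter and virtualenv are available. This is a minimal sketch, not part of the original instructions; the interpreter name `python2.7` is an assumption and may differ on your system:

```bash
# Check the tools the setup below relies on (names may vary per system)
python2.7 --version     # expect Python 2.7.x
pip --version           # pip associated with Python 2.7
virtualenv --version    # required for the setup steps below
```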

Installation

Setup virtualenv

virtualenv venv_cmr
source venv_cmr/bin/activate
pip install -U pip
deactivate
source venv_cmr/bin/activate
pip install -r requirements.txt
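Once the requirements are installed, a quick sanity check (not part of the original instructions) can confirm that the expected PyTorch build landed inside the virtualenv:

```bash
# Run with venv_cmr activated
which python                                          # should resolve inside venv_cmr/
python -c "import torch; print(torch.__version__)"    # expect 0.3.0.post4
```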

Install Neural Mesh Renderer and Perceptual loss

cd external;
bash install_external.sh

Demo

  1. From the cmr directory, download the trained model:
wget https://people.eecs.berkeley.edu/~kanazawa/cachedir/cmr/model.tar.gz && tar -vzxf model.tar.gz

You should see `cmr/cachedir/snapshots/bird_net/`

  2. Run the demo:
python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/img1.jpg
python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/birdie.jpg
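The same command works for any of the images shipped under cmr/demo_data. As an optional convenience (a sketch, not from the README; it assumes the demo images are .jpg files), you can loop over the folder:

```bash
# Run the demo on every .jpg in cmr/demo_data, reusing the flags documented above
for img in cmr/demo_data/*.jpg; do
    python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path "$img"
done
```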

Training

Please see doc/train.md

Citation

If you use this code for your research, please consider citing:

@article{cmrKanazawa18,
  title   = {Learning Category-Specific Mesh Reconstruction from Image Collections},
  author  = {Angjoo Kanazawa and Shubham Tulsiani and Alexei A. Efros and Jitendra Malik},
  journal = {arXiv preprint arXiv:1803.07549},
  year    = {2018}
}

