
AdelaiDepth

AdelaiDepth is an open source toolbox for monocular depth prediction. Relevant work from our group is open-sourced here.

AdelaiDepth contains the following algorithms:

  • 3D Scene Shape ("Learning to Recover 3D Scene Shape from a Single Image", CVPR 2021)
  • DiverseDepth (affine-invariant depth prediction using diverse data; see the alignment sketch below this list)
  • Virtual Normal ("Enforcing Geometric Constraints of Virtual Normal for Depth Prediction", ICCV 2019 / TPAMI 2021)
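DiverseDepth produces affine-invariant depth, i.e. depth that is correct only up to an unknown per-image scale and shift. As a rough illustration of what that means, the following NumPy sketch recovers the scale and shift by least squares before comparing a prediction against ground truth; the function name and the toy data are assumptions for illustration, not code from this repository.

import numpy as np

def align_scale_shift(pred, gt):
    # Solve min_{s,t} || s * pred + t - gt ||^2 over valid (positive) pixels,
    # then map the affine-invariant prediction into the ground-truth range.
    valid = gt > 0
    A = np.stack([pred[valid], np.ones(valid.sum())], axis=1)  # (N, 2)
    s, t = np.linalg.lstsq(A, gt[valid], rcond=None)[0]
    return s * pred + t

# Toy example: the prediction differs from ground truth by scale 2 and shift 0.5.
gt = np.random.uniform(1.0, 5.0, size=(10, 10))
pred = (gt - 0.5) / 2.0
aligned = align_scale_shift(pred, gt)
print(np.abs(aligned - gt).max())  # ~0 after affine alignment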

News:

  • [Jun. 13, 2021] Our work "Learning to Recover 3D Scene Shape from a Single Image" was selected as a CVPR 2021 Best Paper Finalist.
  • [Jun. 6, 2021] We have made the training data of DiverseDepth available.

Results and Dataset Examples:

  1. 3D Scene Shape

You may want to check the short video in the original repository, which provides a very brief introduction to the work.

[Figure: example results showing an RGB input, the predicted depth, and the reconstructed point cloud. A minimal depth-to-point-cloud sketch follows after these examples.]
  2. DiverseDepth

[Figure: DiverseDepth result examples.]

[Figure: DiverseDepth dataset examples.]
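The point-cloud results above are obtained by back-projecting each predicted depth value through the camera intrinsics. The snippet below is a minimal NumPy sketch of that unprojection step, with made-up focal length and principal point values; it illustrates the general technique and is not code from this repository.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project an (H, W) depth map into an (N, 3) point cloud,
    # assuming a pinhole camera with intrinsics fx, fy, cx, cy in pixels.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Usage with a synthetic depth map and guessed intrinsics:
depth = np.full((480, 640), 2.0, dtype=np.float32)  # a flat surface 2 m away
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)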

BibTeX

@inproceedings{Yin2019enforcing,
  title={Enforcing geometric constraints of virtual normal for depth prediction},
  author={Yin, Wei and Liu, Yifan and Shen, Chunhua and Yan, Youliang},
  booktitle= {The IEEE International Conference on Computer Vision (ICCV)},
  year={2019}
}

@inproceedings{Wei2021CVPR,
  title={Learning to Recover 3D Scene Shape from a Single Image},
  author={Yin, Wei and Zhang, Jianming and Wang, Oliver and Niklaus, Simon and Mai, Long and Chen, Simon and Shen, Chunhua},
  booktitle={Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (CVPR)},
  year={2021}
}

@article{yin2021virtual,
  title={Virtual Normal: Enforcing Geometric Constraints for Accurate and Robust Depth Prediction},
  author={Yin, Wei and Liu, Yifan and Shen, Chunhua},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2021}
}

@article{yin2020diversedepth,
  title={DiverseDepth: Affine-invariant Depth Prediction Using Diverse Data},
  author={Yin, Wei and Wang, Xinlong and Shen, Chunhua and Liu, Yifan and Tian, Zhi and Xu, Songcen and Sun, Changming and Renyin, Dou},
  journal={arXiv preprint arXiv:2002.00569},
  year={2020}
}

Contact

Wei Yin: [email protected]

License

The 3D Scene Shape code is under a non-commercial license from Adobe Research. See the LICENSE file for details.

Other depth prediction projects are licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact Chunhua Shen.

