High Quality Monocular Depth Estimation via Transfer Learning (arXiv 2018)
Ibraheem Alhashim and Peter Wonka
Official Keras (TensorFlow) implementation. If you have any questions or need more help with the code, feel free to contact the first author.
[Update] Experimental PyTorch code added.
Results
- KITTI
- NYU Depth V2
Requirements
- This code is tested with Keras 2.2.4, TensorFlow 1.13, CUDA 9.0, on a machine with an NVIDIA Titan V and 16GB+ RAM running Windows 10 or Ubuntu 16.
- Other packages needed: keras, pillow, matplotlib, scikit-learn, scikit-image, opencv-python, pydot, and GraphViz (see the install sketch after this list).
- Minimum hardware tested on for inference: NVIDIA GeForce 940MX (laptop) / NVIDIA GeForce GTX 950 (desktop).
- Training takes about 24 hours on a single NVIDIA TITAN RTX with batch size 8.
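For convenience, the Python packages above can usually be installed in one step (a sketch; pin versions, e.g. `keras==2.2.4`, to match the tested setup). GraphViz itself is a system package and is installed separately through your OS package manager.

```
pip install keras pillow matplotlib scikit-learn scikit-image opencv-python pydot
```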
Pre-trained Models
- NYU Depth V2 (165 MB)
- KITTI (165 MB)
Demo
- After downloading the pre-trained model (nyu.h5), run `python test.py`. You should see a montage of images with their estimated depth maps.
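For scripted use, the demo boils down to loading the Keras model with its custom layer and calling the repo's prediction helpers. The following is a minimal sketch; it assumes the `BilinearUpSampling2D` layer from the repo's `layers.py` and the `load_images`/`predict`/`display_images` helpers from its `utils.py`.

```python
import glob
import matplotlib.pyplot as plt
from keras.models import load_model
from layers import BilinearUpSampling2D            # custom decoder layer from this repo
from utils import predict, load_images, display_images

# The model was saved with a custom layer and a custom loss, so both names must
# be declared; compile=False skips the loss since we only run inference.
custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D,
                  'depth_loss_function': None}
model = load_model('nyu.h5', custom_objects=custom_objects, compile=False)

# Load RGB inputs, predict depth maps, and show them as a montage.
inputs = load_images(glob.glob('examples/*.png'))
outputs = predict(model, inputs)
plt.imshow(display_images(outputs.copy(), inputs.copy()))
plt.show()
```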
Data
- NYU Depth V2 (50K) (4.1 GB): You don't need to extract the dataset since the code loads the entire zip file into memory when training (see the sketch after this list).
- KITTI: copy the raw data to a folder with the path '../kitti'. Our method expects dense input depth maps; therefore, you need to run a depth inpainting method on the LiDAR data. For our experiments, we used our Python re-implementation of the MATLAB code provided with the NYU Depth V2 toolbox. Inpainting the entire 80K images took 2 hours on an 80-node cluster. For our training, we used the subset defined here.
- Unreal-1k: coming soon.
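As a rough sketch of the in-memory zip loading mentioned above (the real logic lives in the repo's data pipeline; names here are illustrative):

```python
from zipfile import ZipFile

def extract_zip(input_zip):
    """Read every file in the archive into a dict of name -> raw bytes, so
    training can decode samples on the fly without extracting to disk."""
    archive = ZipFile(input_zip)
    return {name: archive.read(name) for name in archive.namelist()}

# The ~4.1 GB archive is held in RAM, hence the 16GB+ RAM requirement above.
data = extract_zip('nyu_data.zip')
```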
Training
- Run `python train.py --data nyu --gpus 4 --bs 8`.
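Here `--data` selects the dataset, `--gpus` the number of GPUs, and `--bs` the batch size. In Keras 2.2.x, multi-GPU data parallelism is typically wired up with `multi_gpu_model`, roughly as below; the stand-in model and loss are placeholders, not the repo's actual network or loss.

```python
from keras.models import Sequential
from keras.layers import Conv2D
from keras.utils import multi_gpu_model

# Stand-in single-GPU model; in train.py this would be the DenseDepth network.
model = Sequential([Conv2D(1, 3, padding='same', input_shape=(480, 640, 3))])

# Replicate the model across 4 GPUs: each replica processes a slice of every
# batch and the gradients are merged on the host.
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(optimizer='adam', loss='mae')  # placeholder loss
# parallel_model.fit_generator(train_generator, epochs=20)  # train_generator: the repo's data loader
```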
Evaluation
- Download, but don't extract, the ground truth test data from here (1.4 GB). Then simply run `python evaluate.py`.
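The evaluation reports the standard monocular depth metrics (δ-threshold accuracies, absolute relative error, RMSE, log10 error). A minimal sketch of these metrics, assuming `gt` and `pred` are positive NumPy depth maps of the same shape (the repo's own evaluation script may differ in details):

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard monocular depth metrics over valid (positive) depth values."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()          # delta < 1.25
    d2 = (thresh < 1.25 ** 2).mean()     # delta < 1.25^2
    d3 = (thresh < 1.25 ** 3).mean()     # delta < 1.25^3
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    return d1, d2, d3, abs_rel, rmse, log10
```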
Reference
Corresponding paper to cite:
@article{Alhashim2018,
author = {Ibraheem Alhashim and Peter Wonka},
title = {High Quality Monocular Depth Estimation via Transfer Learning},
journal = {arXiv e-prints},
volume = {abs/1812.11941},
year = {2018},
url = {https://arxiv.org/abs/1812.11941},
eid = {arXiv:1812.11941},
eprint = {1812.11941}
}