CO3D: Common Objects In 3D

This repository contains a set of tools for working with the Common Objects in 3D (CO3D) dataset. The dataset has been introduced in our ICCV'21 paper: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction.

Download the dataset

The dataset can be downloaded from the following Facebook AI Research web page: download link

Automatic batch-download

We also provide a Python script that downloads all dataset files at once:

  1. Open CO3D downloads page in your browser.
  2. Download the file containing the CO3D file links at the bottom of the page.
  3. Execute the download script:
    python ./download_dataset.py --link_list_file LINK_LIST_FILE --download_folder DOWNLOAD_FOLDER
    

where LINK_LIST_FILE is the file downloaded in step 2 above, and DOWNLOAD_FOLDER is the local target folder for the downloaded dataset files.
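
For example, assuming the links file was saved as co3d_links.txt and ~/datasets/co3d is the chosen target folder (both paths are placeholders; substitute your own):

    python ./download_dataset.py --link_list_file co3d_links.txt --download_folder ~/datasets/co3d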

Installation

This is a Python 3 / PyTorch codebase.

  1. Install PyTorch.
  2. Install PyTorch3D.
  3. Install the remaining dependencies in requirements.txt:
pip install lpips visdom tqdm requests

Note that the core data model in dataset/types.py is independent of PyTorch and can be imported and used with other machine-learning frameworks.
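
As a minimal sketch of that framework-independent usage, the per-category frame annotations can be read without PyTorch. The file name frame_annotations.jgz and the FrameAnnotation / load_dataclass names below are assumptions based on the repository layout; check dataset/types.py for the exact API.

    import gzip
    from typing import List

    # Assumption: dataset/types.py exposes FrameAnnotation and load_dataclass,
    # and each category folder contains a gzipped-JSON frame_annotations.jgz file.
    from dataset.types import FrameAnnotation, load_dataclass

    with gzip.open("DATASET_ROOT_FOLDER/apple/frame_annotations.jgz", "rt", encoding="utf8") as f:
        frame_annotations = load_dataclass(f, List[FrameAnnotation])

    # The entries are plain dataclasses, so no deep-learning framework is needed here.
    print(len(frame_annotations), frame_annotations[0].image.path)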

Dependencies

The codebase depends on PyTorch and PyTorch3D, plus the Python packages lpips, visdom, tqdm, and requests listed in requirements.txt (see Installation above).

Getting started

  1. Install dependencies - See Installation above.
  2. Download the dataset (see Download the dataset above) to a given root folder DATASET_ROOT_FOLDER.
  3. In dataset/dataset_zoo.py, set the DATASET_ROOT variable to your DATASET_ROOT_FOLDER:
    dataset_zoo.py:25: DATASET_ROOT = DATASET_ROOT_FOLDER
    
  4. Run eval_demo.py:
    python eval_demo.py
    
    Note that eval_demo.py runs an evaluation of a simple depth-based image rendering (DBIR) model on the same data as in the paper. Hence, the results are directly comparable to the numbers reported in the paper.
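
Once DATASET_ROOT points to the downloaded data, the dataset can also be loaded directly in Python. The sketch below is illustrative only; the keyword names are assumptions, so consult the dataset_zoo docstring in dataset/dataset_zoo.py for the exact signature.

    # Illustrative sketch, not a verbatim repository example; argument names are assumptions.
    from dataset.dataset_zoo import dataset_zoo

    datasets = dataset_zoo(
        dataset_name="co3d_singlesequence",  # assumed name of the single-sequence task
        category="apple",                    # one of the CO3D categories
    )

    # Assumption: dataset_zoo returns a dict keyed by split name.
    train_set = datasets["train"]
    print(f"{len(train_set)} training frames in the 'apple' split")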

Running tests

Unit tests can be executed with:

python -m unittest
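
The standard unittest flags also apply, for example verbose mode or a single test module (the module path below is hypothetical):

    python -m unittest -v
    python -m unittest tests.test_types -v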

Reference

If you use our dataset, please use the following citation:

@inproceedings{reizenstein21co3d,
	Author = {Reizenstein, Jeremy and Shapovalov, Roman and Henzler, Philipp and Sbordone, Luca and Labatut, Patrick and Novotny, David},
	Booktitle = {International Conference on Computer Vision},
	Title = {Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction},
	Year = {2021},
}

License

The CO3D codebase is released under the BSD License.

Overview video

A presentation of the dataset was delivered at the Extreme Vision Workshop at CVPR 2021.

