GitHub - bmild/nerf: Code release for NeRF (Neural Radiance Fields)
source link: https://github.com/bmild/nerf
NeRF: Neural Radiance Fields
Project | Video | Paper
Tensorflow implementation of optimizing a neural representation for a single scene and rendering new views.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall*1,
Pratul P. Srinivasan*1,
Matthew Tancik*1,
Jonathan T. Barron2,
Ravi Ramamoorthi3,
Ren Ng1
1UC Berkeley, 2Google Research, 3UC San Diego
*denotes equal contribution
Setup
Python 3 dependencies:
- Tensorflow 1.15
- matplotlib
- numpy
- imageio
- configargparse
The LLFF data loader requires ImageMagick.
You will also need the LLFF code (and COLMAP) set up to compute poses if you want to run on your own real data.
What is a NeRF?
A neural radiance field is a simple fully connected network (weights are ~5MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume" so we can use volume rendering to differentiably render new views.
Optimizing a NeRF takes between a few hours and a day or two (depending on resolution) and only requires a single GPU. Rendering an image from an optimized NeRF takes somewhere between less than a second and ~30 seconds, again depending on resolution.
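The volume-rendering step described above can be sketched numerically: given color and density samples along a camera ray, alpha compositing produces the final pixel color. This is a simplified NumPy illustration of the standard quadrature, not the repository's TensorFlow implementation; the function and variable names are made up for this example.

```python
import numpy as np

def composite_ray(rgb, sigma, deltas):
    """Composite per-sample colors (N, 3), densities (N,), and segment
    lengths (N,) along one ray into a single pixel color.

    This follows the usual volume-rendering quadrature: each sample's
    opacity comes from its density, and its weight is that opacity times
    the transmittance (how much light survives the samples in front of it).
    """
    # Per-sample opacity from density and segment length.
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: product of (1 - alpha) over all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    # Each sample's contribution to the pixel.
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)
```

For example, if the first sample along a ray is fully opaque (very large density), the pixel takes that sample's color and all samples behind it are occluded; if every density is zero, the ray returns black. Because every operation here is differentiable, the same construction lets gradients from a rendering loss flow back into the network that predicts `rgb` and `sigma`.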
Running code
Optimizing a NeRF
Run
bash download_example_data.sh
to get our synthetic Lego dataset and the LLFF Fern dataset. To optimize a low-res Fern NeRF:
python run_nerf.py --config config_fern.txt
To optimize a low-res Lego NeRF:
python run_nerf.py --config config_lego.txt
Rendering a NeRF
Run
bash download_example_weights.sh
to get a pretrained high-res NeRF for the Fern dataset. You can then use the render_demo.ipynb notebook to render new views.
Generating poses for your own scenes
We recommend using the imgs2poses.py script from the LLFF code. Then you can pass the base scene directory into our code using --datadir <myscene> along with --dataset_type llff. You can take a look at the config_fern.txt config file for example settings to use for a forward-facing scene.