
deep-painterly-harmonization

Code and data for paper "Deep Painterly Harmonization"

Disclaimer

This software is published for academic and non-commercial use only.

Setup

This code is based on Torch and has been tested on Ubuntu 16.04 LTS.

Dependencies:

CUDA backend:

Download VGG-19:

sh models/download_models.sh
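
The script places the VGG-19 network files under models/. As a rough sketch of what such a download step typically looks like (the file names and URLs below are assumptions based on the standard VGG-19 Caffe release, not necessarily what download_models.sh actually contains):

# Sketch only: fetch the standard VGG-19 Caffe release into models/
# (URLs are assumptions; the actual download_models.sh may differ)
cd models
wget -c https://gist.githubusercontent.com/ksimonyan/3785162f95cd2d5fee77/raw/VGG_ILSVRC_19_layers_deploy.prototxt
wget -c http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_19_layers.caffemodel
cd ..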

Compile cuda_utils.cu (adjust PREFIX and NVCC_PREFIX in the makefile for your machine):

make clean && make
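
For reference, the two variables typically point at your Torch install prefix and at the directory containing nvcc; the values below are placeholders for a common setup, not the actual defaults in the makefile:

# Placeholder paths; adjust to your machine
PREFIX=$(HOME)/torch/install      # Torch install prefix (contains include/ and lib/)
NVCC_PREFIX=/usr/local/cuda/bin   # directory containing the nvcc compiler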

Usage

To generate all results (in data/) using the provided scripts, simply run

python gen_all.py

in Python and then

run('filt_cnn_artifact.m')

in Matlab or Octave. The final output will be in results/.
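
Putting both steps together from a shell (the Octave command-line invocation below is one way to run the filtering script non-interactively; it is an assumption rather than part of the provided scripts):

# Step 1: generate intermediate results for all examples in data/
python gen_all.py
# Step 2: run the CNN-artifact filtering pass (Octave CLI shown; interactive Matlab works as well)
octave --no-gui --eval "run('filt_cnn_artifact.m')"
# Final outputs are written to results/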

Note that in the paper we trained a CNN on a dataset of 80,000 paintings collected from wikiart.org; it estimates the stylization level of a given painting and adjusts the weights accordingly. We will release the pre-trained model in the next update. For now, users running the code on their own paintings will need to set those weights manually.

Examples

Here are some results from our algorithm (from left to right: original painting, naive composite, and our output):

[Result gallery: for each example 0–34, the row shows the original painting (<n>_target.jpg), the naive composite (<n>_naive.jpg), and our result (<n>_final_res2.png; example 8 uses 8_final_res.png). Two additional composites are included: 8_naive_balloon.jpg with 8_result_balloon.jpg, and 17_naive_sherlock.jpg with 17_result_sherlock.jpg.]

Acknowledgement

  • Our Torch implementation is based on Justin Johnson's code.
  • The histogram loss is inspired by Risser et al.

Citation

If you find this work useful for your research, please cite:

@article{luan2018deep,
  title={Deep Painterly Harmonization},
  author={Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  journal={arXiv preprint arXiv:1804.03189},
  year={2018}
}

Contact

Feel free to contact me if you have any questions (Fujun Luan, [email protected]).

