

Screenshot-to-code-in-Keras
source link: https://github.com/emilwallner/Screenshot-to-code-in-Keras

A detailed tutorial covering the code in this repository: Turning design mockups into code with deep learning.
Plug: Check out my 60-page guide, No ML Degree, on how to land a machine learning job without a degree.
The neural network is built in three iterations: it starts with a Hello World version, continues with the main neural network layers, and ends with training the model to generalize.
The models are based on Tony Beltramelli's pix2code, and inspired by Airbnb's sketching interfaces, and Harvard's im2markup.
Note: only the Bootstrap version can generalize to new design mock-ups. It uses 16 domain-specific tokens that are translated into HTML/CSS and reaches 97% accuracy. The best model uses a GRU instead of an LSTM, and this version can be trained on a few GPUs. The raw HTML version has the potential to generalize, but it is still unproven and requires a significant number of GPUs to train. The current model is also trained on a small, homogeneous dataset, so it's hard to tell how well it behaves on more complex layouts.
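To make the architecture concrete, below is a minimal Keras sketch of the kind of encoder-decoder the Bootstrap version is built around: a small CNN encodes the screenshot, an embedding plus GRU encodes the tokens generated so far, and a second GRU predicts the next domain-specific token. The layer sizes, image resolution, and vocabulary size are illustrative assumptions, not the exact values used in the notebooks.

```python
# Minimal sketch (not the exact repo model): CNN image encoder + GRU token decoder,
# in the spirit of the Bootstrap version. All sizes below are illustrative assumptions.
from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                          Embedding, GRU, RepeatVector, concatenate)
from keras.models import Model

vocab_size = 20      # ~16 domain-specific tokens plus start/end/padding (assumed)
max_length = 48      # length of the token context window (assumed)

# Image encoder: a small CNN squeezes the screenshot into a feature vector
image_input = Input(shape=(256, 256, 3))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(image_input)
x = MaxPooling2D()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
image_features = RepeatVector(max_length)(x)   # repeat so it pairs with each token step

# Language model: embed the previously generated tokens and run them through a GRU
token_input = Input(shape=(max_length,))
y = Embedding(vocab_size, 50)(token_input)
y = GRU(128, return_sequences=True)(y)

# Decoder: combine image and token features and predict the next token
decoder = concatenate([image_features, y])
decoder = GRU(256, return_sequences=False)(decoder)
output = Dense(vocab_size, activation='softmax')(decoder)

model = Model(inputs=[image_input, token_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```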
A quick overview of the process:
1) Give a design image to the trained neural network
2) The neural network converts the image into HTML markup
3) Rendered output
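Step 2 is conceptually a greedy decoding loop: the network is fed the screenshot plus the tokens generated so far and keeps emitting the most likely next token until it predicts an end marker, after which the compiler renders the tokens as HTML. The sketch below assumes a trained `model` like the one above, a Keras `Tokenizer` that preserves the `<START>`/`<END>` markers, and a preprocessed image array; these helpers are assumptions, not the repo's exact API.

```python
# Illustrative greedy decoding loop. `model`, `tokenizer`, and `image` (a
# (1, 256, 256, 3) array) are assumed to exist; names are not the repo's exact API.
import numpy as np
from keras.preprocessing.sequence import pad_sequences

def generate_markup(model, tokenizer, image, max_length=48):
    id_to_word = {i: w for w, i in tokenizer.word_index.items()}
    tokens = ['<START>']                       # assumed start-of-sequence marker
    for _ in range(max_length):
        seq = tokenizer.texts_to_sequences([' '.join(tokens)])[0]
        seq = pad_sequences([seq], maxlen=max_length)
        probs = model.predict([image, seq], verbose=0)[0]
        next_id = int(np.argmax(probs))        # greedy: pick the most likely token
        next_token = id_to_word.get(next_id, '<END>')
        if next_token == '<END>':              # assumed end-of-sequence marker
            break
        tokens.append(next_token)
    return ' '.join(tokens[1:])                # DSL tokens, ready for the compiler
```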
Installation
FloydHub
Click this button to open a Workspace on FloydHub where you will find the same environment and dataset used for the Bootstrap version. You can also find the trained models for testing.
Local
```bash
pip install keras tensorflow pillow h5py jupyter
git clone https://github.com/emilwallner/Screenshot-to-code.git
cd Screenshot-to-code/
jupyter notebook
```
Go to the desired notebook; the files end with '.ipynb'. To run the model, open the menu and click Cell > Run all.
The final version, the Bootstrap version, ships with a small dataset so you can test run the model. If you want to train it on all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct dir_name.
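For example, after unpacking the dataset you would typically point the notebook at it near the top, something like the snippet below. The exact variable layout and the assumption that screenshots (.png) are paired with token files (.gui, as in pix2code) are illustrative; adjust to the notebook you are running.

```python
# Assumed example of pointing the Bootstrap notebook at the full dataset.
import os

dir_name = '/path/to/imagetocode/'   # adjust to wherever you unpacked the dataset
screenshots = [f for f in os.listdir(dir_name) if f.endswith('.png')]
print('Found', len(screenshots), 'screenshots')
```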
Folder structure
| |-Bootstrap #The Bootstrap version
| | |-compiler #A compiler to turn the tokens into HTML/CSS (by pix2code; see the toy example below the tree)
| | |-resources
| | | |-eval_light #10 test images and markup
| |-Hello_world #The Hello World version
| |-HTML #The HTML version
| | |-Resources_for_index_file #CSS, images and scripts to test the index.html file
| | |-html #HTML files to train it on
| | |-images #Screenshots for training
|-readme_images #Images for the readme page
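To give a flavour of what the compiler folder does, here is a toy mapping from a few DSL tokens to Bootstrap HTML. It is not the pix2code compiler: the token names and HTML snippets are simplified assumptions, and the real compiler also handles the DSL's nested { } blocks and fills in text placeholders.

```python
# Toy illustration of token-to-HTML compilation, NOT the pix2code compiler.
# Token names and HTML snippets are simplified assumptions.
TOKEN_TO_HTML = {
    'header':      '<div class="header"></div>',
    'row':         '<div class="row"></div>',
    'single':      '<div class="col-lg-12"></div>',
    'small-title': '<h4>Lorem ipsum</h4>',
    'text':        '<p>Lorem ipsum</p>',
    'btn-green':   '<a class="btn btn-success">Lorem ipsum</a>',
}

def compile_tokens(tokens):
    # Naive: concatenate one snippet per token. The real compiler nests
    # container tokens around their children instead of flattening them.
    return ''.join(TOKEN_TO_HTML.get(token, '') for token in tokens)

print(compile_tokens(['header', 'btn-green', 'row', 'single', 'small-title', 'text']))
```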
Hello World
Bootstrap
Model weights
Acknowledgments
- Thanks to IBM for donating computing power through their PowerAI platform
- The code is largely influenced by Tony Beltramelli's pix2code paper
- The structure and some of the functions are from Jason Brownlee's excellent tutorial