
How I Fine-Tuned GPT-2 to Generate Creative Domain Names

source link: https://towardsdatascience.com/how-i-fine-tuned-gpt-2-to-generate-creative-domain-names-a56d59d55aca?gi=ce70b0deb1ab


Photo by Science in HD on Unsplash

I had a goal in mind: to create an AI service that is helpful to people and super simple at the same time. After fiddling around with GPT-2, I realized it has immense creative potential that could prove useful in creative text generation.

Therefore I created NameKrea, an AI that generates domain names. The domain name generator business has been online for a long time, but it hasn't seen this amount of good-quality content. (If you want to learn more about the project's ideation phase and productivity tips, here is the first part of the article.)

Let me walk you through how I built an AI service that generates domain names and business ideas!

Introduction

After scraping around 100,000 websites from the Majestic Million top 1 million domain list, I fine-tuned the 355M-parameter GPT-2 model. The results are weirdly accurate and creative at the same time. Have a look:


Namekrea AI Generated Domain Names and Meta Text

GPT-2 is able to understand context if there is enough training data, and to train it we need lots of data. This can easily be gathered by scraping the meta descriptions of websites. Luckily there is no shortage of websites on the internet :)

Fine-tuning GPT-2 works by reading the training data line by line from a CSV file. Before we start scraping, we need to define what kind of data structure the algorithm can understand. For that I take a rather simplistic approach: feeding GPT-2 one line of text per domain, consisting of its meta description followed by the domain name. A single entry in our training data will look like the following:

Create an account or log into Facebook. Connect with friends, family and other people you know. Share photos and videos, send messages and get updates. = @ = facebook.com

As you can see, we first feed in the meta description of the given domain and then use a delimiter that doesn't exist in normal text. You can choose anything that does not normally occur in natural text. I have chosen this delimiter: = @ =
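To make the format concrete, here is a tiny sketch of how such a training line can be assembled; the function is illustrative, not code from the project:

DELIMITER = ' = @ = '

def make_training_line(meta_description, domain):
    """Join a meta description and its domain name with the delimiter."""
    return meta_description.strip() + DELIMITER + domain.strip()

# Produces a line in the same format as the Facebook example above
print(make_training_line('Create an account or log into Facebook. ...', 'facebook.com'))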

Step 1: Scraping

As you might assume, it would take a significant amount of time to manually copy and paste the meta descriptions of the domains. We need to come up with a scraping algorithm that can generate clean training data for us.

Cleanliness of the data is important, since machine learning models rely on its quality: your model can only be as good as your training data. Therefore:

When training a machine learning model, always remember: Trash in, trash out!


So what do I mean by clean data? First of all, GPT-2 is trained mostly on English data scraped from all over the internet, so we need to make sure that we collect meta description data in English. Secondly, many websites have meta descriptions that use emojis and other special characters, and we don't want any of those in our final collected data.

If we design a scraping algorithm, it should be able to filter and extract data with the following logic (a sketch implementing these rules follows the list):

  1. English only
  2. No emojis, smileys, or the like; just bare English text.
  3. Only collect data from a range of TLDs (like .com, .net, .org...)
  4. Be fast! We need multiprocessing to fetch data from multiple domains at the same time; otherwise scraping will take ages.
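Here is a minimal sketch of a filter implementing rules 1-3; the langdetect dependency and the exact TLD whitelist are my own assumptions, not necessarily what the actual scraper does. Rule 4 is a matter of running a function like this across a multiprocessing pool:

from langdetect import detect, LangDetectException  # pip install langdetect

ALLOWED_TLDS = ('.com', '.net', '.org')  # assumed whitelist

def is_clean(meta_description, domain):
    """Apply filtering rules 1-3 to a single scraped entry."""
    if not domain.endswith(ALLOWED_TLDS):   # rule 3: TLD whitelist
        return False
    if not meta_description.isascii():      # rule 2: no emojis or special characters
        return False
    try:
        return detect(meta_description) == 'en'  # rule 1: English only
    except LangDetectException:             # empty or undetectable text
        return False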

Now that we have decided on our main requirements, let's move on to building our scraper!

Python has a lot of great packages for scraping, such as BeautifulSoup. It has many features that make it possible to start scraping websites in an instant. We will use this library to fetch the meta descriptions of domains and then write them into a CSV file.
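As a rough illustration (not the actual scraper.py, which is linked below), fetching a single meta description with requests and BeautifulSoup could look like this:

import requests
from bs4 import BeautifulSoup

def fetch_meta_description(domain, timeout=5):
    """Download a domain's front page and return its meta description, or None."""
    try:
        response = requests.get('http://' + domain, timeout=timeout)
        response.raise_for_status()
    except requests.RequestException:
        return None
    soup = BeautifulSoup(response.text, 'html.parser')
    tag = soup.find('meta', attrs={'name': 'description'})
    if tag and tag.get('content'):
        return tag['content'].strip()
    return None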

For some reason GitHub Gist embeds are not working properly here, so have a look at scraper.py in the source code at the GitHub repo of NameKrea for the complete scraper.

First of all, scraper.py reads the domain names from Majestic's top 1 million domain list and then starts the scraping process.

Note: After running scraper.py you will end up with 5 different files from 5 different threads. You need to combine those files into one and turn the result into a CSV file, otherwise fine-tuning is not going to be possible (a sketch of this combination step follows the sample output below).

The .txt output from scraper.py will look like this:

Create an account or log into Facebook. Connect with friends, family and other people you know. Share photos and videos, send messages and get updates. = @ = facebook.com

Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for. = @ = google.com

Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. = @ = youtube.com
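As mentioned in the note above, the per-thread .txt files need to be merged into one CSV before training. Here is a minimal sketch of that step; the file naming pattern and the single-column layout are assumptions on my part:

import csv
import glob

with open('training_data.csv', 'w', newline='', encoding='utf-8') as out:
    writer = csv.writer(out)
    for path in sorted(glob.glob('scraped_*.txt')):  # assumed output file pattern
        with open(path, encoding='utf-8') as f:
            for line in f:
                line = line.strip()
                if line:  # one training example per CSV row
                    writer.writerow([line])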

Once you are done scraping the data, we can continue to the next step.

Step 2: Fine Tune it!

GPT-2 is huge! The medium-scale pre-trained model has 355 million parameters! Fine-tuning this kind of architecture is definitely not possible using your ordinary laptop CPU. On my setup I used 2x 1070 Ti GPUs, and it took around 2 hours to reach a good level of output quality.

Let’s have a look at the general architecture of the project to understand how to train this model:


Basic Architecture of the Workflow for fine tuning GPT-2 for generating domain names

So first of all, we have scraped the data and combined the text files into a CSV to make them usable by the model_trainer.py script.
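Since the generation step in the next section uses the gpt-2-simple library, model_trainer.py presumably boils down to a finetune call along these lines; the step count and the dataset file name are my assumptions, not taken from the repo:

import gpt_2_simple as gpt2

model_name = '355M'  # the medium pre-trained model used in this project
gpt2.download_gpt2(model_name=model_name)  # downloads the weights once, into ./models

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset='training_data.csv',  # assumed name for the combined CSV from Step 1
              model_name=model_name,
              steps=1000)  # assumed; train until the samples look good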

When training is complete, we will load the checkpoint into text_generator.py to generate domain names randomly.

Step 3: AI Generated Domain Names

Randomly generated domain names are cool, but they are not very useful if we cannot give prompts to GPT-2. Luckily this is possible with the prefix parameter:

import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess)  # load the fine-tuned checkpoint (checkpoint/run1 by default)

gpt2.generate(sess, model_name=model_name,
              # run_name=run_name, checkpoint_dir=checkpoint_dir,
              temperature=0.8, include_prefix=True, prefix='The best e-cigarette',
              truncate='<|endoftext|>', nsamples=10, batch_size=2, length=128)

I can’t stop laughing when I see the result:

The best e-cigarette with a nicotine level of over 99%. E-cigarette is more than just a vape. It is a COMMUNICATION APP. Use it as a portable vaporizer. Or put it in your pocket and control your vape with a smartphone. Vaporsca is the best e-cigarette comparison site.

A vape and also a COMMUNICATION APP?? I can certainly say that this thing is very creative :D
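Since every sample ends with the delimiter followed by a domain, you can also pull out just the domain names programmatically. Here is a sketch using gpt-2-simple's return_as_list option; the parsing is my own, not taken from text_generator.py:

samples = gpt2.generate(sess, model_name=model_name,
                        temperature=0.8, include_prefix=True, prefix='The best e-cigarette',
                        truncate='<|endoftext|>', nsamples=10, batch_size=2, length=128,
                        return_as_list=True)  # return samples instead of printing them

# Keep only the text after the delimiter, i.e. the generated domain name
domains = [s.split(' = @ = ')[-1].strip() for s in samples if ' = @ = ' in s]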

