GitHub - leod/hncynic: Generate Hacker News Comments from Titles
source link: https://github.com/leod/hncynic
# hncynic
The best Hacker News comments are written with a complete disregard for the linked article. `hncynic` is an attempt at capturing this phenomenon by training a model to predict Hacker News comments just from the submission title. More specifically, I trained a Transformer encoder-decoder model on Hacker News data. In my second attempt, I also included data from Wikipedia.

The generated comments are fun to read, but they often turn out meaningless or contradictory -- see here for some examples generated from recent HN titles.

There is a live demo at https://hncynic.leod.org/.
## Steps

### Hacker News
Train a model on Hacker News data only:
- `data`: Prepare the data and extract title-comment pairs from the HN data dump.
- `train`: Train a Transformer translation model on the title-comment pairs using TensorFlow and OpenNMT-tf.
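The data step boils down to pairing each top-level comment with the title of the story it was posted on. A minimal sketch of that pairing, assuming a newline-delimited JSON dump with `id`, `type`, `title`, `parent`, and `text` fields (these field names are an assumption; the actual dump schema and the scripts in `data` may differ):

```python
import json

def extract_pairs(lines):
    """Pair each top-level comment with its story's title.

    Assumes newline-delimited JSON records with `id`, `type`,
    `title`, `parent`, and `text` fields (an assumption about
    the dump format, not the repo's actual schema).
    """
    records = [json.loads(line) for line in lines]
    # First pass: collect story titles by id.
    titles = {r["id"]: r["title"]
              for r in records
              if r.get("type") == "story" and r.get("title")}
    # Second pass: keep only comments whose parent is a story,
    # i.e. top-level comments (replies are excluded).
    return [(titles[r["parent"]], r["text"])
            for r in records
            if r.get("type") == "comment"
            and r.get("parent") in titles
            and r.get("text")]

lines = [
    '{"id": 1, "type": "story", "title": "Show HN: hncynic"}',
    '{"id": 2, "type": "comment", "parent": 1, "text": "Did not read the article."}',
    '{"id": 3, "type": "comment", "parent": 2, "text": "A reply, excluded."}',
]
pairs = extract_pairs(lines)
```

Note that replies (comments whose parent is another comment) fall out naturally here, matching the exclusion mentioned under Future Work below.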
### Transfer Learning
Train a model on Wikipedia data, then switch to Hacker News data:
- `data-wiki`: Prepare data from Wikipedia articles.
- `train-wiki`: Train a model to predict Wikipedia section texts from titles.
- `train-wiki-hn`: Continue training on HN data.
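Predicting Wikipedia section texts from titles amounts to splitting each article into (title, section text) pairs. A rough sketch, assuming plain text with `== Heading ==` section markers; the actual `data-wiki` pipeline surely does more cleaning and tokenization than this:

```python
import re

def wiki_section_pairs(title, text):
    """Split an article into (combined title, section text) pairs,
    one per `== Heading ==` section. The `"title : heading"` source
    format is an illustrative assumption, not the repo's format.
    Text before the first heading (the lead section) is skipped."""
    pairs = []
    heading, body = None, []

    def flush():
        if heading is not None and body:
            pairs.append((f"{title} : {heading}", " ".join(body)))

    for line in text.splitlines():
        m = re.match(r"^==+\s*(.*?)\s*==+$", line)
        if m:
            flush()
            heading, body = m.group(1), []
        elif line.strip():
            body.append(line.strip())
    flush()
    return pairs

pairs = wiki_section_pairs(
    "Transformer",
    "== History ==\nIntroduced in 2017.\n== Uses ==\nTranslation.",
)
```

After pretraining on these pairs, training simply continues from the Wikipedia checkpoint on the HN title-comment pairs.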
### Hosting
## Future Work
- Acquire GCP credits, train for more steps.
- It's probably non-ideal to use encoder-decoder models. In retrospect, I should have trained a language model instead, on data like `title <SEP> comment`.
- I've completely excluded HN comments that are replies from the training data. It might be interesting to train on these as well.
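For the language-model idea above, each training example would concatenate title and comment into a single sequence separated by a special token. A trivial sketch of that hypothetical format (the exact separator handling would depend on the tokenizer):

```python
def lm_example(title, comment, sep="<SEP>"):
    """Format a title-comment pair as one language-model training
    sequence, as proposed above. A hypothetical format sketch: at
    generation time, the model would be prompted with
    `title <SEP>` and asked to continue."""
    return f"{title} {sep} {comment}"

example = lm_example("Show HN: hncynic", "Did not read the article.")
```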