
GluonNLP: Your Choice of Deep Learning for NLP


GluonNLP is a toolkit that enables easy text preprocessing, dataset loading, and neural model building to help you speed up your Natural Language Processing (NLP) research.

Installation

Make sure you have Python 2.7 or Python 3.6 and a recent version of MXNet. You can install MXNet and GluonNLP using pip:

pip install --pre --upgrade mxnet
pip install gluonnlp
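
After installing, a quick sanity check (a minimal sketch; the exact version strings depend on what pip installed) is to import both packages and print their versions:

>>> import mxnet as mx
>>> import gluonnlp as nlp
>>> mx.__version__, nlp.__version__  # version strings will vary with the installed releases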

Docs

GluonNLP documentation is available at our website.

Community

For questions and comments, please visit our forum (and Chinese version). For bug reports, please submit Github issues.

How to Contribute

GluonNLP has been developed by community members. Everyone is more than welcome to contribute. Together, we can make GluonNLP better and more user-friendly.

Read our contributing guide to learn about our development process, how to propose bug fixes and improvements, and how to build and test your changes to GluonNLP.

Join our contributors.

Resources

Check out how to use GluonNLP for your own research or projects.

If you are new to Gluon, please check out our 60-minute crash course.

To get started quickly, refer to the runnable notebook examples in Examples.

For advanced examples, check out our Scripts.

For experienced users, check out our API Notes.

Quick Start Guide

Dataset Loading

Load the WikiText-2 dataset, for example:

>>> import gluonnlp as nlp
>>> train = nlp.data.WikiText2(segment='train')
>>> train[0][0:5]
['=', 'Valkyria', 'Chronicles', 'III', '=']
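
The other WikiText-2 splits can be loaded the same way. A minimal sketch, assuming the segment names 'val' and 'test' used by the data API:

>>> val = nlp.data.WikiText2(segment='val')    # validation split
>>> test = nlp.data.WikiText2(segment='test')  # test split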

Vocabulary Construction

Build a vocabulary from the above dataset, for example:

>>> vocab = nlp.Vocab(counter=nlp.data.Counter(train[0]))
>>> vocab
Vocab(size=33280, unk="<unk>", reserved="['<pad>', '<bos>', '<eos>']")
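
The vocabulary maps between tokens and integer indices. A minimal sketch of the lookups it supports, using tokens from the dataset above (the exact indices depend on token frequencies):

>>> vocab['Valkyria']                                  # single token -> index
>>> vocab.to_indices(['=', 'Valkyria', 'Chronicles'])  # tokens -> indices
>>> vocab.to_tokens([0, 1, 2, 3])                      # indices -> tokens; low indices are the reserved tokens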

Neural Model Building

From the model package, apply a standard RNN language model to the above dataset:

>>> model = nlp.model.language_model.StandardRNN('lstm', len(vocab),
...                                              200, 200, 2, 0.5, True)
>>> model
StandardRNN(
  (embedding): HybridSequential(
    (0): Embedding(33280 -> 200, float32)
    (1): Dropout(p = 0.5, axes=())
  )
  (encoder): LSTM(200 -> 200, TNC, num_layers=2, dropout=0.5)
  (decoder): HybridSequential(
    (0): Dense(200 -> 33280, linear)
  )
)
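
As a follow-up, the model can be initialized and run on a dummy batch. This is a minimal sketch, not part of the original README; it assumes the model follows Gluon's Block API (initialize, and begin_state like the underlying LSTM) and the TNC layout shown above, i.e. inputs of shape (sequence_length, batch_size):

>>> import mxnet as mx
>>> model.initialize(mx.init.Xavier())
>>> # Two time steps, batch size 1, using tokens from the dataset above
>>> inputs = mx.nd.array([[vocab['=']], [vocab['Valkyria']]])
>>> hidden = model.begin_state(batch_size=1, func=mx.nd.zeros)
>>> output, hidden = model(inputs, hidden)
>>> output.shape  # expected (2, 1, 33280): one distribution over the vocabulary per time step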

Word Embedding Loading

For example, load a GloVe word embedding, one of the state-of-the-art English word embeddings:

>>> glove = nlp.embedding.create('glove', source='glove.6B.50d')
>>> # Obtain the vector for 'baby' from the GloVe word embedding
>>> type(glove['baby'])
<class 'mxnet.ndarray.ndarray.NDArray'>
>>> glove['baby'].shape
(50,)
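
The loaded embedding can also be attached to the vocabulary built above, and the vectors can be compared directly. A minimal sketch, assuming 'baby' and 'infant' appear in the respective vocabularies; the cos_sim helper below is only an illustration, not a GluonNLP API:

>>> vocab.set_embedding(glove)        # attach pretrained vectors to the vocabulary
>>> vocab.embedding['baby'].shape
(50,)
>>> import mxnet as mx
>>> def cos_sim(a, b):                # hypothetical helper: cosine similarity of two vectors
...     return mx.nd.dot(a, b) / (a.norm() * b.norm())
>>> cos_sim(glove['baby'], glove['infant'])  # value depends on the pretrained vectors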
