
GitHub - huggingface/tokenizers: 💥Fast State-of-the-Art Tokenizers optimized for...

source link: https://github.com/huggingface/tokenizers

README.md



Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking: it's always possible to get the part of the original sentence that corresponds to a given token.
  • Does all the pre-processing: truncation, padding, and adding the special tokens your model needs.

Quick examples using Python:

Start using in a matter of seconds:

# Tokenizers provides ultra-fast implementations of most current tokenizers:
from tokenizers import (ByteLevelBPETokenizer,
                        BPETokenizer,
                        SentencePieceBPETokenizer,
                        BertWordPieceTokenizer)
# Ultra-fast => they can encode 1GB of text in ~20sec on a standard server's CPU
# Tokenizers can be easily instantiated from standard files
tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True)
>>> Tokenizer(vocabulary_size=30522, model=BertWordPiece, add_special_tokens=True, unk_token=[UNK], 
              sep_token=[SEP], cls_token=[CLS], clean_text=True, handle_chinese_chars=True, 
              strip_accents=True, lowercase=True, wordpieces_prefix=##)

# Tokenizers provide exhaustive outputs: tokens, mapping to original string, attention/special token masks.
# They also handle model's max input lengths as well as padding (to directly encode in padded batches)
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
>>> Encoding(num_tokens=13, attributes=[ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, overflowing, original_str, normalized_str])
print(output.ids, output.tokens, output.offsets)
>>> [101, 7592, 1010, 1061, 1005, 2035, 999, 2129, 2024, 2017, 100, 1029, 102]
>>> ['[CLS]', 'hello', ',', 'y', "'", 'all', '!', 'how', 'are', 'you', '[UNK]', '?', '[SEP]']
>>> [(0, 0), (0, 5), (5, 6), (7, 8), (8, 9), (9, 12), (12, 13), (14, 17), (18, 21), (22, 25), (26, 27),
     (28, 29), (0, 0)]
# Here is an example using the offsets mapping to retrieve the string corresponding to the 10th token:
output.original_str[output.offsets[10]]
>>> '😁'
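
The comment above also mentions max input lengths and padded batches. Here is a minimal sketch of that part, continuing with the tokenizer instantiated above and assuming the enable_truncation, enable_padding and encode_batch methods of the current Python API (exact parameter names may differ between versions):

# Truncate every encoding to the model's maximum input length,
# and pad the shorter sequences of a batch to the same length
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(pad_token="[PAD]")

# encode_batch returns one Encoding per input, already truncated and padded
batch = tokenizer.encode_batch(["Hello, y'all!", "How are you 😁 ?"])
print(batch[0].ids, batch[0].attention_mask)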

And training a new vocabulary is just as easy:

# You can also train a BPE / Byte-level BPE / WordPiece vocabulary on your own files
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(["wiki.test.raw"], vocab_size=20000)
>>> [00:00:00] Tokenize words                 ████████████████████████████████████████   20993/20993
>>> [00:00:00] Count pairs                    ████████████████████████████████████████   20993/20993
>>> [00:00:03] Compute merges                 ████████████████████████████████████████   19375/19375
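
Once trained, the tokenizer can be used right away. The sketch below assumes the freshly trained ByteLevelBPETokenizer exposes the same encode API shown earlier, plus save_model to write the learned vocabulary to disk (available in recent releases; older versions used a different saving method):

# Encode with the freshly trained tokenizer
output = tokenizer.encode("Tokenization is fast!")
print(output.tokens)

# Write the learned vocabulary and merge rules (vocab.json, merges.txt)
# to the current directory so the tokenizer can be reloaded later
tokenizer.save_model(".")

The saved vocab.json and merges.txt can then be passed back to the ByteLevelBPETokenizer constructor, just like the BERT vocabulary file in the first example.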

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (original implementation)
  • Python
  • Node.js

