3 Ways To Make New Language Models



We started with open-source 'code' contributions. Now we are at a stage where we make open-source 'model' contributions.

But how do we make new language models?

Scenario 1: Model from scratch

Recently, Huggingface released a blog on how to make a language model from scratch. The process consists of training a tokenizer, defining the architecture, and training the model.
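A minimal sketch of those three steps, assuming the tokenizers and transformers libraries; the corpus file and hyperparameters are illustrative placeholders:

```python
from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaConfig, RobertaForMaskedLM

# 1. Train a byte-level BPE tokenizer on your raw text
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["corpus.txt"], vocab_size=30_000, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
tokenizer.save_model("my-new-lm")

# 2. Define the architecture; weights start randomly initialised
config = RobertaConfig(vocab_size=30_000, hidden_size=256,
                       num_hidden_layers=4, num_attention_heads=4)
model = RobertaForMaskedLM(config)

# 3. Train with a masked-language-modelling objective,
#    e.g. via transformers' run_language_modeling.py script
```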

Pros

  • You can make a model for your custom text or a new language
  • You have complete control over model parameters. If you need a model that works on text from a fixed domain with a small vocabulary, you can make the smallest possible model. Helps with latency!

Cons

  • You need to have a decently large dataset
  • Training is going to be costly

Scenario 2: Transfer learning


This is the common approach where we take a pretrained model like AWD-LSTM or ALBERT and then fine-tune it for our task.
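In transformers, the fine-tuning setup can be as simple as the following sketch (the checkpoint name and task head are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pretrained checkpoint; only the task head starts from scratch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2",
                                                           num_labels=2)
# ... then fine-tune `model` on your labelled data, e.g. with the Trainer API
```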

Pros

  • You can train with less data
  • Training is cheaper

Cons

  • If your text has many new words, they will be split into very small chunks, or even into characters, if it's a subword model like BERT. If it's a word-based model, all new words become <unk> tokens.

Splitting into very small chunks, or mapping to <unk>, can lead to poor model performance.
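A quick illustration of the subword problem (hypothetical example; the exact split depends on the vocab):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# A domain word absent from the pretraining vocab gets shredded into pieces
print(tokenizer.tokenize("favipiravir"))
# -> several tiny subword chunks like ['fa', '##vi', '##pi', ...]
```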

Scenario 3: Transfer learning with new vocab addition


This methodology is the sweet spot between using transfer learning and making a model from scratch.

AWD-LSTM (a bit of history)

This was first explained nicely in the fastai lectures. Using a convert_weights function (sketched below), they add zero vectors to the embedding matrix of AWD-LSTM for the new vocab. AWD-LSTM has a vocab of ~33k and a hidden size of 400. If you add 10k new vocab entries, your total vocab is now 43k.

So the embedding matrix changes from (33k, 400) to (43k, 400), where the 10k newly added rows are just zero vectors of size hidden.
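A simplified sketch of what such a conversion does (the real fastai convert_weights also handles the decoder weights and bias):

```python
import torch

def convert_weights(old_emb: torch.Tensor, old_stoi: dict, new_itos: list):
    """Expand a pretrained embedding matrix to a bigger vocab.
    Rows for words already in the old vocab are copied over; rows
    for new words start out as zero vectors, as described above."""
    new_emb = old_emb.new_zeros((len(new_itos), old_emb.size(1)))
    for i, word in enumerate(new_itos):
        if word in old_stoi:                  # known word: reuse its row
            new_emb[i] = old_emb[old_stoi[word]]
    return new_emb
```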

Using this methodology, we don't need to start from scratch for the old vocab, which is a huge advantage! :fire:

The fastai code is superb and does all of this automatically. The problem is: how do we do this with the transformers library?

Adding new vocab to transformers :rocket:

On a lonely night with no progress on the accuracy of a problem we were working on, this solution struck me. We had tried transfer learning with AWD-LSTM, BERT, ALBERT, and XLM-R, and making a new model from scratch too.

Nothing worked for us because we had 2 problems:

  • We had huge OOV issues
  • We had very little training data. Even after scraping more, we didn't have enough. Available models are trained on GBs of data, and we hardly had 50 MB.

Solution

I thought: why not try the convert_weights approach with transformers?

Model selection

I tried two models: canwenxu/BERT-of-Theseus-MNLI and TinyBERT. I selected these for a few reasons:

  • Performance: Both have almost BERT-base performance
  • Library availability: Both can be used with the transformers library
  • Model size: Since I had very little training data, I wanted the smallest model, as the amount of data required is proportional to the number of parameters. As of writing, these are the smallest models with BERT-base-level performance.

Theseus has 66M, AWD-LSTM has 24M, and TinyBERT has 15M parameters :panda_face:

I didn't select DistilBERT, as Theseus has the same number of parameters and better performance, as shown in the snapshots below.

[Figure] BERT-of-Theseus-MNLI: https://arxiv.org/pdf/2002.02925.pdf

[Figure] TinyBERT: https://arxiv.org/pdf/1909.10351.pdf

UPDATE:

A range of other small BERT models has since become available.

[Figure] Smaller BERT variants: https://github.com/google-research/bert/

These models have been made available in the transformers library, and you can read the accompanying paper to understand the motivation and methodology.

Basically, we don't need a model as big as BERT-base all the time, and latency requirements push the need for smaller models.

Overall, the results of these small models are very impressive.

Add vocab to model

So here is what the changes look like. You need to choose the appropriate tokenizer for your model. Theseus is a distilled model of BERT and hence uses BertWordPieceTokenizer.

The method below takes a dataframe containing a 'text' column and fits a WordPiece tokenizer with the given vocab_size, breaking words into subwords as needed. Then we export the vocab and load it as a list.
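A sketch of that method (the helper name and file paths are mine; it assumes the tokenizers library):

```python
import pandas as pd
from tokenizers import BertWordPieceTokenizer

def fit_new_vocab(df: pd.DataFrame, vocab_size: int) -> list:
    """Fit a WordPiece tokenizer on df['text'] and return its vocab as a list."""
    # the tokenizers library trains from files, so dump the text column first
    df["text"].to_csv("train_text.txt", index=False, header=False)
    wp = BertWordPieceTokenizer()
    wp.train(files=["train_text.txt"], vocab_size=vocab_size)
    wp.save_model(".")                                   # writes vocab.txt
    with open("vocab.txt", encoding="utf-8") as f:
        return [line.strip() for line in f]
```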

Now we add the vocab to the original tokenizer and pass the new tokenizer length to the model, which initialises new empty rows in the embedding matrix for the new vocab.
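Continuing the sketch, the two transformers calls that do the heavy lifting are add_tokens and resize_token_embeddings (the dataframe and vocab size are placeholders):

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")
model = AutoModelWithLMHead.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")

new_vocab = fit_new_vocab(df, vocab_size=10_000)   # helper sketched above
num_added = tokenizer.add_tokens(new_vocab)        # skips tokens it already has
print(f"Added {num_added} new tokens")

# Old embedding rows are kept; new rows are freshly initialised,
# analogous to convert_weights above
model.resize_token_embeddings(len(tokenizer))
```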

You need to apply this method right after the tokenizer and model are loaded in run_language_modeling.py.

New vocab size

Be careful about vocab_size: if you add a huge number of new vocab entries, the model might become worse. It's a hyperparameter to tune.

Once you have the model, do a before/after analysis of the tokenization with the old and new tokenizers.

The new tokenizer should split a sentence into fewer tokens.
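For example (the sentence and the saved-tokenizer path are hypothetical):

```python
from transformers import BertTokenizer

sentence = "some text full of your domain-specific words"

old_tok = BertTokenizer.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")
new_tok = BertTokenizer.from_pretrained("path/to/updated-tokenizer")  # saved after add_tokens

print(len(old_tok.tokenize(sentence)), "tokens before")
print(len(new_tok.tokenize(sentence)), "tokens after")   # expect fewer
```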

Results :bar_chart:

I was able to get the same metric and faster task training with TinyBERT. Although this was for a competition where the goal is to get a high score, this approach can save a ton of headache and money for inference in a production scenario :heart_eyes:

I highly recommend using TinyBERT and suggest being open to evaluating smaller models before finalising a heavy transformer model :monkey_face:

