Language modeling

tags
NLP

LM with RNNs

Different models have been studied, starting from the initial recurrent neural network based language model (Mikolov et al. 2010), which applied a simple recurrent network to next-word prediction.

LSTMs were then used with more success than previous models (Zaremba et al. 2015).
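
As an illustration of the basic setup (not the exact architectures from the cited papers), here is a minimal sketch of an LSTM language model in PyTorch; the vocabulary size, dimensions, and dummy batch are arbitrary placeholders. Tokens are embedded, passed through an LSTM, and the hidden states are projected back to vocabulary logits, trained with cross-entropy to predict the next token.

```python
# Minimal LSTM language model sketch (illustrative hyperparameters only).
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) integer ids
        emb = self.embedding(tokens)
        out, state = self.lstm(emb, state)        # out: (batch, seq_len, hidden_dim)
        return self.head(out), state              # logits: (batch, seq_len, vocab_size)

vocab_size = 1000                                  # hypothetical vocabulary size
model = LSTMLanguageModel(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch: predict token t+1 from tokens up to t.
batch = torch.randint(0, vocab_size, (8, 33))
inputs, targets = batch[:, :-1], batch[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```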

More recently, transformers have come to dominate language modeling. However, it is not clear whether this is due to a genuine superiority over RNNs or simply to their practical scalability (Merity 2019).

LM with Transformers

Existing models:

Language modeling and Compression

Text generation

Language models can be used to generate text from a prompt or starting sentence. This is the kind of application that made models like GPT-2 and GPT-3 famous, thanks to their ability to generate long sequences of apparently coherent text (Radford et al. 2019; Brown et al. 2020).
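
As a minimal sketch of this, prompt-based generation can be done with the Hugging Face transformers library and the publicly released GPT-2 weights; the prompt and sampling settings below are arbitrary examples, not values from the cited papers.

```python
# Sketch of prompt-based text generation with GPT-2 (illustrative settings).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "Language models can be used to",  # the prompt / starting sentence
    max_length=60,                     # total length in tokens, prompt included
    do_sample=True,                    # sample instead of greedy decoding
    top_k=50,                          # sample only among the 50 most likely tokens
    num_return_sequences=2,
)
for s in samples:
    print(s["generated_text"])
```

Sampling (do_sample=True with top_k) gives varied continuations; greedy decoding tends to produce repetitive text.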

Other applications

Language modeling for Automated theorem proving

(Polu and Sutskever 2020)

Language modeling for Reinforcement Learning

(Janner et al. 2021)

Bibliography

  1. . . "Recurrent Neural Network Based Language Model". In , 4.
  2. . . "Recurrent Neural Network Regularization". Arxiv:1409.2329 [cs]. http://arxiv.org/abs/1409.2329.
  3. . . "Single Headed Attention RNN: Stop Thinking with Your Head". Arxiv:1911.11423 [cs]. http://arxiv.org/abs/1911.11423.
  4. . . "Language Models Are Unsupervised Multitask Learners". Openai Blog 1 (8):9.
  5. . . "Language Models Are Few-shot Learners". Arxiv:2005.14165 [cs]. http://arxiv.org/abs/2005.14165.
  6. . . "Generative Language Modeling for Automated Theorem Proving". Arxiv:2009.03393 [cs, Stat]. http://arxiv.org/abs/2009.03393.
  7. . . "Reinforcement Learning as One Big Sequence Modeling Problem". Arxiv:2106.02039 [cs]. http://arxiv.org/abs/2106.02039.