InstructGPT

tags
Transformers, GPT, NLP
paper
(Ouyang et al. 2022)

Architecture

This model starts from a pretrained GPT-3. It is first fine-tuned with supervised learning on labeler-written demonstrations; a reward model is then trained on human preference comparisons between model outputs, and finally the policy is optimized against that reward model with reinforcement learning (PPO).
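The RL step can be sketched as optimizing the reward model's score minus a KL penalty that keeps the policy close to the supervised fine-tuned model. A minimal sketch of that per-sequence reward (the function name, arguments, and beta value are illustrative, not taken from the paper's code):

```python
def rlhf_reward(rm_score, logprob_policy, logprob_sft, beta=0.02):
    """Effective RL reward: reward-model score minus a KL penalty.

    rm_score        -- scalar score from the trained reward model r(x, y)
    logprob_policy  -- log-probability of the sampled output under the policy
    logprob_sft     -- log-probability under the frozen SFT reference model
    beta            -- KL penalty coefficient (illustrative value)
    """
    # Single-sample estimate of KL(policy || SFT reference) for this output
    kl_estimate = logprob_policy - logprob_sft
    return rm_score - beta * kl_estimate

# Drifting away from the SFT model lowers the effective reward:
print(rlhf_reward(rm_score=1.5, logprob_policy=-10.0, logprob_sft=-12.0))  # 1.46
```

The KL term acts as a regularizer: without it, the policy can exploit weaknesses in the reward model and drift far from coherent language.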

Parameter count

175B

Bibliography

  1. Ouyang, Long, et al. 2022. "Training Language Models to Follow Instructions with Human Feedback". arXiv:2203.02155.
