Gato

tags
Transformers, Reinforcement learning
paper
(Reed et al. 2022)

Architecture

A standard decoder-only transformer preceded by an embedding layer that maps text and image tokens into a single flat sequence, adding positional encodings and, where available, spatial information for image patches.
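
A minimal sketch of that layout, under assumed module names and sizes (the real model embeds image patches with a ResNet-style block and also tokenizes continuous observations and actions, which is omitted here):

```python
# Sketch only: modality-specific embeddings feeding one decoder-only transformer.
import torch
import torch.nn as nn


class GatoSketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_layers=8, n_heads=8,
                 patch_dim=16 * 16 * 3, max_len=1024):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # text / discrete tokens
        self.patch_emb = nn.Linear(patch_dim, d_model)        # flattened image patches
        self.pos_emb = nn.Embedding(max_len, d_model)         # learned positional encodings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True, norm_first=True)
        # A causal mask turns this stack into a decoder-only transformer.
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, text_tokens, image_patches):
        # Embed each modality, then concatenate into one flat sequence.
        x = torch.cat([self.patch_emb(image_patches),
                       self.token_emb(text_tokens)], dim=1)
        pos = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.decoder(x, mask=causal)
        return self.lm_head(h)  # next-token logits


# Example: 8 image patches followed by 16 text tokens in one sequence.
model = GatoSketch()
logits = model(torch.randint(0, 32000, (1, 16)), torch.randn(1, 8, 16 * 16 * 3))
print(logits.shape)  # torch.Size([1, 24, 32000])
```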

Parameter count

1.2B

Bibliography

  1. Reed, Scott, et al. 2022. "A Generalist Agent". https://arxiv.org/abs/2205.06175v2.