DQ-BART

tags: Transformers, BART, NLP
paper: (Li et al. 2022)

Architecture

DQ-BART is a distilled and quantized version of BART: a low-precision student is initialized from the full-precision BART teacher and trained to mimic it, combining distillation with quantization-aware training. This shrinks the model size substantially while keeping performance on sequence-to-sequence tasks close to the original model.
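Below is a minimal PyTorch sketch of that idea, not the authors' implementation: it fake-quantizes the student's linear-layer weights with a straight-through estimator and adds a logit-matching distillation loss against the teacher. The paper also distills intermediate representations and drops decoder layers in the student, both omitted here; the `facebook/bart-base` checkpoint, the 8-bit setting, and the unit loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import parametrize
from transformers import BartForConditionalGeneration, BartTokenizer

class FakeQuant(nn.Module):
    """Simulate low-bit weights with a straight-through estimator (STE)."""
    def __init__(self, bits: int = 8):
        super().__init__()
        self.bits = bits

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        qmax = 2 ** (self.bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        q = torch.round(w / scale).clamp(-qmax - 1, qmax)
        # Forward pass sees quantized weights; gradients flow through unchanged.
        return w + (q * scale - w).detach()

teacher = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()
student = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Fake-quantize every linear layer of the student (embeddings omitted for brevity).
linears = [m for m in student.modules() if isinstance(m, nn.Linear)]
for m in linears:
    parametrize.register_parametrization(m, "weight", FakeQuant(bits=8))

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
batch = tokenizer(["an example document to summarize"], return_tensors="pt")
labels = tokenizer(["a summary"], return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# One illustrative step of joint distillation + quantization-aware training.
with torch.no_grad():
    t_logits = teacher(**batch, labels=labels).logits
s_out = student(**batch, labels=labels)

distill = F.kl_div(
    F.log_softmax(s_out.logits, dim=-1),
    F.softmax(t_logits, dim=-1),
    reduction="batchmean",
)
loss = s_out.loss + distill  # task loss + teacher-matching loss
loss.backward()
optimizer.step()
```

After training, the fake-quantized weights can be exported as true low-bit integers plus per-tensor scales, which is where the storage savings come from.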

Bibliography

  1. Li, Zheng, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, and Dan Roth. 2022. "DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization". arXiv. DOI.