"Compression of Generative Pre-trained Language Models via Quantization."

Chaofan Tao et al. (2022)

DOI: 10.18653/V1/2022.ACL-LONG.331

access: open

type: Conference or Workshop Paper

metadata version: 2024-07-17

a service of  Schloss Dagstuhl - Leibniz Center for Informatics