Mike Lewis
2020 – today
2024
- [c70] Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, Mike Lewis: Self-Alignment with Instruction Backtranslation. ICLR 2024
- [c69] Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: RA-DIT: Retrieval-Augmented Dual Instruction Tuning. ICLR 2024
- [c68] Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis: In-Context Pretraining: Language Modeling Beyond Document Boundaries. ICLR 2024
- [c67] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. ICLR 2024
- [c66] Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Wen-tau Yih: Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. NAACL (Short Papers) 2024: 783-791
- [c65] Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma: Effective Long-Context Scaling of Foundation Models. NAACL-HLT 2024: 4643-4663
- [c64] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Richard James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. NAACL-HLT 2024: 8371-8384
- [i70] Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis: Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training. CoRR abs/2405.03133 (2024)
- [i69] Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, Armen Aghajanyan: MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. CoRR abs/2407.21770 (2024)
- [i68] Ming Zhong, Aston Zhang, Xuewei Wang, Rui Hou, Wenhan Xiong, Chenguang Zhu, Zhengxing Chen, Liang Tan, Chloe Bi, Mike Lewis, Sravya Popuri, Sharan Narang, Melanie Kambadur, Dhruv Mahajan, Sergey Edunov, Jiawei Han, Laurens van der Maaten: Law of the Weakest Link: Cross Capabilities of Large Language Models. CoRR abs/2409.19951 (2024)

2023
- [j7] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. Trans. Assoc. Comput. Linguistics 11: 600-616 (2023)
- [j6] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. IEEE ACM Trans. Audio Speech Lang. Process. 31: 3112-3126 (2023)
- [c63] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. ACL (Findings) 2023: 2097-2118
- [c62] Anastasia Razdaibiedina, Yuning Mao, Madian Khabsa, Mike Lewis, Rui Hou, Jimmy Ba, Amjad Almahairi: Residual Prompt Tuning: improving prompt tuning with residual reparameterization. ACL (Findings) 2023: 6740-6757
- [c61] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. ACL (Findings) 2023: 8857-8873
- [c60] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. ACL (1) 2023: 12286-12312
- [c59] Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis: AutoReply: Detecting Nonsense in Dialogue with Discriminative Replies. EMNLP (Findings) 2023: 294-309
- [c58] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. EMNLP (Findings) 2023: 5687-5711
- [c57] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi: FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023: 12076-12100
- [c56] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. ICLR 2023
- [c55] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi: Progressive Prompts: Continual Learning for Language Models. ICLR 2023
- [c54] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih: Retrieval-Augmented Multimodal Language Modeling. ICML 2023: 39755-39769
- [c53] Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-Tau Yih, Daniel Fried, Sida Wang: Coder Reviewer Reranking for Code Generation. ICML 2023: 41832-41846
- [c52] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. NeurIPS 2023
- [c51] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. NeurIPS 2023
- [i67] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi: Progressive Prompts: Continual Learning for Language Models. CoRR abs/2301.12314 (2023)
- [i66] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. CoRR abs/2301.12652 (2023)
- [i65] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i64] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Jimmy Ba, Amjad Almahairi: Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization. CoRR abs/2305.03937 (2023)
- [i63] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. CoRR abs/2305.07185 (2023)
- [i62] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
- [i61] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi: FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. CoRR abs/2305.14251 (2023)
- [i60] Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih: Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. CoRR abs/2305.14739 (2023)
- [i59] Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis: Self-Alignment with Instruction Backtranslation. CoRR abs/2308.06259 (2023)
- [i58] Sean O'Brien, Mike Lewis: Contrastive Decoding Improves Reasoning in Large Language Models. CoRR abs/2309.09117 (2023)
- [i57] Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma: Effective Long-Context Scaling of Foundation Models. CoRR abs/2309.16039 (2023)
- [i56] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. CoRR abs/2309.17453 (2023)
- [i55] Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih: RA-DIT: Retrieval-Augmented Dual Instruction Tuning. CoRR abs/2310.01352 (2023)
- [i54] Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis: In-Context Pretraining: Language Modeling Beyond Document Boundaries. CoRR abs/2310.10638 (2023)

2022
- [c50] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728
- [c49] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330
- [c48] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022: 3781-3797
- [c47] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP 2022: 11048-11064
- [c46] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR 2022
- [c45] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. ICLR 2022
- [c44] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR 2022
- [c43] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models. NAACL-HLT 2022: 2361-2375
- [c42] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. NAACL-HLT 2022: 2791-2809
- [c41] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. NAACL-HLT 2022: 3340-3345
- [c40] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c39] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022
- [i53] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer: CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022)
- [i52] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022)
- [i51] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022)
- [i50] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022)
- [i49] Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer: Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022)
- [i48] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. CoRR abs/2206.03318 (2022)
- [i47] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. CoRR abs/2206.10658 (2022)
- [i46] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i45] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022)
- [i44] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. CoRR abs/2210.03350 (2022)
- [i43] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022)
- [i42] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022)
- [i41] Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis: AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies. CoRR abs/2211.12615 (2022)
- [i40] Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang: Coder Reviewer Reranking for Code Generation. CoRR abs/2211.16490 (2022)
- [i39] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. CoRR abs/2212.01349 (2022)
- [i38] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. CoRR abs/2212.02437 (2022)
- [i37] Andrew Lee, David Wu, Emily Dinan, Mike Lewis: Improving Chess Commentaries by Combining Language Models with Symbolic Reasoning Engines. CoRR abs/2212.08195 (2022)

2021
- [c38] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. ACL/IJCNLP (1) 2021: 5493-5505
- [c37] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. ACL/IJCNLP (1) 2021: 6787-6799
- [c36] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c35] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
- [c34] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. NAACL-HLT 2021: 5351-5366
- [i36] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
- [i35] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. CoRR abs/2104.07219 (2021)
- [i34] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021)
- [i33] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. CoRR abs/2107.06955 (2021)
- [i32] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. CoRR abs/2108.04106 (2021)
- [i31] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i30] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. CoRR abs/2108.12409 (2021)
- [i29] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021)
- [i28] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. CoRR abs/2110.08246 (2021)
- [i27] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Models. CoRR abs/2110.08536 (2021)
- [i26] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. CoRR abs/2110.15943 (2021)

2020
- [j5] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguistics 8: 726-742 (2020)
- [c33] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. ACL 2020: 5008-5020
- [c32] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880
- [c31] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. EMNLP (1) 2020: 5026-5035
- [c30] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. EMNLP (1) 2020: 6869-6882
- [c29] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
- [c28] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. NeurIPS 2020
- [c27] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS 2020
- [i25] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. CoRR abs/2001.08210 (2020)
- [i24] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. CoRR abs/2004.04228 (2020)
- [i23] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. CoRR abs/2005.11401 (2020)
- [i22] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. CoRR abs/2006.15020 (2020)
- [i21] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. CoRR abs/2009.07396 (2020)
- [i20] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. CoRR abs/2009.13655 (2020)
- [i19] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. CoRR abs/2010.00710 (2020)
- [i18] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. CoRR abs/2012.15115 (2020)
- [i17] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. CoRR abs/2012.15832 (2020)
2010 – 2019
2019
- [c26] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. ACL (1) 2019: 2650-2660
- [c25] Akshat Agarwal, Swaminathan Gurumurthy, Vasu Sharma, Mike Lewis, Katia P. Sycara: Community Regularization of Visually-Grounded Dialog. AAMAS 2019: 1042-1050
- [c24] Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer: Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog. EMNLP/IJCNLP (1) 2019: 1520-1526
- [c23] Mike Lewis, Angela Fan: Generative Question Answering: Learning to Answer the Whole Question. ICLR (Poster) 2019
- [c22] Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis: Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. NAACL-HLT (1) 2019: 3795-3805
- [c21] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. NeurIPS 2019: 10025-10034
- [i16] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. CoRR abs/1902.01109 (2019)
- [i15] Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer: Improving Semantic Parsing for Task Oriented Dialog. CoRR abs/1902.06000 (2019)
- [i14] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. CoRR abs/1906.00744 (2019)
- [i13] Sean Vasquez, Mike Lewis: MelNet: A Generative Model for Audio in the Frequency Domain. CoRR abs/1906.01083 (2019)
- [i12] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov: RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019)
- [i11] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR abs/1910.13461 (2019)
- [i10]