


Noah A. Smith
Person information

- affiliation: University of Washington, Seattle, WA, USA
- affiliation: Allen Institute for AI, Seattle, WA, USA
- affiliation: Carnegie Mellon University, Pittsburgh, USA
- 2023
- [c261] Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu: One Embedder, Any Task: Instruction-Finetuned Text Embeddings. ACL (Findings) 2023: 1102-1121
- [c260] Nikita Haduong, Alice Gao, Noah A. Smith: Risks and NLP Design: A Case Study on Procedural Document QA. ACL (Findings) 2023: 1248-1269
- [c259] Wenya Wang, Vivek Srikumar, Hannaneh Hajishirzi, Noah A. Smith: Elaboration-Generating Commonsense Question Answering at Scale. ACL (1) 2023: 1619-1635
- [c258] Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith: NarrowBERT: Accelerating Masked Language Model Pretraining and Inference. ACL (2) 2023: 1723-1730
- [c257] Sofia Serrano, Jesse Dodge, Noah A. Smith: Stubborn Lexical Bias in Data and Models. ACL (Findings) 2023: 8131-8146
- [c256] Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi: Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. ACL (Findings) 2023: 9036-9061
- [c255] Ian Magnusson, Noah A. Smith, Jesse Dodge: Reproducibility in NLP: What Have We Learned from the Checklist? ACL (Findings) 2023: 12789-12811
- [c254] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi: Self-Instruct: Aligning Language Models with Self-Generated Instructions. ACL (1) 2023: 13484-13508
- [c253] Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Binding Language Models in Symbolic Languages. ICLR 2023
- [c252] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Selective Annotation Makes Language Models Better Few-Shot Learners. ICLR 2023
- [c251] Orevaoghene Ahia, Hila Gonen, Vidhisha Balachandran, Yulia Tsvetkov, Noah A. Smith: LEXPLAIN: Improving Model Explanations via Lexicon Supervision. *SEM@ACL 2023: 207-216
- [i169] Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith: NarrowBERT: Accelerating Masked Language Model Pretraining and Inference. CoRR abs/2301.04761 (2023)
- [i168] Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, Noah A. Smith: TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering. CoRR abs/2303.11897 (2023)
- [i167] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i166] Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: We're Afraid Language Models Aren't Modeling Ambiguity. CoRR abs/2304.14399 (2023)
- [i165] Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi: Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements. CoRR abs/2305.03695 (2023)
- [i164] Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith: How Language Model Hallucinations Can Snowball. CoRR abs/2305.13534 (2023)
- [i163] Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David R. Mortensen, Noah A. Smith, Yulia Tsvetkov: Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models. CoRR abs/2305.13707 (2023)
- [i162] Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi: Fine-Grained Human Feedback Gives Better Rewards for Language Model Training. CoRR abs/2306.01693 (2023)
- [i161] Sofia Serrano, Jesse Dodge, Noah A. Smith: Stubborn Lexical Bias in Data and Models. CoRR abs/2306.02190 (2023)
- [i160] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi: How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. CoRR abs/2306.04751 (2023)
- [i159] Judit Ács, Endre Hamerlik, Roy Schwartz, Noah A. Smith, András Kornai: Morphosyntactic probing of multilingual BERT models. CoRR abs/2306.06205 (2023)
- [i158] Ian Magnusson, Noah A. Smith, Jesse Dodge: Reproducibility in NLP: What Have We Learned from the Checklist? CoRR abs/2306.09562 (2023)
- [i157] Yanai Elazar, Jiayao Zhang, David Wadden, Bo Zhang, Noah A. Smith: Estimating the Causal Effect of Early ArXiving on Paper Acceptance. CoRR abs/2306.13891 (2023)
- [i156] Bo-Ru Lu, Nikita Haduong, Chia-Hsuan Lee, Zeqiu Wu, Hao Cheng, Paul Koester, Jean Utke, Tao Yu, Noah A. Smith, Mari Ostendorf: DIALGEN: Collaborative Human-LM Generated Dialogues for Improved Understanding of Human-Human Conversations. CoRR abs/2307.07047 (2023)
- [i155] Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi: Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation. CoRR abs/2307.09701 (2023)
- [i154] Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer: SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore. CoRR abs/2308.04430 (2023)
- 2022
- [j31] William Merrill, Ashish Sabharwal, Noah A. Smith: Saturated Transformers are Constant-Depth Threshold Circuits. Trans. Assoc. Comput. Linguistics 10: 843-856 (2022)
- [c250] Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi: Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text. ACL (1) 2022: 7250-7274
- [c249] Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith: ABC: Attention with Bounded-memory Control. ACL (1) 2022: 7469-7483
- [c248] Tal August, Katharina Reinecke, Noah A. Smith: Generating Scientific Definitions with Controllable Complexity. ACL (1) 2022: 8298-8317
- [c247] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. EMNLP 2022: 602-631
- [c246] Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz: How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. EMNLP (Findings) 2022: 1403-1416
- [c245] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. EMNLP 2022: 2562-2580
- [c244] Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf: In-Context Learning for Few-Shot Dialogue State Tracking. EMNLP (Findings) 2022: 2627-2643
- [c243] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith: Twist Decoding: Diverse Generators Guide Each Other. EMNLP 2022: 4909-4923
- [c242] Bo-Ru Lu, Yushi Hu, Hao Cheng, Noah A. Smith, Mari Ostendorf: Unsupervised Learning of Hierarchical Conversation Structure. EMNLP (Findings) 2022: 5657-5670
- [c241] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation. EMNLP (Findings) 2022: 6826-6847
- [c240] Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith: Modeling Context With Linear Attention for Scalable Document-Level Translation. EMNLP (Findings) 2022: 6931-6939
- [c239] Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld: GENIE: Toward Reproducible and Standardized Human Evaluation for Text Generation. EMNLP 2022: 11444-11458
- [c238] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. FAccT 2022: 1877-1894
- [c237] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR 2022
- [c236] Daniel Edmiston, Phillip Keung, Noah A. Smith: Domain Mismatch Doesn't Always Prevent Cross-lingual Transfer Learning. LREC 2022: 892-899
- [c235] Daniel Cheng, Kyle Yan, Phillip Keung, Noah A. Smith: The Engage Corpus: A Social Media Dataset for Text-Based Recommender Systems. LREC 2022: 1885-1889
- [c234] Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi: NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. NAACL-HLT 2022: 780-799
- [c233] Jungo Kasai, Keisuke Sakaguchi, Lavinia Dunagan, Jacob Morrison, Ronan Le Bras, Yejin Choi, Noah A. Smith: Transparent Human Evaluation for Image Captioning. NAACL-HLT 2022: 3464-3478
- [c232] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith: Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand. NAACL-HLT 2022: 3540-3557
- [c231] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c230] Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith: Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. NAACL-HLT 2022: 5884-5906
- [c229] Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, Noah A. Smith: Time Waits for No One! Analysis and Challenges of Temporal Misalignment. NAACL-HLT 2022: 5944-5958
- [i153] Maarten Sap, Anna Jafarpour, Yejin Choi, Noah A. Smith, James W. Pennebaker, Eric Horvitz: Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models. CoRR abs/2201.02662 (2022)
- [i152] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation. CoRR abs/2201.05955 (2022)
- [i151] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. CoRR abs/2201.05966 (2022)
- [i150] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. CoRR abs/2201.10474 (2022)
- [i149] Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf: In-Context Learning for Few-Shot Dialogue State Tracking. CoRR abs/2203.08568 (2022)
- [i148] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Dragomir R. Radev, Yejin Choi, Noah A. Smith: Beam Decoding with Controlled Patience. CoRR abs/2204.05424 (2022)
- [i147] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A. Smith, Daniel Khashabi: Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks. CoRR abs/2204.07705 (2022)
- [i146] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir R. Radev, Yejin Choi, Noah A. Smith: Twist Decoding: Diverse Generators Guide Each Other. CoRR abs/2205.09273 (2022)
- [i145] Bo-Ru Lu, Yushi Hu, Hao Cheng, Noah A. Smith, Mari Ostendorf: Unsupervised Learning of Hierarchical Conversation Structure. CoRR abs/2205.12244 (2022)
- [i144] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. CoRR abs/2206.05229 (2022)
- [i143] Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir R. Radev, Noah A. Smith, Yejin Choi, Kentaro Inui: RealTime QA: What's the Answer Right Now? CoRR abs/2207.13332 (2022)
- [i142] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i141] Wenya Wang, Vivek Srikumar, Hanna Hajishirzi, Noah A. Smith: Elaboration-Generating Commonsense Question Answering at Scale. CoRR abs/2209.01232 (2022)
- [i140] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Selective Annotation Makes Language Models Better Few-Shot Learners. CoRR abs/2209.01975 (2022)
- [i139] Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Binding Language Models in Symbolic Languages. CoRR abs/2210.02875 (2022)
- [i138] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. CoRR abs/2210.03350 (2022)
- [i137] Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith: Transparency Helps Reveal When Language Models Learn Meaning. CoRR abs/2210.07468 (2022)
- [i136] Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith: Modeling Context With Linear Attention for Scalable Document-Level Translation. CoRR abs/2210.08431 (2022)
- [i135] Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz: How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. CoRR abs/2211.03495 (2022)
- [i134] Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A. Smith, Jiebo Luo: PromptCap: Prompt-Guided Task-Aware Image Captioning. CoRR abs/2211.09699 (2022)
- [i133] Daniel Edmiston, Phillip Keung, Noah A. Smith: Domain Mismatch Doesn't Always Prevent Cross-Lingual Transfer Learning. CoRR abs/2211.16671 (2022)
- [i132] Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi: Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. CoRR abs/2212.00196 (2022)
- [i131] Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer: Demystifying Prompts in Language Models via Perplexity Estimation. CoRR abs/2212.04037 (2022)
- [i130] Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu: One Embedder, Any Task: Instruction-Finetuned Text Embeddings. CoRR abs/2212.09741 (2022)
- [i129] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi: Self-Instruct: Aligning Language Model with Self Generated Instructions. CoRR abs/2212.10560 (2022)
- 2021
- [j30] Zhaofeng Wu, Hao Peng, Noah A. Smith: Infusing Finetuning with Semantic Dependencies. Trans. Assoc. Comput. Linguistics 9: 226-242 (2021)
- [j29] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? Trans. Assoc. Comput. Linguistics 9: 1047-1060 (2021)
- [c228] Alexander Miserlis Hoyle, Ana Marasovic, Noah A. Smith: Promoting Graph Awareness in Linearized Graph-to-Text Generation. ACL/IJCNLP (Findings) 2021: 944-956
- [c227] Kelvin Luu, Xinyi Wu, Rik Koncel-Kedziorski, Kyle Lo, Isabel Cachola, Noah A. Smith: Explaining Relationships Between Scientific Documents. ACL/IJCNLP (1) 2021: 2130-2144
- [c226] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. ACL/IJCNLP (1) 2021: 5493-5505
- [c225] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. ACL/IJCNLP (1) 2021: 6691-6706
- [c224] Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith: All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text. ACL/IJCNLP (1) 2021: 7282-7296
- [c223] Rahul Nadkarni, David Wadden, Iz Beltagy, Noah A. Smith, Hannaneh Hajishirzi, Tom Hope: Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study. AKBC 2021
- [c222] Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, Noah A. Smith: Challenges in Automated Debiasing for Toxic Language Detection. EACL 2021: 3143-3155
- [c221] Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith: Probing Across Time: What Does RoBERTa Know and When? EMNLP (Findings) 2021: 820-842
- [c220] William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent. EMNLP (1) 2021: 1766-1781
- [c219] Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith: Competency Problems: On Finding and Removing Artifacts in Language Data. EMNLP (1) 2021: 1801-1813
- [c218] Ivan Montero, Nikolaos Pappas, Noah A. Smith: Sentence Bottleneck Autoencoders from Transformer Language Models. EMNLP (1) 2021: 1822-1831
- [c217] Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith: Expected Validation Performance and Estimation of a Random Variable's Maximum. EMNLP (Findings) 2021: 4066-4073
- [c216] Sarah Wiegreffe, Ana Marasovic, Noah A. Smith: Measuring Association Between Labels and Free-Text Rationales. EMNLP (1) 2021: 10266-10284
- [c215] Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith: Finetuning Pretrained Transformers into RNNs. EMNLP (1) 2021: 10630-10643
- [c214] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith: Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation. ICLR 2021
- [c213] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong: Random Feature Attention. ICLR 2021
- [c212] Elizabeth Clark, Noah A. Smith: Choose Your Own Adventure: Paired Suggestions in Collaborative Writing for Evaluating Story Generation Models. NAACL-HLT 2021: 3566-3575
- [c211] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers. NAACL-HLT 2021: 4599-4610
- [i128] Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld: GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation. CoRR abs/2101.06561 (2021)
- [i127] Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith, Yejin Choi: Challenges in Automated Debiasing for Toxic Language Detection. CoRR abs/2102.00086 (2021)
- [i126] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong: Random Feature Attention. CoRR abs/2103.02143 (2021)
- [i125] Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith: Finetuning Pretrained Transformers into RNNs. CoRR abs/2103.13076 (2021)
- [i124] Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A. Smith: Probing Across Time: What Does RoBERTa Know and When? CoRR abs/2104.07885 (2021)
- [i123] Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith: Competency Problems: On Finding and Removing Artifacts in Language Data. CoRR abs/2104.08646 (2021)
- [i122] Rik Koncel-Kedziorski, Noah A. Smith: Go Forth and Prosper: Language Modeling with Ancient Textual History. CoRR abs/2104.08742 (2021)
- [i121] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? CoRR abs/2104.10809 (2021)
- [i120] Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner: A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers. CoRR abs/2105.03011 (2021)
- [i119] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi: On-the-Fly Controlled Text Generation with Experts and Anti-Experts. CoRR abs/2105.03023 (2021)
- [i118] Ethan C. Chau, Noah A. Smith: Specializing Multilingual Language Models: An Empirical Study. CoRR abs/2106.09063 (2021)
- [i117] Rahul Nadkarni, David Wadden, Iz Beltagy, Noah A. Smith, Hannaneh Hajishirzi, Tom Hope: Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study. CoRR abs/2106.09700 (2021)
- [i116] William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith: On the Power of Saturated Transformers: A View from Circuit Complexity. CoRR abs/2106.16213 (2021)
- [i115] Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith: All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text. CoRR abs/2107.00061 (2021)
- [i114] Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi: Scarecrow: A Framework for Scrutinizing Machine Text. CoRR abs/2107.01294 (2021)
- [i113] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i112] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. CoRR abs/2108.12409 (2021)
- [i111]