Noah D. Goodman
Person information
- affiliation: Stanford University, Department of Psychology, USA
- affiliation: Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, USA
2020 – today
- 2024
- [j25] Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman: Certified Deductive Reasoning with Language Models. Trans. Mach. Learn. Res. 2024 (2024)
- [c152] Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman: Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations. CLeaR 2024: 160-187
- [c151] Rose E. Wang, Pawan Wirawarn, Omar Khattab, Noah D. Goodman, Dorottya Demszky: Backtracing: Retrieving the Cause of the Query. EACL (Findings) 2024: 722-735
- [c150] Joy He-Yueya, Noah D. Goodman, Emma Brunskill: Evaluating and Optimizing Educational Content with Large Language Model Judgments. EDM 2024
- [c149] Steven Y. Feng, Noah D. Goodman, Michael Frank: Is Child-Directed Speech Effective Training Data for Language Models? EMNLP 2024: 22055-22071
- [c148] Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, Noah D. Goodman: Hypothesis Search: Inductive Reasoning with Language Models. ICLR 2024
- [c147] Michael Y. Li, Emily B. Fox, Noah D. Goodman: Automated Statistical Model Discovery with Language Models. ICML 2024
- [c146] Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman: Codebook Features: Sparse and Discrete Interpretability for Neural Networks. ICML 2024
- [c145] Zhengxuan Wu, Atticus Geiger, Aryaman Arora, Jing Huang, Zheng Wang, Noah D. Goodman, Christopher D. Manning, Christopher Potts: pyvene: A Library for Understanding and Improving PyTorch Models via Interventions. NAACL (Demonstrations) 2024: 158-165
- [i127] Zhengxuan Wu, Atticus Geiger, Jing Huang, Aryaman Arora, Thomas Icard, Christopher Potts, Noah D. Goodman: A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments. CoRR abs/2401.12631 (2024)
- [i126] Michael Y. Li, Emily B. Fox, Noah D. Goodman: Automated Statistical Model Discovery with Language Models. CoRR abs/2402.17879 (2024)
- [i125] Joy He-Yueya, Noah D. Goodman, Emma Brunskill: Evaluating and Optimizing Educational Content with Large Language Model Judgments. CoRR abs/2403.02795 (2024)
- [i124] Rose E. Wang, Pawan Wirawarn, Omar Khattab, Noah D. Goodman, Dorottya Demszky: Backtracing: Retrieving the Cause of the Query. CoRR abs/2403.03956 (2024)
- [i123] Kunal Handa, Yarin Gal, Ellie Pavlick, Noah D. Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li: Bayesian Preference Elicitation with Language Models. CoRR abs/2403.05534 (2024)
- [i122] Zhengxuan Wu, Atticus Geiger, Aryaman Arora, Jing Huang, Zheng Wang, Noah D. Goodman, Christopher D. Manning, Christopher Potts: pyvene: A Library for Understanding and Improving PyTorch Models via Interventions. CoRR abs/2403.07809 (2024)
- [i121] Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman: Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking. CoRR abs/2403.09629 (2024)
- [i120] Chinmaya Andukuri, Jan-Philipp Fränken, Tobias Gerstenberg, Noah D. Goodman: STaR-GATE: Teaching Language Models to Ask Clarifying Questions. CoRR abs/2403.19154 (2024)
- [i119] Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman: Stream of Search (SoS): Learning to Search in Language. CoRR abs/2404.03683 (2024)
- [i118] Jan-Philipp Fränken, Kanishk Gandhi, Tori Qiu, Ayesha Khawaja, Noah D. Goodman, Tobias Gerstenberg: Procedural Dilemma Generation for Evaluating Moral Reasoning in Humans and Language Models. CoRR abs/2404.10975 (2024)
- [i117] Jan-Philipp Fränken, Eric Zelikman, Rafael Rafailov, Kanishk Gandhi, Tobias Gerstenberg, Noah D. Goodman: Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels. CoRR abs/2404.14313 (2024)
- [i116] Gabriel Poesia, David Broman, Nick Haber, Noah D. Goodman: Learning Formal Mathematics From Intrinsic Motivation. CoRR abs/2407.00695 (2024)
- [i115] Shubhra Mishra, Gabriel Poesia, Belinda Mo, Noah D. Goodman: MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula. CoRR abs/2407.00900 (2024)
- [i114] Zachary Kenton, Noah Y. Siegel, János Kramár, Jonah Brown-Cohen, Samuel Albanie, Jannis Bulian, Rishabh Agarwal, David Lindner, Yunhao Tang, Noah D. Goodman, Rohin Shah: On scalable oversight with weak LLMs judging strong LLMs. CoRR abs/2407.04622 (2024)
- [i113] Joy He-Yueya, Wanjing Anya Ma, Kanishk Gandhi, Benjamin W. Domingue, Emma Brunskill, Noah D. Goodman: Psychometric Alignment: Capturing Human Knowledge Distributions via Language Models. CoRR abs/2407.15645 (2024)
- [i112] Steven Y. Feng, Noah D. Goodman, Michael C. Frank: Is Child-Directed Speech Effective Training Data for Language Models? CoRR abs/2408.03617 (2024)
- [i111] Joy Hsu, Jiayuan Mao, Joshua B. Tenenbaum, Noah D. Goodman, Jiajun Wu: What Makes a Maze Look Like a Maze? CoRR abs/2409.08202 (2024)
- [i110] Kanishk Gandhi, Zoe Lynch, Jan-Philipp Fränken, Kayla Patterson, Sharon Wambu, Tobias Gerstenberg, Desmond C. Ong, Noah D. Goodman: Human-like Affective Cognition in Foundation Models. CoRR abs/2409.11733 (2024)
- [i109] Aryaman Arora, Dan Jurafsky, Christopher Potts, Noah D. Goodman: Bayesian scaling laws for in-context learning. CoRR abs/2410.16531 (2024)
- 2023
- [c144] Rose E. Wang, Pawan Wirawarn, Noah D. Goodman, Dorottya Demszky: SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts. BEA@ACL 2023: 315-351
- [c143] Ben Prystawski, Dilip Arumugam, Noah D. Goodman: Cultural reinforcement learning: a framework for modeling cumulative culture on a limited channel. CogSci 2023
- [c142] Ben Prystawski, Paul H. Thibodeau, Christopher Potts, Noah D. Goodman: Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models. CogSci 2023
- [c141] Polina Tsvilodub, Michael Franke, Robert D. Hawkins, Noah D. Goodman: Overinformative Question Answering by Humans and Machines. CogSci 2023
- [c140] Dhara Yu, Noah D. Goodman, Jesse Mu: Characterizing tradeoffs between teaching via language and demonstrations in multi-agent systems. CogSci 2023
- [c139] Jasmine Bayrooti, Noah D. Goodman, Alex Tamkin: Multispectral Contrastive Learning with Viewmaker Networks. CVPR Workshops 2023: 440-448
- [c138] Joy Hsu, Gabriel Poesia, Jiajun Wu, Noah D. Goodman: Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning? ICBINB 2023: 21-28
- [c137] Alex Tamkin, Kunal Handa, Avash Shrestha, Noah D. Goodman: Task Ambiguity in Humans and Language Models. ICLR 2023
- [c136] Megha Srivastava, Noah D. Goodman, Dorsa Sadigh: Generating Language Corrections for Teaching Physical Control Tasks. ICML 2023: 32561-32574
- [c135] Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, Noah D. Goodman: Understanding Social Reasoning in Language Models with Language Models. NeurIPS 2023
- [c134] Jesse Mu, Xiang Li, Noah D. Goodman: Learning to Compress Prompts with Gist Tokens. NeurIPS 2023
- [c133] Ben Prystawski, Michael Li, Noah D. Goodman: Why think step by step? Reasoning emerges from the locality of experience. NeurIPS 2023
- [c132] Alex Tamkin, Margalit Glasgow, Xiluo He, Noah D. Goodman: Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning. NeurIPS 2023
- [c131] Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, Noah D. Goodman: Interpretability at Scale: Identifying Causal Mechanisms in Alpaca. NeurIPS 2023
- [c130] Eric Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber: Parsel🦆: Algorithmic Reasoning with Language Models by Composing Decompositions. NeurIPS 2023
- [i108] Jasmine Bayrooti, Noah D. Goodman, Alex Tamkin: Multispectral Self-Supervised Learning with Viewmaker Networks. CoRR abs/2302.05757 (2023)
- [i107] Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman: Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations. CoRR abs/2303.02536 (2023)
- [i106] Ben Prystawski, Noah D. Goodman: Why think step-by-step? Reasoning emerges from the locality of experience. CoRR abs/2304.03843 (2023)
- [i105] Jesse Mu, Xiang Lisa Li, Noah D. Goodman: Learning to Compress Prompts with Gist Tokens. CoRR abs/2304.08467 (2023)
- [i104] Joy He-Yueya, Gabriel Poesia, Rose E. Wang, Noah D. Goodman: Solving Math Word Problems by Combining Language Models With Symbolic Solvers. CoRR abs/2304.09102 (2023)
- [i103] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: Bayesian Reinforcement Learning with Limited Cognitive Load. CoRR abs/2305.03263 (2023)
- [i102] Polina Tsvilodub, Michael Franke, Robert D. Hawkins, Noah D. Goodman: Overinformative Question Answering by Humans and Machines. CoRR abs/2305.07151 (2023)
- [i101] Zhengxuan Wu, Atticus Geiger, Christopher Potts, Noah D. Goodman: Interpretability at Scale: Identifying Causal Mechanisms in Alpaca. CoRR abs/2305.08809 (2023)
- [i100] Dhara Yu, Noah D. Goodman, Jesse Mu: Characterizing tradeoffs between teaching via language and demonstrations in multi-agent systems. CoRR abs/2305.11374 (2023)
- [i99] Kanishk Gandhi, Dorsa Sadigh, Noah D. Goodman: Strategic Reasoning with Language Models. CoRR abs/2305.19165 (2023)
- [i98] Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman: Certified Reasoning with Language Models. CoRR abs/2306.04031 (2023)
- [i97] Megha Srivastava, Noah D. Goodman, Dorsa Sadigh: Generating Language Corrections for Teaching Physical Control Tasks. CoRR abs/2306.07012 (2023)
- [i96] Rose E. Wang, Pawan Wirawarn, Noah D. Goodman, Dorottya Demszky: SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts. CoRR abs/2306.09343 (2023)
- [i95] Eric Zelikman, Qian Huang, Percy Liang, Nick Haber, Noah D. Goodman: Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness. CoRR abs/2306.10015 (2023)
- [i94] Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum: From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. CoRR abs/2306.12672 (2023)
- [i93] Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, Noah D. Goodman: Understanding Social Reasoning in Language Models with Language Models. CoRR abs/2306.15448 (2023)
- [i92] Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, Noah D. Goodman: Hypothesis Search: Inductive Reasoning with Language Models. CoRR abs/2309.05660 (2023)
- [i91] Jiayuan Mao, Xuelin Yang, Xikun Zhang, Noah D. Goodman, Jiajun Wu: CLEVRER-Humans: Describing Physical and Causal Events the Human Way. CoRR abs/2310.03635 (2023)
- [i90] Belinda Z. Li, Alex Tamkin, Noah D. Goodman, Jacob Andreas: Eliciting Human Preferences with Language Models. CoRR abs/2310.11589 (2023)
- [i89] Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman: Codebook Features: Sparse and Discrete Interpretability for Neural Networks. CoRR abs/2310.17230 (2023)
- [i88] Jan-Philipp Fränken, Sam Kwok, Peixuan Ye, Kanishk Gandhi, Dilip Arumugam, Jared Moore, Alex Tamkin, Tobias Gerstenberg, Noah D. Goodman: Social Contract AI: Aligning AI Assistants with Implicit Group Norms. CoRR abs/2310.17769 (2023)
- 2022
- [j24] Michael Henry Tessler, Noah D. Goodman: Warm (for Winter): Inferring Comparison Classes in Communication. Cogn. Sci. 46(3) (2022)
- [j23] Michael Henry Tessler, Joshua B. Tenenbaum, Noah D. Goodman: Logic, Probability, and Pragmatics in Syllogistic Reasoning. Top. Cogn. Sci. 14(3): 574-601 (2022)
- [c129] Julia White, Amy Burkhardt, Jason D. Yeatman, Noah D. Goodman: Automated generation of sentence reading fluency test items. CogSci 2022
- [c128] Fei Fang, Kunal Sinha, Noah D. Goodman, Christopher Potts, Elisa Kreiss: Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning. CogSci 2022
- [c127] Veronica Boyce, Robert D. Hawkins, Noah D. Goodman, Michael C. Frank: Two's company but six is a crowd: emergence of conventions in multiparty communication games. CogSci 2022
- [c126] Gabriel Poesia Reis e Silva, Noah D. Goodman: Left to the Reader: Abstracting Solutions in Mathematical Reasoning. CogSci 2022
- [c125] Julia White, Noah D. Goodman, Robert X. D. Hawkins: Mixed-effects transformers for hierarchical adaptation. EMNLP 2022: 3944-3954
- [c124] Elisa Kreiss, Fei Fang, Noah D. Goodman, Christopher Potts: Concadia: Towards Image-Based Text Generation with a Purpose. EMNLP 2022: 4667-4684
- [c123] Rose E. Wang, Esin Durmus, Noah D. Goodman, Tatsunori Hashimoto: Language modeling via stochastic processes. ICLR 2022
- [c122] Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts: Inducing Causal Structure for Interpretable Neural Networks. ICML 2022: 7324-7338
- [c121] Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, Noah D. Goodman: Causal Distillation for Language Models. NAACL-HLT 2022: 4288-4295
- [c120] Joy Hsu, Jiajun Wu, Noah D. Goodman: Geoclidean: Few-Shot Generalization in Euclidean Geometry. NeurIPS 2022
- [c119] Jiayuan Mao, Xuelin Yang, Xikun Zhang, Noah D. Goodman, Jiajun Wu: CLEVRER-Humans: Describing Physical and Causal Events the Human Way. NeurIPS 2022
- [c118] Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah D. Goodman, Tim Rocktäschel, Edward Grefenstette: Improving Intrinsic Exploration with Language Abstractions. NeurIPS 2022
- [c117] Megha Srivastava, Erdem Biyik, Suvir Mirchandani, Noah D. Goodman, Dorsa Sadigh: Assistive Teaching of Motor Control Tasks to Humans. NeurIPS 2022
- [c116] Alex Tamkin, Gaurab Banerjee, Mohamed Owda, Vincent Liu, Shashank Rammoorthy, Noah D. Goodman: DABS 2.0: Improved Datasets and Algorithms for Universal Self-Supervision. NeurIPS 2022
- [c115] Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah D. Goodman: Active Learning Helps Pretrained Models Learn the Intended Task. NeurIPS 2022
- [c114] Mike Wu, Noah D. Goodman: Foundation Posteriors for Approximate Probabilistic Inference. NeurIPS 2022
- [c113] Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman: STaR: Bootstrapping Reasoning With Reasoning. NeurIPS 2022
- [i87] Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah D. Goodman, Tim Rocktäschel, Edward Grefenstette: Improving Intrinsic Exploration with Language Abstractions. CoRR abs/2202.08938 (2022)
- [i86] Rose E. Wang, Esin Durmus, Noah D. Goodman, Tatsunori Hashimoto: Language modeling via stochastic processes. CoRR abs/2203.11370 (2022)
- [i85] Eric Zelikman, Yuhuai Wu, Noah D. Goodman: STaR: Bootstrapping Reasoning With Reasoning. CoRR abs/2203.14465 (2022)
- [i84] Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah D. Goodman: Active Learning Helps Pretrained Models Learn the Intended Task. CoRR abs/2204.08491 (2022)
- [i83] Rose E. Wang, Mike Wu, Noah D. Goodman: Know Thy Student: Interactive Learning with Gaussian Processes. CoRR abs/2204.12072 (2022)
- [i82] Julia White, Noah D. Goodman, Robert X. D. Hawkins: Mixed-effects transformers for hierarchical adaptation. CoRR abs/2205.01749 (2022)
- [i81] Fei Fang, Kunal Sinha, Noah D. Goodman, Christopher Potts, Elisa Kreiss: Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning. CoRR abs/2205.09172 (2022)
- [i80] Mike Wu, Noah D. Goodman: Foundation Posteriors for Approximate Probabilistic Inference. CoRR abs/2205.09735 (2022)
- [i79] Ben Prystawski, Paul H. Thibodeau, Noah D. Goodman: Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models. CoRR abs/2209.08141 (2022)
- [i78] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning. CoRR abs/2210.16877 (2022)
- [i77] Zhening Li, Gabriel Poesia, Omar Costilla-Reyes, Noah D. Goodman, Armando Solar-Lezama: LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions. CoRR abs/2211.08671 (2022)
- [i76] Megha Srivastava, Erdem Biyik, Suvir Mirchandani, Noah D. Goodman, Dorsa Sadigh: Assistive Teaching of Motor Control Tasks to Humans. CoRR abs/2211.14003 (2022)
- [i75] Gabriel Poesia, Noah D. Goodman: Peano: Learning Formal Mathematical Reasoning. CoRR abs/2211.15864 (2022)
- [i74] Joy Hsu, Jiajun Wu, Noah D. Goodman: Geoclidean: Few-Shot Generalization in Euclidean Geometry. CoRR abs/2211.16663 (2022)
- [i73] Robert D. Hawkins, Andrew M. Berdahl, Alex 'Sandy' Pentland, Joshua B. Tenenbaum, Noah D. Goodman, P. M. Krafft: Flexible social inference facilitates targeted social learning when rewards are not observable. CoRR abs/2212.00869 (2022)
- [i72] Alex Tamkin, Margalit Glasgow, Xiluo He, Noah D. Goodman: Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning. CoRR abs/2212.08378 (2022)
- [i71] Eric Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber: Parsel: A Unified Natural Language Framework for Algorithmic Reasoning. CoRR abs/2212.10561 (2022)
- [i70] Alex Tamkin, Kunal Handa, Avash Shrestha, Noah D. Goodman: Task Ambiguity in Humans and Language Models. CoRR abs/2212.10711 (2022)
- 2021
- [j22] Robert X. D. Hawkins, Hyowon Gweon, Noah D. Goodman: The Division of Labor in Communication: Speakers Help Listeners Account for Asymmetries in Visual Perspective. Cogn. Sci. 45(3) (2021)
- [j21] Shyamal Buch, Li Fei-Fei, Noah D. Goodman: Neural Event Semantics for Grounded Language Understanding. Trans. Assoc. Comput. Linguistics 9: 875-890 (2021)
- [j20] Desmond C. Ong, Harold Soh, Jamil Zaki, Noah D. Goodman: Applying Probabilistic Programming to Affective Computing. IEEE Trans. Affect. Comput. 12(2): 306-317 (2021)
- [c112] Gabriel Poesia, Noah D. Goodman: Pragmatic Code Autocomplete. AAAI 2021: 445-452
- [c111] Megha Srivastava, Noah D. Goodman: Question Generation for Adaptive Education. ACL/IJCNLP (2) 2021: 692-701
- [c110] Ali Malik, Mike Wu, Vrinda Vasavada, Jinpeng Song, Madison Coots, John Mitchell, Noah D. Goodman, Chris Piech: Generative Grading: Near Human-level Accuracy for Automated Feedback on Richly Structured Problems. EDM 2021
- [c109] Julia White, Gabriel Poesia, Robert X. D. Hawkins, Dorsa Sadigh, Noah D. Goodman: Open-domain clarification question generation without question examples. EMNLP (1) 2021: 563-570
- [c108] Rose E. Wang, Julia White, Jesse Mu, Noah D. Goodman: Calibrate your listeners! Robust communication-based training for pragmatic speakers. EMNLP (Findings) 2021: 977-984
- [c107] Alex Tamkin, Mike Wu, Noah D. Goodman: Viewmaker Networks: Learning Views for Unsupervised Representation Learning. ICLR 2021
- [c106] Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah D. Goodman: Conditional Negative Sampling for Contrastive Learning of Visual Representations. ICLR 2021
- [c105] Gabriel Poesia, Wenxin Dong, Noah D. Goodman: Contrastive Reinforcement Learning of Symbolic Reasoning Domains. NeurIPS 2021: 15946-15956
- [c104] Jesse Mu, Noah D. Goodman: Emergent Communication of Generalizations. NeurIPS 2021: 17994-18007
- [c103] Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah D. Goodman: DABS: a Domain-Agnostic Benchmark for Self-Supervised Learning. NeurIPS Datasets and Benchmarks 2021
- [c102] Mike Wu, Noah D. Goodman, Stefano Ermon: Improving Compositionality of Neural Networks by Decoding Representations to Inputs. NeurIPS 2021: 26689-26700
- [i69] Robert X. D. Hawkins, Michael Franke, Michael C. Frank, Kenny Smith, Thomas L. Griffiths, Noah D. Goodman: From partners to populations: A hierarchical Bayesian account of coordination and convention. CoRR abs/2104.05857 (2021)
- [i68] Elisa Kreiss, Noah D. Goodman, Christopher Potts: Concadia: Tackling image accessibility with context. CoRR abs/2104.08376 (2021)
- [i67] Mike Wu, Noah D. Goodman, Stefano Ermon: Improving Co