Percy Liang
Person information

- affiliation: Stanford University, Computer Science Department
- award (2019): Presidential Early Career Award for Scientists and Engineers
2020 – today
- 2023
- [c166] Yuchen Cui, Siddharth Karamcheti, Raj Palleti, Nidhya Shivakumar, Percy Liang, Dorsa Sadigh: No, to the Right: Online Language Corrections for Robotic Manipulation via Shared Autonomy. HRI 2023: 93-101
- [i165] Yuchen Cui, Siddharth Karamcheti, Raj Palleti, Nidhya Shivakumar, Percy Liang, Dorsa Sadigh: "No, to the Right" - Online Language Corrections for Robotic Manipulation via Shared Autonomy. CoRR abs/2301.02555 (2023)
- [i164] Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen R. McKeown, Tatsunori B. Hashimoto: Benchmarking Large Language Models for News Summarization. CoRR abs/2301.13848 (2023)
- [i163] Yann Dubois, Tatsunori Hashimoto, Percy Liang: Evaluating Self-Supervised Learning via Risk Decomposition. CoRR abs/2302.03068 (2023)
- [i162] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy Liang: Data Selection for Language Models via Importance Resampling. CoRR abs/2302.03169 (2023)
- [i161] Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang: Out-of-Domain Robustness via Targeted Augmentations. CoRR abs/2302.11861 (2023)
- [i160] Siddharth Karamcheti, Suraj Nair, Annie S. Chen, Thomas Kollar, Chelsea Finn, Dorsa Sadigh, Percy Liang: Language-Driven Representation Learning for Robotics. CoRR abs/2302.12766 (2023)
- [i159] Michael Sun, Ananya Kumar, Divyam Madaan, Percy Liang: Improving Representational Continuity via Continued Pretraining. CoRR abs/2302.13289 (2023)
- [i158] Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark W. Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: High-throughput Generative Inference of Large Language Models with a Single GPU. CoRR abs/2303.06865 (2023)
- [i157] Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang: Foundation Models and Fair Use. CoRR abs/2303.15715 (2023)
- [i156] Rishi Bommasani, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, Percy Liang: Ecosystem Graphs: The Social Footprint of Foundation Models. CoRR abs/2303.15772 (2023)
- [i155] Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto: Whose Opinions Do Language Models Reflect? CoRR abs/2303.17548 (2023)
- [i154] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein: Generative Agents: Interactive Simulacra of Human Behavior. CoRR abs/2304.03442 (2023)
- [i153] Nelson F. Liu, Tianyi Zhang, Percy Liang: Evaluating Verifiability in Generative Search Engines. CoRR abs/2304.09848 (2023)
- [i152] Deepak Narayanan, Keshav Santhanam, Peter Henderson, Rishi Bommasani, Tony Lee, Percy Liang: Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs. CoRR abs/2305.02440 (2023)
- [i151] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu: DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining. CoRR abs/2305.10429 (2023)
- [i150] Qian Huang, Hongyu Ren, Peng Chen, Gregor Krzmanc, Daniel Zeng, Percy Liang, Jure Leskovec: PRODIGY: Enabling In-context Learning Over Graphs. CoRR abs/2305.12600 (2023)
- 2022
- [j10] Pang Wei Koh, Jacob Steinhardt, Percy Liang: Stronger data poisoning attacks break data sanitization defenses. Mach. Learn. 111(1): 1-47 (2022)
- [j9] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus: Emergent Abilities of Large Language Models. Trans. Mach. Learn. Res. 2022 (2022)
- [c165] Michihiro Yasunaga, Jure Leskovec, Percy Liang: LinkBERT: Pretraining Language Models with Document Links. ACL (1) 2022: 8003-8016
- [c164] Mina Lee, Percy Liang, Qian Yang: CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. CHI 2022: 388:1-388:19
- [c163] John Hewitt, Christopher D. Manning, Percy Liang: Truncation Sampling as Language Model Desmoothing. EMNLP (Findings) 2022: 3414-3427
- [c162] Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec: GreaseLM: Graph REASoning Enhanced Language Models. ICLR 2022
- [c161] Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, Percy Liang: Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. ICLR 2022
- [c160] Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto: Large Language Models Can Be Strong Differentially Private Learners. ICLR 2022
- [c159] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang: Extending the WILDS Benchmark for Unsupervised Adaptation. ICLR 2022
- [c158] Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma: An Explanation of In-context Learning as Implicit Bayesian Inference. ICLR 2022
- [c157] Kendrick Shen, Robbie M. Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. ICML 2022: 19847-19878
- [c156] Chris Donahue, John Thickstun, Percy Liang: Melody transcription via generative pre-training. ISMIR 2022: 485-492
- [c155] Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant: What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. NeurIPS 2022
- [c154] Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang: Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? NeurIPS 2022
- [c153] Yann Dubois, Stefano Ermon, Tatsunori B. Hashimoto, Percy Liang: Improving Self-Supervised Learning by Characterizing Idealized Representations. NeurIPS 2022
- [c152] Xiang Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto: Diffusion-LM Improves Controllable Text Generation. NeurIPS 2022
- [c151] Yuhuai Wu, Felix Li, Percy Liang: Insights into Pre-training via Simpler Synthetic Tasks. NeurIPS 2022
- [c150] Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, Jure Leskovec: Deep Bidirectional Language-Knowledge Graph Pretraining. NeurIPS 2022
- [c149] Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. NeurIPS 2022
- [c148] Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan: Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. UAI 2022: 1041-1051
- [c147] Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein: Social Simulacra: Creating Populated Prototypes for Social Computing Systems. UIST 2022: 74:1-74:18
- [i149] Mina Lee, Percy Liang, Qian Yang: CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. CoRR abs/2201.06796 (2022)
- [i148] Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec: GreaseLM: Graph REASoning Enhanced Language Models for Question Answering. CoRR abs/2201.08860 (2022)
- [i147] Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang: Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. CoRR abs/2202.10054 (2022)
- [i146] Michihiro Yasunaga, Jure Leskovec, Percy Liang: LinkBERT: Pretraining Language Models with Document Links. CoRR abs/2203.15827 (2022)
- [i145] Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. CoRR abs/2204.00570 (2022)
- [i144] Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto: Diffusion-LM Improves Controllable Text Generation. CoRR abs/2205.14217 (2022)
- [i143] Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang: Decentralized Training of Foundation Models in Heterogeneous Environments. CoRR abs/2206.01288 (2022)
- [i142] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus: Emergent Abilities of Large Language Models. CoRR abs/2206.07682 (2022)
- [i141] Yuhuai Wu, Felix Li, Percy Liang: Insights into Pre-training via Simpler Synthetic Tasks. CoRR abs/2206.10139 (2022)
- [i140] Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto: Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning. CoRR abs/2207.07635 (2022)
- [i139] Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan: Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. CoRR abs/2207.08977 (2022)
- [i138] Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant: What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. CoRR abs/2208.01066 (2022)
- [i137] Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein: Social Simulacra: Creating Populated Prototypes for Social Computing Systems. CoRR abs/2208.04024 (2022)
- [i136] Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang: Improving Self-Supervised Learning by Characterizing Idealized Representations. CoRR abs/2209.06235 (2022)
- [i135] Nelson F. Liu, Ananya Kumar, Percy Liang, Robin Jia: Are Sample-Efficient NLP Models More Robust? CoRR abs/2210.06456 (2022)
- [i134] Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, Jure Leskovec: Deep Bidirectional Language-Knowledge Graph Pretraining. CoRR abs/2210.09338 (2022)
- [i133] Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn: Surgical Fine-Tuning Improves Adaptation to Distribution Shifts. CoRR abs/2210.11466 (2022)
- [i132] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022)
- [i131] John Hewitt, Christopher D. Manning, Percy Liang: Truncation Sampling as Language Model Desmoothing. CoRR abs/2210.15191 (2022)
- [i130] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yüksekgönül, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda: Holistic Evaluation of Language Models. CoRR abs/2211.09110 (2022)
- [i129] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022)
- [i128] Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah: How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? CoRR abs/2211.12966 (2022)
- [i127] Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang: Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? CoRR abs/2211.13972 (2022)
- [i126] Chris Donahue, John Thickstun, Percy Liang: Melody transcription via generative pre-training. CoRR abs/2212.01884 (2022)
- [i125] Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael S. Bernstein, Percy Liang: Evaluating Human-Language Model Interaction. CoRR abs/2212.09746 (2022)
- [i124] Rishi Bommasani, Percy Liang: Trustworthy Social Bias Measurement. CoRR abs/2212.11672 (2022)
- [i123] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia: Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP. CoRR abs/2212.14024 (2022)
- 2021
- [c146] Xiang Lisa Li, Percy Liang: Prefix-Tuning: Optimizing Continuous Prompts for Generation. ACL/IJCNLP (1) 2021: 4582-4597
- [c145] Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh: LILA: Language-Informed Latent Actions. CoRL 2021: 1379-1390
- [c144] John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher D. Manning: Conditional probing: measuring usable information beyond a baseline. EMNLP (1) 2021: 1626-1639
- [c143] Michihiro Yasunaga, Jure Leskovec, Percy Liang: LM-Critic: Language Models for Unsupervised Grammatical Error Correction. EMNLP (1) 2021: 7752-7763
- [c142] Fereshte Khani, Percy Liang: Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately. FAccT 2021: 196-205
- [c141] Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang: Selective Classification Can Magnify Disparities Across Groups. ICLR 2021
- [c140] Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang: In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness. ICLR 2021
- [c139] Jared Quincy Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Ré, Chelsea Finn, Percy Liang: Catformer: Designing Stable Transformers via Sensitivity Analysis. ICML 2021: 2489-2499
- [c138] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran S. Haque, Sara M. Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang: WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021: 5637-5664
- [c137] Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn: Just Train Twice: Improving Group Robustness without Training Group Information. ICML 2021: 6781-6792
- [c136] Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices. ICML 2021: 6925-6935
- [c135] John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. ICML 2021: 7721-7735
- [c134] Sang Michael Xie, Tengyu Ma, Percy Liang: Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization. ICML 2021: 11424-11435
- [c133] Michihiro Yasunaga, Percy Liang: Break-It-Fix-It: Unsupervised Learning for Program Repair. ICML 2021: 11941-11952
- [c132] Rodrigo Castellon, Chris Donahue, Percy Liang: Codified audio language modeling learns useful representations for music information retrieval. ISMIR 2021: 88-96
- [c131] Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec: QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering. NAACL-HLT 2021: 535-546
- [c130] Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, Percy Liang: Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality. NAACL-HLT 2021: 4362-4379
- [c129] Yu Gu, Sue Kase, Michelle Vanni, Brian M. Sadler, Percy Liang, Xifeng Yan, Yu Su: Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases. WWW 2021: 3477-3488
- [e2] Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan: Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual. 2021 [contents]
- [i122] Xiang Lisa Li, Percy Liang: Prefix-Tuning: Optimizing Continuous Prompts for Generation. CoRR abs/2101.00190 (2021)
- [i121] Nelson F. Liu, Tony Lee, Robin Jia, Percy Liang: Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches. CoRR abs/2102.01065 (2021)
- [i120] Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec: QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering. CoRR abs/2104.06378 (2021)
- [i119] Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, Percy Liang: Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality. CoRR abs/2106.04102 (2021)
- [i118] Michihiro Yasunaga, Percy Liang: Break-It-Fix-It: Unsupervised Learning for Program Repair. CoRR abs/2106.06600 (2021)
- [i117] John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt: Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. CoRR abs/2107.04649 (2021)
- [i116] Rodrigo Castellon, Chris Donahue, Percy Liang: Codified audio language modeling learns useful representations for music information retrieval. CoRR abs/2107.05677 (2021)
- [i115] Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn: Just Train Twice: Improving Group Robustness without Training Group Information. CoRR abs/2107.09044 (2021)
- [i114] Fahim Tajwar, Ananya Kumar, Sang Michael Xie, Percy Liang: No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets. CoRR abs/2109.05554 (2021)
- [i113] Michihiro Yasunaga, Jure Leskovec, Percy Liang: LM-Critic: Language Models for Unsupervised Grammatical Error Correction. CoRR abs/2109.06822 (2021)
- [i112] John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher D. Manning: Conditional probing: measuring usable information beyond a baseline. CoRR abs/2109.09234 (2021)
- [i111] Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto: Large Language Models Can Be Strong Differentially Private Learners. CoRR abs/2110.05679 (2021)
- [i110] Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma: An Explanation of In-context Learning as Implicit Bayesian Inference. CoRR abs/2111.02080 (2021)
- [i109] Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh: LILA: Language-Informed Latent Actions. CoRR abs/2111.03205 (2021)
- [i108] Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang: Extending the WILDS Benchmark for Unsupervised Adaptation. CoRR abs/2112.05090 (2021)
- 2020
- [j8] Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Andrew Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov: Task-Oriented Dialogue as Dataflow Synthesis. Trans. Assoc. Comput. Linguistics 8: 556-571 (2020)
- [c128] Shikhar Murty, Pang Wei Koh, Percy Liang: ExpBERT: Representation Engineering with Natural Language Explanations. ACL 2020: 2106-2113
- [c127] Chris Donahue, Mina Lee, Percy Liang: Enabling Language Models to Fill in the Blanks. ACL 2020: 2492-2501
- [c126] Erik Jones, Robin Jia, Aditi Raghunathan, Percy Liang: Robust Encodings: A Framework for Combating Adversarial Typos. ACL 2020: 2752-2765
- [c125] Jesse Mu, Percy Liang, Noah D. Goodman: Shaping Visual Representations with Language for Few-Shot Classification. ACL 2020: 4823-4830
- [c124] Amita Kamath, Robin Jia, Percy Liang: Selective Question Answering under Domain Shift. ACL 2020: 5684-5696
- [c123] Benjamin Newman, John Hewitt, Percy Liang, Christopher D. Manning: The EOS Decision and Length Extrapolation. BlackboxNLP@EMNLP 2020: 276-291
- [c122] John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, Christopher D. Manning: RNNs can generate bounded hierarchical languages with optimal memory. EMNLP (1) 2020: 1978-2010
- [c121] Stephen Mussmann, Robin Jia, Percy Liang: On the Importance of Adaptive Data Collection for Extremely Imbalanced Pairwise Tasks. EMNLP (Findings) 2020: 3400-3413
- [c120] Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia: Selection via Proxy: Efficient Data Selection for Deep Learning. ICLR 2020
- [c119] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay S. Pande, Jure Leskovec: Strategies for Pre-training Graph Neural Networks. ICLR 2020
- [c118] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, Percy Liang: Distributionally Robust Neural Networks. ICLR 2020
- [c117] Fereshte Khani, Percy Liang: Feature Noise Induces Loss Discrepancy Across Groups. ICML 2020: 5209-5219
- [c116] Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang: Concept Bottleneck Models. ICML 2020: 5338-5348
- [c115] Ananya Kumar, Tengyu Ma, Percy Liang: Understanding Self-Training for Gradual Domain Adaptation. ICML 2020: 5468-5479
- [c114]