Omer Levy
2020 – today
- 2023
  - [i58] Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer: Scaling Laws for Generative Mixed-Modal Language Models. CoRR abs/2301.03728 (2023)
  - [i57] Yuval Kirstain, Omer Levy, Adam Polyak: X&Fuse: Fusing Visual Information in Text-to-Image Generation. CoRR abs/2303.01000 (2023)
- 2022
  - [c55] Uri Shaham, Omer Levy: What Do You Get When You Cross Beam Search with Nucleus Sampling? Insights@ACL 2022: 38-45
  - [c54] Yuval Kirstain, Patrick S. H. Lewis, Sebastian Riedel, Omer Levy: A Few More Examples May Be Worth Billions of Parameters. EMNLP (Findings) 2022: 1017-1029
  - [c53] Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy: Transformer Language Models without Positional Encodings Still Learn Positional Information. EMNLP (Findings) 2022: 1382-1390
  - [c52] Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy: SCROLLS: Standardized CompaRison Over Long Language Sequences. EMNLP 2022: 12007-12021
  - [c51] Wenhan Xiong, Barlas Oguz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Scott Yih, Yashar Mehdad: Simple Local Attentions Remain Competitive for Long-Context Tasks. NAACL-HLT 2022: 1975-1986
  - [c50] Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, Amir Globerson: Learning to Retrieve Passages without Supervision. NAACL-HLT 2022: 2687-2700
  - [c49] Itay Itzhak, Omer Levy: Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens. NAACL-HLT 2022: 5061-5068
  - [i56] Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy: SCROLLS: Standardized CompaRison Over Long Language Sequences. CoRR abs/2201.03533 (2022)
  - [i55] Avital Friedland, Jonathan Zeltser, Omer Levy: Are Mutually Intelligible Languages Easier to Translate? CoRR abs/2201.13072 (2022)
  - [i54] Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy: Transformer Language Models without Positional Encodings Still Learn Positional Information. CoRR abs/2203.16634 (2022)
  - [i53] Omri Keren, Tal Avinari, Reut Tsarfaty, Omer Levy: Breaking Character: Are Subwords Good Enough for MRLs After All? CoRR abs/2204.04748 (2022)
  - [i52] Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy: Instruction Induction: From Few Examples to Natural Language Task Descriptions. CoRR abs/2205.10782 (2022)
  - [i51] Avia Efrat, Or Honovich, Omer Levy: LMentry: A Language Model Benchmark of Elementary Language Tasks. CoRR abs/2211.02069 (2022)
  - [i50] Uri Shaham, Maha Elbayad, Vedanuj Goswami, Omer Levy, Shruti Bhosale: Causes and Cures for Interference in Multilingual Translation. CoRR abs/2212.07530 (2022)
  - [i49] Lior Vassertail, Omer Levy: A Simple Baseline for Beam Search Reranking. CoRR abs/2212.08926 (2022)
  - [i48] Or Honovich, Thomas Scialom, Omer Levy, Timo Schick: Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor. CoRR abs/2212.09689 (2022)
- 2021
  - [j5] Omer Levy, Dror G. Feitelson: Understanding large-scale software systems - structure and flows. Empir. Softw. Eng. 26(3): 48 (2021)
  - [c48] Yuval Kirstain, Ori Ram, Omer Levy: Coreference Resolution without Span Representations. ACL/IJCNLP (2) 2021: 14-19
  - [c47] Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy: Few-Shot Question Answering by Pretraining Span Selection. ACL/IJCNLP (1) 2021: 3066-3079
  - [c46] Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy: Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language. EMNLP (1) 2021: 4186-4192
  - [c45] Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy: Transformer Feed-Forward Layers Are Key-Value Memories. EMNLP (1) 2021: 5484-5495
  - [c44] Peter Izsak, Moshe Berchansky, Omer Levy: How to Train BERT with an Academic Budget. EMNLP (1) 2021: 10644-10652
  - [c43] Uri Shaham, Omer Levy: Neural Machine Translation without Embeddings. NAACL-HLT 2021: 181-186
  - [c42] Adi Haviv, Lior Vassertail, Omer Levy: Can Latent Alignments Improve Autoregressive Machine Translation? NAACL-HLT 2021: 2637-2641
  - [i47] Yuval Kirstain, Ori Ram, Omer Levy: Coreference Resolution without Span Representations. CoRR abs/2101.00434 (2021)
  - [i46] Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy: Few-Shot Question Answering by Pretraining Span Selection. CoRR abs/2101.00438 (2021)
  - [i45] Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy: Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language. CoRR abs/2103.01242 (2021)
  - [i44] Peter Izsak, Moshe Berchansky, Omer Levy: How to Train BERT with an Academic Budget. CoRR abs/2104.07705 (2021)
  - [i43] Adi Haviv, Lior Vassertail, Omer Levy: Can Latent Alignments Improve Autoregressive Machine Translation? CoRR abs/2104.09554 (2021)
  - [i42] Uri Shaham, Omer Levy: What Do You Get When You Cross Beam Search with Nucleus Sampling? CoRR abs/2107.09729 (2021)
  - [i41] Or Castel, Ori Ram, Avia Efrat, Omer Levy: How Optimal is Greedy Decoding for Extractive Question Answering? CoRR abs/2108.05857 (2021)
  - [i40] Itay Itzhak, Omer Levy: Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens. CoRR abs/2108.11193 (2021)
  - [i39] Omri Keren, Omer Levy: ParaShoot: A Hebrew Question Answering Dataset. CoRR abs/2109.11314 (2021)
  - [i38] Yuval Kirstain, Patrick S. H. Lewis, Sebastian Riedel, Omer Levy: A Few More Examples May Be Worth Billions of Parameters. CoRR abs/2110.04374 (2021)
  - [i37] Wenhan Xiong, Barlas Oguz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Wen-tau Yih, Yashar Mehdad: Simple Local Attentions Remain Competitive for Long-Context Tasks. CoRR abs/2112.07210 (2021)
  - [i36] Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, Amir Globerson: Learning to Retrieve Passages without Supervision. CoRR abs/2112.07708 (2021)
- 2020
  - [j4] Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, Omer Levy: Emergent linguistic structure in artificial neural networks trained by self-supervision. Proc. Natl. Acad. Sci. USA 117(48): 30046-30054 (2020)
  - [j3] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy: SpanBERT: Improving Pre-training by Representing and Predicting Spans. Trans. Assoc. Comput. Linguistics 8: 64-77 (2020)
  - [c41] Ofir Press, Noah A. Smith, Omer Levy: Improving Transformer Models by Reordering their Sublayers. ACL 2020: 2996-3005
  - [c40] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880
  - [c39] Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, Jie Tang: Blockwise Self-Attention for Long Document Understanding. EMNLP (Findings) 2020: 2555-2565
  - [c38] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
  - [c37] Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav: Structural Language Models of Code. ICML 2020: 245-256
  - [c36] Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy: Aligned Cross Entropy for Non-Autoregressive Machine Translation. ICML 2020: 3515-3523
  - [i35] Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer: Semi-Autoregressive Training Improves Mask-Predict Decoding. CoRR abs/2001.08785 (2020)
  - [i34] Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy: Aligned Cross Entropy for Non-Autoregressive Machine Translation. CoRR abs/2004.01655 (2020)
  - [i33] Uri Shaham, Omer Levy: Neural Machine Translation without Embeddings. CoRR abs/2008.09396 (2020)
  - [i32] Avia Efrat, Omer Levy: The Turking Test: Can Language Models Understand Instructions? CoRR abs/2010.11982 (2020)
  - [i31] Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy: Transformer Feed-Forward Layers Are Key-Value Memories. CoRR abs/2012.14913 (2020)
2010 – 2019
- 2019
  - [j2] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav: code2vec: learning distributed representations of code. Proc. ACM Program. Lang. 3(POPL): 40:1-40:29 (2019)
  - [c35] Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, Marjan Ghazvininejad: Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation. W-NUT@EMNLP 2019: 42-47
  - [c34] Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: What Does BERT Look at? An Analysis of BERT's Attention. BlackboxNLP@ACL 2019: 276-286
  - [c33] Mandar Joshi, Omer Levy, Luke Zettlemoyer, Daniel S. Weld: BERT for Coreference Resolution: Baselines and Analysis. EMNLP/IJCNLP (1) 2019: 5802-5807
  - [c32] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer: Mask-Predict: Parallel Decoding of Conditional Masked Language Models. EMNLP/IJCNLP (1) 2019: 6111-6120
  - [c31] Uri Alon, Shaked Brody, Omer Levy, Eran Yahav: code2seq: Generating Sequences from Structured Representations of Code. ICLR (Poster) 2019
  - [c30] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. ICLR (Poster) 2019
  - [c29] Omer Levy, Dror G. Feitelson: Understanding large-scale software: a hierarchical view. ICPC 2019: 283-293
  - [c28] Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer: pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference. NAACL-HLT (1) 2019: 3597-3608
  - [c27] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. NeurIPS 2019: 3261-3275
  - [c26] Paul Michel, Omer Levy, Graham Neubig: Are Sixteen Heads Really Better than One? NeurIPS 2019: 14014-14024
  - [i30] Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, Marjan Ghazvininejad: Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation. CoRR abs/1902.01509 (2019)
  - [i29] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer: Constant-Time Machine Translation with Conditional Masked Language Models. CoRR abs/1904.09324 (2019)
  - [i28] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. CoRR abs/1905.00537 (2019)
  - [i27] Paul Michel, Omer Levy, Graham Neubig: Are Sixteen Heads Really Better than One? CoRR abs/1905.10650 (2019)
  - [i26] Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: What Does BERT Look At? An Analysis of BERT's Attention. CoRR abs/1906.04341 (2019)
  - [i25] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy: SpanBERT: Improving Pre-training by Representing and Predicting Spans. CoRR abs/1907.10529 (2019)
  - [i24] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov: RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019)
  - [i23] Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer: BERT for Coreference Resolution: Baselines and Analysis. CoRR abs/1908.09091 (2019)
  - [i22] Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav: Structural Language Models for Any-Code Generation. CoRR abs/1910.00577 (2019)
  - [i21] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR abs/1910.13461 (2019)
  - [i20] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. CoRR abs/1911.00172 (2019)
  - [i19] Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang: Blockwise Self-Attention for Long Document Understanding. CoRR abs/1911.02972 (2019)
  - [i18] Ofir Press, Noah A. Smith, Omer Levy: Improving Transformer Models by Reordering their Sublayers. CoRR abs/1911.03864 (2019)
- 2018
  - [c25] Terra Blevins, Omer Levy, Luke Zettlemoyer: Deep RNNs Encode Soft Hierarchical Syntax. ACL (2) 2018: 14-19
  - [c24] Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer: Ultra-Fine Entity Typing. ACL (1) 2018: 87-96
  - [c23] Luheng He, Kenton Lee, Omer Levy, Luke Zettlemoyer: Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. ACL (2) 2018: 364-369
  - [c22] Omer Levy, Kenton Lee, Nicholas FitzGerald, Luke Zettlemoyer: Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum. ACL (2) 2018: 732-739
  - [c21] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. BlackboxNLP@EMNLP 2018: 353-355
  - [c20] Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, Yejin Choi: Simulating Action Dynamics with Neural Process Networks. ICLR (Poster) 2018
  - [c19] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith: Annotation Artifacts in Natural Language Inference Data. NAACL-HLT (2) 2018: 107-112
  - [c18] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav: A general path-based representation for predicting program properties. PLDI 2018: 404-419
  - [c17] Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A. Smith: LSTMs Exploit Linguistic Attributes of Data. Rep4NLP@ACL 2018: 180-186
  - [i17] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith: Annotation Artifacts in Natural Language Inference Data. CoRR abs/1803.02324 (2018)
  - [i16] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav: code2vec: Learning Distributed Representations of Code. CoRR abs/1803.09473 (2018)
  - [i15] Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav: A General Path-Based Representation for Predicting Program Properties. CoRR abs/1803.09544 (2018)
  - [i14] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. CoRR abs/1804.07461 (2018)
  - [i13] Omer Levy, Kenton Lee, Nicholas FitzGerald, Luke Zettlemoyer: Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum. CoRR abs/1805.03716 (2018)
  - [i12] Terra Blevins, Omer Levy, Luke Zettlemoyer: Deep RNNs Encode Soft Hierarchical Syntax. CoRR abs/1805.04218 (2018)
  - [i11] Luheng He, Kenton Lee, Omer Levy, Luke Zettlemoyer: Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. CoRR abs/1805.04787 (2018)
  - [i10] Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A. Smith: LSTMs Exploit Linguistic Attributes of Data. CoRR abs/1805.11653 (2018)
  - [i9] Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer: Ultra-Fine Entity Typing. CoRR abs/1807.04905 (2018)
  - [i8] Uri Alon, Omer Levy, Eran Yahav: code2seq: Generating Sequences from Structured Representations of Code. CoRR abs/1808.01400 (2018)
  - [i7] Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer: pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference. CoRR abs/1810.08854 (2018)
- 2017
  - [c16] Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, Omer Levy: Named Entity Disambiguation for Noisy Text. CoNLL 2017: 58-68
  - [c15] Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer: Zero-Shot Relation Extraction via Reading Comprehension. CoNLL 2017: 333-342
  - [c14] Omer Levy, Anders Søgaard, Yoav Goldberg: A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments. EACL (1) 2017: 765-774
  - [e1] Samuel R. Bowman, Yoav Goldberg, Felix Hill, Angeliki Lazaridou, Omer Levy, Roi Reichart, Anders Søgaard: Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017. Association for Computational Linguistics 2017, ISBN 978-1-945626-90-6
  - [i6] Kenton Lee, Omer Levy, Luke Zettlemoyer: Recurrent Additive Networks. CoRR abs/1705.07393 (2017)
  - [i5] Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer: Zero-Shot Relation Extraction via Reading Comprehension. CoRR abs/1706.04115 (2017)
  - [i4] Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, Omer Levy: Named Entity Disambiguation for Noisy Text. CoRR abs/1706.09147 (2017)
  - [i3] Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, Yejin Choi: Simulating Action Dynamics with Neural Process Networks. CoRR abs/1711.05313 (2017)
- 2016
  - [c13] Omer Levy, Ido Dagan: Annotating Relation Inference in Context via Question Answering. ACL (2) 2016
  - [c12] Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, Iryna Gurevych: Modeling Extractive Sentence Intersection via Subtree Entailment. COLING 2016: 2891-2901
  - [i2] Omer Levy, Anders Søgaard, Yoav Goldberg: Reconsidering Cross-lingual Word Embeddings. CoRR abs/1608.05426 (2016)
- 2015
  - [j1] Omer Levy, Yoav Goldberg, Ido Dagan: Improving Distributional Similarity with Lessons Learned from Word Embeddings. Trans. Assoc. Comput. Linguistics 3: 211-225 (2015)
  - [c11] Vered Shwartz, Omer Levy, Ido Dagan, Jacob Goldberger: Learning to Exploit Structured Resources for Lexical Inference. CoNLL 2015: 175-184
  - [c10] Oren Melamud, Omer Levy, Ido Dagan: A Simple Word Embedding Model for Lexical Substitution. VS@HLT-NAACL 2015: 1-7
  - [c9] Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan: Do Supervised Distributional Methods Really Learn Lexical Inference Relations? HLT-NAACL 2015: 970-976
- 2014
  - [c8] Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, Guenter Neumann, Tae-Gil Noh, Sebastian Padó, Asher Stern, Omer Levy: The Excitement Open Platform for Textual Inferences. ACL (System Demonstrations) 2014: 43-48
  - [c7] Omer Levy, Yoav Goldberg: Dependency-Based Word Embeddings. ACL (2) 2014: 302-308
  - [c6] Omer Levy, Ido Dagan, Jacob Goldberger: Focused Entailment Graphs for Open IE Propositions. CoNLL 2014: 87-97
  - [c5] Omer Levy, Yoav Goldberg: Linguistic Regularities in Sparse and Explicit Word Representations. CoNLL 2014: 171-180
  - [c4] Omer Levy, Yoav Goldberg: Neural Word Embedding as Implicit Matrix Factorization. NIPS 2014: 2177-2185
  - [i1] Yoav Goldberg, Omer Levy: word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method. CoRR abs/1402.3722 (2014)
- 2013
  - [c3] Omer Levy, Torsten Zesch, Ido Dagan, Iryna Gurevych: Recognizing Partial Textual Entailment. ACL (2) 2013: 451-455
  - [c2] Omer Levy, Torsten Zesch, Ido Dagan, Iryna Gurevych: UKP-BIU: Similarity and Entailment Metrics for Student Response Analysis. SemEval@NAACL-HLT 2013: 285-289
- 2012
  - [c1] Omer Levy, Shaul Markovitch: Teaching Machines to Learn by Metaphors. AAAI 2012