Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019: Hong Kong
- Alexandra Birch, Andrew M. Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, Katsuhito Sudoh: Proceedings of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019. Association for Computational Linguistics 2019, ISBN 978-1-950737-83-3
- Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew M. Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh: Findings of the Third Workshop on Neural Generation and Translation. 1-14
- Pawel Budzianowski, Ivan Vulic: Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. 15-22
- Kenji Imamura, Eiichiro Sumita: Recycling a Pre-trained BERT Encoder for Neural Machine Translation. 23-31
- Woon Sang Cho, Yizhe Zhang, Sudha Rao, Chris Brockett, Sungjin Lee: Generating a Common Question from Multiple Documents using Multi-source Encoder-Decoder Models. 32-43
- Lifu Tu, Xiaoan Ding, Dong Yu, Kevin Gimpel: Generating Diverse Story Continuations with Controllable Semantics. 44-58
- Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig: Domain Differential Adaptation for Neural Machine Translation. 59-69
- Elozino Egonmwan, Yllias Chali: Transformer-based Model for Single Documents Neural Summarization. 70-79
- Alham Fikri Aji, Kenneth Heafield: Making Asynchronous Stochastic Gradient Descent Work for Transformers. 80-89
- Nikolaos Malandrakis, Minmin Shen, Anuj Kumar Goyal, Shuyang Gao, Abhishek Sethi, Angeliki Metallinou: Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents. 90-98
- Anna Currey, Kenneth Heafield: Zero-Resource Neural Machine Translation with Monolingual Pivot Data. 99-107
- Stéphane Clinchant, Kweon Woo Jung, Vassilina Nikoulina: On the use of BERT for Neural Machine Translation. 108-117
- Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, Nigel Collier: On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation. 118-127
- Ivan P. Yamshchikov, Viacheslav Shibaev, Aleksander Nagaev, Jürgen Jost, Alexey Tikhonov: Decomposing Textual Information For Style Transfer. 128-137
- Richard Yuanzhe Pang, Kevin Gimpel: Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel Textual Transfer. 138-147
- Li Gong, Josep Maria Crego, Jean Senellart: Enhanced Transformer Model for Data-to-Text Generation. 148-156
- Florian Schmidt: Generalization in Generation: A closer look at Exposure Bias. 157-167
- Alexandre Berard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, Vassilina Nikoulina: Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness. 168-176
- Poorya Zaremoodi, Gholamreza Haffari: Adaptively Scheduled Multitask Learning: The Case of Low-Resource Neural Machine Translation. 177-186
- Duygu Ataman, Orhan Firat, Mattia Antonino Di Gangi, Marcello Federico, Alexandra Birch: On the Importance of Word Boundaries in Character-level Neural Machine Translation. 187-193
- Lala Li, William Chan: Big Bidirectional Insertion Representations for Documents. 194-198
- Gayatri Bhat, Sachin Kumar, Yulia Tsvetkov: A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation. 199-205
- Hongyi Cui, Shohei Iida, Po-Hsuan Hung, Takehito Utsuro, Masaaki Nagata: Mixed Multi-Head Self-Attention for Neural Machine Translation. 206-214
- Sam Witteveen, Martin Andrews: Paraphrasing with Large Language Models. 215-220
- Pooya Moradi, Nishant Kambhatla, Anoop Sarkar: Interrogating the Explanatory Power of Attention in Neural Machine Translation. 221-230
- Kenton Murray, Jeffery Kinnison, Toan Q. Nguyen, Walter J. Scheirer, David Chiang: Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation. 231-240
- Chan Young Park, Yulia Tsvetkov: Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation. 241-248
- Elozino Egonmwan, Yllias Chali: Transformer and seq2seq model for Paraphrase Generation. 249-255
- Sameen Maruf, Gholamreza Haffari: Monash University's Submissions to the WNGT 2019 Document Translation Task. 256-261
- Li Gong, Josep Maria Crego, Jean Senellart: SYSTRAN @ WNGT 2019: DGT Task. 262-267
- Ratish Puduppully, Jonathan Mallinson, Mirella Lapata: University of Edinburgh's submission to the Document-level Generation and Translation Shared Task. 268-272
- Fahimeh Saleh, Alexandre Berard, Ioan Calapodescu, Laurent Besacier: Naver Labs Europe's Systems for the Document-Level Generation and Translation Task at WNGT 2019. 273-279
- Young Jin Kim, Marcin Junczys-Dowmunt, Hany Hassan, Alham Fikri Aji, Kenneth Heafield, Roman Grundkiewicz, Nikolay Bogoychev: From Research to Production and Back: Ludicrously Fast Neural Machine Translation. 280-288
- Lesly Miculicich, Marc Marone, Hany Hassan: Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation. 289-296
- Kenton Murray, Brian DuSell, David Chiang: Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task. 297-301