19th ALTA 2021: online
- Afshin Rahimi, William Lane, Guido Zuccon: Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, ALTA 2021, online, December 8-10, 2021. Association for Computational Linguistics 2021

Australasian Language Technology Association Workshop (2021)

- Rongxin Zhu, Jey Han Lau, Jianzhong Qi: Findings on Conversation Disentanglement. 1-11
- Li-An Chen, Hanna Suominen: An Approach to the Frugal Use of Human Annotators to Scale up Auto-coding for Text Classification Tasks. 12-21
- Narjes Askarian, Ehsan Abbasnejad, Ingrid Zukerman, Wray L. Buntine, Gholamreza Haffari: Curriculum Learning Effectively Improves Low Data VQA. 22-33
- Danielly Sorato, Diana Zavala-Rojas, Maria del Carme Colominas Ventura: Using Word Embeddings to Quantify Ethnic Stereotypes in 12 years of Spanish News. 34-46
- Karun Varghese Mathew, Venkata S. Aditya Tarigoppula, Lea Frermann: Multi-modal Intent Classification for Assistive Robots with Large-scale Naturalistic Datasets. 47-57
- Rhys Biddle, Maciek Rybinski, Qian Li, Cécile Paris, Guandong Xu: Harnessing Privileged Information for Hyperbole Detection. 58-67
- Vincent Nguyen, Sarvnaz Karimi, Zhenchang Xing: Combining Shallow and Deep Representations for Text-Pair Classification. 68-78
- Éric Le Ferrand, Steven Bird, Laurent Besacier: Phone Based Keyword Spotting for Transcribing Very Low Resource Languages. 79-86
- Nannan Huang, Xiuzhen Zhang: Evaluation of Review Summaries via Question-Answering. 87-96
- Zhuohan Xie, Jey Han Lau, Trevor Cohn: Exploring Story Generation with Multi-task Objectives in Variational Autoencoders. 97-106
- Thomas Scelsi, Alfonso Martinez Arranz, Lea Frermann: Principled Analysis of Energy Discourse across Domains with Thesaurus-based Automatic Topic Labeling. 107-118
- Rinaldo Gagiano, Maria Myung-Hee Kim, Xiuzhen Zhang, Jennifer Biggs: Robustness Analysis of Grover for Machine-Generated News Detection. 119-127
- Najam Zaidi, Trevor Cohn, Gholamreza Haffari: Document Level Hierarchical Transformer. 128-137
- Xinzhe Li, Ming Liu, Xingjun Ma, Longxiang Gao: Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts. 138-148
- Abdus Salam, Rolf Schwitter, Mehmet A. Orgun: Generating and Modifying Natural Language Explanations. 149-157
- Shiwei Zhang, Xiuzhen Zhang: Does QA-based intermediate training help fine-tuning language models for text classification? 158-162
- Raquel G. Alhama, Francesca Zermiani, Atiqah Khaliq: Retrodiction as Delayed Recurrence: the Case of Adjectives in Italian and English. 163-168
- Thanh Vu, Dai Quoc Nguyen: Automatic Post-Editing for Vietnamese. 169-173
- Antonio Jimeno-Yepes, Ameer Albahem, Karin Verspoor: Using Discourse Structure to Differentiate Focus Entities from Background Entities in Scientific Literature. 174-178
- Qian Sun, Aili Shen, Hiyori Yoshikawa, Chunpeng Ma, Daniel Beck, Tomoya Iwakura, Timothy Baldwin: Evaluating Hierarchical Document Categorisation. 179-184
- Pradeesh Parameswaran, Andrew Trotman, Veronica Liesaputra, David M. Eyers: BERT's The Word: Sarcasm Target Detection using BERT. 185-191
- Vincent Nguyen, Sarvnaz Karimi, Maciej Rybinski, Zhenchang Xing: Cross-Domain Language Modeling: An Empirical Investigation. 192-200
- Diego Mollá: Overview of the 2021 ALTA Shared Task: Automatic Grading of Evidence, 10 years later. 201-204
- Pradeesh Parameswaran, Andrew Trotman, Veronica Liesaputra, David M. Eyers: Quick, get me a Dr. BERT: Automatic Grading of Evidence using Transfer Learning. 205-212
- Yuting Guo, Yao Ge, Ruqi Liao, Abeed Sarker: An Ensemble Model for Automatic Grading of Evidence. 213-217
- Fajri Koto, Biaoyan Fang: Handling Variance of Pretrained Language Models in Grading Evidence in the Medical Literature. 218-223