Related Papers in ACL 2020 (2020.07.06)

Recurrent Neural Network

  • Generating Informative Conversational Response using Recurrent Knowledge-Interaction and Knowledge-Copy

    Xiexiong Lin, Weiyu Jian, Jianshan He, Taifeng Wang and Wei Chu

  • MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning

    Jie Lei, Liwei Wang, Yelong Shen, Dong Yu, Tamara Berg and Mohit Bansal

  • Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension

    Hongyu Gong, Yelong Shen, Dian Yu, Jianshu Chen and Dong Yu

  • Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment

    Forrest Davis and Marten van Schijndel

  • Synchronous Double-channel Recurrent Network for Aspect-Opinion Pair Extraction

    Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang and Ziming Chi

Autoencoder

  • Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics

    Guy Emerson

  • Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder

    Daya Guo, Duyu Tang, Nan Duan, Jian Yin, Daxin Jiang and Ming Zhou

  • Semi-Supervised Semantic Dependency Parsing Using CRF Autoencoders

    Zixia Jia, Youmi Ma, Jiong Cai and Kewei Tu

  • Autoencoding Keyword Correlation Graph for Document Clustering

    Billy Chiu, Sunil Kumar Sahu, Derek Thomas, Neha Sengupta and Mohammady Mahdy

  • Crossing Variational Autoencoders for Answer Retrieval

    Wenhao Yu, Lingfei Wu, Qingkai Zeng, Shu Tao, Yu Deng and Meng Jiang

  • Interpretable Operational Risk Classification with Semi-Supervised Variational Autoencoder

    Fan Zhou, Shengming Zhang and Yi Yang

  • SCAR: Sentence Compression using Autoencoders for Reconstruction

    Chanakya Malireddy, Tirth Maniar and Manish Shrivastava

LSTM

  • Inducing Grammar from Long Short-Term Memory Networks by Shapley Decomposition

    Yuhui Zhang and Allen Nie

Sequence

  • A Study of Non-autoregressive Model for Sequence Generation

    Yi Ren, Jinglin Liu, Xu Tan, Zhou Zhao, Sheng Zhao and Tie-Yan Liu

  • [Read] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

    Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov and Luke Zettlemoyer

  • Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation

    Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling and Yan Song

  • DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking

    Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab and Smaranda Muresan

  • Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks

    Fynn Schröder and Chris Biemann

  • Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation

    Junliang Guo, Linli Xu and Enhong Chen

  • Location Attention for Extrapolation to Longer Sequences

    Yann Dubois, Gautier Dagan, Dieuwke Hupkes and Elia Bruni

  • NAT: Noise-Aware Training for Robust Neural Sequence Labeling

    Marcin Namysl, Sven Behnke and Joachim Köhler

  • SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling

    Luoxin Chen, Weitong Ruan, Xinyue Liu and Jianhua Lu

  • Structure-Level Knowledge Distillation For Multilingual Sequence Labeling

    Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Fei Huang and Kewei Tu

  • Enriched In-Order Linearization for Faster Sequence-to-Sequence Constituent Parsing

    Daniel Fernández-González and Carlos Gómez-Rodríguez

  • Low Resource Sequence Tagging using Sentence Reconstruction

    Tal Perl, Sriram Chaudhury and Raja Giryes

  • Embeddings of Label Components for Sequence Labeling: A Case Study of Fine-grained Named Entity Recognition

    Takuma Kato, Kaori Abe, Hiroki Ouchi, Shumpei Miyawaki, Jun Suzuki and Kentaro Inui

Data augmentation

  • AdvAug: Robust Adversarial Augmentation for Neural Machine Translation

    Yong Cheng, Lu Jiang, Wolfgang Macherey and Jacob Eisenstein

  • Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation

    Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling and Yan Song

  • Good-Enough Compositional Data Augmentation

    Jacob Andreas

    Some patterns in language tasks are compositional and generalize across contexts; this augmentation method is proposed so that neural networks can learn these generalizations. Concretely:

  1. Identify patterns in the dataset: different fragments that occur in the same linguistic environment. These interchangeable fragments carry the generalization to be learned. An example pair:
    1. She picks the wug up in Fresno.
    2. She puts the wug down in Tempe.
  2. In this pair, "She … the wug … in …" is the shared environment, while "picks / up / Fresno" versus "puts / down / Tempe" are the interchangeable fragments; a model that has seen sentence 1 should also be able to derive the corresponding fragment content in sentence 2 by substitution.
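    The substitution scheme above can be sketched in a few lines of Python. This is a simplified illustration, not the paper's implementation: it only handles single contiguous fragments (the paper also supports discontinuous fragments like "picks … up"), and the toy corpus is invented for illustration.

    ```python
    from collections import defaultdict
    from itertools import combinations

    def fragments(sentence, max_len=2):
        """Yield (environment, fragment) pairs: every contiguous span of up to
        max_len words, with the span replaced by a slot in the environment."""
        words = sentence.split()
        for i in range(len(words)):
            for j in range(i + 1, min(i + max_len, len(words)) + 1):
                env = " ".join(words[:i] + ["___"] + words[j:])
                yield env, " ".join(words[i:j])

    def geca(corpus, max_len=2):
        """Good-enough compositional augmentation (sketch): fragments seen in
        the same environment are treated as interchangeable, so each one is
        substituted into the other's environments to synthesize new examples."""
        env2frags = defaultdict(set)
        frag2envs = defaultdict(set)
        for sent in corpus:
            for env, frag in fragments(sent, max_len):
                env2frags[env].add(frag)
                frag2envs[frag].add(env)
        new = set()
        for frags in env2frags.values():
            for a, b in combinations(sorted(frags), 2):
                for env in frag2envs[a]:
                    new.add(env.replace("___", b))
                for env in frag2envs[b]:
                    new.add(env.replace("___", a))
        return new - set(corpus)  # keep only sentences not already in the data

    corpus = [
        "the cat sang .",
        "the wug sang .",
        "the cat danced near a tree .",
    ]
    print(sorted(geca(corpus)))  # → ['the wug danced near a tree .']
    ```

    Because "cat" and "wug" occur in the shared environment "the ___ sang .", they are treated as interchangeable, so "wug" is substituted into the other environment where "cat" appears, yielding the novel sentence "the wug danced near a tree .".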
  • Review-based Question Generation with Adaptive Instance Transfer and Augmentation

    Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam and Luo Si

  • Logic-Guided Data Augmentation and Regularization for Consistent Question Answering

    Akari Asai and Hannaneh Hajishirzi

  • Parallel Data Augmentation for Formality Style Transfer

    Yi Zhang, Tao Ge and Xu SUN

  • Syntactic Data Augmentation Increases Robustness to Inference Heuristics

    Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler and Tal Linzen

  • Noise-Based Augmentation Techniques for Emotion Datasets: What do we Recommend?

    Mimansa Jaiswal and Emily Mower Provost