Related Papers in AAAI 2020 (2020.02.07)

2020/02/07 paper list

Time Series

  • [Read] DATA-GRU: Dual-Attention Time-Aware Gated Recurrent Unit for Irregular Multivariate Time Series

    Qingxiong Tan (Department of Computer Science, Hong Kong Baptist University)*; Mang YE (Inception Institute of
    Artificial Intelligence); Baoyao Yang (Department of Computer Science, Hong Kong Baptist University); Siqi Liu
    (Department of Computer Science, Hong Kong Baptist University); Andy Jinhua Ma (School of Data and Computer
    Science, Sun Yat-Sen University); Terry Cheuk-Fung Yip (Department of Medicine and Therapeutics, The Chinese
    University of Hong Kong); Grace Lai-Hung Wong (Department of Medicine and Therapeutics, The Chinese
    University of Hong Kong); PongChi Yuen (Department of Computer Science, Hong Kong Baptist University)

  • Joint Modeling of Local and Global Temporal Dynamics for Multivariate Time Series Forecasting with Missing Values

    Xianfeng Tang (The Pennsylvania State University)*; Huaxiu Yao (Pennsylvania State University); Yiwei Sun (Penn
    State University); Charu Aggarwal (IBM); Prasenjit Mitra (Pennsylvania State University ); Suhang Wang
    (Pennsylvania State University)

  • Factorized Inference in Deep Markov Models for Incomplete Multimodal Time Series

    Tan Zhi-Xuan (Massachusetts Institute of Technology)*; Desmond Ong (A*STAR Artificial Intelligence Initiative);
    Harold Soh (National University of Singapore)

  • A Variational Point Process Approach for Social Event Sequences

    Zhen Pan (University of Science and Technology of China)*; Zhenya Huang (University of Science and Technology of
    China ); Defu Lian (University of Science and Technology of China); Enhong Chen (University of Science and
    Technology of China)

  • Deep Unsupervised Binary Coding Networks for Multivariate Time Series Retrieval

    Dixian Zhu (University of Iowa); Dongjin Song (NEC Labs America)*; Yuncong Chen (NEC Laboratories America,
    Inc.); Cristian Lumezanu (NEC Labs); Wei Cheng (NEC Laboratories America); Bo Zong (NEC Labs); Jingchao Ni ( NEC
    Laboratories America); Takehiko Mizoguchi (NEC Laboratories America, Inc.); Tianbao Yang (University of Iowa);
    Haifeng Chen (NEC Labs)

  • Relation Inference among Sensor Time Series in Smart Buildings with Metric Learning

    Shuheng Li (Peking University)*; Dezhi Hong (UC San Diego); Hongning Wang (University of Virginia)

  • Time2Graph: Revisiting Time Series Modeling with Dynamic Shapelets

    Ziqiang Cheng (Zhejiang University); Yang Yang (Zhejiang University)*; Wei Wang (State Grid Huzhou Power Supply
    Co. Ltd.); Wenjie Hu (Zhejiang University); Yueting Zhuang (Zhejiang University); Guojie Song (PKU, China)

  • OMuLeT: Online Multi-Lead Time Location Prediction for Hurricane Trajectory Forecasting

    Ding Wang (Michigan State University)*; Boyang Liu (Michigan State University); Pang-Ning Tan (MSU); Lifeng Luo
    (Michigan State University)

  • Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series

    Dongkuan Xu (The Pennsylvania State University)*; Wei Cheng (NEC Laboratories America); Bo Zong (NEC Labs);
    Dongjin Song (NEC Labs America); Jingchao Ni ( NEC Laboratories America); Wenchao Yu (UCLA); Yanchi Liu
    (Rutgers); Haifeng Chen (NEC Labs); Xiang Zhang (The Pennsylvania State University)

  • Block Hankel Tensor ARIMA for Multiple Short Time Series Forecasting

    Qiquan Shi (Huawei Noah’s Ark Lab)*; Jiaming Yin (Tongji University); Jiajun Cai (The University of Hong Kong);
    Andrzej Cichocki (Skolkovo Institute of Science and Technology); Tatsuya Yokota (Nagoya Institute of Technology);
    Lei Chen (Huawei Noah’s Ark Lab); Mingxuan Yuan (Huawei); Jia Zeng (Huawei Noah’s Ark Lab)

  • The Missing Data Encoder: Cross-Channel Image Completion with Hide-And-Seek Adversarial Network

    Arnaud Dapogny (Pierre and Marie Curie University (UPMC))*; Matthieu Cord (Sorbonne University); Patrick Pérez
    (Valeo.ai)

Missing Value

  • Random Intersection Graphs and Missing Data

    Dror Salti (Ben Gurion University of The Negev)*; Yakir Berchenko (Ben Gurion University of The Negev)

  • Polynomial Matrix Completion for Missing Data Imputation and Transductive Learning

    Jicong Fan (Cornell University)*; Yuqian Zhang (Cornell University); Madeleine Udell (Cornell University)

Recurrent Neural Network

  • Eigenvalue Normalized Recurrent Neural Networks for Short Term Memory

    Kyle Helfrich (University of Kentucky)*; Qiang Ye (University of Kentucky)

  • Particle Filter Recurrent Neural Networks

    Xiao Ma (National University of Singapore)*; Peter Karkus (National University of Singapore); David Hsu (NUS);
    Wee Sun Lee (National University of Singapore)

  • Segmenting Medical MRI via Recurrent Decoding Cell

    Kai Xie (East China Normal University); Lianghua He (Tongji University); Ying Wen (East China Normal University)*

  • An Attentional Recurrent Neural Network for Personalized Next Location Recommendation

    Qing Guo (Nanyang Technological University)*; Zhu Sun (Nanyang Technological University); Jie Zhang (Nanyang
    Technological University); Yin-Leng Theng (Nanyang Technological University)

  • Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces

    Takamasa Okudono (National Institute of Informatics)*; Masaki Waga (National Institute of Informatics); Taro
    Sekiyama (National Institute of Informatics); Ichiro Hasuo (National Institute of Informatics & SOKENDAI)

  • Recurrent Nested Model for Sequence Generation

    Wenhao Jiang (Tencent AI Lab)*; Lin Ma (Tencent AI Lab); Wei Lu (UESTC)

  • Span-based Neural Buffer: Towards Efficient and Effective Utilization of Long-Distance Context for Neural Sequence Models

    Yangming Li (Ant Financial Services Group)*; Kaisheng Yao (Ant Financial Services Group); Libo Qin (Research
    Center for Social Computing and Information Retrieval, Harbin Institute of Technology); Shuang Peng (Ant Financial
    Services Group); Yijia Liu (Alibaba Group); Xiaolong Li (Ant Financial)

  • Multi-Zone Unit for Recurrent Neural Networks

    Fandong Meng (Tencent WeChat AI - Pattern Recognition Center Tencent Inc.)*; Jinchao Zhang (Tencent); Yang Liu
    (Tsinghua University); Jie Zhou (Tencent)

  • Event-Driven Continuous Time Bayesian Networks

    Debarun Bhattacharjya (IBM Research)*; Karthikeyan Shanmugam (IBM Research NY); Tian Gao (IBM Research);
    Nicholas Mattei (Tulane University); Kush Varshney (IBM Research); Dharmashankar Subramanian (IBM Research)

  • Modeling Electrical Motor Dynamics using Encoder-Decoder with Recurrent Skip Connection

    Sagar Verma (IIIT Delhi)*; Nicolas Henwood (Schneider Electric); Marc Castella (Telecom SudParis); Francois
    Malrait (Schneider Electric); Jean-Christophe Pesquet (CentraleSupelec)

  • Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

    Minhao Cheng (UCLA)*; Jinfeng Yi (JD AI Research); Pin-Yu Chen (IBM Research); Huan Zhang (UCLA); Cho-Jui Hsieh (UCLA)

  • Temporal Pyramid Recurrent Neural Network

    Qianli Ma (South China University of Technology)*; Zhenxi Lin (South China University of Technology); Enhuan
    Chen (South China University of Technology); Garrison Cottrell (UC San Diego)

  • AirNet: A Calibration Model for Low-Cost Air Monitoring Sensors Using Dual Sequence Encoder Networks

    Haomin Yu (Beijing Jiaotong University); Qingyong Li (Beijing Jiaotong University)*; YangLi-ao Geng (Beijing
    Jiaotong University); Yingjun Zhang (Beijing Jiaotong University); Zhi Wei (New Jersey Institute of Technology)

  • A Skip-connected Evolving Recurrent Neural Network for Data Stream Classification under Label Latency Scenario

    Monidipa Das (Nanyang Technological University); Mahardhika Pratama (Nanyang Technology University)*; Jie
    Zhang (Nanyang Technological University); Yew Soon Ong (Nanyang Technological University, Nanyang View,
    Singapore)

  • [Read] Not All Attention Is Needed: Gated Attention Network for Sequence Data

    Lanqing Xue (Hong Kong University of Science and Technology)*; Xiaopeng Li (Hong Kong U. of Sci. & Tech.);
    Nevin Zhang (HKUST)

  • Biologically Plausible Sequence Learning with Spiking Neural Networks

    Zuozhu Liu (Singapore University of Technology and Design); Thiparat Chotibut (Chulalongkorn University)*;
    Christopher Hillar (Redwood Center); Shaowei Lin (SUTD)

  • Structured Sparsification of Gated Recurrent Neural Networks

    Ekaterina Lobacheva (Samsung-HSE Laboratory, National Research University Higher School of Economics)*;
    Nadezhda Chirkova (Samsung-HSE Laboratory, National Research University Higher School of Economics);
    Aleksandr Markovich (National Research University Higher School of Economics); Dmitry Vetrov (National Research
    University Higher School of Economics, Samsung AI Center Moscow)

  • TapNet: Multivariate Time Series Classification with Attentional Prototype Network

    Xuchao Zhang (Virginia Tech)*; Yifeng Gao (George Mason University); Jessica Lin (George Mason University);
    Chang-Tien Lu (Virginia Tech, USA)

Anomaly Detection

  • Likelihood Ratios and Generative Classifiers for Unsupervised Out-of-Domain Detection in Task Oriented Dialog

    Varun Prashant Gangal (Carnegie Mellon University)*; Abhinav Arora (Facebook); Arash Einolghozati (Facebook);
    Sonal Gupta (Facebook)

  • Self-Supervised Learning for Generalizable Out-of-Distribution Detection

    Sina Mohseni (Texas A&M University)*; Mandar Pitale (NVIDIA); JBS Yadawa (NVIDIA); Zhangyang Wang (TAMU)

    Problem addressed: the main task is a supervised classification task, and at the same time the model must solve the
    OOD detection problem. Example: an animal classifier should reject a portrait of a person, i.e., produce no
    prediction for it.

    Method: a two-stage training procedure (a scoring sketch follows the list):

    1. First train a C-class classifier on the original task (e.g., the animal classification problem, where C is the number of animal classes).
    2. Then add a new K-class classification layer on top of the trained classifier for OOD detection, where $K = C + A$ and A is a hyperparameter: the assumed total number of OOD classes (i.e., the OOD samples are assumed to fall into A classes). This network is fine-tuned on mixed samples, a mixture of in-distribution and out-of-distribution data. The OOD samples used during training can be an arbitrary OOD set (for a network whose original task is cat-vs-dog classification, OOD_train can be face images, even though real-world OOD samples are not limited to faces). The goal of this training scheme is to let the network memorize some OOD features without forgetting the features of in-distribution samples.
    3. At inference time, use the K-class layer from step 2. For any test sample, first compute the prediction over the C in-distribution classes, $y_{pred} = \arg\max_{1 \le i \le C} \gamma_i$, then compute its OOD score, $\mathrm{Score}_{OOD} = \sum_{i=C+1}^{K} \gamma_i$, i.e., the sum of the softmax responses over the A OOD classes.
    4. My understanding: an OOD sample will have a high response on classes $C+1,\dots,K$, so its $y_{pred}$ is untrustworthy and the classification is rejected; otherwise the classification is accepted.
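
    A minimal NumPy sketch of the inference rule in steps 3-4, assuming a length-K softmax output $\gamma$ as described
    above; the class counts and the rejection threshold below are illustrative, not values from the paper.

    ```python
    import numpy as np

    def ood_score_and_prediction(logits, num_id_classes):
        """Score one sample with the K-way head from step 2.

        logits: length-K vector; the first C = num_id_classes entries are the
        in-distribution classes, the remaining A = K - C are the assumed OOD classes.
        """
        # Softmax over all K classes (the gamma vector in the notes).
        shifted = logits - logits.max()
        gamma = np.exp(shifted) / np.exp(shifted).sum()

        # In-distribution prediction uses only the first C entries.
        y_pred = int(np.argmax(gamma[:num_id_classes]))
        # OOD score is the softmax mass assigned to the A auxiliary OOD classes.
        score_ood = float(gamma[num_id_classes:].sum())
        return y_pred, score_ood

    # Hypothetical usage: C = 10 in-distribution classes, A = 3 assumed OOD classes.
    logits = np.random.randn(13)
    y_pred, score_ood = ood_score_and_prediction(logits, num_id_classes=10)
    reject = score_ood > 0.5  # threshold is illustrative; choose it on validation data
    ```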
  • Adaptive Double Exploration Tradeoff for Outlier Detection

    Xiaojin Zhang (CUHK)*; Honglei Zhuang (Google Research); Shengyu Zhang (Tencent); Yuan Zhou (UIUC)

  • MixedAD: A Scalable Algorithm for Detecting Mixed Anomalies in Attributed Graphs

    Mengxiao Zhu (Beihang Univerisity)*; Haogang Zhu (Beihang University)

  • Detecting semantic anomalies

    Faruk Ahmed (Mila, Universite de Montreal)*; Aaron Courville (Universite de Montreal)

  • [Read] Outlier Detection Ensemble with Embedded Feature Selection

    Li Cheng (National University of Defense Technology)*; Yijie Wang (National University of Defense Technology, China);
    Xinwang Liu (National University of Defense Technology); Bin Li (National University of Defense Technology)

  • Multi-scale Anomaly Detection on Attributed Networks

    Leonardo Gutierrez Gomez (Universite catholique de Louvain)*; Alexandre Bovet (Universite catholique de
    Louvain); Jean-Charles Delvenne (Universite catholique de Louvain)

  • Transfer Learning for Anomaly Detection through Localized and Unsupervised Instance Selection

    Vincent Vercruyssen (KU Leuven)*; Jesse Davis (KU Leuven); Wannes Meert (KU Leuven)

  • MIDAS: Microcluster-Based Detector of Anomalies in Edge Streams

    Siddharth Bhatia (National University of Singapore)*; Bryan Hooi (National University of Singapore); Minji Yoon
    (Carnegie Mellon University); Kijung Shin (KAIST); Christos Faloutsos

Sequence

  • Graph Transformer for Graph-to-Sequence Learning

    Deng Cai (The Chinese University of Hong Kong)*; Wai Lam (The Chinese University of Hong Kong)

  • Sequence Generation with Optimal-Transport-Enhanced Reinforcement Learning

    Liqun Chen (Duke University)*; Ke Bai (Duke University); Chenyang Tao (Duke University); Yizhe Zhang (Microsoft
    Research); Guoyin Wang (Duke University); Wenlin Wang (Duke University); Ricardo Henao (Duke University);
    Lawrence Carin (Duke University)

Interpretable

  • Dynamic Network Pruning with Interpretable Layerwise Channel Selection

    Yulong Wang (Tsinghua University); Xiaolu Zhang (Ant Financial Services Group); Hang Su (Tsinghua University); Bo
    Zhang (Tsinghua University); Xiaolin Hu (Tsinghua University)*

  • Interpretable and Differentially Private Predictions

    Frederik Harder (Max Planck Institute)*; Matthias Bauer (MPI Tübingen); Mijung Park (MPI Tuebingen)

  • MRI Reconstruction with Interpretable Pixel-Wise Operations Using Reinforcement Learning

    Wentian Li (Tsinghua University)*; Xidong Feng (Department of Automation, Tsinghua University); Haotian An
    (Tsinghua University); Xiang Yao Ng (Tsinghua University); Yu-Jin Zhang (Tsinghua University)

  • Interpretable rumor detection in microblogs by attending to user interactions

    Serena Khoo (DSO National Laboratories)*; Hai Leong Chieu (DSO National Laboratories); Zhong Qian (Soochow
    University); Jing Jiang (Singapore Management University)

  • Asymmetrical Hierarchical Networks with Attentive Interactions for Interpretable Review-Based Recommendation

    Xin Dong (Rutgers University); Jingchao Ni ( NEC Laboratories America)*; Wei Cheng (NEC Laboratories America);
    Zhengzhang Chen (NEC Laboratories America, Inc.); Bo Zong (NEC Labs); Dongjin Song (NEC Labs America); Yanchi
    Liu (NEC Labs America); Haifeng Chen (NEC Labs); Gerard de Melo (Rutgers University)

  • Iteratively Questioning and Answering for Interpretable Legal Judgment Prediction

    Haoxi Zhong (Tsinghua University)*; Yuzhong Wang (Tsinghua University); Cunchao Tu (Tsinghua University);
    Tianyang Zhang (Powerlaw); Zhiyuan Liu (Tsinghua University); Maosong Sun (Tsinghua University)

  • Chemically Interpretable Graph Interaction Network for Prediction of Pharmacokinetic Properties of Drug-like Molecules

    Yashaswi Pathak (International Institute of Information Technology,Hyderabad); Siddhartha Laghuvarapu (IIIT
    Hyderabad); Sarvesh Mehta (IIIT Hyderabad); Deva Priyakumar (IIIT Hyderabad)*

  • Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents

    Ming Tu (JD AI Research)*; Kevin Huang (JD AI Research); Guangtao Wang (JD.com); Jing Huang (JD.COM);
    Xiaodong He (JD AI Research); Bowen Zhou (JD)

Autoencoder

  • General Partial Label Learning via Dual Bipartite Graph Autoencoder

    Brian Chen (Columbia University); Bo Wu (Columbia University); Alireza Zareian (Columbia University); Hanwang
    Zhang (Nanyang Technological University); Shih-Fu Chang (Columbia University)*

  • A Variational Autoencoder with Deep Embedding Model for Generalized Zero-Shot Learning

    Peirong Ma (Guangzhou University); Xiao Hu (Guangzhou University)*

  • Graph Representation Learning via Ladder Gamma Variational Autoencoders

    Arindam Sarkar (Amazon)*; Nikhil Mehta (Duke University); Piyush Rai (IIT Kanpur)

  • Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders

    Yanbin Zhao (Shanghai Jiao Tong University)*; Lu Chen (Shanghai Jiao Tong University); Zhi Chen (Shanghai Jiao
    Tong University); Kai Yu (Shanghai Jiao Tong University)

  • Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder

    Meng Hsuan Yu (Peking University )*; Juntao Li (Peking University); Danyang Liu (Shanghai Jiao Tong University);
    Bo Tang (Southern University of Science and Technology); Haisong Zhang (Tencent AI Lab); Dongyan Zhao (Peking
    University); Rui Yan (Peking University)

  • Vector Quantization-Based Regularization for Autoencoders

    Hanwei Wu (KTH Royal Institute of Technology)*; Markus Flierl (KTH Royal Institute of Technology)

LSTM

  • SalSAC: A Video Saliency Prediction Model with Shuffled Attentions and Correlation-based ConvLSTM

    Xinyi Wu (University of South Carolina); Zhenyao Wu (University of South Carolina); Jinglin Zhang (Nanjing
    University of Information Science and Technology); Lili Ju (University of South Carolina); Song Wang (University of
    South Carolina)*

  • Graph LSTM with Context-Gated Mechanism for Spoken Language Understanding

    Linhao Zhang (Peking University)*; Dehong Ma (Peking University); Xiaodong Zhang (Peking University); Xiaohui
    Yan (Huawei Technologies); Houfeng Wang (Peking University)

  • Self-Attention ConvLSTM for Spatiotemporal Prediction

    Zhihui Lin (Tsinghua University)*; Maomao Li (Tsinghua University); Zhuobin Zheng (Tsinghua University);
    Yangyang Cheng (Tsinghua University); Chun Yuan (Tsinghua University)

  • Bivariate Beta-LSTM

    Kyungwoo Song (KAIST)*; JoonHo Jang (KAIST); Seung jae Shin (KAIST); Il-Chul Moon (KAIST)

  • D2D-LSTM: LSTM-based Path Prediction of Content Diffusion Tree in Device-to-Device Social Networks

    Heng Zhang (Tianjin University)*; Xiaofei Wang (College of Intelligence and Computing,Tianjin University); Jiawen
    Chen (Tianjin University); Chenyang Wang (Tianjin University); Jianxin Li (Deakin University)

  • Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER

    Peng-Hsuan Li (Academia Sinica)*; Tsu-Jui Fu (National Tsing Hua University); Wei-yun Ma (Academia Sinica)

  • CF-LSTM: Cascaded Feature-Based Long Short-Term Networks for Predicting Pedestrian Trajectory

    Yi Xu (Xi’an Jiaotong University); Jing Yang (Xi’an Jiaotong University); Shaoyi Du (Xi’an Jiaotong University)*

Data Augmentation

  • Random Erasing Data Augmentation

    Zhun Zhong (Xiamen University)*; Liang Zheng (Australian National University); Guoliang Kang (CMU); Shaozi Li
    (Xiamen University, China); Yi Yang (UTS)

    The paper proposes a new data augmentation method for CNN training. Random Erasing selects a rectangular region at a
    random position in an image and replaces the original pixels inside it with random values. This injects varying
    degrees of occlusion into the training images, which reduces the risk of overfitting and makes the model somewhat
    more robust to occlusion. Like random cropping and random flipping, Random Erasing can serve as a general data
    augmentation method and achieves good results in classification, detection, and person re-identification. A minimal
    sketch of the erasing step is given below.
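
    A minimal NumPy sketch of the erasing step described above; the erasing probability, area range, and aspect-ratio
    range are illustrative defaults rather than the paper's exact settings.

    ```python
    import numpy as np

    def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
        """With probability p, fill a random rectangle of the image with random values."""
        if np.random.rand() > p:
            return img
        h, w, c = img.shape
        out = img.copy()
        for _ in range(100):                               # retry until a box fits
            area = np.random.uniform(*area_range) * h * w  # target erased area
            aspect = np.random.uniform(*aspect_range)      # target aspect ratio
            eh = int(round(np.sqrt(area * aspect)))
            ew = int(round(np.sqrt(area / aspect)))
            if 0 < eh < h and 0 < ew < w:
                top = np.random.randint(0, h - eh)
                left = np.random.randint(0, w - ew)
                out[top:top + eh, left:left + ew] = np.random.randint(0, 256, size=(eh, ew, c))
                return out
        return img

    # Hypothetical usage on a dummy 64x64 RGB image.
    augmented = random_erasing(np.zeros((64, 64, 3), dtype=np.uint8))
    ```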

  • Effective Data Augmentation with Multi-Domain Learning GANs

    Shin’ya Yamaguchi (NTT)*; Sekitoshi Kanai (NTT Software Innovation Center/Keio University); Takeharu Eda (NTT)

  • CONAN: Complementary Pattern Augmentation for Rare Disease Detection

    Limeng Cui (Penn State University); Siddharth Biswal (Georgia Institute of Technology); Lucas Glass (IQVIA); Greg
    Lever (IQVIA); Jimeng Sun (Georgia Tech); Cao Xiao (IQVIA)*

    Rare diseases affect hundreds of millions of people worldwide, yet they are hard to detect because of their extremely
    low prevalence (ranging from 1 in 1,000 to 1 in 200,000 patients) and because they are frequently misdiagnosed. How
    can we reliably detect diseases with such low prevalence, and how can we further improve detection by exploiting
    patients whose diagnoses are uncertain? The paper proposes CONAN, a complementary pattern augmentation framework for
    rare disease detection that combines ideas from adversarial training and max-margin classification. CONAN first
    learns self-attentive, hierarchical embeddings for patient pattern representation. It then develops a complementary
    generative adversarial network (GAN) that generates candidate positive and negative samples for uncertain patients
    by encouraging a maximum margin between classes. In addition, CONAN has a disease detector that serves as the
    discriminator during adversarial training for identifying rare diseases. CONAN is evaluated on two disease detection
    tasks: for low-prevalence inflammatory bowel disease (IBD) detection it achieves a precision-recall area under the
    curve (PR-AUC) of 0.96, a 50.1% relative improvement over the best baseline; for detection of the rare disease
    idiopathic pulmonary fibrosis (IPF) it achieves 0.22 PR-AUC, a 41.3% relative improvement over the best baseline.

  • Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification

    Hongyu Guo (National Research Council Canada)*

  • Dialog State Tracking with Reinforced Data Augmentation

    Yichun Yin (Noah’s Ark Lab of Huawei)*; Lifeng Shang (Noah’s Ark Lab); Xin Jiang (Huawei Noah’s Ark Lab); Xiao
    Chen (Huawei Noah’s Ark Lab); Qun Liu (Huawei Noah’s Ark Lab)

  • Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification With K-means Features

    Tao Gui (Fudan University)*; Lizhi Qing (Fudan University); Qi Zhang (Fudan University); Jiacheng Ye (Fudan
    University); Hang Yan (Fudan University); Zichu Fei (Fudan University); Xuanjing Huang (Fudan University, China)

  • Incorporating Label Embedding and Feature Augmentation for Multi-Dimensional Classification

    Haobo Wang (Zhejiang University)*; Chen Chen (Zhejiang University); Weiwei Liu (Wuhan University); Ke Chen (
    Zhejiang University); Tianlei Hu (Zhejiang University); Gang Chen (Zhejiang University)

    Feature augmentation, which manipulates the feature space by integrating label information, is one of the most
    popular strategies for multi-dimensional classification (MDC). However, vanilla feature augmentation methods fail to
    account for intra-class exclusiveness and may degrade performance. To fill this gap, the paper proposes a novel
    neural-network-based model that seamlessly integrates Label Embedding and Feature Augmentation (LEFA) to learn label
    correlations. Specifically, based on an attentional factorization machine, a cross-correlation-aware network is
    introduced to learn low-dimensional label representations that simultaneously capture inter-class correlations and
    intra-class exclusiveness. The learned latent label vectors are then used to augment the original feature space.
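
    A toy sketch of the general idea of augmenting the original feature space with a learned latent label vector, not
    the LEFA architecture itself; the function name, shapes, and uniform weights below are made up for illustration.

    ```python
    import numpy as np

    def augment_with_label_embedding(x, label_embeddings, label_weights):
        """Concatenate a weighted combination of label embeddings onto the raw features."""
        latent_label = label_weights @ label_embeddings   # (k,) latent label vector
        return np.concatenate([x, latent_label])          # (d + k,) augmented features

    # Hypothetical shapes: d = 8 raw features, L = 5 labels, k = 3 embedding dimensions.
    x = np.random.randn(8)
    E = np.random.randn(5, 3)       # learned low-dimensional label representations
    w = np.full(5, 1.0 / 5)         # e.g., attention weights over the label set
    x_aug = augment_with_label_embedding(x, E, w)   # shape (11,)
    ```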