Related Papers in ICLR 2022 (2022.04.25)

2022/02/25 · paper list

Accepted paper list

Anomaly detection [anomaly, outlier, out-of-distribution, one-class]

  • Anomaly Detection for Tabular Data with Internal Contrastive Learning

    Tom Shenkar, Lior Wolf

    Abstract: We consider the task of finding out-of-class samples in tabular data, where almost nothing can be assumed about the structure of the data. To capture the structure of the samples of the single training class, we learn mappings that maximize the mutual information between each sample and the part that is masked out. The mappings are learned with a contrastive loss that considers one sample at a time. Once trained, we score a test sample by measuring whether the learned mappings yield a small contrastive loss on the masked-out parts of that sample. Our experiments show a considerable accuracy advantage over the literature, and the same set of default hyperparameters provides state-of-the-art results across the benchmarks.

    One-sentence summary: An anomaly detection method based on the ability to predict the masked-out parts of a vector.
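
    A minimal PyTorch sketch of this per-sample ("internal") contrastive idea may be useful; the network sizes, the scalar masking scheme, and the temperature are illustrative assumptions rather than the paper's exact configuration:

```python
# Hedged sketch: score a tabular sample by how well the masked-out part of
# the vector is predicted, via a one-sample InfoNCE loss. Architecture and
# hyperparameters are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, H = 8, 32  # number of features, embedding width (assumed values)

f = nn.Sequential(nn.Linear(D - 1, H), nn.ReLU(), nn.Linear(H, H))  # observed part
g = nn.Sequential(nn.Linear(2, H), nn.ReLU(), nn.Linear(H, H))      # (position, masked value)

def internal_contrastive_loss(x):
    """InfoNCE over the D (observed, masked) splits of a single sample x."""
    idx = torch.arange(D)
    obs = torch.stack([x[idx != j] for j in range(D)])                          # (D, D-1)
    pos = torch.stack([torch.tensor([j / D, x[j].item()]) for j in range(D)])   # (D, 2)
    zf = F.normalize(f(obs), dim=1)
    zg = F.normalize(g(pos), dim=1)
    logits = zf @ zg.T / 0.1  # temperature 0.1; row j's positive is column j,
    return F.cross_entropy(logits, torch.arange(D))  # other positions act as negatives

# Training minimizes the loss over normal samples only; at test time the loss
# itself serves as the anomaly score (high loss = poorly predicted masked parts).
x = torch.randn(D)
print(float(internal_contrastive_loss(x)))
```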

  • Igeood: An Information Geometry Approach to Out-of-Distribution Detection

    Eduardo Dadalto Camara Gomes, Florence Alberge, Pierre Duhamel, Pablo Piantanida

    Abstract: Reliable out-of-distribution (OOD) detection is fundamental to implementing safer modern machine learning (ML) systems. In this paper, we introduce Igeood, an effective method for detecting OOD samples. Igeood applies to any pre-trained neural network, works under various degrees of access to the ML model, requires neither OOD samples nor assumptions about the OOD data, but can also benefit from OOD samples when available. By building on the geodesic (Fisher-Rao) distance between the underlying data distributions, our discriminator can combine confidence scores from the logits outputs and from the learned features of a deep neural network. Empirically, we show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.

    One-sentence summary: We propose a flexible and effective method for out-of-distribution detection built on the Fisher-Rao distance between probability distributions.
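
    Since the Fisher-Rao distance between categorical (softmax) distributions has a simple closed form, a small sketch may clarify the logits-based part of the score; the per-class "centroid" scoring rule below is an illustrative simplification and omits the paper's feature-space distances:

```python
# Hedged sketch of the geodesic (Fisher-Rao) distance underlying the Igeood
# logits score; the centroid-based scoring rule is an assumption for
# illustration, not the paper's full method.
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def fisher_rao(p, q, eps=1e-12):
    """Closed form on the probability simplex: d(p, q) = 2 arccos(sum_i sqrt(p_i q_i))."""
    bc = np.clip(np.sum(np.sqrt(p * q)), eps, 1.0)  # Bhattacharyya coefficient
    return 2.0 * np.arccos(bc)

def igeood_style_score(logits, class_centroids, T=1.0):
    """Negative distance to the closest per-class reference distribution;
    more negative = farther from every ID class = more likely OOD."""
    p = softmax(logits, T)
    return -min(fisher_rao(p, c) for c in class_centroids)

# Toy usage: centroids could be the mean softmax output of each training class.
centroids = [softmax([5, 0, 0]), softmax([0, 5, 0]), softmax([0, 0, 5])]
print(igeood_style_score([4.0, 0.5, 0.2], centroids))  # near class 0: score close to 0
print(igeood_style_score([1.0, 1.0, 1.0], centroids))  # far from all classes: lower score
```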

  • VOS: Learning What You Don’t Know by Virtual Outlier Synthesis

    Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li

    Abstract: Out-of-distribution (OOD) detection has recently received much attention because of its importance for the safe deployment of neural networks. A key challenge is that models lack supervision signals from unknown data and may therefore produce overconfident predictions on OOD inputs. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible in practice. In this paper, we present VOS, a novel OOD detection framework that meaningfully regularizes the model's decision boundary during training by adaptively synthesizing virtual outliers. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distributions estimated in feature space. In addition, we introduce a novel unknown-aware training objective that contrastively shapes the uncertainty space between in-distribution data and the synthesized outliers. VOS achieves competitive performance on both object detection and image classification models, reducing FPR95 by up to 7.87% compared with the previous best method on object detectors. Code is available at https://github.com/deeplearning-wisc/vos.
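
    The sampling step is concrete enough for a short sketch: fit a class-conditional Gaussian in feature space, oversample from it, and keep only the lowest-likelihood candidates. The covariance regularizer and keep fraction below are assumed values, not the paper's:

```python
# Hedged sketch of VOS-style virtual outlier synthesis for one class.
# Dimensions, the covariance regularizer, and keep_frac are illustrative.
import numpy as np
from numpy.random import default_rng

rng = default_rng(0)

def synthesize_virtual_outliers(feats, n_candidates=1000, keep_frac=0.05):
    """feats: (N, D) in-distribution features of a single class."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-4 * np.eye(feats.shape[1])
    cand = rng.multivariate_normal(mu, cov, size=n_candidates)
    # Gaussian log-density up to a constant: -(x - mu)^T Sigma^{-1} (x - mu) / 2
    diff = cand - mu
    logp = -0.5 * np.einsum('nd,dk,nk->n', diff, np.linalg.inv(cov), diff)
    k = max(1, int(keep_frac * n_candidates))
    return cand[np.argsort(logp)[:k]]  # the k lowest-likelihood candidates

# Toy usage: 500 features of one class in a 16-d feature space; the returned
# virtual outliers would feed the unknown-aware training objective.
feats = rng.normal(size=(500, 16))
print(synthesize_virtual_outliers(feats).shape)  # (50, 16)
```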

Time series

  • Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting

    Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, Schahram Dustdar

  • CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting

    Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, Steven Hoi

  • Huber Additive Models for Non-stationary Time Series Analysis

    Yingjie Wang, Xianrui Zhong, Fengxiang He, Hong Chen, Dacheng Tao

  • DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting

    Wei Fan, Shun Zheng, Xiaohan Yi, Wei Cao, Yanjie Fu, Jiang Bian, Tie-Yan Liu

  • Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift

    Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, Jaegul Choo

  • Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification

    Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang

  • T-WaveNet: A Tree-Structured Wavelet Neural Network for Time Series Signal Analysis

    Minhao Liu, Ailing Zeng, Qiuxia Lai, Ruiyuan Gao, Min Li, Jing Qin, Qiang Xu

  • Graph-Guided Network for Irregularly Sampled Multivariate Time Series

    Xiang Zhang, Marko Zeman, Theodoros Tsiligkaridis, Marinka Zitnik

  • Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series

    Satya Narayan Shukla, Benjamin Marlin

  • Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks

    Andrea Cini, Ivan Marisca, Cesare Alippi

  • Coherence-based Label Propagation over Time Series for Accelerated Active Learning

    Yooju Shin, Susik Yoon, Sundong Kim, Hwanjun Song, Jae-Gil Lee, Byung Suk Lee

  • PSA-GAN: Progressive Self Attention GANs for Synthetic Time Series

    Paul Jeha, Michael Bohlke-Schneider, Pedro Mercado, Shubham Kapoor, Rajbir Singh Nirwan, Valentin Flunkert, Jan Gasthaus, Tim Januschowski

Sequence learning

  • Efficiently Modeling Long Sequences with Structured State Spaces

    Albert Gu, Karan Goel, Christopher Re

  • Long Expressive Memory for Sequence Modeling

    T. Konstantin Rusch, Siddhartha Mishra, N. Benjamin Erichson, Michael W. Mahoney

  • [Worth a closer look] On the approximation properties of recurrent encoder-decoder architectures

    Zhong Li, Haotian Jiang, Qianxiao Li

    Abstract: Encoder-decoder architectures have recently gained popularity in sequence-to-sequence modeling and feature in state-of-the-art models such as transformers. However, the mathematical understanding of how they work remains limited. In this paper, we study the approximation properties of recurrent encoder-decoder architectures. Prior work established theoretical results for RNNs in the linear setting, where the approximation capability can be related to the smoothness and memory of the target temporal relationship. Here, we find that the encoder and the decoder together form a particular "temporal product structure" that determines the approximation efficiency. Moreover, encoder-decoder architectures generalize RNNs with the ability to learn time-inhomogeneous relationships.

    One-sentence summary: Derives the approximation properties of recurrent encoder-decoder architectures, where the induced temporal product structure further characterizes which temporal relationships can be learned efficiently.
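
    A schematic rendering of the linear setting may make the "temporal product structure" concrete; the notation below is our own assumption, not necessarily the paper's exact statement:

```latex
% In the linear setting, the target is a linear functional of the input path,
% and the encoder-decoder induces a separable ("temporal product") kernel:
\[
  H_t(x) = \int_0^t \rho(t, s)\, x(s)\, \mathrm{d}s,
  \qquad
  \rho(t, s) \approx \sum_{i=1}^{r} \phi_i(t)\, \psi_i(s).
\]
% A convolution-type kernel \rho(t, s) = \rho(t - s) recovers the RNN case,
% which is why the encoder-decoder can additionally learn time-inhomogeneous
% relationships: \rho may depend on t and s separately.
```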

  • Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification

    Bing Su, Ji-Rong Wen

Interpretable/interpretability

  • Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset

    Leon Sixt, Martin Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf

  • Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space.

    Seyed Omid Davoudi, Majid Komeili

  • Explaining Point Processes by Learning Interpretable Temporal Logic Rules

    Shuang Li, Mingquan Feng, Lu Wang, Abdelmajid Essofi, Yufeng Cao, Junchi Yan, Le Song

  • Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

    Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John M. Pauly, Morteza Mardani, Mert Pilanci

  • Model Agnostic Interpretability for Multiple Instance Learning

    Joseph Early, Christine Evers, Sarvapali Ramchurn

  • POETREE: Interpretable Policy Learning with Adaptive Decision Trees

    Alizée Pace, Alex Chan, Mihaela van der Schaar

  • NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning

    Chun-Hao Chang, Rich Caruana, Anna Goldenberg