Deep learning models for electrocardiograms are susceptible to adversarial attack
By 小柯机器人 | Published 2020/3/22 19:49:59

Researchers at New York University, including Rajesh Ranganath and Xintian Han, have found that deep learning models for electrocardiograms are susceptible to adversarial attack. The paper was published online in Nature Medicine on March 9, 2020.

The researchers note that electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, making automated interpretation strategies necessary. In recent years, deep neural networks have been used to analyze ECG tracings automatically, outperforming physicians in detecting certain rhythm irregularities. However, deep learning classifiers are susceptible to adversarial examples: inputs crafted from raw data to fool the classifier into assigning them to the wrong class, while remaining imperceptible to the human eye. Adversarial examples have also been created for medical tasks. Traditional attack methods, however, do not extend directly to ECG signals, because they introduce physiologically implausible square-wave artifacts.
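To see why traditional attacks produce square-wave artifacts, consider the Fast Gradient Sign Method (FGSM), a standard way to craft adversarial examples. The sketch below is illustrative only (the toy gradient and parameter values are assumptions, not from the paper): because FGSM perturbs every sample by exactly ±eps, the resulting perturbation jumps between two levels like a square wave, which is easy to spot on an ECG trace.

```python
import numpy as np

def fgsm_perturbation(grad, eps=0.1):
    # Classic FGSM step: move each sample by +/-eps in the direction
    # that increases the classifier's loss.
    return eps * np.sign(grad)

# A toy loss gradient over a 1-second trace sampled at 200 Hz
# (stand-in for the gradient a real ECG classifier would produce).
rng = np.random.default_rng(0)
grad = rng.standard_normal(200)
delta = fgsm_perturbation(grad)

# The perturbation takes only the two values -eps and +eps, so it
# looks like a square wave: physiologically implausible on an ECG.
print(np.unique(delta))
```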
 
The researchers developed a method to construct smoothed adversarial examples for ECG tracings, and showed that a deep learning model for detecting arrhythmias from single-lead ECGs is vulnerable to this type of attack. They also provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
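One simple way to remove square-wave artifacts is to low-pass filter the perturbation. The sketch below is a minimal illustration of that idea, not the paper's actual method: it smooths an FGSM-style perturbation with a Gaussian kernel (the `gaussian_smooth` helper and its parameters are hypothetical), so the perturbation varies gradually while staying within the eps budget.

```python
import numpy as np

def gaussian_smooth(delta, sigma=5.0, radius=15):
    # Convolve with a normalized Gaussian kernel; because the kernel is
    # non-negative and sums to 1, the smoothed perturbation never
    # exceeds the original eps bound.
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(delta, kernel, mode="same")

rng = np.random.default_rng(1)
eps = 0.1
raw = eps * np.sign(rng.standard_normal(200))   # square-wave FGSM step
smooth = gaussian_smooth(raw)                   # gradual, ECG-friendlier
```

The smoothed perturbation has much smaller sample-to-sample jumps than the raw ±eps square wave, which is the property that makes such perturbations harder for a human reader to notice.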
 
Appendix: original English abstract

Title: Deep learning models for electrocardiograms are susceptible to adversarial attack

Author: Xintian Han, Yuxuan Hu, Luca Foschini, Larry Chinitz, Lior Jankelson, Rajesh Ranganath

Issue&Volume: 2020-03-09

Abstract: Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities1. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye2,3. Adversarial examples have also been created for medical-related tasks4,5. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG6 is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.

DOI: 10.1038/s41591-020-0791-x

Source: https://www.nature.com/articles/s41591-020-0791-x

Journal information

Nature Medicine: founded in 1995; published by Springer Nature. Latest impact factor: 30.641
Official website: https://www.nature.com/nm/
Submission link: https://mts-nmed.nature.com/cgi-bin/main.plex