Enhancing Adversarial Robustness in Object Detection: A Gradient Alignment Approach
Author:
Affiliation:

1. School of Computer Science and Technology, Anhui University of Technology; 2. College of Electronic Countermeasures, National University of Defense Technology

Author biography:

Corresponding author:

CLC number:

TP391

Fund projects:

Natural Science Foundation of Anhui Province (2208085MF168); Scientific Research Program of Anhui Provincial Universities (2023AH040149)




    Abstract:

    To address the insufficient adversarial robustness of object detection models under adversarial attack, an enhancement method based on gradient alignment is proposed. During adversarial training, the method constructs a composite loss function combining an adversarial loss and an alignment loss, and introduces a gradient alignment strategy that constrains the difference between the gradients of adversarial and clean examples. It further combines the supervisory signal of knowledge distillation with the representational capability of self-supervised learning to maximize the feature similarity between adversarial and clean examples. Experimental results on the PASCAL VOC and MS COCO datasets demonstrate that the proposed method effectively improves the model's robust accuracy on adversarial examples.
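The abstract sketches a composite objective: an adversarial loss plus an alignment term penalizing the gradient gap between clean and adversarial examples. The paper's actual formulation is not given here, so the following is only an illustrative NumPy sketch under stated assumptions: a logistic-regression stand-in for the detector, one-step FGSM as the attack, and `1 - cosine similarity` of the two gradients as a hypothetical alignment loss (all names and choices are assumptions, not the authors' method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, x, y):
    """Binary cross-entropy of a linear model and its gradient w.r.t. the weights w."""
    p = sigmoid(w @ x)
    eps = 1e-12
    loss = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = (p - y) * x  # d(loss)/d(w) for the logistic model
    return loss, grad

def fgsm(w, x, y, budget=0.1):
    """One-step FGSM: perturb x along the sign of d(loss)/d(x)."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + budget * np.sign(grad_x)

def composite_loss(w, x, y, lam=1.0, budget=0.1):
    """Adversarial loss plus a gradient-alignment penalty (1 - cosine similarity).

    The penalty is minimized when the loss gradients on the adversarial and
    clean inputs point in the same direction, i.e. when they are "aligned".
    """
    x_adv = fgsm(w, x, y, budget)
    adv_loss, g_adv = loss_and_grad(w, x_adv, y)
    _, g_clean = loss_and_grad(w, x, y)
    cos = g_adv @ g_clean / (np.linalg.norm(g_adv) * np.linalg.norm(g_clean) + 1e-12)
    align_loss = 1.0 - cos
    return adv_loss + lam * align_loss, cos

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # toy model weights
x = rng.normal(size=4)   # one clean input
y = 1.0                  # its label
total, cos = composite_loss(w, x, y)
print(f"composite loss = {total:.4f}, gradient cosine = {cos:.4f}")
```

In a real detector the gradients would come from autodiff over the detection losses, and the feature-similarity term driven by knowledge distillation and self-supervision (mentioned in the abstract) would be a further component of the objective; neither is modeled in this toy sketch.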

History
  • Received: 2025-03-21
  • Revised: 2025-10-13
  • Accepted: 2025-06-12