Cite this article: JI Zhong, WANG Sidi, YU Yunlong. Multimodal cross-decoupling for few-shot learning[J]. Journal of National University of Defense Technology, 2024, 46(1): 12-21.
|
|
|
Multimodal cross-decoupling for few-shot learning
JI Zhong1, WANG Sidi1, YU Yunlong2
(1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; 2. College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou 310027, China)
|
Abstract:
Current multimodal few-shot learning methods overlook the impact of inter-attribute differences on correctly recognizing sample categories. To address this problem, a multimodal cross-decoupling method is proposed, which decouples the semantic features of different attributes and reconstructs the essential category features of samples, thereby alleviating the influence of attribute differences on category discrimination. Extensive experiments on two benchmark few-shot datasets with large attribute discrepancies, MIT-States and C-GQA, show that the proposed method achieves considerable performance gains over existing approaches, verifying its effectiveness and indicating that multimodal cross-decoupling can improve the classification performance of few-shot recognition.
Keywords: few-shot learning; multimodal learning; feature decoupling; attribute
DOI: 10.11887/j.cn.202401002
Received: 2022-06-21
Funding: Key Research and Development Program of Zhejiang Province (2021C01119); National Natural Science Foundation of China (62176178, 62002320, U19B2043)
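To make the idea in the abstract concrete, below is a minimal PyTorch sketch of the general pattern it describes: split a sample embedding into an attribute-specific part and a category-essential part, reconstruct the original embedding from the two parts, and classify few-shot episodes with prototypes built only from the category part. This is an illustrative sketch, not the paper's actual architecture; the semantic (text-attribute) branch of the multimodal method is omitted, and all module names, dimensions, and loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossDecoupler(nn.Module):
    """Hypothetical decoupling module: splits a feature vector into an
    attribute-specific part and a category-essential part, then reconstructs
    the input feature from the concatenation of the two parts."""
    def __init__(self, feat_dim=512, part_dim=256):
        super().__init__()
        self.attr_head = nn.Linear(feat_dim, part_dim)    # attribute-specific branch
        self.cls_head = nn.Linear(feat_dim, part_dim)     # category-essential branch
        self.decoder = nn.Linear(2 * part_dim, feat_dim)  # feature reconstruction

    def forward(self, feats):
        attr = self.attr_head(feats)
        cls = self.cls_head(feats)
        recon = self.decoder(torch.cat([attr, cls], dim=-1))
        return attr, cls, recon

def episode_loss(model, support, support_labels, query, query_labels, n_way):
    """Prototype classification on the decoupled category part, plus a
    reconstruction term that keeps the split information-preserving.
    Assumes episodic labels are re-indexed to 0..n_way-1."""
    _, s_cls, s_recon = model(support)
    _, q_cls, q_recon = model(query)
    # class prototypes from the category-essential part of the support set
    protos = torch.stack([s_cls[support_labels == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(q_cls, protos)           # negative distance as similarity
    loss_cls = F.cross_entropy(logits, query_labels)
    loss_rec = F.mse_loss(s_recon, support) + F.mse_loss(q_recon, query)
    return loss_cls + 0.1 * loss_rec               # 0.1 is an assumed weight
```

Classifying on the category-essential part while forcing the two parts to jointly reconstruct the input is one simple way to keep attribute variation out of the class decision without discarding it entirely.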
|
|
|
|
|
|