Reassembling Consistent-Complementary Constraints in Triplet Network for Multi-view Learning of Medical Images

Xingyue Wang, Jiansheng Fang, Na Zeng, Jingqi Huang, Hanpei Miao, W. Kwapong, Ziyi Zhang, Shuting Zhang, Jiang Liu
{"title":"基于三重网络的一致性互补约束重组医学图像多视图学习","authors":"Xingyue Wang, Jiansheng Fang, Na Zeng, Jingqi Huang, Hanpei Miao, W. Kwapong, Ziyi Zhang, Shuting Zhang, Jiang Liu","doi":"10.1109/BIBM55620.2022.9995213","DOIUrl":null,"url":null,"abstract":"Existing multi-view learning methods based on the information bottleneck principle exhibit impressing generalization by capturing inter-view consistency and complementarity. They leverage cross-view joint information (consistency) and view-specific information (complementarity) while discarding redundant information. By fusing visual features, multi-view learning methods help medical image processing to produce more reliable predictions. However, multi-views of medical images often have low consistency and high complementarity due to modal differences in imaging or different projection depths, thus challenging existing methods to balance them to the maximal extent. To mitigate such an issue, we improve the information bottleneck (IB) loss function with a balanced regularization term, termed IBB loss, reassembling the constraints of multi-view consistency and complementarity. In particular, the balanced regularization term with a unique trade-off factor in IBB loss helps minimize the mutual information on consistency and complementarity to strike a balance. In addition, we devise a triplet multi-view network named TM net to learn the consistent and complementary features from multi-view medical images. By evaluating two datasets, we demonstrate the superiority of our method against several counterparts. The extensive experiments also confirm that our IBB loss significantly improves multi-view learning in medical images.","PeriodicalId":210337,"journal":{"name":"2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reassembling Consistent-Complementary Constraints in Triplet Network for Multi-view Learning of Medical Images\",\"authors\":\"Xingyue Wang, Jiansheng Fang, Na Zeng, Jingqi Huang, Hanpei Miao, W. Kwapong, Ziyi Zhang, Shuting Zhang, Jiang Liu\",\"doi\":\"10.1109/BIBM55620.2022.9995213\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Existing multi-view learning methods based on the information bottleneck principle exhibit impressing generalization by capturing inter-view consistency and complementarity. They leverage cross-view joint information (consistency) and view-specific information (complementarity) while discarding redundant information. By fusing visual features, multi-view learning methods help medical image processing to produce more reliable predictions. However, multi-views of medical images often have low consistency and high complementarity due to modal differences in imaging or different projection depths, thus challenging existing methods to balance them to the maximal extent. To mitigate such an issue, we improve the information bottleneck (IB) loss function with a balanced regularization term, termed IBB loss, reassembling the constraints of multi-view consistency and complementarity. In particular, the balanced regularization term with a unique trade-off factor in IBB loss helps minimize the mutual information on consistency and complementarity to strike a balance. 
In addition, we devise a triplet multi-view network named TM net to learn the consistent and complementary features from multi-view medical images. By evaluating two datasets, we demonstrate the superiority of our method against several counterparts. The extensive experiments also confirm that our IBB loss significantly improves multi-view learning in medical images.\",\"PeriodicalId\":210337,\"journal\":{\"name\":\"2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BIBM55620.2022.9995213\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BIBM55620.2022.9995213","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Existing multi-view learning methods based on the information bottleneck principle exhibit impressive generalization by capturing inter-view consistency and complementarity. They leverage cross-view joint information (consistency) and view-specific information (complementarity) while discarding redundant information. By fusing visual features, multi-view learning methods help medical image processing produce more reliable predictions. However, multiple views of medical images often have low consistency and high complementarity due to modal differences in imaging or different projection depths, which challenges existing methods to balance them to the maximal extent. To mitigate this issue, we improve the information bottleneck (IB) loss function with a balanced regularization term, termed IBB loss, reassembling the constraints of multi-view consistency and complementarity. In particular, the balanced regularization term with a unique trade-off factor in IBB loss helps minimize the mutual information on consistency and complementarity to strike a balance. In addition, we devise a triplet multi-view network named TM net to learn the consistent and complementary features from multi-view medical images. By evaluating two datasets, we demonstrate the superiority of our method against several counterparts. The extensive experiments also confirm that our IBB loss significantly improves multi-view learning in medical images.
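The abstract does not give the exact form of the IBB loss, so the sketch below only illustrates the general idea it describes: an information-bottleneck objective for two views augmented with a balanced regularization term, where a single trade-off factor weighs a consistency term against a complementarity (view-specific compression) term. The class and argument names (`TwoViewIBSketch`, `ibb_style_loss`, `lam`, `beta`) and the specific KL-based estimators are illustrative assumptions, not the authors' implementation or the TM net architecture.

```python
# Hypothetical sketch of an IB-style multi-view loss with a balanced
# regularization term. This is NOT the paper's IBB loss; it only illustrates
# trading off a consistency term against a complementarity/compression term.
import torch
import torch.nn as nn
import torch.nn.functional as F


def kl_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over batch."""
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()


def kl_gaussians(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ) for diagonal Gaussians."""
    return 0.5 * (
        logvar2 - logvar1
        + (logvar1.exp() + (mu1 - mu2).pow(2)) / logvar2.exp()
        - 1.0
    ).sum(dim=1).mean()


class ViewEncoder(nn.Module):
    """Encodes one view into a diagonal-Gaussian latent code (mu, logvar)."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)


class TwoViewIBSketch(nn.Module):
    """Two view encoders plus a classifier on the fused, sampled latent codes."""
    def __init__(self, in_dims=(256, 256), z_dim=32, n_classes=2):
        super().__init__()
        self.enc_a = ViewEncoder(in_dims[0], z_dim)
        self.enc_b = ViewEncoder(in_dims[1], z_dim)
        self.classifier = nn.Linear(2 * z_dim, n_classes)

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, xa, xb):
        mua, lva = self.enc_a(xa)
        mub, lvb = self.enc_b(xb)
        za = self.reparameterize(mua, lva)
        zb = self.reparameterize(mub, lvb)
        logits = self.classifier(torch.cat([za, zb], dim=1))
        return logits, (mua, lva), (mub, lvb)


def ibb_style_loss(logits, y, stats_a, stats_b, beta=1e-3, lam=0.5):
    """Task loss + IB compression + a balanced regularizer.

    `lam` plays the role of a trade-off factor: it weights a consistency term
    (agreement between the two view posteriors) against a complementarity term
    (each view keeping its own compressed, view-specific code).
    """
    mua, lva = stats_a
    mub, lvb = stats_b
    task = F.cross_entropy(logits, y)                       # surrogate for -I(Z; Y)
    compress = kl_standard_normal(mua, lva) + kl_standard_normal(mub, lvb)
    consistency = 0.5 * (kl_gaussians(mua, lva, mub, lvb)
                         + kl_gaussians(mub, lvb, mua, lva))
    return task + beta * (lam * consistency + (1.0 - lam) * compress)


if __name__ == "__main__":
    model = TwoViewIBSketch()
    xa, xb = torch.randn(8, 256), torch.randn(8, 256)
    y = torch.randint(0, 2, (8,))
    logits, sa, sb = model(xa, xb)
    print(ibb_style_loss(logits, y, sa, sb).item())
```

With `lam` close to 1 the objective pushes the two view posteriors toward agreement (favoring consistency), while with `lam` close to 0 it mainly compresses each view independently (favoring complementarity); this is one simple way to realize the balance the abstract attributes to the trade-off factor.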