Chen Chen, Qianfei Liu, Renpeng Xu, Ying Zhang, Huiru Wang, Qingmin Yu
{"title":"通过具有结构信息的 L0/1 软边际损失实现多视角支持向量机分类器","authors":"Chen Chen , Qianfei Liu , Renpeng Xu , Ying Zhang , Huiru Wang , Qingmin Yu","doi":"10.1016/j.inffus.2024.102733","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-view learning seeks to leverage the advantages of various views to complement each other and make full use of the latent information in the data. Nevertheless, effectively exploring and utilizing common and complementary information across diverse views remains challenging. In this paper, we propose two multi-view classifiers: multi-view support vector machine via <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span> soft-margin loss (Mv<span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span>-SVM) and structural Mv<span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span>-SVM (Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM). The key difference between them is that Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM additionally fuses structural information, which simultaneously satisfies the consensus and complementarity principles. Despite the discrete nature inherent in the <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span> soft-margin loss, we successfully establish the optimality theory for Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM. This includes demonstrating the existence of optimal solutions and elucidating their relationships with P-stationary points. Drawing inspiration from the P-stationary point optimality condition, we design and integrate a working set strategy into the proximal alternating direction method of multipliers. This integration significantly enhances the overall computational speed and diminishes the number of support vectors. Last but not least, numerical experiments show that our suggested models perform exceptionally well and have faster computational speed, affirming the rationality and effectiveness of our methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"115 ","pages":"Article 102733"},"PeriodicalIF":14.7000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-view support vector machine classifier via L0/1 soft-margin loss with structural information\",\"authors\":\"Chen Chen , Qianfei Liu , Renpeng Xu , Ying Zhang , Huiru Wang , Qingmin Yu\",\"doi\":\"10.1016/j.inffus.2024.102733\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-view learning seeks to leverage the advantages of various views to complement each other and make full use of the latent information in the data. Nevertheless, effectively exploring and utilizing common and complementary information across diverse views remains challenging. 
In this paper, we propose two multi-view classifiers: multi-view support vector machine via <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span> soft-margin loss (Mv<span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span>-SVM) and structural Mv<span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span>-SVM (Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM). The key difference between them is that Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM additionally fuses structural information, which simultaneously satisfies the consensus and complementarity principles. Despite the discrete nature inherent in the <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></math></span> soft-margin loss, we successfully establish the optimality theory for Mv<span><math><mrow><mi>S</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>0</mn><mo>/</mo><mn>1</mn></mrow></msub></mrow></math></span>-SVM. This includes demonstrating the existence of optimal solutions and elucidating their relationships with P-stationary points. Drawing inspiration from the P-stationary point optimality condition, we design and integrate a working set strategy into the proximal alternating direction method of multipliers. This integration significantly enhances the overall computational speed and diminishes the number of support vectors. Last but not least, numerical experiments show that our suggested models perform exceptionally well and have faster computational speed, affirming the rationality and effectiveness of our methods.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"115 \",\"pages\":\"Article 102733\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253524005116\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253524005116","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Multi-view support vector machine classifier via L0/1 soft-margin loss with structural information
Multi-view learning seeks to leverage the advantages of different views so that they complement each other and make full use of the latent information in the data. Nevertheless, effectively exploring and exploiting the common and complementary information across diverse views remains challenging. In this paper, we propose two multi-view classifiers: the multi-view support vector machine via the L0/1 soft-margin loss (MvL0/1-SVM) and the structural MvL0/1-SVM (MvSL0/1-SVM). The key difference between them is that MvSL0/1-SVM additionally fuses structural information, so that it simultaneously satisfies the consensus and complementarity principles. Despite the discrete nature inherent in the L0/1 soft-margin loss, we establish the optimality theory for MvSL0/1-SVM, demonstrating the existence of optimal solutions and elucidating their relationship with P-stationary points. Drawing inspiration from the P-stationary-point optimality condition, we design a working-set strategy and integrate it into the proximal alternating direction method of multipliers. This integration significantly improves the overall computational speed and reduces the number of support vectors. Finally, numerical experiments show that the proposed models perform exceptionally well and run faster, affirming the rationality and effectiveness of our methods.
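For readers unfamiliar with the loss named in the abstract, the following is a minimal sketch of the standard single-view L0/1-SVM background that such models build on; the multi-view coupling and structural-information terms introduced in the paper are not reproduced here. The L0/1 soft-margin loss is the indicator of a positive margin violation, plugged into the usual regularized hinge-style objective:

\[
\ell_{0/1}(t) = \begin{cases} 1, & t > 0, \\ 0, & t \le 0, \end{cases}
\qquad
\min_{w,\,b}\ \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \ell_{0/1}\!\left(1 - y_i\,(w^\top x_i + b)\right),
\]

so the data term simply counts the training samples that fall inside or on the wrong side of the margin. A proximal ADMM applied to such a loss typically requires the proximal operator of \(\lambda\,\ell_{0/1}\), which has a closed form obtained by comparing the two candidate minimizers \(t = z\) and \(t = 0\):

\[
\operatorname{prox}_{\lambda \ell_{0/1}}(z) = \arg\min_{t}\ \lambda\,\ell_{0/1}(t) + \frac{1}{2}(t - z)^2 =
\begin{cases} 0, & 0 < z \le \sqrt{2\lambda}, \\ z, & \text{otherwise.} \end{cases}
\]

This hard-thresholding structure is the kind of property that P-stationarity conditions and working-set selection commonly exploit in the L0/1-SVM line of work; the precise multi-view conditions used in the paper are stated in the full text.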
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, with a focus on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.