{"title":"从三重比较包中进行多实例学习","authors":"Senlin Shu, Deng-Bao Wang, Suqin Yuan, Hongxin Wei, Jiuchuan Jiang, Lei Feng, Min-Ling Zhang","doi":"10.1145/3638776","DOIUrl":null,"url":null,"abstract":"<p><i>Multiple-instance learning</i> (MIL) solves the problem where training instances are grouped in bags, and a binary (positive or negative) label is provided for each bag. Most of the existing MIL studies need fully labeled bags for training an effective classifier, while it could be quite hard to collect such data in many real-world scenarios, due to the high cost of data labeling process. Fortunately, unlike fully labeled data, <i>triplet comparison data</i> can be collected in a more accurate and human-friendly way. Therefore, in this paper, we for the first time investigate MIL from <i>only triplet comparison bags</i>, where a triplet (<i>X<sub>a</sub></i>, <i>X<sub>b</sub></i>, <i>X<sub>c</sub></i>) contains the weak supervision information that bag <i>X<sub>a</sub></i> is more similar to <i>X<sub>b</sub></i> than to <i>X<sub>c</sub></i>. To solve this problem, we propose to train a bag-level classifier by the <i>empirical risk minimization</i> framework and theoretically provide a generalization error bound. We also show that a convex formulation can be obtained only when specific convex binary losses such as the square loss and the double hinge loss are used. Extensive experiments validate that our proposed method significantly outperforms other baselines.</p>","PeriodicalId":49249,"journal":{"name":"ACM Transactions on Knowledge Discovery from Data","volume":"30 8","pages":""},"PeriodicalIF":4.0000,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiple-Instance Learning from Triplet Comparison Bags\",\"authors\":\"Senlin Shu, Deng-Bao Wang, Suqin Yuan, Hongxin Wei, Jiuchuan Jiang, Lei Feng, Min-Ling Zhang\",\"doi\":\"10.1145/3638776\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><i>Multiple-instance learning</i> (MIL) solves the problem where training instances are grouped in bags, and a binary (positive or negative) label is provided for each bag. Most of the existing MIL studies need fully labeled bags for training an effective classifier, while it could be quite hard to collect such data in many real-world scenarios, due to the high cost of data labeling process. Fortunately, unlike fully labeled data, <i>triplet comparison data</i> can be collected in a more accurate and human-friendly way. Therefore, in this paper, we for the first time investigate MIL from <i>only triplet comparison bags</i>, where a triplet (<i>X<sub>a</sub></i>, <i>X<sub>b</sub></i>, <i>X<sub>c</sub></i>) contains the weak supervision information that bag <i>X<sub>a</sub></i> is more similar to <i>X<sub>b</sub></i> than to <i>X<sub>c</sub></i>. To solve this problem, we propose to train a bag-level classifier by the <i>empirical risk minimization</i> framework and theoretically provide a generalization error bound. We also show that a convex formulation can be obtained only when specific convex binary losses such as the square loss and the double hinge loss are used. 
Extensive experiments validate that our proposed method significantly outperforms other baselines.</p>\",\"PeriodicalId\":49249,\"journal\":{\"name\":\"ACM Transactions on Knowledge Discovery from Data\",\"volume\":\"30 8\",\"pages\":\"\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Knowledge Discovery from Data\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3638776\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Knowledge Discovery from Data","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3638776","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Multiple-instance learning (MIL) addresses the problem where training instances are grouped into bags and a binary (positive or negative) label is provided for each bag. Most existing MIL studies require fully labeled bags to train an effective classifier, yet such data can be quite hard to collect in many real-world scenarios because of the high cost of the labeling process. Fortunately, unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Therefore, in this paper, we investigate, for the first time, MIL from only triplet comparison bags, where a triplet (Xa, Xb, Xc) carries the weak supervision information that bag Xa is more similar to Xb than to Xc. To solve this problem, we propose to train a bag-level classifier within the empirical risk minimization framework and theoretically provide a generalization error bound. We also show that a convex formulation can be obtained only when specific convex binary losses, such as the square loss and the double hinge loss, are used. Extensive experiments validate that our proposed method significantly outperforms other baselines.
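The sketch below is a minimal, hypothetical illustration of the triplet-comparison MIL setting described in the abstract, not the authors' actual method: it assumes a linear instance scorer with mean pooling as the bag-level model, a margin-based square-loss surrogate for the triplet constraint, and a naive random-search optimizer in place of the paper's empirical risk minimization procedure. All function names and parameters are illustrative assumptions.

```python
# Minimal, illustrative sketch of MIL from triplet comparison bags.
# NOT the paper's implementation: the bag representation (mean pooling),
# the linear similarity model, the margin, and the square-loss surrogate
# are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def bag_score(bag, w):
    """Score each instance with a linear model and mean-pool to a bag-level score."""
    return np.mean(bag @ w)

def triplet_square_loss(w, Xa, Xb, Xc, margin=1.0):
    """Square-loss surrogate encouraging bag Xa to score closer to Xb than to Xc."""
    fa, fb, fc = (bag_score(X, w) for X in (Xa, Xb, Xc))
    # The triplet (Xa, Xb, Xc) says Xa is more similar to Xb than to Xc,
    # so the score gap |fa - fc| should exceed |fa - fb| by the margin.
    gap = abs(fa - fc) - abs(fa - fb)
    return max(0.0, margin - gap) ** 2

# Toy data: each bag is a (num_instances, num_features) array of instance features.
d = 5
triplets = [tuple(rng.normal(size=(rng.integers(3, 8), d)) for _ in range(3))
            for _ in range(50)]

# Empirical risk over the triplet comparison bags, minimized here by a crude
# random search (a stand-in for the gradient-based optimization a real method would use).
best_w, best_risk = None, np.inf
for _ in range(200):
    w = rng.normal(size=d)
    risk = np.mean([triplet_square_loss(w, *t) for t in triplets])
    if risk < best_risk:
        best_w, best_risk = w, risk

print(f"empirical triplet risk of best linear scorer: {best_risk:.3f}")
```

In this toy version, the learned scorer could then rank or classify unseen bags by thresholding `bag_score`; the paper itself derives which convex binary losses (e.g., the square loss and the double hinge loss) keep the resulting objective convex.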
Journal Description:
TKDD welcomes papers on a full range of research in the knowledge discovery and analysis of diverse forms of data. Such subjects include, but are not limited to: scalable and effective algorithms for data mining and big data analysis, mining brain networks, mining data streams, mining multi-media data, mining high-dimensional data, mining text, Web, and semi-structured data, mining spatial and temporal data, data mining for community generation, social network analysis, and graph structured data, security and privacy issues in data mining, visual, interactive and online data mining, pre-processing and post-processing for data mining, robust and scalable statistical methods, data mining languages, foundations of data mining, KDD framework and process, and novel applications and infrastructures exploiting data mining technology including massively parallel processing and cloud computing platforms. TKDD encourages papers that explore the above subjects in the context of large distributed networks of computers, parallel or multiprocessing computers, or new data devices. TKDD also encourages papers that describe emerging data mining applications that cannot be satisfied by the current data mining technology.