{"title":"ble2:使用粗糙集从例子中学习贝叶斯规则","authors":"Chien-Chung Chan, Santhosh Sengottiyan","doi":"10.1109/NAFIPS.2003.1226779","DOIUrl":null,"url":null,"abstract":"This paper introduces an algorithm for learning Bayes' rules from examples using rough sets. Induced rules are associated with properties of support, certainty, strength, and coverage factors as defined by Pawlak in his study of connections between rough set theory and Bayes' theorem. Differences between the two learning algorithms LEM2 and BLEM2 are presented. An idea of how to develop an optimized inference engine by taking advantage of induced rule properties is discussed.","PeriodicalId":153530,"journal":{"name":"22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS 2003","volume":"29 14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"BLEM2: learning Bayes' rules from examples using rough sets\",\"authors\":\"Chien-Chung Chan, Santhosh Sengottiyan\",\"doi\":\"10.1109/NAFIPS.2003.1226779\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper introduces an algorithm for learning Bayes' rules from examples using rough sets. Induced rules are associated with properties of support, certainty, strength, and coverage factors as defined by Pawlak in his study of connections between rough set theory and Bayes' theorem. Differences between the two learning algorithms LEM2 and BLEM2 are presented. An idea of how to develop an optimized inference engine by taking advantage of induced rule properties is discussed.\",\"PeriodicalId\":153530,\"journal\":{\"name\":\"22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS 2003\",\"volume\":\"29 14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS 2003\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NAFIPS.2003.1226779\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS 2003","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NAFIPS.2003.1226779","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
BLEM2: learning Bayes' rules from examples using rough sets
This paper introduces an algorithm for learning Bayes' rules from examples using rough sets. Induced rules are associated with the support, certainty, strength, and coverage factors defined by Pawlak in his study of the connections between rough set theory and Bayes' theorem. Differences between the LEM2 and BLEM2 learning algorithms are presented, and an approach to building an optimized inference engine that takes advantage of the induced rule properties is discussed.
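To make the rule properties mentioned in the abstract concrete, the sketch below computes Pawlak's support, strength, certainty, and coverage factors for a single decision rule C -> D over a small decision table. This is only an illustration of the factor definitions, not the BLEM2 induction algorithm itself; the attribute names and example data are hypothetical.

# Illustrative sketch (not BLEM2 itself): Pawlak's rule factors for a
# decision rule C -> D over a decision table. Example data is hypothetical.

def rule_factors(table, condition, decision):
    """Return (support, strength, certainty, coverage) of rule condition -> decision.

    table     : list of dicts, one per object (row) in the decision table
    condition : dict of attribute -> value pairs forming the rule's antecedent C
    decision  : (attribute, value) pair forming the rule's consequent D
    """
    n = len(table)
    dec_attr, dec_val = decision

    matches_c = [row for row in table if all(row[a] == v for a, v in condition.items())]
    matches_d = [row for row in table if row[dec_attr] == dec_val]
    matches_cd = [row for row in matches_c if row[dec_attr] == dec_val]

    support = len(matches_cd)                                     # |C and D|
    strength = support / n                                        # support / |U|
    certainty = support / len(matches_c) if matches_c else 0.0    # P(D | C)
    coverage = support / len(matches_d) if matches_d else 0.0     # P(C | D)
    return support, strength, certainty, coverage


if __name__ == "__main__":
    # Hypothetical decision table: condition attributes 'age' and 'income',
    # decision attribute 'buys'.
    table = [
        {"age": "young", "income": "high", "buys": "yes"},
        {"age": "young", "income": "low",  "buys": "no"},
        {"age": "old",   "income": "high", "buys": "yes"},
        {"age": "young", "income": "high", "buys": "yes"},
        {"age": "old",   "income": "low",  "buys": "no"},
    ]
    # Rule: (age = young) and (income = high) -> (buys = yes)
    print(rule_factors(table, {"age": "young", "income": "high"}, ("buys", "yes")))

Certainty here is the conditional probability of the decision given the condition, and coverage is the reverse conditional probability; together with strength they relate the rule to Bayes' theorem in the way Pawlak described.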