{"title":"分类的概率层次学习理论","authors":"Ziauddin Ursani, Ahsan Ahmad Ursani","doi":"10.33166/aetic.2023.01.005","DOIUrl":null,"url":null,"abstract":"Providing the ability of classification to computers has remained at the core of the faculty of artificial intelligence. Its application has now made inroads towards nearly every walk of life, spreading over healthcare, education, defence, economics, linguistics, sociology, literature, transportation, agriculture, and industry etc. To our understanding most of the problems faced by us can be formulated as classification problems. Therefore, any novel contribution in this area has a great potential of applications in the real world. This paper proposes a novel way of learning from classification datasets i.e., hierarchical learning through set partitioning. The theory of probabilistic hierarchical learning for classification has been evolved through several works while widening its scope with each instance. The theory demonstrates that the classification of any dataset can be learnt by generating a hierarchy of learnt models each capable of classifying a disjoint subset of the training set. The basic assertion behind the theory is that an accurate classification of complex datasets can be achieved through hierarchical application of low complexity models. In this paper, the theory is redefined and revised based on four mathematical principles namely, principle of successive bifurcation, principle of two-tier discrimination, principle of class membership and the principle of selective data normalization. The algorithmic implementation of each principle is also discussed. The scope of the approach is now further widened to include ten popular real-world datasets in its test base. 
This approach does not only produce their accurate models but also produced above 95% accuracy on average with regard to the generalising ability, which is competitive with the contemporary literature.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"277 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The Theory of Probabilistic Hierarchical Learning for Classification\",\"authors\":\"Ziauddin Ursani, Ahsan Ahmad Ursani\",\"doi\":\"10.33166/aetic.2023.01.005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Providing the ability of classification to computers has remained at the core of the faculty of artificial intelligence. Its application has now made inroads towards nearly every walk of life, spreading over healthcare, education, defence, economics, linguistics, sociology, literature, transportation, agriculture, and industry etc. To our understanding most of the problems faced by us can be formulated as classification problems. Therefore, any novel contribution in this area has a great potential of applications in the real world. This paper proposes a novel way of learning from classification datasets i.e., hierarchical learning through set partitioning. The theory of probabilistic hierarchical learning for classification has been evolved through several works while widening its scope with each instance. The theory demonstrates that the classification of any dataset can be learnt by generating a hierarchy of learnt models each capable of classifying a disjoint subset of the training set. The basic assertion behind the theory is that an accurate classification of complex datasets can be achieved through hierarchical application of low complexity models. 
In this paper, the theory is redefined and revised based on four mathematical principles namely, principle of successive bifurcation, principle of two-tier discrimination, principle of class membership and the principle of selective data normalization. The algorithmic implementation of each principle is also discussed. The scope of the approach is now further widened to include ten popular real-world datasets in its test base. This approach does not only produce their accurate models but also produced above 95% accuracy on average with regard to the generalising ability, which is competitive with the contemporary literature.\",\"PeriodicalId\":36440,\"journal\":{\"name\":\"Annals of Emerging Technologies in Computing\",\"volume\":\"277 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Emerging Technologies in Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.33166/aetic.2023.01.005\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Emerging Technologies in Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33166/aetic.2023.01.005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
The Theory of Probabilistic Hierarchical Learning for Classification
Providing computers with the ability to classify has remained at the core of artificial intelligence. Its applications have now made inroads into nearly every walk of life, spanning healthcare, education, defence, economics, linguistics, sociology, literature, transportation, agriculture, and industry. In our understanding, most of the problems we face can be formulated as classification problems; therefore, any novel contribution in this area has great potential for real-world application. This paper proposes a novel way of learning from classification datasets: hierarchical learning through set partitioning. The theory of probabilistic hierarchical learning for classification has evolved through several works, widening its scope at each step. The theory demonstrates that the classification of any dataset can be learnt by generating a hierarchy of learnt models, each capable of classifying a disjoint subset of the training set. The basic assertion behind the theory is that accurate classification of complex datasets can be achieved through the hierarchical application of low-complexity models. In this paper, the theory is redefined and revised on the basis of four mathematical principles, namely the principle of successive bifurcation, the principle of two-tier discrimination, the principle of class membership, and the principle of selective data normalization. The algorithmic implementation of each principle is also discussed. The scope of the approach is further widened to include ten popular real-world datasets in its test base. The approach not only produces accurate models of these datasets but also achieves above 95% accuracy on average in generalising ability, which is competitive with the contemporary literature.
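The core idea described above, a hierarchy of low-complexity models, each claiming a disjoint subset of the training set, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' actual algorithm: it uses one-feature threshold stumps as the low-complexity models and a simple successive-bifurcation loop to partition the training set; all function names are hypothetical.

```python
# Toy sketch of hierarchical learning through set partitioning (an
# illustrative reconstruction, NOT the paper's exact method). Each level
# fits a low-complexity model -- here a one-feature threshold stump --
# to the points still unclassified, claims the disjoint subset it
# classifies correctly, and passes the remainder down the hierarchy
# (successive bifurcation).

def fit_stump(points):
    """Fit a one-feature threshold stump minimising training errors."""
    best = None
    for f in range(len(points[0][0])):            # every feature
        for x, _ in points:                        # every candidate threshold
            for sign in (1, -1):
                pred = lambda p, f=f, t=x[f], s=sign: int(s * (p[f] - t) > 0)
                errs = sum(pred(px) != y for px, y in points)
                if best is None or errs < best[0]:
                    best = (errs, pred)
    return best[1]

def fit_hierarchy(points, max_levels=10):
    """Build a hierarchy of stumps, each claiming a disjoint subset."""
    levels, rest = [], list(points)
    while rest and len(levels) < max_levels:
        stump = fit_stump(rest)
        claimed = [(x, y) for x, y in rest if stump(x) == y]
        if not claimed:                            # no progress: stop
            break
        levels.append((stump, claimed))
        rest = [(x, y) for x, y in rest if stump(x) != y]
    return levels

# A 3-point dataset no single stump can classify; two levels suffice.
data = [([0.0], 0), ([1.0], 1), ([2.0], 0)]
hierarchy = fit_hierarchy(data)
```

In this sketch the claimed subsets are disjoint by construction, so the union of the levels covers the training set exactly once, mirroring the paper's claim that complex datasets can be learnt accurately by a hierarchy of low-complexity models.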