{"title":"图论降维:一个预测帕金森病的案例研究","authors":"Shithi Maitra, Tonmoy Hossain, Khan Md Hasib, Fairuz Shadmani Shishir","doi":"10.1109/IEMCON51383.2020.9284926","DOIUrl":null,"url":null,"abstract":"In the present world, the commotion centering Big Data is somewhat obscuring the craft of mining information from smaller samples. Populations with limited examples but huge dimensionality are a common phenomenon, otherwise known as the curse of dimensionality-especially in the health sector-thanks to the recently-discovered potential of data mining and the enthusiasm for feature engineering. Correlated, noisy, redundant features are byproducts of this tendency, which makes learning algorithms converge with greater efforts. This paper proposes a novel feature-pruning technique relying on computational graph theory. Restoring the appeal of pre-AI conventional computing, the paper applies Disjoint Set Union (DSU) on unidirectional graphs prepared basis thresholded Spearman's rank correlation coefficient, r. Gradual withdrawal of leniency on Spearman's r caused a greater tendency in features to form clusters, causing the dimensionality to shrink. The results-extracting out finer, more representative roots as features-have been $k$-fold cross-validated on a case study examining subjects for Parkinson's. Qualitatively, the method overcomes Principal Component Analysis's (PCA) limitation of inexplicit merging of features and Linear Discriminant Analysis's (LDA) limitation of inextendibility to multiple classes. Statistical inference verified a significant rise in performance, establishing an example of conventional hard computing reinforcing modern soft computing.","PeriodicalId":6871,"journal":{"name":"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","volume":"36 1","pages":"0134-0140"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Graph Theory for Dimensionality Reduction: A Case Study to Prognosticate Parkinson's\",\"authors\":\"Shithi Maitra, Tonmoy Hossain, Khan Md Hasib, Fairuz Shadmani Shishir\",\"doi\":\"10.1109/IEMCON51383.2020.9284926\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the present world, the commotion centering Big Data is somewhat obscuring the craft of mining information from smaller samples. Populations with limited examples but huge dimensionality are a common phenomenon, otherwise known as the curse of dimensionality-especially in the health sector-thanks to the recently-discovered potential of data mining and the enthusiasm for feature engineering. Correlated, noisy, redundant features are byproducts of this tendency, which makes learning algorithms converge with greater efforts. This paper proposes a novel feature-pruning technique relying on computational graph theory. Restoring the appeal of pre-AI conventional computing, the paper applies Disjoint Set Union (DSU) on unidirectional graphs prepared basis thresholded Spearman's rank correlation coefficient, r. Gradual withdrawal of leniency on Spearman's r caused a greater tendency in features to form clusters, causing the dimensionality to shrink. The results-extracting out finer, more representative roots as features-have been $k$-fold cross-validated on a case study examining subjects for Parkinson's. 
Qualitatively, the method overcomes Principal Component Analysis's (PCA) limitation of inexplicit merging of features and Linear Discriminant Analysis's (LDA) limitation of inextendibility to multiple classes. Statistical inference verified a significant rise in performance, establishing an example of conventional hard computing reinforcing modern soft computing.\",\"PeriodicalId\":6871,\"journal\":{\"name\":\"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)\",\"volume\":\"36 1\",\"pages\":\"0134-0140\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IEMCON51383.2020.9284926\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEMCON51383.2020.9284926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In the present world, the commotion centering on Big Data somewhat obscures the craft of mining information from smaller samples. Datasets with few examples but huge dimensionality are a common phenomenon, otherwise known as the curse of dimensionality, especially in the health sector, thanks to the recently discovered potential of data mining and the enthusiasm for feature engineering. Correlated, noisy, redundant features are byproducts of this tendency, and they make learning algorithms converge with greater effort. This paper proposes a novel feature-pruning technique relying on computational graph theory. Restoring the appeal of pre-AI conventional computing, the paper applies Disjoint Set Union (DSU) to undirected graphs built from a thresholded Spearman's rank correlation coefficient, r. Gradually withdrawing leniency on Spearman's r made features cluster more readily, shrinking the dimensionality. The results, which extract finer, more representative roots as features, have been k-fold cross-validated on a case study examining subjects for Parkinson's. Qualitatively, the method overcomes Principal Component Analysis's (PCA) limitation of inexplicit merging of features and Linear Discriminant Analysis's (LDA) limitation of inextensibility to multiple classes. Statistical inference verified a significant rise in performance, establishing an example of conventional hard computing reinforcing modern soft computing.
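To make the described pipeline concrete, below is a minimal Python sketch of the idea, assuming a samples-by-features NumPy matrix and SciPy's spearmanr. The function name prune_features, the 0.9 threshold, and the rule of keeping the lowest-indexed feature of each cluster as its root are illustrative assumptions for demonstration, not the authors' exact implementation.

```python
# Minimal sketch of DSU-based feature pruning on a thresholded Spearman
# correlation graph. Hypothetical choices: prune_features, the 0.9 threshold,
# and keeping the lowest-indexed feature of each cluster as its root.
import numpy as np
from scipy.stats import spearmanr


def find(parent, i):
    # Root lookup with path halving.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i


def union(parent, i, j):
    # Merge the two sets, keeping the smaller feature index as the root.
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[max(ri, rj)] = min(ri, rj)


def prune_features(X, threshold=0.9):
    """Cluster features whose |Spearman's r| >= threshold; keep one root per cluster."""
    n_features = X.shape[1]
    r, _ = spearmanr(X)  # (n_features x n_features) correlation matrix
    parent = list(range(n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            if abs(r[i, j]) >= threshold:  # edge in the thresholded correlation graph
                union(parent, i, j)
    roots = sorted({find(parent, i) for i in range(n_features)})
    return X[:, roots], roots


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(200, 3))
    noisy_copies = base + 0.01 * rng.normal(size=(200, 3))
    X = np.hstack([base, noisy_copies])  # 6 features, 3 nearly-duplicated pairs
    X_reduced, kept = prune_features(X, threshold=0.9)
    print(kept, X_reduced.shape)  # e.g. [0, 1, 2] (200, 3)
```

In this sketch, lowering the threshold admits more feature pairs as edges, so clusters grow and fewer root features survive, mirroring the shrinking dimensionality the abstract describes.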