Suprapto Suprapto, W. Wahyono, Nur Rokhman, Faisal Dharma Adhinata
International Journal of Computing and Digital Systems, published 2024-03-10. DOI: 10.12785/ijcds/150191
A Parallel Approach of Cascade Modelling Using MPI4Py on Imbalanced Dataset
Abstract: Machine learning is crucial for categorizing data into specific classes based on their features. However, challenges emerge, especially in classification, when dealing with imbalanced datasets. A dataset is imbalanced when the number of samples is disproportionate across different classes; this biases a machine learning model toward the majority class and degrades recognition of minority classes, often producing notable prediction errors for the under-represented classes. This research proposes a cascade, parallel architecture for the training process to improve accuracy and speed over non-cascade, sequential training, and evaluates the performance of the SVM and Random Forest methods. Our findings reveal that the Random Forest method, configured with 100 trees, substantially enhances classification accuracy by 4.72 percentage points over a non-cascade classifier, elevating it from 58.87% to 63.59%. Furthermore, adopting the Message Passing Interface for Python (MPI4Py) for parallel processing across multiple cores or nodes yields a remarkable increase in training speed: parallel processing accelerated training by up to 4.35 times, reducing the duration from 1725.86 milliseconds to 396.54 milliseconds. These results highlight the advantages of integrating parallel processing with a cascade architecture in machine learning models, particularly in addressing the challenges associated with imbalanced datasets, and demonstrate the potential for substantial improvements in classification tasks.
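The abstract describes a scatter/train/gather pattern: cascade stages are distributed to workers, each worker fits its stage's classifier, and the fitted stages are collected back. The paper implements this with MPI4Py (e.g. `comm.scatter`/`comm.gather` across ranks); the sketch below mirrors only the dataflow using the standard library's `ThreadPoolExecutor`, so it runs without an MPI launcher. The toy per-stage "classifier" (a majority-label counter) and the stage dictionary are illustrative assumptions, not the authors' SVM/Random Forest setup.

```python
# Stdlib-only sketch of the parallel cascade-training pattern. In the paper,
# MPI4Py scatters one cascade stage per rank and gathers the fitted models;
# here a thread pool plays that role, and a majority-label counter stands in
# for the per-stage SVM / Random Forest (illustrative assumption).
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def train_stage(stage_item):
    """Toy per-stage 'training': learn the majority label of this stage's
    slice of the data. A real stage would fit an SVM or Random Forest."""
    stage_id, labels = stage_item
    majority_label = Counter(labels).most_common(1)[0][0]
    return stage_id, majority_label

def parallel_cascade_train(stages, workers=2):
    """Scatter: one cascade stage per worker. Gather: collect fitted stages
    into a dict keyed by stage id."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(train_stage, stages.items()))

if __name__ == "__main__":
    # Imbalanced toy data: label "a" dominates stage 0, "c" dominates stage 1.
    stages = {0: ["a", "a", "a", "b"], 1: ["b", "c", "c"]}
    models = parallel_cascade_train(stages)
    print(models)  # {0: 'a', 1: 'c'}
```

With MPI4Py the same structure would run one stage per rank across cores or nodes, which is where the reported 4.35x speedup comes from; the thread-based stand-in only illustrates the communication pattern, not the distributed execution.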