Pub Date: 2020-12-04 | DOI: 10.1142/s2196888821500172
Hong-Quan Nguyen, Thuy-Binh Nguyen, Thi-Lan Le
Fusion techniques that aim to leverage the discriminative power of different appearance features for person representation have been widely applied in person re-identification. They are performed either by concatenating all feature vectors (early fusion) or by combining the matching scores of different classifiers (late fusion). Previous studies have shown that late fusion techniques achieve better results than early fusion ones. However, the majority of studies focus on determining suitable weighting schemes that reflect the role of each feature. The determined weights are then integrated into conventional similarity functions, such as the Cosine similarity [L. Zheng, S. Wang, L. Tian, F. He, Z. Liu and Q. Tian, Query-adaptive late fusion for image search and person re-identification, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2015, pp. 1741–1750]. The contribution of this paper is two-fold. First, a robust person re-identification method that combines metric learning with late fusion techniques is proposed. The metric learning method Cross-view Quadratic Discriminant Analysis (XQDA) is employed to learn a discriminant low-dimensional subspace that minimizes the intra-person distance while maximizing the inter-person distance. Product rule-based and sum rule-based late fusion techniques are then applied to these distances. Second, concerning feature engineering, the ResNet extraction process is modified to extract local features from different stripes of person images. To show the effectiveness of the proposed method, both single-shot and multi-shot scenarios are considered. Three state-of-the-art features, namely Gaussians of Gaussians (GOG), Local Maximal Occurrence (LOMO), and deep-learned features extracted through a Residual Network (ResNet), are extracted from person images.
The experimental results on three benchmark datasets, iLIDS-VID, PRID-2011 and VIPeR, show that the proposed method achieves an improvement of [Formula: see text]% to [Formula: see text]% over the best results obtained with a single feature. The proposed method, which achieves rank-1 accuracies of 85.73%, 93.82% and 50.85% on iLIDS-VID, PRID-2011 and VIPeR, respectively, outperforms various state-of-the-art methods, including deep learning ones. The source code is publicly available to facilitate the development of person re-ID systems.
Title: "Robust Person Re-Identification Through the Combination of Metric Learning and Late Fusion Techniques" (Vietnam. J. Comput. Sci.)
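The sum-rule and product-rule late fusion applied to per-feature distances can be illustrated with a minimal sketch; the toy distance matrices and the min-max normalization step are illustrative assumptions, not the paper's actual XQDA distances:

```python
import numpy as np

def late_fusion(distance_matrices, rule="sum"):
    """Fuse per-feature distance matrices (query x gallery) by the
    sum rule or the product rule, after min-max normalization."""
    normed = []
    for d in distance_matrices:
        d = np.asarray(d, dtype=float)
        rng = d.max() - d.min()
        normed.append((d - d.min()) / rng if rng > 0 else np.zeros_like(d))
    if rule == "sum":
        return np.mean(normed, axis=0)
    if rule == "product":
        # small epsilon so a single zero distance does not dominate the product
        return np.prod([n + 1e-6 for n in normed], axis=0)
    raise ValueError("rule must be 'sum' or 'product'")

# toy example: two features, 2 queries x 3 gallery identities
d_gog = np.array([[0.2, 0.9, 0.5], [0.8, 0.1, 0.6]])
d_lomo = np.array([[0.3, 0.8, 0.7], [0.9, 0.2, 0.4]])
fused = late_fusion([d_gog, d_lomo], rule="sum")
ranks = fused.argsort(axis=1)  # gallery indices sorted best-first per query
```

With these toy matrices both features agree, so the fused ranking puts gallery identity 0 first for query 0 and identity 1 first for query 1.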
Pub Date: 2020-11-30 | DOI: 10.1142/s2196888821500214
Michael Kofi Afriyie, V. M. Nofong, John Wondoh, Hamidu Abdel-Fatao
Periodic frequent patterns are frequent patterns that occur at periodic intervals in databases. They are useful in decision making where event occurrence intervals are vital. Traditional algorithms for discovering periodic frequent patterns, however, often report a large number of such patterns, most of which are redundant because their periodic occurrences can be derived from other periodic frequent patterns. Using such redundant periodic frequent patterns in decision making would often be detrimental, if not trivial. This paper addresses the challenge of eliminating redundant periodic frequent patterns by employing the concept of deduction rules to mine and report only the set of non-redundant periodic frequent patterns. It subsequently proposes and develops a Non-redundant Periodic Frequent Pattern Miner (NPFPM) for this purpose. Experimental analysis on benchmark datasets shows that NPFPM is efficient and can effectively prune the set of redundant periodic frequent patterns.
Title: "Efficient Mining of Non-Redundant Periodic Frequent Patterns" (Vietnam. J. Comput. Sci.)
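The periodicity test at the heart of periodic frequent pattern mining can be sketched as follows; the function names and the maximum-period definition (largest gap between consecutive occurrences, including the database boundaries) follow the common formulation in this literature and are assumptions, not the paper's exact code:

```python
def max_period(occurrences, db_size):
    """Largest gap between consecutive occurrences of a pattern,
    including the gaps before the first and after the last transaction."""
    points = [0] + sorted(occurrences) + [db_size]
    return max(b - a for a, b in zip(points, points[1:]))

def is_periodic_frequent(occurrences, db_size, min_sup, max_per):
    """A pattern is periodic frequent when it occurs often enough
    (support >= min_sup) and never disappears for too long
    (maximum period <= max_per)."""
    return len(occurrences) >= min_sup and max_period(occurrences, db_size) <= max_per
```

For example, a pattern occurring in transactions 2, 4, 6 and 8 of a 10-transaction database has a maximum period of 2, so it is periodic frequent for min_sup = 3 and max_per = 2, whereas a pattern occurring only in transactions 1 and 9 has a maximum period of 8.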
Pub Date: 2020-11-30 | DOI: 10.1142/s2196888821500196
Katherina Meißner, Julia Rieck
As road accidents are the leading cause of death for young adults all over the world, it is necessary for the police to evaluate accident circumstances carefully in order to take appropriate prevention measures. The circumstances of an accident vary in their frequency over time and depend on the local conditions at the accident site. An evaluation under geographical and temporal aspects is therefore necessary. On the basis of the time series, we investigate the various accident circumstances, which show interdependencies with each other, and their influence on the number of accidents. Moreover, multivariate forecasting is used to indicate the future progression of accidents in different geographical regions. Forecast values are determined with a special extension of the ARIMA method. In order to identify geographical regions of interest, we present two different concepts for segmenting accident data, which allow police measures to be adapted to local characteristics.
Title: "Multivariate Forecasting of Road Accidents Based on Geographically Separated Data" (Vietnam. J. Comput. Sci.)
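The paper uses a special extension of ARIMA; as a much simplified stand-in, an AR(1) model fitted by least squares shows the basic fit-then-iterate forecasting loop. The function name and toy series are illustrative assumptions:

```python
import numpy as np

def ar1_forecast(series, steps=1):
    """Fit y_t = c + phi * y_{t-1} by least squares and iterate it
    forward. A deliberately simplified stand-in for an ARIMA-type
    accident-count forecast."""
    y = np.asarray(series, dtype=float)
    # regress each value on its predecessor (with an intercept)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    out, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

forecast = ar1_forecast([1, 2, 3, 4, 5], steps=2)  # a perfectly linear toy series
```

On the toy series the fitted recurrence is y_t = 1 + y_{t-1}, so the two-step forecast continues the line with 6 and 7.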
Pub Date: 2020-11-30 | DOI: 10.1142/s2196888821500202
Helun Bu, K. Kuwabara
This paper presents the use of gamified crowdsourcing for knowledge content validation. Constructing a high-quality knowledge base is crucial for building an intelligent system. We develop a refinement process for the knowledge base of our word retrieval assistant system, where each piece of knowledge is represented as a triple. To validate triples acquired from various sources, we introduce yes/no quizzes and present them to many casual users for their inputs. Only the triples voted “yes” by a sufficient number of users are incorporated into the main knowledge base. Users are incentivized by rewards based on their contribution to the validation process. To ensure transparency of the reward-giving process, blockchain is utilized to store logs of the users’ inputs from which the rewards are calculated. Different strategies are also proposed for selecting the next quiz. The simulation results indicate that the proposed approach has the potential to validate knowledge contents. This paper is a revised version of our conference paper presented at the 12th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2020).
Title: "Validating Knowledge Contents with Blockchain-Assisted Gamified Crowdsourcing" (Vietnam. J. Comput. Sci.)
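The yes/no quiz voting and contribution-based rewards can be sketched roughly as below; the class name, the fixed yes-vote threshold, and the one-point-per-vote reward rule are simplifying assumptions, and the blockchain storage of the vote log described in the paper is omitted here:

```python
from collections import defaultdict

class TripleValidator:
    """Accept a (subject, relation, object) triple into the main
    knowledge base once enough users answer 'yes' to its quiz."""

    def __init__(self, required_yes=3):
        self.required_yes = required_yes
        self.votes = defaultdict(list)   # triple -> [(user, answered_yes)]
        self.accepted = set()

    def vote(self, triple, user, answer_yes):
        # append-only log; in the paper this log is kept on a blockchain
        self.votes[triple].append((user, answer_yes))
        yes = sum(1 for _, a in self.votes[triple] if a)
        if yes >= self.required_yes:
            self.accepted.add(triple)

    def rewards(self):
        """One reward point per vote cast on a triple that was accepted."""
        points = defaultdict(int)
        for triple in self.accepted:
            for user, _ in self.votes[triple]:
                points[user] += 1
        return dict(points)
```

A triple thus moves into the main knowledge base only after the threshold is reached, and rewards are computed from the recorded vote log rather than stored directly.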
Pub Date: 2020-11-27 | DOI: 10.1142/S2196888821500135
Vo Thi Ngoc Chau, N. H. Phung
In educational data mining, student classification is an important and popular task that predicts the final study status of each student. In existing works, this task has been considered in various contexts at both the course and program levels with different learning approaches. However, its real-world characteristics, such as temporal aspects, data imbalance, data overlapping, and data shortage with sparseness, have not yet been fully investigated. Making the most of deep learning, our work is the first to address those challenges for the program-level student classification task. In a simple but effective manner, convolutional neural networks (CNNs) are proposed to exploit their well-known advantages on images for temporal educational data. As a result, the task is resolved by our enhanced CNN models with greater effectiveness and practicability on real datasets. Our CNN models consistently outperform traditional models and their variants for program-level student classification.
Title: "Enhanced CNN Models for Binary and Multiclass Student Classification on Temporal Educational Data at the Program Level" (Vietnam. J. Comput. Sci.)
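The core idea of applying a CNN's image machinery to temporal educational data can be illustrated with a plain 2D convolution over a subjects-by-semesters grade matrix; the kernel, toy data, and layout are hypothetical and are not the paper's models:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2D cross-correlation, the core operation a CNN
    applies to image-like inputs."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# one student's grades arranged as an "image": subjects (rows) x semesters (cols)
grades = np.array([[7.0, 8.0, 6.5, 5.0],
                   [6.0, 7.5, 6.0, 4.5],
                   [8.0, 8.5, 7.0, 5.5]])
# a [1, -1] kernel computes grade(t) - grade(t+1): positive values flag decline
trend = conv2d_valid(grades, np.array([[1.0, -1.0]]))
```

A learned CNN would of course use many trained kernels plus pooling and dense layers, but the same sliding-window operation is what lets it pick up local temporal patterns in such a matrix.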
Pub Date: 2020-11-05 | DOI: 10.1142/s2196888821500123
Mehedi Hasan Raj, A. Rahman, Umma Habiba Akter, K. Riya, Anika Tasneem Nijhum, R. Rahman
Nowadays, the Internet of Things (IoT) is a familiar term because of its growing number of users. Statistics show that the number of IoT device users is increasing dramatically and will continue to do so. Because of this growth, security experts are now concerned about IoT security. In this research, we aim to improve the security of IoT devices, particularly against IoT botnets, by applying various machine learning (ML) techniques. In this paper, we set up an approach to detect IoT botnets using three one-class classifier ML algorithms: one-class support vector machine (OCSVM), elliptic envelope (EE), and local outlier factor (LOF). Our method is a network flow-based botnet detection technique that uses the input packet, protocol, source port, destination port, and time as features. After a number of preprocessing steps, we feed the preprocessed data to our algorithms, which achieve good precision scores of approximately 77–99%. The one-class SVM achieves the best accuracy score, approximately 99% on every dataset; EE's accuracy varies from 91% to 98%, whereas LOF achieves the lowest accuracy, from 77% to 99%. Our algorithms are cost-effective and provide good accuracy in a short execution time.
Title: "IoT Botnet Detection Using Various One-Class Classifiers" (Vietnam. J. Comput. Sci.)
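Of the three one-class classifiers, the elliptic envelope is the easiest to sketch from first principles: fit a Gaussian to normal traffic and flag points that lie far away in Mahalanobis distance. This minimal detector is an illustration of the idea, with an assumed 95% quantile threshold, not the implementation evaluated in the paper:

```python
import numpy as np

class MahalanobisDetector:
    """Minimal elliptic-envelope-style one-class detector: fit a Gaussian
    to benign traffic, flag points whose Mahalanobis distance is large."""

    def fit(self, X, quantile=0.95):
        X = np.asarray(X, dtype=float)
        self.mean = X.mean(axis=0)
        # ridge term keeps the covariance invertible on degenerate data
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.prec = np.linalg.inv(cov)
        self.threshold = np.quantile(self._dist(X), quantile)
        return self

    def _dist(self, X):
        diff = np.asarray(X, dtype=float) - self.mean
        # sqrt(diff @ prec @ diff.T) row-wise
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.prec, diff))

    def predict(self, X):
        # 1 = looks like training (benign) traffic, -1 = anomaly/botnet flow
        return np.where(self._dist(X) <= self.threshold, 1, -1)
```

Fitting only on benign flows and flagging everything outside the learned ellipsoid is exactly the one-class framing the paper relies on; OCSVM and LOF replace the Gaussian with a kernel boundary and a local-density criterion, respectively.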
Pub Date: 2020-11-05 | DOI: 10.1142/s2196888821500111
Hongyu Wang, Lynne M Chepulis, R. Paul, Michael Mayo
Metaheuristic search algorithms are used to develop new protocols for optimal intravenous insulin infusion rate recommendations in scenarios involving hospital in-patients with Type 1 Diabetes. Two metaheuristic search algorithms are used, namely Particle Swarm Optimization and the Covariance Matrix Adaptation Evolution Strategy. The Glucose Regulation for Intensive Care Patients (GRIP) protocol serves as the starting point of the optimization process. We base our experiments on a methodology from the literature for evaluating the favorability of insulin protocols, with a dataset of blood glucose level/insulin infusion rate time series records from 16 patients obtained from the Waikato District Health Board. New and significantly better insulin infusion strategies than GRIP are discovered from the data through metaheuristic search. The newly discovered strategies are further validated and show good performance against various competitive benchmarks using a virtual patient simulator.
Title: "Metaheuristic Optimization of Insulin Infusion Protocols Using Historical Data with Validation Using a Patient Simulator" (Vietnam. J. Comput. Sci.)
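Particle Swarm Optimization, one of the two metaheuristics used, can be sketched in a few lines; the hyperparameters and the sphere-function demo are illustrative defaults, not the insulin-protocol objective from the paper:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization: particles move under inertia
    plus attraction toward their personal best and the global best."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# demo on the sphere function; the paper instead scores candidate
# insulin protocols against historical patient data
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the paper's setting, each particle would encode the tunable parameters of a GRIP-style infusion rule and the objective would be the protocol-favorability score computed on the patient records.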
Pub Date: 2020-11-05 | DOI: 10.1142/S2196888821500147
P. Nguyen, C. Vo
Nowadays, teaching and learning activities in a course are greatly supported by information technologies. Forums are among the technologies used in a course to encourage students to communicate with lecturers outside a traditional class. The free-style textual posts in those communications express the problems students are facing as well as their interest and activeness with respect to each topic of a course. Our work considers exploiting such textual data in a course forum for course-level student prediction. Because of the hierarchical structure of course forum texts, we propose a solution that combines a deep convolutional neural network (CNN) with a loss function designed so that instances of the minority class, which includes failing students, are recognized more correctly. In addition, other numeric data are examined and used so that all students, with and without posts, can be predicted. Our work is therefore the first to define and solve this prediction task with heterogeneous educational data at the course level. In the proposed solution, Random Forests are suggested as an effective ensemble model for our heterogeneous data, since many single prediction models (random trees) can be built over various subspaces with different random features in a supervised learning process. Experimental results of an empirical evaluation on two real datasets show that a heterogeneous combination of textual and numeric data with a Random Forest model enhances the effectiveness of our solution. The best accuracy and [Formula: see text]-measure values are obtained for early predictions of students with either success or failure.
Such predictions can help both students and lecturers stay aware of students' progress and support them in time for ultimate success in a course.
Title: "Heterogeneous Educational Data Classification at the Course Level" (Vietnam. J. Comput. Sci.)
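The heterogeneous combination of textual and numeric student data can be illustrated by simple feature concatenation; the vocabulary, the bag-of-words featurizer, and the toy values are hypothetical, and the paper's CNN-derived text features are replaced here by raw word counts for brevity:

```python
import numpy as np

def featurize(posts, vocab, numeric):
    """Concatenate a bag-of-words vector over `vocab` with numeric
    attributes (e.g. assignment scores). Students with no posts simply
    get an all-zero textual part, so every student can be predicted."""
    tokens = posts.lower().split()
    text = [tokens.count(w) for w in vocab]
    return np.array(text + list(numeric), dtype=float)

vocab = ["stuck", "deadline", "thanks"]
# an active poster with weak grades vs. a silent student with strong grades
x_active = featurize("i am stuck on the deadline stuck", vocab, [5.5, 0.4])
x_silent = featurize("", vocab, [8.0, 0.9])
```

Vectors like these, one per student, are what an ensemble such as a Random Forest would then consume, with each tree drawing on a different random subset of the mixed textual and numeric dimensions.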
Pub Date: 2020-11-05 | DOI: 10.1142/s219688882150007x
Sunil Kumar, M. Singh
Breast cancer is the leading cause of fatality among the female population. Identification of benign and malignant tumors at the correct time plays a critical role in the diagnosis of breast cancer....
Title: "Breast Cancer Detection Based on Feature Selection Using Enhanced Grey Wolf Optimizer and Support Vector Machine Algorithms" (Vietnam. J. Comput. Sci.)
Pub Date: 2020-11-04 | DOI: 10.1142/s2196888821500159
Ashish Gupta
The Biswapped-Torus is a recently reported optoelectronic, node-symmetric member of the Biswapped-framework family. In this paper, an optimized parallel approach is presented for prefix sum computation on a [Formula: see text] Biswapped-Torus. The proposed parallel algorithm demands a total of 7[Formula: see text] electronic and three optical moves for odd network sizes, or 7[Formula: see text] electronic and three optical moves for even network sizes. The performance of the suggested parallel algorithm is also compared with recently reported optimal prefix sum algorithms on the [Formula: see text] Biswapped-Mesh and the [Formula: see text]-dimensional Biswapped Hyper Hexa-cell. Based on the comparative analysis, the Biswapped-Torus maps prefix sums faster, requiring fewer communication moves than the grid-based traditional architecture of the biswapped family, the Biswapped-Mesh. Moreover, it has the architectural benefit of node symmetry, which eases embedding, mapping, and the design of routing algorithms. Compared to the symmetric counterpart of the biswapped family, the Biswapped Hyper Hexa-Cell, the Biswapped-Torus is cost-efficient but requires comparatively more communication moves for mapping prefix sums.
Title: "Optimized Parallel Prefix Sum Algorithm on Optoelectronic Biswapped-Torus Architecture" (Vietnam. J. Comput. Sci.)
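Leaving the optoelectronic routing aside, the data-parallel prefix sum pattern that such algorithms map onto hardware can be sketched with the classic Hillis-Steele scan; this is the generic technique, not the paper's Biswapped-Torus mapping:

```python
def inclusive_scan(values):
    """Hillis-Steele inclusive prefix sum: log2(n) rounds in which every
    element adds the value `offset` positions to its left. Each round's
    list comprehension could run fully in parallel on real hardware."""
    a = list(values)
    offset = 1
    while offset < len(a):
        a = [a[i] + (a[i - offset] if i >= offset else 0)
             for i in range(len(a))]
        offset *= 2
    return a
```

On an interconnection network, each round's additions become communication moves between nodes, which is exactly the quantity (electronic vs. optical moves) the paper optimizes.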