Augmented Reality (AR) has been widely explored worldwide for its potential as a technology that enhances information representation. As technology progresses, smartphones (handheld devices) now have sophisticated processors and cameras for capturing static photographs and video, as well as a variety of sensors for tracking the user's position, orientation, and motion. Hence, this paper discusses a real-time finger-ray pointing technique for interaction in handheld AR and compares it with the conventional handheld touch-screen interaction technique. The aim of this paper is to explore ray-pointing interaction in handheld AR for 3D object selection. Previous work in handheld AR, also covering Mixed Reality (MR), is recapped.
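The core of any ray-pointing selection technique is casting a ray from the tracked fingertip into the scene and picking the nearest intersected object. The abstract does not give the implementation, so the following is only a minimal illustrative sketch, using spheres as stand-in bounding volumes; all names and values are assumptions.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-length ray to a sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c  # a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None

def pick_object(origin, direction, objects):
    """Select the nearest object hit by the finger ray (objects: name -> (center, radius))."""
    hits = [(t, name) for name, (center, r) in objects.items()
            if (t := ray_sphere_hit(origin, direction, center, r)) is not None]
    return min(hits)[1] if hits else None
```

In a real handheld AR pipeline, the origin and direction would come from the device's fingertip tracking, and the bounding volumes from the 3D content's colliders.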
"Designing Ray-Pointing using Real hand and Touch-based in Handheld Augmented Reality for Object Selection", Nur Ameerah Binti Abdul Halim and A. W. Ismail. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.316
Azurah A Samah, Siti Nurul Aqilah Ahmad, Hairudin Abdul Majid, Zuraini Ali Shah, H. Hashim, Nuraina Syaza Azman, Nur Sabrina Azmi, D. Nasien
Attention Deficit Hyperactivity Disorder (ADHD) is categorized as one of the typical neurodevelopmental and mental disorders. Over the years, researchers have identified ADHD as a complicated disorder, since it cannot be directly detected by a standard medical test, such as a blood or urine test, during early-stage diagnosis. Apart from the physical symptoms of ADHD, clinical data of ADHD patients show that most of them have learning problems. Therefore, functional Magnetic Resonance Imaging (fMRI) is considered the most suitable method for determining functional activity in brain regions and understanding the brain disorder underlying ADHD. One way to diagnose ADHD is by using deep learning techniques, which can increase the accuracy of predicting ADHD from fMRI data. A past attempt at classifying ADHD based on functional connectivity coefficients using a Deep Neural Network (DNN) achieved 95% accuracy. As the Variational Autoencoder (VAE) is popular for extracting high-level representations from data, this model is applied in this study. This study aims to enhance the performance of the VAE to increase the accuracy of classifying ADHD using fMRI data based on functional connectivity analysis. The preprocessed fMRI dataset is decomposed to find regions of interest (ROIs), followed by Independent Component Analysis (ICA), which calculates the correlation between brain regions and creates a functional connectivity matrix for each subject. As a result, the VAE model achieved an accuracy of 75% in classifying ADHD.
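The functional connectivity step described above (correlation between brain regions, one matrix per subject) can be sketched as follows. This is not the authors' code; it is a minimal illustration with random data standing in for ICA-derived ROI time series, and the flattened upper triangle is what would feed a classifier such as the VAE encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 8, 120
# Stand-in for per-subject ROI time series extracted via ICA decomposition.
roi_signals = rng.standard_normal((n_rois, n_timepoints))

# Functional connectivity matrix: pairwise Pearson correlation between ROIs.
conn = np.corrcoef(roi_signals)            # shape (n_rois, n_rois)

# The matrix is symmetric, so only the upper triangle carries information;
# flatten it into a feature vector for the downstream classifier.
iu = np.triu_indices(n_rois, k=1)
features = conn[iu]                         # length n_rois * (n_rois - 1) / 2
```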
"Classification of Attention Deficit Hyperactivity Disorder using Variational Autoencoder", Azurah A Samah, Siti Nurul Aqilah Ahmad, Hairudin Abdul Majid, Zuraini Ali Shah, H. Hashim, Nuraina Syaza Azman, Nur Sabrina Azmi, and D. Nasien. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.352
The increase in mobile phone accessibility and technological advancement in almost every corner of the world has shaped how banks offer financial services. Such services were extended to low-end customers without smartphones through Alternative Banking Channels (ABCs), providing the same regular financial services as those available on smartphones. One of these ABC services is Unstructured Supplementary Service Data (USSD), a two-way communication between mobile phones and applications, used to render financial services from the bank accounts linked to the USSD service. Fraudsters have taken advantage of innocent customers on this channel to carry out fraudulent activities; despite the high impact of such fraud, no fraud detection model has yet been implemented to detect these activities. This paper investigates a fraud detection model using machine learning techniques for Unstructured Supplementary Service Data based on short-term memory. Statistical features were derived by aggregating customer activities over a short window size, and the best set of features was employed to improve model performance on the selected machine learning classifiers. Based on the results obtained, the proposed fraud detection model demonstrated that, with the appropriate machine learning techniques for USSD, the best performance was achieved by Random Forest, with 100% across all performance measures; KNeighbors was second, averaging 99% across all performance measures; and Gradient Boosting was third, achieving 91.94% accuracy, 86% precision, 100% recall, and an F1 score of 92.54%. The results show that two of the selected machine learning algorithms, Random Forest and Decision Tree, are the best fit for fraud detection in this model. With the right features derived and an appropriate machine learning algorithm, the proposed model offers the best fraud detection accuracy.
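The short-window feature aggregation described above can be illustrated with a small sketch. The actual window size and feature set used in the paper are not given, so the names and statistics below are assumptions; the point is only the pattern of summarizing a customer's recent USSD activity into classifier inputs.

```python
from statistics import mean, pstdev

def window_features(amounts, window=3):
    """Aggregate a customer's most recent transaction amounts over a short window."""
    recent = amounts[-window:]
    return {
        "count": len(recent),
        "mean_amount": mean(recent),
        "std_amount": pstdev(recent),   # spread of recent amounts; spikes suggest anomaly
        "max_amount": max(recent),
    }

# A sudden large transfer stands out against the customer's recent history.
feats = window_features([100, 100, 5000, 20, 9000], window=3)
```

Feature vectors like these would then be fed to the classifiers compared in the paper (Random Forest, KNeighbors, Gradient Boosting).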
"Fraudulent Detection Model Using Machine Learning Techniques for Unstructured Supplementary Service Data", Ayorinde O. Akinje and A. Fuad. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.299
Interaction is one of the important topics to be discussed, since it covers the interface through which the end-user communicates with the augmented reality (AR) system. In a handheld AR interface, traditional interaction techniques are not suitable for some AR applications due to the different attributes of handheld devices, which usually means smartphones and tablets. Current interaction techniques in handheld AR are known as the touch-based, mid-air gesture-based, and device-based techniques, and they have led to wide discussion in related research areas. This paper focuses on the device-based interaction technique, because previous studies have proven it to be more suitable and robust in several aspects. A novel device-based 3D object rotation technique is proposed to solve the current problem of performing 3DOF rotation of a 3D object. The goal is to produce precise and faster 3D object rotation. Therefore, the rotation amplitudes per second must be determined before full implementation. This paper discusses the implementation in depth and provides a guideline for those who work on device-based interaction.
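A pre-defined rotation amplitude per second translates directly into a per-frame update rule: each axis of the 3DOF Euler rotation advances by its amplitude multiplied by the elapsed time. The paper does not state the amplitudes, so the values below are purely illustrative.

```python
# Pre-defined per-axis rotation amplitudes in degrees per second (illustrative values).
AMPLITUDES = {"x": 45.0, "y": 90.0, "z": 30.0}

def apply_rotation(angles, axis, dt, amplitudes=AMPLITUDES):
    """Advance one axis of a 3DOF Euler rotation by its pre-defined amplitude.

    angles: dict of current rotations in degrees; dt: frame time in seconds.
    """
    new = dict(angles)
    new[axis] = (angles[axis] + amplitudes[axis] * dt) % 360.0
    return new
```

In practice, `dt` would be the handheld device's frame delta time, and the axis would be chosen by whichever device-based gesture the technique maps to that rotation.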
"Pre-define Rotation Amplitudes Object Rotation in Handheld Augmented Reality", Goh Eg Su and A. W. Ismail. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.315
Adlina Abdul Samad, Marina Md Arshad, M. Md. Siraj, Nur Aishah Shamsudin
Visual analytics is very effective in many applications, especially in the education field, and has improved decision making in enhancing student assessment. Student assessment has become very important and is identified as a systematic process that measures and collects data, such as marks and scores, in a manner that enables the educator to analyze the achievement of the intended learning outcomes. The objective of this study is to investigate a suitable visual analytics design to represent student assessment data, with suitable interaction techniques from the visual analytics approach. Six types of analytical models (the Generalized Linear Model, Deep Learning, Decision Tree Model, Random Forest Model, Gradient Boosted Model, and Support Vector Machine) were used to conduct this research. Our experimental results show that the Decision Tree models were the fastest at optimizing the result, while the Gradient Boosted Model delivered the best performance.
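Comparing several models on both speed and score, as done above, follows a simple harness pattern. The sketch below is not the study's code; it uses a trivial majority-class model as a stand-in so the pattern is self-contained, where real work would plug in the six named models.

```python
import time

class MajorityClass:
    """Toy stand-in model: predicts the most frequent training label."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
        return self
    def score(self, X, y):
        return sum(lbl == self.label for lbl in y) / len(y)

def evaluate(models, X, y):
    """Time each model's fit/score cycle and collect its score."""
    results = {}
    for name, model in models.items():
        start = time.perf_counter()
        model.fit(X, y)
        score = model.score(X, y)
        results[name] = {"seconds": time.perf_counter() - start, "score": score}
    return results
```

Ranking `results` by `seconds` identifies the fastest model and by `score` the best performer, mirroring the comparison reported in the abstract.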
"Visual Analytics Design for Students Assessment Representation based on Supervised Learning Algorithms", Adlina Abdul Samad, Marina Md Arshad, M. Md. Siraj, and Nur Aishah Shamsudin. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.346
Kidney failure affects the human body and can lead to a series of serious illnesses and even death. Machine learning plays an important role in disease classification, offering high accuracy and shorter processing time compared to clinical lab tests. There are 24 attributes in the Chronic Kidney Disease (CKD) clinical dataset, which is considered too many. To improve classification performance, filter feature selection methods are used to reduce the dimensionality of the features, and an ensemble algorithm is then used to identify the union of the features selected by each filter method. The filter feature selection methods implemented in this research are Information Gain (IG), Chi-Square, ReliefF, and Fisher Score. A Genetic Algorithm (GA) is used to select the best subset from the ensemble result of the filter feature selections. In this research, Random Forest (RF), XGBoost, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Naïve Bayes classification techniques were used to diagnose CKD. The selected feature subsets are different and specialized for each classifier. By removing irrelevant features through filter feature selection, the proposed method reduces the burden and computational cost of the genetic algorithm. The genetic algorithm can then perform better and select the best subset, improving the performance of the classifier with fewer attributes. The proposed genetic algorithm with the union of filter feature selections improves the performance of the classification algorithms. The accuracy of RF, XGBoost, KNN, and SVM reaches 100%, and NB reaches 99.17%. The proposed method successfully improves classifier performance using fewer features compared to previous work.
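The ensemble step (union of the features each filter method ranks highest) can be sketched in a few lines. The feature names and scores below are illustrative, not from the CKD dataset; the GA would then search subsets of this union for the best-performing one per classifier.

```python
def top_k(scores, k):
    """Names of the k highest-scoring features in a {feature: score} ranking."""
    return {f for f, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]}

def ensemble_union(rankings, k):
    """Union of the top-k features chosen by each filter feature selection method."""
    union = set()
    for scores in rankings:
        union |= top_k(scores, k)
    return union

# Illustrative rankings from two filter methods (e.g. Information Gain, Chi-Square).
ig_scores = {"age": 3.0, "bp": 2.0, "sugar": 0.5}
chi_scores = {"sugar": 5.0, "age": 1.0, "bp": 0.2}
candidates = ensemble_union([ig_scores, chi_scores], k=2)
```

The union keeps any feature at least one filter considers important, which is why irrelevant features are pruned early and the GA's search space stays small.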
"Genetic Algorithm Ensemble Filter Methods on Kidney Disease Classification", S. Huspi and Chong Ke Ting. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.345
Oyinkansola Oluwapelumi Kemi Afolabi-B, M. Md. Siraj
Security and protection of information is an ever-evolving process in the field of information security. One of the major tools of protection is the Intrusion Detection System (IDS). For many years, IDSs have been developed for use in computer networks and widely deployed to detect a range of network attacks, but one of their major drawbacks is that attackers, with the evolution of time and technology, make it harder for IDS systems to cope. A sub-branch of IDS, intrusion alert analysis, was introduced to combat these problems and support the IDS by analyzing the alerts it triggers. Intrusion alert analysis has served as good support for IDS systems for many years, but it has its own shortcoming: the voluminous number of alerts produced by IDS systems. Years of research have shown that the majority of the alerts produced are undesirable, such as duplicates and false alerts, leading to huge numbers of alerts and causing alert flooding. This research proposes alert reduction that targets these undesirable alerts through the integration of supervised and unsupervised algorithms. The research first selects significant features by comparing two feature-ranking techniques, which targets duplicate, low-priority, and irrelevant alerts. To achieve further reduction, the research proposes the integration of supervised and unsupervised algorithms to filter out false alerts. Based on this, an effective model was obtained, achieving a 94.02% alert reduction rate. Experiments were conducted using the ISCX 2012 dataset, and the model with the highest reduction rate was chosen. The model was evaluated against other experimental results and benchmarked against a related work, on which it also improved.
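One concrete form of the duplicate-alert reduction mentioned above is suppressing repeated alerts with the same signature and endpoints inside a short time window. The abstract does not specify the mechanism, so this sketch is an assumption about one common approach; the key fields and window size are illustrative.

```python
def reduce_alerts(alerts, window=60):
    """Drop alerts that repeat an identical (signature, src, dst) within `window` seconds.

    The suppression timer resets on every repeat, so a sustained burst
    collapses to a single kept alert.
    """
    last_seen = {}
    kept = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["sig"], alert["src"], alert["dst"])
        if key not in last_seen or alert["time"] - last_seen[key] > window:
            kept.append(alert)
        last_seen[key] = alert["time"]
    return kept
```

False-alert filtering, the second stage described in the abstract, would then run supervised and unsupervised classifiers over the surviving alerts.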
"Intrusion Alert Reduction Based on Unsupervised and Supervised Learning Algorithms", Oyinkansola Oluwapelumi Kemi Afolabi-B and M. Md. Siraj. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.331
Sabrina Jahan Maisha, Nuren Nafisa, Abdul Kadar Muhammad Masum
We can state without doubt that the Bangla language is rich enough to work with for implementing various Natural Language Processing (NLP) tasks. Though it deserves proper attention, the NLP field has hardly explored it. In this age of digitalization, large amounts of Bangla news content are generated on online platforms, some of which are inappropriate for children or aged people. Motivated by the need to filter news content easily, the aim of this work is to perform document-level sentiment analysis (SA) on Bangla online news. The dataset is created by collecting news from an online Bangla newspaper archive, and the documents are manually annotated into positive and negative classes. A composite "Pipeline" process, including a count vectorizer, a TF-IDF transformer, and machine learning (ML) classifiers, is employed to extract features and train on the dataset. Six supervised ML classifiers (Multinomial Naive Bayes (MNB), K-Nearest Neighbor (K-NN), Random Forest (RF), (C4.5) Decision Tree (DT), Logistic Regression (LR), and Linear Support Vector Machine (LSVM)) are used to find the best classifier for the proposed model. There have been very few works on SA of Bangla news, so this work is a small attempt to contribute to this field. The model showed remarkable efficiency, with good results in both percentage-split validation and 10-fold cross-validation. Among all six classifiers, RF outperformed the others with 99% accuracy. Even though LSVM showed the lowest accuracy, 80%, that is still considered a good result. This work also exhibited strong results on recent and critical Bangla news, indicating that proper feature extraction was used to build the model.
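The count-vectorize-then-TF-IDF stage of the pipeline can be shown in miniature without any library dependencies. This is a simplified sketch of the transformation, not the authors' pipeline (which the abstract says uses a "Pipeline" class with a count vectorizer and TF-IDF transformer); the tiny English corpus stands in for tokenized Bangla documents.

```python
import math
from collections import Counter

def count_vectorize(docs):
    """Bag-of-words counts: sorted vocabulary plus one count row per document."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    counts = [[0] * len(vocab) for _ in docs]
    for row, d in zip(counts, docs):
        for w, c in Counter(d.split()).items():
            row[index[w]] = c
    return vocab, counts

def tfidf(counts):
    """Reweight counts by inverse document frequency (smoothed with +1)."""
    n_docs = len(counts)
    df = [sum(1 for row in counts if row[j]) for j in range(len(counts[0]))]
    idf = [math.log(n_docs / d) + 1 for d in df]
    return [[c * w for c, w in zip(row, idf)] for row in counts]
```

The resulting weighted vectors are what each of the six classifiers would be trained on; terms that appear in every document keep only their raw count, while rarer terms are boosted.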
"Supervised Machine Learning Algorithms for Sentiment Analysis of Bangla Newspaper", Sabrina Jahan Maisha, Nuren Nafisa, and Abdul Kadar Muhammad Masum. International Journal of Innovative Computing Information and Control, 2021-10-31. DOI: https://doi.org/10.11113/ijic.v11n2.321
The Internet of Things (IoT) is an essential paradigm in which devices are interconnected into a network. These devices can operate through service-oriented software engineering (SOSE) principles for efficient service provision. SOSE is an important software development method for flexible, agile, loosely coupled, heterogeneous and interoperable applications. Despite these benefits, its adoption for IoT services has been slow due to security challenges. A key security challenge of integrating IoT with service-oriented architecture (SOA) is man-in-the-middle attacks on the messages exchanged. Transport layer security (TLS) creates a secured socket channel between client and server, but it secures messages at the transport layer only. While this integration enables interoperability of heterogeneous devices, it leaves the system vulnerable to passive attacks, so SOSE-based IoT systems need end-to-end security to handle these vulnerabilities. The confidentiality problem is addressed here by message-level hybrid encryption: messages are encrypted with the Advanced Encryption Standard (AES) for efficiency, while the key-sharing problem of AES is handled by RSA public-key encryption to enable end-to-end security. The results show that this solution addresses data-content security and credential-privacy issues. Furthermore, it enables end-to-end security of interaction in SOSE-based IoT systems.
{"title":"Hybrid Encryption for Messages’ Confidentiality in SOSE-Based IOT Service Systems","authors":"M. Ahmed","doi":"10.11113/ijic.v11n2.292","DOIUrl":"https://doi.org/10.11113/ijic.v11n2.292","url":null,"abstract":"The Internet of Things (IoT) is an essential paradigm in which devices are interconnected into a network. These devices can operate through service-oriented software engineering (SOSE) principles for efficient service provision. SOSE is an important software development method for flexible, agile, loosely coupled, heterogeneous and interoperable applications. Despite these benefits, its adoption for IoT services has been slow due to security challenges. A key security challenge of integrating IoT with service-oriented architecture (SOA) is man-in-the-middle attacks on the messages exchanged. Transport layer security (TLS) creates a secured socket channel between client and server, but it secures messages at the transport layer only. While this integration enables interoperability of heterogeneous devices, it leaves the system vulnerable to passive attacks, so SOSE-based IoT systems need end-to-end security to handle these vulnerabilities. The confidentiality problem is addressed here by message-level hybrid encryption: messages are encrypted with the Advanced Encryption Standard (AES) for efficiency, while the key-sharing problem of AES is handled by RSA public-key encryption to enable end-to-end security. The results show that this solution addresses data-content security and credential-privacy issues. Furthermore, it enables end-to-end security of interaction in SOSE-based IoT systems.","PeriodicalId":50314,"journal":{"name":"International Journal of Innovative Computing Information and Control","volume":"57 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75305299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
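The hybrid pattern this abstract describes — a fast symmetric cipher for the message body, with an asymmetric step to share the symmetric key — can be sketched as follows. Note the stand-ins: the tiny textbook RSA parameters (p=61, q=53) and the SHA-256-derived XOR keystream below are teaching illustrations only, not the paper's AES/RSA implementation; a real system would use AES-GCM and RSA-OAEP from a vetted cryptography library.

```python
# Hybrid encryption sketch: symmetric cipher for the message,
# asymmetric (RSA-style) wrap for the symmetric key.
import hashlib

# Textbook RSA with toy primes p=61, q=53: n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753

def rsa_wrap(key: int) -> int:       # sender encrypts the session key
    return pow(key, E, N)

def rsa_unwrap(c: int) -> int:       # receiver recovers the session key
    return pow(c, D, N)

def keystream(key: int, n: int) -> bytes:
    # Derive n pseudo-random bytes from the shared key (AES stand-in).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key.to_bytes(2, "big") +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def sym_encrypt(key: int, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

sym_decrypt = sym_encrypt  # an XOR stream cipher is its own inverse

# Sender: pick a session key, encrypt the message, wrap the key.
session_key = 1234                    # must be < N for the toy RSA
message = b"sensor reading from device 7"
ciphertext = sym_encrypt(session_key, message)
wrapped_key = rsa_wrap(session_key)

# Receiver: unwrap the session key, then decrypt the message.
recovered_key = rsa_unwrap(wrapped_key)
plaintext = sym_decrypt(recovered_key, ciphertext)
print(plaintext)  # b'sensor reading from device 7'
```

The design point the abstract makes survives even in this toy form: only the short key crosses the asymmetric (slow) path, while the bulk message takes the symmetric (fast) path, giving end-to-end confidentiality at the message level rather than only on the transport channel.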
Salman Humdullah, S. H. Othman, Muhammad Najib Razali, Hazinah Kutty Mammi, R. Javed
Land is a very valuable asset for any government, and it is the government's job to ensure that land registration and transfer are carried out without fraud, with good speed and with transparency. The current land registration methods employed by governments, however, are open to fraud, hacks, and corruption of land records; fraud in particular is one of the major problems in land registration. In this study, the goal is to develop a framework that incorporates blockchain techniques to secure land data during the registration and transfer phases by preventing fraud. Blockchain provides a transparent, decentralized and robust infrastructure to build the framework upon. The blockchain is implemented with asymmetric-key encryption/decryption to securely store the land registration/transfer data: the data is encrypted with the landowner's public key, and a hash of the data is stored. The cryptographic hash function used is SHA, and a comparison of SHA-256 and SHA-512 is given and discussed. The dataset used to compare results consists of 200 JSON-object records, with each object identical for both SHA-256 and SHA-512 to remove data bias. The proposed framework performed 29% faster with SHA-512 than with SHA-256. The results indicate that the proposed framework performs better than the techniques proposed in current land registration research.
{"title":"An Improved Blockchain Technique for Secure Land Registration Data Records","authors":"Salman Humdullah, S. H. Othman, Muhammad Najib Razali, Hazinah Kutty Mammi, R. Javed","doi":"10.11113/ijic.v11n2.291","DOIUrl":"https://doi.org/10.11113/ijic.v11n2.291","url":null,"abstract":"Land is a very valuable asset for any government, and it is the government's job to ensure that land registration and transfer are carried out without fraud, with good speed and with transparency. The current land registration methods employed by governments, however, are open to fraud, hacks, and corruption of land records; fraud in particular is one of the major problems in land registration. In this study, the goal is to develop a framework that incorporates blockchain techniques to secure land data during the registration and transfer phases by preventing fraud. Blockchain provides a transparent, decentralized and robust infrastructure to build the framework upon. The blockchain is implemented with asymmetric-key encryption/decryption to securely store the land registration/transfer data: the data is encrypted with the landowner's public key, and a hash of the data is stored. The cryptographic hash function used is SHA, and a comparison of SHA-256 and SHA-512 is given and discussed. The dataset used to compare results consists of 200 JSON-object records, with each object identical for both SHA-256 and SHA-512 to remove data bias. The proposed framework performed 29% faster with SHA-512 than with SHA-256. The results indicate that the proposed framework performs better than the techniques proposed in current land registration research.","PeriodicalId":50314,"journal":{"name":"International Journal of Innovative Computing Information and Control","volume":"6 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84174187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
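The hashing and chaining steps this abstract describes can be sketched with Python's standard `hashlib`: each land record (a JSON object) is hashed with SHA-256 or SHA-512, and every block stores the previous block's hash, so tampering with any record is detectable. The record field names below are illustrative, not the authors' actual schema:

```python
# Minimal sketch: hash JSON land records (SHA-256 or SHA-512) and
# chain blocks by their predecessor's hash so tampering is detectable.
import hashlib
import json

def record_hash(obj: dict, algo: str = "sha256") -> str:
    # Canonical JSON (sorted keys) so equal records always hash equally.
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.new(algo, payload).hexdigest()

def build_chain(records, algo="sha256"):
    chain, prev = [], "0" * 64          # genesis predecessor hash
    for rec in records:
        block = {"record": rec, "prev_hash": prev}
        prev = record_hash(block, algo)
        chain.append({**block, "hash": prev})
    return chain

def verify_chain(chain, algo="sha256"):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False                # link to predecessor broken
        prev = record_hash({"record": block["record"],
                            "prev_hash": block["prev_hash"]}, algo)
        if block["hash"] != prev:
            return False                # record or hash was altered
    return True

# Illustrative land records; the study used 200 identical JSON objects
# per algorithm to compare SHA-256 and SHA-512 without data bias.
records = [{"parcel": i, "owner": f"owner-{i}"} for i in range(3)]
chain = build_chain(records, "sha256")
print(verify_chain(chain, "sha256"))       # True
chain[1]["record"]["owner"] = "attacker"   # tamper with one record
print(verify_chain(chain, "sha256"))       # False
```

Passing `algo="sha512"` to the same functions reproduces the paper's comparison dimension: identical records, identical chaining logic, with only the hash function varied.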