Pub Date: 2022-12-27 | DOI: 10.5614/itbj.ict.res.appl.2022.16.3.2
Hurriyatul Fitriyah
Many researchers have used images to measure the volume and weight of fruits so that the measurement can be done remotely and without contact. There are various image-based methods for fruit volume estimation, i.e., Basic Shape, Solid of Revolution, Conical Frustum, and Regression; weight estimation generally uses Regression. This study analyzed the accuracy of these methods. Tests were done by taking images of symmetrical fruits (represented by tangerines) and non-symmetrical fruits (represented by strawberries). The images were segmented in the saturation color channel to obtain binary images. The Regression method used Diameter, Projection Area, and Perimeter as features extracted from the binary images. For symmetrical fruits, the best accuracy was obtained with Linear Regression based on Diameter (LRD), which gave the highest R2 (0.96 for volume and 0.93 for weight) and the lowest RMSE (5.7 mm3 for volume and 5.3 grams for weight). For non-symmetrical fruits, the highest accuracy was given by Linear Regression based on Diameter (LRD) and Linear Regression based on Area (LRA), with an R2 of 0.8 for both volume and weight. The RMSE of LRD and LRA for strawberries was 3.3 mm3 for volume and 1.4 grams for weight.
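As a rough illustration of the Regression approach, the sketch below extracts a diameter and a projection-area feature from a binary mask and fits a linear model. The feature definitions (bounding-box extent, pixel count) and the least-squares fit are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def mask_features(mask):
    """Extract a Diameter and Projection Area feature from a binary
    mask (2-D array of 0/1), as used by the Regression method."""
    ys, xs = np.nonzero(mask)
    # bounding-box extent in pixels stands in for the fruit diameter
    diameter = max(xs.max() - xs.min(), ys.max() - ys.min()) + 1
    area = mask.sum()  # projected area = count of foreground pixels
    return diameter, area

def fit_linear(x, y):
    """Ordinary least squares y = a*x + b, e.g. volume vs. diameter."""
    a, b = np.polyfit(x, y, 1)
    return a, b
```

In practice the model would be fit on measured (diameter, volume) or (diameter, weight) pairs from a calibration set, then applied to new segmented images.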
Title: Accuracy of Various Methods to Estimate Volume and Weight of Symmetrical and Non-Symmetrical Fruits using Computer Vision
Journal of ICT Research and Applications
Pub Date: 2022-12-27 | DOI: 10.5614/itbj.ict.res.appl.2022.16.3.3
Myo Thet Htun
The Myanmar music industry urgently needs an efficient broadcast monitoring system to solve copyright infringement issues and illegal benefit-sharing between artists and broadcasting stations. In this paper, a broadcast monitoring system is proposed for Myanmar FM radio stations by utilizing space-saving audio fingerprint extraction based on the Mel Frequency Cepstral Coefficient (MFCC). This study focused on reducing the memory requirement for fingerprint storage while preserving the robustness of the audio fingerprints to common distortions such as compression, noise addition, etc. In this system, a three-second audio clip is represented by a 2,712-bit fingerprint block. This significantly reduces the memory requirement when compared to Philips Robust Hashing (PRH), one of the dominant audio fingerprinting methods, where a three-second audio clip is represented by an 8,192-bit fingerprint block. The proposed system is easy to implement and achieves correct and speedy music identification even on noisy and distorted broadcast audio streams. In this research work, we deployed an audio fingerprint database of 7,094 songs and broadcast audio streams of four local FM channels in Myanmar to evaluate the performance of the proposed system. The experimental results showed that the system achieved reliable performance.
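A minimal sketch of turning an MFCC matrix into a compact bit fingerprint and matching it by Hamming distance. The sign-of-difference binarization used here is an assumption for illustration; it is not the paper's exact 2,712-bit encoding.

```python
import numpy as np

def fingerprint_bits(mfcc):
    """Binarize an MFCC matrix (frames x coefficients) into a bit
    fingerprint: bit = 1 where a coefficient increases between
    consecutive frames. Illustrative encoding only."""
    return (np.diff(mfcc, axis=0) > 0).astype(np.uint8)

def hamming_distance(fp1, fp2):
    """Number of differing bits; small distances indicate a match
    even under compression or added noise."""
    return int(np.count_nonzero(fp1 != fp2))
```

Matching a broadcast clip then amounts to scanning the fingerprint database for the block with the smallest Hamming distance below a threshold.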
Title: Compact and Robust MFCC-based Space-Saving Audio Fingerprint Extraction for Efficient Music Identification on FM Broadcast Monitoring
Pub Date: 2022-12-27 | DOI: 10.5614/itbj.ict.res.appl.2022.16.3.1
M. Ahmada, Saiful Akbar
Building energy problems have many aspects, one of which is the difficulty of measuring energy efficiency. With the current growth of data, energy efficiency can be measured by developing predictive models that estimate future building energy needs. However, the massive amount of data raises several problems regarding data quality and the lack of scalability in terms of computation memory and time during modeling. In this study, we used data reduction and ensemble learning techniques to overcome these problems: numerosity reduction, dimension reduction, and a boosting-based LightGBM model combined with a bagging technique, which we compared with incremental learning. Our experimental results showed that the numerosity reduction and dimension reduction techniques sped up training and prediction without reducing accuracy. Testing the ensemble learning model also revealed that bagging had the best performance in terms of RMSE and speed, with an RMSE of 262.304, and was 1.67 times faster than the model with incremental learning.
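The combination of numerosity reduction and bagging can be sketched as below. A 1-D least-squares fit stands in for the LightGBM base learner, and random subsampling stands in for the paper's numerosity reduction; both substitutions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def numerosity_reduction(X, y, frac=0.5):
    """Keep a random fraction of the rows -- a simple stand-in for
    numerosity reduction, which shrinks training time and memory."""
    idx = rng.choice(len(X), size=int(len(X) * frac), replace=False)
    return X[idx], y[idx]

def bagged_fit_predict(X, y, X_new, n_models=10):
    """Bagging: average the predictions of models fit on bootstrap
    resamples. Each base learner here is a 1-D least-squares fit,
    standing in for a LightGBM regressor."""
    preds = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=len(X), replace=True)
        a, b = np.polyfit(X[idx], y[idx], 1)
        preds.append(a * X_new + b)
    return np.mean(preds, axis=0)
```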
Title: Energy Consumption Prediction Using Data Reduction and Ensemble Learning Techniques
Pub Date: 2022-10-11 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.6
Kemas Wiharja, D. Murdiansyah, M. Z. Romdlony, Tiwa Ramdhani, Muhammad Ramadhan Gandidi
Several works have presented the Hadith on different digital platforms, ranging from websites to mobile apps. These works successfully present the text of the Hadith to users but do not help them answer particular questions about religious matters. Therefore, in this work we propose a question-answering system built on a Hadith knowledge graph. To interpret user questions correctly, we used the Levenshtein distance function, and for storing the Hadith in graph format we used Neo4J as the graph database. Our main findings were: (i) a knowledge graph is suitable for representing the Hadith as well as for reasoning over it, and (ii) our proposed approach achieved 95% top-1 accuracy.
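The Levenshtein distance used to match user questions against stored phrasings is the standard edit-distance dynamic program, sketched here:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

A small distance between a user's (possibly misspelled) question and a stored question template lets the system map the question onto the right graph query.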
Title: A Questions Answering System on Hadith Knowledge Graph
Pub Date: 2022-09-23 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.5
B. O. Sadiq, H. Bello-Salau, Latifat Abduraheem-Olaniyi, B. Muhammed, Sikiru Olayinka Zakariyya
Large amounts of surveillance video data are recorded that contain many redundant frames, which makes video browsing and retrieval difficult and increases bandwidth utilization, storage capacity, and time consumption. To reduce bandwidth utilization and storage requirements to the barest minimum, keyframe extraction strategies have been developed that extract unique keyframes while removing redundancies. Despite the improvements achieved in keyframe extraction, a significant number of redundant frames still remain in summarized videos. To address this issue, the current paper proposes an enhanced keyframe extraction strategy using k-means clustering and a statistical approach. Surveillance footage, movie clips, advertisements, and sports videos from a benchmark database as well as Compeng IP surveillance videos were used to evaluate the performance of the proposed method. In terms of compression ratio, the results showed that the proposed scheme outperformed existing schemes by 2.82%, implying that it removed further redundant frames while retaining video quality. In terms of video playtime, there was an average reduction of 27.32%, making video content retrieval less cumbersome compared with existing schemes.
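A simplified sketch of k-means-based keyframe selection: cluster per-frame feature vectors and keep the frame nearest each centroid. The feature representation and the statistical filtering step of the paper are omitted; this only illustrates the clustering core.

```python
import numpy as np

def kmeans_keyframes(features, k, iters=20, seed=0):
    """Cluster per-frame feature vectors with plain k-means and return
    the sorted indices of the frames closest to each centroid."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # guard against empty clusters
                centroids[j] = features[labels == j].mean(axis=0)
    d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    return sorted(set(int(d[:, j].argmin()) for j in range(k)))
```

Frames that fall in the same cluster are treated as redundant, so only the representative per cluster survives into the summary.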
Implementation was done using MATLAB R2020b.
Title: Towards Enhancing Keyframe Extraction Strategy for Summarizing Surveillance Video: An Implementation Study
Pub Date: 2022-09-09 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.4
O. Fagbuagun, O. Folorunsho, Lawrence Bunmi Adewole, Titilayo Akin-Olayemi
Breast cancer is a deadly disease affecting women around the world. It can spread rapidly to other parts of the body, causing untimely death when undetected, due to the rapid growth and division of cells in the breast. Early diagnosis tends to increase the survival rate of women suffering from the disease. The use of technology to detect breast cancer in women has been explored over the years. A major drawback of most research in this area is a low detection accuracy, partly due to the small number of data sets available to train classifiers and the lack of efficient algorithms that achieve optimal results. This research aimed to develop a model that uses a machine learning approach (a convolutional neural network) to detect breast cancer in women with significantly high accuracy. In this paper, a model was developed using 569 mammograms of various breasts diagnosed with benign and malignant cancers. The model achieved an accuracy of 98.25% and a sensitivity of 99.5% after 80 iterations.
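The building blocks of such a convolutional network can be sketched in plain NumPy. This illustrates only the generic operations (convolution, ReLU, max pooling); the paper's actual architecture, layer counts, and hyperparameters are not reproduced here.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, the core CNN operation that
    slides a learned filter over the image."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise non-linearity: negative responses are zeroed."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking conv2d → relu → max_pool layers, then a small classifier on the flattened output, gives the benign-vs-malignant decision.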
Title: Breast Cancer Diagnosis in Women Using Neural Networks and Deep Learning
Pub Date: 2022-08-31 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.1
Soni Yora, A. Barmawi
The noiseless steganography method proposed by Wibowo can embed up to six characters in the provided cover text, but more than 59% of Indonesian words are longer than six characters, so there is room to improve Wibowo's method. This paper proposes an improvement of Wibowo's method by modifying the shifting codes and using context-based language generation. Based on 300 test messages, 99% of messages with more than six characters could be embedded by the proposed method, versus only 34% with Wibowo's method. Wibowo's method can embed more than six characters only if the number of shifting codes is less than three, while the proposed method can do so even with more than three shifting codes. Furthermore, the security of representing the number of code digits is increased by introducing a private key, reducing the probability of guessing it to less than 1, whereas in Wibowo's method this probability is 1. The naturalness of the generated cover sentences was maintained at about 99% with the proposed method, versus 98.61% with Wibowo's method.
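The effect of the private key on the digit-count representation can be illustrated with a keyed modular offset: without the key, every residue is equally likely, so an attacker's guess probability drops from 1 to 1/modulus. This is purely an illustration of the idea; it is not Wibowo's scheme or the paper's exact construction.

```python
def encode_digit_count(n_digits, key, modulus=10):
    """Hide the number of code digits by adding a secret key modulo
    `modulus`. Hypothetical construction for illustration only."""
    return (n_digits + key) % modulus

def decode_digit_count(token, key, modulus=10):
    """Recover the digit count; only possible with the shared key."""
    return (token - key) % modulus
```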
Title: Strengthening INORMALS Using Context-based Natural Language Generation
Pub Date: 2022-08-31 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.3
Bashar Tahayna, R. Ayyasamy, Rehan Akbar
The vast information space produced by the plethora of social media platforms in general, and microblogging in particular, has spawned a slew of new applications and prompted the rise and expansion of sentiment analysis research. We propose a sentiment analysis technique that identifies the main parts that describe tweet intent and enriches them with relevant words, phrases, or even inferred variables. We followed a state-of-the-art hybrid deep learning model that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) to classify tweets based on their polarity. To preserve the latent relationships between tweet terms and their expanded representation, sentence encoding and contextualized word embeddings were utilized. To investigate the performance of tweet embeddings on the sentiment analysis task, we tested several context-free models (Word2Vec, Sentence2Vec, GloVe, and FastText), a dynamic embedding model (BERT), deep contextualized word representations (ELMo), and an entity-based model (Wikipedia).
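The two stages of such a CNN-LSTM hybrid can be sketched in NumPy: a 1-D convolution extracts local n-gram features from the embedded tweet, and an LSTM cell accumulates them sequentially. The weight layout and dimensions below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def conv1d(seq, kernel):
    """Valid 1-D convolution over a sequence -- the CNN stage that
    detects local n-gram patterns."""
    n = len(seq) - len(kernel) + 1
    return np.array([np.dot(seq[i:i + len(kernel)], kernel) for i in range(n)])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step on input x with hidden state h and cell state c.
    W stacks the four gate weight matrices (input, forget, cell,
    output), each acting on the concatenated [h, x] vector."""
    z = W @ np.concatenate([h, x])
    k = len(h)
    i = sigmoid(z[:k])          # input gate
    f = sigmoid(z[k:2 * k])     # forget gate
    g = np.tanh(z[2 * k:3 * k])  # candidate cell update
    o = sigmoid(z[3 * k:])      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c
```

Running the CNN features through repeated `lstm_step` calls and classifying the final hidden state yields the polarity decision.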
The proposed method and results prove that text enrichment improves the accuracy of sentiment polarity classification by a notable margin.
Title: Context-Aware Sentiment Analysis using Tweet Expansion Method
Pub Date: 2022-08-31 | DOI: 10.5614/itbj.ict.res.appl.2022.16.2.2
Veronica Windha Mahyastuty, I. Iskandar, H. Hendrawan, M. S. Arifianto
Massive Machine Type Communication (mMTC) can be used to connect a large number of sensors over a wide coverage area. One setting where mMTC can be applied is wireless sensor networks (WSNs). A WSN consists of several sensor nodes that send their sensing information to a cluster head (CH), which can then forward it to a high altitude platform (HAP) station. Since the sensor nodes may send their sensing information at the same time through the same medium, collisions can occur; when this happens, a sensor node must re-send its data, which wastes energy in the WSN. In this paper, we propose a Medium Access Control (MAC) protocol that controls access from several sensor nodes during data transmission to avoid collisions. The sensor nodes send Round Robin, Interrupt, and Query data every eight hours. The initial slot for transmission of the Round Robin data can be either randomized or reserved. A performance analysis was done to assess the efficiency of the network with the proposed MAC protocol. Based on the series of simulations that was conducted, the proposed MAC protocol can support a HAP-based WSN system for monitoring every eight hours.
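The difference between reserved and randomized initial slots can be sketched with a toy slot-assignment model: reserved assignment is collision-free whenever there are at least as many slots as nodes, while random access lets two nodes pick the same slot. This is an illustrative simulation, not the paper's protocol or traffic model.

```python
import numpy as np

def collisions(slot_choices):
    """Count the nodes involved in collisions, i.e. nodes whose chosen
    slot was also chosen by at least one other node."""
    _, counts = np.unique(slot_choices, return_counts=True)
    return int(counts[counts > 1].sum())

def random_access(n_nodes, n_slots, seed=0):
    """Each node independently picks a random initial slot."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_slots, size=n_nodes)

def reserved_access(n_nodes, n_slots):
    """Node i always transmits in slot i % n_slots: collision-free
    whenever n_nodes <= n_slots, at the cost of fixed scheduling."""
    return np.arange(n_nodes) % n_slots
```

Colliding nodes must retransmit, so fewer collisions translates directly into the higher network efficiency reported for the reserved scheme.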
The proposed MAC protocol with an initial slot that is reserved for transmission of the Round Robin data has greater network efficiency than one with a randomized slot.
Title: Medium Access Control Protocol for High Altitude Platform Based Massive Machine Type Communication
Pub Date: 2022-05-17 | DOI: 10.5614/itbj.ict.res.appl.2022.16.1.6
Yahya M. Tashtoush, Dirar A. Darweesh, Omar M. Darwish, B. Alsinglawi, Rasha Obeidat
Currently, most organizations have a defense system to protect their digital communication network against cyberattacks. However, these defense systems treat all network traffic the same, regardless of whether it involves profit or non-profit websites. This leads to enforcing more security policies than necessary, which negatively affects network speed. Since the most dangerous cyberattacks are aimed at commercial websites, because they hold more critical data such as credit card numbers, it is better to prioritize the defense system towards attacks involving profit websites. This study evaluated the effect of textual website metrics in determining the type of a website as profit or non-profit for security purposes. Classifiers were built to predict the website type by applying machine learning techniques to a dataset of profit and non-profit websites. Both traditional machine learning and deep learning techniques were applied. The results showed that J48 performed best in terms of accuracy in all cases. The newly built models can be a significant tool for organizations' defense systems, as they help implement the security policies appropriate to attacks involving profit and non-profit websites.
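J48 is Weka's implementation of the C4.5 decision tree, whose core step is choosing the split on a feature (here, a textual metric such as a term count) that maximizes information gain. A minimal sketch of that step, with hypothetical feature values:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = [labels.count(l) for l in set(labels)]
    return -sum((c / n) * math.log2(c / n) for c in counts)

def best_split(values, labels):
    """Choose the threshold on one textual metric that maximizes
    information gain -- the decision made at each C4.5/J48 tree node."""
    base = entropy(labels)
    best_t, best_gain = None, 0.0
    for t in sorted(set(values))[:-1]:
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        gain = (base
                - (len(left) / len(labels)) * entropy(left)
                - (len(right) / len(labels)) * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```

Recursing on each side of the chosen threshold until the leaves are pure (or a stopping rule fires) builds the full profit/non-profit classifier.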
This will have a positive impact on the security and efficiency of the network.
Title: A Classifier to Detect Profit and Non Profit Websites Upon Textual Metrics for Security Purposes