A Huffman based short message service compression technique using adjacent distance array
Pub Date: 2023-12-19 | DOI: 10.1504/ijict.2022.10052558
P. Sarker, Mir Lutfur Rahman
The short message service (SMS) is a wireless transmission medium for sending brief text messages. In the traditional system, a mobile device can carry at most 1,120 bits per SMS; with the conventional seven-bit character encoding, this limits a message to 160 characters. This research demonstrated that an SMS message can carry more than 200 characters by representing each character with roughly five bits, introducing a data structure called the adjacent distance array (ADA) built on the Huffman principle. Following the idea of lossless data compression, the proposed method generates each character's codeword with standard Huffman coding. The ADA, however, encodes the message by storing the distances between the ASCII values of the characters, and decoding proceeds without traversing the whole Huffman tree, which is the pivotal contribution of the research towards an effective SMS compression technique for personal digital assistants (PDAs). The encoding and decoding processes were analysed and contrasted with the conventional SMS text message system, and the proposed ADA technique performed markedly better in every aspect evaluated.
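As a rough illustration of the two ingredients the abstract names, the sketch below builds standard Huffman codewords from character frequencies and lists the ASCII distances between adjacent characters. The paper's exact ADA construction and its tree-free decoding are not described in the abstract, so the helper names (`huffman_codes`, `adjacent_distances`) and the example message are purely illustrative.

```python
# Standard Huffman coding plus an "adjacent distance" view of a message.
# Not the authors' ADA algorithm; an illustrative sketch only.
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codewords from character frequencies."""
    heap = [[w, [ch, ""]] for ch, w in Counter(text).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol message
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

def adjacent_distances(text):
    """First ASCII value, then distances between consecutive ASCII values (illustrative)."""
    codes = [ord(c) for c in text]
    return [codes[0]] + [b - a for a, b in zip(codes, codes[1:])]

msg = "hello sms compression"
codes = huffman_codes(msg)
bitstream = "".join(codes[c] for c in msg)
print(len(bitstream), "bits vs", 7 * len(msg), "bits in conventional 7-bit SMS")
print(adjacent_distances(msg)[:5])
```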
Logic Mining Approach: Shoppers’ Purchasing Data Extraction via Evolutionary Algorithm
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.1
Mohd Shareduwan Mohd Kasihmuddin, Nur Shahira Abdul Halim, Siti Zulaikha Mohd Jamaludin, M. Mansor, Alyaa Alway, Nur Ezlin Zamri, Siti Aishah Azhar, Muhammad Fadhil Marsani
Online shopping is a multi-billion-dollar industry worldwide. However, several challenges related to purchase intention can impact e-commerce sales. For example, e-commerce platforms are unable to identify which factors contribute to the high sales of a product, and online sellers have difficulty finding products that align with customers’ preferences. Therefore, this work utilised an artificial neural network to provide knowledge extraction for the online shopping industry and e-commerce platforms so that they might improve their sales and services. There have been limited attempts to perform knowledge extraction with neural network models in the online shopping field, especially research on online shoppers’ purchasing intentions. In this study, 2-satisfiability logic was used to represent the shopping attributes, and a special recurrent artificial neural network, the Hopfield neural network, was employed. To reduce learning complexity, a genetic algorithm was implemented to optimise the logical rule during the learning phase of the 2-satisfiability-based reverse analysis method. The performance of the genetic algorithm with 2-satisfiability-based reverse analysis was measured according to the selected performance evaluation metrics. The simulations suggested that the proposed model outperformed the existing model in logic mining on the online shoppers dataset.
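The abstract combines several components (2-satisfiability logic, a Hopfield network, and a genetic algorithm). As a hedged sketch of just the evolutionary ingredient, the snippet below runs a small genetic algorithm that searches for an assignment maximising the number of satisfied 2-SAT clauses; the Hopfield network and the reverse-analysis step are not reproduced, and the clause set is a made-up placeholder.

```python
# Toy genetic algorithm maximising the number of satisfied 2-SAT clauses.
import random

CLAUSES = [(1, -2), (2, 3), (-1, 3), (-3, -2)]   # literals: +i / -i for variable i
N_VARS = 3

def fitness(bits):
    """Count satisfied 2-SAT clauses for a 0/1 assignment."""
    def lit(l):
        v = bits[abs(l) - 1]
        return v if l > 0 else 1 - v
    return sum(1 for a, b in CLAUSES if lit(a) or lit(b))

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VARS)
            child = a[:cut] + b[cut:]                # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    assignment, satisfied = evolve()
    print(assignment, f"satisfies {satisfied}/{len(CLAUSES)} clauses")
```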
Does the Physical Type of House Still Affect Household Poverty in Indonesia? An Entropy-based Fuzzy Weighted Logistic Regression Approach
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.2
Ajiwasesa Harumeka, Taly Purwa
Poverty is one of the biggest challenges facing the world today. Numerous studies have concentrated on the characteristics that determine poverty in order to identify poor households, and one of the most important factors is the physical type of the house. In Indonesia, and especially in Surabaya, one of Indonesia’s big cities and the capital of East Java Province, the physical type of a house covers floor type, wall type, roof type, and floor area per inhabitant. This factor has given promising results for the country; it was therefore assumed that these variables might no longer distinguish wealthy households from poor ones. Poor household data are also an example of imbalanced data in terms of classification, which is a further concern. The Rare Event Weighted Logistic Regression (RE-WLR) and Entropy-based Fuzzy Weighted Logistic Regression (EFWLR) methods were utilised to overcome these problems. As a result, the only house-related factor that had a substantial impact on the likelihood of a household being classified as poor was the floor area per capita; the other three variables, namely floor type, wall type, and roof type, were not statistically significant. In addition, eliminating the physical type of house would reduce the Area Under the Curve of the RE-WLR and EFWLR methods by 6.78 percent and 6.85 percent, respectively.
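For readers unfamiliar with weighted logistic regression on imbalanced data, the sketch below fits a class-weighted logistic regression on synthetic household features and reports the AUC. It does not implement the rare-event or entropy-based fuzzy weighting of RE-WLR/EFWLR; scikit-learn's `class_weight="balanced"` merely stands in, and the feature values and coefficients are fabricated for illustration.

```python
# Class-weighted logistic regression on a synthetic, imbalanced poverty dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
floor_area = rng.gamma(shape=3.0, scale=6.0, size=n)   # m^2 per capita (toy)
floor_type = rng.integers(0, 2, n)                     # 0/1 dummy codes (toy)
wall_type = rng.integers(0, 2, n)
roof_type = rng.integers(0, 2, n)
X = np.column_stack([floor_area, floor_type, wall_type, roof_type])

# Rare positive class: poverty driven mainly by small floor area per capita.
p = 1 / (1 + np.exp(0.25 * (floor_area - 8)))
y = (rng.random(n) < 0.3 * p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("coefficients:", dict(zip(
    ["floor_area", "floor_type", "wall_type", "roof_type"], model.coef_[0])))
```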
Machine Learning Models for Behavioural Diversity of Asian Elephants Prediction Using Satellite Collar Data
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.3
Nurul Su'aidah Ahmad Radzali, A. Abu Bakar, Amri Izaffi Zamahsasri
Analysis of animal movement data using statistical applications and machine learning has developed rapidly in line with the development and use of various tracking devices. Location and movement data at temporal and spatial scales are collected using the Global Positioning System (GPS) to estimate the location of animals, while installing a satellite collar ensures continuous monitoring, as the received data are sent directly to an electronic mailbox. Nevertheless, identifying an exact pattern of elephant activity from satellite collar data is still challenging. This study aimed to propose a machine learning model to predict the behavioural diversity of Asian elephants. The study involved four main phases, including two levels of model development, to produce initial and primary classification models: data collection and preparation, data labelling and initial classification model development, classification of all data, and primary classification model development. The elephant behaviour data were collected by the Department of Wildlife and National Parks, Malaysia, from satellite collars attached to five elephants, three males and two females, in forest reserves from 2018 to 2020. The study’s outcome was a novel classification model that can predict the movement behaviour of Asian elephants. The findings showed that the XGBoost method produced a predictive model that classified Asian elephants’ behaviour with 100 percent accuracy. This study revealed the capability of machine learning to identify behaviour classes and to support decision-making when setting initiatives to preserve this species in the future.
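A hedged sketch of a multi-class XGBoost behaviour classifier is shown below. The study's actual collar features, behaviour labels, and preprocessing are not given in the abstract, so the displacement, turning-angle, and hour-of-day features and the three toy behaviour classes are assumptions made only for illustration.

```python
# Multi-class XGBoost classifier on synthetic collar-style movement features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 1500
displacement_m = rng.gamma(2.0, 150.0, n)    # distance moved between fixes (toy)
turning_angle = rng.uniform(0, 180, n)       # degrees (toy)
hour_of_day = rng.integers(0, 24, n)
X = np.column_stack([displacement_m, turning_angle, hour_of_day])

# Toy labels: 0 = resting, 1 = foraging, 2 = travelling (hypothetical classes).
y = np.where(displacement_m < 80, 0, np.where(displacement_m < 400, 1, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```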
Dengue Outbreak Detection Model Using Artificial Immune System: A Malaysian Case Study
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.4
Mohamad Farhan Mohamad Mohsin, A. Abu Bakar, A. Hamdan, M. Sahani, Zainudin Mohd Ali
Dengue is a rapidly spreading virus that poses a severe threat in Malaysia. It is essential to have an accurate early detection system that can trigger a prompt response, reducing deaths and morbidity. Nevertheless, uncertainties in dengue outbreak datasets reduce the robustness of existing detection models, which require a training phase and therefore fail to detect previously unseen outbreak patterns. This outcome leads to inaccurate decision-making and delays in implementing prevention plans. Anomaly detection and other detection-based problems have already been addressed with some success using danger theory (DT), a variation of the artificial immune system and a nature-inspired computing technique. Therefore, this study employed DT to develop a novel outbreak detection model, using a Malaysian dengue profile dataset for the experiment. The results revealed that the proposed DT model performed better than existing methods and significantly improved dengue outbreak detection. The findings demonstrated that including a DT detection mechanism enhanced the accuracy of the dengue outbreak detection model. Even without a training phase, the proposed model consistently achieved high sensitivity, high specificity, high accuracy, and a lower false alarm rate when distinguishing between outbreak and non-outbreak instances.
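As a loose illustration of a training-free, danger-theory-style detector, the sketch below derives a weekly "danger signal" from the jump in case counts relative to recent history and flags weeks whose signal crosses a threshold. The study's actual DT model, signals, and thresholds are not described in the abstract, and the case series is synthetic.

```python
# Training-free, signal-based outbreak flagging on a synthetic weekly case series.
import numpy as np

def danger_signals(cases, window=4):
    """Signal = excess of this week's count over the recent moving average."""
    signals = np.zeros(len(cases), dtype=float)
    for t in range(window, len(cases)):
        baseline = cases[t - window:t].mean()
        spread = cases[t - window:t].std() + 1e-9
        signals[t] = (cases[t] - baseline) / spread
    return signals

def detect_outbreaks(cases, threshold=2.0):
    """Flag weeks whose danger signal exceeds the threshold (no training phase)."""
    return danger_signals(cases) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    weekly_cases = rng.poisson(20, 52)
    weekly_cases[30:34] += np.array([25, 40, 55, 35])   # injected outbreak
    flagged = np.flatnonzero(detect_outbreaks(weekly_cases))
    print("weeks flagged as outbreak:", flagged)
```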
Visually Impaired Usability Requirements for Accessible Mobile Applications: A Checklist for Mobile E-book Applications
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.5
Munya Saleh Ba Matraf, N. Hashim, A. Hussain
An e-book is a book in electronic format, which can benefit all readers, particularly those who struggle with print books because of vision impairments. Nevertheless, the visually impaired cannot use regular e-books because these do not meet their unique needs; they require a more accessible e-book to gain the same benefits as typically sighted readers. Because a clear list of these needs is lacking, developers are not aware of the specific requirements of the visually impaired for e-book applications. This paper aimed to analyse the usability requirements of the visually impaired for usable and accessible e-book applications. Three main activities were conducted: reviewing the literature, conducting an online survey of the visually impaired, and comparing the two sets of results to obtain verified usability requirements. The study reviewed work on the usability and accessibility of e-books from 2010 to 2022, as well as common accessibility needs and standards for mobile applications. A total of 24 usability requirements were identified from the literature and compared with ten results obtained from an online survey of seven visually impaired respondents. With these verified usability requirements, designers and practitioners have a checklist for ensuring that all needs are considered when designing mobile e-books for the visually impaired.
Modelling and Forecasting the Trend in Cryptocurrency Prices
Pub Date: 2023-07-24 | DOI: 10.32890/jict2023.22.3.6
Nur Maisarah Abdul Rashid, M. Ismail
The prediction of cryptocurrency prices is a hot topic among academics. Nevertheless, predicting cryptocurrency prices accurately can be challenging in the real world. Numerous studies have been undertaken to determine the best model for successful prediction, but many lacked accurate results because they did not identify the critical features. It is important to remember that trends are critical features of a time series for extracting information from data. Little research has demonstrated that the cryptocurrency trend comprises both linear and nonlinear patterns. Therefore, this study attempted to fill this gap and focused on modelling and forecasting trends in cryptocurrency. It examined the linear and nonlinear dependency trend patterns of the top five cryptocurrency closing prices. The weekly historical data of each cryptocurrency were taken at different periods owing to the availability of data on the system. To achieve its goal, this study examined the results through residual trend plots and diagnostic statistical checks using three deterministic methods: linear trend regression, quadratic trend, and exponential trend. Based on the minimum Akaike Information Criterion (AIC), the results showed that the top five cryptocurrency closing price series contained both nonlinear and linear trend patterns. The findings of this study will assist traders and investors in understanding the trend of the top five cryptocurrencies and choosing a suitable model to predict cryptocurrency prices. Additionally, an accurate forecast will help protect investors from losing their investment.
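The trend comparison described here translates directly into code: fit linear, quadratic, and exponential trend curves to a price series and compare their AIC values. The sketch below does this on a synthetic weekly series; it is not the study's dataset or exact model specification.

```python
# Compare deterministic trend models on a synthetic weekly price series via AIC.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_aic(y, y_hat, k):
    """AIC for a least-squares fit with k estimated trend parameters."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

rng = np.random.default_rng(1)
t = np.arange(120, dtype=float)                               # weeks
price = 50 * np.exp(0.015 * t) + rng.normal(0, 10, t.size)    # toy closing prices

fits = {}
lin = np.polyfit(t, price, 1)
fits["linear"] = (np.polyval(lin, t), 2)
quad = np.polyfit(t, price, 2)
fits["quadratic"] = (np.polyval(quad, t), 3)
(a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), t, price, p0=(50, 0.01))
fits["exponential"] = (a * np.exp(b * t), 2)

for name, (y_hat, k) in fits.items():
    print(f"{name:>11}: AIC = {gaussian_aic(price, y_hat, k):.1f}")
```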
Optimized Cover Selection for Audio Steganography Using Multi-Objective Evolutionary Algorithm
Pub Date: 2023-04-03 | DOI: 10.32890/jict2023.22.2.5
Muhammad Harith Noor Azam, Farida Hazwani MOHD RIDZUAN, M. N. S. Mohd Sayuti
Existing embedding techniques depend on cover audio selected by users. Unknowingly, users may make a poor cover audio selection that is not optimised in its capacity or imperceptibility, which could reduce the effectiveness of any embedding technique. As a trade-off exists between capacity and imperceptibility, producing a method that optimises both features is crucial. One of the search methods commonly used to solve trade-off problems in various fields is the Multi-Objective Evolutionary Algorithm (MOEA). Therefore, this research proposed a new method for optimising cover audio selection for audio steganography using the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which falls under the MOEA Pareto dominance paradigm. The proposed method suggests cover audio to users based on imperceptibility and capacity. A sample difference calculation was first formulated to determine the maximum capacity of each cover audio in the cover audio database. Next, NSGA-II was implemented to determine the optimised solutions based on the parameters provided by each chromosome. The experimental results demonstrated the effectiveness of the proposed method, as its solutions dominated those of the previous method, which selected covers based on only one criterion. In addition, by accounting for the trade-off, the proposed method ranked this solution as the highest priority, whereas the previous method placed the same solution as low as 71st in the priority ranking. In conclusion, the method optimised the selection of cover audio and thereby improved the effectiveness of the audio steganography used. It can also help computer and mobile device users who remain unfamiliar with audio steganography in an age where information security is crucial.
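The selection step rests on Pareto dominance over the two objectives, capacity and imperceptibility. The sketch below shows that core idea only: it extracts the non-dominated set from a handful of made-up candidate covers, without the full NSGA-II machinery (fronts, crowding distance, genetic operators) or the paper's sample-difference capacity formula.

```python
# Pareto dominance over (capacity, imperceptibility) for toy cover-audio candidates.
from dataclasses import dataclass

@dataclass
class Cover:
    name: str
    capacity_bits: float      # higher is better
    imperceptibility: float   # higher is better (e.g., SNR in dB)

def dominates(a: Cover, b: Cover) -> bool:
    """a dominates b if it is no worse in both objectives and better in at least one."""
    ge = (a.capacity_bits >= b.capacity_bits
          and a.imperceptibility >= b.imperceptibility)
    gt = (a.capacity_bits > b.capacity_bits
          or a.imperceptibility > b.imperceptibility)
    return ge and gt

def non_dominated(covers):
    return [c for c in covers
            if not any(dominates(other, c) for other in covers if other is not c)]

candidates = [
    Cover("track_a.wav", 48_000, 41.2),
    Cover("track_b.wav", 36_000, 55.7),
    Cover("track_c.wav", 30_000, 39.0),   # dominated by track_a
    Cover("track_d.wav", 52_000, 38.5),
]
for c in non_dominated(candidates):
    print("suggest:", c.name)
```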
Taguchi-Grey Relational Analysis Method for Parameter Tuning of Multi-objective Pareto Ant Colony System Algorithm
Pub Date: 2023-04-03 | DOI: 10.32890/jict2023.22.2.1
Shatha Abdulhadi Muthana, K. Ku-Mahamud
In any metaheuristic, the parameter values strongly affect the efficiency of the algorithm’s search. This research aimed to find the optimal parameter values for the Pareto Ant Colony System (PACS) algorithm, which is used to obtain solutions for the generator maintenance scheduling problem. For optimal maintenance scheduling with low cost, high reliability, and few violations, the parameter values of the PACS algorithm were tuned using the Taguchi and Grey Relational Analysis (Taguchi-GRA) method through a search-based approach. The new parameter values were tested on two systems, i.e., 26-unit and 36-unit systems, for a window with operational hours of 3000 to 5000. The grey relational grade (GRG) performance metric and the Friedman test were used to evaluate the algorithm’s performance. The Taguchi-GRA method that produced the new values for the algorithm’s parameters was shown to provide a better multi-objective generator maintenance scheduling (GMS) solution. These values can serve as a benchmark when solving multi-objective GMS problems using the multi-objective PACS algorithm and its variants.
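A hedged sketch of the grey relational grade computation is given below: each trial's objective values (cost, reliability, violations, here invented) are normalised, converted to grey relational coefficients, and averaged into a single grade used to rank parameter settings. The Taguchi orthogonal-array design and the PACS algorithm itself are not reproduced.

```python
# Grey relational grade (GRG) over invented trial responses from a parameter study.
import numpy as np

# Rows = trials from a hypothetical orthogonal array; columns = cost, reliability, violations.
responses = np.array([
    [120.0, 0.91, 3],
    [105.0, 0.88, 5],
    [ 98.0, 0.93, 2],
    [110.0, 0.95, 4],
])
larger_is_better = np.array([False, True, False])   # cost and violations: smaller is better

def grey_relational_grade(y, larger, zeta=0.5):
    # Normalise each objective to [0, 1] with 1 as the ideal value.
    y_min, y_max = y.min(axis=0), y.max(axis=0)
    norm = np.where(larger, (y - y_min) / (y_max - y_min),
                    (y_max - y) / (y_max - y_min))
    delta = 1.0 - norm                               # distance to the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                        # grade per trial

grg = grey_relational_grade(responses, larger_is_better)
print("GRG per trial:", np.round(grg, 3), "| best trial:", int(grg.argmax()))
```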
Hybrid Neighbourhood Component Analysis with Gradient Tree Boosting for Feature Selection in Forecasting Crime Rate
Pub Date: 2023-04-03 | DOI: 10.32890/jict2023.22.2.3
A. R. Khairuddin, R. Alwee, H. Haron
Crime forecasting is beneficial because it provides valuable information to governments and authorities for planning efficient crime prevention measures. Most criminology studies have found that several factors, such as social, demographic, and economic factors, significantly affect crime occurrence. Therefore, criminology experts and researchers study the effect of such factors on criminal activities, as this provides relevant insight into possible future crime trends. Based on the literature review, proper analyses for identifying the significant factors that influence crime are scarce and limited. Therefore, this study proposed a hybrid model that integrates Neighbourhood Component Analysis (NCA) with Gradient Tree Boosting (GTB) to model United States (US) crime rate data. NCA is a feature selection technique used in this study to identify the significant factors influencing the crime rate. Once the significant factors were identified, an artificial intelligence technique, GTB, was used to model the crime data and predict the crime rate. The performance of the proposed model was compared with existing models using quantitative measurement error analysis. Based on the results, the proposed NCA-GTB model outperformed other crime models in predicting the crime rate. As shown by the experimental results, the proposed model produced the smallest quantitative measurement error in the case study.
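As a rough sketch of the hybrid select-then-boost idea, the snippet below selects a feature subset and fits a gradient tree boosting regressor on synthetic socio-economic data. NCA-based feature weighting is not reimplemented here; a mutual-information selector stands in for it, and all feature names and values are fabricated.

```python
# Feature selection followed by gradient tree boosting on synthetic crime-rate data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 1000
features = {
    "unemployment": rng.normal(6, 2, n),
    "median_income": rng.normal(55, 10, n),
    "population_density": rng.gamma(2, 500, n),
    "noise_1": rng.normal(0, 1, n),          # irrelevant features the
    "noise_2": rng.normal(0, 1, n),          # selector should discard
}
X = np.column_stack(list(features.values()))
crime_rate = (3 * features["unemployment"] - 0.05 * features["median_income"]
              + 0.001 * features["population_density"] + rng.normal(0, 1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, crime_rate, random_state=3)
selector = SelectKBest(mutual_info_regression, k=3).fit(X_tr, y_tr)
kept = [name for name, keep in zip(features, selector.get_support()) if keep]
model = GradientBoostingRegressor(random_state=3)
model.fit(selector.transform(X_tr), y_tr)
rmse = mean_squared_error(y_te, model.predict(selector.transform(X_te))) ** 0.5
print("selected features:", kept, "| test RMSE:", round(rmse, 3))
```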