Abstract Let $H$ be a Cartesian product graph of even cycles and paths, where the first factor is an even cycle of length at least $4$ and the second factor is a path with at least two nodes or an even cycle. Then $H$ is an equitable bipartite graph, which includes the torus, the column-torus and the even $k$-ary $n$-cube as special cases. For any node $w$ of $H$ and any two distinct nodes $u$ and $v$ in the partite set of $H$ not containing $w$, an algorithm is introduced to construct a Hamiltonian path connecting $u$ and $v$ in $H-w$.
Yuxing Yang, "Hyper-Hamiltonian Laceability of Cartesian Products of Cycles and Paths", The Computer Journal, doi:10.1093/comjnl/bxac196, 2023.
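The laceability property described in the abstract can be checked by brute force on the smallest instance. The sketch below (not the paper's algorithm, which is constructive) builds the Cartesian product $C_4 \square P_2$, deletes a node $w$, and searches for a Hamiltonian path between two nodes of the partite set opposite $w$:

```python
def product_graph(c, p):
    # Cartesian product of cycle C_c and path P_p: nodes are pairs (i, j)
    adj = {(i, j): set() for i in range(c) for j in range(p)}
    for i, j in adj:
        adj[(i, j)] |= {((i + 1) % c, j), ((i - 1) % c, j)}  # cycle edges
        if j + 1 < p: adj[(i, j)].add((i, j + 1))            # path edges
        if j > 0:     adj[(i, j)].add((i, j - 1))
    return adj

def hamiltonian_path(adj, u, v):
    # brute-force DFS for a path from u to v visiting every node exactly once
    n = len(adj)
    def dfs(cur, seen, path):
        if len(path) == n:
            return path if cur == v else None
        for nxt in adj[cur]:
            if nxt not in seen:
                found = dfs(nxt, seen | {nxt}, path + [nxt])
                if found:
                    return found
        return None
    return dfs(u, {u}, [u])

# H = C4 x P2; remove w = (0, 0); u and v lie in the partite set of odd i + j
adj = product_graph(4, 2)
w = (0, 0)
del adj[w]
for node in adj:
    adj[node].discard(w)
path = hamiltonian_path(adj, (1, 0), (3, 0))
```

Exhaustive search only scales to tiny cases; the point of the paper is that such a path can be constructed directly for the whole graph family.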
Gokhan Ozogur, Mehmet Ali Erturk, Zeynep Gurkas Aydin, Muhammed Ali Aydin
Abstract Android is the dominant operating system in the smartphone market, and millions of applications exist in various application stores. The growth in the number of applications has made it necessary to detect malicious applications quickly. In contrast to dynamic analysis, static analysis can produce results in a shorter time because the applications do not need to be run. However, obtaining information from application packages using reverse engineering techniques still requires substantial processing power. Although some attempts have been made to address this by analyzing binary files without decoding the source code, there is still more work to be done in this area. In this study, we analyzed applications at the bytecode level without decoding the binary source files. We propose a model that uses Term Frequency-Inverse Document Frequency (TF-IDF) word representation for feature extraction and Extreme Gradient Boosting (XGBoost) for classification. The experimental results show that our model classifies a given application package as malware or benign in 2.75 s with a 99.05% F1-score on a balanced dataset, and in 3.30 s with a 99.35% F1-score on an imbalanced dataset containing obfuscated malware.
"Android Malware Detection in Bytecode Level Using TF-IDF and XGBoost", The Computer Journal, doi:10.1093/comjnl/bxac198, 2023.
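To make the feature-extraction step concrete, here is a minimal pure-Python TF-IDF sketch over toy opcode sequences. The opcode names are illustrative only, and common library implementations (e.g. scikit-learn) add smoothing and normalization that this plain textbook formula omits:

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of token lists (e.g. opcode sequences per application)
    n = len(docs)
    df = Counter()                      # document frequency per token
    for doc in docs:
        for t in set(doc):
            df[t] += 1
    vectors = []
    for doc in docs:
        tf, total = Counter(doc), len(doc)
        # weight = term frequency * inverse document frequency
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [["invoke-virtual", "const-string", "invoke-virtual"],
        ["const-string", "new-instance"],
        ["invoke-virtual", "goto"]]
vecs = tf_idf(docs)
```

Tokens that appear in fewer documents receive higher weights, which is what lets the classifier focus on discriminative opcodes.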
Reza Ebrahim Pourian, Mehdi Fartash, Javad Akbari Torkestani
Abstract This paper presents a deep learning model for an energy-aware task scheduling algorithm based on learning automata (LA) in fog computing (FC) applications. FC is a distributed computing model that serves as an intermediate layer between the cloud and the Internet of Things (IoT) to improve quality of service. The IoT is closely related to the wireless sensor network (WSN), and one of its important applications is a global approach to health-care infrastructure development that reflects recent advances in WSNs. The most influential factor in energy consumption is task scheduling. In this paper, reducing energy consumption is investigated as an important challenge in the fog environment. An algorithm is presented that solves the task scheduling problem based on LA and measures the makespan (MK) and cost parameters. A new deep artificial neural network model is then proposed, based on the presented LA task scheduling algorithm for fog computing. The proposed neural model can, for the first time, predict the relation among MK, energy and cost parameters versus virtual machine (VM) length. The results show that all of the desired parameters can be predicted with high precision.
"A Deep Learning Model for Energy-Aware Task Scheduling Algorithm Based on Learning Automata for Fog Computing", The Computer Journal, doi:10.1093/comjnl/bxac192, 2023.
Mingzhe Zhu, Wanyue Xu, Wei Li, Zhongzhi Zhang, Haibin Kan
Abstract Graph products have been extensively applied to model complex networks exhibiting the striking properties observed in real-world complex systems. In this paper, we study hitting times for random walks on a class of graphs generated iteratively by the edge corona product. We first derive recursive solutions for the eigenvalues and eigenvectors of the normalized adjacency matrix associated with these graphs. Based on these results, we obtain several quantities related to hitting times of random walks: iterative formulas for the two-node hitting time, and closed-form expressions both for Kemeny's constant, defined as a weighted average of hitting times over all node pairs, and for the arithmetic mean of hitting times over all pairs of nodes.
"Hitting Times of Random Walks on Edge Corona Product Graphs", The Computer Journal, doi:10.1093/comjnl/bxac189, 2023.
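For intuition, two-node hitting times on any small graph follow the standard first-step recurrence $h(u,v) = 1 + \frac{1}{\deg u}\sum_{x \sim u} h(x,v)$ with $h(v,v) = 0$. The Gauss-Seidel sketch below verifies the known closed form $d(n-d)$ on the cycle $C_4$; it is a numerical sanity check, not the paper's recursive edge-corona analysis:

```python
def hitting_times_to(adj, target, iters=5000):
    # Gauss-Seidel iteration on h(u) = 1 + mean of h over neighbours,
    # with the boundary condition h(target) = 0
    h = {u: 0.0 for u in adj}
    for _ in range(iters):
        for u in adj:
            if u != target:
                h[u] = 1.0 + sum(h[x] for x in adj[u]) / len(adj[u])
    return h

# cycle C4: hitting time from a node at distance d to the target is d * (n - d)
c4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
h = hitting_times_to(c4, target=0)
```

The iteration converges because the update matrix of the absorbing walk is substochastic; a direct linear solve would give the same answer in one step.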
Taha Mansouri, Mohammad Reza Sadeghi Moghadam, Fatemeh Monshizadeh, Ahad Zareravasan
In the Internet of Things (IoT), data gathered from dozens of devices are the basis for creating business value and developing new products and services. If the data are of poor quality, decisions based on them are likely to be unsound, so data quality is crucial to gaining business value from IoT initiatives. This paper presents a systematic literature review of IoT data quality from 2000 to 2020. We analyzed 58 articles to identify IoT data quality dimensions and issues and their categorizations. Based on this analysis, we offer a classification of IoT data characterizations developed with the focus group method and clarify the link between dimensions and issues in each category; making this link explicit is essential, yet existing categorizations ignore it. We also examine data security as an important data quality issue and suggest potential solutions to overcome the IoT's security issues. The findings propose a new line of research for researchers and practitioners concerned with determining data quality in the context of the IoT.
"IoT Data Quality Issues and Potential Solutions: A Literature Review", The Computer Journal 66(3), 615-625, doi:10.1093/comjnl/bxab183, 2021.
In this paper, we propose a novel cryptographic primitive named the reusable group fuzzy extractor (RGFE), which allows any member of a group to extract and reproduce random strings from a fuzzy, non-uniform source of high entropy (called a fingerprint). Any group member can anonymously generate a random string for the group using their fingerprint and can be traced when needed, whereas other members can reproduce the string using their own fingerprints. Moreover, a fingerprint can be used repeatedly to generate multiple random strings. Based on RGFE, we present a group-shared Bitcoin wallet, which a group of users can use to receive or spend coins via biometrics in a traceable way.
Jie Ma, Bin Qi, Kewei Lv, "Reusable Group Fuzzy Extractor and Group-Shared Bitcoin Wallet", The Computer Journal 66(3), 643-661, doi:10.1093/comjnl/bxab185, 2021.
Magnetic resonance imaging (MRI) is an important imaging modality in the medical diagnosis of brain tumors. However, the major obstacle in MR image classification is the semantic gap between the low-level visual information produced by MRI machines and the high-level information interpreted by the clinician. Hence, this article introduces a novel technique, the Dendritic-Squirrel Search Algorithm-based Artificial Immune Classifier (Dendritic-SSA-AIC), for brain tumor classification using MRI. First, pre-processing is performed; then segmentation using sparse fuzzy c-means (sparse FCM) extracts statistical and texture features. Particle Rider mutual information (PRMI), devised by integrating particle swarm optimization, the Rider optimization algorithm and mutual information, is employed for feature selection. The AIC is employed to classify brain tumors, in which the Dendritic-SSA algorithm is designed by combining the dendritic cell algorithm and the squirrel search algorithm (SSA). The proposed PRMI-Dendritic-SSA-AIC provides superior performance, with a maximal accuracy of 97.789%, sensitivity of 97.577% and specificity of 98%.
Rahul Ramesh Chakre, Dipak V Patil, "Particle Rider Mutual Information and Dendritic-Squirrel Search Algorithm With Artificial Immune Classifier for Brain Tumor Classification", The Computer Journal 66(3), 743-762, doi:10.1093/comjnl/bxab194, 2021.
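The mutual-information component of such a feature selector ranks a feature by how much it reveals about the class label. A minimal discrete estimator, shown below as a generic sketch rather than the paper's PRMI formulation, makes the idea concrete:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) in nats, estimated from paired discrete samples
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

A feature identical to a balanced binary label scores $\ln 2$ (the label's full entropy), while an independent feature scores zero, so ranking features by this score keeps the informative ones.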
Liver cancer is the fourth most common cancer in the world and the third leading cause of cancer mortality. Conventional methods for detecting liver cancer include blood tests, biopsy and imaging tests. In this paper, we propose an automated computer-aided diagnosis technique for the classification of multi-class liver cancer, i.e. primary (hepatocellular carcinoma) and secondary (metastases), using computed tomography (CT) images. The proposed algorithm is a two-step process: enhancement of the CT images using the contrast limited adaptive histogram equalization (CLAHE) algorithm, followed by extraction of features for detecting and classifying the different tumor classes. The overall accuracy, sensitivity and specificity achieved with the proposed method for multi-class tumor classification are 97%, 94.3% and 100%, respectively, in experiment 1, and 84% for all three in experiment 2. With the automatic feature selection scheme, accuracy deviates from the overall value by at most 10.5%, and the accuracy of the ratio features decreases linearly by 5.5% as the number of selected features is reduced from 20 to 5. The proposed methodology can assist radiologists in liver cancer diagnosis.
A Krishan, D Mittal, "Multi-Class Liver Cancer Diseases Classification Using CT Images", The Computer Journal 66(3), 525-539, doi:10.1093/comjnl/bxab162, 2021.
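The enhancement step rests on clip-limited histogram equalization. The sketch below applies the core clip-and-redistribute idea globally to a 1-D list of 8-bit intensities; real CLAHE works on local tiles with bilinear interpolation, and the clip limit here is an illustrative choice:

```python
def clipped_equalize(pixels, levels=256, clip_limit=0.03):
    # histogram of 8-bit intensities
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # clip each bin at the limit and collect the excess
    limit = max(1, int(clip_limit * n))
    excess = 0
    for i in range(levels):
        if hist[i] > limit:
            excess += hist[i] - limit
            hist[i] = limit
    # redistribute the excess uniformly (integer division may drop a remainder)
    bonus = excess // levels
    for i in range(levels):
        hist[i] += bonus
    # map intensities through the cumulative distribution of the clipped histogram
    total = sum(hist)
    cdf, acc = [], 0
    for i in range(levels):
        acc += hist[i]
        cdf.append(acc)
    lut = [round((levels - 1) * c / total) for c in cdf]
    return [lut[p] for p in pixels]

# low-contrast toy "image": three intensity values packed into a narrow band
pixels = [100] * 50 + [101] * 30 + [119] * 20
out = clipped_equalize(pixels)
```

Clipping caps how much any single dominant intensity can steepen the mapping, which is what keeps CLAHE from over-amplifying noise in homogeneous regions.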
Federated learning (FL) is an emerging privacy-preserving technology for machine learning that enables end devices to cooperatively train a global model without uploading their local sensitive data. Because of limited network bandwidth and considerable communication overhead, communication efficiency has become an essential bottleneck for FL. Existing solutions attempt to improve this situation by reducing the number of communication rounds, but usually at the cost of additional computation or degraded model accuracy. In this paper, we propose parameter Prediction-Based FL (PBFL), which includes an extended Kalman filter-based prediction algorithm, a practical mechanism for setting the prediction error threshold and an effective global model updating strategy. Instead of collecting all updates from participants, PBFL uses predicted values to aggregate the model, which substantially reduces the required communication rounds while preserving model accuracy. Following this idea, each participant checks whether its predicted value falls outside the tolerance threshold and only uploads local updates whose predictions are inaccurate. In this way, no additional local computational resources are required. Experimental results on both multilayer perceptrons and convolutional neural networks show that PBFL outperforms state-of-the-art methods, improving communication efficiency by more than 66% with 1% higher model accuracy.
Kaiju Li, Chunhua Xiao, "PBFL: Communication-Efficient Federated Learning via Parameter Predicting", The Computer Journal 66(3), 626-642, doi:10.1093/comjnl/bxab184, 2021.
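The upload rule described above can be illustrated with a toy predictor: a client reproduces the server's prediction of a parameter and transmits only when the true value drifts outside the tolerance. The sketch below substitutes simple linear extrapolation for the paper's extended Kalman filter, so it demonstrates the decision logic only:

```python
def should_upload(history, actual, threshold):
    # predict the next parameter value by linear extrapolation of the
    # last two observations; upload only when the prediction misses
    if len(history) < 2:
        return True, actual          # not enough history: always upload
    predicted = 2 * history[-1] - history[-2]
    return abs(predicted - actual) > threshold, predicted

# parameter drifting smoothly: prediction holds, so no upload is needed
upload, pred = should_upload([0.10, 0.12], 0.145, 0.01)
```

Both server and client run the same predictor on the same history, so a skipped upload costs nothing: the server simply aggregates its own predicted value for that round.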
The Internet of Things (IoT) is an architecture of connected physical objects that can communicate with each other and transmit and receive data. Fog-based IoT is a distributed platform that provides reliable access to virtualized resources based on technologies such as high-performance computing and service-oriented design. A fog recommender system is an intelligent engine that suggests suitable services to fog users with lower response time and higher accuracy. With the rapid growth of file and information sharing, the importance of fog recommender systems has also increased. Moreover, resource management is challenging in fog-based IoT because of the fog's unpredictable and highly variable environment, and many current methods suffer from low recommendation accuracy. Given the NP-hard nature of this problem, a new approach is presented for resource recommendation in fog-based IoT using a hybrid optimization algorithm. The suggested method is simulated in the CloudSim environment. The experimental results show that accuracy improves by about 1-8% compared with the cooperative filtering method using smoothing and fusing and the artificial bee colony algorithm. These outcomes are notable for scholars and supply insights into subsequent research directions in this field.
Zhiwang Xu, Huibin Qin, Shengying Yang, Seyedeh Maryam Arefzadeh, "A New Approach for Resource Recommendation in the Fog-Based IoT Using a Hybrid Algorithm", The Computer Journal 66(3), 692-710, doi:10.1093/comjnl/bxab189, 2021.