A Mobile Ad-hoc Network (MANET) is a set of computing nodes that operates without fixed infrastructure support; every node communicates with the others over wireless links. The dynamic topology of a MANET, however, makes securing the network, and in particular detecting and preventing black hole attacks, a demanding task. In this paper, a novel fuzzy inference system is designed for black hole attack detection based on node authentication, trust value, a Certificate Authority (CA), energy level, and message integrity. The proposed work concentrates on node authentication before the route discovery process is initiated. Simulations carried out in Network Simulator 2 (NS2) show that the designed fuzzy inference system performs well by issuing certificates only to trusted nodes, which enables the detection of malicious nodes and prevents black hole attacks. The improved Packet Delivery Ratio (PDR) raises throughput and reduces end-to-end delay, indicating that the system is reliable enough for use in military applications.
{"title":"Fuzzy Heuristics for Detecting and Preventing Black Hole Attack","authors":"Elamparithi Pandian, Ruba Soundar, Shenbagalakshmi Gunasekaran, Shenbagarajan Anantharajan","doi":"10.34028/iajit/21/1/8","DOIUrl":"https://doi.org/10.34028/iajit/21/1/8","url":null,"abstract":"Mobile Ad-hoc Networks (MANET) is a set of computing nodes with there is no fixed infrastructure support. Every node in the network communicates with one another through wireless links. However, in MANET, the dynamic topology of the nodes is the vital demanding duty to produce security to the network and the black hole attacks get identified and prevented. In this paper, a novel fuzzy inference system is designed for black hole attack detection depending on the node authentication, trust value, Certificate Authority (CA), energy level, and message integrity. Before initiating the route discovery process in MANET, the proposed work mainly concentrates on node authentication. The simulation gets carried out using the Network Simulator (NS2), wherein the fuzzy inference system designed shows better performance by providing a certificate to only the trusted nodes. This helps the malicious nodes detection and prevents the black hole attack. The improvement in Packet Delivery Ratio (PDR) enhances throughput and the end to end delay gets reduced through better performance results. This proves that the system is more reliable and recovered to be used in military applications","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"121 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139126039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advancement of internet technology, the volume of data generated every day has grown drastically. Industries such as hospitality, defense, railways, health care, social media, and education produce many kinds of raw and processed data, and each has its own reasons for treating that data as critical and worth protecting. Such large amounts of data need space to be stored and secured; this is what Big Data is. A Data Stream Processing Technology (DSPT) is the mechanism and mainstay for compiling and computing large amounts of data, and the means of collecting and processing raw data into information. There is a variety of DSPTs, including Apache Spark, Flink, Kafka, Storm, Samza, Hadoop, Atlas.ti, and Cassandra. This paper compares five well-known and widely used open-source big data DSPTs (Apache Spark, Flink, Kafka, Storm, and Samza). An extensive comparison is performed against 12 distinct yet interconnected criteria: a matrix was designed through which five experiments were executed, and the comparison is drawn from their results. The paper thus summarizes an extensive study of open-source big data DSPTs using a practical experimental approach in a well-controlled environment.
{"title":"An Experimental Based Study to Evaluate the Efficiency among Stream Processing Tools","authors":"Akshay Mudgal, Shaveta Bhatia","doi":"10.34028/iajit/20/6/11","DOIUrl":"https://doi.org/10.34028/iajit/20/6/11","url":null,"abstract":"With the advancement in internet technology, augmentation in regular data generation has been amplified at a drastic level. Several different industries, for instance hospitality, defense, railways, health care, social media, education, etc., are creating and crafting different and several types of raw and processed data at a significant level, whereas, each of them has their own unique reason to shelter and call their data imperative and crucial. Such large and huge amount of data needs some space to get saved and secured, this is what Big Data is. A Data Stream Processing Technology (DSPT) is the significant mechanism and the mainstay for compiling and computing the large amount of data as well as the way to collect and process the raw data to call it information. There are varieties of DSPT like Apache Spark, Flink, Kafka, Storm, Samza, Hadoop, Atlas.ti, Cassandra, etc. This paper aims at comparing the five well- known and widely used open source big data DSPT (i.e., Apache Spark, Flink, Kafka, Storm, and Samza). An extensive comparison will be performed based on 12 different yet interconnected standards. A matrix has been designed through which five different experiments were executed, based on which the juxtaposition will be prepared. This paper summarizes an extensive study of open source big data DPST with a practical experimental approach in a well-controlled and sophisticated environment","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135261768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fog computing is an emerging paradigm that extends the functionality of the cloud closer to end users. It enables real-time applications for which latency is a critical factor, and it is motivated by the fast growth of Internet of Things (IoT) applications across many fields. By running Virtual Machines (VMs) on fog devices, users can offload their computational tasks and have them completed in a smooth, transparent, and faster manner. Nevertheless, the performance of real-time applications may suffer if no proper live VM migration mechanism is adopted. Live VM migration aims to move a running VM from one physical fog node to another with minimal or zero downtime, typically in response to user mobility. Many efforts have addressed the challenges of live VM migration in fog computing, but open issues remain. This paper presents the following research outcomes: an extensive survey of the existing literature on live VM migration mechanisms in fog computing; a novel classification that categorizes these mechanisms into conventional and Artificial Intelligence (AI) based approaches; an identification of gaps in the existing literature, highlighting the areas where further investigation is required; and a concluding discussion of potential future research directions.
{"title":"Live Virtual Machine Migration in Fog Computing: State of the Art","authors":"Shahd Alqam, Nasser Alzeidi, Abderrezak Touzene, Khaled Day","doi":"10.34028/iajit/20/6/14","DOIUrl":"https://doi.org/10.34028/iajit/20/6/14","url":null,"abstract":"Fog computing is an emerging paradigm which extends the functionality of the cloud near to the end users. Its introduction helped in running different real-time applications where latency is a critical factor. This paradigm is motivated by the fast growth of Internet of Things (IoT) applications in different fields. By running Virtual Machines (VMs) on fog devices, different users will be able to offload their computational tasks to fog devices to get them done in a smooth, transparent, and faster manner. Nevertheless, the performance of real-time applications might suffer if no proper live virtual machine migration mechanism is adopted. Live VM migration aims to move the running VM from one physical fog node to another with minimal or zero downtime due to mobility issues. Many efforts have been made in this field to solve the challenges facing live VM migration in fog computing. However, there are remaining issues that require solutions and improvements. In this paper, the following presents the research outcomes: An extensive survey of existing literature on live VM migration mechanisms in fog computing. Also, a new novel classification approach for categorizing live VM migration mechanisms based on conventional and Artificial Intelligence (AI) approaches to address live VM migration challenges is presented. Moreover, an identification of research gaps and in the existing literature and highlighting the areas where further investigation is required is done and finally a conclusion with a discussion of potential future research directions is drawn","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135312822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Community detection is a common and growing area of interest in social and real-time network applications, and several community detection methods have been developed in recent years. Local expansion methods in particular have proved effective and efficient. However, fundamental issues remain in uncovering overlapping communities: most methods are sensitive to seed initialization and parameter settings, while others cannot capture pervasive overlaps. In this paper, we propose a new unsupervised MapReduce-based local expansion method that uncovers overlapping communities from seed nodes. The method first locates the leader nodes (seed nodes) of communities using basic graph measures such as degree, betweenness, and closeness centrality, and then derives the communities from those leaders. We propose a MapReduce-based Fuzzy C-Means clustering algorithm to derive the overlapping communities around the leader nodes. We tested the proposed Leader-Based Community Detection (LBCD) method on 11 real-world data sets, and the experimental results show that it is effective and promising for evaluating overlapping community structures in network graphs.
{"title":"A Hadoop Based Approach for Community Detection on Social Networks Using Leader Nodes","authors":"Mohamed Iqbal, Kesavarao Latha","doi":"10.34028//iajit/20/6/2","DOIUrl":"https://doi.org/10.34028//iajit/20/6/2","url":null,"abstract":"Community detection is the most common and growing area of interest in social and real-time network applications. In recent years, several community detection methods have been developed. Particularly, community detection in Local expansion methods have been proved as effective and efficiently. However, there are some fundamental issues to uncover the overlapping communities. The maximum methods are sensitive to enable the seeds initialization and construct the parameters, while others are insufficient to establish the pervasive overlaps. In this paper, we proposed the new unsupervised Map Reduce based local expansion method for uncovering overlapping communities depends seed nodes. The goal of the proposed method is to locate the leader nodes (seed nodes) of communities with the basic graph measures such as degree, betweenness and closeness centralities and then derive the communities based on the leader nodes. We proposed Map-Reduce based Fuzzy C- Means Clustering Algorithm to derive the overlapping communities based on leader nodes. We tested our proposed method Leader based Community Detection (LBCD) on the real-world data sets of totals of 11 and the experimental results shows the more effective and optimistic in terms of network graph enabled overlapping community structures evaluation.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136373642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Search Engine Optimization (SEO) aims to improve a website's reputation and user experience; without effective SEO strategies, a site requires significant investment in paid advertisements. Search Engines (SEs) use ranking algorithms that assess on-page and off-page factors for relevance, and machine learning techniques have been used to build classifiers that estimate page rank. However, no research has compared rank estimation across languages or analyzed how different languages affect performance and SEO factors. This study aims to improve rank estimation algorithms for Arabic web pages on desktop devices using a new multi-category dataset collected from the Google Search Engine Results Page (SERP). SE scraping was used to collect URLs, descriptions, and other data from Google, and the datasets were preprocessed before being fed to the rank estimation algorithms. Machine learning models were then applied to two datasets, and experiments were conducted to assess the implications of using Arabic versus English web pages. The experimental findings suggest that Arabic web pages are more suitable than English ones for training a model to estimate the ranking of Arabic web pages.
{"title":"Effects of Using Arabic Web Pages in Building Rank Estimation Algorithm for Google Search Engine Results Page","authors":"Mohamed Almadhoun, Nurul Malim","doi":"10.34028/iajit/20/6/15","DOIUrl":"https://doi.org/10.34028/iajit/20/6/15","url":null,"abstract":"Search Engine Optimization (SEO) aims to improve a website's reputation and user experience. Without effective SEO strategies, it requires significant investment in paid advertisements. Search Engines (SEs) use algorithms to rank results, assessing on-page and off-page factors for relevance. Machine learning techniques have been used to build classifiers for estimating page rank. However, no research has compared rank estimation with other languages or analyzed the effects of different languages on performance or differences between SEO factors. The study aims to improve rank estimation algorithms for Arabic web pages on desktop devices using a new multi-category dataset from Google Search Engine Results Page (SERP). The experimental findings suggest that Arabic web pages are more suitable than English ones for training a model to estimate the ranking of Arabic web pages. Machine learning models were applied to two datasets. SE scraping was used to collect URLs, descriptions, and other data from the Google SE. Data preprocessing steps were taken before using the datasets for rank estimation algorithms. Experiments were conducted to assess the implications of using Arabic and English web page datasets","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135311983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) is a collection of low-power devices deployed in real-time applications such as industry, health care, and agriculture. Real-time applications must quickly sense, analyze, and react to data within a time frame, so the data should be transmitted without delay. The Routing Protocol for Low-power and Lossy Networks (RPL) routes data by finding an optimal path, forwarding packets from source to destination according to an objective function. Objective functions can be designed from different routing metrics, but most existing ones are not designed around the characteristics of IoT applications. This work targets the Industrial Internet of Things (IIoT), where data transfer is real-time and packet loss, power depletion, and load balancing are the main problems. Neighbor Indexed RPL (NI-RPL) improves the efficiency of RPL in two steps. First, the preferred-parent set is formed from the neighboring nodes based on the Received Signal Strength Indicator (RSSI) and path cost. Second, the rank of each node in the preferred-parent set is calculated from the Neighbor Index (NI), Expected Transmission Count (ETX), and Residual Energy (RE), and the best route is selected by rank. The NI avoids congestion, while ETX and RE help improve the Quality of Service (QoS) and the lifetime of the network. Compared with other objective functions, NI-RPL guarantees the delivery of real-time data with better QoS: it improves the packet delivery ratio by 3% to 5% and decreases latency by 7 to 12 seconds.
{"title":"Additive Metric Composition-Based Load Aware Reliable Routing Protocol for Improving the Quality of Service in Industrial Internet of Things","authors":"Anitha Dharmalingaswamy, Latha Pitchai","doi":"10.34028/iajit/20/6/12","DOIUrl":"https://doi.org/10.34028/iajit/20/6/12","url":null,"abstract":"The Internet of Things (IoT) is the collection of low-power devices deployed in real-time applications like industries, health care and agriculture. The real-time applications must quickly sense, analyze and react to the data within a time frame. So the data’s should be transmitted without any delay. The Routing Protocol for Low-power and Lossy Networks (RPL) is used to route the data by finding the optimal path. RPL forward the data packets from source to destination based on the objective functions. The objective functions can be designed using different routing metrics and most of the existing objective functions are not designed based on the characteristics of IoT applications. The Industrial Internet of Things (IIoT) environment with real-time data transfer characteristic is considered for this proposed work. Packet loss, power depletion and load balancing are the problems faced by real-time environment. Neighbor Indexed based RPL (NI-RPL) is implemented in two steps to improve efficiency of RPL. First, based on the Received Signal Strength Indicator (RSSI) and path-cost the preferred-parent set is formed from the set of neighboring nodes. Second, the rank of the nodes from the preferred-parent set is calculated based on the Neighbor Index (NI), Expected Transmission count (ETX) and Residual Energy (RE), and then the best route is selected based on the rank. The NI is used to avoid congestion, the ETX and RE helps in improving the Quality of Service (QoS) and lifetime of the network. The proposed objective function, NI-RPL is compared with other objective functions. NI-RPL guarantees the delivery of real –time data with better QoS, because it has improved the packet delivery ratio by 3% to 5% and decreases latency by 7 to 12 seconds","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134884743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In cryptography, computational statistics, gaming, simulation, gambling, and related fields, the design of Cryptographically Secure Pseudo-Random Number Generators (CSPRNGs) poses a significant challenge. With the rapid advancement of quantum computing, the "quantum threat" looms ever closer, putting current cryptographically secure PRNGs at risk. It is therefore crucial to take these threats seriously and to develop tools and techniques that keep Pseudo-Random Number Generators (PRNGs) unbreakable by both classical and quantum computers. This paper presents a novel approach to constructing an effective Quantum-Resistant Pseudo-Random Number Generator (QRPRNG) using the principles of lattice-based Learning with Errors (LWE). LWE is considered quantum-resistant because it relies on the hardness of problems such as the Shortest Vector Problem and the Closest Vector Problem. Our work develops a QRPRNG that uses a Linear Feedback Shift Register (LFSR) to generate a stream of pseudo-random bits, with LWE employed to construct a secure seed. The proposed QRPRNG feeds the secure seed into the LFSR and employs a homomorphic function to protect the security of the LFSR's finite states. NIST statistical tests are conducted to evaluate the randomness of the generated output, and the proposed QRPRNG achieves a throughput of 35.172 Mbit/s.
{"title":"LWE Based Quantum-Resistant Pseudo-Random Number Generator","authors":"Atul Kumar, Arun Mishra","doi":"10.34028/iajit/20/6/8","DOIUrl":"https://doi.org/10.34028/iajit/20/6/8","url":null,"abstract":"In the realm of cryptography, computational statistics, gaming, simulation processes, gambling, and other related fields, the design of Cryptographically Secure Pseudo-Random Number Generators (CSPRNGs) poses a significant challenge. With the rapid advancement of quantum computing, the imminent \"quantum-threat\" looms closer, posing a risk to our current cryptographically secure PRNGs. Consequently, it becomes crucial to address these threats seriously and develop diverse tools and techniques to ensure that cryptographically secure Pseudo-Random Number Generators (PRNGs) remain unbreakable by both classical and quantum computers. this paper presents a novel approach to constructing an effective Quantum-Resistant Pseudo-Random Number Generator (QRPRNG) using the principles of lattice-based Learning with Errors (LWE). LWE is considered quantum-resistant due to its reliance on the hardness of problems like the Shortest Vector Problem and Closest Vector Problem. Our work focuses on developing a QRPRNG that utilizes a Linear Feedback Shift Register (LFSR) to generate a stream of pseudo-random bits. To construct a secure seed for the QRPRNG, LWE is employed. The proposed QRPRNG incorporates a secure seed input to the LFSR, and employs a Homomorphic function to protect the security of the finite states within the LFSR. NIST statistical tests are conducted to evaluate the randomness of the generated output by the constructed QRPRNG. The proposed QRPRNG achieves a throughput of 35.172 Mbit/s.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Mining (DM) combines several fields to extract hidden patterns from vast amounts of historical data, and Association Rule Mining (ARM) is the DM activity used to produce association rules. To significantly reduce time and space complexity, the proposed method uses an effective bi-directional frequent itemset generation approach in which the dataset is explicitly split into dense and sparse regions during mining. The paper also proposes a feature that sensibly predetermines a candidate subset, called the Reference-Points-Set (RPS), to reduce the complexity of mining frequent itemsets. The RPS reduces the number of scans over the actual dataset: the novelty lies in identifying possible candidates during the initial database scans, which cuts down on the additional scans that would otherwise be required. According to the experimental data, the average scan count of the proposed method is 24% and 65% lower than that of Dynamic Itemset Counting (DIC) and M-Apriori, respectively, across different support counts. The proposed method typically reduces execution time by 10% relative to DIC and is three times more efficient than M-Apriori. These results significantly outperform the predecessors and strongly support the proposed approach for generating frequent itemsets from large datasets.
{"title":"An Effective Reference-Point-Set (RPS) Based Bi-Directional Frequent Itemset Generation","authors":"Ambily Balaram, Nedunchezhian Raju","doi":"10.34028/iajit/20/6/6","DOIUrl":"https://doi.org/10.34028/iajit/20/6/6","url":null,"abstract":"Data Mining (DM) is a combination of several fields that effectively extracts hidden patterns from vast amounts of historical data. One of the DM activities used to produce association rules is Association Rule Mining (ARM). To significantly reduce time and space complexities, the proposed method utilizes an effective bi-directional frequent itemset generation approach. The dataset is explicitly bifurcated into dense and sparse regions in the process of mining frequent itemset. One more feature is proposed in this paper which sensibly predetermines a candidate subset called, Reference-Points-Set (RPS), to reduce the complexities associated with mining of frequent itemsets. The RPS helps to reduce the number of scans over the actual dataset. The novelty is to look at possible candidates during the initial database scans, which can cut down on the number of additional database scans that are required. According to experimental data, the average scan count of the proposed method is respectively, 24% and 65%, lower than that of Dynamic Itemset Counting (DIC) and M-Apriori, across different support counts. The proposed method typically results in a 10% reduction in execution time over DIC and is three times more efficient than M-Apriori. These results significantly outperform those of their predecessors, which strongly supports the proposed approach when creating frequent itemsets from large datasets","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Medical Data (BMD) is generated by cellular telephones, clinics, academic institutions, suppliers, and organizations. Collecting, finding, analyzing, and managing big data to improve people's lives, to understand novel illnesses and treatments, to predict outcomes at early stages, and to make real-time choices are the pressing issues in healthcare systems. Handling big medical data in resource scheduling is a major challenge in offering higher-quality healthcare services. Hadoop MapReduce has been widely used for parallel processing of large data tasks and efficient job scheduling, but as the number of big data tasks grows it becomes ever more important to minimize their energy usage to reduce environmental impact and operating expenses. To overcome these disadvantages, we propose a novel resource scheduler for big data using a Hybrid 2-GW Optimization Algorithm (H2-GWOA), which combines the Improved GlowWorm Swarm Optimization Algorithm (IGSOA) and the Mean GreyWolf Optimization Algorithm (MGWOA) to optimize the MapReduce framework for heterogeneous big data. Simulations on the CloudSim platform show that the proposed scheduler outperforms conventional methods on metrics including latency, makespan, resource utilization, skewness, and Central Processing Unit (CPU) consumption.
{"title":"A Novel Resource Scheduler for Resource Allocation and Scheduling in Big Data Using Hybrid Optimization Algorithm at Cloud Environment","authors":"Aarthee Selvaraj, Prabakaran Rajendran, Kanimozhi Rajangam","doi":"10.34028/iajit/20/6/3","DOIUrl":"https://doi.org/10.34028/iajit/20/6/3","url":null,"abstract":"Big Medical Data (BMD) is generated by cellular telephones, clinics, academics, suppliers, and organizations. Collecting, finding, analyzing, and managing the big data to make people's lives better, comprehending novel illnesses, and treatments, predicting results at initial phases, and making real-time choices are the actual issues in healthcare systems. Dealing with big medical data in resource scheduling is a major issue that aims to offer higher quality healthcare services. Hadoop MapReduce has been widely used for parallel processing of large data tasks and efficient job scheduling. The number of big data tasks is constantly growing; it is becoming more essential to minimize their energy usage to reduce the environmental effect and operating expenses. Hence to overcome these disadvantages, we propose a novel resource scheduler for big data using a Hybrid 2-GW Optimization Algorithm (H2-GWOA). We employ the Improved GlowWorm Swarm Optimization Algorithm (IGSOA) and Mean GreyWolf Optimization Algorithm (MGWOA) for optimizing the MapReduce framework in heterogeneous big data. The CloudSim platform was used for the simulations. The performance of the proposed scheduler is proved to be better than the conventional methods in terms of metrics like latency, makespan, resource utilization, skewness, and Central Processing Unit (CPU) consumption.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136373679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}