Latin cubes are the three-dimensional generalization of Latin squares, combining discreteness, uniformity, and an inherently 3D structure. Latin squares have seen some application in hash algorithms, but Latin cubes have rarely been used in this field. In this paper, a highly parallelizable hash algorithm based on four Latin cubes of order 4 is proposed. The parallelism appears at two levels: first, the whole message is divided into several blocks, and all blocks are processed in parallel; second, each block is further divided into several channels, which are also processed in parallel. The whole hash procedure is built on four fixed Latin cubes. Thanks to the uniformity and 3D structure of Latin cubes, the algorithm achieves good statistical performance and strong collision resistance. Furthermore, the parallel structure gives the algorithm satisfactory computation speed. The algorithm is therefore well suited to current communication security applications.
{"title":"A Highly Parallelizable Hash Algorithm Based on Latin Cubes","authors":"Ming Xu","doi":"10.34028/iajit/20/6/10","DOIUrl":"https://doi.org/10.34028/iajit/20/6/10","url":null,"abstract":"Latin cubes are the high-dimensional form of Latin squares. Latin cubes have discreteness, uniformity and 3D attribute. There have been some applications of Latin squares in hash algorithms, but few applications of Latin cubes in this field. In this paper, a highly parallelizable hash algorithm based on four Latin cubes of order 4 is proposed. The parallelism is reflected in two aspects: on the one hand, the whole message is divided into several blocks, and all the blocks are processed in parallel; on the other hand, each block is further divided into several channels, and these channels are also processed in parallel. The whole hash procedure is based on four fixed Latin cubes. By the aid of uniformity and 3D attribute of Latin cubes, the algorithm has good statistical performances and strong collision resistance. Furthermore, the parallel structure makes the algorithm have satisfactory computation speed. Therefore the algorithm is quite suitable for the current applications of communication security","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134884746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
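The two-level parallelism described in the abstract (blocks in parallel, channels within a block in parallel) can be sketched as follows. This is an illustrative outline only, not the paper's actual construction: the cube, the channel mixing rule, and the combining step are all hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# A hypothetical Latin cube of order 4: fixing any two indices and
# varying the third yields a permutation of {0, 1, 2, 3}.
CUBE = [[[(i + j + k) % 4 for k in range(4)] for j in range(4)] for i in range(4)]

def mix_channel(channel: bytes) -> int:
    """Fold one channel through repeated cube lookups (illustrative only)."""
    state = 0
    for byte in channel:
        hi, lo = byte >> 4, byte & 0x0F
        # Use the cube's 3D indexing to mix the running state with
        # the two nibbles of the current byte.
        state = CUBE[state % 4][hi % 4][lo % 4] ^ (state >> 2)
    return state & 0x03

def hash_block(block: bytes, channels: int = 4) -> int:
    # Split a block into interleaved channels and mix them independently;
    # in the paper's design these channels also run in parallel.
    parts = [block[c::channels] for c in range(channels)]
    digests = [mix_channel(p) for p in parts]
    # Pack the 2-bit channel digests into one value per block.
    out = 0
    for d in digests:
        out = (out << 2) | d
    return out

def latin_cube_hash(message: bytes, block_size: int = 16) -> int:
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
    # Blocks are independent, so they can be hashed in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(hash_block, blocks))
    digest = 0
    for r in results:
        digest ^= r  # order-independent combine, for the sketch only
    return digest
```

A real construction would use four distinct fixed cubes and a much wider state; the single arithmetic cube here only demonstrates the lookup-and-combine structure.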
Emre Tercan, Serkan Tapkın, Furkan Küçük, Ali Demirtaş, Ahmet Özbayoğlu, Abdussamet Türker
The latest advances in the computer vision literature and Convolutional Neural Networks (CNNs) open up many opportunities that are being actively exploited across research areas. One of the most important of these areas is autonomous vehicles and mapping systems. Point of interest detection is a rising field within autonomous video tracking and autonomous mapping systems, and the number of implementations and research papers has grown over the last few years thanks to advances in deep learning. In this paper, our aim is to survey existing studies on point of interest detection systems that focus on objects on the road (such as lanes and road markings) or on the roadside (such as road signs, restaurants, or temporary establishments), so that they can be used for autonomous vehicles and automatic mapping systems. The roadside point of interest detection problem is also addressed from a transportation industry perspective. In addition, a deep-learning-based point of interest detection model built around roadside gas station identification is introduced as a proof of concept. Instead of relying on an internet connection for point of interest retrieval, the proposed model can work offline for greater robustness. A variety of models are analysed and compared on detection speed and accuracy. Our preliminary results show that it is possible to develop a model with satisfactory real-time performance that can be embedded into autonomous cars, so that streaming video analysis and point of interest detection may be achievable in actual use in future implementations.
{"title":"Computational Intelligence Based Point of Interest Detection by Video Surveillance Implementations","authors":"Emre Tercan, Serkan Tapkın, Furkan Küçük, Ali Demirtaş, Ahmet Özbayoğlu, Abdussamet Türker","doi":"10.34028/iajit/20/6/7","DOIUrl":"https://doi.org/10.34028/iajit/20/6/7","url":null,"abstract":"Latest advancement of the computer vision literature and Convolutional Neural Networks (CNN) reveal many opportunities that are being actively used in various research areas. One of the most important examples for these areas is autonomous vehicles and mapping systems. Point of interest detection is a rising field within autonomous video tracking and autonomous mapping systems. Within the last few years, the number of implementations and research papers started rising due to the advancements in the new deep learning systems. In this paper, our aim is to survey the existing studies implemented on point of interest detection systems that focus on objects on the road (like lanes, road marks), or objects on the roadside (like road signs, restaurants or temporary establishments) so that they can be used for autonomous vehicles and automatic mapping systems. Meanwhile, the roadside point of interest detection problem has been addressed from a transportation industry perspective. At the same time, a deep learning based point of interest detection model based on roadside gas station identification will be introduced as proof of the anticipated concept. Instead of using an internet connection for point of interest retrieval, the proposed model has the capability to work offline for more robustness. A variety of models have been analysed and their detection speed and accuracy performances are compared. 
Our preliminary results show that it is possible to develop a model achieving a satisfactory real-time performance that can be embedded into autonomous cars such that streaming video analysis and point of interest detection might be achievable in actual utilisation for future implementations.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Mobile Ad-hoc Network (MANET) faces the central challenge of delivering the needed data to the mobile nodes that request it. An efficient on-demand routing protocol for MANETs is Ad-hoc On-demand Distance Vector (AODV), which rests on two main mechanisms: route discovery and route maintenance. Route discovery is the process of finding a route from the packet source to the destination, while route maintenance is the process of detecting a link failure and repairing it. Cooperative caching tends to improve data availability in MANETs. Coordinating cache discovery and cache management strategies is critical, because data requests and the corresponding replies can be lost simply due to interference, network congestion, or a forwarding node moving out of reach and breaking the route. Cache management is also much more complicated in cooperative caching, because a node depends on its neighbours when deciding what to cache. In this paper, three algorithms are proposed: (1) a combined cache admission control algorithm, based on the cached data and its location, that saves space and reduces data redundancy; (2) a value-based policy for cache placement and replacement, instead of the more common least-recently-used strategy, that relies on metrics describing the cached items to increase the local cache hit ratio; and (3) a combined cache consistency algorithm that includes time-to-live, pull, and push policies to enhance data availability and system scalability. The proposed algorithms were implemented in the NS3 simulator, which was used to build an AODV network under several parameter settings, and achieved better system performance.
{"title":"An Effective Management Model for Data Caching in MANET Environment","authors":"Amer Abu Salem","doi":"10.34028/iajit/20/6/1","DOIUrl":"https://doi.org/10.34028/iajit/20/6/1","url":null,"abstract":"A mobile ad-hoc (MANET) network has the main challenge to provide the needed data for the desired mobile nodes. An efficient on request routing protocol for MANET is Ad-hoc on-demand Distance Vector (AODV), which is based on two main methods: route discovery and route maintenance. Route discovery is the process used to detect a route to the destination from the packet source, while route maintenance is the process used to detect a link failure and repair it. Cooperative caching tends improving data availability in mobile ad-hoc networks, the coordination of cache discovery and cache management strategies is very significant in the cooperative caching of MANETs because requests for data and answers to requested data can be reduced simply due to interference, network congestion, or when a forwarding node is out of reach and the route breaks down. Cooperative cache management is much more complicated in cooperative caching because it also depends on neighbouring nodes to decide what to cache. In this paper, three algorithms were proposed: (1) a combination algorithm for cache admission control based on cache data and location of data to save space and reduce data redundancy, (2) a value-based policy for cache placement and replacement instead of the more common least recently used strategy, depending on metrics that describe cached items to increase the local cache hit ratio, and (3) a combined algorithm for cache consistency that includes time-to-live, pull, and push policies to enhance data availability and system scalability. 
The proposed algorithm implemented by the NS3 simulation program; which used to create a network using the AODV protocol in several parameters and achieve better system performance.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136372897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
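The second proposed algorithm, value-based placement and replacement, can be illustrated with a small sketch. The scoring formula below (access frequency × refetch cost / age) is a hypothetical example of "metrics that describe cached items", not the paper's actual metric.

```python
import time

class ValueBasedCache:
    """Sketch of a value-based replacement policy: each cached item is
    scored from metrics describing it, and the lowest-valued item is
    evicted first (instead of plain least-recently-used)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = {}  # key -> {data, access_count, last_access, fetch_cost}

    def _value(self, meta) -> float:
        # Hypothetical scoring: frequently used, recently used, and
        # expensive-to-refetch items are worth keeping.
        age = time.monotonic() - meta["last_access"] + 1e-9
        return meta["access_count"] * meta["fetch_cost"] / age

    def get(self, key):
        meta = self.items.get(key)
        if meta is None:
            return None  # local cache miss
        meta["access_count"] += 1
        meta["last_access"] = time.monotonic()
        return meta["data"]

    def put(self, key, data, fetch_cost: float = 1.0):
        if key not in self.items and len(self.items) >= self.capacity:
            # Evict the item with the lowest value score.
            victim = min(self.items, key=lambda k: self._value(self.items[k]))
            del self.items[victim]
        self.items[key] = {"data": data, "access_count": 1,
                           "last_access": time.monotonic(),
                           "fetch_cost": fetch_cost}
```

In a MANET setting, `fetch_cost` could be derived from hop count or link quality to the data source, so that items that are expensive to refetch over a fragile route survive longer in the cache.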
Image inpainting is a method for restoring missing pixels in damaged images. Traditional inpainting methods use the statistics of the surrounding pixels to estimate the missing ones, but they sometimes fail to recover hidden information and produce implausible imagery. Deep learning inpainting methods were introduced to overcome these challenges: a deep neural network learns semantic priors and hidden pixel representations in an end-to-end fashion for both digital and medical images. This paper discusses: 1) the differences between supervised and unsupervised deep learning inpainting algorithms used on medical and digital images; 2) the merits and demerits of each deep learning inpainting model; 3) the challenges faced by deep learning inpainting models and their solutions; and 4) the quantitative and qualitative analysis of each model on digital and medical images.
{"title":"Deep Learning Inpainting Model on Digital and Medical Images-A Review","authors":"Jennyfer Susan, Parthasarathy Subashini","doi":"10.34028/iajit/20/6/9","DOIUrl":"https://doi.org/10.34028/iajit/20/6/9","url":null,"abstract":"Image inpainting is a method to restore the missing pixels on damaged images. Initially, the traditional inpainting method uses the statistics of the surrounding pixels to find the missing pixels. It sometimes fails to read the hidden information to attain plausible imagery. The deep learning inpainting methods are introduced to overcome these challenges. A deep neural network learns the semantic priors and hidden representation pixels in an end-to-end fashion in the digital and medical. This paper discusses the following: 1) The difference between the supervised and the unsupervised deep learning inpainting algorithm used in medical and digital images. 2) Discusses the merits and demerits of each deep learning inpainting model. 3) Discusses the challenges and solution for the deep learning inpainting model. 4) Discusses each model's quantitative and qualitative analysis in the digital and other medical images","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134884404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots are becoming increasingly common in critical healthcare, transportation, and manufacturing applications. However, these systems are vulnerable to malware attacks that compromise their reliability and security. Previous research has investigated the use of Machine Learning (ML) to detect malware in robots, but existing approaches face several challenges, including class imbalance, high dimensionality, data heterogeneity, and balancing detection accuracy against false positives. This study introduces a novel approach to malware detection in robots that combines ensemble learning with the Synthetic Minority Over-sampling Technique (SMOTE). The proposed approach stacks three ML models, Random Forest (RF), Artificial Neural Networks (ANN), and Support Vector Machines (SVM), to improve accuracy and robustness, while SMOTE addresses the class imbalance in the dataset. Evaluation on a publicly available dataset of robotic systems yielded promising results: the approach outperformed both individual models and existing approaches in detection accuracy and false positive rate. This study represents a significant advance in malware detection for robots and could enhance the reliability and security of these systems in a variety of critical applications.
{"title":"RoboGuard: Enhancing Robotic System Security with Ensemble Learning","authors":"Ali Al Maqousi, Mohammad Alauthman","doi":"10.34028/iajit/20/6/13","DOIUrl":"https://doi.org/10.34028/iajit/20/6/13","url":null,"abstract":"Robots are becoming increasingly common in critical healthcare, transportation, and manufacturing applications. However, these systems are vulnerable to malware attacks, compromising reliability and security. Previous research has investigated the use of Machine Learning (ML) to detect malware in robots. However, existing approaches have faced several challenges, including class imbalance, high dimensionality, data heterogeneity, and balancing detection accuracy with false positives. This study introduces a novel approach to malware detection in robots that uses ensemble learning combined with the Synthetic Minority Over-sampling Technique (SMOTE). The proposed approach stacks three (ML models Random Forest (RF), Artificial Neural Networks (ANN), and Support Vector Machines (SVM) to improve accuracy and system robustness. SMOTE addresses the class imbalance in the dataset. Evaluation of the proposed approach on a publicly available dataset of robotic systems yielded promising results. The approach outperformed individual models and existing approaches regarding detection accuracy and false positive rates. This study represents a significant advancement in malware detection for robots. 
It could enhance the reliability and security of these systems in various critical applications","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134884764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
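The SMOTE-plus-stacking pipeline described above can be sketched as follows, assuming scikit-learn is available. The hand-rolled `smote_oversample` helper, the toy data standing in for robot malware features, and all hyperparameters are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between a minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()  # random position along the line segment
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# Hypothetical toy data: benign majority class vs. malware minority class.
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(90, 4))
X_min = rng.normal(2.0, 1.0, size=(10, 4))

# Balance the classes with synthetic minority samples.
X_syn = smote_oversample(X_min, n_new=len(X_maj) - len(X_min), rng=1)
X = np.vstack([X_maj, X_min, X_syn])
y = np.array([0] * len(X_maj) + [1] * (len(X_min) + len(X_syn)))

# Stack RF, ANN, and SVM base learners, as the study describes.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=25, random_state=0)),
                ("ann", MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                                      random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression())
stack.fit(X, y)
```

In practice the imbalanced-learn library's `SMOTE` would replace the hand-rolled helper, and the oversampling would be applied inside a cross-validation loop to avoid leaking synthetic points into the evaluation fold.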
At present, deep learning-based joint entity-relation extraction models can accomplish complex tasks, but progress in specific domains has been relatively slow. Compared with other domains, emergency plan text has high entity density, long passages, and many technical terms, which prevent some general models from handling its semantics well. This paper therefore addresses the complex semantics of emergency plan text and proposes a joint extraction model for emergency plan organizations and relationships based on a Multi-head Attention mechanism (MA-JE) that enriches semantic information. The model gathers contextual information from multiple perspectives and at different levels, aiming to deeply mine and use sentence semantics through deep feature extraction of emergency plan text. The proposed model and the baseline models were evaluated separately on a Chinese emergency response plan dataset, and the results show that the proposed approach outperforms existing baselines for joint extraction of entities and their relations. In addition, ablation experiments were performed to verify the contribution of each module in the model.
{"title":"Joint Extraction of Organizations and Relations for Emergency Response Plans With Rich Semantic Information Based On Multi-Head Attention Mechanism","authors":"Tong Liu, Haoyu Liu, Weijian Ni, Mengxiao Si","doi":"10.34028/iajit/20/6/5","DOIUrl":"https://doi.org/10.34028/iajit/20/6/5","url":null,"abstract":"At present, deep learning-based joint entity-relation extraction models are gradually able to accomplish complex tasks, but the research progress in specific fields is relatively slow. Compared with other fields, emergency plan text has the characteristics of high entity density, long text, and many professional terms, which make some general models unable to handle the semantic information of emergency plan text well. Therefore, this paper addresses the problem of complex semantics of emergency plan text, and proposes a joint extraction model of emergency plan organization and relationship based on multi-Head Attention Mechanism (MA-JE) to enrich semantic information, starting from multiple perspectives and different levels to obtain contextual information, aiming to deeply mine and use sentence semantic information through deep feature extraction of emergency plan text. The proposed model and the baseline model are experimented separately on the Chinese emergency response plan dataset, and the results show that the proposed approach outperforms existing baseline models for joint extraction of entity and their relations. 
In addition, ablation experiments were performed to verify the validity of each module in the model.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
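The multi-head attention mechanism at the core of MA-JE can be illustrated with a minimal NumPy sketch (not the paper's model): each head applies scaled dot-product attention in its own subspace of the token representations, and the heads' outputs are concatenated and projected.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Minimal multi-head self-attention over a token matrix X of shape
    (seq_len, d_model); each head attends in its own d_model/n_heads slice."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        q, k, v = Q[:, s], K[:, s], V[:, s]
        # Scaled dot-product attention inside one head's subspace.
        scores = softmax(q @ k.T / np.sqrt(d_head))
        heads.append(scores @ v)
    # Concatenate the heads and mix them with an output projection.
    return np.concatenate(heads, axis=1) @ Wo

# Toy dimensions; a real extraction model would use far larger ones.
rng = np.random.default_rng(0)
d_model, seq_len, n_heads = 8, 5, 2
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads)
```

Attending in several subspaces at once is what lets the model "start from multiple perspectives and different levels", since each head can specialize in a different relation between tokens.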
Software development on Open-Source Software (OSS) platforms allows developers to share their code and modify other developers' code, which leads to collaborative development. Developers can also discuss the items to be developed, including the errors and technical problems they face. One popular OSS platform is GitHub, which already hosts a large number of developers and projects. The data residing in GitHub's issues section is large, complex, and unstructured, and it can be mined for novel discoveries. This work concentrates on one selected project and analyzes it systematically. Routine Extract, Transform and Load (ETL) steps are identified to clean the data before natural language processing is applied to prioritize and act on requirements in a collaborative environment. Our work extracts terms and guides developers by tracking which terms co-occur, helping them focus on the important issues.
{"title":"Induction of Co-existing Items Available in Distributed Version Control Systems for Software Development","authors":"Sibel Özyer","doi":"10.34028/iajit/20/6/4","DOIUrl":"https://doi.org/10.34028/iajit/20/6/4","url":null,"abstract":"Software development in Open-Source Software systems (OSS) allow developers to share their code and modify other developers' code. That leads to collaboration in the development. They can either discuss on the items to be developed, including the errors and technical problems that were faced. One popular OSS platform is github which already has a large number of developers and projects. The data residing in the issues part of github is sufficiently large, complex and unstructured. It could be processed to find novel discoveries. This work concentrates on one selected project to be analyzed systematically. Routine Extract, Transform and Load (ETL) steps have been identified to clean the data before applying natural language processing for prioritizing and taking actions for the requirements. In a collaborative environment. Our work uses terms and guides developers for tracking the co-occurrence of the terms used together to help them focus on the important issues.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136373175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
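Tracking which terms co-occur within the same issue can be sketched as a simple pair-counting pass over the cleaned issue texts; the whitespace tokenization and the toy issue texts below are hypothetical simplifications of the ETL output.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(issues, min_count=1):
    """Count how often pairs of terms appear together in the same issue,
    so that frequently co-occurring terms can be surfaced to developers."""
    pairs = Counter()
    for text in issues:
        # Deduplicate and sort so each unordered pair has one canonical key.
        terms = sorted(set(text.lower().split()))
        for a, b in combinations(terms, 2):
            pairs[(a, b)] += 1
    return {p: c for p, c in pairs.items() if c >= min_count}

# Toy stand-ins for cleaned GitHub issue titles.
issues = [
    "crash on login timeout",
    "login timeout after upgrade",
    "ui freeze on upgrade",
]
counts = cooccurrence(issues)
```

A real pipeline would tokenize with stop-word removal and stemming after the ETL steps, and could rank pairs by a lift or PMI score rather than a raw count.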
The behavior of chaotic systems attracts many researchers in the field of image encryption. The major advantage of using chaos as the basis of a cryptosystem is its sensitivity to initial conditions and parameter tuning, together with its random-like behavior, which mirrors the main ingredients of a good cipher: confusion and diffusion. In this article, we present a new scheme based on the synchronization of two chaotic systems, the Lorenz and Chen systems, and show that these chaotic maps can be completely synchronized with each other under suitable conditions and specific parameters, making a new addition to chaos-based encryption systems. This synchronization provides a master-slave configuration that is used to construct the proposed dual synchronized chaos-based cipher scheme. The common security analyses are performed to validate the effectiveness of the proposed scheme. Based on all experiments and analyses, we conclude that the scheme is secure, efficient, robust, and reliable, and can be applied directly in many practical security applications over insecure network channels such as the Internet.
{"title":"A New Image Encryption Scheme Using Dual Chaotic Map Synchronization","authors":"","doi":"10.34028/iajit/18/1/11","DOIUrl":"https://doi.org/10.34028/iajit/18/1/11","url":null,"abstract":"Chaotic systems behavior attracts many researchers in the field of image encryption. The major advantage of using chaos as the basis for developing a crypto-system is due to its sensitivity to initial conditions and parameter tunning as well as the random-like behavior which resembles the main ingredients of a good cipher namely the confusion and diffusion properties. In this article, we present a new scheme based on the synchronization of dual chaotic systems namely Lorenz and Chen chaotic systems and prove that those chaotic maps can be completely synchronized with other under suitable conditions and specific parameters that make a new addition to the chaotic based encryption systems. This addition provides a master-slave configuration that is utilized to construct the proposed dual synchronized chaos-based cipher scheme. The common security analyses are performed to validate the effectiveness of the proposed scheme. Based on all experiments and analyses, we can conclude that this scheme is secure, efficient, robust, reliable, and can be directly applied successfully for many practical security applications in insecure network channels such as the Internet","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115679010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
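The master-slave synchronization the scheme relies on can be illustrated with a Pecora-Carroll style sketch on a single Lorenz system (the paper couples Lorenz and Chen systems; this single-system, Euler-integrated version is only a minimal illustration): the slave's x variable is replaced by the master's x signal, and the slave's (y, z) pair converges to the master's despite starting from a different state.

```python
import numpy as np

# Standard Lorenz parameters (chaotic regime).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt):
    """One explicit-Euler step of the full (master) Lorenz system."""
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return np.array([x + dx * dt, y + dy * dt, z + dz * dt])

def slave_step(state, x_drive, dt):
    """Response subsystem: the slave's x is overwritten by the master's
    transmitted x signal, and only (y, z) evolve."""
    _, y, z = state
    dy = x_drive * (RHO - z) - y
    dz = x_drive * y - BETA * z
    return np.array([x_drive, y + dy * dt, z + dz * dt])

dt, steps = 0.001, 20000
master = np.array([1.0, 1.0, 1.0])
slave = np.array([5.0, -4.0, 10.0])   # deliberately different start
err_start = np.linalg.norm(master[1:] - slave[1:])
for _ in range(steps):
    x_shared = master[0]               # the transmitted drive signal
    master = lorenz_step(master, dt)
    slave = slave_step(slave, x_shared, dt)
err_end = np.linalg.norm(master[1:] - slave[1:])
```

The (y, z) error obeys d(e_y² + e_z²)/2 = −e_y² − β e_z², so it decays exponentially regardless of the chaotic drive; once synchronized, the shared trajectory can seed the keystream on both ends of the channel.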
{"title":"Wrapper based Feature Selection using Integrative Teaching Learning Based Optimization Algorithm","authors":"Mohan Allam, Nandhini Malaiyappan","doi":"10.34028/IAJIT/17/6/7","DOIUrl":"https://doi.org/10.34028/IAJIT/17/6/7","url":null,"abstract":"","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121159072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}