Evaluation of Illumination in 3D Scenes Based on Heat Maps Comparison
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.3.601-605
A. Mezhenin, V. Izvozchikova, Ivan A. Mezhenin
The paper considers how to assess the quality of lighting in computer 3D scenes rendered with different lighting systems. Good lighting increases realism and immersion and improves the perception of the shape, color, and texture of objects in an image. Existing professional engineering programs for lighting calculation are poorly suited to design work, artistic solutions, or game scenes. To obtain objective estimates of illumination, the authors propose using metrics for evaluating the quality of rendering systems, with particular attention to tools such as heat maps. Visual analysis of heat maps by hue or intensity helps compare and evaluate the quality of scene illumination, but such a comparison does not yield a cumulative score. A possible solution is to treat heat maps as images and use them as the basis for a generalized heat map that produces a single cumulative statistic. To construct the generalized heat map, several ways of building a difference matrix based on normalization methods are proposed. The approach is implemented as a prototype application, and experiments were carried out on test scenes with different illumination systems. The generalized heat maps made it possible to obtain cumulative comparisons of different lighting approaches and to identify the areas most sensitive to changes in illumination. According to the authors, the proposed approach to illuminance estimation for staged lighting can improve the realism of visualization in 3D modeling.
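The abstract does not give the exact normalization or aggregation formulas; a minimal sketch of the idea, treating two rendered heat maps as images, normalizing them, and deriving a generalized difference heat map with a single cumulative statistic, might look like the following (the min-max normalization, the mean-based statistic, and the sensitivity threshold are assumptions for illustration):

```python
import numpy as np

def minmax_normalize(heatmap):
    """Scale a heat map to [0, 1]; one of several possible normalizations."""
    lo, hi = heatmap.min(), heatmap.max()
    return (heatmap - lo) / (hi - lo + 1e-12)

def generalized_heatmap(map_a, map_b):
    """Difference matrix of two normalized heat maps of the same scene."""
    a, b = minmax_normalize(map_a), minmax_normalize(map_b)
    return np.abs(a - b)

# Two hypothetical illuminance maps rendered with different lighting systems.
rng = np.random.default_rng(0)
map_a = rng.random((256, 256))
map_b = map_a + 0.1 * rng.standard_normal((256, 256))

diff = generalized_heatmap(map_a, map_b)
cumulative_score = diff.mean()                         # single cumulative statistic
sensitive_mask = diff > diff.mean() + 2 * diff.std()   # areas most sensitive to change
print(f"cumulative difference: {cumulative_score:.4f}, "
      f"sensitive pixels: {sensitive_mask.sum()}")
```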
{"title":"Evaluation of Illumination in 3D Scenes Based on Heat Maps Comparison","authors":"A. Mezhenin, V. Izvozchikova, Ivan A. Mezhenin","doi":"10.12720/jait.14.3.601-605","DOIUrl":"https://doi.org/10.12720/jait.14.3.601-605","url":null,"abstract":"—The issues of assessing the quality of lighting computer 3D scenes using different lighting systems are considered. Quality lighting increases realism, immersion and improves the perception of shape, color and texture of objects in the image. Existing engineering professional lighting calculation programs are not well suited to the design, art solutions or gaming scenes. To obtain objective estimates of illumination, we propose to use metrics for evaluating the quality of rendering systems. Particular attention is paid to the use of such tools as heat maps. Their visual analysis by hue or intensity helps to compare and evaluate the quality of illumination of scenes. However, such a comparison does not give a cumulative score. A possible solution is to treat heat maps as images and use them as the basis for a generalized heat map to produce a single cumulative statistic. In order to create a generalized heat map, several ways of constructing a difference matrix based on normalization methods have been proposed. The proposed approach is implemented as a prototype application. Experiments were carried out on test scenes with different illumination systems. The generalized heat maps made it possible to obtain cumulative estimates of the comparison of different lighting approaches and to identify areas most sensitive to changes in illumination. According to the authors, the proposed approach to illuminance estimation for staged lighting can be used to improve the realism of visualization in 3D modeling.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66332130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Unusual Activities in Local Network Using Snort and Wireshark Tools
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.4.616-624
N. Alsharabi, Maha Alqunun, Belal Abdullah Hezam Murshed
Many organizations worldwide face security risks on their local networks caused by malware, which can result in the loss of sensitive data. Network administrators therefore need efficient tools to observe network traffic in real time and detect any suspicious activity. This project detects incidents in local networks using the Snort and Wireshark tools, combining their advantages to maximize benefit, raise the security level of local networks, and protect data. The Snort Intrusion Detection System (Snort-IDS) is a network security tool whose rules are matched against packet traffic; when packets match a rule, Snort-IDS generates alert messages. First, the project uses a virtual dataset containing both normal and abnormal traffic for the performance evaluation test, and local rules are designed to detect anomalous activities. Second, Wireshark is used to analyze the data packets. The detected patterns are categorized into two groups: anomaly-based detection and signature-based detection. The results show that Snort-IDS efficiently detects unusual activities in both categories, and Wireshark analysis yields additional information such as source, destination, and protocol type. The approach was tested on a virtual local network to confirm its effectiveness.
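The paper's own local rules are not reproduced in the abstract; a hypothetical rule of the kind typically placed in a Snort local.rules file, alerting on inbound ICMP echo requests, could look like this (the message text and SID are illustrative, not the authors' rules):

```
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP echo request detected"; itype:8; sid:1000001; rev:1;)
```

When a matching packet arrives, Snort raises an alert that can then be inspected in Wireshark for source, destination, and protocol details.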
{"title":"Detecting Unusual Activities in Local Network Using Snort and Wireshark Tools","authors":"N. Alsharabi, Maha Alqunun, Belal Abdullah Hezam Murshed","doi":"10.12720/jait.14.4.616-624","DOIUrl":"https://doi.org/10.12720/jait.14.4.616-624","url":null,"abstract":"—Many organizations worldwide encounter security risks on their local network caused by malware, which might result in losing sensitive data. Thus, network administrators should use efficient tools to observe the instantaneous network traffic and detect any suspicious activity. This project aims to detect incidents in local networks based on snort and Wireshark tools. Wireshark and snort tools combine their advantages to achieve maximum benefit, enhance the security level of local networks, and protect data. Snort Intrusion Detection System (Snort-IDS) is a security tool for network security. Snort-IDS rules use to match packet traffic. If some packets match the rules, Snort-IDS will generate alert messages. First, this project uses a virtual dataset that includes normal and abnormal traffic for the performance evaluation test. In addition, design local rules to detect anomalous activities. Second, use Wireshark software to analyze data packets. Second, use Wireshark software to analyze data packets. This project categorizes the detected patterns into two groups, anomaly-based detection, and signature-based detection. The results revealed the efficiency of the snort-IDS system in detecting unusual activities in both patterns and generating more information by analyzing it by Wireshark, such as source, destination, and protocol type. The promoted experience was tested on the virtual local network to ensure the effectiveness of this method.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66332412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrusion Detection System in IoT Based on GA-ELM Hybrid Method
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.4.625-629
Elijah M. Maseno, Z. Wang, Fangzhou Liu
Recent years have seen rapid growth in IoT applications globally, in both governmental and non-governmental institutions. Integrating large numbers of electronic devices exposes IoT technologies to various forms of cyber-attack, and cybercriminals have shifted their focus to the IoT because it offers a broad network intrusion surface. Better protection of IoT devices requires intelligent intrusion detection systems. This work proposes a hybrid detection system based on a Genetic Algorithm (GA) and the Extreme Learning Machine (ELM). The main limitation of ELM is that its initial parameters (weights and biases) are chosen randomly, which affects the algorithm's performance; to overcome this, the GA is used to select the input weights. The choice of activation function is also key to a model's optimal performance, and different activation functions are used here to demonstrate their importance in constructing GA-ELM. The proposed model was evaluated on the TON_IoT network dataset, an up-to-date heterogeneous dataset that captures sophisticated cyber threats in the IoT environment. The results show that the GA-ELM model achieves higher accuracy than a single ELM. In addition, ReLU outperformed the other activation functions, which can be attributed to its fast learning capability and to the fact that it avoids the vanishing-gradient problem seen with the sigmoid activation function.
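The paper's exact GA encoding is not given in the abstract; a compact sketch of the general GA-ELM idea, in which a genetic algorithm searches over the ELM input weights and biases while the output weights are solved in closed form via the pseudo-inverse, might look like this (population size, mutation scheme, fitness, and the toy data standing in for TON_IoT features are all illustrative assumptions):

```python
import numpy as np

def elm_fit(X, y, w, b, act=lambda z: np.maximum(z, 0)):  # ReLU activation
    """Train an ELM: hidden layer is fixed; output weights via pseudo-inverse."""
    H = act(X @ w + b)
    return np.linalg.pinv(H) @ y

def elm_accuracy(X, y, w, b, beta, act=lambda z: np.maximum(z, 0)):
    pred = (act(X @ w + b) @ beta > 0.5).astype(int)
    return (pred == y).mean()

def ga_elm(X, y, hidden=32, pop=20, gens=30, sigma=0.1, seed=0):
    """GA selects the ELM input weights/biases instead of pure random init."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    population = [(rng.standard_normal((d, hidden)), rng.standard_normal(hidden))
                  for _ in range(pop)]
    def fitness(wb):
        w, b = wb
        return elm_accuracy(X, y, w, b, elm_fit(X, y, w, b))
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]                       # selection
        children = [(w + sigma * rng.standard_normal(w.shape),
                     b + sigma * rng.standard_normal(b.shape))
                    for w, b in elite]                       # mutation
        population = elite + children
    return max(population, key=fitness)

# Toy binary-classification data standing in for TON_IoT features.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
w, b = ga_elm(X, y)
print("GA-ELM train accuracy:", elm_accuracy(X, y, w, b, elm_fit(X, y, w, b)))
```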
{"title":"Intrusion Detection System in IoT Based on GA-ELM Hybrid Method","authors":"Elijah M. Maseno, Z. Wang, Fangzhou Liu","doi":"10.12720/jait.14.4.625-629","DOIUrl":"https://doi.org/10.12720/jait.14.4.625-629","url":null,"abstract":"—In recent years, we have witnessed rapid growth in the application of IoT globally. IoT has found its applications in governmental and non-governmental institutions. The integration of a large number of electronic devices exposes IoT technologies to various forms of cyber-attacks. Cybercriminals have shifted their focus to the IoT as it provides a broad network intrusion surface area. To better protect IoT devices, we need intelligent intrusion detection systems. This work proposes a hybrid detection system based on Genetic Algorithm (GA) and Extreme Learning Method (ELM). The main limitation of ELM is that the initial parameters (weights and biases) are chosen randomly affecting the algorithm’s performance. To overcome this challenge, GA is used for the selection of the input weights. In addition, the choice of activation function is key for the optimal performance of a model. In this work, we have used different activation functions to demonstrate the importance of activation functions in the construction of GA-ELM. The proposed model was evaluated using the TON_IoT network data set. This data set is an up-to-date heterogeneous data set that captures the sophisticated cyber threats in the IoT environment. The results show that the GA-ELM model has a high accuracy compared to single ELM. In addition, Relu outperformed other activation functions, and this can be attributed to the fact that it is known to have fast learning capabilities and solves the challenge of vanishing gradient witnessed in the sigmoid activation function.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66332535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of CNN-QCSO for Content Based Image Retrieval
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.4.668-673
Sarva Naveen Kumar, Ch. Sumanth Kumar
As the number of digital images on the Internet has grown rapidly over the last few years, retrieving a required image has become a significant problem. This paper designs a combined approach, CNN-QCSO, for retrieving images from big data: a deep learning technique, the Convolutional Neural Network (CNN), and an optimization technique, Quantum Cuckoo Search Optimization (QCSO). The CNN extracts features from the given query image, and the optimization technique helps reach the globally best features by tuning the internal parameters of the processing layers. A Content-Based Image Retrieval (CBIR) system is proposed in this study. CNNs are widely used in big data analysis, with many applications such as object identification, medical imaging, and security analysis. The combination of these two efficient techniques helps identify images and achieves good results: CNN alone achieves an accuracy of 94.8%, and combining it with QCSO improves accuracy by a further 1.6%. All experimental values are evaluated using MATLAB.
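The paper's CNN architecture and QCSO parameters are not detailed in the abstract; the retrieval step of a generic CBIR pipeline, ranking database images by distance between CNN feature vectors, can be sketched as follows (the random vectors are stand-ins for whatever features the tuned CNN would actually produce):

```python
import numpy as np

def retrieve(query_feat, db_feats, top_k=5):
    """Rank database images by Euclidean distance to the query's CNN features."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:top_k]

# Hypothetical 512-dimensional CNN feature vectors for 1000 database images.
rng = np.random.default_rng(0)
db_feats = rng.standard_normal((1000, 512))
query_feat = db_feats[42] + 0.01 * rng.standard_normal(512)  # query near image 42

print("top matches:", retrieve(query_feat, db_feats))  # image 42 should rank first
```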
{"title":"Fusion of CNN-QCSO for Content Based Image Retrieval","authors":"Sarva Naveen Kumar, Ch. Sumanth Kumar","doi":"10.12720/jait.14.4.668-673","DOIUrl":"https://doi.org/10.12720/jait.14.4.668-673","url":null,"abstract":"—As the growth of digital images is been widely increased over the last few years on internet, the retrieval of required image is been a big problem. In this paper, a combinational approach is designed for retrieval of image form big data. The approach is CNN-QCSO, one is deep learning technique, i.e., Convolutional Neural Network (CNN) and another is optimization technique, i.e., Quantm Cuckoo Search Optimization (QCSO). CNN is used for extracting of features for the given query image and optimization techniques helps in achieving the global best features by changing the internal parameters of processing layers. The Content Based Image Retrieval (CBIR) is proposed in this study. In big data analysis, CNN is vastly used and have many applications like identifying objects, medical imaging fields, security analysis and so on. In this paper, the combination of two efficient techniques helps in identifying the image and achieves good results. The results shows that CNN alone achieves an accuracy of 94.8% and when combined with QCSO the rate of accuracy improved by 1.6%. The entire experimental values are evaluated using matlab tool.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66332812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Banking API Framework to Improve the Online Transaction between Local Banks in Egypt Using Blockchain Technology
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.4.729-740
Mohamed Hamed Mohamed Hefny, Y. Helmy, M. Abdelsalam
Blockchain technology is considered to have a high impact on the banking industry due to its potential to enable new ways of organizing and handling banking activities. It reduces the costs and time associated with intermediaries and improves trust and security. This study explores how blockchain technology could enhance fund transfers between local banks in Egypt by providing a blockchain-based framework for instant payments and financial transactions. Due to its properties, blockchain is well placed to play a vital role in the financial sector by helping financial institutions protect their routine financial transactions with a more secure, instant, and low-cost model. The findings show that blockchain's characteristics (enhanced security, transparency, data integrity, information immutability, and instant settlement), combined with an open Application Programming Interface (API) architecture, enable seamless integration of financial services and applications. This approach would improve financial transactions between local banks in Egypt as well as the growth of e-payments and digital transformation. The proposed framework, which uses blockchain and an open banking API architecture for fund transfers between local banks, offers banks a substantial opportunity to advance their digital transformation strategy, financial inclusion, digitization of payments, online SME finance, access points, partnerships with FinTechs, and the use of innovative technologies to bring further efficiency to banking and payments. By using a blockchain network for domestic remittance and Automated Clearing House (ACH) transfers, banks should be able to offer customers a faster, cheaper, and more efficient service.
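The article describes the framework conceptually rather than as code; a minimal sketch of how an interbank transfer could be recorded as a hash-chained block, assuming a simple SHA-256 chain rather than any specific blockchain platform, and with hypothetical bank names and field names, might look like this:

```python
import hashlib
import json
import time

def make_block(prev_hash, transfer):
    """Append-only record of a fund transfer, chained by SHA-256 hashes."""
    block = {"timestamp": time.time(), "transfer": transfer, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("0" * 64, {"note": "genesis"})
b1 = make_block(genesis["hash"], {"from_bank": "BANK_A", "to_bank": "BANK_B",
                                  "amount_egp": 1500.0, "reference": "TX-0001"})
# Any tampering with b1's transfer data changes its hash and breaks the chain,
# which is the immutability property the framework relies on.
print(b1["hash"][:16], "links to", b1["prev_hash"][:16])
```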
{"title":"Open Banking API Framework to Improve the Online Transaction between Local Banks in Egypt Using Blockchain Technology","authors":"Mohamed Hamed Mohamed Hefny, Y. Helmy, M. Abdelsalam","doi":"10.12720/jait.14.4.729-740","DOIUrl":"https://doi.org/10.12720/jait.14.4.729-740","url":null,"abstract":"—Blockchain technology is considered to have a high impact on the banking industry due to its potential to enable new ways of organizing and handling banking industry activities. It reduces costs and time associated with intermediaries and improves trust and security. This study explores how blockchain technology could enhance fund transfer transactions between local banks in Egypt by providing a blockchain-based framework to conduct instant payments and financial transactions. Due to its properties, blockchain is qualified to play a vital role in the financial sector by helping financial institutions protect their daily routine financial transactions with a more secure, instant, and low-cost model. The findings show that blockchain technology’s characteristics (enhanced security, transparency, data integrity, information immutability, and instant settlement) and using open Application Programming Interface (API) architecture will give seamless integration of financial services and applications. This approach will improve Egypt’s financial transactions between local banks as well as the growth of e-payments and digital transformation. The proposed framework, which uses blockchain and open banking API architecture in fund transfer between local banks, will provide a great opportunity and space for banks to improve and positively impact digital transformation strategy, financial inclusion, digitization of payments, online SME finance, increasing access points, partnerships with FinTech’s, and using innovative technologies further to bring efficiency in banking and payments. By using a blockchain network for domestic remittance Automated Clearing House (ACH), banks should be able to offer customers a faster, cheaper, and more efficient service.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66333181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Effective Time-Sharing Switch Migration Scheme for Load Balancing in Software Defined Networking
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.4.846-856
Thangaraj Ethilu, Abirami Sathappan, P. Rodrigues
By distributing control, Software Defined Networking (SDN) delivers additional flexibility to network management and has been a significant breakthrough in network innovation. Switch migration is often used to balance workload across distributed controllers. The Time-Sharing Switch Migration (TSSM) scheme proposed a strategy in which multiple controllers share the workload of a switch via time sharing during an overload condition, resulting in reduced ping-pong controller behavior, fewer overload occurrences, and improved controller efficiency. However, because it requires more than one controller, TSSM incurs greater migration costs and higher controller resource usage during its operating period. We therefore present a coalitional game strategy that optimizes controller selection during the TSSM phase based on flow characteristics; the new TSSM method reduces migration costs and controller resource usage while preserving TSSM's benefits. For practicality, the proposed strategy is implemented on an open network operating system. The experimental findings reveal that, compared with the typical TSSM scheme, the proposed technique reduces migration costs and controller resource usage by approximately 18%.
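The coalitional-game formulation itself is not reproduced in the abstract; a simplified sketch of the underlying load-balancing decision, choosing which switch to move from the most overloaded controller to a lighter one, could look like this (the load metric, threshold, and pick-the-heaviest-flow heuristic are illustrative assumptions, not the authors' exact algorithm):

```python
def pick_migration(controllers, threshold=0.8):
    """controllers: dict name -> {'load': float, 'switches': {switch: flow_rate}}.
    Returns (switch, source, target) or None if no controller is overloaded."""
    src = max(controllers, key=lambda c: controllers[c]["load"])
    if controllers[src]["load"] <= threshold:
        return None                                    # no overload condition
    # Migrate the switch contributing the most flow on the overloaded controller.
    sw = max(controllers[src]["switches"], key=controllers[src]["switches"].get)
    dst = min(controllers, key=lambda c: controllers[c]["load"])
    return sw, src, dst

controllers = {
    "c1": {"load": 0.95, "switches": {"s1": 400, "s2": 900}},
    "c2": {"load": 0.40, "switches": {"s3": 300}},
    "c3": {"load": 0.55, "switches": {"s4": 500}},
}
print(pick_migration(controllers))   # ('s2', 'c1', 'c2')
```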
{"title":"An Effective Time-Sharing Switch Migration Scheme for Load Balancing in Software Defined Networking","authors":"Thangaraj Ethilu, Abirami Sathappan, P. Rodrigues","doi":"10.12720/jait.14.4.846-856","DOIUrl":"https://doi.org/10.12720/jait.14.4.846-856","url":null,"abstract":"—Using distributed Software Defined Networking (SDN)control, SDN delivers additional flexibility to network management, and it has been a significant breakthrough in network innovation. Switch migration is often used for distributed controller workload balancing. The Time-Sharing Switch Migration (TSSM) scheme proposed a strategy in which multiple controllers are allowed to share the workload of a switch via time sharing during an overloaded condition, resulting in reduced ping-pong controller difficulty, fewer overload occurrences, and improved controller efficiency. However, it requires more than one controller to accomplish, it has greater migration costs and higher controller resource usage during the TSSM operating time. As a result, we presented a coalitional game strategy that optimizes controller selection throughout the TSSM phase depending on flow characteristics. The new TSSM method reduces migration costs and controller resource usage while still providing TSSM benefits. For the sake of practicality, the proposed strategy is implemented using an open network operating system. The experimental findings reveal that, as compared to the typical TSSM system, the proposed technique reduces migration costs and controller resource usage by approximately 18%.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66334522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Allocation in Cloud Computing
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.5.1063-1072
G. Senthilkumar, K. Tamilarasi, N. Velmurugan, J. K. Periasamy
Cloud computing is currently the dominant trend in data storage, processing, visualization, and analysis, and its adoption has risen significantly as government organizations and commercial businesses have migrated to the cloud. It provides dynamic, on-demand resource allocation to deliver guaranteed services to clients and is one of the fastest-growing segments of the computing business: a new approach to delivering IT services over the Internet in which consumers access computing resources as pools. Allocating and planning resources in cloud computing is both necessary and challenging. This work uses a hybrid of the Random Forest (RF), a supervised machine-learning technique, and the Genetic Algorithm (GA) for virtual machine allocation. The goal is to maximize resource usage while minimizing power consumption and distributing and utilizing resources more effectively. An approach is described for producing the training data used to train the random forest, and the method is tested on real-time workload traces from PlanetLab. The suggested GA-RF model outperformed in terms of data center and host resource utilization, energy consumption, and execution time; resource utilization, power consumption, and execution time were employed as performance measures. Random Forest provides better results than the Genetic Algorithm.
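The abstract does not specify how the RF and GA interact; one plausible reading, sketched below with wholly illustrative features and a synthetic power model, is that a random forest trained on historical traces predicts the energy cost of a candidate VM-to-host allocation while a GA searches allocations to minimize that prediction:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: (host_cpu_util, host_mem_util, vm_cpu_demand) -> watts.
X_train = rng.random((500, 3))
y_train = 100 + 150 * X_train[:, 0] + 40 * X_train[:, 2] + rng.normal(0, 5, 500)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def predicted_power(allocation, hosts, vms):
    """Sum of RF-predicted power across VMs under a VM->host allocation."""
    feats = [[hosts[h][0], hosts[h][1], vms[v]] for v, h in enumerate(allocation)]
    return rf.predict(np.asarray(feats)).sum()

hosts = [(0.3, 0.5), (0.6, 0.4), (0.2, 0.7)]   # (cpu_util, mem_util) per host
vms = [0.2, 0.5, 0.1, 0.4]                     # cpu demand per VM

# Tiny GA over allocations (mutation-only, for brevity).
pop = [rng.integers(0, len(hosts), len(vms)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda a: predicted_power(a, hosts, vms))
    parents = pop[:10]
    children = []
    for p in parents:
        child = p.copy()
        child[rng.integers(len(vms))] = rng.integers(len(hosts))  # mutate one gene
        children.append(child)
    pop = parents + children
best = min(pop, key=lambda a: predicted_power(a, hosts, vms))
print("best allocation:", best,
      "predicted W:", round(predicted_power(best, hosts, vms), 1))
```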
{"title":"Resource Allocation in Cloud Computing","authors":"G. Senthilkumar, K. Tamilarasi, N. Velmurugan, J. K. Periasamy","doi":"10.12720/jait.14.5.1063-1072","DOIUrl":"https://doi.org/10.12720/jait.14.5.1063-1072","url":null,"abstract":"—Cloud computing seems to be currently the hottest new trend in data storage, processing, visualization, and analysis. There has also been a significant rise in cloud computing as government organizations and commercial businesses have migrated toward the cloud system. It has to do with dynamic resource allocation on demand to provide guaranteed services to clients. Another of the fastest-growing segments of computer business involves cloud computing. It was a brand-new approach to delivering IT services through the Internet. This paradigm allows consumers to access computing resources as in puddles over the Internet. It is necessary and challenging to deal with the allocation of resources and planning in cloud computing. The Random Forest (RF) and the Genetic Algorithm (GA) are used in a hybrid strategy for virtual machine allocation in this work. This is a supervised machine-learning technique. Power consumption will be minimized while resources are better distributed and utilized, and the project’s goal is to maximize resource usage. There is an approach that can be used to produce training data that can be used to train a random forest. Planet Lab’s real-time workload traces are utilized to test the method. The suggested GA-RF model outperformed in terms of data center and host resource utilization, energy consumption, and execution time. Resource utilization, Power consumption, and execution time were employed as performance measures in this work. Random Forest provides better results compared with the Genetic Algorithm.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135052623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research Opportunities in Microservices Quality Assessment: A Systematic Literature Review
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.5.991-1002
Verónica C. Tapia, Carlos M. Gaona
The growth in microservices development has sparked interest in evaluating their quality. This study seeks to determine the key criteria and challenges in evaluating microservices in order to drive research and optimize processes. The systematic literature review presented here finds that the most commonly used evaluation criteria are performance, scalability, security, cohesion, coupling, and granularity. Although evaluation tools exist, they mainly measure performance aspects such as latency and resource consumption. Challenges were identified in security, granularity, throughput, monitoring, organizational strategy, orchestration, choreography, scalability, decomposition, and monolith refactoring, along with research opportunities in empirical studies, analysis of quality trade-offs, and the broadening of relevant perspectives and tools. Further challenges are highlighted in the interrelation of quality attributes, metrics and patterns, automatic evaluation, architectural decisions and technical debt, domain-based design, testing, monitoring, and performance modeling, as well as in orchestration, communication management and consistency between microservices, independent evolution, and scalability. It is therefore critical to address these challenges and to continue research that improves the understanding of and practices related to microservices quality.
{"title":"Research Opportunities in Microservices Quality Assessment: A Systematic Literature Review","authors":"Verónica C. Tapia, Carlos M. Gaona","doi":"10.12720/jait.14.5.991-1002","DOIUrl":"https://doi.org/10.12720/jait.14.5.991-1002","url":null,"abstract":"—The growth in the development of microservices has sparked interest in evaluating their quality. This study seeks to determine the key criteria and challenges in evaluating microservices to drive research and optimize processes. The systematic review of the literature presented in this research identified that the most commonly used evaluation criteria are performance, scalability, security, cohesion, coupling, and granularity. Although evaluation tools exist, they mainly measure performance aspects such as latency and resource consumption. Challenges were identified in security, granularity, throughput, monitoring, organizational strategy, orchestration, choreography, scalability, decomposition, and monolith refactoring. In addition, research opportunities in empirical studies, analysis of quality trade-offs, and broadening of relevant perspectives and tools are noted. Challenges in the interrelation of quality attributes, metrics and patterns, automatic evaluation, architectural decisions and technical debt, domain-based design, testing, monitoring, and performance modeling are also highlighted. Challenges in orchestration, communication management and consistency between microservices, independent evolution, and scalability are also mentioned. Therefore, it is critical to address these particular challenges in microservices and to continue research to improve the understanding and practices related to quality.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136202961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent Deep Learning Architecture Using Multi-scale Residual Network Model for Image Interpolation
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.5.970-979
Diana Earshia V., Sumathi M.
Learning-based image interpolation techniques have recently proven efficient, owing to their promising results, and recent studies show that deep neural networks can considerably enhance the quality of image super-resolution. Current research commonly deepens convolutional neural networks to improve interpolation performance, but as network depth grows, more training issues arise; a network cannot be substantially improved merely by increasing its depth, and new training strategies are required to improve the accuracy of interpolated images. This research implements an advanced deep learning mechanism called the Deep Multi-Scaled Residual Network (DMResNet) for effective image interpolation. With the proposed framework, Low-Resolution (LR) images are reconstructed into High-Resolution (HR) images with low computational burden and time complexity. To dynamically discover image features at multiple scales, convolution kernels of various sizes are used within the residual blocks, and the multi-scaled residual architecture lets these features interact with one another to obtain the most accurate image data. The interpolation performance and image reconstruction efficiency of the proposed model are validated using PSNR, SSIM, RMSE, run-time analysis, and FSIM on the popular IAPR TC-12, DIV 2K, and CVDS datasets. The model outperforms state-of-the-art interpolation techniques, yielding an increase of 8% in PSNR, 6% in SSIM, and 1.2% in FSIM, a decrease of 38.79% in RMSE, and a 5.875-fold reduction in run time.
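The exact DMResNet layer configuration is not given in the abstract; a minimal PyTorch sketch of a multi-scale residual block, with parallel 3×3 and 5×5 kernels whose features interact before the residual addition, conveys the core idea (channel counts and the 1×1 fusion are illustrative assumptions, not the published architecture):

```python
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Parallel convolutions at two scales, fused, then added to the input."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        # A 1x1 convolution lets the two scales' features interact.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f3 = self.act(self.conv3(x))
        f5 = self.act(self.conv5(x))
        fused = self.fuse(torch.cat([f3, f5], dim=1))
        return x + fused            # residual connection eases deep training

block = MultiScaleResidualBlock()
lr_features = torch.randn(1, 64, 32, 32)   # feature map from an LR image
print(block(lr_features).shape)            # torch.Size([1, 64, 32, 32])
```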
{"title":"An Intelligent Deep Learning Architecture Using Multi-scale Residual Network Model for Image Interpolation","authors":"Diana Earshia V., Sumathi M.","doi":"10.12720/jait.14.5.970-979","DOIUrl":"https://doi.org/10.12720/jait.14.5.970-979","url":null,"abstract":"—Image interpolation techniques based on learning have been shown to be efficient in recent days, due to their promising results. Deep neural networks can considerably enhance the quality of image super-resolution, according to recent studies. Convolutional neural networks with deeper layers are commonly used in current research to improve the performance of image interpolation. As the network’s depth grows, more issues with training arise. This research intends to implement an advanced deep learning mechanism called Deep Multi-Scaled Residual Network (DMResNet) for effective image interpolation. A network cannot be substantially improved by merely increasing the depth of the network. New training strategies are required for improving the accuracy of interpolated images. By using the proposed framework, the Low Resolution (LR) images are reconstructed to the High Resolution (HR) images with low computational burden and time complexity. In order to dynamically discover the image features at multiple scales, convolution kernels of various sizes based on the residual blocks have been utilized in this work. In the meantime, the multi-scaled residual architecture is formulated to allow these characteristics to interact with one another for obtaining the most accurate image data. The interpolation performance and image reconstruction efficiency of the proposed model have been validated by using a variety of measures such as PSNR, SSIM, RMSE, Run time analysis, and FSIM. Popular datasets IAPR TC-12, DIV 2K, and CVDS are used for validating the proposed model. This model outperforms the state-of-art interpolation techniques in its performance, by yielding an increase of 8% in PSNR, 6% in SSIM, 1.2% in FSIM, and a decrease of 38.79% in RMSE, 5.875 times in run time analysis.","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136202962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Optimized Deep Learning Based Malicious Nodes Detection in Intelligent Sensor-Based Systems Using Blockchain
Pub Date: 2023-01-01 | DOI: 10.12720/jait.14.5.1037-1045
Swathi Darla, C. Naveena
In this research work, a blockchain-based secure routing model is proposed for the Internet of Sensor Things (IoST), assisted by a deep learning-based hybrid meta-heuristic optimization model. The proposed model has four major phases: (a) optimal cluster head selection, (b) a lightweight blockchain-based registration and authentication mechanism, (c) optimized deep learning-based malicious node identification, and (d) optimal path identification. Initially, the network is constructed with N nodes, from which a certain number are selected as optimal cluster heads using a hybrid optimization model with the two-fold objectives of energy consumption and delay. The proposed Chimp social incentive-based Mutated Poor Rich Optimization (CMPRO) algorithm is a conceptual amalgamation of the standard Chimp Optimization Algorithm (ChOA) and the Poor and Rich Optimization (PRO) approach. Blockchain is deployed on the optimal cluster heads and the base station because they have sufficient storage and computational resources. A lightweight blockchain-based registration and authentication mechanism is then applied. After network authentication, malicious nodes are detected using a new optimized Deep Belief Network (DBN); to enhance detection accuracy, the hidden layers of the DBN are optimized using the same hybrid optimization model (CMPRO). Once malicious nodes are detected, the source node selects the shortest path to the destination, identified with the Dijkstra algorithm, and performs secure routing in their absence, so the network as a whole becomes secure. Finally, the model's performance is validated to demonstrate its efficiency over existing models.
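The abstract states that the optimal route is found with the Dijkstra algorithm after malicious nodes are excluded; a compact sketch of that step, on a weighted graph with the detected malicious nodes skipped during relaxation, might look like this (the graph and node names are illustrative):

```python
import heapq

def dijkstra(graph, source, target, malicious=frozenset()):
    """Shortest path over graph (node -> {neighbor: weight}), avoiding malicious nodes."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph[u].items():
            if v in malicious:
                continue                      # never route through malicious nodes
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:                     # walk predecessors back to source
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "D": 5}, "B": {"D": 1}, "D": {}}
print(dijkstra(graph, "S", "D", malicious={"B"}))   # ['S', 'A', 'D'], avoiding B
```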
{"title":"An Optimized Deep Learning Based Malicious Nodes Detection in Intelligent Sensor-Based Systems Using Blockchain","authors":"Swathi Darla, C. Naveena","doi":"10.12720/jait.14.5.1037-1045","DOIUrl":"https://doi.org/10.12720/jait.14.5.1037-1045","url":null,"abstract":"—In this research work, a blockchain-based secure routing model is proposed for Internet of Sensor Things (IoST), with the assistance acquired from deep learning-based hybrid meta-heuristic optimization model. The proposed model includes three major phases: (a) optimal cluster head selection, (b) lightweight blockchain-based registration and authentication mechanism, (c) optimized deep learning based malicious node identification and (d) optimal path identification. Initially, the network is constructed with N number of nodes. Among those nodes certain count of nodes is selected as optimal cluster head based on the two-fold objectives (energy consumption and delay) based hybrid optimization model. The proposed Chimp social incentive-based Mutated Poor Rich Optimization (CMPRO) Algorithm is the conceptual amalgamation of the standard Chimp Optimization Algorithm (ChOA) and Poor and Rich Optimization (PRO) approach. Moreover, blockchain is deployed on the optimal CHs and base station because they have sufficient storage and computational resources. Subsequently, a lightweight blockchain-based registration and authentication mechanism is undergone. After the authentication of the network, the presence of malicious nodes in the network is detected using the new Optimized Deep Belief Network. To enhance the detection accuracy of the model, the hidden layers of Deep Belief Network (DBN) is optimized using the new hybrid optimization model (CMPRO). After the detection of malicious nodes, the source node selects the shortest path to the destination and performs secure routing in the absence of malicious node. In the proposed model, the optimal path for routing the data is identified using the Dijkstra algorithm. As a whole the network becomes secured. Finally, the performance of the model is validated to manifest its efficiency over the existing models","PeriodicalId":36452,"journal":{"name":"Journal of Advances in Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136305713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}