An open-source MP + CNN + BiLSTM model-based hybrid model for recognizing sign language on smartphones
Pub Date: 2024-05-30 | DOI: 10.1007/s13198-024-02376-x
Hayder M. A. Ghanimi, Sudhakar Sengan, Vijaya Bhaskar Sadu, Parvinder Kaur, Manju Kaushik, Roobaea Alroobaea, Abdullah M. Baqasah, Majed Alsafyani, Pankaj Dadheech
The communication barriers experienced by deaf and hard-of-hearing individuals often lead to social isolation and limited access to essential services, underlining a critical need for effective and accessible solutions. Recognizing the unique challenges this community faces, such as the scarcity of sign language interpreters (particularly in remote areas) and the lack of real-time translation tools, this paper proposes a smartphone-runnable sign language recognition model to address the communication problems faced by deaf and hard-of-hearing persons. The proposed model combines MediaPipe hand tracking with particle filtering (PF) to accurately detect and track hand movements, and a gesture recognition model based on a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) to capture the temporal dynamics of sign language gestures. These models use a small number of layers and filters, depthwise separable convolutions, and dropout layers to minimize computational cost and prevent overfitting, making them suitable for smartphone deployment. The article discusses the challenges faced by the deaf and hard-of-hearing community and explains how the proposed model could help overcome them. A MediaPipe + PF module performs feature extraction from the image and data preprocessing. During training, with fewer activation functions and parameters, the proposed model outperformed the other CNN-RNN variants used in the experiments (CNN + LSTM, CNN + GRU) in convergence speed and learning efficiency.
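As a rough illustration of the kind of compact recognizer the abstract describes, the Keras sketch below stacks a depthwise separable convolution, dropout, and a BiLSTM over per-frame hand-landmark features. The frame count, landmark dimensionality (21 MediaPipe hand landmarks x 3 coordinates), class count, and layer sizes are all assumptions, not the authors' published configuration.

```python
# A minimal sketch (not the authors' code) of a compact CNN + BiLSTM
# gesture classifier with depthwise separable convolutions and dropout.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_LANDMARKS, NUM_CLASSES = 30, 63, 26  # assumed: 21 landmarks x 3 coords

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_LANDMARKS, 1)),
    # Depthwise separable conv applied to every frame keeps per-frame cost low
    layers.TimeDistributed(layers.SeparableConv1D(32, 3, activation="relu",
                                                  padding="same")),
    layers.TimeDistributed(layers.GlobalAveragePooling1D()),
    layers.Dropout(0.3),                    # dropout to curb overfitting
    layers.Bidirectional(layers.LSTM(64)),  # temporal dynamics of the gesture
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```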
{"title":"An open-source MP + CNN + BiLSTM model-based hybrid model for recognizing sign language on smartphones","authors":"Hayder M. A. Ghanimi, Sudhakar Sengan, Vijaya Bhaskar Sadu, Parvinder Kaur, Manju Kaushik, Roobaea Alroobaea, Abdullah M. Baqasah, Majed Alsafyani, Pankaj Dadheech","doi":"10.1007/s13198-024-02376-x","DOIUrl":"https://doi.org/10.1007/s13198-024-02376-x","url":null,"abstract":"<p>The communication barriers experienced by deaf and hard-of-hearing individuals often lead to social isolation and limited access to essential services, underlining a critical need for effective and accessible solutions. Recognizing the unique challenges this community faces—such as the scarcity of sign language interpreters, particularly in remote areas, and the lack of real-time translation tools. This paper proposes the development of a smartphone-runnable sign language recognition model to address the communication problems faced by deaf and hard-of-hearing persons. This proposed model combines Mediapipe hand tracking with particle filtering (PF) to accurately detect and track hand movements, and a convolutional neural network (CNN) and bidirectional long short-term memory based gesture recognition model to model the temporal dynamics of Sign Language gestures. These models use a small number of layers and filters, depthwise separable convolutions, and dropout layers to minimize the computational costs and prevent overfitting, making them suitable for smartphone implementation. This article discusses the existing challenges handled by the deaf and hard-of-hearing community and explains how the proposed model could help overcome these challenges. A MediaPipe + PF model performs feature extraction from the image and data preprocessing. During training, with fewer activation functions and parameters, this proposed model performed better to other CNN with RNN variant models (CNN + LSTM, CNN + GRU) used in the experiments of convergence speed and learning efficiency.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"130 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model to reduce DevOps pipeline execution time using SAST
Pub Date: 2024-05-30 | DOI: 10.1007/s13198-024-02262-6
Shobhit Kumar Saurabh, Deepak Kumar
Static application security testing (SAST) is a well-established technique for identifying security flaws in code and thereby improving software product quality. SonarQube is a SAST tool that scans an application's source code, identifies the vulnerabilities present, and supports root cause analysis (RCA) of the flaws it finds, which helps in remediating them. SAST tools analyze an application from the inside out and do not require the system to be running. The scan gives developers instant feedback on security risks, helping them resolve issues during development, deepen their security knowledge, and write less vulnerable code, which results in a more secure, higher-quality product. The Sonar analysis report provides on-demand access to all recommendations, and users can navigate to the exact lines of code that contain vulnerabilities, enabling faster discovery and auditing. To conduct static analysis, the authors used SonarQube to compile and measure code quality for code kept in repositories. They observed that SAST is an important step in scanning a software product for security issues and vulnerabilities, but that most organisations run it at a late stage of the DevOps/DevSecOps pipeline, which increases pipeline execution time and effort and therefore impacts the product's overall budget. This motivated the authors to propose a model that reduces build pipeline execution time by shifting static analysis left, that is, performing it early in the DevSecOps phases. The proposed solution uses an open-source GitHub project written in C#/.NET, Azure DevOps, the dotnet-sonarscanner tool, and SonarQube to conduct static analysis and testing. By improving software quality in the early DevOps phases, the proposed model helps reduce build time and cost and increases the reliability, efficiency, and performance of the software product.
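As a loose sketch of what shifting the scan left can look like in practice, the script below runs a SonarQube analysis locally, before code ever reaches the pipeline, using the dotnet-sonarscanner global tool (installed via `dotnet tool install --global dotnet-sonarscanner`). The project key, server URL, and token are placeholders; this is not the authors' pipeline definition.

```python
# A minimal sketch of an early ("shift-left") SonarQube scan driven from Python.
# Project key, server URL, and token are hypothetical placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # fail fast so the scan gates the build

run(["dotnet", "sonarscanner", "begin",
     "/k:my-project-key",                        # hypothetical project key
     "/d:sonar.host.url=http://localhost:9000",  # hypothetical server URL
     "/d:sonar.token=REPLACE_ME"])               # use /d:sonar.login on older scanners
run(["dotnet", "build", "--no-incremental"])     # analysis runs during the build
run(["dotnet", "sonarscanner", "end",
     "/d:sonar.token=REPLACE_ME"])
```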
{"title":"Model to reduce DevOps pipeline execution time using SAST","authors":"Shobhit Kumar Saurabh, Deepak Kumar","doi":"10.1007/s13198-024-02262-6","DOIUrl":"https://doi.org/10.1007/s13198-024-02262-6","url":null,"abstract":"<p>Static code analysis (SAST is a well-known concept) to identify security flaws in the code to improve software product quality. A SAST tool called SonarQube which can scan source code of an application and identify the vulnerabilities present in software. It can also find the RCA of the vulnerabilities found in software products. it helps in rehabilitating the securities flaws found in analysis of the software products. SAST tools analyses upside-down for an application. It does not need s system to be in running state to perform analysis. The scan provides instant feedback to developers in terms of reducing security risks for an application. It helps to resolve issues which was present during development and helps developers to increase their knowledge. As a result, developers become competent about knowledge of security for software product. The sonar analysis report provides on demand access to all recommendations. The user can navigate to line-of-code which have vulnerabilities and they can do faster discovery and auditing. And hence the developers can write more code which is less vulnerable. This way they have more secure and quality product delivered. To conduct static analysis, the Authors have used SonarQube as a tool, which compile and measure the code quality for the code kept in repositories. The Authors observed SAST is important step in conductingsecurity and vulnerabilities scan for software product, it was also observed that most of the organisationconduct this SAST at later stage in DevOps/DevSecOps Phase which actually increases pipeline execution time. This motivated Authors topropose a better Model to reduce the build pipeline execution time. As Devops/DevSecOps standards, SonarQube is used to do SASTin DevSecOps pipelines which normally increases the build pipeline execution time. This increases the effort and time to complete the build pipeline and hence it also impacts overall budget of the software product. In the proposed solution, the Authors tried to reduce build pipeline execution time by conducting static analysis early in DevSecOps phases using shift left. Proposed solution uses GitHub open-source project written in C#.NET language, Azure Devops, dotnet sonar scanner tool and SonarQube to conduct static analysis and testing. The authors(s) tried to enhance the software quality in early Devops phases which will be helpful in reducing the build time and cost. Proposed Model will be helpful in increasing reliability, efficiency, and performance of software product.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"50 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive joint source coding LDPC for energy efficient communication in wireless network on chip
Pub Date: 2024-05-29 | DOI: 10.1007/s13198-024-02370-3
Anupama Sindgi, U. B. Mahadevaswamy
Wireless Network-on-Chip (WiNoC) technology has emerged as a promising approach to overcoming the growing communication constraints in multi-core systems. Nevertheless, the steadily rising energy consumption of WiNoCs presents a significant obstacle. In this article, we present a novel method for addressing this issue by combining adaptive joint source coding with low-density parity-check (LDPC) encoding. The method involves two key adaptations: first, accurate tuning of the transform coding threshold in compressive sensing to achieve effective data compression, and second, intelligent control of the number of parity checks in LDPC coding to reduce both energy consumption and latency. These adaptive techniques are driven by signal-to-noise ratio estimates and the dependability requirements specific to the application. Our findings demonstrate a remarkable 4.2% reduction in power consumption compared with methods currently in use. This achievement highlights the potential for significant energy savings in real-world applications and is a pioneering contribution to the development of energy-efficient communication systems.
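The toy function below illustrates the adaptive principle only: both knobs, the compressive-sensing threshold and the LDPC code rate (which determines the number of parity checks), are selected from an SNR estimate. The SNR bands and parameter values are invented for illustration and are not the paper's tuned values.

```python
# A toy sketch of SNR-driven adaptation of compression and coding parameters.
# All thresholds, rates, and SNR bands are illustrative assumptions.
def select_coding_params(snr_db: float) -> dict:
    if snr_db >= 20:    # clean link: compress harder, spend fewer parity checks
        return {"cs_threshold": 0.10, "ldpc_rate": 7 / 8}
    elif snr_db >= 10:  # moderate link: middle ground
        return {"cs_threshold": 0.05, "ldpc_rate": 3 / 4}
    else:               # noisy link: keep more coefficients, add parity checks
        return {"cs_threshold": 0.02, "ldpc_rate": 1 / 2}

for snr in (25.0, 12.0, 4.0):
    print(snr, select_coding_params(snr))
```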
{"title":"Adaptive joint source coding LDPC for energy efficient communication in wireless network on chip","authors":"Anupama Sindgi, U. B. Mahadevaswamy","doi":"10.1007/s13198-024-02370-3","DOIUrl":"https://doi.org/10.1007/s13198-024-02370-3","url":null,"abstract":"<p>The Wireless Network-on-Chip (WiNoC) technology has emerged as a promising approach to overcome the growing communication constraints present in multi-core systems. Nevertheless, a significant obstacle is presented by WiNoCs’ steadily rising energy consumption. In this article, we present a novel method for addressing this issue by combining adaptive joint source coding with low-density parity-check (LDPC) encoding. This strategy is presented as an innovative way to handle the problem. Two key modifications are involved in the implementation of our method: firstly, the accurate tuning of the transform coding threshold in compressive sensing to achieve effective data compression, and secondly, the intelligent control of the number of parity checks in LDPC coding to reduce both energy consumption and latency. These adaptive techniques are tailored to meet the signal-to-noise ratio estimates and the dependability standards unique to the application. Our findings demonstrate a substantial accomplishment, with a remarkable 4.2% reduction in power consumption compared to other methods currently in use. This achievement highlights the vast potential for achieving significant energy savings in real-world applications and is a pioneering contribution to the development of energy-efficient communication systems.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"33 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability assessment of emergency safety barriers based on an intuitionistic fuzzy sets aggregation procedure and subjective safety analysis: a case study
Pub Date: 2024-05-29 | DOI: 10.1007/s13198-024-02365-0
Samia Daas, Fares Innal
Emergency safety barriers are among the active technical barriers protecting liquefied petroleum gas (LPG) storage tanks. This study assesses their reliability to help decision-makers understand how such assessments can support decisions that reduce the risks associated with LPG storage. The paper develops an integrated approach that uses an intuitionistic fuzzy sets aggregation procedure, subjective safety analysis, and emergency event tree analysis to handle uncertainty in the reliability assessment of emergency safety barriers. In addition, a case study on the emergency safety barriers of an LPG plant in Algeria is carried out to illustrate the effectiveness and feasibility of the proposed methodology. The results demonstrate that the intuitionistic fuzzy sets aggregation procedure and subjective safety analysis provide highly reliable results for evaluating barrier reliability. Classical event tree analysis, by contrast, does not assess the emergency consequences of different accident scenarios; it only estimates their occurrence probabilities. The results of this study show that the reliability of emergency safety barriers can be used to estimate the probability of emergency consequences under different accident scenarios, improve reliability, and help prioritize emergency improvement measures. The study provides scientific and operational references for analyzing the emergency consequences of various accident scenarios.
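To make the aggregation step concrete, here is a minimal sketch of the intuitionistic fuzzy weighted averaging (IFWA) operator commonly used to fuse expert judgments expressed as (membership, non-membership) pairs. The opinions and weights below are invented illustrative numbers; the paper's actual elicitation data are not reproduced.

```python
# A minimal sketch of the standard IFWA aggregation operator.
import numpy as np

def ifwa(opinions, weights):
    """Aggregate intuitionistic fuzzy numbers (mu_i, nu_i) with weights summing to 1."""
    mu = np.array([m for m, _ in opinions])
    nu = np.array([n for _, n in opinions])
    w = np.asarray(weights)
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w)  # aggregated membership degree
    agg_nu = np.prod(nu ** w)                # aggregated non-membership degree
    return float(agg_mu), float(agg_nu)

# Three experts rate a barrier's failure possibility; weights reflect expertise.
opinions = [(0.6, 0.3), (0.7, 0.2), (0.5, 0.4)]
weights = [0.40, 0.35, 0.25]
print(ifwa(opinions, weights))
```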
{"title":"Reliability assessment of emergency safety barriers based on an intuitionistic fuzzy sets aggregation procedure and subjective safety analysis: a case study","authors":"Samia Daas, Fares Innal","doi":"10.1007/s13198-024-02365-0","DOIUrl":"https://doi.org/10.1007/s13198-024-02365-0","url":null,"abstract":"<p>The Emergency safety barrier is one of the active technical barriers related to the safety of liquefied petroleum gas storage tanks. However, this study assesses the reliability of emergency safety barriers to help decision-makers understand how they can support decisions to reduce the risks associated with LPG storage. This paper aims to develop an integrated approach that uses an intuitionistic fuzzy sets aggregation procedure, subjective safety analysis, and emergency event tree analysis to handle uncertainty in the reliability assessment of emergency safety barriers. In addition, a case study on the reliability assessment of the emergency safety barriers of the LPG plant in Algeria based on the proposed methodology is provided and carried out to illustrate its effectiveness and feasibility. The results demonstrated the ability of intuitionistic fuzzy sets aggregation procedure and subjective safety analysis to provide highly reliable results and evaluate the reliability of emergency safety barriers. However, the classical event tree analysis does not consider the possibility of assessing the emergency consequences of different accident scenarios. Consequently, it only allows you to estimate the occurrence probability of accident scenarios. The results of this study show that the reliability of emergency safety barriers can be used to estimate the probability of emergency consequences under different accident scenarios, improve reliability, and help prioritize emergency improvement measures. The study provides scientific and operational references for analyzing the emergency consequences of various accident scenarios.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"81 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Load balanced and optimal clustering in WSNs using grey wolf optimizer
Pub Date: 2024-05-29 | DOI: 10.1007/s13198-024-02306-x
Lekhraj, Alok Kumar, Anoj Kumar
A wireless sensor network (WSN) is an outstanding technology that can aid a wide variety of applications. The sensor nodes used in a WSN are battery-powered, and those batteries are typically impossible to recharge or replace, so power is the most valuable resource in such networks. Over the years, several strategies have been devised to conserve this precious resource, and clustering has proved one of the most successful. The aim of this paper is to propose an effective technique for choosing cluster heads in WSNs so as to extend network lifetime, using the Grey Wolf Optimizer (GWO). The standard GWO is adapted here to the specific purpose of cluster head selection, with a fitness function that considers eleven attributes. Simulations carried out under different conditions show that the proposed protocol (the CH-GWO protocol) is superior to several well-established clustering protocols in terms of energy consumption and network lifetime, and that it forms energy-efficient and scalable clusters.
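For reference, the sketch below implements the standard GWO update loop that the paper adapts: the three best wolves (alpha, beta, delta) guide the pack, and the control parameter a decays linearly from 2 to 0. The sphere fitness used here is a stand-in; the paper's eleven-attribute cluster-head fitness function is not reproduced.

```python
# A compact sketch of the standard grey wolf optimizer (GWO) loop.
import numpy as np

def gwo(fitness, dim=5, wolves=20, iters=100, lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (wolves, dim))              # initial pack positions
    for t in range(iters):
        scores = np.array([fitness(x) for x in X])
        alpha, beta, delta = X[np.argsort(scores)[:3]]  # three best wolves lead
        a = 2.0 - 2.0 * t / iters                       # control parameter: 2 -> 0
        guided = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1 = rng.random(X.shape)
            r2 = rng.random(X.shape)
            A = 2.0 * a * r1 - a                        # exploration/exploitation coefficient
            C = 2.0 * r2
            D = np.abs(C * leader - X)                  # encircling distance to the leader
            guided += leader - A * D                    # candidate guided by this leader
        X = np.clip(guided / 3.0, lo, hi)               # average of the three guides
    return min(X, key=fitness)

print(gwo(lambda x: float(np.sum(x ** 2))))             # toy run on the sphere function
```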
{"title":"Load balanced and optimal clustering in WSNs using grey wolf optimizer","authors":"Lekhraj, Alok Kumar, Anoj Kumar","doi":"10.1007/s13198-024-02306-x","DOIUrl":"https://doi.org/10.1007/s13198-024-02306-x","url":null,"abstract":"<p>A network of wireless sensors (WSN) is an outstanding technology that can aid in the various applications. Batteries run the sensor nodes those are used in WSN. The battery is impossible to charge or repair, so the most valuable resource for wireless sensor networks is power. Over the years, several strategies have been invented and used to preserve this precious WSN resource. One of the most successful approach for this purpose has turned out to be clustering. The aim of this paper is to suggest an effective technique for choosing cluster heads in WSNs to increase the lifetime of the network. To accomplish this task, Grey Wolf Optimizer (GWO) technique has been used. The general GWO was updated in this paper to meet the particular purpose of cluster head selection in WSNs. In this article, we have considered eleven attributes in the fitness function for the proposed algorithm. The simulation is carried out under different conditions. The results obtained show that the proposed protocol is superior in terms of energy consumption and network lifetime by evaluating the proposed protocol (i.e. CH-GWO protocol) with some well-existing cluster protocols. The suggested protocol forms energy-efficient and scalable clusters.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"119 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review on the applications of Bayesian network in web service
Pub Date: 2024-05-27 | DOI: 10.1007/s13198-024-02367-y
Kouami A. Guinhouya
Web services (WS) are the preferred approach to realizing the service-oriented computing paradigm. However, they come with challenges, such as complexity and uncertainty, that hinder their practical application, and Bayesian networks (BNs) are one of the techniques used to address these challenges. The objective of this mapping study was to determine what is known about the use of Bayesian networks in web services research. To this end, we rigorously identified and selected 69 articles (out of 532 candidates) published on the subject between 2001 and 2021. We then classified and analyzed these articles by web service theme (service composition, service management, service engineering), objective (description, prediction, prescription), type of BN (basic, combined, extended), and evaluation method (proof of concept, experiment, no evaluation). In doing so, we hope to provide a clear understanding of the subject. We also identify and suggest avenues for future research. The review results can thus help researchers and practitioners interested in the application of BNs in WS research.
{"title":"A review on the applications of Bayesian network in web service","authors":"Kouami A. Guinhouya","doi":"10.1007/s13198-024-02367-y","DOIUrl":"https://doi.org/10.1007/s13198-024-02367-y","url":null,"abstract":"<p>Web services (WS) are the preferred approach in realizing the service-oriented computing paradigm. However, this comes with challenges such as complexity and uncertainty that hinder their practical application. Bayesian networks (BNs) are one of the techniques used to address these challenges. The objective of this mapping study was to determine what is known about the use of Bayesian networks in web services research. To do this, we identified and selected rigorously 69 articles (out of the 532 identified) published on the subject in 2001–2021. We then classified and analyzed these articles by <b>Web service themes</b> (Service Composition, Service Management, Service Engineering), <b>Objectives</b> (Description, Prediction, Prescription), <b>Types of BN</b> (Basic, Combined, Extended), and <b>Evaluation methods</b> (Proof of concept, Experiment, No evaluation). In doing so, we hope to provide a clear understanding of the subject. We also identify and suggest avenues for future research. Thus, the review results can help researchers and practitioners interested by the application of BNs in WS research.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"37 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An optimized dual attention-based network for brain tumor classification
Pub Date: 2024-05-26 | DOI: 10.1007/s13198-024-02300-3
Babak Masoudi
Brain tumors are one of the leading causes of death worldwide. Because different types of brain tumors are known and the choice of treatment depends directly on the tumor type, brain tumor classification is a very important, complex, and challenging problem in image processing. Today, deep learning methods are used to classify brain tumors: besides detecting and automatically classifying all types of brain tumors, these methods significantly reduce diagnosis time and increase accuracy. In this paper, a deep learning-based model is proposed to classify brain tumors into three classes: glioma, meningioma, and pituitary tumor. In the first phase, the pre-trained ResNet50 network extracts features from MRI images. In the second phase, two proposed attention mechanisms (a depthwise-separable-convolution-based channel attention mechanism and an innovative multi-head attention mechanism) extract and integrate the most effective spatial and channel features. Finally, the classification phase is performed. Evaluations on the Figshare dataset showed an accuracy of 99.32%, outperforming existing models. The proposed model can therefore classify brain tumors accurately and help neurologists and physicians make reliable diagnostic decisions.
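The sketch below shows one way such a two-phase pipeline can be wired up in Keras: frozen ResNet50 features, a squeeze-and-excitation-style channel attention block (an assumption; the paper describes a depthwise-separable-convolution-based variant), and multi-head attention over the spatial positions, ending in a three-class softmax. Layer sizes and head counts are placeholders, not the paper's tuned architecture.

```python
# A rough Keras sketch of ResNet50 feature extraction plus dual attention.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                              # phase 1: feature extraction only

x = base.output                                     # (7, 7, 2048) feature maps
se = layers.GlobalAveragePooling2D()(x)             # squeeze
se = layers.Dense(128, activation="relu")(se)       # excitation bottleneck
se = layers.Dense(2048, activation="sigmoid")(se)   # per-channel attention weights
x = x * layers.Reshape((1, 1, 2048))(se)            # re-weight the channels

seq = layers.Reshape((49, 2048))(x)                 # 7x7 positions as a sequence
seq = layers.MultiHeadAttention(num_heads=4, key_dim=64)(seq, seq)
x = layers.GlobalAveragePooling1D()(seq)
out = layers.Dense(3, activation="softmax")(x)      # glioma / meningioma / pituitary

model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```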
{"title":"An optimized dual attention-based network for brain tumor classification","authors":"Babak Masoudi","doi":"10.1007/s13198-024-02300-3","DOIUrl":"https://doi.org/10.1007/s13198-024-02300-3","url":null,"abstract":"<p>Brain tumors are one of the leading causes of death worldwide. Different types of brain tumors are known, so the choice of treatment depends directly on the type of tumor. The classification of brain tumors is very important as a complex and challenging problem in the field of image processing. Today, deep learning methods are used to classify brain tumors. In addition to being able to detect and automatically classify all types of brain tumors, these methods significantly reduce the diagnosis time and increase accuracy. In this paper, a deep learning-based model is proposed to classify brain tumors into three classes: glioma, meningioma, and pituitary tumor. In the first phase, the pre-trained network ResNet50 is used to extract features from MRI images. In the second phase, by proposing two attention mechanisms (depth-separable convolution-based channel attention mechanism and an innovative multi-head-attention mechanism), the most effective spatial and channel features are extracted and integrated. Finally, the classification phase is performed. Evaluations on the Figshare dataset showed an accuracy of 99.32%, which performs better than existing models. Therefore, the proposed model can accurately classify brain tumors and help neurologists and physicians make accurate diagnostic decisions.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"44 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Securing cloud-based medical data: an optimal dual kernal support vector approach for enhanced EHR management
Pub Date: 2024-05-25 | DOI: 10.1007/s13198-024-02356-1
M. L. Sworna Kokila, E. Fenil, N. P. Ponnuviji, G. Nirmala
Cloud computing is one of the advanced technologies for processing rapidly growing data. At the same time, the storage space needed for voluminous digital medical data has grown with the mounting number of electronic health records, which encourages the use of cloud outsourcing. Data outsourced to the cloud must be highly secured. To this end, the paper presents a DKS-CWH algorithm based on a dual kernel support vector (DKS) classifier and a crossover-based wild horse optimization (CWH) algorithm. The input grayscale images are gathered from the Medical MNIST dataset, which includes 58,954 images in six classes: chest X-ray (CXR), breast MRI, abdomen CT, chest CT, hand X-ray, and head CT. Classification and feature extraction are performed at the cloud layer using the DKS-CWH algorithm, with the hyperparameters of the DKS approach optimized by the crossover-based wild horse optimizer. The performance evaluation analyzes effectiveness on prominent metrics such as precision, accuracy, recall, and F1-score and compares the outputs with other competent methods. The results showed that the DKS-CWH model offers robust performance with 97% accuracy.
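As a minimal sketch of the dual-kernel idea, the snippet below passes a weighted sum of an RBF and a polynomial kernel to scikit-learn's SVC as a callable. The mixing weight and kernel parameters are placeholders of the kind an optimizer such as the paper's crossover-based wild horse algorithm would tune, and scikit-learn's digits dataset stands in for the Medical MNIST images.

```python
# A minimal sketch of a dual-kernel SVM: weighted sum of RBF and polynomial kernels.
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

def dual_kernel(X, Y, alpha=0.6, gamma=0.01, degree=3):
    # alpha, gamma, degree are the hyperparameters an optimizer would tune
    return alpha * rbf_kernel(X, Y, gamma=gamma) + \
           (1 - alpha) * polynomial_kernel(X, Y, degree=degree)

X, y = load_digits(return_X_y=True)   # stand-in for the Medical MNIST images
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel=dual_kernel).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```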
Applying machine learning models on blockchain platform selection
Chhaya Dubey, Dharmendra Kumar, Ashutosh Kumar Singh, Vijay Kumar Dwivedi
Pub Date: 2024-05-25 | DOI: 10.1007/s13198-024-02363-2
Blockchain technology is gaining attention all over the world because it provides a secure, decentralized framework for all types of commercial interactions. When choosing the optimal blockchain platform, one needs to consider its usefulness, adaptability, and compatibility with existing software; because novice software engineers and developers are not experts in every discipline, they should seek advice from outside experts or educate themselves. As the number of decision-makers, choices, and criteria grows, the decision-making process becomes increasingly complicated. The success of Bitcoin has spurred demand for blockchain-based solutions in domains such as health, education, and energy, and organizations, researchers, and government bodies are moving towards more secure and accountable technology to build trust and reliability. In this paper, we introduce a model for predicting suitable blockchain development platforms (Hyperledger, Ethereum, Corda, Stellar, Bitcoin, etc.). The proposed work applies several traditional machine learning classification techniques to multiple datasets based on blockchain development platforms. The results show that models such as Decision Tree and Random Forest outperformed the other traditional classification models across the datasets, reaching 100% accuracy.
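The snippet below sketches the kind of classifier comparison the abstract reports. A synthetic dataset stands in for the authors' blockchain-platform data, so the printed accuracies are illustrative only.

```python
# A brief sketch of comparing classic classifiers on a labeled dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 5 "platform" classes, 12 decision criteria as features
X, y = make_classification(n_samples=500, n_features=12, n_classes=5,
                           n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("DecisionTree", DecisionTreeClassifier(random_state=0)),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    print(name, clf.fit(Xtr, ytr).score(Xte, yte))
```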
{"title":"Applying machine learning models on blockchain platform selection","authors":"Chhaya Dubey, Dharmendra Kumar, Ashutosh Kumar Singh, Vijay Kumar Dwivedi","doi":"10.1007/s13198-024-02363-2","DOIUrl":"https://doi.org/10.1007/s13198-024-02363-2","url":null,"abstract":"<p>Recently, technology like Blockchain is gaining attention all over the world today, because it provides a secure, decentralized framework for all types of commercial interactions. When choosing the optimal blockchain platform, one needs to consider its usefulness, adaptability, and compatibility with existing software. Because novice software engineers and developers are not experts in every discipline, they should seek advice from outside experts or educate themselves. As the number of decision-makers, choices, and criteria grows, the decision-making process becomes increasingly complicated. The success of Bitcoin has spiked the demand for blockchain-based solutions in different domains in the sector such as health, education, energy, etc. Organizations, researchers, government bodies, etc. are moving towards more secure and accountable technology to build trust and reliability. In this paper, we introduce a model for the prediction of blockchain development platforms (Hyperledger, Ethereum, Corda, Stellar, Bitcoin, etc.). The proposed work utilizes multiple data sets based on blockchain development platforms and applies various traditional Machine Learning classification techniques. The obtained results show that models like Decision Tree and Random Forest have outperformed other traditional classification models concerning multiple data sets with 100% accuracy.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"9 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incorporating human dynamics into software reliability analysis: learning, fatigue, and efficiency considerations
Pub Date: 2024-05-24 | DOI: 10.1007/s13198-024-02368-x
Umashankar Samal, Ajay Kumar
In this study, we present an approach to enhance software reliability, acknowledging the evolving understanding of error dynamics within software development. While traditional models predominantly attribute errors to coding mistakes, recent insights emphasize the role of human factors such as learning processes and fatigue. Our method integrates these insights by incorporating the fatigue factor of software testers and optimizing fault removal efficiency within the debugging process. This integration leads to the formulation of more realistic software reliability growth models, characterized by S-shaped learning curves and an exponential fatigue function. We conduct a thorough analysis of the models’ quality, predictive abilities, and accuracy, evaluating them against three established fit criteria. By encompassing learning, fatigue, and fault removal efficiency within our models, we provide a comprehensive framework for understanding the dynamics of software reliability.
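As a purely hypothetical illustration of such a model (not the authors' formulation), the sketch below fits a mean value function whose fault-detection rate combines delayed S-shaped learning with an exponential fatigue decay, using synthetic failure-count data.

```python
# A hypothetical SRGM sketch: S-shaped learning growth damped by tester fatigue.
import numpy as np
from scipy.optimize import curve_fit

def mvf(t, a, b, d):
    """m(t): expected cumulative faults; a = total faults, b = detection rate,
    d = fatigue decay applied to the rate over time (all assumed forms)."""
    bt = b * np.exp(-d * t)                    # fatigue erodes the detection rate
    return a * (1 - (1 + bt * t) * np.exp(-bt * t))

weeks = np.arange(1.0, 21.0)
rng = np.random.default_rng(0)
faults = mvf(weeks, 100, 0.35, 0.02) + rng.normal(0, 2, weeks.size)  # synthetic data

params, _ = curve_fit(mvf, weeks, faults, p0=[80, 0.2, 0.01], maxfev=10000)
print("fitted (a, b, d):", params)
```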