A review of failure rate studies in power distribution networks
Pub Date: 2024-06-25 | DOI: 10.1007/s13198-024-02400-0
Mohammad Taghitahooneh, Aidin Shaghaghi, Reza Dashti, Abolfazl Ahmadi
This article reviews the research carried out on failure rates in electricity distribution systems and introduces a comprehensive framework for managing them. The framework shows that failure-rate studies in power distribution systems fall into three distinct groups: modifying asset management activities to reduce the failure rate, evaluating and controlling threats and risks, and taking emergency measures after a failure. The article catalogues the studies conducted on the failure rate of electricity distribution systems and organizes them within this comprehensive conceptual framework. The relation of each category to the failure rate is explained, and by tracing how the literature has developed, research gaps and a roadmap for future studies on failure rates in electricity distribution systems are identified.
{"title":"A review of failure rate studies in power distribution networks","authors":"Mohammad Taghitahooneh, Aidin Shaghaghi, Reza Dashti, Abolfazl Ahmadi","doi":"10.1007/s13198-024-02400-0","DOIUrl":"https://doi.org/10.1007/s13198-024-02400-0","url":null,"abstract":"<p>This article examines the research carried out regarding the failure rate in electricity distribution systems. It introduces a comprehensive framework for managing failure rates in power distribution systems. This framework highlights that studies on failure rates in power distribution systems can be categorized into three distinct groups: modifying asset management activities in order to reduce failure rate, evaluate and control threats and risks, emergency measures after failure. In this article, all the studies conducted on the failure rate of electricity distribution systems are listed and presented, and categorized in the form of a comprehensive and conceptual framework. The relation of each category with the failure rate is explained and by studying the process of studies, the research gaps and the roadmap of future studies in the field of failure rate in electricity distribution systems are determined.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"4 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing software release decisions: a TFN-based uncertainty modeling approach
Pub Date: 2024-06-23 | DOI: 10.1007/s13198-024-02394-9
Shivani Kushwaha, Ajay Kumar
In our contemporary world, where technology is omnipresent and essential to daily life, the reliability of software systems is indispensable. Consequently, efforts to optimize software release time and decision-making processes have become imperative. Software reliability growth models (SRGMs) have emerged as valuable tools in gauging software reliability, with researchers studying various factors such as change point and testing effort. However, uncertainties persist throughout testing processes, which are inherently influenced by human factors. Fuzzy set theory has emerged as a valuable tool in addressing the inherent uncertainties and complexities associated with software systems. Its ability to model imprecise, uncertain, and vague information makes it particularly well-suited for capturing the nuances of software reliability. In this research, we propose a novel approach that amalgamates change point detection, logistic testing effort function modeling, and triangular fuzzy numbers (TFNs) to tackle uncertainty and vagueness in software reliability modeling. Additionally, we explore release time optimization considering TFNs, aiming to enhance decision-making in software development and release planning.
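The abstract does not give the authors' formulation, but the triangular fuzzy numbers it relies on are simple to state. The minimal Python sketch below (the class, parameter values, and α-cut interface are illustrative assumptions, not the paper's code) shows how a fuzzy SRGM parameter can be represented and manipulated:

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u): membership rises linearly
    from l to the peak m, then falls linearly from m to u."""
    l: float
    m: float
    u: float

    def alpha_cut(self, alpha: float) -> tuple[float, float]:
        """Interval of values with membership >= alpha (0 < alpha <= 1)."""
        return (self.l + alpha * (self.m - self.l),
                self.u - alpha * (self.u - self.m))

    def __add__(self, other: "TFN") -> "TFN":
        # Standard interval-style TFN addition.
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k: float) -> "TFN":
        # Scaling by a negative constant flips the bounds.
        return TFN(k * self.l, k * self.m, k * self.u) if k >= 0 \
            else TFN(k * self.u, k * self.m, k * self.l)

# e.g. a hypothetical fuzzy fault-detection rate b ~ (0.08, 0.10, 0.13) per day
b = TFN(0.08, 0.10, 0.13)
print(b.alpha_cut(0.5))   # -> (0.09, 0.115)
```

Propagating such α-cut intervals through an SRGM mean value function is what turns a crisp reliability estimate into a fuzzy one.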
{"title":"Optimizing software release decisions: a TFN-based uncertainty modeling approach","authors":"Shivani Kushwaha, Ajay Kumar","doi":"10.1007/s13198-024-02394-9","DOIUrl":"https://doi.org/10.1007/s13198-024-02394-9","url":null,"abstract":"<p>In our contemporary world, where technology is omnipresent and essential to daily life, the reliability of software systems is indispensable. Consequently, efforts to optimize software release time and decision-making processes have become imperative. Software reliability growth models (SRGMs) have emerged as valuable tools in gauging software reliability, with researchers studying various factors such as change point and testing effort. However, uncertainties persist throughout testing processes, which are inherently influenced by human factors. Fuzzy set theory has emerged as a valuable tool in addressing the inherent uncertainties and complexities associated with software systems. Its ability to model imprecise, uncertain, and vague information makes it particularly well-suited for capturing the nuances of software reliability. In this research, we propose a novel approach that amalgamates change point detection, logistic testing effort function modeling, and triangular fuzzy numbers (TFNs) to tackle uncertainty and vagueness in software reliability modeling. Additionally, we explore release time optimization considering TFNs, aiming to enhance decision-making in software development and release planning.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"32 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of shovel fleet utilization in Sarcheshmeh Copper Mine using a smart monitoring platform
Pub Date: 2024-06-20 | DOI: 10.1007/s13198-024-02396-7
Mohammad Rezaei Dashtaki, Ali Jandaghi Jafari, Behzad Ghodrati, Seyed Hadi Hoseinie
Utilization of the shovel fleet, a capital-intensive and operationally important asset in open-pit mines, is a key indicator for mine production analysis. This paper investigates shovel utilization in surface mining using a novel smart platform integrated with the shovel operating joystick. The platform uses a unique algorithm to distinguish operational from non-operational time by comparing real-time data against the average loading cycle time; these data are then used to calculate overall uptime and identify downtime periods. A field study was carried out on six electric cable shovels (P&H 2100 and TZ WK-12 models) at the Sarcheshmeh Copper Mine. The analysis revealed that the average utilization of the whole fleet is 33%, ranging from 16% to 48%, which is dramatically lower than the mine's expectations. Statistical analysis showed that utilization exceeds 75%, a moderately acceptable level, in only 10–13% of the operating time. Finally, based on the field study and the developed smart platform, it is concluded that improving dispatching-system accuracy, revising grade-blending strategies, increasing processing-plant flexibility, and better operator training could enhance shovel fleet utilization and whole-mine productivity.
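The classification rule the abstract describes, comparing observed intervals against the average loading cycle time, can be sketched as follows; the tolerance threshold and the numbers are assumptions for illustration, not the platform's actual algorithm:

```python
def classify_intervals(interval_s, avg_cycle_s, tolerance=2.0):
    """Label each joystick-derived interval operational if it is within
    `tolerance` times the average loading cycle time, else downtime.
    Returns (utilization_fraction, labels)."""
    labels = ["operational" if t <= tolerance * avg_cycle_s else "downtime"
              for t in interval_s]
    op_time = sum(t for t, lab in zip(interval_s, labels) if lab == "operational")
    return op_time / sum(interval_s), labels

# e.g. gaps between joystick events (seconds) against a 35 s average cycle
util, labels = classify_intervals([32, 38, 400, 30, 36, 1200], avg_cycle_s=35)
print(f"utilization = {util:.0%}")   # long gaps are counted as downtime
```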
{"title":"Analysis of shovel fleet utilization in Sarcheshmeh Copper Mine using a smart monitoring platform","authors":"Mohammad Rezaei Dashtaki, Ali Jandaghi Jafari, Behzad Ghodrati, Seyed Hadi Hoseinie","doi":"10.1007/s13198-024-02396-7","DOIUrl":"https://doi.org/10.1007/s13198-024-02396-7","url":null,"abstract":"<p>Utilization of the shovel fleet as a capital-intensive and operationally important asset in open-pit mines is a key indicator for mine production analysis. This paper investigates shovel utilization in surface mining using a novel smart platform integrated with the shovel operating joystick. It utilizes a unique algorithm to identify and differentiate operational and non-operational time based on comparing real-time data and average loading cycle time. This data is then employed to calculate overall uptime and identify downtime periods. A field study was carried out on six electric cable shovels consisting of P&H 2100 and TZ WK-12, at Sarcheshmeh Copper Mine. The analysis revealed that the average utilization of the whole fleet is equal to 33%, ranging from 16 to 48%, which is dramatically lower than the mine expectations. The statistical analysis showed that in 10–13% of the operating time, the utilization is higher than 75%, which is a moderately acceptable level. Finally, according to the outcomes of the field study and the developed smart platform, it could be concluded that improvements in dispatching system accuracy, revising the grade blending strategies, increasing processing plant flexibility and improved operator training could enhance shovel fleet utilization and whole mine productivity.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"11 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Availability and cost analysis of a multistage, multi-evaporator type compressor
Pub Date: 2024-06-20 | DOI: 10.1007/s13198-024-02384-x
Surbhi Gupta, H. D. Arora, Anjali Naithani
Refrigeration is a critical component of thermal environment engineering. Refrigeration refers to the process of removing heat from a substance under precise conditions, including lowering and maintaining a body's temperature below the ambient temperature. In this paper, we examine the availability and cost function of a refrigeration plant. The system has three modes: normal, degraded, and failed, and is divided into four sections: A (compressor), B (condenser), C (two standby expansion valves), and D (three evaporators in series). A standby expansion valve is installed to improve the plant's performance. The supplementary variable technique is used to obtain the state probabilities, and the inversion process is used to obtain expressions for the operational availability and profit functions. The mean time to failure (MTTF) is also estimated. A numerical example with graphical presentation illustrates the practical advantages of the model.
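The paper's supplementary variable model cannot be reconstructed from the abstract alone, but a minimal Markov sketch of a normal/degraded (running on the standby valve)/failed system illustrates the kind of availability computation involved; the states and all rates below are hypothetical, not the paper's:

```python
import numpy as np

# States: 0 = normal, 1 = degraded (standby expansion valve in use), 2 = failed.
lam1, lam2, mu = 0.02, 0.05, 0.50   # assumed failure/repair rates per hour

# Generator matrix: each row sums to zero; repair returns the system to normal.
Q = np.array([[-lam1,  lam1,   0.0],
              [  0.0, -lam2,  lam2],
              [   mu,   0.0,   -mu]])

# Steady-state probabilities solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]        # normal + degraded states count as "up"
print(f"steady-state availability ~ {availability:.4f}")   # ~0.9722 here
```

The supplementary variable technique generalizes exactly this calculation to non-exponential repair by tracking elapsed repair time as an extra state variable.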
{"title":"Availability and cost analysis of a multistage, multi-evaporator type compressor","authors":"Surbhi Gupta, H. D. Arora, Anjali Naithani","doi":"10.1007/s13198-024-02384-x","DOIUrl":"https://doi.org/10.1007/s13198-024-02384-x","url":null,"abstract":"<p>Refrigeration is a critical component of thermal environment engineering. The process of removing heat from a substance under precise conditions is referred to as refrigeration. It also includes the process of lowering and maintaining a body's temperature below the ambient temperature. In this paper, we examine the availability and cost function of the system of the Refrigeration plant. This system has three modes: normal, degraded, and failed. The system is divided into four sections: A (Compressor), B (Condenser), C (two standby expansion valves), and D. (three evaporators in series). A standby expansion valve is installed to improve the performance of the refrigeration plant. The supplementary variable technique is used to obtain state probabilities and the inversion process is used to obtain the expression of operational availability and profit functions. The MTTF (mean time to failure) is also estimated. A numerical example is presented with a graphical presentation to illustrate the practical advantages of the model.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"26 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Bayesian estimation of stress–strength reliability in multicomponent system for two-parameter gamma distribution
Pub Date: 2024-06-19 | DOI: 10.1007/s13198-024-02379-8
V. K. Rathaur, N. Chandra, Parmeet Kumar Vinit
This paper deals with multicomponent stress–strength system reliability (MSR) and its maximum likelihood (ML) and Bayesian estimation. We assume that \(X_1, X_2, \dots, X_k\) are the random strengths of the \(k\) components of a system and \(Y\) is the common random stress applied to them; the strengths and the stress independently follow gamma distributions with parameters \((\alpha_1, \lambda_1)\) and \((\alpha_2, \lambda_2)\), respectively. A system that works only if \(s\) \((1 \le s \le k)\) or more of the strengths exceed the common load/stress is called an s-out-of-k: G system. Maximum likelihood and asymptotic interval estimators of MSR are obtained. Bayes estimates are computed under symmetric and asymmetric loss functions, assuming informative and non-informative priors. The ML and Bayes estimators are numerically evaluated and compared, in terms of mean square errors and absolute biases, through a simulation study employing the Metropolis–Hastings algorithm.
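As a numerical cross-check on the quantity being estimated, the reliability \(R_{s,k} = P(\text{at least } s \text{ of the } k \text{ strengths exceed } Y)\) can be approximated by straightforward simulation; the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def msr_monte_carlo(s, k, a1, l1, a2, l2, n=200_000):
    """Estimate R_{s,k} = P(at least s of k gamma(a1, l1) strengths
    exceed a common gamma(a2, l2) stress). l1, l2 are rate parameters,
    so numpy's scale argument is their reciprocal."""
    strengths = rng.gamma(shape=a1, scale=1.0 / l1, size=(n, k))
    stress = rng.gamma(shape=a2, scale=1.0 / l2, size=(n, 1))  # common per trial
    return np.mean((strengths > stress).sum(axis=1) >= s)

# e.g. a 2-out-of-4: G system with arbitrary gamma parameters
print(msr_monte_carlo(s=2, k=4, a1=3.0, l1=1.0, a2=2.0, l2=1.0))
```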
{"title":"On Bayesian estimation of stress–strength reliability in multicomponent system for two-parameter gamma distribution","authors":"V. K. Rathaur, N. Chandra, Parmeet Kumar Vinit","doi":"10.1007/s13198-024-02379-8","DOIUrl":"https://doi.org/10.1007/s13198-024-02379-8","url":null,"abstract":"<p>This paper deals with multicomponent stress–strength system reliability (MSR) and its maximum likelihood (ML) as well as Bayesian estimation. We assume that <span>({X}_{1},{X}_{2},dots ,{X}_{k})</span> being the random strengths of k- components of a system and <i>Y</i> is the applied common random stress on them, which independently follows gamma distribution with parameters <span>(left({alpha }_{1},{lambda }_{1}right))</span> and <span>(left({alpha }_{2},{lambda }_{2}right))</span> respectively. The system works only if <span>(sleft(1le sle kright))</span> or more of the strengths exceed the common load/stress is called s-out-of-k: G system. Maximum likelihood and asymptotic interval estimators of MSR are obtained. Bayes estimates are computed under symmetric and asymmetric loss functions assuming informative and non-informative priors. ML and Bayes estimators are numerically evaluated and compared based on mean square errors and absolute biases through simulation study employing the Metropolis–Hastings algorithm.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"83 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical inference of the exponentiated exponential distribution based on progressive type-II censoring with optimal scheme
Pub Date: 2024-06-17 | DOI: 10.1007/s13198-024-02381-0
Naresh Chandra Kabdwal, Qazi J. Azhad, Rashi Hora
This article is concerned with the estimation of the parameters, reliability, and hazard rate functions of the exponentiated exponential distribution under progressive type-II censored data. Maximum likelihood estimation and the maximum product of spacings method are presented for estimating the unknown model parameters in the classical framework. In the Bayesian paradigm, both the likelihood and the product of spacings functions are used to estimate the model parameters, reliability, and hazard rate functions. Bayes estimates are obtained under the squared error loss function (SELF), using a gamma prior for the shape parameter and a discrete prior for the scale parameter. Asymptotic confidence intervals and highest posterior density credible intervals are also obtained for the model parameters and reliability characteristics. An optimality criterion is employed to find the best censoring scheme among those considered. A Monte Carlo simulation study compares the performance of the derived estimators under different progressive type-II censoring schemes. Finally, to illustrate the practical application of the proposed methodology, two real data analyses are conducted.
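The maximum product of spacings method mentioned above maximizes the product of consecutive CDF spacings. A bare-bones complete-sample version for the exponentiated exponential CDF \(F(x) = (1 - e^{-\lambda x})^{\alpha}\) illustrates the objective; it omits the paper's progressive censoring, and the stand-in data are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

def ee_cdf(x, alpha, lam):
    """Exponentiated exponential CDF F(x) = (1 - exp(-lam * x))**alpha."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def neg_log_spacings(params, x):
    """Negative log product of spacings D_i = F(x_(i)) - F(x_(i-1)),
    with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    alpha, lam = params
    if alpha <= 0 or lam <= 0:
        return np.inf
    F = np.concatenate(([0.0], ee_cdf(np.sort(x), alpha, lam), [1.0]))
    spacings = np.clip(np.diff(F), 1e-12, None)   # guard against zero spacings
    return -np.sum(np.log(spacings))

x = np.random.default_rng(1).gamma(2.0, 1.0, size=50)   # stand-in sample
res = minimize(neg_log_spacings, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(res.x)   # MPS estimates of (alpha, lambda)
```

Under progressive type-II censoring the spacings are reweighted by the censoring scheme, but the optimization has the same shape.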
An empirical study on the factors causing stress among IT professionals in the urban city of Chennai
Pub Date: 2024-06-01 | DOI: 10.1007/s13198-024-02366-z
V. S. Iswarya, M. Babima, Muhila M. Gnana, R. Dhaneesh
There is no such thing as stress-free work in today's environment. Companies give their staff challenging assignments to complete within fixed timeframes, and employees experience workplace stress as a result. Professionals in the Information Technology (IT) industry are frequently stressed at work and are at risk of developing health problems because of their jobs: the sector carries severe workloads and contends with issues such as role ambiguity, gender inequality, and long working hours. The current research examines the elements that lead to work-related stress, as well as the influence of demographic factors on stress among IT professionals. A sample of 240 responses was collected from the northern, central, and southern regions of Tamil Nadu using a convenience sampling technique. The results reveal the impact of stress factors on IT professionals in their work environment and show the significant effect of demographic factors such as age, gender, marital status, and education on the stress employees experience at work.
Text mining based an automatic model for software vulnerability severity prediction
Pub Date: 2024-05-31 | DOI: 10.1007/s13198-024-02371-2
Ruchika Malhotra, Vidushi
The number of software vulnerabilities reported each year grows exponentially, leading to the exploitation of software systems. Hence, when a vulnerability is reported, it must be patched as early as possible, a process that requires time and effort. To channel that effort properly, the severity of each vulnerability should be predicted so that the more critical ones can be given higher priority. This creates the need for a model that can analyze the available vulnerability data and predict severity. The experiments in this study are conducted on vulnerability reports for five Mozilla software products. As the data is textual, text mining techniques are applied to preprocess it and form feature vectors. Text input creates very high-dimensional feature vectors, so dimensionality reduction is required; feature selection is done using chi-square and information gain. Seven machine learning algorithms are chosen to develop the classifiers, yielding fourteen software vulnerability severity prediction models (SVSPMs). The result analysis identifies the best-performing SVSPM. It is concluded that the models perform better for the medium and critical severity levels, and that of the two feature selection techniques, information gain gives better results. An optimum number of features at which the SVSPMs give good results is also determined, as is the best SVSPM (by machine learning algorithm) for each dataset. Finally, the Friedman and Wilcoxon signed-rank tests are used to identify significant differences among the developed SVSPMs.
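The pipeline the abstract describes, textual preprocessing into feature vectors, chi-square feature selection for dimensionality reduction, then a classifier, maps onto standard tooling. This schematic scikit-learn sketch uses placeholder data and an arbitrary classifier choice, not the paper's exact setup:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier

# Placeholder corpus: vulnerability report texts with severity labels.
reports = ["buffer overflow allows remote code execution",
           "minor UI glitch in settings dialog",
           "heap corruption leads to arbitrary write",
           "tooltip text truncated on small screens"]
severity = ["critical", "low", "critical", "low"]

svsp_model = Pipeline([
    ("tfidf", TfidfVectorizer()),                  # text -> feature vectors
    ("chi2", SelectKBest(chi2, k=5)),              # dimensionality reduction
    ("clf", RandomForestClassifier(random_state=0)),
])
svsp_model.fit(reports, severity)
print(svsp_model.predict(["stack overflow enables remote exploit"]))
```

Swapping `chi2` for a mutual-information scorer and iterating over seven classifiers would reproduce the study's fourteen-model grid in miniature.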
Exact reliability formula for precision agriculture through copula repair approach
Pub Date: 2024-05-31 | DOI: 10.1007/s13198-024-02372-1
Praveen Kumar Poonia
The Gumbel–Hougaard family of copula distributions paved the way for new research and has been widely applied in recent years to a range of series–parallel multi-state complex engineering systems, but not to agricultural applications. Recent studies undertaken by a variety of organizations reveal that food grain production is not keeping up with population growth. Many technocrats use wireless sensor networks to collect and analyze data to increase production; nevertheless, by focusing on general repair alone, they fall short of their goal. To avoid this problem and restore a broken system as quickly as achievable, this paper develops a reliability formula for precision agriculture, based on the copula distribution, whose numerical solutions can be obtained systematically in reasonable computational time. The paper analyzes reliability measures such as availability, reliability, mean time to failure, and cost for a wireless computer network for precision agriculture made up of three subsystems in series. Hazard rates of all units are assumed constant and exponentially distributed, while repair follows a general distribution and a copula distribution. The system is analyzed by the supplementary variable technique, Laplace transformation, and the Gumbel–Hougaard copula distribution. A significant feature of the copula distribution under catastrophic failure is exploited by assuming two different forms of failure between neighboring transitions, from which the behavioral analysis of the designed system can be checked. This research may benefit precision agriculture wherever a k-out-of-n-type configuration exists.
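The Gumbel–Hougaard copula at the heart of this approach is \(C_\theta(u_1, u_2) = \exp\{-[(-\ln u_1)^\theta + (-\ln u_2)^\theta]^{1/\theta}\}\). A minimal evaluation coupling an exponential repair marginal with a general (here, Weibull) one, under assumed parameters, might look like this:

```python
import numpy as np

def gumbel_hougaard(u1, u2, theta=2.0):
    """Gumbel-Hougaard copula C(u1, u2); theta >= 1, theta = 1 -> independence."""
    return np.exp(-(((-np.log(u1)) ** theta
                     + (-np.log(u2)) ** theta) ** (1.0 / theta)))

def joint_repair_cdf(t, mu=0.8, k=1.5, lam=1.0, theta=2.0):
    """Couple an exponential repair CDF with a Weibull ('general') repair CDF.
    All parameter values are assumptions for illustration."""
    u1 = 1.0 - np.exp(-mu * t)              # exponential repair marginal
    u2 = 1.0 - np.exp(-(t / lam) ** k)      # Weibull repair marginal
    return gumbel_hougaard(u1, u2, theta)

for t in (0.5, 1.0, 2.0):
    print(t, round(float(joint_repair_cdf(t)), 4))
```

The parameter \(\theta\) controls the dependence between the two repair mechanisms, which is what lets the model treat the two forms of failure between neighboring transitions jointly rather than independently.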
{"title":"Exact reliability formula for precision agriculture through copula repair approach","authors":"Praveen Kumar Poonia","doi":"10.1007/s13198-024-02372-1","DOIUrl":"https://doi.org/10.1007/s13198-024-02372-1","url":null,"abstract":"<p>The Gumbel-Hougaard family’s invention of copula distribution paved the way for new research, and it has been widely applied in recent years to a range of series–parallel multi-state complicated engineering systems, but not to agricultural applications. Recent study undertaken by a variety of organizations reveals that food grain production is not keeping up with population growth. Many technocrats use wireless sensing networks to collect and analyze data to increase production; nevertheless, by focusing on general repair, they fall short of their goal. To avoid this problem and restore the broken system as soon as achievable, in this paper we have developed a reliability formula in a way that numerical solutions can be obtained systematically in a reasonable computational time for precision agriculture that makes use of the copula distribution. This paper aims to analyze the various reliability measures such as availability, reliability, mean time to failure, and cost analysis of a wireless computer network for precision agriculture made up of three subsystems in series configuration. Hazard rates of all the units are assumed to be constant and follow exponential distribution, while repair supports general distribution and copula distribution. The system is analyzed by supplementary variable technique, Laplace transformation and Gumbel-Hougaard copula distribution. This paper we have used a significant feature of copula distribution under catastrophic failure by assuming two different forms of failure between neighboring transitions from which one can check the behavioral analysis of the designed system. This research may be beneficial for precision agriculture whereas a k-out-of-n-type configuration exists.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"33 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpretive structural modeling of lean six sigma critical success factors in perspective of industry 4.0 for Indian manufacturing industries
Pub Date: 2024-05-30 | DOI: 10.1007/s13198-024-02375-y
Pramod Kumar, Jaiprakash Bhamu, Sunkulp Goel, Dharmendra Singh
This paper aims to identify and analyze critical success factors (CSFs) of Lean Six Sigma (LSS) implementation in the context of Industry 4.0 (I4.0) in Indian manufacturing industries. Twenty CSFs were identified from the literature and expert opinion. A survey was conducted through a designed questionnaire administered in Indian manufacturing industries, and the reliability of the factors was tested by calculating Cronbach's alpha (α) for all responses; sixteen of the twenty CSFs were found reliable. These sixteen factors were then analyzed using the Interpretive Structural Modeling (ISM) technique and levelled according to the developed model. MICMAC analysis was employed to determine the driving and dependence power of the CSFs. The developed model provides a platform for practitioners and researchers to design a framework for successful implementation of LSS within the current I4.0 manufacturing paradigm. The ISM analysis shows that 'Organizational culture and belief', 'Effective top management commitment and attitude', and 'Motivated and skilled manpower' are the most significant CSFs driving proper implementation of LSS in Indian manufacturing industries. The developed model will enable practitioners to draw up an effective strategy for implementing LSS in view of Industry 4.0, and the results give management an edge in thinking strategically about improvements in this competitive environment.
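The MICMAC driving and dependence powers used here are simply the row and column sums of the final (transitive) reachability matrix. The toy computation below uses a hypothetical five-factor matrix, not the paper's sixteen CSFs:

```python
import numpy as np

# Hypothetical final reachability matrix for 5 CSFs: R[i, j] = 1 means
# factor i influences factor j (transitive links and diagonal included).
R = np.array([[1, 1, 1, 1, 1],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [1, 1, 1, 1, 1]])

driving = R.sum(axis=1)      # row sums: how many factors each one drives
dependence = R.sum(axis=0)   # column sums: how many factors drive it

n = R.shape[0]
for i, (dr, dp) in enumerate(zip(driving, dependence), start=1):
    # Standard MICMAC quadrants, split at the midpoint of the factor count.
    if dr > n / 2 and dp > n / 2:
        cluster = "linkage"
    elif dr > n / 2:
        cluster = "driver (independent)"
    elif dp > n / 2:
        cluster = "dependent"
    else:
        cluster = "autonomous"
    print(f"CSF{i}: driving={dr}, dependence={dp} -> {cluster}")
```

Factors landing in the driver quadrant, like the culture and top-management CSFs the paper highlights, sit at the base of the ISM hierarchy and push the rest.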
{"title":"Interpretive structural modeling of lean six sigma critical success factors in perspective of industry 4.0 for Indian manufacturing industries","authors":"Pramod Kumar, Jaiprakash Bhamu, Sunkulp Goel, Dharmendra Singh","doi":"10.1007/s13198-024-02375-y","DOIUrl":"https://doi.org/10.1007/s13198-024-02375-y","url":null,"abstract":"<p>This paper aims to identify and analyze critical success factors (CSFs) of Lean Six Sigma (LSS) implementation in context to Industry 4.0 (I4.0) in Indian manufacturing industries. Twenty CSFs are identified from literature and expert’s opinion. A survey was conducted through administration of designed questionnaire in Indian manufacturing industries and reliability of the factors was tested calculating Cronbach’s alfa (α) value for all responses. Thereafter, out of twenty CSFs, sixteen were found reliable. Further, these sixteen factors were analyzed employing Interpretive Structural Modeling (ISM) technique and leveled as per developed model. The MICMAC analysis is employed for determining driving and dependence power of CSFs. The developed model provides a platform for the practitioners/researchers to design a framework for successful implementation of LSS in view of current manufacturing paradigm of I4.0. On analyzing the data using ISM technique, the ‘<i>Organizational culture and belief</i>’, ‘<i>Effective top management commitment and attitude</i>’ and ‘<i>Motivated and skilled manpower</i>’ are observed to be the most significant CSFs which drive the path for proper implementation of LSS in Indian manufacturing industries. The developed model will enable the practitioners to draw the effective strategy for proper implementation of LSS in view of Industry 4.0. The results will give an edge to the management to think strategically for improvements in this competitive environment.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"101-102 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141190401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}