Risk management and its relationship with innovative construction technologies with a focus on building safety
Pub Date : 2024-07-13 | DOI: 10.1007/s13198-024-02410-y
Jun Zhao, Xigang Du, Huijuan Guo, Lingzhi Li
Building safety has become a serious and important topic for the development of the construction industry, as well as for protecting the lives and property of contractors and workers. With the development of increasingly sensitive and complex systems for monitoring building safety, simply allowing accidents to happen is no longer acceptable. Risk management therefore identifies potential hazards before any operations take place, and the safety system operates on a planned, organized, and systematic "pre-incident" basis, following an analysis-and-control approach. Failure to apply risk management methods, combined with the rapid pace of the construction industry, can reduce the safety of residents and introduce unpredictable risks. Because risk management is still rarely used for project control, contractors face numerous problems after construction. A lack of resources and facilities aggravates this, but emerging building technologies, which are gradually being identified, can resolve most of the industry's safety issues. Utilizing innovative building technologies therefore not only improves quality and speed and reduces cost in construction, but also contributes significantly to industrialization and to reducing the risks posed by deteriorated structures. In this study, the effects of innovative technologies on building safety are examined, and the relationship between risk management and innovative technologies is investigated using a questionnaire. The impacts of all risk management and safety aspects are examined, and the analysis clarifies the direct and significant connection of risk management and safety with modern technologies, identifying corrective measures for improving building safety performance through the use of innovative building technologies.
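The questionnaire analysis described above amounts to testing for a significant association between respondents' risk-management scores and their technology-enabled safety scores. A minimal sketch of such a test (the Likert-scale data and the choice of a rank correlation are assumptions; the paper's actual dataset and statistic are not given in the abstract):

```python
# Hedged sketch: rank correlation between two questionnaire constructs.
# The scores below are hypothetical; the paper's data are not published
# in the abstract.
from scipy.stats import spearmanr

# Mean Likert scores (1-5) per respondent for two constructs.
risk_management = [4.2, 3.8, 4.5, 2.9, 3.5, 4.0, 4.4, 3.1]
tech_enabled_safety = [4.0, 3.6, 4.7, 3.0, 3.3, 4.1, 4.5, 2.8]

rho, p_value = spearmanr(risk_management, tech_enabled_safety)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A small p-value with positive rho would support the claimed direct,
# significant relationship between the two constructs.
```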
{"title":"Risk management and its relationship with innovative construction technologies with a focus on building safety","authors":"Jun Zhao, Xigang Du, Huijuan Guo, Lingzhi Li","doi":"10.1007/s13198-024-02410-y","DOIUrl":"https://doi.org/10.1007/s13198-024-02410-y","url":null,"abstract":"<p>Building safety has become a serious and important topic for the development of the construction industry, as well as for the preservation of contractors' and workers' lives and property. With the development and expansion of a sensitive and complex monitoring system for the safety of buildings, allowing accidents to occur is no longer acceptable. Therefore, risk management identifies potential hazards before any operations take place, and the safety system operates based on a planned, organized, and systematic process known as \"pre-incident.\" This plan is based on the analysis-control method. Failure to utilize risk management methods and the acceleration of the construction industry can lead to a decrease in the safety of residents and introduce unpredictable risks. While nowadays risk management is less utilized for project control, contractors face numerous problems after construction. Lack of resources and facilities in this regard can be problematic, but emerging building technologies, which are slowly being identified, can solve and separate most of the industry's safety issues. Therefore, utilizing innovative building technologies not only enhances quality, speed, and cost reduction in construction but also contributes significantly to industrialization and the reduction of risks resulting from deteriorated structures towards building safety. In this study, the extraordinary effects of innovative technologies on building safety have been examined, and the relationship between risk management and innovative technologies has been investigated using a questionnaire. The impacts of all risk management and safety aspects are examined in this research, which ultimately resulted in clarifying the direct and meaningful connection between risk management and safety with modern technologies and determining the necessary corrective measures to improve building safety performance through the use of innovative building technologies.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141609426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RCM based optimization of maintenance strategies for marine diesel engine using genetic algorithms
Pub Date : 2024-07-09 | DOI: 10.1007/s13198-024-02374-z
Ankush Tripathi, M. Hari Prasad
In the modern world, the availability of machinery is of utmost importance to any industry, and it is the right maintenance at the right time that keeps this machinery available for its job. The primary goal of maintenance is to avoid or mitigate the consequences of equipment failure. Various maintenance schemes are available, such as breakdown maintenance, preventive maintenance, and condition-based maintenance. Among these, Reliability Centred Maintenance (RCM) is the most recent, and its application enhances productivity and availability. RCM ensures better system uptime along with an understanding of the risk involved. RCM has been used in various industries; however, it remains little explored and utilized in marine operations. Hence, in the present study, the maintenance schemes of a marine diesel engine are optimized using RCM. Failure Modes and Effects Analysis and Fault Tree Analysis (FTA) are among the basic steps of RCM. Owing to the scarcity of reliability data, particularly in the marine environment, some component data had to be estimated from operating experience. Because FTA takes a binary-state perspective, assuming the system is either functioning or failed, components whose performance varies and degrades over time cannot be modeled using FTA. Hence, in this paper, the reliability of performance-degraded components is modeled with Markov models, and the required data are evaluated from condition monitoring techniques. After the availability of the marine diesel engine is obtained, critical components are identified from their importance ranking, and their maintenance schedules are optimized using a genetic algorithm approach. The results are compared and a new maintenance scheme is proposed.
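A degraded component of the kind described above is commonly modeled as a three-state continuous-time Markov chain (healthy, degraded, failed). A minimal sketch of the steady-state availability computation (the transition rates and state space are illustrative assumptions; the paper's actual models are not given in the abstract):

```python
# Hedged sketch: steady-state availability of a three-state Markov model
# (0 = healthy, 1 = degraded, 2 = failed). Rates are hypothetical
# per-hour values, not taken from the paper.
import numpy as np

lam_hd = 1e-3   # healthy -> degraded degradation rate
lam_df = 5e-3   # degraded -> failed failure rate
mu_r   = 5e-2   # failed -> healthy repair rate

# Generator matrix Q: each row sums to zero.
Q = np.array([
    [-lam_hd,  lam_hd,  0.0   ],
    [ 0.0,    -lam_df,  lam_df],
    [ mu_r,    0.0,    -mu_r  ],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = pi[0] + pi[1]  # healthy and degraded both count as "up"
print(f"steady-state availability = {availability:.4f}")
```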
{"title":"RCM based optimization of maintenance strategies for marine diesel engine using genetic algorithms","authors":"Ankush Tripathi, M. Hari Prasad","doi":"10.1007/s13198-024-02374-z","DOIUrl":"https://doi.org/10.1007/s13198-024-02374-z","url":null,"abstract":"<p>In the modern world the availability of the machinery for any industry is of utmost importance. It is the right maintenance at right time which keeps these machineries available for their jobs. The primary goal of maintenance is to avoid or mitigate consequences of failure of equipment. There are various types of maintenance schemes available such as breakdown maintenance, preventive maintenance, condition based maintenance etc. Out of all these schemes Reliability Centred Maintenance (RCM) is most recent one and the application of which will enhance the productivity and availability. RCM ensures better system uptime along with understanding of risk involved. RCM has been used in various industries, however, it is very less explored and utilized in marine operations.Hence in the present study maintenance schemes of a marine diesel engine has been considered for optimization using RCM.Failure Modes and Effects Analysis and Fault Tree Analysis (FTA)are some of the basic steps involved in RCM. Due to the scarcity of reliability data particularly in the marine environment some of the components data had to be estimated based on the operating experience. As FTA is based on binary state perspective, assuming the system exist in either functioning or failed state, some of the components (whose performance varies with time and degrades) cannot be modeled using FTA. Hence, in this paper reliability modeling of performance degraded components is dealt with Markov models and the required data is evaluated from condition monitoring techniques. After obtaining the availability of the marine diesel engine, based on the importance ranking, critical components have been obtained for optimizing the maintenance schedules. In this paper genetic algorithm approach has been used for optimization. The results obtained have been compared and new maintenance scheme has been proposed.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141566873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sustainable signals: a heterogeneous graph neural framework for fake news detection
Pub Date : 2024-07-05 | DOI: 10.1007/s13198-024-02415-7
Adil Mudasir Malla, Asif Ali Banka
Digital technology has increased the spread of fake news, leading to misconceptions, misunderstandings, and economic challenges. Driven by advancements in AI, researchers have developed automated techniques to identify false information using various data features. Most algorithms focus on signals from the news itself and its context, often ignoring user preferences. According to confirmation bias theory, individuals are more likely to spread false information that aligns with their beliefs. Users' historical and social activities, such as their postings, can help identify fake news and inform their news choices. However, there is limited research on incorporating user preferences in fake news detection. This study introduces a framework based on Graph Neural Networks (GNNs) and natural language models to capture signals from both graph and content perspectives, considering user preferences. We chose GNNs for their ability to model complex relationships in graph-structured data. Specifically, we used the Graph Attention Network for its ability to weigh the importance of different nodes, enhancing the capture of relevant signals. The framework integrates user preferences by analyzing social activities and news choices. Experimental results on a real-world dataset show that our model achieves an accuracy of 98%, outperforming models that do not consider user preferences. These findings highlight the potential of leveraging user preferences to enhance fake news detection, offering a more robust approach to tackling information pollution.
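The graph side of the framework centers on the Graph Attention Network mentioned above. A minimal sketch of a two-layer GAT classifier in PyTorch Geometric (the layer sizes and the binary fake/real output are assumptions; the paper's full heterogeneous architecture and its user-preference features are not specified in the abstract):

```python
# Hedged sketch: a two-layer Graph Attention Network for binary
# fake/real news classification. Dimensions are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class NewsGAT(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64, heads: int = 4):
        super().__init__()
        # Multi-head attention lets the model weigh neighboring nodes
        # (news, users, posts) by relevance, as the abstract emphasizes.
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.gat2 = GATConv(hidden_dim * heads, 2, heads=1)

    def forward(self, x, edge_index):
        x = F.elu(self.gat1(x, edge_index))
        x = self.gat2(x, edge_index)
        return F.log_softmax(x, dim=-1)

# Usage: x is the node feature matrix (e.g., language-model embeddings of
# news content and user posting history); edge_index holds the graph edges.
model = NewsGAT(in_dim=768)
```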
{"title":"Sustainable signals: a heterogeneous graph neural framework for fake news detection","authors":"Adil Mudasir Malla, Asif Ali Banka","doi":"10.1007/s13198-024-02415-7","DOIUrl":"https://doi.org/10.1007/s13198-024-02415-7","url":null,"abstract":"<p>Digital technology has increased the spread of fake news, leading to misconceptions, misunderstandings, and economic challenges. Researchers have developed automated techniques to identify false information using various data features, driven by advancements in AI. Most algorithms focus on signals from the news itself and its context, often ignoring user preferences. According to confirmation bias theory, individuals are more likely to spread false information that aligns with their beliefs. Users’ historical and social activities, such as their postings, can help identify fake news and inform their news choices. However, there is limited research on incorporating user preferences in fake news detection. This study introduces a framework based on Graph Neural Networks (GNNs) and natural language models to capture signals from both graph and content perspectives, considering user preferences. We chose GNNs for their ability to model complex relationships in graph-structured data. Specifically, we used the Graph Attention Network due to its ability to weigh the importance of different nodes, enhancing the capture of relevant signals. The framework integrates user preferences by analyzing social activities and news choices. Experimental results on a real-world dataset show our model achieves an accuracy of 98%. Outperforming models that do even consider user preferences. These findings highlight the potential of leveraging user preferences to enhance fake news detection, offering a more robust approach to tackling information pollution.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EarlyNet: a novel transfer learning approach with VGG11 and EfficientNet for early-stage breast cancer detection
Pub Date : 2024-07-04 | DOI: 10.1007/s13198-024-02408-6
Melwin D. Souza, G. Ananth Prabhu, Varuna Kumara, K. M. Chaithra
Early-stage breast cancer detection remains a critical challenge in healthcare, demanding innovative approaches that leverage the power of deep learning and transfer learning techniques. The problem investigated involves designing a model capable of extracting meaningful features from mammographic images, maximizing transferability across datasets, and optimizing the trade-off between model complexity and computational efficiency. Existing methods often struggle to achieve high accuracy, robustness, and efficiency simultaneously. This research addresses these challenges with a novel transfer learning approach that combines the strengths of the VGG11 and EfficientNet architectures for early-stage breast cancer detection. Technological development offers no shortage of opportunities in medical imaging, and patients whose cancer is diagnosed earlier have a lower probability of dying from the disease. This research proposes a novel transfer-learning-based early neural network, named 'EarlyNet', to automate breast cancer prediction. A new hybrid deep learning model was devised and built for distinguishing benign breast tumors from malignant ones. Trials were carried out on the Breast Histopathology Image dataset, and the model was evaluated against a MobileNet built with the same transfer learning method. In terms of accuracy, the model delivers 91.53%. The study explores how the proposed transfer learning framework can enhance the accuracy and reliability of early-stage breast cancer detection, contributing to advancements in medical image analysis and positively impacting patient outcomes.
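A minimal sketch of the kind of transfer learning setup the abstract describes: pretrained VGG11 and EfficientNet backbones with frozen features and a fresh binary head. How EarlyNet actually combines the two backbones is not specified in the abstract, so the feature concatenation below is an assumption:

```python
# Hedged sketch: transfer learning with VGG11 + EfficientNet backbones.
# The fusion strategy (concatenation) is an assumption; the paper's
# actual EarlyNet architecture is not given in the abstract.
import torch
import torch.nn as nn
from torchvision import models

class EarlyNetSketch(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.vgg = models.vgg11(weights=models.VGG11_Weights.DEFAULT).features
        eff = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
        self.eff = eff.features
        # Freeze the pretrained feature extractors (transfer learning).
        for p in list(self.vgg.parameters()) + list(self.eff.parameters()):
            p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d(1)
        # VGG11 features end with 512 channels, EfficientNet-B0 with 1280.
        self.head = nn.Linear(512 + 1280, num_classes)

    def forward(self, x):
        a = self.pool(self.vgg(x)).flatten(1)
        b = self.pool(self.eff(x)).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

model = EarlyNetSketch()
logits = model(torch.randn(1, 3, 224, 224))  # benign vs. malignant logits
```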
{"title":"EarlyNet: a novel transfer learning approach with VGG11 and EfficientNet for early-stage breast cancer detection","authors":"Melwin D. Souza, G. Ananth Prabhu, Varuna Kumara, K. M. Chaithra","doi":"10.1007/s13198-024-02408-6","DOIUrl":"https://doi.org/10.1007/s13198-024-02408-6","url":null,"abstract":"<p>Early-stage breast cancer detection remains a critical challenge in healthcare, demanding innovative approaches that leverage the power of deep learning and transfer learning techniques. The problem to be investigated involves designing a model capable of extracting meaningful features from mammographic images, maximizing transferability across datasets, and optimizing the trade-off between model complexity and computational efficiency. Existing methods often face limitations in achieving high accuracy, robustness, and efficiency. This research aims to address these challenges by proposing a novel transfer learning approach that combines the strengths of VGG11 and EfficientNet architectures for early-stage breast cancer detection. In the case of technological development, there is never a shortage of opportunities in the field of medical imaging. Cancer patients who have an earlier diagnosis of their disease have a lower probability of passing away from their illness. This research proposed an novel early neural network based on transfer learning names as ‘EARLYNET’ to automate breast cancer prediction. In this research, the new hybrid deep learning model was devised and built for distinguishing benign breast tumors from malignant ones. The trials were carried out on the Breast Histopathology Image dataset, and the model was evaluated using a Mobile net founded on the transfer learning method. In terms of accuracy, this model delivers 91.53% accuracy. Explored how the proposed transfer learning framework can enhance the accuracy and reliability of early-stage breast cancer detection, contributing to advancements in medical image analysis and positively impacting patient outcomes.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141546762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review of failure rate studies in power distribution networks
Pub Date : 2024-06-25 | DOI: 10.1007/s13198-024-02400-0
Mohammad Taghitahooneh, Aidin Shaghaghi, Reza Dashti, Abolfazl Ahmadi
This article examines the research carried out on failure rates in electricity distribution systems and introduces a comprehensive framework for managing them. The framework shows that studies of failure rates in power distribution systems fall into three distinct groups: modifying asset management activities to reduce the failure rate, evaluating and controlling threats and risks, and taking emergency measures after failure. All the studies conducted on the failure rate of electricity distribution systems are collected and categorized within this comprehensive conceptual framework, the relation of each category to the failure rate is explained, and, by tracing the course of the literature, the research gaps and a roadmap for future studies of failure rates in electricity distribution systems are identified.
{"title":"A review of failure rate studies in power distribution networks","authors":"Mohammad Taghitahooneh, Aidin Shaghaghi, Reza Dashti, Abolfazl Ahmadi","doi":"10.1007/s13198-024-02400-0","DOIUrl":"https://doi.org/10.1007/s13198-024-02400-0","url":null,"abstract":"<p>This article examines the research carried out regarding the failure rate in electricity distribution systems. It introduces a comprehensive framework for managing failure rates in power distribution systems. This framework highlights that studies on failure rates in power distribution systems can be categorized into three distinct groups: modifying asset management activities in order to reduce failure rate, evaluate and control threats and risks, emergency measures after failure. In this article, all the studies conducted on the failure rate of electricity distribution systems are listed and presented, and categorized in the form of a comprehensive and conceptual framework. The relation of each category with the failure rate is explained and by studying the process of studies, the research gaps and the roadmap of future studies in the field of failure rate in electricity distribution systems are determined.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing software release decisions: a TFN-based uncertainty modeling approach
Pub Date : 2024-06-23 | DOI: 10.1007/s13198-024-02394-9
Shivani Kushwaha, Ajay Kumar
In our contemporary world, where technology is omnipresent and essential to daily life, the reliability of software systems is indispensable. Consequently, efforts to optimize software release time and decision-making processes have become imperative. Software reliability growth models (SRGMs) have emerged as valuable tools for gauging software reliability, with researchers studying various factors such as change points and testing effort. However, uncertainties persist throughout testing processes, which are inherently influenced by human factors. Fuzzy set theory has emerged as a valuable tool for addressing the inherent uncertainties and complexities of software systems. Its ability to model imprecise, uncertain, and vague information makes it particularly well suited to capturing the nuances of software reliability. In this research, we propose a novel approach that amalgamates change point detection, logistic testing effort function modeling, and triangular fuzzy numbers (TFNs) to tackle uncertainty and vagueness in software reliability modeling. Additionally, we explore release time optimization under TFNs, aiming to enhance decision-making in software development and release planning.
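A triangular fuzzy number is the core uncertainty primitive here: a triple (l, m, u) whose membership rises linearly from l to the mode m and falls back to u. A minimal sketch of TFN arithmetic propagated through an SRGM mean value function (the Goel-Okumoto model is used purely as a familiar example, and the parameter values are illustrative; the paper's specific model is not reproduced in the abstract):

```python
# Hedged sketch: triangular fuzzy numbers (l, m, u) propagated through
# the Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t)).
# The TFN parameter values are illustrative, not from the paper.
import math
from dataclasses import dataclass

@dataclass
class TFN:
    l: float  # lower bound (membership 0)
    m: float  # mode (membership 1)
    u: float  # upper bound (membership 0)

    def defuzzify(self) -> float:
        # Centroid of a triangle: a common crisp representative.
        return (self.l + self.m + self.u) / 3.0

a = TFN(90.0, 100.0, 115.0)   # total expected faults (fuzzy)
b = TFN(0.08, 0.10, 0.13)     # fault detection rate (fuzzy)
t = 20.0                      # testing time

# m(t) is increasing in both a and b, so interval arithmetic can be
# applied endpoint-wise.
mt = TFN(a.l * (1 - math.exp(-b.l * t)),
         a.m * (1 - math.exp(-b.m * t)),
         a.u * (1 - math.exp(-b.u * t)))

print(f"fuzzy expected faults by t={t}: ({mt.l:.1f}, {mt.m:.1f}, {mt.u:.1f})")
print(f"crisp (defuzzified) estimate: {mt.defuzzify():.1f}")
```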
{"title":"Optimizing software release decisions: a TFN-based uncertainty modeling approach","authors":"Shivani Kushwaha, Ajay Kumar","doi":"10.1007/s13198-024-02394-9","DOIUrl":"https://doi.org/10.1007/s13198-024-02394-9","url":null,"abstract":"<p>In our contemporary world, where technology is omnipresent and essential to daily life, the reliability of software systems is indispensable. Consequently, efforts to optimize software release time and decision-making processes have become imperative. Software reliability growth models (SRGMs) have emerged as valuable tools in gauging software reliability, with researchers studying various factors such as change point and testing effort. However, uncertainties persist throughout testing processes, which are inherently influenced by human factors. Fuzzy set theory has emerged as a valuable tool in addressing the inherent uncertainties and complexities associated with software systems. Its ability to model imprecise, uncertain, and vague information makes it particularly well-suited for capturing the nuances of software reliability. In this research, we propose a novel approach that amalgamates change point detection, logistic testing effort function modeling, and triangular fuzzy numbers (TFNs) to tackle uncertainty and vagueness in software reliability modeling. Additionally, we explore release time optimization considering TFNs, aiming to enhance decision-making in software development and release planning.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of shovel fleet utilization in Sarcheshmeh Copper Mine using a smart monitoring platform
Pub Date : 2024-06-20 | DOI: 10.1007/s13198-024-02396-7
Mohammad Rezaei Dashtaki, Ali Jandaghi Jafari, Behzad Ghodrati, Seyed Hadi Hoseinie
Utilization of the shovel fleet, a capital-intensive and operationally important asset in open-pit mines, is a key indicator for mine production analysis. This paper investigates shovel utilization in surface mining using a novel smart platform integrated with the shovel's operating joystick. A unique algorithm identifies and differentiates operational and non-operational time by comparing real-time data against the average loading cycle time; this is then used to calculate overall uptime and identify downtime periods. A field study was carried out on six electric cable shovels, of the P&H 2100 and TZ WK-12 types, at the Sarcheshmeh Copper Mine. The analysis revealed that average utilization across the whole fleet is 33%, ranging from 16 to 48%, dramatically lower than the mine's expectations. Statistical analysis showed that utilization exceeds 75%, a moderately acceptable level, in only 10–13% of the operating time. Finally, based on the outcomes of the field study and the developed smart platform, it is concluded that improving dispatching-system accuracy, revising grade-blending strategies, increasing processing-plant flexibility, and improving operator training could enhance shovel fleet utilization and whole-mine productivity.
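The abstract's core algorithmic idea, separating operational from non-operational time by comparing joystick activity with the average loading cycle, can be sketched as follows (the event format, threshold rule, and all numbers are assumptions; the platform's actual logic is not published in the abstract):

```python
# Hedged sketch: classify gaps between joystick events as downtime when
# they exceed a multiple of the average loading cycle time. Threshold
# factor and timestamps are illustrative assumptions.
AVG_CYCLE_S = 35.0     # average loading cycle time, seconds (assumed)
IDLE_FACTOR = 2.0      # a gap longer than 2 cycles counts as downtime

def utilization(joystick_ts: list[float], shift_s: float) -> float:
    """Fraction of a shift classified as operational time."""
    operational = 0.0
    for prev, curr in zip(joystick_ts, joystick_ts[1:]):
        gap = curr - prev
        if gap <= IDLE_FACTOR * AVG_CYCLE_S:
            operational += gap          # continuous digging/loading
        else:
            operational += AVG_CYCLE_S  # credit one cycle, rest is idle
    return operational / shift_s

# One hour of events with a long idle stretch in the middle:
events = [float(t) for t in range(0, 1200, 30)] + \
         [float(t) for t in range(2400, 3600, 30)]
print(f"utilization = {utilization(events, 3600.0):.0%}")
```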
{"title":"Analysis of shovel fleet utilization in Sarcheshmeh Copper Mine using a smart monitoring platform","authors":"Mohammad Rezaei Dashtaki, Ali Jandaghi Jafari, Behzad Ghodrati, Seyed Hadi Hoseinie","doi":"10.1007/s13198-024-02396-7","DOIUrl":"https://doi.org/10.1007/s13198-024-02396-7","url":null,"abstract":"<p>Utilization of the shovel fleet as a capital-intensive and operationally important asset in open-pit mines is a key indicator for mine production analysis. This paper investigates shovel utilization in surface mining using a novel smart platform integrated with the shovel operating joystick. It utilizes a unique algorithm to identify and differentiate operational and non-operational time based on comparing real-time data and average loading cycle time. This data is then employed to calculate overall uptime and identify downtime periods. A field study was carried out on six electric cable shovels consisting of P&H 2100 and TZ WK-12, at Sarcheshmeh Copper Mine. The analysis revealed that the average utilization of the whole fleet is equal to 33%, ranging from 16 to 48%, which is dramatically lower than the mine expectations. The statistical analysis showed that in 10–13% of the operating time, the utilization is higher than 75%, which is a moderately acceptable level. Finally, according to the outcomes of the field study and the developed smart platform, it could be concluded that improvements in dispatching system accuracy, revising the grade blending strategies, increasing processing plant flexibility and improved operator training could enhance shovel fleet utilization and whole mine productivity.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Availability and cost analysis of a multistage, multi-evaporator type compressor
Pub Date : 2024-06-20 | DOI: 10.1007/s13198-024-02384-x
Surbhi Gupta, H. D. Arora, Anjali Naithani
Refrigeration is a critical component of thermal environment engineering. Refrigeration refers to the process of removing heat from a substance under precise conditions, including lowering a body's temperature below the ambient temperature and maintaining it there. In this paper, we examine the availability and cost function of a refrigeration plant. The system has three modes: normal, degraded, and failed, and is divided into four sections: A (compressor), B (condenser), C (two standby expansion valves), and D (three evaporators in series). A standby expansion valve is installed to improve the performance of the refrigeration plant. The supplementary variable technique is used to obtain the state probabilities, and the inversion process is used to obtain expressions for the operational availability and profit functions. The mean time to failure (MTTF) is also estimated. A numerical example with a graphical presentation illustrates the practical advantages of the model.
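As a rough structural intuition for the system above: sections in series multiply their availabilities, while section C's redundant expansion valves keep the section up unless every valve fails. A minimal sketch under the simplifying assumptions of independent, steady-state section availabilities and parallel (rather than true cold-standby) redundancy; the paper itself derives time-dependent results via the supplementary variable technique, which this does not reproduce:

```python
# Hedged sketch: steady-state availability of the four-section series
# system. Section C fails only if both expansion valves fail; section D
# is a series chain of three evaporators. All numbers are illustrative.
a_compressor = 0.98          # section A
a_condenser  = 0.97          # section B
a_valve      = 0.95          # one expansion valve
a_evaporator = 0.99          # one evaporator

a_C = 1 - (1 - a_valve) ** 2     # redundancy: both valves must fail
a_D = a_evaporator ** 3          # series: every evaporator must work

a_system = a_compressor * a_condenser * a_C * a_D
print(f"approximate system availability = {a_system:.4f}")
```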
{"title":"Availability and cost analysis of a multistage, multi-evaporator type compressor","authors":"Surbhi Gupta, H. D. Arora, Anjali Naithani","doi":"10.1007/s13198-024-02384-x","DOIUrl":"https://doi.org/10.1007/s13198-024-02384-x","url":null,"abstract":"<p>Refrigeration is a critical component of thermal environment engineering. The process of removing heat from a substance under precise conditions is referred to as refrigeration. It also includes the process of lowering and maintaining a body's temperature below the ambient temperature. In this paper, we examine the availability and cost function of the system of the Refrigeration plant. This system has three modes: normal, degraded, and failed. The system is divided into four sections: A (Compressor), B (Condenser), C (two standby expansion valves), and D. (three evaporators in series). A standby expansion valve is installed to improve the performance of the refrigeration plant. The supplementary variable technique is used to obtain state probabilities and the inversion process is used to obtain the expression of operational availability and profit functions. The MTTF (mean time to failure) is also estimated. A numerical example is presented with a graphical presentation to illustrate the practical advantages of the model.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Bayesian estimation of stress–strength reliability in multicomponent system for two-parameter gamma distribution
Pub Date : 2024-06-19 | DOI: 10.1007/s13198-024-02379-8
V. K. Rathaur, N. Chandra, Parmeet Kumar Vinit
This paper deals with multicomponent stress–strength system reliability (MSR) and its maximum likelihood (ML) as well as Bayesian estimation. We assume that \(X_1, X_2, \dots, X_k\) are the random strengths of the k components of a system and Y is the common random stress applied to them, independently following gamma distributions with parameters \((\alpha_1, \lambda_1)\) and \((\alpha_2, \lambda_2)\), respectively. A system that works only if \(s\) \((1 \le s \le k)\) or more of the strengths exceed the common load/stress is called an s-out-of-k: G system. Maximum likelihood and asymptotic interval estimators of MSR are obtained. Bayes estimates are computed under symmetric and asymmetric loss functions assuming informative and non-informative priors. The ML and Bayes estimators are numerically evaluated and compared on mean square errors and absolute biases in a simulation study employing the Metropolis–Hastings algorithm.
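Under these assumptions, the s-out-of-k: G reliability takes the standard form \(R_{s,k} = \sum_{i=s}^{k} \binom{k}{i} \int_0^\infty [1 - F_X(y)]^i\, [F_X(y)]^{k-i}\, f_Y(y)\, dy\), where \(F_X\) is the strength CDF and \(f_Y\) the stress density. A minimal numerical sketch (parameter values are illustrative; note that scipy's gamma uses a shape-scale parameterization, so a rate \(\lambda\) enters as scale = 1/\(\lambda\)):

```python
# Hedged sketch: numerical evaluation of s-out-of-k:G stress-strength
# reliability for gamma-distributed strengths and stress. Parameter
# values are illustrative, not taken from the paper.
from math import comb
from scipy import integrate
from scipy.stats import gamma

alpha1, lam1 = 2.0, 1.0   # strength X ~ Gamma(shape alpha1, rate lam1)
alpha2, lam2 = 2.0, 1.5   # stress   Y ~ Gamma(shape alpha2, rate lam2)
k, s = 5, 3               # system works if >= s of k strengths exceed Y

def msr(s: int, k: int) -> float:
    def integrand(y: float) -> float:
        p = gamma.sf(y, alpha1, scale=1.0 / lam1)   # P(X > y)
        fy = gamma.pdf(y, alpha2, scale=1.0 / lam2)
        return sum(comb(k, i) * p**i * (1 - p)**(k - i)
                   for i in range(s, k + 1)) * fy
    val, _ = integrate.quad(integrand, 0.0, float("inf"))
    return val

print(f"R_(s,k) = {msr(s, k):.4f}")
```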
{"title":"On Bayesian estimation of stress–strength reliability in multicomponent system for two-parameter gamma distribution","authors":"V. K. Rathaur, N. Chandra, Parmeet Kumar Vinit","doi":"10.1007/s13198-024-02379-8","DOIUrl":"https://doi.org/10.1007/s13198-024-02379-8","url":null,"abstract":"<p>This paper deals with multicomponent stress–strength system reliability (MSR) and its maximum likelihood (ML) as well as Bayesian estimation. We assume that <span>({X}_{1},{X}_{2},dots ,{X}_{k})</span> being the random strengths of k- components of a system and <i>Y</i> is the applied common random stress on them, which independently follows gamma distribution with parameters <span>(left({alpha }_{1},{lambda }_{1}right))</span> and <span>(left({alpha }_{2},{lambda }_{2}right))</span> respectively. The system works only if <span>(sleft(1le sle kright))</span> or more of the strengths exceed the common load/stress is called s-out-of-k: G system. Maximum likelihood and asymptotic interval estimators of MSR are obtained. Bayes estimates are computed under symmetric and asymmetric loss functions assuming informative and non-informative priors. ML and Bayes estimators are numerically evaluated and compared based on mean square errors and absolute biases through simulation study employing the Metropolis–Hastings algorithm.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical inference of the exponentiated exponential distribution based on progressive type-II censoring with optimal scheme
Pub Date : 2024-06-17 | DOI: 10.1007/s13198-024-02381-0
Naresh Chandra Kabdwal, Qazi J. Azhad, Rashi Hora
This article is concerned with the estimation of the parameters, reliability, and hazard rate functions of the exponentiated exponential distribution under progressive type-II censored data. The maximum likelihood and maximum product of spacings methods are presented to estimate the unknown model parameters in the classical framework. In the Bayesian paradigm, both the likelihood and the product of spacings functions are considered to estimate the model parameters, reliability, and hazard rate functions. Bayes estimates are obtained under the squared error loss function (SELF), using a gamma prior for the shape parameter and a discrete prior for the scale parameter. Asymptotic confidence and highest posterior density credible intervals are also obtained for the model parameters and reliability characteristics. Optimality criteria are employed to find the best censoring scheme among those considered. A Monte Carlo simulation study compares the performances of the derived estimators under different progressive type-II censoring schemes. Finally, to illustrate the practical application of the proposed methodology, two real data analyses are conducted.
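Under progressive type-II censoring with removal scheme \(R_1, \dots, R_m\), the likelihood is proportional to \(\prod_{i=1}^{m} f(x_i)\,[1 - F(x_i)]^{R_i}\), and for the exponentiated exponential, \(F(x) = (1 - e^{-\lambda x})^{\alpha}\). A minimal sketch of the log-likelihood and a crude ML fit (the data, removal counts, and starting values are hypothetical):

```python
# Hedged sketch: ML fit of the exponentiated exponential distribution,
# F(x) = (1 - exp(-lam*x))**alpha, under progressive type-II censoring.
# The observations x and removal counts R are hypothetical.
import numpy as np
from scipy.optimize import minimize

x = np.array([0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67])
R = np.array([2,    0,    1,    0,    2,    0,    1,    3   ])  # removals

def neg_log_lik(theta):
    alpha, lam = np.exp(theta)  # log-parameterization keeps both positive
    e = np.exp(-lam * x)
    F = (1 - e) ** alpha
    # log f(x) = log(alpha) + log(lam) - lam*x + (alpha-1)*log(1 - e^{-lam*x})
    logf = np.log(alpha) + np.log(lam) - lam * x + (alpha - 1) * np.log1p(-e)
    # censored units contribute R_i * log(1 - F(x_i))
    return -(logf + R * np.log1p(-F)).sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 0.5]), method="Nelder-Mead")
alpha_hat, lam_hat = np.exp(res.x)
print(f"alpha_hat = {alpha_hat:.3f}, lam_hat = {lam_hat:.3f}")
```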
{"title":"Statistical inference of the exponentiated exponential distribution based on progressive type-II censoring with optimal scheme","authors":"Naresh Chandra Kabdwal, Qazi J. Azhad, Rashi Hora","doi":"10.1007/s13198-024-02381-0","DOIUrl":"https://doi.org/10.1007/s13198-024-02381-0","url":null,"abstract":"<p>This article is concerned with the estimation of parameters, reliability and hazard rate functions of the exponentiated exponential distribution under progressive type-II censoring data. The maximum likelihood estimation and maximum product of spacing methods are presented to estimate the unknown parameters of the model in classical theme. In the Bayesian paradigm, we have considered both likelihood as well as product of spacing functions to estimates of the model parameters, reliability and hazard rate functions. Bayes estimates are considered under squared error loss function (SELF) using gamma prior for the shape parameter and a discrete prior for the scale parameter. Asymptotic confidence and highest posterior density credible intervals have also been obtained for the model parameters and reliability characteristics. Optimal criteria is also employed to find the best censoring scheme among the considered censoring schemes. A Monte Carlo simulation study is used to compare the performances the derived estimators under different progressive type-II censoring schemes. Finally, to illustrate the practical application of the proposed methodology, two real data analysis are conducted.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}