Pub Date : 2024-05-18 DOI: 10.1007/s13198-024-02362-3
Adel Bouchahed, Abdelfettah Boussaid, Fatah Mekhloufi, Ahmed Belhani, Ali Belhamra
This article presents a modeling study and a control approach for a photovoltaic (PV) system that delivers continuous electrical energy at its output and feeds a DC–DC boost converter. This converter in turn provides a variable DC voltage applied directly across the terminals of a resistive load. To ensure high static-performance control across the different characteristics of the photovoltaic system, this study examines three control strategies for the DC–DC boost converter: the first is based on maximum power point tracking (MPPT); the second is a control technique based on a proportional–integral (PI) regulator; and the third combines a sliding mode strategy with the PI regulator. The main purpose of these strategies is to obtain the best characteristics of the photovoltaic system so that it operates around the maximum power point with less oscillation and overshoot, as well as high stability across the different PV system characteristics when the solar irradiance changes. The obtained results show the effectiveness of the proposed algorithm in controlling the photovoltaic system under different conditions in comparison with the other strategies. The PV system is connected to the DC–DC boost converter and subjected to a variable irradiance between 200 and 1000 \(\text{W}/\text{m}^2\) at a constant temperature of 25 °C. The DC voltage \(V_{dc}\) and current \(I_{dc}\) characteristics are obtained with a sampling time \(T_{e} = 0.1\) s and a simulation time \(T_{s} = 0.5\) s. The hybrid (P&O-MPPT)(SMC-PI) control technique gives better results than the other two strategies in terms of stability.
{"title":"An enhanced control strategy for photovoltaic system control based on sliding mode-PI regulator","authors":"Adel Bouchahed, Abdelfettah Boussaid, Fatah Mekhloufi, Ahmed Belhani, Ali Belhamra","doi":"10.1007/s13198-024-02362-3","DOIUrl":"https://doi.org/10.1007/s13198-024-02362-3","url":null,"abstract":"<p>This article presents a modeling study and a control approach for a photovoltaic (PV) system that delivers continuous electrical energy at its output and feeds a DC–DC boost converter. This converter in turn provides a variable DC voltage applied directly across the terminals of a resistive load. To ensure high static-performance control across the different characteristics of the photovoltaic system, this study examines three control strategies for the DC–DC boost converter: the first is based on maximum power point tracking (MPPT); the second is a control technique based on a proportional–integral (PI) regulator; and the third combines a sliding mode strategy with the PI regulator. The main purpose of these strategies is to obtain the best characteristics of the photovoltaic system so that it operates around the maximum power point with less oscillation and overshoot, as well as high stability across the different PV system characteristics when the solar irradiance changes. The obtained results show the effectiveness of the proposed algorithm in controlling the photovoltaic system under different conditions in comparison with the other strategies. The PV system is connected to the DC–DC boost converter and subjected to a variable irradiance between 200 and 1000 <span>\\(\\text{W}/\\text{m}^2\\)</span> at a constant temperature of 25 °C. The DC voltage <span>\\(V_{dc}\\)</span> and current <span>\\(I_{dc}\\)</span> characteristics are obtained with a sampling time <span>\\(T_{e} = 0.1\\)</span> s and a simulation time <span>\\(T_{s} = 0.5\\)</span> s. The hybrid <span>(P&O-MPPT)(SMC-PI)</span> control technique gives better results than the other two strategies in terms of stability.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"20 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141061845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
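The perturb-and-observe (P&O) MPPT step named in the abstract can be sketched as a simple hill-climbing loop. The quadratic P–V curve, starting voltage, and step size below are illustrative assumptions for the sketch, not the paper's PV model:

```python
# Minimal perturb-and-observe (P&O) MPPT sketch on a toy PV power curve.

def pv_power(v):
    """Toy P-V curve with a maximum of 100 W at 20 V (an assumption)."""
    return max(0.0, 100.0 - 0.5 * (v - 20.0) ** 2)

def perturb_and_observe(v0=5.0, step=0.5, iterations=200):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

As the abstract notes, plain P&O settles into a small oscillation around the maximum power point (here within one step of 20 V), which is what the SMC-PI combination is meant to damp.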
Pub Date : 2024-05-10 DOI: 10.1007/s13198-024-02321-y
Shiwani Tiwari, Alka, Piyush Kant Rai
This article suggests a chain ratio-type estimator of the population total based on calibration that takes into account auxiliary variables present on both occasions when information on the study variable is not available on the first occasion. The optimal composite weights to choose, together with their performance range, are presented along with the bias expression. An empirical and simulation-based study is used to evaluate the effectiveness of the suggested estimator. The studies demonstrate that the proposed estimator outperforms the other estimators for various composite weight selections with varying matched and unmatched sample sizes.
{"title":"Calibration based chain ratio-type estimator of population total under successive sampling","authors":"Shiwani Tiwari, Alka, Piyush Kant Rai","doi":"10.1007/s13198-024-02321-y","DOIUrl":"https://doi.org/10.1007/s13198-024-02321-y","url":null,"abstract":"<p>This article suggests a chain ratio-type estimator of the population total based on calibration that takes into account auxiliary variables present on both occasions when information on the study variable is not available on the first occasion. The optimal composite weights to choose, together with their performance range, are presented along with the bias expression. An empirical and simulation-based study is used to evaluate the effectiveness of the suggested estimator. The studies demonstrate that the proposed estimator outperforms the other estimators for various composite weight selections with varying matched and unmatched sample sizes.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"131 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140932014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
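A chain ratio-type estimator builds on the classical ratio estimator of a population total, \(\hat{Y} = (\bar{y}/\bar{x})\,X\). A minimal sketch of that building block, with invented sample values and auxiliary total (the paper's calibration weights and chaining are not reproduced here):

```python
# Classical ratio estimator of a population total: Y_hat = (y_bar / x_bar) * X_total,
# where X_total is the known population total of the auxiliary variable.

def ratio_estimate_total(y_sample, x_sample, x_population_total):
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return (y_bar / x_bar) * x_population_total

# invented data: study variable roughly twice the auxiliary variable
y_hat = ratio_estimate_total([2, 4, 6], [1, 2, 3], x_population_total=100)
```

With the toy data above the sample ratio is 2, so the estimated total is 200; the calibrated chain estimator in the paper refines this by re-weighting across the two occasions.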
Pub Date : 2024-05-10 DOI: 10.1007/s13198-024-02348-1
Ozgur Satici, Esra Satici
All engineering projects involve risk management applications. Sometimes, risks cannot be effectively managed, leading to catastrophic consequences. Engineers must consciously or unconsciously manage these risks. Regardless of how risks are handled, project risks need to be systematically evaluated. Therefore, risk management procedures must be implemented in every project, particularly in geo-engineering projects, to mitigate undesirable consequences and achieve project objectives. However, the use of risk management procedures in underground excavation projects is not common. Numerous commonly employed underground excavation techniques lack assessment of risks, notably geotechnical risks. Most of them only evaluate rock structures and excavation stability in accordance with the geological structure. This paper combines a universal risk management perspective with the underground engineering discipline. The tunnel engineering design and construction steps were evaluated for uncertainties using Scenario Structuring Modeling techniques to identify both technical and non-technical risks associated with underground excavation. Bayesian Network models were employed to identify connections that contribute to risk. To achieve this, objective and quantitative risk assessment tables have been devised using risk management philosophy, in accordance with tunnel design engineering principles and Turkish procurement laws. The primary objective of this study is to increase awareness of the use of risk management processes in tunnel construction projects and introduce a systematic approach to risk assessment in tunnel engineering projects. As a result, a semi-quantitative risk assessment method based on risk management philosophy is proposed for tunnel design and construction for the first time, evaluating not only geotechnical and engineering risks but also human, financial, and various other sources of risks.
{"title":"Theoretical semi-quantitative risk assessment methodology for tunnel design and construction processes","authors":"Ozgur Satici, Esra Satici","doi":"10.1007/s13198-024-02348-1","DOIUrl":"https://doi.org/10.1007/s13198-024-02348-1","url":null,"abstract":"<p>All engineering projects involve risk management applications. Sometimes, risks cannot be effectively managed, leading to catastrophic consequences. Engineers must consciously or unconsciously manage these risks. Regardless of how risks are handled, project risks need to be systematically evaluated. Therefore, risk management procedures must be implemented in every project, particularly in geo-engineering projects, to mitigate undesirable consequences and achieve project objectives. However, the use of risk management procedures in underground excavation projects is not common. Numerous commonly employed underground excavation techniques lack assessment of risks, notably geotechnical risks. Most of them only evaluate rock structures and excavation stability in accordance with the geological structure. This paper combines a universal risk management perspective with the underground engineering discipline. The tunnel engineering design and construction steps were evaluated for uncertainties using Scenario Structuring Modeling techniques to identify both technical and non-technical risks associated with underground excavation. Bayesian Network models were employed to identify connections that contribute to risk. To achieve this, objective and quantitative risk assessment tables have been devised using risk management philosophy, in accordance with tunnel design engineering principles and Turkish procurement laws. The primary objective of this study is to increase awareness of the use of risk management processes in tunnel construction projects and introduce a systematic approach to risk assessment in tunnel engineering projects. 
As a result, a semi-quantitative risk assessment method based on risk management philosophy is proposed for tunnel design and construction for the first time, evaluating not only geotechnical and engineering risks but also human, financial, and various other sources of risks.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"66 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140932015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
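As a rough illustration of the semi-quantitative idea, a likelihood-consequence product on ordinal scales can be binned into qualitative classes. The 1–5 scales and band thresholds below are generic assumptions, not the paper's actual assessment tables (which also feed Bayesian Network models):

```python
# Semi-quantitative risk scoring sketch: risk = likelihood x consequence.

def risk_score(likelihood, consequence):
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("both scales are ordinal 1..5")
    return likelihood * consequence

def risk_class(score):
    # illustrative band thresholds over the 1..25 score range
    if score <= 4:
        return "low"
    if score <= 9:
        return "moderate"
    if score <= 16:
        return "high"
    return "extreme"

print(risk_class(risk_score(4, 5)))
```

Such a matrix is deliberately coarse; the paper's contribution is wiring scores like these into a systematic, project-wide assessment rather than the scoring rule itself.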
Pub Date : 2024-05-08 DOI: 10.1007/s13198-024-02349-0
Hua Gong
Individuals participate in the purchase and sale of securities affiliated with corporations on the stock market, which increases economic prosperity. The intricate interplay between economic factors, market dynamics, and investor psychology poses a significant challenge in accurately predicting outcomes within the field of finance. Additionally, the presence of non-stationarity, non-linearity, and high volatility in stock price time series data exacerbates the challenge of making precise estimations about stock prices in the securities market. The use of conventional techniques has the capacity to augment the accuracy of predictive modeling. However, it is important to acknowledge that these approaches also involve computational intricacies, which might result in a higher likelihood of errors in prediction. This research introduces a novel model that adeptly addresses several issues via the integration of the Ant Lion Optimization methodology with the radial basis function method. The hybrid model showed greater effectiveness and performance in comparison to the other models in the current study. The usefulness of the proposed predictive model for projecting stock prices was assessed through an analysis of data obtained from the Nasdaq index, covering the period from January 1, 2015, to June 29, 2023. The findings suggest that the model is reliable and effective in analyzing and predicting the stock price time series. The empirical results indicate that the suggested model has a higher level of predictive accuracy than the other approaches, achieving the highest coefficient of determination of 0.991.
{"title":"An Enhanced Hybrid Model for financial market and economic analysis: a case study of the Nasdaq Index","authors":"Hua Gong","doi":"10.1007/s13198-024-02349-0","DOIUrl":"https://doi.org/10.1007/s13198-024-02349-0","url":null,"abstract":"<p>Individuals participate in the purchase and sale of securities affiliated with corporations on the stock market, which increases economic prosperity. The intricate interplay between economic factors, market dynamics, and investor psychology poses a significant challenge in accurately predicting outcomes within the field of finance. Additionally, the presence of non-stationarity, non-linearity, and high volatility in stock price time series data exacerbates the challenge of making precise estimations about stock prices in the securities market. The use of conventional techniques has the capacity to augment the accuracy of predictive modeling. However, it is important to acknowledge that these approaches also include computational intricacies, which might result in a higher likelihood of errors in predicting. This research introduces a novel model that adeptly addresses several issues via the integration of the Ant lion optimization methodology with the radial basis function method. The hybrid model showed greater effectiveness and performance in comparison to other models in the current study. The proposed model demonstrated a significant degree of effectiveness, characterized by optimum performance. The usefulness of a proposed predictive model for projecting stock prices was assessed by an analysis of data obtained from the Nasdaq index. The data covered the time period from January 1, 2015, to June 29, 2023. The findings suggest that the suggested model demonstrates reliability and effectiveness in its ability to analyze and predict the time series of stock prices. 
The empirical results indicate that the suggested model has a higher level of predictive accuracy than the other approaches, achieving the highest coefficient of determination of 0.991.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"204 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140932108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
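The radial basis function side of such a hybrid can be sketched as Gaussian RBF features fit by least squares. In the paper the RBF parameters are tuned by Ant Lion Optimization; here the centers and width are simply fixed by hand, and the sine series stands in for the price data — both assumptions for illustration:

```python
import numpy as np

# Gaussian RBF regression sketch: fixed centers/width, weights by least squares.

def rbf_features(x, centers, width):
    # one Gaussian bump per center, evaluated at every sample point
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, centers, width):
    phi = rbf_features(x, centers, width)
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return weights

def predict_rbf(x, centers, width, weights):
    return rbf_features(x, centers, width) @ weights

x = np.linspace(0, 2 * np.pi, 80)        # synthetic "time" axis
y = np.sin(x)                            # synthetic stand-in series
centers = np.linspace(0, 2 * np.pi, 12)  # hand-picked, not ALO-tuned
w = fit_rbf(x, y, centers, width=0.8)
y_hat = predict_rbf(x, centers, 0.8, w)
```

The metaheuristic's job in the hybrid is to choose `centers` and `width` so the fit generalizes, instead of fixing them as done here.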
The global shipping industry is the cornerstone of contemporary culture and the economy. Swift trade facilitated by efficient and dependable shipping services forms the backbone of the rapid exchange of goods and ideas, making the availability of certain now-ubiquitous products possible. Maritime cargo strategies enable businesses to seamlessly and expeditiously transport their goods across nations and borders. Integrating Internet of Things (IoT) technology is a promising solution to enhance these operations. IoT refers to a network of interconnected devices, objects, or “things” that communicate and share data with each other over the internet. The primary purpose of IoT is to enable these devices to collect, exchange, and analyze information, creating a seamless and intelligent network. This paper addresses the barriers that organizations may face when contemplating the implementation of IoT in maritime freight operations. To identify and prioritize these challenges, a multi-criteria decision-making approach has been employed, specifically the fuzzy Analytic Hierarchy Process (AHP) combined with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to rank these factors in descending order of their significance.
{"title":"Unveiling barriers to IoT adoption in the maritime freight industry","authors":"Suneet Singh, Lakshay, Saurabh Pratap, Sunil Kumar Jauhar","doi":"10.1007/s13198-024-02342-7","DOIUrl":"https://doi.org/10.1007/s13198-024-02342-7","url":null,"abstract":"<p>The global shipping industry is the cornerstone of contemporary culture and the economy. Swift trade facilitated by efficient and dependable shipping services forms the backbone of the rapid exchange of goods and ideas, making the availability of certain now-ubiquitous products possible. Maritime cargo strategies enable businesses to seamlessly and expeditiously transport their goods across nations and borders. Integrating Internet of Things (IoT) technology is a promising solution to enhance these operations. IoT refers to a network of interconnected devices, objects, or “things” that communicate and share data with each other over the internet. The primary purpose of IoT is to enable these devices to collect, exchange, and analyze information, creating a seamless and intelligent network. This paper addresses the barriers that organizations may face when contemplating the implementation of IoT in maritime freight operations. 
To identify and prioritize these challenges, a multi-criteria decision-making approach has been employed, specifically the fuzzy Analytic Hierarchy Process (AHP) combined with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to rank these factors in descending order of their significance.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"13 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
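The TOPSIS half of the method is mechanical enough to sketch directly. The 3-barrier × 2-criterion decision matrix and the equal weights below are invented for illustration; in the paper the weights come from fuzzy AHP instead:

```python
import numpy as np

# TOPSIS ranking sketch: closeness of each alternative to the ideal solution.

def topsis(matrix, weights, benefit):
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))           # vector normalization
    v = norm * weights                                 # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))    # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

# invented scores for three hypothetical barriers on two benefit criteria
scores = topsis([[7, 2], [5, 5], [9, 8]],
                weights=np.array([0.5, 0.5]),
                benefit=np.array([True, True]))
```

Sorting the closeness coefficients in descending order gives the barrier ranking the abstract describes.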
Pub Date : 2024-05-06 DOI: 10.1007/s13198-024-02353-4
J. Selwyn Paul, R. Suchithra
The paper proposes an encrypted image-based reversible data embedding approach using an inter-channel gradient-shifted MSB predictor for cloud storage applications. In this approach, the data embedding is done on the cloud where the encrypted images are stored. Initially, the image is encrypted by the user with permutation-based encryption and uploaded to the cloud. From the encrypted image, a primary channel and two secondary channels are derived, from which the gradient images are estimated. Using the histogram, the gradient images are then shifted to perform embedding. Three different shifting approaches are proposed: minimum value gradient shifting, threshold value gradient shifting, and maximum correlated gradient shifting (MC-GS). The gradient-shifted images are used to embed the data using the MSB predictor approach. The algorithm was analyzed using standard color images from the SIPI dataset and evaluated with measures such as the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), embedding rate, and entropy. The MC-GS gradient shifting results in an SSI, PSNR, and embedding rate of 0.1046, 8.13 dB, and 2.832 bpp respectively.
{"title":"A reversible data embedding approach based on inter-channel gradient shifted MSB predictor in encrypted images for cloud applications","authors":"J. Selwyn Paul, R. Suchithra","doi":"10.1007/s13198-024-02353-4","DOIUrl":"https://doi.org/10.1007/s13198-024-02353-4","url":null,"abstract":"<p>The paper proposes an encrypted image-based reversible data embedding approach using an inter-channel gradient-shifted MSB predictor for cloud storage applications. In this approach, the data embedding is done on the cloud where the encrypted images are stored. Initially, the image is encrypted by the user with permutation-based encryption and uploaded to the cloud. From the encrypted image, a primary channel and two secondary channels are derived, from which the gradient images are estimated. Using the histogram, the gradient images are then shifted to perform embedding. Three different shifting approaches are proposed: minimum value gradient shifting, threshold value gradient shifting, and maximum correlated gradient shifting (MC-GS). The gradient-shifted images are used to embed the data using the MSB predictor approach. The algorithm was analyzed using standard color images from the SIPI dataset and evaluated with measures such as the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), embedding rate, and entropy.
The MC-GS gradient shifting results in an SSI, PSNR, and embedding rate of 0.1046, 8.13 dB, and 2.832 bpp respectively.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"51 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
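The PSNR measure quoted above is a standard computation and can be sketched directly; the two 4×4 "images" below are invented toy arrays, not SIPI data:

```python
import numpy as np

# PSNR sketch for 8-bit images: 10*log10(peak^2 / MSE).

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")     # identical images: distortion-free
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                   # a single-pixel error of 10 gray levels
value = psnr(a, b)
```

The relatively low 8.13 dB reported for MC-GS reflects that PSNR here is measured between the original and the (still encrypted and embedded) image, where heavy distortion is expected by design.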
Pub Date : 2024-05-06 DOI: 10.1007/s13198-024-02340-9
Anshika Agrawal, Neha Singhal
In this paper, an improved algorithm is proposed for solving fully fuzzy transportation problems. The proposed algorithm finds a starting basic feasible solution to the transportation problem with parameters in fuzzy form. It is an amalgamation of two existing approaches and can be applied to a balanced fuzzy transportation problem where uncertainties are represented by trapezoidal fuzzy numbers. Instead of transforming these uncertainties into crisp values, the proposed algorithm directly handles the fuzzy nature of the problem. To illustrate its effectiveness, the article presents several numerical examples in which parameter uncertainties are characterized using trapezoidal fuzzy numbers. A comparative analysis is performed between the algorithm’s outcomes and the existing results. A case study is also discussed to enhance the significance of the algorithm.
{"title":"An efficient computational approach for basic feasible solution of fuzzy transportation problems","authors":"Anshika Agrawal, Neha Singhal","doi":"10.1007/s13198-024-02340-9","DOIUrl":"https://doi.org/10.1007/s13198-024-02340-9","url":null,"abstract":"<p>In this paper, an improved algorithm is proposed for solving fully fuzzy transportation problems. The proposed algorithm finds a starting basic feasible solution to the transportation problem with parameters in fuzzy form. It is an amalgamation of two existing approaches and can be applied to a balanced fuzzy transportation problem where uncertainties are represented by trapezoidal fuzzy numbers. Instead of transforming these uncertainties into crisp values, the proposed algorithm directly handles the fuzzy nature of the problem. To illustrate its effectiveness, the article presents several numerical examples in which parameter uncertainties are characterized using trapezoidal fuzzy numbers. A comparative analysis is performed between the algorithm’s outcomes and the existing results. A case study is also discussed to enhance the significance of the algorithm.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"1 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
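A minimal sketch of the trapezoidal fuzzy arithmetic such an algorithm manipulates, assuming the common (a, b, c, d) representation with a ≤ b ≤ c ≤ d; the average-based ranking function is one standard choice, not necessarily the paper's:

```python
# Trapezoidal fuzzy number (TFN) sketch: addition and a simple ranking value.

def tfn_add(p, q):
    """Component-wise addition of two trapezoidal fuzzy numbers."""
    return tuple(x + y for x, y in zip(p, q))

def tfn_rank(p):
    """Average of the four parameters: a simple crisp ranking value."""
    return sum(p) / 4.0

# two invented fuzzy unit costs, added along a route
cost = tfn_add((1, 2, 3, 4), (2, 3, 4, 5))
```

Keeping the costs in this four-parameter form throughout, rather than defuzzifying up front, is exactly the "directly handles the fuzzy nature" property the abstract claims.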
Pub Date : 2024-05-05 DOI: 10.1007/s13198-024-02331-w
Shipra
The practice of leveraging previously created software components to develop new software is known as component-based software engineering (CBSE). Good software engineering design is the foundation of CBSE principles. The black box approach that underpins CBSE hides the implementation of components, and the components communicate with one another through strictly delineated interfaces. Component platforms are shared, which lowers the cost of development. Various software metrics are employed to ascertain a system's complexity; to keep software complexity low, coupling should be minimal and cohesion high. With our approach, we identify combinations of different software components and improve the methods for doing so. Proposed: two cohesion metrics, Cohm (cohesion of methods) and Cohv (cohesion of variables). The cohesion metrics in this study have been analytically and empirically evaluated, and a comparison has been made between them. Additionally, the outcomes of an empirical estimation based on a case study are given. The T-test is used to determine the significance of the metrics, and Python is used to validate them. Python or R programming and the Matlab tool are used to determine the relationship between the various variables and metrics. Findings: The results of the current investigation are very encouraging and might be used to estimate the complexity of the components. The proportional analysis of the proposed metrics and various cohesion metrics reveals that the suggested metrics are more cohesive than the present metrics, increasing the likelihood that they can be reused when creating new applications.
{"title":"Cohesion measurements between variables and methods using component-based software systems","authors":"Shipra","doi":"10.1007/s13198-024-02331-w","DOIUrl":"https://doi.org/10.1007/s13198-024-02331-w","url":null,"abstract":"<p>The practice of leveraging previously created software components to develop new software is known as component-based software engineering (CBSE). Good software engineering design is the foundation of CBSE principles. The black box approach that underpins CBSE hides the implementation of components, and the components communicate with one another through strictly delineated interfaces. Component platforms are shared, which lowers the cost of development. Various software metrics are employed to ascertain a system's complexity; to keep software complexity low, coupling should be minimal and cohesion high. With our approach, we identify combinations of different software components and improve the methods for doing so. Proposed: two cohesion metrics, Cohm (cohesion of methods) and Cohv (cohesion of variables). The cohesion metrics in this study have been analytically and empirically evaluated, and a comparison has been made between them. Additionally, the outcomes of an empirical estimation based on a case study are given. The <i>T</i>-test is used to determine the significance of the metrics, and Python is used to validate them. Python or R programming and the Matlab tool are used to determine the relationship between the various variables and metrics. Findings: The results of the current investigation are very encouraging and might be used to estimate the complexity of the components.
The proportional analysis of the proposed metrics and various cohesion metrics reveals that the suggested metrics are more cohesive than the present metrics, increasing the likelihood that they can be reused when creating new applications.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"49 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
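One plausible reading of a method-level cohesion measure is the average fraction of a component's variables that each method touches. This is only an illustrative sketch — the paper's exact Cohm/Cohv definitions are not reproduced here, and the method-to-variable map below is invented:

```python
# Illustrative cohesion computation over a method -> variables-used mapping.
# NOT the paper's Cohm/Cohv formula; a common average-usage-ratio reading.

def cohesion(methods_to_vars, all_vars):
    if not methods_to_vars or not all_vars:
        return 0.0
    ratios = [len(set(used) & set(all_vars)) / len(all_vars)
              for used in methods_to_vars.values()]
    return sum(ratios) / len(ratios)

# hypothetical component: two methods over two shared variables
score = cohesion({"deposit": ["balance"],
                  "withdraw": ["balance", "limit"]},
                 ["balance", "limit"])
```

A score near 1 means every method touches most of the component's state (high cohesion); values near 0 suggest the component bundles unrelated responsibilities.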
Pub Date : 2024-05-02 DOI: 10.1007/s13198-024-02343-6
Ankita Panwar, Millie Pant
Data envelopment analysis (DEA) is a well-known multi-criteria decision-making technique used to measure the relative efficiency of decision-making units (DMUs). However, in classical DEA the discriminatory power is often weak, particularly when the number of input and output variables is high. In this paper, a combined analytic hierarchy process and principal component analysis approach is applied to identify the most relevant criteria, thereby reducing the number of criteria and increasing the discriminatory power of DEA. Further, super-efficiency data envelopment analysis is applied to determine the efficiency of the DMUs. The feasibility of the proposed process is illustrated on a real-world multi-criteria decision-making problem based on the hostel management system of a higher education institute, and the performance of the decision-making units is assessed.
{"title":"PCA integrated DEA for hostel assessment of a Higher Education Institution","authors":"Ankita Panwar, Millie Pant","doi":"10.1007/s13198-024-02343-6","DOIUrl":"https://doi.org/10.1007/s13198-024-02343-6","url":null,"abstract":"<p>Data envelopment analysis (DEA) is a well-known multi-criteria decision-making technique which is used to measure the relative efficiency of decision-making units (DMUs). However, in the case of classical DEA, the discriminatory power is often weak particularly when the number of input and output variables are high. In the paper, combine analytic hierarchy process-principal component analysis, is applied to identify the most relevant criteria thereby reducing the number of criteria and increasing the discriminatory power of DEA. Further, in this study, super-efficiency-data envelopment analysis is applied to determine the efficiency of DMUs. The feasibility of the proposed process is illustrated for a real-world multi-criteria decision-making problem based on the hostel management system for the higher education institute and assesses the performance of the decision-making units.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"242 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-02DOI: 10.1007/s13198-024-02344-5
Nusrat Mohi Ud Din, Assif Assad, Saqib Ul Sabha, Muzafar Rasool
The challenge of limited labeled data is a persistent concern across diverse domains, including healthcare, niche agricultural practices, astronomy and space exploration, anomaly detection, and many more. Limited data can lead to biased training, overfitting, and poor generalization in Artificial Intelligence (AI) models. In response to this ubiquitous problem, this research explores the potential of deep reinforcement learning (DRL) algorithms, specifically Double Deep Q-Network (Double DQN) and Dueling Deep Q-Network (Dueling DQN). The algorithms were trained on small training subsets generated by subsampling from the original training datasets. In this subsampling process, 10, 20, 30, and 40 instances were selected from each class to form the smaller training subsets. Subsequently, the performance of these algorithms was comprehensively assessed by evaluating them on the entire test set. We employed datasets from two different domains where this problem is especially prevalent to assess performance in data-constrained scenarios. A comparative analysis was conducted against a transfer learning approach widely employed to tackle similar challenges. The comprehensive evaluation reveals compelling results. In the medical domain, Dueling DQN consistently outperformed Double DQN and transfer learning, while in the agriculture domain, Double DQN demonstrated superior performance compared to Dueling DQN and transfer learning. These findings underscore the remarkable effectiveness of DRL algorithms in addressing data scarcity across a spectrum of domains, positioning DRL as a potent tool for enhancing diverse applications with limited labeled data.
{"title":"Optimizing deep reinforcement learning in data-scarce domains: a cross-domain evaluation of double DQN and dueling DQN","authors":"Nusrat Mohi Ud Din, Assif Assad, Saqib Ul Sabha, Muzafar Rasool","doi":"10.1007/s13198-024-02344-5","DOIUrl":"https://doi.org/10.1007/s13198-024-02344-5","url":null,"abstract":"<p>The challenge of limited labeled data is a persistent concern across diverse domains, including healthcare, niche agricultural practices, astronomy and space exploration, anomaly detection, and many more. Limited data can lead to biased training, overfitting, and poor generalization in Artificial Intelligence (AI) models. In response to this ubiquitous problem, this research explores the potential of deep reinforcement learning (DRL) algorithms, specifically Double Deep Q-Network (Double DQN) and Dueling Deep Q-Network (Dueling DQN). The algorithms were trained on small training subsets generated by subsampling from the original training datasets. In this subsampling process, 10, 20, 30, and 40 instances were selected from each class to form the smaller training subsets. Subsequently, the performance of these algorithms was comprehensively assessed by evaluating them on the entire test set. We employed datasets from two different domains where this problem mainly exists to assess their performance in data-constrained scenarios. A comparative analysis was conducted against a transfer learning approach widely employed to tackle similar challenges. The comprehensive evaluation reveals compelling results. In the medical domain, Dueling DQN consistently outperformed Double DQN and transfer learning, while in the agriculture domain, Double DQN demonstrates superior performance compared to Dueling DQN and transfer learning. 
These findings underscore the remarkable effectiveness of DRL algorithms in addressing data scarcity across a spectrum of domains, positioning DRL as a potent tool for enhancing diverse applications with limited labeled data.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"13 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
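The per-class subsampling protocol described above (10, 20, 30, or 40 instances per class, evaluated against the full test set) can be sketched as follows; the function name and signature are illustrative, not taken from the paper:

```python
import random
from collections import defaultdict

def subsample_per_class(X, y, k, seed=0):
    """Build a small training subset with at most k instances per class,
    mirroring the 10/20/30/40-per-class protocol described in the abstract."""
    rng = random.Random(seed)  # fixed seed for reproducible subsets
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    Xs, ys = [], []
    for label, items in sorted(by_class.items()):
        chosen = rng.sample(items, min(k, len(items)))
        Xs.extend(chosen)
        ys.extend([label] * len(chosen))
    return Xs, ys
```

A DRL agent or a transfer-learning baseline would then be trained on `Xs, ys` and evaluated on the untouched full test set, as the study describes.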