Pub Date: 2024-05-01 | DOI: 10.1007/s13198-024-02346-3
Saqib ul Sabha, Assif Assad, Sadaf Shafi, Nusrat Mohi Ud Din, Rayees Ahmad Dar, Muzafar Rasool Bhat
Deep learning, while transformative for computer vision, frequently falters when confronted with small and imbalanced datasets. Despite substantial progress in this domain, prevailing models often underperform under these constraints. To address this, we introduce a contrast-based learning strategy for small and imbalanced data that significantly improves the performance of deep learning architectures on these challenging datasets. By concatenating training images, the effective training dataset expands from n to n^2, affording richer data for model training even when n is very small. Our solution is agnostic to the specific loss function and network architecture, making it adaptable to diverse classification scenarios. Evaluated on four benchmark datasets, our approach was compared with state-of-the-art oversampling methods. The empirical evidence shows the method's superior efficacy, outperforming its counterparts on metrics such as balanced accuracy, F1 score, and geometric mean. Notable gains include 7–16% on the Covid-19 dataset, 4–20% on Honey bees, 1–6% on CIFAR-10, and 1–9% on FashionMNIST. In essence, the proposed method offers a potent remedy for the persistent problems caused by scarce and skewed data in deep learning.
Title: Imbalcbl: addressing deep learning challenges with small and imbalanced datasets
Pub Date: 2024-04-30 | DOI: 10.1007/s13198-024-02341-8
Rajan Mondal, Subhajit Das, Md Akhtar, Ali Akbar Shaikh, Asoke Kumar Bhunia
This work presents a two-storage inventory model that considers the effect of item deterioration, partial advance payment, and a two-level trade-credit financing policy, ignoring the relationship between the credit periods offered to retailers and to customers by the supplier and the retailer, respectively. Demand depends on the freshness period of the items, the credit period offered to customers by the retailer, and the selling price. Shortages are permitted and are partially backlogged. Depending on the length of the retailer's credit period, three scenarios are investigated. These scenarios are discussed in detail, and the corresponding models are formulated with the objective of determining the optimal policy by optimizing the average profit of each scenario subject to some constraints. The resulting optimization problems are non-linear and are solved using the differential evolution (DE) algorithm and eight other existing metaheuristic algorithms. To validate the model, three numerical examples are considered and solved. The results obtained from the DE algorithm are compared statistically with those of the other algorithms. To justify the comparison and verify the statistical significance of the DE algorithm, two tests, namely the Friedman and analysis of variance (ANOVA) tests, are carried out on the numerical examples. Finally, sensitivity analyses are conducted, and the effects of different system parameters on the best-found (optimal) policy are presented graphically.
Title: A two-warehouse inventory model for deteriorating items with partially backlogged demand rate under trade credit policies
Pub Date: 2024-04-29 | DOI: 10.1007/s13198-024-02345-4
Saqib Ul Sabha, Assif Assad, Nusrat Mohi Ud Din, Muzafar Rasool Bhat
The widespread adoption of convolutional neural networks (CNNs) in image recognition has marked a significant breakthrough. However, these networks need large amounts of data to learn well, which can be difficult to obtain. This makes models prone to overfitting, where they perform well on training data but poorly on new data. Various strategies have emerged to address this issue, including careful selection of an appropriate network architecture. This study investigates how to mitigate data scarcity through a comparative analysis of two distinct approaches: using compact CNN architectures trained from scratch and applying transfer learning with pre-trained models. Our investigation spans three datasets, each from a different domain, and the findings reveal nuanced differences in performance. Using a complex pre-trained model such as ResNet50 yields better results on the flower and maize-disease identification datasets, emphasizing the advantage of leveraging prior knowledge for certain data types. Conversely, a simpler CNN architecture trained from scratch is the superior strategy on the pneumonia dataset, highlighting the need to adapt the approach to the specific dataset and domain.
Title: From scratch or pretrained? An in-depth analysis of deep learning approaches with limited data
Pub Date: 2024-04-29 | DOI: 10.1007/s13198-024-02351-6
Suryadi Ali, Choesnul Jaqin
Today, every industrial process, whether installation, manufacturing, or service, carries a risk of process failure. This risk accumulates from the initial to the final stage of the supply chain, and a failure at one stage can propagate to the next, which is a major problem in industry. In the motorcycle industry, the spare-parts supply chain produces automotive vehicle spare parts and requires an integrated supply-chain system to avoid delays between stages. Here, an installation-process failure occurred due to damage to one cylinder-head component, a perforated (porous) camshaft cap, in the assembly mechanism used on the cylinder head, where applied torque can produce cracks. To overcome failures and cracks in the camshaft-cap process, a Process Failure Modes and Effects Analysis based on the Automotive Industry Action Group-Verband der Automobilindustrie (PFMEA-AIAG-VDA) version is proposed. The objective of the proposed method is to analyze the casting process and the failure of the camshaft cap on cylinder-head assembly parts such as the camshaft and bolt flange. The optimization improves the casting process for the porous camshaft cap by tuning casting-process parameters and applying a design-of-experiments factor analysis. The proposed method shows a positive impact on product output: production was monitored over 20,000 casting shots, and no spray holes or cracks were found in the suspect camshaft-cap area, so the production targets were achieved.
Title: Improvement and reduce risk of failure part -casting by multi-domain matrix- process failure modes and effects analysis based verband der automobilindustrie and design of experiment
Pub Date: 2024-04-29 | DOI: 10.1007/s13198-024-02338-3
Admasu Tadesse, Srikumar Acharya, M. M. Acharya, Manoranjan Sahoo, Berhanu Belay
The pressure to conserve the environment in the face of global warming cannot be overstated. The need for operations managers to devise a sustainable green inventory stems from the fact that emissions from production and inventory processes contribute substantially to global warming. This study proposes a multi-objective, multi-item fuzzy inventory and production management model with green investment in order to conserve the environment. The model is formulated such that all ordering quantities (decision variables) and some input parameters are fuzzified: the decision variables are trapezoidal fuzzy decision variables, and the fuzzified parameters are trapezoidal fuzzy numbers. The model has five objectives: maximizing profit, minimizing the total back-ordered quantity, minimizing the holding cost in the system, minimizing the total waste produced by the inventory system per cycle, and minimizing the total penalty cost due to green investment. The constraints include budget limits, space restrictions, an ordering-cost constraint on each item, environmental waste-disposal restrictions, pollution-control costs, electricity-consumption costs during production, and greenhouse-gas emission costs. To obtain the crisp equivalent of this fuzzy model, an expected-value method of defuzzification is used. The lexicographic method is then applied to the resulting crisp mathematical model to find compromise solutions. The methodology is demonstrated with a case study, and the solution obtained provides useful recommendations for industrial decision-makers.
Title: Multi-objective multi-item fuzzy inventory and production management problem involving fuzzy decision variables
Pub Date: 2024-04-27 | DOI: 10.1007/s13198-024-02337-4
Vaibhav Bisht, S. B. Singh
This research introduces a new method, called the Interval L_z-transform (ILz), designed to estimate the reliability indices of multi-state systems (MSS) even when data are uncertain or insufficient. Traditionally, precise values of the state probabilities and performance levels of each component were required, which can be difficult to obtain when data are lacking. To address this, the Interval L_z function is proposed, along with corresponding operators, enabling the calculation of interval-valued reliability indices for MSS. To demonstrate the effectiveness of the proposed method, it is applied to a numerical example of a series-parallel system, in which we determine interval-valued reliability indices such as reliability, availability, mean expected performance, and expected profit, considering uncertain values for the performance and failure rates of each multi-state component.
Title: Interval valued reliability indices assessment of multi-state system using interval L_z-transform
Pub Date: 2024-04-20 | DOI: 10.1007/s13198-024-02330-x
Nabeela Hasan, Kiran Chaudhary
The Industrial Internet of Things (IoT) brings together diverse services, industrial applications, sensors, machines, and databases. Industrial IoT is improving people's lives in various domains such as smart cities, e-healthcare, and agriculture. Although Industrial IoT shares some characteristics with consumer IoT, the two networks use separate cybersecurity techniques: Industrial IoT solutions are more likely to be incorporated into broader operational systems, whereas consumer IoT solutions are used by a single user for a particular purpose. As a result, Industrial IoT security solutions require more preparation and awareness to ensure the system's security and privacy. In this research paper, a technique based on random subspaces and blockchain is proposed. PCA is used to preprocess the data, and all communication and node details are shared through a blockchain to provide more secure communication. Integrating the blockchain into the existing approach yields better results than the other methods: the proposed methodology improves attack-detection efficiency compared with state-of-the-art machine learning techniques for IoT security.
Title: ρi-BLoM: a privacy preserving framework for the industrial IoT based on blockchain and machine learning
Pub Date: 2024-04-20 | DOI: 10.1007/s13198-024-02336-5
Mourad Achouri, Youcef Zennir, Cherif Tolba, Fares Innal, Chaima Bensaci, Yiliu Liu
The main purpose of this study is to propose a decision support system that handles the uncertainties in an atmospheric dispersion model and in meteorological data (wind speed and direction), which may degrade the model's accuracy. The system helps safety agencies make decisions and allocate the materials and human resources needed to handle potentially disastrous events. To investigate these issues and provide more reliable data, we propose an adaptive neuro-fuzzy inference system (ANFIS) enhanced by means of particle swarm optimization (PSO) to predict the concentration of sulfur dioxide released into the atmosphere. This method combines the ability of a fuzzy logic system to address uncertainty with the ability of a neural network to learn from data. Our study also estimates the severity index of the released material with the help of fuzzy logic. The results show that the presented method is applied successfully and can be a powerful alternative for dealing with sulfur dioxide releases.
Title: Adaptive-neuro fuzzy inference trained with PSO for estimating the concentration and severity of sulfur dioxide release
Pub Date: 2024-04-18 | DOI: 10.1007/s13198-024-02325-8
Amit Kumar, Pradeep Kumar
In this work, a novel SRF-PLL and DSOGI-PLL with a JAYA-based optimization approach is presented for the control of a unified power quality conditioner (UPQC) system. The proposed UPQC system is connected to a three-phase distribution system with nonlinear loads. The increased use of nonlinear loads has contributed to harmonic pollution in power distribution systems, elevating power quality issues that must be addressed efficiently. Since the UPQC consists of a shunt filter and a series filter, it is a most promising custom power device for mitigating power quality issues such as voltage swell, sag, phase unbalance, current and voltage harmonics, DC-link voltage regulation, and reactive power compensation. The SRF-PLL and DSOGI-PLL perform grid synchronization and reference-signal generation simultaneously on a single platform. Additionally, JAYA-based optimization is employed to determine the PI controller gains of both controllers. To validate the performance of the UPQC and its controller, the complete UPQC system was developed in MATLAB/Simulink as well as on a hardware platform. The accuracy of the simulation and hardware results, and their comparative power quality investigation, is found to be satisfactory.
{"title":"JAYA based optimization strategy for UPQC PI tuning based on novel SRF-DSOGI PLL control","authors":"Amit Kumar, Pradeep Kumar","doi":"10.1007/s13198-024-02325-8","DOIUrl":"https://doi.org/10.1007/s13198-024-02325-8","url":null,"abstract":"<p>In this work, a novel SRF-PLL and DSOGI-PLL with a JAYA-based optimization approach is presented for the control of a unified power quality conditioner (UPQC) system. The proposed UPQC system is connected to a three-phase distribution system with nonlinear loads. The increased use of nonlinear loads has contributed to harmonic pollution in power distribution systems, elevating power quality issues that must be addressed efficiently. Since the UPQC consists of a shunt filter and a series filter, it is among the most promising custom power devices for mitigating power quality issues such as voltage swell, sag, phase unbalance, current and voltage harmonics, DC-link voltage regulation, and reactive power compensation. The SRF-PLL and DSOGI-PLL perform grid synchronization and reference signal generation simultaneously on a single platform. Additionally, JAYA-based optimization has been employed to determine the PI controller gains of both controllers. To validate the performance of the UPQC and its controller, the complete UPQC system has been developed in MATLAB/Simulink and fabricated on a hardware platform. The simulation and hardware results, along with their comparative power quality investigation, are found to be satisfactory.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"15 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140623092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-04-18DOI: 10.1007/s13198-024-02333-8
Ronghui Hu, Tong Zhen
Data-driven electricity theft detection (ETD) based on machine learning and deep learning offers automation, real-time performance, and efficiency, but requires a large amount of labeled data to train models. However, the imbalance ratio between positive and unlabeled samples has reached 1:200, which significantly limits the accuracy of ETD models; this setting is known as positive-unlabeled learning. Down-sampling wastes a large number of negative samples, while up-sampling makes the ETD model less robust; both can lead to ETD models that perform well in experimental environments but poorly in production environments. In this context, this paper proposes a semi-supervised electricity theft detection algorithm based on fuzzy c-means and logistic regression cross detection (FCM-LR). First, a statistical feature set based on business data and load data is proposed to profile electricity users, reducing the complexity of the data structure. Furthermore, the FCM-LR method maximizes the utilization of unlabeled data and can discover new electricity theft patterns. The simulation results show that the theft detection performance of this method is significant, with Precision, Recall, F1, and Area Under the Curve all approaching 99%.
{"title":"Research on FCM-LR cross electricity theft detection based on big data user profile","authors":"Ronghui Hu, Tong Zhen","doi":"10.1007/s13198-024-02333-8","DOIUrl":"https://doi.org/10.1007/s13198-024-02333-8","url":null,"abstract":"<p>Data-driven electricity theft detection (ETD) based on machine learning and deep learning offers automation, real-time performance, and efficiency, but requires a large amount of labeled data to train models. However, the imbalance ratio between positive and unlabeled samples has reached 1:200, which significantly limits the accuracy of ETD models; this setting is known as positive-unlabeled learning. Down-sampling wastes a large number of negative samples, while up-sampling makes the ETD model less robust; both can lead to ETD models that perform well in experimental environments but poorly in production environments. In this context, this paper proposes a semi-supervised electricity theft detection algorithm based on fuzzy c-means and logistic regression cross detection (FCM-LR). First, a statistical feature set based on business data and load data is proposed to profile electricity users, reducing the complexity of the data structure. Furthermore, the FCM-LR method maximizes the utilization of unlabeled data and can discover new electricity theft patterns. The simulation results show that the theft detection performance of this method is significant, with Precision, Recall, F1, and Area Under the Curve all approaching 99%.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"31 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140623164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}