Dominic Strobl, Jörg F. Unger, Chady Ghnatios, Annika Robens-Radermacher
Thermal transient problems, essential for modeling applications like welding and additive metal manufacturing, are characterized by a dynamic evolution of temperature. Accurately simulating these phenomena is often computationally expensive, which limits their use in applications such as model parameter estimation or online process control. Model order reduction is explored as a way to preserve accuracy while reducing computation time. This article addresses challenges in developing reduced order models using the proper generalized decomposition (PGD) for transient thermal problems, with a specific treatment of the moving heat source within the reduced model. Factors affecting accuracy, convergence, and computational cost, such as the discretization method (finite element or finite difference), a dimensionless formulation, the size of the heat source, and the inclusion of material parameters as additional PGD variables, are examined across progressively complex examples. The results demonstrate the influence of these factors on the PGD model's performance and emphasize the importance of considering them when implementing such models. For the thermal examples studied, a PGD model with a finite difference discretization in time, a dimensionless representation, a mapping for the moving heat source, and no separation of the spatial domain yields the best approximation to the full order model.
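As a hedged illustration of the separated space-time representation that PGD builds on, u(x,t) ≈ Σ X_i(x) T_i(t), the sketch below solves a 1D transient heat equation with a Gaussian moving source by greedy rank-one enrichment and an alternating fixed point, using finite differences in time. The grid sizes, diffusivity, source path, mode count, and boundary handling are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Illustrative sizes and material parameter (assumptions, not the paper's values)
nx, nt, alpha = 101, 200, 1e-2
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 1.0, nt)
dx, dt = x[1] - x[0], t[1] - t[0]

# Spatial Laplacian (central FD) and backward-difference time operator
K = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2
D = (np.eye(nt) - np.eye(nt, k=-1)) / dt

# Moving Gaussian heat source on the grid: rows are space, columns are time
xc = 0.2 + 0.6 * t                          # assumed linear source path
F = np.exp(-((x[:, None] - xc[None, :]) / 0.05) ** 2)

modes, R = [], F.copy()                     # R: residual of u_t - alpha*u_xx = f
for _ in range(5):                          # greedy enrichment: one rank-1 mode per pass
    X, T = np.random.rand(nx), np.random.rand(nt)
    for _ in range(20):                     # alternating fixed point on (X, T)
        # time problem: ((X.X) D - alpha (X.K X) I) T = X^T R
        T = np.linalg.solve((X @ X) * D - alpha * (X @ K @ X) * np.eye(nt), X @ R)
        # space problem: ((T.(D T)) I - alpha (T.T) K) X = R T
        X = np.linalg.solve((T @ D @ T) * np.eye(nx) - alpha * (T @ T) * K, R @ T)
        X[0] = X[-1] = 0.0                  # crude Dirichlet enforcement, for brevity
        X /= np.linalg.norm(X)
    modes.append((X, T))
    R -= np.outer(X, D @ T) - alpha * np.outer(K @ X, T)   # peel off the new mode

U = sum(np.outer(X, T) for X, T in modes)   # separated approximation of u(x, t)
```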
{"title":"PGD in thermal transient problems with a moving heat source: A sensitivity study on factors affecting accuracy and efficiency","authors":"Dominic Strobl, Jörg F. Unger, Chady Ghnatios, Annika Robens-Radermacher","doi":"10.1002/eng2.12887","DOIUrl":"10.1002/eng2.12887","url":null,"abstract":"<p>Thermal transient problems, essential for modeling applications like welding and additive metal manufacturing, are characterized by a dynamic evolution of temperature. Accurately simulating these phenomena is often computationally expensive, thus limiting their applications, for example for model parameter estimation or online process control. Model order reduction, a solution to preserve the accuracy while reducing the computation time, is explored. This article addresses challenges in developing reduced order models using the proper generalized decomposition (PGD) for transient thermal problems with a specific treatment of the moving heat source within the reduced model. Factors affecting accuracy, convergence, and computational cost, such as discretization methods (finite element and finite difference), a dimensionless formulation, the size of the heat source, and the inclusion of material parameters as additional PGD variables are examined across progressively complex examples. The results demonstrate the influence of these factors on the PGD model's performance and emphasize the importance of their consideration when implementing such models. For thermal example, it is demonstrated that a PGD model with a finite difference discretization in time, a dimensionless representation, a mapping for a moving heat source, and a spatial domain non-separation yields the best approximation to the full order model.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12887","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140377274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Signal decomposition is crucial in several domains, particularly in dissecting the complex signals present in electrical power systems. Understanding the oscillations and patterns within these signals can significantly influence energy resource management, grid stability, and efficient system operation. This paper presents an enhanced decomposition method based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) that mitigates the inherent drawbacks of conventional CEEMDAN and its improved version. Unlike CEEMDAN's generalized noise approach, the proposed method introduces adaptive noise, improving the handling of noise in the target signal by incorporating a tailored filtering and updating process after each iteration. This leads to more accurate signal decomposition than traditional methods. Comprehensive tests were conducted on artificially generated signals characterized by mode mixing and varying frequency oscillations, as well as complex real-world electrical demand signals, generator shaft vibrations, and partial discharge signals. The results demonstrate that the proposed method outperforms traditional techniques in two significant respects. First, it provides superior spectral separation of the intrinsic mode functions (IMFs) of the signal, thereby enhancing decomposition accuracy. Second, it significantly reduces the number of sifting iterations, thereby alleviating the computational load. These advancements yield a more accurate and efficient framework for analyzing nonlinear and nonstationary signals.
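To make the ensemble structure concrete, here is a hedged, simplified sketch of a CEEMDAN-style loop built on plain EMD sifting: each stage averages the first IMF extracted from noise-perturbed copies of the current residue and peels it off. Full CEEMDAN injects the k-th EMD mode of the noise at stage k, and the proposed method additionally filters and updates the noise after each iteration; neither refinement is reproduced here.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, max_sift=50, tol=1e-8):
    """Extract one IMF by sifting: repeatedly subtract the mean of the
    upper/lower cubic-spline envelopes through the local extrema."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(max_sift):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 3 or len(mn) < 3:        # too few extrema for envelopes
            break
        mean_env = 0.5 * (CubicSpline(mx, h[mx])(t) + CubicSpline(mn, h[mn])(t))
        if np.abs(mean_env).max() < tol:
            break
        h = h - mean_env
    return h

def ceemdan_like(x, n_imfs=5, ensemble=50, eps=0.2, seed=0):
    """CEEMDAN-style loop: average the first IMF of noise-perturbed copies
    of the current residue, then subtract it and continue."""
    rng = np.random.default_rng(seed)
    noises = rng.standard_normal((ensemble, len(x)))
    residue, imfs = x.astype(float).copy(), []
    for _ in range(n_imfs):
        trials = [sift_imf(residue + eps * x.std() * w) for w in noises]
        imfs.append(np.mean(trials, axis=0))
        residue = residue - imfs[-1]
    return imfs, residue

# Example: a two-tone test signal prone to mode mixing
t = np.linspace(0.0, 1.0, 1024)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, trend = ceemdan_like(sig, n_imfs=3)
```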
{"title":"Enhanced complete ensemble EMD with superior noise handling capabilities: A robust signal decomposition method for power systems analysis","authors":"Manuel Soto Calvo, Han Soo Lee","doi":"10.1002/eng2.12862","DOIUrl":"10.1002/eng2.12862","url":null,"abstract":"<p>Signal decomposition is crucial in several domains, particularly in the dissection of complex signals present in electrical power systems. Understanding the oscillations and patterns within these signals can significantly influence energy resource management, grid stability, and efficient system operation. This paper presents an advanced enhanced decomposition method based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) to mitigate the inherent drawbacks of the conventional CEEMDAN and its improved version. Unlike CEEMDAN's generalized noise approach, the proposed method introduces adaptive noise, enhancing target signal noise handling by incorporating a tailored filtering and updating process after each iteration. This leads to more accurate signal decomposition compared to traditional methods. Comprehensive tests were conducted using artificially generated signals characterized by mode mixing, varying frequency oscillations, complex real-world electrical demand signals, generator axis vibrations and partial discharge signals. The results demonstrate that the proposed method outperforms traditional techniques in two significant aspects. First, it provides superior spectral separation of the intrinsic modes (IMF) of the signal, thereby enhancing decomposition accuracy. Second, it significantly reduced the number of shifting iterations, thereby alleviating the computational load. These advancements have led to a more accurate and efficient framework that is essential for analyzing nonlinear and nonstationary signals.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12862","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140221516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Named Data Networking (NDN) is a novel network architecture for the future internet in which data is named rather than addressed. This enables routers to forward data more efficiently based on its name, outperforming IP-based routing. Software-defined networking (SDN) is another networking architecture, one that separates the control plane from the data plane: the control plane decides on routing while the data plane forwards data packets. This separation increases routing flexibility and scalability. Integrating NDN and SDN enables numerous techniques for improving routing. This paper provides an in-depth examination of SDN-based routing approaches in NDN, emphasizing design principles, algorithms, and performance measures. We begin by summarizing the NDN architecture and delving into its essential components. We then examine the core routing concepts in NDN and categorize and study several routing solutions based on software-defined networks. Finally, we highlight the need for scalable, effective, and secure routing systems that can satisfy the expanding requirements of the contemporary internet, and we suggest open research topics in SDN-based NDN routing. This review provides an extensive overview of current centralized routing approaches in NDN, including their limitations and future possibilities.
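As a hedged illustration of the name-based forwarding these routing schemes build on, the sketch below performs longest-prefix match of an Interest name against a FIB; in an SDN-enabled design, the controller rather than a distributed protocol would install such entries. The names and faces are invented for illustration.

```python
# Hypothetical FIB: hierarchical name prefixes mapped to outgoing faces
fib = {
    "/video": "face1",
    "/video/hd": "face2",
    "/sensor/temp": "face3",
}

def lpm(name: str) -> str | None:
    """Longest-prefix match of an Interest name against the FIB."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):    # try longest prefix first
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return None

print(lpm("/video/hd/movie42"))  # -> face2 (matches /video/hd, not /video)
```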
{"title":"A review of SDN-enabled routing protocols for Named Data Networking","authors":"Sembati Yassine, Naja Najib, Jamali Abdellah","doi":"10.1002/eng2.12884","DOIUrl":"10.1002/eng2.12884","url":null,"abstract":"<p>Named Data Networking is a novel network architecture for the future internet in which data is named rather than addressed. This enables routers to forward data more efficiently based on its name, outperforming IP-based routing. Another networking architecture that divides the control plane and the data plane is software-defined networking (SDN). In an SDN network, the control plane decides on routing while the data plane forwards data packets. This separation increases routing flexibility and scalability. Numerous techniques to improve routing can be achieved with NDN and SDN integration. This paper provides an in-depth examination of routing approaches in NDN based on SDN, emphasizing design principles, algorithms, and performance measures. We begin by summarizing the NDN architecture and delving into its essential components. We next go into the core routing ideas in NDN and categorize and study several routing solutions based on Software Defined Networks. Finally, we highlight the need for scalable, effective, and secure routing systems that may satisfy the expanding requirements of the contemporary internet. We also suggest open research topics in NDN routing based on SDN. This review provides an extensive overview of current centralized routing approaches in NDN, including their limitations and future possibilities.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12884","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140224586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-rigid steel and composite joints are widely used in steel structures for their effectiveness and low construction costs, so research on the mechanical properties of semi-rigid connections cannot be neglected. The component-method analysis of mechanical components in connections, such as the T-stub, still has limitations. This study focuses on modifying the stiffness formula of the T-stub. Ten groups of T-stubs of different sizes were designed for monotonic loading tests, and accurate finite element models were established for comparison against the experimental results. Parametric analysis was performed on the finite element models, and the influence of each parameter on the tensile stiffness of the T-stub was analyzed. Finally, based on the correlation between tensile stiffness and the parameters, and using the probabilistic design system (PDS) module, a theoretical formula for the initial tensile stiffness of the T-stub is proposed that has a wide range of application and is easier to use.
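For orientation, a hedged sketch of the Eurocode 3 component-method baseline that initial-stiffness formulas for bolted T-stubs typically start from (EN 1993-1-8 stiffness coefficients, recalled from the standard; the paper's modified, correlation-based formula is not reproduced here):

```latex
k_{\mathrm{flange}} = \frac{0.9\,\ell_{\mathrm{eff}}\,t_f^{3}}{m^{3}},
\qquad
k_{\mathrm{bolt}} = \frac{1.6\,A_s}{L_b},
\qquad
K_{\mathrm{init}} = \frac{E}{\dfrac{1}{k_{\mathrm{flange}}} + \dfrac{1}{k_{\mathrm{bolt}}}},
```

where ℓ_eff is the effective length of the equivalent T-stub, t_f the flange thickness, m the bolt-to-web distance, A_s the tensile stress area of the bolt, L_b the bolt elongation length, and E Young's modulus; the flange-bending and bolt-tension components act in series.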
{"title":"Research on the initial tensile stiffness of T-stub based on correlation","authors":"Xinxing Xu, Shizhe Chen, Hui Yuan","doi":"10.1002/eng2.12869","DOIUrl":"10.1002/eng2.12869","url":null,"abstract":"<p>Semi-rigid steel and composite joints are widely used in steel structures for their effectiveness and low constructional costs and mechanical property research on the semi-rigid connections could not be ignored. There are still limitations to the performance analysis of mechanical components in connections based on component method, such as T-stub. This study mainly focuses on the modification of the stiffness formula of T-stub. Ten groups of different sizes of T-stub were designed for monotonic loading tests and accurate finite element models were established for comparable study with the experimental results. Parametric analysis was performed on the finite element models, and the influence of each parameter on the tensile stiffness of T-stub was analyzed. Finally, based on the correlation between tensile stiffness and the parameters, using the probabilistic design system (PDS) module, a T-stub initial tensile stiffness theoretical formula which has a wide range of application and easier to use is proposed.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12869","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140227283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiahui Wen, Xuesong Chu, Liang Xu, Guangming Yu, Liang Li
A probabilistic limit equilibrium framework combining an empirical load transfer factor with the anisotropy of soil cohesion is developed for pile-reinforced slope reliability analysis. The anisotropy of soil cohesion is determined under the condition that the thrust force direction is parallel to the major principal direction, and it is easily combined with the load transfer factor, which is related to the soil and pile parameters. The proposed method is illustrated on a homogeneous soil slope. Sensitivity studies of the pile parameters on the factor of safety (FS; calculated at the respective means of the soil parameters) and the reliability index β demonstrate that the anisotropy of soil cohesion has a more significant effect on β than on FS. The effect of the anisotropy of soil cohesion on FS is found to differ slightly across pile locations, whereas its effect on β is least when piles are drilled at the middle part of the slope and more significant when piles are drilled at the lower or upper part of the slope. The plots from the sensitivity studies provide an alternative tool for pile designs aiming at a target reliability index β. The proposed method contributes to pile-reinforced slope stability analysis within the limit equilibrium framework.
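To fix ideas on how FS and β relate, here is a hedged Monte Carlo sketch: sample the soil parameters, evaluate a factor-of-safety function, and back out the reliability index from the failure probability. The lognormal cohesion statistics and the linear FS surrogate are illustrative placeholders, not the paper's limit-equilibrium model with load transfer factor.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
c = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)  # cohesion [kPa], assumed stats
phi = rng.normal(30.0, 2.0, size=n)                      # friction angle [deg], assumed

def fs(c, phi):
    # placeholder surrogate for a limit-equilibrium FS computation
    return 0.02 * c + 0.025 * phi

pf = np.mean(fs(c, phi) < 1.0)   # probability of failure: FS < 1
beta = norm.ppf(1.0 - pf)        # reliability index from P_f
print(f"P_f = {pf:.4f}, beta = {beta:.2f}")
```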
{"title":"Probabilistic pile reinforced slope stability analysis using load transfer factor considering anisotropy of soil cohesion","authors":"Jiahui Wen, Xuesong Chu, Liang Xu, Guangming Yu, Liang Li","doi":"10.1002/eng2.12877","DOIUrl":"10.1002/eng2.12877","url":null,"abstract":"<p>A probabilistic limit equilibrium framework combining empirical load transfer factor and anisotropy of soil cohesion is developed to conduct pile-reinforced slope reliability analysis. The anisotropy of soil cohesion is determined conditioned on that the thrust force direction is parallel to the major principal direction and it is easily combined with load transfer factor, which are related with soil parameters, and pile parameters. The proposed method is illustrated against a homogeneous soil slope. The sensitivity studies of pile parameters on factor of safety (FS; calculated at respective means of soil parameters) and <i>β</i> demonstrated that the anisotropy of soil cohesion tends to pose significant effect on reliability index <i>β</i> than on FS. The effect of anisotropy of soil cohesion on FS is found to be slightly different under different pile locations, whereas its effect on <i>β</i> is observed to be least if piles are drilled at the middle part of slope and more significant effect is observed when piles are drilled at the lower and upper part of slope. The plots from the sensitivity studies provide an alternative tool for pile designs aiming at the target reliability index <i>β</i>. The proposed method contributes to the pile-reinforced slope stability within limit equilibrium framework.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12877","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140224244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiaojie Liu, Linqi Yan, Yang Zhang, Gang Ren, Mingdong Zhu
The lossless coding framework of High Efficiency Video Coding (HEVC) comprises prediction and entropy coding, while omitting the transform process. However, the residuals derived from prediction still exhibit significant spatial redundancy, which can compromise coding efficiency. This article presents an optimized scheme for HEVC intra lossless coding that integrates transform and quantization. In the proposed scheme, a residual block generated by HEVC is partitioned into two components: a coefficient block and an error block. The coefficient block, coded losslessly, is obtained by applying the transform and quantization processes employed in HEVC lossy coding. The error block is then derived by predicting the residual block from the generated coefficient block. Both the coefficient block and the error block can be effectively encoded with the existing entropy coding scheme of HEVC lossy coding. Rate-distortion optimization is employed to decide whether a residual block is partitioned into the two components. The proposed scheme is implemented in the HM 12.1 software. Using the discrete cosine transform and discrete sine transform kernels with a quantization parameter of 20, on the HEVC standard test sequences under the All Intra main configuration, the experiments show that the proposed method attains an average bit-rate reduction of 3.31%, with a maximum improvement of 12.29%, in comparison to HEVC lossless coding.
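A hedged sketch of the coefficient/error split described above: transform and quantize the residual to get the coefficient block, reconstruct, and keep the rounding error as the error block, so that the two together recover the residual exactly. A floating-point DCT and a fixed quantization step stand in for HEVC's integer transforms and QP-to-step mapping.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_residual(residual: np.ndarray, qstep: float = 8.0):
    """Split an integer residual block into a quantized coefficient block
    and the integer error left after reconstruction; both are coded losslessly."""
    coeffs = dctn(residual.astype(float), norm="ortho")
    q = np.round(coeffs / qstep).astype(int)         # coefficient block
    recon = idctn(q * qstep, norm="ortho")
    error = residual - np.round(recon).astype(int)   # error block
    return q, error

def rebuild(q: np.ndarray, error: np.ndarray, qstep: float = 8.0):
    """Decoder side: invert the quantized transform and add back the error."""
    return np.round(idctn(q * qstep, norm="ortho")).astype(int) + error

residual = np.random.default_rng(1).integers(-32, 32, size=(8, 8))
q, err = split_residual(residual)
assert np.array_equal(rebuild(q, err), residual)     # exact (lossless) recovery
```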
{"title":"An optimized transform and quantization scheme for HEVC intra lossless coding","authors":"Xiaojie Liu, Linqi Yan, Yang Zhang, Gang Ren, Mingdong Zhu","doi":"10.1002/eng2.12885","DOIUrl":"10.1002/eng2.12885","url":null,"abstract":"<p>The lossless coding framework of High Efficiency Video Coding (HEVC) comprises prediction and entropy coding, while omitting the transform process. However, residuals derived from prediction continue to demonstrate significant spatial redundancy, which has the potential to compromise the coding efficiency. This article presents an optimized scheme for HEVC intra lossless coding that integrates transform and quantization. In the proposed scheme, a residual block, generated by HEVC, is partitioned into two components: the coefficient block and the error block. The coefficient block, responsible for lossless coding, is obtained through the adoption of the transform and quantization processes employed in HEVC lossy coding. Subsequently, the error block is derived by predicting the residual block using the generated coefficient block. Both the coefficient block and the error block can be effectively encoded utilizing the existing entropy coding scheme in HEVC lossy coding. Rate-distortion optimization is employed to determine whether the residual block will be partitioned into two components. The proposed scheme is implemented into the HM 12.1 software. Through the utilization of the discrete cosine transform kernel and the discrete sine transform kernel, with a quantization parameter set to 20, on HEVC standard test sequences and the all intra main configuration file, empirical findings substantiate that the proposed methodology attains an average bit-rate reduction of 3.31%, with a maximum improvement of 12.29% in comparison to HEVC lossless coding.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12885","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140230238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pradnya Moon, Ganesh Yenurkar, Vincent O. Nyangaresi, Ayush Raut, Nikhil Dapkekar, Jay Rathod, Piyush Dabare
The biggest challenge faced by the deaf and mute community is that the people around them do not understand sign language, which they use to communicate with one another. Written communication can be used instead, but it is slower than face-to-face conversation. Many sign languages have been developed around the world because they are more effective in emergency situations than text-based communication. India, despite having a large deaf population of almost 18 million, has only around 250 skilled interpreters. The proposed system utilizes a custom convolutional neural network (CCNN) model to identify hand motions in order to address this issue. The system applies a filter to the hand image before passing it through a classifier that identifies the type of hand movement. The CCNN strategy employs a two-level algorithm to predict and evaluate symbols that are increasingly similar to one another, in order to recognize the presented symbol as precisely as possible. Convolutional neural networks (CNNs) can precisely identify a variety of gestures after being trained on large datasets of hand sign photographs. Because they use many layers of filters and pooling to extract relevant information from the input images, these networks recognize hand signs with an accuracy of 99.95%, which is much higher than previously built models such as SIGNGRAPH, SVM, KNN, CNN + Bi-LSTM, 3D-CNN, 2D CNN, and 1D CNN skeleton networks. The simulation results show that the proposed CCNN-based learning approach is useful for hand sign detection and future research when compared with existing machine learning models.
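As a hedged sketch of a stacked filter-and-pooling classifier of the kind described, here is a compact Keras model; the layer widths, 64x64 grayscale input, and 26-class output are illustrative assumptions, not the paper's CCNN or its two-level refinement.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ccnn(num_classes: int = 26, input_shape=(64, 64, 1)):
    """Small stacked-convolution classifier: conv filters extract features,
    pooling shrinks resolution, a softmax head assigns the sign class."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ccnn()
model.summary()
```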
{"title":"An improved custom convolutional neural network based hand sign recognition using machine learning algorithm","authors":"Pradnya Moon, Ganesh Yenurkar, Vincent O. Nyangaresi, Ayush Raut, Nikhil Dapkekar, Jay Rathod, Piyush Dabare","doi":"10.1002/eng2.12878","DOIUrl":"10.1002/eng2.12878","url":null,"abstract":"<p>The biggest challenge the deaf and dumb group faces is that individuals around them do not understand sign language, which they use to communicate with one another. Written communication is slower than face-to-face contact, despite the fact that it can be used. Many sign languages have been developed around the world because they are more effective in emergency situations than text-based communication. India in-spite of having the large deaf population of almost 18 million and having only around 250 trained/untrained; skilled interpreters. The proposed system can utilize a custom convolution neural networks (CCNNs) model to identify hand motions in order to resolve this issue. This system uses a filter to process the hand before sending it through a classifier to identify the type of hand movements. CCNN strategy employs two levels of algorithm to predict and evaluate symbols that are increasingly similar to one another in order to get as close to precisely recognizing the symbol presented as possible. Convolutional neural networks (CNNs) are able to precisely identify a variety of gestures after being trained on large datasets of hand sign photographs. As a result of their frequent usage of many layers of filters and pooling to extract relevant information from the input images, these networks can recognize hand signs with an accuracy rate of 99.95%, which is much greater than previously built models like SIGNGRAPH, SVM, KNN, CNN + Bi-LSTM, 3D-CNN and 2D CNN network and 1D CNN skeleton network. The simulation result shows that a suggested CCNN-based learning approach is useful for hand sign detection and future usage research when compared with existing machine learning models.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12878","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140230431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zaid Sifat, Md Tahmid Hussain, Mohd Anas Khan, Md Shahbaz Hussain, Adil Sarwar, Mohd Tariq, Mehedi Hasan
The multilevel inverter has received significant attention in recent years due to its numerous advantages. Because multilevel inverters are widely used in industries and applications that require a wide range of voltages, achieving high-quality voltage presents a number of challenges, and many studies have addressed the problem of unwanted harmonics in multilevel inverters. With the Selective Harmonic Elimination (SHE) technique, the inverter can be switched at a low frequency and the unwanted harmonics can be significantly reduced; the difficulty with SHE, however, lies in solving the transcendental equations and determining the optimum switching angles. To address this problem, a new and improved hybrid algorithm is proposed that combines two evolutionary algorithms: gray wolf optimization (GWO) with an improved convergence factor, and differential evolution (DE) with a dynamic scaling factor using a crossover operator. In this paper, the optimum switching angles for a packed U-cell five-level (PUC-5) inverter are estimated using the proposed algorithm for distinct modulation index values, and the simulation results are compared with those of other algorithms. The simulation results of the proposed IGWO-DE algorithm are confirmed experimentally.
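To make the transcendental system concrete, the sketch below writes the SHE equations for a generic two-angle, five-level waveform with quarter-wave symmetry (control the fundamental to a modulation index M, eliminate the 5th harmonic) and hands them to a stock root finder; the paper's hybrid IGWO-DE metaheuristic plays this role instead, and the PUC-5 specifics are not reproduced.

```python
import numpy as np
from scipy.optimize import fsolve

M = 0.8  # modulation index (illustrative value)

def she_equations(theta):
    t1, t2 = theta
    return [np.cos(t1) + np.cos(t2) - 2.0 * M,   # fundamental amplitude control
            np.cos(5 * t1) + np.cos(5 * t2)]     # 5th-harmonic elimination

angles = fsolve(she_equations, x0=[0.3, 1.0])    # angles in radians
print(np.degrees(angles), she_equations(angles)) # solution (deg) and residuals
```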
{"title":"Selective harmonic elimination in PUC-5 multilevel inverter using hybrid IGWO-DE algorithm","authors":"Zaid Sifat, Md Tahmid Hussain, Mohd Anas Khan, Md Shahbaz Hussain, Adil Sarwar, Mohd Tariq, Mehedi Hasan","doi":"10.1002/eng2.12883","DOIUrl":"https://doi.org/10.1002/eng2.12883","url":null,"abstract":"<p>The use of a multilevel inverter has received significant attention in recent years due to its numerous advantages. Because of the widespread use of multilevel inverters in industries and applications that require a wide range of voltages, achieving high-quality voltage has presented a number of challenges. Many studies have been carried out to address the problem of unwanted harmonics in multilevel inverters. The inverter switching can be done at a low frequency using the Selective Harmonic Elimination (SHE) technique, and the unwanted harmonics can be significantly reduced; however, the issue with SHE is solving the transcendental equations and determining the optimum switching angle. To address this problem, a new and improved hybrid algorithm is proposed that combines two evolutionary algorithms, gray wolf optimization (GWO) with an improved and new convergence factor and Differential evolution (DE) with a dynamic scaling factor using a crossover operator. In this paper, the optimum switching angles for a packed U cell 5-level inverter are estimated using the proposed algorithm for distinct modulation index values, and simulation results are compared with different algorithms. The simulation result of the proposed algorithm, IGWO-DE is confirmed through experiment.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12883","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142439066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hamid Reza Khajeha, Mansoor Fateh, Vahid Abolghasemi
Glaucoma is an eye disease leading to vision loss due to optic nerve damage. It is often asymptomatic, so timely diagnosis and treatment are crucial. In this article, we propose a novel approach for diagnosing glaucoma using deep neural networks trained on fundus images. The proposed approach involves several key steps, including data sampling, pre-processing, and classification. To address the data imbalance issue, we employ a combination of suitable image augmentation techniques and the Multi-Scale Attention Block (MAS Block) architecture in our deep neural network model. The MAS Block is a specific architecture design for CNNs that allows multiple convolutional filters of various sizes to capture features at several scales in parallel. This mitigates over-fitting and increases detection accuracy. Through extensive experiments on the ACRIMA dataset, we demonstrate that the proposed approach achieves high accuracy in diagnosing glaucoma; notably, we recorded the highest accuracy (97.18%) among previous studies. The results reveal the potential of our approach to improve the early detection of glaucoma and to offer doctors and clinicians more effective treatment strategies in the future. The proposed method shows promise in enhancing diagnostic accuracy and aiding healthcare professionals in making informed decisions.
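As a hedged sketch of the parallel multi-scale idea, the block below runs convolutions with several kernel sizes side by side, concatenates them, and gates the result with a simple channel attention; the kernel sizes and the squeeze-and-excitation style gate are assumptions for illustration, not the paper's MAS Block definition.

```python
import tensorflow as tf
from tensorflow.keras import layers

def multi_scale_attention_block(x, filters: int = 32):
    """Parallel convolutions at several kernel sizes capture features at
    multiple scales at once; a channel gate reweights the merged maps."""
    branches = [layers.Conv2D(filters, k, padding="same", activation="relu")(x)
                for k in (1, 3, 5)]              # parallel scales (assumed sizes)
    merged = layers.Concatenate()(branches)
    w = layers.GlobalAveragePooling2D()(merged)  # squeeze: per-channel summary
    w = layers.Dense(3 * filters, activation="sigmoid")(w)
    w = layers.Reshape((1, 1, 3 * filters))(w)   # excite: broadcastable weights
    return layers.Multiply()([merged, w])

inp = tf.keras.Input(shape=(128, 128, 3))        # e.g. a fundus image patch
out = multi_scale_attention_block(inp)
```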
{"title":"Diagnosis of glaucoma using multi-scale attention block in convolution neural network and data augmentation techniques","authors":"Hamid Reza Khajeha, Mansoor Fateh, Vahid Abolghasemi","doi":"10.1002/eng2.12866","DOIUrl":"10.1002/eng2.12866","url":null,"abstract":"<p>Glaucoma is defined as an eye disease leading to vision loss due to the optic nerve damage. It is often asymptomatic, thus, timely diagnosis and treatment is crucial. In this article, we propose a novel approach for diagnosing glaucoma using deep neural networks, trained on fundus images. Our proposed approach involves several key steps, including data sampling, pre-processing, and classification. To address the data imbalance issue, we employ a combination of suitable image augmentation techniques and Multi-Scale Attention Block (MAS Block) architecture in our deep neural network model. The MAS Block is a specific architecture design for CNNs that allows multiple convolutional filters of various sizes to capture features at several scales in parallel. This will prevent the over-fitting problem and increases the detection accuracy. Through extensive experiments with the ACRIMA dataset, we demonstrate that our proposed approach achieves high accuracy in diagnosing glaucoma. Notably, we recorded the highest accuracy (97.18%) among previous studies. The results from this study reveal the potential of our approach to improve early detection of glaucoma and offer more effective treatment strategies for doctors and clinicians in the future. Timely diagnosis plays a crucial role in managing glaucoma since it is often asymptomatic. Our proposed method utilizing deep neural networks shows promise in enhancing diagnostic accuracy and aiding healthcare professionals in making informed decisions.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12866","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140251325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the results of the development of stabilization measures aimed at removing zinc with the melting products and accumulating titanium in the hearth of a blast furnace. The development and practical use of such measures is relevant because of unstable fuel and raw material conditions in cast iron production, where stabilization is a complex and difficult task, and because of the need to extend the campaign of blast furnaces during the overhaul period. The negative effect of zinc oxides on the condition of the blast furnace shaft lining, accompanied by accretion formation, and the increase in specific coke consumption that occurs when zinc circulates in the volume of the blast furnace require measures to remove zinc with the smelting products. The article proposes such measures, which consist of flushing according to a proposed schedule during the operation of the blast furnace at planned blowing parameters and with the provision of the necessary thermal reserve. To lengthen the campaign of a blast furnace, one of the most common methods for protecting the hearth lining is the periodic introduction of titanium-containing materials into the charge. The entry of titanium oxides into the furnace is, as a rule, ensured by using concentrate or specially prepared ilmenite briquettes with a high titanium content as part of the sinter charge, which can also be introduced directly into the blast furnace charge. The article analyzes the experience of using titanium-containing materials as part of a blast furnace charge and formulates measures to intensify skull formation in the hearth.
{"title":"Development of stabilization measures aimed at removing zinc with smelting products and accumulating titanium in the hearth of a blast furnace","authors":"Yurii Semenov, Viktor Horupakha, Serhii Vashchenko, Oleksandr Khudyakov, Ievhen Shumelchyk, Kostiantyn Baiul","doi":"10.1002/eng2.12881","DOIUrl":"https://doi.org/10.1002/eng2.12881","url":null,"abstract":"<p>This paper presents the results of the development of stabilization measures aimed at the removal of zinc with the products of melting and accumulation of titanium in the hearth of a blast furnace. The relevance of the development and use in practice of such measures is due to the unstable fuel and raw materials conditions for the production of cast iron, when their stabilization is a complex and difficult task, as well as the need to extend the campaign of blast furnaces during the overhaul period. The negative effect of zinc oxides on the condition of the blast furnace shaft lining, accompanied by slab formation, and the overconsumption of specific coke consumption, which occurs when zinc circulates in the volume of the blast furnace, require measures to remove zinc from the smelting products. The article proposes such measures, which consist of flushing according to the proposed schedule during the operation of the blast furnace at planned blowing parameters and with the provision of the necessary thermal reserve. In order to lengthen the campaign of a blast furnace, one of the most common methods for protecting the hearth lining is the periodic introduction of titanium-containing materials into the charge of blast furnaces. The entry of titanium oxides into the furnace, as a rule, is ensured by the use of concentrate or specially prepared ilmenite briquettes with a high titanium content as part of the sinter charge, which can be introduced directly into the composition of the blast furnace charge. The article analyzes the experience of using titanium-containing materials as part of a blast furnace charge and formulates measures to intensify skull formation in the hearth.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":null,"pages":null},"PeriodicalIF":1.8,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.12881","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142438930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}