Advancing photovoltaic system design: An enhanced social learning swarm optimizer with guaranteed stability
Computers in Industry, Volume 164, Article 104209
Pub Date: 2024-11-08 | DOI: 10.1016/j.compind.2024.104209
Lingyun Deng, Sanyang Liu
Parameter estimation of photovoltaic (PV) models is, mathematically, a complicated nonlinear multimodal optimization problem with box constraints. Although various methodologies have been explored in the literature, their performance tends to be unstable owing to inadequate adaptability. In this paper, an enhanced social learning swarm optimizer (ESLPSO) is developed to achieve more reliable parameter estimation for PV models. First, under the non-stagnant distribution assumption, we derive a necessary and sufficient condition that guarantees the stability of the basic social learning swarm optimizer (SLPSO). Second, a nonlinear control coefficient is introduced to balance convergence and diversity. Finally, an interactive learning mechanism is devised to preserve population diversity. The efficacy of ESLPSO is validated on three widely used PV models and several scalable optimization problems. Statistical results highlight the robustness and competitiveness of ESLPSO compared with other state-of-the-art methodologies.
{"title":"Advancing photovoltaic system design: An enhanced social learning swarm optimizer with guaranteed stability","authors":"Lingyun Deng, Sanyang Liu","doi":"10.1016/j.compind.2024.104209","DOIUrl":"10.1016/j.compind.2024.104209","url":null,"abstract":"<div><div>Parameter estimation of photovoltaic (PV) models, mathematically, is a typical complicated nonlinear multimodal optimization problem with box constraints. Although various methodologies have been explored in the literature, their performance tends to be unstable owing to inadequate adaptability. In this paper, an enhanced social learning swarm optimizer (ESLPSO) is developed to achieve more reliable parameter estimation in PV models. Firstly, using the non-stagnant distribution assumption, we obtain a sufficient and necessary condition to guarantee the stability of the basic social learning swarm optimizer (SLPSO). Secondly, a nonlinear control coefficient is introduced to balance convergence and diversity. Finally, an interactive learning mechanism is devised to preserve population diversity. The efficacy of ESLPSO is validated using three extensively applied PV models and several scalable optimization problems. Statistical outcomes highlight the robustness and competitiveness of ESLPSO compared to other state-of-the-art methodologies.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104209"},"PeriodicalIF":8.2,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep reinforcement learning approach for online and concurrent 3D bin packing optimisation with bin replacement strategies
Computers in Industry, Volume 164, Article 104202
Pub Date: 2024-11-04 | DOI: 10.1016/j.compind.2024.104202
Y.P. Tsang , D.Y. Mo , K.T. Chung , C.K.M. Lee
In the realm of robotic palletisation, the quest for optimal space utilisation remains vital but also presents a critical challenge, particularly due to the constraints of decision complexity and the need for real-time decision-making without complete prior information. Widely adopted rule-based heuristic approaches are easy to use but fail to adapt dynamically to the complex and changing landscape of online 3D bin packing. This study is motivated by the need for a system that is both more agile and more intelligent, capable of managing the intricacies of dual-bin scenarios and the variable inflow of items. This study introduces a novel deep reinforcement learning (DRL) optimiser, employing a double deep Q-network (DDQN) to obtain optimal packing policies in an online environment with two proposed bin replacement strategies. This approach surpasses the limitations of previous methods by facilitating the simultaneous management of multiple bins and enabling on-the-fly adjustments to decisions based on limited prior knowledge. In a case study involving a logistics company, the proposed optimiser demonstrated a significant improvement in average space utilisation across various lookahead scenarios, outperforming traditional heuristics in simulation experiments. The proposed optimiser contributes significantly to the economic and environmental sustainability of robotic warehouses, positioning itself as a cornerstone for the future of smart logistics.
{"title":"A deep reinforcement learning approach for online and concurrent 3D bin packing optimisation with bin replacement strategies","authors":"Y.P. Tsang , D.Y. Mo , K.T. Chung , C.K.M. Lee","doi":"10.1016/j.compind.2024.104202","DOIUrl":"10.1016/j.compind.2024.104202","url":null,"abstract":"<div><div>In the realm of robotic palletisation, the quest for optimal space utilization remains vital but also presents a critical challenge, particularly due to the constraints of decision complexity and the need for real-time decision-making without complete prior information. The widely adopted rule-based heuristics approaches were ease to use, but failed to adapt dynamically to the complex and changing landscape of online 3D bin packing. This study is motivated by the need for a system that is both more agile and intelligent, capable of managing the intricacies of dual-bin scenarios and the variable inflow of items. This study introduces a novel deep reinforcement learning (DRL) optimiser, employing a double deep Q-network (DDQN) to obtain optimal packing policies in an online environment with two proposed bin replacement strategies. This approach surpasses the limitations of previous methods by facilitating the simultaneous management of multiple bins and enabling on-the-fly adjustments to decisions based on limited prior knowledge. In a case study involving a logistics company, the proposed optimizer demonstrated a significant improvement in average space utilization across various lookahead scenarios, outperforming traditional heuristics in simulation experiments. The proposed optimiser contributes significantly to the economic and environmental sustainability of robotic warehouses, positioning itself as a cornerstone for the future of smart logistics.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104202"},"PeriodicalIF":8.2,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of coal dust parameters via an effective image-based deep learning model
Computers in Industry, Volume 164, Article 104200
Pub Date: 2024-11-01 | DOI: 10.1016/j.compind.2024.104200
Zheng Wang , Shukai Yang , Jiaxing Zhang , Zhaoxiang Ji
In high-pressure transportation, characterizing the leakage status of coal dust is an effective means to reduce potential safety hazards in the optimization of energy structures, and it is also conducive to disaster prevention and safety management. With existing methods, manual inspection of leakage points requires advanced measurement skills, entails significant maintenance costs, and is time-consuming and challenging. Therefore, a synergetic network structure based on instance segmentation, integrated with multi-regression models, is proposed. This model is used to study the detailed characteristics of complex coal particles and estimate coal dust parameters, providing a practical means for onsite environmental assessment. First, a cascade mechanism of ghost convolution and a depthwise split attention module is added to the backbone network to reduce the number of network parameters and improve the channel correlation of coal dust images. Second, a multiscale feature pyramid network structure is introduced to increase low-level feature information in coal dust images and enhance attention to the small-particle characteristics of coal dust. Moreover, the head structure of the segmentation branch is optimized via a parameter-free attention module to improve mask precision. Finally, the optimized elastic network fusion model is used to estimate multiple coal dust parameters through regression. The experimental results show that the proposed model outperforms the other models in terms of segmentation accuracy, intersection ratio, and recall. The average error in the mass distribution characterization is less than ±10 %, which meets theoretical expectations. An ideal balance is achieved between computational speed and segmentation accuracy.
{"title":"Estimation of coal dust parameters via an effective image-based deep learning model","authors":"Zheng Wang , Shukai Yang , Jiaxing Zhang , Zhaoxiang Ji","doi":"10.1016/j.compind.2024.104200","DOIUrl":"10.1016/j.compind.2024.104200","url":null,"abstract":"<div><div>In high-pressure transportation, characterizing the leakage status of coal dust is an effective means to reduce potential safety hazards in the optimization of energy structures, and it is also conducive to disaster prevention and safety management. Given the existing methods, manual inspection of leakage points requires high measurement skills, entails significant maintenance costs, and is time-consuming and challenging. Therefore, a synergetic network structure based on an instance segmentation, integrated with multiregression models, is proposed. This model is used to study the detailed characteristics of complex coal particles and estimate coal dust parameters, providing a practical means for onsite environmental assessment. First, a cascade mechanism of ghost convolution and a depthwise split attention module is added to the backbone network to reduce the number of network parameters and improve the channel correlation of coal dust images. Second, the multiscale feature pyramid network structure is introduced to increase low-level feature information in coal dust images and enhance attention to small particle characteristics of coal dust. Moreover, the head structure of the segmentation branch is optimized via the parameter-free attention module to improve mask precision. Finally, the optimized elastic network fusion model is used to estimate multiple regression coal dust parameters. The experimental results show that the proposed model outperforms the other models in terms of segmentation accuracy, the intersection ratio, and the recall ratio. The average error in the mass distribution characterization is less than ±10 %, which meets the theoretical expectations. An ideal balance is achieved between computational speed and segmentation accuracy.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104200"},"PeriodicalIF":8.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing a BIM-enabled robotic manufacturing framework to facilitate mass customization of prefabricated buildings
Computers in Industry, Volume 164, Article 104201
Pub Date: 2024-10-30 | DOI: 10.1016/j.compind.2024.104201
Saeid Metvaei, Kamyab Aghajamali, Qian Chen, Zhen Lei
Industrialized construction has been accepted as an effective production method for building project stakeholders to improve installation quality. Recent advancements in industrialized construction have focused on parametric designs for manufacturing and assembly to ensure accurate information flows and workflows across different project stages; however, they have not adequately addressed the challenges of mass customization of building projects to meet the diverse needs of communities. This study develops a technological framework based on Building Information Modeling (BIM) processes for mass customization of prefabricated buildings, which consists of parametric design and robotic manufacturing (RM) information flows to improve design flexibility and manufacturing precision. A proof-of-concept case study of a single-family house built with Light Gauge Steel (LGS) wall frames was conducted to demonstrate the usability of the proposed framework. Findings show that the BIM-RM framework not only helps bridge the technological interoperability gap between BIM and RM programs but also contributes to improved scalability, efficiency, and cost-effectiveness of design-to-manufacturing processes in construction projects.
{"title":"Developing a BIM-enabled robotic manufacturing framework to facilitate mass customization of prefabricated buildings","authors":"Saeid Metvaei , Kamyab Aghajamali , Qian Chen , Zhen Lei","doi":"10.1016/j.compind.2024.104201","DOIUrl":"10.1016/j.compind.2024.104201","url":null,"abstract":"<div><div>Industrialized construction has been accepted as an effective production method for building project stakeholders to improve installation quality. Recent advancements in industrialized construction have focused on parametric designs for manufacturing and assembly to ensure accurate information flows and workflows across different project stages, however, they have not adequately addressed the challenges in mass customization of building projects to meet the diverse needs of communities. This study develops a technological framework based on Building Information Modeling (BIM) processes for mass customization of prefabricated buildings, which consists of parametric design and robotic manufacturing (RM) information flows to improve design flexibility and manufacturing precision. A proof of concept case study of a single-family house built with Light Gauge Steel (LGS) wall frames was conducted to demonstrate the usability of the proposed framework. Findings show that the BIM-RM framework not only helps bridge the technological interoperability gap between BIM and RM programs but also contributes to improved scalability, efficiency, and cost-effectiveness of design-to-manufacturing processes in construction projects.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104201"},"PeriodicalIF":8.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142555585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging asymmetric price limits for financial stability in industrial applications: An agent-based model
Computers in Industry, Volume 164, Article 104197
Pub Date: 2024-10-30 | DOI: 10.1016/j.compind.2024.104197
Xinhui Yang , Jie Zhang , Qing Ye , Victor Chang
How to upgrade business processes to improve production efficiency is an ongoing concern in industrial research. While previous studies have extensively examined various prioritization schemes at each stage of the business process, there has been a lack of investigation into the financial resources required for their implementation. The attainment of sufficient and stable financial support necessitates stability in stock prices, making the control of significant volatility in stock markets a critical issue. This study examines the effectiveness of three design schemes of price limit policy, a prevalent policy that intends to control significant volatility in financial markets and stabilize the market. Utilizing a heterogeneous agent-based model that simulates trading agents' processes of updating strategies through genetic programming algorithms and incorporates specialized designs for price limit policies, this study demonstrates that an asymmetric limit policy—consisting solely of a lower price limit (without an upper price limit)—can significantly enhance market quality by achieving lower volatility, higher market liquidity and better price effectiveness. Furthermore, we investigate the applicable conditions of asymmetric price limits. The findings suggest that an extremely restrictive limit range could lead to volatility spillover, while a 10 % range is deemed appropriate for achieving better efficiency. Additionally, the asymmetric price limit mechanism has the potential to significantly reduce market volatility by up to 12.5 % in volatile, low liquidity, and low price efficiency markets, which aligns with the declining range from bubble-crash periods to stable periods in the Chinese stock market. These results are further supported by sensitivity analysis.
{"title":"Leveraging asymmetric price limits for financial stability in industrial applications: An agent-based model","authors":"Xinhui Yang , Jie Zhang , Qing Ye , Victor Chang","doi":"10.1016/j.compind.2024.104197","DOIUrl":"10.1016/j.compind.2024.104197","url":null,"abstract":"<div><div>How to upgrade business processes to improve production efficiency is an ongoing concern in industrial research. While previous studies have extensively examined various prioritization schemes at each stage of the business process, there has been a lack of investigation into the financial resources required for their implementation. The attainment of sufficient and stable financial support necessitates stability in stock prices, making the control of significant volatility in stock markets a critical issue. This study examines the effectiveness of three design schemes of price limit policy, a prevalent policy that intends to control significant volatility in financial markets and stabilize the market. Utilizing a heterogeneous agent-based model that simulates trading agents' processes of updating strategies through genetic programming algorithms and incorporates specialized designs for price limit policies, this study demonstrates that an asymmetric limit policy—consisting solely of a lower price limit (without an upper price limit)—can significantly enhance market quality by achieving lower volatility, higher market liquidity and better price effectiveness. Furthermore, we investigate the applicable conditions of asymmetric price limits. The findings suggest that an extremely restrictive limit range could lead to volatility spillover, while a 10 % range is deemed appropriate for achieving better efficiency. Additionally, the asymmetric price limit mechanism has the potential to significantly reduce market volatility by up to 12.5 % in volatile, low liquidity, and low price efficiency markets, which aligns with the declining range from bubble-crash periods to stable periods in the Chinese stock market. These results are further supported by sensitivity analysis.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104197"},"PeriodicalIF":8.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142555586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep transfer learning model for online monitoring of surface roughness in milling with variable parameters
Computers in Industry, Volume 164, Article 104199
Pub Date: 2024-10-24 | DOI: 10.1016/j.compind.2024.104199
Kai Zhou , Pingfa Feng , Feng Feng , Haowen Ma , Nengsheng Kang , Jianjian Wang
Surface roughness is crucial for the functional and aesthetic properties of mechanical components and must be carefully controlled during machining. However, predicting it under varying machining parameters is challenging due to limited experimental data and fluctuating factors like tool wear and vibration. This study develops a deep transfer learning model that incorporates the correlation alignment method and tool wear to enhance model generalization and reduce data acquisition costs. It utilizes multi-sensor data and the ResNet18 with a convolutional block attention module (CBAM-ResNet) to extract features with improved generalization and accuracy for monitoring milled surface roughness under varying conditions. The performance of the model is evaluated from different perspectives. First, the proposed model achieves high accuracy with fewer than 500 experimental samples from the target domain by using the CORAL module in the CBAM-ResNet model. This demonstrates the model's strong generalization capability by minimizing second-order statistical discrepancies between different datasets. Second, ablation experiments reveal a significant reduction in test error when incorporating CORAL and tool wear, highlighting their contributions to improved model generalization. Integrating tool wear information significantly reduces test errors across various transfer conditions, as it reflects changes in cutting force, vibration, and built-up edge formation. Third, comparisons with existing deep transfer models further emphasize the advantages of the proposed approach in improving model generalization. In summary, the proposed surface roughness model, which incorporates tool wear and multi-sensor signal features as inputs and employs feature transfer and CBAM-ResNet, demonstrates superior generalization and accuracy across various machining parameters.
{"title":"A deep transfer learning model for online monitoring of surface roughness in milling with variable parameters","authors":"Kai Zhou , Pingfa Feng , Feng Feng , Haowen Ma , Nengsheng Kang , Jianjian Wang","doi":"10.1016/j.compind.2024.104199","DOIUrl":"10.1016/j.compind.2024.104199","url":null,"abstract":"<div><div>Surface roughness is crucial for the functional and aesthetic properties of mechanical components and must be carefully controlled during machining. However, predicting it under varying machining parameters is challenging due to limited experimental data and fluctuating factors like tool wear and vibration. This study develops a deep transfer learning model that incorporates the correlation alignment method and tool wear to enhance model generalization and reduce data acquisition costs. It utilizes multi-sensor data and the ResNet18 with a convolutional block attention module (CBAM-ResNet) to extract features with improved generalization and accuracy for monitoring milled surface roughness under varying conditions. The performance of the model is evaluated from different perspectives. First, the proposed model achieves high accuracy with fewer than 500 experimental samples from the target domain by using the CORAL module in the CBAM-ResNet model. This demonstrates the model's strong generalization capability by minimizing second-order statistical discrepancies between different datasets. Second, ablation experiments reveal a significant reduction in test error when incorporating CORAL and tool wear, highlighting their contributions to improved model generalization. Integrating tool wear information significantly reduces test errors across various transfer conditions, as it reflects changes in cutting force, vibration, and built-up edge formation. Third, comparisons with existing deep transfer models further emphasize the advantages of the proposed approach in improving model generalization. In summary, the proposed surface roughness model, which incorporates tool wear and multi-sensor signal features as inputs and employs feature transfer and CBAM-ResNet, demonstrates superior generalization and accuracy across various machining parameters.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104199"},"PeriodicalIF":8.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel FuseDecode Autoencoder for industrial visual inspection: Incremental anomaly detection improvement with gradual transition from unsupervised to mixed-supervision learning with reduced human effort
Computers in Industry, Volume 164, Article 104198
Pub Date: 2024-10-23 | DOI: 10.1016/j.compind.2024.104198
Nejc Kozamernik, Drago Bračun
The industrial implementation of automated visual inspection leveraging deep learning is limited due to the labor-intensive labeling of datasets and the lack of datasets containing images of defects, which is especially the case in high-volume manufacturing with zero defect constraints. In this study, we present the FuseDecode Autoencoder (FuseDecode AE), a novel reconstruction-based anomaly detection model featuring incremental learning. Initially, the FuseDecode AE operates in an unsupervised manner on noisy data containing predominantly normal images and a small number of anomalous images. The predictions generated assist experts in distinguishing between normal and anomalous samples. Later, it adapts to weakly labeled datasets by retraining in a semi-supervised manner on normal data augmented with synthetic anomalies. As more real anomalous samples become available, the model further refines its capabilities through mixed-supervision learning on both normal and anomalous samples. Evaluation on a real industrial dataset of coating defects shows the effectiveness of the incremental learning approach. Furthermore, validation on the publicly accessible MVTec AD dataset demonstrates the FuseDecode AE's superiority over other state-of-the-art reconstruction-based models. These findings underscore its generalizability and effectiveness in automated visual inspection tasks, particularly in industrial settings.
{"title":"A novel FuseDecode Autoencoder for industrial visual inspection: Incremental anomaly detection improvement with gradual transition from unsupervised to mixed-supervision learning with reduced human effort","authors":"Nejc Kozamernik, Drago Bračun","doi":"10.1016/j.compind.2024.104198","DOIUrl":"10.1016/j.compind.2024.104198","url":null,"abstract":"<div><div>The industrial implementation of automated visual inspection leveraging deep learning is limited due to the labor-intensive labeling of datasets and the lack of datasets containing images of defects, which is especially the case in high-volume manufacturing with zero defect constraints. In this study, we present the FuseDecode Autoencoder (FuseDecode AE), a novel reconstruction-based anomaly detection model featuring incremental learning. Initially, the FuseDecode AE operates in an unsupervised manner on noisy data containing predominantly normal images and a small number of anomalous images. The predictions generated assist experts in distinguishing between normal and anomalous samples. Later, it adapts to weakly labeled datasets by retraining in a semi-supervised manner on normal data augmented with synthetic anomalies. As more real anomalous samples become available, the model further refines its capabilities through mixed-supervision learning on both normal and anomalous samples. Evaluation on a real industrial dataset of coating defects shows the effectiveness of the incremental learning approach. Furthermore, validation on the publicly accessible MVTec AD dataset demonstrates the FuseDecode AE's superiority over other state-of-the-art reconstruction-based models. These findings underscore its generalizability and effectiveness in automated visual inspection tasks, particularly in industrial settings.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104198"},"PeriodicalIF":8.2,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid quality control for recycled coarse aggregates (RCA) streams: Multi-sensor integration for advanced contaminant detection
Computers in Industry, Volume 164, Article 104196
Pub Date: 2024-10-09 | DOI: 10.1016/j.compind.2024.104196
Cheng Chang , Francesco Di Maio , Rajeev Bheemireddy , Perry Posthoorn , Abraham T. Gebremariam , Peter Rem
Recycling coarse aggregates from construction and demolition waste is essential for sustainable construction practices. However, the quality of recycled coarse aggregates (RCA) often fluctuates significantly, in contrast to the more stable quality of natural aggregates. Contaminants in RCA notably compromise its quality and usability. Therefore, automating the quality control of RCA is necessary for the recycling industry. This study introduces an industry-focused, innovative, and rapid quality control system that combines Laser-Induced Breakdown Spectroscopy (LIBS) with 3D scanning technologies to enhance the detection of contaminants in RCA streams. The system involves a synchronized application of LIBS for spectral analysis and 3D scanning for the physical characterization of different materials. Results reveal that the dependability of single-shot LIBS analysis has been enhanced, thus elevating the precision of contaminant detection. This improvement is achieved by accounting for the laser shot's angle of incidence and focal length adjustments. The introduced technology holds potential for application in the real-time examination of substantial volumes of RCA, facilitating a rapid and reliable quality control method. This rapid assessment technique delivers online data about the concentration of contaminants in RCA, including recycled fine aggregates, cement paste, bricks, foam, glass, gypsum, mineral fibers, plastics, and wood. This data is both essential and sufficient for choosing a cost-effective mortar recipe and guaranteeing the performance of the final concrete product in terms of strength and durability in construction projects. The system can monitor the quality of RCA flows at throughputs of 50 tons per hour per conveyor, characterizing approximately 4000 particles in every ton of RCA, in this way signaling the most critical contaminants at levels of less than 50 parts per million. With these characteristics, the system could also become relevant for other applications, such as characterizing mining waste or solid biofuels for power plants.
{"title":"Rapid quality control for recycled coarse aggregates (RCA) streams: Multi-sensor integration for advanced contaminant detection","authors":"Cheng Chang , Francesco Di Maio , Rajeev Bheemireddy , Perry Posthoorn , Abraham T. Gebremariam , Peter Rem","doi":"10.1016/j.compind.2024.104196","DOIUrl":"10.1016/j.compind.2024.104196","url":null,"abstract":"<div><div>Recycling coarse aggregates from construction and demolition waste is essential for sustainable construction practices. However, the quality of recycled coarse aggregates (RCA) often fluctuates significantly, in contrast to the more stable quality of natural aggregates. Contaminants in RCA notably compromise its quality and usability. Therefore, automating the quality control of RCA is necessary for the recycling industry. This study introduces an industry-focused, innovative, and rapid quality control system that combines Laser-Induced Breakdown Spectroscopy (LIBS) with 3D scanning technologies to enhance the detection of contaminants in RCA streams. The system involves a synchronized application of LIBS for spectral analysis and 3D scanning for the physical characterization of different materials. Results reveal that the dependability of single-shot LIBS analysis has been enhanced, thus elevating the precision of contaminant detection. This improvement is achieved by accounting for the laser shot's angle of incidence and focal length adjustments. The introduced technology holds potential for application in the real-time examination of substantial volumes of RCA, facilitating a rapid and reliable quality control method. This rapid assessment technique delivers online data about the concentration of contaminants in RCA, including recycled fine aggregates, cement paste, bricks, foam, glass, gypsum, mineral fibers, plastics, and wood. This data is both essential and sufficient for choosing a cost-effective mortar recipe and guaranteeing the performance of the final concrete product in terms of strength and durability in construction projects. The system can monitor the quality of RCA flows at throughputs of 50 tons per hour per conveyor, characterizing approximately 4000 particles in every ton of RCA, in this way signaling the most critical contaminants at levels of less than 50 parts per million. With these characteristics, the system could also become relevant for other applications, such as characterizing mining waste or solid biofuels for power plants.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104196"},"PeriodicalIF":8.2,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Apple varieties and growth prediction with time series classification based on deep learning to impact the harvesting decisions
Computers in Industry, Volume 164, Article 104191
Pub Date: 2024-09-28 | DOI: 10.1016/j.compind.2024.104191
Mustafa Mhamed , Zhao Zhang , Wanjia Hua , Liling Yang , Mengning Huang , Xu Li , Tiecheng Bai , Han Li , Man Zhang
Apples are among the most popular fruits globally due to their health and nutritional benefits for humans. Artificial intelligence in agriculture has advanced, but machine vision, which improves efficiency, speed, and production, still needs improvement. Managing apple development from planting to harvest affects productivity, quality, and economics. Firstly, in this study a vision system platform with a range of camera types conforming to orchard standard specifications was established for data gathering, yielding two new apple collections, Orchard Fuji Growth Stages (OFGS) and Orchard Apple Varieties (OAV), with preliminary benchmark assessments. Secondly, this research proposes the orchard apple vision transformer method (POA-VT), incorporating novel regularization techniques (CRT) that boost efficiency and optimize the loss functions. The highest accuracy scores are 91.56 % for OFGS and 94.20 % for OAV. Thirdly, an ablation study is conducted to demonstrate the importance of CRT to the proposed method. Fourthly, CRT is compared with standard regularization functions and outperforms these baselines. Finally, time series analyses predict the ‘Fuji’ growth stage, with training and validation RMSE of 19.29 and 19.26, respectively. The proposed method offers high efficiency across multiple tasks and improves the automation of apple systems. It is highly flexible in handling various tasks related to apple fruits and can integrate with real-time systems such as UAVs and sorting systems. This research benefits apple robotic vision, development policies, time-sensitive harvesting schedules, and decision-making.
{"title":"Apple varieties and growth prediction with time series classification based on deep learning to impact the harvesting decisions","authors":"Mustafa Mhamed , Zhao Zhang , Wanjia Hua , Liling Yang , Mengning Huang , Xu Li , Tiecheng Bai , Han Li , Man Zhang","doi":"10.1016/j.compind.2024.104191","DOIUrl":"10.1016/j.compind.2024.104191","url":null,"abstract":"<div><div>Apples are among the most popular fruits globally due to their health and nutritional benefits for humans. Artificial intelligence in agriculture has advanced, but vision, which improves machine efficiency, speed, and production, still needs to be improved. Managing apple development from planting to harvest affects productivity, quality, and economics. In this study, by establishing a vision system platform with a range of camera types that conforms with orchard standard specifications for data gathering, this work provides two new apple collections: Orchard Fuji Growth Stages (OFGS) and Orchard Apple Varieties (OAV), with preliminary benchmark assessments. Secondly, this research proposes the orchard apple vision transformer method (POA-VT), incorporating novel regularization techniques (CRT) that assist us in boosting efficiency and optimizing the loss functions. The highest accuracy scores are 91.56 % for OFGS and 94.20 % for OAV. Thirdly, an ablation study will be conducted to demonstrate the importance of CRT to the proposed method. Fourthly, the CRT outperforms the baselines by comparing it with the standard regularization functions. Finally, time series analyses predict the ‘Fuji’ growth stage, with the outstanding training and validation RMSE being 19.29 and 19.26, respectively. The proposed method offers high efficiency via multiple tasks and improves the automation of apple systems. It is highly flexible in handling various tasks related to apple fruits. Furthermore, it can integrate with real-time systems, such as UAVs and sorting systems. This research benefits the growth of apple’s robotic vision, development policies, time-sensitive harvesting schedules, and decision-making.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104191"},"PeriodicalIF":8.2,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142358147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximum subspace transferability discriminant analysis: A new cross-domain similarity measure for wind-turbine fault transfer diagnosis
Computers in Industry, Volume 164, Article 104194
Pub Date: 2024-09-27 | DOI: 10.1016/j.compind.2024.104194
Quan Qian , Fei Wu , Yi Wang , Yi Qin
In the field of fault transfer diagnosis, many approaches focus only on distribution alignment and knowledge transfer between the source domain and the target domain. However, most of these approaches ignore the precondition of whether the transfer task is transferable at all. Current mainstream transferability discrimination methods depend heavily on expert knowledge and are extremely vulnerable to noise interference and variations in feature scale, which limits their applicability given the intelligence requirements and complex environments of industrial settings. To address these challenges, this paper introduces a novel cross-domain similarity measure called maximum subspace transferability discriminant analysis (MSTDA) with zero-label prior knowledge. MSTDA comprises a maximum subspace representation and a similarity measurement criterion. In the maximum subspace representation phase, a new kernel-induced Hilbert space is designed to map the low-dimensional original samples into a high-dimensional space, maximizing the separability of different faults, and the separable intrinsic fault features are then solved for. Following that, a novel similarity measurement criterion that is resistant to variations in feature scale is developed; this criterion is based on the orthogonal bases of the intrinsic feature subspaces. A mini-batch sampling strategy is used to ensure the timeliness of MSTDA. Finally, the experimental results on three cases, particularly on an actual wind turbine dataset, confirm that the proposed MSTDA outperforms other well-known similarity measure methods in terms of transferability evaluation. The related code can be downloaded from https://qinyi-team.github.io/2024/09/Maximum-subspace-transferability-discriminant-analysis.
{"title":"Maximum subspace transferability discriminant analysis: A new cross-domain similarity measure for wind-turbine fault transfer diagnosis","authors":"Quan Qian , Fei Wu , Yi Wang , Yi Qin","doi":"10.1016/j.compind.2024.104194","DOIUrl":"10.1016/j.compind.2024.104194","url":null,"abstract":"<div><div>In the field of fault transfer diagnosis, many approaches only focus on the distribution alignment and knowledge transfer between the source domain and target domain. However, most of these approaches ignore the precondition of whether this transfer task is transferable. Current mainstream transferability discrimination methods heavily depend on expert knowledge and are extremely vulnerable to the noise interference and variations in feature scale. This limits their applicability due to the intelligent requirements and complex industrial environment. To address the challenges mentioned previously, this paper introduces a novel cross-domain similarity measure called maximum subspace transferability discriminant analysis (MSTDA) with zero-label prior knowledge. MSTDA is comprised of a maximum subspace representation and a similarity measurement criterion. During the phase of maximum subspace representation, a new kernel-induced Hilbert space is designed to map the low-dimensional original samples into the high-dimensional space to maximize the separability of different faults and then solve the separable intrinsic fault features. Following that, a novel similarity measurement criterion that is resistant to variations in feature scale is developed. This criterion is based on the orthogonal bases of intrinsic feature subspaces. The mini-batch sampling strategy is used to ensure the timeliness of MSTDA. Finally, the experimental results on three cases, particularly in the actual wind turbine dataset, confirm that the proposed MSTDA outperforms other well-known similarity measure methods in terms of transferability evaluation. The related code can be downloaded from https://qinyi-team.github.io/2024/09/Maximum-subspace-transferability-discriminant-analysis.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"164 ","pages":"Article 104194"},"PeriodicalIF":8.2,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142327883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}