Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103962
Daryl Powell, Maria Chiara Magnanini
This introduction to the special issue discusses contemporary advances in zero defect manufacturing. As a technology-intensive concept, zero defect manufacturing has gained greater traction in recent years, given widespread interest in and adoption of Industry 4.0. As such, zero defect manufacturing has the potential to disrupt and reshape our entire manufacturing ideology. In this editorial, we present an overview of the main findings of the papers that were selected for publication in this special issue and provide our reflections for the future of zero defect manufacturing research.
Title: Editorial: Special issue on advances in zero defect manufacturing
Journal: Computers in Industry
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103974
Sybren de Kinderen , Qin Ma , Monika Kaczmarek-Heß
Verification in the realm of enterprise modeling (EM) ensures both the consistency of EM language specifications (i.e., meta models and additional well-formedness constraints) and the consistency of enterprise models. The consistency of enterprise models, which integrate different perspectives on an enterprise, ensures that they contain the information necessary, in line with domain-specific rules, for carrying out a variety of model-driven enterprise analyses. Meta modeling platforms are instrumental in carrying out such verification, especially when multiple languages are applied in tandem, as is inherent to enterprise modeling.
This paper reports on our practical experiences of using formal methods for verification in the context of EM. Motivated by the required verification capabilities, we show for one example platform, ADOxx, how it can be chained together with Alloy, a lightweight formal method, to capitalize on complementary strengths: ADOxx for language specification and use, and Alloy for verification. We demonstrate verification both on the meta-model level, by checking the consistency of language specifications, and on the model level, by checking models against well-formedness constraints. We illustrate the chaining of ADOxx and Alloy through consistency checks of two languages applied in tandem: the value modeling language e3value and the IT infrastructure modeling language ITML. We also carry out experiments with three further languages to reflect on the performance of Alloy and its capability to uncover inconsistencies.
Title: Leveraging the power of formal methods in the realm of enterprise modeling — On the example of extending the (meta) model verification possibilities of ADOxx with Alloy
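To give a flavor of the model-level checks discussed above — validating models from two languages used in tandem against a cross-language well-formedness rule — here is a minimal Python stand-in. The model fragments, field names, and the rule itself are hypothetical illustrations; the paper performs such checks with ADOxx and Alloy, not Python:

```python
# Hypothetical fragments of two models used in tandem: an e3value-style
# model whose actors reference IT infrastructure nodes, and an ITML-style
# model declaring which nodes exist.
e3value_model = {"actors": [{"name": "Retailer", "runs_on": "web-server"},
                            {"name": "Supplier", "runs_on": "erp-server"}]}
itml_model = {"nodes": ["web-server", "db-server"]}

def check_cross_model(e3, itml):
    """Well-formedness rule (illustrative): every infrastructure element
    an actor references must exist in the ITML model."""
    nodes = set(itml["nodes"])
    return [f"actor '{a['name']}' references unknown node '{a['runs_on']}'"
            for a in e3["actors"] if a["runs_on"] not in nodes]

print(check_cross_model(e3value_model, itml_model))
# flags the Supplier actor's missing 'erp-server' node
```

In a real toolchain, such a rule would be expressed as an Alloy fact or check over the exported (meta) models rather than hand-coded per rule.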
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103971
Guilherme X. Ferreira , Melise Maria V. de Paula , Rafael P. Pagan , Bruno G. Batista
Fleet planning and management activities are essential to establish the quantity and type of vehicles needed to pursue production plans and reduce costs. In steel companies, logistics analysts are responsible for these fleet activities. However, analysts still face challenges in assessing fleet utilization due to the volume and form in which the data is found. Normally, this type of problem is out of the scope of fleet planning models in the literature. For information extraction from massive and complex databases, visual analytics is an alternative that employs data analysis and visualization techniques. This research investigates visual analytics to support industrial vehicle fleet management. As a result, two artifacts were developed: a fleet measurement model and Fleet Profile, a visual analytics solution. Two research cycles were conducted, each producing a Fleet Profile version that was evaluated with real cases after development. Findings from the analysis of the evaluations indicated that the Fleet Profile's visual description could help to evaluate fleet utilization and identify gaps for fleet optimization.
Title: Fleet Profile: Using visual analytics to prospect logistic solutions in industrial vehicles fleet
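At its core, a fleet measurement model of this kind reduces to utilization ratios aggregated per vehicle for visualization. A minimal sketch with made-up log data — the metric definition below is an assumption for illustration, not the paper's exact measurement model:

```python
# Hypothetical vehicle logs: hours spent on productive transport vs.
# hours the vehicle was available in the period.
logs = {
    "truck-01":  {"busy_h": 128.0, "available_h": 160.0},
    "truck-02":  {"busy_h": 64.0,  "available_h": 160.0},
    "loader-01": {"busy_h": 152.0, "available_h": 160.0},
}

def utilization(log):
    """Share of available time spent on productive work (0..1)."""
    return log["busy_h"] / log["available_h"]

# Per-vehicle profile plus a fleet-level aggregate, the kind of figures a
# visual analytics view would chart to expose under-used vehicles.
profile = {v: round(utilization(rec), 2) for v, rec in logs.items()}
fleet_avg = round(sum(profile.values()) / len(profile), 2)
print(profile, fleet_avg)
```

Here `truck-02` stands out at 40% utilization — exactly the kind of optimization gap the abstract says Fleet Profile helps analysts spot.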
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103987
Matteo Perno , Lars Hvam , Anders Haug
Digital twins (DTs) are rapidly changing how manufacturing companies leverage the large volumes of data they generate daily to gain a competitive advantage and optimize their supply chains. When coupled with recent developments in machine learning (ML), DTs have the potential to generate invaluable insights for process manufacturing companies to help them optimize their manufacturing processes. However, this potential has yet to be fully exploited due to the challenges that process manufacturing companies face in developing and implementing DTs in their organizations. Although DTs are receiving increasing attention in both industry and academia, there is limited literature on how to apply them in the process industry. To address this gap, this paper presents a framework for developing ML-based DTs to predict critical process parameters in real time. The proposed framework is tested through a case study at an international process manufacturing company in which it was used to collect and process plant data, build accurate predictive models for two critical process parameters, and develop a DT application to visualize the models’ predictions. The case study demonstrated the usefulness of the proposed DT–ML framework in the sense that it provided the company with more accurate predictions than the models it previously applied. The study provides insights into the value of applying ML-based DT in the process industry and sheds light on some of the challenges associated with the application of this technology.
Title: A machine learning digital twin approach for critical process parameter prediction in a catalyst manufacturing line
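As a rough illustration of the core modeling step such a framework automates — fitting a predictive model for a critical process parameter from logged plant data, then serving live predictions inside the twin — here is a minimal sketch. The sensor names, synthetic data, and plain linear model are hypothetical stand-ins, not the paper's actual models:

```python
# Fit a linear model mapping two logged sensor readings (hypothetical:
# feed rate, reactor temperature) to a critical process parameter.
def fit_linear(X, y, lr=0.05, epochs=20000):
    """Plain gradient-descent least squares: y ~ w.x + b."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic plant log: parameter = 2*feed + 0.5*temp + 1 (scaled units).
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (4.0, 2.5), (0.5, 1.5)]
y = [2 * f + 0.5 * t + 1 for f, t in X]
w, b = fit_linear(X, y)

# "Real-time" prediction for a new pair of live sensor readings.
pred = sum(wj * xj for wj, xj in zip(w, (2.5, 2.0))) + b
print(round(pred, 2))  # ~7.0 (= 2*2.5 + 0.5*2 + 1)
```

A production DT would replace the toy regressor with the trained ML models and stream the predictions to the visualization layer the abstract describes.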
The disassembly process is one of the most expensive phases in the product life cycle, for both maintenance and End-of-Life dismantling. Industry must optimize the disassembly sequence to ensure time- and cost-efficiency. This paper presents a new approach based on a Reinforcement Learning algorithm to optimize Disassembly Sequence Planning. The work addresses two types of dismantling: partial and full disassembly. By introducing a fitness function within the Reinforcement Learning algorithm, the approach implements optimized Disassembly Sequence Planning for five disassembly parameters or goals: (1) minimizing disassembly tool changes, (2) minimizing disassembly direction changes, (3) optimizing dismantling time, including preparation and processing time, (4) prioritizing the dismantling of the smallest parts, and (5) facilitating access to wear parts. The proposed approach is applied to a demonstrative example. Finally, a comparison with other approaches from the literature demonstrates the efficiency of the new approach.
Title: Reinforcement learning for disassembly sequence planning optimization
Authors: Amal Allagui, Imen Belhadj, Régis Plateaux, Moncef Hammadi, Olivia Penas, Nizar Aifaoui
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103992
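To make the reward-shaping idea concrete, the mechanism the abstract describes can be sketched as a tiny tabular Q-learning problem. Everything below is a hypothetical stand-in — a four-part assembly with made-up tools, directions, precedence constraints, and penalty weights covering goals (1) and (2) only — not the authors' algorithm or fitness function:

```python
import random

# Hypothetical 4-part assembly: each part has a removal tool and direction.
PARTS = {"A": ("screwdriver", "+z"), "B": ("screwdriver", "+x"),
         "C": ("wrench", "+x"), "D": ("wrench", "+z")}
# A part may be removed only after its predecessors.
PRECEDENCE = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}

def reward(prev, part):
    """Fitness: penalize tool changes (goal 1) and direction changes (goal 2)."""
    r = 1.0  # base reward for removing a part
    if prev is not None:
        if PARTS[prev][0] != PARTS[part][0]:
            r -= 0.5  # tool change penalty
        if PARTS[prev][1] != PARTS[part][1]:
            r -= 0.3  # direction change penalty
    return r

def feasible(removed):
    return [p for p in PARTS if p not in removed and PRECEDENCE[p] <= removed]

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}  # (frozenset(removed), last_part, candidate) -> value
    for _ in range(episodes):
        removed, prev = frozenset(), None
        while len(removed) < len(PARTS):
            actions = feasible(removed)
            if rng.random() < eps:                       # explore
                a = rng.choice(actions)
            else:                                        # exploit
                a = max(actions, key=lambda p: Q.get((removed, prev, p), 0.0))
            r = reward(prev, a)
            nxt = removed | {a}
            future = max((Q.get((nxt, a, p), 0.0) for p in feasible(nxt)),
                         default=0.0)
            key = (removed, prev, a)
            Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * future - Q.get(key, 0.0))
            removed, prev = nxt, a
    # Greedy rollout of the learned policy.
    removed, prev, seq = frozenset(), None, []
    while len(removed) < len(PARTS):
        a = max(feasible(removed), key=lambda p: Q.get((removed, prev, p), 0.0))
        seq.append(a)
        removed, prev = removed | {a}, a
    return seq

print(q_learning())  # -> ['A', 'B', 'C', 'D'] (groups same-tool, same-direction parts)
```

The learned sequence keeps the two screwdriver parts together and avoids an unnecessary direction change, which is exactly what the weighted fitness rewards.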
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103980
Adolfo CRESPO MARQUEZ , José Antonio MARCOS ALBERCA , Antonio J. GUILLÉN LÓPEZ , Antonio DE LA FUENTE CARMONA
Digital Twins (DTs) are gaining popularity in the context of the fourth industrial revolution as replicas of physical equipment and systems in the digital world. DTs promise increased productivity and sustainable performance by integrating data, models, and decision-support systems. However, before the potential benefits of DTs for maintenance management can be realized, several challenges need to be addressed, including a lack of conceptual basis, functional description, and established requirements. Hence, this paper presents, in a practical manner, how to close this gap with digital configurations for maintenance management designed to benefit from DTs. The scope of the paper includes the design and implementation of an innovative condition-based maintenance application (CBM App) based on a DT of train axle bearings, and it uses a generic framework for digital maintenance management for the functional description of the DT within the CBM App. The paper provides details of the models and algorithms used to build the DT and ensures that recommended features are fulfilled. To test the DT's effectiveness and robustness, the design and framework are implemented in real CBM applications of TALGO, a high-speed train manufacturer. These tools ease DT implementation within the CBM App and can be replicated in other operational contexts.
Title: Digital twins in condition-based maintenance apps: A case study for train axle bearings
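A DT-backed CBM application of this kind ultimately maps condition indicators computed in the twin to maintenance decisions. As a minimal, hypothetical sketch — the RMS indicator and the threshold values below are illustrative assumptions, not the paper's bearing models:

```python
import math

def rms(signal):
    """Root-mean-square level of a vibration window."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def health_state(signal, warn=1.5, alarm=3.0):
    """Map a bearing vibration window to a health state via two thresholds
    (threshold values are made up for illustration)."""
    level = rms(signal)
    if level >= alarm:
        return "alarm: schedule maintenance"
    if level >= warn:
        return "warning: increase monitoring"
    return "healthy"

print(health_state([0.2, -0.3, 0.25, -0.1]))   # healthy
print(health_state([2.8, -3.1, 3.3, -2.9]))    # alarm: schedule maintenance
```

In a real CBM App the DT would feed such indicators from live axle-bearing sensor streams and combine them with the degradation models the paper describes.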
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103975
Weihao Zhang , Yuqin Zeng , Jiapeng Wang , Honglin Ma , Qi Zhang , Shuqian Fan
The randomness and low frequency of laser powder bed fusion defects are two important characteristics that can impact the quality and reliability of parts. Therefore, effectively detecting the forming quality of parts during the manufacturing process has become an important research problem in the field of intelligent additive manufacturing technology. In this study, multi-scale and multi-feature manifold learning methods are first used to demonstrate that a globally optimal solution for predicting the forming morphology of the melt track cannot be obtained when the number of process-phenomenon features in the laser powder bed fusion process is unknown. As an alternative, a multi-scale feature pyramid network is used to process long-sequence high-speed videos and predict the forming morphology. Specifically, to address the randomness issue, this study used a coaxial high-speed imaging system to monitor the entire forming process and designed a 2D Transformer-based video understanding model to process high-speed video data and recognize key process phenomena. To address the low-frequency issue, physics-based simulation is used to quickly reveal how process parameters affect the forming quality of parts, providing guidance for constructing multi-mode category datasets. The experimental results indicate that the model can accurately predict the forming morphology of the melt track and better control the entire forming process, thus improving manufacturing quality and efficiency.
Title: Multi-scale feature pyramid approach for melt track classification in laser powder bed fusion via coaxial high-speed imaging
Domain adaptation (DA) methods have achieved promising results in machinery fault diagnosis owing to their ability to mitigate the distribution discrepancy between domains. However, existing DA-based fault diagnosis methods are tailored to a specific setting and rely heavily on prior knowledge about the relationship between the source and target label sets, which is usually not available in advance. To broaden the applicability of DA for fault diagnosis, this paper proposes a universal transfer network to handle all types of DA settings, including closed-set DA, partial DA, open-set DA, and open-partial DA. The proposed method utilizes self-supervised learning to uncover the cluster structure of the target domain, and incorporates entropy-based feature alignment to align shared-class samples while separating unknown-class samples. Moreover, an open-set classifier is trained to provide a confidence criterion, which is then used to construct a sample-level uncertainty criterion for identifying unknown-class samples efficiently. The proposed method is evaluated on the Office-31 dataset and two fault diagnosis datasets. Our experimental results demonstrate that the proposed method outperforms other methods in all DA settings.
Title: A universal transfer network for machinery fault diagnosis
Authors: Xiaolei Yu, Zhibin Zhao, Xingwu Zhang, Shaohua Tian, Chee-Keong Kwoh, Xiaoli Li, Xuefeng Chen
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103976
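Sample-level uncertainty criteria of the kind described above are typically built on the entropy of the classifier's predicted class distribution: near-uniform predictions signal a possible unknown-class sample. A minimal sketch of that general idea — the threshold and logits below are illustrative, not taken from the paper:

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def uncertainty(logits):
    """Normalized entropy of the class posterior: 0 = confident, 1 = uniform."""
    p = softmax(logits)
    h = -sum(q * math.log(q) for q in p if q > 0)
    return h / math.log(len(p))

def classify(logits, threshold=0.7):
    """Reject a sample as 'unknown' when its normalized entropy is high;
    otherwise return the index of the most likely shared class."""
    if uncertainty(logits) > threshold:
        return "unknown"
    return max(range(len(logits)), key=lambda i: logits[i])

print(classify([4.0, 0.1, 0.2]))   # confident shared-class sample -> 0
print(classify([1.0, 0.9, 1.1]))   # near-uniform logits -> 'unknown'
```

The paper's criterion additionally folds in a trained open-set classifier's confidence; the entropy gate here only shows the sample-level rejection mechanic.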
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103957
Julian Martinsson Bonde , Michael Kokkolaras , Petter Andersson , Massimo Panarotto , Ola Isaksson
In conceptual design studies, engineers typically use data-based surrogate models to enable rapid evaluation of design objectives that would otherwise be too computationally expensive and time-consuming to simulate. Because the underlying simulations are computationally expensive, these surrogate models are often trained on small sample sizes, resulting in low-fidelity models that can produce untrustworthy results. To mitigate this issue, a similarity-assisted design space exploration method is proposed. Similarity is measured between design points that have been evaluated through lower-fidelity data-based surrogate models and design points that have been evaluated using higher-fidelity physics-based simulations. Design engineers can then use this similarity information to better assess the trustworthiness of the data produced by the low-fidelity surrogate models. Our numerical experiments demonstrate that such a similarity measurement can serve as an indicator of the trustworthiness of lower-fidelity model predictions. Moreover, a second similarity metric is proposed for measuring the similarity of new designs to legacy designs, highlighting the potential to reuse knowledge, analysis models, and data. The proposed method is demonstrated by means of an aero-engine structural component conceptual design study. An open-source software tool developed to assist in data visualization is also presented.
Title: A similarity-assisted multi-fidelity approach to conceptual design space exploration
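One plausible way to realize such a similarity indicator is a kernel on the range-normalized distance between a surrogate-evaluated design and its nearest physics-simulated design: high similarity means the surrogate is interpolating near trusted data, low similarity means it is extrapolating. The design variables, sample points, and bandwidth below are hypothetical, not the paper's metric:

```python
import math

def similarity(x, y, ranges, gamma=5.0):
    """Gaussian similarity on range-normalized distance: 1.0 = identical designs.
    gamma is an illustrative bandwidth choice."""
    d2 = sum(((a - b) / r) ** 2 for a, b, r in zip(x, y, ranges))
    return math.exp(-gamma * d2)

def trust_indicator(candidate, hf_points, ranges):
    """Similarity of a surrogate-evaluated design point to the nearest design
    point evaluated with a high-fidelity physics-based simulation.
    Low values flag surrogate predictions that warrant extra scrutiny."""
    return max(similarity(candidate, hf, ranges) for hf in hf_points)

# Hypothetical 2-D design space (e.g. wall thickness in mm, stiffener angle in deg).
ranges = (10.0, 90.0)
hf_points = [(3.0, 30.0), (7.0, 60.0)]  # designs already run through physics sims

near = trust_indicator((3.2, 33.0), hf_points, ranges)  # close to an HF sample
far = trust_indicator((9.5, 5.0), hf_points, ranges)    # extrapolated region
print(round(near, 2), round(far, 2))
```

The same kernel, applied against a library of legacy designs instead of high-fidelity samples, would serve as the second (design-reuse) metric the abstract mentions.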
Pub Date: 2023-10-01 | DOI: 10.1016/j.compind.2023.103981
Akshay Avhad, Casper Schou, Ole Madsen
Swarm Production Systems adopt an agile, reconfigurable, and flexible production philosophy that uses mobile robot platforms for workstations and material transport. As a result, the factory floor can continuously restructure itself into an optimal spatial topology suited to any given production mix. This new production paradigm must deal with frequently changing factory layouts and, in the planning stage, an execution plan for a fleet of autonomous robots. For every reconfiguration triggered by a change of order, the carrier and process robots require an initial task plan prior to runtime production and a reactive mechanism to adapt to uncertainties on the shop floor. An interoperable management system across the production and robotics domains, called the Swarm Manager, handles task planning, allocation, and scheduling for process and product transport robots. This research provides a conceptualization comprising an abstract framework and an architecture that describes the methods and required functionalities of a Swarm Manager. A generic framework based on multi-agent systems addresses the explicit functional scope of the individual agents inside the Swarm Manager. Based on the functional needs, a system-level architecture is proposed to explain the algorithms within the task planning, allocation, and scheduling agents, and the information flow between them.
Title: A framework for multi-robot control in execution of a Swarm Production System
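As a toy illustration of what an allocation agent inside such a Swarm Manager might do, here is a greedy nearest-robot assignment of transport tasks. The robot and task coordinates are made up, and a real system would likely use auction- or optimization-based allocation with rescheduling rather than this one-shot sketch:

```python
import math

# Hypothetical shop-floor positions (x, y) of idle carrier robots and the
# pickup points of pending transport tasks.
robots = {"R1": (0.0, 0.0), "R2": (10.0, 0.0), "R3": (5.0, 8.0)}
tasks = {"T1": (1.0, 1.0), "T2": (9.0, 1.0), "T3": (5.0, 7.0)}

def allocate(robots, tasks):
    """Assign each task to the nearest still-free robot (greedy heuristic)."""
    free = dict(robots)
    plan = {}
    for t, pos in tasks.items():
        r = min(free, key=lambda k: math.dist(free[k], pos))
        plan[t] = r
        del free[r]  # each robot takes one task in this allocation round
    return plan

print(allocate(robots, tasks))  # {'T1': 'R1', 'T2': 'R2', 'T3': 'R3'}
```

In the framework's terms, this sketch corresponds only to the allocation agent; the planning and scheduling agents would generate the task set and order its execution, with a reactive layer re-allocating when shop-floor uncertainties invalidate the plan.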