J. Allen, Anand Balu Nellippallil, Zhenjun Ming, J. Milisavljevic-Syed, F. Mistree
In the context of the theme for this special issue, namely, challenges and opportunities in computing research to enable next-generation engineering applications, our intent in writing this paper is to seed the dialog on furthering computing research associated with the design of cyber-physical-social systems. Cyber-Physical-Social Systems (CPSSs) are natural extensions of Cyber-Physical Systems (CPSs) that add the consideration of human interactions and cooperation with cyber systems and physical systems. CPSSs are becoming increasingly important as we face challenges such as regulating our impact on the environment, eradicating disease, transitioning to digital and sustainable manufacturing, and improving healthcare. Human stakeholders are integral to the effectiveness of these systems. One of the key features of a CPSS is that its form, structure, and interactions constantly evolve to meet changes in the environment. The design of evolving CPSSs includes making tradeoffs amongst the cyber, the physical, and the social systems. Advances in computing and information science have given us opportunities to ask difficult and important questions, especially those related to cyber-physical-social systems. In this paper we identify research opportunities worth investigating. We start with theoretical and mathematical frameworks for identifying and framing the problem, specifically, problem identification and formulation, data management, CPSS modeling, and CPSS in action. Then we discuss issues related to the design of CPSS, including decision making, computational platform support, and verification and validation. Building on this foundation, we suggest a way forward.
"Designing Evolving Cyber-Physical-Social Systems: Computational Research Opportunities." Journal of Computing and Information Science in Engineering, 2023-07-03. DOI: 10.1115/1.4062883.
Marek S. Lukasiewicz, M. Rossoni, E. Spadoni, Nicolò Dozio, M. Carulli, F. Ferrise, M. Bordegoni
As the Metaverse gains popularity due to its use in various industries, so does the desire to take advantage of all its potential. While visual and audio technologies already provide access to the Metaverse, there is increasing interest in haptic and olfactory technologies, which are less developed and have been studied for a shorter time. Currently, there are limited options for users to experience the olfactory aspect of the Metaverse. This paper introduces an open-source kit that makes it simple to add the sense of smell to the Metaverse. The details of the solution, including its technical specifications, are outlined to enable potential users to utilize, test, and enhance the project and make it available to the scientific community.
"An open-source Olfactory Display to add the sense of smell to the Metaverse." Journal of Computing and Information Science in Engineering, 2023-07-03. DOI: 10.1115/1.4062889.
Jiangce Chen, Justin Pierce, Glen Williams, Timothy W. Simpson, N. Meisel, Sneha Prabha Narra, Christopher McComb
The temperature history of an additively-manufactured part plays a critical role in determining process-structure-property relationships in fusion-based additive manufacturing (AM) processes. Therefore, fast thermal simulation methods are needed for a variety of AM tasks, from temperature history prediction for part design and process planning to in-situ temperature monitoring and control during manufacturing. However, conventional numerical simulation methods fall short in satisfying the strict requirements of these applications due to the large space and time scales involved. While data-driven surrogate models are of interest for their rapid computation capabilities, the performance of these models relies on the size and quality of the training data, which is often prohibitively expensive to create. Physics-informed neural networks (PINNs) mitigate the need for large datasets by imposing physical principles during the training process. This work investigates the use of a PINN to predict the time-varying temperature distribution in a part during manufacturing with Laser Powder Bed Fusion (L-PBF). Notably, the use of the PINN in this study enables the model to be trained solely on randomly-synthesized data. This training data is inexpensive to obtain, and the stochasticity in the dataset improves the generalizability of the trained model. Results show that the PINN model achieves higher accuracy than a comparable artificial neural network trained on labeled data. Further, the PINN model trained in this work maintains high accuracy in predicting temperature for laser path scanning strategies unseen in the training data.
"Accelerating Thermal Simulations in Additive Manufacturing by Training Physics-Informed Neural Networks with Randomly-Synthesized Data." Journal of Computing and Information Science in Engineering, 2023-06-28. DOI: 10.1115/1.4062852.
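The core of a PINN is a loss built from the residual of the governing equation rather than from labeled data. A minimal sketch of that idea for the 1D transient heat equation u_t = α·u_xx is below; the tiny fixed-weight MLP, the diffusivity value, and the finite-difference derivatives are all illustrative stand-ins, not the authors' L-PBF model.

```python
import numpy as np

# Minimal sketch of a physics-informed loss for the 1D heat equation
# u_t = alpha * u_xx. The "network" is a tiny fixed-weight MLP standing in
# for a trainable model; sizes and the diffusivity alpha are made up.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def u(x, t):
    """Surrogate temperature field u(x, t) from the toy MLP."""
    h = np.tanh(np.stack([x, t], axis=-1) @ W1 + b1)
    return (h @ W2 + b2)[..., 0]

def physics_residual_loss(x, t, alpha=0.1, eps=1e-4):
    """Mean squared residual of u_t - alpha * u_xx at collocation points,
    with derivatives approximated by central finite differences."""
    u_t = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
    u_xx = (u(x + eps, t) - 2 * u(x, t) + u(x - eps, t)) / eps**2
    return float(np.mean((u_t - alpha * u_xx) ** 2))

# Random collocation points in space-time; no labeled data is needed,
# which is what lets the paper's model train on synthesized inputs.
x = rng.uniform(0, 1, 256)
t = rng.uniform(0, 1, 256)
loss = physics_residual_loss(x, t)
```

In a real PINN the residual derivatives come from automatic differentiation and the weights are updated to drive this loss toward zero.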
Semantic knowledge of part-part and part-whole relationships in assemblies is useful for a variety of tasks from searching design repositories to the construction of engineering knowledge bases. In this work, we propose that the natural language names designers use in computer aided design (CAD) software are a valuable source of such knowledge, and that large language models (LLMs) contain useful domain-specific information for working with this data as well as other CAD and engineering-related tasks. In particular, we extract and clean a large corpus of natural language part, feature, and document names and use this to quantitatively demonstrate that a pre-trained language model can outperform numerous benchmarks on three self-supervised tasks, without ever having seen this data before. Moreover, we show that fine-tuning on the text data corpus further boosts the performance on all tasks, thus demonstrating the value of the text data which until now has been largely ignored. We also identify key limitations to using LLMs with text data alone, and our findings provide a strong motivation for further work into multi-modal text-geometry models. To aid and encourage further work in this area we make all our data and code publicly available.
Peter Meltzer, Joseph Lambourne, Daniele Grandi. "What’s in a Name? Evaluating Assembly-Part Semantic Knowledge in Language Models Through User-Provided Names in Computer Aided Design Files." Journal of Computing and Information Science in Engineering, 2023-06-23. DOI: 10.1115/1.4062454.
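One of the self-supervised tasks the paper evaluates is judging relatedness between user-provided part names. As a point of reference for what the language models must beat, a token-overlap baseline can be sketched in a few lines; the part names below are invented, and this Jaccard heuristic is only an illustrative baseline, not the paper's method.

```python
import re

# Illustrative baseline for relatedness between CAD part names:
# Jaccard overlap of their token sets. The example names are invented.

def tokenize(name: str) -> set:
    """Lowercase a part name and split it into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", name.lower()))

def name_similarity(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two part names."""
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

sim_related = name_similarity("M6 hex bolt", "M6 hex nut")        # 0.5
sim_unrelated = name_similarity("M6 hex bolt", "bearing housing") # 0.0
```

A pre-trained language model, by contrast, can relate "bolt" to "fastener" with no token overlap at all, which is the kind of semantic knowledge the paper measures.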
Functional decomposition is an important task in early systems engineering and design, where the overall function of the system is resolved into the functions of its components or subassemblies. Conventionally, this task is performed manually, because of the possibility of multiple solution paths and the need for understanding the physics phenomena that could realize the desired effects. To this end, this paper presents a formal method for functional decomposition using physics-based qualitative reasoning. The formal representation includes three parts: (1) a natural language lexicon that can be used to detect the changes of physical states of material and energy flows, (2) a set of causation tables that abstracts the knowledge of qualitative physics by capturing the causal relations between the various quantities involved in a physical phenomenon or process, and (3) a process-to-subgraph mapping that translates the physical processes to function structure constructs. The algorithms use the above three representations and some additional topological reasoning to synthesize and assemble function structure graphs that are decompositions of a given black box model. The paper presents the formal representations and reasoning algorithms, and illustrates this method using an example function model of an air-heating device. It also presents the software implementation of the representations and the algorithms and uses it to validate the method’s ability to generate multiple decompositions from a black box function model.
Xiaoyang Mao, Chiradeep Sen. "Formal Qualitative Physics-Based Reasoning for Functional Decomposition of Engineered Systems." Journal of Computing and Information Science in Engineering, 2023-06-16. DOI: 10.1115/1.4062748.
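The causation-table idea can be sketched as a lookup from (process, qualitative input state) to a resulting state, chained until the black-box output is reached. The table entries and the air-heating states below are invented for illustration and greatly simplify the paper's representation.

```python
# Toy sketch of causation-table reasoning for functional decomposition.
# Each entry maps (process, qualitative state of the incoming flow) to
# the resulting flow state; entries are invented, not from the paper.

CAUSATION_TABLE = {
    ("convert_electricity_to_heat", "electrical energy"): "thermal energy",
    ("transfer_heat_to_air", "thermal energy"): "warm air flow",
}

def decompose(black_box_output: str, processes):
    """Chain processes whose causal effects realize the desired output,
    returning the ordered list of sub-functions (one decomposition path)."""
    path, state = [], "electrical energy"
    for proc in processes:
        state = CAUSATION_TABLE[(proc, state)]
        path.append(proc)
        if state == black_box_output:
            return path
    return None

path = decompose("warm air flow",
                 ["convert_electricity_to_heat", "transfer_heat_to_air"])
```

The paper's method searches over many such chains (and maps each process to a function-structure subgraph), which is how it produces multiple decompositions of one black box.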
Accurate estimation of remaining useful life (RUL) becomes a crucial task when bearings operate under dynamic working conditions. Environmental noise, different operating conditions, and multiple fault modes result in considerable distribution and feature shifts between different domains. To address these issues, a novel framework, TSBiLSTM, is proposed that synergistically combines a one-dimensional CNN (1DCNN), a stacked bi-directional LSTM (SBiLSTM), and an attention mechanism (AM) to extract highly abstract feature representations, while domain adaptation is realized using the multi-kernel maximum mean discrepancy (MK-MMD) metric and a domain confusion layer. The 1DCNN and SBiLSTM capture spatio-temporal features, with the AM selectively attending to the most influential degradation information. MK-MMD provides effective kernel selection, and the domain confusion layer helps extract domain-invariant features. Both experimentation and comparison studies are conducted to verify the effectiveness and feasibility of the proposed TSBiLSTM model. Generalized performance is demonstrated on the IEEE PHM datasets using RMSE, MAE, absolute percent mean error, and percentage mean error. The promising RUL prediction results validate the superiority and usability of the proposed TSBiLSTM model as a prognostic tool for dynamic operating conditions.
M. Rathore, S. Harsha. "Unsupervised Domain Deep Transfer Learning Approach for Rolling Bearing Remaining Useful Life Estimation." Journal of Computing and Information Science in Engineering, 2023-06-12. DOI: 10.1115/1.4062731.
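The MK-MMD metric used above for domain adaptation is an average of maximum mean discrepancy estimates over several kernel bandwidths. A numpy sketch of the biased estimator follows; the bandwidth set, sample sizes, and synthetic "domains" are arbitrary choices, not the paper's configuration.

```python
import numpy as np

# Sketch of multi-kernel maximum mean discrepancy (MK-MMD): the biased
# MMD^2 estimate averaged over several Gaussian-kernel bandwidths.

def mk_mmd(X, Y, bandwidths=(0.5, 1.0, 2.0)):
    """Average biased MMD^2 between samples X and Y over RBF bandwidths."""
    def sq_dists(A, B):
        return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    total = 0.0
    for bw in bandwidths:
        k = lambda D: np.exp(-D / (2 * bw**2))
        total += (k(sq_dists(X, X)).mean()
                  + k(sq_dists(Y, Y)).mean()
                  - 2 * k(sq_dists(X, Y)).mean())
    return total / len(bandwidths)

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(100, 4))   # "source domain" features
shifted = rng.normal(2.0, 1.0, size=(100, 4))  # feature-shifted target
same = rng.normal(0.0, 1.0, size=(100, 4))     # same distribution
gap_shifted = mk_mmd(source, shifted)
gap_same = mk_mmd(source, same)
```

During training, a term like `gap_shifted` is added to the loss so the network learns features for which the two domains become indistinguishable.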
Designing an excellent hull to reduce the path energy consumption of UUV sailing is crucial to improving UUV energy endurance. However, because the relative velocity and attack angle between the UUV and the ocean current change frequently over the entire path, a path-energy-consumption-based UUV hull design entails a tremendous amount of calculation. In this work, based on the idea of artificial intelligence-aided design (AIAD), we have developed a data-driven design methodology for UUV hull design. Specifically, we first developed and implemented a deep learning (DL) algorithm for predicting the resistance of the UUV with different hull shapes under different velocities and attack angles. By combining the proposed DL algorithm with the particle swarm optimization (PSO) algorithm, we propose a data-driven AIAD methodology. A path-energy-consumption-based design experiment was conducted using the proposed methodology; the results show that the methodology maintains efficiency and reliability while overcoming the high design workload.
Yu Ao, Jian Xu, Dapeng Zhang, Shaofan Li. "Artificial Intelligence Aided Design (AIAD) of Hull Form of Unmanned Underwater Vehicles (UUVs) for Minimization of Energy Consumption." Journal of Computing and Information Science in Engineering, 2023-06-02. DOI: 10.1115/1.4062661.
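The DL-plus-PSO coupling described above amounts to running a swarm optimizer against a fast learned resistance predictor instead of a CFD solver. A minimal PSO loop is sketched below; the quadratic "resistance" surrogate and all hyperparameters are placeholders for the paper's trained network and settings.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) of the kind coupled with a
# learned resistance predictor. The surrogate and hyperparameters are
# placeholders; a real run would query the DL model instead.

def surrogate_resistance(params):
    """Toy stand-in for the DL resistance predictor (minimum at [1, 2])."""
    return ((params - np.array([1.0, 2.0])) ** 2).sum(axis=-1)

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pbest_val = x.copy(), f(x)            # per-particle bests
    for _ in range(iters):
        g = pbest[pbest_val.argmin()]            # swarm-wide best
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
    return pbest[pbest_val.argmin()], pbest_val.min()

best_params, best_resistance = pso(surrogate_resistance)
```

Because each fitness evaluation is a cheap network inference rather than a flow simulation, the swarm can afford thousands of evaluations per design study.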
This paper presents a potential cybersecurity risk in Mixed Reality (MR)-based smart manufacturing applications: digital passwords can be deciphered from a user's mid-air gestures captured by a single RGB camera. We first created a testbed, an MR-based smart factory management system consisting of mid-air gesture-based user interfaces (UIs) on a video see-through MR head-mounted display (HMD). To interact with the UIs and input information, the user's hand movements and gestures are tracked by the MR system. The experiment was set up to estimate the passwords users input through mid-air hand gestures on a virtual numeric keypad. To achieve this goal, we developed a lightweight machine learning-based hand position tracking and gesture recognition method. This method takes either streaming video or recorded video clips (taken by a single RGB camera in front of the user) as input, where the videos record the user's hand movements and gestures but not the virtual UIs. Assuming the size, position, and layout of the keypad are known, the method estimates the password through hand gesture recognition and finger position detection. The evaluation indicates the effectiveness of the proposed method, with accuracies of 97.03%, 94.06%, and 83.83% for 2-digit, 4-digit, and 6-digit passwords, respectively, using real-time video streaming as input.
Wenhao Yang, Xiwen Dengxiong, Xueting Wang, Yidan Hu, Yunbo Zhang. "“I can see your password”: A case study about cybersecurity risks in mid-air interactions of mixed reality-based smart manufacturing applications." Journal of Computing and Information Science in Engineering, 2023-06-01. DOI: 10.1115/1.4062658.
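The last step of the attack described above, turning an estimated fingertip position into a digit, only needs the assumed keypad geometry. A sketch follows; the key size, coordinate frame, and press sequence are invented for illustration, and the hard part (tracking fingertips from RGB video) is omitted.

```python
# Sketch of mapping estimated fingertip presses to digits, given the
# assumed (known) size, position, and layout of the virtual keypad.
# Geometry and the press sequence below are invented for illustration.

KEY_SIZE = 1.0  # side length of one virtual key, arbitrary units

def press_to_digit(x: float, y: float) -> str:
    """Map a fingertip (x, y), with (0, 0) at the keypad's top-left,
    onto a 3x3 grid of digits 1-9 in phone-keypad order."""
    col = min(int(x // KEY_SIZE), 2)
    row = min(int(y // KEY_SIZE), 2)
    return str(row * 3 + col + 1)

# A hypothetical sequence of detected presses for a 4-digit password.
presses = [(0.4, 0.5), (2.6, 0.2), (1.5, 1.4), (0.3, 2.7)]
password = "".join(press_to_digit(x, y) for x, y in presses)  # "1357"
```

This is why the paper's threat model is dangerous: the camera never sees the keypad, yet knowing its layout makes fingertip coordinates equivalent to keystrokes.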
Phong Nguyen, Joseph Choi, H.S. Udaykumar, Stephen Baek
Many mechanical engineering applications call for multiscale computational modeling and simulation. However, solving complex multiscale systems remains computationally onerous due to the high dimensionality of the solution space. Recently, machine learning (ML) has emerged as a promising solution that can serve as a surrogate for, accelerate, or augment traditional numerical methods. Pioneering work has demonstrated that ML provides solutions to governing systems of equations with accuracy comparable to that of direct numerical methods, but with significantly faster computational speed. These high-speed, high-fidelity estimations can facilitate the solving of complex multiscale systems by providing a better initial solution to traditional solvers. This paper provides a perspective on the opportunities and challenges of using ML for complex multiscale modeling and simulation. We first outline the current state-of-the-art ML approaches for simulating multiscale systems and highlight some of the landmark developments. Next, we discuss current challenges for ML in multiscale computational modeling, such as data and discretization dependence, interpretability, and data sharing and collaborative platform development. Finally, we suggest several potential research directions for the future.
"Challenges and Opportunities for Machine Learning in Multiscale Computational Modeling." Journal of Computing and Information Science in Engineering, 2023-05-25. DOI: 10.1115/1.4062495.
K. Elsayed, Adam Dachowicz, M. Atallah, Jitesh H. Panchal
The digitization of manufacturing has transformed the product realization process across many industries, from aerospace and automotive to medicine and healthcare. While this progress has accelerated product development cycles and enabled designers to create products with previously unachievable complexity and precision, it has also opened the door to a broad array of unique security concerns, from theft of intellectual property to supply chain attacks and counterfeiting. To address these concerns, information embedding (e.g., watermarks and fingerprints) has emerged as a promising solution that enhances product security and traceability. Information embedding techniques involve storing unique and secure information within parts, making these parts easier to track and to verify for authenticity. However, a successful information embedding scheme requires information to be transmitted in physical parts both securely and in a way that is accessible to end users. Ensuring these qualities introduces unique computational and engineering challenges. For instance, these qualities require the embedding scheme designer to have an accurate model of the cyber-physical processes needed to embed information during manufacturing and read that information later in the product life cycle, as well as models of the cyber-physical, economic, and/or industrial processes that may degrade that information through natural wear-and-tear, or through intentional attacks by determined adversaries. This paper discusses challenges and research opportunities for the engineering design and manufacturing community in developing methods for efficient information embedding in manufactured products.
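The embed/read/verify loop the abstract describes can be illustrated with a deliberately minimal sketch: one bit encoded per measurable feature as a small dimensional offset, recovered under metrology noise, and checked with a keyed MAC. The feature dimensions, offset size, noise level, and key below are all hypothetical — real schemes must model the manufacturing process, wear, and adversaries far more carefully:

```python
import hashlib
import hmac

import numpy as np

# Hypothetical part: four measurable feature dimensions (mm).
NOMINAL = np.array([10.0, 25.0, 40.0, 12.5])
DELTA = 0.05  # embedding offset: within tolerance, but above read noise

def embed(nominal, bits, delta=DELTA):
    """Manufacture-side: encode one bit per feature as a +/-delta offset."""
    return nominal + delta * (2 * np.asarray(bits) - 1)

def read(measured, nominal):
    """Reader-side: recover bits by thresholding deviation from nominal."""
    return (np.asarray(measured) - nominal > 0).astype(int)

def fingerprint(bits, key=b"factory-secret"):
    """Keyed MAC of the embedded bits, checked against a trusted record."""
    return hmac.new(key, bytes(bits.tolist()), hashlib.sha256).hexdigest()[:8]

bits = np.array([1, 0, 1, 1])
part_dims = embed(NOMINAL, bits)

# Simulated metrology: measurement noise well below the embedding offset.
rng = np.random.default_rng(0)
measured = part_dims + rng.normal(0.0, 0.01, size=part_dims.shape)

recovered = read(measured, NOMINAL)
print(recovered, fingerprint(recovered))
```

The gap between the embedding offset and the read noise is where the engineering tradeoffs the abstract raises live: too small an offset and natural wear or measurement error destroys the message; too large and the embedding degrades the part or becomes trivial for an adversary to find and strip.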
K. Elsayed, Adam Dachowicz, M. Atallah, Jitesh H. Panchal, "Information Embedding for Secure Manufacturing: Challenges and Research Opportunities," Journal of Computing and Information Science in Engineering, published 2023-05-23, doi:10.1115/1.4062600.