Functional decomposition is an important task in early systems engineering and design, where the overall function of the system is resolved into the functions of its components or subassemblies. Conventionally, this task is performed manually, because of the possibility of multiple solution paths and the need to understand the physical phenomena that could realize the desired effects. To this end, this paper presents a formal method for functional decomposition using physics-based qualitative reasoning. The formal representation includes three parts: (1) a natural language lexicon that can be used to detect changes in the physical states of material and energy flows, (2) a set of causation tables that abstracts the knowledge of qualitative physics by capturing the causal relations between the various quantities involved in a physical phenomenon or process, and (3) a process-to-subgraph mapping that translates physical processes into function structure constructs. The algorithms use these three representations and additional topological reasoning to synthesize and assemble function structure graphs that are decompositions of a given black box model. The paper presents the formal representations and reasoning algorithms, and illustrates the method using an example function model of an air-heating device. It also presents a software implementation of the representations and algorithms and uses it to validate the method's ability to generate multiple decompositions from a black box function model.
"Formal Qualitative Physics-Based Reasoning for Functional Decomposition of Engineered Systems," by Xiaoyang Mao and Chiradeep Sen. Journal of Computing and Information Science in Engineering, 2023-06-16. DOI: 10.1115/1.4062748.
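The three-part representation can be sketched as table lookups feeding a graph-assembly step: a lexicon entry flags a state change, a causation table links it to a physical process, and a process-to-subgraph mapping emits function-structure edges. The lexicon entries, causation rows, and subgraph edges below are invented placeholders, not the paper's actual data.

```python
# Hypothetical sketch of lexicon -> causation table -> subgraph mapping.
LEXICON = {"heat": ("thermal energy", "increase"),
           "cool": ("thermal energy", "decrease")}

CAUSATION = {("thermal energy", "increase"): "convective heat transfer"}

PROCESS_TO_SUBGRAPH = {
    "convective heat transfer": [
        ("import air", "transfer air"),
        ("import thermal energy", "transfer thermal energy"),
        ("transfer thermal energy", "transfer air"),  # energy enters the flow
    ],
}

def decompose(verb: str) -> list:
    """Resolve a black-box verb into a function-structure subgraph (edge list)."""
    quantity, direction = LEXICON[verb]
    process = CAUSATION[(quantity, direction)]
    return PROCESS_TO_SUBGRAPH[process]

edges = decompose("heat")
```

In the paper's method multiple processes could satisfy one state change, which is what yields multiple alternative decompositions; this sketch returns a single branch for brevity.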
Accurate estimation of remaining useful life (RUL) becomes a crucial task when a bearing operates under dynamic working conditions. Environmental noise, different operating conditions, and multiple fault modes produce considerable distribution and feature shifts between different domains. To address these issues, a novel framework, TSBiLSTM, is proposed that utilizes 1DCNN, SBiLSTM, and AM synergistically to extract highly abstract feature representations, with domain adaptation realized using the MK-MMD (multi-kernel maximum mean discrepancy) metric and a domain confusion layer. A one-dimensional CNN (1DCNN) and stacked bi-directional LSTM (SBiLSTM) are utilized to exploit spatio-temporal features, with an attention mechanism (AM) to selectively process the influential degradation information. MK-MMD provides effective kernel selection, and together with the domain confusion layer it extracts domain-invariant features. Both experimentation and comparison studies are conducted to verify the effectiveness and feasibility of the proposed TSBiLSTM model. The generalized performance is demonstrated using IEEE PHM datasets based on RMSE, MAE, absolute percent mean error, and percentage mean error. The RUL prediction results validate the superiority and usability of the proposed TSBiLSTM model as a promising prognostic tool for dynamic operating conditions.
"Unsupervised Domain Deep Transfer Learning Approach for Rolling Bearing Remaining Useful Life Estimation," by M. Rathore and S. Harsha. Journal of Computing and Information Science in Engineering, 2023-06-12. DOI: 10.1115/1.4062731.
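The MK-MMD metric that drives the domain adaptation can be sketched independently of the network: average several RBF kernels at different bandwidths and form the (biased) squared MMD between source and target feature batches. The bandwidths below are arbitrary choices, not the paper's settings.

```python
import numpy as np

def mk_mmd(X, Y, sigmas=(1.0, 2.0, 4.0)):
    """Biased multi-kernel MMD^2 between feature batches X (n,d) and Y (m,d),
    averaging RBF kernels over several bandwidths."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Same distribution -> small MMD; shifted distribution -> large MMD.
same = mk_mmd(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
shifted = mk_mmd(rng.normal(size=(64, 8)), rng.normal(3.0, 1.0, size=(64, 8)))
```

In training, this quantity would be added to the prediction loss so the feature extractor is penalized for domain discrepancy.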
Designing an excellent hull to reduce the path energy consumption of UUV sailing is crucial to improving UUV energy endurance. However, because the relative velocity and attack angle between the UUV and the ocean current change frequently over the entire path, a path-energy-consumption-based UUV hull design requires a tremendous amount of calculation. In this work, based on the idea of artificial intelligence-aided design (AIAD), we developed a data-driven design methodology for UUV hull design. Specifically, we first developed and implemented a deep learning (DL) algorithm for predicting the resistance of the UUV with different hull shapes under different velocities and attack angles. By combining the proposed DL algorithm with the particle swarm optimization (PSO) algorithm in UUV hull design, we propose a data-driven AIAD methodology. A path-energy-consumption-based experiment conducted with the proposed methodology showed that the design methodology maintains efficiency and reliability while overcoming the high design workload.
"Artificial Intelligence Aided Design (AIAD) of Hull Form of Unmanned Underwater Vehicles (UUVs) for Minimization of Energy Consumption," by Yu Ao, Jian Xu, Dapeng Zhang, and Shaofan Li. Journal of Computing and Information Science in Engineering, 2023-06-02. DOI: 10.1115/1.4062661.
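The PSO search over hull-shape parameters can be sketched with the trained DL resistance surrogate replaced by a toy cost function (here a shifted sphere); the swarm parameters are generic defaults, not the paper's settings.

```python
import numpy as np

def pso(cost, dim, n=30, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; `cost` stands in for a trained
    surrogate evaluated on a vector of hull-shape parameters."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pcost.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Toy surrogate: minimum at parameter vector (1, 1, 1).
best, val = pso(lambda p: ((p - 1.0) ** 2).sum(), dim=3)
```

Swapping the lambda for a neural-network forward pass is what makes each swarm evaluation cheap enough to optimize over an entire path.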
This paper presents a potential cybersecurity risk in Mixed Reality (MR)-based smart manufacturing applications: digital passwords can be deciphered from a single RGB camera that captures the user's mid-air gestures. We first created a testbed, an MR-based smart factory management system consisting of mid-air gesture-based user interfaces (UIs) on a video see-through MR head-mounted display (HMD). To interact with the UIs and input information, the user's hand movements and gestures are tracked by the MR system. The experiment estimates the password entered by users through mid-air hand gestures on a virtual numeric keypad. To achieve this goal, we developed a lightweight machine learning-based hand position tracking and gesture recognition method. This method takes either streaming video or recorded video clips (taken by a single RGB camera in front of the user) as input, where the videos record the users' hand movements and gestures but not the virtual UIs. Assuming the size, position, and layout of the keypad are known, the method estimates the password through hand gesture recognition and finger position detection. The evaluation results indicate the effectiveness of the proposed method, with accuracies of 97.03%, 94.06%, and 83.83% for 2-digit, 4-digit, and 6-digit passwords, respectively, using real-time video streaming as input.
"“I can see your password”: A case study about cybersecurity risks in mid-air interactions of mixed reality-based smart manufacturing applications," by Wenhao Yang, Xiwen Dengxiong, Xueting Wang, Yidan Hu, and Yunbo Zhang. Journal of Computing and Information Science in Engineering, 2023-06-01. DOI: 10.1115/1.4062658.
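The final inference step reduces to geometry: once fingertip positions are tracked in the camera frame and the keypad's size, position, and layout are assumed known, each detected press maps to its nearest key. The layout units and press coordinates below are hypothetical.

```python
# Hypothetical keypad geometry: keys 1-9 in a 3x3 grid, one unit apart,
# with 0 centered below the bottom row.
KEYPAD = {str(d): (((d - 1) % 3) * 1.0, ((d - 1) // 3) * 1.0) for d in range(1, 10)}
KEYPAD["0"] = (1.0, 3.0)

def nearest_key(x: float, y: float) -> str:
    """Map a detected fingertip press position to the closest key."""
    return min(KEYPAD, key=lambda k: (KEYPAD[k][0] - x) ** 2 + (KEYPAD[k][1] - y) ** 2)

def infer_password(presses):
    return "".join(nearest_key(x, y) for x, y in presses)

# Noisy press positions recovered from tracking.
pwd = infer_password([(0.1, -0.1), (2.2, 1.1), (0.9, 3.2)])
```

This is also why the reported accuracy drops with password length: each digit's recognition error compounds multiplicatively across presses.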
Phong Nguyen, Joseph Choi, H.S. Udaykumar, Stephen Baek
Many mechanical engineering applications call for multiscale computational modeling and simulation. However, solving complex multiscale systems remains computationally onerous due to the high dimensionality of the solution space. Recently, machine learning (ML) has emerged as a promising solution that can serve as a surrogate for, accelerate, or augment traditional numerical methods. Pioneering work has demonstrated that ML provides solutions to governing systems of equations with comparable accuracy to those obtained using direct numerical methods, but with significantly faster computational speed. These high-speed, high-fidelity estimations can facilitate the solving of complex multiscale systems by providing a better initial solution to traditional solvers. This paper provides a perspective on the opportunities and challenges of using ML for complex multiscale modeling and simulation. We first outline the current state-of-the-art ML approaches for simulating multiscale systems and highlight some of the landmark developments. Next, we discuss current challenges for ML in multiscale computational modeling, such as data and discretization dependence, interpretability, and data sharing and collaborative platform development. Finally, we suggest several potential research directions for the future.
"Challenges and Opportunities for Machine Learning in Multiscale Computational Modeling," by Phong Nguyen, Joseph Choi, H.S. Udaykumar, and Stephen Baek. Journal of Computing and Information Science in Engineering, 2023-05-25. DOI: 10.1115/1.4062495.
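The "better initial solution" idea can be illustrated on a scalar root-finding problem: a stand-in "surrogate" guess near the root lets Newton's method converge in fewer iterations than a naive start. The equation and the surrogate guess are illustrative only, not taken from the paper.

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method; returns (root, iterations used)."""
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x -= f(x) / df(x)
        n += 1
    return x, n

f = lambda x: x ** 3 - 2 * x - 5      # classic test equation, root near 2.0946
df = lambda x: 3 * x ** 2 - 2

naive_root, naive_iters = newton(f, df, x0=10.0)   # naive starting point
surrogate_guess = 2.1                              # pretend ML prediction near the root
ml_root, ml_iters = newton(f, df, x0=surrogate_guess)
```

The same pattern scales up: an ML surrogate's field prediction seeds an iterative PDE solver, which then only has to correct a small residual.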
K. Elsayed, Adam Dachowicz, M. Atallah, Jitesh H. Panchal
The digitization of manufacturing has transformed the product realization process across many industries, from aerospace and automotive to medicine and healthcare. While this progress has accelerated product development cycles and enabled designers to create products with previously unachievable complexity and precision, it has also opened the door to a broad array of unique security concerns, from theft of intellectual property to supply chain attacks and counterfeiting. To address these concerns, information embedding (e.g., watermarks and fingerprints) has emerged as a promising solution that enhances product security and traceability. Information embedding techniques involve storing unique and secure information within parts, making these parts easier to track and to verify for authenticity. However, a successful information embedding scheme requires information to be transmitted in physical parts both securely and in a way that is accessible to end users. Ensuring these qualities introduces unique computational and engineering challenges. For instance, these qualities require the embedding scheme designer to have an accurate model of the cyber-physical processes needed to embed information during manufacturing and read that information later in the product life cycle, as well as models of the cyber-physical, economic, and/or industrial processes that may degrade that information through natural wear-and-tear, or through intentional attacks by determined adversaries. This paper discusses challenges and research opportunities for the engineering design and manufacturing community in developing methods for efficient information embedding in manufactured products.
"Information Embedding for Secure Manufacturing: Challenges and Research Opportunities," by K. Elsayed, Adam Dachowicz, M. Atallah, and Jitesh H. Panchal. Journal of Computing and Information Science in Engineering, 2023-05-23. DOI: 10.1115/1.4062600.
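As a toy illustration of information embedding (not any scheme from the paper), one can hide bits in the parity of non-critical dimensions quantized within tolerance: the reader recovers the bits from a measurement, while the part stays within spec.

```python
STEP = 0.001  # embedding quantum, assumed to sit well inside the part tolerance

def embed(dims, bits):
    """Snap each dimension to the STEP grid, then force its parity to the bit."""
    out = []
    for d, b in zip(dims, bits):
        q = round(d / STEP)
        if q % 2 != b:
            q += 1          # nudge by one quantum to encode the bit
        out.append(q * STEP)
    return out

def read(dims):
    """Recover the embedded bits from measured dimensions."""
    return [round(d / STEP) % 2 for d in dims]

marked = embed([12.5003, 8.0001, 3.2499], [1, 0, 1])
recovered = read(marked)
```

A realistic scheme would add error correction and keying so that wear or an adversary's re-machining cannot silently alter the message, which is exactly the degradation/attack modeling challenge the paper raises.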
Douglas L. Van Bossuyt, Britta Hale, R. Arlitt, N. Papakonstantinou
In an age of a worsening global threat landscape and accelerating uncertainty, the design and manufacture of systems must increase resilience and robustness across both the system itself and the entire systems design process. We generally trust our colleagues after initial clearance/background checks, and we trust systems to function as intended and within operating parameters after safety engineering review, verification, validation, and/or system qualification testing. This approach has led to increased insider threat impacts; we therefore suggest moving to the "trust, but verify" approach embodied by the Zero-Trust paradigm. Zero-Trust is increasingly adopted for network security but has not seen wide adoption in systems design and operation. Achieving Zero-Trust throughout the system lifecycle will help ensure that no single bad actor, whether human or machine learning / artificial intelligence (ML/AI), can induce failure anywhere in a system's lifecycle. Additionally, while ML/AI and its associated risks are already entrenched within the operations phase of many systems' lifecycles, ML/AI is gaining traction during the design phase. For example, generative design algorithms are increasingly popular, but their potential risks are less well understood. Adopting the Zero-Trust philosophy helps ensure robust and resilient design, manufacture, operations, maintenance, upgrade, and disposal of systems. We outline the rewards and challenges of implementing Zero-Trust and propose the Framework for Zero-Trust for the System Design Lifecycle. The paper highlights several areas of ongoing research, with focus on high-priority areas where the community should concentrate its efforts.
"Zero-Trust for the System Design Lifecycle," by Douglas L. Van Bossuyt, Britta Hale, R. Arlitt, and N. Papakonstantinou. Journal of Computing and Information Science in Engineering, 2023-05-23. DOI: 10.1115/1.4062597.
Prediction of the remaining useful life (RUL) is of great significance for ensuring the safe operation of industrial equipment and reducing the cost of regular preventive maintenance. However, complex operating conditions and various fault modes make it difficult for existing prediction methods to extract features containing sufficient degradation information. We propose a self-supervised learning method based on a variational autoencoder (VAE) to extract features describing the data's operating conditions and fault modes. A clustering algorithm is then applied to the extracted features to divide data from different failure modes into different categories and reduce the impact of complex working conditions on estimation accuracy. To verify the effectiveness of the proposed method, we conduct experiments with different network structures on the C-MAPSS dataset; the results verify that our method effectively improves the feature extraction capability of the model. In addition, the experimental results demonstrate the superiority and necessity of clustering on hidden features rather than raw data.
"Feature Extraction Based on Self-Supervised Learning for Remaining Useful Life Prediction," by Zhenjun Yu, Ningbo Lei, Yu Mo, Xin Xu, Xiu Li, and Biqing Huang. Journal of Computing and Information Science in Engineering, 2023-05-23. DOI: 10.1115/1.4062599.
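The clustering step can be sketched with plain k-means applied to latent features; in the paper the inputs would be VAE encodings of sensor data, whereas the well-separated Gaussian blobs below are synthetic stand-ins.

```python
import numpy as np

def kmeans(Z, k, iters=50, seed=0):
    """Plain k-means on latent features Z (n,d); returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([Z[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(1)
# Two stand-in "failure mode" blobs in a 4-d latent space.
Z = np.vstack([rng.normal(0.0, 0.3, (40, 4)), rng.normal(5.0, 0.3, (40, 4))])
labels = kmeans(Z, 2)
```

The point of clustering in latent space rather than on raw signals is that the VAE encoder has already collapsed nuisance variation (noise, operating regime) so the modes separate cleanly, as in this toy case.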
Customization is an increasing trend in the fashion product industry, reflecting individual lifestyles. Previous studies have examined virtual footwear try-on in augmented reality (AR) using a depth camera. However, the depth camera restricts the deployment of this technology in practice. This research instead estimates the 6-DoF pose of a human foot from a color image using deep learning models. We construct a training dataset consisting of automatically annotated synthetic and real foot images. Three convolutional neural network models (DOPE, DOPE2, and YOLO6d) are trained on the dataset to predict the foot pose in real time. Model performance is evaluated using metrics for accuracy, computational efficiency, and training time. A prototyping system implementing the best model demonstrates the feasibility of virtual footwear try-on using an RGB camera. Test results also indicate the necessity of real training data to bridge the reality gap in estimating the human foot pose.
"Virtual Footwear Try-on in Augmented Reality using Deep Learning Models," by Chih-Hsing Chu, Ting-Yang Chou, and S. Liu. Journal of Computing and Information Science in Engineering, 2023-05-23. DOI: 10.1115/1.4062596.
Cyber-physical-human systems (CPHS) are smart products and systems that offer services to their customers, supported by back-end systems (e.g., information, finance) and other infrastructure. In this paper, initial concepts and research issues are presented regarding the computational design of CPHS, CPHS families, and generations of these families. Significant research gaps are identified that should drive future research directions. The approach proposed here is a novel combination of generative and configuration design methods with product family design methodology and an explicit consideration of usability across all human stakeholders. With this approach, a wide variety of CPHS, including customized CPHS, can be developed quickly by sharing technologies and modules across CPHS family members, while ensuring user acceptance. The domain of assistive technology is used in this paper to provide an example field of practice that could benefit from a systematic design methodology and opportunities to leverage technology solutions.
"Research Issues in the Generative Design of Cyber-Physical-Human Systems," by D. Rosen and C. Choi. Journal of Computing and Information Science in Engineering, 2023-05-23. DOI: 10.1115/1.4062598.
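Configuration design over a shared platform can be sketched as constrained enumeration of module options, with a usability check per stakeholder profile. The modules, stakeholder profiles, and usability scores below are entirely hypothetical, chosen to echo the assistive-technology example.

```python
from itertools import product

# Hypothetical shared-platform module catalog for a CPHS family.
MODULES = {
    "sensing": ["camera", "lidar"],
    "interface": ["voice", "touch", "switch"],
    "compute": ["edge", "cloud"],
}

# Assumed usability scores of each interface option per stakeholder profile.
USABILITY = {"voice": {"low_vision": 0.9, "low_dexterity": 0.8},
             "touch": {"low_vision": 0.3, "low_dexterity": 0.4},
             "switch": {"low_vision": 0.7, "low_dexterity": 0.9}}

def configurations(stakeholder, threshold=0.6):
    """Yield module configurations acceptable to the given stakeholder."""
    keys = list(MODULES)
    for combo in product(*MODULES.values()):
        cfg = dict(zip(keys, combo))
        if USABILITY[cfg["interface"]][stakeholder] >= threshold:
            yield cfg

family = list(configurations("low_vision"))
```

Filtering on usability up front, before any generative sizing or layout step, is one way to realize the paper's call for explicit consideration of all human stakeholders in family derivation.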