Pub Date : 2023-01-13  DOI: 10.1017/S0890060422000191
N. Muthumanickam, J. Duarte, T. Simpson
Abstract Modern-day building design projects require multidisciplinary expertise from architects and engineers across the various phases of design (conceptual, preliminary, and detailed) and construction. The Architecture, Engineering and Construction (AEC) community has recently shifted gears toward leveraging design optimization techniques to make well-informed decisions in the design of buildings. However, most building design optimization efforts are either multidisciplinary optimization confined to a specific design phase (conceptual/preliminary/detailed) or single-discipline optimization (structural/thermal/daylighting/energy) spanning multiple phases. The complexity of changing the optimization setup as the design progresses through subsequent phases, interoperability issues between modeling and physics-based analysis tools used at later stages, and the lack of an appropriate level of design detail to get meaningful results from these sophisticated analysis tools are a few of the challenges that limit multi-phase multidisciplinary design optimization (MDO) in the AEC field. This paper proposes a computational building design platform leveraging concurrent engineering techniques such as interactive problem structuring, simulation-based optimization using machine learning-based metamodels for energy and daylighting, and tradespace visualization. The proposed multi-phase concurrent MDO framework is demonstrated by using it to design and optimize a sample office building for energy and daylighting objectives across multiple phases. Furthermore, limitations of the proposed framework and future avenues of research are listed.
Title: "Multidisciplinary concurrent optimization framework for multi-phase building design process"
Journal: AI EDAM (Artificial Intelligence for Engineering Design, Analysis and Manufacturing), Impact Factor 2.1
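The metamodel-based optimization mentioned in the abstract follows a common pattern: evaluate a few expensive simulations, fit a cheap surrogate, and optimize on the surrogate. The sketch below illustrates that pattern only; the quadratic "energy" function, the window-ratio variable, and all numbers are invented and are not from the paper.

```python
def simulate_energy(window_ratio):
    # Hypothetical stand-in for an expensive energy simulation.
    return 100.0 * (window_ratio - 0.35) ** 2 + 50.0

# 1. Evaluate a handful of expensive simulation samples.
samples = [0.1, 0.3, 0.5, 0.7, 0.9]
results = [simulate_energy(x) for x in samples]

def fit_quadratic(xs, ys):
    # Least-squares fit of y ~ a*x^2 + b*x + c via the 3x3 normal equations.
    def s(p):
        return sum(x ** p for x in xs)
    def t(p):
        return sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[s(4), s(3), s(2)], [s(3), s(2), s(1)], [s(2), s(1), len(xs)]]
    rhs = [t(2), t(1), t(0)]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coef = []
    for i in range(3):  # Cramer's rule, one column at a time
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coef.append(det3(Ai) / d)
    return coef

# 2. Fit the cheap metamodel, then 3. optimize on it instead of the simulator.
a, b, c = fit_quadratic(samples, results)
best = min((x / 1000 for x in range(1001)), key=lambda x: a * x * x + b * x + c)
print(round(best, 2))  # surrogate optimum, near the true minimum at 0.35
```

In a real multi-phase workflow the surrogate would be refit as the design gains detail; the loop structure stays the same.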
Pub Date : 2023-01-12  DOI: 10.1017/S0890060422000245
Erman Çakıt, M. Dağdeviren
Abstract Determining accurate standard times using direct measurement techniques is especially challenging in companies that do not have a proper environment for time measurement studies or that manufacture items requiring complex production schedules. New and specific time measurement techniques are required for such companies. This research developed a novel time estimation approach based on several machine learning methods. The set of inputs collected in the manufacturing environment included the number of products, the number of welding operations, the product's surface area factor, difficulty/working environment factors, and the number of metal forming processes. The data were collected from one of the largest bus manufacturing companies in Turkey. Experimental results demonstrate that, when model accuracy was measured using performance measures, k-nearest neighbors outperformed the other machine learning techniques in terms of prediction accuracy. “The number of welding operations” and “the number of pieces” were found to be the most effective parameters. The findings show that machine learning algorithms can estimate standard time and can be used by other companies that manufacture similar products to lower production costs, increase productivity, and ensure efficient execution of their operating processes.
Title: "Comparative analysis of machine learning algorithms for predicting standard time in a manufacturing environment"
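The core of the best-performing technique above, k-nearest neighbors regression, can be sketched in a few lines. This is an illustrative toy, not the paper's model; the job features and standard times below are invented.

```python
def knn_predict(train_X, train_y, query, k=3):
    # Rank past jobs by squared Euclidean distance to the query job.
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    # Predict the average standard time of the k most similar jobs.
    nearest = [y for _, y in ranked[:k]]
    return sum(nearest) / k

# Features per job: (number of pieces, welding operations, surface area factor)
train_X = [(10, 4, 1.2), (20, 8, 1.5), (12, 5, 1.1), (30, 12, 2.0)]
train_y = [35.0, 70.0, 40.0, 110.0]  # standard times in minutes (invented)

print(knn_predict(train_X, train_y, (11, 4, 1.2), k=2))  # → 37.5
```

In practice the features would be normalized so that "number of pieces" does not dominate the distance purely by scale.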
Pub Date : 2023-01-06  DOI: 10.1017/S0890060422000233
Chun Kit Jeffery Hou, K. Behdinan
Abstract Cold rolling involves large deformation of the workpiece, leading to a temperature increase due to plastic deformation. This process is highly nonlinear, and fully modeling it requires large computation times. This paper describes the use of dimension-reduced neural networks (DR-NNs) for predicting temperature changes due to plastic deformation in a two-stage cold rolling process. The main objective of these models is to reduce computational demand, error, and uncertainty in predictions. Material properties, feed velocity, sheet dimensions, and friction models are introduced as inputs for the dimensionality reduction. Different linear and nonlinear dimensionality reduction methods reduce the input space to a smaller set of principal components. The principal components are fed as inputs to the neural networks for predicting the output temperature change. The DR-NNs are compared against a standalone neural network and show improvements in computational time and prediction uncertainty.
Title: "Neural networks with dimensionality reduction for predicting temperature change due to plastic deformation in a cold rolling simulation"
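The DR-NN idea, reducing the inputs with PCA and then fitting a predictor on the principal components, can be sketched as follows. A linear least-squares model stands in for the neural network to keep the sketch short, and all data are synthetic; nothing here is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = np.array([3.0, 3.0, 3.0, 0.1, 0.1, 0.1])  # 6 raw process inputs
X = rng.normal(size=(50, 6)) * scale              # synthetic rolling data
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + 0.01 * rng.normal(size=50)

# PCA: center, eigendecompose the covariance, keep the top-3 components.
Xc = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending eigenvalues
Z = Xc @ vecs[:, ::-1][:, :3]                          # reduced 3-D inputs

# Fit the downstream predictor on the principal components.
coef, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
rmse = float(np.sqrt(np.mean((Z @ coef + y.mean() - y) ** 2)))
print(rmse < 0.5)  # the top components retain the variation that drives y
```

The reduction works here because the informative inputs carry most of the variance; with a different covariance structure, nonlinear reduction methods (as compared in the paper) may be needed.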
Pub Date : 2023-01-01  DOI: 10.1017/s0890060423000185
Yan Liu, Qingquan Jian, Claudia M. Eckert
Abstract Product data sharing is fundamental for collaborative product design and development. Although the STandard for Exchange of Product model data (STEP) enables this by providing a unified data definition and description, it lacks the ability to provide a more semantically enriched product data model. Many researchers suggest converting STEP models to ontology models and propose rules for mapping EXPRESS, the descriptive language of STEP, to the Web Ontology Language (OWL). In most research, this mapping is a manual process, which is time-consuming and prone to misunderstandings. To support this conversion, this research proposes an automatic method based on natural language processing (NLP) techniques. The similarities of language elements in the reference manuals of EXPRESS and OWL have been analyzed in terms of three aspects: heading semantics, text semantics, and heading hierarchy. The paper focuses on translating between language elements, but the same approach has also been applied to the definition of the data models. Two forms of semantic analysis with NLP are proposed: a combination of Random Walks (RW) and Global Vectors for Word Representation (GloVe) for heading semantic similarity, and a Decoding-enhanced BERT with disentangled attention (DeBERTa) ensemble model for text semantic similarity. The evaluation shows the feasibility of the proposed method. The results not only cover most language elements mapped by current research but also identify mappings of elements that have not been included. It also indicates the potential to identify the OWL segments for the EXPRESS declarations.
Title: "A semantic similarity-based method to support the conversion from EXPRESS to OWL"
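The mapping step can be pictured as nearest-neighbor matching in an embedding space. In this toy sketch the 3-D vectors are invented stand-ins for real GloVe or DeBERTa embeddings, and the element pairs are illustrative only.

```python
import math

def cosine(u, v):
    # Cosine similarity: the dot product of the normalized vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented 3-D stand-ins for pretrained embedding vectors.
express_elems = {"ENTITY": [0.9, 0.1, 0.0], "ENUMERATION": [0.1, 0.8, 0.2]}
owl_elems = {"owl:Class": [0.8, 0.2, 0.1], "owl:oneOf": [0.0, 0.9, 0.3]}

# Map each EXPRESS element to its most similar OWL element.
mapping = {
    name: max(owl_elems, key=lambda o: cosine(vec, owl_elems[o]))
    for name, vec in express_elems.items()
}
print(mapping)  # → {'ENTITY': 'owl:Class', 'ENUMERATION': 'owl:oneOf'}
```

The paper's contribution lies in how the embeddings are produced (heading vs. text semantics, ensembling); the final alignment step reduces to a similarity ranking like this one.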
Pub Date : 2022-11-08  DOI: 10.1017/S0890060422000208
I. Yang, H. Prayogo
Abstract Although an accurate reliability assessment is essential for building resilient infrastructure, it usually requires time-consuming computation. To reduce the computational burden, machine learning-based surrogate models have been used extensively to predict the probability of failure for structural designs. Nevertheless, a surrogate model still needs to compute and assess a certain number of training samples to achieve sufficient prediction accuracy. This paper proposes a new surrogate method for reliability analysis called Adaptive Hyperball Kriging Reliability Analysis (AHKRA). The AHKRA method revolves around a hyperball-based sampling region. The radius of the hyperball represents the precision of the reliability analysis; it is iteratively adjusted based on the number of samples required to evaluate the probability of failure with a target coefficient of variation. AHKRA draws samples from a hyperball instead of an n-sigma rule-based sampling region to avoid the curse of dimensionality. The application of AHKRA in ten mathematical and two practical cases verifies its accuracy, efficiency, and robustness, as it outperforms previous Kriging-based methods.
Title: "Adaptive hyperball Kriging method for efficient reliability analysis"
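One ingredient of the method above, drawing samples uniformly inside an n-dimensional hyperball, can be sketched with the standard Gaussian-direction trick. This is not the authors' code; the dimension and radius below are arbitrary.

```python
import math
import random

def sample_hyperball(n, radius, rng):
    # Direction: a normalized Gaussian vector is uniform on the unit sphere.
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in g))
    # Radius: u**(1/n) makes the points uniform in volume, not just in radius.
    r = radius * rng.random() ** (1.0 / n)
    return [r * x / norm for x in g]

rng = random.Random(42)
pts = [sample_hyperball(10, 2.0, rng) for _ in range(200)]
radii = [math.sqrt(sum(x * x for x in p)) for p in pts]
print(max(radii) <= 2.0)  # every sample lies inside the ball
```

In 10 dimensions almost all of the ball's volume sits near the boundary, which is exactly why a ball-shaped region behaves differently from an n-sigma hypercube as the dimension grows.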
Pub Date : 2022-11-04  DOI: 10.1017/S0890060422000178
Lingyu Wang, Siyu Zhu, Jin Qi, Jie Hu
Abstract In an era of rapid product updates and intense competition, aesthetic design has become increasingly important in various fields, as the aesthetic feelings of customers largely influence their purchase preferences. However, the quantification of aesthetic feeling is still a very subjective process due to vague evaluations, and determining form parameters according to aesthetics has hitherto been difficult. The aesthetic measure has recently arisen as a prominent tool for this purpose, using formulas derived from aesthetic theory; however, as existing studies reveal, it needs to be customized with deterministic and objective methods to be reliable in practical use. To facilitate this application, this paper proposes an evolutionary form design method integrating aesthetic dimension selection and parameter optimization. After initial aesthetic dimensions are summarized, aesthetic dimension selection based on expert decision-making and particle swarm optimization (PSO) is carried out. With the filtered aesthetic dimensions, design parameters are optimized with NSGA-II (the non-dominated sorting genetic algorithm II). The quality of the Pareto solutions obtained as design schemes is assessed by three criteria in a sensitivity analysis of crossover probability, mutation probability, and population size. Our experiment on bicycle form design shows that the proposed evolutionary form design method can rapidly generate numerous and varied aesthetic design schemes. This is very useful for both product redesign and innovative new product development.
Title: "An evolutionary form design method based on aesthetic dimension selection and NSGA-II"
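At the heart of NSGA-II is non-dominated sorting: keeping the design schemes that no other scheme beats on every objective. A minimal sketch with invented objective values (both minimized; the "bicycle form" framing is illustrative only):

```python
def dominates(a, b):
    # a dominates b if a is no worse in all objectives and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (aesthetic penalty, cost) for five candidate bicycle forms -- invented data
schemes = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(schemes))  # → [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
```

The full algorithm adds crowding-distance sorting and genetic operators on top of this front extraction, which is why crossover probability, mutation probability, and population size are the natural targets of the paper's sensitivity analysis.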
Abstract A growing trend in requirements elicitation is the use of machine learning (ML) techniques to automate the cumbersome requirement handling process. This literature review summarizes and analyzes studies that incorporate ML and natural language processing (NLP) into requirements elicitation. We answer the following research questions: (1) What requirement elicitation activities are supported by ML? (2) What data sources are used to build ML-based requirement solutions? (3) What technologies, algorithms, and tools are used to build ML-based requirement elicitation? (4) How are ML-based requirements elicitation methods constructed? (5) What tools are available to support ML-based requirements elicitation methodology? Keywords derived from these research questions led to 975 records initially retrieved from 7 scientific search engines. Finally, 86 articles were selected for inclusion in the review. As the primary research finding, we identified 15 ML-based requirement elicitation tasks and classified them into four categories. Twelve different data sources for building data-driven models are identified and classified in this literature review. In addition, we categorized the techniques for constructing ML-based requirement elicitation methods into five parts: Data Cleansing and Preprocessing, Textual Feature Extraction, Learning, Evaluation, and Tools. More specifically, 3 categories of preprocessing methods, 3 different feature extraction strategies, 12 different families of learning methods, 2 different evaluation strategies, and various off-the-shelf publicly available tools were identified.
Furthermore, we discussed the limitations of the current studies and proposed eight potential directions for future research.
Title: "Machine learning in requirements elicitation: a literature review"
Authors: Cheligeer Cheligeer, Jingwei Huang, Guosong Wu, N. Bhuiyan, Yuan Xu, Yong Zeng
Pub Date : 2022-10-26  DOI: 10.1017/S0890060422000166
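The pipeline structure identified in the review (preprocessing, feature extraction, learning, evaluation) can be illustrated end to end with a deliberately tiny example: bag-of-words features and a nearest-centroid classifier labeling invented requirement sentences. None of this is drawn from a specific reviewed study.

```python
# Invented training sentences labeled functional vs. quality requirement.
train = [
    ("the system shall export reports as pdf", "functional"),
    ("users shall be able to reset passwords", "functional"),
    ("the page must load within two seconds", "quality"),
    ("the service should be available 99 percent of the time", "quality"),
]

vocab = sorted({w for text, _ in train for w in text.split()})

def featurize(text):
    words = text.lower().split()             # preprocessing: lowercase, tokenize
    return [words.count(w) for w in vocab]   # feature extraction: term counts

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# Learning: one centroid per label in the bag-of-words space.
centroids = {
    label: centroid([featurize(text) for text, lab in train if lab == label])
    for label in ("functional", "quality")
}

def classify(text):
    v = featurize(text)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(v, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(classify("the system shall export invoices"))  # → functional
```

A real study would replace each stage with the richer options the review catalogs (stemming, embeddings, supervised learners, cross-validated evaluation), but the four-stage skeleton is the same.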
Pub Date : 2022-10-17  DOI: 10.1017/S089006042200021X
Kaixin Sha, Yupeng Li, Zhihua Zhao, Na Zhang
Abstract Redesign is a widespread strategy for product improvement whose essence is the optimization of design parameters (DPs) considering the trade-off between customer satisfaction and cost concerns. As with the relation between customer requirements (CRs) and customer satisfaction, the sensitivity of customer satisfaction varies across different DPs. In this study, a sensitivity-enhanced customer satisfaction function is defined for redesign model construction. This fills a gap in product redesign research: the lack of consideration and quantification of customer satisfaction sensitivity. First, a sensitivity index is defined based on Kano indices for analyzing the sensitivity of customer satisfaction in different DP categories. Second, the traditional customer satisfaction function is improved by injecting the sensitivity of customer satisfaction to variations of the DPs. Subsequently, a DP optimization model is established to maximize the shared surplus between customers and the enterprise. Finally, a case study involving the redesign of an automobile braking system is implemented to demonstrate the effectiveness and rationality of the proposed approach. The results show that the improved customer satisfaction function can reflect a more nuanced relationship between customer satisfaction and the fulfilment level of DPs. Additionally, the proposed redesign model helps designers determine the target values of DPs under a better trade-off and enhances enterprise competitiveness.
Title: "Product redesign considering the sensitivity of customer satisfaction"
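A sensitivity-weighted satisfaction function can be illustrated with a deliberately simple form: the same fulfilment level produces different satisfaction depending on a sensitivity index. The exponent form and all numbers below are invented for illustration and are not the paper's formulation.

```python
def satisfaction(fulfilment, sensitivity):
    # fulfilment in [0, 1]: how fully the DP meets its target value.
    # sensitivity > 1 amplifies, < 1 dampens, the customer response.
    return fulfilment ** (1.0 / sensitivity)

# Hypothetical DPs: customers react strongly to braking distance,
# weakly to pedal feel, at the same 80% fulfilment level.
strong = satisfaction(0.8, sensitivity=3.0)
weak = satisfaction(0.8, sensitivity=0.5)
print(round(strong, 3), round(weak, 3))  # → 0.928 0.64
```

An optimization model like the paper's would then trade the cost of raising each DP's fulfilment against the satisfaction gain its sensitivity predicts.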
Pub Date : 2022-09-29  DOI: 10.1017/S0890060422000154
A. Bhatt, A. Chakrabarti
Abstract The goal of this paper is to develop and test a gamified design thinking framework, including its pedagogical elements, for supporting various learning objectives for school students. By synthesizing the elements and principles of design, learning, and games, the authors propose a framework for a learning tool for school students to fulfil a number of learning objectives; the framework includes a design thinking process called “IISC Design Thinking” and its gamified version called “IISC DBox”. The effectiveness of the framework as a learning tool has been evaluated by conducting workshops that involved 77 school students. The results suggest that the gamification used had a positive effect on the design outcomes, fulfilment of learning objectives, and learners' achievements, indicating the potential of the framework to offer an effective, gamified tool for promoting design thinking in school education. In addition to presenting results from empirical studies on the fulfilment of the objectives, this paper also proposes an approach that can be used for identifying appropriate learning objectives, selecting appropriate game elements to fulfil these objectives, and integrating those game elements with design and learning elements. The paper also proposes a general approach for assessing the effectiveness of a gamified version for attaining a given set of learning objectives. The methodology used in this paper can thus be used as a reference for developing and evaluating a gamified version of a design thinking course suitable not only for school education but also for other domains (e.g., engineering, management) with minimal changes.
Title: Gamification of design thinking: a way to enhance effectiveness of learning
Pub Date: 2022-09-29 · DOI: 10.1017/S0890060422000117
Zhaotong Yang, Mei Yang, R. Sisson, Yanhua Li, Jianyu Liang
Abstract In this work, an artificial neural network model is established to understand the relationship among the tensile properties of as-printed Ti6Al4V parts, annealing parameters, and the tensile properties of annealed Ti6Al4V parts. The database was established by collecting published reports on the annealing treatment of selective laser melting (SLM) Ti6Al4V from 2006 to 2020. Using the established model, it is possible to prescribe annealing parameters and predict post-annealing properties for SLM Ti6Al4V parts with high confidence. The model shows high accuracy in predicting yield strength (YS) and ultimate tensile strength (UTS). It is found that the YS and UTS are sensitive to the annealing parameters, including temperature and holding time, and also to the initial YS and UTS of the as-printed parts. The model suggests that an annealing process with a holding time of less than 4 h and a holding temperature below 850°C is desirable for as-printed Ti6Al4V parts to reach the YS required by the ASTM standard. By studying the collected data on microstructure and tensile properties of annealed Ti6Al4V, a new Hall–Petch relationship is proposed in this work to correlate grain size and YS for annealed SLM Ti6Al4V parts. The prediction of strain to failure shows lower accuracy than the predictions of YS and UTS due to the large scatter of the experimental data collected from the published reports.
Title: Machine learning model to predict tensile properties of annealed Ti6Al4V parts prepared by selective laser melting
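The Hall–Petch relationship mentioned in the abstract above links yield strength to grain size as YS = σ₀ + k/√d. A minimal sketch of that form follows; note that the coefficients `sigma0` and `k` below are hypothetical placeholders for illustration, not the values fitted for annealed SLM Ti6Al4V in the paper:

```python
import math

def hall_petch_ys(grain_size_um: float, sigma0: float = 900.0, k: float = 250.0) -> float:
    """Hall-Petch relation: YS = sigma0 + k / sqrt(d).

    grain_size_um -- mean grain size d (micrometres)
    sigma0, k     -- material constants (hypothetical placeholders,
                     not the coefficients fitted in the paper)
    Returns the predicted yield strength in MPa.
    """
    if grain_size_um <= 0:
        raise ValueError("grain size must be positive")
    return sigma0 + k / math.sqrt(grain_size_um)

# Finer grains give a higher predicted yield strength:
print(hall_petch_ys(1.0))  # 1150.0
print(hall_petch_ys(4.0))  # 1025.0
```

The inverse-square-root dependence is what makes grain refinement (here, via annealing below the beta-transus) an effective strengthening route.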