Dual-Objective Mechanobiological Growth Optimization for Heterogenous Lattice Structures
Amit Arefin, Paul F. Egan. Journal of Mechanical Design, doi:10.1115/1.4064241, published 2023-12-07.

Computational design is increasingly necessary for advancing biomedical technologies, particularly for complex systems with numerous trade-offs. For instance, in tissue scaffolds constructed from repeating unit cells, the structure's porosity and topology affect biological tissue and vasculature growth. Here, we adapt curvature-based tissue growth and agent-based vasculature models to predict scaffold mechanobiological growth. A non-dominated sorting genetic algorithm (NSGA-II) is used for dual-objective optimization of scaffold tissue and blood vessel growth with heterogeneous unit cell placement. Design inputs consist of unit cells of two different topologies, void unit cells, and beam diameters from 64 to 313 μm. Findings demonstrate a design heuristic for optimizing scaffolds by placing two selected unit cells, one favoring high tissue growth density and one favoring blood vessel growth, throughout the scaffold. The Pareto front of solutions demonstrates that scaffolds with large porous areas, termed Channel Voids or Small Voids, improve vasculature growth, while lattices with no larger void areas yield higher tissue growth. The results demonstrate the merit of computational investigations for characterizing tissue scaffold design trade-offs and provide a foundation for future multi-objective design optimization of complex biomedical systems.

If you build it, will they understand? Considerations for creating shared understanding through design artifacts
Sandeep Krishnakumar, Cynthia Letting, Nicolas F. Soria Zurita, Jessica Menold. Journal of Mechanical Design, doi:10.1115/1.4064239, published 2023-12-07.
Design representations play a pivotal role in the design process. In particular, design representations enable the formation of a shared understanding between team members, enhancing team performance. This paper explores the relationship between design representation modality and shared understanding among designers during communicative acts between design dyads. A mixed-methods study with 40 designers was conducted to investigate whether representation modality affects shared understanding and to identify the factors that shape shared understanding during communication. Quantitative results suggest that low-fidelity prototypes and sketches did not significantly differ in the shared understanding they facilitated within dyads. Qualitative analysis identified four factors at the representation and actor levels that influence how shared understanding is built between individuals during design communication. This research extends our understanding of the utility of design representations given the needs of communicative contexts; specifically, it demonstrates that designers must understand the perspectives of listeners during communication in order to create representations that accurately convey the information a listener seeks to gain.
{"title":"If you build it, will they understand? Considerations for creating shared understanding through design artifacts","authors":"Sandeep Krishnakumar, Cynthia Letting, Nicolas F. Soria Zurita, Jessica Menold","doi":"10.1115/1.4064239","DOIUrl":"https://doi.org/10.1115/1.4064239","url":null,"abstract":"\u0000 Design representations play a pivotal role in the design process. In particular, design representations enable the formation of a shared understanding between team members, enhancing team performance. This paper explores the relationship between design representation modality and shared understanding among designers during communicative acts between design dyads. A mixed-methods study with 40 designers was conducted to investigate if representation modality affects shared understanding and identify the factors that shape shared understanding during communication. Quantitative results suggest that low-fidelity prototypes and sketches did not significantly differ in terms of the shared understanding they facilitated within dyads. Qualitative analysis identified four factors at the representation- and actor-level that influence how shared understanding is built between individuals during design communication. This research extends our understanding of the utility of design representations given the needs of communicative contexts; specifically, this work demonstrates that designers must understand the perspectives of listeners during communication to create representations that accurately represent the information that a listener seeks to gain.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"54 15","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138593057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition
Tingwei Wang, Jingjun Yu, Hongzhe Zhao. Journal of Mechanical Design, doi:10.1115/1.4064240, published 2023-12-07.

Motivated by heat dissipation applications, rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition are proposed in this paper. Compared with existing designs, the expansion mechanism not only readily tessellates the plane through cellular design, owing to its regular polygon structure, but also provides motion amplification and superposition through its compliant displacement amplifier and rigid scissors. First, the scheme of the expansion mechanisms is introduced, with emphasis on the working principle of motion amplification and superposition. The configuration design of a family of expansion mechanisms is presented, covering varying numbers of edges, concave/convex properties, and inner/outer layouts. Second, the constraint conditions are derived, and analytical models relating the output performance of the expansion mechanisms to their dimensional parameters are developed. Third, the displacement amplification ratio of the expansion mechanisms is analyzed, along with the output performance of several typical mechanisms acting as cells that tessellate a plane of constrained area. Finally, the output performance of the expansion mechanisms is verified via finite element analysis. The results show that the proposed cellular expansion mechanisms facilitate plane tessellation and offer motion amplification and superposition, providing prospects in mechanism design fields such as metamaterials.
{"title":"Rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition","authors":"Tingwei Wang, Jingjun Yu, Hongzhe Zhao","doi":"10.1115/1.4064240","DOIUrl":"https://doi.org/10.1115/1.4064240","url":null,"abstract":"\u0000 Motivated by heat dissipation, the rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition are proposed in this paper. Compared with existing studies, the expansion mechanism is not only easy to realize the plane tessellation via cellular design due to its regular polygon structure, but also has the ability of motion amplification and superposition due to its compliant displacement amplifier and rigid scissors. Firstly, scheme of expansion mechanisms, especially working principle of motion amplification and superposition are introduced. The configuration design of a family of expansion mechanisms is presented, including varying number of edges, concave/convex property, inner/outer layout. Secondly, the constraint condition and analytical modeling of relations between output performances of expansion mechanisms and dimensional parameters is carried out. Third, the displacement amplification ratio of expansion mechanisms, and output performances of several typical expansion mechanisms when they are acted as cells to tessellate a plane with constrained area are analyzed. Finally, the output performances of expansion mechanisms are verified via the finite element analysis. The results show that proposed cellular expansion mechanisms are beneficial for realizing plane tessellation, offer motion amplification and superposition, which provide prospects in the field of mechanism design such as metamaterials.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"6 8","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Evaluation of Neural Network-based Derivatives for Topology Optimization
Joel C. Najmon, Andres Tovar. Journal of Mechanical Design, doi:10.1115/1.4064243, published 2023-12-07.

Neural networks have gained popularity for modeling complex nonlinear relationships. Their computational efficiency has led to their growing adoption in optimization methods, including topology optimization. Recently, there have been several contributions toward improving the derivatives of neural network outputs, which can improve their use in gradient-based optimization. However, a comparative study has yet to be conducted on the different methods for computing the sensitivity of neural network outputs with respect to their input features. This paper evaluates four derivative methods: the neural network's analytical Jacobian, the central finite difference method, the complex step method, and automatic differentiation. These methods are implemented in density-based and homogenization-based topology optimization using multilayer perceptrons (MLPs). For density-based topology optimization, the MLP approximates Young's modulus for the solid-isotropic-material-with-penalization (SIMP) model. For homogenization-based topology optimization, the MLP approximates the homogenized stiffness tensor of a representative volume element, e.g., a square cell microstructure with a rectangular hole. The comparative study is performed by solving two-dimensional topology optimization problems using the sensitivity coefficients from each derivative method. The evaluation covers initial sensitivity coefficients, convergence plots, and the final topologies, compliance, and design variables. The findings demonstrate that neural network-based sensitivity coefficients are sufficient for density-based and homogenization-based topology optimization. The neural network's Jacobian, the complex step method, and automatic differentiation produced identical sensitivity coefficients to working precision. The study's open-source code is provided through an included Python repository.
{"title":"Evaluation of Neural Network-based Derivatives for Topology Optimization","authors":"Joel C. Najmon, Andres Tovar","doi":"10.1115/1.4064243","DOIUrl":"https://doi.org/10.1115/1.4064243","url":null,"abstract":"\u0000 Neural networks have gained popularity for modeling complex non-linear relationships. Their computational efficiency has led to their growing adoption in optimization methods, including topology optimization. Recently, there have been several contributions towards improving derivatives of neural network outputs, which can improve their use in gradient-based optimization. However, a comparative study has yet to be conducted on the different derivative methods for the sensitivity of the input features on the neural network outputs. This paper aims to evaluate four derivative methods: analytical neural network's Jacobian, central finite difference method, complex step method, and automatic differentiation. These methods are implemented into density-based and homogenization-based topology optimization using multilayer perceptrons (MLPs). For density-based topology optimization, the MLP approximates Young's modulus for the solid-isotropic-material-with-penalization (SIMP) model. For homogenization-based topology optimization, the MLP approximates the homogenized stiffness tensor of a representative volume element, e.g., square cell microstructure with a rectangular hole. The comparative study is performed by solving two-dimensional topology optimization problems using the sensitivity coefficients from each derivative method. Evaluation includes initial sensitivity coefficients, convergence plots, and the final topologies, compliance, and design variables. The findings demonstrate that neural network-based sensitivity coefficients are sufficient for density-based and homogenization-based topology optimization. The neural network's Jacobian, complex step method, and automatic differentiation produced identical sensitivity coefficients to working precision. The study's open-source code is provided through an included Python repository.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"46 4","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138594018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Improved Fractional Moment Maximum Entropy Method with Polynomial Fitting
Gang Li, Yixuan Wang, Yan Zeng, W. He. Journal of Mechanical Design, doi:10.1115/1.4064247, published 2023-12-07.

The moment method is commonly used in reliability analysis, within which the maximum entropy method (MEM) and polynomial fitting (PF) are widely used for their accuracy and efficiency, respectively. In this paper, we propose a novel reliability analysis method that combines MEM and PF. The probability density function is preliminarily estimated using the fractional moment maximum entropy method (FM-MEM), and PF is then applied to further improve the accuracy. The proposed method effectively avoids the negative probability densities and function oscillations that can arise in PF. Moreover, the order of the exponential polynomial in the FM-MEM is adaptively selected during the preliminary solution process. An iterative procedure for the number of exponential polynomial terms is also proposed, using the integral of the moment error function and the integrals of the local and global negative probability density as convergence criteria. Four numerical examples and one engineering example are tested, and the results are compared with those of Monte Carlo simulation and the classical FM-MEM, demonstrating the good performance of the proposed method.
{"title":"An Improved Fractional Moment Maximum Entropy Method with Polynomial Fitting","authors":"Gang Li, Yixuan Wang, Yan Zeng, W. He","doi":"10.1115/1.4064247","DOIUrl":"https://doi.org/10.1115/1.4064247","url":null,"abstract":"\u0000 The moment method is commonly used in reliability analysis, in which the maximum entropy method (MEM) and polynomial fitting (PF) have been widely used due to their advantages in accuracy and efficiency, respectively. In this paper, we propose a novel reliability analysis method by combining MEM and PF. The probability density function is preliminarily estimated using the fractional moment maximum entropy method (FM-MEM), based on which PF is then used to further improve the accuracy. The proposed method can avoid the phenomenon of the negative probability density and function oscillations in PF effectively. Moreover, the order of the exponential polynomial in the FM-MEM is adaptively selected in the preliminary solution calculation process. An iterative process for the number of exponential polynomial terms is also proposed, using the integral of the moment error function and the integrals of the local and global negative probability density as the convergence criteria. Four numerical examples and one engineering example are tested, and the results are compared with those of the Monte Carlo simulation and the classical FM-MEM results, respectively, demonstrating the good performance of the proposed method.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"30 22","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138591071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Safeguarding Multi-fidelity Bayesian Optimization Against Large Model Form Errors and Heterogeneous Noise
Zahra Zanjani Foumani, Amin Yousefpour, Mehdi Shishehbor, R. Bostanabad. Journal of Mechanical Design, doi:10.1115/1.4064160, published 2023-11-30.
Bayesian optimization (BO) is a sequential optimization strategy that is increasingly employed in a wide range of areas such as materials design. In real-world applications, acquiring high-fidelity (HF) data through physical experiments or HF simulations is the major cost component of BO. To alleviate this bottleneck, multi-fidelity (MF) methods are used to forgo sole reliance on expensive HF data and reduce sampling costs by querying inexpensive low-fidelity (LF) sources whose data are correlated with HF samples. However, existing multi-fidelity BO (MFBO) methods operate under two assumptions that rarely hold in practical applications: (1) LF sources provide data that are well correlated with the HF data on a global scale, and (2) a single random process can model the noise in the MF data. These assumptions dramatically reduce the performance of MFBO when LF sources are only locally correlated with the HF source or when the noise variance varies across the data sources. Herein, we view these two limitations as sources of uncertainty and address them by building an emulator that more accurately quantifies uncertainties. Specifically, our emulator (1) learns a separate noise model for each data source and (2) leverages strictly proper scoring rules in regularizing itself. We illustrate the performance of our method through analytical examples and engineering problems in materials design. The comparative studies indicate that our MFBO method outperforms existing technologies, provides interpretable results, and can leverage LF sources that are only locally correlated with the HF source.
{"title":"Safeguarding Multi-fidelity Bayesian Optimization Against Large Model Form Errors and Heterogeneous Noise","authors":"Zahra Zanjani Foumani, Amin Yousefpour, Mehdi Shishehbor, R. Bostanabad","doi":"10.1115/1.4064160","DOIUrl":"https://doi.org/10.1115/1.4064160","url":null,"abstract":"Bayesian optimization (BO) is a sequential optimization strategy that is increasingly employed in a wide range of areas such as materials design. In real world applications, acquiring high-fidelity (HF) data through physical experiments or HF simulations is the major cost component of BO. To alleviate this bottleneck, multi-fidelity (MF) methods are used to forgo the sole reliance on the expensive HF data and reduce the sampling costs by querying inexpensive low-fidelity (LF) sources whose data are correlated with HF samples. However, existing multi-fidelity BO (MFBO) methods operate under the following two assumptions that rarely hold in practical applications: (1) LF sources provide data that are well correlated with the HF data on a global scale, and (2) a single random process can model the noise in the MF data.} These assumptions dramatically reduce the performance of MFBO when LF sources are only locally correlated with the HF source or when the noise variance varies across the data sources. Herein, we view these two limitations and uncertainty sources and address them by building an emulator that more accurately quantifies uncertainties. Specifically, our emulator (1) learns a separate noise model for each data source, and (2) leverages strictly proper scoring rules in regularizing itself. We illustrate the performance of our method through analytical examples and engineering problems in materials design. The comparative studies indicate that our MFBO method outperforms existing technologies, provides interpretable results, and can leverage LF sources which are only locally correlated with the HF source.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"68 6","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139205036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Convolutional Dimension-Reduction with Knowledge Reasoning for Reliability Approximations of Structures under High-Dimensional Spatial Uncertainties
Luojie Shi, Zhou Kai, Zequn Wang. Journal of Mechanical Design, doi:10.1115/1.4064159, published 2023-11-30.

Along with the rapid advancement of additive manufacturing technology, 3D-printed structures and materials have been widely employed in diverse applications. Computer simulations of these structures and materials are often characterized by a vast number of spatially varying parameters used to predict the structural response of interest. Direct Monte Carlo methods are infeasible for the uncertainty quantification and reliability assessment of such systems, as they require a huge number of forward model evaluations to obtain convergent statistics. To alleviate this difficulty, this paper presents a convolutional dimension-reduction network with knowledge-reasoning-based loss regularization as an explainable deep learning framework for surrogate modeling and uncertainty quantification of structures with high-dimensional spatial variations. To manage the inherent high dimensionality, a deep convolutional dimension-reduction network (ConvDR) is constructed to transform the spatial data into a low-dimensional latent space. In the latent space, domain knowledge is formulated as a form of loss regularization, under which the ConvDR network is trained as a surrogate model to predict the response of interest. Evolutionary algorithms are utilized for this training. Two 2D structures with manufacturing-induced spatially varying material compositions are used to demonstrate the performance of the proposed approach.
{"title":"Convolutional Dimension-Reduction with Knowledge Reasoning for Reliability Approximations of Structures under High-Dimensional Spatial Uncertainties","authors":"Luojie Shi, Zhou Kai, Zequn Wang","doi":"10.1115/1.4064159","DOIUrl":"https://doi.org/10.1115/1.4064159","url":null,"abstract":"Along with the rapid advancement of additive manufacturing technology, 3D-printed structures and materials have been popularly employed in diverse applications. Computer simulations of these structures and materials are often characterized by a vast number of spatial-varied parameters to predict the structural response of interest. Direct Monte Carlo methods are infeasible for the uncertainty quantification and reliability assessment of such systems as they require a huge number of forward model evaluations in order to obtain convergent statistics. To alleviate this difficulty, this paper presents a convolutional dimension-reduction network with knowledge reasoning-based loss regularization as explainable deep learning framework for surrogate modeling and uncertainty quantification of structures with high-dimensional spatial variations. To manage the inherent high-dimensionality, a deep Convolutional Dimension-Reduction network (ConvDR) is constructed to transform the spatial data into a low-dimensional latent space. In the latent space, domain knowledge is formulated as a form of loss regularization to train the ConvDR network as a surrogate model to predict the response of interest. Then evolutionary algorithms are utilized to train the deep convolutional dimension-reduction network. Two 2D structures with manufacturing-induced spatial-variated material compositions are used to demonstrate the performance of the proposed approach.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"1905 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139198091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Boosting energy return using 3D printed midsoles designed with compliant constant force mechanisms
Haihua Ou, S. Johnson. Journal of Mechanical Design, doi:10.1115/1.4064164, published 2023-11-30.

Enhanced midsole compressive energy return is associated with improved running economy. Traditional midsole materials such as EVA, TPU, and PEBA foams typically exhibit hardening force-displacement characteristics. A midsole with softening characteristics, by contrast, which can be achieved through compliant constant force mechanisms (CFMs), can provide significant benefits in energy storage and return. This study presents the development of such a midsole, incorporating 3D-printed TPU CFM designs derived through structural optimization. The mechanical properties under cyclic loading were evaluated and compared with those of commercially available running shoes with state-of-the-art PEBA foam midsoles, specifically the Nike ZoomX Vaporfly Next% 2 (NVP). Our custom midsole demonstrated promising mechanical performance: at similar deformation levels, the new design increased energy storage by 58.1% and energy return by 47.0% while reducing the peak compressive force by 24.3%. To the best of our knowledge, this is the first study to demonstrate that including CFMs in the structural design of 3D-printed midsoles can significantly enhance energy return.
{"title":"Boosting energy return using 3D printed midsoles designed with compliant constant force mechanisms","authors":"Haihua Ou, S. Johnson","doi":"10.1115/1.4064164","DOIUrl":"https://doi.org/10.1115/1.4064164","url":null,"abstract":"The enhancement of midsole compressive energy return is associated with improved running economy. Traditional midsole materials such as EVA, TPU, and PEBA foams typically exhibit hardening force-displacement characteristics. On the other hand, a midsole with softening properties, which can be achieved through Compliant Constant Force Mechanisms (CFMs), can provide significant benefits in terms of energy storage and return. This study presents the development of such a midsole, incorporating 3D printed TPU CFM designs derived through structural optimization. The mechanical properties under cyclic loading were evaluated and compared with those of commercially available running shoes with state-of-the-art PEBA foam midsoles, specifically the Nike ZoomX Vaporfly Next% 2 (NVP). Our custom midsole demonstrated promising mechanical performance. At similar deformation levels, the new design increased energy storage by 58.1% and energy return by 47.0%, while reducing the peak compressive force by 24.3%. As per our understanding, this is the first study to prove that the inclusion of CFMs in the structural design of 3D printed midsoles can significantly enhance energy return.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"644 ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139202692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A new sequential sampling method for surrogate modeling based on a hybrid metric
Weifei Hu, Feng Zhao, Xiaoyu Deng, Feiyun Cong, Jianwei Wu, Zhen-yu Liu, Jianrong Tan. Journal of Mechanical Design, doi:10.1115/1.4064163, published 2023-11-30.

Sequential sampling methods have gained significant attention due to their ability to iteratively construct surrogate models by inserting new samples based on existing ones. However, efficiently and accurately creating surrogate models for high-dimensional, nonlinear, and multimodal problems remains challenging. This paper proposes a new sequential sampling method for surrogate modeling based on a hybrid metric, making three specific contributions: (1) a hybrid metric is developed that integrates the leave-one-out cross-validation error, the local nonlinearity, and the relative size of Voronoi regions using entropy weights, thereby balancing global exploration with local exploitation of existing samples; (2) a Pareto-TOPSIS strategy is proposed that first filters out unnecessary regions and then efficiently identifies the sensitive region within those remaining, improving the efficiency of sensitive-region identification; and (3) a PE&V learning function is proposed, based on the prediction error and variance of the intermediate surrogate models, to identify the new sample to be inserted in the sensitive region. The proposed method is compared with four state-of-the-art sequential sampling methods for creating Kriging surrogate models in seven numerical cases and one real-world engineering case involving the cutterhead of a tunnel boring machine. The results show that, compared with the other four methods, the proposed sequential sampling method more quickly and robustly creates an accurate surrogate model using fewer samples.
{"title":"A new sequential sampling method for surrogate modeling based on a hybrid metric","authors":"Weifei Hu, Feng Zhao, Xiaoyu Deng, Feiyun Cong, Jianwei Wu, Zhen-yu Liu, Jianrong Tan","doi":"10.1115/1.4064163","DOIUrl":"https://doi.org/10.1115/1.4064163","url":null,"abstract":"Sequential sampling methods have gained significant attention due to their ability to iteratively construct surrogate models by sequentially inserting new samples based on existing ones. However, efficiently and accurately creating surrogate models for high-dimensional, nonlinear, and multimodal problems is still a challenging task. This paper proposes a new sequential sampling method for surrogate modeling based on a hybrid metric, specifically making the following three contributions: (1) a hybrid metric is developed by integrating the leave-one-out cross-validation error, the local nonlinearity, and the relative size of Voronoi regions using the entropy weights, which well considers both the global exploration and local exploitation of existing samples; (2) a Pareto-TOPSIS strategy is proposed to first filter out unnecessary regions and then efficiently identify the sensitive region within the remaining regions, thereby improving the efficiency of sensitive region identification; and (3) a PE&V learning function is proposed based on the prediction error and variance of the intermediate surrogate models to identify the new sample to be inserted in the sensitive region. The proposed sequential sampling method is compared with four state-of-the-art sequential sampling methods for creating Kriging surrogate models in seven numerical cases and one real-world engineering case of a cutterhead of a tunnel boring machine. The results show that compared with the other four methods, the proposed sequential sampling method can more quickly and robustly create an accurate surrogate model using a smaller number of samples.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"122 8","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139196492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Influence of Digital Sketching Tools on Concept Novelty and Evolution
Madhurima Das, May Huang, Christine Xu, Maria C. Yang. Journal of Mechanical Design, doi:10.1115/1.4064162, published 2023-11-30.
Digital tools for sketching, such as tablets, have become popular for streamlining design work and keeping a large quantity of sketches in one place. However, their impact on design creativity, novelty, and concept evolution is not yet well understood. Here, we present a controlled human-subjects study that assesses the influence of tablets (iPads) on concept novelty and evolution in the context of an engineering design concept generation exercise. We expect that iPad use will not influence concept novelty, since its speed of use is similar to that of pen-and-paper sketching. We expect to see different patterns in concept evolution between the two types of tools: namely, that iPad users will demonstrate more iteration on a concept (concept evolution) than pen-and-paper users, because iPad features make it easy to copy, paste, and then modify previous sketches. We find that the tool used is not correlated with concept novelty. Additionally, we find no strong differences in overall concept evolution quantities between the two tools, though iPad sketches exhibited more cases of consecutive than non-consecutive concept evolution, whereas pen-and-paper sketches showed equal amounts of both. The results indicate that, overall, iPads may not significantly inhibit designers' creative skills and thus could be a reasonable replacement for pen-and-paper sketching, which has implications for both design education and practice.
{"title":"The Influence of Digital Sketching Tools on Concept Novelty and Evolution","authors":"Madhurima Das, May Huang, Christine Xu, Maria C. Yang","doi":"10.1115/1.4064162","DOIUrl":"https://doi.org/10.1115/1.4064162","url":null,"abstract":"Digital tools for sketching, such as tablets, have become popular for streamlining design work and keeping a large quantity of sketches in one place. However, their impact on design creativity, novelty, and concept evolution is not yet well understood. Here, we present a controlled human subjects study that assesses the influence of tablets (iPads) on concept novelty and evolution in the context of an engineering design concept generation exercise. We expect that iPad use will not influence concept novelty due to its similar speed of use as pen and paper sketching. We expect to see different patterns in concept evolution between the two types of tools, namely, that iPad users will demonstrate more iteration on a concept (concept evolution) than pen and paper users due to the fact that iPad features make it easy to copy and paste previous sketches and then modify them. We find that the tool used is not correlated with concept novelty. Additionally, we find no strong differences in overall concept evolution quantities between the two tools, though we see that iPad sketches exhibited more cases of consecutive concept evolution than nonconsecutive whereas paper and pen sketches showed an equal amount of both consecutive and non-consecutive concept evolution. Results indicate that overall, iPads may not significantly inhibit designers' creative skills and thus could be a reasonable replacement for pen and paper sketching, which has implications for both design education and practice.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"129 4","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139205484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}