ADVISE: Accelerating the Creation of Evidence Syntheses for Global Development using Natural Language Processing-supported Human-AI Collaboration
Kristen M. Edwards, Binyang Song, Jaron Porciello, Mark Engelbert, Carolyn Huang, Faez Ahmed
When designing evidence-based policies and programs, decision-makers must distill key information from a vast and rapidly growing literature base. Identifying relevant literature from raw search results is time- and resource-intensive and is often done by manual screening. In this study, we develop an AI agent based on a bidirectional encoder representations from transformers (BERT) model and incorporate it into a human team designing an evidence synthesis product for global development. We explore the effectiveness of the human-AI hybrid team in accelerating the evidence synthesis process. We further enhance the human-AI hybrid team through active learning (AL), exploring three sampling strategies: random, least confidence (LC), and highest priority (HP) sampling, to study their influence on the collaborative screening process. Results show that, for identifying 80% of all relevant documents, incorporating the BERT-based AI agent reduces the human screening effort by 68.5% compared to no AI assistance and by 16.8% compared to the industry-standard model. With the HP sampling strategy, the human screening effort is reduced even further: by 78% compared to no AI assistance for identifying 80% of all relevant documents. We apply the AL-enhanced human-AI hybrid teaming workflow in the design process of three evidence gap maps, which are now published for USAID's use. These findings demonstrate how AI can accelerate the development of evidence synthesis products and promote timely evidence-based decision-making in global development.
{"title":"ADVISE: Accelerating the Creation of Evidence Syntheses for Global Development using Natural Language Processing-supported Human-AI Collaboration","authors":"Kristen M. Edwards, Binyang Song, Jaron Porciello, Mark Engelbert, Carolyn Huang, Faez Ahmed","doi":"10.1115/1.4064245","DOIUrl":"https://doi.org/10.1115/1.4064245","url":null,"abstract":"\u0000 When designing evidence-based policies and programs, decision-makers must distill key information from a vast and rapidly growing literature base. Identifying relevant literature from raw search results is time and resource intensive, and is often done by manual screening. In this study, we develop an AI agent based on a bidirectional encoder representations from transformers (BERT) model and incorporate it into a human team designing an evidence synthesis product for global development. We explore the effectiveness of the human-AI hybrid team in accelerating the evidence synthesis process. We further enhance the human-AI hybrid team through active learning (AL). Specifically, we explore different sampling strategies: random, least confidence (LC), and highest priority (HP) sampling, to study their influence on the collaborative screening process. Results show that incorporating the BERT-based AI agent can reduce the human screening effort by 68.5% compared to the case of no AI assistance, and by 16.8% compared to using the industry standard model for identifying 80% of all relevant documents. When we apply the HP sampling strategy, the human screening effort can be reduced even more: by 78% for identifying 80% of all relevant documents compared to no AI assistance. We apply the AL-enhanced human-AI hybrid teaming workflow in the design process of three evidence gap maps which are now published for USAID's use. These findings demonstrate how AI can accelerate the development of evidence synthesis products and promote timely evidence-based decision making in global development.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"4 10","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational Design of 2D Lattice Structures based on Crystallographic Symmetries
Alfred Leuenberger, Eliott Birner, Thomas S. Lumpe, T. Stanković
The design representations of lattice structures are fundamental to the development of computational design approaches. Current applications of lattice structures place ever-growing demands on computational resources to solve difficult optimization problems or generate large datasets, motivating the development of efficient design representations that offer a wide range of possible design variants while generating design spaces with attributes suitable for computational methods to explore. In response, this work proposes a parametric design representation based on crystallographic symmetries and investigates its implications for the computational design of lattice structures. The work defines design rules to support the design of functionally graded structures using crystallographic symmetries, such that the connectivity between individual members in a structure with varying geometry is guaranteed, and investigates how to use the parametrization in the context of optimization. The results show that the proposed parametrization achieves a compact design representation that benefits the computational design process by employing a small number of design variables to control a broad range of complex geometries. The results also show that design spaces based on the proposed parametrization can be successfully explored using a direct search-based method.
{"title":"Computational Design of 2D Lattice Structures based on Crystallographic Symmetries","authors":"Alfred Leuenberger, Eliott Birner, Thomas S. Lumpe, T. Stanković","doi":"10.1115/1.4064246","DOIUrl":"https://doi.org/10.1115/1.4064246","url":null,"abstract":"\u0000 The design representations of lattice structures are fundamental to the development of computational design approaches. Current applications of lattice structures are characterized by ever-growing demand on the computational resources to solve difficult optimization problems or generate large datasets, opting for the development of efficient design representations which offer a high range of possible design variants, while at the same time generating design spaces with attributes suitable for computational methods to explore. In response, the focus of this work is to propose a parametric design representation based on crystallographic symmetries and investigate its implications for the computational design of lattice structures. The work defines design rules to support the design of functionally graded structures using crystallographic symmetries such that the connectivity between individual members in a structure with varying geometry is guaranteed, and investigates how to use the parametrization in the context of optimization. The results show that the proposed parametrization achieves a compact design representation to benefit the computational design process by employing a small number of design variables to control a broad range of complex geometries. The results also show that the design spaces based on the proposed parametrization can be successfully explored using a direct search-based method.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"56 7","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138593355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-Objective Mechanobiological Growth Optimization for Heterogenous Lattice Structures
Amit Arefin, Paul F. Egan
Computational design is increasingly necessary for advancing biomedical technologies, particularly for complex systems with numerous trade-offs. For instance, in tissue scaffolds constructed from repeating unit cells, the structure's porosity and topology affect biological tissue and vasculature growth. Here, we adapt curvature-based tissue growth and agent-based vasculature models to predict scaffold mechanobiological growth. A non-dominated sorting genetic algorithm (NSGA-II) is used for dual-objective optimization of scaffold tissue and blood vessel growth with heterogeneous unit cell placement. Design inputs consist of unit cells of two different topologies, void unit cells, and beam diameters from 64 to 313 μm. Findings demonstrate a design heuristic for optimizing scaffolds by placing two selected unit cells, one that favors high tissue growth density and one that favors blood vessel growth, throughout the scaffold. The Pareto front of solutions demonstrates that scaffolds with large porous areas, termed Channel Voids or Small Voids, improve vasculature growth, while lattices with no larger void areas result in higher tissue growth. Results demonstrate the merit of computational investigations for characterizing tissue scaffold design trade-offs and provide a foundation for future multi-objective design optimization of complex biomedical systems.
{"title":"Dual-Objective Mechanobiological Growth Optimization for Heterogenous Lattice Structures","authors":"Amit Arefin, Paul F. Egan","doi":"10.1115/1.4064241","DOIUrl":"https://doi.org/10.1115/1.4064241","url":null,"abstract":"\u0000 Computational design is growing in necessity for advancing biomedical technologies, particularly for complex systems with numerous trade-offs. For instance, in tissue scaffolds constructed from repeating unit cells, the structure's porosity and topology affect biological tissue and vasculature growth. Here, we adapt curvature-based tissue growth and agent-based vasculature models for predicting scaffold mechanobiological growth. A non-dominated sorting genetic algorithm (NSGA II) is used for dual-objective optimization of scaffold tissue and blood vessel growth with heterogeneous unit cell placement. Design inputs consist of unit cells of two different topologies, void unit cells, and beam diameters from 64 to 313 μm. Findings demonstrate a design heuristic for optimizing scaffolds by placing two selected unit cells, one that favors high tissue growth density and one that favors blood vessel growth, throughout the scaffold. The pareto front of solutions demonstrates that scaffolds with large porous areas termed Channel Voids or Small Voids improve vasculature growth while lattices with no larger void areas result in higher tissue growth. Results demonstrate the merit in computational investigations for characterizing tissue scaffold design trade-offs, and provide a foundation for future design multi-objective optimization for complex biomedical systems.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"114 10","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138590416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
If you build it, will they understand? Considerations for creating shared understanding through design artifacts
Sandeep Krishnakumar, Cynthia Letting, Nicolas F. Soria Zurita, Jessica Menold
Design representations play a pivotal role in the design process. In particular, design representations enable the formation of a shared understanding between team members, enhancing team performance. This paper explores the relationship between design representation modality and shared understanding among designers during communicative acts between design dyads. A mixed-methods study with 40 designers was conducted to investigate whether representation modality affects shared understanding and to identify the factors that shape shared understanding during communication. Quantitative results suggest that low-fidelity prototypes and sketches did not significantly differ in the shared understanding they facilitated within dyads. Qualitative analysis identified four factors at the representation and actor levels that influence how shared understanding is built between individuals during design communication. This research extends our understanding of the utility of design representations given the needs of communicative contexts; specifically, this work demonstrates that designers must understand the perspectives of listeners during communication to create representations that accurately convey the information a listener seeks to gain.
{"title":"If you build it, will they understand? Considerations for creating shared understanding through design artifacts","authors":"Sandeep Krishnakumar, Cynthia Letting, Nicolas F. Soria Zurita, Jessica Menold","doi":"10.1115/1.4064239","DOIUrl":"https://doi.org/10.1115/1.4064239","url":null,"abstract":"\u0000 Design representations play a pivotal role in the design process. In particular, design representations enable the formation of a shared understanding between team members, enhancing team performance. This paper explores the relationship between design representation modality and shared understanding among designers during communicative acts between design dyads. A mixed-methods study with 40 designers was conducted to investigate if representation modality affects shared understanding and identify the factors that shape shared understanding during communication. Quantitative results suggest that low-fidelity prototypes and sketches did not significantly differ in terms of the shared understanding they facilitated within dyads. Qualitative analysis identified four factors at the representation- and actor-level that influence how shared understanding is built between individuals during design communication. This research extends our understanding of the utility of design representations given the needs of communicative contexts; specifically, this work demonstrates that designers must understand the perspectives of listeners during communication to create representations that accurately represent the information that a listener seeks to gain.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"54 15","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138593057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition
Tingwei Wang, Jingjun Yu, Hongzhe Zhao
Motivated by heat-dissipation applications, this paper proposes rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition. Compared with existing designs, the expansion mechanism not only lends itself to plane tessellation through cellular design, owing to its regular polygon structure, but also provides motion amplification and superposition through its compliant displacement amplifier and rigid scissors. First, the scheme of the expansion mechanisms, and in particular the working principle of motion amplification and superposition, is introduced, and the configuration design of a family of expansion mechanisms is presented, covering varying numbers of edges, concave/convex properties, and inner/outer layouts. Second, the constraint conditions are derived and the relations between the output performance of the expansion mechanisms and their dimensional parameters are modeled analytically. Third, the displacement amplification ratios of the expansion mechanisms are analyzed, along with the output performance of several typical expansion mechanisms used as cells to tessellate a plane of constrained area. Finally, the output performance of the expansion mechanisms is verified via finite element analysis. The results show that the proposed cellular expansion mechanisms facilitate plane tessellation and offer motion amplification and superposition, which opens prospects in mechanism design fields such as metamaterials.
{"title":"Rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition","authors":"Tingwei Wang, Jingjun Yu, Hongzhe Zhao","doi":"10.1115/1.4064240","DOIUrl":"https://doi.org/10.1115/1.4064240","url":null,"abstract":"\u0000 Motivated by heat dissipation, the rigid-compliant hybrid cellular expansion mechanisms with motion amplification and superposition are proposed in this paper. Compared with existing studies, the expansion mechanism is not only easy to realize the plane tessellation via cellular design due to its regular polygon structure, but also has the ability of motion amplification and superposition due to its compliant displacement amplifier and rigid scissors. Firstly, scheme of expansion mechanisms, especially working principle of motion amplification and superposition are introduced. The configuration design of a family of expansion mechanisms is presented, including varying number of edges, concave/convex property, inner/outer layout. Secondly, the constraint condition and analytical modeling of relations between output performances of expansion mechanisms and dimensional parameters is carried out. Third, the displacement amplification ratio of expansion mechanisms, and output performances of several typical expansion mechanisms when they are acted as cells to tessellate a plane with constrained area are analyzed. Finally, the output performances of expansion mechanisms are verified via the finite element analysis. The results show that proposed cellular expansion mechanisms are beneficial for realizing plane tessellation, offer motion amplification and superposition, which provide prospects in the field of mechanism design such as metamaterials.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"6 8","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of Neural Network-based Derivatives for Topology Optimization
Joel C. Najmon, Andres Tovar
Neural networks have gained popularity for modeling complex nonlinear relationships. Their computational efficiency has led to their growing adoption in optimization methods, including topology optimization. Recently, there have been several contributions toward improving the derivatives of neural network outputs, which can improve their use in gradient-based optimization. However, a comparative study has yet to be conducted on the different methods for computing the sensitivity of a neural network's outputs with respect to its input features. This paper evaluates four derivative methods: the analytical Jacobian of the neural network, the central finite difference method, the complex step method, and automatic differentiation. These methods are implemented in density-based and homogenization-based topology optimization using multilayer perceptrons (MLPs). For density-based topology optimization, the MLP approximates Young's modulus for the solid-isotropic-material-with-penalization (SIMP) model. For homogenization-based topology optimization, the MLP approximates the homogenized stiffness tensor of a representative volume element, e.g., a square cell microstructure with a rectangular hole. The comparative study is performed by solving two-dimensional topology optimization problems using the sensitivity coefficients from each derivative method. The evaluation covers the initial sensitivity coefficients, convergence plots, and the final topologies, compliance values, and design variables. The findings demonstrate that neural network-based sensitivity coefficients are sufficient for density-based and homogenization-based topology optimization. The neural network's Jacobian, the complex step method, and automatic differentiation produced identical sensitivity coefficients to working precision. The study's open-source code is provided through an included Python repository.
{"title":"Evaluation of Neural Network-based Derivatives for Topology Optimization","authors":"Joel C. Najmon, Andres Tovar","doi":"10.1115/1.4064243","DOIUrl":"https://doi.org/10.1115/1.4064243","url":null,"abstract":"\u0000 Neural networks have gained popularity for modeling complex non-linear relationships. Their computational efficiency has led to their growing adoption in optimization methods, including topology optimization. Recently, there have been several contributions towards improving derivatives of neural network outputs, which can improve their use in gradient-based optimization. However, a comparative study has yet to be conducted on the different derivative methods for the sensitivity of the input features on the neural network outputs. This paper aims to evaluate four derivative methods: analytical neural network's Jacobian, central finite difference method, complex step method, and automatic differentiation. These methods are implemented into density-based and homogenization-based topology optimization using multilayer perceptrons (MLPs). For density-based topology optimization, the MLP approximates Young's modulus for the solid-isotropic-material-with-penalization (SIMP) model. For homogenization-based topology optimization, the MLP approximates the homogenized stiffness tensor of a representative volume element, e.g., square cell microstructure with a rectangular hole. The comparative study is performed by solving two-dimensional topology optimization problems using the sensitivity coefficients from each derivative method. Evaluation includes initial sensitivity coefficients, convergence plots, and the final topologies, compliance, and design variables. The findings demonstrate that neural network-based sensitivity coefficients are sufficient for density-based and homogenization-based topology optimization. The neural network's Jacobian, complex step method, and automatic differentiation produced identical sensitivity coefficients to working precision. The study's open-source code is provided through an included Python repository.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"46 4","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138594018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Improved Fractional Moment Maximum Entropy Method with Polynomial Fitting
Gang Li, Yixuan Wang, Yan Zeng, W. He
The moment method is commonly used in reliability analysis, and within it the maximum entropy method (MEM) and polynomial fitting (PF) are widely used for their accuracy and efficiency, respectively. In this paper, we propose a novel reliability analysis method that combines MEM and PF. The probability density function is preliminarily estimated using the fractional moment maximum entropy method (FM-MEM), and PF is then applied to further improve the accuracy. The proposed method effectively avoids the negative probability densities and function oscillations that can occur in PF. Moreover, the order of the exponential polynomial in the FM-MEM is selected adaptively during the preliminary solution process. An iterative scheme for the number of exponential polynomial terms is also proposed, using the integral of the moment error function and the integrals of the local and global negative probability density as convergence criteria. Four numerical examples and one engineering example are tested, and the results are compared with those of Monte Carlo simulation and the classical FM-MEM, demonstrating the good performance of the proposed method.
{"title":"An Improved Fractional Moment Maximum Entropy Method with Polynomial Fitting","authors":"Gang Li, Yixuan Wang, Yan Zeng, W. He","doi":"10.1115/1.4064247","DOIUrl":"https://doi.org/10.1115/1.4064247","url":null,"abstract":"\u0000 The moment method is commonly used in reliability analysis, in which the maximum entropy method (MEM) and polynomial fitting (PF) have been widely used due to their advantages in accuracy and efficiency, respectively. In this paper, we propose a novel reliability analysis method by combining MEM and PF. The probability density function is preliminarily estimated using the fractional moment maximum entropy method (FM-MEM), based on which PF is then used to further improve the accuracy. The proposed method can avoid the phenomenon of the negative probability density and function oscillations in PF effectively. Moreover, the order of the exponential polynomial in the FM-MEM is adaptively selected in the preliminary solution calculation process. An iterative process for the number of exponential polynomial terms is also proposed, using the integral of the moment error function and the integrals of the local and global negative probability density as the convergence criteria. Four numerical examples and one engineering example are tested, and the results are compared with those of the Monte Carlo simulation and the classical FM-MEM results, respectively, demonstrating the good performance of the proposed method.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"30 22","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138591071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Safeguarding Multi-fidelity Bayesian Optimization Against Large Model Form Errors and Heterogeneous Noise
Zahra Zanjani Foumani, Amin Yousefpour, Mehdi Shishehbor, R. Bostanabad
Bayesian optimization (BO) is a sequential optimization strategy that is increasingly employed in a wide range of areas, such as materials design. In real-world applications, acquiring high-fidelity (HF) data through physical experiments or HF simulations is the major cost component of BO. To alleviate this bottleneck, multi-fidelity (MF) methods are used to forgo sole reliance on expensive HF data and reduce sampling costs by querying inexpensive low-fidelity (LF) sources whose data are correlated with HF samples. However, existing multi-fidelity BO (MFBO) methods operate under two assumptions that rarely hold in practical applications: (1) LF sources provide data that are well correlated with the HF data on a global scale, and (2) a single random process can model the noise in the MF data. These assumptions dramatically reduce the performance of MFBO when LF sources are only locally correlated with the HF source or when the noise variance varies across the data sources. Herein, we view these two limitations as sources of uncertainty and address them by building an emulator that more accurately quantifies uncertainties. Specifically, our emulator (1) learns a separate noise model for each data source and (2) leverages strictly proper scoring rules in regularizing itself. We illustrate the performance of our method through analytical examples and engineering problems in materials design. The comparative studies indicate that our MFBO method outperforms existing technologies, provides interpretable results, and can leverage LF sources that are only locally correlated with the HF source.
{"title":"Safeguarding Multi-fidelity Bayesian Optimization Against Large Model Form Errors and Heterogeneous Noise","authors":"Zahra Zanjani Foumani, Amin Yousefpour, Mehdi Shishehbor, R. Bostanabad","doi":"10.1115/1.4064160","DOIUrl":"https://doi.org/10.1115/1.4064160","url":null,"abstract":"Bayesian optimization (BO) is a sequential optimization strategy that is increasingly employed in a wide range of areas such as materials design. In real world applications, acquiring high-fidelity (HF) data through physical experiments or HF simulations is the major cost component of BO. To alleviate this bottleneck, multi-fidelity (MF) methods are used to forgo the sole reliance on the expensive HF data and reduce the sampling costs by querying inexpensive low-fidelity (LF) sources whose data are correlated with HF samples. However, existing multi-fidelity BO (MFBO) methods operate under the following two assumptions that rarely hold in practical applications: (1) LF sources provide data that are well correlated with the HF data on a global scale, and (2) a single random process can model the noise in the MF data.} These assumptions dramatically reduce the performance of MFBO when LF sources are only locally correlated with the HF source or when the noise variance varies across the data sources. Herein, we view these two limitations and uncertainty sources and address them by building an emulator that more accurately quantifies uncertainties. Specifically, our emulator (1) learns a separate noise model for each data source, and (2) leverages strictly proper scoring rules in regularizing itself. We illustrate the performance of our method through analytical examples and engineering problems in materials design. The comparative studies indicate that our MFBO method outperforms existing technologies, provides interpretable results, and can leverage LF sources which are only locally correlated with the HF source.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"68 6","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139205036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convolutional Dimension-Reduction with Knowledge Reasoning for Reliability Approximations of Structures under High-Dimensional Spatial Uncertainties
Luojie Shi, Zhou Kai, Zequn Wang
Along with the rapid advancement of additive manufacturing technology, 3D-printed structures and materials have been widely employed in diverse applications. Computer simulations of these structures and materials are often characterized by a vast number of spatially varying parameters used to predict the structural response of interest. Direct Monte Carlo methods are infeasible for the uncertainty quantification and reliability assessment of such systems, as they require a huge number of forward model evaluations to obtain convergent statistics. To alleviate this difficulty, this paper presents a convolutional dimension-reduction network with knowledge-reasoning-based loss regularization as an explainable deep learning framework for the surrogate modeling and uncertainty quantification of structures with high-dimensional spatial variations. To manage the inherent high dimensionality, a deep convolutional dimension-reduction network (ConvDR) is constructed to transform the spatial data into a low-dimensional latent space. In the latent space, domain knowledge is formulated as a loss regularization used to train the ConvDR network as a surrogate model that predicts the response of interest. Evolutionary algorithms are then utilized to train the network. Two 2D structures with manufacturing-induced, spatially varying material compositions are used to demonstrate the performance of the proposed approach.
{"title":"Convolutional Dimension-Reduction with Knowledge Reasoning for Reliability Approximations of Structures under High-Dimensional Spatial Uncertainties","authors":"Luojie Shi, Zhou Kai, Zequn Wang","doi":"10.1115/1.4064159","DOIUrl":"https://doi.org/10.1115/1.4064159","url":null,"abstract":"Along with the rapid advancement of additive manufacturing technology, 3D-printed structures and materials have been popularly employed in diverse applications. Computer simulations of these structures and materials are often characterized by a vast number of spatial-varied parameters to predict the structural response of interest. Direct Monte Carlo methods are infeasible for the uncertainty quantification and reliability assessment of such systems as they require a huge number of forward model evaluations in order to obtain convergent statistics. To alleviate this difficulty, this paper presents a convolutional dimension-reduction network with knowledge reasoning-based loss regularization as explainable deep learning framework for surrogate modeling and uncertainty quantification of structures with high-dimensional spatial variations. To manage the inherent high-dimensionality, a deep Convolutional Dimension-Reduction network (ConvDR) is constructed to transform the spatial data into a low-dimensional latent space. In the latent space, domain knowledge is formulated as a form of loss regularization to train the ConvDR network as a surrogate model to predict the response of interest. Then evolutionary algorithms are utilized to train the deep convolutional dimension-reduction network. Two 2D structures with manufacturing-induced spatial-variated material compositions are used to demonstrate the performance of the proposed approach.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"1905 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139198091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boosting energy return using 3D printed midsoles designed with compliant constant force mechanisms
Haihua Ou, S. Johnson
The enhancement of midsole compressive energy return is associated with improved running economy. Traditional midsole materials such as EVA, TPU, and PEBA foams typically exhibit hardening force-displacement characteristics. A midsole with softening characteristics, which can be achieved through compliant constant-force mechanisms (CFMs), can instead provide significant benefits in energy storage and return. This study presents the development of such a midsole, incorporating 3D-printed TPU CFM designs derived through structural optimization. Its mechanical properties under cyclic loading were evaluated and compared with those of commercially available running shoes with state-of-the-art PEBA foam midsoles, specifically the Nike ZoomX Vaporfly Next% 2 (NVP). The custom midsole demonstrated promising mechanical performance: at similar deformation levels, the new design increased energy storage by 58.1% and energy return by 47.0% while reducing the peak compressive force by 24.3%. To our knowledge, this is the first study to show that including CFMs in the structural design of 3D-printed midsoles can significantly enhance energy return.
{"title":"Boosting energy return using 3D printed midsoles designed with compliant constant force mechanisms","authors":"Haihua Ou, S. Johnson","doi":"10.1115/1.4064164","DOIUrl":"https://doi.org/10.1115/1.4064164","url":null,"abstract":"The enhancement of midsole compressive energy return is associated with improved running economy. Traditional midsole materials such as EVA, TPU, and PEBA foams typically exhibit hardening force-displacement characteristics. On the other hand, a midsole with softening properties, which can be achieved through Compliant Constant Force Mechanisms (CFMs), can provide significant benefits in terms of energy storage and return. This study presents the development of such a midsole, incorporating 3D printed TPU CFM designs derived through structural optimization. The mechanical properties under cyclic loading were evaluated and compared with those of commercially available running shoes with state-of-the-art PEBA foam midsoles, specifically the Nike ZoomX Vaporfly Next% 2 (NVP). Our custom midsole demonstrated promising mechanical performance. At similar deformation levels, the new design increased energy storage by 58.1% and energy return by 47.0%, while reducing the peak compressive force by 24.3%. As per our understanding, this is the first study to prove that the inclusion of CFMs in the structural design of 3D printed midsoles can significantly enhance energy return.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"644 ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139202692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}