The use of lattice structures in design for additive manufacturing has quickly emerged as a popular and efficient design alternative for creating innovative multifunctional lightweight solutions. In particular, the family of triply periodic minimal surfaces (TPMS) studied in detail by Schoen for generating frame- or shell-based lattice structures seems especially promising. In this paper, a multi-scale topology optimization approach for optimal macro-layout and local grading of TPMS-based lattice structures is presented. The approach is formulated using two different density fields, one for identifying the macro-layout and another for setting the local grading of the TPMS-based lattice. The macro density variable is governed by the standard SIMP formulation, while the local one defines the orthotropic elasticity of the element following material interpolation laws derived by numerical homogenization. Such laws are derived for frame- and shell-based Gyroid, G-prime, and Schwarz-D lattices using transversely isotropic elasticity for the bulk material. A useful feature of the approach is that the lower and upper additive manufacturing limits on the local density of the TPMS-based lattices are properly incorporated. The excellent performance of the approach is demonstrated by solving several three-dimensional benchmark problems; e.g., the optimal macro-layout and local grading of a Schwarz-D lattice for the established GE bracket are identified using the presented approach.
"A Multi-Scale Topology Optimization Approach for Optimal Macro-Layout and Local Grading of TPMS-Based Lattices". N. Strömberg. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-67163
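As a hedged illustration of how TPMS geometry enters such a workflow: the Gyroid is commonly approximated by an implicit trigonometric level set, and offsetting the iso-level grades the local relative density. The sketch below uses that standard approximation; it is not the paper's implementation, and the grid resolution is an arbitrary choice.

```python
import numpy as np

def gyroid(x, y, z, t=0.0):
    """Standard implicit approximation of Schoen's Gyroid surface.

    Points where the value is below the threshold t are treated as
    solid material; shifting t grades the local relative density.
    """
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x)) - t

# Sample one unit cell on a coarse grid and estimate the solid fraction.
n = 32
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(s, s, s, indexing="ij")
solid = gyroid(X, Y, Z, t=0.0) < 0.0
density = solid.mean()  # close to 0.5 for the balanced Gyroid (t = 0)
```

Sweeping t over the cell would produce the graded densities that the local density field controls, subject to the manufacturing limits mentioned above.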
Enforcing connectivity of parts or their complement space during automated design is essential for various manufacturing and functional considerations such as removing powder, wiring internal components, and flowing internal coolant. The global nature of connectivity makes it difficult to incorporate into generative design methods that rely on local decision making, e.g., topology optimization (TO) algorithms whose update rules depend on the sensitivity of objective functions or constraints to locally change the design. Connectivity is commonly corrected for in a post-processing step, which may result in suboptimal designs. We propose a recasting of the connectivity constraint as a locally differentiable violation measure, defined as a “virtual” compliance, modeled after physical (e.g., thermal or structural) compliance. Such measures can be used within TO alongside other objective functions and constraints, using a weighted penalty scheme to navigate tradeoffs. By carefully specifying the boundary conditions of the virtual compliance problem, the designer can enforce connectivity between arbitrary regions of the part’s complement space while satisfying a primary objective function in the TO loop. We demonstrate the effectiveness of our approach using both 2D and 3D examples, show its flexibility to consider multiple virtual domains, and confirm the benefits of considering connectivity in the design loop rather than enforcing it through post-processing.
"Topology Optimization With Locally Evaluable Complement Space Connectivity". C. Morris, Amir M. Mirzendehdel, M. Behandish. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-67499
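For contrast with the differentiable measure proposed here, the conventional global check that the paper aims to replace can be sketched as a breadth-first search over the complement (void) space of a voxelized design; the grids below are hypothetical toy examples, not the authors' formulation.

```python
from collections import deque

def void_connected(grid):
    """Return True if every void cell (0) in a 2D 0/1 voxel grid is
    4-connected to the domain boundary. This is the kind of global,
    non-differentiable check (e.g., for powder removal) that the
    virtual-compliance formulation is designed to replace.
    """
    rows, cols = len(grid), len(grid[0])
    voids = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0}
    if not voids:
        return True
    # Seed the search from every void cell on the boundary.
    seeds = [p for p in voids if p[0] in (0, rows - 1) or p[1] in (0, cols - 1)]
    if not seeds:
        return False  # fully enclosed void: trapped powder
    seen, queue = set(seeds), deque(seeds)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if nxt in voids and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == voids

# A design with an enclosed internal void (unreachable for powder removal):
enclosed = [[1, 1, 1],
            [1, 0, 1],
            [1, 1, 1]]
# A design whose void space opens to the boundary:
open_channel = [[1, 0, 1],
                [1, 0, 1],
                [1, 0, 1]]
```

The pass/fail answer of such a check gives no gradient information, which is precisely why recasting the violation as a virtual compliance is attractive inside a TO loop.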
The representation of material structure geometry is essential to the reconstruction, physical simulation, and multiscale structure design of Random Heterogeneous Materials (RHM). Traditional approaches to material structure representation often need to balance the trade-off between efficacy and accuracy. Recently, deep learning-based techniques have been adopted to reduce the computational time of RHM reconstruction. However, existing approaches generally lack guarantees over key RHM characteristics, including Minkowski functionals and correlation functions. We propose a novel approach to geometrically enhancing the deep learning-based RHM representation by introducing Minkowski functionals, a set of topological and geometrical characteristics of material structure, into the training of conditional Generative Adversarial Networks (cGAN). This hybrid approach combines the feature-learning capability of deep learning with well-established material structure characteristics, greatly improving the accuracy of the RHM representation while maintaining its efficiency. The effectiveness of the proposed hybrid approach is validated through the reconstruction of a wide range of natural and man-made materials, including Voronoi foam structures, femur, and sandstone. Through computational experiments, we demonstrate that geometrically enhancing the training of cGAN for RHM representation not only significantly decreases the representation error in Minkowski functionals between input sample materials and reconstructed results, but also improves the performance of other material structure characteristics, such as two-point correlation functions.
"Geometry Enhanced Generative Adversarial Networks for Random Heterogeneous Material Representation". Hongrui Chen, Xingchen Liu. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-71918
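As background, the 2D Minkowski functionals used to guide training (area, perimeter, and Euler characteristic) can be computed directly for a binary image; the minimal sketch below treats each foreground pixel as a closed unit square and is an illustration, not the paper's code.

```python
import numpy as np

def minkowski_2d(img):
    """Area, perimeter, and Euler characteristic of a binary image,
    treating each foreground pixel as a closed unit square.
    """
    img = np.asarray(img, dtype=bool)
    area = int(img.sum())
    # Perimeter: count transitions between foreground and background
    # along both axes (padding ensures the image boundary is counted).
    padded = np.pad(img, 1)
    perimeter = 0
    for axis in (0, 1):
        perimeter += int(np.sum(padded != np.roll(padded, 1, axis=axis)))
    # Euler characteristic via the cell complex: chi = V - E + F.
    verts, edges = set(), set()
    for r, c in np.argwhere(img):
        verts.update([(r, c), (r + 1, c), (r, c + 1), (r + 1, c + 1)])
        edges.update({((r, c), (r + 1, c)), ((r, c), (r, c + 1)),
                      ((r + 1, c), (r + 1, c + 1)),
                      ((r, c + 1), (r + 1, c + 1))})
    euler = len(verts) - len(edges) + area
    return area, perimeter, euler
```

For example, a 3x3 ring of pixels with a hollow center has area 8, perimeter 16 (outer plus inner boundary), and Euler characteristic 0 (one component, one hole); penalizing discrepancies in such quantities is what "geometrically enhancing" the cGAN training refers to.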
Optimizing a system’s resilience can be challenging, especially when it involves considering both the inherent resilience of a robust design and the active resilience of a health management system to a set of computationally-expensive hazard simulations. While prior work has developed specialized architectures to effectively and efficiently solve combined design and resilience optimization problems, the comparison of these architectures has been limited to a single case study. To further study resilience optimization formulations, this work develops a problem repository which includes previously-developed resilience optimization problems and additional problems presented in this work: a notional system resilience model, a pandemic response model, and a cooling tank hazard prevention model. This work then uses models in the repository at large to understand the characteristics of resilience optimization problems and study the applicability of optimization architectures and decomposition strategies. Based on the comparisons in the repository, applying an optimization architecture effectively requires understanding the alignment and coupling relationships between the design and resilience models, as well as the efficiency characteristics of the algorithms. While alignment determines the necessity of a surrogate of resilience cost in the upper-level design problem, coupling determines the overall applicability of a sequential, alternating, or bilevel structure. Additionally, the application of decomposition strategies is dependent on there being limited interactions between variable sets, which often does not hold when a resilience policy is parameterized in terms of actions to take in hazardous model states rather than specific given scenarios.
"Understanding Resilience Optimization Architectures With an Optimization Problem Repository". Daniel E. Hulse, Hongyang Zhang, C. Hoyle. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-70985
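A minimal sketch of the bilevel structure discussed above, with hypothetical quadratic cost models (not drawn from the repository): the outer design search nests an inner resilience-policy optimization at every design evaluation.

```python
def design_cost(d):
    """Hypothetical inherent-design cost model."""
    return (d - 2.0) ** 2

def resilience_cost(d, a):
    """Hypothetical hazard-recovery cost, coupled to the design d
    through the effectiveness of the recovery action a."""
    return (a - 0.5 * d) ** 2 + 0.1 * a

def inner_best_action(d, actions):
    """Inner problem: best recovery policy for a fixed design."""
    return min(actions, key=lambda a: resilience_cost(d, a))

def bilevel(designs, actions):
    """Outer problem: each design evaluation nests an inner solve,
    so strong design/policy coupling is handled at extra expense."""
    best = min(designs,
               key=lambda d: design_cost(d)
               + resilience_cost(d, inner_best_action(d, actions)))
    return best, inner_best_action(best, actions)

grid = [i / 10.0 for i in range(0, 41)]  # coarse grid over 0.0 .. 4.0
d_star, a_star = bilevel(grid, grid)
```

When the coupling term vanishes, the same problem decomposes into a cheaper sequential solve, which is the applicability distinction the repository comparisons draw out.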
Atharva Hans, Ashish M. Chaudhari, Ilias Bilionis, Jitesh H. Panchal
Cost and schedule overruns are common in the procurement of large-scale defense acquisition programs. Current work focuses on identifying the root causes of cost growth and schedule delays in defense acquisition programs. There is a need for a mix of quantitative and qualitative analysis of cost and schedule overruns which takes into account program factors such as technology maturity, design maturity, initial acquisition time, and program complexity. Such analysis requires an easy-to-access database of program-specific data about how an acquisition program's technical and financial characteristics vary over time. To fulfill this need, the objective of this paper is twofold: (i) to develop a database of major US defense weapons programs which includes details of their technical and financial characteristics and how they vary over time, and (ii) to test various hypotheses about the interdependence of such characteristics using the collected data. To achieve this objective, we use a mixed-method analysis of schedule and cost growth data available in the U.S. Government Accountability Office's (GAO's) defense acquisitions annual assessments during the period 2003–2017. We extracted both analytical and textual data from the original reports into Excel files and further created an easy-to-access database usable from a Python environment. The analysis reveals that technology immaturity is the major driver of cost and schedule growth during the early stages of acquisition programs, while technical inefficiencies drive cost overruns and schedule delays during the later stages. Further, we find that acquisition programs with longer initial length do not necessarily have greater cost growth. The dataset and the results provide a useful starting point for the research community for modeling cost and schedule overruns, and for practitioners to inform their systems acquisition processes.
"A Mixed-Method Analysis of Schedule and Cost Growth in Defense Acquisition Programs". Atharva Hans, Ashish M. Chaudhari, Ilias Bilionis, Jitesh H. Panchal. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-71517
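The basic cost-growth metric underlying such an analysis is simply growth relative to the baseline estimate; the sketch below uses hypothetical program figures, not GAO data.

```python
def growth_pct(baseline, current):
    """Percent growth of a program estimate relative to its baseline."""
    return 100.0 * (current - baseline) / baseline

# Hypothetical program records (baseline vs. latest estimate, $B);
# these names and numbers are illustrative only.
programs = {
    "Program A": (10.0, 13.5),
    "Program B": (4.0, 4.2),
}
cost_growth = {name: growth_pct(b, c) for name, (b, c) in programs.items()}
```

The same metric applied to schedule (initial vs. realized acquisition time) supports the hypothesis tests on interdependence described above.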
This paper introduces a new technique for designing nonlinear feedback controllers that can effectively and efficiently control nonlinear and unstable dynamical systems. The technique, called State-Parameterized Nonlinear Programming Control (sp-NLPC), constructs an optimal control strategy that is a function of the dynamical system's states. This is achieved through an offline parametric optimization process using the predictive parameterized Pareto genetic algorithm (P3GA) and representing the optimized state-varying policy using radial basis function (RBF) metamodeling. The sp-NLPC technique avoids many limitations of alternative methods, such as the need to make strong assumptions about model form (e.g., linearity) and the demands of online optimization processes. The proposed method is benchmarked on two highly nonlinear and inherently unstable control problems: the single and double inverted pendulum on a cart. Performance and computational efficiency are compared to several competing control design techniques. Results show that sp-NLPC outperforms and is more efficient than competing methods. The parametric solution strategy for sp-NLPC lends itself to use in Control Co-Design (CCD). Such extensions are discussed as part of future work.
"A Methodology for Designing a Nonlinear Feedback Controller via Parametric Optimization: State-Parameterized Nonlinear Programming Control". Ying-Kuan Tsai, R. Malak. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-69295
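To illustrate the RBF metamodeling step in isolation: once an offline optimization has produced state/control samples, a Gaussian RBF interpolant can represent the state-varying policy for cheap online lookup. The samples below are hypothetical stand-ins, not outputs of P3GA.

```python
import numpy as np

def rbf_fit(centers, values, eps=2.0):
    """Fit weights of a Gaussian RBF interpolant through (centers, values)."""
    d = centers[:, None] - centers[None, :]
    Phi = np.exp(-(eps * d) ** 2)
    return np.linalg.solve(Phi, values)

def rbf_eval(x, centers, weights, eps=2.0):
    """Evaluate the interpolant at one or more query states x."""
    d = np.atleast_1d(x)[:, None] - centers[None, :]
    return np.exp(-(eps * d) ** 2) @ weights

# Hypothetical 1-D "state -> control" samples from an offline optimization.
states = np.linspace(-1.0, 1.0, 9)
controls = np.tanh(2.0 * states)      # stand-in for optimized actions
w = rbf_fit(states, controls)
u = rbf_eval(0.25, states, w)[0]      # cheap policy lookup at runtime
```

The online cost is a single matrix-vector product, which is what makes the offline/online split attractive compared with online optimization.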
In Wire Arc Additive Manufacturing (WAAM), weld beads are deposited bead-by-bead and layer-by-layer, leading to the final part. Thus, a lack of uniformity or a geometrically defective bead will subsequently lead to voids in the printed part, which have a great impact on overall part quality and mechanical strength. To resolve this, several techniques have been proposed to identify such defects using vision- or thermal-based sensing, so as to aid the implementation of in-situ corrective measures to save time and cost. However, due to the environment in which they operate, these sensors are not as effective at picking up irregularities as acoustic sensing. Therefore, in this paper, we study three acoustic feature-based machine learning frameworks — Principal Component Analysis (PCA) + K-Nearest Neighbors (KNN), Mel Frequency Cepstral Coefficients (MFCC) + Neural Network (NN), and Mel Frequency Cepstral Coefficients (MFCC) + Convolutional Neural Network (CNN) — and evaluate their performance for the real-time identification of geometrically defective weld beads. Experiments are carried out on stainless steel (ER316LSi), bronze (ERCuNiAl), and a mixed dataset containing both stainless steel and bronze. The results show that all three frameworks outperform the state-of-the-art acoustic-signal-based ANN approach in terms of accuracy. The best-performing framework, PCA+KNN, outperforms ANN by more than 15%, 30%, and 30% for the stainless steel, bronze, and mixed datasets, respectively.
"A Study on the Acoustic Signal Based Frameworks for the Real-Time Identification of Geometrically Defective Wire Arc Bead". Nowrin Akter Surovi, A. G. Dharmawan, G. Soh. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-69573
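The best-performing PCA+KNN pipeline can be sketched from scratch with NumPy; the feature vectors below are synthetic stand-ins for acoustic features, not real weld-audio data.

```python
import numpy as np

def pca_fit(X, k):
    """Principal directions of X via SVD; returns (mean, top-k components)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, comps):
    """Project data onto the retained principal components."""
    return (X - mu) @ comps.T

def knn_predict(Xtr, ytr, Xte, k=3):
    """k-nearest-neighbour majority vote in the reduced feature space."""
    preds = []
    for x in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        preds.append(np.bincount(ytr[idx]).argmax())
    return np.array(preds)

# Synthetic "good" (label 0) vs. "defective" (label 1) feature vectors.
rng = np.random.default_rng(0)
good = rng.normal(0.0, 0.3, size=(20, 8))
defect = rng.normal(1.5, 0.3, size=(20, 8))
X = np.vstack([good, defect])
y = np.array([0] * 20 + [1] * 20)

mu, comps = pca_fit(X, k=2)
Z = pca_transform(X, mu, comps)
new = pca_transform(rng.normal(1.5, 0.3, size=(3, 8)), mu, comps)
pred = knn_predict(Z, y, new)  # classify unseen "defective" samples
```

Both steps are cheap at inference time, which is consistent with the real-time identification goal stated above.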
Scientific and engineering problems often require an inexpensive surrogate model to aid understanding and the search for promising designs. While Gaussian processes (GP) stand out as easy-to-use and interpretable learners in surrogate modeling, they have difficulties in accommodating big datasets, qualitative inputs, and multi-type responses obtained from different simulators, which has become a common challenge for a growing number of data-driven design applications. In this paper, we propose a GP model that utilizes latent variables and functions obtained through variational inference to address the aforementioned challenges simultaneously. The method is built upon the latent variable Gaussian process (LVGP) model where qualitative factors are mapped into a continuous latent space to enable GP modeling of mixed-variable datasets. By extending variational inference to LVGP models, the large training dataset is replaced by a small set of inducing points to address the scalability issue. Output response vectors are represented by a linear combination of independent latent functions, forming a flexible kernel structure to handle multi-type responses. Comparative studies demonstrate that the proposed method scales well for large datasets with over 10^4 data points, while outperforming state-of-the-art machine learning methods without requiring much hyperparameter tuning. In addition, an interpretable latent space is obtained to draw insights into the effect of qualitative factors, such as those associated with “building blocks” of architectures and element choices in metamaterial and materials design. Our approach is demonstrated for machine learning of ternary oxide materials and topology optimization of a multiscale compliant mechanism with aperiodic microstructures and multiple materials.
"Data-Driven Design via Scalable Gaussian Processes for Multi-Response Big Data With Qualitative Factors". Liwei Wang, Suraj Yerramilli, Akshay Iyer, D. Apley, Ping Zhu, Wei Chen. Volume 3A: 47th Design Automation Conference (DAC), 2021-08-17. DOI: https://doi.org/10.1115/detc2021-71570
Recently, many studies on product design have utilized online data for customer analysis. However, most of them treat online customers as a single group with the same preferences, even though customer segmentation is a key strategy in conventional market analysis. To fill this gap, this paper proposes a new methodology for online customer segmentation. First, customer attributes are extracted from online customer reviews. Then, a customer network is constructed based on the extracted attributes. Finally, the network is partitioned by modularity clustering, and the resulting clusters are analyzed by topic frequency. The methodology is applied to a smartphone review dataset. The results show that online customers, like offline customers, have heterogeneous preferences and can be divided into separate groups with different tendencies toward product features. This can help product designers draw segment-based design implications from online data.
{"title":"Data-Driven Customer Segmentation Based On Online Review Analysis and Customer Network Construction","authors":"Seyoung Park, Harrison M. Kim","doi":"10.1115/detc2021-70036","DOIUrl":"https://doi.org/10.1115/detc2021-70036","url":null,"abstract":"\u0000 Recently, many studies on product design have utilized online data for customer analysis. However, most of them treat online customers as a group of people with the same preferences while customer segmentation is a key strategy in conventional market analysis. To supplement this gap, this paper proposes a new methodology for online customer segmentation. First, customer attributes are extracted from online customer reviews. Then, a customer network is constructed based on the extracted attributes. Finally, the network is partitioned by modularity clustering and the resulting clusters are analyzed by topic frequency. The methodology is implemented to a smartphone review data. The result shows that online customers have different preferences as offline customers do, and they can be divided into separate groups with different tendencies for product features. This can help product designers to draw segment-based design implications from online data.","PeriodicalId":204380,"journal":{"name":"Volume 3A: 47th Design Automation Conference (DAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132288172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Additive manufacturing (AM) can produce complex geometrical shapes and multi-material parts that are not possible using typical manufacturing processes. The properties of multi-material AM parts are often unknown. For multi-material parts made using Fused Deposition Modeling (FDM), these properties are driven by the filament. Acquiring the properties of the products or the filament necessitates experiments that can be expensive and time-consuming. Thus, there is a need for simulation-based design tools that can determine the multi-material properties of the filament by exploring the complex process-structure-property (p-s-p) relationship. In this paper, we present a Goal-Oriented Inverse Design (GoID) method to produce feedstock filament for the FDM process with specific property goals. Using this method, the designer connects structure and property in the p-s-p relationship by identifying satisficing material compositions for specific property goals. The filament properties considered are percentage elongation, tensile strength, and Young's modulus. The problem is formulated using the Concept Exploration Framework, a generic decision-based design framework. The solution space exploration for satisficing solutions is performed using the compromise Decision Support Problem (cDSP). The forward information flow is first established to generate the necessary mathematical relationships between the composition and the property goals. Next, the target property goals of the filament are set. The cDSP is then used for solution space exploration to identify satisficing material compositions for the target property goals. While the results are interesting, the focus of our work is to demonstrate, and refine, the goal-oriented, inverse design method for the AM domain.
{"title":"Goal-Oriented Inverse Design (GoID) of Feedstock Filament for Fused Deposition Modeling","authors":"A. Deka, A. Nellippallil, John Hall","doi":"10.1115/detc2021-70503","DOIUrl":"https://doi.org/10.1115/detc2021-70503","url":null,"abstract":"\u0000 Additive manufacturing (AM) can produce complex geometrical shapes and multi-material parts that are not possible using typical manufacturing processes. The properties of multi-material AM parts are often unknown. For multi-material parts made using Fused Deposition Modeling (FDM), these properties are driven by the filament. Acquiring the properties of the products or the filament necessitates experiments that can be expensive and time-consuming. Thus, there is a need for simulation-based design tools that can determine the multi-material properties of the filament by exploring the complex process-structure-property (p-s-p) relationship.\u0000 In this paper, we present a Goal-Oriented Inverse Design (GoID) method to produce feedstock filament for FDM process with specific property goals. Using this method, the designers connects the structure and property in the p-s-p relationship by identifying satisficing material composition for specific property goals. The filament properties identified in the problem are percentage elongation, tensile strength, and Young’s Modulus. The problem is formulated using a generic decision-based design framework, Concept Exploration Framework. The solution space exploration for satisficing solutions is performed using the compromise Decision Support Problem (cDSP). The forward information flow is first established to generate the necessary mathematical relationships between the composition and the property goals. Next, the target property goals of the filament are set. The cDSP is used for solution space exploration to identify satisficing solutions for material composition for the target property goals. 
While the results are interesting, the focus of our work is to demonstrate, and refine, the goal-oriented, inverse design method for the AM domain.","PeriodicalId":204380,"journal":{"name":"Volume 3A: 47th Design Automation Conference (DAC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114714154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}