Automatic Clustering of Sequential Design Behaviors
Molla Hazifur Rahman, Michael S. Gashler, Charles Xie, Zhenghui Sha
DOI: 10.1115/DETC2018-86300. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Design is essentially a decision-making process, and systems design decisions are made sequentially. An in-depth understanding of human sequential decision-making patterns in design helps discover useful design heuristics that can improve existing computational design algorithms. In this paper, we develop a framework for clustering designers who exhibit similar sequential design patterns. We adopt the Function-Behavior-Structure design process model to characterize designers' action sequences, logged by computer-aided design (CAD) software, as sequences of design process stages. Such a sequence reflects designers' thinking and sequential decision making during the design process. A Markov chain is then used to quantify the transitions between design stages, from which various clustering methods can be applied. Three clustering methods are tested: K-means clustering, hierarchical clustering, and network-based clustering. A verification approach based on variation of information is developed to evaluate the effectiveness of each method and to identify clusters of designers who show strong behavioral similarities. The framework is applied to a solar energy systems design problem: energy-plus home design. The case study shows that the proposed framework can successfully cluster designers and identify similarities and dissimilarities in their sequential decision making. Our framework can support studies of the correlation between potential factors (e.g., designers' demographics) and certain design behavioral patterns, as well as the correlation between behavioral patterns and design quality, to identify beneficial design heuristics.
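The transition-quantification step described in this abstract can be sketched in a few lines. Everything concrete below is an illustrative assumption, not the paper's actual data or metric: a three-symbol stage alphabet standing in for the Function-Behavior-Structure stages, toy stage sequences for three hypothetical designers, and an L1 distance between row-normalized transition matrices as the similarity measure fed to clustering.

```python
from collections import Counter
from itertools import product

# Hypothetical design-stage alphabet (Function / Behavior / Structure).
STAGES = ["F", "B", "S"]

def transition_matrix(seq):
    """Row-normalized first-order Markov transition frequencies."""
    counts = Counter(zip(seq, seq[1:]))  # missing pairs count as 0
    mat = {}
    for a in STAGES:
        row = sum(counts[(a, b)] for b in STAGES)
        for b in STAGES:
            mat[(a, b)] = counts[(a, b)] / row if row else 0.0
    return mat

def l1_distance(m1, m2):
    """Sum of absolute differences between two transition matrices."""
    return sum(abs(m1[k] - m2[k]) for k in product(STAGES, STAGES))

# Toy stage sequences: d1 and d2 behave alike, d3 does not.
designers = {
    "d1": list("FBSFBSFBS"),
    "d2": list("FBSFBFBSF"),
    "d3": list("SSSSBFSSS"),
}
mats = {name: transition_matrix(seq) for name, seq in designers.items()}
print(l1_distance(mats["d1"], mats["d2"]) < l1_distance(mats["d1"], mats["d3"]))  # True
```

Any off-the-shelf clustering method (K-means, hierarchical, or network-based, as in the paper) can then operate on the flattened matrices or on the pairwise distance matrix.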
Automatic Fault Tree Generation From Multidisciplinary Dependency Models for Early Failure Propagation Assessment
N. Papakonstantinou, Joonas Linnosmaa, J. Alanen, B. O'Halloran
DOI: 10.1115/DETC2018-85189. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Safety engineering for complex systems is a very challenging task, and industry has a firm basis in, and trust of, a set of established methods such as Probabilistic Risk Assessment (PRA). New systems engineering methodologies are being proposed by academia, some related to safety, but they have a limited chance of successful adoption by the safety industry unless they offer a clear connection and benefit relative to the traditional methodologies. Model-Based Systems Engineering (MBSE) has produced multiple safety-related applications. In past work, system models were used to generate event trees and failure propagation scenarios and to support early human reliability analyses. This paper extends previous work on a high-level interdisciplinary system model for early defense-in-depth assessment to support the automatic generation of fault tree statements for specific critical system components. These statements can then be combined into fault trees using software already used by industry. The fault trees can in turn be linked to event trees to provide a more complete picture of an initiating event and the mitigating functions and critical components involved. The produced fault trees take a worst-case approach: if a dependency exists, failure propagation is assumed to be certain. Our proposed method does not consider specific failure modes and their probabilities; a safety expert can use the generated trees as a starting point for further development. The methodology is demonstrated with a case study of a spent fuel pool cooling system of a nuclear plant.
Current and Future Manufacturing of Chest Orthoses, Considering the Case of Osteogenesis Imperfecta
D. Redaelli, E. Biffi, G. Colombo, P. Fraschini, G. Reni
DOI: 10.1115/DETC2018-86425. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

This paper discusses the current manufacturing processes for chest orthoses for patients affected by Osteogenesis Imperfecta (OI), as well as a possible future scenario. OI is a genetic disease caused primarily by the genes responsible for collagen production. One of the most common symptoms among subjects with OI is scoliosis, an abnormal deformation of the spinal curvature. Non-invasive treatments for realigning the spine consist of both physical exercise and the use of chest braces. The latter are strongly patient-dependent devices; thus, the level of customization is high. The production processes can be classified as traditional, modern, and research processes. The traditional process consists of a sequence of manual operations on plaster casts and final orthoses. The modern process integrates CAD/CAM systems for the initial virtual 3D-modeling phases and automates cast production with a milling robot, while retaining the second part of the traditional process. The research process considers introducing polymer Additive Manufacturing (AM) as a substitute for thermoforming. The advantages and disadvantages of each process are discussed in relation to the OI problem.
Optical PUF Design for Anti-Counterfeiting in Manufacturing of Metallic Goods
Adam Dachowicz, M. Atallah, Jitesh H. Panchal
DOI: 10.1115/DETC2018-85714. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

In the context of globalized supply chains, counterfeiting of manufactured goods is a growing problem. The financial, legal, and reputational costs that counterfeit goods impose on legitimate enterprises have spurred investigation into efficient and robust anti-counterfeiting methodologies. In particular, physically unclonable functions (PUFs) have been applied effectively in several manufacturing areas, especially electronics. However, anti-counterfeiting solutions for generic manufactured goods are often expensive to make and implement, or not robust to minor damage that the goods may sustain during transport and use. In this paper, a framework for developing robust, efficient, and cost-effective optical PUFs for anti-counterfeiting of manufactured metallic goods is proposed, along with an example implementation for 4140-steel parts conforming to standard ASTM A29. For an input library of 50 steel micrographs, the proposed example PUF is shown to have good robustness to simulated part damage and an estimated classification error rate of less than 1%.
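The matching side of an optical PUF scheme can be illustrated with a toy sketch: derive a bit-string fingerprint from a micrograph's intensity grid and classify a query against enrolled parts by Hamming distance. The 4x4 intensity grids, the fixed threshold, and the single-pixel "damage" below are all fabricated for illustration; the paper's actual feature extraction from steel micrographs is not specified here.

```python
def fingerprint(image, thresh=128):
    # Flatten the intensity grid into a bit string: 1 where brighter than thresh.
    return [1 if px > thresh else 0 for row in image for px in row]

def hamming(a, b):
    # Number of positions at which two fingerprints disagree.
    return sum(x != y for x, y in zip(a, b))

# Fabricated 4x4 "micrographs" enrolled for two genuine parts.
part_a = [[200, 90, 180, 40], [30, 210, 60, 190], [170, 20, 220, 80], [50, 140, 10, 230]]
part_b = [[40, 180, 30, 200], [190, 20, 210, 60], [10, 230, 50, 170], [140, 90, 220, 30]]
enrolled = {"A": fingerprint(part_a), "B": fingerprint(part_b)}

# Part A comes back lightly damaged: one region darkened in transit.
damaged_a = [row[:] for row in part_a]
damaged_a[0][0] = 100
query = fingerprint(damaged_a)

# Despite the damage, the query is still closest to its own enrollment.
best = min(enrolled, key=lambda k: hamming(query, enrolled[k]))
print(best)  # A
```

This is the sense in which such schemes tolerate minor damage: a few flipped bits leave the genuine enrollment the nearest neighbor.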
A Method of Motion-Based Immersive Design System for Vehicle Occupant Package
J. Wan, Nanxin Wang
DOI: 10.1115/DETC2018-85054. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Objectively assessing user experience with a new vehicle design is critical for an automotive company seeking to improve customer satisfaction with its products. Motion capture and digital human modeling have been utilized in many recent studies to assist the design process of a new product. Motions of humans interacting with a new product are captured or simulated. The human body trajectory and the swept volumes of the motion are overlaid with the product geometry in CAD, guiding the direction of new design changes. However, a CAD system generally requires training before it can be used efficiently to review a design; a more intuitive and easier approach is preferred. In recent years, Virtual and Augmented Reality (VR/AR) have taken root in engineering. The technology provides great advantages for design review, process simulation, maintenance, training, and similar applications by expanding the physical world with virtual components or helpful information. This paper presents a new motion-based immersive design method for the vehicle occupant package that integrates VR/AR and digital human modeling. The method incorporates captured or simulated human motions, along with the new geometry of a vehicle compartment, inside a VR/AR environment. The designer can thereby not only experience the new design immersively but also observe the behavior of different users interacting with the design in the 3D environment. The method provides a systematic and intuitive way for a designer to iterate quickly through the test-review-revise design cycle and achieve a more accommodating and occupant-friendly vehicle compartment design.
HEKM: A High-End Equipment Knowledge Management System for Supporting Knowledge-Driven Decision-Making in New Product Development
Chao Zhang, Guanghui Zhou, Bai Quandong, Q. Lu, Fengtian Chang
DOI: 10.1115/DETC2018-85151. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Pre-existing knowledge buried in high-end equipment manufacturing enterprises could be effectively reused to help decision-makers develop sound judgments about problems in new product development, which in turn speeds up and improves the quality of product innovation. Nevertheless, a knowledge-based decision support system for the high-end equipment domain has not yet been fully realized, owing to the complexity of knowledge content, the fragmentation of knowledge themes, the heterogeneity of knowledge formats, and the decentralization of knowledge storage. To address these issues, this paper develops a high-end equipment knowledge management system (HEKM) for supporting knowledge-driven decision-making in new product development. HEKM manages and reuses knowledge in three steps. First, knowledge resources are captured and structured through a standard knowledge description template. Then, OWL ontologies are employed to describe, explicitly and unambiguously, the concepts of the captured knowledge and the relationships that hold between those concepts. Finally, the Personalized PageRank algorithm, together with an ontology reasoning approach, is used for knowledge navigation, whereby decision-makers can acquire the knowledge most relevant to a given problem through knowledge queries or customized active push. The feasibility and effectiveness of HEKM are demonstrated through three industrial application examples.
Reliability Analysis Using Deep Learning
Chong Chen, Y. Liu, Xianfang Sun, Shixuan Wang, C. Cairano-Gilfedder, Scott Titmus, A. Syntetos
DOI: 10.1115/DETC2018-86172. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Over the last few decades, reliability analysis has gained increasing attention, as it can help lower maintenance costs. Time between failures (TBF) is an essential topic in reliability analysis. If the TBF can be accurately predicted, preventive maintenance can be scheduled in advance to avoid critical failures. The purpose of this paper is to investigate TBF prediction using deep learning techniques. Deep learning, as a tool capable of capturing highly complex and nonlinear patterns, can be useful for TBF prediction. The general principles of designing a deep learning model are introduced. Using a sizeable automobile TBF dataset, we conduct an empirical study of TBF prediction with deep learning and several data mining approaches. The empirical results show the performance merits of deep learning, but at the cost of a high computational load.
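A common way to frame TBF prediction as supervised learning is to convert failure timestamps into inter-failure times and then into (history window, next-TBF) pairs for a model to fit. The abstract does not specify the encoding used, so this sliding-window framing, and the fabricated failure log below, are assumptions for illustration only.

```python
def timestamps_to_tbf(timestamps):
    """Convert sorted failure timestamps into time-between-failures values."""
    return [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

def sliding_windows(tbf, width):
    """Pair each run of `width` consecutive TBFs with the TBF that follows it."""
    return [(tbf[i:i + width], tbf[i + width]) for i in range(len(tbf) - width)]

# Fabricated failure log (days since the vehicle entered service).
failures = [0, 12, 25, 33, 50, 61]
tbf = timestamps_to_tbf(failures)      # [12, 13, 8, 17, 11]
pairs = sliding_windows(tbf, width=2)  # first pair: ([12, 13], 8)
print(pairs[0])
```

Each pair becomes one training example; a deep network (or any of the data mining baselines the paper compares against) then maps the window to the predicted next TBF.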
Polynomial Representation of the Gaussian Process
Jesper Kristensen, I. Asher, Liping Wang
DOI: 10.1115/DETC2018-85145. In: Volume 1B: 38th Computers and Information in Engineering Conference, 2018.

Gaussian Process (GP) regression is a well-established probabilistic meta-modeling and data analysis tool. The posterior distribution of the GP parameters can be estimated using, e.g., Markov Chain Monte Carlo (MCMC). The ability to make predictions is a key aspect of using such surrogate models. To make a GP prediction, both the MCMC chain and the training data are required. For some applications, GP predictions can require too much computational time and/or memory, especially when there are many training data points. This motivates the present work: representing the GP in an equivalent polynomial (or other global functional) form, called a portable GP. The portable GP inherits many benefits of the GP, including feature ranking via Sobol indices, robust fitting to non-linear and high-dimensional data, and accurate uncertainty estimates. The framework expands the GP in a high-dimensional model representation (HDMR). After each HDMR basis function is fitted with a polynomial, the fitted terms are summed to form the portable GP. A ranking of which basis functions to use in the fitting process is automatically provided via Sobol indices. The uncertainty from the fitting process can be propagated to the final polynomial GP estimate. In applications where speed and accuracy are paramount, spline fits to the basis functions give very good results. Finally, the portable GP provides an alternative set of assumptions regarding extrapolation behavior, which may be more appropriate than the assumptions inherent in GPs.
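The HDMR expansion at the core of the portable GP can be illustrated with a minimal first-order cut-HDMR sketch. Note the assumptions: the paper expands a fitted GP and then fits polynomials to the resulting terms, whereas the toy below expands an arbitrary (here, invented additive) function about an arbitrary anchor point just to show the structure of the decomposition.

```python
def cut_hdmr_first_order(f, cut):
    """First-order cut-HDMR surrogate: the constant f0 plus one univariate
    correction per input, each evaluated along a line through the anchor."""
    f0 = f(cut)

    def surrogate(x):
        total = f0
        for i in range(len(cut)):
            point = list(cut)
            point[i] = x[i]           # vary only coordinate i
            total += f(tuple(point)) - f0
        return total

    return surrogate

# For a purely additive function, the first-order expansion is exact.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
s = cut_hdmr_first_order(f, cut=(0.0, 0.0))
print(abs(s((2.0, 1.0)) - f((2.0, 1.0))) < 1e-12)  # True
```

In the portable-GP framework, each univariate (and, if needed, higher-order) term would be replaced by a polynomial or spline fit, with Sobol indices ranking which terms are worth keeping.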
The objective of this research is to investigate the performance of a solid model similarity assessment method. This method is used to assess the similarity of tessellated solid models, where the tessellated geometry is in the form of triangles — specifically, the method compares STL files. A histogram of (triangle) tessellation areas is generated for each solid model being compared. The difference in the histograms of two solid models indicates their dissimilarity. The performance of the solid model similarity assessment method is evaluated by varying tessellation resolutions, and by varying histogram bin sizes. The solid model similarity assessment method is also compared to methods from literature. The comprehensive testing was performed using 867 solid models from the Engineering Shape Benchmark. It is found that the method was robust in its sensitivity to histogram bin sizes, and robust in its sensitivity to tessellation resolution. It is found that for small retrieval sizes, precision is relatively high. It is also found that this method outperformed methods from literature when comparing models that are rectangular, flat, thin, and/or cubic. Additionally, shortcomings of this method and related future work is identified.
{"title":"Similarity of Tessellated Solid Models for Engineering Applications","authors":"R. S. Renu, Christopher Sousa","doi":"10.1115/DETC2018-85269","DOIUrl":"https://doi.org/10.1115/DETC2018-85269","url":null,"abstract":"The objective of this research is to investigate the performance of a solid model similarity assessment method. This method is used to assess the similarity of tessellated solid models, where the tessellated geometry is in the form of triangles — specifically, the method compares STL files. A histogram of (triangle) tessellation areas is generated for each solid model being compared. The difference in the histograms of two solid models indicates their dissimilarity. The performance of the solid model similarity assessment method is evaluated by varying tessellation resolutions, and by varying histogram bin sizes. The solid model similarity assessment method is also compared to methods from literature. The comprehensive testing was performed using 867 solid models from the Engineering Shape Benchmark. It is found that the method was robust in its sensitivity to histogram bin sizes, and robust in its sensitivity to tessellation resolution. It is found that for small retrieval sizes, precision is relatively high. It is also found that this method outperformed methods from literature when comparing models that are rectangular, flat, thin, and/or cubic. 
Additionally, shortcomings of this method and related future work is identified.","PeriodicalId":338721,"journal":{"name":"Volume 1B: 38th Computers and Information in Engineering Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127725340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
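The histogram-of-areas comparison described in the abstract above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the bin count, the maximum-area cutoff, and the use of L1 distance between normalized histograms are all assumptions, and triangles are taken as plain vertex tuples rather than parsed from an STL file.

```python
import math

def triangle_area(v0, v1, v2):
    # Area = 0.5 * |(v1 - v0) x (v2 - v0)| for 3D vertices.
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    cx = ay * bz - az * by
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def area_histogram(triangles, bins=32, max_area=1.0):
    # Normalized histogram of triangle areas; areas >= max_area
    # fall into the last bin.
    hist = [0.0] * bins
    for v0, v1, v2 in triangles:
        a = triangle_area(v0, v1, v2)
        idx = min(int(a / max_area * bins), bins - 1)
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def dissimilarity(h1, h2):
    # L1 distance between two normalized histograms; 0 means identical
    # area distributions (not necessarily identical shapes).
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

A model compared against itself yields a dissimilarity of zero, and two tessellations of the same part at different resolutions yield small but nonzero values, which is why the abstract's sensitivity study over resolutions and bin sizes matters.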
The emergence of the Internet of Things (IoT) has created an unanticipated rise of smart and connected products in the consumer market. While smart and connected products have become a fundamental part of our day-to-day lives, consumers’ perspectives on these products remain largely uncharted territory. This paper explores how “smart” and “connected” products are perceived in the consumer market and what the key driving factors are behind their unparalleled success. To answer these questions, the authors first determined “smartness” and “connectivity” criteria for judging products, based on the most commonly used rating systems for such products. This was followed by a case study analysis to determine whether there is a correlation between “smartness,” “connectivity,” and other product parameters. It is discovered that “smartness,” as defined in the paper, is a resource-intensive component of a product and therefore directly affects its price. On the other hand, consumers are more receptive to the “connectivity” aspect of a product. The correlations found in the paper could help fill the gaps between areas of focus for technology development in the industry and user demands.
{"title":"Analysis of Consumer Response and Pricing of Smart and Connected Products","authors":"D. Patel, Darshan Yadav, Beshoy Morkos","doi":"10.1115/DETC2018-86304","DOIUrl":"https://doi.org/10.1115/DETC2018-86304","url":null,"abstract":"The emergence of Internet of Things (IoT) has created an unanticipated rise of smart and connected products in the consumer market. While, smart and connected products have become a fundamental part of our day to day life, consumer’s perspective regarding these smart and connected products still remains an uncharted territory. This paper tries to explore how these “smart” and “connected” products are perceived in the consumer market and what are the key driving factors behind the unparalleled success of these products. In order to answer these questions, the authors first determined the “smartness” and “connectivity” criteria to judge all the products based on the most commonly used rating systems for such products. Followed by a case study analysis to determine if there is a correlation between “smartness”, “connectivity” and other product parameters. It is discovered that “smartness” as defined in the paper, is a resource intensive component of a product and therefore, directly affects the price of a product. On the other hand, consumers are more receptive to “connectivity” aspect of a product. 
The correlations found in the paper could help fill the gaps between areas of focus for technology development in the industry and user demands.","PeriodicalId":338721,"journal":{"name":"Volume 1B: 38th Computers and Information in Engineering Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130329100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
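The correlation analysis at the heart of the abstract above can be illustrated with a Pearson correlation coefficient between a smartness score and price. This is a hedged sketch only: the scores and prices below are hypothetical, and the paper's actual scoring criteria and statistical procedure are not reproduced here.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical smartness ratings (1-5) and prices (USD) for five products,
# used only to show how a smartness-price correlation would be computed.
smartness = [1, 2, 3, 4, 5]
price = [40, 90, 150, 210, 300]
```

A coefficient near +1 would support the abstract's finding that smartness, as a resource-intensive component, directly affects price.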