Model-Based Systems Engineering (MBSE) is a mainstream methodology for the design of complex systems, and verification is a necessary part of it. Although there is significant past research on verification, deficiencies remain; in particular, behavior requirement verification in the early design stage is lacking. In this study, an approach to behavior verification at the early design stage is presented. First, a unified modeling method based on SysML is proposed, and transformation rules are defined to ensure the correctness and definiteness of the generated ontology. Second, behavior requirements are classified and formalized as rules. Finally, a hierarchical behavior verification approach based on ontology reasoning is proposed. The approach is convenient for designers to use and requires no additional expertise. A case study demonstrates its effectiveness.
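As an illustration of the kind of rule-based behavior check the abstract describes, the sketch below tests one invented behavior requirement against a toy state-transition model; the states, transitions, and the rule itself are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): a behavior requirement
# formalized as a rule and checked against a toy state-transition model.
# All names here (the states and the recovery rule) are hypothetical.

transitions = {
    "Idle":    ["Heating"],
    "Heating": ["Cooling", "Idle"],
    "Cooling": ["Idle"],
}

def reachable(start, transitions):
    """Collect every state reachable from `start` (simple graph traversal)."""
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state not in seen:
            seen.add(state)
            frontier.extend(transitions.get(state, []))
    return seen

# Rule: from Heating, the system must always be able to return to Idle.
def rule_can_recover(transitions):
    return "Idle" in reachable("Heating", transitions)

print(rule_can_recover(transitions))  # True for this toy model
```

A real ontology-reasoning pipeline would express such rules in a language like SWRL over the generated ontology, but the reachability check above captures the same verification idea in miniature.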
"Ontology Based Behavior Verification for Complex Systems." Ruirui Chen, Yusheng Liu, Xiaoping Ye. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-85689.
Ming Xu, T. Scholl, P. Berjano, Jazmin Cruz, James Yang
Rod fracture and nonunion are common complications associated with pedicle subtraction osteotomy (PSO). Supplementary rods and an interbody cage (IB) are added to reduce the primary rod stress. Among supplementary rods, delta rods and cross rods have been proposed to relieve more stress on the primary rods than conventional supplementary (accessory) rods do in PSO. The objective of this study is to investigate the effects of cross rods and delta rods on reducing primary rod stress in PSO subjects. A validated 3D finite element model of a T12-S1 spine segment with a 25° PSO at L3 and bilateral rod fixation from T12-S1 was used to compare six rod configurations: 1) PSO with two primary rods (PSO+2P); 2) PSO with an IB at L2-L3 (PSO+2P+IB); 3) PSO with accessory rods and an IB at L2-L3 (PSO+2P+IB+2A); 4) PSO with delta rods and an IB at L2-L3 (PSO+2P+IB+2D); 5) PSO with a single cross rod and an IB at L2-L3 (PSO+2P+IB+1C); 6) PSO with double cross rods and an IB at L2-L3 (PSO+2P+IB+2C). The spine model was loaded with a follower load of 400 N combined with pure moments of 7.5 Nm in flexion, extension, right lateral bending, and right axial rotation. The von Mises stress of the primary rods was predicted for all test conditions. The configuration without an IB had the largest primary rod stress in flexion. With an IB at L2-L3, the rod stress in flexion was reduced by 15%. Adding two conventional supplementary rods reduced the rod stress in flexion by 29%, a reduction also achieved by adding a single cross rod. Without supplementary rods, the maximum von Mises stress occurred in the middle of the primary rods, whereas with supplementary rods the maximum stress concentrated adjacent to the contact region between the connectors and the primary rods. Delta rods and double cross rods reduced the rod stress in flexion the most, by 33% and 32% respectively. Under lateral bending, two delta rods reduced the primary rod stress the most (−33%). Under axial rotation, the single cross rod reduced the primary rod stress the most (−48%). Interbody cages and supplementary rods reduced the primary rod stress to a comparable degree. Primary rod stresses with two delta rods and with double cross rods were comparable, and marginally lower than those with conventional supplementary rods. In flexion, adding a single cross rod was comparable to adding two conventional accessory rods in stress reduction. Under lateral bending, delta rods reduced rod stress the most, whereas under axial rotation, cross rods did. This study suggests that both delta rods and cross rods reduce primary rod stress more than conventional accessory rods do.
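The reported percentages are relative reductions with respect to a baseline stress; a minimal sketch of that arithmetic, with placeholder stress values rather than the study's data:

```python
# Sketch of the stress-reduction arithmetic behind the reported percentages.
# Stress values (MPa) are illustrative placeholders, not the paper's results.

def percent_reduction(baseline, value):
    """Reduction of `value` relative to `baseline`, in percent."""
    return 100.0 * (baseline - value) / baseline

baseline = 200.0            # e.g. PSO+2P primary rod stress in flexion
configs = {
    "PSO+2P+IB":    170.0,  # ~15% lower than baseline
    "PSO+2P+IB+2A": 142.0,  # ~29% lower
    "PSO+2P+IB+2D": 134.0,  # ~33% lower
}
for name, stress in configs.items():
    print(f"{name}: -{percent_reduction(baseline, stress):.0f}%")
```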
"Rod Stress Prediction in Spinal Alignment Surgery With Different Supplementary Rod Constructing Techniques: A Finite Element Study." Ming Xu, T. Scholl, P. Berjano, Jazmin Cruz, James Yang. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-85601.
A number of pathologies affect the way a patient can move or control the movements of the body. Traumas, joint arthritis, or generic orthopedic disease affect the way a person walks or performs everyday movements; brain or spine issues can lead to complete or partial impairment, affecting both muscular response and sensitivity. Each of these disorders shares the need to assess the patient's condition while performing specific tests and exercises or accomplishing everyday tasks. Moreover, high-level sports activity may also benefit from digital tools that acquire physical performance data for improvement. The assessment can serve several purposes, such as creating a custom physical rehabilitation plan, monitoring improvement or worsening over time, correcting wrong postures or bad habits, and, in the sports domain, optimizing the effectiveness of gestures or the related energy consumption. The paper shows the use of low-cost motion capture techniques to acquire human motion, the transfer of motion data to a digital human model, and the extraction of the desired information for each specific medical or sports purpose. We adopted the well-known and widespread Mocap technology implemented by Microsoft Kinect devices and used iPiSoft tools to perform acquisition and preliminary data elaboration on the virtual skeleton of the patient. The focus of the paper is on the working method, which can be generalized to any medical, rehabilitative, or sports context in which the analysis of motion is crucial. The acquisition scene can be optimized in terms of the size and shape of the working volume and the number and positioning of sensors. However, the most important and decisive phase consists in knowledge acquisition and management. For each application, and even for each single exercise or task, a set of evaluation rules and thresholds must be extracted from the literature or, more often, directly from experienced personnel. This operation is generally time consuming and requires further iterations to be refined, but it is the core of generating an effective metric and correctly assessing patients' and athletes' performances. Once the rules are defined, proper algorithms are designed and implemented to automatically extract only the relevant data in specific time frames and calculate performance indexes. Finally, a report is generated according to the final user's requests and skills.
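The rule-and-threshold evaluation described above can be sketched as follows; the joint-angle data, the threshold window, and the index definition are all invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of the rule-and-threshold step: evaluate one exercise
# rule (an invented knee-flexion range) over captured joint angles and
# reduce it to a simple performance index. Thresholds and data are
# illustrative only.

frames = [12.0, 35.5, 61.2, 88.9, 92.4, 75.0, 40.1, 15.3]  # knee angle, deg

def performance_index(angles, lo=30.0, hi=90.0):
    """Fraction of frames whose angle lies inside the allowed [lo, hi] range."""
    inside = sum(1 for a in angles if lo <= a <= hi)
    return inside / len(angles)

print(f"index = {performance_index(frames):.2f}")
```

In a real deployment the thresholds would come from clinicians or trainers, exactly as the abstract notes, and one such rule would exist per exercise.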
"A Method to Analyse Generic Human Motion With Low-Cost Mocap Technologies." D. Regazzoni, A. Vitali, C. Rizzi, G. Colombo. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-86197.
S. Harper, Aparajithan Sivanathan, T. Lim, S. McGibbon, J. Ritchie
Mixed reality opens new ways of connecting users to virtual content. For simulation-based education and training (SBET), mixed reality offers an enriched environment in which to experience digital learning. In turn, learners can develop their mental models to process and connect 2D/3D information in real-world settings. This paper reports on the use of the Microsoft HoloLens to create a mixed reality SBET environment. The challenges of this investigation lie in harmonising augmented content with the real world, including real-time, low-latency tracking of tangible objects and their interaction with the augmented content. The research emphasis is on technology-mediated affordances. For example, what affordance does the HoloLens provide the learner in terms of interactive manipulation or navigation in the virtual environment? We examine this through control-display (CD) gain in conjunction with cyber-physical systems (CPS) approaches. This work builds on knowledge previously attained in creating an AR application for vocational education and training (VET) in stonemasonry.
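Control-display gain is conventionally defined as the ratio of displayed (virtual) displacement to physical control displacement; a minimal sketch with illustrative values:

```python
# Control-display (CD) gain in its usual form: displayed displacement over
# physical control displacement. A gain of 1 is a one-to-one mapping; >1
# amplifies hand motion, <1 attenuates it. Values below are illustrative.

def cd_gain(display_displacement, control_displacement):
    return display_displacement / control_displacement

# A 5 cm hand movement that moves the hologram 10 cm implies a gain of 2.
print(cd_gain(10.0, 5.0))  # 2.0
```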
"Control-Display Affordances in Simulation Based Education." S. Harper, Aparajithan Sivanathan, T. Lim, S. McGibbon, J. Ritchie. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/detc2018-85352.
We present haptics-enabled mid-air interactions for sketching collections of three-dimensional planar curves — 3D curve-soups — as a means for 3D design conceptualization. Haptics-based mid-air interactions have been extensively studied for the modeling of surfaces and solids. The same is not true for modeling curves; there is little work that explores spatiality, tangibility, and kinesthetics for curve modeling from the perspective of 3D sketching for conceptualization. We study pen-based mid-air interactions for free-form curve input from the perspective of manual labor, controllability, and kinesthetic feedback. To this end, we implemented a simple haptics-enabled workflow for users to draw and compose collections of planar curves on a force-enabled virtual canvas. We introduce a novel force-feedback metaphor for curve drawing and investigate three novel rotation techniques within our workflow for both controlled and free-form sketching tasks.
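One geometric building block such a workflow relies on, posing a planar curve in 3D by rotation, can be sketched as follows; the curve samples are illustrative and this is not the paper's implementation.

```python
# Sketch of re-posing a planar curve (a sampled polyline) in 3D by rotating
# it about the z-axis. Pure Python; the curve data is made up.
import math

def rotate_z(points, angle_deg):
    """Rotate 3D points about the z-axis by angle_deg (right-handed)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

# A curve lying in the x-z plane...
curve = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.5), (3.0, 0.0, 1.0)]
rotated = rotate_z(curve, 90.0)
# ...now lies in the y-z plane (x is ~0 for every point).
```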
"Kinesthetically Augmented Mid-Air Sketching of Multi-Planar 3D Curve-Soups." Ronak R. Mohanty, Umema Bohari, Vinayak, E. Ragan. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-86141.
Matthew L. Dering, Chonghan Lee, K. Hopkinson, Conrad S. Tucker
The authors of this work present a method that mines big media data streams from large social media networks in order to discover novel correlations between objects appearing in images and electricity utilization patterns. The hypothesis of this work is that there exist correlations between what users take pictures of and electricity utilization patterns. This work employs a convolutional neural network to detect objects in 578,232 images gathered from over 15,000,000 tweets sent in the San Diego area. These objects were considered in the context of concurrent power use on a monthly and hourly basis. The results reveal both positive and negative correlations between power use and specific objects, such as lamps (0.053 hourly), dogs (−0.011 hourly), horses (0.422 monthly), and motorcycles (−0.415 monthly).
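The correlation step can be illustrated with a plain Pearson's r between hourly object counts and power use; the series below are invented, not the study's data.

```python
# Sketch of the correlation step: Pearson's r between hourly detection
# counts of one object class and concurrent power use. Data is invented.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lamp_counts = [3, 5, 2, 8, 7, 1]              # detections/hour (illustrative)
power_mwh   = [2.1, 2.6, 1.9, 3.2, 3.0, 1.7]  # concurrent power use
print(f"r = {pearson(lamp_counts, power_mwh):.3f}")
```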
"A Deep Learning Model for Mining Object-Energy Correlations Using Social Media Image Data." Matthew L. Dering, Chonghan Lee, K. Hopkinson, Conrad S. Tucker. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-85417.
Lorenzo Micaroni, M. Carulli, F. Ferrise, M. Bordegoni, A. Gallace
This research aims to design and develop an innovative system, based on an olfactory display, for investigating the directionality of the sense of olfaction. In particular, it describes the design of an experimental setup for determining to what extent the sense of olfaction is directional, and whether the sense of vision prevails over that of smell when determining the direction of an odor. The experimental setup is based on low-cost Virtual Reality (VR) technologies. In particular, the system comprises a custom directional olfactory display, an Oculus Rift Head Mounted Display (HMD) to deliver both visual and olfactory cues, and an input device to register subjects' answers. The VR environment is developed in Unity3D. The paper describes the design of the olfactory interface as well as its integration with the overall system. Finally, the results of initial testing are reported.
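A measure such a directionality study plausibly needs is the angular error between the actual odor-source direction and the direction a subject reports; the sketch below is an assumption about the analysis, not taken from the paper.

```python
# Hypothetical sketch: angular error (degrees) between the actual odor
# source direction and a subject's reported direction, both as 2D vectors
# in the horizontal plane. Directions are invented.
import math

def angular_error_deg(actual, reported):
    ax, ay = actual
    rx, ry = reported
    dot = ax * rx + ay * ry
    na, nr = math.hypot(ax, ay), math.hypot(rx, ry)
    # Clamp to guard against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nr)))))

# Source straight ahead, subject points 45° off: error is ~45 degrees.
print(angular_error_deg((1.0, 0.0), (1.0, 1.0)))
```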
"Design of a Directional Olfactory Display to Study the Integration of Vision and Olfaction." Lorenzo Micaroni, M. Carulli, F. Ferrise, M. Bordegoni, A. Gallace. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-85972.
Jundi Liu, Steven Hwang, Walter Yund, L. Boyle, A. Banerjee
In current supply chain operations, the transactions among suppliers and original equipment manufacturers (OEMs) are sometimes inefficient and unreliable due to limited information exchange and a lack of knowledge about supplier capabilities. For the OEMs, the majority of downstream operations are sequential, requiring all parts to be available on time to ensure successful execution of production schedules. Therefore, accurate prediction of the delivery times of purchase orders (POs) is critical to satisfying these requirements. However, such prediction is challenging because of the suppliers' distributed locations, time-varying capabilities and capacities, and unexpected changes in raw material procurement. We address some of these challenges by developing supervised machine learning models, in the form of Random Forests and Quantile Regression Forests, trained on historical PO transactional data. Further, given that many predictors are categorical variables, we apply a dimension reduction method to identify the most influential category levels. Results on real-world OEM data show effective performance, with substantially lower prediction errors than supplier-provided delivery time estimates.
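The distinguishing idea of Quantile Regression Forests, reading quantiles from the ensemble of per-tree predictions instead of averaging them into a single point estimate, can be sketched as follows; the per-tree outputs are invented.

```python
# Hedged sketch of the quantile idea behind Quantile Regression Forests:
# keep the individual trees' predictions and read off quantiles, yielding
# a delivery-time range rather than a single date. Tree outputs invented.
import statistics

tree_predictions_days = [12, 14, 13, 15, 20, 12, 13, 14, 16, 13]

point_estimate = statistics.mean(tree_predictions_days)    # Random-Forest style
median = statistics.median(tree_predictions_days)          # 50th percentile
# quantiles(n=10) returns 9 cut points; index 8 approximates the 90th pct.
p90 = statistics.quantiles(tree_predictions_days, n=10)[8]

print(f"expect ~{median:.0f} days, 90% of trees within {p90:.1f} days")
```

A real QRF derives these quantiles from the training targets stored in each leaf rather than from point predictions, but the read-a-quantile-not-a-mean contrast is the same.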
"Predicting Purchase Orders Delivery Times Using Regression Models With Dimension Reduction." Jundi Liu, Steven Hwang, Walter Yund, L. Boyle, A. Banerjee. Volume 1B: 38th Computers and Information in Engineering Conference, 2018-08-26. doi:10.1115/DETC2018-85710.
L. Siddharth, Amaresh Chakrabarti, S. Venkataraman
Analogical design has been a long-standing approach to solving engineering design problems. However, it is still unclear how analogues should be presented to engineering designers in order to maximize their utility. The utility is minimal when analogues are complex and belong to another domain (e.g., biology). Prior work includes the use of a function model called SAPPhIRE to represent over 800 biological and engineered systems. SAPPhIRE stands for the entities States, Actions, Parts, Phenomena, Inputs, oRgans, and Effects, which together represent the functionality of a system at various levels of abstraction. In this paper, we combine instances of the SAPPhIRE model to represent complex systems, including those from the biological domain. We use an electric buzzer to illustrate the model and compare its efficacy in explaining complex systems with that of a well-known model from the literature. The use of multiple SAPPhIRE instances seems to provide a more comprehensive explanation of a complex system, including elements of description that are not present in other models and thereby indicating which elements might be missing from a given description. The proposed model is implemented in a web-based tool called Idea-Inspire 4.0, a brief introduction of which is also provided.
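The seven SAPPhIRE constructs can be captured in a minimal data structure; the electric-buzzer field values below are paraphrased guesses for illustration, not the paper's actual model.

```python
# Minimal data sketch of the SAPPhIRE constructs named above (State, Action,
# Part, Phenomenon, Input, oRgan, Effect). The buzzer field contents are
# hypothetical placeholders, not the paper's model.
from dataclasses import dataclass, field

@dataclass
class SapphireInstance:
    action: str                     # highest level of abstraction
    state_change: str
    phenomenon: str
    effect: str                     # governing physical principle
    inputs: list = field(default_factory=list)
    organs: list = field(default_factory=list)  # enabling conditions
    parts: list = field(default_factory=list)   # lowest level of abstraction

buzzer_arm = SapphireInstance(
    action="produce intermittent sound",
    state_change="armature oscillates",
    phenomenon="electromagnetic attraction of the armature",
    effect="electromagnetism",
    inputs=["electric current"],
    organs=["closed circuit", "ferromagnetic armature"],
    parts=["coil", "armature", "contact screw"],
)
```

Composing several such instances, one per sub-behavior, is the multiple-instance idea the abstract describes.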
{"title":"Representing Complex Analogues Using a Function Model to Support Conceptual Design","authors":"L. Siddharth, Amaresh Chakrabarti, S. Venkataraman","doi":"10.1115/DETC2018-85579","DOIUrl":"https://doi.org/10.1115/DETC2018-85579","url":null,"abstract":"Analogical design has been a long-standing approach to solving engineering design problems. However, it is still unclear how analogues should be presented to engineering designers in order to maximize their utility. Utility is lowest when analogues are complex and belong to another domain (e.g., biology). Prior work includes the use of a function model called SAPPhIRE to represent over 800 biological and engineered systems. SAPPhIRE stands for the entities: States, Actions, Parts, Phenomena, Inputs, oRgans, and Effects, which together represent the functionality of a system at various levels of abstraction. In this paper, we combine instances of the SAPPhIRE model to represent complex systems (including those from the biological domain). We use an electric buzzer to illustrate this model and to compare its efficacy in explaining complex systems with that of a well-known model from the literature. The use of multiple SAPPhIRE instances seems to provide a more comprehensive explanation of a complex system, including elements of description that are not present in other models and thereby indicating which elements might be missing from a given description. The proposed model is implemented in a web-based tool called Idea-Inspire 4.0, a brief introduction of which is also provided.","PeriodicalId":338721,"journal":{"name":"Volume 1B: 38th Computers and Information in Engineering Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126828766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
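The seven SAPPhIRE constructs enumerated in the abstract above can be sketched as a simple data structure. This is a minimal, hypothetical sketch based only on the entity names listed in the abstract; the actual representation used in Idea-Inspire 4.0 is not specified here, and the example buzzer content is illustrative.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SapphireInstance:
    """One SAPPhIRE instance: the seven constructs (States, Actions, Parts,
    Phenomena, Inputs, oRgans, Effects) that together describe a system's
    functionality at various levels of abstraction."""
    states: List[str] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)
    parts: List[str] = field(default_factory=list)
    phenomena: List[str] = field(default_factory=list)
    inputs: List[str] = field(default_factory=list)
    organs: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)


@dataclass
class SystemModel:
    """A complex system modeled as a collection of SAPPhIRE instances,
    following the paper's multiple-instance idea."""
    name: str
    instances: List[SapphireInstance] = field(default_factory=list)


# Illustrative (hypothetical) content for the electric buzzer example:
buzzer = SystemModel(
    name="electric buzzer",
    instances=[
        SapphireInstance(
            parts=["coil", "armature", "contact"],
            inputs=["electrical energy"],
            effects=["electromagnetic induction"],
        )
    ],
)
```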
Yusen He, F. Fei, Wenbo Wang, Xuan Song, Zhiyu Sun, Stephen Seung-Yeob Baek
Projection micro-stereolithography (P-μSLA) processes have been widely utilized in three-dimensional (3D) digital fabrication. However, various uncertainties in the photopolymerization process often deteriorate the geometric accuracy of fabrication results. A predictive model that maps input shapes to actual outcomes in real time would be immensely beneficial for designers and process engineers, permitting rapid design exploration through inexpensive trial and error, such that optimal design parameters as well as an optimal shape-modification plan could be identified with only minimal waste of time, material, and labor. However, no computational model has yet succeeded in predicting such geometric inaccuracies with reasonable precision. In this regard, we propose the novel idea of predicting output shapes from the input projection patterns of a P-μSLA process via deep neural networks. To this end, a convolutional encoder-decoder network is proposed in this paper. The network takes a projection image as input and returns the predicted post-fabrication shape as output. Cross-validation analyses showed a root-mean-square error (RMSE) of 10.72 μm on average, indicating the notable performance of the proposed convolutional encoder-decoder network.
{"title":"Predicting Manufactured Shapes of a Projection Micro-Stereolithography Process via Convolutional Encoder-Decoder Networks","authors":"Yusen He, F. Fei, Wenbo Wang, Xuan Song, Zhiyu Sun, Stephen Seung-Yeob Baek","doi":"10.1115/DETC2018-85458","DOIUrl":"https://doi.org/10.1115/DETC2018-85458","url":null,"abstract":"Projection micro-stereolithography (P-μSLA) processes have been widely utilized in three-dimensional (3D) digital fabrication. However, various uncertainties in the photopolymerization process often deteriorate the geometric accuracy of fabrication results. A predictive model that maps input shapes to actual outcomes in real time would be immensely beneficial for designers and process engineers, permitting rapid design exploration through inexpensive trial and error, such that optimal design parameters as well as an optimal shape-modification plan could be identified with only minimal waste of time, material, and labor. However, no computational model has yet succeeded in predicting such geometric inaccuracies with reasonable precision. In this regard, we propose the novel idea of predicting output shapes from the input projection patterns of a P-μSLA process via deep neural networks. To this end, a convolutional encoder-decoder network is proposed in this paper. The network takes a projection image as input and returns the predicted post-fabrication shape as output. Cross-validation analyses showed a root-mean-square error (RMSE) of 10.72 μm on average, indicating the notable performance of the proposed convolutional encoder-decoder network.","PeriodicalId":338721,"journal":{"name":"Volume 1B: 38th Computers and Information in Engineering Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115028018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
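The evaluation metric reported in the abstract above, RMSE averaged over cross-validation folds, can be sketched in a few lines. This is a minimal illustration of the metric only; the height-map values and fold structure below are made up for demonstration and are not data from the paper.

```python
import math
from typing import Sequence


def rmse(predicted: Sequence[float], actual: Sequence[float]) -> float:
    """Root-mean-square error between two flattened height maps
    (same units, e.g. micrometers)."""
    assert len(predicted) == len(actual) and len(actual) > 0
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )


# Illustrative cross-validation: compute RMSE per fold, then average.
# Each pair is (predicted shape, measured shape), flattened to 1D.
folds = [
    ([10.0, 12.0], [11.0, 10.0]),
    ([5.0, 5.0], [5.0, 8.0]),
]
fold_rmses = [rmse(pred, act) for pred, act in folds]
mean_rmse = sum(fold_rmses) / len(fold_rmses)
```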