Title: Optimizing Support for Heat Dissipation in Additive Manufacturing
Authors: Cunfu Wang, Xiaoping Qian
DOI: 10.1115/detc2020-22198
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: The paper presents a formulation for support optimization to maximize heat dissipation in additive manufacturing. To simulate heat transfer from the part to the supports, a boundary-dependent heat flux is applied on the part/support interface. Since density-based topology optimization does not involve explicit boundaries, the heat flux is implicitly imposed through a domain integration of a Heaviside-projected density gradient. As such, this formulation also supports the simultaneous optimization of parts and supports in additive manufacturing. Self-supporting supports are obtained by controlling the anisotropic thermal conductivity of the supports. Three different objective functions related to heat-dissipation efficiency are investigated. Numerical examples are presented to demonstrate the validity and efficiency of the proposed approach.
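The "domain integration of a Heaviside-projected density gradient" admits a compact generic form. The symbols below are not taken from the paper (the abstract gives no equations); this is a sketch in standard density-based notation, with filtered density ρ̃, smoothed Heaviside projection H with sharpness β and threshold η, interface flux q, and test function v:

```latex
% Boundary flux on the part/support interface Gamma_ps, imposed implicitly:
% the gradient of the projected density is nonzero only in the thin
% transition band, so the domain integral concentrates on the diffuse
% interface (cf. the coarea formula).
\int_{\Gamma_{ps}} q \, v \, \mathrm{d}s
  \;\approx\;
\int_{\Omega} q \, \bigl\lVert \nabla H(\tilde{\rho}) \bigr\rVert \, v \, \mathrm{d}\Omega ,
\qquad
H(\tilde{\rho}) =
  \frac{\tanh(\beta\eta) + \tanh\bigl(\beta(\tilde{\rho}-\eta)\bigr)}
       {\tanh(\beta\eta) + \tanh\bigl(\beta(1-\eta)\bigr)} .
```

As β grows, H(ρ̃) approaches a step function and ‖∇H(ρ̃)‖ approaches a surface measure on the part/support interface, which is what lets a volume integral stand in for the boundary term without an explicit boundary representation.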
Title: Analysis of Extrusion Parameters for the Fused Deposition Modeling Process
Authors: Wenqiang Yu, Zhengwei Nie, Yuyi Lin, Huanying Jiang
DOI: 10.1115/detc2020-22280
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: Fused deposition modeling (FDM) is an additive manufacturing technology commonly used for three-dimensional printing. The screw rod is an important part of the extrusion nozzle in FDM manufacturing. The technical parameters of the screw rod, such as the screw pitch, screw angle, groove width, and length-to-diameter (aspect) ratio, directly determine the extrusion speed and the precision of modeling. To improve the printing speed and quality of FDM models, this work analyzed the influence of the extrusion screw's parameters on the extrusion flow rate and proposed the best parameters for the screw rod of the FDM process. The results of this work are of crucial significance for the modeling speed and accuracy of FDM.
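The abstract does not state the flow-rate model used. A common starting point for relating screw parameters to extrusion flow rate is the classical Newtonian metering-zone model of a single-screw extruder (drag flow minus pressure flow); the function below is an illustrative sketch of that textbook relation, not the authors' analysis:

```python
import math

def metering_zone_flow(D, h, N, phi_deg, dP_dz, mu):
    """Net volumetric flow rate (m^3/s) in the metering zone of a
    single-screw extruder, classical Newtonian model.

    D      screw diameter (m)
    h      channel depth (m)
    N      screw speed (rev/s)
    phi_deg  helix (screw) angle (degrees)
    dP_dz  axial pressure gradient (Pa/m); positive opposes flow
    mu     melt viscosity (Pa*s)
    """
    phi = math.radians(phi_deg)
    # Drag flow: material dragged forward by the rotating screw surface.
    q_drag = 0.5 * math.pi**2 * D**2 * N * h * math.sin(phi) * math.cos(phi)
    # Pressure flow: back-flow driven by the pressure build-up at the nozzle.
    q_press = (math.pi * D * h**3 * math.sin(phi) ** 2) / (12.0 * mu) * dP_dz
    return q_drag - q_press
```

The model makes the abstract's claim concrete: pitch and helix angle enter through sin φ cos φ, channel (groove) depth h boosts drag flow linearly but back-flow cubically, so there is an optimum depth for a given nozzle pressure.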
Title: Comparing Segmentation Approaches for Learning-Aware Wireframe Generation on Human Model
Authors: Jida Huang, Tsz-Ho Kwok
DOI: 10.1115/detc2020-22616
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: Wireframes have proved very useful for learning human body shape from semantic parameters. However, in previous works the definition of the wireframe is highly dependent on the anthropometric experience of experts. Hence it is usually not easy to obtain a well-defined wireframe for a new set of human models in an available database. To overcome this difficulty, an automated wireframe-generation method would be very helpful in relieving the need for manual anthropometric definition. A natural way to obtain such a method is to use automatic segmentation to divide the human body model into small mesh patches. Nevertheless, different segmentation approaches can produce different segmented patches, and thus different wireframes. How do these wireframes affect human body learning performance? In this paper, we attempt to answer this research question by comparing different segmentation methods. Different wireframes are generated with the mesh-segmentation methods, and these wireframes are then used as an intermediate agent to learn the relationship between the human body mesh models and the semantic parameters. We compared the reconstruction accuracy across the generated wireframe sets and summarized several meaningful design guidelines for developing an automatic, wireframe-aware segmentation method for human body learning.
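As a toy illustration of the "wireframe as intermediate agent" idea (the data, dimensions, and two-stage linear model below are all synthetic stand-ins, not the paper's method or dataset), semantic parameters are first mapped to wireframe control points and those are then mapped to full mesh vertices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bodies, n_params, n_wire, n_mesh = 200, 10, 60, 900

# Synthetic stand-ins: semantic parameters, wireframe points, mesh vertices.
params = rng.normal(size=(n_bodies, n_params))
A_true = rng.normal(size=(n_params, n_wire))
wire = params @ A_true + 0.01 * rng.normal(size=(n_bodies, n_wire))
B_true = rng.normal(size=(n_wire, n_mesh))
mesh = wire @ B_true

# Stage 1: semantic parameters -> wireframe control points.
A_hat, *_ = np.linalg.lstsq(params, wire, rcond=None)
# Stage 2: wireframe control points -> full mesh vertices.
B_hat, *_ = np.linalg.lstsq(wire, mesh, rcond=None)

# Reconstruction through the wireframe bottleneck, and its error.
recon = (params @ A_hat) @ B_hat
rmse = np.sqrt(np.mean((recon - mesh) ** 2))
```

The comparison the paper describes amounts to repeating this fit with wireframes produced by different segmentation methods and ranking them by the resulting reconstruction error.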
Title: Architecting the Cooperative 3D Printing System
Authors: Laxmi Poudel, L. Marques, R. Williams, Hyden Zachary, Pablo Guerra, Oliver Fowler, S. Moquin, Zhenghui Sha, Wenchao Zhou
DOI: 10.1115/detc2020-22711
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: Cooperative 3D printing (C3DP) is a novel approach to additive manufacturing in which multiple mobile 3D printing robots work together cooperatively to print the desired part. At the core of C3DP lies the chunk-based printing strategy, which splits the desired part into smaller chunks that are then assigned and scheduled to be printed by individual printing robots. In our previous work, we presented various hardware and software components of C3DP, such as mobile 3D printers, chunk-based slicing, scheduling, and simulation. In this study, we present a fully integrated and functional C3DP platform with all necessary components, including the chunker, slicer, scheduler, printing robots, and build floor, and we outline how they work in unison from a system-level perspective. To realize C3DP, new hardware and software developments are presented, including new chunking approaches, a scalable scheduler for multiple robots, SCARA-based printing robots, a mobile platform for transporting the printing robots, modular floor tiles, and a charging station for the mobile platform. Finally, we demonstrate the capability of the system using two case studies. In these demonstrations, a CAD model of a part is fed to the chunker and divided into smaller chunks, which the scheduler then assigns and schedules to be printed by a given number of robots. The slicer generates G-code for each chunk and combines the G-code into one file per robot. The simulator then uses the G-code generated by the slicer to generate animations for visualization purposes.
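A minimal sketch of chunk-dependency scheduling (the data structure and the greedy round-robin policy are illustrative assumptions, not the paper's scheduler): a chunk can start only after the chunks it depends on are printed, and ready chunks are handed to robots in turn.

```python
from collections import deque

def schedule_chunks(deps, n_robots):
    """Greedy chunk scheduler.

    deps: dict mapping chunk id -> set of prerequisite chunk ids
          (e.g. chunks that the given chunk geometrically rests on).
    Returns a list of (chunk, robot) pairs in print order.
    """
    indeg = {c: len(d) for c, d in deps.items()}
    dependents = {c: [] for c in deps}
    for c, d in deps.items():
        for p in d:
            dependents[p].append(c)

    # Chunks with no prerequisites are printable immediately.
    ready = deque(sorted(c for c, k in indeg.items() if k == 0))
    order, robot = [], 0
    while ready:
        c = ready.popleft()
        order.append((c, robot))
        robot = (robot + 1) % n_robots          # round-robin assignment
        for nxt in dependents[c]:               # release newly ready chunks
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

# Example: chunks 0 and 1 are independent, 2 rests on both, 3 rests on 2.
plan = schedule_chunks({0: set(), 1: set(), 2: {0, 1}, 3: {2}}, n_robots=2)
```

A production scheduler would also model robot positions, travel time, and collision avoidance; this sketch only captures the dependency-ordering core.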
Title: Application of Munich Agile Concept for MBSE by Means of Automated Valet Parking Functions and the 3D Environment-Data
Authors: V. Salehi, Jihad Taha, Shirui Wang
DOI: 10.1115/detc2020-22040
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: In today's complex world of developing functions for Automated Driving Systems (ADS), new methods, tools, systems, and approaches are necessary for seamless application. Furthermore, it is important to apply new simulation and visualization techniques (Digital Twin) to the new ADS functions. Prototyping and testing these functions physically is not only a costly and complex effort but also encounters legal and bureaucratic obstacles, so simulation is of great importance. For that reason, this paper and the corresponding research project develop a consistent, traceable systems-engineering approach for autonomous driving functions and their environment, based on the Munich Agile Concept for Model-Based Systems Engineering (MBSE). MBSE rests on three core pillars: 1) methods/processes, 2) language, and 3) systems. The purpose of the newly developed Munich Agile Concept approach is to handle the complexity over the entire ADS feature development, from the system-requirement definition process up to the test and validation of the system. The Munich Agile Concept contains six levels: System Requirement, System Functional, System Architecture, System Validation, System Test, and System Usage. For defining the first three levels, the graphical Systems Modeling Language (SysML) has been applied.
Title: Multi-Context Generation in Virtual Reality Environments Using Deep Reinforcement Learning
Authors: James Cunningham, C. López, O. Ashour, Conrad S. Tucker
DOI: 10.1115/detc2020-22624
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: In this work, a Deep Reinforcement Learning (RL) approach is proposed for Procedural Content Generation (PCG) that seeks to automate the generation of multiple related virtual reality (VR) environments for enhanced personalized learning. This allows the user to be exposed to multiple virtual scenarios that demonstrate a consistent theme, which is especially valuable in an educational context. RL approaches to PCG offer the advantage of not requiring training data, as opposed to PCG approaches that employ supervised learning. This work advances the state of the art in RL-based PCG by demonstrating the ability to generate a diversity of contexts that teach the same underlying concept. A case study demonstrates the feasibility of the proposed RL-based PCG method using examples of probability distributions in both manufacturing-facility and grocery-store virtual environments. The method demonstrated in this paper has the potential to enable the automatic generation of a variety of virtual environments that are connected by a common concept or theme.
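As a toy, hypothetical illustration of RL-based PCG (not the paper's method, reward, or environments): tabular Q-learning fills a fixed number of scene slots with object types so that the final mix approaches a target probability distribution, echoing the probability-distribution case study. Everything below, from the slot model to the reward, is an assumption for illustration.

```python
import random

random.seed(0)

TARGET = [0.5, 0.3, 0.2]          # desired mix of object types in the scene
SLOTS, TYPES = 10, 3
EPISODES, ALPHA, GAMMA, EPS = 3000, 0.1, 0.95, 0.1

Q = {}  # maps ((slot, counts_so_far), action) -> value

def choose(state):
    """Epsilon-greedy action selection over object types."""
    if random.random() < EPS:
        return random.randrange(TYPES)
    qs = [Q.get((state, a), 0.0) for a in range(TYPES)]
    return qs.index(max(qs))

for _ in range(EPISODES):
    counts = (0, 0, 0)
    for slot in range(SLOTS):
        state = (slot, counts)
        action = choose(state)
        new = list(counts); new[action] += 1; new = tuple(new)
        done = slot == SLOTS - 1
        # Terminal reward: negative L1 distance to the target distribution.
        reward = -sum(abs(c / SLOTS - t) for c, t in zip(new, TARGET)) if done else 0.0
        nxt = 0.0 if done else max(
            Q.get(((slot + 1, new), b), 0.0) for b in range(TYPES))
        key = (state, action)
        Q[key] = Q.get(key, 0.0) + ALPHA * (reward + GAMMA * nxt - Q.get(key, 0.0))
        counts = new

# Greedy rollout: generate one scene layout from the learned table.
layout = (0, 0, 0)
for slot in range(SLOTS):
    qs = [Q.get(((slot, layout), a), 0.0) for a in range(TYPES)]
    g = list(layout); g[qs.index(max(qs))] += 1; layout = tuple(g)
```

The "multi-context" idea corresponds to reusing the same target concept (here, TARGET) while swapping the object types it is rendered with, e.g. machines versus grocery shelves.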
Title: A Low Cost Motion Analysis System Based on RGB Cameras to Support Ergonomic Risk Assessment in Real Workplaces
Authors: Alex Altieri, S. Ceccacci, A. Talipu, M. Mengoni
DOI: 10.1115/detc2020-22308
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: This paper introduces a motion-analysis system based on a network of common RGB cameras. The system measures the various angles considered in postural assessment, facilitating the evaluation of the ergonomic indices commonly used to determine the risk of musculoskeletal disorders for operators in manufacturing workplaces. To track operator postures during the performed tasks, the system exploits the multi-person keypoint detection library OpenPose. The proposed system has been validated in a real industrial case study on a washing-machine assembly line. Results suggest that the proposed system supports ergonomists in the risk assessment of musculoskeletal disorders through the OCRA index.
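Postural angles can be derived from 2D keypoints such as those OpenPose returns. The helper below is an illustrative sketch (the keypoint format and angle convention are assumptions, not the paper's implementation): the angle at a middle joint, e.g. the elbow angle from shoulder, elbow, and wrist keypoints.

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b, in degrees, formed by segments b->a and b->c.

    Each argument is an (x, y) image coordinate, e.g. OpenPose keypoints
    for shoulder (a), elbow (b), and wrist (c).
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

An ergonomic pipeline would evaluate such angles per frame, compare them against the thresholds of the chosen index (here OCRA), and accumulate time-in-posture statistics.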
Title: A Vocabulary of Function Features for Computer Aided Modeling of Thermal-Fluid Systems
Authors: Lakshmi N. A. Venkatanarasimhan, Ahmed Chowdhury, Chiradeep Sen
DOI: 10.1115/detc2020-22611
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: Features are higher-level modeling entities that encapsulate multiple lower-level entities and relations into one easily usable unit. It has been shown previously that having CAD-like features in function modeling can increase the ease of modeling, model consistency, and reasoning. This paper presents and illustrates a protocol for extracting function features that can be used frequently in computer-aided function modeling within a given technical domain, and it formally defines those features using graphical templates and grammar rules. A comprehensive set of six function features is thus created for the Thermal-Fluid Systems domain. The extensibility of the protocol is then illustrated by using it to extract two additional features from a different domain. The features thus produced, their variants, and their usage in modeling are also discussed.
Title: Inverse Identification of Non-Linear ODEs for the Dynamic Control of Material Testing Systems
Authors: E. Lunasin, A. Iliopoulos, J. Michopoulos, J. Steuben
DOI: 10.1115/detc2020-22434
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: The development of advanced robotic systems for material testing by the U.S. Naval Research Laboratory has expanded the set of requirements for mechatronic system control. This expansion lies beyond the limits of readily available control-system capabilities because of the high control-rate requirements. To establish this capability, the present work proposes a control mechanism based on online identification of the ordinary differential equations (ODEs) governing the coupled motion and deformation of the system. The inverse identification of the ODEs at hand is first described in its general form. Subsequently, numerical verification is demonstrated via synthetic tests on a compliant actuation system with varying levels of noise injected into the system.
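A minimal sketch of inverse ODE identification on synthetic data (the governing equation, candidate term library, and noise level below are illustrative assumptions, not the paper's systems): simulate a known nonlinear ODE, differentiate the measured signal numerically, and recover the coefficients by least squares over a library of candidate terms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth nonlinear ODE (a stand-in):  x' = a*x + b*x**3
a_true, b_true = -0.5, -1.0
dt, n = 1e-3, 20000
x = np.empty(n)
x[0] = 1.5
for k in range(n - 1):                      # forward-Euler "experiment"
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * x[k] ** 3)

# Numerical derivative of the signal, plus injected measurement noise,
# mirroring the varying-noise synthetic tests the abstract describes.
xdot = np.gradient(x, dt) + 0.001 * rng.normal(size=n)

# Candidate library {x, x^2, x^3}; least-squares fit of the coefficients.
library = np.column_stack([x, x ** 2, x ** 3])
coef, *_ = np.linalg.lstsq(library, xdot, rcond=None)
```

The fitted coefficients should be close to (a_true, 0, b_true); the identified ODE can then be integrated forward inside the controller in place of the physical model.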
Title: Virtual Reality to Simulate an Inflatable Modular Hydroponics Greenhouse on Mars
Authors: F. Bruno, A. Ceriani, Zhang Zhan, G. Caruso, A. Mastro
DOI: 10.1115/detc2020-22326
Published: 2020-08-17, Volume 9: 40th Computers and Information in Engineering Conference (CIE)

Abstract: A human mission to Mars has long been advocated. As scientific research brings mankind closer each year to establishing human habitats on Mars, the question of how astronauts can sustain themselves while away from the blue planet becomes crucial. The project presented in this paper aims at designing and developing a Virtual Reality (VR) simulation of an inflatable modular greenhouse, featuring a system that manages the growth of the plants and helps the astronauts control and monitor the whole greenhouse more extensively. VR technology allows simulating an immersive environment of a Mars habitat, highlighting its greenhouse, and overcomes the limitations of physical locations. Wearing the Oculus Rift head-mounted display (HMD) while holding Oculus Rift Touch controllers, astronauts or Mars-exploration enthusiasts can experience the highly interactive and realistic environment. The goal is to provide training and evaluation simulations of astronauts' basic tasks and performance in the greenhouse on Mars while testing the hydroponic growing method equipped with a smart growth control and monitoring system.