In this study, we present a method for controlling the surface shape of a transparent liquid so that it generates caustics matching the luminance distribution of an arbitrary image. Our method consists of a caustics design process and a liquid surface control process. In the caustics design process, an arbitrary grayscale image is taken as input, and the surface shape of the transparent liquid that generates the corresponding caustics is calculated from the luminance distribution of that image. We solve a Poisson equation to obtain a continuous liquid surface. In the liquid surface control process, a driving force based on the current height field and the target height field is introduced as an external force. This force configures the current liquid surface into the target shape computed in the design process, and the resulting changes in the water surface and caustics are verified using fluid simulation.
Kenta Suzuki, Makoto Fujisawa, M. Mikawa, "Simulation Controlling Method for Generating Desired Water Caustics," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00034 (https://doi.org/10.1109/CW.2019.00034)
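The surface reconstruction step can be sketched as a discrete Poisson solve: given a source term f derived from the target luminance distribution, relax the height field h toward ∇²h = f with Jacobi iterations. This is a generic sketch under assumed zero Dirichlet boundaries, not the authors' exact formulation; the mapping from image luminance to f is a stand-in.

```python
import numpy as np

def solve_poisson(f, iterations=5000):
    """Jacobi relaxation for the discrete Poisson equation
    laplacian(h) = f on a height-field grid, zero Dirichlet boundary."""
    h = np.zeros_like(f, dtype=float)
    for _ in range(iterations):
        # Five-point stencil: 4*h[i,j] = sum of neighbors - f[i,j]
        h[1:-1, 1:-1] = 0.25 * (
            h[:-2, 1:-1] + h[2:, 1:-1] +
            h[1:-1, :-2] + h[1:-1, 2:] -
            f[1:-1, 1:-1]
        )
    return h
```

At convergence the discrete Laplacian of the returned height field reproduces f in the interior, giving a continuous surface consistent with the prescribed source term.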
K. Helin, J. Karjalainen, Pauline Kiernan, M. Wolff, David Martinez Oliveira
This paper introduces a Proof-of-Concept (PoC) Mixed Reality (MR) system, called MobiPV4Hololens, that supports an astronaut's manual work. It was developed in the European Space Agency (ESA) project "MobiPV4Hololens - Prototype a Media Helmet for MobiPV Implemented Using Microsoft (MS) HoloLens". The MS HoloLens mixed reality platform was integrated as a hands-free user interface to the ESA Mobile Procedure Viewer system, MobiPV. Based on the user evaluation, most users found MobiPV4Hololens beneficial in supporting procedure execution.
"Mixed Reality User Interface for Astronauts Procedure Viewer," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/cw.2019.00011 (https://doi.org/10.1109/cw.2019.00011)
Hand-drawn sketches are a powerful modality for querying 3D shape models. However, specifying a detailed 3D shape with a sketch on the first try, without a reference (i.e., a 3D model or real object), is difficult. In this paper, we aim at a sketch-based 3D shape retrieval system that tolerates coarsely drawn or incomplete sketches having a small number of strokes. Such a system could be used to start a sketch-retrieve-refine interactive loop that leads to a 3D shape with the required details. The proposed algorithm embeds sketches and 3D shape models into a common feature space by using deep neural networks. To handle coarse or incomplete sketches, each sketch, which is a sequence of strokes, is augmented by removing strokes when training a pair of DNNs to extract sketch features. A sketch feature is a fusion of an image-based feature extracted by a convolutional neural network (CNN) and a 2D point-sequence feature extracted by a recurrent neural network (RNN). The joint embedding of 3D shape features and sketch features is learned by using a triplet loss. The proposed method is evaluated experimentally using (simulated) incomplete sketches created by removing part of their strokes. The experiments show that stroke-removal augmentation significantly improves retrieval accuracy when querying with such incomplete sketches.
Shutaro Kuwabara, Ryutarou Ohbuchi, T. Furuya, "Query by Partially-Drawn Sketches for 3D Shape Retrieval," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00020 (https://doi.org/10.1109/CW.2019.00020)
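The two ingredients described above, stroke-removal augmentation and the triplet loss, can be sketched in a few lines. `drop_strokes` and `triplet_loss` are hypothetical names introduced here for illustration; real training would operate on DNN embeddings rather than raw vectors.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedded feature vectors:
    pull the positive closer than the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def drop_strokes(strokes, keep_ratio, rng):
    """Stroke-removal augmentation: keep a random subset of the
    sketch's strokes (each stroke is an array of 2D points)."""
    n_keep = max(1, int(round(keep_ratio * len(strokes))))
    idx = rng.choice(len(strokes), size=n_keep, replace=False)
    return [strokes[i] for i in sorted(idx)]
```

During training, sketches augmented by `drop_strokes` would be fed to the CNN/RNN feature extractors, and `triplet_loss` applied to (sketch, matching shape, non-matching shape) embedding triplets.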
We propose a method that allows a user to create realistically folded surfaces simply by sketching curves directly on the surface of a 3D cloth model. With our method, the user can quickly obtain a one-shot image of a realistic cloth fold simulation result, even with little or no knowledge of apparel cutting structure or the physical properties of the fabric. The method consists of a simulation step that allows the user to make a preliminary adjustment to the cloth model, a reconstruction step that designs the folded surface from sketches on the surface, and a refinement step that modifies the shape of the folded surface.
Yufei Zheng, Hatsu Shi, S. Saito, "Realistic Folded Surface Modeling from Sketching," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00033 (https://doi.org/10.1109/CW.2019.00033)
Li Zhu, Chongwei Su, Gaochao Cui, Changle Zhou, Jianhai Zhang, Wanzeng Kong
Motor imagery (MI) is a spontaneously controlled brain-computer interface (BCI) paradigm, close to the concept of 'mind control'. Idle-state detection is an important problem in constructing a robust MI-BCI system, since the system must tell whether the subject is performing an MI task, and the idle state covers many diverse cases. Here, EEG-based multi-user BCI refers to two or more subjects engaging in a coordinated task while their EEG signals are simultaneously recorded. The objective of this paper is to explore the idle-detection performance of multi-user MI-BCI based on CSP (common spatial pattern) and brain-network features. We propose several strategies for cross-brain feature fusion. Results show that: 1) with CSP features, cross-brain classification accuracy outperforms the single-brain CSP feature across different strategies; 2) with brain-network features, concatenating the features of the paired subjects outperforms the single brain-network, while the inter-brain network is lower than the single subject; 3) the alpha frequency band performs better than other bands. Multi-user MI-BCI is thus a potential way to improve idle-state detection accuracy.
"Idle-State Detection in Multi-user Motor Imagery Brain Computer Interface with Cross-Brain CSP and Hyper-Brain-Network," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00045 (https://doi.org/10.1109/CW.2019.00045)
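A textbook CSP computation (whiten the composite covariance, then eigendecompose the whitened class-specific covariance) gives a feel for the feature extraction described above; this is a generic single-pair sketch, not the paper's cross-brain variant.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_filters=1):
    """Common Spatial Patterns: spatial filters that maximize the
    variance ratio between two classes of multichannel EEG."""
    # Whiten the composite covariance (Ca + Cb)
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    P = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    # Eigendecompose the whitened class-a covariance
    evals_a, evecs_a = np.linalg.eigh(P @ cov_a @ P.T)
    order = np.argsort(evals_a)[::-1]          # most class-a variance first
    W = evecs_a[:, order].T @ P                # full filter bank, one per row
    # Keep the most discriminative filters from both ends of the spectrum
    return np.vstack([W[:n_filters], W[-n_filters:]])
```

Log-variances of the filtered signals would then serve as the CSP features fed to the classifier.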
S. Nõmm, Tanel Kossas, A. Toomela, Kadri Medijainen, P. Taba
Fine motor tests have been a workhorse in neurology, psychology, and psychiatry for nearly one hundred years. In spite of their simplicity (just paper and pen are required to conduct a test), their results have proven reliable and are accepted by the medical community. Nevertheless, the test results were, and still are, assessed visually by the practitioner, with no measurable numeric parameters. Such a setting inevitably introduces a subjective component to the assessment. The introduction of digitizing tablets and later tablet computers has opened new frontiers in fine motor analysis. Nowadays, a tablet computer equipped with a stylus pen allows collecting kinematic and pressure parameters that describe aspects of the test invisible to the naked eye. In spite of recent achievements in the digitisation of fine motor tests, very little attention has been paid to the parameters of the tests themselves. In this paper, the length of the alternating series test is investigated with respect to the accuracy of classifiers used to support diagnosis of Parkinson's disease.
"Determining Necessary Length of the Alternating Series Test for Parkinson's Disease Modelling," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00050 (https://doi.org/10.1109/CW.2019.00050)
Xi Zhao, K. Go, K. Kashiwagi, M. Toyoura, Xiaoyang Mao, I. Fujishiro
Visual field defect (VFD) refers to a symptom in which a patient loses part of his or her field of view (FoV). Medical therapy can halt the progression of VFD, but complete recovery is impossible. In this paper, we propose a computational method for alleviating the restricted FoV with an optical see-through head-mounted display (OST-HMD), where an overview scene captured by the installed camera is overlaid on the persisting FoV. Since the overview window occludes the real-world scene, there is a trade-off between the augmented contextual information and the local information it screens off. We hypothesized that this trade-off can be resolved by appropriately choosing the size of the overview window and its displacement from the center of the unimpaired FoV. We therefore conducted an empirical evaluation through a Whac-A-Mole type of task with ten VFD-imitative subjects, in which three sizes of overview window with a fixed aspect ratio and seven positions in terms of elevation and azimuth were combined on an OST-HMD to find the best size and position. The results showed, with statistical significance, that for left-sided homonymous VFD-imitative subjects, task performance was better when the medium-sized overview window was placed in the lower-right position. The obtained result can justify default settings for the proposed VFD alleviation method.
"Computational Alleviation of Homonymous Visual Field Defect with OST-HMD: The Effect of Size and Position of Overlaid Overview Window," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00036 (https://doi.org/10.1109/CW.2019.00036)
The Van der Waals equation is an equation of state that generalizes the ideal gas law. It involves two characteristic curves, called the binodal and spinodal curves. They are usually reconstructed through standard polynomial fitting, but the resulting fitting models are strongly limited in several ways. In this paper, we address this issue through least-squares approximation of a set of 2D points by free-form Bézier curves. This requires performing data parameterization in addition to computing the poles of the curves, which is achieved by applying a powerful swarm intelligence method called the firefly algorithm. Our method is applied to real data of a gas. Our results show that the method can reconstruct the characteristic curves with good accuracy, and comparative work shows that our approach outperforms two state-of-the-art methods for this example.
Almudena Campuzano, A. Iglesias, A. Gálvez, "Applying Firefly Algorithm to Data Fitting for the Van der Waals Equation of State with Bézier Curves," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00042 (https://doi.org/10.1109/CW.2019.00042)
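For a fixed data parameterization, the least-squares step reduces to a linear solve in the Bernstein basis. The chord-length parameterization below is a placeholder for the parameterization that the firefly algorithm would optimize; `fit_bezier` is a name introduced here for illustration.

```python
import numpy as np
from math import comb

def bernstein_matrix(t, degree):
    """Design matrix of Bernstein basis polynomials evaluated at t."""
    t = np.asarray(t)
    return np.stack([comb(degree, j) * t**j * (1 - t)**(degree - j)
                     for j in range(degree + 1)], axis=1)

def fit_bezier(points, degree=3):
    """Least-squares Bezier poles (control points) for 2D data points,
    using chord-length parameterization as a stand-in for the
    firefly-optimized parameterization of the paper."""
    points = np.asarray(points, dtype=float)
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = chord / chord[-1]
    B = bernstein_matrix(t, degree)
    poles, *_ = np.linalg.lstsq(B, points, rcond=None)
    return poles, t
```

The swarm-based method would wrap this solve in an outer loop, scoring candidate parameterizations t by the resulting least-squares residual.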
Augmented reality applications are becoming increasingly important in our daily life and workflows. The possibility of showing additional virtual content in a camera stream is helpful for many use cases, such as room planning, where such applications can offer simple and intuitive visualization. However, augmented reality applications can suffer from interference by real objects that disrupt the user experience. In recent years, there has been research on removing real objects from camera streams by applying diminished reality techniques. Current approaches are generally limited to flat objects or video streams with little camera movement, or can only remove objects in front of simple, mostly planar backgrounds. In this paper, we show a robust and efficient way to visually remove a selected 3D object from the camera stream. The removal is based on a dense 3D reconstruction of the physical environment stored in a voxel grid that can be created and extended on the fly. An undesired object can thereby be replaced by a background rendered from the reconstruction, allowing for more complex environments than previous approaches. Holes remaining after the removal of the object are filled using an inpainting approach. Finally, we apply color correction to obtain a seamless transition between the virtual content and the camera image.
Christian Kunert, Tobias Schwandt, W. Broll, "An Efficient Diminished Reality Approach Using Real-Time Surface Reconstruction," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00010 (https://doi.org/10.1109/CW.2019.00010)
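The color-correction step is not specified in detail in the abstract; one common, simple choice (an assumption here, not necessarily the authors' method) is Reinhard-style per-channel statistics matching between the rendered background patch and the surrounding camera pixels.

```python
import numpy as np

def match_color_stats(patch, reference, eps=1e-6):
    """Shift and scale each channel of `patch` so its per-channel mean
    and standard deviation match those of `reference` (a simple
    statistics-transfer stand-in for the paper's color correction)."""
    patch = patch.astype(float)
    ref = reference.astype(float)
    mu_p = patch.reshape(-1, patch.shape[-1]).mean(axis=0)
    sd_p = patch.reshape(-1, patch.shape[-1]).std(axis=0)
    mu_r = ref.reshape(-1, ref.shape[-1]).mean(axis=0)
    sd_r = ref.reshape(-1, ref.shape[-1]).std(axis=0)
    out = (patch - mu_p) * (sd_r / (sd_p + eps)) + mu_r
    return np.clip(out, 0, 255)
```

In a diminished-reality pipeline, `patch` would be the region rendered from the voxel-grid reconstruction and `reference` a ring of camera pixels around it, so the filled area blends with the live image.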
This paper presents the development of a mobile application that enables the creation and display of holographic objects on mobile devices. The project demonstrates how the functionalities are implemented, and testing was conducted to ensure the accuracy of the creation and exhibition of the holographic objects. During app testing, two approaches were compared under two different lighting conditions. Based on the results of the two approaches, users are able to create their holographic objects with minimal lighting changes.
Jia Jun Gan, Owen Noel Newton Fernando, "Augmented Reality Hologram," 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00064 (https://doi.org/10.1109/CW.2019.00064)