"3D model making patterns for active architectural visualization: guidelines for graphic designers cooperating with software developers" — Dominik Pielak, Mateusz Kowalski, J. Lebiedź. Proceedings of the 23rd International ACM Conference on 3D Web Technology, 2018-06-20. DOI: 10.1145/3208806.3211217
The paper presents guidelines for cooperation between software developers and graphic designers creating urban visualizations.
"MoST: a 3D web architectural style for hybrid model data" — J. Behr, Max Limper, Timo Sturm. DOI: 10.1145/3208806.3208823
Within this paper, we present a novel 3D web architectural style that allows building format-agnostic 3D model graphs on the basis of ReSTful principles. We generalize the abstract definitions of RFC 2077 and allow composing models and model fractions while transferring the "Media Selection URI" to a new domain. We present a best-practice subset of HTTP/HTTPS and ReST to model authorization, data change, and content-format negotiation within a single efficient request. This allows implementations to handle massive graphs with hybrid format configurations on the very efficient HTTP transport layer, without further application intervention. The system should be attractive to platform and service providers aiming to increase their ability to build 3D data application mashups with a much higher level of interoperability. We also hope to inspire standardization organizations to link generic "model/*" formats to RFC 2077-defined semantics such as "compose".
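The content-format negotiation the abstract mentions rests on standard HTTP Accept-header semantics. The following is an illustrative sketch of our own (not the MoST implementation): a server-side routine that picks the best available "model/*" representation from a simplified Accept header with optional q-values.

```python
# Hypothetical sketch of server-side content negotiation for 3D model formats.
# Parses a simplified Accept header and returns the highest-weighted media
# type the server can actually serve; 'model/*' matches any available format.
def negotiate_model_format(accept_header, available):
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media = fields[0].strip()
        q = 1.0  # default quality factor per HTTP semantics
        for f in fields[1:]:
            name, _, value = f.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((q, media))
    # Try media ranges in descending preference order.
    for q, media in sorted(prefs, reverse=True):
        if q <= 0:
            continue  # q=0 means "not acceptable"
        if media == "model/*":
            return available[0]
        if media in available:
            return media
    return None  # no acceptable representation: respond 406
```

A real implementation would also handle wildcards like `*/*` and parameterized media types; this sketch only shows the selection principle.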
"Defeating lag in network-distributed physics simulations: an architecture supporting declarative network physics representation protocols" — Loren Peitso, D. Brutzman. DOI: 10.1145/3208806.3208826
Current shared worlds for games, simulations, AR, and VR rely on "good-enough," intuitively correct depictions of shared world state. This is inadequate for producing repeatable, verifiable results for decision-making, safety-related or equipment-in-the-loop simulations, or distributed multi-user augmented reality. These require world representations that are physically correct to a designer-defined level of fidelity and produce repeatable, verifiable results. A network-distributed dynamic simulation architecture, as illustrated in Figure 1, is presented, with consistent distributed state and a selective level of physics-based fidelity, with known bounds on transient durations when state diverges due to external input. Coherent dynamic state has previously been considered impossible.
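One common ingredient of consistent distributed state, which the abstract's goal of repeatable results implies, is deterministic simulation: replicas that replay the same ordered input log with the same integrator reach identical state without streaming positions. This is our own minimal sketch of that principle, not the paper's architecture.

```python
# Hypothetical sketch: deterministic replay for consistent distributed state.
def step(state, dt, accel):
    """One deterministic semi-implicit Euler step on (position, velocity)."""
    pos, vel = state
    vel = vel + accel * dt
    pos = pos + vel * dt
    return (pos, vel)

def simulate(initial, inputs, dt):
    """Replay an ordered log of external inputs (accelerations).

    Every replica replaying the same log from the same initial state
    computes exactly the same final state, so replicas stay coherent
    as long as they agree on the input ordering.
    """
    state = initial
    for accel in inputs:
        state = step(state, dt, accel)
    return state
```

In practice, bit-identical replay across machines additionally requires care with floating-point settings; the sketch only illustrates the replay idea.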
"A virtual car showroom" — Adrian Nowak, J. Flotyński. DOI: 10.1145/3208806.3208832
In this poster, we present a virtual car showroom. The use of virtual and augmented reality (VR/AR) is becoming increasingly popular in various application domains, including prototyping, marketing, and merchandising. In the car industry, VR/AR systems enable rapid creation and evaluation of virtual prototypes by domain specialists as well as potential customers. Our showroom is implemented using the Unity game engine, an HMD, and a gesture-tracking system; the application enables immersive presentation of, and interaction with, 3D cars in a virtual environment.
"PSNC advanced multimedia and visualization infrastructures, services and applications" — K. Kurowski, M. Glowiak, Bogdan Ludwiczak, M. Strozyk, M. Ciznicki, A. Binczewski, M. Alvarez-Mesa. DOI: 10.1145/3208806.3229053
The Poznan Supercomputing and Networking Center (PSNC) offers advanced visualisation and multimedia infrastructures as a set of dedicated laboratories for conducting innovative research and development projects involving both academia and industry. In this short overview we present the existing facilities located at the PSNC campus in Poznan, Poland, together with short descriptions of example applications and networked services that have recently been developed.
"Direct manipulation of blendshapes using a sketch-based interface" — O. Cetinaslan, V. Orvalho. DOI: 10.1145/3208806.3208811
We introduce a method that localizes the direct manipulation of blendshape models for facial animation with a customized sketch-based interface. Direct manipulation methods address the cumbersome weight-editing process of traditional tools with a practical "pin-and-drag" operation directly on the 3D facial model. However, most direct manipulation methods have a global deformation impact, which leads to unintuitive and unexpected results. To this end, we propose a new way to localize direct manipulation, using geodesic circles to confine edits to the local geometry. Inspired by artists' brush painting on canvas, we additionally introduce a sketch-based interface as an application that provides direct manipulation and produces expressive facial poses efficiently and intuitively. Our method allows artists to simply sketch directly onto the 3D facial model and automatically performs the manipulation until the desired facial pose is obtained. We show that localized blendshape direct manipulation has the potential to reduce the time-consuming blendshape editing process to an easy freehand stroke drawing.
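Confining an edit with a geodesic circle can be pictured as weighting each vertex by a smooth falloff of its geodesic distance from the manipulated point. The sketch below is our own illustration under that assumption (it is not the authors' code, and it presumes per-vertex geodesic distances have already been computed, e.g. by a shortest-path method on the mesh).

```python
import math

# Hypothetical sketch: smooth local falloff weights for a blendshape edit.
def local_falloff_weights(geodesic_distances, radius):
    """Map per-vertex geodesic distances to [0, 1] blend weights.

    Cosine falloff: weight 1.0 at the manipulated point, decaying smoothly
    to 0.0 at the geodesic radius, so vertices outside the geodesic circle
    are unaffected and the edit stays local.
    """
    weights = []
    for d in geodesic_distances:
        if d >= radius:
            weights.append(0.0)
        else:
            weights.append(0.5 * (1.0 + math.cos(math.pi * d / radius)))
    return weights
```

Multiplying each vertex's displacement by its weight yields a deformation that vanishes outside the circle, in contrast to the global impact of unconstrained direct manipulation.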
"The value of 3D models and immersive technology in planning urban density" — Nicholas F. Polys, Cecile Newcomb, T. Schenk, Thomas S. Skuzinski, D. Dunay. DOI: 10.1145/3208806.3208824
This project explores the difficulties of increasing density in a college town struggling with how to plan for population growth. It presents a concept for a section of Downtown Blacksburg, Virginia that meets the varied planning goals of the community. It also experiments with an innovative way of presenting plans with 3D computer models, prompting discussion about the vision by inviting a group of people to experience 3D models of the concept in an immersive display. A select group of participants completed surveys, viewed presentations of 3D computer models of conceptual developments in Blacksburg, and discussed their opinions and thoughts about the models and proposed ideas. The findings suggest that 3D modeling can be a better planning tool for helping decision-makers understand density and quality design than typical planning tools based on 2D presentations.
"Query-based composition of animations for 3D web applications" — J. Flotyński, Marcin Krzyszkowski, K. Walczak. DOI: 10.1145/3208806.3208828
In this paper, we present a pipeline for animated 3D web content creation based on the semantic composition of 3D content activities into more complex animations. The use of knowledge representation aligns the approach with current trends in web development and enables modeling animations using different concepts, at arbitrary abstraction levels, which makes the approach intelligible to domain experts without technical skills. Within the pipeline, we use the OpenStage 2 motion capture system and the Unity game engine.
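The core idea of composing primitive activities into more complex animations can be sketched in a few lines. The representation below (activities as name/duration pairs laid out on a timeline) is our own simplification for illustration, not the paper's semantic, query-based representation.

```python
# Hypothetical sketch: sequence primitive animation activities on a timeline.
def compose_sequence(activities):
    """Compose (name, duration) activities into a sequential animation.

    Returns a timeline of (start_time, name, duration) tuples; the composed
    animation is itself an activity whose duration is the sum of its parts,
    so compositions can be nested into more complex animations.
    """
    timeline, t = [], 0.0
    for name, duration in activities:
        timeline.append((t, name, duration))
        t += duration
    return timeline
```

A semantic approach like the paper's would select and order the activities by querying a knowledge base rather than from a hand-written list; the composition step itself is analogous.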
"Mixed reality tool for training on pressure immobilization treatment of snake bite envenomation" — S. Smilevski, G. Thirunavukkarasu, M. Seyedmahmoudian, S. McMillan, B. Horan. DOI: 10.1145/3208806.3208827
Snakebite is one of the most common and catastrophic environmental injuries, yet its importance to public health is often overlooked. The rich protein and peptide toxin content of snake venom makes snakebite envenomation a clinically challenging and scientifically compelling problem. In most cases, the severity of envenomation depends mainly on the quality of first aid or snakebite management given to the victim before hospital treatment. In countries with field management strategies such as the pressure immobilization technique (PIT), including Australia, the number of fatalities due to snakebites is considerably lower than in countries without such precautionary measures. PIT involves wrapping a bandage or crepe over the bitten area at a standard pressure of 55--70 mm Hg for lower extremities and 40--70 mm Hg for upper extremities; this delays venom absorption and spread inside the body. However, PIT has a noticeable failure rate due to its sensitivity to the pressure range that must be maintained when wrapping the bandage around the bitten area. Off-the-shelf bandages with visual markers aid in PIT training, but human interpretation of these markers differs, which causes discrepancies in applying the correct pressure. In this paper, a mixed-reality-based virtual reality (VR) training tool for PIT is proposed. The VR application trains individuals to self-validate the correctness of the pressure applied to the bandage. It provides a passive haptic response and visual feedback on an augmented live camera stream to indicate whether the pressure is within range; the visual feedback is obtained using a feature extraction technique, which adds novelty to the proposed research. Feedback suggests that the VR-based training tool will help individuals obtain real-time feedback on the correctness of the bandage pressure and better understand the PIT process.
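The feedback rule the abstract describes reduces to a range check against the stated PIT targets (55--70 mm Hg for lower extremities, 40--70 mm Hg for upper extremities). The sketch below illustrates that rule; the function and labels are our own, not the tool's API.

```python
# Hypothetical sketch of the PIT pressure-feedback rule from the abstract.
# Target bandage pressure ranges in mm Hg, per extremity.
PIT_RANGES_MMHG = {"upper": (40, 70), "lower": (55, 70)}

def pressure_feedback(limb, pressure_mmhg):
    """Classify a measured bandage pressure for the given extremity."""
    low, high = PIT_RANGES_MMHG[limb]
    if pressure_mmhg < low:
        return "too loose"
    if pressure_mmhg > high:
        return "too tight"
    return "within range"
```

In the actual tool this classification drives the visual feedback overlaid on the augmented camera stream; pressures themselves are inferred from the visual markers via feature extraction.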
"Dynamic annotations on an interactive web-based 360° video player" — Teresa Matos, Rui Nóbrega, R. Rodrigues, Marisa Pinheiro. DOI: 10.1145/3208806.3208818
The use of 360° videos has been increasing steadily in the 2010s, as content creators and users search for more immersive experiences. The freedom to choose where to look during the video may hinder the overall experience instead of enhancing it, as there is no guarantee that the user will focus on relevant sections of the scene. Visual annotations superimposed on the video, such as text boxes or arrow icons, can help guide the user through the narrative while maintaining freedom of movement. This paper presents a web-based immersive visualizer for 360° videos that contain dynamic media annotations rendered in real time. A set of annotations was created to provide information or guide the user to points of interest. The visualizer can be used on a computer, with a keyboard and mouse or an HTC Vive, and on mobile devices with Cardboard VR headsets to experience the video in virtual reality, which is made possible by the WebVR API. The visualizer was evaluated through usability tests to analyze the impact of different annotation techniques on the user experience. The results demonstrate that annotations can assist in guiding the user during the video, and that careful design is imperative so that they are not intrusive or distracting for viewers.
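Anchoring an annotation in a 360° video amounts to converting its direction in the scene (yaw around the vertical axis, pitch above the horizon) to a point on the sphere surrounding the camera; the renderer then places the text box or arrow icon at that point each frame. The sketch below is our own illustration of that mapping, not the paper's code.

```python
import math

# Hypothetical sketch: place a 360°-video annotation on the viewing sphere.
def annotation_position(yaw_deg, pitch_deg, radius=1.0):
    """Convert (yaw, pitch) in degrees to (x, y, z) on a sphere.

    yaw = 0 looks along +z; pitch = 0 is on the horizon, +90 straight up.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

Axis conventions differ between renderers (for instance, some use -z as forward), so the signs would be adapted to the engine in use.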