PhysIK: Physically Plausible and Intuitive Keyframing
A. Rabbani, P. Kry. Proceedings of Graphics Interface 2016, pp. 153-161. DOI: 10.20380/GI2016.19

We present an approach for animating characters using inverse kinematics (IK) handles that allows for intuitive keyframing of physically plausible motion. Specifically, we extend traditional IK and keyframing to include center-of-mass (CM) and inertia handles along with physically based templates to help an animator produce trajectories that respect physics during dynamic activities, such as swinging, stepping, and jumping. Animators can easily control both posture and physics-based quantities (inertia shape, CM position, and linear momentum) when building motions from scratch, but also have complete freedom to create exaggerated or impossible motions. We present results for a variety of planar characters of different morphologies.
{"title":"PhysIK: Physically Plausible and Intuitive Keyframing","authors":"A. Rabbani, P. Kry","doi":"10.20380/GI2016.19","DOIUrl":"https://doi.org/10.20380/GI2016.19","url":null,"abstract":"We present an approach for animating characters using inverse kinematics (IK) handles that allows for intuitive keyframing of physically plausible motion. Specifically, we extend traditional IK and keyframing to include center-of-mass (CM) and inertia handles along with physically based templates to help an animator produce trajectories that respect physics during dynamic activities, such as swinging, stepping, and jumping. Animators can easily control both posture and physics-based quantities (inertia shape, CM position, and linear momentum) when building motions from scratch, but also have complete freedom to create exaggerated or impossible motions. We present results for a variety of planar characters of different morphologies.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"9 1","pages":"153-161"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89659070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Revectorization-Based Shadow Mapping
Márcio C. F. Macedo, A. Apolinario. Proceedings of Graphics Interface 2016, pp. 75-83. DOI: 10.20380/GI2016.10

Real-time rendering of high-quality, anti-aliased shadows is a challenging problem in shadow mapping. Filtering the shadow map reduces aliasing, but artifacts remain visible for low-resolution shadow maps or small kernel sizes. Moreover, existing techniques suffer from light-leaking artifacts. Shadow silhouette recovery reduces perspective aliasing at the cost of a large memory footprint and high computational overhead. In this paper, we reduce aliasing with revectorization-based shadow mapping. To effectively reduce perspective aliasing, we revectorize shadow boundaries based on their discontinuity directions. Then, we take advantage of the discontinuity space to filter the shadow silhouettes, further suppressing the remaining artifacts. To control the filter kernel size, we incorporate percentage-closer filtering into the algorithm. This enables us to reduce jagged shadow boundaries, to simulate penumbra, and to provide high-quality screen-space anti-aliasing. Compared to previous techniques, shadow revectorization produces fewer artifacts, consumes less memory, and offers real-time performance. The results show that our solution can be used in games and other applications in which real-time, high-quality shadows are desirable.
{"title":"Revectorization-Based Shadow Mapping","authors":"Márcio C. F. Macedo, A. Apolinario","doi":"10.20380/GI2016.10","DOIUrl":"https://doi.org/10.20380/GI2016.10","url":null,"abstract":"Real-time rendering of high-quality, anti-aliased shadows is a challenging problem in shadow mapping. Filtering the shadow map reduces aliasing, but artifacts are still visible for low-resolution shadow maps or small kernel sizes. Moreover, the existing techniques suffer from light leaking artifacts. Shadow silhouette recovery reduces perspective aliasing at the cost of large memory footprint and high computational overhead for the shadow mapping. In this paper, we reduce aliasing with the revectorizationbased shadow mapping. To effectively reduce the perspective aliasing, we revectorize shadow boundaries based on their discontinuity directions. Then, we take advantage of the discontinuity space to filter the shadow silhouettes, further suppressing the remaining artifacts. To control the filter kernel size, we incorporate percentage-closer filtering into the algorithm. This enables us to reduce jagged shadow boundaries, to simulate penumbra and to provide high-quality screen-space anti-aliasing. Compared to previous techniques, we show that shadow revectorization produces less artifacts, consumes less memory and offers real-time performance. The results show that our solution can be used in games and other applications in which real-time, high-quality shadows are desirable.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"5 1","pages":"75-83"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82330697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Usability and Performance of Mouse-based Rotation Controllers
S. Rybicki, B. DeRenzi, J. Gain. Proceedings of Graphics Interface 2016, pp. 93-100. DOI: 10.20380/GI2016.12

Rotation controllers are used to interactively orient models in many important applications in 3D computer graphics and visualisation. Unfortunately, previous studies do not provide clear guidance on which rotation controller to use in a particular situation, either because they assess performance measures and rotation tasks in relative isolation or because they do not achieve statistical significance. In this paper, we present the results of a broad quantitative user experiment (n = 46) comparing the three most prevalent rotation controllers (Arcball, Two-Axis Valuator, and Discrete Sliders) on both speed and accuracy across two classes of tasks (orientation matching and inspection). While we found no significant differences between Arcball and Two-Axis Valuator, Discrete Sliders were found to be significantly more accurate for simple orienting tasks (a medium to large effect), but slower across all tasks (a small to medium effect, median approximately two seconds). Thus, a Discrete Sliders controller is better suited to situations where fine-grained accuracy is valued over speed; in other instances, e.g., inspection, either an Arcball or Two-Axis Valuator is appropriate.
{"title":"Usability and Performance of Mouse-based Rotation Controllers","authors":"S. Rybicki, B. DeRenzi, J. Gain","doi":"10.20380/GI2016.12","DOIUrl":"https://doi.org/10.20380/GI2016.12","url":null,"abstract":"Rotation controllers are used to interactively orient models in many important applications in 3D computer graphics and visualisation. Unfortunately, previous studies do not provide clear guidance on which rotation controller to use in a particular situation, either because they assess performance measures and rotation tasks in relative isolation or because they do not achieve statistical significance. \u0000 \u0000In this paper, we present the results of a broad quantitative user experiment (n = 46) to compare the three most prevalent rotation controllers (Arcball, Two-Axis Valuator, and Discrete Sliders) according to both speed and accuracy across two classes of tasks (orientation matching and inspection). While we found no significant differences between Arcball and Two-Axis Valuator, Discrete Sliders were found to be significantly more accurate for simple orienting tasks (a medium to large effect), but slower across all tasks (a small to medium effect, median approximately two seconds). Thus, a Discrete Sliders controller is better suited to situations where finegrained accuracy is valued over speed and in other instances, e.g., inspection, either an Arcball or Two-Axis Valuator is appropriate.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"88 1","pages":"93-100"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90664172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Feedforward Guides for Performing Bend Gestures on Deformable Prototypes
Farshad Daliri, A. Girouard. Proceedings of Graphics Interface 2016, pp. 209-216. DOI: 10.20380/GI2016.27

As a novel input technique for deformable devices, bend gestures can prove difficult for users to perform correctly, as they have many characteristics to master. In this paper, we present three visual bend guides that use feedforward and feedback mechanisms to lead users to correctly perform bend gestures on a flexible device. We conducted an experiment to evaluate the efficiency of, and preference for, our visual feedback designs. Our results show that users performed faster when the visual guidance appeared at the location where the bend gesture was to be performed rather than at a fixed location on the screen. While feedforward improved users' performance, feedback had a negative effect. We propose a set of design guidelines for visual guidance systems for bend gestures.
{"title":"Visual Feedforward Guides for Performing Bend Gestures on Deformable Prototypes","authors":"Farshad Daliri, A. Girouard","doi":"10.20380/GI2016.27","DOIUrl":"https://doi.org/10.20380/GI2016.27","url":null,"abstract":"As a novel input techniques for deformable devices, bend gestures can prove difficult for users to perform correctly as they have many characteristics to master. In this paper, we present three bend visual guides which use feedforward and feedback mechanisms to lead users to correctly perform bend gestures on a flexible device. We conducted an experiment to evaluate the efficiency and preference for our visual feedback designs. Our results show that users performed faster when the visual guidance appeared at the location where the bend gesture is to be performed instead of always at a fixed location on the screen. While feedforward improved users' performance, using feedback had a negative effect. We propose a set of design guidelines for visual systems for bend gestures.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"26 1","pages":"209-216"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81893578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
All Across the Circle: Using Auto-Ordering to Improve Object Transfer between Mobile Devices
Chengzhao Li, C. Gutwin, Kevin G. Stanley, Miguel A. Nacenta. Proceedings of Graphics Interface 2016, pp. 49-56. DOI: 10.20380/GI2016.07

People frequently form small groups in many social and professional situations: from conference attendees meeting at a coffee break, to siblings gathering at a family barbecue. These ad-hoc gatherings typically form into predictable geometries based on circles or circular arcs (called F-Formations). Because our lives are increasingly stored and represented by data on handheld devices, the desire to share digital objects while in these groupings has increased. Using relative position within these groups to facilitate file sharing can enable intuitive techniques such as passing or flicking. However, there is no reliable, lightweight, ad-hoc technology for detecting and representing relative locations around a circle. In this paper, we present two systems that can auto-order locations around a circle based on sensors that are standard on commodity smartphones. We tested these systems using an object-passing task in a laboratory environment against unordered and proximity-based systems, and show that our techniques are faster, more accurate, and preferred by users.
{"title":"All Across the Circle: Using Auto-Ordering to Improve Object Transfer between Mobile Devices","authors":"Chengzhao Li, C. Gutwin, Kevin G. Stanley, Miguel A. Nacenta","doi":"10.20380/GI2016.07","DOIUrl":"https://doi.org/10.20380/GI2016.07","url":null,"abstract":"People frequently form small groups in many social and professional situations: from conference attendees meeting at a coffee break, to siblings gathering at a family barbecue. These ad-hoc gatherings typically form into predictable geometries based on circles or circular arcs (called F-Formations). Because our lives are increasingly stored and represented by data on handheld devices, the desire to be able to share digital objects while in these groupings has increased. Using the relative position in these groups to facilitate file sharing can enable intuitive techniques such as passing or flicking. However, there is no reliable, lightweight, ad-hoc technology for detecting and representing relative locations around a circle. In this paper, we present two systems that can auto-order locations about a circle based on sensors that are standard on commodity smartphones. We tested these systems using an object-passing task in a laboratory environment against unordered and proximity-based systems, and show that our techniques are faster, are more accurate, and are preferred by users.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"93 1","pages":"49-56"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75644960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the Readability of Stacked Graphs
Alice Thudt, Jagoda Walny, Charles Perin, F. Rajabiyazdi, Lindsay MacDonald, Riane Vardeleon, S. Greenberg, Sheelagh Carpendale. Proceedings of Graphics Interface 2016, pp. 167-174. DOI: 10.20380/GI2016.21

Stacked graphs are a visualization technique popular in casual scenarios for representing multiple time series. Variations of stacked graphs have focused on reducing the distortion of individual streams because foundational perceptual studies suggest that variably curved slopes may make it difficult to accurately read and compare values. We contribute to this discussion by formally comparing the relative readability of basic stacked area charts, ThemeRivers, streamgraphs, and our own interactive technique for straightening baselines of individual streams in a ThemeRiver. We used both real-world and randomly generated datasets and covered tasks at the elementary, intermediate, and overall information levels. Results indicate that the decreased distortion of the newer techniques does appear to improve their readability, with streamgraphs performing best for value-comparison tasks. We also found that when a variety of tasks is expected, the interactive version of the ThemeRiver leads to higher correctness at the cost of being slower for value-comparison tasks.
{"title":"Assessing the Readability of Stacked Graphs","authors":"Alice Thudt, Jagoda Walny, Charles Perin, F. Rajabiyazdi, Lindsay MacDonald, Riane Vardeleon, S. Greenberg, Sheelagh Carpendale","doi":"10.20380/GI2016.21","DOIUrl":"https://doi.org/10.20380/GI2016.21","url":null,"abstract":"Stacked graphs are a visualization technique popular in casual scenarios for representing multiple time-series. Variations of stacked graphs have been focused on reducing the distortion of individual streams because foundational perceptual studies suggest that variably curved slopes may make it difficult to accurately read and compare values. We contribute to this discussion by formally comparing the relative readability of basic stacked area charts, ThemeRivers, streamgraphs and our own interactive technique for straightening baselines of individual streams in a ThemeRiver. We used both real-world and randomly generated datasets and covered tasks at the elementary, intermediate and overall information levels. Results indicate that the decreased distortion of the newer techniques does appear to improve their readability, with streamgraphs performing best for value comparison tasks. We also found that when a variety of tasks is expected to be performed, using the interactive version of the themeriver leads to more correctness at the cost of being slower for value comparison tasks.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"15 1","pages":"167-174"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85073103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twist 'n' Knock: A One-handed Gesture for Smart Watches
V. Cannanure, Xiang 'Anthony' Chen, Jennifer Mankoff. Proceedings of Graphics Interface 2016, pp. 189-193. DOI: 10.20380/GI2016.24

Interacting with a smart watch requires a fair amount of attention, which can disrupt a user's primary activity. While single-handed gestures have been developed for other platforms, they are cumbersome to perform with a watch. A simple interaction is needed that can be used to quickly and subtly access the watch at the user's convenience. In this paper, we developed Twist 'n' Knock, a one-handed gesture that can quickly trigger functionality on a smart watch without causing unintended false positives. This gesture is performed by quickly twisting the wrist that wears the watch and then knocking on a nearby surface, such as the thigh when standing or a table when sitting. Our evaluation with 11 participants shows that by chunking the twisting and knocking motion into a combined action, Twist 'n' Knock offers distinct features that produced only 2 false positives over a combined 22 hours of real-world data collection (11 users for 2 hours each). In structured tests, accuracy was 93%.
{"title":"Twist 'n' Knock: A One-handed Gesture for Smart Watches","authors":"V. Cannanure, Xiang 'Anthony' Chen, Jennifer Mankoff","doi":"10.20380/GI2016.24","DOIUrl":"https://doi.org/10.20380/GI2016.24","url":null,"abstract":"Interacting with a smart watch requires a fair amount of attention, which can disrupt a user's primary activity. While single-handed gestures have been developed for other platforms, they are cumbersome to perform with a watch. A simple interaction is needed that can be used to quickly and subtly access the watch at the user's convenience. In this paper, we developed Twist \"n\" Knock--a one-handed gesture that can quickly trigger functionality on a smart watch without causing unintended false positives. This gesture is performed by quickly twisting the wrist that wears the watch and then knocking on a nearby surface such as the thigh when standing or a table when sitting. Our evaluation with 11 participants shows that by chunking the twisting and knocking motion into a combined action, Twist 'n' Knock offers distinct features that produced only 2 false positives over a combined 22 hours of real world collection (11 users for 2 hours each). In structured tests, accuracy was 93%.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"133 1","pages":"189-193"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91397797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Reality Rehearsals for Acting with Visual Effects
Rozenn Bouville Berthelot, V. Gouranton, B. Arnaldi. Proceedings of Graphics Interface 2016, pp. 125-132. DOI: 10.20380/GI2016.16

This paper presents the use of Virtual Reality (VR) for movie actors' rehearsal of VFX-enhanced scenes. The impediment behind VFX scenes is that actors must be filmed in front of monochromatic green or blue screens with hardly any cue to the digital scenery that is supposed to surround them. The problem worsens when the scene includes interaction with digital partners: the actors must pretend they are sharing the set with imaginary creatures when they are, in fact, alone on an empty set. To support actors in this complicated task, we introduce the use of VR for acting rehearsals, not only to immerse actors in the digital scenery but also to provide them with advanced features for rehearsing their play. Our approach combines a fully interactive environment with a dynamic scenario feature, allowing actors to become familiar with the virtual elements while rehearsing dialogue and action at their own pace. The interactive and creative rehearsals enabled by the system can be either single-user or multi-user. Moreover, thanks to the wide range of supported platforms, VR rehearsals can take place either on-set or off-set. We conducted a preliminary study to assess whether VR training can replace classical training. The results show that VR-trained actors deliver a performance just as good as that of ordinarily trained actors. Moreover, all the subjects in our experiment preferred VR training to classic training.
{"title":"Virtual Reality Rehearsals for Acting with Visual Effects","authors":"Rozenn Bouville Berthelot, V. Gouranton, B. Arnaldi","doi":"10.20380/GI2016.16","DOIUrl":"https://doi.org/10.20380/GI2016.16","url":null,"abstract":"This paper presents the use of Virtual Reality (VR) for movie actors rehearsal of VFX-enhanced scenes. The impediment behind VFX scenes is that actors must be filmed in front of monochromatic green or blue screens with hardly any cue to the digital scenery that is supposed to surround them. The problem is worsens when the scene includes interaction with digital partners. The actors must pretend they are sharing the set with imaginary creatures when they are, in fact, on their own on an empty set. To support actors in this complicated task, we introduce the use of VR for acting rehearsals not only to immerse actors in the digital scenery but to provide them with advanced features for rehearsing their play. Indeed, our approach combines a fully interactive environment with a dynamic scenario feature to allow actors to become familiar with the virtual elements while rehearsing dialogue and action at their own speed. The interactive and creative rehearsals enabled by the system can be either single-user or multi-user. Moreover, thanks to the wide range of supported platforms, VR rehearsals can take place either on-set or off-set. We conducted a preliminary study to assess whether VR training can replace classical training. The results show that VR-trained actors deliver a performance just as good as ordinarily trained actors. Moreover, all the subjects in our experiment preferred VR training to classic training.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"51 1","pages":"125-132"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76032117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animated 3D Creatures from Single-view Video by Skeletal Sketching
Bernhard Reinert, Tobias Ritschel, H. Seidel. Proceedings of Graphics Interface 2016, pp. 133-141. DOI: 10.20380/GI2016.17

Extraction of deformable 3D geometry is not accessible to casual users, as it either requires dedicated hardware or vast manual effort. Inspired by the recent success of semi-automatic 3D reconstruction from a single image, we introduce a sketch-based extraction technique that allows fast reconstruction of a dynamic articulated shape from a single video. We model the shape as a union of generalized cylinders deformed by an animation of their axes, representing the "limbs" of the articulated shape. The axes are acquired from strokes sketched by the user on top of a few key frames. Our method bypasses the meticulous effort required to establish dense correspondences when applying common structure-from-motion techniques for shape reconstruction. Instead, we produce a plausible shape from the fusion of silhouettes over multiple frames. Reconstruction is performed at interactive rates, allowing interaction and refinement until the desired quality is achieved.
{"title":"Animated 3D Creatures from Single-view Video by Skeletal Sketching","authors":"Bernhard Reinert, Tobias Ritschel, H. Seidel","doi":"10.20380/GI2016.17","DOIUrl":"https://doi.org/10.20380/GI2016.17","url":null,"abstract":"Extraction of deformable 3D geometry is not accessible to casual users, as it either requires dedicated hardware or vast manual effort. Inspired by the recent success of semi-automatic 3D reconstruction from a single image, we introduce a sketch-based extraction technique that allows a fast reconstruction of a dynamic articulated shape from a single video. We model the shape as a union of generalized cylinders deformed by an animation of their axes, representing the \"limbs\" of the articulated shape. The axes are acquired from strokes sketched by the user on top of a few key frames. Our method bypasses the meticulous effort required to establish dense correspondences when applying common structure from motion techniques for shape reconstruction. Instead, we produce a plausible shape from the fusion of silhouettes over multiple frames. Reconstruction is performed at interactive rates, allowing interaction and refinement until the desired quality is achieved.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"116 1","pages":"133-141"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76218100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recursive SAH-based Bounding Volume Hierarchy Construction
Dominik Wodniok, M. Goesele. Proceedings of Graphics Interface 2016, pp. 101-107. DOI: 10.20380/GI2016.13

Advances in research on quality metrics for bounding volume hierarchies (BVHs) have shown that greedy top-down SAH builders construct BVHs with superior traversal performance despite the fact that the resulting SAH values are higher than those created by more sophisticated builders. Motivated by this observation, we examine a construction algorithm that uses recursive SAH values of temporarily constructed SAH-built BVHs to guide the construction. The resulting BVHs achieve up to 28% better trace performance for primary rays and up to 24% better trace performance for secondary diffuse rays compared to standard plane sweeping without spatial splits. Allowing spatial splits, we still achieve up to 20% and 19% better performance, respectively. While our approach is not suitable for real-time BVH construction, we show that the proposed algorithm has subquadratic computational complexity in the number of primitives, which renders it usable in practical applications.
{"title":"Recursive SAH-based Bounding Volume Hierarchy Construction","authors":"Dominik Wodniok, M. Goesele","doi":"10.20380/GI2016.13","DOIUrl":"https://doi.org/10.20380/GI2016.13","url":null,"abstract":"Advances in research on quality metrics for bounding volume hierarchies (BVHs) have shown that greedy top-down SAH builders construct BVHs with superior traversal performance despite the fact that the resulting SAH values are higher than those created by more sophisticated builders. Motivated by this observation we examine a construction algorithm that uses recursive SAH values of temporarily constructed SAH-built BVHs to guide the construction. The resulting BVHs achieve up to 28% better trace performance for primary rays and up to 24% better trace performance for secondary diffuse rays compared to standard plane sweeping without applying spatial splits. Allowing spatial splits we still achieve up to 20% resp. 19% better performance. While our approach is not suitable for real-time BVH construction, we show that the proposed algorithm has subquadratic computational complexity in the number of primitives, which renders it usable in practical applications.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"6 1","pages":"101-107"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91148377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}