On Sounder Ground: CAAT, a Viable Widget for Affective Reaction Assessment
Bruno Cardoso, Osvaldo Santos, T. Romão
DOI: https://doi.org/10.1145/2807442.2807465
The reliable assessment of affective reactions to stimuli is paramount in a variety of scientific fields, including HCI (Human-Computer Interaction). The variation of emotional states over time, however, creates the need for quick measurements of emotions. To address this need, new tools for quick assessments of affective states have been developed. In this work, we explore the CAAT (Circumplex Affective Assessment Tool), an instrument with a unique design in the scope of affect assessment -- a graphical control element -- that makes it amenable to seamless integration in user interfaces. We briefly describe the CAAT and present a multi-dimensional evaluation that evidences the tool's viability. We assessed its test-retest reliability, construct validity and quickness of use by collecting data through an unsupervised, web-based user study. Results show high test-retest reliability, evidence the tool's construct validity and confirm its quickness of use, making it a good fit for longitudinal studies and systems requiring quick assessments of emotional reactions.
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel
Viktor Miruchna, Robert Walter, David Lindlbauer, Maren Lehmann, R. Klitzing, Jörg Müller
DOI: https://doi.org/10.1145/2807442.2807487
We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2 mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 °C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6×4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).
{"title":"GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel","authors":"Viktor Miruchna, Robert Walter, David Lindlbauer, Maren Lehmann, R. Klitzing, Jörg Müller","doi":"10.1145/2807442.2807487","DOIUrl":"https://doi.org/10.1145/2807442.2807487","url":null,"abstract":"We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO, that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6x4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132843360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality
Hrvoje Benko, E. Ofek, Feng Zheng, Andrew D. Wilson
DOI: https://doi.org/10.1145/2807442.2807493
Optically see-through (OST) augmented reality glasses can overlay spatially-registered computer-generated content onto the real world. However, current optical designs and weight considerations limit their diagonal field of view to less than 40 degrees, making it difficult to create a sense of immersion or give the viewer an overview of the augmented reality space. We combine OST glasses with a projection-based spatial augmented reality display to achieve a novel display hybrid, called FoveAR, capable of a field of view greater than 100 degrees, view-dependent graphics, extended brightness and color, as well as interesting combinations of public and personal data display. We contribute details of our prototype implementation and an analysis of the interactive design space that our system enables. We also contribute four prototype experiences showcasing the capabilities of FoveAR, as well as preliminary user feedback providing insights for enhancing future FoveAR experiences.
{"title":"FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality","authors":"Hrvoje Benko, E. Ofek, Feng Zheng, Andrew D. Wilson","doi":"10.1145/2807442.2807493","DOIUrl":"https://doi.org/10.1145/2807442.2807493","url":null,"abstract":"Optically see-through (OST) augmented reality glasses can overlay spatially-registered computer-generated content onto the real world. However, current optical designs and weight considerations limit their diagonal field of view to less than 40 degrees, making it difficult to create a sense of immersion or give the viewer an overview of the augmented reality space. We combine OST glasses with a projection-based spatial augmented reality display to achieve a novel display hybrid, called FoveAR, capable of greater than 100 degrees field of view, view dependent graphics, extended brightness and color, as well as interesting combinations of public and personal data display. We contribute details of our prototype implementation and an analysis of the interactive design space that our system enables. We also contribute four prototype experiences showcasing the capabilities of FoveAR as well as preliminary user feedback providing insights for enhancing future FoveAR experiences.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133761591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corona: Positioning Adjacent Device with Asymmetric Bluetooth Low Energy RSSI Distributions
Haojian Jin, Cheng Xu, Kent Lyons
DOI: https://doi.org/10.1145/2807442.2807485
We introduce Corona, a novel spatial sensing technique that implicitly locates adjacent mobile devices in the same plane by examining asymmetric Bluetooth Low Energy RSSI distributions. The underlying phenomenon is that the off-center BLE antenna and asymmetric radio frequency topology create a characteristic Bluetooth RSSI distribution around the device. By comparing the real-time RSSI readings against an RSSI distribution model, each device can derive the relative position of the other adjacent device. Our experiments using an iPhone and an iPad Mini show that Corona yields position estimation at 50% accuracy within a 2 cm range, or 85% for the best two candidates. We developed an application that combines Corona with accelerometer readings to mitigate ambiguity and enable cross-device interactions on adjacent devices.
Improving Haptic Feedback on Wearable Devices through Accelerometer Measurements
Jeffrey R. Blum, I. Frissen, J. Cooperstock
DOI: https://doi.org/10.1145/2807442.2807474
Many variables have been shown to impact whether a vibration stimulus will be perceived. We present a user study that takes into account not only previously investigated predictors such as vibration intensity and duration along with the age of the person receiving the stimulus, but also the amount of motion, as measured by an accelerometer, at the site of vibration immediately preceding the stimulus. This is a more specific measure than in previous studies showing an effect on perception due to gross conditions such as walking. We show that a logistic regression model including prior acceleration is significantly better at predicting vibration perception than a model including only vibration intensity, duration and participant age. In addition to the overall regression, we discuss individual participant differences and measures of classification performance for real-world applications. Our expectation is that haptic interface designers will be able to use such results to design better vibrations that are perceivable under the user's current activity conditions, without being annoyingly loud or jarring, eventually approaching "perceptually equivalent" feedback independent of motion.
LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding
Udayan Umapathi, Hsiang-Ting Chen, Stefanie Müller, L. Wall, Anna Seufert, Patrick Baudisch
DOI: https://doi.org/10.1145/2807442.2807512
Laser cutters are useful for rapid prototyping because they are fast. However, they only produce planar 2D geometry. One approach to creating non-planar objects is to cut the object in horizontal slices and to stack and glue them. This approach, however, requires manual effort for the assembly and time for the glue to set, defeating the purpose of using a fast fabrication tool. We propose eliminating the assembly step with our system LaserStacker. The key idea is to use the laser cutter not only to cut but also to weld. Users place not one acrylic sheet, but a stack of acrylic sheets into their cutter. In a single process, LaserStacker cuts each individual layer to shape (through all layers above it), welds layers by melting material at their interface, and heals undesired cuts in higher layers. When users take the object out of the laser cutter, it is already assembled. To allow users to model stacked objects efficiently, we built an extension to a commercial 3D editor (SketchUp) that provides tools for defining which parts should be connected and which should remain loose. When users hit the export button, LaserStacker converts the 3D model into cutting, welding, and healing instructions for the laser cutter. We show that LaserStacker allows making not only static objects, such as architectural models, but also objects with moving parts and simple mechanisms, such as scissors, a simple pinball machine, and a mechanical toy with gears.
{"title":"LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding","authors":"Udayan Umapathi, Hsiang-Ting Chen, Stefanie Müller, L. Wall, Anna Seufert, Patrick Baudisch","doi":"10.1145/2807442.2807512","DOIUrl":"https://doi.org/10.1145/2807442.2807512","url":null,"abstract":"Laser cutters are useful for rapid prototyping because they are fast. However, they only produce planar 2D geometry. One approach to creating non-planar objects is to cut the object in horizontal slices and to stack and glue them. This approach, however, requires manual effort for the assembly and time for the glue to set, defeating the purpose of using a fast fabrication tool. We propose eliminating the assembly step with our system LaserStacker. The key idea is to use the laser cutter to not only cut but also to weld. Users place not one acrylic sheet, but a stack of acrylic sheets into their cutter. In a single process, LaserStacker cuts each individual layer to shape (through all layers above it), welds layers by melting material at their interface, and heals undesired cuts in higher layers. When users take out the object from the laser cutter, it is already assembled. To allow users to model stacked objects efficiently, we built an extension to a commercial 3D editor (SketchUp) that provides tools for defining which parts should be connected and which remain loose. When users hit the export button, LaserStacker converts the 3D model into cutting, welding, and healing instructions for the laser cutter. We show how LaserStacker does not only allow making static objects, such as architectural models, but also objects with moving parts and simple mechanisms, such as scissors, a simple pinball machine, and a mechanical toy with gears.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115330314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries
Amy Pavel, Dan B. Goldman, Bjoern Hartmann, Maneesh Agrawala
DOI: https://doi.org/10.1145/2807442.2807502
Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we have found that such users search for a wide variety of clips---e.g., actions, props, dialogue phrases, character performances, locations---and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts---however, dialogue often lacks visual context, character names, and high-level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from these sources to allow expressive search at several levels of granularity: captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption-to-script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.
{"title":"SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries","authors":"Amy Pavel, Dan B. Goldman, Bjoern Hartmann, Maneesh Agrawala","doi":"10.1145/2807442.2807502","DOIUrl":"https://doi.org/10.1145/2807442.2807502","url":null,"abstract":"Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we have found that such users search for a wide variety of clips---e.g., actions, props, dialogue phrases, character performances, locations---and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts---however, dialogue often lacks visual context, character names, and high level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from such sources to allow expressive search at several levels of granularity: Captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption to script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"31 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120986234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HapticPrint: Designing Feel Aesthetics for Digital Fabrication
César Torres, Tim Campbell, Neil Kumar, E. Paulos
DOI: https://doi.org/10.1145/2807442.2807492
Digital fabrication has enabled massive creativity in hobbyist communities and professional product design. These emerging technologies excel at realizing an arbitrary shape or form; however, these objects are often rigid and lack the feel desired by designers. We aim to enable physical haptic design in passive 3D printed objects. This paper identifies two core areas for extending physical design into digital fabrication: designing the external and internal haptic characteristics of an object. We present HapticPrint as a pair of design tools to easily modify the feel of a 3D model. Our external tool maps textures and UI elements onto arbitrary shapes, and our internal tool modifies the internal geometry of models for novel compliance and weight characteristics. We demonstrate the value of HapticPrint with a range of applications that expand the aesthetics of feel, usability, and interactivity in 3D artifacts.
ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data
Xin Yi, Chun Yu, M. Zhang, Sida Gao, Ke Sun, Yuanchun Shi
DOI: https://doi.org/10.1145/2807442.2807504
Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback, as well as the difficulty of accurately distinguishing the tapping finger or target keys, remains the major challenge for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users' mid-air typing behavior, examining fingertip kinematics during tapping, the correlated movement among fingers, and the 3D distribution of tapping endpoints. Based on these findings, we proposed a probabilistic tap detection algorithm and augmented Goodman's input correction model to account for the ambiguity in distinguishing the tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and reached 29.2 WPM in the last block without sacrificing accuracy.
{"title":"ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data","authors":"Xin Yi, Chun Yu, M. Zhang, Sida Gao, Ke Sun, Yuanchun Shi","doi":"10.1145/2807442.2807504","DOIUrl":"https://doi.org/10.1145/2807442.2807504","url":null,"abstract":"Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback as well as the inability to accurately distinguish tapping finger or target keys exists as the major challenge for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users' mid-air typing behavior, and examined fingertip kinematics during tapping, correlated movement among fingers and 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm, and augmented Goodman's input correction model to account for the ambiguity in distinguishing tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116708383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues
Florian Alt, A. Bulling, G. Gravanis, Daniel Buschek
DOI: https://doi.org/10.1145/2807442.2807490
Users tend to position themselves in front of interactive public displays in such a way as to best perceive their content. Currently, this sweet spot is implicitly defined by display properties, content, the input modality, as well as space constraints in front of the display. We present GravitySpot, an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues. Such guidance is beneficial, for example, if a particular input technology only works at a specific distance or if users should be guided towards a non-crowded area of a large display. In two controlled lab studies (n=29) we evaluate different visual cues based on color, shape, and motion, as well as position-to-cue mapping functions. We show that both the visual cues and mapping functions allow for fine-grained control over positioning speed and accuracy. These findings are complemented by observations from a 3-month real-world deployment.
{"title":"GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues","authors":"Florian Alt, A. Bulling, G. Gravanis, Daniel Buschek","doi":"10.1145/2807442.2807490","DOIUrl":"https://doi.org/10.1145/2807442.2807490","url":null,"abstract":"Users tend to position themselves in front of interactive public displays in such a way as to best perceive its content. Currently, this sweet spot is implicitly defined by display properties, content, the input modality, as well as space constraints in front of the display. We present GravitySpot - an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues. Such guidance is beneficial, for example, if a particular input technology only works at a specific distance or if users should be guided towards a non-crowded area of a large display. In two controlled lab studies (n=29) we evaluate different visual cues based on color, shape, and motion, as well as position-to-cue mapping functions. We show that both the visual cues and mapping functions allow for fine-grained control over positioning speed and accuracy. Findings are complemented by observations from a 3-month real-world deployment.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128035104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}