This study examines the impact of persuasive content developed on the basis of Protection Motivation Theory (PMT) to promote heritage site preservation awareness among the local community. Digital content consisting of threat and coping messages related to heritage site preservation was developed and shown to 20 undergraduate students residing in the Luang Prabang World Heritage Site in Lao PDR. Pre- and post-test survey results show that subjects reported stronger agreement on threat, coping, and intention to preserve the heritage site after exposure to the persuasive digital content. The current work forms a core part of mobile application development under a project on ICT applications for the sustainable management of a World Heritage Site in a least developed country.
{"title":"Persuasive content development: application of protection motivation theory in promoting heritage site preservation awareness","authors":"Y. Poong, Shinobu Yamaguchi, Jun-ichi Takada","doi":"10.1145/2559206.2581259","DOIUrl":"https://doi.org/10.1145/2559206.2581259","url":null,"abstract":"This study aims to examine the impact of persuasive content developed based on Protection Motivation Theory (PMT) to promote heritage site preservation awareness among the local community. Digital content consists of threat and coping messages related to heritage site preservation is developed and shown to 20 undergraduate students residing in Luang Prabang world heritage site in Lao PDR. Pre and post-test survey result shows that subjects report stronger agreement on threat, coping, and intention to preserve heritage site after the subjects are exposed to the persuasive digital content. Current work forms one of the core parts of mobile application development under the project of ICT applications for sustainable management of world heritage site in least developed country.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115157642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with vision impairment are a longstanding, well-recognized user group in HCI. Despite recent interest in studying sighted dog owners and their pets in HCI, there is a noticeable gap in the field with regard to research on visually impaired owners and their dogs (guide dog teams). This paper presents portions of an ongoing study that explores the interactions of guide dog teams, revealing major opportunities for addressing the challenges faced in "off-work" everyday activities. In particular, these opportunities point to design interventions that enrich play interaction through accessible dog toys utilizing sensor technologies.
{"title":"Improving guide dog team play with accessible dog toys","authors":"Sabrina Hauser, Ron Wakkary, Carman Neustaedter","doi":"10.1145/2559206.2581316","DOIUrl":"https://doi.org/10.1145/2559206.2581316","url":null,"abstract":"People with vision impairment have been a longstanding well-recognized user group addressed in HCI. Despite the recent interest in studying sighted dog owners and their pets in HCI, there is a noticeable gap in the field with regards to research on visually impaired owners and their dogs (guide dog teams). This paper presents portions of an ongoing study that explores interactions of guide dog teams revealing major opportunities for focusing on challenges faced in \"off-work\" everyday activities. In particular, opportunities point to promoting design interventions enriching play-interaction through accessible dog toys utilizing sensor technologies. ","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123042358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital media can empower the exploration and outreach components of an ethologist's process, which have traditionally been neglected technologically. A digitally holistic scientific process holds implications for empowering both ethology and digital media.
{"title":"Digital naturalism: designing holistic ethological interaction","authors":"Andrew Quitmeyer","doi":"10.1145/2559206.2559956","DOIUrl":"https://doi.org/10.1145/2559206.2559956","url":null,"abstract":"Digital Media can empower the traditionally technologically neglected exploration and outreach components of an ethologist's process. A digitally holistic scientific process holds implications for empowering both fields of ethology and digital media.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123161510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One important way for systems to adapt to their individual users is their ability to show empathy. Being empathetic implies that the computer is able to recognize a user's affective states and understand the implications of those states. Detecting affective states is a step toward providing machines with the intelligence needed to interact appropriately with humans. This course provides a description and demonstration of tools and methodologies for automatically detecting affective states with a multimodal approach.
{"title":"Multimodal detection of affective states: a roadmap through diverse technologies","authors":"Javier Gonzalez-Sanchez, Maria Elena Chavez Echeagaray, R. Atkinson, W. Burleson","doi":"10.1145/2559206.2567820","DOIUrl":"https://doi.org/10.1145/2559206.2567820","url":null,"abstract":"One important way for systems to adapt to their individual users is related to their ability to show empathy. Being empathetic implies that the computer is able to recognize a user's affective states and understand the implication of those states. Detection of affective states is a step forward to provide machines with the necessary intelligence to appropriately interact with humans. This course provides a description and demonstration of tools and methodologies for automatically detecting affective states with a multimodal approach.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117158971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proper emotional development is important for young children, especially those with psychological disorders such as autism spectrum disorders (ASDs), for whom early intervention is crucial. However, traditional paper-based interventions are laborious and difficult for carers and parents to employ, while current computer-aided interventions feel too much like obvious assistive tools and lack the timely feedback needed to inform and aid progress. CopyMe is an iPad game we developed that allows children to learn emotions with instant feedback on their performance. A pilot study revealed that children with ASDs were able to enjoy and perform well in the game. CopyMe also demonstrates a novel affective game interface that incorporates state-of-the-art facial expression tracking and classification. This will be particularly interesting for CHI attendees working in the domain of affective interfaces and serious games, especially those targeting children.
{"title":"CopyMe: an emotional development game for children","authors":"Natalie Harrold, Chek Tien Tan, Daniel Rosser, T. Leong","doi":"10.1145/2559206.2574785","DOIUrl":"https://doi.org/10.1145/2559206.2574785","url":null,"abstract":"Proper emotional development is important for young children, especially those with psychological disorders such as autism spectrum disorders (ASDs), whereby early intervention becomes crucial. However, traditional paper-based interventions are mostly laborious and difficult to employ for carers and parents, whilst current computer-aided interventions feel too much like obvious assistive tools and lack timely feedback to inform and aid progress. CopyMe is an iPad game we developed that allows children to learn emotions with instant feedback on performance. A pilot study revealed children with ASDs were able to enjoy and perform well in the game. CopyMe also demonstrates a novel affective game interface that incorporates state-of-the-art facial expression tracking and classification. This will be particularly interesting for CHI attendees working in the domain of affective interfaces and serious games, especially those that target children.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121246592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the past 73 years, the CBC has disseminated a unique Canadian perspective across the world, producing a phenomenally rich multimedia record of the country and its social, political, and cultural heritage and news. This project uses visualization and sonification of portions of an enormous historical CBC Newsworld data corpus to create an "on this day" experience for viewers. The digitized collection of 24-hour news videos spans a 24-year period (1989-2013) and is presented within an immersive multiscreen environment that enables gesture-driven, context-aware browsing, information seeking, and segment review. Employing natural language processing technologies, the interface displays keywords and key phrases identified in the transcripts, enabling serendipitous video search and display and offering a unique browsing opportunity within this rich "big data" corpus.
{"title":"The CBC newsworld holodeck","authors":"Martha Ladly, Gerald Penn, C. C. Chen, Pavika Chintraruck, Maziar Ghaderi, Bryn A. Ludlow, Jessica Peter, R. Tanyag, P. Zhou, Siavash Kazemian","doi":"10.1145/2559206.2574795","DOIUrl":"https://doi.org/10.1145/2559206.2574795","url":null,"abstract":"For the past 73 years, the CBC has disseminated a unique Canadian perspective across the world, producing a phenomenally rich multimedia record of the country and our social, political and cultural heritage and news. This project utilizes visualization and sonification of portions of an enormous historical CBC Newsworld data corpus to enable an \"on this day\" experience for viewers. The digitized collection of 24-hour news videos spans a 24-year period (1989-2013) within an immersive multiscreen environment, to enable gesture-driven context-aware browsing, information seeking, and segment review. Employing natural language processing technologies, the interface displays keywords and key phrases identified in the transcripts, enabling serendipitous video search and display and offering a unique browsing opportunity within this rich \"big data\" corpus.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126105975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper details the design of i-dentity, a collaborative movement-based game whose design deliberately conceals the players' associations with a digital representation. While movement-based digital games typically make it clear whose movement representation belongs to which player, we explore how making it ambiguous whose movement controls which representation can facilitate engaging play experiences. We call this "innominate movement representation" and explore the opportunity through our game, i-dentity. In the game, each player in a group holds a Sony Move controller, with one player's movements controlling all of the controllers' lights. Gameplay involves the players performing movements together at the same time in order to conceal from the other players whose movements are represented. With i-dentity, we aim to extend the range of multiplayer games with a novel and engaging approach to the digital representation of player movement.
{"title":"I-dentity: concealing movement representation associations in games","authors":"J. Garner, Gavin Wood","doi":"10.1145/2559206.2580102","DOIUrl":"https://doi.org/10.1145/2559206.2580102","url":null,"abstract":"This paper details the design of i-dentity, a collaborative movement-based game where the game design deliberately conceals the players' associations to a digital representation. While movement-based digital games typically make it clear whose movement representation belongs to which player, we explore how making it ambiguous whose movement controls which representation can facilitate engaging play experiences. We call this \"innominate movement representation\" and explore this opportunity through our game \"i-dentity\". The game's setup has each player in a group hold Sony Move controllers, with one of the players' movements controlling all of the Move controller lights. Gameplay involves the group of players with Move controllers trying to perform movements together at the same time in order to conceal from other players whose movements are represented. With i-dentity, we aim to extend the range of multiplayer games with a novel and engaging approach to digital representation of player movement.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123701251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aims of this work were to investigate how decision making in production planning happens, to understand how energy efficiency could be included as a manufacturing goal, and to design a decision support system that integrates into current work practices. This case study describes the research, design, and development of this system. An interactive visualisation provides an interface to a flexible optimisation engine and allows experts to interpret, amend, and augment system inputs and outputs without the need for programming knowledge. This approach supports collaboration between automated and human agents working as a joint cognitive system.
{"title":"Greybox scheduling: designing a joint cognitive system for sustainable manufacturing","authors":"C. Upton, F. Quilligan","doi":"10.1145/2559206.2559970","DOIUrl":"https://doi.org/10.1145/2559206.2559970","url":null,"abstract":"The aims of this work were to investigate how decision making in production planning happens, to understand how energy efficiency could be included as a manufacturing goal and to design a decision support system that integrates into current work practices. This case study describes the research, design and development this system. An interactive visualisation provides an interface to a flexible optimisation engine and allows experts to interpret, amend and augment system inputs and outputs without the need for programming knowledge. This approach supports collaboration between automated and human agents, working as a joint cognitive system.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125274956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This installation showcases an office desk as an imaginary wonderland where a group of diligent kitties live happily and work industriously to make artifacts react to users' eye gaze. The kitties are shy about being seen, but determined to keep everything in the wonderland moving from behind their hiding places. The desk environment consists of two paper objects augmented by video projections, two devices with digital displays, and other office stationery. A desktop eye tracker was carefully set up to track gaze in 3D physical space. Users sit at the desk and work as ordinary office workers while the objects and devices behave in different ways depending on the presence or absence of the users' eye gaze.
{"title":"Scopophobic kitties in wonderland: stories behind the scene of a gaze contingent environment","authors":"Mon-Chu Chen, Kuan-Ying Wu, Yi-Ching Huang","doi":"10.1145/2559206.2574822","DOIUrl":"https://doi.org/10.1145/2559206.2574822","url":null,"abstract":"This installation showcases an office desk, an imaginary wonderland, where a group of diligent kitties live happily and work industriously to make artifacts react to users' eye gaze. Kitties are shy about being seen, but determined to keep everything in the wonderland moving after they hide behind. The desk environment consists of two paper objects augmented by video projections and two devices with digital displays as well as other office stationaries. One desktop eye tracker was carefully setup in order to track gaze in the 3D physical space. Users sit in front of the desk and work as a normal office worker while objects and devices behave in different ways upon the presence and absence of eye gaze of users.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125367980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavioral coding is a common technique in the social sciences and human-computer interaction for extracting meaning from video data [3]. Since computer vision cannot yet reliably interpret human actions and emotions, video coding remains a time-consuming manual process done by a small team of researchers. We present Glance, a tool that allows researchers to rapidly analyze video datasets for behavioral events that are difficult to detect automatically. Glance uses the crowd to interpret natural language queries, then aggregates and summarizes the content of the video. We show that Glance can accurately code events in video in a fraction of the time it would take a single person. We also investigate the speed improvements made possible by recruiting large crowds, showing that Glance can code 80% of an hour-long video in just 5 minutes. Rapid coding allows participants to have a "conversation with their data" and to develop and refine research hypotheses in ways not previously possible.
{"title":"Glance: enabling rapid interactions with data using the crowd","authors":"Walter S. Lasecki, Mitchell L. Gordon, Steven W. Dow, Jeffrey P. Bigham","doi":"10.1145/2559206.2574817","DOIUrl":"https://doi.org/10.1145/2559206.2574817","url":null,"abstract":"Behavioral coding is a common technique in the social sciences and human computer interaction for extracting meaning from video data [3]. Since computer vision cannot yet reliably interpret human actions and emotions, video coding remains a time-consuming manual process done by a small team of researchers. We present Glance, a tool that allows researchers to rapidly analyze video datasets for behavioral events that are difficult to detect automatically. Glance uses the crowd to interpret natural language queries, and then aggregates and summarizes the content of the video. We show that Glance can accurately code events in video in a fraction of the time it would take a single person. We also investigate speed improvements made possible by recruiting large crowds, showing that Glance is able to code 80% of an hour-long video in just 5 minutes. Rapid coding allows participants to have a \"conversation with their data\" to rapidly develop and refine research hypotheses in ways not previously possible.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125569027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}