
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: latest publications

WADE: simplified GUI add-on development for third-party software
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557349
Xiaojun Meng, Shengdong Zhao, Yongfeng Huang, Zhongyuan Zhang, James R. Eagan, Subramanian Ramanathan
We present the WADE Integrated Development Environment (IDE), which simplifies interface and functionality modification of existing third-party software without access to source code. WADE clones the Graphical User Interface (GUI) of a host program through dynamic-link library (DLL) injection, enabling modifications to (1) the GUI in a WYSIWYG fashion and (2) software functionality. We compare WADE with an alternative state-of-the-art runtime toolkit overloading approach in a user-study, whose results demonstrate that WADE significantly simplifies the task of GUI-based add-on development.
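A note on the mechanism: the abstract credits DLL injection for cloning the host program's GUI. The sketch below is a minimal, generic illustration of classic Windows DLL injection using Python's ctypes, under the assumption of a known process ID and add-on DLL path; it shows the general technique only and is not WADE's implementation.

```python
# Generic Windows DLL injection sketch (illustrative only, not WADE's code).
# Runs only on Windows; error handling is omitted for brevity.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Declare signatures so pointers are not truncated on 64-bit Windows.
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)
kernel32.VirtualAllocEx.restype = wintypes.LPVOID
kernel32.VirtualAllocEx.argtypes = (wintypes.HANDLE, wintypes.LPVOID,
                                    ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD)
kernel32.WriteProcessMemory.restype = wintypes.BOOL
kernel32.WriteProcessMemory.argtypes = (wintypes.HANDLE, wintypes.LPVOID,
                                        ctypes.c_char_p, ctypes.c_size_t,
                                        ctypes.POINTER(ctypes.c_size_t))
kernel32.GetModuleHandleW.restype = wintypes.HMODULE
kernel32.GetModuleHandleW.argtypes = (wintypes.LPCWSTR,)
kernel32.GetProcAddress.restype = wintypes.LPVOID
kernel32.GetProcAddress.argtypes = (wintypes.HMODULE, wintypes.LPCSTR)
kernel32.CreateRemoteThread.restype = wintypes.HANDLE
kernel32.CreateRemoteThread.argtypes = (wintypes.HANDLE, wintypes.LPVOID,
                                        ctypes.c_size_t, wintypes.LPVOID,
                                        wintypes.LPVOID, wintypes.DWORD,
                                        ctypes.POINTER(wintypes.DWORD))
kernel32.WaitForSingleObject.argtypes = (wintypes.HANDLE, wintypes.DWORD)
kernel32.CloseHandle.argtypes = (wintypes.HANDLE,)

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE = 0x00003000      # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04
INFINITE = 0xFFFFFFFF

def inject_dll(pid: int, dll_path: str) -> None:
    """Make the process `pid` load `dll_path` by running LoadLibraryA inside it."""
    path = dll_path.encode("mbcs") + b"\x00"
    process = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    # Reserve memory inside the host process and copy the DLL path into it.
    remote = kernel32.VirtualAllocEx(process, None, len(path),
                                     MEM_COMMIT_RESERVE, PAGE_READWRITE)
    kernel32.WriteProcessMemory(process, remote, path, len(path), None)
    # Start a remote thread at LoadLibraryA(remote); once loaded, the injected
    # DLL can hook the host's GUI and expose add-on functionality.
    loader = kernel32.GetProcAddress(kernel32.GetModuleHandleW("kernel32.dll"),
                                     b"LoadLibraryA")
    thread = kernel32.CreateRemoteThread(process, None, 0, loader, remote, 0, None)
    kernel32.WaitForSingleObject(thread, INFINITE)
    kernel32.CloseHandle(thread)
    kernel32.CloseHandle(process)
```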
Citations: 14
Imperceptible depth shifts for touch interaction with stereoscopic objects
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557134
Dimitar Valkov, A. Giesler, K. Hinrichs
While touch technology has proven its usability for 2D interaction and has already become a standard input modality for many devices, the challenges to exploit its applicability with stereoscopically rendered content have barely been studied. In this paper we exploit the properties of the visual perception to allow users to touch stereoscopically displayed objects when the input is constrained to a 2D surface. Therefore, we have extended and generalized recent evaluations on the user's ability to discriminate small induced object shifts while reaching out to touch a virtual object, and we propose a practical interaction technique, the attracting shift technique, suitable for numerous application scenarios where shallow depth interaction is sufficient. In addition, our results indicate that slight object shifts during touch interaction make the virtual scene appear perceptually more stable compared to a static scene. As a consequence, applications have to manipulate the virtual objects to make them appear static for the user.
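For readers wondering what an "attracting shift" might look like in code, the following is a hedged sketch of the general idea: as the tracked finger approaches the touch surface, the rendered depth of the stereoscopic target is interpolated toward the screen plane so that finger and object meet at the surface. The detection range and the linear interpolation are illustrative assumptions, not parameters from the paper.

```python
# Illustrative sketch of depth attraction during a reach gesture
# (assumed parameters; not the paper's implementation).

def attracted_depth(object_depth_cm: float, finger_height_cm: float,
                    detection_range_cm: float = 10.0) -> float:
    """Depth (cm, 0 = screen plane) at which to render the object for the
    current finger height above the touch surface."""
    if finger_height_cm >= detection_range_cm:
        return object_depth_cm                      # finger far away: no shift
    # Fraction of the approach completed: 0 when entering range, 1 at touch.
    progress = 1.0 - max(finger_height_cm, 0.0) / detection_range_cm
    # Attract the object linearly toward the screen plane; at the moment of
    # touch (progress == 1) it is rendered exactly at depth 0.
    return object_depth_cm * (1.0 - progress)

if __name__ == "__main__":
    for h in (12.0, 8.0, 4.0, 0.0):                 # finger heights during the reach
        print(f"finger at {h:4.1f} cm -> render depth {attracted_depth(-2.0, h):5.2f} cm")
```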
Citations: 13
Toss 'n' turn: smartphone as sleep and sleep quality detector
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557220
Jun-Ki Min, Afsaneh Doryab, Jason Wiese, Shahriyar Amini, J. Zimmerman, Jason I. Hong
The rapid adoption of smartphones along with a growing habit for using these devices as alarm clocks presents an opportunity to use this device as a sleep detector. This adds value to UbiComp and personal informatics in terms of user context and new performance data to collect and visualize, and it benefits healthcare as sleep is correlated with many health issues. To assess this opportunity, we collected one month of phone sensor and sleep diary entries from 27 people who have a variety of sleep contexts. We used this data to construct models that detect sleep and wake states, daily sleep quality, and global sleep quality. Our system classifies sleep state with 93.06% accuracy, daily sleep quality with 83.97% accuracy, and overall sleep quality with 81.48% accuracy. Individual models performed better than generally trained models, where the individual models require 3 days of ground truth data and 3 weeks of ground truth data to perform well on detecting sleep and sleep quality, respectively. Finally, the features of noise and movement were useful to infer sleep quality.
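As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a classifier on per-window movement, noise, light, and screen-use features to label sleep versus wake. The features, window length, synthetic data, and choice of a random forest are assumptions for illustration; the paper's actual features and models may differ.

```python
# Illustrative sleep/wake classification from phone-sensor features
# (synthetic data; not the models reported in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per 10-minute window: [movement (accelerometer variance),
# ambient noise level, ambient light level, screen-on fraction].
n_windows = 600
X = rng.random((n_windows, 4))

# Synthetic labels: 1 = asleep. Sleep windows tend to have low movement,
# noise, light, and screen use, which the negative weights below mimic.
score = X @ np.array([-1.5, -1.0, -1.0, -2.0]) + rng.normal(0, 0.3, n_windows)
y = (score > -2.2).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```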
Citations: 201
Session details: Interacting with the web
J. Golbeck
{"title":"Session details: Interacting with the web","authors":"J. Golbeck","doi":"10.1145/3250946","DOIUrl":"https://doi.org/10.1145/3250946","url":null,"abstract":"","PeriodicalId":20599,"journal":{"name":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","volume":"191 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78651608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Sustainability and everyday practices
M. Hazas
{"title":"Session details: Sustainability and everyday practices","authors":"M. Hazas","doi":"10.1145/3250918","DOIUrl":"https://doi.org/10.1145/3250918","url":null,"abstract":"","PeriodicalId":20599,"journal":{"name":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","volume":"73 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76036627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Issues that matter
R. Comber
{"title":"Session details: Issues that matter","authors":"R. Comber","doi":"10.1145/3250960","DOIUrl":"https://doi.org/10.1145/3250960","url":null,"abstract":"","PeriodicalId":20599,"journal":{"name":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87148028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An EEG-based approach for evaluating audio notifications under ambient sounds
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557076
Yi-Chieh Lee, Wen-Chieh Lin, Jung-Tai King, L. Ko, Yu-Ting Huang, Fu-Yin Cherng
Audio notifications are an important means of prompting users of electronic products. Although useful in most environments, audio notifications are ineffective in certain situations, especially against particular auditory backgrounds or when the user is distracted. Several studies have used behavioral performance to evaluate audio notifications, but these studies failed to achieve consistent results due to factors including user subjectivity and environmental differences; thus, a new method and more objective indicators are necessary. In this study, we propose an approach based on electroencephalography (EEG) to evaluate audio notifications by measuring users' auditory perceptual responses (mismatch negativity) and attention shifting (P3a). We demonstrate our approach by applying it to the usability testing of audio notifications in realistic scenarios, such as users performing a major task amid ambient noises. Our results open a new perspective for evaluating the design of the audio notifications.
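For context, mismatch negativity (MMN) and P3a are event-related potential components obtained by averaging EEG epochs time-locked to standard versus deviant (notification) sounds and subtracting the two averages. The sketch below shows that computation on synthetic data; the time windows and amplitudes are illustrative assumptions, not the study's parameters.

```python
# Illustrative MMN / P3a difference-wave computation on synthetic epochs.
import numpy as np

fs = 250                                    # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)          # epoch time axis: -100 .. 500 ms

rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 2.0, (200, t.size))   # trials x samples (uV)
deviant_epochs = rng.normal(0.0, 2.0, (60, t.size))
# Give the synthetic deviants an MMN-like negativity (~100-250 ms) and a
# P3a-like positivity (~250-400 ms) so the measurements below find something.
deviant_epochs -= 1.5 * np.exp(-((t - 0.17) / 0.05) ** 2)
deviant_epochs += 2.0 * np.exp(-((t - 0.32) / 0.06) ** 2)

# ERP difference wave: average deviant response minus average standard response.
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mean_amplitude(wave, t_lo, t_hi):
    """Mean amplitude of `wave` within the [t_lo, t_hi) time window (seconds)."""
    return wave[(t >= t_lo) & (t < t_hi)].mean()

print("MMN mean amplitude (uV):", mean_amplitude(difference_wave, 0.10, 0.25))
print("P3a mean amplitude (uV):", mean_amplitude(difference_wave, 0.25, 0.40))
```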
Citations: 17
Visually impaired users on an online social network
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557415
Shaomei Wu, Lada A. Adamic
In this paper we present the first large-scale empirical study of how visually impaired people use online social networks, specifically Facebook. We identify a sample of 50K visually impaired users, and study the activities they perform, the content they produce, and the friendship networks they build on Facebook. We find that visually impaired users participate on Facebook (e.g. status updates, comments, likes) as much as the general population, and receive more feedback (i.e., comments and likes) on average on their content. By analyzing the content produced by visually impaired users, we find that they share their experience and issues related to vision impairment. We also identify distinctive patterns in their language and technology use. We also show that, compared to other users, visually impaired users have smaller social networks, but such differences have decreased over time. Our findings have implications for improving the utility and usability of online social networks for visually impaired users.
Citations: 108
SurfacePhone: a mobile projection device for single- and multiuser everywhere tabletop interaction
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557075
Christian Winkler, Markus Löchtefeld, D. Dobbelstein, A. Krüger, E. Rukzio
To maintain a mobile form factor, the screen real estate of a mobile device cannot grow arbitrarily. In this paper we present SurfacePhone, a novel configuration of a projector phone which aligns the projector to project onto a physical surface to allow tabletop-like interaction in a mobile setup. The projection is created behind the upright standing phone and is touch- and gesture-enabled. Multiple projections can be merged to create shared spaces for multi-user collaboration. We investigate this new setup, starting with the concept that we evaluated with a concept prototype. Furthermore we present our technical prototype, a mobile phone case with an integrated projector that allows for the aforementioned interaction. We discuss its technical requirements and evaluate the accuracy of interaction in a second user study. We conclude with lessons learned and design guidelines.
Citations: 32
The role of interactive biclusters in sensemaking
Pub Date : 2014-04-26 DOI: 10.1145/2556288.2557337
Maoyuan Sun, Lauren Bradel, Chris North, Naren Ramakrishnan
Visual exploration of relationships within large, textual datasets is an important aid for human sensemaking. By understanding computed, structural relationships between entities of different types (e.g., people and locations), users can leverage domain expertise and intuition to determine the importance and relevance of these relationships for tasks, such as intelligence analysis. Biclusters are a potentially desirable method to facilitate this, because they reveal coordinated relationships that can represent meaningful relationships. Bixplorer, a visual analytics prototype, supports interactive exploration of textual datasets in a spatial workspace with biclusters. In this paper, we present results of a study that analyzes how users interact with biclusters to solve an intelligence analysis problem using Bixplorer. We found that biclusters played four principal roles in the analytical process: an effective starting point for analysis, a revealer of two levels of connections, an indicator of potentially important entities, and a useful label for clusters of organized information.
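As a small illustration of what a bicluster is in this setting, the sketch below enumerates maximal groups of people who all co-occur with the same set of locations, i.e. coordinated relationships across two entity types. The toy data and the brute-force closed-group enumeration are assumptions for illustration; they are not Bixplorer's data model or algorithm.

```python
# Illustrative bicluster (biclique) enumeration over person-location
# co-occurrences (toy data; not Bixplorer's algorithm).
from itertools import combinations

# person -> locations that person co-occurs with in the document collection.
cooccurrence = {
    "Abe":  {"Cairo", "Lagos", "Oslo"},
    "Bea":  {"Cairo", "Lagos"},
    "Cal":  {"Cairo", "Lagos", "Kiev"},
    "Dina": {"Oslo", "Kiev"},
}

def biclusters(cooc, min_people=2, min_places=2):
    """Yield maximal (people, shared_locations) pairs: every listed person
    co-occurs with every listed location."""
    people = list(cooc)
    seen = set()
    for size in range(min_people, len(people) + 1):
        for group in combinations(people, size):
            shared = set.intersection(*(cooc[p] for p in group))
            if len(shared) < min_places:
                continue
            # Expand to the maximal person set that shares these locations.
            maximal = tuple(sorted(p for p in people if shared <= cooc[p]))
            key = (maximal, frozenset(shared))
            if key not in seen:
                seen.add(key)
                yield maximal, sorted(shared)

for group, places in biclusters(cooccurrence):
    print(group, "->", places)      # e.g. ('Abe', 'Bea', 'Cal') -> ['Cairo', 'Lagos']
```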
Citations: 20