
Latest Publications: IUI. International Conference on Intelligent User Interfaces

IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022
Pub Date : 2022-01-01 DOI: 10.1145/3490099
{"title":"IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022","authors":"","doi":"10.1145/3490099","DOIUrl":"https://doi.org/10.1145/3490099","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78970867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Employing Social Media to Improve Mental Health: Pitfalls, Lessons Learned, and the Next Frontier
Pub Date : 2022-01-01 DOI: 10.1145/3490099.3519389
M. Choudhury
{"title":"Employing Social Media to Improve Mental Health: Pitfalls, Lessons Learned, and the Next Frontier","authors":"M. Choudhury","doi":"10.1145/3490099.3519389","DOIUrl":"https://doi.org/10.1145/3490099.3519389","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91448305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021
Pub Date : 2021-01-01 DOI: 10.1145/3397481
{"title":"IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021","authors":"","doi":"10.1145/3397481","DOIUrl":"https://doi.org/10.1145/3397481","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81904467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Making Videos Accessible for Low Vision Screen Magnifier Users.
Pub Date : 2020-03-01 DOI: 10.1145/3377325.3377494
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan

People with low vision who use screen magnifiers to interact with computing devices find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the luxury of time to manually move, i.e., pan the magnifier lens to different regions of interest (ROIs) or zoom into these ROIs before the content changes across frames. In this paper, we present SViM, a first-of-its-kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface lets users zoom in or out of any point of interest and switch between ROIs via mouse clicks, and provides assistive panning with the added flexibility of letting the user explore regions of the video beyond the ROIs identified by SViM. Subjective and objective evaluation of a user study with 13 low vision screen magnifier users revealed that overall the participants had a better user experience with SViM than with extant screen magnifiers, indicative of the former's promise and potential for making videos accessible to low vision screen magnifier users.
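To make the saliency-driven interaction concrete, here is a minimal Python sketch, not the SViM implementation: it assumes a per-frame saliency map produced by some external saliency model and simply crops and upscales the most salient window, which is the core of an ROI-based magnified view.

```python
# Minimal sketch (not SViM's code): pick the most salient window of a frame
# and return a magnified crop of it. `saliency` is assumed to come from an
# external video-saliency model as an HxW array in [0, 1].
import numpy as np

def top_roi(saliency: np.ndarray, win: int = 64) -> tuple:
    """Top-left corner of the win x win window with the highest summed saliency."""
    h, w = saliency.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(0, h - win + 1, win // 2):
        for c in range(0, w - win + 1, win // 2):
            s = float(saliency[r:r + win, c:c + win].sum())
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc

def magnified_view(frame: np.ndarray, saliency: np.ndarray,
                   zoom: int = 4, win: int = 64) -> np.ndarray:
    """Crop the most salient window and upscale it by `zoom` (nearest neighbour)."""
    r, c = top_roi(saliency, win)
    crop = frame[r:r + win, c:c + win]
    return crop.repeat(zoom, axis=0).repeat(zoom, axis=1)
```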

Citations: 0
SaIL: Saliency-Driven Injection of ARIA Landmarks.
Pub Date : 2020-03-01 DOI: 10.1145/3377325.3377540
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan

Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of the webpage quickly, by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are used sporadically and inconsistently by web developers, and are entirely absent from many web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page, and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to these sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model that was trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated the potential of SaIL in reducing both the time and effort spent in navigating webpages with screen readers.
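The landmark-injection step can be illustrated with a short sketch. This is not the SaIL code; it assumes the saliency model has already produced CSS selectors for the important sections and simply adds role="region" landmarks with accessible names using BeautifulSoup.

```python
# Minimal sketch (not the SaIL implementation): given selectors for sections
# predicted to be salient, inject ARIA landmark roles into the HTML so that
# screen-reader users can jump straight to them.
from bs4 import BeautifulSoup

def inject_landmarks(html: str, salient_selectors: list) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for i, selector in enumerate(salient_selectors, start=1):
        for el in soup.select(selector):
            # role="region" plus an accessible name makes the element a landmark.
            el["role"] = "region"
            if "aria-label" not in el.attrs:
                el["aria-label"] = f"Important section {i}"
    return str(soup)

page = '<div id="news"><h2>Headlines</h2>...</div><div id="ads">...</div>'
print(inject_landmarks(page, ["#news"]))
```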

Citations: 11
Scene Text Access: A Comparison of Mobile OCR Modalities for Blind Users.
Pub Date : 2019-03-01 DOI: 10.1145/3301275.3302271
Leo Neat, Ren Peng, Siyang Qin, Roberto Manduchi

We present a study with seven blind participants using three different mobile OCR apps to find text posted in various indoor environments. The first app considered was Microsoft SeeingAI in its Short Text mode, which reads any text in sight with a minimalistic interface. The second app was Spot+OCR, a custom application that separates the task of text detection from OCR proper. Upon detection of text in the image, Spot+OCR generates a short vibration; as soon as the user stabilizes the phone, a high-resolution snapshot is taken and OCR-processed. The third app, Guided OCR, was designed to guide the user in taking several pictures in a 360° span at the maximum resolution available from the camera, with minimal overlap between pictures. Quantitative results (in terms of true positive ratios and traversal speed) were recorded. Along with qualitative observations and outcomes from an exit survey, these results allow us to identify and assess the different strategies used by our participants, as well as the challenges of operating these systems without sight.
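As a rough illustration of the Spot+OCR interaction described above, the sketch below expresses the detect-vibrate-stabilize-capture flow. Every callable it takes (camera, detect_text_regions, vibrate, is_stable, run_ocr) is a hypothetical placeholder for the platform's camera, haptics, motion and OCR APIs, not the actual app code.

```python
# Minimal sketch of the assumed Spot+OCR flow: cheap text detection on the
# preview stream, a vibration cue, then a high-resolution capture + OCR once
# the phone is held still. All callables are hypothetical placeholders.
import time

def spot_plus_ocr_loop(camera, detect_text_regions, vibrate, is_stable, run_ocr):
    while True:
        frame = camera.preview_frame()
        if not detect_text_regions(frame):      # keep scanning until text appears
            continue
        vibrate(duration_ms=100)                # short vibration: text is in view
        while not is_stable():                  # wait until the phone stops moving
            time.sleep(0.05)
        photo = camera.capture_high_res()       # full-resolution snapshot
        return run_ocr(photo)                   # OCR only the stabilized photo
```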

Citations: 12
Towards a Generalizable Method for Detecting Fluid Intake with Wrist-Mounted Sensors and Adaptive Segmentation.
Pub Date : 2019-03-01 DOI: 10.1145/3301275.3302315
Keum San Chun, Ashley B Sanders, Rebecca Adaimi, Necole Streeper, David E Conroy, Edison Thomaz

Over the last decade, advances in mobile technologies have enabled the development of intelligent systems that attempt to recognize and model a variety of health-related human behaviors. While automated dietary monitoring based on passive sensors has been an area of increasing research activity for many years, much less attention has been given to tracking fluid intake. In this work, we apply an adaptive segmentation technique to a continuous stream of inertial data captured with a practical, off-the-shelf wrist-mounted device to detect fluid intake gestures passively. We evaluated our approach in a study with 30 participants in which 561 drinking instances were recorded. Using a leave-one-participant-out (LOPO) evaluation, we were able to detect drinking episodes with 90.3% precision and 91.0% recall, demonstrating the generalizability of our approach. In addition to our proposed method, we also contribute an anonymized and labeled dataset of drinking and non-drinking gestures to encourage further work in the field.
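The leave-one-participant-out evaluation can be sketched as follows; this is not the authors' pipeline, and the random arrays merely stand in for the windowed wrist-sensor features and drinking labels.

```python
# Minimal sketch (not the authors' pipeline): leave-one-participant-out
# evaluation of a drinking-gesture classifier. Random data stands in for
# real windowed inertial features; `groups` holds participant ids.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))              # per-segment features
y = rng.integers(0, 2, size=600)            # 1 = drinking gesture
groups = rng.integers(0, 30, size=600)      # 30 participants

preds, truth = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    preds.extend(clf.predict(X[test_idx]))
    truth.extend(y[test_idx])

print("precision:", precision_score(truth, preds))
print("recall:   ", recall_score(truth, preds))
```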

Citations: 16
Providing Adaptive and Personalized Visual Support based on Behavioral Tracking of Children with Autism for Assessing Reciprocity and Coordination Skills in a Joint Attention Training Application
Pub Date : 2018-03-05 DOI: 10.1145/3180308.3180349
T. Tang, Pinata Winoto
Recent work has demonstrated the applicability of activity and behavioral pattern analysis mechanisms to assist therapists, caregivers and individuals with developmental disorders, including those with autism spectrum disorder (ASD); however, the computational cost and sophistication of such behavioral modeling systems might prevent them from being deployed. As such, in this paper, we propose an easily deployable automatic system to train joint attention (JA) skills, assess the frequency and degree of reciprocity, and provide visual cues accordingly. Our approach differs from most earlier attempts in that we do not rely on sophisticated feature-space construction methodologies; instead, the simple design and in-game automatic data collection for adaptive visual supports offer hassle-free benefits, especially for low-functioning ASD individuals and those with severe verbal impairments.
Citations: 1
A Configurable and Contextually Expandable Interactive Picture Exchange Communication System (PECS) for Chinese Children with Autism
Pub Date : 2018-03-05 DOI: 10.1145/3180308.3180348
T. Tang, Pinata Winoto
Electronic versions of PECS (picture exchange communication system) have been introduced to non-verbal children with autism spectrum disorder (ASD) over the past decade. In this paper, we discuss some related issues and propose the design of a more versatile electronic PECS (ePECS) as a comprehensive language training tool.
Citations: 2
Can We Predict the Scenic Beauty of Locations from Geo-tagged Flickr Images?
Pub Date : 2018-03-05 DOI: 10.1145/3172944.3173000
Ch. Md. Rakin Haider, Mohammed Eunus Ali
In this work, we propose a novel technique to determine the aesthetic score of a location from the social metadata of Flickr photos. In particular, we build machine learning classifiers to predict the class of a location, where each class corresponds to a set of locations with equal aesthetic rating. These models are trained on two empirically built datasets containing locations in two different cities (Rome and Paris), with the aesthetic ratings of locations gathered from TripAdvisor.com. We exploit the idea that at a location with a higher aesthetic rating, a user is more likely to capture a photo and other users are more likely to interact with that photo. Our models achieved up to 79.48% accuracy (78.60% precision and 79.27% recall) on the Rome dataset and 73.78% accuracy (75.62% precision and 78.07% recall) on the Paris dataset. The proposed technique can facilitate urban planning, tour planning and recommending aesthetically pleasing paths.
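A minimal sketch of this kind of classifier is shown below; it is not the paper's model, and the metadata feature layout is an illustrative assumption rather than the paper's feature set.

```python
# Minimal sketch (not the paper's models): predict a location's aesthetic
# rating class from aggregated Flickr social-metadata features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# One row per location, e.g. [photo_count, mean_views, mean_favorites,
# mean_comments, distinct_users] aggregated over its geo-tagged photos.
X = rng.normal(size=(300, 5))
y = rng.integers(1, 6, size=300)            # aesthetic rating class 1..5

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```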
Citations: 1