
26th International Conference on Intelligent User Interfaces - Companion: Latest Publications

COVID19α: Interactive Spatio-Temporal Visualization of COVID-19 Symptoms through Tweet Analysis
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450715
Biddut Sarker Bijoy, Syeda Jannatus Saba, Souvik Sarkar, Md. Saiful Islam, Sheikh Rabiul Islam, M. R. Amin, Shubhra (Santu) Karmaker
In this demo, we focus on analyzing COVID-19-related symptoms reported through tweets across the globe by building an interactive spatio-temporal visualization tool, COVID19α. Using around 462 million tweets collected over a span of six months, COVID19α provides three types of visualization: 1) Spatial Visualization, focused on visualizing COVID-19 symptoms across different geographic locations; 2) Temporal Visualization, focused on visualizing the evolution of COVID-19 symptoms over time for a particular geographic location; and 3) Spatio-Temporal Visualization, which combines spatial and temporal analysis to provide comparative visualizations of two (or more) symptoms across time and space. We believe that health professionals, scientists, and policymakers will be able to leverage this interactive tool to devise better, targeted health intervention policies. Our interactive visualization tool is publicly available at https://bijoy-sust.github.io/Covid19/.
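The abstract does not spell out the aggregation behind the three views, but each reduces to grouping symptom mentions by time and/or location. A minimal pandas sketch of that idea, with illustrative column names and toy data standing in for the tweet corpus:

```python
# A minimal sketch of spatio-temporal symptom aggregation in the spirit of
# COVID19α. Column names and the toy data are illustrative assumptions, not
# the paper's actual pipeline.
import pandas as pd

tweets = pd.DataFrame({
    "date": pd.to_datetime(["2020-04-01", "2020-04-01", "2020-04-08", "2020-04-08"]),
    "country": ["US", "IN", "US", "IN"],
    "symptom": ["cough", "fever", "fever", "cough"],
})

# Temporal view: weekly counts of one symptom in one location.
us_fever = (tweets[(tweets.country == "US") & (tweets.symptom == "fever")]
            .resample("W", on="date").size())

# Spatio-temporal view: compare symptoms across countries over time.
pivot = (tweets.groupby([pd.Grouper(key="date", freq="W"), "country", "symptom"])
         .size().unstack(["country", "symptom"], fill_value=0))
print(us_fever, pivot, sep="\n\n")
```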
Citations: 7
Back-end semantics for multimodal dialog on XR devices
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450719
P. Poller, Margarita Chikobava, Jack Hodges, Mareike Kritzler, F. Michahelles, Tilman Becker
Extended Reality (XR) devices have great potential to become the next wave in mobile interaction. They provide powerful, easy-to-use Augmented Reality (AR) and/or Mixed Reality (MR) in conjunction with multimodal interaction facilities using gaze, gesture, and speech. However, current implementations typically lack a coherent semantic representation of the virtual elements, back-end communication, and dialog capabilities; existing devices are often restricted to mere command-and-control interactions. To address these shortcomings and realize enhanced system capabilities and comprehensive interactivity, we have developed a flexible modular approach that integrates powerful back-end platforms through standard API interfaces. As a concrete example, we present our distributed implementation of a multimodal dialog system on the Microsoft HoloLens®. It uses the SiAM-dp multimodal dialog platform as a back-end service and an Open Semantic Framework (OSF) back-end server to extract the semantic models for creating the dialog domain model.
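As a rough illustration of what a coherent semantic representation of a multimodal input event might look like on the wire between an XR client and a dialog back-end, consider this sketch; the actual SiAM-dp and OSF message formats are not given in the abstract, so every field name below is an assumption:

```python
# Illustrative only: one way a client could serialize a multimodal input
# event with a coherent semantic representation for a dialog back-end.
# The actual SiAM-dp/OSF message formats are not given in the abstract,
# so every field name here is an assumption.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MultimodalEvent:
    modality: str             # "speech", "gesture", or "gaze"
    utterance: Optional[str]  # recognized speech, if any
    target_entity: str        # semantic ID of the virtual element in focus
    confidence: float

event = MultimodalEvent("speech", "turn on the pump", "plant:pump_3", 0.92)
payload = json.dumps(asdict(event))  # body for a hypothetical POST /dialog/events
print(payload)
```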
Citations: 2
Healthy Interfaces (HEALTHI) Workshop
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450710
Michael Sobolev, Katrin Hänsel, Tanzeem Choudhury
The first workshop on Healthy Interfaces (HEALTHI), co-located with the 2021 ACM Intelligent User Interfaces (IUI) conference, offers a forum that brings academic and industry researchers together and seeks submissions broadly related to the design of healthy user interfaces. The workshop will discuss intelligent user interfaces such as screens, wearables, voice assistants, and chatbots in the context of supporting health, health behavior, and wellbeing.
Citations: 1
Fifth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory: Summary
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450708
Mark P. Graus, B. Ferwerda, M. Tkalcic, Panagiotis Germanakos
The fifth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory took place in conjunction with the 26th annual meeting of the Intelligent User Interfaces (IUI) community in Texas, USA, on April 17, 2021. The workshop provided a venue for researchers from different fields to interact, accepting contributions at the intersection of practical data mining methods and theoretical knowledge for personalization. A total of five papers were accepted for this edition of the workshop.
Citations: 1
SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE)
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450709
F. Agrusti, Fabio Gasparetti, Cristina Gena, G. Sansonetti, M. Tkalcic
This is the first edition of the SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE) workshop. The main goal is to bring together all those interested in the development of interactive techniques that may contribute to fostering the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, and autistic and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at the development of social robots, that is, autonomous robots that interact with people by exhibiting social-affective behaviors, abilities, and rules related to their collaborative role.
Citations: 1
VisRec: A Hands-on Tutorial on Deep Learning for Visual Recommender Systems
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450620
Denis Parra, Antonio Ossa-Guerra, Manuel Cartagena, Patricio Cerda-Mardini, Felipe del-Rio
This tutorial serves as an introduction to deep learning approaches for building visual recommender systems. Deep learning models can be used as feature extractors and perform extremely well at creating representations of visual items in recommender systems. This tutorial covers the foundations of convolutional neural networks and then shows how to use them to build state-of-the-art personalized recommendation systems. The tutorial is designed as a hands-on experience, focused on providing both theoretical knowledge and practical experience on the topics of the course.
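To make the feature-extractor idea concrete, here is a minimal PyTorch sketch, not taken from the tutorial materials: a pretrained CNN backbone produces item embeddings, and visually similar items are recommended by cosine similarity. Random tensors stand in for real item images.

```python
# A minimal sketch of the core idea (not the tutorial's actual code): use a
# pretrained CNN as a feature extractor and recommend visually similar items
# by cosine similarity. Random tensors stand in for real item images.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier; keep 512-d features
backbone.eval()

items = torch.rand(10, 3, 224, 224)  # stand-in for 10 item images
with torch.no_grad():
    feats = torch.nn.functional.normalize(backbone(items), dim=1)

query = feats[0]
scores = feats @ query               # cosine similarity of unit vectors
top = scores.topk(4).indices[1:]     # most similar items, excluding the query
print(top)
```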
Citations: 2
SynZ: Enhanced Synthetic Dataset for Training UI Element Detectors
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450725
Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
User Interface (UI) prototyping is an iterative process where designers initially sketch UIs before transforming them into interactive digital designs. Recent research applies Deep Neural Networks (DNNs) to identify the constituent UI elements of these UI sketches and transform these sketches into front-end code. Training such DNN models requires a large-scale dataset of UI sketches, which is time-consuming and expensive to collect. Therefore, we earlier proposed Syn to generate UI sketches synthetically by random allocation of UI element sketches. However, these UI sketches are not statistically similar to real-life UI screens. To bridge this gap, in this paper, we introduce the SynZ dataset, which contains 175,377 synthetically generated UI sketches statistically similar to real-life UI screens. To generate SynZ, we analyzed, enhanced, and extracted annotations from the RICO dataset and used 17,979 hand-drawn UI element sketches from the UISketch dataset. Further, we fine-tuned a UI element detector with SynZ and observed that it doubles the mean Average Precision of UI element detection compared to the Syn dataset.
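For intuition, the sketch below shows the general shape of synthetic annotation generation by random allocation, the approach the abstract attributes to Syn; SynZ additionally matches the layout statistics of real RICO screens, which this toy version does not attempt.

```python
# Toy sketch of synthetic annotation generation by random allocation, the
# approach the abstract attributes to Syn. SynZ additionally matches the
# layout statistics of real RICO screens, which this version does not
# attempt; all constants and class names are illustrative.
import random

CANVAS_W, CANVAS_H = 360, 640
CLASSES = ["button", "text_field", "image", "checkbox"]

def synth_screen(n_elements=5, seed=None):
    rng = random.Random(seed)
    annotations = []
    for _ in range(n_elements):
        w, h = rng.randint(40, 200), rng.randint(20, 80)
        x, y = rng.randint(0, CANVAS_W - w), rng.randint(0, CANVAS_H - h)
        annotations.append({"class": rng.choice(CLASSES),
                            "bbox": [x, y, x + w, y + h]})  # [x1, y1, x2, y2]
    return annotations

print(synth_screen(seed=42))
```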
Citations: 4
LectYS: A System for Summarizing Lecture Videos on YouTube
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450722
Taewon Yoo, Hyewon Jeong, Donghwan Lee, Hyunggu Jung
Student use of online resources such as online classes and YouTube is increasing. Still, it remains challenging for students to find the right lecture video online at the right time. Multiple video search methods have been proposed, but to our knowledge, no previous study has proposed a system that summarizes YouTube lecture videos using subtitles. This demo proposes LectYS, a system for summarizing lecture videos on YouTube to support students in searching for lecture video content. The key features of our proposed system are: (1) summarizing a lecture video using its subtitles, (2) jumping to specific parts of the video using subtitle start times, and (3) searching for videos by keyword. Using LectYS, students can search for lecture videos on YouTube faster and more accurately.
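The abstract says LectYS summarizes via subtitles and exposes subtitle start times for navigation, but not how segments are scored; the sketch below uses a simple word-frequency heuristic as a stand-in:

```python
# Rough sketch of subtitle-based summarization: rank subtitle segments by a
# word-frequency score and keep their start times so a viewer can jump to
# that point. LectYS's actual scoring method is not specified in the
# abstract; this heuristic is an assumption.
from collections import Counter

subtitles = [  # (start time in seconds, text), as in YouTube captions
    (12.0, "today we cover gradient descent"),
    (95.5, "gradient descent updates the weights iteratively"),
    (230.0, "now a quick administrative announcement"),
]

freq = Counter(w for _, text in subtitles for w in text.lower().split())

def score(text):
    words = text.lower().split()
    return sum(freq[w] for w in words) / len(words)

summary = sorted(subtitles, key=lambda s: score(s[1]), reverse=True)[:2]
for start, text in sorted(summary):  # re-sort chronologically for display
    print(f"{start:7.1f}s  {text}")
```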
Citations: 5
Akin: Generating UI Wireframes From UI Design Patterns Using Deep Learning
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450727
Nishit Gajjar, Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
During the user interface (UI) design process, designers use UI design patterns to conceptualize different UI wireframes for an application. This paper introduces Akin, a UI wireframe generator that allows designers to choose a UI design pattern and provides them with multiple UI wireframes for that pattern. Akin uses a fine-tuned Self-Attention Generative Adversarial Network trained with 500 UI wireframes of 5 Android UI design patterns. Upon evaluation, Akin's generative model achieves an Inception Score of 1.63 (SD = 0.34) and a Fréchet Inception Distance of 297.19. We further conducted user studies with 15 UI/UX designers to evaluate the quality of Akin-generated UI wireframes. The results show that UI/UX designers considered wireframes generated by Akin to be as good as wireframes made by designers; moreover, designers identified Akin-generated wireframes as designer-made 50% of the time. This paper establishes a baseline metric for further research in UI wireframe generation.
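The Fréchet Inception Distance reported above is the Fréchet distance between Gaussians fitted to Inception features of real and generated wireframes; a small sketch of that computation, with random vectors standing in for the Inception activations:

```python
# Sketch of the Fréchet Inception Distance used to evaluate Akin: the
# Fréchet distance between Gaussians fitted to Inception features of real
# and generated wireframes. Random vectors stand in for the Inception
# activations here.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):  # sqrtm can pick up tiny imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum()
                 + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(size=(256, 64)), rng.normal(size=(256, 64))))
```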
Citations: 5
ModelGenGUIs – High-level Interaction Design with Discourse Models for Automated GUI Generation
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450619
H. Kaindl
Since manual creation of user interfaces is hard and expensive, automated generation may become more and more important in the future. Instead of generating UIs from simple abstractions, transforming them from high-level models should be more attractive. In particular, we let an interaction designer model discourses in the sense of dialogues (supported by a tool), inspired by human-human communication. This tutorial presents our approach, covering both its advantages and its challenges (e.g., in terms of the usability of generated UIs). In particular, it highlights our approach to optimization for a given device (e.g., a smartphone), which applies Artificial Intelligence (AI) techniques, as well as ontology-based techniques for automated GUI generation and customization. We also address low-vision accessibility of Web pages by combining automated design-time generation of Web pages with responsive design for improving accessibility.
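As a loose illustration of model-driven GUI generation, the sketch below maps a toy discourse model to concrete widgets; the real ModelGenGUIs discourse metamodel and transformation rules are far richer, and every name here is made up:

```python
# Highly simplified illustration of model-driven GUI generation: a tiny
# discourse model of dialogue acts is transformed into concrete widgets.
# The real ModelGenGUIs discourse metamodel and transformation rules are
# far richer; every name below is made up.
discourse = [
    {"act": "question", "slot": "destination", "answer_type": "city"},
    {"act": "question", "slot": "passengers",  "answer_type": "int"},
    {"act": "offer",    "slot": "confirm",     "answer_type": "bool"},
]

WIDGET_RULES = {"city": "ComboBox", "int": "Spinner", "bool": "ToggleButton"}

def generate_gui(model, device="smartphone"):
    # A real generator would optimize layout per target device; we only tag it.
    return [{"widget": WIDGET_RULES[turn["answer_type"]],
             "label": turn["slot"].capitalize(),
             "device": device}
            for turn in model]

for widget in generate_gui(discourse):
    print(widget)
```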
Citations: 0