
Proceedings of the International Conference on Advanced Visual Interfaces: Latest Publications

Circles of Affordance: Proposal for a diagnostic tool to support usability studies
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399719
R. Spence, Leah Redmond
We propose, for interactive systems, a representation that is potentially useful as a diagnostic tool. It is based on the concept of affordances that can be offered to and deployed by a user. The proposal is illustrated by reference to an interface designed for a smartphone app that allows a person with Type-1 diabetes to self-manage their condition.
Citations: 0
SelfLens
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399941
Giulio Galesi, Luciano Giunipero, B. Leporini, Gianni Verdi
Independently selecting food items while shopping, or correctly storing and cooking food items, can be a very difficult task for people with special needs. Product labels on food packaging contain an ever-increasing amount of information, which may also appear in a variety of languages. The amount of information and the typographic features of the text can make labels difficult or impossible to read, particularly for people with visual impairments or for the elderly. Several tools or applications are available on the market or have been proposed to support this type of activity (e.g. barcode or QR code reading), but they are limited and may require the user to have specific digital skills. Moreover, repeatedly using an application to read label contents can require numerous steps on a touch screen and is consequently time-consuming. In this work, a portable tool is proposed to support people in reading the contents of labels and acquiring additional information while they are using the item at home or shopping at the supermarket. The aim of our study is to propose a simple portable assistive-technology tool that 1) can be used by anyone, regardless of their personal digital skills, 2) does not require a smartphone or complex device, and 3) is a low-cost solution for the user.
{"title":"SelfLens","authors":"Giulio Galesi, Luciano Giunipero, B. Leporini, Gianni Verdi","doi":"10.1145/3399715.3399941","DOIUrl":"https://doi.org/10.1145/3399715.3399941","url":null,"abstract":"Independently selecting food items while shopping, or storing and cooking food items correctly can be a very difficult task for people with special needs. Product labels on food packaging contain an ever-increasing amount of information, which can also be in a variety of languages. The amount of information and also the features of the text can make it difficult or impossible to read, in particular for those with visual impairments or the elderly. Several tools or applications are available on the market or have been proposed to support this type of activity (e.g. barcode or QR code reading), but they are limited and may require the user to have specific digital skills. Moreover, repeatedly using an application to read the label contents can require numerous steps on a touch-screen, and consequently be time-consuming. In this work, a portable tool is proposed to support people in reading the contents of labels and acquiring additional information, while they are using the item at home or shopping at the supermarket. The aim of our study is to propose a simple portable assistive technology tool which 1) can be used by anyone, regardless of their digital personal skills 2) does not require a smartphone or complex device, 3) is a low-cost solution for the user.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128137936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
TeMoCo-Doc: A visualization for supporting temporal and contextual analysis of dialogues and associated documents
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399956
Shane Sheehan, S. Luz, Pierre Albert, M. Masoodian
A common task in a number of application areas is to create textual documents based on recorded audio data. Visualizations designed to support such tasks require linking temporal audio data with contextual data contained in the resulting documents. In this paper, we present a tool for the visualization of temporal and contextual links between recorded dialogues and their summary documents.
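To make the kind of linking the tool relies on concrete, the sketch below shows one possible data model connecting timestamped dialogue segments to the document sections they support. The interfaces, field names, and lookup function are illustrative assumptions, not TeMoCo-Doc's actual implementation.

```typescript
// Hypothetical data model for temporal-contextual links between a recorded
// dialogue and its summary document; names and fields are illustrative.

interface DialogueSegment {
  id: string;
  startSec: number;   // position of the segment in the recorded audio
  endSec: number;
  speaker: string;
  topic: string;      // contextual label shared with the document
}

interface DocumentSection {
  heading: string;
  text: string;
  linkedSegmentIds: string[];  // segments that informed this section
}

// Return the audio segments linked to a given summary section,
// ordered by their position on the timeline.
function segmentsForSection(
  section: DocumentSection,
  segments: DialogueSegment[]
): DialogueSegment[] {
  const wanted = new Set(section.linkedSegmentIds);
  return segments
    .filter(s => wanted.has(s.id))
    .sort((a, b) => a.startSec - b.startSec);
}
```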
Citations: 2
The CrazySquare solution: a gamified ICT tool to support the musical learning in pre-adolescents
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399943
Carlo Centofanti, Alessandro D'errico, F. Caruso, Sara Peretti
In this paper, we present the current prototype of the CrazySquare Project, which aims to provide a gamified ICT (Information and Communications Technology) solution for musical education. The project is inspired by Gordon's Musical Learning Theory. It is dedicated to the guitar, since it is one of the most widely played instruments in Italian middle schools. The TPACK (Technological Pedagogical Content Knowledge) framework has been used as a way to effectively integrate technology into teaching activities. Moreover, the CrazySquare project follows an iterative process based on the TEL-oriented UCD approach. Currently, after carrying out an expert-based evaluation with several domain experts, we are designing the user-based evaluation phase that will conclude the second iteration.
Citations: 0
Preserving Contextual Awareness during Selection of Moving Targets in Animated Stream Visualizations
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399832
E. Ragan, Andrew Pachuilo, J. Goodall, F. Bacim
In many types of dynamic interactive visualizations, it is often desired to interact with moving objects. Stopping moving objects can make selection easier, but pausing animated content can disrupt perception and understanding of the visualization. To address such problems, we explore selection techniques that only pause a subset of all moving targets in the visualization. We present various designs for controlling pause regions based on cursor trajectory or cursor position. We then report a dual-task experiment that evaluates how different techniques affect both target selection performance and contextual awareness of the visualization. Our findings indicate that all pause techniques significantly improved selection performance as compared to the baseline method without pause, but the results also show that pausing the entire visualization can interfere with contextual awareness. However, the problem with reduced contextual awareness was not observed with our new techniques that only pause a limited region of the visualization. Thus, our research provides evidence that region-limited pause techniques can retain the advantages of selection in dynamic visualizations without imposing a negative effect on contextual awareness.
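As an illustration of the region-limited idea, the following sketch pauses only the targets within a fixed radius of the cursor while the rest keep moving. The target model and the radius value are assumptions made for illustration, not the actual designs evaluated in the paper (which also include trajectory-based regions).

```typescript
// Minimal sketch of a cursor-position-based pause region over a simple
// 2D target model; values and structure are illustrative only.

interface MovingTarget {
  x: number;
  y: number;
  vx: number;   // velocity in px per second
  vy: number;
  paused: boolean;
}

const PAUSE_RADIUS = 80; // px around the cursor; illustrative value

// Pause only the targets inside the region around the cursor and advance
// the rest, so most of the animation keeps conveying context.
function step(
  targets: MovingTarget[],
  cursorX: number,
  cursorY: number,
  dt: number
): void {
  for (const t of targets) {
    const dist = Math.hypot(t.x - cursorX, t.y - cursorY);
    t.paused = dist <= PAUSE_RADIUS;
    if (!t.paused) {
      t.x += t.vx * dt;
      t.y += t.vy * dt;
    }
  }
}
```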
Citations: 1
V-DOOR: A Real-Time Virtual Dressing Room Application Using Oculus Rift
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399959
Silvestro V. Veneruso, T. Catarci, Lauren S. Ferro, Andrea Marrella, Massimo Mecella
In recent years, with its increasing accessibility, online shopping for clothing has grown in popularity. Virtual Dressing Rooms (VDRs) represent an effective way to offer the ability to "try before buying", thus removing an important obstacle to online shopping. While most of the VDR tools realized so far are based on Augmented Reality and are installed directly inside retail shops, this paper proposes a real-time VDR application called V-DOOR that leverages the features of the Oculus Rift to create an immersive experience, enabling customers to try on clothes virtually in the comfort of their own home rather than physically in the retail shop.
Citations: 1
Wearable Interfaces and Advanced Sensors to Enhance Firefighters Safety in Forest Fires
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399961
Pietro Battistoni, M. D. Gregorio, Domenico Giordano, M. Sebillo, G. Tortora, G. Vitiello
Forest fires represent a social emergency that requires significant economic and organizational commitment. Firefighter safety, and in particular the lack of reliable and timely localization of firefighters, is a major problem. In this paper, we present Karya Advanced Sensor, an automatic, accurate, and reliable IT solution able to locate firefighters in harsh environments and to support decision-making activities in control rooms. The system consists of sensors fully integrated into firefighters' uniforms, which are used to monitor in real time both individual operators' activities and the entire fire area. In particular, if a firefighter gets injured, the system activates the rescue teams quickly, as there is a constant link between the firefighters and the medical assistance. The firefighter can also specify the reason for the accident, which is critical information for a more timely and appropriate health intervention. Moreover, the system is able to perform automatic real-time mapping of forest fires and possibly estimate their propagation rate, providing valuable support to control rooms, which are the center of team coordination.
Citations: 1
A Playful Citizen Science Tool for Casual Users
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399937
Risa Kimura, Keren Jiang, Di Zhang, T. Nakajima
We present a playful citizen science tool that lets casual users explore protein docking through dance-like body actions. To make the tool more attractive to casual users, it offers a social watching functionality based on a virtual reality platform that presents multiple people's visual perspectives in a virtual space. We also report some preliminary insights gained from our current tool.
Citations: 0
A Visual Environment for End-User Creation of IoT Customization Rules with Recommendation Support
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399833
Andrea Mattioli, F. Paternò
Personalization rules based on the trigger-action paradigm have recently garnered increasing interest in Internet of Things (IoT) applications. However, composing trigger-action rules can be a challenging task for end users, especially when the rules' complexity increases. Users have to decide about various aspects: which triggers and actions to select, how to combine multiple triggers or actions, and whether some previously defined rule can help in the composition process. We propose a visual environment, Block Rule Composer, to address these problems. It consists of a tool for creating rules based on visual blocks, integrated with recommendation techniques in order to provide intelligent support during rule creation. We also report on a first test which provided positive indications and suggestions for further design improvements.
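For readers unfamiliar with the paradigm, the sketch below shows what a trigger-action rule of the kind end users compose might look like. The rule shape, combination operator, and device names are illustrative assumptions, not the Block Rule Composer's actual data model.

```typescript
// Illustrative trigger-action rule structure; not the tool's real schema.

interface Trigger {
  device: string;                   // e.g. "living-room-sensor"
  property: string;                 // e.g. "temperature"
  operator: ">" | "<" | "==";
  value: number | string | boolean;
}

interface Action {
  device: string;                   // e.g. "thermostat"
  command: string;                  // e.g. "setTarget"
  argument?: number | string;
}

interface Rule {
  combine: "all" | "any";           // how multiple triggers are combined
  triggers: Trigger[];
  actions: Action[];
}

// Example rule: if the temperature drops below 18 degrees AND someone is
// home, set the thermostat to 21 degrees.
const rule: Rule = {
  combine: "all",
  triggers: [
    { device: "living-room-sensor", property: "temperature", operator: "<", value: 18 },
    { device: "presence-sensor", property: "occupied", operator: "==", value: true },
  ],
  actions: [
    { device: "thermostat", command: "setTarget", argument: 21 },
  ],
};
```

A rule of this shape makes the composition challenges the abstract mentions tangible: the user must pick triggers and actions, choose between "all" and "any" combination, and could benefit from recommendations drawn from previously defined rules.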
Citations: 12
ParVis
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399853
G. Costagliola, Mattia De Rosa, V. Fuccella, Mark Minas
In this paper, we present ParVis, an interactive visual system for the animated visualization of logged parser trace executions. The system allows a parser implementer to create a visualizer for generated parsers by simply defining a JavaScript module that maps each logged parser instruction into a set of events driving the visual system interface. The result is a set of interacting graphical/text windows that allows users to explore logged parser executions and helps them gain a complete understanding of how the parser behaves during its execution on a given input. We used our system to visualize the behavior of textual as well as visual parsers, and we describe here its use with the well-known CUP parser generator. Preliminary tests with users have provided good feedback on its use.
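As a rough illustration of the mapping module the abstract describes, the sketch below (written in TypeScript rather than plain JavaScript) converts individual trace lines into visualization events. The function name, event fields, and trace-line format are assumptions made for illustration, not ParVis's actual API or CUP's exact log syntax.

```typescript
// Hypothetical shape of the user-supplied mapping module: each logged
// parser instruction is turned into events that drive the visualization.

interface VisEvent {
  kind: "shift" | "reduce" | "error" | "other";
  description: string;   // text shown in the trace window
  stateId?: number;      // state to highlight, if one can be extracted
}

// Map one logged parser instruction (one line of the trace) to events.
function mapLogLine(line: string): VisEvent[] {
  // Assumed CUP-style trace lines such as "Shift under term #2 to state #7".
  const shift = line.match(/Shift.*state #(\d+)/);
  if (shift) {
    return [{ kind: "shift", description: line, stateId: Number(shift[1]) }];
  }
  const reduce = line.match(/Reduce with prod(?:uction)? #(\d+)/);
  if (reduce) {
    return [{ kind: "reduce", description: line }];
  }
  if (/error/i.test(line)) {
    return [{ kind: "error", description: line }];
  }
  return [{ kind: "other", description: line }];
}
```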
{"title":"ParVis","authors":"G. Costagliola, Mattia De Rosa, V. Fuccella, Mark Minas","doi":"10.1145/3399715.3399853","DOIUrl":"https://doi.org/10.1145/3399715.3399853","url":null,"abstract":"In this paper, we present ParVis, an interactive visual system for the animated visualization of logged parser trace executions. The system allows a parser implementer to create a visualizer for generated parsers by simply defining a JavaScript module that maps each logged parser instruction into a set of events driving the visual system interface. The result is a set of interacting graphical/text windows that allows users to explore logged parser executions and helps them to have a complete understanding of how the parser behaves during its execution on a given input. We used our system to visualize the behavior of textual as well as visual parsers and describe here its use with the well known CUP parser generator. Preliminary tests with users have provided good feedback on its use.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116550279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1