Title: Engineering mixed-criticality interactive applications
Authors: Camille Fayollas, C. Martinie, D. Navarre, Philippe A. Palanque
DOI: https://doi.org/10.1145/2933242.2933258
In: Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '16), June 21, 2016

Abstract: In the field of critical systems, safety standards such as DO-178C define Development Assurance Levels (DALs) for software systems (or sub-systems). The higher the consequence of a failure, the higher the DAL required by certification authorities. Developing a system at DAL A requires the use of formal description techniques and is thus expensive; for lower DALs, standard software development is accepted. While operating such systems, reaching a given goal might require operators to perform tasks using sub-systems of different DALs. Operations thus take place via mixed-criticality systems developed using several different techniques. To guarantee the effectiveness of the developed systems, it is necessary to ensure the compatibility of the operators' tasks and the system, whatever technique has been used for its development. While DAL identification is outside the scope of the paper, this article presents a task-model-based approach for addressing multiple DALs in mixed-criticality interactive software. The approach proposes a systematic process for engineering mixed-criticality interactive applications, supported by a software modeling and development environment that integrates both formal description techniques and standard software programming techniques. The process and the development environment are illustrated with a case study of a mixed-criticality interactive cockpit application.
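The compatibility check between operator tasks and sub-systems of different DALs can be pictured with a small sketch. This is a hypothetical illustration only (the paper's actual task-model notation and process are not reproduced here); the task and sub-system names, the `compatible` helper, and the DAL ordering table are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's notation): check that every operator
# task in a task model is supported by a sub-system whose DAL is at least as
# stringent as the assurance level the task requires.
DAL_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}  # "A" is most stringent

def compatible(task_requirements, subsystem_dals):
    """task_requirements: {task: (subsystem, required_dal)};
    subsystem_dals: {subsystem: actual_dal}.
    Returns the list of tasks whose sub-system is not assured enough."""
    issues = []
    for task, (subsystem, required) in task_requirements.items():
        actual = subsystem_dals.get(subsystem)
        # A sub-system is acceptable only if its DAL is as or more stringent.
        if actual is None or DAL_ORDER[actual] > DAL_ORDER[required]:
            issues.append(task)
    return issues

# Invented cockpit-flavored example: autopilot engagement needs DAL A,
# a weather application only DAL D.
tasks = {"engage_autopilot": ("fcu", "A"), "check_weather": ("wx_app", "D")}
dals = {"fcu": "A", "wx_app": "C"}
assert compatible(tasks, dals) == []
```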
Title: Rule-enhanced task models for increased expressiveness and compactness
Authors: Werner Gaulke, J. Ziegler
DOI: https://doi.org/10.1145/2933242.2933243

Abstract: User-centered design and development of interactive systems relies on theoretically well-grounded yet practical ways to capture users' goals and intentions. Task models are an established approach for breaking down a central objective into a set of hierarchically organized tasks. While task models provide a good overview of the overall system, they often lack the detail necessary to (semi-)automatically generate user interfaces. Based on requirements derived from a comprehensive overview of existing task model extensions, improvements, and development methods, an approach that integrates logical rules with task models is introduced. Practical examples show that the integration of rules enables advanced execution flows as well as leaner task models, thus improving their practical value.
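The core idea of gating tasks with logical rules can be sketched as follows. This is an illustration under assumed semantics (the paper's actual rule language and task-model formalism may differ); the `Task` class and the checkout scenario are invented for the example.

```python
# Illustrative sketch: a task node whose enabling is decided by a logical rule
# over a shared context, instead of encoding each variant structurally in the
# model -- one model with a rule replaces several near-duplicate models.
class Task:
    def __init__(self, name, rule=None, children=None):
        self.name, self.rule, self.children = name, rule, children or []

    def enabled(self, context):
        # A task without a rule is always enabled; otherwise the rule decides.
        return self.rule is None or self.rule(context)

    def enabled_leaves(self, context):
        """Collect the leaf tasks currently available under this node."""
        if not self.enabled(context):
            return []
        if not self.children:
            return [self.name]
        return [n for c in self.children for n in c.enabled_leaves(context)]

checkout = Task("checkout", children=[
    Task("enter_address"),
    # The rule replaces a structural "business-customer variant" of the model.
    Task("enter_vat_id", rule=lambda ctx: ctx["customer"] == "business"),
])
assert checkout.enabled_leaves({"customer": "private"}) == ["enter_address"]
```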
Title: Users' preference share as a criterion for hierarchical menu optimization
Authors: M. Goubko, A. Varnavsky
DOI: https://doi.org/10.1145/2933242.2935875

Abstract: Recently developed computer-aided design (CAD) tools automate the design of rational hierarchical user menu structures. Proper choice of the optimization criterion is the key success factor for such a CAD tool. We suggest user preference share as a novel metric of menu layout performance; it has clear economic grounds and is meaningful for management. We show how the preference share of a menu layout can be evaluated in laboratory experiments and predicted from the experimental menu navigation time together with menu layout characteristics. Although navigation time is the most important factor, faster does not always mean better: the logical compliance of a menu is also valuable to users.
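A minimal sketch of the idea, assuming a standard multinomial-logit choice model (the authors' exact estimation procedure is not reproduced here): a layout's utility falls with navigation time and rises with its logical compliance, and preference share is the softmax of the utilities. The weights `a` and `b` and the example layouts are assumptions.

```python
import math

def preference_shares(layouts, a=1.0, b=1.0):
    """layouts: {name: (mean_navigation_time, logical_compliance)}.
    Returns predicted preference shares summing to 1 (logit choice model)."""
    utilities = {n: -a * t + b * c for n, (t, c) in layouts.items()}
    z = sum(math.exp(u) for u in utilities.values())
    return {n: math.exp(u) / z for n, u in utilities.items()}

# Invented example: the slower layout wins because it is more logically
# coherent -- "faster" does not automatically mean "better".
shares = preference_shares({"flat_menu": (2.1, 0.4), "deep_menu": (2.4, 0.9)})
assert shares["deep_menu"] > shares["flat_menu"]
```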
Title: ProxemicUI: object-oriented middleware and event model for proxemics-aware applications on large displays
Authors: Mohammed Alnusayri, Gang Hu, Elham Alghamdi, Derek F. Reilly
DOI: https://doi.org/10.1145/2933242.2933252

Abstract: Interpersonal spatial relationships ("proxemics") play an important role when people collaborate or work near one another. While state-of-the-art tracking systems and toolkits have been demonstrated that can provide proxemics data, application developers need the ability to define their own custom proxemics events (i.e., the meaning behind specific spatial configurations) rather than conduct manual tests in the UI layer. At the same time, proxemics-aware applications involving shared displays need mechanisms to tightly integrate UI events and proxemics events. This calls for a middleware framework that allows developers to define proxemics events by composing low-level proxemics data and, optionally, UI events. In this paper, we present a proximity-based event model suited to collaboration around large displays, and a corresponding framework for building proxemics-aware applications. The design of this model is derived from a review of prior work and from direct experience implementing and evaluating a proxemics-aware interactive tabletop application (a museum information kiosk).
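The composition idea can be sketched as follows. This is a hedged illustration, not ProxemicUI's actual API (which the abstract does not show): developers declare custom proxemics events as predicates over low-level tracking data, and the middleware fires callbacks when a predicate holds for the current frame.

```python
# Hypothetical middleware sketch: custom proxemics events are predicates over
# per-frame tracking data; the middleware dispatches callbacks when they hold.
class ProxemicsMiddleware:
    def __init__(self):
        self.rules = []  # list of (predicate over frame data, callback)

    def define_event(self, predicate, callback):
        self.rules.append((predicate, callback))

    def on_frame(self, frame):
        for predicate, callback in self.rules:
            if predicate(frame):
                callback(frame)

fired = []
mw = ProxemicsMiddleware()
# Invented custom event: at least two people within 1.2 m of the display,
# both facing it -- a "pair engaged" spatial configuration.
mw.define_event(
    lambda f: sum(1 for p in f["people"]
                  if p["distance_m"] < 1.2 and p["facing_display"]) >= 2,
    lambda f: fired.append("pair_engaged"),
)
mw.on_frame({"people": [{"distance_m": 0.8, "facing_display": True},
                        {"distance_m": 1.0, "facing_display": True}]})
assert fired == ["pair_engaged"]
```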
Title: PANDA: prototyping using annotation and decision analysis
Authors: Jean-Luc Hak, M. Winckler, D. Navarre
DOI: https://doi.org/10.1145/2933242.2935873

Abstract: Prototyping is a core activity in the User-Centered Design (UCD) process, aimed at supporting iterations of design ideas until all user requirements are met. Although many dedicated prototyping tools exist, we have found that most of them lack features for tracing information that can be useful for driving the evolution of prototypes. In this paper, we present a prototyping tool called PANDA, which has been specifically conceived to investigate features for dealing with the evolution of prototypes. We give an overview of the tool and, more specifically, of its annotation mechanisms, which can be used to record design choices, new requirements, fixes of design typos, and so on.
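The kind of traceability annotations described above could be modeled roughly as below. This is an illustrative data shape only, not PANDA's internal model (which the abstract does not describe); the class names and example content are invented.

```python
# Illustrative sketch: annotations attached to prototype elements record the
# rationale behind each design iteration, so the evolution stays traceable.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    kind: str   # e.g. "design choice", "new requirement", "typo fix"
    text: str

@dataclass
class PrototypeElement:
    name: str
    annotations: list = field(default_factory=list)

    def annotate(self, kind, text):
        self.annotations.append(Annotation(kind, text))

button = PrototypeElement("submit_button")
button.annotate("design choice", "Primary action placed bottom-right.")
button.annotate("new requirement", "Must be reachable by keyboard only.")
assert [a.kind for a in button.annotations] == ["design choice",
                                                "new requirement"]
```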
Title: A design pattern for multimodal and multidevice user interfaces
Authors: A. Carcangiu, G. Fenu, L. D. Spano
DOI: https://doi.org/10.1145/2933242.2935876

Abstract: In this paper, we introduce the MVIC pattern for creating multidevice and multimodal interfaces. We discuss the advantages of adding a new component to the MVC pattern for interfaces that must adapt to different devices and modalities. The proposed solution is based on an input model defining equivalent and complementary sequences of inputs for the same interaction. In addition, we discuss Djestit, a JavaScript library for creating multidevice and multimodal input models for web applications by applying the aforementioned pattern. The library supports the integration of multiple devices (Kinect 2, Leap Motion, touchscreens) and different modalities (gestural, vocal, and touch).
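The distinction between equivalent and complementary inputs can be sketched as follows. This is an assumption-laden illustration of the concept, not Djestit's actual API: "equivalent" alternatives let any single modality trigger the interaction, while "complementary" parts must all arrive before it completes.

```python
# Conceptual sketch of a multimodal input model (invented combinators, not
# the library's real ones).
def equivalent(*alternatives):
    return ("equivalent", list(alternatives))

def complementary(*parts):
    return ("complementary", list(parts))

def matches(model, received):
    """received: set of input tokens seen so far, e.g. {"voice:say_ok"}."""
    kind, items = model
    if kind == "equivalent":
        return any(i in received for i in items)   # any one modality suffices
    return all(i in received for i in items)       # all parts are required

# "Confirm" can come from touch or voice; placing a marker needs a pointing
# gesture AND a spoken deictic ("put it here") together.
confirm = equivalent("touch:tap_ok", "voice:say_ok")
place = complementary("gesture:point", "voice:say_here")
assert matches(confirm, {"voice:say_ok"})
assert not matches(place, {"gesture:point"})
assert matches(place, {"gesture:point", "voice:say_here"})
```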
Title: SyMPATHy: smart glass for monitoring and guiding stroke patients in a home-based context
Authors: M. Bobin, M. Anastassova, M. Boukallel, M. Ammi
DOI: https://doi.org/10.1145/2933242.2935870

Abstract: This paper presents a solution for monitoring and guiding stroke patients during Activities of Daily Living. It consists of a self-contained smart glass that the patient can use to drink at different times of the day (water, coffee, etc.). The smart glass embeds a series of sensors that transparently track the patient's activity in everyday life (glass orientation, liquid level, target reaching, and tremors). This solution allows therapists to easily monitor and analyze the patient's Activities of Daily Living in order to adapt the weekly rehabilitation sessions with suitable exercises. In addition, the smart glass embeds visual displays that provide gestural guidance when the patient does not use the glass properly. The paper presents the first prototype of the smart glass, highlighting the methodology adopted to design the software and hardware components of the platform.
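The sensor-to-guidance loop could look roughly like this. This is a simplified sketch with assumed thresholds and field names (the prototype's real signal processing is not described in the abstract): sensor readings are checked each cycle, and guidance hints are displayed when the glass is used improperly.

```python
# Hypothetical guidance rules over one sensor sample (all thresholds invented).
def guidance(sample, max_tilt_deg=45.0, max_tremor_hz=4.0):
    """sample: dict with glass orientation, liquid level (0..1), and a
    tremor-frequency estimate. Returns the hints to display, if any."""
    hints = []
    # Steep tilt with a mostly full glass suggests a risky drinking motion.
    if sample["tilt_deg"] > max_tilt_deg and sample["liquid_level"] > 0.5:
        hints.append("tilt the glass more slowly")
    if sample["tremor_hz"] > max_tremor_hz:
        hints.append("steady the glass with both hands")
    return hints

assert guidance({"tilt_deg": 60.0, "liquid_level": 0.8, "tremor_hz": 2.0}) == \
    ["tilt the glass more slowly"]
assert guidance({"tilt_deg": 10.0, "liquid_level": 0.8, "tremor_hz": 2.0}) == []
```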
Title: Adding custom widgets to model-driven GUI generation
Authors: Thomas Rathfux, R. Popp, H. Kaindl
DOI: https://doi.org/10.1145/2933242.2933251

Abstract: For usable and acceptable Graphical User Interfaces (GUIs) in practice, the predefined widgets of a given toolkit are often insufficient. An example is flight booking, where special widgets for selecting seats are common in today's applications. Model-driven GUI generation uses predefined widgets as building blocks, and it also needs to include such special widgets customized for a given application. The problem is how to allow a GUI designer, rather than a framework developer, to include such special widgets in the automated generation process. In this paper, we present a new approach for adding special widgets to model-driven GUI generation. It includes custom widgets already during the automated generation, so that the result also persists across re-generation. We explain our extensions to a generation framework for this purpose, so that a GUI designer using it for automated GUI generation can integrate custom widgets into the generated GUI without having its source code or in-depth knowledge of the generation framework. This involves specifying the widgets in Custom Widget Templates that include design-time variability, so that they can be integrated into the design-time generation process starting from the highest level of abstraction.
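The integration point can be pictured with a small sketch. This is a hedged illustration only (the framework's real template format is not given in the abstract): custom widgets are registered as templates with design-time variation points, and the generator selects a template by the abstract type of the model element, falling back to stock widgets otherwise.

```python
# Invented registry sketch: templates keyed by abstract interaction type,
# each exposing design-time variability as keyword parameters.
class WidgetRegistry:
    def __init__(self):
        self.templates = {}  # abstract type -> template factory

    def register(self, abstract_type, factory):
        self.templates[abstract_type] = factory

    def generate(self, abstract_type, **variability):
        factory = self.templates.get(abstract_type)
        if factory is None:
            return f"<stock:{abstract_type}>"  # fall back to a stock widget
        return factory(**variability)

registry = WidgetRegistry()
# Custom seat-map widget (cf. the flight-booking example) with a design-time
# variation point for the cabin layout.
registry.register("seat_selection",
                  lambda rows=30, seats_per_row=6:
                  f"<seat_map rows={rows} per_row={seats_per_row}>")
assert registry.generate("seat_selection", rows=10) == \
    "<seat_map rows=10 per_row=6>"
assert registry.generate("text_input") == "<stock:text_input>"
```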
Title: A case-based assessment of the FLUIDE framework for specifying emergency response user interfaces
Authors: Erik G. Nilsson, K. Stølen
DOI: https://doi.org/10.1145/2933242.2933253

Abstract: In this paper, we report the results of assessing the FLUIDE Framework for model-based specification of user interfaces supporting emergency responders. First, we outline the special challenges faced when developing such user interfaces, and the approach used in the FLUIDE Framework to meet them. Then we introduce the framework, including its two specification languages. Thereafter, we present a case study addressing the specification of user interfaces for three existing emergency response applications. Based on these specifications, we discuss how well we succeeded, concluding that we were able to describe the applications in a comprehensive and understandable way, taking similarities and differences between the applications into account. The language constructs function as intended, having two languages has proven valuable, and the specifications scale quite well.
Title: Can computers design interaction?
Authors: Antti Oulasvirta
DOI: https://doi.org/10.1145/2933242.2948131

Abstract: Algorithms have revolutionized almost every field of manufacturing and engineering. Is interaction design next? This talk gives an overview of what the future holds for optimization methods in interaction design. I introduce the idea of using predictive models and simulations of end-user behavior in the combinatorial optimization of user interfaces, demonstrate it with an interactive layout optimizer, and provide an overview of research results. I describe the models we use, the limitations of this approach, how it fits the HCI engineering cycle, and how we validate and verify it. To conclude, I provoke a critical discussion of the potentials and pitfalls of this approach.
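The optimization idea can be illustrated with a toy sketch (not the speaker's actual system): each candidate ordering of menu items is scored with a simple predictive model of user behavior, and the best-scoring layout is kept. The cost model here, frequency-weighted scan depth, is an assumption chosen for brevity; real optimizers use richer behavioral models.

```python
import itertools

def predicted_cost(order, click_freq):
    # Assumed behavioral model: expected scan depth, i.e. the
    # frequency-weighted position of each item in the list.
    return sum(click_freq[item] * (pos + 1) for pos, item in enumerate(order))

def optimize_layout(items, click_freq):
    # Exhaustive combinatorial search -- fine for a toy-sized menu.
    return min(itertools.permutations(items),
               key=lambda order: predicted_cost(order, click_freq))

# Invented usage data: the most frequent command ends up cheapest at the top.
freq = {"open": 0.6, "save": 0.3, "export": 0.1}
best = optimize_layout(list(freq), freq)
assert best[0] == "open"
```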