MIMIC: leveraging sensor-based interactions in multimodal mobile applications

Nadia Elouali, Xavier Le Pallec, J. Rouillard, Jean-Claude Tarby

CHI '14 Extended Abstracts on Human Factors in Computing Systems, April 26, 2014. DOI: 10.1145/2559206.2581222
In recent years, there has been increasing interest in the sensors embedded in mobile devices. The emergence of multiple modalities based on these sensors greatly enriches human-mobile interaction. However, mobile applications make only limited use of sensors and rarely combine them simultaneously. In this paper, we seek to remedy this problem by detailing the key challenges facing developers who want to integrate and combine several sensor-based modalities. We then present our model-based solution: the M4L modeling language and the MIMIC framework, which aim to ease the production of sensor-based multimodal mobile applications by generating up to 100% of their interfaces.
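The simultaneous combination of sensor-based modalities mentioned above can be illustrated with a toy sketch: events from different sensors (e.g. a shake detected by the accelerometer and a spoken command) are grouped when they occur within a short time window, so the application can treat them as one multimodal input. This is a minimal, hypothetical illustration of the fusion problem, not the actual M4L/MIMIC mechanism; all names below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ModalEvent:
    modality: str    # e.g. "accelerometer", "microphone" (hypothetical labels)
    value: str       # interpreted meaning of the event
    timestamp: float # seconds since app start

def fuse(events, window=0.5):
    """Naive temporal fusion: group events from different modalities
    that occur within `window` seconds of the group's first event."""
    fused = []
    events = sorted(events, key=lambda e: e.timestamp)
    i = 0
    while i < len(events):
        group = [events[i]]
        j = i + 1
        while j < len(events) and events[j].timestamp - events[i].timestamp <= window:
            group.append(events[j])
            j += 1
        fused.append(group)
        i = j
    return fused

# A shake plus a spoken "delete" 0.3 s apart fuse into one command,
# while a touch 2 s later stands alone.
events = [
    ModalEvent("accelerometer", "shake", 0.0),
    ModalEvent("microphone", "delete", 0.3),
    ModalEvent("touch", "tap", 2.0),
]
groups = fuse(events)
```

Even this toy version hints at the challenges the paper details: choosing the window size, ordering concurrent events, and deciding whether modalities are redundant or complementary.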