Prototyping interactions with Online Multimodal Repositories and Interactive Machine Learning
C. F. Julià, Panos Papiotis, C. Sebastián Mealla, S. Jordà
Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05
DOI: 10.1145/2948910.2948915
Abstract
Interaction designers often use machine learning tools to generate intuitive mappings between complex inputs and outputs. These tools are usually trained live, which is not always feasible or practical. We combine RepoVizz, an online repository and visualizer for multimodal data, with a suite of Interactive Machine Learning tools to demonstrate a technical solution for prototyping multimodal interactions that decouples the data-acquisition step from the model-training step. This way, different input data set-ups can be easily replicated, shared, and tested for their capability to control complex outputs, without the need to repeat the technical set-up each time.
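To make the decoupled workflow concrete, here is a minimal sketch (not from the paper): it assumes a multimodal recording has already been exported from an online repository such as RepoVizz as a CSV file, and trains a regression mapping offline, so that model training happens independently of data acquisition. The file name, column names, and the choice of a small neural-network regressor are all illustrative assumptions.

```python
# Hypothetical sketch: offline training on previously recorded multimodal data,
# decoupled from the live data-acquisition step the abstract describes.
# Assumes 'session.csv' was exported from an online repository (e.g. RepoVizz)
# with sensor-input columns and desired output-parameter columns.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Column names are illustrative assumptions, not the paper's actual schema.
INPUT_COLS = ["accel_x", "accel_y", "accel_z", "gyro_x", "gyro_y", "gyro_z"]
OUTPUT_COLS = ["synth_pitch", "synth_amplitude"]

data = pd.read_csv("session.csv")          # previously recorded session
X = data[INPUT_COLS].to_numpy()
y = data[OUTPUT_COLS].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# A small neural network is a common choice in interactive ML mapping tools.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# At interaction time, the trained model maps live sensor frames to output
# parameters without repeating the recording set-up.
live_frame = X_test[:1]                    # stand-in for a live sensor reading
print("predicted outputs:", model.predict(live_frame))
```

Because the recording lives in a shared repository, a different team could rerun this training step with other input columns or a different regressor and compare how well each set-up controls the output, without re-staging the sensors.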