P. Seipel, Adrian Stock, S. Santhanam, Artur Baranowski, N. Hochgeschwender, A. Schreiber
Title: Speak to your Software Visualization—Exploring Component-Based Software Architectures in Augmented Reality with a Conversational Interface
DOI: 10.1109/VISSOFT.2019.00017
Published in: 2019 Working Conference on Software Visualization (VISSOFT)
Publication date: 2019-11-14
Citations: 6
Abstract
Exploring software architectures with software visualization in Augmented Reality (AR) is possible with different interaction methods, such as gesture, gaze, and speech. For interaction with speech (i.e., natural language), we present an architecture and an implementation of conversational interfaces for the Microsoft HoloLens device. We aim to remedy some peculiarities of AR devices, but also to enhance the exploration task at hand. To implement the conversational interface, different natural language processing (NLP) components, such as natural language generation and intent recognition, are typically required. Our proposed architecture integrates these conversational components with the AR-based software visualization. We describe its implementation based on different user utterances, where the system provides information about the to-be-explored component-based software architecture in the form of adjusted visualizations and speech-based results. We apply our tool to explore OSGi-based software architectures.
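The pipeline described above (utterance → intent recognition → adjusted visualization or speech-based answer) can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the intent labels, command strings, and function names are assumptions for the sake of the example, not the paper's actual implementation, which targets the HoloLens and dedicated NLP services.

```python
def recognize_intent(utterance: str) -> str:
    """Map a natural-language utterance to a coarse intent label.

    A real system would use a trained intent-recognition model; simple
    keyword matching stands in for it here.
    """
    text = utterance.lower()
    if "dependencies" in text or "depends" in text:
        return "show_dependencies"
    if "hide" in text:
        return "hide_component"
    if "describe" in text or "what is" in text:
        return "describe_component"
    return "unknown"


def dispatch(intent: str, component: str) -> str:
    """Translate an intent into an (assumed) visualization or speech command."""
    commands = {
        "show_dependencies": f"highlight dependency edges of {component}",
        "hide_component": f"remove {component} from the AR scene",
        "describe_component": f"speak a summary of {component}",
    }
    return commands.get(intent, "ask the user to rephrase")


# Example: a user asks about an OSGi bundle's dependencies.
intent = recognize_intent("Show me the dependencies of the logging bundle")
command = dispatch(intent, "org.example.logging")
```

In the paper's architecture, the dispatch step would drive both the visualization (e.g., highlighting components) and natural language generation for the spoken reply.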