SynZ: Enhanced Synthetic Dataset for Training UI Element Detectors

Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke

26th International Conference on Intelligent User Interfaces - Companion, 2021-04-14. DOI: 10.1145/3397482.3450725
User Interface (UI) prototyping is an iterative process where designers initially sketch UIs before transforming them into interactive digital designs. Recent research applies Deep Neural Networks (DNNs) to identify the constituent UI elements of these UI sketches and transform these sketches into front-end code. Training such DNN models requires a large-scale dataset of UI sketches, which is time-consuming and expensive to collect. Therefore, we earlier proposed Syn to generate UI sketches synthetically by random allocation of UI element sketches. However, these UI sketches are not statistically similar to real-life UI screens. To bridge this gap, in this paper, we introduce the SynZ dataset, which contains 175,377 synthetically generated UI sketches statistically similar to real-life UI screens. To generate SynZ, we analyzed, enhanced, and extracted annotations from the RICO dataset and used 17,979 hand-drawn UI element sketches from the UISketch dataset. Further, we fine-tuned a UI element detector with SynZ and observed that it doubles the mean Average Precision of UI element detection compared to the Syn dataset.
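The core generation step described above — placing hand-drawn UI element sketches onto a canvas according to bounding-box annotations derived from real UI screens — can be illustrated with a minimal sketch. This is not the authors' implementation; the annotation format (`label` plus an `(x, y, width, height)` box) and the darken-compositing choice are assumptions for illustration.

```python
import numpy as np

def compose_synthetic_sketch(canvas_size, annotations, element_sketches):
    """Paste hand-drawn element sketches onto a white canvas at annotated boxes.

    canvas_size:      (height, width) of the output sketch
    annotations:      list of {"label": str, "bbox": (x, y, w, h)} dicts,
                      e.g. extracted from a real UI screen hierarchy
    element_sketches: dict mapping label -> grayscale uint8 array of a
                      hand-drawn UI element (0 = ink, 255 = paper)
    """
    h, w = canvas_size
    canvas = np.full((h, w), 255, dtype=np.uint8)  # blank white page
    for ann in annotations:
        x, y, bw, bh = ann["bbox"]
        sketch = element_sketches[ann["label"]]
        # Naive nearest-neighbour resize of the element sketch to the box.
        ys = np.arange(bh) * sketch.shape[0] // bh
        xs = np.arange(bw) * sketch.shape[1] // bw
        resized = sketch[np.ix_(ys, xs)]
        # Darken-composite so overlapping pen strokes remain visible.
        region = canvas[y:y + bh, x:x + bw]
        canvas[y:y + bh, x:x + bw] = np.minimum(region, resized)
    return canvas
```

Pairing each generated canvas with its source annotations then yields (image, bounding boxes, labels) training triples for a UI element detector without any manual labeling.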