Dynamically forming networks of cyber-physical systems are becoming increasingly widespread in manufacturing, transportation, automotive, avionics, and other domains. The emergence of future internet technology and the ambition for ever closer integration of different systems lead to highly collaborative cyber-physical systems. Such cyber-physical systems form networks to provide additional functions, behavior, and benefits that the individual systems cannot provide on their own. As safety is a major concern in these domains, adequate support is needed for safety analyses of these collaborative cyber-physical systems. This support must explicitly consider the dynamically formed networks of cyber-physical systems. This is a challenging task, as the configurations of these cyber-physical system networks (i.e., the architecture of the super system an individual system joins) can differ enormously depending on the actual systems joining a network. Furthermore, the configuration of the network heavily impacts the adaptations performed by the individual systems and thereby impacts the architecture not only of the system network but of all individual systems involved. However, as existing safety analysis techniques are not designed to support the array of potential network configurations an individual system must cope with at runtime, we propose automated safety analysis support for these systems that considers the configuration of the system network. Initial evaluation results from the application to industrial case examples show that the proposed support can aid in the detection of safety defects.
Towards automated safety analysis for architectures of dynamically forming networks of cyber-physical systems. Jennifer Brings, Marian Daun. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391474
The problem of identifying critical components in large-scale networked Cyber-Physical Systems arises as an underlying issue when attempting to enhance the efficiency, safety, and security of such systems. Graph theory is a well-studied method often used to model complex systems and to facilitate the analysis of network-based features for identifying critical components. However, recent studies mainly focus on identifying influential nodes in a system and neglect the importance of links. In this paper, we address the identification of both key links and key nodes in a system, and we aggregate the results by leveraging multi-variable synthetic evaluation and the multiple-criteria decision-making method M-TOPSIS to rank the system components by their importance.
Identifying Critical Components in Large Scale Cyber Physical Systems. Aida Akbarzadeh, S. Katsikas. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391473
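The aggregation step above can be illustrated with a plain-Python sketch of classic TOPSIS over an invented criterion matrix (degree and betweenness centrality as the two criteria). The component names, weights, and values are hypothetical, and M-TOPSIS itself modifies how the two ideal-solution distances are combined into a final rank; this sketch uses the standard closeness coefficient.

```python
import math

def topsis_rank(matrix, weights):
    """Score alternatives (rows) by closeness to the ideal solution.
    All criteria are treated as benefit criteria (higher is better)."""
    n_crit = len(matrix[0])
    # vector-normalise each criterion column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    nadir = [min(col) for col in zip(*v)]   # negative ideal solution
    return [math.dist(row, nadir) / (math.dist(row, ideal) + math.dist(row, nadir))
            for row in v]

# toy criterion matrix: rows = components (both nodes and links),
# columns = (degree, betweenness centrality) -- invented values
components = ["n1", "n2", "n3", "link_a"]
matrix = [[3, 0.10], [5, 0.45], [2, 0.05], [4, 0.30]]
scores = topsis_rank(matrix, weights=[0.5, 0.5])
print(max(zip(scores, components))[1])  # → n2 (most critical component)
```

In practice the criterion columns would be computed from the system graph (e.g., with a graph library), with links scored by link-specific metrics such as edge betweenness.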
Current research on the testing of machine translation software mainly focuses on functional correctness for valid, well-formed inputs. By contrast, robustness testing, which involves the ability of the software to handle erroneous or unanticipated inputs, is often overlooked. In this paper, we propose to address this important shortcoming. Using the metamorphic robustness testing approach, we compare the translations of original inputs with those of follow-up inputs having different categories of minor typos. Our empirical results reveal a lack of robustness in Google Translate, thereby opening a new research direction for the quality assurance of neural machine translators.
Metamorphic Robustness Testing of Google Translate. Dickson T. S. Lee, Z. Zhou, T. H. Tse. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391484
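The metamorphic robustness check can be sketched as follows. Here `toy_translate` is a deliberately fragile stand-in, since the actual study queried Google Translate; a real check would also tolerate small, localized output differences rather than demanding exact equality.

```python
import random

def inject_typo(sentence, rng):
    """Produce a follow-up input: swap two adjacent characters in one word."""
    words = sentence.split()
    i = rng.randrange(len(words))
    w = words[i]
    if len(w) >= 2:
        j = rng.randrange(len(w) - 1)
        w = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    words[i] = w
    return " ".join(words)

def robustness_violations(sentences, translate, trials=5, seed=0):
    """Metamorphic relation: a single-character typo in the source should
    leave the translation essentially unchanged. Flag sentences where a
    follow-up input's translation differs from the original's."""
    rng = random.Random(seed)
    flagged = []
    for s in sentences:
        base = translate(s)
        if any(translate(inject_typo(s, rng)) != base for _ in range(trials)):
            flagged.append(s)
    return flagged

# stand-in for a real MT system; trivially sensitive to any character change
def toy_translate(s):
    return s.lower()

print(robustness_violations(["The cat sat"], toy_translate))  # → ['The cat sat']
```

The key property of the metamorphic approach is that no reference translation (oracle) is needed: only the original and follow-up outputs are compared.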
Similarity analysis plays an important role in automated program repair by helping find the correct solution earlier. However, the effectiveness of similarity has mostly been validated on the common benchmark Defects4J, which consists of six large projects. To mitigate this threat to generalizability, this study examines the performance of similarity in repairing small programs. For this purpose, existing syntactic and semantic similarity based approaches, as well as a new technique combining both similarities, are used. These approaches are evaluated on QuixBugs, a dataset of diverse bug types from 40 small programs. The techniques fix bugs faster, validating fewer patches than a random patch selection based approach, which demonstrates the effectiveness of similarity in repairing small programs.
Impact of Similarity on Repairing Small Programs: A Case Study on QuixBugs Benchmark. Moumita Asad, Kishan Kumar Ganguly, K. Sakib. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3392182
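A minimal sketch of similarity-guided patch ranking, assuming difflib's character ratio as the syntactic measure and token-set Jaccard as a crude stand-in for semantic similarity (the study's actual measures are richer); the buggy line and candidate patches are invented:

```python
import difflib

def syntactic_sim(a, b):
    """Character-level similarity via difflib's ratio."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def token_jaccard(a, b):
    """Token-overlap proxy, standing in here for semantic similarity."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def rank_patches(buggy_line, candidates, alpha=0.5):
    """Order candidate patches so likely-correct ones are validated first,
    instead of picking patches at random."""
    combined = lambda p: (alpha * syntactic_sim(buggy_line, p)
                          + (1 - alpha) * token_jaccard(buggy_line, p))
    return sorted(candidates, key=combined, reverse=True)

buggy = "return a - b"
candidates = ["print(a)", "return a + b", "return b"]
print(rank_patches(buggy, candidates)[0])  # → return a + b
```

Ranking candidates this way reduces the number of patches that must be compiled and run against the test suite before the correct one is found.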
Ensuring the quality of the user experience is very important for increasing the acceptance likelihood of software applications, and this quality can be affected by several contextual factors that continuously change over time (e.g., the emotional state of the end user). Due to these changes in context, software continually needs to adapt in order to deliver services that satisfy user needs. To achieve this adaptation, however, it is important to gather and understand user feedback. In this paper, we investigate whether physiological data can be considered and used as a form of implicit user feedback. To this end, we conducted a case study involving a tourist traveling abroad who, over four days, used a wearable device for monitoring his physiological data and a smartphone app for reminding him to take his medication on time. Through the case study, we were able to identify some factors and activities as emotional triggers, which were used for understanding the user context. Our results highlight the importance of having a context analyzer that can help the system determine whether detected stress should be considered actionable and consequently treated as implicit user feedback.
Understanding Implicit User Feedback from Multisensorial and Physiological Data: A case study. Franci Suni Lopez, Nelly Condori-Fernández, Alejandro Catalá. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391466
Advancements in the fields of the internet of things, artificial intelligence, machine learning, and data analytics have paved the way for the evolution of digital twin technology. A digital twin is a high-fidelity digital model of a physical system or asset that can be used, e.g., to optimize operations and predict faults of the physical system. To understand different use cases of digital twins and their potential for cybersecurity incident prediction, we performed a Systematic Literature Review (SLR). In this paper, we summarize the definition of the digital twin and the state of the art in digital twin development, including reported work on the usability of digital twins for cybersecurity. Existing tools and technologies for developing digital twins are also discussed.
Digital Twin for Cybersecurity Incident Prediction: A Multivocal Literature Review. Abhishek Pokhrel, Vikash Katta, Ricardo Colomo Palacios. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3392199
Software development is a very cooperative and communicative task. In most software projects, meetings are an important medium for sharing information. However, these meetings are often not as effective as expected. One big issue hindering productive and satisfying meetings is inappropriate behavior such as complaining. In particular, talking about problems without at least trying to solve them decreases the motivation and mood of the team. Interaction analyses of meetings allow the assessment of appropriate and inappropriate behavior influencing the quality of a meeting. Derived from an established interaction analysis coding scheme in psychology, we present act4teams-short, which allows real-time coding of meetings in software projects. We apply act4teams-short in an industrial case study at Volkswagen Commercial Vehicles, a large German company in the automotive domain. We analyze ten team-internal meetings at early project stages. Our results reveal difficulties due to a missing project structure and an unclear overall project goal. Furthermore, the team has an intrinsic interest in identifying problems and solving them, without any extrinsic input being required.
Do You Just Discuss or Do You Solve?: Meeting Analysis in a Software Project at Early Stages. J. Klünder, Nils Prenner, Ann-Kathrin Windmann, Marek Stess, Michael Nolting, Fabian Kortum, Lisa Handke, K. Schneider, S. Kauffeld. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391468
A "dialogue act" is a written or spoken action during a conversation. Dialogue acts are usually only a few words long and are often categorized by researchers into a relatively small set of dialogue act types, such as eliciting information, expressing an opinion, or making a greeting. Research interest in the automatic classification of dialogue acts has grown recently due to the proliferation of virtual agents (VAs), e.g., Siri, Cortana, and Alexa. Unfortunately, the gains made in VA development in one domain are generally not applicable to other domains, since the composition of dialogue acts differs across conversations. In this paper, we target the problem of dialogue act classification for a VA for software engineers repairing bugs. A problem in the SE domain is that very little sample data exists: the only public dataset is a recently released Wizard of Oz study with 30 conversations. Therefore, we present a transfer learning technique that learns on a much larger dataset of general business conversations and applies the knowledge to the SE dataset. In an experiment, we observe between 8% and 20% improvement over two key baselines.
Dialogue Act Classification for Virtual Agents for Software Engineers during Debugging. Andrew Wood, Zachary Eberhart, Collin McMillan. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3391487
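The transfer idea, learning from a large general-conversation corpus and refining on the small SE dataset, can be caricatured with down-weighted count pooling in a tiny Naive Bayes classifier. All utterances, labels, and weights below are invented for illustration; the paper's actual technique is neural, not count-based.

```python
import math
from collections import Counter, defaultdict

class TransferNB:
    """Toy stand-in for transfer learning: token counts from a large source
    corpus act as a soft prior that small target-domain data then refines."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # label -> token counts
        self.vocab = set()

    def fit(self, data, weight=1.0):
        for text, label in data:
            for tok in text.lower().split():
                self.counts[label][tok] += weight
                self.vocab.add(tok)

    def predict(self, text):
        def log_lik(label):
            c = self.counts[label]
            denom = sum(c.values()) + len(self.vocab)  # Laplace smoothing
            return sum(math.log((c[t] + 1) / denom)
                       for t in text.lower().split())
        return max(self.counts, key=log_lik)

# source: general business conversations (large corpus, two toy examples);
# target: the small Wizard-of-Oz SE debugging dataset (invented lines)
clf = TransferNB()
clf.fit([("what is the error message", "elicit"),
         ("i think the loop is wrong", "opinion")], weight=0.3)
clf.fit([("what does the stack trace say", "elicit")], weight=1.0)
print(clf.predict("what is the stack trace"))  # → elicit
```

The down-weighting of source counts mirrors the intuition behind transfer learning: the source domain shapes the model's general expectations, while target-domain examples dominate where they exist.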
Communication between a software development team and business partners is often challenging due to the different contexts of the terms used in the information exchange. The various contexts in which concepts are defined or used create slightly different semantic fields that can evolve into information and communication silos. Due to this silo effect, necessary information is often inadequately forwarded to developers, resulting in poorly specified software requirements or misinterpreted user feedback. Communication difficulties can be reduced by introducing a mapping between the semantic fields of the communicating parties based on the terminologies they commonly use. Our research aims to obtain a suitable semantic database in the form of a semantic network built from the Stack Overflow corpus, which can be considered to encompass the common tacit knowledge of the software development community. Terminologies used in the business world can then be assigned to our semantic network, so software developers do not miss features that are not specific to their world but are relevant to their clients. We present an initial experiment on mining a semantic network from Stack Overflow and provide insights into the newly captured relations compared to WordNet.
Mining Hypernyms Semantic Relations from Stack Overflow. L. Tóth, Balázs Nagy, T. Gyimóthy, László Vidács. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3392160
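Hypernym extraction of this kind can be sketched with classic Hearst-style lexico-syntactic patterns. The regexes and example post below are simplified illustrations, not the paper's actual extraction pipeline:

```python
import re

# Hearst-style patterns; each captures (hypernym, list-of-hyponyms)
HYPO_LIST = r"(\w+(?:, \w+)*(?:,? and \w+)?)"
PATTERNS = [
    re.compile(r"(\w+) such as " + HYPO_LIST),
    re.compile(r"(\w+),? including " + HYPO_LIST),
]

def mine_hypernyms(sentences):
    """Extract (hyponym, hypernym) edges for a semantic network."""
    edges = set()
    for s in sentences:
        for pat in PATTERNS:
            for hyper, hypos in pat.findall(s):
                for hypo in re.split(r",\s*|\s+and\s+", hypos):
                    if hypo:
                        edges.add((hypo.strip(), hyper))
    return edges

posts = ["You can use databases such as PostgreSQL and MongoDB for this."]
print(sorted(mine_hypernyms(posts)))
# → [('MongoDB', 'databases'), ('PostgreSQL', 'databases')]
```

At Stack Overflow scale, such pattern hits would be aggregated by frequency to separate genuine hypernym edges (e.g., PostgreSQL is-a database) from noise before building the semantic network.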
Automated program repair (APR) is an emerging technique that can automatically generate patches fixing bugs or vulnerabilities. To ensure correctness, the auto-generated patches are usually sent to developers for verification before being applied to the program. To review patches, developers must figure out the root cause of a bug and understand the semantic impact of the patch, which is not straightforward even for expert programmers. In this position paper, we envision an interactive patch suggestion approach that avoids such complex reasoning by instead enabling developers to review patches with a few clicks. We first automatically translate patch semantics into a set of "what" and "how" questions: the "what" questions formulate the expected program behaviors, while the "how" questions represent how to modify the program to realize those behaviors. Existing APR techniques can be leveraged to generate these questions and their corresponding answers. Then, to evaluate the correctness of patches, developers simply ask the questions and click the corresponding answers.
Interactive Patch Generation and Suggestion. Xiang Gao, Abhik Roychoudhury. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 27, 2020. DOI: https://doi.org/10.1145/3387940.3392179