Facebook’s Cyber–Cyber and Cyber–Physical Digital Twins
John Ahlgren, Kinga Bojarczuk, S. Drossopoulou, Inna Dvortsova, Johann George, Natalija Gucevska, M. Harman, M. Lomeli, S. Lucas, E. Meijer, Steve Omohundro, Rubmary Rojas, Silvia Sapora, Norm Zhou
A cyber–cyber digital twin is a simulation of a software system. By contrast, a cyber–physical digital twin is a simulation of a non-software (physical) system. Although cyber–physical digital twins have received a lot of recent attention, their cyber–cyber counterparts have been comparatively overlooked. In this paper we show how the unique properties of cyber–cyber digital twins open up exciting opportunities for research and development. Like all digital twins, the cyber–cyber digital twin is both informed by and informs the behaviour of the twin it simulates. It is therefore a software system that simulates another software system, making it conceptually truly a twin, blurring the distinction between the simulated and the simulator. Cyber–cyber digital twins can be twins of other cyber–cyber digital twins, leading to a hierarchy of twins. As we shall see, these apparently philosophical observations have practical ramifications for the design, implementation and deployment of digital twins at Facebook.
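To make the feedback loop concrete, here is a minimal sketch of a cyber–cyber twin in Python. It is not Facebook's actual infrastructure; all class and method names are hypothetical, and a toy rate-limiting service stands in for the simulated system.

```python
# Minimal, hypothetical sketch of the cyber-cyber feedback loop: a twin
# that is informed by its target's telemetry, informs the target in
# return, and can itself be twinned.

class Service:
    """The 'real' software system: a toy rate limiter."""
    def __init__(self, limit):
        self.limit = limit
        self.requests_seen = 0

    def handle(self, n_requests):
        self.requests_seen += n_requests
        return min(n_requests, self.limit)  # number of accepted requests

class DigitalTwin:
    """A software system that simulates another software system."""
    def __init__(self, target):
        self.target = target
        self.observed_load = 0

    def ingest_telemetry(self):
        # The twin is informed by the behaviour of its target.
        self.observed_load = self.target.requests_seen

    def simulate(self, hypothetical_limit):
        # What-if run against the observed load, without touching the target.
        return min(self.observed_load, hypothetical_limit)

    def recommend_limit(self):
        # Smallest limit that would have accepted the whole observed load.
        return max(1, self.observed_load)

service = Service(limit=100)
service.handle(150)                      # real traffic arrives

twin = DigitalTwin(service)
twin.ingest_telemetry()                  # informed by the system it simulates
service.limit = twin.recommend_limit()   # ...and informing it in return

meta_twin = DigitalTwin(twin)            # a twin of a twin: the hierarchy
```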
{"title":"Facebook’s Cyber–Cyber and Cyber–Physical Digital Twins","authors":"John Ahlgren, Kinga Bojarczuk, S. Drossopoulou, Inna Dvortsova, Johann George, Natalija Gucevska, M. Harman, M. Lomeli, S. Lucas, E. Meijer, Steve Omohundro, Rubmary Rojas, Silvia Sapora, Norm Zhou","doi":"10.1145/3463274.3463275","DOIUrl":"https://doi.org/10.1145/3463274.3463275","url":null,"abstract":"A cyber–cyber digital twin is a simulation of a software system. By contrast, a cyber–physical digital twin is a simulation of a non-software (physical) system. Although cyber–physical digital twins have received a lot of recent attention, their cyber–cyber counterparts have been comparatively overlooked. In this paper we show how the unique properties of cyber–cyber digital twins open up exciting opportunities for research and development. Like all digital twins, the cyber–cyber digital twin is both informed by and informs the behaviour of the twin it simulates. It is therefore a software system that simulates another software system, making it conceptually truly a twin, blurring the distinction between the simulated and the simulator. Cyber–cyber digital twins can be twins of other cyber–cyber digital twins, leading to a hierarchy of twins. As we shall see, these apparently philosophical observations have practical ramifications for the design, implementation and deployment of digital twins at Facebook.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"288 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132321463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Technical excellence” is a nebulous term in agile software development. This vagueness is risky: it creates a gap in the understanding of agile that may have consequences for how software development practitioners operate. Technical excellence is the only reference to quality in the Agile Manifesto, so it is fundamental to understand how agile practitioners both interpret and implement it. We interviewed twenty agile practitioners about their understanding of the term “technical excellence” and how they approach the task of fostering it. To validate the findings, we conducted two focus group meetings after the interviews and the data analysis had been completed. We found that technical excellence comprises four traits: (1) software craftsmanship; (2) software quality; (3) a mindset for excellence; and (4) consistency with good software engineering practices. Fostering technical excellence is a continuous endeavor. Further, we identified three key principles commonly cited as essential to implementing technical excellence: (1) continuous learning; (2) continuous improvement; and (3) control of excellence. Based on our findings, we present several recommendations for software development teams seeking to better realize the goal of technical excellence in their agile implementation.
{"title":"How Do Agile Practitioners Interpret and Foster “Technical Excellence”?","authors":"A. Alami, M. Paasivaara","doi":"10.1145/3463274.3463322","DOIUrl":"https://doi.org/10.1145/3463274.3463322","url":null,"abstract":"“Technical excellence” is a nebulous term in agile software development. This vagueness is risky, as it creates a gap in the understanding of agile that may have consequences on how software development practitioners operate. Technical excellence is the only reference to quality in the agile manifesto. Hence, it is fundamental to understand how agile software development practitioners both interpret and implement it. We conducted interviews with twenty agile practitioners about their understanding of the term “technical excellence” and how they approach the task of fostering it. To validate the findings, two focus group meetings were conducted after the interviews and the analysis of the data were completed. We found that the configuration of technical excellence is made of four traits: (1) software craftsmanship; (2) software quality (3) mindset for excellence; and (4) consistency with good software engineering practices. Fostering technical excellence is a continuous endeavor. Further, we identified three key principles that were commonly cited as essential to implementing technical excellence, namely: 1) continuous learning; 2) continuous improvement; and 3) control of excellence. Based on our findings, we present several recommendations for software development teams seeking to better realize the goal of technical excellence in their agile implementation.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116440083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining Dependencies in Large-Scale Agile Software Development Projects: A Quantitative Industry Study
Katarzyna Biesialska, Xavier Franch, V. Muntés-Mulero
Context: Coordination in large-scale software development is critical yet difficult, as it faces the problem of dependency management and resolution. In this work, we focus on managing requirement dependencies, which in Agile software development (ASD) come in the form of user stories. Objective: This work studies the decisions of large-scale Agile teams regarding the identification of dependencies between user stories. Our goal is to explain the detection of dependencies through users’ behavior in large-scale, distributed projects. Method: We perform an empirical evaluation on a large real-world dataset from an Agile software organization, the provider of a leading Agile project management tool. We mine the usage data of its Agile Lifecycle Management (ALM) tool to extract development project data for more than 70 teams over a five-year period. Results: Dependencies among user stories are not frequently observed (around 10% of user stories are affected); however, their implications for large-scale ASD are considerable. Dependencies affect software releases and increase work coordination complexity for members of different teams. Conclusion: Requirement dependencies undermine Agile teams’ autonomy and are difficult to manage at scale. We conclude that leveraging ALM monitoring data to automatically detect dependencies could help Agile teams address work coordination needs and manage dependency-related risks in a timely manner.
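The abstract does not disclose the detection model itself, so the following sketch only gives a flavour of how such mining might work: an assumed co-occurrence heuristic over made-up ALM usage logs that flags story pairs repeatedly touched in the same session as dependency candidates.

```python
# Hypothetical co-occurrence heuristic over ALM usage logs; the events
# below are invented for illustration, not taken from the study's dataset.
from collections import Counter
from itertools import combinations

# (session_id, team, story_id) events, as might be mined from an ALM tool.
events = [
    ("s1", "team-a", "US-101"), ("s1", "team-a", "US-205"),
    ("s2", "team-b", "US-101"), ("s2", "team-b", "US-205"),
    ("s3", "team-a", "US-101"), ("s3", "team-a", "US-300"),
]

sessions = {}
for session, _team, story in events:
    sessions.setdefault(session, set()).add(story)

pair_counts = Counter()
for stories in sessions.values():
    for a, b in combinations(sorted(stories), 2):
        pair_counts[(a, b)] += 1

# Pairs co-touched in at least two sessions become dependency candidates.
candidates = {pair: n for pair, n in pair_counts.items() if n >= 2}
print(candidates)  # {('US-101', 'US-205'): 2}
```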
{"title":"Mining Dependencies in Large-Scale Agile Software Development Projects: A Quantitative Industry Study","authors":"Katarzyna Biesialska, Xavier Franch, V. Muntés-Mulero","doi":"10.1145/3463274.3463323","DOIUrl":"https://doi.org/10.1145/3463274.3463323","url":null,"abstract":"Context: Coordination in large-scale software development is critical yet difficult, as it faces the problem of dependency management and resolution. In this work, we focus on managing requirement dependencies that in Agile software development (ASD) come in the form of user stories. Objective: This work studies decisions of large-scale Agile teams regarding identification of dependencies between user stories. Our goal is to explain detection of dependencies through users’ behavior in large-scale, distributed projects. Method: We perform empirical evaluation on a large real-world dataset from an Agile software organization, provider of a leading software for Agile project management. We mine the usage data of the Agile Lifecycle Management (ALM) tool to extract large-scale development project data for more than 70 teams running over a five-year period. Results: Our results demonstrate that dependencies among user stories are not frequently observed (the problem affects around 10% of user stories), however, their implications on large-scale ASD are considerable. Dependencies have impact on software releases and increase work coordination complexity for members of different teams. Conclusion: Requirement dependencies undermine Agile teams’ autonomy and are difficult to manage at scale. We conclude that leveraging ALM monitoring data to automatically detect dependencies could help Agile teams address work coordination needs and manage risks related to dependencies in a timely manner.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128391195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Software engineering research articles should make precise claims regarding their contribution, so that practitioners can decide when they might be interested and researchers can better recognize (1) whether the given research is valid, (2) which published works to use as stepping stones for their own research (and which not), and (3) where additional research is required. In particular, articles should spell out what assumptions were made at each research step. Question: Can we identify recurring patterns of assumptions that are not spelled out? Method: This is a position paper. It formulates impressions, but does not present concrete evidence. Results: Assumptions that are wrong or assumptions that are risky and not explicit threaten the integrity of the scientific record. There are several recurring types of such assumptions. The frequency of these problems is currently unknown. Conclusion: The software engineering research community should become more conscious and more explicit with respect to the assumptions that underlie individual research works.
{"title":"On Implicit Assumptions Underlying Software Engineering Research","authors":"L. Prechelt","doi":"10.1145/3463274.3463356","DOIUrl":"https://doi.org/10.1145/3463274.3463356","url":null,"abstract":"Background: Software engineering research articles should make precise claims regarding their contribution, so that practitioners can decide when they might be interested and researchers can better recognize (1) whether the given research is valid, (2) which published works to use as stepping stones for their own research (and which not), and (3) where additional research is required. In particular, articles should spell out what assumptions were made at each research step. Question: Can we identify recurring patterns of assumptions that are not spelled out? Method: This is a position paper. It formulates impressions, but does not present concrete evidence. Results: Assumptions that are wrong or assumptions that are risky and not explicit threaten the integrity of the scientific record. There are several recurring types of such assumptions. The frequency of these problems is currently unknown. Conclusion: The software engineering research community should become more conscious and more explicit with respect to the assumptions that underlie individual research works.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115923610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The mobile Web is growing as more and more people use smart devices to access online services. This rapid growth of mobile Web usage is accompanied by the evolution of the mobile Web browser into a fully fledged software platform. Due to these two trends, users’ expectations in terms of quality of experience (QoE) when browsing the Web on their mobile devices have increased drastically. As a result, the number of studies using measurement-based experiments to investigate the factors influencing QoE has grown. However, conducting measurement-based experiments on the mobile Web is not a trivial task, as it requires significant experience and knowledge of both technical and methodological aspects. Unfortunately, there is no systematic study of the state of the art in conducting measurement-based experiments on the mobile Web that could guide researchers and practitioners when planning and performing such experiments. The goal of this work is to build a map of existing studies that conduct measurement-based experiments on the mobile Web. In total, 640 potentially relevant studies were identified. After a rigorous selection procedure, the set of primary studies consists of 28 papers, from which we extracted data and gathered insights. Specifically, we investigate (i) which metrics are collected, how they are measured, and how they are analysed; (ii) the platforms on which the experiments are run; (iii) what subjects are used; and (iv) the tools and environments used to run the experiments. This study benefits researchers and practitioners by presenting common techniques, empirical practices, and tools to properly conduct measurement-based experiments on the mobile Web.
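As one example of the kind of measurement-based setup such studies employ, the sketch below drives a mobile-emulated Chrome browser with Selenium and reads W3C Navigation Timing metrics. The target URL and device profile are placeholders, not taken from any of the primary studies, and Selenium with a matching ChromeDriver is assumed to be installed.

```python
# Hedged sketch of a common mobile-Web measurement setup: mobile-emulated
# Chrome driven by Selenium, reading W3C Navigation Timing entries.
from selenium import webdriver

opts = webdriver.ChromeOptions()
opts.add_experimental_option("mobileEmulation", {"deviceName": "Pixel 2"})
driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://example.org")  # placeholder page under test
    nav = driver.execute_script(
        "return performance.getEntriesByType('navigation')[0].toJSON();"
    )
    # Typical QoE-related metrics extracted in such experiments:
    print("TTFB (ms):", nav["responseStart"] - nav["requestStart"])
    print("DOMContentLoaded (ms):", nav["domContentLoadedEventEnd"])
    print("Page load (ms):", nav["loadEventEnd"])
finally:
    driver.quit()
```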
{"title":"Measurement-based Experiments on the Mobile Web: A Systematic Mapping Study","authors":"Omar De Munk, I. Malavolta","doi":"10.1145/3463274.3463318","DOIUrl":"https://doi.org/10.1145/3463274.3463318","url":null,"abstract":"The mobile Web is growing as more and more people use a smart device to access online services. This rapid growth of mobile Web usage is accompanied by the evolution of the mobile Web browser as a fully fledged software platform. Due to these two trends, the expectations of users in terms of quality of experience (QoE) when browsing the Web on their mobile device has increased drastically. As a result, the number of studies using measurement-based experiments to investigate the factors influencing QoE has grown. However, conducting measurement-based experiments on the mobile Web is not a trivial task as it requires a significant experience and knowledge about both technical and methodological aspects. Unfortunately, there is no systematic study on the state of the art of conducting measurement-based experiments on the mobile Web that could guide researchers and practitioners when planning and performing such experiments. The goal of this work is to build a map of existing studies that conduct measurement-based experiments on the mobile Web. In total 640 potentially relevant studies are identified. After a rigorous selection procedure the set of primary studies consists of 28 papers from which we extracted data and gathered insights. Specifically, we investigate on (i) which metrics are collected, how they are measured, and how they are analysed, (ii) the platforms on which the experiments are run, (iii) what subjects are used, and (iv) the used tools and environments under which the experiments are run. This study benefits researchers and practitioners by presenting common techniques, empirical practices, and tools to properly conduct measurement-based experiments on the mobile Web.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121181209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Developer Expertise from the Statistical Distribution of Programming Syntax Patterns
Arghavan Moradi Dakhel, M. Desmarais, Foutse Khomh
Accurate assessment of developer expertise is crucial when assigning an individual to a task or, more generally, to a project that requires an adequate level of knowledge. Potential programmers can come from a large pool, so automatic means of assessing expertise from written programs would be highly valuable in this context. Previous work towards this goal has generally used heuristics, such as the Line 10 Rule, or linguistic information in source files, such as comments or identifiers, to represent developers’ knowledge and evaluate their expertise. In this paper, we focus on the mastery of syntactic patterns as evidence of programming knowledge and propose a theoretical definition of programming knowledge based on the distribution of Syntax Patterns (SPs) in source code, which we model with Zipf’s law. We first validate the model and its scalability on synthetic data for “Expert” and “Novice” programmers. This provides a ground truth and allows us to explore the space of validity of the model. Then, we assess the performance of the model on real data from programmers. The results show that our proposed approach outperforms recent state-of-the-art approaches for the task of classifying programming experts.
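The core idea can be illustrated as follows: count the frequencies of syntactic constructs in a program and estimate the exponent of their rank-frequency distribution. Plain Python AST node types stand in here for the paper's richer notion of Syntax Patterns, and the input file name is a placeholder.

```python
# Sketch: rank-frequency distribution of syntactic constructs and its
# Zipf exponent. AST node types are a stand-in for the paper's SPs.
import ast
from collections import Counter
import numpy as np

source = open("some_program.py").read()  # placeholder input file
node_types = Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

freqs = np.array(sorted(node_types.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Under Zipf's law, log(freq) is roughly c - s * log(rank), with s near 1.
slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")
```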
{"title":"Assessing Developer Expertise from the Statistical Distribution of Programming Syntax Patterns","authors":"Arghavan Moradi Dakhel, M. Desmarais, Foutse Khomh","doi":"10.1145/3463274.3463343","DOIUrl":"https://doi.org/10.1145/3463274.3463343","url":null,"abstract":"Accurate assessment of developer expertise is crucial for the assignment of an individual to perform a task or, more generally, to be involved in a project that requires an adequate level of knowledge. Potential programmers can come from a large pool. Therefore, automatic means to provide such assessment of expertise from written programs would be highly valuable in such context. Previous works towards this goal have generally used heuristics such as Line 10 Rule or linguistic information in source files such as comments or identifiers to represent the knowledge of developers and evaluate their expertise. In this paper, we focus on syntactic patterns mastery as an evidence of knowledge in programming and propose a theoretical definition of programming knowledge based on the distribution of Syntax Patterns (SPs) in source code, namely Zipf’s law. We first validate the model and its scalability over synthetic data of “Expert” and “Novice” programmers. This provides a ground truth and allows us to explore the space of validity of the model. Then, we assess the performance of the model over real data from programmers. The results show that our proposed approach outperforms the recent state of the art approaches for the task of classifying programming experts.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116667195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behaviour-Driven Development (BDD) stories have gained considerable attention in recent years as an effective way to specify and test user requirements in agile software development projects. External testing frameworks also allow developers to automate the execution of BDD stories and check whether a fully functional software system behaves as expected. However, other software artifacts may quite often lose synchronization with the stories, and many inconsistencies can arise with respect to requirements representation. This paper reports on preliminary empirical findings regarding the performance of two existing approaches in the literature intended to support consistency assurance between BDD stories and software artifacts. The first approach involves the parsing of BDD stories in order to identify conceptual elements to automatically generate consistent class diagrams, while the second approach seeks to identify interaction elements to automatically assess the consistency of task models and GUI prototypes. We report on the precision of these approaches when applied to a study with BDD stories previously written by Product Owners (POs). Based on the results, we also identify a set of challenges and opportunities for BDD stories in the consistency assurance of such artifacts.
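To illustrate the parsing step behind the first approach, here is a deliberately simplified sketch: it splits a Gherkin scenario into steps and extracts capitalised words as candidate class-diagram concepts. The approaches studied in the paper use fuller NLP pipelines; this heuristic and the sample story are illustrative only.

```python
# Simplified BDD story parsing: extract candidate domain concepts that a
# tool might map to classes. Capitalised-word heuristic is an assumption.
import re

story = """\
Scenario: Checkout
  Given the Customer has a Cart with one Product
  When the Customer submits the Order
  Then the System sends an Invoice to the Customer
"""

steps = re.findall(r"^\s*(Given|When|Then|And)\s+(.*)$", story, re.MULTILINE)
concepts = set()
for _keyword, text in steps:
    concepts.update(re.findall(r"\b[A-Z][a-z]+\b", text))

print(sorted(concepts))
# ['Cart', 'Customer', 'Invoice', 'Order', 'Product', 'System']
```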
{"title":"Empirical Findings on BDD Story Parsing to Support Consistency Assurance between Requirements and Artifacts","authors":"T. Silva, B. Fitzgerald","doi":"10.1145/3463274.3463807","DOIUrl":"https://doi.org/10.1145/3463274.3463807","url":null,"abstract":"Behaviour-Driven Development (BDD) stories have gained considerable attention in recent years as an effective way to specify and test user requirements in agile software development projects. External testing frameworks also allow developers to automate the execution of BDD stories and check whether a fully functional software system behaves as expected. However, other software artifacts may quite often lose synchronization with the stories, and many inconsistencies can arise with respect to requirements representation. This paper reports on preliminary empirical findings regarding the performance of two existing approaches in the literature intended to support consistency assurance between BDD stories and software artifacts. The first approach involves the parsing of BDD stories in order to identify conceptual elements to automatically generate consistent class diagrams, while the second approach seeks to identify interaction elements to automatically assess the consistency of task models and GUI prototypes. We report on the precision of these approaches when applied to a study with BDD stories previously written by Product Owners (POs). Based on the results, we also identify a set of challenges and opportunities for BDD stories in the consistency assurance of such artifacts.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116139930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-level Ordinal Maintainability Prediction Based on Static Code Metrics
Markus Schnappinger, Arnaud Fietzke, A. Pretschner
One of the greatest challenges in software quality control is the efficient and effective measurement of maintainability. Thorough expert assessments are precise yet slow and expensive, whereas automated static analysis yields imprecise yet rapid feedback. Several machine learning approaches aim to combine the advantages of both. However, most prior studies did not rely on expert judgment, predicting the number of changed lines as a proxy for maintainability instead, or were biased towards a small group of experts. In contrast, the present study builds on a manually labeled and validated dataset. Prediction is based on static code metrics, among which we found simple structural metrics, such as the size of a class and of its methods, to yield the highest predictive power for maintainability. Using just a small set of these metrics, our models can distinguish easy-to-maintain from hard-to-maintain code with an F-score of 91.3% and an AUC of 82.3%. In addition, we perform a more detailed ordinal classification and compare its quality with the performance of experts, using the deviations between individual experts’ ratings and the eventually determined consensus of all experts. In sum, our models achieve the same level of performance as an average human expert; in fact, their accuracy and mean squared error outperform human performance. We hence argue that our models provide an automated and trustworthy prediction of software maintainability.
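A hedged sketch of such a prediction setup is shown below, using synthetic data and a generic random-forest classifier; the study's actual dataset, feature set, and model choice are not reproduced here.

```python
# Illustrative only: classifier over simple structural metrics (class LOC,
# mean method LOC) with synthetic labels standing in for expert judgments.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import numpy as np

# Features: [class LOC, mean method LOC]; label: 1 = hard to maintain.
X = np.array([[120, 8], [90, 6], [950, 45], [1400, 60],
              [200, 10], [1100, 50], [80, 5], [1250, 40]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=4, scoring="f1")
print("F1 per fold:", scores.round(2))
```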
{"title":"Human-level Ordinal Maintainability Prediction Based on Static Code Metrics","authors":"Markus Schnappinger, Arnaud Fietzke, A. Pretschner","doi":"10.1145/3463274.3463315","DOIUrl":"https://doi.org/10.1145/3463274.3463315","url":null,"abstract":"One of the greatest challenges in software quality control is the efficient and effective measurement of maintainability. Thorough expert assessments are precise yet slow and expensive, whereas automated static analysis yields imprecise yet rapid feedback. Several machine learning approaches aim to integrate the advantages of both concepts. However, most prior studies did not adhere to expert judgment and predicted the number of changed lines as a proxy for maintainability, or were biased towards a small group of experts. In contrast, the present study builds on a manually labeled and validated dataset. Prediction is done using static code metrics where we found simple structural metrics such as the size of a class and its methods to yield the highest predictive power towards maintainability. Using just a small set of these metrics, our models can distinguish easy from hard to maintain code with an F-score of 91.3% and AUC of 82.3%. In addition, we perform a more detailed ordinal classification and compare the quality of the classification with the performance of experts. Here, we use the deviations between the individual expert’s ratings and the eventually determined consensus of all experts. In sum, our models achieve the same level of performance as an average human expert. In fact, the obtained accuracy and mean squared error outperform human performance. We hence argue that our models provide an automated and trustworthy prediction of software maintainability.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122146548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relationships between Personality Traits and Productivity in a Multi-platform Development Context
Maria Caulo, R. Francese, G. Scanniello, G. Tortora
In this paper, we conduct an empirical study investigating how personality traits can affect the productivity of software developers in the context of the distributed development of multi-platform apps within a software project hosted on GitHub. Participants were 31 master’s students in Computer Science, grouped into 13 teams. Data were gathered from responses to the IPIP-NEO-120 questionnaire, a widely adopted instrument for estimating personality traits, and from the software projects themselves. We analyzed the correlation between personality traits (and their facets) and the productivity metrics. The results of this preliminary study suggest that the most productive participants are those with the highest scores for the personality traits of Agreeableness and Conscientiousness.
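The correlation analysis the abstract describes can be sketched as follows, with made-up trait scores and a commit-count productivity proxy used purely for illustration.

```python
# Illustrative rank correlation between a Big Five trait score (as an
# IPIP-NEO-120 instrument would yield) and a productivity proxy.
from scipy.stats import spearmanr

conscientiousness = [78, 62, 85, 70, 90, 55, 73, 88]   # trait score (made up)
commits_per_week  = [14, 9, 17, 11, 19, 7, 12, 18]     # productivity proxy

rho, p_value = spearmanr(conscientiousness, commits_per_week)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```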
{"title":"Relationships between Personality Traits and Productivity in a Multi-platform Development Context","authors":"Maria Caulo, R. Francese, G. Scanniello, G. Tortora","doi":"10.1145/3463274.3463327","DOIUrl":"https://doi.org/10.1145/3463274.3463327","url":null,"abstract":"In this paper, we conduct an empirical study aiming at investigating how personality traits can affect the productivity of software developers in the context of the distributed development of multi-platform apps within a software project stored in GitHub. Participants were 31 master’s students in Computer Science grouped in 13 teams. Data were gathered from the compilation of the IPIP-NEO-120 questionnaire, a largely adopted tool to estimate personality traits, and from the software projects. We analyzed the correlation between personality traits (and their facets) and the productivity metrics. The results of this preliminary study seem to reveal that the most productive participants are those with the highest scores for the personality traits of Agreeableness and Conscientiousness.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125524399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The world is facing an energy crisis. Many researchers have presented smart building concepts built on IoT devices, but these devices lack the computational power to process their data and act on the results locally. According to one estimate, the number of IoT devices reached 50 billion in 2020. Most IoT devices send data to the cloud for processing, which increases latency and network usage, because cloud servers cannot handle millions of simultaneous requests. In this paper, we propose a framework that minimizes latency and energy consumption in cloud computing. The proposed framework uses edge computing, and most of the processing is performed on fog nodes. The framework contributes significantly to energy saving by supporting behavioral and physical changes in the cloud network.
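The abstract leaves the placement logic unspecified, so the sketch below assumes a simple saturation rule: prefer the low-latency fog node until its queue fills, then fall back to the cloud. All thresholds and latency figures are invented for illustration and are not values from the paper.

```python
# Assumed fog/cloud offloading rule; names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    queue_len: int
    capacity: int
    latency_ms: float  # round-trip latency to this node

def place_request(fog: Node, cloud: Node) -> Node:
    """Prefer the fog node unless it is saturated."""
    if fog.queue_len < fog.capacity:
        return fog
    return cloud

fog = Node("fog-1", queue_len=3, capacity=10, latency_ms=5.0)
cloud = Node("cloud", queue_len=0, capacity=10_000, latency_ms=80.0)

target = place_request(fog, cloud)
print(f"route to {target.name} (~{target.latency_ms} ms)")  # fog-1 (~5.0 ms)
```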
{"title":"Fog Based Energy Efficient Process Framework for Smart Building","authors":"Danish Iqbal, Barbora Buhnova","doi":"10.1145/3463274.3463364","DOIUrl":"https://doi.org/10.1145/3463274.3463364","url":null,"abstract":"The world is facing an energy crisis. The smart building concept is presented by many researchers using IoT devices that do not have sufficient computational power to compute the data to decide about the results. According to an estimation, there will be 50 billion IoT devices in 2020. Most IoT devices send data to the cloud for processing. Latency and network usage will be increased in cloud servers because the cloud servers will not be able to handle millions of requests spontaneously. In this paper, we have proposed a framework that minimizes latency and energy consumption in cloud computing. The proposed framework uses edge computing and most of the processing is performed on fog nodes. The framework contributes significantly to energy saving by supporting behavioral and physical changes in the cloud network.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115097111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}