Vendor lock-in is a key issue in auto-scaling configuration: the scaling configuration of a service cannot be transferred automatically when the service is migrated from one cloud to another. To facilitate fast service deployment, the operations of auto-scaling configuration and deployment need to be automated.
{"title":"A model driven method to deploy auto-scaling configuration for cloud services","authors":"H. Alipour, Yan Liu","doi":"10.1145/2993274.3011285","DOIUrl":"https://doi.org/10.1145/2993274.3011285","url":null,"abstract":"Vendor lock-in is the issues in auto-scaling configuration; scaling configuration of a service cannot automatically transfer when the service is migrated from one cloud to another cloud. To facilitate fast service deployment, there is a need to automate the operations of auto-scaling configuration and deployment.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"15 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115202842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
About a year ago I was trying to improve our automated deployment and testing processes, but found that reliably getting access to a functioning environment just wasn't possible. At the time our test environments were pets. Each was built partially by script and then finished by hand, with a great expenditure of time, effort, and frustration for everyone involved. After some period of use, which varied depending on what you tested on the environment, it would break again, and you'd have to make a frequently wrong decision about whether to just start fresh (which could take up to a week) or try to debug the environment instead (which could take even longer, and often did). Here's how we went about automating the creation and management of our test environments to increase developer productivity, reduce costs, and let us experiment with infrastructure configuration at reduced risk.
{"title":"The SpudFarm: converting test environments from pets into cattle","authors":"Benjamin Lau","doi":"10.1145/2993274.2993280","DOIUrl":"https://doi.org/10.1145/2993274.2993280","url":null,"abstract":"About a year ago I was trying to improve our automated deployment and testing processes but found that getting access to a functioning environment reliably just wasn't possible. At the time our test environments were pets. Each was built partially by script and then finished by hand with a great expenditure of time, effort and frustration for everyone involved. After some period of use, that varied depending on what you tested on the environment, it would break again and you'd have to make some, frequently wrong, decision about whether to just start fresh (that could take up to a week) or try to debug the environment instead (that could take even longer and often did). Here's how we went about automating the creation and management of our test environment to increase developer productivity, reduce costs and increase our ability to experiment with infrastructure configuration with reduced risk.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124545902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging, due to the lack of tool support. In this paper, we propose a platform to build support tools for GNU Autotools build systems. The platform provides an abstraction of the build system to be used in different analysis techniques.
{"title":"Escaping AutoHell: a vision for automated analysis and migration of autotools build systems","authors":"Jafar M. Al-Kofahi, T. Nguyen, Christian Kästner","doi":"10.1145/2993274.2993279","DOIUrl":"https://doi.org/10.1145/2993274.2993279","url":null,"abstract":"GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging, due to the lack of tool support. In this paper, we propose a platform to build support tools for GNU Autotools build systems. The platform provides an abstraction of the build system to be used in different analysis techniques.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115297933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The market and user characteristics of mobile apps make their release management different from that of proprietary software products and web services. Despite the wealth of information in users' feedback on an app, an in-depth analysis of app releases is difficult due to the inconsistency and uncertainty of the information. To better understand and potentially improve app release processes, we analyze major, minor, and patch releases of apps that follow semantic versioning. In particular, we were interested in the difference between marketed and not-marketed releases. Our results show that, in general, major, minor, and patch releases differ significantly in release cycle duration, nature, and change velocity. We also observed a significant difference between marketed and not-marketed mobile app releases in terms of cycle duration, the nature and extent of changes, and the number of opened and closed issues.
{"title":"Analysis of marketed versus not-marketed mobile app releases","authors":"Maleknaz Nayebi, Homayoon Farrahi, G. Ruhe","doi":"10.1145/2993274.2993281","DOIUrl":"https://doi.org/10.1145/2993274.2993281","url":null,"abstract":"Market and user characteristics of mobile apps make their release managements different from proprietary software products and web services. Despite the wealth of information regarding users' feedback of an app, an in-depth analysis of app releases is difficult due to the inconsistency and uncertainty of the information. To better understand and potentially improve app release processes, we analyze major, minor and patch releases for releases following semantic versioning. In particular, we were interested in finding out the difference between marketed and not-marketed releases. Our results show that, in general, major, minor and patch releases have significant differences in the release cycle duration, nature and change velocity. We also observed that there is a significant difference between marketed and non-marketed mobile app releases in terms of cycle duration, nature and the extent of changes, and the number of opened and closed issues.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134229528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction

Games are traditionally developed as a boxed product. There is a development phase, followed by a bug-fixing phase. Once the level of quality is acceptable, the game is released and the development team moves on to a new project; they rarely need to maintain the product or release updates after the first few months. Games are architected as a monolithic application, developed in C++. The game package contains the executable and all the art content, which makes up most of the package.

During the development phase, the level of quality is generally low and the game crashes a lot. Developers mainly care about implementing their own feature and do not think much about the stability and quality of the game as a whole. They spend very little time writing automated tests and rely on manual testers to verify features. It's a common practice to develop features on feature branches. The perceived benefit is that developers are productive because they can submit their work to feature branches. All features come together in the bug-fixing phase, when all the different parts are integrated; at this stage, many things are broken. This is a clear example of local optimisation, as a feature submitted on a feature branch does not provide any value until it's integrated with the rest of the game and can be released. The number of bugs can run to several thousand, and everyone crunches whilst getting the game to an acceptable level.

Rare's Approach

At Rare, we decided to change our approach and adopt Continuous Delivery. The main advantages compared to the traditional approach are:
• Sustainably delivering new features that are useful to players over a long period of time.
• Minimising crunch, and having happier and more productive developers.
• Applying a hypothesis-driven development mind-set and getting rapid feedback on whether a feature is achieving the intended outcome. This allows us to listen to user feedback and deliver a better-quality game that's more fun and enjoyable for players.
• Reducing the cost of having a large manual test team.
{"title":"Adopting continuous delivery in AAA console games","authors":"Jafar Soltani","doi":"10.1145/2993274.2993276","DOIUrl":"https://doi.org/10.1145/2993274.2993276","url":null,"abstract":"Introduction Games are traditionally developed as a boxed-product. There is a development phase, followed by a bug-fixing phase. Once the level of quality is acceptable, game is released, development team moves on to a new project. They rarely need to maintain the product and release updates after the first few months. Games are architected as a monolithic application, developed in C++. Game package contains the executable and all the art contents, which makes up most of the package. During the development phase, the level of quality is generally low, game crashes a lot. Developers mainly care about implementing their own feature and do not think too much about the stability and quality of the game as a whole. Developers spend very little time writing automated tests and rely on manual testers to verify features. It's a common practice to develop features on feature branches. The perceived benefit is developers are productive because they can submit their work to feature branches. All features come together in the bug-fixing phase when all different parts are integrated together. At this stage, many things are broken. This is a clear example of local optimisation, as a feature submitted in a feature branch does not provide any values until it’s integrated with the rest of the game and can be released. Number of bugs could be several thousands. Everyone crunches whilst getting the game to an acceptable level. Rare’s Approach At Rare, we decided to change our approach and adopt Continuous Delivery. The main advantages compared to traditional approach are: •Sustainably delivering new features that are useful to players over a long period of time. •Minimising crunch and having happier and productive developers. •Applying hypothesis-driven development mind-set and getting rapid feedback on whether a feature is achieving the intended outcome. This allows us to listen to user feedback and deliver a better quality game that’s more fun and enjoyable for players. •Reduce the cost of having a large manual test team.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133303985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Installers generate a huge amount of data such as product files, registry entries, signature bits, and permissions. Product stakeholders require the ability to compare two builds. Usually, this comparison is performed manually by deploying the builds every time a comparison is required, then running a script or a tool like Beyond Compare to evaluate the differences or to verify signing, registry, or permission issues. The data is then stored in XLS or CSV files for further action. The real problem occurs when a similar comparison needs to be performed for multiple builds in a release cycle. In this scenario, the above-mentioned process becomes extremely inefficient, as it requires an enormous amount of time and is also prone to errors. To solve this problem efficiently, we have developed a system that allows users to view their product's structural changes and run comparisons across releases, builds and versions.
{"title":"Your build data is precious, donźt waste it! leverage it to deliver great releases","authors":"Rishika Karira, Vinay Awasthi","doi":"10.1145/2993274.3011283","DOIUrl":"https://doi.org/10.1145/2993274.3011283","url":null,"abstract":"Installers generate a huge amount of data such as product files, registries, signature bits, and permissions.Product stakeholders require the ability to compare the difference between two builds. Usually, this comparison is performed manually by deploying the builds every time such a comparison is required, followed by running some script or tool like Beyond Compare to evaluate the differences or verifying signing/registry or permission issues. The data is then stored in XLS or CSV files for further actions. The real problem occurs when a similar comparison needs to be accomplished for multiple builds in a release cycle. In this scenario, the above-mentioned process becomes extremely inefficient as it requires an enormous amount of time and is also prone to errors or faults. To solve this problem efficiently, we have developed a system that allows users to view their product’s structural changes and run comparisons across releases, builds and versions.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116857279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At Amazon, Release Engineering falls under what we call Operational Excellence: designing, implementing, maintaining, and releasing a scalable product. There is an even more basic component that is often ignored: source control. Good source control practices are necessary but not sufficient for delivering good software. Over the 25+ years source control has been in use, each tool has come with its own set of pitfalls: CVS, Subversion, Mercurial, and most recently, Git. For decades, the unwritten rule has been for each organization to identify and mitigate these pitfalls independently, with the expectation that the next innovation would remediate them. This approach scales neither for large organizations such as Amazon nor for the software engineering community at large. The real source of this dysfunction—remote collaboration between software engineers—must be examined and ultimately fixed. In the interim, it is up to the engineering community to share practices, independent of software process, to make up the difference. At its core, source control is a fundamental tool of software engineers, expected to be easily understood and to “just work”; this assumption is invalid on a number of dimensions. Neither Software Configuration Management (SCM) nor the tools used are intuitive to new practitioners, and both must be taught. The changing landscape of newer tools misleads even expert users of past tools, who are not screened for this critical skill. And finally, success depends on synthesizing past experience and tuning a pre-determined process to both the project goals and the team. Success, then, is stacked against the engineering team—so what happens when source control usage goes horribly wrong? The baseline and team end up in “Git Hell,” slowed down, or even blocked, by the very tool that facilitates collaboration and parallel development. “Git Hell” originates from various sources: poor tool design, misuse or misconfiguration of the command line interface, and lack of understanding of the “nuts and bolts” of the tool. For example, poor interface design or configuration, even with the command line interface, has widespread impact. A substantial flaw in the mechanics of git push caused substantial pain at multiple engineering firms. The interface was straightforward: a push sends all branches with updates to the server; adding the -f option forces an update; combining them proved disastrous, as an engineer with minimal knowledge of git could harm the integrity of the baseline without even realizing it. This version of git required each developer to add local configuration to his workstation to avoid the pitfall, ensuring that newcomers would repeat the mistake. These classes of issues are repeated at company after company, group after group—illustrating a systemic problem with git, its configuration, the instruction in its usage, and the interaction between collaborating engineers. To combat this, I generalized preventative measures as a workaround in a workshop entitled “Get out of Git Hell,” to be shared among engineers regardless of experience or process, at least until the root causes can be studied and remediated.
{"title":"Get out of Git hell: preventing common pitfalls of Git","authors":"David A. Lippa","doi":"10.1145/2993274.3011284","DOIUrl":"https://doi.org/10.1145/2993274.3011284","url":null,"abstract":"At Amazon, Release Engineering falls under what we call Operational Excellence: designing, implementing, maintaining, and releasing a scalable product. There is an even more basic component that is often ignored: source control. Good source control practices are necessary but not sufficient for delivering good software. Over the 25+ years source control has been used, each tool has come with its own set of pitfalls: CVS, subversion, mercurial, and most recently, git. For decades, the unwritten rule has been for each organization to identify and mitigate these pitfalls independently, with an expectation that the next innovation would remediate it. This approach scales neither for large organizations such as Amazon nor the software engineering community at large. The real source of this dysfunction—remote collaboration between software engineers—must be examined and ultimately fixed. In the interim, it is up to the engineering community to share practices independent of software process to make up the difference. At its core, source control is a fundamental tool of software engineers, expected to be easily understood and “just work;” this assumption is invalid on a number of dimensions. Neither Software Configuration Management (SCM) nor the tools used are intuitive to new practitioners, and must be taught. The changing landscape of newer tools misleads even expert users of past tools who are not screened for this critical skill. And finally, success is dependent upon synthesis of past experience and tuning a pre-determined process to both project goals and the team. Success, then, is stacked against the engineering team—so what happens when source control usage goes horribly wrong? The baseline and team end up in “Git Hell,” slowed down, or even blocked, by the very tool that facilitates collaboration and parallel development. “Git Hell” originates from various sources: poor tool design, misuse or misconfiguration of the command line interface, and lack of understanding of the “nuts and bolts” of the tool. For example, poor interface design or configuration, even with the command line interface, has widespread impact. A substantial flaw in the mechanics of git push caused substantial pain at multiple engineering firms. The interface was straightforward: a push sends all branches with updates to the server; adding the -f option forces an update; combining them proved disastrous, as an engineer with minimal knowledge of git could harm the integrity of the baseline without even realizing it. This prior version required each developer to add local configuration to his workstation, ensuring others in the future would repeat the mistake. These classes of issues are repeated at company after company, group after group—illustrating a systemic problem with git, its configuration, the instruction in its usage, and the interaction between collaborating engineers. 
To combat this, I generalized preventative measures as a workaround in a workshop entitled “Get ","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115325661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
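The push behavior described matches Git before version 2.0, when push.default defaulted to "matching" and a bare git push sent every matching branch. Below is a hedged sketch of the kind of per-workstation configuration the abstract alludes to; the exact settings used at Amazon are not stated, and this script is illustrative.

```python
import subprocess

# Hedged sketch: limit `git push` to the current branch only, so a stray
# `git push -f` cannot force-update branches the engineer never meant to
# touch. `push.default=simple` is a real Git setting (the default since
# Git 2.0); using it as *the* fix here is an assumption, not the talk's
# documented prescription.
def harden_git_push():
    subprocess.run(["git", "config", "--global", "push.default", "simple"],
                   check=True)

if __name__ == "__main__":
    harden_git_push()
```

Baking such settings into a shared setup script, rather than each developer's memory, is exactly the kind of practice-sharing the abstract argues for.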
Shopify is one of the largest Rails apps in the world, yet it remains massively scalable and reliable. The platform manages large unexpected spikes in traffic that accompany events such as new product releases, holiday shopping seasons, and flash sales, and has been benchmarked at over 25,000 requests per second, all while powering more than 300,000 businesses. Even at such a large scale, all our developers still push to master and regularly deploy Shopify within 4 minutes. My talk will break down everything that can happen when deploying Shopify or any really big application.
{"title":"Building a deploy system that works at 40000 feet","authors":"Kat Drobnjakovic","doi":"10.1145/2993274.2993275","DOIUrl":"https://doi.org/10.1145/2993274.2993275","url":null,"abstract":"Shopify is one of the largest Rails apps in the world and yet remains to be massively scalable and reliable. The platform is able to manage large unexpected spikes in traffic that accompany events such as new product releases, holiday shopping seasons and flash sales, and has been benchmarked to process over 25,000 requests per second, all while powering more than 300,000 businesses. Even at such a large scale, all our developers still continue to push to master and regularly deploy Shopify within 4 minutes. My talk will break down everything that can happen when deploying Shopify or any really big application.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130382527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative software development presents organizations with a near-constant flow of day-to-day challenges, and there is no available off-the-shelf solution that covers all needs. This paper provides insight into the hurdles that Scality's Engineering team faced in developing and extending a sophisticated storage solution, while coping with ever-growing development teams, challenging - and regularly shifting - business requirements, and non-trivial new feature development. The authors present a novel combination of a Git-based Version Control and Branching model with a set of innovative tools dubbed GitWaterFlow to cope with the issues encountered, including the need to both support old product versions and to provide time-critical delivery of bug fixes. In the spirit of Continuous Delivery, Scality Release Engineering aims to ensure high quality and stability, to present short and predictable release cycles, and to minimize development disruption. The team's experience with the GitWaterFlow model suggests that the approach has been effective in meeting these goals in the given setting, with room for unceasing fine-tuning and improvement of processes and tools.
{"title":"GitWaterFlow: a successful branching model and tooling, for achieving continuous delivery with multiple version branches","authors":"R. B. Rayana, S. Killian, Nicolas Trangez, A. Calmettes","doi":"10.1145/2993274.2993277","DOIUrl":"https://doi.org/10.1145/2993274.2993277","url":null,"abstract":"Collaborative software development presents organizations with a near-constant flow of day-to-day challenges, and there is no available off-the-shelf solution that covers all needs. This paper provides insight into the hurdles that Scality's Engineering team faced in developing and extending a sophisticated storage solution, while coping with ever-growing development teams, challenging - and regularly shifting - business requirements, and non-trivial new feature development. The authors present a novel combination of a Git-based Version Control and Branching model with a set of innovative tools dubbed GitWaterFlow to cope with the issues encountered, including the need to both support old product versions and to provide time-critical delivery of bug fixes. In the spirit of Continuous Delivery, Scality Release Engineering aims to ensure high quality and stability, to present short and predictable release cycles, and to minimize development disruption. The team's experience with the GitWaterFlow model suggests that the approach has been effective in meeting these goals in the given setting, with room for unceasing fine-tuning and improvement of processes and tools.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122287668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The software product release build process usually involves posting many artifacts that are shipped or used as part of Quality Assurance or Quality Engineering. All the artifacts that are shared or posted together constitute a successful build that can be shipped. Sometimes a few of the artifacts fail to be posted to the shared location, which requires immediate attention so that the artifact can be reposted with manual intervention. A system and process is implemented for analyzing metadata generated by an automated build process to detect inconsistencies in the generation of build artifacts. The system analyzes data retrieved from metadata streams; once the start of an expected metadata stream is detected, the system generates a list of artifacts that the build is expected to produce, based on the prediction model. Information attributes of the metadata stream are used to decide on the anticipated behavior of the build. Events are generated based on whether the build data is consistent with the predictions made by the model. The system enables error detection and recovery in an automated build process, and it adapts to a changing build environment by analyzing the data stream for historically relevant data properties.
{"title":"System for meta-data analysis using prediction based constraints for detecting inconsistences in release process with auto-correction","authors":"A. Bhushan, Pradeep R. Revankar","doi":"10.1145/2993274.2993278","DOIUrl":"https://doi.org/10.1145/2993274.2993278","url":null,"abstract":"The Software product release build process usually involves posting a lot of artifacts that are shipped or used as part of the Quality Assurance or Quality Engineering. All the artifacts that are shared or posted together constitute a successful build that can be shipped out. Sometimes, a few of the artifacts might fail to be posted to a shared location that might need an immediate attention in order to repost the artifact with manual intervention. A system and process is implemented for analyzing metadata generated by an automated build process to detect inconsistencies in generation of build artifacts. The system analyzes data retrieved from meta-data streams, once the start of an expected metadata stream is detected the system generates a list of artifacts that the build is expected to generate, based on the prediction model. Information attributes of the meta-data stream are used for deciding on the anticipated behavior of build. Events are generated based on whether the build data is consistent with the predictions made by the model. The system can enable error detection and recovery in an automated build process. The system can adapt to changing build environment by analyzing data stream for historically relevant data properties.","PeriodicalId":143542,"journal":{"name":"Proceedings of the 4th International Workshop on Release Engineering","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129885250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}