Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.19
L. Giang, Dongwon Kang, Doo-Hwan Bae
Our daily life increasingly relies on Web applications, which provide abundant services to support our everyday activities. As a result, quality assurance for Web applications is becoming important and has gained much attention from the software engineering community. In recent years, many software fault prediction models have been constructed to predict which software modules are likely to be faulty during operation, in order to enhance software quality. Such models can be used to raise the effectiveness of software testing activities and to reduce project risks. Although current fault prediction models can be applied to predict faulty modules of Web applications, one limitation is that they do not consider the particular characteristics of Web applications. In this paper, we build fault prediction models tailored to Web applications after analyzing the major characteristics that may affect their quality. An experimental study shows that our approach achieves very promising results.
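The abstract does not detail the model itself; purely as an illustration of what a module-level fault predictor does, the sketch below labels a module as fault-prone by a majority vote among its k nearest neighbours in metric space. The metrics (LOC, fan-in, embedded-script count) and the toy data are invented for the example, not the Web-specific characteristics analyzed in the paper.

```python
import math

def predict_faulty(train, labels, module, k=3):
    """Label a module as faulty by majority vote of its k nearest
    neighbours in metric space. The feature set here is illustrative
    only (LOC, fan-in, embedded-script count)."""
    dists = sorted((math.dist(m, module), l) for m, l in zip(train, labels))
    votes = [l for _, l in dists[:k]]
    return votes.count(True) > k // 2

# toy training data: (LOC, fan-in, embedded-script count) per module
train = [(120, 2, 0), (90, 1, 1), (800, 9, 14), (650, 7, 9)]
labels = [False, False, True, True]

print(predict_faulty(train, labels, (700, 8, 11)))  # -> True (large, highly coupled)
print(predict_faulty(train, labels, (100, 1, 0)))   # -> False
```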
Title: Software Fault Prediction Models for Web Applications
Journal: 2010 IEEE 34th Annual Computer Software and Applications Conference Workshops
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.18
Hua Li, Yongge Peng, Xinming Ye, J.-.Y. Yue
Property-based testing tests the properties of interest in a piece of software. It can reduce the amount of testing work and further improve testing efficiency. Program slicing is a way to analyze and decompose system code. In this paper, categories of properties are given and primitive properties are informally defined. A property extraction method is presented and a Petri net model is constructed. The property model and dynamic slicing are combined to generate test sequences. As an example, the system structure of Minix3 is introduced: Exec, one of the key system calls of Minix3, is modeled and sliced, and its test sequences are generated. Minix3 provides open interfaces and a modular structure, and the slicing results can be used to improve software reuse. Finally, conclusions and future research directions are presented.
Title: Test Sequence Generation from Combining Property Modeling and Program Slicing
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.16
Tuoye Xu, Tong Li, Lin Liu, B. Bryant
Negotiation is an important activity in both conventional requirements engineering and the new online service era. It encompasses a wide range of functional and non-functional (QoS) requirements, including price, performance, security, delivery time, etc. In order to meet service requirements in an open and distributed environment, systems need to adapt to changing needs. In this paper, we propose a negotiation modeling and evaluation mechanism for strategic actors who have service requirements and capabilities. The strategic modeling approach allows actors in service environments to negotiate service level agreements covering service functionalities and multiple QoS categories; it also includes a generic service negotiation protocol that enables service requestors and providers to select an optimal alternative. Offers are generated and evaluated in accordance with each actor's preferences and best interests. We use typical service scenarios as running examples to illustrate the proposed approach and evaluate its viability.
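The evaluation function is not spelled out in the abstract; a common way to evaluate offers over multiple QoS categories, sketched here, is a weighted additive utility over normalised attribute values. The weights, value ranges, and offers below are invented for illustration.

```python
def score(offer, prefs):
    """Utility of an offer as a weighted sum of normalised QoS values.
    The last flag says whether more is better (e.g. throughput) or
    less is better (e.g. price, delivery time)."""
    total = 0.0
    for attr, (weight, lo, hi, more_is_better) in prefs.items():
        x = (offer[attr] - lo) / (hi - lo)          # normalise to [0, 1]
        total += weight * (x if more_is_better else 1 - x)
    return total

def best_offer(offers, prefs):
    name, _ = max(offers, key=lambda o: score(o[1], prefs))
    return name

prefs = {  # attribute: (weight, expected min, expected max, more-is-better?)
    "price":      (0.5, 0, 100, False),
    "throughput": (0.3, 0, 1000, True),
    "delivery":   (0.2, 0, 30, False),
}
offers = [
    ("A", {"price": 80, "throughput": 900, "delivery": 5}),
    ("B", {"price": 30, "throughput": 400, "delivery": 10}),
]
print(best_offer(offers, prefs))  # -> B (its low price outweighs A's throughput)
```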
Title: Negotiating Service Requirements among Strategic Actors
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.38
Meixia Zhu, Hanpin Wang, W. Jin, Zizhen Wang, Chunxiang Xu
The Sequence Diagram (SD) of UML 2.0 enriches those of previous versions with two new operators, assert and negate, for specifying required and forbidden behaviors. The semantics of SDs, however, being based on pairs of valid and invalid sets of traces, is inadequate and prevents the new operators from being used effectively. The semantic confusion between the assert and negate operators in UML SDs is significant, since it makes it difficult to confirm the safety of the systems being designed. A new Petri-net model named LPNforSD is designed in this paper, and transformation rules from SDs to LPNforSD are given. We treat each fragment described by an assert or negate operator as an independent part and transform it into an LPNforSD. An algorithm is also designed to check whether an SD is safe by comparing its traces with the traces obtained from the negate and assert fragments. In this way, we not only eliminate the semantic confusion between the assert and negate operators, but also reduce the number of contingent traces, making the system more reliable.
Title: Semantic Analysis of UML2.0 Sequence Diagram Based on Model Transformation
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.70
I. Brandić, Vincent C. Emeakaroha, M. Maurer, S. Dustdar, S. Ács, A. Kertész, G. Kecskeméti
Cloud computing represents a promising computing paradigm in which computing resources have to be allocated to software for its execution. Self-manageable Cloud infrastructures are required to achieve that level of flexibility on the one hand, and to comply with users' requirements specified by means of Service Level Agreements (SLAs) on the other. Such infrastructures should automatically respond to changing component, workload, and environmental conditions, minimizing user interaction with the system and preventing violations of agreed SLAs. However, identifying the sources responsible for a possible SLA violation and deciding on the reactive actions necessary to prevent it is far from trivial. First, we present a novel approach for mapping low-level resource metrics to the SLA parameters necessary for identifying failure sources. Second, we devise a layered Cloud architecture for the bottom-up propagation of failures to the layer that can react to sensed SLA violation threats. Moreover, we present a communication model for the propagation of SLA violation threats to the appropriate layer of the Cloud infrastructure, which includes negotiators, brokers, and an automatic service deployer.
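As a hedged illustration of the mapping idea (the paper's actual rules and thresholds are not given in the abstract), the sketch below derives two SLA-level parameters from raw resource metrics and flags any parameter that is violated or within a small relative margin of its agreed limit, so that a higher layer could react before an actual violation.

```python
def map_metrics(raw):
    """Map low-level resource metrics to SLA-level parameters.
    These mapping rules are illustrative, not the paper's."""
    return {
        "availability_pct": 100.0 * raw["uptime_s"] / raw["period_s"],
        "bandwidth_mbps": raw["bytes_out"] * 8 / raw["period_s"] / 1e6,
    }

def threats(params, slos, margin=0.001):
    """Flag SLA parameters that are violated or within `margin`
    (relative) of their limit -- i.e. SLA violation *threats*."""
    flagged = []
    for name, (limit, at_least) in slos.items():
        v = params[name]
        if at_least:                       # value must stay >= limit
            near = v < limit * (1 + margin)
        else:                              # value must stay <= limit
            near = v > limit * (1 - margin)
        if near:
            flagged.append(name)
    return flagged

raw = {"uptime_s": 86000, "period_s": 86400, "bytes_out": 9e11}
slos = {"availability_pct": (99.5, True), "bandwidth_mbps": (100.0, False)}
print(threats(map_metrics(raw), slos))  # -> ['availability_pct']
```

Here availability (about 99.54%) still meets its 99.5% target but sits inside the margin, so it is reported as a threat; bandwidth has comfortable headroom and is not.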
Title: LAYSI: A Layered Approach for SLA-Violation Propagation in Self-Manageable Cloud Infrastructures
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.28
Li Zhou, Yong Zhang, Chunxiao Xing
Log data is critical to applications, and the management and analysis of log data is a hot research topic. Existing log management solutions are normally tightly integrated with the applications themselves, which may lead to problems with performance, local storage efficiency, security, and the lack of real-time functionality. To solve these problems, we present a SaaS method that shifts the writing of log data from local disk to the cloud; the log management and analysis functionality is also performed by a cloud. We analyze two architectures to implement this method, Web Service and Shift-Log ActiveMQ; initial experiments show the efficiency of the latter. In the future, this tool can be applied to Web- and database-based application systems to improve their performance.
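A minimal sketch of the core idea, shifting log writes off the application's critical path: records are buffered in memory and shipped in batches by a background thread. The `send` callable stands in for a cloud transport; neither of the paper's two architectures (Web Service, message-queue based) is reproduced here.

```python
import queue
import threading

class CloudLogShipper:
    """Buffer log records in memory and ship them in batches from a
    background thread, so the application thread never blocks on
    local disk I/O."""

    def __init__(self, send):
        self.q = queue.Queue()
        self.send = send                  # callable taking a list of records
        threading.Thread(target=self._drain, daemon=True).start()

    def log(self, record):
        self.q.put(record)                # O(1); no disk write on the hot path

    def _drain(self):
        while True:
            batch = [self.q.get()]
            while not self.q.empty():     # opportunistic batching
                batch.append(self.q.get_nowait())
            self.send(batch)
            for _ in batch:
                self.q.task_done()

shipped = []                              # stand-in for the cloud endpoint
shipper = CloudLogShipper(shipped.extend)
for i in range(5):
    shipper.log(f"event {i}")
shipper.q.join()                          # wait until everything is shipped
print(shipped)  # -> ['event 0', 'event 1', 'event 2', 'event 3', 'event 4']
```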
Title: ULMS: An Accelerator for the Applications by Shifting Writing Log from Local Disk to Clouds
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.45
Qian Wu, Qianxiang Wang
A defect pattern repository collects different kinds of defect patterns: general descriptions of the characteristics of commonly occurring software code defects. Defect patterns can be widely used by programmers, by static defect analysis tools, and even in runtime verification. Following the idea of Web 2.0, defect pattern repositories allow users to submit defect patterns they have found. However, submission of duplicate patterns leads to redundancy in the repository. This paper introduces an approach to suggesting potential duplicates based on natural language processing. Our approach first computes field similarities based on the Vector Space Model, then employs information entropy to determine field importance, and finally combines the field similarities into an overall defect pattern similarity. Two strategies are introduced to make the approach adapt to special situations. Finally, groups of duplicates are obtained through hierarchical clustering. Evaluation indicates that our approach detects most of the actual duplicates (72% in our experiment) in the repository.
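The pipeline described above can be sketched end to end: term-frequency cosine similarity per field, Shannon entropy of field values as importance weights, and a greedy single-link grouping as a simplified stand-in for the paper's hierarchical clustering. All patterns, fields, and thresholds below are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Term-frequency cosine similarity between two short texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def entropy_weights(patterns, fields):
    """Weight each field by the Shannon entropy of its values:
    fields whose values vary more across patterns are more informative."""
    w = {}
    for f in fields:
        counts = Counter(p[f] for p in patterns)
        n = len(patterns)
        w[f] = -sum(c / n * math.log2(c / n) for c in counts.values())
    total = sum(w.values()) or 1.0
    return {f: v / total for f, v in w.items()}

def similarity(p, q, weights):
    return sum(weights[f] * cosine(p[f], q[f]) for f in weights)

def duplicate_groups(patterns, weights, threshold=0.5):
    """Greedy single-link grouping (a simplification of hierarchical
    clustering): join the first group with a similar-enough member."""
    groups = []
    for p in patterns:
        for g in groups:
            if any(similarity(p, q, weights) >= threshold for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

patterns = [
    {"name": "null deref", "desc": "pointer used without null check", "severity": "high"},
    {"name": "null dereference", "desc": "pointer is used without a null check", "severity": "high"},
    {"name": "array overflow", "desc": "index past the end of a buffer", "severity": "high"},
]
w = entropy_weights(patterns, ["name", "desc", "severity"])
groups = duplicate_groups(patterns, w)
print([len(g) for g in groups])  # -> [2, 1]: the two null-check patterns group
```

Note how the constant `severity` field gets zero entropy and hence zero weight, so an uninformative field cannot inflate the similarity score.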
Title: Natural Language Processing Based Detection of Duplicate Defect Patterns
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.20
Atef Mohamed, Mohammad Zulkernine
In fault tolerant software systems, the Level of Decomposition (LoD) at which design diversity is applied has a major impact on software system reliability. By disregarding this impact, current fault tolerance techniques are prone to reliability decreases caused by applying design diversity at an inappropriate level. In this paper, we quantify the effect of the LoD on system reliability during software recomposition, when the functionalities of the system are redistributed across its components. We discuss the LoD in fault tolerant software architectures in terms of three component failure transitions: component failure occurrence, component failure propagation, and component failure impact. We illustrate the component aspects that relate the LoD to each of these failure transitions. Finally, we quantify the effect of the LoD on system reliability under a series of decomposition and/or merge operations that may occur during software recomposition.
Title: The Level of Decomposition Impact on Component Fault Tolerance
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.41
Beibei Yin, Ling-Zan Zhu, K. Cai
Following the growing research interest in complex networks, many researchers in recent years have treated software static structures as complex networks and revealed that most of these networks follow a scale-free degree distribution. Different from the perspectives adopted in those works, our previous work found that the networks of software dynamic execution processes may also be scale-free. A scale-free degree distribution shows that, during execution, methods invoked only a few times far outnumber those invoked frequently, i.e., the software structural profile is heterogeneous. The software structural profile describes the probabilities of software modules or states being invoked. Since many unique properties of complex networks are due to this heterogeneity, a quantitative measure of it is important and desirable. This paper proposes two entropy-based quantitative measures of the heterogeneity of the software structural profile. Three case studies are presented to show the effectiveness of the proposed measures.
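The abstract does not define the two measures; as one plausible entropy-based measure of such a profile (not necessarily the pair defined in the paper), the sketch below computes the Shannon entropy of the invocation distribution, normalised so that 1.0 means a uniform (homogeneous) profile and values near 0 mean a few methods dominate, as in a scale-free profile.

```python
import math

def profile_entropy(invocations):
    """Normalised Shannon entropy of a structural profile given as
    invocation counts per method: 1.0 for a uniform profile, values
    near 0 when a few hot methods dominate (high heterogeneity)."""
    total = sum(invocations)
    probs = [c / total for c in invocations if c]
    h = -sum(p * math.log2(p) for p in probs)
    n = len(invocations)
    return h / math.log2(n) if n > 1 else 0.0

uniform = [10, 10, 10, 10]          # every method invoked equally often
skewed = [97, 1, 1, 1]              # one hot method dominates

print(profile_entropy(uniform))     # -> 1.0
print(profile_entropy(skewed))      # well below 1.0
```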
Title: Entropy-Based Measures of Heterogeneity of Software Structural Profile
Pub Date: 2010-07-19 | DOI: 10.1109/COMPSACW.2010.21
M. Lettner, Michael Tschernuth
When it comes to embedded devices, producing highly dependable, fail-safe, and efficient software solutions is indispensable. Such devices usually ship in large numbers, should run 24/7, have real-time constraints, and work autonomously most of the time, which is why very high software quality is required. At the same time, as companies are always looking for ways to cut costs, it is not easy to provide reliability and guarantee a high level of product quality all at once. Changing requirements and the fast pace of innovation in ever new hardware capabilities, combined with the need for frequent software updates, demand an easy mechanism to change software quickly and enable reuse. Applying smart software solutions is a way of addressing the above issues. Formal approaches such as model-driven architecture (MDA) have been proposed, but often lack realizability due to various problems in practice. The proposed solution addresses these issues and focuses on what it takes to take full advantage of MDA, pointing out methodologies and tool chains that have been applied in a real-world project to enable high-quality code generation for the software of a low-cost mobile phone.
Title: Applied MDA for Embedded Devices: Software Design and Code Generation for a Low-Cost Mobile Phone