On bounding energy consumption in dynamic, embedded real-time systems
H. Wu, B. Ravindran, E. Jensen
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141494

We present a CPU scheduling algorithm called the Energy-Bounded Utility Accrual Algorithm (EBUA). EBUA is a polynomial-time algorithm that satisfies bounds on system-level energy consumption and on activities' accrued timeliness utility. We analytically establish several timeliness properties of EBUA. Our simulation experiments using AMD's DVS-enabled K6 processor model confirm the algorithm's effectiveness and superiority.
BBQ: group-based querying in a ubiquitous environment
G. Lam, H. Leong, S. Chan
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141729

The cost of sending queries to a server is high for mobile ubiquitous hosts. To address this, we adopt a query consolidation mechanism that exploits knowledge of the similar queries that neighboring hosts send to the server, especially in location-dependent applications. We propose a group-based query processing scheme, in which group members that are close in location and moving direction collectively deliver their aggregate querying needs to the server. A leader, or boss, elected within each group is responsible for gathering and consolidating data requests from members within an adaptive query listening period. We conducted simulation experiments to study the performance improvement achieved by our scheme.
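The consolidation step described above can be sketched as follows. The grouping key (a coarse location cell plus moving direction), the `cell_size` parameter, and the first-member leader election are illustrative assumptions, not details taken from the paper:

```python
from collections import defaultdict

def consolidate_queries(hosts, cell_size=10.0):
    """Group hosts by coarse location cell and moving direction, then let
    each group's leader forward the deduplicated set of member queries.

    `hosts` is a list of dicts with 'id', 'x', 'y', 'direction', 'queries'.
    """
    groups = defaultdict(list)
    for h in hosts:
        # Hosts in the same grid cell moving the same way form one group.
        key = (int(h["x"] // cell_size), int(h["y"] // cell_size), h["direction"])
        groups[key].append(h)

    consolidated = {}
    for members in groups.values():
        leader = members[0]  # simplistic leader election: first member seen
        merged = set()
        for m in members:
            merged.update(m["queries"])  # deduplicate overlapping queries
        consolidated[leader["id"]] = sorted(merged)
    return consolidated
```

With two nearby northbound hosts asking overlapping queries, only their leader contacts the server, carrying the merged query set.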
An initial analysis and presentation of malware exhibiting swarm-like behavior
F. C. Osorio, Zachi Klopman
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141356

Slammer, currently the fastest computer worm in recorded history, was observed to infect 90 percent of all vulnerable Internet hosts within 10 minutes. Although the main action the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response is ineffective. Most proposed countermeasure strategies are based primarily on rate detection and limiting algorithms. However, such strategies are designed to contain worms whose behavior is similar to Slammer's. In our work, we put forth the hypothesis that next-generation worms will be radically different, and that such techniques will potentially prove ineffective. Specifically, we propose to study a new generation of worms called "swarm worms", whose behavior is predicated on the concept of emergent intelligence: the behavior of systems, much like biological systems such as ant or bee colonies, in which simple local interactions among autonomous members with simple primitive actions give rise to complex and intelligent global behavior. In this paper we introduce the basic principles behind swarm worms, as well as the basic structure required for a worm to be considered a swarm worm. In addition, we present preliminary results on the propagation speed of one such swarm worm, the ZachiK worm, and show that ZachiK is capable of propagating two orders of magnitude faster than similar worms without swarm capabilities.
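The kind of propagation advantage claimed above can be illustrated with a standard random-constant-spread epidemic model. The parameters below (address-space size, vulnerable population, per-tick scan rates) are hypothetical, and the higher effective scan rate merely stands in for the target-sharing benefit of swarm coordination; this is not the ZachiK worm itself:

```python
def ticks_to_fraction(scan_rate, address_space, vulnerable, target=0.9):
    """Discrete-time random-scan epidemic model: each infected host probes
    `scan_rate` random addresses per tick; a probe hits a still-susceptible
    vulnerable host with probability (vulnerable - infected) / address_space.
    Returns the number of ticks until `target` of the vulnerable population
    is infected."""
    infected, ticks = 1.0, 0
    while infected < target * vulnerable:
        new = infected * scan_rate * (vulnerable - infected) / address_space
        infected = min(infected + new, vulnerable)  # cannot exceed the population
        ticks += 1
    return ticks

# A worm whose members share discovered targets behaves like a worm with a
# much higher effective scan rate, saturating the population far sooner.
slow = ticks_to_fraction(10, 2**16, 1000)
fast = ticks_to_fraction(1000, 2**16, 1000)
```

The logistic shape of the curve is the same in both cases; only the time scale changes, which is why rate-limiting defenses tuned to Slammer-like speeds may not react in time.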
Editorial message: special track on dependable and adaptive distributed systems
K. M. Göschka, Svein O. Hallsteinsen, R. Oliveira, A. Romanovsky
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141431

Distributed systems and databases are at the core of the information society and increasingly pervade many aspects of our daily lives. While mobility and pervasiveness require support for systems that adapt themselves to changing environments, middleware infrastructures are becoming more and more heterogeneous and complex. In addition, there is an increasing demand for dependability in such systems, taking into account the software as well as the surrounding environment. Generally, adaptiveness can either satisfy a change in user requirements or seek to fulfill the same requirements in a changing system context and environment. In particular, adaptation is also a means of achieving dependability in a computing infrastructure with dynamically varying structure and properties. Fault tolerance can consequently be seen as a special case in which adaptation seeks to overcome an otherwise negative effect of a change in the computing infrastructure that can be classified as a fault. However, dependability can be achieved not only by fault tolerance but also by other means, such as fault avoidance (e.g., through formal methods). Therefore, future middleware needs to support adaptiveness and dependability while maintaining scalability and mastering complexity. Legacy software must still be integrated in such a way that open and standardized interfaces support not only functional integration but also seamless integration of non-functional aspects. Moreover, service-oriented architectures need coordination in order to achieve dependability and can further benefit from context-aware approaches.
On the use of spectral filtering for privacy preserving data mining
Songtao Guo, Xintao Wu
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141418

Randomization has been a primary tool for hiding sensitive private information in privacy-preserving data mining. Previous work based on spectral filtering showed that the added noise may be separated from the perturbed data under some conditions, and that privacy can be seriously compromised as a result. In this paper, we explicitly assess the effect of perturbation on the accuracy of the estimated values and give an explicit relation for how the estimation error varies with the perturbation. In particular, we derive an upper bound for the Frobenius norm of the reconstruction error. This upper bound may be exploited by attackers to determine how close their spectral-filtering estimates are to the original data, which poses a serious threat of privacy breaches.
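The attack analyzed above can be sketched as follows: a rank-one spectral filter, implemented here with plain power iteration rather than a full SVD, recovers most of a strongly correlated signal from additively perturbed data. The data shape and noise level in the usage example are illustrative assumptions:

```python
import math

def top_component(rows, iters=200):
    """Power iteration for the dominant eigenvector of the sample covariance."""
    d, n = len(rows[0]), len(rows)
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, accumulated row by row
        w = [0.0] * d
        for r in centered:
            dot = sum(r[j] * v[j] for j in range(d))
            for j in range(d):
                w[j] += dot * r[j]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def spectral_filter_rank1(rows):
    """Keep only the dominant principal component: project each centered row
    onto it and reconstruct, discarding the noise-dominated remainder."""
    mean, v = top_component(rows)
    d = len(v)
    out = []
    for r in rows:
        c = [r[j] - mean[j] for j in range(d)]
        proj = sum(c[j] * v[j] for j in range(d))
        out.append([mean[j] + proj * v[j] for j in range(d)])
    return out
```

Because the noise energy in the discarded components is removed, the filtered estimate ends up closer to the original data than the raw perturbation would suggest, which is exactly the privacy risk the paper quantifies.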
Ubicomp assistant: an omnipresent customizable service using MARKS (middleware adaptability for resource discovery, knowledge usability and self-healing)
Moushumi Sharmin, Shameem Ahmed, Sheikh Iqbal Ahamed
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141518

Given the pervasiveness of short-range, low-power wireless connectivity and the easy availability of low-cost, lightweight mobile devices, an omnipresent customizable service is needed. Such a service can be used by different types of users in fields such as education, healthcare, marketing, or business, at any time and in any place. These devices can ubiquitously reach neighboring devices over a free short-range ad hoc network. To the best of our knowledge, however, no one has designed such a service. In this paper, we present the details of the Ubicomp Assistant (UA), which is designed to accomplish these objectives. To evaluate the design, we have developed an application that uses UA as a service. It uses MARKS (Middleware Adaptability for Resource Discovery, Knowledge Usability and Self-healing) as its underlying core service provider.
Investigating the use of summarisation for interactive XML retrieval
Z. Szlávik, A. Tombros, M. Lalmas
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141529

As the number of components in XML documents is much larger than in 'flat' documents, we believe it is essential to provide users of XML information retrieval systems with overviews of the content of retrieved elements. In this paper, we investigate the use of summarisation in XML retrieval as a means of helping users in their searching process.
Evaluation measures for business process models
Elvira Rolón Aguilar, F. Ruiz, Félix García, M. Piattini
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141641

This work presents a set of measures for evaluating the structural complexity of business process models at the conceptual level, together with the general plan of a family of experiments aimed at validating the proposed measures. We believe that early evaluation of business process models would give business process management support that makes maintenance tasks easier. The proposal is based on BPMN, the standard notation for business process modelling, and on the adoption and extension of the FMESP framework.
On handling conflicts between rules with numerical features
Tony Lindgren
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141284

Rule conflicts can arise in machine learning systems that utilise unordered rule sets. A rule conflict occurs when two or more rules cover the same example but differ in their majority classes; the conflict must be resolved before a classification can be made. The standard methods for resolving this type of conflict are naive Bayes and choosing the most frequent class (as in CN2). This paper studies the problem of rule conflicts in the presence of numerical features. A novel family of methods, called distance-based methods, for resolving rule conflicts in continuous domains is presented. An empirical evaluation comparing a distance-based method with CN2 and naive Bayes shows that the distance-based method significantly outperforms both.
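To make the conflict concrete, here is a minimal sketch of one distance-based resolution strategy: among the conflicting rules, take the class of the rule whose covered-training-examples centroid lies nearest to the example. The rule representation (centroid plus majority class) is an illustrative assumption, not necessarily Lindgren's exact formulation:

```python
import math

def resolve_conflict(example, conflicting_rules):
    """Distance-based resolution of a rule conflict: each rule is represented
    by (centroid, majority_class), where the centroid is the mean of the
    numerical training examples the rule covers; the example takes the class
    of the nearest rule."""
    best_class, best_dist = None, float("inf")
    for centroid, majority_class in conflicting_rules:
        d = math.dist(example, centroid)  # Euclidean distance over numerical features
        if d < best_dist:
            best_dist, best_class = d, majority_class
    return best_class
```

Unlike the most-frequent-class tie-break of CN2, this uses the geometry of the feature space, which is exactly the extra information available when features are numerical.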
Combining supervised and unsupervised monitoring for fault detection in distributed computing systems
Haifeng Chen, Guofei Jiang, C. Ungureanu, K. Yoshihira
Proceedings of the 2006 ACM symposium on Applied computing, 2006-04-23. DOI: 10.1145/1141277.1141438

Fast and accurate fault detection is becoming an essential component of management software for mission-critical systems. A good fault detector makes it possible to initiate repair actions quickly, increasing the availability of the system. The contribution of this paper is twofold. First, a new concept of supervised and unsupervised monitoring is proposed for system fault detection. We use a statistical method, canonical correlation analysis (CCA), to model the contextual dependencies between the system inputs u and the internal behavior x. By means of CCA, the space of x is transformed into two subsets of variables, which are monitored in a supervised and an unsupervised manner respectively. In doing so, our approach reduces the false alarms resulting from unusual workload changes and hence achieves a high fault detection rate. Second, to test the performance of our approach, we simulate a variety of system faults in a real e-commerce application based on the multi-tiered J2EE architecture. Experimental results demonstrate that the CCA-based approach can detect injected failures at an early stage, while the anomalous behavior is still weak, and hence contributes substantial time and cost savings in managing large-scale distributed systems.
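The supervised/unsupervised split can be illustrated with a much simpler stand-in for CCA: per-metric Pearson correlation with the input workload. Metrics that track the workload would be monitored against a workload-driven prediction (supervised), the rest against their own history (unsupervised). The threshold and metric names here are hypothetical, and single-variable correlation is a deliberate simplification of canonical correlates:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_metrics(workload, metrics, threshold=0.8):
    """Partition internal metrics into a workload-correlated set (candidates
    for supervised monitoring) and a workload-independent set (candidates
    for unsupervised monitoring)."""
    supervised, unsupervised = [], []
    for name, series in metrics.items():
        bucket = supervised if abs(pearson(workload, series)) >= threshold else unsupervised
        bucket.append(name)
    return supervised, unsupervised
```

Alarming on the residual of the workload-correlated metrics, rather than on their raw values, is what suppresses the false alarms that pure thresholding would raise during legitimate workload surges.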