Xiaoxuan Meng, Chengxiang Si, Xiaoming Han, Jiangang Zhang, Lu Xu
With the popularity of various kinds of search engines on the WWW, backend storage systems are required to provide better physical I/O performance to speed up the query service perceived by end users. However, existing general-purpose replacement algorithms do not perform well for web search applications. This paper first studies the access patterns of various real-life web search workloads and then proposes a new replacement algorithm, RED-LRU, based on the observed access properties. Simulation results show that our proposed algorithm uniformly outperforms the other replacement algorithms for all workloads and cache sizes. To validate the simulation results, we integrate the RED-LRU algorithm into a real storage cache, DPCache. The experimental results in a real system confirm the effectiveness of our proposed algorithm in improving caching performance for web search applications. Moreover, the runtime overhead of RED-LRU is fairly low in practice.
{"title":"A Replacement Algorithm Designed for the Web Search Engine and Its Application in Storage Cache","authors":"Xiaoxuan Meng, Chengxiang Si, Xiaoming Han, Jiangang Zhang, Lu Xu","doi":"10.1109/ISPA.2009.36","DOIUrl":"https://doi.org/10.1109/ISPA.2009.36","url":null,"abstract":"With popularity of different kind of search engines on WWW, it requires the backend storage system to provide better physical I/O performance to speedup the query service perceived by end users. However, existing general purpose designed replacement algorithm can’t performs well for the web search applications. This paper first studies the access pattern of various real-life web search workload and then propose a new replacement algorithm RED-LRU based on the observed access properties. The simulation results shows that our proposed algorithm uniformly outperform the other replacement algorithms for all the workloads and cache size. To validate the simulation results, we integrate RED-LRU algorithm into a real storage cache DPCache. The experiment results in real system confirm the effectiveness of our proposed algorithm in improving the caching performance for web search application. Moreover, the runtime overhead of RED-LRU is also fairly low in practice.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115588887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A primary problem for the security of web services is how to precisely express and match the security policies of participants that may reside in different security domains. Presently, most schemes use syntactic approaches, in which pairs of policies are compared for structural and syntactic similarity to determine compatibility; this is prone to false negatives because it lacks semantics. In this paper, we propose a novel approach to expressing and matching the security policies of web services based on semantics. By constructing a general security ontology, we present a definition method and a matching algorithm for semantic security policies for web services. The use of semantic security policies enables richer representations of policy intent and allows matching of policies with compatible intent but dissimilar syntax, which is not possible with syntactic approaches. The proposed security ontology is extensible, and the semantic security policy offers strong inferability and adaptability; these characteristics are extremely important in the heterogeneous and dynamic environment of web services.
{"title":"Semantic Security Policy for Web Service","authors":"He Zheng-qiu, Wu Li-fa, Hong Zheng, Lai Hai-guang","doi":"10.1109/ISPA.2009.10","DOIUrl":"https://doi.org/10.1109/ISPA.2009.10","url":null,"abstract":"A primary problem for the security of web service is how to precisely express and match the security policy of each participant that may be in different security domain. Presently, most schemes use syntactic approaches, where pairs of policies are compared for structural and syntactic similarity to determine compatibility, which is prone to result in false negative because of lacking semantics. In this paper, we propose a novel approach to express and match the security policy of web service based on semantics. Through constructing a general security ontology, we present the definition method and matching algorithm of semantic security policy for web service. The use of semantic security policy enables richer representations of policy intent and allows matching of policies with compatible intent, but dissimilar syntax, which is not possible with syntactic approaches. The proposed security ontology is extensible and the semantic security policy is of strong inferability and adaptability, and these characteristics are extremely important to the heterogeneous and dynamic environment of web service.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"434 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116007627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
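The key claim above — that semantically related but syntactically different policy terms should match — can be sketched with a tiny is-a hierarchy. The ontology, term names, and matching rule below are illustrative assumptions, not the paper's actual security ontology:

```python
# Toy security ontology as an is-a hierarchy: child -> parent.
# A syntactic comparison of "AES256" vs "SymmetricEncryption" would fail;
# semantic matching succeeds because AES256 specializes SymmetricEncryption.
ONTOLOGY = {
    "AES128": "SymmetricEncryption",
    "AES256": "SymmetricEncryption",
    "SymmetricEncryption": "Encryption",
    "RSA": "AsymmetricEncryption",
    "AsymmetricEncryption": "Encryption",
}

def ancestors(term):
    """Return the set of a term's ancestors in the ontology, including itself."""
    seen = {term}
    while term in ONTOLOGY:
        term = ONTOLOGY[term]
        seen.add(term)
    return seen

def semantic_match(required, offered):
    """The offered term satisfies the required one if it is the same
    concept or a specialization of it (offered is-a required)."""
    return required in ancestors(offered)

print(semantic_match("SymmetricEncryption", "AES256"))  # True
print(semantic_match("SymmetricEncryption", "RSA"))     # False
```

A real system would replace the dictionary with an OWL-style ontology and a reasoner, but the subsumption check is the core of intent-level matching.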
Virtualization technology has recently become popular on both server and desktop computers. Binary translation is an important method for implementing full virtualization that supports any guest operating system without modification. Traditional methods use traps or interrupts to catch the execution of sensitive instructions, so their performance suffers from the context-switch overhead of traps. This article proposes a novel code scanning and replacing strategy, named Block-based In-Place Replacement (BIPR). BIPR tries to find a code block longer than 5 bytes and replaces the block with a 5-byte JMP instruction. The translated code block runs in the same run-time mode as the original code. As a result, BIPR's cost is lower than that of traditional trap-based methods. Moreover, an optimized strategy, Super Block-based In-Place Replacement (SBIPR), is introduced to reduce unnecessary translation overhead in BIPR and achieve better performance. Experimental results show that SBIPR performs well.
{"title":"Block-Based In-Place Replacement Strategy for x86 Sensitive Instructions in Virtual Machine","authors":"Yusong Tan, Weihua Zhang, Q. Wu","doi":"10.1109/ISPA.2009.33","DOIUrl":"https://doi.org/10.1109/ISPA.2009.33","url":null,"abstract":"It is trendy that virtualization technology is adopted by server and desktop computers recently. Binary translation is an important method to implement full virtualization supporting any guest operating system without modification. Traditional methods use trap or interrupt to catch sensitive instruction's execution. Its performance is influenced by trap's context switch overhead. This article proposes a novel code scanning and replacing strategy, named as Block-based In-Place Replacement. BIPR tries to find a code block whose length is longer than 5 bytes and replaces the block with 5-bytes JMP instruction. The translated code block has same run-time mode as original code. As a result, BIPR's cost is lower than traditional trap methods. Moreover, it gives an optimize strategy, i.e. Super Block-based In-Place Replacement, to reduce unnecessary translation overhead of BIPR and get better performances. Experiment results prove that SBIPR performs pretty.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132607645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
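The core patching step — overwriting a ≥5-byte block containing a sensitive instruction with a 5-byte near JMP (opcode 0xE9 plus a signed 32-bit displacement) — can be simulated on a byte string. The opcode set and layout here are simplified assumptions for illustration, not the paper's scanner:

```python
# Simulated in-place patch: find a 5-byte window containing a sensitive
# opcode and overwrite it with JMP rel32 into translated code.
SENSITIVE = {0xFA, 0xFB}  # e.g. CLI/STI on x86 (illustrative choice)

def patch_block(code, target):
    """Replace the first 5-byte window containing a sensitive opcode
    with a 5-byte near JMP to `target`; return the patched copy or None."""
    code = bytearray(code)
    for i in range(len(code) - 4):
        if any(b in SENSITIVE for b in code[i:i + 5]):
            # rel32 is measured from the end of the JMP instruction itself
            rel32 = target - (i + 5)
            code[i:i + 5] = bytes([0xE9]) + rel32.to_bytes(4, "little", signed=True)
            return bytes(code)
    return None

code = bytes([0x90, 0x90, 0xFA, 0x90, 0x90, 0x90, 0xC3])  # nop; nop; cli; ...
patched = patch_block(code, target=0x100)
print(patched[0] == 0xE9)  # the block now starts with a near JMP
```

Because the patch is exactly as long as the window it replaces, the surrounding code keeps its addresses, which is what lets the translated block run in the same run-time mode as the original.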
A mix network is a cryptographic construction for anonymous communications. In addition to anonymity, a reputable mix network, first defined by Golle, offers a reputation property: the mix-net can prove that every message it outputs corresponds to an input submitted by a user, without revealing which input. This property can shield the mix-net from liability in the event that an output message is objectionable or illegal. In this work, we analyze two reputable ElGamal-based mix-net schemes proposed by Golle and present two active attacks against them. Our attacks rely on the homomorphic properties of RSA signatures and the ElGamal cryptosystem and can break the reputation properties of those schemes. We also show how to counter our attacks by using secure hash functions.
{"title":"Active Attacks on Reputable Mix Networks","authors":"LongHai Li, Shaofeng Fu, XiangQuan Che","doi":"10.1109/ISPA.2009.38","DOIUrl":"https://doi.org/10.1109/ISPA.2009.38","url":null,"abstract":"A mix network is a cryptographic construction for anonymous communications. In addition to anonymity, a reputable mix network first defined by Golle offers a reputation property: the mix-net can prove that every message it outputs corresponds to an input submitted by a user without revealing which input. This property can shield the mix-net from liability in the event that an output message is objectionable or illegal. In this work we analyze two reputable ElGamal based mix-net schemes proposed by Golle and present two active attacks for them. Our attacks rely on the homomorphism properties of RSA signature and ElGamal cryptosystem and can break the reputation properties of those schemes. We also show how to counter our attacks by using secure hash functions.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131744400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
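The malleability the attacks exploit is ElGamal's multiplicative homomorphism: the componentwise product of two ciphertexts decrypts to the product of the plaintexts, so an adversary can derive a valid-looking related ciphertext without knowing the plaintext. A demonstration with toy parameters (never use sizes like these in practice):

```python
# ElGamal over a tiny prime field; parameters are toy values.
p, g = 467, 2          # small prime modulus and base
x = 127                # private key
h = pow(g, x, p)       # public key

def enc(m, r):
    """Encrypt m with randomness r: (g^r, m * h^r) mod p."""
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(c):
    """Decrypt (a, b) as b / a^x mod p."""
    a, b = c
    return b * pow(a, p - 1 - x, p) % p

c1 = enc(5, 31)
c2 = enc(7, 45)
# Componentwise product of ciphertexts decrypts to 5 * 7:
c3 = (c1[0] * c2[0] % p, c1[1] * c2[1] % p)
print(dec(c3))  # 35
```

In a mix-net setting this means an attacker who re-randomizes or multiplies a victim's ciphertext can later link it, which is why the countermeasure hashes the plaintext to destroy the algebraic structure.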
As our society becomes more information-driven, we have begun to distribute data across wide-area storage systems. At the same time, both physical failures and logic errors have made it difficult to bring the necessary recovery to bear on remote data disasters, and to understand this process. We describe ARRAY, a system architecture for data disaster recovery that combines reliability, storage space, and security to improve performance for data recovery applications. The paper presents an exhaustive analysis of the design space of ARRAY systems, focusing on the trade-offs among reliability, storage space, security, and performance that ARRAY must make. We present RSRAII (Replication-based Snapshot Redundant Array of Independent Imagefiles), a configurable RAID-like data erasure-coding scheme; further benefits come from consolidating erasure-coding and replication strategies. A novel algorithm, referred to as SMPDP (Snapshot based on Multi-Parallel Degree Pipeline), is proposed to improve snapshot performance.
{"title":"ARRAY: A Non-application-Related, Secure, Wide-Area Disaster Recovery Storage System","authors":"Lingfang Zeng, D. Feng, B. Veeravalli, Q. Wei","doi":"10.1109/ISPA.2009.54","DOIUrl":"https://doi.org/10.1109/ISPA.2009.54","url":null,"abstract":"With our society more information-driven, we have begun to distribute data in wide-area storage systems. At the same time, both physical failure and logic error have made it difficult to bring the necessary recovery to bear on remote data disaster, and understanding this proceeding. We describe ARRAY, a system architecture for data disaster recovery that combines reliability, storage space, and security to improve performance for data recovery applications. The paper presents an exhaustive analysis of the design space of ARRAY systems, focusing on the trade-offs between reliability, storage space, security, and performance that ARRAY must make. We present RSRAII (Replication-based Snapshot Redundant Array of Independent Imagefiles) which is a configurable RAID-like data erasure-coding, and also others benefits come from consolidation both erasure-coding and replication strategies. A novel algorithm is proposed to improve snapshot performance referred to as SMPDP (Snapshot based on Multi-Parallel Degree Pipeline).","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124386061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
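The erasure-coding side of an RSRAII-style scheme can be sketched with the simplest RAID-like code: XOR parity over equal-size data blocks, which lets any single lost block be rebuilt from the survivors. Block names and layout are illustrative assumptions, not the paper's implementation:

```python
# XOR parity over equal-size blocks: the parity block is the XOR of all
# data blocks, so any one missing block is the XOR of the rest.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"img-blk0", b"img-blk1", b"img-blk2"]   # imagefile data blocks
parity = xor_blocks(data)

# Lose one block, rebuild it from the survivors plus parity:
lost = data[1]
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == lost)  # True
```

The space/recovery trade-off the paper analyzes falls out directly: parity stores one extra block per stripe but needs a full stripe read to recover, while replication stores whole copies but recovers with a single read.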
Integrating mobility into WSNs can significantly reduce the energy consumption of sensor nodes. However, it may also lead to unacceptable data collection latency. In our previous work, we alleviated this problem under the assumption of a mobile base station (BS). In this paper, we discuss how the problem can be solved when the BS itself is not capable of moving but can instead employ some mobile elements (MEs). In this case, the data collection latency is mainly determined by the longest tour among the MEs, so each ME should be assigned a similar workload to reduce the latency. Furthermore, the total length of the tours should be minimized to decrease the working cost of the MEs. We propose three methods to solve the problem with this two-fold objective. In the first two methods, we cluster the network according to certain criteria and then construct the data collection tour for each ME. In the third method, we apply a heuristic operator based on the genetic algorithm, whose fitness function is defined according to the two-fold objective. These methods are evaluated through comprehensive experiments. The results show that the genetic method provides more stable solutions in terms of data collection latency. We also compare the mobile BS model with the multiple-MEs model; the results show that the latter yields better solutions as the number of MEs grows.
{"title":"Optimize Multiple Mobile Elements Touring in Wireless Sensor Networks","authors":"Liang He, Jingdong Xu, Yuntao Yu","doi":"10.1109/ISPA.2009.16","DOIUrl":"https://doi.org/10.1109/ISPA.2009.16","url":null,"abstract":"Integrating mobility into WSNs can significantly reduce the energy consumption of sensor nodes. However, this may lead to unacceptable data collection latency at the same time. In our previous work, we alleviated the problem under the assumption of a mobile base station (BS). In this paper, we discuss how the problem can be solved when the BS itself is not capable of moving, but it can instead employ some mobile elements (MEs). The data collection latency is mainly determined by the longest tour of the MEs in this case. Each ME should be assigned a similar workload to reduce the latency. Furthermore, the total length of the tours should be minimized to decrease the working cost of MEs. We propose three methods to solve the problem with these two-fold objectives. In the first two methods, we cluster the network according to some criteria, and then construct the data collection tour for each ME. We apply a heuristic operator based on the genetic algorithm in the third method, whose fitness function is defined according to the two-fold objectives. These methods are evaluated by comprehensive experiments. The results show that the genetic method can provide us more steady solutions in term of data collection latency. We also compare the mobile BS model and the multiple MEs model, whose results show that the latter can get us better solutions when the number of MEs gets larger.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116904985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
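The two-fold objective described above (balance the longest ME tour, minimize total tour length) can be folded into one fitness value for a genetic algorithm. The weights, base-station position, and sample assignments below are illustrative assumptions:

```python
import math

def tour_length(points):
    """Closed-tour length: start and end at the base station at (0, 0)."""
    route = [(0.0, 0.0)] + points + [(0.0, 0.0)]
    return sum(math.dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def fitness(assignment, w_max=0.7, w_sum=0.3):
    """Lower is better: weighted mix of the longest tour (latency proxy)
    and the total tour length (working-cost proxy)."""
    lengths = [tour_length(tour) for tour in assignment]
    return w_max * max(lengths) + w_sum * sum(lengths)

# Two MEs, each assigned a list of sensor positions:
balanced   = [[(1, 0), (2, 0)], [(0, 1), (0, 2)]]
unbalanced = [[(1, 0), (2, 0), (0, 1), (0, 2)], []]
print(fitness(balanced) < fitness(unbalanced))  # True
```

A GA would evolve the sensor-to-ME assignment (and visiting order) while this fitness ranks candidates; the heuristic operator from the paper would sit in the crossover/mutation step.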
Multicast communications have attracted much interest within this decade, as they are an essential part of many network applications, e.g., video-on-demand. In this paper, we model flow rate allocation for an application overlay as a utility-based optimization problem constrained by the capacity limitations of physical links and by overlay constraints. The optimization flow control presented here addresses not only concave utility functions, which are suitable for applications with elastic traffic, but also special forms of non-concave utilities that are used to model applications with inelastic traffic, which may demand hard delay and rate requirements. We then propose an iterative algorithm as the solution to the optimization flow control problem and investigate the special forms of non-concave utilities supported by this model. Simulation results show that the iterative algorithm can handle sigmoidal-like utilities, which are useful for modeling real-time applications such as live streaming.
{"title":"Semantically Reliable Multicast Based on the (m-k)-Firm Technique","authors":"Wilian Queiroz, L. Lung, Luciana Rech, L. Lima","doi":"10.1109/ISPA.2009.109","DOIUrl":"https://doi.org/10.1109/ISPA.2009.109","url":null,"abstract":"There was a lot of interest in multicast communications within this decade as it is an essential part of many network applications, e.g. video-on-demand, etc. In this paper, we model flow rate allocation for application overlay as a utility based optimization problem constrained by capacity limitations of physical links and overlay constraints. The optimization flow control presented here addresses not only concave utility functions which are suitable for applications with elastic traffics, but also especial forms of non-concave utilities that are used to model applications with inelastic traffics, which might demand for hard delay and rate requirements. We then propose an iterative algorithm as the solution to the optimization flow control problem and investigate especial forms of non-concave utilities that are supported by this model. Simulation results show that the iterative algorithm can be used to deal with sigmoidal-like utilities which are useful for modeling real-time applications such as live streaming.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124792289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
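The iterative algorithm family referenced above can be sketched as dual (price-based) flow control: each source picks the rate maximizing U(x) - p·x for the current link price p, and the link nudges p toward matching demand to capacity. The sigmoid utility, step size, and grid search are illustrative assumptions:

```python
import math

def utility(x):
    """Sigmoidal-like utility, typical of inelastic real-time traffic."""
    return 1.0 / (1.0 + math.exp(-4 * (x - 1.5)))

GRID = tuple(i / 100 for i in range(301))  # candidate rates 0.00 .. 3.00

def best_rate(p):
    """Source's response: the rate maximizing utility minus bandwidth cost."""
    return max(GRID, key=lambda x: utility(x) - p * x)

capacity, price, step = 4.0, 0.05, 0.02
for _ in range(200):
    demand = sum(best_rate(price) for _ in range(2))   # two identical sources
    price = max(0.0, price + step * (demand - capacity))  # link price update

print(abs(demand - capacity) < 0.5)  # price steers total demand near capacity
```

With concave utilities this dual iteration is the classic optimization flow control scheme; the sigmoid makes the source response discontinuous, which is exactly the difficulty such algorithms must handle.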
Workflow engines, often based on WS-BPEL, currently rely on a mix of recovery/modification strategies that are either part of the workflow description, part of the workflow engine, or realized as plugins to the workflow engine. To foster the development of distributed cloud-based workflow engines and novel repair algorithms, workflow engines have to be modularized in order to overcome the static and inflexible APIs they currently provide. Dynamic features gained by such modularization include external modules that monitor as well as modify a workflow, providing error handling in conjunction with Service Level Agreement (SLA) constraints. The aim of this paper is to present a flexible Workflow Execution Engine that facilitates the development of a new dynamic infrastructure for realizing dynamic workflow engines, with a focus on cloud-based environments.
{"title":"A Domain Specific Language and Workflow Execution Engine to Enable Dynamic Workflows","authors":"Gerhard Stuermer, Juergen Mangler, E. Schikuta","doi":"10.1109/ISPA.2009.106","DOIUrl":"https://doi.org/10.1109/ISPA.2009.106","url":null,"abstract":"Workflow engines often being based on WS-BPEL, currently rely on a mix of recovery / modification strategies that are either part of the workflow description, part of the workflow engine, or realized as plugins to the workflow engine. To foster the development of distributed cloud-based workflow engines and novel repair algorithms, workflow engines have to be modularized in order to overcome the static and inflexible APIs provided by these workflow engines. Dynamic features gained by a modularization include the creation of external modules to monitor as well as modify a workflow to provide error handling in conjunction with Service Level Agreement (SLA) constraints. The aim of this paper is to present a flexible Workflow Execution Engine to facilitate the development of a new dynamic infrastructure to realize dynamic workflow engines with a focus on cloud-based environments.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129466043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jung-Ho Um, Miyoung Jang, Kyoung-Jin Jo, Jae-Woo Chang
In Location-Based Services (LBSs), users send location-based queries to LBS servers along with their exact locations, but this location information can be misused by adversaries. There must therefore be a mechanism that protects users' privacy. In this paper, we propose a cloaking method that considers both K-anonymity and L-diversity. Our cloaking method creates a minimum cloaking region by first finding L buildings (L-diversity) and then K users (K-anonymity). To support this, we use R*-tree-based index structures as well as efficient filtering techniques to generate a minimum cloaking region. Finally, our performance analysis shows that our cloaking method outperforms the existing grid-based cloaking method in terms of cloaking region size and cloaking region creation time.
{"title":"A New Cloaking Method Supporting both K-anonymity and L-diversity for Privacy Protection in Location-Based Service","authors":"Jung-Ho Um, Miyoung Jang, Kyoung-Jin Jo, Jae-Woo Chang","doi":"10.1109/ISPA.2009.93","DOIUrl":"https://doi.org/10.1109/ISPA.2009.93","url":null,"abstract":"In Location-Based Services (LBSs), users send location-based queries to LBS servers along with their exact locations, but the location information of the users can be misused by adversaries. In this regard, there must be a mechanism which can deal with the privacy protection of the users. In this paper, we propose a cloaking method considering both K-anonymity and L-diversity. Our cloaking method creates a minimum cloaking region by finding L number of buildings (L-diversity) and then finds K number of users (K-anonymity). To support it, we use R*-tree based index structures as well as efficient filtering techniques to generate a minimum cloaking region. Finally, we show from our performance analysis that our cloaking method outperforms the existing grid-based cloaking method in terms of the size of cloaking regions and cloaking region creation time.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121959275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
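The cloaking idea above can be sketched by growing a region around the querying user until it covers at least L distinct buildings (L-diversity) and at least K users (K-anonymity). A linear scan stands in for the paper's R*-tree index; the radii and sample data are illustrative assumptions:

```python
import math

def cloak(user, users, buildings, k, l, step=1.0, max_radius=100.0):
    """Return the smallest tested radius whose disc around `user` covers
    at least l buildings and k users (user included), or None if none fits."""
    r = step
    while r <= max_radius:
        n_users = sum(1 for u in users if math.dist(user, u) <= r)
        n_bldgs = sum(1 for b in buildings if math.dist(user, b) <= r)
        if n_users >= k and n_bldgs >= l:
            return r
        r += step
    return None

users = [(0, 0), (1, 1), (2, 0), (8, 8)]
buildings = [(0, 1), (1, 0), (3, 3)]
print(cloak((0, 0), users, buildings, k=3, l=2))  # 2.0
```

An R*-tree replaces the two linear scans with range queries, and a real minimum-region search would shrink the final disc to a tight bounding rectangle; the termination condition is the same.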
Since non-functional properties play an increasingly important role in the process of web service composition, QoS issues in web service composition have attracted great interest in both the research community and the IT domain. Yet identifying consumers' degree of satisfaction with QoS is a significant but challenging problem. Building on previous work, this paper proposes a global QoS-driven evaluation strategy for web service composition, aiming to improve previous QoS frameworks. The executions of services are analyzed with ACP (the Algebra of Communicating Processes, a dialect of process algebra), and the composite QoS values of a composite service are then calculated from the ACP expressions. Finally, so that the composite service can be evaluated by aggregating the criteria values into a degree of consumer satisfaction, a synthetic method is formulated for comprehensively estimating the weights assigned to the criteria in terms of the consumers' preferences.
{"title":"A Global QoS-Driven Evaluation Strategy for Web Services Composition","authors":"Xuyun Zhang, Wanchun Dou","doi":"10.1109/ISPA.2009.81","DOIUrl":"https://doi.org/10.1109/ISPA.2009.81","url":null,"abstract":"Since non-functional properties play a more and more import role during the process of web service composition, the QoS issues for web service composition have obtained great interest in both research community and IT domain. Yet identifying the consumers¡¯ degree of satisfaction on QoS is a significant but challenging problem. This paper proposes a global QoS-driven evaluation strategy for web service composition on top of some previous work, aiming at improving some previous QoS frameworks. The executions of services are analyzed with the ACP (Algebra of Communication Process, a dialect of process algebra), and then the composite QoS values of a composite service are calculated by the ACP expressions. Finally, for the purpose that the composite service is evaluated via aggregating the criteria values into the degree of consumers¡¯ satisfaction, a synthetic method is formulated for comprehensively estimating the weights assigned to the criteria in terms of the consumers¡¯ preferences.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121960670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
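The composite-QoS calculation described above can be illustrated with the standard aggregation rules for two structures a process-algebra expression may encode: sequential composition (response times add, availabilities multiply) and parallel composition (response time is the maximum of the branches). The weights-free example and service values below are illustrative assumptions, not the paper's formulas:

```python
import math

def seq(qos_list):
    """Sequential composition: times add, availabilities multiply."""
    return {
        "time": sum(q["time"] for q in qos_list),
        "avail": math.prod(q["avail"] for q in qos_list),
    }

def par(qos_list):
    """Parallel (all-branches) composition: time is the slowest branch,
    availability requires every branch to be up."""
    return {
        "time": max(q["time"] for q in qos_list),
        "avail": math.prod(q["avail"] for q in qos_list),
    }

s1 = {"time": 120, "avail": 0.99}
s2 = {"time": 80, "avail": 0.98}
s3 = {"time": 200, "avail": 0.995}

composite = seq([par([s1, s2]), s3])   # (s1 || s2) ; s3
print(composite["time"])               # 320
```

The paper's synthetic method would then weight such aggregated criteria by consumer preference to produce a single satisfaction score; the structural aggregation step is what the ACP expression drives.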