"A Framework: Workflow-Based Social Network Discovery and Analysis" by Jihye Song, Minjoon Kim, Haksung Kim, K. Kim. DOI: 10.1109/CSE.2010.74
The purpose of this paper is to build a fundamental framework for discovering and analyzing the workflow-based social networks formed through workflow-based organizational business operations. More precisely, the framework formalizes a series of theoretical steps from discovering a workflow-based social network to analyzing the discovered network. For the discovery phase, we devise an algorithm that automatically discovers the workflow-based social network from a workflow procedure; in the analysis phase, we apply degree centrality, one of the best-known social network analysis measures, to the discovered network. The crucial implication of the framework is that it quantifies the degree of work-intimacy among the performers involved in enacting the corresponding workflow procedure. As a conceptual extension, the framework can also be applied to discovering and analyzing degree centrality, or collaborative closeness and betweenness, among the architectural components and nodes of collaborative cloud workflow computing environments.
"An Image Interpolation Based Reversible Data Hiding Method Using R-Weighted Coding" by Y. Yalman, F. Akar, I. Erturk. DOI: 10.1109/CSE.2010.52
Reversible data hiding is a technique in which not only can the secret data be extracted from the stego image, but the cover image can also be completely reconstructed after the extraction process. It is therefore the method of choice for secret data hiding whenever full recovery of the cover image is essential. In this paper, we propose a reversible data hiding technique based on the Neighbor Mean Interpolation (NMI) method combined with the R-weighted Coding Method (RCM). Experimental results show the practicality and superiority of the proposed method over its classical counterparts, providing high performance in terms of PSNR and data hiding capacity.
"An Approximate Timing Analysis Framework for Complex Real-Time Embedded Systems" by Yue Lu, Thomas Nolte, J. Kraft. DOI: 10.1109/CSE.2010.21
Maintaining, analyzing and reusing many of today's Complex Real-Time Embedded Systems (CRTES) is difficult and expensive, yet offers high business value and is of great concern in industry. In this context, not only the functional behavior but also the non-functional properties of the systems have to be assured; in particular, the Worst-Case Response Time (WCRT) of tasks has to be known. However, due to the high complexity of such systems and the nature of the problem, the exact WCRT of a task is impossible to find in practice and can only be bounded. In addition, the existing, relatively well-developed theories for modeling and analyzing real-time systems have limitations that restrict their application in this context. In this paper, we address this challenge by presenting a framework for approximate timing analysis of CRTES, called AESIR-CORES, which provides a tight interval of WCRT estimates for tasks by means of two novel contributions. Our evaluation, using three models inspired by two fictive but representative industrial CRTES, indicates that AESIR-CORES can either successfully obtain the actual WCRT values or has the potential to bound the unknown actual WCRT values from a statistical perspective.
"A QoS-Optimal Automatic Service Composition Method Based on Backtracting Theory" by Wenmin Lin, Rutao Yang, Xiaojie Si, Lianyong Qi, Wanchun Dou. DOI: 10.1109/CSE.2010.71
Developing a QoS-optimal service composition schema from a vast number of services is generally a challenging issue. In view of this challenge, this paper proposes an automatic service composition method for the situation where the input and output functional properties are specified in advance. More specifically, two filter algorithms are applied to the input and output functional properties, respectively, to reduce the vast set of candidate services to a small set that will participate in the later composition schema. A search algorithm based on backtracking theory is then applied to this group of services to derive a QoS-optimal service composition schema. Finally, an integrated platform is presented to promote the application of the QoS-optimal service composition method.
"Research on Mining Rules from Multi-criterion Group Decision Making Based on Genetic Algorithms" by Xinqiao Yu, N. Xiong, Wei Zhang. DOI: 10.1109/CSE.2010.45
Procedure-based Multiple Criterion Group Decision Making (MCGDM) has the virtue of drawing on the wisdom of many participants, but at the cost of considerable time and resources. Reusing the experience of historical MCGDM processes as collective knowledge for future tasks can overcome this shortcoming without sacrificing the advantage. However, existing techniques seldom exploit the linguistic data produced in MCGDM as usable knowledge. In this article, we propose a method for mining the briefest rules, as group experience, from the decision table built from historical MCGDM processes. The method is based on a genetic algorithm of our own design. The whole model is integrated into our prototype of a knowledge-oriented group decision support system and shows good results on a test instance.
"Service Price Discrimination in Wireless Network" by Zhide Chen, Li Xu. DOI: 10.1109/CSE.2010.14
This paper discusses price discrimination in wireless service allocation on the basis of game theory, where wireless service providers can supply the same service at different prices. Two service price discrimination models are proposed. The first involves one wireless service provider and n types of wireless users, each type having a different preference parameter; the pricing scheme is proved to be individually rational and incentive compatible for the users, and the optimal service quantity and price for each user type are computed so as to maximize the provider's utility. The second model involves n service providers offering k different prices; again, the optimal service quantities and prices are computed to maximize provider utility.
"A Taxonomy for the Analysis of Scientific Workflow Faults" by M. Lackovic, D. Talia, Rafael Tolosana-Calasanz, J. A. Bañares, O. Rana. DOI: 10.1109/CSE.2010.59
Scientific workflows generally involve distributing tasks to distributed resources, which may exist in different administrative domains. Using distributed resources in this way may lead to faults, and detecting, identifying and subsequently correcting them remains an important research challenge. We introduce a fault taxonomy for scientific workflows that supports a systematic analysis of faults, so that potential faults arising at execution time can be corrected (recovered from). The presented taxonomy is motivated by previous work [4], but focuses specifically on workflow environments (where the previous work focused on Grid-based resource management), and is demonstrated through its use in Weka4WS.
"Runtime Configurable Service Process Model" by Liang Zhou, Jian Cao. DOI: 10.1109/CSE.2010.55
Service processes have been increasingly adopted in various domains with the development of service computing. As more and more service processes embody complex business logic for changing business requirements, a service process must be able to configure itself automatically in a given business context. In this paper, we provide an approach to realizing a service process's self-configuration based on the process structure tree. Context-based rules are defined for each fragment at the different levels of the hierarchy; when a fragment is invoked, the rules that match the current environment are selected and applied to it, making the service flexible enough to adapt itself to the context at runtime.
"GPU-RMAP: Accelerating Short-Read Mapping on Graphics Processors" by Ashwin M. Aji, Liqing Zhang, Wu-chun Feng. DOI: 10.1109/CSE.2010.29
Next-generation, high-throughput sequencers are now capable of producing hundreds of billions of short sequences (reads) in a single day. The task of accurately mapping the reads back to a reference genome is of particular importance because it is used in several other biological applications, e.g., genome re-sequencing, DNA methylation, and ChIP sequencing. On a personal computer (PC), the computationally intensive short-read mapping task currently requires several hours to execute when working on very large sets of reads and genomes. Accelerating this task requires parallel computing. Among current parallel computing platforms, the graphics processing unit (GPU) provides massively parallel computational prowess that holds the promise of accelerating scientific applications at low cost. In this paper, we propose GPU-RMAP, a massively parallel version of the RMAP short-read mapping tool that is highly optimized for the NVIDIA family of GPUs. We evaluate GPU-RMAP by mapping millions of synthetic and real reads of varying widths onto the mosquito (Aedes aegypti) and human genomes. We also discuss the effects of various input parameters, such as read width, number of reads, and chromosome size, on the performance of GPU-RMAP. We then show that, despite using the conventionally “slower” but GPU-compatible binary search algorithm, GPU-RMAP outperforms the sequential RMAP implementation, which uses the “faster” hashing technique on a PC. Our data-parallel GPU implementation yields impressive speedups of up to 14.5 times for the mapping kernel and up to 9.6 times for overall program execution time over the sequential RMAP implementation on a traditional PC.
"LogGPH: A Parallel Computational Model with Hierarchical Communication Awareness" by Liang Yuan, Yunquan Zhang, Yuxin Tang, L. Rao, Xiangzheng Sun. DOI: 10.1109/CSE.2010.40
In large-scale cluster systems, interconnecting thousands of computing nodes increases the complexity of the network topology. Nevertheless, few existing computational models consider the impact of the hierarchical communication latencies and bandwidths caused by this network complexity. In this paper we propose a new parallel computational model called LogGPH, which incorporates a new parameter H into the LogGP model to describe the communication hierarchy. Predicting and analyzing point-to-point and collective MPI_Allgather communication with the new model on two 100-teraflops-class supercomputers, the Dawning 5000A and the DeepComp 7000, shows that LogGPH is more accurate than LogGP: the mean absolute error of our model on point-to-point communication is 13%, versus 30% when the communication hierarchy is not taken into account.