Resource performance monitoring is among the most active research topics in distributed computing. In this paper, we propose an adaptive resource monitoring method for applications in heterogeneous computing environments. Based on the operating environment of the distributed heterogeneous system and changes in the system resource workload, the method combines a periodic pull mode with an event-driven push mode to adaptively publish and retrieve system resource information. Preliminary experiments show that our adaptive monitoring method improves the efficiency of system monitoring over regular monitoring approaches.
{"title":"An Adaptive Resource Monitoring Method for Distributed Heterogeneous Computing Environment","authors":"Gang Yang, Kaibo Wang, Xingshe Zhou","doi":"10.1109/ISPA.2009.13","DOIUrl":"https://doi.org/10.1109/ISPA.2009.13","url":null,"abstract":"Resource performance monitoring is among the most active research topics in distributed computing. In this paper, we propose an adaptive resource monitoring method for applications in heterogeneous computing environment. According to the operating environment of distributed heterogeneous system and the changes of system resource workload, the method combines periodic pull mode with event-driven push mode to adaptively publish and retrieve system resource information. Preliminary experiments reveal that, by using our adaptive monitoring method, the efficiency of system monitoring is improved over that accrued by using regular monitoring approaches.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125513415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
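The combined pull/push idea in the abstract above can be sketched in a few lines: a node is polled at a fixed period (pull), but also publishes an update immediately when its load changes by more than a threshold (push). The class name, the 0.2 threshold, and the tuple return values are illustrative assumptions, not the authors' design.

```python
class MonitoredNode:
    """Sketch of a monitored resource combining periodic pull with event-driven push."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold      # relative change that triggers a push
        self.load = 0.0                 # current resource load (0..1)
        self.last_published = 0.0       # value last seen by the monitor

    def update_load(self, new_load):
        """Record a load change; return a pushed sample if the change is significant."""
        self.load = new_load
        if abs(new_load - self.last_published) > self.threshold:
            self.last_published = new_load
            return ("push", new_load)   # event-driven push
        return None                     # small change: wait for the next pull

    def poll(self):
        """Periodic pull: always returns the current value."""
        self.last_published = self.load
        return ("pull", self.load)
```

Small load fluctuations are absorbed by the periodic pull, while large jumps are published immediately, which is where the claimed efficiency gain over purely periodic monitoring would come from.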
To address the unbalanced detection performance across attack types in current large-scale network intrusion detection algorithms, this paper proposes a Distributed Transfer Network Learning algorithm. The algorithm introduces transfer learning into the Distributed Network Boosting algorithm to guide the learning of attack types with poor performance, adopting instance transfer learning for domain adaptation. Experimental results on the KDD Cup '99 data set show that the proposed algorithm has higher efficiency and better performance. In particular, the detection accuracy of R2L attacks is improved greatly while high detection accuracy is maintained for the other attack types.
{"title":"Distributed Transfer Network Learning Based Intrusion Detection","authors":"S. Gou, Yuqin Wang, L. Jiao, Jing Feng, Yao Yao","doi":"10.1109/ISPA.2009.92","DOIUrl":"https://doi.org/10.1109/ISPA.2009.92","url":null,"abstract":"In order to solve the problem that there exists unbalanced detection performance on different types of attacks in current large-scale network intrusion detection algorithms, Distributed Transfer Network Learning algorithm is proposed in this paper. The algorithm introduces transfer learning into Distributed Network Boosting algorithm for instructing the attacks learning with poor performance, in which the instances transfer learning is adopted for different domain adaptation. The experimental results on the Kdd Cup’99 Data Set show that the proposed algorithm has higher efficacy and better performance. Further, the detection accuracy of R2L attacks has been improved greatly while maintaining higher detection accuracy of other attack types.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114647785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
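"Instance transfer learning" of the kind the abstract alludes to is usually done with TrAdaBoost-style instance weighting: source-domain instances that the weak learner misclassifies are down-weighted, while misclassified target-domain instances are up-weighted. The sketch below shows one such weight update; it is a generic illustration under that assumption, not the paper's distributed variant.

```python
import math

def update_weights(weights, errors, is_source, round_error, n_rounds):
    """One TrAdaBoost-style weight update.

    weights: current instance weights; errors: 1 if misclassified, else 0;
    is_source: True for source-domain instances; round_error: weighted error
    of this round's weak learner on the target domain."""
    round_error = min(max(round_error, 1e-9), 0.499)  # keep both betas well-defined
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(len(weights)) / n_rounds))
    beta_tgt = round_error / (1.0 - round_error)
    new_w = []
    for w, e, src in zip(weights, errors, is_source):
        if src:
            new_w.append(w * beta_src ** e)       # shrink misclassified source instances
        else:
            new_w.append(w * beta_tgt ** (-e))    # grow misclassified target instances
    total = sum(new_w)
    return [w / total for w in new_w]
```

Over rounds, source instances that keep disagreeing with the target concept fade out, which is what lets a rare class like R2L borrow useful source data without being swamped by irrelevant instances.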
Virtualization is often used to offer isolation among applications running in separate virtual machines (VMs). Current virtual machine monitors (VMMs) do a decent job of resource isolation for memory, CPU, and I/O devices. However, looking further into the use of the lower-level shared cache, we notice that one virtual machine's cache behavior may interfere with another's due to uncontrolled cache sharing. In this situation, performance isolation cannot be guaranteed. This paper presents a cache partitioning approach that can be implemented in the VMM. We have implemented this mechanism in the Xen VMM using the page coloring technique traditionally applied in the OS. Our VMM-based implementation is fully transparent to the guest OSes and thus has the advantages of simplicity and flexibility. Our evaluation shows that our cache partitioning method works efficiently and improves the performance of co-scheduled applications running within different VMs. On concurrent workloads selected from the SPEC CPU 2006 benchmarks, our technique achieves a performance improvement of up to 19% for the most sensitive workloads.
{"title":"A Simple Cache Partitioning Approach in a Virtualized Environment","authors":"Xinxin Jin, Haogang Chen, Xiaolin Wang, Zhenlin Wang, Xiang Wen, Yingwei Luo, Xiaoming Li","doi":"10.1109/ISPA.2009.47","DOIUrl":"https://doi.org/10.1109/ISPA.2009.47","url":null,"abstract":"Virtualization is often used in systems for the purpose of offering isolation among applications running in separate virtual machines (VM). Current virtual machine monitors (VMMs) have done a decent job in resource isolation in memory, CPU and I/O devices. However, when looking further into the usage of lower-level shared cache, we notice that one virtual machine’s cache behavior may interfere with another’s due to the uncontrolled cache sharing. In this situation, performance isolation cannot be guaranteed. This paper presents a cache partitioning approach which can be implemented in the VMM. We have implemented this mechanism in Xen VMM using the page coloring technique traditionally applied to the OS. Our VMM-based implementation is fully transparent to the guest OSes. It thus shows the advantages of simplicity and flexibility. Our evaluation shows that our cache partitioning method can work efficiently and improve the performance of co-scheduled applications running within different VMs. In the concurrent workloads selected from the SPEC CPU 2006 benchmarks, our technique achieves a performance improvement by up to 19% for the most sensitive workloads","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128808894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
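Page coloring, the technique the abstract names, comes down to one piece of arithmetic: a physical page's "color" is the set-index bits above the page offset, so reserving disjoint color sets for different VMs partitions the cache between them. The cache geometry below (4 MB, 16-way, 64 B lines, 4 KB pages) is an assumed example, not Xen's or the paper's configuration.

```python
PAGE_SIZE = 4096
CACHE_SIZE = 4 * 1024 * 1024
WAYS = 16
LINE_SIZE = 64

SETS = CACHE_SIZE // (WAYS * LINE_SIZE)        # 4096 cache sets
SETS_PER_PAGE = PAGE_SIZE // LINE_SIZE         # 64 consecutive sets covered by one page
NUM_COLORS = SETS // SETS_PER_PAGE             # 64 usable page colors

def page_color(pfn):
    """Color of a physical frame: the low-order set-index bits above the page offset."""
    return pfn % NUM_COLORS

def frames_for_vm(vm_colors, free_pfns):
    """Filter a free-frame pool down to the colors assigned to one VM."""
    return [p for p in free_pfns if page_color(p) in vm_colors]
```

Because the VMM already controls guest-physical-to-machine mappings, it can apply this filter when allocating frames, which is why the mechanism stays transparent to the guest OSes.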
Cancan Liu, Weimin Zhang, Zhigang Luo, X. Cao, Hai Liu
The development of large-scale parallel scientific computing applications has created urgent demands for powerful computing capacity and sophisticated process management technologies. Meanwhile, scientific experiment processes have become so complicated that it is hard for e-scientists to control the experiment analysis processes by hand. In this paper, we present a scientific workflow system called EPSWFlow for e-scientists in the climate domain, supporting service composition and workflow orchestration. To integrate the large number of existing legacy applications into the system, we provide a service wrapping method and a unified interface through which workflow users access the services. The workflow system can process the experiment workflow dynamically and manage heterogeneous grid resources transparently.
{"title":"Managing Large-Scale Scientific Computing in Ensemble Prediction Using BPEL","authors":"Cancan Liu, Weimin Zhang, Zhigang Luo, X. Cao, Hai Liu","doi":"10.1109/ISPA.2009.105","DOIUrl":"https://doi.org/10.1109/ISPA.2009.105","url":null,"abstract":"The development of large-scale parallel scientific computing applications has put forward more urgent demands for powerful computing capacities and complex process managing technologies. Meanwhile, the scientific experiment processes become more and more complicated which makes it becomes a hard work for e-scientists to control the experiment analysis processes by hand. In this paper, we provide a scientific workflow system called EPSWFlow for the escientists in climate domain for services composition and workflow orchestration. In order to integrate the large number of the existing legacy applications into the system, we provide a service wrapping method and a unified interface for the workflow users to access to the services. The workflow system can process the experiment process dynamically and manage the heterogeneous grid resources transparently.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130970814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
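The "service wrapping" idea in the abstract can be shown in miniature: hide a legacy command-line application behind a uniform `invoke(service, *args)` interface so that a workflow engine can compose it like any other service. The registry layout and the `echo`-based example entry are assumptions for illustration, not EPSWFlow's actual interface.

```python
import subprocess

# Hypothetical registry mapping a service name to the legacy command it wraps.
REGISTRY = {"echo": ["echo"]}

def invoke(service, *args):
    """Uniform interface: run the wrapped legacy program and return its output."""
    cmd = REGISTRY[service] + list(args)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

A real wrapper would also map files and environment settings, but the point is the same: once every legacy binary answers to one calling convention, a BPEL orchestrator can treat them all as interchangeable services.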
Zuoxian Nie, Xin-hua Jiang, Jian-cheng Liu, Hai-yan Yang
Completion time estimation for workflow instances is an important basis for real-time workflow management and scheduling. Current research on this topic overlooks the fact that instances of workflows containing OR-SPLITs are certain to bypass some activities; thus, at a particular point in time, the completion time estimate for a workflow instance should be based only on the activities it has already visited and those it may visit in the future. First, rules are proposed to build reachable subnets for workflow instances. Then, the complex problem of completion time estimation for a generalized well-formed reachable subnet is decomposed into simpler ones for potential instance subgraphs, which are computed based on an active-transition performance-equivalent model. Finally, an example demonstrates the process of completion time estimation for generalized well-formed workflow instances.
{"title":"Completion Time Estimation for Instances of Generalized Well-Formed Workflow","authors":"Zuoxian Nie, Xin-hua Jiang, Jian-cheng Liu, Hai-yan Yang","doi":"10.1109/ISPA.2009.43","DOIUrl":"https://doi.org/10.1109/ISPA.2009.43","url":null,"abstract":"Completion time estimation for workflow instances is an important basis for real time workflow management and scheduling. Current researches on this topic omitted the fact that: instances of workflow that contains OR-SPLIT are certain to bypass some activities, thus at a particular time point, estimation of completion time for workflow instances should be based merely on activities that were already visited by it and are possible to be visited by it in the future. Firstly, rules were proposed to build reachable subnets for workflow instances. Subsequently, the complex problem of completion time estimation for generalized well-formed reachable subnet was decomposed to simpler ones for potential instance subgraphs, which were computed based on active transition performance equivalent model. Lastly, an example was given to demonstrate the process of completion time estimation for generalized well-formed workflow instances.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113998639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
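A toy version of the estimation idea above: sequences add durations, AND-joins wait for their slowest branch, and an OR-split contributes the probability-weighted expectation over the branches the instance might still take. The tuple-based net encoding, fixed durations, and branch probabilities are illustrative assumptions, not the paper's Petri-net formalism.

```python
def remaining_time(node):
    """Expected remaining completion time of a workflow fragment.

    node is one of:
      ('act', duration)          - a single activity
      ('seq', [node, ...])       - sequential composition
      ('and', [node, ...])       - parallel branches joined with AND
      ('or',  [(p, node), ...])  - OR-split with branch probabilities summing to 1
    """
    kind, body = node
    if kind == 'act':
        return body
    if kind == 'seq':
        return sum(remaining_time(n) for n in body)        # one after another
    if kind == 'and':
        return max(remaining_time(n) for n in body)        # join waits for all
    if kind == 'or':
        return sum(p * remaining_time(n) for p, n in body) # expectation over branches
    raise ValueError(kind)
```

As the instance advances and commits to a branch at an OR-split, the corresponding `('or', ...)` node collapses to the chosen branch, which is exactly the "reachable subnet" refinement the abstract describes.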
Typical load balancing strategies have proved effective in traditional storage systems because they provide specific algorithms for data partitioning and workload distribution. In an Object-based Storage System (OBS), however, most of these strategies fail to accommodate its new feature: a smarter storage device, the Object Storage Device (OSD), which can take on more responsibility to offload storage-aware jobs from distributed file systems. As more intelligent and powerful functions are deployed on the OSD, such as replication, which is widely adopted by modern distributed storage systems, their misuse can degrade network bandwidth and computing performance. Load balancing in the presence of replicas has therefore become a pressing issue in today's storage applications. In this paper, we propose a replica-based Duplex Load Balancing Strategy (DLBS) for better load balancing. DLBS dynamically combines an active strategy, which controls replica reproduction, with a passive strategy, which supervises hot-spot appearance and transfers control. The model combines them to handle the load balance influenced by replicas. Theoretical analysis and simulation results demonstrate that DLBS can be used in a real OBS system to provide more effective and efficient load balancing than other methods.
{"title":"DLBS: Duplex Loading Balancing Strategy on Object Storage System","authors":"Tang Zhipeng, F. Dan, Tu Xudong, H. Fei","doi":"10.1109/ISPA.2009.5","DOIUrl":"https://doi.org/10.1109/ISPA.2009.5","url":null,"abstract":"Typical load balancing strategies have been proved to be effective in traditional storage system in that they succeeded in providing specific algorithms on data partition and workload distribution. When comes to OBS, an Object-based Storage System, most of these strategies are hard to comply with the new feature— a smarter storage device named Object Storage Device (OSD) that is capable of expanding with upper burden to release the storage-aware jobs from distributed file systems. While more intelligent and powerful functions begin to be depolied on the OSD, such as Replication Technology, which was widely adopted by modern distributed storage systems, however, its misuse will lead to the downgrade on network bandwidth and computing performance. So load balance within Replica technique emerges as a hot issue in nowadays storage application. In this paper, we propose a Replica-based Duplex Load Balancing Strategy (DLBS) to better load balancing. In general, DLBS is a dynamic combination of an active strategy working on replica reproduction control, and another passive strategy that supervises hot-spot appearance, transfering control as well. The model puts them together in handling load balance influenced by replicas. Theoretical analysis and simulation results demonstrate that DLBS can be utilized in the real OBS system to provide more effective and efficient load balance than other methods.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130214216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
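The "duplex" structure described in this abstract can be sketched as two cooperating rules: an active rule that reproduces a replica of a hot object on a lightly loaded OSD before overload, and a passive rule that steers requests away from an OSD that has already become a hot spot. The thresholds, the dict-based load map, and the assumption that OSD 0 initially holds every object are invented for illustration.

```python
class Duplex:
    """Sketch of a duplex (active + passive) replica-based load balancer."""

    def __init__(self, replicate_at=100, hot_at=0.8):
        self.replicate_at = replicate_at     # requests/s per replica that justify a new copy
        self.hot_at = hot_at                 # utilization marking an OSD as a hot spot
        self.replicas = {}                   # object -> list of OSD ids holding it

    def active(self, obj, request_rate, osd_loads):
        """Active rule: reproduce the replica on the least-loaded non-holder."""
        holders = self.replicas.setdefault(obj, [0])   # assume OSD 0 holds it initially
        if request_rate / len(holders) > self.replicate_at:
            target = min(set(osd_loads) - set(holders), key=osd_loads.get)
            holders.append(target)
        return list(holders)

    def passive(self, obj, osd_loads):
        """Passive rule: route the request to the coolest holder of the object."""
        holders = self.replicas.get(obj, [0])
        cool = [o for o in holders if osd_loads[o] < self.hot_at]
        return min(cool or holders, key=osd_loads.get)
```

The active half works ahead of demand; the passive half handles the hot spots that slip through, which is the division of labor the abstract attributes to DLBS.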
Fenglong Song, Zhiyong Liu, Dongrui Fan, He Huang, Nan Yuan, Lei-Ping Yu, Junchao Zhang
On-chip many-core architecture is an emerging and promising computation platform. High-speed on-chip communication and abundant on-chip resources are two outstanding advantages of this architecture, providing an opportunity to implement efficient synchronization schemes. The practical execution efficiency of the synchronization scheme is critical on this platform. However, there is little research on systematic methods for evaluating and choosing synchronization schemes for on-chip many-core processors, or on the effect of dedicated hardware support in this context. We therefore focus on evaluation methods and criteria for synchronization schemes on this platform. First, we present several criteria suited to on-chip many-core architectures: the absolute overhead of a synchronization operation, the transfer time between different synchronization operations, the overhead caused by load imbalance, and the network congestion caused by synchronization operations. Second, we illustrate how to design microbenchmarks, each dedicated to evaluating one performance criterion. Finally, we implement these microbenchmarks and synchronization schemes on an on-chip many-core processor with a shared level-two cache and on an AMD Opteron commercial chip multiprocessor, and analyze the effect of dedicated hardware support. Results show that most of the synchronization overhead is caused by load imbalance and by serialization at the synchronization point. They also show that a synchronization scheme supported by dedicated hardware can markedly improve performance on an on-chip many-core processor.
{"title":"Evaluation Method of Synchronization for Shared-Memory On-Chip Many-Core Processor","authors":"Fenglong Song, Zhiyong Liu, Dongrui Fan, He Huang, Nan Yuan, Lei-Ping Yu, Junchao Zhang","doi":"10.1109/ISPA.2009.6","DOIUrl":"https://doi.org/10.1109/ISPA.2009.6","url":null,"abstract":"On-chip many-core architecture is an emerging and promising computation platform. High speed on-chip communication and abundant chipped resources are two outstanding advantages of this architecture, which provide an opportunity to implement efficient synchronization scheme. The practical execution efficiency of synchronization scheme is critical to this platform. However, there are few researches on systematic evaluation method of choice synchronization schemes for on-chip many-core processors, and effect of dedicated hardware support in this context. So we focus on the evaluation method and criterion of synchronization scheme on the platform. Firstly, we present several criterions proper to on-chip many-core architecture, that is, absolute overhead of synchronization operation, the transferring time between different synchronization operations, overhead caused by load imbalance, and the network congestion caused by synchronization operation. Secondly, we illustrate how to design microbenchmarks which one dedicated to evaluate a performance criterion respectively. Finally, we implement these microbenchmarks and synchronization schemes on an on-chip many-core processor with shared level-two cache and AMD Opteron commercial chip multi-processor, respectively. And we analyze effect of dedicated hardware support. Results show that the most overhead of synchronization is caused by load imbalance and serialization on synchronization point. It also shows that synchronization scheme supported with dedicated hardware can improve its performance obviously for chipped many-core processor.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134054873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
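The first criterion in the list, the absolute overhead of a synchronization operation, is measured the way such microbenchmarks usually are: time many back-to-back synchronization episodes and divide by the count. The sketch below does this with a host-side Python `threading.Barrier` to show the methodology only; the paper's benchmarks run on the many-core hardware itself, and the thread and iteration counts here are arbitrary.

```python
import threading
import time

def barrier_overhead(n_threads=4, iters=200):
    """Mean wall-clock cost of one barrier episode across n_threads threads."""
    bar = threading.Barrier(n_threads)

    def worker():
        for _ in range(iters):
            bar.wait()                       # the synchronization operation under test

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - t0
    return elapsed / iters                   # amortized cost per barrier episode
```

Repeating the same loop with deliberately unequal per-thread work between barriers isolates the load-imbalance component that the paper identifies as the dominant overhead.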
This paper proposes a new semi-reliable multicast algorithm based on the (m, k)-firm scheduling technique, in which, within each window of k consecutive messages sent by a sender, at least m must be received by the receiver. To enforce this constraint, message recovery mechanisms from reliable multicast protocols can be used. This algorithm is mainly applicable to applications that can tolerate losses, as long as the losses do not occur consecutively and do not exceed a specified maximum number of messages, without any degradation of the quality of service.
{"title":"A Case-Based Component Selection Framework for Mobile Context-Aware Applications","authors":"Fan Dong, Li Zhang, Dexter H. Hu, Cho-Li Wang","doi":"10.1109/ISPA.2009.110","DOIUrl":"https://doi.org/10.1109/ISPA.2009.110","url":null,"abstract":"This paper proposes a new semi-reliable multicast algorithm based on the (m, k)-firm scheduling technique, where in each consecutive k window messages sent by a sender, at least m of these messages must be received by the receiver. To assure this restriction, message recovery mechanisms from reliable multicast protocols can be used. This algorithm is mainly applicable to applications that may suffer losses, as long as these losses do not occur consecutively and do not overrun a specified maximum value of message, without any degradation of the quality of service.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127755543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
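The (m, k)-firm condition stated in the abstract — at least m of every k consecutive messages must arrive — can be tracked at the receiver with a sliding window of arrival flags, triggering recovery only when the window would otherwise violate the constraint. The class below is a sketch under that reading; the name and the boolean-per-slot interface are assumptions.

```python
from collections import deque

class MKFirmReceiver:
    """Track the (m, k)-firm constraint over the last k message slots."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.window = deque(maxlen=k)        # 1 = received, 0 = lost

    def on_message(self, received):
        """Record one message slot; return True iff recovery must be triggered."""
        self.window.append(1 if received else 0)
        if len(self.window) < self.k:
            return False                     # not enough history to judge yet
        return sum(self.window) < self.m     # constraint violated: recover a message
```

This is what makes the scheme "semi-reliable": isolated losses are simply ignored, and the heavier recovery machinery of reliable multicast is invoked only when losses cluster enough to break the (m, k) guarantee.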
The spectrum access scheme is a fundamental component in building efficient wireless networks. Conventional methods such as proactive channel assignment are costly due to the large amount of protocol overhead, and those algorithms also suffer from an inability to deal with channel dynamics. Opportunistic methods, however, spend more time on probing and suffer from myopic decisions. We present a decision-based dynamic spectrum access algorithm (DDSA), built upon a Markov decision process (MDP), which can adaptively handle the DSA process for higher throughput. We employ quiet probing and dynamic control mechanisms in DDSA to reduce protocol overhead and improve adaptivity. Unlike previous methods, DDSA is model-driven: we model the IEEE 802.11 DCF for virtual channel state probing, which improves the accuracy of the channel state estimate and reduces protocol overhead. Using a heuristic and adaptive algorithm named 'hindsight optimization', we overcome the hardness of computing the MDP. Moreover, after feasibility testing and scaling, the validated decision can be confidently applied for congestion-free DSA.
{"title":"DDSA: A Sampling and Validation Based Spectrum Access Algorithm in Wireless Networks","authors":"Panlong Yang, Hai Wang, Guihai Chen","doi":"10.1109/ISPA.2009.64","DOIUrl":"https://doi.org/10.1109/ISPA.2009.64","url":null,"abstract":"Spectrum access scheme is a fundamental component in building efficient wireless networks. Conventional methods such as proactive channel assignment is costly due to large amount of protocol overhead. Also, those algorithms suffer from its inability in dealing with channel dynamics. The opportunistic methods however, spend more time on probing, and suffer from the myopic decisions as well. We present a decision based dynamic spectrum access algorithm (DDSA), which is built upon the Markov decision process (MDP), and could adaptively handle the DSA process for higher throughput. We employ quiet probing and dynamic controlling mechanisms in DDSA, so as to achieve a reduced protocol overhead and improved adaptivity. Different from previous methods, the DDSA is a model driven method, and we use the modeling technique on the IEEE 802.11 DCF for virtual channel state probing. The modeling technique could help us improve the accuracy on channel state, and reduce protocol overhead. Using a heuristic and adaptive algorithm named `hindsight optimization', we solve the hardness in computing the MDP. Moreover, under the feasibility testing and scaling processes, the validated decision can be confidentially applied for a congestion-free DSA.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129186790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
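The decision core of a DSA scheme, stripped to its simplest form: estimate each channel's idle probability from probe history and access the channel with the best expected reward. This one-step greedy rule is only a miniature of the MDP-plus-hindsight-optimization machinery in the paper; the statistics layout, smoothing, and reward model are illustrative assumptions.

```python
def best_channel(stats):
    """Pick the channel with the highest expected reward.

    stats: dict channel_id -> (idle_probes, total_probes, bandwidth).
    Expected reward = estimated idle probability * channel bandwidth."""
    def reward(c):
        idle, total, bw = stats[c]
        p_idle = (idle + 1) / (total + 2)   # Laplace-smoothed idle-probability estimate
        return p_idle * bw
    return max(stats, key=reward)
```

A full MDP formulation would additionally value the information gained by probing and the cost of switching channels, which is where hindsight optimization earns its keep over this greedy baseline.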
Process enactment plays a pivotal role in BPM systems. To enhance the scalability and robustness of BPM systems, a straightforward solution is to provide a redundant system with a multi-engine architecture. However, without an effective scheduler, a multi-engine BPM system cannot realize its advantages. This paper focuses on the design of an adaptive scheduler that handles both process-level and activity-level scheduling based on a dynamic weighted scheduling algorithm. Experiments show that, with this scheduler, the load capacity of the multi-engine BPM system is improved and the average response time of process requests is reduced, especially when the engine nodes differ in configuration.
{"title":"An Adaptive Scheduler for Enhancing the Efficiency of Multi-engine BPM Systems","authors":"Junyi Sun, Houfu Li, Yanbo Han","doi":"10.1109/ISPA.2009.78","DOIUrl":"https://doi.org/10.1109/ISPA.2009.78","url":null,"abstract":"Process enactment plays a pivotal role in BPM systems. In order to enhance the scalability and robustness of BPM systems, a straightforward solution is to provide a redundant system with multi-engine architecture. However, without an effective scheduler, the multi-engine BPM systems cannot play out its advantages. This paper focuses on the design of an adaptive scheduler which can handle both process-level and activity-level scheduling based on dynamic weighted scheduling algorithm. The experiments show that, with the scheduler, the load capacity of the multi-engine BPM system can be improved and the average response time of process requests can be reduced, especially when each engine node has some differences in configurations.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129654181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
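A dynamic weighted dispatch rule of the kind the abstract describes can be sketched as follows: each engine carries a static capacity weight (its configuration) and a live count of active requests, and each new request goes to the engine with the lowest load relative to its weight. The engine names and the load/weight ratio are assumptions for illustration, not the paper's algorithm.

```python
def pick_engine(engines):
    """engines: dict name -> (weight, active_requests); higher weight = stronger node.
    Returns the engine with the lowest weighted relative load."""
    return min(engines, key=lambda e: engines[e][1] / engines[e][0])

def dispatch(engines, n_requests):
    """Assign n_requests one by one, updating each engine's live load as we go."""
    engines = {k: list(v) for k, v in engines.items()}   # mutable copies
    trace = []
    for _ in range(n_requests):
        e = pick_engine(engines)
        engines[e][1] += 1                               # request now active on e
        trace.append(e)
    return trace
```

Weighting by configured capacity is what helps most "when each engine node has some differences in configurations": a node with twice the weight absorbs roughly twice the requests before it looks as loaded as its peers.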