For applications such as 3D seismic migration, improving I/O performance on a cluster computing system is critical. Seismic data processing applications are I/O intensive: a large 3D data volume cannot be held entirely in memory, so the input data files must be divided into many fine-grained chunks. Intermediate results are written out at various stages during execution, and final results are written out by the master process. This paper describes a novel way to optimize the parallel I/O data access strategy and load balancing for this program model. The optimization, based on an application-defined API, reduces the number of I/O operations and the amount of communication compared to the original model. It does so by forming groups of threads with "group roots" that read input data (determined by an index retrieved from the master process) and then send it to their group members; in the original model, each process or thread reads the whole input data and outputs its own results. Loads are also balanced through on-line dynamic scheduling of access requests to the migration data. In performance tests, the improvement over the original model often exceeds 60%.
{"title":"A Task-Pool Parallel I/O Paradigm for an I/O Intensive Application","authors":"Jianjiang Li, Lin Yan, Zhe Gao, D. Hei","doi":"10.1109/ISPA.2009.20","DOIUrl":"https://doi.org/10.1109/ISPA.2009.20","url":null,"abstract":"In regards to applications like 3D seismic migration, it is quite important to improve the I/O performance within an cluster computing system. Such seismic data processing applications are the I/O intensive applications. For example, large 3D data volume cannot be hold totally in computer memories. Therefore the input data files have to be divided into many fine-grained chunks. Intermediate results are written out at various stages during the execution, and final results are written out by the master process. This paper describes a novel manner for optimizing the parallel I/O data access strategy and load balancing for the above-mentioned particular program model. The optimization, based on the application defined API, reduces the number of I/O operations and communication (as compared to the original model). This is done by forming groups of threads with \"group roots\", so to speak, that read input data (determined by an index retrieved from the master process) and then send it to their group members. In the original model, each process/thread reads the whole input data and outputs its own results. Moreover the loads are balanced, for the on-line dynamic scheduling of access request to process the migration data. Finally, in the actual performance test, the improvement of performance is often more than 60% by comparison with the original model.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122467916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A motif is an overrepresented pattern in biological sequences, and motif finding is an important problem in bioinformatics. Because of the high computational complexity of motif finding, ever more computational capability is required as the volume of available biological data, such as gene transcription data, grows rapidly. Among the many motif finding algorithms, Gibbs sampling is an effective method for finding long motifs. In this paper we present an improved Gibbs sampling method on graphics processing units (GPUs) to accelerate motif finding. Experimental data show that, compared to traditional CPU programs, our GPU program provides an effective and low-cost solution to the motif finding problem, especially for long motifs.
{"title":"A Parallel Gibbs Sampling Algorithm for Motif Finding on GPU","authors":"Linbin Yu, Yun Xu","doi":"10.1109/ISPA.2009.88","DOIUrl":"https://doi.org/10.1109/ISPA.2009.88","url":null,"abstract":"Motif is overrepresented pattern in biological sequence and Motif finding is an important problem in bioinformatics. Due to high computational complexity of motif finding, more and more computational capabilities are required as the rapid growth of available biological data, such as gene transcription data. Among many motif finding algorithms, Gibbs sampling is an effective method for long motif finding. In this paper we present an improved Gibbs sampling method on graphics processing units (GPU) to accelerate motif finding. Experimental data support that, compared to traditional programs on CPU, our program running on GPU provides an effective and low-cost solution for motif finding problem, especially for long Motif finding.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126988543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ClearSpeed's CSX600, which consists of 96 Processing Elements (PEs), employs a one-dimensional array topology for simple SIMD processing. To expose the performance factors and practical issues of NoCs in an existing modern many-core SIMD system, this paper measures and analyzes the CSX600's NoCs, called Swazzle and ClearConnect. The evaluation and analysis show that sending and receiving overheads are the major factors limiting effective network bandwidth. We found that (1) the number of PEs used, (2) the size of the transferred data, and (3) the data alignment of shared memory are the three main levers for making the best use of the bandwidth. In addition, we estimated the best- and worst-case latencies of data transfers in parallel applications.
{"title":"Performance Analysis of ClearSpeed's CSX600 Interconnects","authors":"Yuri Nishikawa, M. Koibuchi, Masato Yoshimi, Akihiro Shitara, K. Miura, H. Amano","doi":"10.1109/ISPA.2009.102","DOIUrl":"https://doi.org/10.1109/ISPA.2009.102","url":null,"abstract":"ClearSpeed's CSX600 that consists of 96 Processing Elements (PEs) employs a one-dimensional array topology for a simple SIMD processing. To clearly show the performance factors and practical issues of NoCs in an existing modern many-core SIMD system, this paper measures and analyzes NoCs of CSX600 called Swazzle and ClearConnect. Evaluation and analysis results show that the sending and receiving overheads are the major limitation factors to the effective network bandwidth. We found that (1) the number of used PEs, (2) the size of transferred data, and (3) data alignment of a shared memory are three main points to make the best use of bandwidth. In addition, we estimated the best- and worst-case latencies of data transfers in parallel applications.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131193361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient support for cache coherence is extremely important in the design and implementation of many-core processors. In this paper, we propose a synchronization-based coherence (SBC) protocol to efficiently support cache coherence for shared-memory many-core architectures. The unique feature of our scheme is that it uses no directory at all. Inspired by the scope consistency memory model, our protocol maintains coherence at synchronization points. Within a critical section, processor cores record write-sets (which lines have been written in the critical section) using a Bloom filter. When a core releases the lock, the write-set is transferred to a synchronization manager. When another core acquires the same lock, it obtains the write-set from the synchronization manager and invalidates stale data in its local cache. Experimental results show that SBC outperforms a directory-based protocol by an average of 5% in execution time across a suite of scientific applications. At the same time, SBC is more cost-effective than a directory-based protocol, which requires a large amount of hardware resources and a huge design verification effort.
{"title":"A Synchronization-Based Alternative to Directory Protocol","authors":"He Huang, Lei Liu, Nan Yuan, Wei Lin, Fenglong Song, Junchao Zhang, Dongrui Fan","doi":"10.1109/ISPA.2009.25","DOIUrl":"https://doi.org/10.1109/ISPA.2009.25","url":null,"abstract":"The efficient support of cache coherence is extremely important to design and implement many-core processors. In this paper, we propose a synchronization-based coherence (SBC) protocol to efficiently support cache coherence for shared memory many-core architectures. The unique feature of our scheme is that it doesn’t use directory at all. Inspired by scope consistency memory model, our protocol maintains coherence at synchronization point. Within critical section, processor cores record write-sets (which lines have been written in critical section) with bloom-filter function. When the core releases the lock, the write-set is transferred to a synchronization manager. When another core acquires the same lock, it gets the write-set from the synchronization manager and invalidates stale data in its local cache. Experimental results show that the SBC outperforms by averages of 5% in execution time across a suite of scientific applications. At the mean time, the SBC is more cost-effective comparing to directory-based protocol that requires large amount of hardware resource and huge design verification effort.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"37 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To date, most many-core prototypes employ tiled topologies connected through on-chip networks. The throughput and latency of the on-chip networks often become the bottleneck to achieving peak performance, especially for communication-intensive applications. Most studies focus only on the on-chip networks themselves, such as routing algorithms or router microarchitecture, to improve these metrics. The salient aspect of our approach is that we provide a data management framework for highly efficient on-chip traffic based on the overall many-core system. The major contributions of this paper are: (1) a novel tiled many-core architecture that supports software-controlled on-chip data storage and movement management; and (2) the identification of asynchronous bulk data transfer as an effective mechanism for tolerating the latency of a 2-D mesh on-chip network. Finally, we evaluate a 1-D FFT algorithm on the framework; it achieves 47.6 Gflops at 24.8% computational efficiency.
{"title":"Data Management: The Spirit to Pursuit Peak Performance on Many-Core Processor","authors":"Yongbin Zhou, Junchao Zhang, Shuai Zhang, Nan Yuan, Dongrui Fan","doi":"10.1109/ISPA.2009.22","DOIUrl":"https://doi.org/10.1109/ISPA.2009.22","url":null,"abstract":"to date, most of many-core prototypes employ tiled topologies connected through on-chip networks. The throughput and latency of the on-chip networks usually become to the bottleneck to achieve peak performance especially for communication intensive applications. Most of studies are focus on on-chip networks only, such as routing algorithms or router micro-architecture, to improve the above metrics. The salient aspect of our approach is that we provide a data management framework to implement high efficient on-chip traffic based on overall many-core system. The major contributions of this paper include that: (1) providing a novel tiled many-core architecture which supports software controlled on-chip data storage and movement management; (2) identifying that the asynchronous bulk data transfer mechanism is an effective method to tolerant the latency of 2-D mesh on-chip networks. At last, we evaluate the 1-D FFT algorithm on the framework and the performance achieves 47.6 Gflops with 24.8% computation efficiency.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128737491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present P-Cache, which provides a prioritized caching service for a storage server serving multiple concurrently accessing applications with diverse access patterns and unequal importance. Given the replacement algorithm and the applications' access patterns, the end performance of each individual application in a shared cache is actually determined by its allocated cache resources. P-Cache therefore adopts a dynamic partitioning approach to explicitly divide the cache resource among applications, and uses a global cache allocation policy that makes adaptive allocations to guarantee the preset relative caching priorities among competing applications. We have implemented P-Cache in Linux kernel 2.6.18 as a pseudo device driver and measured its performance using synthetic benchmarks and real-life workloads. The experimental results show that the prioritized caching service provided by P-Cache can not only support application priorities but can also improve overall storage system performance. Its runtime overhead is also smaller than that of the Linux page cache.
{"title":"P-Cache: Providing Prioritized Caching Service for Storage System","authors":"Xiaoxuan Meng, Chengxiang Si, Wenwu Na, Lu Xu","doi":"10.1109/ISPA.2009.40","DOIUrl":"https://doi.org/10.1109/ISPA.2009.40","url":null,"abstract":"P-Cache to provide prioritized caching service for storage server which is used to serve multiple concurrently accessing applications with diverse access patterns and unequal importance. Given the replacement algorithm and the application access patterns, the end performance of each individual application in a shared cache is actually determined by its allocated cache resource. So, P-Cache adopts a dynamic partitioning approach to explicitly divide cache resource among applications and utilizes a global cache allocation policy to make adaptive cache allocations to guarantee the preset relative caching priority among competing applications. We have implemented P-Cache in Linux kernel 2.6.18 as a pseudo device driver and measured its performance using synthetic benchmark and real-life workloads. The experiment results show that the prioritized caching service provided by P-Cache can not only be used to support application priority but can also be utilized to improve the overall storage system performance. Its runtime overhead is also smaller compared with Linux page cache.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125942667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtualization has become an active research area in recent years, and virtualization technology brings convenience to the management of computing resources. The development of networks and network computing has given virtualization even more scenarios, and cloud computing uses virtualization technology as well. As the technology develops, it faces security problems such as rootkit attacks and malicious tampering: malicious programs can be planted in the system and booted at any point in a virtualized system's lifetime. There has been little theoretical research on booting a trusted virtualized system. We propose an active trusted model that provides a theoretical basis both for analyzing the state of a virtualized system and for designing trusted virtual machine applications. TBoot is a project for booting a trusted virtual machine; we use our model to show that TBoot can, in theory, boot a trusted virtual machine.
{"title":"An Active Trusted Model for Virtual Machine Systems","authors":"Wentao Qu, Minglu Li, Chuliang Weng","doi":"10.1109/ISPA.2009.68","DOIUrl":"https://doi.org/10.1109/ISPA.2009.68","url":null,"abstract":"Virtualization is a new area for research in recent years, and virtualization technology can bring convenience to the management of computing resources. Together with the development of the network and the network computing, it gives the virtualization technology more scenarios. The cloud computing technology uses the virtualization technology as while. With the development of the technology, it meets some security problems, such as rootkit attacks and malignant tampers. Malicious programs can plug into the system, and be booted at the any time of the virtualized system. There is little theoretical research on booting a trusted virtualized system. We propose an active trusted model in order to give a theoretical model for not only analyzing the state of a virtualized system, but also helping to design trusted virtual machine application. TBoot is a project to boot a trusted virtual machine. We use our model to illustrate that TBoot can boot a trusted virtual machine theoretically.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133707470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Load balancing is an important problem for parallel applications. Many recent supercomputers are built on multi-core processors that share the last-level cache. On one hand, accesses from different cores conflict with each other; on the other hand, different cores carry different workloads, resulting in load imbalance. In this paper, we present a novel technique for balancing parallel applications on multi-core processors based on cache partitioning, which can allocate different parts of the shared cache to different cores exclusively. The intuitive idea is to partition the shared cache among cores according to their workloads: a heavily loaded core gets more of the shared cache than a lightly loaded core, so the heavily loaded core runs faster. We give two algorithms, an initial cache partitioning algorithm (ICP) and a dynamic cache partitioning algorithm (DCP). ICP determines the best partition when the application starts, while DCP adjusts the initial partition as the load balance changes. Our experimental results show that running time is reduced by 7% on average when our cache-partitioning-based load balancing mechanism is used.
{"title":"Balancing Parallel Applications on Multi-core Processors Based on Cache Partitioning","authors":"Guang Suo, Xuejun Yang","doi":"10.1109/ISPA.2009.37","DOIUrl":"https://doi.org/10.1109/ISPA.2009.37","url":null,"abstract":"Load balancing is an important problem for parallel applications. Recently, many super computers are built on multi-core processors which are usually sharing the last level cache. On one hand different accesses from different cores conflict each other, on the other hand different cores have different work loads resulting in load unbalancing. In this paper, we present a novel technique for balancing parallel applications for multi-core processors based on cache partitioning which can allocate different part of shared caches to different cores exclusively. Our intuitive idea is partitioning shared cache to different cores based on their workloads. That is to say, a heavy load core will get more shared caches than a light load core, so the heavy load core runs faster. We give 2 algorithms in this paper, initial cache partitioning algorithm (ICP) and dynamical cache partitioning algorithm (DCP). ICP is used to determine the best partition when application starting while DCP is used to adjust the initial partition based on the changes of load balancing. Our experiment results show that the running time can be reduced by 7% on average when our load balancing mechanism based on cache partitioning is used.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128011713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault tolerance is an important issue in the design of interconnection networks. In this paper, a new fault-tolerant routing algorithm is presented for mesh networks employing wormhole switching. Thanks to its low routing restrictions, the algorithm is highly adaptive and remains connected and deadlock-free despite various fault regions in the mesh. Because it uses a minimal number of virtual channels, it employs as few buffers as possible and is suitable for low-cost fault-tolerant interconnection networks. Since it routes around fault regions using only local fault information, the algorithm makes routing decisions quickly and is practical for interconnection networks. A simulation of the proposed algorithm shows that it exhibits graceful performance degradation.
{"title":"Fault-Tolerant Routing Schemes for Wormhole Mesh","authors":"Xinming Duan, Dakun Zhang, Xuemei Sun","doi":"10.1109/ISPA.2009.62","DOIUrl":"https://doi.org/10.1109/ISPA.2009.62","url":null,"abstract":"Fault-tolerance is an important issue for the design of interconnection networks. In this paper, a new fault-tolerant routing algorithm is presented and is applied in mesh networks employing wormhole switching. Due to its low routing restrictions, the presented routing algorithm is so highly adaptive that it is connected and deadlock-free in spite of the various fault regions in mesh networks. Due to the minimal virtual channels it uses, the presented routing algorithm only employs as few buffers as possible and is suitable for fault-tolerant interconnection networks with low cost. Since it chooses the path around fault regions according to the local fault information, the presented routing algorithm takes routing decisions quickly and is applicable in interconnection networks. Moreover, a simulation is conducted for the proposed routing algorithm and the results show that the algorithm exhibits a graceful degradation in performance.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114853373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of intrusion technologies, dynamic forensics is becoming more and more important. Dynamic forensics using an IDS or a honeypot rests on a common hypothesis: that the system is still in a reliable working state and the collected evidence is believable even after the system has suffered an intrusion. In fact, the system has already entered an insecure and unreliable state, and it is uncertain whether the intrusion detectors and investigators can run normally and whether the obtained evidence is credible. Although intrusion tolerance has been applied in many areas of security for years, little of that research has addressed network forensics. The work presented in this paper integrates intrusion tolerance into dynamic forensics to keep the system under control, ensure the reliability of evidence, and gather more useful evidence for investigation. A dynamic forensics mechanism based on intrusion tolerance is proposed. This paper introduces the architecture of the model, which uses an IDS as the tolerance and forensics trigger and a honeypot as a shadow server; a finite state machine model is described to specify the mechanism, and two cases are then analyzed to illustrate it.
{"title":"Dynamic Forensics Based on Intrusion Tolerance","authors":"Lin Chen, Zhitang Li, C. Gao, Lan Liu","doi":"10.1109/ISPA.2009.66","DOIUrl":"https://doi.org/10.1109/ISPA.2009.66","url":null,"abstract":"With the development of intrusion technologies, dynamic forensics is becoming more and more important. Dynamic forensics using IDS or honeypot are all based on a common hypothesis that the system is still in a reliable working situation and collected evidences are believable even if the system is suffered from intrusion. In fact, the system has already transferred into an insecurity and unreliable state, it is uncertain that whether the intrusion detectors and investigators could run as normal and whether the obtained evidences are credible. Although intrusion tolerance has been applied in many areas of security for years, few researches are referred to network forensics. The work presented in this paper is based on an idea to integrate Intrusion tolerance into dynamic forensics to make the system under control, ensure the reliability of evidences and aim to gather more useful evidences for investigation. A mechanism of dynamic forensics based on intrusion forensics is proposed. This paper introduces the architecture of the model which uses IDS as tolerance and forensics trigger and honeypot as shadow server, the finite state machine model is described to specify the mechanism, and then two cases are analyzed to illuminate the mechanism.","PeriodicalId":346815,"journal":{"name":"2009 IEEE International Symposium on Parallel and Distributed Processing with Applications","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130380111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}