Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366764
XiaoJian Wu, A. Reddy
This paper considers the problem of efficiently managing storage space in a hybrid storage system employing flash and disk drives. Flash and disk drives exhibit different read and write performance characteristics. We propose a technique for balancing workload properties across the flash and disk drives in such a hybrid storage system. The presented approach automatically and transparently migrates data blocks between flash and disk drives based on their access patterns. This paper presents the design and an evaluation of the proposed approach on a Linux testbed through realistic experiments.
Title: Managing storage space in a flash and disk hybrid storage system
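The migration idea described above can be illustrated with a toy model: score each block by its observed read/write mix and keep read-dominated blocks on flash (which, at the time, served random reads far better than random writes). The class, scoring rule, and capacity handling below are invented for illustration and are not the authors' actual policy.

```python
from collections import defaultdict

class HybridStore:
    """Toy model: place read-hot blocks on flash, keep write-hot blocks
    on disk. Illustrative only; the paper's migration policy is richer."""

    def __init__(self, flash_capacity):
        self.flash_capacity = flash_capacity
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.on_flash = set()

    def access(self, block, is_write):
        (self.writes if is_write else self.reads)[block] += 1
        self._maybe_migrate(block)

    def _maybe_migrate(self, block):
        # Favor flash for blocks with a read-dominated access pattern;
        # the 2x write penalty is an arbitrary illustrative weight.
        score = self.reads[block] - 2 * self.writes[block]
        if score > 0 and block not in self.on_flash:
            if len(self.on_flash) >= self.flash_capacity:
                # Evict the flash-resident block with the worst score.
                victim = min(self.on_flash,
                             key=lambda b: self.reads[b] - 2 * self.writes[b])
                self.on_flash.discard(victim)
            self.on_flash.add(block)
        elif score <= 0:
            self.on_flash.discard(block)

s = HybridStore(flash_capacity=2)
for _ in range(3):
    s.access(1, is_write=False)   # read-hot block migrates to flash
s.access(2, is_write=True)        # write-heavy block stays on disk
```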
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366283
Chang-Burm Cho, James Poe, Tao Li, Jingling Yuan
As microprocessors become more complex, early design space exploration plays an essential role in reducing time to market and post-silicon surprises. The trend toward multi-/many-core processors will result in sophisticated large-scale architecture substrates (e.g. non-uniformly accessed caches interconnected by a network-on-chip) that exhibit increasingly complex and heterogeneous behavior. While conventional analytical modeling techniques can efficiently explore the characteristics (e.g. IPC and power) of monolithic architecture designs, existing methods lack the ability to accurately and informatively forecast the complex behavior of large and distributed architecture substrates across the design space. This limitation will only be exacerbated by rapidly increasing integration scale (e.g. the number of cores per chip). In this paper, we propose novel, multi-scale 2D predictive models which can efficiently reason about the characteristics of large and sophisticated multi-core oriented architectures during the design space exploration stage without using detailed cycle-level simulations. Our proposed techniques employ 2D wavelet multiresolution analysis and neural network regression modeling. We extensively evaluate the efficiency of our predictive models in forecasting the complex and heterogeneous characteristics of a large, distributed shared cache interconnected by a network-on-chip in multi-core designs, using both multi-programmed and multithreaded workloads. Experimental results show that the models achieve high accuracy while maintaining low complexity and computation overhead. Through case studies, we demonstrate that the proposed techniques can be used to informatively explore and accurately evaluate global, cooperative multi-core resource allocation and thermal-aware designs that cannot be achieved using conventional design exploration methods.
Title: Accurate, scalable and informative design space exploration for large and sophisticated multi-core oriented architectures
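To make the wavelet multiresolution idea concrete: one level of the (1D) Haar transform splits a signal into pairwise averages, which capture the coarse shape, and pairwise differences, which capture the detail. The paper applies the 2D analogue plus a neural-network regressor; this single 1D step is only to show the decomposition, not the authors' full pipeline.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform on an even-length list.
    Returns (averages, details); recursing on the averages yields the
    multiresolution hierarchy used in wavelet analysis."""
    avg = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avg, det
```

A predictive model can then be trained on just the few coarse coefficients instead of the full signal, which is what makes the approach cheap.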
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366768
Hyejeong Lee, H. Bahn
NAND flash memory is increasingly used as the swap space of virtual memory as well as the file storage of embedded systems. Since temporal locality is dominant in the page references of virtual memory, LRU and its approximations are widely used. However, we show that this is not true for write references. We analyze the characteristics of virtual memory read and write references separately, and find that the temporal locality of write references is weak and irregular. Based on this observation, we present a new page replacement algorithm that uses different strategies for read and write operations in predicting the re-reference likelihood of pages. For read operations, temporal locality alone is used; for write operations, write frequency is used as well as temporal locality. The algorithm partitions the memory space into a read area and a write area to track their reference patterns precisely, and then adjusts their sizes dynamically based on those reference patterns and I/O costs. Though the algorithm has no external parameter to tune, it outperforms CLOCK, CAR, and CFLRU by 20–66%. It also supports optimized implementations for virtual memory systems.
Title: Characterizing virtual memory write references for efficient page replacement in NAND flash memory
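The core structural idea, separate read and write areas with different eviction rules, can be sketched as follows. This toy version fixes the two area sizes, whereas the paper adjusts them dynamically from reference patterns and I/O costs; the class name and interfaces are invented for illustration.

```python
from collections import OrderedDict

class ReadWriteAwareCache:
    """Sketch: evict by recency in the read area (LRU) but by write
    frequency in the write area. Fixed partition sizes; not the paper's
    dynamic-partitioning algorithm."""

    def __init__(self, read_slots, write_slots):
        self.read_area = OrderedDict()   # page -> None, kept in LRU order
        self.write_area = {}             # page -> write count
        self.read_slots, self.write_slots = read_slots, write_slots

    def reference(self, page, is_write):
        if is_write:
            self.read_area.pop(page, None)
            self.write_area[page] = self.write_area.get(page, 0) + 1
            if len(self.write_area) > self.write_slots:
                victim = min(self.write_area, key=self.write_area.get)
                del self.write_area[victim]      # evict least-written page
        else:
            if page in self.write_area:
                return                           # already cached
            self.read_area[page] = None
            self.read_area.move_to_end(page)     # mark most recently used
            if len(self.read_area) > self.read_slots:
                self.read_area.popitem(last=False)   # evict LRU page

c = ReadWriteAwareCache(read_slots=2, write_slots=2)
for p in ('a', 'b', 'c'):                # third read evicts LRU page 'a'
    c.reference(p, is_write=False)
for p in ('w1', 'w1', 'w2', 'w3'):       # fourth write evicts a once-written page
    c.reference(p, is_write=True)
```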
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366154
Jaehong Kim, Dawoon Jung, Jin-Soo Kim, Jaehyuk Huh
Solid state disks (SSDs) consisting of NAND flash memory are being widely used in laptops, desktops, and even enterprise servers. SSDs have many advantages over hard disk drives (HDDs) in terms of reliability, performance, durability, and power efficiency. Typically, the internal hardware and software organization varies significantly from SSD to SSD, and thus each SSD exhibits different parameters which influence its overall performance. In this paper, we propose a methodology which can extract several essential parameters affecting the performance of SSDs. The target parameters considered in this paper are (1) the size of the read/write unit, (2) the size of the erase unit, (3) the type of NAND flash memory used, (4) the size of the read buffer, and (5) the size of the write buffer. Obtaining these parameters allows us to better understand the internal architecture of the target SSD and to get the most performance out of the SSD by performing SSD-specific optimizations.
Title: A methodology for extracting performance parameters in solid state disks (SSDs)
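A common microbenchmarking idea behind this kind of parameter extraction is to sweep request sizes and look for a "knee" in the latency curve: once requests overflow an internal buffer, latency jumps sharply. The function and the synthetic numbers below are illustrative assumptions, not the paper's exact procedure or measurements.

```python
def infer_buffer_size(latency_by_size):
    """Given mean write latency measured at increasing request sizes
    (dict: size_in_kb -> latency_in_us), guess the buffer size as the
    largest size before latency jumps sharply. The 2x threshold for a
    'sharp' jump is an arbitrary illustrative choice."""
    sizes = sorted(latency_by_size)
    for prev, cur in zip(sizes, sizes[1:]):
        if latency_by_size[cur] > 2 * latency_by_size[prev]:
            return prev                      # knee found: buffer likely this big
    return None                              # no knee within the sweep

# Synthetic sweep: latency is flat until the 256 KB buffer overflows.
guess = infer_buffer_size({64: 100, 128: 105, 256: 110, 512: 400})
```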
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366627
F. Guo, T. Chiueh
Bulk file access is a read access to a large number of files in a file system. Example applications that use bulk file access extensively are anti-virus (AV) scanners, file-level data back-up agents, and file system defragmentation tools. This paper describes the design, implementation, and evaluation of an optimization to modern file systems that is designed to improve the read efficiency of bulk file accesses. The resulting scheme, called DAFT (Disk geometry-Aware File system Traversal), provides a bulk file access application with individual files while fetching those files into memory in a way that respects the disk geometry and thus is as efficient as possible. We have implemented a fully operational DAFT prototype and tested it with commercial AV scanners and data back-up agents. Empirical measurements on this prototype demonstrate that it reduces the elapsed time of enumerating all files in a file system by a factor of 5 to 15, for both fragmented and non-fragmented file systems on fast and slow disks.
Title: DAFT: Disk geometry-Aware File system Traversal
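The central intuition, visiting files in on-disk order so the head sweeps once instead of seeking per file, reduces to a sort by starting block address. This is a deliberately minimal sketch; the real system also batches, prefetches, and handles multi-extent files, and the mapping used here is assumed, not DAFT's actual interface.

```python
def traversal_order(file_start_lba):
    """Return filenames sorted by the logical block address of each
    file's first extent, i.e. the order a single disk-head sweep would
    encounter them. `file_start_lba` maps filename -> starting LBA."""
    return sorted(file_start_lba, key=file_start_lba.get)

# Files listed in directory order, read in disk order instead:
order = traversal_order({'a.bin': 900, 'b.bin': 10, 'c.bin': 400})
```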
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366808
Hars Vardhan, Shreejith Billenahalli, Wanjun Huang, M. Razo, Arularasi Sivasankaran, L. Tang, P. Monti, M. Tacca, A. Fumagalli
This paper presents an algorithm to find a simple path in a given network that passes through multiple must-include nodes. The problem of finding a path with must-include node(s) can be solved easily in some special cases; in general, however, finding a simple path through multiple such nodes has been shown to be NP-complete. The problem arises in network settings where routes must be forced through particular nodes, such as nodes with wavelength converters (optical networks), monitoring provisions (telecom), gateway functions (OSPF), or base stations (MANETs). In this paper, a heuristic algorithm is described that follows a divide-and-conquer approach, dividing the problem into two subproblems. It is shown that the algorithm does not grow exponentially in this application and that initial re-ordering of the given sequence of must-include nodes can improve the result. Experimental results demonstrate that the algorithm computes near-optimal paths in reasonable time.
Title: Finding a simple path with multiple must-include nodes
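A simplistic version of the divide-and-conquer idea: split the route at the must-include nodes, find a shortest sub-path for each leg, and forbid reuse of already-visited nodes so the concatenation stays simple. This greedy sketch can fail on instances an exact algorithm would solve (the problem is NP-complete in general), and it is not the paper's heuristic; it only illustrates the decomposition.

```python
from collections import deque

def path_through(graph, src, dst, must_include):
    """Greedy sketch: join BFS shortest paths between consecutive
    waypoints, banning nodes already on the path to keep it simple.
    `graph` is an adjacency dict. Returns a node list or None."""
    waypoints = [src] + list(must_include) + [dst]
    path, used = [src], {src}

    def bfs(a, b, banned):
        prev, q = {a: None}, deque([a])
        while q:
            u = q.popleft()
            if u == b:                       # reconstruct a -> b
                seg = []
                while u is not None:
                    seg.append(u)
                    u = prev[u]
                return seg[::-1]
            for v in graph.get(u, ()):
                if v not in prev and (v == b or v not in banned):
                    prev[v] = u
                    q.append(v)
        return None

    for a, b in zip(waypoints, waypoints[1:]):
        seg = bfs(a, b, used - {a})
        if seg is None:
            return None                      # greedy choice dead-ended
        path += seg[1:]
        used |= set(seg)
    return path

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
p = path_through(g, 1, 5, [4])               # route 1 -> 5 forced through 4
```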
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366968
Jiangtian Li, A. Deshpande, J. Srinivasan, Xiaosong Ma
The rapid advances in multi-core architecture and the predicted emergence of 100-core personal computers bring new appeal to volunteer computing. The availability of massive compute power under-utilized by personal computing tasks is a blessing to volunteer computing customers. Meanwhile, the reduced performance impact of running a foreign workload, thanks to increased hardware parallelism, makes volunteering resources more acceptable to PC owners. In addition, we suspect that aggressive volunteer computing, which assigns foreign tasks to active computers (as opposed to idle ones, as in common practice), can yield significant energy savings. In this paper, we assess the efficacy of such an aggressive volunteer computing model by evaluating the energy savings and performance impact of co-executing resource-intensive foreign workloads with native personal computing tasks. Our results from executing 30 native-foreign workload combinations suggest that aggressive volunteer computing can achieve an average energy saving of around 52% compared to running the foreign workloads on high-end cluster nodes, and around 33% compared to the traditional, more conservative volunteer computing model. We have also observed highly varied performance interference behavior between the workloads, and evaluated the effectiveness of foreign workload intensity throttling.
Title: Energy and performance impact of aggressive volunteer computing with multi-core computers
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366635
Guanying Wang, A. Butt, C. Gniady
Modern enterprises employ hundreds of workstations for daily business operations, which consume a lot of energy and thus incur significant operating costs. To reduce such costs, dynamic energy management is often employed. However, dynamic energy management, especially for disks, introduces delays when an accessed disk is in a low power state and needs to be brought back into the active state. In this paper, we propose System-wide Alternative Retrieval of Data (SARD), which exploits the large number of machines in an enterprise environment to transparently retrieve binaries from other nodes, thus avoiding access delays when the local disk is in a low power mode. SARD uses a software-based approach to reduce spin-up delays while eliminating the need for major operating system changes, custom buffering, or a shared memory infrastructure.
Title: Mitigating disk energy management delays by exploiting peer memory
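The retrieval preference SARD describes, serve from peer memory when the local disk is powered down, spin up only as a last resort, can be sketched as a simple fallback chain. Every callable below is a hypothetical stand-in; SARD's real interfaces and transport are not shown in the abstract.

```python
def fetch_binary(path, local_disk_active, peers, read_local, spin_up):
    """Sketch of the SARD idea. `peers` is a list of callables that
    return the file's bytes from a peer's memory, or None on a miss;
    `read_local` and `spin_up` are stand-ins for local disk access."""
    if local_disk_active:
        return read_local(path)          # disk already spinning: just read
    for peer in peers:
        data = peer(path)
        if data is not None:
            return data                  # served from peer memory, no spin-up
    spin_up()                            # last resort: pay the spin-up delay
    return read_local(path)

spin_ups = []
peers = [lambda p: None, lambda p: b'cached-binary']   # first peer misses
data = fetch_binary('/bin/app', False, peers,
                    read_local=lambda p: b'local-binary',
                    spin_up=lambda: spin_ups.append(1))
```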
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366174
Mingwei Gong, C. Williamson
Scheduling decisions can have a pronounced impact on the performance of multi-radio wireless systems. In this paper, we study the effects of dispatch policies and queue scheduling strategies on user-perceived performance for Internet traffic flows in a multi-channel WLAN. Our work uses simulation driven by an empirical Web workload trace, with mean response time as the primary performance metric. The simulation results show which combinations of dispatch policy and queue scheduling strategy perform well and which do not, the advantages of deferred dispatch over immediate dispatch, and the sensitivity of dispatch policies to heavy-tailed workload characteristics. The results also highlight the pros and cons of a simple lookahead scheduling policy, particularly in the presence of highly variable workloads on a heterogeneous multi-channel system with random losses. Our results provide insights into efficient and robust scheduling policies for multi-channel WLANs.
Title: Scheduling issues in multi-channel wireless networks
Pub Date: 2009-12-28; DOI: 10.1109/MASCOT.2009.5366752
N. Aschenbruck, Christoph Fuchs, P. Martini
In recent years, Voice over IP (VoIP) telephony has begun to migrate from research to the market. In the future, All-IP networks will replace the classical Public Switched Telephone Networks (PSTNs). There is no All-IP network yet, but many VoIP providers already enable calls from VoIP to a PSTN and vice versa. As a result, critical infrastructures within the PSTN, such as Public Safety Answering Points (PSAPs), are accessible from the VoIP network (e.g. the Internet). Thus, there is a need for reliable performance modeling and evaluation. One aspect of particular interest, e.g. for the performance evaluation of intrusion detection architectures for emergency call services, is the characterization and modeling of emergency call length and frequency. In this paper, we provide a detailed analysis of traces from different PSAPs. Our work is based on empirical long-term measurements at two PSAPs. Based on these traces, we characterize the load's interarrival times and call lengths with respect to load variation, dependencies, and scalability. Furthermore, we provide fittings of the empirical data to standard probability distributions.
Title: Traffic characteristics and modeling of emergency calls at the PSAP
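Fitting interarrival times to a standard distribution can be as simple as a maximum-likelihood fit; for the exponential distribution, the ML estimate of the rate is just the reciprocal of the sample mean. The synthetic data below is generic illustration of the fitting step, not the paper's traces or its fitted model (real call traces often need heavier-tailed candidates too).

```python
import random

def fit_exponential(interarrivals):
    """Maximum-likelihood fit of an exponential distribution to a list
    of interarrival times: lambda_hat = 1 / sample mean."""
    mean = sum(interarrivals) / len(interarrivals)
    return 1.0 / mean            # estimated rate parameter lambda

# Synthetic trace drawn from a known rate, to show the fit recovers it.
random.seed(0)
data = [random.expovariate(0.5) for _ in range(10000)]
lam = fit_exponential(data)      # should be close to 0.5
```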