Traditional cache invalidation schemes are not suitable for wireless environments because of the effects of mobility, energy consumption, and limited bandwidth. The cache invalidation report (IR) was proposed to address the cache consistency problem. However, the main drawback of IR-based schemes is long data-access latency, because mobile hosts (MHs) must wait for the next IR interval to validate their caches even when a cache hit occurs. In this paper, we propose a dynamic invalidation report (DIR) scheme to reduce data-access latency when MHs query data. DIR includes an early cache validation mechanism based on validation messages, so MHs can verify their cached data as soon as possible. We then design a predictive method, called DIR-AI (DIR with adjustable interval), that dynamically adjusts the IR interval to further reduce latency. Finally, we evaluate DIR and DIR-AI and compare them with existing invalidation report schemes using the NS2 network simulator. The experimental results show that, compared with the TS (TimeStamp) and UIR (updated IR) schemes respectively, DIR reduces latency by 54.3% and 34.3% on average, and DIR-AI reduces latency by 57.3% and 38.6% on average.
{"title":"Dynamic Cache Invalidation Scheme in IR-Based Wireless Environments","authors":"Yeim-Kuan Chang, Y. Ting, Tai-Hong Lin","doi":"10.1109/AINA.2008.118","DOIUrl":"https://doi.org/10.1109/AINA.2008.118","url":null,"abstract":"Traditional cache invalidation schemes are not suitable to be employed in wireless environments due to the affections of mobility, energy consumption, and limited bandwidth. Cache invalidation report (IR) is proposed to deal with the cache consistency problem. However, the main drawback of IR-based schemes is the long latency of data access because the mobile hosts (MHs) need to wait next IR interval for cache invalidation when the cache hit happens. In this paper, we propose a dynamic invalidation report (DIR) to reduce the latency of data access when the MHs query data. DIR contains an early cache validation mechanism by utilizing the validation messages. Therefore, the MHs can verify their cached data as soon as possible. Next, we design a predictive method to dynamically adjust IR interval to further reduce the latency called DIR-AI (DIR with adjustable interval) scheme. Finally, we evaluate the performance of the DIR and DIR-AI and compare them with the existing invalidation report schemes by using NS2 (network simulator). The experimental results show that DIR reduces averagely 54.3% and 34.3% of latency; DIR-AI reduces averagely 57.35 and 38.6% of latency compared with TS (TimeStamp) and UIR (updated IR) schemes respectively.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131065415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Among the technologies in the e-learning domain, the learning management system (LMS) is considered the most important and essential component. An LMS manages curriculum learning content, learners' profiles, and the learning process. However, it lacks suitable study-assistance functions and a personalized interface, which makes it hard to promote and for learners to adopt. This paper proposes an integrated framework that adds personal learning blog functionality to an LMS using the tools interoperability (TI) architecture, in order to provide learners with more suitable learning functions and interfaces within the LMS. We expect the TI-based blog functionality to be reusable by other LMSs, improving LMS utilization and demonstrating the feasibility of a TI-based blog system.
{"title":"Combine Personal Blog Functionalities with LMS Using Tools Interoperability Architecture","authors":"Jui-Hung Chen, T. Shih, Chun-Chia Wang, Shu-Wei Yeh, Chen-Yu Lee","doi":"10.1109/AINA.2008.43","DOIUrl":"https://doi.org/10.1109/AINA.2008.43","url":null,"abstract":"Respecting related technologies within e-learning domain, the use of learning management system (LMS) has been considered as the most important and essential component. LMS can manage the curriculum learning content, learner's learning profile and learning process. However it lacks suitable study assistance functions and personalized interface. Accordingly LMS is hard to promote and to be utilized by learners. This paper proposed an integrated framework which combined the personal learning blog functionalities to LMS by using the tools interoperability (TI) architecture in order to develop the suitable learning functionalities and interface in LMS for learners. And we hope the TI-based blog functionalities which can be utilized by other LMS in order to improve the LMS utility rate and to prove the feasibility of TI-based blog system.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128149385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we study the problem of using target constraints to integrate XML data from different sources under a target schema. We argue that target constraints are necessary in data integration: they are an essential part of data semantics and should be satisfied by the integrated data. When integrating data from multiple sources with overlapping data, constraints can also express data-merging rules at the target. We give a general constraint model for XML that extends the relational equality-generating and tuple-generating dependencies. We provide a chase method to reason about the data in the integrated XML document based on the target constraints, inferring data values not given explicitly and inserting new subtrees as necessary. Singleton and key constraints are used to uniquely identify an entity and serve as rules for data merging during integration.
{"title":"Reasoning and Merging in XML Data Integration","authors":"Zijing Tan, Wei Wang, Baile Shi","doi":"10.1109/AINA.2008.26","DOIUrl":"https://doi.org/10.1109/AINA.2008.26","url":null,"abstract":"In this paper, we study the problem of making use of target constraints to integrate XML data from different sources under a target schema. We recognize that target constraints are necessary in data integration, as the constraints are essential part of data semantics, and should be satisfied by integrated data. When integrating data from multiple data sources with overlapping data, constraints can express data merging rules at the target as well. We give a general constraint model for XML to express target constraints, which extends the relational equality-generating and tuple- generating dependencies. We provide a chase method to reason about data in the integrated XML document based on target constraints, by inferring data values not given explicitly, and inserting new subtrees as necessary. Singleton and key constraints are used to uniquely specify a certain entity, as a rule for data merging in the integration.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122651135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we present a trusted time-stamping service that issues time-stamps whose security is enhanced by a practical forward-secure proxy signature mechanism. The signature scheme provides a way to verify the validity of the delegation from the trusted time source through the common PKI certification hierarchy. Its forward security offers better protection against key-exposure attacks when the time-stamping server is compromised. The design of the signature scheme is tied closely to a time-stamping service based on hierarchically distributed time sources. The scheme is implemented with standard RSA signature and verification algorithms; the computation needed for the forward-security feature is absorbed into the proxy scheme, and only delegation and key updating require minor extra computation. In addition, a safety assumption made implicitly in Krawczyk's forward-secure signature scheme is identified and eliminated, so that the security of our scheme outperforms its predecessor.
{"title":"Enhancing the Security Promise of a Digital Time-Stamp","authors":"Pei-yih Ting, F. Chu","doi":"10.1109/AINA.2008.111","DOIUrl":"https://doi.org/10.1109/AINA.2008.111","url":null,"abstract":"In this paper we present a trusted time-stamping service which issues time-stamps with enhanced security by a practical forward-secure proxy signature mechanism. This signature scheme provides a way to verify the validity of the delegation from the trusted time source through the common PKI certification hierarchy. The forward-security of this signature scheme provides better protection against key-exposure attack when time-stamping server gets inruded. The design of this signature scheme is tied closely to the time-stamping service based on hierarchical distributed time sources. The signature scheme is implemented with standard RSA signature and verification algorithms. The computation of signing and verification in providing the forward-security feature is absorbed into the proxy scheme. Only delegation and key-updating require minor extra computation. In addition, one safety assumption made implicitly in Krawczyk's forward-secure signature scheme is identified and eliminated such that the security of our scheme outperforms its predecessor.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123079147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The shared-memory optical packet (SMOP) switch architecture is promising because it significantly reduces the amount of optical memory required, which is typically built from fiber delay lines (FDLs). Current reservation-based scheduling algorithms for SMOP switches can use the FDLs effectively and achieve a low packet loss rate simply by reserving a departure time for each arriving packet. Such a simple scheduling scheme, however, may introduce a significant packet reordering problem. In this paper, we first identify the two main sources of out-of-order packets in current reservation-based SMOP switches. We then show that by introducing a "last-timestamp" variable and modifying the corresponding FDL arrangement and scheduling process, it is possible to keep packets in sequence while maintaining delay and packet loss performance similar to the previous design.
{"title":"Maintaining Packet Order in Reservation-Based Shared-Memory Optical Packet Switch","authors":"Xiaoliang Wang, Xiaohong Jiang, S. Horiguchi","doi":"10.1093/ietcom/e91-b.9.2889","DOIUrl":"https://doi.org/10.1093/ietcom/e91-b.9.2889","url":null,"abstract":"Shared-memory optical packet (SMOP) switch architecture is very promising for significantly reducing the amount of required optical memory, which is typically constructed from fiber delay lines (FDLs). The current reservation-based scheduling algorithms for SMOP switches can effectively utilize the FDLs and achieve a low packet loss rate by simply reserving the departure time for each arrival packet. It is notable, however, that such a simple scheduling scheme may introduce a significant packets out of order problem. In this paper, we first identify the two main sources of packets out of order in the current reservation-based SMOP switches. We then show that by introducing a \"last-timestamp \" variable and modifying the corresponding FDLs arrangement as well as the scheduling process in the current reservation-based SMOP switches, it is possible to keep packets in-sequence while still maintaining a similar delay and packet loss performance as the previous design.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114242986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object models adopted in object-oriented software development reflect the complex relations that exist between entities in the real world. Directory models adopted in directory service technologies, on the other hand, map such relations into tree structures. Developing directory services in object-oriented languages therefore requires writing lengthy procedures that bridge the gap between object models and directory models. To overcome this drawback, we propose a new technology named Object-Directory Mapping. We also present S2Directory, a Java-based framework, to demonstrate the effectiveness of the Object-Directory Mapping concept. Implementation details are given together with new dynamic implementation injection techniques.
{"title":"S2Directory A Framework for Object-Directory Mapping with Dynamic Implementation Injection","authors":"Jun Futagawa, S. Yukita","doi":"10.1109/AINA.2008.35","DOIUrl":"https://doi.org/10.1109/AINA.2008.35","url":null,"abstract":"Object models adopted in the object oriented software development reflect complex relations that exist between various entities in the real world. On the other hand, directory models adopted in directory service technologies map various relations into tree structures. Therefore, developing directory services by object oriented languages require writing lengthy procedures that bridge the gaps between object models and directory models. To overcome this drawback, in this paper, we propose a new technology named Object- Directory Mapping. We also present a Java based framework S2Directory to prove the effectiveness of the concept of Object-Directory Mapping. Implementation details are also given together with new dynamic implementation injection techniques.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114924998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor networks are notable for their promise in unattended information collection, such as forest fire monitoring. To support efficient communication, many routing algorithms specially designed for such networks have been proposed. However, it is unclear whether these algorithms are already good enough or still far from "perfect," because the optimal routing performance is not yet well understood. This paper makes progress toward understanding that optimum. The metrics used here to measure routing performance are the network lifetime finally achieved and the total information finally collected. The condition used to judge the network's death is defined by the user's requirement on the guaranteed information-collecting ability of the network. Optimization models based on these metrics and this death condition are proposed. Experiments show that some existing routing proposals already work well when the user's requirement is strict, but few remain satisfactory when the requirement is loose.
{"title":"An Effort to Understand the Optimal Routing Performance in Wireless Sensor Network","authors":"Qinghua Wang, Tingting Zhang, S. Pettersson","doi":"10.1109/AINA.2008.47","DOIUrl":"https://doi.org/10.1109/AINA.2008.47","url":null,"abstract":"Wireless sensor network is remarkable for its promising use on human-unattended information collection, such as forest fire monitoring. In order to support efficient communication, many routing algorithms specially designed for such networks have been proposed. However, there is no idea about whether these proposed routing algorithms are already good enough or still have a long way to become \"perfect\", since there is currently a lack of understanding about the optimal routing performance. This paper makes a progress in the understanding of the optimal routing performance. Metrics here used to measure the routing performance are the network lifetime finally acquired and the total information finally collected. The condition used to judge the network's death is defined by the user's requirement on the guaranteed network information collecting ability. Optimization models based on the metrics and death condition mentioned above are proposed. Experiments show some existing routing proposals already work well when the user's requirement is strict, but few of them satisfy when the user's requirement is loose.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126640170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtualization is a common strategy for improving the utilization of existing computing resources, particularly within data centers. However, its use for high performance computing (HPC) applications is currently limited, despite its potential both for improving resource utilization and for providing resource guarantees to users. This paper systematically evaluates several virtual machine technologies for computationally intensive HPC applications using standard benchmarks. Using VMware Server, Xen, and OpenVZ, we examine the suitability of full virtualization, paravirtualization, and operating-system-level virtualization in terms of network utilization, SMP performance, file system performance, and MPI scalability. We show that the operating-system-level virtualization provided by OpenVZ delivers the best overall performance, particularly for MPI scalability.
{"title":"A Comparison of Virtualization Technologies for HPC","authors":"J. Walters, V. Chaudhary, Minsuk Cha, Salvatore J. Guercio, S. Gallo","doi":"10.1109/AINA.2008.45","DOIUrl":"https://doi.org/10.1109/AINA.2008.45","url":null,"abstract":"Virtualization is a common strategy for improving the utilization of existing computing resources, particularly within data centers. However, its use for high performance computing (HPC) applications is currently limited despite its potential for both improving resource utilization as well as providing resource guarantees to its users. This paper systematically evaluates various VMs for computationally intensive HPC applications using various standard benchmarks. Using VMWare Server, xen, and OpenVZ we examine the suitability of full virtualization, paravirtualization, and operating system-level virtualization in terms of network utilization SMP performance, file system performance, and MPI scalability. We show that the operating system-level virtualization provided by OpenVZ provides the best overall performance, particularly for MPI scalability.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123443565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; nodes are limited in processing power and storage capacity; and ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a simple but practical unbalanced multiple description (UMD) codec that uses single-path transport only. At the UMD decoder, lost low-resolution (LR) frames can be recovered from the corresponding frames of the correctly received high-resolution (HR) stream. This makes the LR description more robust to packet losses and ensures continuous video playback. Simulation results show that the proposed UMD codec achieves higher decoded quality, smaller quality fluctuation, and a lower probability of pause than other error-resilience techniques, especially when channel burstiness becomes large. We also propose a novel sequence-based error concealment (EC) algorithm for our UMD decoder. It recursively applies a multi-frame recovery principle to reduce, frame by frame, the error drift in the HR description with respect to the LR description; in fact, the algorithm can be applied to most UMD schemes. It works at integer-, half-, or quarter-pixel precision and looks backward without buffering future frames. Experimental results show that error decreases fastest when the UMD decoder uses our EC algorithm, providing satisfactory performance in both objective and subjective evaluation.
{"title":"Supporting Live Video on Ad Hoc Wireless Networks: Unbalanced Multiple Description Coding, Single Path Transport and Recursive Error Concealment","authors":"F. Huang, Lifeng Sun, Bin Li, Y. Zhong","doi":"10.1109/AINA.2008.136","DOIUrl":"https://doi.org/10.1109/AINA.2008.136","url":null,"abstract":"Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; nodes are constrained in processing power and storage capacity; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a simple, but practical unbalanced multiple description (UMD) codec that uses single path transport only. At the UMD decoder, the lost Low- Resolution (LR) frames can be receded from the corresponding ones of the correctly received High- Resolution (HR) stream. This makes LR more robust to packet losses and ensures continuous video playback. The simulation results show that the proposed UMD codec has higher decoded quality, smaller quality fluctuation and lower probability of pause than other error resilience techniques, especially when channel burstiness becomes large. We also propose a novel sequence-based error concealment (EC) algorithm for our UMD decoder. It recursively uses multi-frame recovery principle to frame-by-frame reduce error drift in the HR description with respect to the LR description. In fact, the algorithm can be applied to most UMD schemes. It has the advantage of working on integer-, half- or quarter- pixel precision and looking backward without buffering future frames. Experimental results show that error decreases the fastest if the UMD decoder uses our EC algorithm. It can provide satisfactory performance on both objective and subjective evaluation.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121857545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a hybrid network consisting of both WiMAX and WiFi links is set up and used as a testbed for Voice-over-IP (VoIP) performance studies. Relevant metrics are defined and used to analyse the performance of this hybrid network and its ability to support VoIP calls. End-to-end delay and packet loss are measured as a function of the number of ongoing VoIP calls. Based on these measurements, we propose a procedure, using the ITU-T E-model, to estimate the maximum number of ongoing calls the hybrid network can support while maintaining call quality at the desired level. The experimental studies show that this hybrid network is capable of supporting real-time VoIP calls: up to 12 simultaneous G.711-based calls and more than 20 simultaneous G.729-based calls. While it remains a challenge to ascertain the exact reasons behind the low network utilization, the proposed procedure for determining the maximum number of VoIP calls that meet the QoS requirements will be useful and valuable to service providers intending to offer VoIP services over hybrid networks that interoperate these two technologies in the near future.
{"title":"Experimental Study of Voice over IP Services over Broadband Wireless Networks","authors":"Edwin W. C. Peh, Winston K.G. Seah, Y. Chew, Y. Ge","doi":"10.1109/AINA.2008.65","DOIUrl":"https://doi.org/10.1109/AINA.2008.65","url":null,"abstract":"In this paper, a hybrid network consisting of both WiMAX and WiFi links is set up and used as a testbed for Voice-over-IP (VoIP) performance studies. Relevant metrics are defined and used to analyse the performance of this hybrid network and its ability to support VoIP calls. End-to-end delay and packets loss are measured as a function of number of ongoing VoIP calls. Based on these measured results, we propose a procedure to evaluate the maximum number of ongoing calls can be supported by the hybrid network while maintain the call quality at the desired level using the ITU-T specified E-model. The experimental studies show that this hybrid network is capable of supporting real-time VoIP calls, and that the network is able to support up to 12 simultaneous G.711-based VoIP calls and more than 20 simultaneous G.729-based VoIP calls. While it remains a challenge to ascertain the exact reasons behind the low network utilization, the proposed procedure for determining the maximum number of VoIP calls which meet the QoS requirements will be useful and valuable to service providers with the intention to provide VoIP services over such hybrid networks that interoperate these two technologies in the near future.","PeriodicalId":328651,"journal":{"name":"22nd International Conference on Advanced Information Networking and Applications (aina 2008)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121765101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}