Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273442
A. Jeyakumar, K. Baskaran, V. Sumathy
Multicast is an efficient way to distribute information from a single source to multiple destinations, as well as many-to-many [Dr. A. Ebenezer Jeyakumar et al., Jan. 2003]. This paper addresses the problem of real-time, delay-bounded multicasting in wavelength division multiplexing networks, with the aim of avoiding synchronization problems between video and audio frames. This work describes a genetic algorithm based technique to synthesize wavelength division multiplexing (WDM) network topologies that can, with a high degree of confidence, ensure that multicast traffic is delivered within user-specified time limits. Unlike existing approaches to WDM network design, we first find a virtual topology that can meet the delay constraints. The virtual rings are then embedded into physical links, followed by an assignment of wavelengths to virtual links. Finding the virtual topology is difficult because of the large number of parameters involved. A number of heuristic approaches have been proposed to solve such optimization problems. The main aim of this work is to explore the suitability of genetic algorithms for the WDM network design problem. A genetic algorithm can explore a far greater range of potential solutions than conventional approaches can. Compared with algorithms that depend on an initial guess, e.g. gradient descent, a genetic algorithm uses more information about the estimation region and is less likely to fall into a local minimum. This paper describes quantitative and qualitative results obtained by using our software tool on several benchmark examples.
Title: Genetic algorithm for optimal design of delay bounded WDM multicast networks (TENCON 2003, Conference on Convergent Technologies for Asia-Pacific Region)
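The GA formulation described above can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's data: candidate virtual topologies are encoded as bit strings over a hypothetical set of candidate links, and the fitness function rewards topologies that keep total delay within a user-specified bound.

```python
import random

random.seed(42)

N_LINKS = 8                          # candidate virtual links (hypothetical)
DELAY = [3, 1, 4, 1, 5, 9, 2, 6]    # per-link delay costs (made-up units)
BOUND = 12                           # user-specified delay budget

def fitness(chromo):
    """Reward including many links while penalizing any delay-bound violation."""
    total_delay = sum(d for gene, d in zip(chromo, DELAY) if gene)
    links = sum(chromo)
    if total_delay > BOUND:
        return links - 2 * (total_delay - BOUND)   # infeasible: penalize
    return links

def evolve(pop_size=20, generations=50):
    """Elitist GA: keep the top half, refill with one-point crossover + mutation."""
    pop = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LINKS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:               # bit-flip mutation
                i = random.randrange(N_LINKS)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the survivors of each generation are carried over unchanged, the best fitness seen never decreases, which is the property that lets the GA outperform single-start local search on this kind of landscape.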
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273345
Man Chun Yeung, C. Y. Chung, F. Hartano
As the Internet continues to grow in popularity and size, so do the scalability demands on its infrastructure. It is well known that proxy caching is one approach to providing scalable content distribution. However, due to the limited storage space and bandwidth between the proxy server and the clients, the number of clients that can be supported is limited. To alleviate this problem, this paper presents the design and implementation of a distributed video caching system using peer-to-peer computing technology. The proposed architecture allows clients to take part in a video distribution network: clients cache the large video objects that they have viewed and later stream those videos to other clients that request the same objects. Our numerical results show that the proposed system scales better than traditional video caches.
Title: Peer-to-peer video distribution over the Internet
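The client-side caching idea above can be sketched as a toy directory. This is an illustration of the general peer-assisted pattern, not the authors' protocol: the first request for a video falls back to the origin proxy, after which the requesting client itself becomes a source for later peers.

```python
class P2PVideoDirectory:
    """Toy directory: tracks which peers hold a cached copy of each video
    and prefers peer delivery over the origin proxy."""

    def __init__(self):
        self.holders = {}            # video_id -> set of peer ids

    def request(self, video_id, client):
        if self.holders.get(video_id):
            source = sorted(self.holders[video_id])[0]   # naive peer selection
        else:
            source = "origin-proxy"                      # no peer has it yet
        self.holders.setdefault(video_id, set()).add(client)  # client now caches it
        return source

d = P2PVideoDirectory()
first = d.request("v1", "peerA")    # nobody holds v1 yet -> served by the origin
second = d.request("v1", "peerB")   # peerA now serves it
```

Each served request adds another holder, so popular videos accumulate sources, which is the mechanism behind the improved scalability the paper reports.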
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273153
P. Mondal, K. Rajan, L. Patnaik
Image reconstruction in the Bayesian framework is far more advantageous than other reconstruction methods such as convolution back projection, the weighted least squares method and maximum likelihood estimation. The power of Bayesian estimation lies in its ability to incorporate prior distribution knowledge, enabling better reconstruction. Proper specification of clique potentials in Bayesian estimation plays a crucial role in the reconstruction process by favoring the presence of desired characteristics in the image lattice, such as nearest-neighbor interactions and homogeneity. Homogeneous Markov random fields have been successfully used for modeling such interactions. Though reconstructions produced by such models are far more efficient, they often require many iterations to produce an approximate reconstruction. To deal with this problem, we have extended Bayesian estimation to support sharp reconstruction. We propose to use a sharp potential in Bayesian estimation once an approximate reconstruction is available using homogeneous potentials in the Bayesian domain. The advantage of the proposed potential is its ability to recognize correlated nearest neighbors. The proposed reconstruction is a hybrid of both smooth and sharp potentials in the Bayesian framework, and hence is termed hybrid reconstruction. Simulated experiments have shown that the proposed hybrid estimation method produces superior, sharper reconstructions than other Bayesian estimation methods.
Title: Hybrid reconstruction in Bayesian domain
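The contrast between a homogeneous (smooth) clique potential and an edge-preserving (sharp) one can be sketched as follows. The Huber-style form below is a generic stand-in for an edge-preserving potential, not necessarily the exact potential the authors propose:

```python
def smooth_potential(diff, beta=1.0):
    """Homogeneous quadratic clique potential: penalizes any neighbor
    difference quadratically, which smooths edges along with noise."""
    return beta * diff ** 2

def sharp_potential(diff, beta=1.0, delta=1.0):
    """Edge-preserving potential (generic Huber-style form): quadratic for
    small differences, linear beyond delta, so genuine edges between
    uncorrelated neighbors are penalized less."""
    a = abs(diff)
    if a <= delta:
        return beta * a ** 2
    return beta * (2 * delta * a - delta ** 2)
```

For small neighbor differences the two potentials agree, so noise is smoothed either way; for large differences the sharp potential grows only linearly, which is what preserves edges in the later reconstruction stage.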
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273163
V. Gurkhe
MPEG-1/2 audio layer-3 (MP3) is the most popular format for playback of high-quality compressed audio on portable devices such as audio players and mobile phones. Typically these devices are based on either DSP or RISC processors. While the DSP architecture is more efficient for implementing the MP3 algorithm, the challenges of a RISC implementation are less well understood. This paper describes the challenges and optimization techniques useful for implementing the MP3 decoder algorithm on the RISC-based ARM9TDMI processor. Some of these techniques are generic and hence applicable to any audio codec implementation on RISC-based platforms. Our results, which are among the best in the industry, indicate that stereo MP3 at 44 kHz and 128 kbps can be decoded using 27 MIPS on the ARM9TDMI. In addition, the output of our decoder is fully bit-compliant with the standard on the ISO test vectors.
Title: Optimization of an MP3 decoder on the ARM processor
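One generic optimization when targeting an integer RISC core such as the ARM9TDMI (which has no FPU) is replacing floating-point arithmetic with fixed-point. The Q15 helpers below are an illustrative sketch of that idea, not code from the authors' decoder:

```python
Q = 15  # Q15 fixed-point: 1 sign bit, 15 fractional bits

def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer."""
    return int(round(x * (1 << Q)))

def q15_mul(a, b):
    """Fixed-point multiply with rounding, using only the integer
    multiply and shift instructions a RISC core provides."""
    return (a * b + (1 << (Q - 1))) >> Q

half = to_q15(0.5)
quarter = q15_mul(half, half)   # 0.5 * 0.5 in Q15
```

On a real decoder the same pattern is applied to the filterbank and IMDCT coefficients, trading a small, controlled quantization error for integer-only inner loops.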
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273417
K. Divya, P. N. Nagendra Rao
This paper presents a new simulation scheme for AGC studies of power systems. The proposed scheme uses a different set of assumptions from those conventionally used. In this scheme, all the areas are considered to be operating at the same frequency. Further, the proposed approach preserves the identity of each generating unit. Additionally, the computational complexity has been reduced by resorting to lower-order generating unit models; model order reduction techniques have been used to obtain these lower-order models for AGC studies. The effectiveness of the new simulation approach is demonstrated by treating the IEEE 30-bus test system as a three-area system.
Title: A novel AGC simulation scheme based on reduced order prime mover models
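The flavor of the reduced-order idea can be sketched with a single-area toy model in which the governor/turbine chain is collapsed into one first-order lag. The model and all parameters below are illustrative assumptions, not the paper's system:

```python
def simulate(load_step=0.01, D=0.8, H=5.0, R=0.05, Tc=0.5, dt=0.01, T=30.0):
    """Euler-integrate frequency deviation df and mechanical power pm after a
    step load increase, with the prime mover reduced to one time constant Tc."""
    df, pm = 0.0, 0.0
    for _ in range(int(T / dt)):
        ddf = (pm - load_step - D * df) / (2 * H)   # swing equation
        dpm = (-pm - df / R) / Tc                   # reduced governor/turbine with droop R
        df += dt * ddf
        pm += dt * dpm
    return df

df_final = simulate()
# Primary control alone leaves a steady-state deviation of -load_step / (D + 1/R).
```

The reduced model reproduces the steady-state frequency deviation of the detailed one exactly, which is why lower-order prime mover models are acceptable for AGC-timescale studies.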
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273230
R. Prasanna, K. Ramakrishnan, C. Bhattacharyya
In image retrieval, relevance feedback uses information, obtained interactively from the user, to understand the user's perceptions of a query image and to improve retrieval accuracy. We propose simultaneous relevant feature selection and classification using the samples provided by the user to improve retrieval accuracy. The classifier is defined by a separating hyperplane, while the sparse weight vector characterizing the hyperplane defines a small set of relevant features. This set of relevant features is used for classification and can be used for analysis at a later stage. Mutually exclusive sets of images are shown to the user at each iteration to obtain maximum information from the user. Experimental results show that our algorithm performs better than the feature relevance weighting and feature selection schemes and comparably with the classification scheme using SVMs, in terms of retrieval accuracy, and it has the advantage of being faster than the classification scheme using SVMs.
Title: Simultaneous feature selection and classification for relevance feedback in image retrieval
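The sparse-hyperplane idea can be illustrated with a tiny L1-regularized hinge-loss trainer on synthetic data. This is a generic sketch of the approach, not the authors' formulation, and the data and hyperparameters are made up; the point is that the L1 term drives the weights of irrelevant features toward zero, so the surviving weights name the relevant features:

```python
def train_sparse(X, y, lam=0.1, lr=0.05, epochs=200):
    """Stochastic subgradient descent on hinge loss plus an L1 penalty."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(len(w)):
                grad = -yi * xi[j] if margin < 1 else 0.0     # hinge subgradient
                grad += lam * ((w[j] > 0) - (w[j] < 0))       # L1 subgradient
                w[j] -= lr * grad
    return w

# Feature 0 carries the label; features 1 and 2 are noise.
X = [[1.0, 0.1, -0.2], [0.9, -0.3, 0.1], [-1.0, 0.2, 0.3], [-1.1, -0.1, -0.3]]
y = [1, 1, -1, -1]
w = train_sparse(X, y)
```

After training, the weight on feature 0 dominates while the noise weights are shrunk toward zero, so the same vector both classifies and selects features, mirroring the dual role described in the abstract.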
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273208
Mohamed Mohideen Anver, R. Stonier
In this paper we present an effective scheme for impulse noise removal from highly corrupted images using a soft-computing approach. The filter is capable of preserving the intricate details of the image and is based on a combination of fuzzy impulse detection and restoration of corrupted pixels. In the first stage a fuzzy knowledge base required for detection of impulses as well as the optimum parameters for the fuzzy membership functions employed, is effectively 'learnt' using an evolutionary algorithm (EA). For the detection of noisy pixels and the subsequent replacement, a novel scheme where a pixel is transferred to a simulated noise free environment is introduced. We present the results for several real images and make comparisons with some of the existing noise removal methods wherever applicable to show the effectiveness of the proposed technique.
Title: Parameter optimization and rule base selection for fuzzy impulse filters using evolutionary algorithms
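The detection/restoration split can be sketched with a single membership function over a pixel's deviation from its local median. The breakpoints `a` and `b` below are arbitrary stand-ins for the membership-function parameters the paper tunes with an evolutionary algorithm:

```python
def impulse_membership(pixel, neighbours, a=10.0, b=60.0):
    """Fuzzy degree in [0, 1] that a pixel is an impulse, from its absolute
    deviation from the neighbourhood median: 0 below a, 1 above b, linear
    in between."""
    med = sorted(neighbours)[len(neighbours) // 2]
    d = abs(pixel - med)
    if d <= a:
        return 0.0
    if d >= b:
        return 1.0
    return (d - a) / (b - a)

def restore(pixel, neighbours, threshold=0.5):
    """Replace the pixel by the neighbourhood median when judged noisy;
    otherwise leave it alone, preserving image detail."""
    med = sorted(neighbours)[len(neighbours) // 2]
    return med if impulse_membership(pixel, neighbours) >= threshold else pixel
```

Leaving low-membership pixels untouched is what preserves fine detail, while the EA's job in the paper is to find breakpoints (and rules) far better than the hand-picked ones used here.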
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273255
A. Van Der Meer, R. Liyana-Pathirana
Hybrid acquisition systems for spread spectrum, which offer a trade-off between the fast acquisition time of parallel search schemes and the reduced hardware complexity of a serial search scheme, are analysed in this paper. The presence of multiple acquisition cells, due to multipath fading and a search cell size of less than one PN chip, is assumed. Under this assumption a novel verification scheme for the hybrid acquisition system is developed and its performance in terms of mean acquisition time is evaluated. The performance of the proposed hybrid scheme is compared with that of the traditional hybrid scheme as well as the totally parallel phase cell search.
Title: Performance analysis of a hybrid acquisition system for DS spread spectrum
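The serial/parallel trade-off can be illustrated with an idealized cell-count comparison. This assumes no false alarms or misses and a uniformly distributed correct phase; it is a baseline intuition only, not the paper's mean-acquisition-time analysis:

```python
def mean_serial_tests(cells):
    """Average number of cells a pure serial search examines before hitting
    the correct phase, when that phase is uniformly distributed."""
    return (cells + 1) / 2

def mean_hybrid_tests(cells, parallel_width):
    """Hybrid search: the uncertainty region is split into groups, and each
    group of `parallel_width` cells is tested at once by a bank of
    correlators, so on average half the groups are visited."""
    groups = -(-cells // parallel_width)   # ceiling division
    return (groups + 1) / 2
```

Each factor of `parallel_width` in hardware cuts the expected number of dwell periods by roughly the same factor, which is exactly the complexity-versus-speed dial the hybrid scheme exposes.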
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273299
A. K. Khor, C.G. Leedham, D.L. Maskell
The paper is concerned with the accurate detection of collision and the resulting impact force between the model of a person's hand and a static or moving object. The objective is to provide accurate feedback of touch in virtual reality applications. The methods described and evaluated use a combination of bounding sphere and recursive subdivision of the bounding box techniques to detect accurately when and where a collision occurs on the hand. The impact force of the collision is calculated using the geometry of the impact area and application of Newton's law. The methods are verified using a novel prototype of a hand controlled by a six degrees of freedom tracker glove and ball. The ball and hand can collide either by the ball hitting the hand, the hand hitting the ball or both. When inexpensive accurate touch transducers become available to complement the 3D body position input devices, many innovative applications can be realized by applying these methods.
Title: Collision and impact force computation for virtual reality applications
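The first-stage sphere test and the Newton's-law force estimate can be sketched as below. This illustrates the general technique only; the recursive bounding-box subdivision stage that localizes the contact on the hand is omitted:

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """Bounding-sphere test: collision iff the distance between centres
    does not exceed the sum of the radii."""
    return math.dist(c1, c2) <= r1 + r2

def impact_force(mass, v_before, v_after, dt):
    """Average impact force over the contact interval dt, from Newton's
    second law: F = m * dv / dt."""
    return mass * (v_after - v_before) / dt
```

In a full pipeline the cheap sphere test culls most frame pairs, and only pairs that pass it are refined down to the contact geometry used in the force calculation.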
Pub Date: 2003-10-15. DOI: 10.1109/TENCON.2003.1273273
A.V. Mahajan, A.A. Yarra
Performance is critical for the success of a Web application. One of the major performance bottlenecks is the limited bandwidth of the network that connects browsers to servers. Hence, if the amount of data flowing through the network can be reduced, it is possible to improve response times and support more users on the same network infrastructure. The paper discusses various strategies to reduce the amount of data flowing from the server to the browser using programming techniques such as gzip compression, XSLT and JavaScript. This is done without changing the look and feel of the application and without depending on hardware or network optimizations. The data flowing through the network has been reduced by 96% using the techniques described.
Title: Techniques for improving Web application performance in bandwidth constrained scenarios
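The gzip part of the strategy is easy to demonstrate with the Python standard library. The payload below is synthetic: highly repetitive markup compresses far better than typical pages, for which reductions of 70-90% from compression alone are more representative than this toy figure:

```python
import gzip

# A repetitive table-like HTML payload, standing in for a generated page.
page = ("<tr><td class='cell'>row data</td></tr>\n" * 500).encode()

compressed = gzip.compress(page)
ratio = 1 - len(compressed) / len(page)   # fraction of bytes saved on the wire
```

In practice the server compresses the response and the browser decompresses it transparently when the request advertises `Accept-Encoding: gzip`, so no application code changes are needed.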