Cloud monitoring and discovery service (CMDS) for IaaS resources
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165199
T. Somasundaram, K. Govindarajan
Cloud computing is a distributed computing paradigm focused on delivering everything as a service to consumers. Within cloud computing, Infrastructure as a Service (IaaS) is a service delivery model that provides computational, storage, database and network resources to users and customers. Eucalyptus is an open-source cloud middleware capable of creating a private cloud inside an organization; the main drawback of this open-source toolkit is that it does not offer an integral monitoring and discovery service, yet scalable monitoring and discovery is essential for managing cloud resources. This paper proposes and implements a Cloud Monitoring and Discovery Service (CMDS) that integrates with external information providers to discover cloud resources in a scalable way. The model is integrated with the CARE Resource Broker (CRB) and helps the CRB make scheduling decisions on cloud resources for virtual resource creation and application execution. The proposed system improves job throughput by scaling resources through CMDS and also increases the job success ratio of the CRB.
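The paper does not specify CMDS's interfaces, but the general idea of a discovery service that aggregates state from external information providers and answers a broker's queries can be sketched roughly as follows; every class, field and method name here is hypothetical, not the paper's API.

```python
# Hypothetical sketch of a pull-based monitoring/discovery aggregator in the
# spirit of CMDS: it polls external information providers for resource state
# and exposes a query interface a broker (such as CRB) could use when making
# scheduling decisions. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ResourceRecord:
    host: str
    cpu_free: float      # fraction of CPU currently idle
    mem_free_mb: int
    vms_running: int


class MonitoringDiscoveryService:
    def __init__(self) -> None:
        # Each provider is a callable returning the current state of one host,
        # e.g. a wrapper around a node-controller or Ganglia query.
        self.providers: List[Callable[[], ResourceRecord]] = []
        self.catalog: Dict[str, ResourceRecord] = {}

    def register_provider(self, provider: Callable[[], ResourceRecord]) -> None:
        self.providers.append(provider)

    def refresh(self) -> None:
        """Poll every registered information provider and update the catalog."""
        for provider in self.providers:
            record = provider()
            self.catalog[record.host] = record

    def discover(self, min_cpu_free: float, min_mem_mb: int) -> List[ResourceRecord]:
        """Return hosts able to accommodate a new virtual resource."""
        return [r for r in self.catalog.values()
                if r.cpu_free >= min_cpu_free and r.mem_free_mb >= min_mem_mb]


if __name__ == "__main__":
    cmds = MonitoringDiscoveryService()
    cmds.register_provider(lambda: ResourceRecord("node-1", 0.60, 4096, 2))
    cmds.register_provider(lambda: ResourceRecord("node-2", 0.10, 1024, 6))
    cmds.refresh()
    print([r.host for r in cmds.discover(min_cpu_free=0.5, min_mem_mb=2048)])
```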
{"title":"Cloud monitoring and discovery service (CMDS) for IaaS resources","authors":"T. Somasundaram, K. Govindarajan","doi":"10.1109/ICOAC.2011.6165199","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165199","url":null,"abstract":"Cloud Computing is one of the distributed computing paradigms and it is mainly focusing on providing everything as service to the consumers. In Cloud Computing Infrastructure as a Service (IaaS) is one of the service delivery model and it provides computational or storage resources, database and network resources to the users or customers. Eucalyptus is one of the open source cloud middleware, it is capable of creating private cloud inside an organization. The main drawback of the open source toolkit is it does not offer an integral monitoring and discovery service. The scalable monitoring and discovery service is important for managing the cloud resources. The proposed and implemented Cloud Monitoring and Discovery Service (CMDS) integrated with external information provider's discovers the cloud resources in a scalable way. The proposed model is integrated with CARE Resource Broker (CRB) and it is helpful for CRB for making scheduling decisions in cloud resources for virtual resource creation and application execution. The proposed system enhances the throughput of jobs by scaling of resources using CMDS in addition to that it increases the job success ratios of CRB.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134349447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple target tracking using Support Vector Machine and data fusion
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165210
S. Vasuhi, V. Vaidehi, Midhunkrishna P R
In this paper, the same target is sensed by multiple sensors, and the main objective is to group the resulting measurements into the set of data produced for each target. Once tracks are initialized and confirmed, the number of targets can be estimated, and the predicted future position and velocity can be computed for each track. Fusion is necessary to integrate the data from the different sensors and to extract the relevant information about the targets. Support Vector Machines (SVMs) are inherently binary classifiers, and multi-class problems are solved by combining more than one SVM. This paper proposes a novel scheme for multiple target tracking using an SVM classifier; classification is achieved by finding the optimal separating hyperplane with maximal margin. In addition, a Kalman Filter (KF) and 1-Backscan Multiple Hypothesis Tracking (1-BMHT) are used for filtering and association, respectively.
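As a rough, self-contained illustration of two of the building blocks named above, the sketch below trains a maximal-margin SVM on synthetic two-target measurements and runs one constant-velocity Kalman prediction step; the data, parameters and the omission of the 1-BMHT association stage are all simplifications, not the paper's implementation.

```python
# Minimal sketch: an SVM that assigns sensor measurements to targets, and a
# constant-velocity Kalman filter prediction for one track. Data is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic measurements from two targets (features: x, y position).
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([5, 5], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Maximal-margin classifier assigning measurements to targets.
clf = SVC(kernel="rbf", C=10.0).fit(X, y)
print("predicted target:", clf.predict([[4.8, 5.1]])[0])

# Constant-velocity Kalman prediction for one track.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition matrix
Q = 0.01 * np.eye(4)                         # process noise covariance
state = np.array([5.0, 5.0, 0.3, -0.2])      # [x, y, vx, vy]
P = np.eye(4)                                # state covariance

state_pred = F @ state                       # predicted position and velocity
P_pred = F @ P @ F.T + Q                     # predicted covariance
print("predicted position:", state_pred[:2])
```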
{"title":"Multiple target tracking using Support Vector Machine and data fusion","authors":"S. Vasuhi, V. Vaidehi, Midhunkrishna P R","doi":"10.1109/ICOAC.2011.6165210","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165210","url":null,"abstract":"In this paper, same target is being sensed by multiple sensors and the main objective is to classify the information into set of data produced for the same target. Once tracks are initialized and confirmed, the number of targets can be estimated; the future predicted position and target velocity can be computed for each track. Fusion is necessary to integrate the data from different sensors and to extract the relevant information of the targets. Support Vector Machines (SVMs) are generally binary classifiers and the multi class problems are solved by combining more than one SVM. This paper proposes a novel scheme for multiple target tracking using SVM classifier. The proposed scheme achieves classification by finding the optimal classification hyperplane with maximal margin. Also Kalman Filter (KF) and 1 Backscan Multiple Hypothesis Tracking (1 BMHT) are used for filtering and association respectively.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132750043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance comparison of Autonomous neural network based GPS/INS integration
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165209
M. Malleswaran, V. Vaidehi, M. Jebarsi
Inertial Navigation System (INS) and Global Positioning System (GPS) technologies are widely used in positioning and navigation applications. Each system has its own characteristics and limitations, so integrating the two offers a number of advantages and overcomes the inadequacies of each. The proposed schemes are implemented using autonomous neural networks (AUNN), namely the cascade correlation network (CCN) and the feedback cascade correlation network (FBCCN), which construct their topology autonomously on the fly and achieve good prediction performance with fewer hidden neurons.
{"title":"Peformance comparison of Autonomous neural network based GPS/INS integration","authors":"M. Malleswaran, V. Vaidehi, M. Jebarsi","doi":"10.1109/ICOAC.2011.6165209","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165209","url":null,"abstract":"In positioning and navigation applications, Inertial navigation system (INS) and Global positioning system (GPS) technologies have been widely utilized. Each system has its own unique characteristics and limitations. Therefore, the integration of the two systems offers a number of advantages and overcomes each system inadequacies. The proposed schemes are implemented using the Autonomous neural networks (AUNN) — the cascade correlation network (CCN) and the Feedback cascade correlation network (FBCCN) that was able to construct the topology by itself autonomously on the fly and achieve prediction performance with less hidden neurons.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123784890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel business model for enterprise service logic change management
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165169
M. Thirumaran, P. Dhavachelvan, G. Aranganayagi, K. Seenuvasan
Today's business world does not operate as standalone systems; it requires the collaboration of various enterprises to achieve a desired goal. These enterprises are built on the paradigm of web services, and the maintenance of these web services is an implicit requirement. Maintenance includes extending existing services, reusing available services and remodelling their inherent functionality. Currently available business models propagate service changes through all levels of the SDLC rather than concentrating on the implementation and maintenance levels, which increases time and manpower requirements. There is therefore a need for an incremental approach that realizes the vision of B2B interactions for long-term business agility and broad-scale interoperability. The proposed transparency model, called the Business Logic Model (BLM), widens the spectrum of understandability and visualization from the machine level to the business analyst level by providing complete transparency of the web service business logic to the business expert.
{"title":"A novel business model for enterprise service logic change management","authors":"M. Thirumaran, P. Dhavachelvan, G. Aranganayagi, K. Seenuvasan","doi":"10.1109/ICOAC.2011.6165169","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165169","url":null,"abstract":"The present generation business world does not work as a standalone system but requires the collaboration of various enterprises to achieve a desired goal. These enterprises are designed on the paradigm of web services and the maintenance of these web services is an implicit requirement. The maintenance of the services include extending the existing service, reusing the available services and remodel the inherent functionality of the services. The presently available business models accomplish the changes of the service in all the levels of the S DLC and do not concentrate on the implementation and maintenance level which in turn increases the time and manpower requirements. Thus there is a need for an incremental approach which executes on the vision of B2B interactions for long-term business agility and broad-scale interoperability. The proposed transparency model called Business Logic Model (BLM) widens the spectrum of understandability and visualization from the machine level to the business analyst level by providing complete transparency of the web service business logic to the business expert.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115258559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Modified Directional Weighted Median Filter using second order difference based detection for impulse noise removal
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165219
R. Rashidha, Philomina Simon
This paper proposes a method for impulse noise removal using a Modified Directional Weighted Median filter (MDWM). The approach has two phases: in the first, corrupted pixels are identified using a second-order-difference-based detector; in the second, MDWM is applied to remove the noise. MDWM is an improved directional weighted median filter that replaces only the corrupted pixels in the image, leaving uncorrupted pixels unchanged. Each corrupted pixel is replaced by the weighted median of the pixel values along the four main directions of a window, with the maximum weights assigned to the pixels in the direction of minimum deviation. The proposed method has been tested on benchmark images, and experimental results show its superiority in terms of PSNR, IEF, SSIM and IQI within a few iterations.
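A minimal sketch of the directional weighted median idea follows. It uses a simplified directional-deviation detector rather than the paper's second-order-difference detector, and illustrative thresholds and weights, so it should be read as an assumption-laden approximation of the approach, not the MDWM filter itself.

```python
# Simplified sketch: flag likely impulse pixels, then replace only those
# pixels with a weighted median taken along the four main directions of a
# 5x5 window, giving a larger weight to the direction with the smallest
# deviation from the centre. Thresholds and weights are illustrative.
import numpy as np

DIRECTIONS = [                                   # offsets along the four main directions
    [(-2, -2), (-1, -1), (1, 1), (2, 2)],        # main diagonal
    [(-2, 0),  (-1, 0),  (1, 0),  (2, 0)],       # vertical
    [(-2, 2),  (-1, 1),  (1, -1), (2, -2)],      # anti-diagonal
    [(0, -2),  (0, -1),  (0, 1),  (0, 2)],       # horizontal
]


def weighted_median(values, weights):
    order = np.argsort(values)
    cum = np.cumsum(np.asarray(weights, dtype=float)[order])
    return np.asarray(values)[order][np.searchsorted(cum, cum[-1] / 2.0)]


def mdwm_denoise(img, threshold=40.0):
    padded = np.pad(img.astype(float), 2, mode="reflect")
    out = img.astype(float).copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            centre = padded[i + 2, j + 2]
            # Mean absolute deviation along each direction, used both for
            # detection and for choosing the most trustworthy direction.
            devs = [np.mean([abs(padded[i + 2 + di, j + 2 + dj] - centre)
                             for di, dj in d]) for d in DIRECTIONS]
            if min(devs) < threshold:            # centre agrees with some direction,
                continue                         # so treat the pixel as noise-free
            values, weights = [], []
            best = int(np.argmin(devs))
            for k, d in enumerate(DIRECTIONS):
                for di, dj in d:
                    values.append(padded[i + 2 + di, j + 2 + dj])
                    weights.append(2.0 if k == best else 1.0)
            out[i, j] = weighted_median(values, weights)
    return out.astype(img.dtype)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.full((32, 32), 128, dtype=np.uint8)
    noisy = clean.copy()
    mask = rng.random(clean.shape) < 0.1         # 10% salt-and-pepper noise
    noisy[mask] = rng.choice([0, 255], mask.sum())
    restored = mdwm_denoise(noisy).astype(int)
    print("mean absolute error after filtering:",
          np.abs(restored - clean.astype(int)).mean())
```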
{"title":"A Modified Directional Weighted Median Filter using second order difference based detection for impulse noise removal","authors":"R. Rashidha, Philomina Simon","doi":"10.1109/ICOAC.2011.6165219","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165219","url":null,"abstract":"This paper proposes a method for impulse noise removal using a Modified Directional Weighted Median Filter (MDWM). The proposed approach has two phases. In the first phase, corrupted pixels are identified using second order difference based detector. In the second phase, MDWM is applied to remove noise. MDWM is an improved directional weighted median filter, which replaces only the corrupted pixels in the image, leaving uncorrupted pixels unchanged. The corrupted pixel is replaced by the median of the pixel values in all the four main directions in a window. These pixels are associated with a weight value for median calculation. Here, maximum weights are assigned to the pixels in the direction with minimum deviation. The proposed method had been tested on benchmark images. Experimental results show the superiority of the proposed method in terms of PSNR, IEF, SSIM and IQI for a few iterations.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115528129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An overview of mobility management and integration methods for heterogeneous networks
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165216
T. Sivakami, S. Shanmugavel
Nowadays researchers are concentrating more on 4G networks, which combine different kinds of networks such as WLAN, cellular, satellite and ad hoc networks to form a heterogeneous network. A heterogeneous network can provide anywhere, anytime connectivity, namely an “always best connected” (ABC) network environment. In such an environment, mobility management functions such as location and handoff management play a vital role when different networks are incorporated together, and many issues arise with respect to macro-mobility (users moving from one network to another). Many research papers on macro-mobility management have addressed handoff management issues and offered solutions, but very few have concentrated on the architectural point of view. In this paper we review the possible methods for integrating two networks and conclude which method is most suitable for integration among different networks.
{"title":"An overview of mobility management and integration methods for heterogeneous networks","authors":"T. Sivakami, S. Shanmugavel","doi":"10.1109/ICOAC.2011.6165216","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165216","url":null,"abstract":"Now a day's researchers are concentrating more on 4G networks which combines different kinds of networks such as wlan, cellular, satellite, adhoc network etc and form a heterogeneous network. Heterogeneous network can provide anywhere at any time connection and namely “always best connected” (ABC) network environment. In such environment mobility management such as location and handoff plays a vital role when we incorporate different networks together and lot of issues come in to picture as far as macro mobility (users move from one networks to other networks) is concerned. Thus many research papers have been published under macro mobility management and addressed the problem of handoff management issues and given enough solution to their problems. But very few of them had concentrated on architectural point of view. So in this paper we have reviewed all possible methods for integrating of two networks and finally draw which method could be suitable for integration among different networks.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115784113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilanguage block ciphering using two dimensional substitution array
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165159
M. Rajendiran, B. Syed Ibrahim, R. Pratheesh, C. Nelson Kennedy Babu
To send information securely and protect it from being understood by others, people around the world use various cryptographic methods of writing information. To obtain the cipher with the Multilanguage Encryption Technique (MULET) [1], one has to apply two types of replacement, ‘character’ and ‘numerical’, and the resultant cipher is a stream cipher. The Multilanguage Two Dimensional Array Substitution method (MTDAS) introduced in this article replaces the two separate arrays with a single two-dimensional array; it supports a larger number of cipher alphabets for the same mapping array value and preserves the versatility [6] of classical cryptography. The resultant cipher is a block cipher that at the same time remains multi-language, so others will find it very difficult to understand the information and their efforts to decipher it will prove futile. With this new method it is possible to encrypt information in all the languages of the world that are covered by Unicode [10].
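As a hedged illustration of substitution through a two-dimensional array over the Unicode range (not the MTDAS construction itself), the following sketch locates each plaintext code point in a key-shuffled 2-D table and emits its row/column indices; the table size, key handling and output encoding are assumptions made only for the example.

```python
# Illustrative 2-D array substitution over a Unicode range: each plaintext
# code point is looked up in a key-shuffled table and emitted as a (row,
# column) pair, so groups of characters yield fixed-size blocks.
import random

ROWS, COLS = 256, 512                     # covers code points 0 .. 131071


def build_table(key: int):
    points = list(range(ROWS * COLS))
    random.Random(key).shuffle(points)    # key-dependent arrangement
    table = [points[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
    index = {cp: (r, c) for r, row in enumerate(table) for c, cp in enumerate(row)}
    return table, index


def encrypt(text: str, key: int):
    _, index = build_table(key)
    return [index[ord(ch)] for ch in text]          # list of (row, col) pairs


def decrypt(pairs, key: int) -> str:
    table, _ = build_table(key)
    return "".join(chr(table[r][c]) for r, c in pairs)


if __name__ == "__main__":
    key = 2011
    message = "नमस्ते world"                          # mixed-script plaintext
    cipher = encrypt(message, key)
    assert decrypt(cipher, key) == message
    print(cipher[:4])
```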
{"title":"Multilanguage block ciphering using two dimensional substitution array","authors":"M. Rajendiran, Assistant Professor, B. Syed, Ibrahim Sr Assistant Professor, R. Pratheesh, Kennnedy C Nelson, Dean Babu, R&d","doi":"10.1109/ICOAC.2011.6165159","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165159","url":null,"abstract":"In order to send information securely and to protect the same from the understanding of other, people around the world are using the various cryptography method of writing information. But for getting the Cipher by adopting the Multilanguage Encryption Technique (MULET) [1], one has to apply two types of Replacement called ‘Character’ and ‘Numerical’. The resultant Cipher is a stream cipher. The Multilanguage Two Dimensional Array Substitution method (MTDAS), which is being introduced in this article replaces two different arrays in to single two dimensional array, it supports more number of cipher alphabets for the same Mapping array value and it ensures the versatility [6] of the classical cryptography. The resultant Cipher's length will be a block cipher and at the same time it conforms to Multi-language. Hence, others will find it very difficult to understand the information and their efforts to decipher the information will prove futile. By using this new method it is possible to encrypt information in all the languages in the world that are in the Unicode [10].","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115844501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bio-molecular event extraction using Support Vector Machine
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165192
S. Saha, A. Majumder, M. Hasanuzzaman, Asif Ekbal
The main goal of Biomedical Natural Language Processing (BioNLP) is to capture biomedical phenomena from textual data by extracting relevant entities, information and relations between biomedical entities (i.e. proteins and genes). In most published work, only binary relations were extracted. Recently the focus has shifted towards extracting more complex relations in the form of bio-molecular events that may include several entities or other relations. In this paper we propose an approach for event extraction (detection and classification) of relatively complex bio-molecular events. We treat this as a supervised classification problem and use the well-known Support Vector Machine (SVM) algorithm with statistical and linguistic features that represent various morphological, syntactic and contextual information about the candidate bio-molecular trigger words. First, we consider event detection and classification as a two-step process: the first step handles event detection and the second classifies the identified events into one of nine predefined classes. We then treat the problem as a one-step process, performing event detection and classification together. Three-fold cross-validation experiments on the BioNLP 2009 shared task datasets yield overall average recall, precision and F-measure values of 62.95%, 74.53% and 68.25%, respectively, for event detection, with an overall classification accuracy of 72.50%. When detection and classification are performed together, the proposed approach achieves overall recall, precision and F-measure values of 57.66%, 55.87% and 56.75%, respectively.
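The sketch below illustrates the one-step variant on toy data: candidate trigger words, described by a few lexical and contextual features, are classified directly into event types with an SVM. The feature set, training examples and class inventory are placeholders for the richer morphological, syntactic and contextual features and the BioNLP 2009 data used in the paper.

```python
# Toy one-step trigger classification: each candidate word is mapped to a
# feature dictionary and classified into an event type (or "None").
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def features(token: str, prev_token: str) -> dict:
    return {
        "word": token.lower(),
        "stem4": token.lower()[:4],       # crude stem / morphological cue
        "prev": prev_token.lower(),       # one-token context window
        "is_capitalised": token[0].isupper(),
    }


# Miniature hand-made training set standing in for annotated corpus data.
train = [
    (("expression", "gene"), "Gene_expression"),
    (("transcription", "its"), "Transcription"),
    (("phosphorylation", "the"), "Phosphorylation"),
    (("binds", "protein"), "Binding"),
    (("regulates", "strongly"), "Regulation"),
    (("protein", "the"), "None"),
    (("cells", "these"), "None"),
]

X = [features(tok, prev) for (tok, prev), _ in train]
y = [label for _, label in train]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)

print(model.predict([features("expression", "increased")])[0])
```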
{"title":"Bio-molecular event extraction using Support Vector Machine","authors":"S. Saha, A. Majumder, M. Hasanuzzaman, Asif Ekbal","doi":"10.1109/ICOAC.2011.6165192","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165192","url":null,"abstract":"The main goal of Biomedical Natural Language Processing (BioNLP) is to capture biomedical phenomena from textual data by extracting relevant entities, information and relations between biomedical entities (i.e. proteins and genes). In general, in most of the published papers, only binary relations were extracted. In a recent past, the focus is shifted towards extracting more complex relations in the form of bio-molecular events that may include several entities or other relations. In this paper we propose an approach that enables event extraction (detection and classification) of relatively complex bio-molecular events. We approach this problem as a supervised classificat ion problem and use the well-known algorithm, namely Support Vector Machine (SVM) that makes use of statistical and linguistic features that represent various morphological, syntactic and contextual information of the candidate bio-molecular trigger words. Firstly, we consider the problem of event detection and classification as a two-step process, first step of which deals with the event detection task and the second step classifies these identified events to one of the nine predefined classes. Later on we tr eat this problem as one-step process, and perform event detection and classification together. Three-fold cross validation expe riments on the BioNLP 2009 shared task datasets yield the overall average recall, precision and F-measure values of 62.95%, 74.53%, and 68.25%, respectively, for the event detection. We observed the overall classification accuracy of 72.50%. Evaluation resu lts of the proposed approach when detection and classification are performed together showed the overall recall, precision and F-measure values of 57.66%, 55.87%, and 56.75%, respectively.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114135258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-learning tool for Japanese language learning through English, Hindi and Tamil: A computer assisted language learning (CALL) based approach
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165218
S. Tiwari, Sourav Khandelwal, Sanjiban Sekhar Roy
The use of computers has revolutionized the learning and teaching process. For language learning its contribution is especially important because of the availability of various multimedia tools. Computer-assisted language learning (CALL) is a well-established area of research, and it has led to a shift of focus from the teacher to the learner by giving the learner a greater level of autonomy. CALL focuses on the receptive skills of reading and listening, which are essential for learning a language. In this paper we analyze the features and advantages of CALL. Many people do not take up languages like Japanese, despite their interest, simply because its entirely different pictographic writing system is not easy to master, and classroom and textbook teaching alone does not suffice for learning the essential basics of the Japanese language. Therefore this paper also presents an e-learning tool that can be used to learn and teach the basics of Japanese in a more interactive manner.
{"title":"E-learning tool for Japanese language learning through English, Hindi and Tamil: A computer assisted language learning (CALL) based approach","authors":"S. Tiwari, Sourav Khandelwal, Sanjiban Sekhar Roy","doi":"10.1109/ICOAC.2011.6165218","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165218","url":null,"abstract":"The use of computers has revolutionized the learning and teaching process. In case of learning languages its contribution is very important due to the availability of various multimedia tools. Computer assisted language learning (CALL) is a well established area of research. This has lead to a shift of focus from the teacher to the learner by giving the learner a greater level of autonomy. CALL focuses on the receptive skills of reading and listening which are very necessary for learning a language. In this paper we will analyze the features and advantages of CALL. A lot of people do not take up learning languages like Japanese inspite of their interest in that language only due to its entirely different pictographic writing system that is not easy to master. Classroom and textbook teaching alone would not suffice for learning the essential basics of Japanese language. Therefore in this paper we shall also be presenting an e-learning tool that can be used to learn and teach basics of Japanese language in a more interactive manner.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124085758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color image compression using multiwavelets with modified SPIHT algorithm
Pub Date: 2011-12-01 | DOI: 10.1109/ICOAC.2011.6165178
R. Sudhakar, V. Sudha
Color image compression is now essential for applications such as transmission and database storage, since color gives a natural and pleasing appearance to any object. For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established by the International Organization for Standardization (ISO). The performance of existing image coding standards generally degrades at low bit rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity, yet due to implementation constraints scalar wavelets do not possess all the properties needed for better compression performance. A newer class of wavelets, called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient color compression scheme and to obtain better quality and a higher compression ratio through the multiwavelet transform and embedded coding of the multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best-known multiwavelets are compared to the best-known scalar wavelets, and both quantitative and qualitative performance measures are examined.
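Neither multiwavelets nor SPIHT are available in common Python libraries, so the sketch below only illustrates the general transform-coding pipeline: an ordinary scalar wavelet (PyWavelets) and crude magnitude thresholding stand in for the multiwavelet transform and the SPIHT bit-plane coder, and the synthetic image stands in for one color plane.

```python
# Rough stand-in for the transform-coding pipeline: decompose one channel,
# keep only the largest coefficients, reconstruct and measure PSNR.
import numpy as np
import pywt


def compress_channel(channel: np.ndarray, keep: float = 0.05) -> np.ndarray:
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(channel.astype(float), "db4", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0               # crude stand-in for SPIHT
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    restored = pywt.waverec2(coeffs, "db4")
    return restored[:channel.shape[0], :channel.shape[1]]


def psnr(original: np.ndarray, restored: np.ndarray) -> float:
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)


if __name__ == "__main__":
    # Smooth synthetic 128x128 channel standing in for one color plane.
    x, ycoord = np.meshgrid(np.linspace(0, 4, 128), np.linspace(0, 4, 128))
    channel = (127 + 100 * np.sin(x) * np.cos(ycoord)).astype(np.uint8)
    restored = compress_channel(channel, keep=0.05)
    print(f"PSNR with 5% of coefficients: {psnr(channel, restored):.1f} dB")
```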
{"title":"Color image compression using multiwavelets with modified SPIHT algorithm","authors":"R. Sudhakar, V. Sudha","doi":"10.1109/ICOAC.2011.6165178","DOIUrl":"https://doi.org/10.1109/ICOAC.2011.6165178","url":null,"abstract":"Color Image compression is now essential for applications such as transmission and storage in data bases since color gives a natural and pleasing nature for any object. For still image compression, the ‘Joint Photographic Experts Group’ standard has been established by International Standards Organization (ISO). The performance of existing image coding standards generally degrades at low bit-rates because of the underlying block based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to its unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties which are needed for a better performance in compression. The new class of wavelets, called multiwavelets, which possess more than one scaling filters overcomes this problem. The objective of this paper is to develop an efficient color compression scheme and to obtain better quality and higher compression ratio through multiwavelet transform and embedded coding of multiwavelet coefficients through Set Partitioning In Hierarchical Trees (SPIHT) algorithm. A comparison of the best known multiwavelets is made to the best known scalar wavelets. Both quantitative and qualitative measures of performance are examined.","PeriodicalId":369712,"journal":{"name":"2011 Third International Conference on Advanced Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115823254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}