This paper presents a robust watermarking scheme for still digital images based on the Fast Walsh-Hadamard Transform (FWHT) and Singular Value Decomposition (SVD) using zigzag scanning. After applying the FWHT to the whole cover image, the Walsh-Hadamard coefficients are arranged in zigzag order and mapped into four quadrants, Q1 through Q4, which represent frequency bands from the lowest to the highest. Quadrant Q1 is then divided into non-overlapping blocks, the block with the highest entropy is selected, SVD is applied to it, and its singular values are modified with the singular values of the FWHT coefficients of the watermark. A comparative analysis is carried out against recent work on the Hadamard transform, and the results of the proposed method are found to be superior in terms of imperceptibility and robustness, at the expense of increased computational complexity.
{"title":"A Robust Watermarking Scheme Based Walsh-Hadamard Transform and SVD Using ZIG ZAG Scanning","authors":"K. Meenakshi, C. Rao, K. Prasad","doi":"10.1109/ICIT.2014.53","DOIUrl":"https://doi.org/10.1109/ICIT.2014.53","url":null,"abstract":"This paper presents a robust watermarking for still digital images based on Fast Walsh-Hadamard Transform (FWHT) and Singular Value Decomposition (SVD) using Zigzag scanning. In this paper, after applying Fast Walsh-Hadamard transform to the whole cover image, the Walsh-Hadamard coefficients are arranged in zigzag order and mapped into four quadrants-Q1, Q2, Q3, Q4. These four quadrants represent different frequency bands from the lowest to highest. The quadrant Q1 again divided into no overlapping blocks and in it the highest entropy block is selected and Singular Value Decomposition is applied and the singular values of that blocks is modified with the singular values of the Fast-Walsh-Hadamard transforms coefficients of watermark. A comparative analysis is carried with recents works on Hadamard Transform and results of the proposed method are found to be superior in terms of imperceptibility and robustness at the expense of increased computational complexity.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"9 2 1","pages":"167-172"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81237172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N. Dutta, H. Sarma, Ashish Kr. Srivastava, S. Verma
A novel clustering approach for cognitive nodes in CRN-based ad hoc networks (CRAHNs) is proposed in this paper. The Signal to Interference plus Noise Ratio (SINR) produced by Primary Users (PUs) at collocated Cognitive Users (CUs), along with the Expected Transmission Time (ETT) among CUs, is taken into account to form the clusters. The operation of the CUs, whether during cluster formation or data transmission, in no way harms the ongoing transmissions of the PUs. The main aim is to find a suitable method of cluster formation so that the findings of this work can be used to develop an efficient cluster-based routing protocol for CRAHNs. A medium-scale network with up to 200 CUs is used for the experiments, and reasonable values for the influential parameters are presented.
{"title":"A SINR Based Clustering Protocol for Cognitive Radio Ad Hoc Network (CRAHN)","authors":"N. Dutta, H. Sarma, Ashish Kr. Srivastava, S. Verma","doi":"10.1109/ICIT.2014.76","DOIUrl":"https://doi.org/10.1109/ICIT.2014.76","url":null,"abstract":"A novel clustering approach for cognitive nodes in CRN based Ad Hoc Networks (CRAHNs) is proposed in this paper. The Signal to Interference plus Noise Ratio (SINR) produced by Primary Users (PUs) on collocated Cognitive Users (CUs) along with Expected Transmission Time (ETT) among CUs is taken into account in order to form the clusters. The operation of CUs, either during cluster formation or data transmission no way harms the ongoing transmission of PU. The main aim here is to find suitable method of cluster formation so that the findings of this work can be used for developing efficient cluster based routing protocol for CRAHN. A medium scale network with up to 200 CUs are taken for experiment and some reasonable values for influential parameters are presented here.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"93 1","pages":"69-75"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83815512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The functionality of most programs is delivered in terms of data: values are received by variables, which represent the data, and these values are used to compute values for other variables. Data flow testing focuses on variable definitions and variable usage. The web application domain is one of the fastest growing and most widespread application domains, and the wide acceptance of Internet technology demands sophisticated, high-quality web applications. Many web pages provide entry forms that require the user to supply input and click a button or image. Often the program behind the form (commonly known as a CGI program) is just an interface to an existing database, massaging user input into a format the database understands and massaging the database's output into a format the web browser understands (usually HTML). In this paper, we propose a technique for data flow testing of CGI programs written in Perl. We first propose a data flow model and compute definition-use chains. Then, we identify the paths to be exercised for each of these definition-use pairs.
{"title":"Data Flow Testing of CGI Based Web Applications","authors":"M. Sahu, D. Mohapatra","doi":"10.1109/ICIT.2014.27","DOIUrl":"https://doi.org/10.1109/ICIT.2014.27","url":null,"abstract":"The functionality of most programs is delivered in terms of data. The values are somehow received by variables, which represent data and these values are used in computation of values for other variables. Data flow testing focuses on variable definition and variable usage. One of the fastest growing and most wide-spread application domains is the web application domain. The wide acceptance of Internet Technology requires sophisticated and high quality web applications. There are some sorts of entry forms that are provided by many web pages. These web pages require the user to supply input to the forms and click on the button or image. Sometimes, this program (commonly known as CGI program) is just an interface to an existing database, massaging user input into a database understandable format and massaging the database's output into the web browser understandable format (usually HTML). In this paper, we propose a technique for data flow testing of CGI programs that are written in Perl. We first propose a data flow model and compute definition-use chains. Then, we identify the paths to be exercised for each of these definition-use pairs.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"19 1","pages":"106-111"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75108213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keshava Munegowda, G. Raju, V. Raju, T. N. Manjunath
The File Allocation Table (FAT) file system is a widely used file system in tablet personal computers, mobile phones, digital cameras and other embedded devices, serving data storage and multimedia applications such as video imaging and audio/video playback and recording. The FAT file system is not power fail-safe: uncontrolled power loss, or abrupt removal of the storage device from the computer or embedded system, causes file system corruption. The TFAT (Transaction-safe FAT) file system is an extension of the FAT file system that adds a power fail-safe feature. This paper explores the design methodologies of the cluster allocation algorithms of the TFAT file system by conducting various combinations of file system operations on the Windows CE (Compact Embedded) 6.0 Operating System (OS). The paper also reports performance benchmarking of the TFAT file system against the FAT file system.
{"title":"Design Methodologies of Transaction-Safe Cluster Allocations in TFAT File System for Embedded Storage Devices","authors":"Keshava Munegowda, G. Raju, V. Raju, T. N. Manjunath","doi":"10.1109/ICIT.2014.22","DOIUrl":"https://doi.org/10.1109/ICIT.2014.22","url":null,"abstract":"The File Allocation Table (FAT) file system is widely used file system in tablet personal computers, mobile phones, digital cameras and other embedded devices for data storage and multi-media applications such as video imaging, audio/video playback and recording. The FAT file system is not power fail-safe. This means that, the uncontrolled power loss or abrupt removal of storage device from computer/embedded system causes the file system corruption. The TFAT (Transaction safe FAT) file system is an extension of FAT file system to provide power fail-safe feature to the FAT file system. This paper explores the design methodologies of cluster allocation algorithms of TFAT file system by conducting various combinations of file system operations in Windows CE (Compact Embedded) 6.0 Operating System (OS). This paper also records the performance bench-marking of TFAT file system in comparison with FAT File system.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"314 1","pages":"316-320"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73747577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MicroRNAs (miRNAs) are short non-coding RNAs capable of regulating gene expression at the post-transcriptional level. Expression profiling of miRNAs generates a huge volume of data, and various studies have shown that a large proportion of miRNAs tend to form clusters on chromosomes. In this article we therefore propose a multi-objective-optimization-based clustering algorithm for extracting relevant information from miRNA expression data. The proposed method combines a point-symmetry-based distance with an existing multi-objective-optimization-based clustering technique, AMOSA, to identify co-regulated or co-expressed miRNA clusters. The superiority of the proposed approach over other state-of-the-art clustering methods is demonstrated on two publicly available miRNA expression data sets using the Davies-Bouldin index, an internal cluster validity index.
{"title":"Identifying Co-expressed miRNAs using Multiobjective Optimization","authors":"S. Acharya, S. Saha","doi":"10.1109/ICIT.2014.69","DOIUrl":"https://doi.org/10.1109/ICIT.2014.69","url":null,"abstract":"The micro RNAs or miRNAs are short non-coding RNAs, which are capable in regulating gene expression in post-transcriptional level. A huge volume of data is generated by expression profiling of miRNAs. From various studies it has been proved that a large proportion of miRNAs tend to form clusters on chromosome. So, in this article we are proposing a multi-objective optimization based clustering algorithm for extraction of relevant information from expression data of miRNA. The proposed method integrates the ability of point symmetry based distance and existing Multi-objective optimization based clustering technique-AMOSA to identify co-regulated or co-expressed miRNA clusters. The superiority of our proposed approach by comparing it with other state-of-the-art clustering methods, is demonstrated on two publicly available miRNA expression data sets using Davies-Bouldin index - an external cluster validity index.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"25 1","pages":"245-250"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74582111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing need for software quality measurement has led to extensive research into software metrics and the development of software metric tools. Creating reusable components is seen as one of the best practices in industry today, and to create reusable components the dependencies between components should be as low as possible. Hence, to maintain high-quality software, developers need to strive for a low-coupled and highly cohesive design. However, as many researchers have noted, coupling and cohesion metrics lack formal and standardized definitions, so each metric admits more than one interpretation. This paper introduces our view of coupling measurement for Java projects and our implementation approach. Coupling metrics are calculated at the class level by considering the relationships between the methods of classes.
{"title":"A New Design Based Software Coupling Metric","authors":"Anshu Maheshwari, Aprna Tripathi, D. S. Kushwaha","doi":"10.1109/ICIT.2014.77","DOIUrl":"https://doi.org/10.1109/ICIT.2014.77","url":null,"abstract":"The increasing need for software quality measurements has led to extensive research into software metrics and the development of software metric tools. Creating components which are reusable is seen as one of the best practice in industry today. To create reusable components the dependency between each component should be as low as possible. Hence, to maintain high quality software, developers need to strive for a low-coupled and highly cohesive design. However, as mentioned by many researchers, coupling and cohesion metrics lack formal and standardized definitions and thus for each metric there is more than one interpretation. This paper introduces our view of measurement of coupling for Java projects and our implementation approach. Coupling metrics are calculated at class level by considering the relationships between the methods of classes.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"1 1","pages":"351-355"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76356017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Underwater surveillance and sensing is generally based on acoustics, i.e. sonar. This paper presents the application of acoustic reflection tomography methods for reconstructing the reflectivity of underwater objects in an active sonar environment. Based on the physical optics approximation, and by exploiting the relation between the ramp response and the profile function in the illuminated region of the object, the profile function is obtained by time-frequency analysis. The study indicates that a fairly good profile function can be obtained by applying time-frequency analysis methods to limited frequency-domain data. The reflectivity of the object is computed by applying a tomography technique to the profiles obtained from the acoustic data reflected by the object. The results obtained are found to be sufficient for visual identification/classification of underwater objects.
{"title":"Application of Ocean Acoustic Tomography in Shape Reconstruction of Underwater Objects","authors":"T. Mani, O. V. Kumar, Raj Kumar","doi":"10.1109/ICIT.2014.64","DOIUrl":"https://doi.org/10.1109/ICIT.2014.64","url":null,"abstract":"Underwater surveillance and sensing is generally based on acoustics i.e. Sonar. This paper presents the application of acoustic reflection tomography methods for reconstruction of reflectivity of the underwater objects in active sonar environment. Based on the physical optics approximate, and by exploiting the relation between ramp response and the profile function in the illuminated region of the object, the profile function is obtained by time frequency analysis,. The study indicates that fairly good profile function can be obtained by application of time frequency analysis methods using limited frequency domain data. Reflectivity of the object is computed by applying tomography technique on the profiles obtained from the acoustic data, reflected by the object. These results obtained are found to be sufficient for visual identification /classification of the underwater objects.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"32 1","pages":"327-332"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90716746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sudip Ghosh, N. Das, Subhajit Das, S. Maity, H. Rahaman
A lossless watermarking system requires the additional operation of retrieving the cover image at the decoder. Taking this major issue into account, efficient implementation of reversible image watermarking needs to be addressed, and this can be achieved through hardware implementation. This paper focuses on the digital design, with a pipelined architecture, of a reversible watermarking algorithm based on Difference Expansion (DE), which is linear and whose running time is O(n). Three different digital architectures are proposed: a dataflow architecture, an optimized dataflow architecture using pipelining, and a modified architecture using pipelining. All three designs are implemented on a Xilinx FPGA. To the best of our knowledge this is the first digital design and pipelined architecture proposed in the literature for reversible watermarking using difference expansion.
{"title":"Digital Design and Pipelined Architecture for Reversible Watermarking Based on Difference Expansion Using FPGA","authors":"Sudip Ghosh, N. Das, Subhajit Das, S. Maity, H. Rahaman","doi":"10.1109/ICIT.2014.26","DOIUrl":"https://doi.org/10.1109/ICIT.2014.26","url":null,"abstract":"The additional operation of retrieval of the cover image at the decoder is necessary for lossless watermarking system. Taking into account this major issue, efficient implementation of reversible image watermarking needs to be addressed. This can be solved using hardware implementation. This paper focus on the digital design with pipelined architecture of reversible watermarking algorithm based on Difference Expansion (DE) which is linear and whose running time is O (n). There are three different digital architectures proposed in this paper namely dataflow architecture, optimized dataflow architecture using pipelining and the modified architecture using pipelining. All the three design is implemented on Xilinx based FPGA. To the best of our knowledge this is the first digital design and pipelined architecture proposed in the literature for reversible watermarking using difference expansion.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"43 1","pages":"123-128"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89801783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Selvi, M. Rath, N. Sinha, S. Singh, N. Hemrom, A. Bhattacharya, A. Biswal
Every organization is information driven, and it is the employees who carry out its day-to-day activities. The P&A (Personnel & Administration) department trains and organizes people so that employees can perform these activities effectively. This requires viewing people as human assets, not costs, to the organization; looking at people as assets is part of human resource management and human capital management. To manage and automate the HR process and maximize the productivity of the organization, the organization has to implement an HRMS, a Human Resource Management System. An HRMS helps reduce costs, save time, and integrate and align HR efforts with the rest of the organization. Employees are empowered and engaged, with more input and control over their work life. Through an HRMS one can quickly build workflows and processes, and its flexibility keeps employees current and compliant even as rules and regulations change. For competent management of business processes, computerization is a must in today's scenario. RDCIS (Research and Development Centre for Iron & Steel) is a research unit of SAIL in the area of iron and steel. The organization hierarchy is a two-tier architecture: the top level is the area and the bottom level is the department, with each area comprising various departments. The P&A department carries out different activities for managing various human resource functions: manpower planning, succession plans, redeployment/job rotation, career planning, compensation revision, employee profiles, manpower statistics, age/skill/qualification matrices, employee turnover, utilization of perks (LTC, company-leased housing, etc.), facilities (residential phone, housing loan, etc.), employee performance/appraisal analysis, training program details, stagnation analysis, etc. Without a computerized system, it is very difficult to drive the HR functions, adjust personnel systems to meet current and future requirements, and manage change. The project comprises database design, application design, and the development of software for the storage, retrieval and maintenance of HR data through user-friendly interfaces. The developed software also has mechanisms to prevent tampering with data. The software has been developed with a 3-tier approach using Oracle Designer, Oracle Database and JSP, and has been deployed on Tomcat Apache Server on the Windows operating system.
{"title":"HR e-Leave Tour Management System at RDCIS, SAIL","authors":"S. Selvi, M. Rath, N. Sinha, S. Singh, N. Hemrom, A. Bhattacharya, A. Biswal","doi":"10.1109/ICIT.2014.31","DOIUrl":"https://doi.org/10.1109/ICIT.2014.31","url":null,"abstract":"Every organization is information driven and it's the employee who drives and carries out day to day activities. The P&A department train the people, organizes them, so that employees can effectively perform these activities. This requires viewing people as human assets, not costs to the organization. Looking at people as assets is part of human resource management and human capital management. For managing and automating the HR Process to maximize the productivity of the organization, the organization has to implement HRMS, a Human Resource Management System. HRMS system will help in reducing costs, saving time, integrating and aligning HR efforts with the rest of the organization. Employees will be empowered and engage with more input and control over their work life. Through HRMS one can quickly build the workflows and processes. The powerful flexibility features keep employees current and compliant, even as rules and regulations change. For competent management of business process, computerization is must in today's scenario. RDCIS (Research and Development Centre for Iron & Steel), is a research unit of SAIL in the area of Iron Steel. The organization hierarchy is two tier architecture. Top level is Area and Bottom level is department. Each area has various departments. The P&A (Personnel & Administration) department carries out different activities for managing various Human Resource functions. The different functions carried out by P&A department are Manpower Planning, Succession plans, Redeployment/ Job rotation, Career Planning, Compensation Revision, Employee Profile, Manpower Statistics, Age/ Skill/ Qualification matrix, Employee Turnover, Utilization of perks (LTC, Company Leased Housing etc.), Facilities (Residential phone, Housing loan etc.), Employee Performance/ Appraisal analysis, Training program details, Stagnation Analysis etc. Without a computerize systems, it is very difficult to drive the HR functions, adjustment of personnel systems to meet current and future requirements, and the management of change. The project comprises of database design, application design and development of software for storage and retrieval for the maintenance of HR data through user friendly interfaces. The developed software also has mechanisms to avoid tampering of data. The software has been developed with 3-tier approach. The software tools used are Oracle Designer, Oracle Database and JSP. The software has been deployed with Tomcat Apache Server on Windows Operating System.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"46 1","pages":"333-338"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73920414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The recently proposed improved multi-stage clustering (IMSC) based blind equalisation algorithm in [1] gave a significant performance improvement over its state-of-the-art counterparts. In that work, performance was evaluated over a frequency-selective single-input single-output (SISO) additive white Gaussian noise (AWGN) channel. Relaying is used in cooperative communications to give the receiver a variety of independent signals to choose from, the choice depending on the quality of each link; in other words, it yields a diversity gain at the receiver. In this paper, we propose a novel blind equalisation scheme that accepts inputs from relays and blindly fuses the incoming data so as to reach a lower mean square deviation (MSD) from the Wiener solution. The simulations presented in this paper validate our algorithm. We also derive an expression for the MSD from the Wiener solution of this algorithm as a function of step size, as in [2], and find that it closely matches the experimentally obtained curves.
{"title":"Improved Multi-stage Clustering Based Blind Equalisation in Distributed Environments","authors":"R. Mitra, V. Bhatia","doi":"10.1109/ICIT.2014.32","DOIUrl":"https://doi.org/10.1109/ICIT.2014.32","url":null,"abstract":"The recently proposed improved multi-stage clustering (IMSC) based blind equalisation algorithm in [1] gave significant performance improvement as compared to its state of the art counterparts. In that work, the performance was considered over a frequency-selective single input single output (SISO) additive white Gaussian noise (AWGN) channel. The practice of relaying is used in cooperative communications so as to give a variety of the independent signals to the receiver to choose from, the choice being dependent on the quality of the link. In other words, this results in a diversity gain at the receiver. In this paper, we propose a novel blind equalisation scheme which accepts inputs from relays, and finds a smart way of blindly fusing the incoming data, so as to reach a lower mean square deviation (MSD) from the Weiner solution. The simulations presented in this paper validate our algorithm. We also derive an expression for MSD from the Weiner solution of this algorithm as a function of step-size as in [2]. We find that it closely matches the experimentally obtained curves.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"51 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78301293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}