This paper presents a robust watermarking scheme for still digital images based on the Fast Walsh-Hadamard Transform (FWHT) and Singular Value Decomposition (SVD) using zigzag scanning. After applying the Fast Walsh-Hadamard transform to the whole cover image, the Walsh-Hadamard coefficients are arranged in zigzag order and mapped into four quadrants: Q1, Q2, Q3 and Q4. These four quadrants represent different frequency bands, from the lowest to the highest. Quadrant Q1 is further divided into non-overlapping blocks, the block with the highest entropy is selected, SVD is applied to it, and its singular values are modified with the singular values of the Fast Walsh-Hadamard transform coefficients of the watermark. A comparative analysis is carried out with recent works on the Hadamard transform, and the results of the proposed method are found to be superior in terms of imperceptibility and robustness, at the expense of increased computational complexity.
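As a rough illustration of the embedding pipeline described above, the following Python sketch applies a 2-D Walsh-Hadamard transform, zigzag-orders the coefficients, treats the first quarter of the sequence as Q1, selects the highest-entropy non-overlapping block, and perturbs its singular values with those of the transformed watermark. It is a minimal sketch under stated assumptions (power-of-two square images, a watermark of at least `block` x `block` samples, an illustrative gain `alpha`, and a simplified quadrant mapping), not the authors' implementation; the inverse mapping and inverse transform are omitted.

```python
import numpy as np

def fwht1(v):
    """1-D Fast Walsh-Hadamard Transform (length must be a power of two)."""
    a, h = v.astype(float).copy(), 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i+h].copy(), a[i+h:i+2*h].copy()
            a[i:i+h], a[i+h:i+2*h] = x + y, x - y
        h *= 2
    return a

def fwht2(img):
    """2-D FWHT: apply the 1-D transform along rows, then columns."""
    a = img.astype(float)
    for axis in (0, 1):
        a = np.apply_along_axis(fwht1, axis, a)
    return a

def zigzag_indices(n):
    """(row, col) pairs of an n x n matrix traversed diagonal by diagonal in zigzag fashion."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def entropy(block, bins=32):
    """Shannon entropy of a coefficient block's histogram."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def embed(cover, watermark, alpha=0.05, block=8):
    n = cover.shape[0]
    coeffs = fwht2(cover)
    flat = np.array([coeffs[r, c] for r, c in zigzag_indices(n)])
    q1 = flat[: n * n // 4].reshape(n // 2, n // 2)      # lowest-frequency quadrant
    # pick the highest-entropy non-overlapping block of Q1
    best, pos = None, (0, 0)
    for r in range(0, q1.shape[0], block):
        for c in range(0, q1.shape[1], block):
            e = entropy(q1[r:r+block, c:c+block])
            if best is None or e > best:
                best, pos = e, (r, c)
    U, S, Vt = np.linalg.svd(q1[pos[0]:pos[0]+block, pos[1]:pos[1]+block])
    Sw = np.linalg.svd(fwht2(watermark), compute_uv=False)[:block]
    q1[pos[0]:pos[0]+block, pos[1]:pos[1]+block] = U @ np.diag(S + alpha * Sw) @ Vt
    return q1, pos   # mapping Q1 back and inverting the FWHT would complete the embedding

# toy usage: 32x32 cover, 8x8 binary watermark
rng = np.random.default_rng(1)
q1_mod, where = embed(rng.integers(0, 256, (32, 32)), rng.integers(0, 2, (8, 8)) * 255.0)
```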
K. Meenakshi, C. Rao, and K. Prasad, "A Robust Watermarking Scheme Based Walsh-Hadamard Transform and SVD Using ZIG ZAG Scanning," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 167-172, doi: 10.1109/ICIT.2014.53.
N. Dutta, H. Sarma, Ashish Kr. Srivastava, S. Verma
A novel clustering approach for cognitive nodes in CRN-based Ad Hoc Networks (CRAHNs) is proposed in this paper. The Signal to Interference plus Noise Ratio (SINR) produced by Primary Users (PUs) at collocated Cognitive Users (CUs), along with the Expected Transmission Time (ETT) among CUs, is taken into account in order to form the clusters. The operation of the CUs, whether during cluster formation or data transmission, in no way harms the ongoing transmission of the PUs. The main aim here is to find a suitable method of cluster formation so that the findings of this work can be used for developing an efficient cluster-based routing protocol for CRAHNs. A medium-scale network with up to 200 CUs is used for the experiments, and reasonable values for the influential parameters are presented.
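The abstract does not spell out the clustering rule, so the toy Python sketch below only illustrates how per-node SINR from PU interference and pairwise ETT might be combined to rank cluster-head candidates. The path-loss model, the SINR floor, the combination rule, and all numeric values are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def sinr_db(node, pus, tx_power=0.1, noise=1e-9, pathloss_exp=3.5):
    """Illustrative SINR (dB) at a CU: desired power over noise plus aggregate PU interference."""
    def rx_power(p, src, dst):
        d = max(np.linalg.norm(np.array(src) - np.array(dst)), 1.0)
        return p / d ** pathloss_exp
    interference = sum(rx_power(p_tx, pos, node) for pos, p_tx in pus)
    return 10 * np.log10(tx_power / (noise + interference))

def ett(bandwidth_mbps, loss_fwd, loss_rev, pkt_bits=8000):
    """Expected Transmission Time = ETX * packet airtime (standard ETT definition)."""
    etx = 1.0 / ((1 - loss_fwd) * (1 - loss_rev))
    return etx * pkt_bits / (bandwidth_mbps * 1e6)

def rank_cluster_heads(cus, pus, links, sinr_floor_db=5.0):
    """CUs whose SINR clears a floor, ranked by mean ETT to their neighbours (then by SINR)."""
    scores = {}
    for cid, pos in cus.items():
        s = sinr_db(pos, pus)
        if s < sinr_floor_db:          # too exposed to PU activity: never a head
            continue
        neigh = [ett(*lnk) for (a, b), lnk in links.items() if cid in (a, b)]
        scores[cid] = (np.mean(neigh) if neigh else float("inf"), -s)
    return sorted(scores, key=scores.get)

# toy topology: CU id -> position, PU -> (position, transmit power), link -> (Mbps, loss_fwd, loss_rev)
cus = {1: (0, 0), 2: (50, 10), 3: (120, 40)}
pus = [((200, 0), 1.0)]
links = {(1, 2): (6.0, 0.1, 0.1), (2, 3): (6.0, 0.3, 0.2)}
print(rank_cluster_heads(cus, pus, links))
```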
N. Dutta, H. Sarma, Ashish Kr. Srivastava, and S. Verma, "A SINR Based Clustering Protocol for Cognitive Radio Ad Hoc Network (CRAHN)," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 69-75, doi: 10.1109/ICIT.2014.76.
The functionality of most programs is delivered in terms of data: values are received by variables, which represent data, and these values are used to compute values for other variables. Data flow testing focuses on variable definitions and variable uses. The web application domain is one of the fastest growing and most widespread application domains, and the wide acceptance of Internet technology requires sophisticated, high-quality web applications. Many web pages provide entry forms that require the user to supply input and click a button or image. Often the program behind the form (commonly known as a CGI program) is just an interface to an existing database, massaging user input into a format the database understands and massaging the database's output into a format the web browser understands (usually HTML). In this paper, we propose a technique for data flow testing of CGI programs written in Perl. We first propose a data flow model and compute definition-use chains. Then, we identify the paths to be exercised for each of these definition-use pairs.
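To make the definition-use idea concrete, here is a small Python sketch that extracts definition-use pairs from a toy straight-line program representation. The statement format and the CGI-like example are hypothetical; handling real Perl/CGI code, as in the paper, would of course require proper parsing and control-flow analysis rather than this simple scan.

```python
import re

def def_use_pairs(statements):
    """Return (def_line, use_line, var) triples for a toy straight-line program.

    Each statement is 'lhs = expression'; a definition of v reaches a later use
    of v unless an intervening statement redefines v (straight-line code only).
    """
    pairs, last_def = [], {}
    for lineno, stmt in enumerate(statements, start=1):
        lhs, rhs = [s.strip() for s in stmt.split("=", 1)]
        for var in set(re.findall(r"[A-Za-z_]\w*", rhs)):
            if var in last_def:
                pairs.append((last_def[var], lineno, var))
        last_def[lhs] = lineno            # kill the previous definition of lhs
    return pairs

cgi_like = [
    "name = param_name",       # stand-in for a CGI form input
    "greet = prefix + name",
    "page = header + greet",
]
for d, u, v in def_use_pairs(cgi_like):
    print(f"def of {v!r} at line {d} is used at line {u}")
```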
M. Sahu and D. Mohapatra, "Data Flow Testing of CGI Based Web Applications," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 106-111, doi: 10.1109/ICIT.2014.27.
Keshava Munegowda, G. Raju, V. Raju, T. N. Manjunath
The File Allocation Table (FAT) file system is a widely used file system in tablet personal computers, mobile phones, digital cameras and other embedded devices for data storage and multimedia applications such as video imaging, audio/video playback and recording. The FAT file system is not power fail-safe: uncontrolled power loss, or abrupt removal of the storage device from the computer or embedded system, causes file system corruption. The TFAT (Transaction-safe FAT) file system is an extension of the FAT file system that provides a power fail-safe feature. This paper explores the design methodologies of the cluster allocation algorithms of the TFAT file system by conducting various combinations of file system operations on the Windows CE (Compact Embedded) 6.0 Operating System (OS). The paper also reports performance benchmarking of the TFAT file system in comparison with the FAT file system.
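The sketch below is a toy Python model of the transaction-safe idea only: cluster allocations touch a working FAT copy and become durable when that copy is promoted to the stable one, so an uncommitted transaction is simply discarded after a power loss. The FREE/EOC markers, the two-list representation, and the commit mechanism are simplified assumptions; real TFAT/TexFAT additionally flips a "valid FAT" flag on the medium and handles reserved clusters.

```python
FREE, EOC = 0x0000, 0xFFFF   # illustrative FAT16-style markers

class ToyTFAT:
    """Toy model of transaction-safe cluster allocation with two FAT copies."""
    def __init__(self, n_clusters):
        self.stable = [FREE] * n_clusters     # survives power loss
        self.working = list(self.stable)      # scratch copy for the open transaction

    def allocate_chain(self, count):
        """Link `count` free clusters into a chain in the working FAT; return the head."""
        free = [i for i, v in enumerate(self.working) if v == FREE]
        if len(free) < count:
            raise OSError("disk full")
        chain = free[:count]
        for cur, nxt in zip(chain, chain[1:]):
            self.working[cur] = nxt           # link clusters
        self.working[chain[-1]] = EOC         # end-of-chain marker
        return chain[0]

    def commit(self):
        self.stable = list(self.working)      # the 'atomic' promotion

    def power_loss(self):
        self.working = list(self.stable)      # uncommitted allocations vanish

fat = ToyTFAT(16)
fat.allocate_chain(3)
fat.power_loss()                              # crash before commit: stable FAT untouched
assert fat.stable == [FREE] * 16
head = fat.allocate_chain(3)
fat.commit()
assert fat.stable[head] != FREE               # committed chain is now durable
```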
Keshava Munegowda, G. Raju, V. Raju, and T. N. Manjunath, "Design Methodologies of Transaction-Safe Cluster Allocations in TFAT File System for Embedded Storage Devices," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 316-320, doi: 10.1109/ICIT.2014.22.
MicroRNAs, or miRNAs, are short non-coding RNAs that are capable of regulating gene expression at the post-transcriptional level. A huge volume of data is generated by expression profiling of miRNAs, and various studies have shown that a large proportion of miRNAs tend to form clusters on chromosomes. In this article we therefore propose a multi-objective optimization based clustering algorithm for extracting relevant information from miRNA expression data. The proposed method integrates a point-symmetry-based distance with the existing multi-objective optimization based clustering technique AMOSA to identify co-regulated or co-expressed miRNA clusters. The superiority of the proposed approach over other state-of-the-art clustering methods is demonstrated on two publicly available miRNA expression data sets using the Davies-Bouldin index, a cluster validity index.
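For intuition, the Python sketch below shows the two ingredients the abstract names that are easiest to isolate: a point-symmetry-based distance (following the commonly cited definition, with `knear` as an assumed parameter) and a Davies-Bouldin check on the resulting assignment. The AMOSA search itself, and the toy expression matrix, are not reproduced here; this is only an illustrative sketch.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def point_symmetry_distance(x, centre, data, knear=2):
    """PS-distance of x w.r.t. a cluster centre: Euclidean distance to the centre
    weighted by how well the reflected point 2*centre - x is covered by its
    knear nearest data points."""
    reflected = 2 * centre - x
    d = np.sort(np.linalg.norm(data - reflected, axis=1))[:knear]
    return d.mean() * np.linalg.norm(x - centre)

def assign(data, centres, knear=2):
    """Assign each point to the centre with the smallest PS-distance."""
    return np.array([int(np.argmin([point_symmetry_distance(x, c, data, knear)
                                    for c in centres]))
                     for x in data])

# toy miRNA-expression-like matrix: 8 profiles x 4 conditions, two obvious groups
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.2, (4, 4)), rng.normal(3, 0.2, (4, 4))])
centres = np.array([data[:4].mean(axis=0), data[4:].mean(axis=0)])
labels = assign(data, centres)
print("Davies-Bouldin index:", davies_bouldin_score(data, labels))
```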
S. Acharya and S. Saha, "Identifying Co-expressed miRNAs using Multiobjective Optimization," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 245-250, doi: 10.1109/ICIT.2014.69.
The increasing need for software quality measurement has led to extensive research into software metrics and the development of software metric tools. Creating reusable components is seen as one of the best practices in industry today, and to create reusable components the dependencies between components should be as low as possible. Hence, to maintain high-quality software, developers need to strive for a low-coupled and highly cohesive design. However, as many researchers have noted, coupling and cohesion metrics lack formal and standardized definitions, and thus each metric admits more than one interpretation. This paper introduces our view of coupling measurement for Java projects and our implementation approach. Coupling metrics are calculated at class level by considering the relationships between the methods of classes.
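The abstract does not define the metric precisely, so the Python sketch below shows only one plausible reading: derive class-level coupling from method-to-method relationships by counting, per class, the distinct other classes its methods interact with (a CBO-style count). The `method_calls` map, the symmetric counting, and the example values are assumptions for illustration.

```python
from collections import defaultdict

def class_coupling(method_calls):
    """Per-class count of distinct other classes reached through method relationships.

    `method_calls` maps 'Class.method' -> iterable of called 'Class.method' names;
    a class is coupled to every distinct foreign class reached this way
    (direction is ignored here for simplicity).
    """
    coupled = defaultdict(set)
    for caller, callees in method_calls.items():
        c_from = caller.split(".")[0]
        for callee in callees:
            c_to = callee.split(".")[0]
            if c_to != c_from:
                coupled[c_from].add(c_to)
                coupled[c_to].add(c_from)      # count both directions
    return {cls: len(others) for cls, others in coupled.items()}

calls = {
    "Order.total":    ["Item.price", "Tax.rate"],
    "Invoice.render": ["Order.total"],
    "Item.price":     [],
}
print(class_coupling(calls))   # {'Order': 3, 'Item': 1, 'Tax': 1, 'Invoice': 1}
```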
Anshu Maheshwari, Aprna Tripathi, and D. S. Kushwaha, "A New Design Based Software Coupling Metric," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 351-355, doi: 10.1109/ICIT.2014.77.
Underwater surveillance and sensing are generally based on acoustics, i.e. sonar. This paper presents the application of acoustic reflection tomography methods for reconstructing the reflectivity of underwater objects in an active sonar environment. Based on the physical optics approximation, and by exploiting the relation between the ramp response and the profile function in the illuminated region of the object, the profile function is obtained by time-frequency analysis. The study indicates that a fairly good profile function can be obtained by applying time-frequency analysis methods to limited frequency-domain data. The reflectivity of the object is computed by applying a tomography technique to the profiles obtained from the acoustic data reflected by the object. The results obtained are found to be sufficient for visual identification/classification of underwater objects.
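For context, the physical-optics relation the abstract alludes to is usually written as below, up to a convention-dependent constant: the ramp-response waveform traces the profile function of the target. Here \(A(z)\) denotes the transverse cross-sectional area of the illuminated object at range \(z\) along the line of sight and \(c\) the propagation speed; the exact prefactor and sign vary between formulations.

```latex
% Ramp response of a convex reflector under the physical-optics approximation:
\Gamma_{\text{ramp}}(t) \;\approx\; -\frac{1}{\pi c^{2}}\, A\!\left(\frac{ct}{2}\right),
\qquad A(z) = \text{illuminated cross-sectional area at range } z .
```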
T. Mani, O. V. Kumar, and Raj Kumar, "Application of Ocean Acoustic Tomography in Shape Reconstruction of Underwater Objects," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 327-332, doi: 10.1109/ICIT.2014.64.
Sudip Ghosh, N. Das, Subhajit Das, S. Maity, H. Rahaman
The additional operation of retrieving the cover image at the decoder is necessary for a lossless watermarking system. Taking this major issue into account, efficient implementation of reversible image watermarking needs to be addressed, and this can be achieved through hardware implementation. This paper focuses on the digital design, with a pipelined architecture, of a reversible watermarking algorithm based on Difference Expansion (DE), which is linear and whose running time is O(n). Three different digital architectures are proposed: a dataflow architecture, an optimized dataflow architecture using pipelining, and a modified architecture using pipelining. All three designs are implemented on a Xilinx FPGA. To the best of our knowledge this is the first digital design and pipelined architecture proposed in the literature for reversible watermarking using difference expansion.
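The difference-expansion step that such hardware implements is, in software terms, very small. The Python sketch below shows Tian-style embedding and exact recovery for one pixel pair; overflow/underflow handling and the location map, which a complete codec (and the paper's architectures) must also cover, are omitted here.

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair by difference expansion (Tian-style integer transform)."""
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pixel pair exactly (this is what makes DE reversible)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2
    return bit, l + (h + 1) // 2, l - h // 2

x, y = 130, 127
for b in (0, 1):
    x2, y2 = de_embed(x, y, b)
    assert de_extract(x2, y2) == (b, x, y)
print("difference expansion is exactly reversible for this pair")
```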
Sudip Ghosh, N. Das, Subhajit Das, S. Maity, and H. Rahaman, "Digital Design and Pipelined Architecture for Reversible Watermarking Based on Difference Expansion Using FPGA," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 123-128, doi: 10.1109/ICIT.2014.26.
MANETs are normally used in applications that are highly confidential in nature, such as defense and disaster management. Secure data transmission is one of the challenging issues in a MANET, and secret sharing has been one of the popular techniques for implementing security in MANETs. Existing schemes for securing packet delivery tend to involve too many control-signal exchanges. This paper discusses the performance of a multiple-agent-based secured routing scheme that detects secure routes with minimum load on the network. The source node uses the Chinese remainder theorem to generate the secure key and shares it among all probable routes using multiple agents. The three-phase approach aims to increase the overall performance of the network. The performance metrics used to evaluate AESCRT are delivery rate, load, number of malicious nodes, and packets dropped. Simulation analysis shows that AESCRT performs better than other existing routing protocols.
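The abstract only states that the Chinese remainder theorem underlies key generation and sharing, so the Python sketch below shows the textbook CRT reconstruction such a scheme relies on: each agent/route carries one residue of the key, and the key is recombined from all residues. The moduli, the key value, and the all-shares-required (t = n) split are illustrative assumptions, not the paper's exact construction.

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Chinese remainder theorem: recover x (mod prod(moduli)) from x mod m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(..., -1, m) is the modular inverse (Python 3.8+)
    return x % M

# toy sharing of a session key: each route/agent carries one residue
moduli = [251, 509, 1021]                 # pairwise coprime, illustrative only
key = 123456                              # must be smaller than prod(moduli)
shares = [key % m for m in moduli]        # distributed via different agents/routes
assert crt_reconstruct(shares, moduli) == key
```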
Ditipriya Sinha and R. Chaki, "Comparative Performance Analysis of AESCRT Using NS2," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 35-40, doi: 10.1109/ICIT.2014.42.
In this paper, we estimate new call blocking and handoff call dropping performance in cellular WiMAX. Call Admission Control (CAC) is the process of regulating voice communication, particularly in wireless mobile networks, and is a fundamental mechanism for QoS provisioning in a network. It restricts access to the network based on resource availability in order to prevent network congestion and service degradation for already supported users. CAC decides whether a new connection can be established or not: permission is granted only when the QoS requirements of the new incoming flow can be met without degrading those of the existing flows. The IEEE 802.16e standard supports handover (HO) and various services such as Unsolicited Grant Service (UGS), real-time Polling Service (rtPS), non-real-time Polling Service (nrtPS), Best Effort (BE), and extended real-time Polling Service (ertPS). The Quality-of-Service (QoS) parameters used are the new call blocking and handoff call blocking probabilities. We investigate the blocking performance of three channel assignment schemes in the context of a cellular WiMAX network, and numerical results illustrating the performance of these schemes are presented.
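The abstract does not name the three channel assignment schemes, so the Python sketch below evaluates one standard reference model for this kind of analysis: the cutoff-priority (guard channel) M/M/C/C cell, which yields both new-call blocking and handoff-dropping probabilities in closed form. The channel count, guard channels, and traffic values are illustrative and are not taken from the paper.

```python
from math import factorial

def guard_channel_blocking(C, g, lam_new, lam_ho, mu):
    """New-call blocking and handoff-dropping probabilities for the classic
    cutoff-priority (guard channel) scheme in an M/M/C/C cell.

    New calls are admitted only while fewer than C - g channels are busy;
    handoff calls may use all C channels."""
    a_tot = (lam_new + lam_ho) / mu       # total offered load (Erlangs)
    a_ho = lam_ho / mu                    # handoff-only load
    T = C - g
    q = [0.0] * (C + 1)
    for n in range(C + 1):
        if n <= T:
            q[n] = a_tot ** n / factorial(n)
        else:
            q[n] = (a_tot ** T) * (a_ho ** (n - T)) / factorial(n)
    p0 = 1.0 / sum(q)
    p = [p0 * x for x in q]
    p_new_block = sum(p[T:])              # states in which new calls are refused
    p_ho_drop = p[C]                      # handoffs dropped only when the cell is full
    return p_new_block, p_ho_drop

# illustrative cell: 20 channels, 2 guard channels, 120 s mean holding time
pb, pd = guard_channel_blocking(C=20, g=2, lam_new=0.1, lam_ho=0.04, mu=1/120)
print(f"new-call blocking = {pb:.4f}, handoff dropping = {pd:.4f}")
```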
Anindita Chhotray and H. K. Pati, "Estimation of Blocking Performances in Mobile WiMAX Cellular Networks," 2014 17th International Conference on Computer and Information Technology (ICCIT), Dec. 2014, pp. 148-154, doi: 10.1109/ICIT.2014.62.