Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346716
Nishit Gupta, Sunil Alag
This paper proposes a novel approach for system-level debug and performance evaluation that exploits the signal-level and clock-cycle accuracy of Bus Cycle Accurate hardware IP models along with the advantages of untimed Transaction Level Modeling. The developed toolset can be integrated into SoC simulations in a non-intrusive manner, transparently embedding performance figures and debug information in the dumped simulation database at both the signal and transaction level. The proposed approach models the SoC components with only functional accuracy, with computational delays added using the timing features provided by the event-based SystemC kernel. The components are modeled with clock-cycle and signal-level accuracy at their interfaces. Profiling results show that the proposed approach outperforms several state-of-the-art methodologies in terms of accuracy and adaptability, and improves simulation speed by roughly two orders of magnitude (10^2). The developed toolset can also be used effectively in a co-simulation environment with IPs at different abstraction levels.
{"title":"Unified approach for Performance Evaluation and Debug of System on Chip at early design phase","authors":"Nishit Gupta, Sunil Alag","doi":"10.1109/IC3.2015.7346716","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346716","url":null,"abstract":"This paper proposes a novel approach for System Level Debug and Performance Evaluation that exploits the signal level and clock cycle accuracy existing in Bus Cycle Accurate hardware IP models along with the advantages of untimed Transaction Level Modeling. The developed toolset can be integrated in SoC simulations in a nonintrusive manner which secretly embeds performance figures and debug information in dumped simulation database at signal and transaction level. Proposed approach suggests modeling the SoC components with only functional accuracy in which the computational delays are added using the timing features provided by event based SystemC kernel. The components are modeled with clock cycle and signal level accuracy at the interface. Profiling results shows that the proposed approach outperforms several state-of-art methodologies in terms accuracy, adaptability and simulation speed by an order of magnitude of 102. The developed toolset can effectively be used in a co-simulation environment with IPs at different abstraction levels.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122239458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346696
Srishti Sharma, Kanika Aggarwal, Palak Papneja, Saheb Singh
Twitter is among the most popular social networking and micro-blogging services today, with over a hundred million users generating a wealth of information on a daily basis. This paper explores the automatic mining of trending topics on Twitter, analyzing their sentiment and generating summaries of the trending topics. The extracted trending topics are compared to the day's news items in order to verify the accuracy of the proposed approach. Results indicate that the proposed method is exhaustive in listing all the important topics. A salient feature of the proposed technique is its ability to refine the trending topics to make them mutually exclusive. Sentiment analysis is carried out on the retrieved trending topics in order to discern the mass reaction towards them, and finally short summaries are formulated for all the trending topics, providing immediate insight into the public's reaction to each topic.
{"title":"Extraction, summariz ation and sentiment analysis of trending topics on Twitter","authors":"Srishti Sharma, Kanika Aggarwal, Palak Papneja, Saheb Singh","doi":"10.1109/IC3.2015.7346696","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346696","url":null,"abstract":"Twitter is amongst the most popular social networking and micro-blogging service today with over a hundred million users generating a wealth of information on a daily basis. This paper explores the automatic mining of trending topics on Twitter, analyzing the sentiments and generating summaries of the trending topics. The trending topics extracted are compared to the day's news items in order to verify the accuracy of the proposed approach. Results indicate that the proposed method is exhaustive in listing out all the important topics. The salient feature of the proposed technique is its ability to refine the trending topics to make them mutually exclusive. Sentiment analysis is carried out on the trending topics retrieved in order to discern mass reaction towards the trending topics and finally short summaries for all the trending topics are formulated that provide an immediate insight to the reaction of the masses towards every topic.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126002904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346701
Greeshma Sarath, H. MeghaLalS.
Location based services are widely used to access location information such as the nearest ATMs and hospitals. These services are accessed by sending location queries containing the user's current location to the Location Based Service (LBS) server. The LBS server can retrieve the user's current location from this query and misuse it, threatening the user's privacy. In security-critical applications such as defense, protecting the location privacy of authorized users is a critical issue. This paper describes the design and implementation of a solution to this privacy problem that provides location privacy to authorized users and preserves the confidentiality of data on the LBS server. Our solution is a two-stage approach, where the first stage is based on Oblivious Transfer and the second stage is based on Private Information Retrieval. The whole service area is divided into cells, and the location information of each cell is stored on the server in encrypted form. A user who wants to retrieve location information creates a cloaking region (a subset of the service area) containing his current location and generates a query embedding it. The server can only determine that the user is somewhere in this cloaking region, so the user's privacy can be improved by increasing the size of the cloaking region. Even though the server sends the location information of all the cells in the cloaking region, the user can decrypt the service information only for his exact location, so the confidentiality of the server data is preserved.
{"title":"Privacy preservation and content protection in location based queries","authors":"Greeshma Sarath, H. MeghaLalS.","doi":"10.1109/IC3.2015.7346701","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346701","url":null,"abstract":"Location based services are widely used to access location information such as nearest ATMs and hospitals. These services are accessed by sending location queries containing user's current location to the Location based service(LBS) server. LBS server can retrieve the the current location of user from this query and misuse it, threatening his privacy. In security critical application like defense, protecting location privacy of authorized users is a critical issue. This paper describes the design and implementation of a solution to this privacy problem, which provides location privacy to authorized users and preserve confidentiality of data in LBS server. Our solution is a two stage approach, where first stage is based on Oblivious transfer and second stage is based on Private information Retrieval. Here the whole service area is divided into cells and location information of each cell is stored in the server in encrypted form. The user who wants to retrieve location information will create a clocking region(a subset of service area), containing his current location and generate a query embedding it. Server can only identify the user is somewhere in this clocking region, so user's security can be improved by increasing the size of the clocking region. Even if the server sends the location information of all the cells in the clocking region, user can decrypt service information only for the user's exact location, so confidentiality of server data will be preserved.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122600943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346677
C. Prakash, A. Mittal, R. Kumar, Namita Mittal
Gait analysis has applications not only in medicine, rehabilitation and sports; it can also play a decisive role in security and surveillance as a behavioral biometric. This paper discusses a gait parameter extraction technique that does not use markers or sensors. Videos from a home digital camera are analyzed, and a silhouette-image-based technique is used to identify gait parameters such as step length, stride length, silhouette height and width, foot length, center of gravity (COG) and gait signature. Sagittal-view recordings of 10 healthy subjects, captured at the RAMAN lab, MNIT Jaipur, are considered in this research. The absence of markers makes the system non-invasive, inexpensive and easier to implement. To ascertain the quality of the data obtained, it has been compared with data extracted by marker-based approaches for the same subjects, and the satisfactory results confirm that the proposed system is feasible and can be used in the aforementioned areas. This paper can help researchers by providing a general insight into gait parameter extraction for gait analysis.
{"title":"Identification of gait parameters from silhouette images","authors":"C. Prakash, A. Mittal, R. Kumar, Namita Mittal","doi":"10.1109/IC3.2015.7346677","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346677","url":null,"abstract":"Gait analysis has applications not only in medical, rehabilitation and sports, but it can also play a decisive role in security and surveillance as a behavioral biometric factor. This paper discusses gait parameters extraction technique without using makers or sensors. Videos from a home digital camera are analyzed and a silhouette image based technique is used to identify gait parameter such as step length, stride length, silhouette height and width, foot length, center of gravity (COG) and gait signature. 10 healthy subject's sagittal view is considered in this research at RAMAN lab, MNIT Jaipur. Non-requirement of markers makes the system non-invasive, cheap, and also easier to implement. To ascertain the quality of data obtained, the data has been compared with the data extracted by marker based approaches for the same subject and satisfactory results conform that the proposed system is feasible and can be used in the aforementioned areas. This paper can help researchers by providing them with a general insight of the gait parameters extraction technique for gait analysis.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115514048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346674
V. Agrawal, Satish Chandra
Feature selection in medical image processing is the process of selecting relevant features that are useful in model construction, as it leads to reduced training times and a classification model that is easier to interpret. In this paper the meta-heuristic Artificial Bee Colony (ABC) algorithm is used for feature selection in Computed Tomography (CT) images of cervical cancer, with the objective of detecting whether the input data is cancerous or not. Segmentation is performed as a first step by applying the Active Contour Model (ACM) algorithm to the images, and a semi-automated system has been developed to obtain the region of interest (ROI). Further, the textural features proposed by Haralick are extracted from the region of interest. Classification is performed using hybridizations of ABC with the k-Nearest Neighbors (k-NN) algorithm and of ABC with the Support Vector Machine (SVM). It is observed that the combination of ABC with SVM (Gaussian kernel) performs better than the combination of ABC with SVM (linear kernel) and of ABC with the k-NN classifier.
{"title":"Feature selection using Artificial Bee Colony algorithm for medical image classification","authors":"V. Agrawal, Satish Chandra","doi":"10.1109/IC3.2015.7346674","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346674","url":null,"abstract":"Feature Selection in medical image processing is a process of selection of relevant features, which are useful in model construction, as it will lead to reduced training times and classification model designed will be easier to interrupt. In this paper a meta-heuristic algorithm Artificial Bee Colony (ABC) has been used for feature selection in Computed Tomography (CT Scan) images of cervical cancer with the objective of detecting whether the data given as input is cancerous or not. Starting with segmentation as a first step, performed by implementing Active Contour Segmentation (ACM) algorithm over the images. In this paper a semi-automated the system has been developed so as to obtain the region of interest (ROI). Further, textural features proposed by Haralick are extracted region of interest. Classification is performed using hybridization of Artificial Bee Colony (ABC) and k- Nearest Neighbors (k-NN) algorithm, ABC and Support Vector Machine (SVM). It is observed that combination of ABC with SVM (Gaussian kernel) performs better than combination of ABC with SVM (Linear Kernel) and ABC with K-NN classifier.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128169270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346687
Vinayak Ray, Ayush Goyal
This research presents a fully automatic, sub-second method for left ventricle (LV) segmentation from clinical cardiac MRI images, based on fast continuous max-flow graph cuts and connected component labeling. The motivation for LV segmentation is to assess cardiac disease in a patient based on left ventricular function. This novel graph-cuts labeling classification scheme removes the need for manual segmentation and for initialization with a seed point, since it automatically and accurately extracts the LV in all slices of the full cardiac cycle in multi-frame MRI. The method achieves an average computational time of 0.67 seconds per frame. The validity of the graph-cuts-labeling-based automatic segmentation technique was verified by comparison with manual segmentation. Medical parameters such as End Systolic Volume (ESV), End Diastolic Volume (EDV) and Ejection Fraction (EF) were calculated both automatically and manually and compared for accuracy.
{"title":"Image based sub-second fast fully automatic complete cardiac cycle left ventricle segmentation in multi frame cardiac MRI images using pixel clustering and labelling","authors":"Vinayak Ray, Ayush Goyal","doi":"10.1109/IC3.2015.7346687","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346687","url":null,"abstract":"This research presents a fully automatic sub-second fast method for left ventricle (LV) segmentation from clinical cardiac MRI images based on fast continuous max flow graph cuts and connected component labeling. The motivation for LV segmentation is to measure cardiac disease in a patient based on left ventricular function. This novel classification scheme of graph cuts labeling removes the need for manual segmentation and initialization with a seed point, since it automatically accurately extracts the LV in all slices of the full cardiac cycle in multi-frame MRI. This LV segmentation method achieves a sub-second fast computational time of 0.67 seconds on average per frame. The validity of the graph cuts labeling based automatic segmentation technique was verified by comparison with manual segmentation. Medical parameters like End Systolic Volume (ESV), End Diastolic Volume (EDV) and Ejection Fraction (EF) were calculated both automatically and manually and compared for accuracy.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121496934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346697
S. Saravanan
Nowadays, providing relevant product recommendations to customers plays an important role in retaining customers and improving their shopping experience. Recommender systems can be applied to industries such as e-commerce, music, online radio, television, hospitality, finance and many more. It has been shown over the years that a simple algorithm with a lot of data can provide better results than a complex algorithm with an inadequate amount of data. To provide better product recommendations, retail businesses have to analyze huge amounts of data; since the recommendation system must analyze this data to produce better recommendations, it is considered a data-intensive application. The Hadoop distributed cluster platform was developed by the Apache Software Foundation to address the issues involved in designing data-intensive applications. In this paper, improved MapReduce-based data preprocessing and content-based recommendation algorithms are proposed and implemented using the Hadoop framework. Graphical user interfaces are also developed to interact with the recommender system. Experimental results on Amazon product co-purchasing network metadata show that the Hadoop distributed cluster environment is an efficient and scalable platform for implementing a large-scale recommender system.
{"title":"Design of large-scale Content-based recommender system using hadoop MapReduce framework","authors":"S. Saravanan","doi":"10.1109/IC3.2015.7346697","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346697","url":null,"abstract":"Nowadays, providing relevant product recommendations to customers plays an important role in retaining customers and improving their shopping experience. Recommender systems can be applied to industries such as an e-commerce, music, online radio, television, hospitality, finance and many more. It is proved over the years that a simple algorithm with a lot of data can always provide better results than a complex algorithm with an inadequate amount of data. To provide better product recommendations, retail businesses have to analyze huge amount of data. As the recommendation system has to analyze huge amount of data to provide better recommendations, it is considered as a data intensive application. Hadoop distributed cluster platform is developed by Apache Software Foundation to address the issues which are involved in designing data intensive applications. In this paper, the improved MapReduce based data preprocessing and Content based recommendation algorithms are proposed and implemented using hadoop framework. Also, graphical user interfaces are developed to interact with the recommender system. Experimental results on Amazon product co-purchasing network metadata show that Hadoop distributed cluster environment is an efficient and scalable platform for implementing large scale recommender system.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121906331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346644
T. Ragunathan, Mohammed Sharfuddin
Replication is a strategy in which multiple copies of the same data are stored at multiple sites. Replication has been used in cloud storage systems to increase data availability, reliability, fault tolerance and performance. The number of replicas to be created for a file and the placement of those replicas are the two important factors that determine the storage requirement and performance of cloud storage systems. In this paper, we propose a novel replication algorithm for cloud storage systems that decides the replication factor for a file block based on its frequent block access pattern and the placement of that block based on its local support value. We have carried out a preliminary analysis of the algorithm, and the results indicate that the proposed algorithm can perform better than the replication algorithm used in the Hadoop storage system.
{"title":"Frequent block access pattern-based replication algorithm for cloud storage systems","authors":"T. Ragunathan, Mohammed Sharfuddin","doi":"10.1109/IC3.2015.7346644","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346644","url":null,"abstract":"Replication is a strategy in which multiple copies of same data are stored at multiple sites. Replication has been used in cloud storage systems as a way to increase data availability, reliability and fault tolerance and to increase the performance. Number of replicas to be created for a file and placement of the replicas are the two important factors which determine the storage requirement and performance of the cloud storage systems. In this paper, we have proposed a novel replication algorithm for cloud storage systems which decides replication factor for a file block based on frequent block access pattern and placement of that file block based on local support value. We have carried out preliminary analysis of the algorithms and the results indicate that the proposed algorithm can perform better than the replication algorithm followed in Hadoop storage system.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121915543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346679
N. S. Grewal, M. Rattan, M. Patterh
Element failure detection in antenna arrays is a practical issue in the communication field. The sidelobe power level increases to an unacceptable level under element failure conditions. In this paper, the antenna array failure detection problem is solved using the Bat Algorithm (BA). A fitness function is developed to measure the error between the degraded sidelobe pattern and the estimated sidelobe pattern, and this function is optimized using BA. Several numerical examples of failed-element detection are presented to show the capability of the proposed approach.
{"title":"A linear antenna array failure detection using Bat Algorithm","authors":"N. S. Grewal, M. Rattan, M. Patterh","doi":"10.1109/IC3.2015.7346679","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346679","url":null,"abstract":"The element failure detection of antenna arrays is a practical issue in communication field. The sidelobe power level increases to an unacceptable level due to element failure conditions. In this paper, the problem of antenna array failure detection has been solved using Bat Algorithm (BA). A fitness function has been developed to obtain the error between degraded sidelobe pattern and estimated sidelobe pattern and this function has been optimized using BA. Different numerical examples of failed elements detection are presented to show the capability of this proposed approach.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122152659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-08-20 | DOI: 10.1109/IC3.2015.7346655
G. Gupta, A. Joshi, Kanika Sharma
The chances of copyright violation and piracy have increased with the growth of networking and technology. A digital watermark is useful for verifying the integrity of content and the authenticity of its owner. The watermark must be robust against various attacks in order to protect the copyright information embedded in the original content. The proposed algorithm is useful for protecting the distribution rights of digital images. The watermark is embedded in the DCT coefficients of the host image and is pseudo-randomly spread over the entire image using a Linear Feedback Shift Register (LFSR). The algorithm shows improvement over existing algorithms in terms of Normalized Correlation (NC), Tamper Assessment Function (TAF) and Peak Signal-to-Noise Ratio (PSNR), outperforming the related previous work.
{"title":"An efficient DCT based image watermarking scheme for protecting distribution rights","authors":"G. Gupta, A. Joshi, Kanika Sharma","doi":"10.1109/IC3.2015.7346655","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346655","url":null,"abstract":"The chances of copyright violation and piracy have been increased due to growth of networking and technology. Digital watermark is useful to verify the integrity of the content and for authenticity of an owner. The digital watermark must be robust against various attacks in order to protect the copyright information which is embedded in the original content. The proposed algorithm is useful for protecting the distribution rights of digital images. Watermark is embedded in the DCT coefficients of the host image and the watermark is pseudo randomly spread over the entire image using Linear Feedback Shift Register (LFSR). The algorithm shows improvement over existing algorithms in terms of Normalized Correlation (NC), Tamper Assessment Function (TAF) and Peak Signal to Noise ratio (PSNR). The proposed algorithm outperforms the related previous work with better results.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123016592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}