
Latest publications from the 2015 Eighth International Conference on Contemporary Computing (IC3)

Unified approach for Performance Evaluation and Debug of System on Chip at early design phase
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346716
Nishit Gupta, Sunil Alag
This paper proposes a novel approach for System Level Debug and Performance Evaluation that exploits the signal-level and clock-cycle accuracy of Bus Cycle Accurate hardware IP models along with the advantages of untimed Transaction Level Modeling. The developed toolset can be integrated into SoC simulations in a nonintrusive manner, transparently embedding performance figures and debug information in the dumped simulation database at the signal and transaction level. The proposed approach models the SoC components with only functional accuracy, adding computational delays using the timing features provided by the event-based SystemC kernel; at the interface, the components are modeled with clock-cycle and signal-level accuracy. Profiling results show that the proposed approach outperforms several state-of-the-art methodologies in terms of accuracy, adaptability and simulation speed by an order of magnitude of 10^2. The developed toolset can effectively be used in a co-simulation environment with IPs at different abstraction levels.
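The nonintrusive monitoring idea — wrapping a model so that timing and debug data are recorded without altering its functional behaviour — can be illustrated with a minimal sketch (plain Python rather than SystemC; the `TransactionMonitor` and `DummyBusModel` names are illustrative, not from the paper):

```python
import time

class DummyBusModel:
    """Stand-in for a bus-cycle-accurate IP model."""
    def transport(self, transaction):
        return ("ok", transaction)

class TransactionMonitor:
    """Nonintrusive wrapper: forwards every transaction unchanged while
    recording latency and debug information alongside it."""
    def __init__(self, component):
        self.component = component
        self.log = []  # stands in for the dumped simulation database

    def transport(self, transaction):
        start = time.perf_counter()
        result = self.component.transport(transaction)  # behaviour untouched
        self.log.append({"txn": transaction,
                         "latency_s": time.perf_counter() - start})
        return result

monitor = TransactionMonitor(DummyBusModel())
monitor.transport("read@0x1000")
```

Because the wrapper exposes the same `transport` interface as the component it wraps, it can be dropped into a simulation without the wrapped model knowing it is being profiled.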
Citations: 2
Extraction, summarization and sentiment analysis of trending topics on Twitter
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346696
Srishti Sharma, Kanika Aggarwal, Palak Papneja, Saheb Singh
Twitter is among the most popular social networking and micro-blogging services today, with over a hundred million users generating a wealth of information on a daily basis. This paper explores the automatic mining of trending topics on Twitter, analyzing their sentiments and generating summaries of the trending topics. The extracted trending topics are compared to the day's news items in order to verify the accuracy of the proposed approach. Results indicate that the proposed method is exhaustive in listing all the important topics. The salient feature of the proposed technique is its ability to refine the trending topics to make them mutually exclusive. Sentiment analysis is carried out on the retrieved trending topics in order to discern the mass reaction towards them, and finally short summaries of all the trending topics are formulated that provide immediate insight into the reaction of the masses towards every topic.
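A toy version of the topic-level sentiment step — scoring tweets against small positive/negative lexicons and tallying reactions per trending topic — might look like this (the lexicons and function names are illustrative; the abstract does not specify the sentiment method at this level of detail):

```python
from collections import Counter

POSITIVE = {"good", "great", "love", "win", "awesome"}
NEGATIVE = {"bad", "hate", "fail", "worst", "awful"}

def sentiment(tweet):
    """Lexicon-based polarity: positive minus negative word count."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def topic_reactions(tweets_by_topic):
    """Tally the mass reaction towards each trending topic."""
    return {topic: Counter(sentiment(t) for t in tweets)
            for topic, tweets in tweets_by_topic.items()}
```

A real system would swap the word lists for a trained classifier, but the per-topic aggregation shape stays the same.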
Citations: 2
Privacy preservation and content protection in location based queries
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346701
Greeshma Sarath, H. MeghaLalS.
Location based services are widely used to access location information such as the nearest ATMs and hospitals. These services are accessed by sending location queries containing the user's current location to the Location Based Service (LBS) server. The LBS server can retrieve the current location of the user from such a query and misuse it, threatening his privacy. In security-critical applications like defense, protecting the location privacy of authorized users is a critical issue. This paper describes the design and implementation of a solution to this privacy problem, which provides location privacy to authorized users and preserves the confidentiality of data on the LBS server. Our solution is a two-stage approach, where the first stage is based on Oblivious Transfer and the second stage is based on Private Information Retrieval. The whole service area is divided into cells, and the location information of each cell is stored on the server in encrypted form. A user who wants to retrieve location information creates a cloaking region (a subset of the service area) containing his current location and generates a query embedding it. The server can only identify that the user is somewhere in this cloaking region, so the user's privacy can be improved by increasing the size of the cloaking region. Even if the server sends the location information of all the cells in the cloaking region, the user can decrypt service information only for his exact location, so the confidentiality of server data is preserved.
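The cloaking-region construction can be sketched directly. The grid partitioning and the uniformly random placement of a k×k block of cells around the user's cell are assumptions about one reasonable realization, not the paper's exact protocol:

```python
import random

def cloaking_region(user_cell, k, rng=None):
    """Return a k x k block of grid cells that contains user_cell,
    positioned uniformly at random so the server cannot tell which
    of the k*k cells the user actually occupies."""
    rng = rng or random.Random()
    cx, cy = user_cell
    # random offset of the user's cell within the block
    ox, oy = rng.randrange(k), rng.randrange(k)
    return [(cx - ox + i, cy - oy + j)
            for i in range(k) for j in range(k)]
```

Growing k enlarges the anonymity set at the cost of transferring (encrypted) information for more cells, which is exactly the privacy/bandwidth trade-off the abstract describes.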
Citations: 3
Identification of gait parameters from silhouette images
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346677
C. Prakash, A. Mittal, R. Kumar, Namita Mittal
Gait analysis has applications not only in medicine, rehabilitation and sports; as a behavioral biometric factor it can also play a decisive role in security and surveillance. This paper discusses a gait parameter extraction technique that uses no markers or sensors. Videos from a home digital camera are analyzed, and a silhouette image based technique is used to identify gait parameters such as step length, stride length, silhouette height and width, foot length, center of gravity (COG) and gait signature. The sagittal views of 10 healthy subjects are considered in this research, conducted at the RAMAN lab, MNIT Jaipur. Not requiring markers makes the system non-invasive, cheap, and easier to implement. To ascertain the quality of the data obtained, it has been compared with data extracted by marker-based approaches for the same subjects, and satisfactory results confirm that the proposed system is feasible and can be used in the aforementioned areas. This paper can help researchers by providing a general insight into gait parameter extraction techniques for gait analysis.
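Several of the listed parameters fall out of simple pixel statistics on the binary silhouette; here is a minimal sketch on a plain-Python mask (a toy stand-in for a real video frame; step and stride length would need foot tracking across frames and are omitted):

```python
def silhouette_params(mask):
    """Height, width and centre of gravity (COG) of a binary silhouette,
    given as a 2-D list of 0/1 pixels."""
    pixels = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    height = max(rows) - min(rows) + 1   # bounding-box extent, in pixels
    width = max(cols) - min(cols) + 1
    cog = (sum(rows) / len(pixels), sum(cols) / len(pixels))
    return height, width, cog
```

In practice the mask would come from background subtraction on the camera frames, and pixel units would be converted to metric lengths via camera calibration.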
Citations: 8
Feature selection using Artificial Bee Colony algorithm for medical image classification
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346674
V. Agrawal, Satish Chandra
Feature selection in medical image processing is the process of selecting relevant features that are useful in model construction, as it leads to reduced training times and a classification model that is easier to interpret. In this paper the meta-heuristic Artificial Bee Colony (ABC) algorithm is used for feature selection in Computed Tomography (CT scan) images of cervical cancer, with the objective of detecting whether the input data is cancerous or not. Segmentation is performed as a first step by applying the Active Contour Model (ACM) segmentation algorithm to the images, and a semi-automated system has been developed to obtain the region of interest (ROI). Further, the textural features proposed by Haralick are extracted from the region of interest. Classification is performed using hybridizations of the Artificial Bee Colony (ABC) and k-Nearest Neighbors (k-NN) algorithms, and of ABC and Support Vector Machines (SVM). It is observed that the combination of ABC with SVM (Gaussian kernel) performs better than the combinations of ABC with SVM (linear kernel) and ABC with the k-NN classifier.
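The search itself can be caricatured in a few lines. This is a heavily simplified ABC-style loop over binary feature masks — the employed and onlooker phases are collapsed into single-bit neighbourhood moves, the scout phase re-seeds stagnant food sources, and the toy fitness used in the example stands in for real classifier accuracy:

```python
import random

def abc_feature_select(n_features, fitness, n_bees=10, iters=60,
                       limit=10, rng=None):
    """Simplified ABC-style search over binary feature masks.
    Each food source is a mask; a bee flips one random bit and keeps
    the change if fitness improves. A source that fails to improve
    `limit` times in a row is abandoned and re-seeded (scout phase)."""
    rng = rng or random.Random(0)
    sources = [[rng.random() < 0.5 for _ in range(n_features)]
               for _ in range(n_bees)]
    trials = [0] * n_bees
    best = max(sources, key=fitness)
    for _ in range(iters):
        for i, src in enumerate(sources):
            cand = src[:]
            cand[rng.randrange(n_features)] ^= True   # single-bit neighbour
            if fitness(cand) > fitness(src):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                     # scout bee re-seed
                sources[i] = [rng.random() < 0.5 for _ in range(n_features)]
                trials[i] = 0
        best = max(sources + [best], key=fitness)
    return best
```

With an additive toy objective (reward relevant features, lightly penalize mask size) the loop reliably recovers the relevant subset; plugging in cross-validated classifier accuracy as `fitness` turns it into the wrapper-style selection the paper describes.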
Citations: 35
Image based sub-second fast fully automatic complete cardiac cycle left ventricle segmentation in multi frame cardiac MRI images using pixel clustering and labelling
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346687
Vinayak Ray, Ayush Goyal
This research presents a fully automatic, sub-second method for left ventricle (LV) segmentation from clinical cardiac MRI images based on fast continuous max-flow graph cuts and connected component labeling. The motivation for LV segmentation is to assess cardiac disease in a patient based on left ventricular function. This novel classification scheme of graph-cut labeling removes the need for manual segmentation and initialization with a seed point, since it automatically and accurately extracts the LV in all slices of the full cardiac cycle in multi-frame MRI. The method achieves a fast computational time of 0.67 seconds per frame on average. The validity of the graph-cut-labeling-based automatic segmentation technique was verified by comparison with manual segmentation. Medical parameters such as End Systolic Volume (ESV), End Diastolic Volume (EDV) and Ejection Fraction (EF) were calculated both automatically and manually and compared for accuracy.
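Connected component labeling — the step that pulls a coherent region out of the clustered pixel mask — reduces to a flood fill; a compact sketch keeping only the largest 4-connected component (the mask here is a toy stand-in for a clustered MRI slice):

```python
from collections import deque

def largest_component(mask):
    """Label 4-connected components of a binary image (list of lists
    of 0/1) and return the set of pixels in the largest one."""
    rows, cols = len(mask), len(mask[0])
    seen, best = set(), set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                comp, queue = set(), deque([(r, c)])   # BFS flood fill
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best
```

In the LV pipeline the analogous step discards small spurious clusters so that only the ventricular blood pool survives, which is what makes seed-point initialization unnecessary.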
Citations: 6
Design of large-scale Content-based recommender system using hadoop MapReduce framework
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346697
S. Saravanan
Nowadays, providing relevant product recommendations to customers plays an important role in retaining customers and improving their shopping experience. Recommender systems can be applied to industries such as e-commerce, music, online radio, television, hospitality, finance and many more. It has been shown over the years that a simple algorithm with a lot of data can provide better results than a complex algorithm with an inadequate amount of data. To provide better product recommendations, retail businesses have to analyze huge amounts of data; since a recommendation system must analyze this data to provide better recommendations, it is considered a data-intensive application. The Hadoop distributed cluster platform was developed by the Apache Software Foundation to address the issues involved in designing data-intensive applications. In this paper, improved MapReduce-based data preprocessing and content-based recommendation algorithms are proposed and implemented using the Hadoop framework, and graphical user interfaces are developed to interact with the recommender system. Experimental results on Amazon product co-purchasing network metadata show that the Hadoop distributed cluster environment is an efficient and scalable platform for implementing a large-scale recommender system.
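The MapReduce shape of a content-based recommender can be sketched in-process: the map phase emits (user, feature) pairs from purchase records, the reduce phase folds them into per-user profiles, and unseen items are scored against the profile. Function names and the tag-overlap scoring are illustrative, not the paper's exact algorithms:

```python
from collections import defaultdict

def map_phase(purchases, item_features):
    """Map: for each (user, item) purchase, emit the item's feature tags."""
    for user, item in purchases:
        for tag in item_features[item]:
            yield user, tag

def reduce_phase(mapped):
    """Reduce: build per-user tag profiles by counting emitted tags."""
    profiles = defaultdict(lambda: defaultdict(int))
    for user, tag in mapped:
        profiles[user][tag] += 1
    return profiles

def recommend(user, profiles, item_features, owned):
    """Score unseen items by overlap with the user's tag profile
    and return the best-scoring one."""
    prof = profiles[user]
    scored = [(sum(prof[t] for t in tags), item)
              for item, tags in item_features.items() if item not in owned]
    return max(scored)[1]
```

On Hadoop, `map_phase` and `reduce_phase` would become the mapper and reducer of a job over the co-purchasing metadata; the in-memory version only shows the data flow.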
Citations: 15
Frequent block access pattern-based replication algorithm for cloud storage systems
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346644
T. Ragunathan, Mohammed Sharfuddin
Replication is a strategy in which multiple copies of the same data are stored at multiple sites. Replication has been used in cloud storage systems as a way to increase data availability, reliability and fault tolerance and to improve performance. The number of replicas to be created for a file and the placement of those replicas are the two important factors that determine the storage requirements and performance of cloud storage systems. In this paper, we propose a novel replication algorithm for cloud storage systems which decides the replication factor for a file block based on its frequent block access pattern, and the placement of that file block based on a local support value. Preliminary analysis of the algorithm indicates that it can perform better than the replication algorithm used in the Hadoop storage system.
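The frequency-driven part of the decision can be sketched directly: count block accesses over a trace and raise the replication factor for blocks whose support (share of all accesses) crosses a threshold. The constants here (base factor 3, matching the HDFS default, and a 0.2 support threshold) are illustrative assumptions, not the paper's tuned values:

```python
from collections import Counter

def replication_factors(access_log, base=3, hot=5, hot_threshold=0.2):
    """Decide a per-block replication factor from an access trace:
    blocks whose support (share of all accesses) reaches
    hot_threshold get extra replicas; the rest keep the base factor."""
    counts = Counter(access_log)
    total = len(access_log)
    return {blk: hot if n / total >= hot_threshold else base
            for blk, n in counts.items()}
```

A placement policy would then spread the extra replicas of hot blocks across racks with spare capacity, which is the part the support value informs in the paper.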
Citations: 6
A linear antenna array failure detection using Bat Algorithm
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346679
N. S. Grewal, M. Rattan, M. Patterh
The detection of failed elements in antenna arrays is a practical issue in the communications field, as the sidelobe power level increases to an unacceptable level under element failure conditions. In this paper, the antenna array failure detection problem is solved using the Bat Algorithm (BA). A fitness function is developed to capture the error between the degraded sidelobe pattern and the estimated sidelobe pattern, and this function is optimized using BA. Several numerical examples of failed-element detection are presented to demonstrate the capability of the proposed approach.
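The shape of such a fitness function is easy to reproduce: compute the array factor of a candidate on/off element vector and sum the squared error against the measured degraded pattern; the optimizer then searches over candidate vectors. Half-wavelength spacing and uniform excitation are assumptions for this sketch:

```python
import math
import cmath

def array_factor(weights, theta, spacing=0.5):
    """Array factor magnitude of a uniform linear array; weights[n] is
    0 for a failed element, 1 for a working one (spacing in wavelengths)."""
    k = 2 * math.pi  # wavenumber times one wavelength
    return abs(sum(w * cmath.exp(1j * k * spacing * n * math.cos(theta))
                   for n, w in enumerate(weights)))

def fitness(candidate, measured, angles):
    """Sum of squared errors between the measured (degraded) pattern
    and the pattern of a candidate on/off element vector."""
    return sum((array_factor(candidate, t) - m) ** 2
               for t, m in zip(angles, measured))
```

A candidate that reproduces the measured pattern exactly has zero error, so the BA search amounts to minimizing `fitness` over binary element vectors.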
Citations: 5
An efficient DCT based image watermarking scheme for protecting distribution rights
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346655
G. Gupta, A. Joshi, Kanika Sharma
The chances of copyright violation and piracy have increased with the growth of networking and technology. A digital watermark is useful for verifying the integrity of content and the authenticity of its owner, and it must be robust against various attacks in order to protect the copyright information embedded in the original content. The proposed algorithm is useful for protecting the distribution rights of digital images. The watermark is embedded in the DCT coefficients of the host image and is pseudo-randomly spread over the entire image using a Linear Feedback Shift Register (LFSR). The algorithm shows improvement over existing algorithms in terms of Normalized Correlation (NC), Tamper Assessment Function (TAF) and Peak Signal-to-Noise Ratio (PSNR), outperforming related previous work.
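The LFSR that spreads the watermark can be sketched as a Fibonacci shift register: output the low bit each step and feed the XOR of the tapped bits back in at the top. Taps [4, 3] give the maximal-length 4-bit sequence of period 15; the seed and tap choices below are illustrative, as the abstract does not state the polynomial used:

```python
def lfsr_sequence(seed, taps, length):
    """Fibonacci LFSR: emit the LSB each step, shift right, and insert
    the XOR of the tapped bits (tap positions counted from the MSB,
    1..n) as the new MSB."""
    n = max(taps)          # register width
    state = seed
    bits = []
    for _ in range(length):
        fb = 0
        for t in taps:
            fb ^= (state >> (n - t)) & 1   # tapped bit
        bits.append(state & 1)             # output bit
        state = (state >> 1) | (fb << (n - 1))
    return bits
```

In the watermarking scheme, a sequence like this (with a much wider register) selects which DCT coefficients receive watermark bits, so only a holder of the seed can locate and extract them.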
{"title":"An efficient DCT based image watermarking scheme for protecting distribution rights","authors":"G. Gupta, A. Joshi, Kanika Sharma","doi":"10.1109/IC3.2015.7346655","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346655","url":null,"abstract":"The chances of copyright violation and piracy have been increased due to growth of networking and technology. Digital watermark is useful to verify the integrity of the content and for authenticity of an owner. The digital watermark must be robust against various attacks in order to protect the copyright information which is embedded in the original content. The proposed algorithm is useful for protecting the distribution rights of digital images. Watermark is embedded in the DCT coefficients of the host image and the watermark is pseudo randomly spread over the entire image using Linear Feedback Shift Register (LFSR). The algorithm shows improvement over existing algorithms in terms of Normalized Correlation (NC), Tamper Assessment Function (TAF) and Peak Signal to Noise ratio (PSNR). The proposed algorithm outperforms the related previous work with better results.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123016592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11