
Latest publications: 2021 RIVF International Conference on Computing and Communication Technologies (RIVF)

Identification and classification of emerging genres in WebPages
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066692
K. Kumari, A. Reddy
Information on the World Wide Web is dynamic and growing rapidly, and existing topic-based search engines are not adequate for retrieving the information users need, so genre-based search engines are required. To build such engines, web genres must first be identified. A few genre corpora already exist, covering web genres such as articles, online news, and journalism. Because the web is constantly evolving, new genres keep coming into existence; these are called emerging genres. This paper proposes two novel algorithms: the Identification of Emerging Genres (IEG) algorithm and the Adjustable Centroid Classification (ACC) algorithm. The IEG algorithm identifies emerging genres in web pages collected at random from the web, and the ACC algorithm evaluates the performance of the resulting genre corpus. Using a balanced 7-genre corpus for single-label classification and an unbalanced 20-genre corpus for multi-label classification, the IEG algorithm identified three emerging genres in 339 randomly selected web pages. The performance of the resulting datasets (10-genre single-label and 23-genre multi-label) is evaluated with the ACC algorithm and compared against SVM and random forest classifiers for single-label classification, and against binary relevance random forest and binary relevance SVM classifiers for multi-label classification. The results show that the ACC algorithm outperforms these existing classification algorithms.
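The ACC algorithm itself is not reproduced in the abstract; as a rough, hypothetical illustration of centroid-style genre classification, the following sketch implements a plain nearest-centroid classifier over bag-of-features page vectors (all data and feature values are invented).

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one mean feature vector (centroid) per genre label."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def predict(X, labels, centroids):
    """Assign each page to the genre whose centroid is closest (Euclidean distance)."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[dists.argmin(axis=1)]

# Toy example: 6 pages, 4 features, 2 genres
X = np.array([[1, 0, 0, 1], [1, 1, 0, 1], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 1, 1, 0], [1, 0, 1, 0]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
labels, centroids = fit_centroids(X, y)
print(predict(X, labels, centroids))   # recovers the training labels on this toy data
```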
Citations: 0
DNA for information security: A Survey on DNA computing and a pseudo DNA method based on central dogma of molecular biology
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066757
Sreeja. C.S, M. Misbahuddin, Mohammed Hashim N.P
Biology is a life science of great importance to quality of life, and information security is an aspect of social order on which human beings will never compromise. Both subjects are highly relevant and indispensable to mankind, so their combination naturally yields useful technology for security as well as data storage, known as bio-computing. Transferring information securely has been a concern since ancient civilizations, and many techniques have been proposed to ensure that only the intended recipient, and no one else, can read a message. These practices became even more important with the introduction of the Internet. Information ranges from big data down to a single word, but every piece of it requires proper storage and protection. Cryptography is the art and science of secrecy, protecting information from unauthorized access. Techniques for information protection have evolved over the years, including ciphers, cryptography, steganography, biometrics and, more recently, DNA-based security. DNA cryptography is a major breakthrough in the security field: it uses bio-molecular concepts and offers new hope for practically unbreakable algorithms. This paper surveys the DNA-based cryptographic methods proposed to date. It also proposes a symmetric DNA algorithm based on pseudo-DNA cryptography and the central dogma of molecular biology. The suggested algorithm uses splicing and padding techniques together with complementary rules, adding a layer of security beyond conventional cryptographic techniques.
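As a hedged illustration of the kind of encoding such schemes build on (not the authors' actual algorithm, whose coding tables, key handling, splicing and padding steps are not given here), the sketch below maps bytes to DNA bases two bits at a time and applies the complementary rule as one layer.

```python
# 2-bit-per-base mapping; the paper's actual coding tables and keys are not reproduced here.
BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
INV = {v: k for k, v in BASE.items()}
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}   # DNA complementary rule

def to_dna(data: bytes) -> str:
    bits = ''.join(f'{b:08b}' for b in data)
    return ''.join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def complement(strand: str) -> str:
    return ''.join(COMPLEMENT[b] for b in strand)

def from_dna(strand: str) -> bytes:
    bits = ''.join(INV[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"HELLO"
dna = to_dna(msg)
cipher = complement(dna)                       # one illustrative layer, not the full scheme
print(dna, cipher, from_dna(complement(cipher)))   # round-trips back to b'HELLO'
```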
Citations: 25
Performance analysis of different software reliability prediction methods
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066743
S. Saif, Mudasir M Kirmani, A. Wahid
Software has gained popularity in daily activities, ranging from small-scale applications running on handheld devices to complex applications and big-data processing. Because software has become the most critical part of many systems, software failures carry serious risks. The risk estimate associated with a system can be calculated using different techniques, but their predictive performance has not been satisfactory under system parameters defined in advance. A very important aspect of a software system is monitoring its behaviour across different platforms, and software reliability is an important domain for monitoring and managing system performance. The need of the hour is therefore to predict software reliability comprehensively using all scientifically acquired data sets. This paper presents a comprehensive analysis of various parametric and non-parametric reliability growth models. The results give insight into the effectiveness of non-parametric models in calculating software reliability, and the paper further justifies the importance of neural-network-based models for reliability prediction of a software system.
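As one concrete example of a parametric reliability growth model of the kind compared in such studies (not necessarily one of the models used by the authors), the sketch below fits the Goel-Okumoto mean-value function to hypothetical cumulative failure counts with SciPy; the data and starting parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Goel-Okumoto mean-value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative-failure data: (week, total failures observed so far)
t = np.arange(1, 11, dtype=float)
failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 22, 23], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, failures, p0=(25.0, 0.3))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.3f}")
print("predicted cumulative failures by week 15:", goel_okumoto(15.0, a, b))
```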
Citations: 0
Automatic semantic classification and categorization of web services in digital environment
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066749
V. Sawant, V. Ghorpade
Classifying web services through semantic discovery of similar services is a desirable capability, but improving the selection and matching process alone is not enough; existing service discovery approaches mostly rely on keyword matching over published descriptions to find web services. This paper proposes a framework for automatic classification and categorization of web services in a digital environment. The framework performs automated service discovery and domain selection semantically, using domain-knowledge ontology-based classification, to improve service categorization. It can efficiently classify and annotate service information by means of domain-specific service knowledge. To thoroughly evaluate the performance of the proposed semantics-based crawlers for automatic service discovery, we measure precision, mean average precision, recall and F-measure.
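The evaluation metrics named in the abstract are standard; as a small illustration (with invented service identifiers, not the authors' test collection), the sketch below computes set-based precision, recall and F-measure plus average precision, the per-query component of mean average precision, for one ranked list of discovered services.

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based precision, recall and F-measure for one query."""
    tp = len(set(retrieved) & set(relevant))
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def average_precision(ranked, relevant):
    """AP for one ranked result list; MAP is the mean of AP over all queries."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

ranked = ["s1", "s4", "s2", "s7"]        # crawler output, best first (hypothetical)
relevant = {"s1", "s2", "s3"}            # ground-truth relevant services (hypothetical)
print(precision_recall_f1(ranked, relevant))   # precision 0.50, recall ~0.67, F1 ~0.57
print(average_precision(ranked, relevant))     # ~0.56
```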
Citations: 6
Cache based evaluation of iceberg queries
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066694
V. Shankar, C. V. Guru Rao
Techniques that can efficiently retrieve small results from large data sets are increasingly in demand. Iceberg queries are queries that take large data as input and return a small result according to a user-specified threshold (T). Iceberg queries have been processed in many ways, but retrieval speed has often been compromised, so much research has focused on improving iceberg query evaluation methods. The compressed bitmap index is a recently developed technique for answering iceberg queries efficiently. In this paper we propose cache-based evaluation of iceberg queries: a query is first evaluated with the compressed-bitmap-index technique at threshold T = 1 and the results are saved in cache memory for future reference. Subsequent iceberg queries with thresholds greater than 1 simply pick their results from the cache instead of executing against the database table again. This strategy improves the execution time of iceberg queries by avoiding repeated evaluation. Experimental results demonstrate that our cache-based evaluation strategy outperforms the existing strategy.
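A minimal sketch of the caching idea, assuming a plain in-memory counter stands in for the compressed bitmap index: the threshold-1 pass materialises every (value, count) pair once, and later thresholds are answered by filtering the cache rather than rescanning the table.

```python
from collections import Counter

# Toy "database column" the iceberg query runs against
rows = ["a", "b", "a", "c", "a", "b", "d", "a"]

# One pass at threshold T = 1 materialises all counts; this plays the role
# of the bitmap-index result kept in cache memory.
cache = Counter(rows)

def iceberg(threshold):
    """Answer a later iceberg query by filtering the cached counts."""
    return {value: count for value, count in cache.items() if count >= threshold}

print(iceberg(1))   # {'a': 4, 'b': 2, 'c': 1, 'd': 1}
print(iceberg(3))   # {'a': 4}
```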
Citations: 6
A combined PTS & SLM approach with dummy signal insertion for PAPR reduction in OFDM systems
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066735
T. Sravanti, N. Vasantha
A novel combined approach of SLM (Selective Mapping), PTS (Partial Transmit Sequence) and DSI (Dummy Signal Insertion) is proposed to reduce PAPR (Peak-to-Average Power Ratio) and OBI (Out-of-Band Interference) in OFDM (Orthogonal Frequency Division Multiplexing) systems. When the PAPR is high, OFDM efficiency drops while the cost of installing the HPA (High Power Amplifier) rises, and much research has gone into minimizing this factor. The proposed method reduces computational complexity by halving the number of IFFT (Inverse Fast Fourier Transform) operations, and the results show an effective PAPR reduction of 0.6 to 1.4 dB. Simulation results also show 3.2 to 4 dB lower OBI compared with conventional and existing methods.
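As a hedged illustration of the PAPR metric and of plain SLM candidate selection (not the combined PTS + SLM + DSI scheme proposed in the paper), the following sketch measures the PAPR of a random QPSK OFDM symbol and of the best of a few random-phase SLM candidates; the subcarrier count and number of candidates are arbitrary.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex time-domain OFDM symbol, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

N = 64                                                      # subcarriers
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK
x = np.fft.ifft(symbols)                                    # baseline OFDM symbol
print(f"baseline PAPR: {papr_db(x):.2f} dB")

# SLM-style selection: try a few random phase sequences, transmit the lowest-PAPR candidate.
best = min(papr_db(np.fft.ifft(symbols * np.exp(2j * np.pi * rng.random(N))))
           for _ in range(8))
print(f"best of 8 SLM candidates: {best:.2f} dB")
```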
Citations: 6
GPU implementation of Belief Propagation method for Image Restoration using OpenCL
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066721
P. Ravibabu, K. S. Rao, Mallesham Dasari
Image processing applications involve a huge amount of computation because operations are carried out on every pixel of an image. General-purpose computations that are data-independent can run on Graphics Processing Units (GPUs), whose high degree of parallelism speeds up running time. The Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) programming environments are well-known parallel programming languages for GPU-based Single Instruction Multiple Data (SIMD) architectures. This paper presents a parallel implementation of the Belief Propagation (BP) algorithm for image restoration on a GPU using the OpenCL parallel programming environment. The experimental results show that the GPU-based implementation improves the running time of BP for image restoration compared with a sequential implementation. The best and average running times of the BP algorithm on a GPU with 14 multiprocessors (48 cores) are 0.81 ms and 1.46 ms when tested on various benchmark images at CIF and VGA resolution.
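The OpenCL kernels themselves are not reproduced here; as a much-simplified, CPU-only stand-in for the per-pixel message updates such a kernel parallelises on a 2-D grid, the sketch below runs exact min-sum belief propagation on a 1-D chain MRF with an invented label set and signal.

```python
import numpy as np

def restore_1d(obs, labels, lam=1.0):
    """Min-sum belief propagation on a 1-D chain MRF (a toy stand-in for 2-D grid BP)."""
    n, L = len(obs), len(labels)
    data = np.abs(obs[:, None] - labels[None, :])             # data cost per pixel/label
    smooth = lam * np.abs(labels[:, None] - labels[None, :])  # pairwise smoothness cost
    msg_r = np.zeros((n, L))                                   # messages passed left-to-right
    msg_l = np.zeros((n, L))                                   # messages passed right-to-left
    for i in range(1, n):                                      # forward sweep
        msg_r[i] = np.min(data[i - 1][:, None] + msg_r[i - 1][:, None] + smooth, axis=0)
    for i in range(n - 2, -1, -1):                             # backward sweep
        msg_l[i] = np.min(data[i + 1][:, None] + msg_l[i + 1][:, None] + smooth, axis=0)
    belief = data + msg_r + msg_l
    return labels[np.argmin(belief, axis=1)]                   # MAP label per pixel

labels = np.arange(0.0, 256.0, 16.0)
noisy = np.array([10.0, 12.0, 200.0, 14.0, 16.0, 18.0])        # one corrupted pixel
print(restore_1d(noisy, labels))                               # the outlier is smoothed away
```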
Citations: 1
Performance analysis of CSA using BEC and FZF logic with optimized full adder cell
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066706
Shivendra Pandey, A. Khan, Jyotirmoy Pathak, R. Sarma
This paper presents the implementation and comparison of a Carry Select Adder (CSA) using BEC (Binary Excess-1 Converter) and FZF (First Zero Finding) logic implementation techniques, with the Full Adder (FA) cell optimized by minimizing the number of transistors. Both logic styles are implemented and compared for 28T, 10T and 8T FA cells, while all other basic cells used to implement the BEC- and FZF-based CSAs are kept the same across the three adder cells. The analysis shows that the FZF-based CSA is better in terms of power consumption and Power-Delay Product (PDP) for all three FA cells, whereas the BEC-based CSA uses fewer transistors to implement the overall circuit. All designs are implemented with a 1.8 V supply in 180 nm CMOS process technology in the Cadence Virtuoso environment.
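As a behavioural (not gate-level or transistor-level) illustration of the BEC idea, the sketch below simulates a carry-select adder in which the carry-in-1 branch of each block is derived from the carry-in-0 sum by a Binary Excess-1 Converter instead of a second ripple adder; the bit widths and operands are arbitrary.

```python
def ripple_add(a_bits, b_bits, cin=0):
    """Bit-list ripple-carry add, LSB first; returns (sum_bits, carry_out)."""
    s, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return s, carry

def bec(bits):
    """Binary Excess-1 Converter: adds one to the block; returns (bits, carry_out)."""
    out, carry = [], 1
    for b in bits:
        out.append(b ^ carry)
        carry = b & carry
    return out, carry

def csa_bec_add(a, b, width=8, block=4):
    """Carry-select adder whose carry-in-1 branch comes from a BEC, not a second adder."""
    a_bits = [(a >> i) & 1 for i in range(width)]
    b_bits = [(b >> i) & 1 for i in range(width)]
    result, carry = [], 0
    for lo in range(0, width, block):
        s0, c0 = ripple_add(a_bits[lo:lo + block], b_bits[lo:lo + block], 0)
        s1, bec_carry = bec(s0)            # block sum assuming carry-in = 1
        c1 = c0 | bec_carry                # block carry-out assuming carry-in = 1
        result += s1 if carry else s0      # select the branch matching the real carry-in
        carry = c1 if carry else c0
    return sum(bit << i for i, bit in enumerate(result)) + (carry << width)

a, b = 0b10110101, 0b01101110
assert csa_bec_add(a, b) == a + b
print(bin(csa_bec_add(a, b)))
```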
Citations: 2
Enhanced Test Case Design mechanism for regression & impact testing
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066739
Himanshu Joshi, H. Varma, R. Surapaneni
Test case designs and specifications are mostly written by teams in a descriptive manner. Although teams do their best to write test cases that cover impacted requirements and regression testing scenarios, creating an all-inclusive set of test cases is not possible, and it is also difficult for one person to understand and execute test cases authored by another. This paper examines the existing test case design mechanism and proposes a new technique that overcomes the shortfalls of the existing method and uses the Test Matrix method for automation.
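The paper's Test Matrix format is not reproduced in the abstract; as a hypothetical sketch of the general idea, the snippet below keeps a requirement-to-test-case matrix and selects a regression suite for a set of impacted requirements (all identifiers are invented).

```python
# Hypothetical requirement-vs-test-case matrix: each test case lists the requirements it exercises.
test_matrix = {
    "TC01": {"REQ-1", "REQ-3"},
    "TC02": {"REQ-2"},
    "TC03": {"REQ-2", "REQ-3"},
    "TC04": {"REQ-4"},
}

def regression_suite(changed_requirements):
    """Select every test case that touches at least one impacted requirement."""
    changed = set(changed_requirements)
    return sorted(tc for tc, reqs in test_matrix.items() if reqs & changed)

print(regression_suite(["REQ-3"]))   # ['TC01', 'TC03']
```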
Citations: 1
Context based behavioural verification of composed web services modeled in finite state machines
Pub Date : 2014-12-01 DOI: 10.1109/ICCCT2.2014.7066722
D. Chenthati, H. Mohanty, A. Damodaram
Monitoring service execution to find run-time errors is of prime interest for providing resilient services to users on the web. Even when services are modelled and verified against structural errors, behavioural errors may still occur for many practical reasons, e.g. undefined users, network malfunctioning and computational errors, so service behaviour needs run-time checking to ensure correct execution. Run-time checking is always delicate with respect to time, since the overhead of run-time verification may delay service provisioning and discourage service users. This paper addresses run-time behaviour verification with respect to the contexts a service is designed for. A service is modelled as a finite state machine augmented with context information (AFSM), where the context of a state is defined by its associated variables and their values. For a composed service, communication among the constituent services is also modelled, covering both the execution of the composed service and the interactions among its constituting services. For run-time service behaviour verification we propose a technique that validates context sequence, context co-occurrence and context timeliness, and a framework is proposed for system implementation.
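As a hypothetical sketch of run-time conformance checking against a context-augmented FSM (the paper's AFSM notation and composed-service model are richer than this), the snippet below replays an execution trace and flags the first sequence or context violation; all states, events and guard predicates are invented.

```python
# Each transition maps (current state, event) to the next state and a guard
# predicate over the context variables observed at run time.
TRANSITIONS = {
    ("Init", "login"):   ("Auth",    lambda ctx: "user" in ctx),
    ("Auth", "search"):  ("Results", lambda ctx: bool(ctx.get("query"))),
    ("Results", "book"): ("Booked",  lambda ctx: ctx.get("items", 0) > 0),
}

def verify(trace):
    """Replay an execution trace and report the first context or sequence violation."""
    state = "Init"
    for event, ctx in trace:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"sequence violation: '{event}' not allowed in state {state}"
        state, guard = nxt
        if not guard(ctx):
            return f"context violation on '{event}' entering {state}"
    return "trace conforms"

print(verify([("login", {"user": "u1"}),
              ("search", {"query": "flights"}),
              ("book", {"items": 0})]))   # reports a context violation on 'book'
```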
Citations: 1