Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066692
Identification and classification of emerging genres in WebPages
K. Kumari, A. Reddy
Information on the World Wide Web is dynamic and growing rapidly. Existing topic-based search engines are not adequate to retrieve the information users require, so there is a need for genre-based search engines. To build them, web genres must first be identified. At present, only a few genre corpora exist, covering web genres such as articles, online news, and journalistic pages. The dynamic nature of the web allows new genres to come into existence; these are called emerging genres. In this paper, two novel algorithms are proposed: the Identification of Emerging Genres (IEG) algorithm and the Adjustable Centroid Classification (ACC) algorithm. The IEG algorithm identifies emerging genres from web pages collected randomly from the web, and the ACC algorithm evaluates the performance of a genre corpus. The IEG algorithm identified three emerging genres from 339 randomly selected web pages, using a balanced 7-genre corpus for the single-label case and an unbalanced 20-genre corpus for the multi-label case. The performance of the resulting datasets (10-genre single-label and 23-genre multi-label) obtained during the identification process is evaluated with the ACC algorithm and compared against an SVM classifier and a random forest classifier for single-label classification, and against binary relevance random forest and binary relevance SVM classifiers for multi-label classification. The classification results show that the ACC algorithm outperforms these existing classification algorithms.
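The abstract does not spell out how ACC adjusts its centroids, but a nearest-centroid classifier with an adjustable distance threshold conveys the underlying idea: pages that lie too far from every known genre centroid are flagged as candidate emerging genres rather than force-fitted to a label. The sketch below is illustrative only; the features, threshold, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one mean feature vector (centroid) per known genre."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(x, centroids, threshold):
    """Assign x to the nearest genre centroid; if even the nearest
    centroid is farther than `threshold`, flag the page as a candidate
    emerging genre instead of forcing a known label."""
    label, dist = min(
        ((g, np.linalg.norm(x - c)) for g, c in centroids.items()),
        key=lambda pair: pair[1],
    )
    return label if dist <= threshold else "emerging-genre-candidate"

# Toy usage: 2-D features, two known genres.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array(["news", "news", "article", "article"])
cents = fit_centroids(X, y)
print(classify(np.array([0.15, 0.15]), cents, threshold=0.5))  # news
print(classify(np.array([0.5, -0.9]), cents, threshold=0.5))   # emerging-genre-candidate
```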
{"title":"Identification and classification of emerging genres in WebPages","authors":"K. Kumari, A. Reddy","doi":"10.1109/ICCCT2.2014.7066692","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066692","url":null,"abstract":"The information in World Wide Web is dynamic and growing faster. Existing topic based search engines are not adequate to retrieve information required by the users. So there is a necessity to develop genre based search engines. Firstly, web genres have to be identified to develop genre based search engines. Presently, there exist a few genre corpuses which include web genres like articles, online news, journalistic etc. The active nature of the web allows new genres to come into existence and these genres are called as emerging genres. In this paper, two novel algorithms are proposed namely Identification of Emerging Genres (IEG) algorithm and Adjustable Centroid Classification (ACC) algorithm. The IEG algorithm is used to identify emerging genres from the web pages that are collected randomly from the web and ACC algorithm is used to evaluate the performance of genre corpus. In this paper, the IEG algorithm has identified three emerging genres from 339 randomly selected web pages from World Wide Web by considering balanced 7-genre corpus for single label and unbalanced 20-genre corpus for multi-label respectively. The performance of the resultant datasets (10-genre single label and 23-genre multi-label) obtained during the identification process is evaluated using ACC algorithm and compared with SVM classifier, random forest classifier for single label classification and binary relevance random forest classifier, binary relevance SVM classifier for multi-label classification respectively. The classification results show that ACC algorithm gave better results when compared to existing classification algorithms.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"48 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74944769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066757
DNA for information security: A Survey on DNA computing and a pseudo DNA method based on central dogma of molecular biology
Sreeja. C.S, M. Misbahuddin, Mohammed Hashim N.P
Biology is a life science with a direct bearing on quality of life, and information security is an aspect of social life on which human beings will never compromise. Both are highly relevant and indispensable to mankind, so an amalgamation of the two naturally yields a useful technology, for security or for data storage, known as bio-computing. The secure transfer of information has been a major concern since ancient civilizations. Various techniques have been proposed to keep data secure so that only the intended recipient can read a message, and these practices became even more significant with the introduction of the Internet. Information ranges from big data to a single word, but every piece of it requires proper storage and protection. Cryptography is the art and science of secrecy, protecting information from unauthorized access. Many techniques have evolved over the years for information protection, including ciphers, cryptography, steganography, biometrics, and, most recently, DNA-based security. DNA cryptography was a major breakthrough in the field: it uses bio-molecular concepts and offers new hope for practically unbreakable algorithms. This paper surveys the DNA-based cryptographic methods proposed to date. It also proposes a DNA symmetric algorithm based on pseudo-DNA cryptography and the central dogma of molecular biology. The suggested algorithm uses splicing and padding techniques along with complementary rules, which add a layer of security beyond conventional cryptographic techniques.
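For readers unfamiliar with the central-dogma analogy, the sketch below shows the two ingredients a pseudo-DNA scheme builds on: encoding binary data as nucleotides and applying a complementary (transcription) rule. The bit-to-base table is one common convention, not necessarily the authors' exact scheme; this toy performs encoding only and is NOT a secure cipher.

```python
BIT2BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}   # assumed mapping
COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}      # DNA -> mRNA transcription rule

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA strand, two bits per nucleotide."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BIT2BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def transcribe(dna: str) -> str:
    """Transcription step of the central dogma: rewrite the strand
    with the complementary mRNA bases."""
    return "".join(COMPLEMENT[base] for base in dna)

strand = to_dna(b"Hi")
print(strand)             # CAGACGGC
print(transcribe(strand)) # GUCUGCCG
```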
{"title":"DNA for information security: A Survey on DNA computing and a pseudo DNA method based on central dogma of molecular biology","authors":"Sreeja. C.S, M. Misbahuddin, Mohammed Hashim N.P","doi":"10.1109/ICCCT2.2014.7066757","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066757","url":null,"abstract":"Biology is a life science which has high significance on the quality of life and information security is that aspect for social edification, which human beings will never compromise. Both are subjects of high relevance and inevitable for mankind. So, an amalgamation of these subjects definitely turns up as utility technology, either for security or data storage and is known as Bio computing. The secure transfer of information was a major concern from ancient civilizations. Various techniques have been proposed to maintain security of data so that only intended recipient should be able to receive the message other than the sender. These practices became more significant with the introduction of the Internet. Information varies from big data to a particular word, but every piece of information requires proper storage and protection which is a major concern. Cryptography is an art or science of secrecy which protects information from unauthorized access. Various techniques evolved through years for information protection, including Ciphers, Cryptography, Steganography, Biometrics and recent DNA for security.DNA cryptography was a major breakthrough in the field of security which uses Bio-molecular concepts and gives us a new hope of unbreakable algorithms. This paper discusses various DNA based Cryptographic methods proposed till now. It also proposes a DNA symmetric algorithm based on the Pseudo DNA Cryptography and Central dogma of molecular biology. The suggested algorithm uses splicing and padding techniques along with complementary rules which make the algorithm more secure as it is an additional layer of security than conventional cryptographic techniques.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"70 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73728065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066743
Performance analysis of different software reliability prediction methods
S. Saif, Mudasir M Kirmani, A. Wahid
Software has become part of daily activities, ranging from small applications running on handheld devices to complex applications and big-data processing. Software is critical in nature: it has become the most vital part of many systems, which creates risks tied to software failures. The risk associated with a system can be estimated using different techniques, but their predictive performance has not been satisfactory under system parameters defined in advance. A very important aspect of a software system is therefore monitoring its behaviour across different platforms, and software reliability is a key domain for monitoring and managing that performance. The need of the hour is to predict software reliability comprehensively using all scientifically acquired data sets. In this paper, a comprehensive analysis of various parametric and non-parametric reliability growth models is performed. The results give insight into the effectiveness of non-parametric models for calculating software reliability, and further justify the importance of neural-network-based models for predicting the reliability of a software system.
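As a concrete example of the parametric side of such a comparison, the sketch below fits the classic Goel-Okumoto reliability growth model, in which expected cumulative failures follow m(t) = a(1 - exp(-b t)). The failure data are synthetic, and the paper's exact model set is not claimed here.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean cumulative failures by time t under the G-O NHPP model:
    m(t) = a * (1 - exp(-b*t)); a = expected total faults, b = detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Synthetic failure data (hours vs. cumulative failures) for illustration.
t = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
failures = np.array([8, 15, 20, 24, 27, 29, 30, 31], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, failures, p0=(40.0, 0.02))
print(f"estimated total faults a={a_hat:.1f}, detection rate b={b_hat:.3f}")
print(f"predicted failures by t=100h: {goel_okumoto(100.0, a_hat, b_hat):.1f}")
```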
{"title":"Performance analysis of different software reliability prediction methods","authors":"S. Saif, Mudasir M Kirmani, A. Wahid","doi":"10.1109/ICCCT2.2014.7066743","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066743","url":null,"abstract":"Software has gained popularity in daily activities ranging from small scale applications running on handheld devices to complex application and big data processing. The software is critical in nature as it has become the most vital part of a system resulting in risks related to software failures. The risk estimate associated with a system can be calculated using different techniques. The performance of these techniques in predicting performance has not been satisfactory under different system parameters defined in advance. A very important aspect of a software system is to monitor the behaviour of the software across different platforms. Software reliability is an important domain in monitoring and managing performance of a software system. Therefore, the need of the hour is to predict software reliability comprehensively using all scientifically acquired data sets. In this paper comprehensive analysis of various parametric and non-parametric reliability growth models has been performed. The results give an insight insight into the effectiveness of non-parametric model while calculating software reliability. This paper further justifies the importance of neural network based models in calculating reliability prediction of a software system.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"78 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73459971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066749
Automatic semantic classification and categorization of web services in digital environment
V. Sawant, V. Ghorpade
Classifying web services through semantic service discovery is a key enabler for finding suitable services, but improving the selection and matching process alone is not enough. Existing service discovery approaches often rely on keyword matching over published descriptions to find web services. In this paper we propose a framework for automatic classification and categorization of web services in a digital environment. The proposed framework performs automated service discovery and domain selection semantically, using domain-knowledge-ontology-based classification to improve service categorization. It can efficiently classify and annotate service information by means of domain-specific service knowledge. To thoroughly evaluate the performance of the proposed semantics-based crawlers for automatic service discovery, we measure precision, mean average precision, recall, and F-measure.
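The evaluation metrics named above are standard; a minimal sketch of how they are computed over retrieved service sets follows (the service identifiers are invented).

```python
def precision_recall_f1(retrieved, relevant):
    """Standard IR metrics over sets of service IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def average_precision(ranked, relevant):
    """AP for one query; Mean Average Precision averages this over queries."""
    relevant, hits, total = set(relevant), 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

print(precision_recall_f1(["s1", "s2", "s3"], ["s1", "s3", "s4"]))
print(average_precision(["s1", "s2", "s3"], ["s1", "s3", "s4"]))
```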
{"title":"Automatic semantic classification and categorization of web services in digital environment","authors":"V. Sawant, V. Ghorpade","doi":"10.1109/ICCCT2.2014.7066749","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066749","url":null,"abstract":"Classification of web services through semantic service discovery of a similar event will be the feature services. However, to improve the selection and matching process is not enough. The existing service discovery approaches often published keyword matching to find web services practices. In this paper we propose a framework for automatic service classification and categorization of web service process in digital environment. The proposed framework semantically perform automated service discovery and domain selection using domain-knowledge ontology based classification in a digital environment to improvise the service categorization. It is efficiently able to classify and annotated service information by means of specific service domain knowledge. In order to thoroughly evaluate the performance of our proposed semantic based crawlers for automatic service discovery, we measure the Precision, Mean Average Precision, Recall and F-measure Rates.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"32 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88391362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066694
Cache based evaluation of iceberg queries
V. Shankar, C. V. Guru Rao
There is growing demand for techniques that efficiently retrieve small results from large data sets. Iceberg queries are queries of exactly this kind: they take large data as input and return a small result set that meets a user-specified threshold (T). Iceberg queries have been processed in many ways, but earlier approaches compromised on retrieval speed, so many researchers are working to improve iceberg query evaluation. The compressed bitmap index is an efficient technique developed recently to answer iceberg queries. In this paper we propose cache-based evaluation of iceberg queries: a query is first evaluated with the compressed bitmap index technique at threshold T = 1, and the results are saved in cache memory for future reference. Subsequent evaluations with thresholds greater than 1 simply pick their results from the cache instead of executing against the database table again. This strategy improves the execution time of iceberg queries by avoiding repeated evaluation, and experimental results demonstrate that our cache-based strategy outperforms the existing one.
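The caching strategy is simple to state in code: one full pass over the column materializes the T = 1 result, and every later threshold is answered from that cache. The sketch below uses an in-memory counter as a stand-in for the compressed bitmap index described in the paper.

```python
from collections import Counter

class IcebergCache:
    """Evaluate an iceberg query once at T = 1 and answer later
    thresholds from the cached counts instead of rescanning the table."""

    def __init__(self, column_values):
        # One full pass over the column: equivalent to the T = 1 result.
        self._counts = Counter(column_values)

    def query(self, threshold):
        """Return {value: count} for counts >= threshold, from cache only."""
        return {v: c for v, c in self._counts.items() if c >= threshold}

column = ["a", "b", "a", "c", "a", "b", "d"]
cache = IcebergCache(column)
print(cache.query(2))  # {'a': 3, 'b': 2}
print(cache.query(3))  # {'a': 3} -- no second scan of the base table
```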
{"title":"Cache based evaluation of iceberg queries","authors":"V. Shankar, C. V. Guru Rao","doi":"10.1109/ICCCT2.2014.7066694","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066694","url":null,"abstract":"Nowadays, it is more demanded for techniques that are efficient in retrieval of small results from large data sets. Iceberg queries are such a kind of queries which accepts large data as input and process them for retrieve small results upon user specified threshold (T). Earlier, the iceberg queries are processed by many ways but are compromised in speed with which the data is retrieved. Thus lots of researchers are concentrating on improvement of iceberg query evaluation methods. Compressed bitmap index is an efficient technique which is developed recently to answer iceberg queries. In this paper, we proposed “Cache Based Evaluation of Iceberg Queries”. An iceberg query is evaluated using compressed bitmap index technique for threshold equals to 1, save results in cache memory for future reference. For further evaluation of an iceberg query thresholds greater than 1 are just picking the results from the cache memory instead of executing once again on the database table. Thus strategy clearly stating that, an execution time of IBQ is improved by avoiding repetition of an evaluation process by multiple times. Experimental results are demonstrating our cache based evaluation strategy is better than existing strategy.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"2021 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72680498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066735
A combined PTS & SLM approach with dummy signal insertion for PAPR reduction in OFDM systems
T. Sravanti, N. Vasantha
A novel combined approach of SLM (Selective Mapping), PTS (Partial Transmit Sequence), and DSI (Dummy Signal Insertion) is proposed to reduce the PAPR (Peak-to-Average Power Ratio) and OBI (Out-of-Band Interference) of OFDM (Orthogonal Frequency Division Multiplexing) systems. When the PAPR is high, the efficiency of OFDM decreases and the cost of the HPA (High Power Amplifier) increases, and much research has gone into minimizing this factor. The proposed method reduces computational complexity by halving the number of IFFT (Inverse Fast Fourier Transform) operations, and the results show an effective PAPR reduction of 0.6-1.4 dB. Simulation results also show 3.2-4 dB lower OBI compared with conventional and existing methods.
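As background for the SLM component, the sketch below measures PAPR and selects the lowest-PAPR candidate among random phase rotations; it does not reproduce the combined PTS + DSI scheme, and the subcarrier count, modulation, and candidate count are assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain OFDM symbol, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
N = 64                                                     # subcarriers (assumed)
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK (assumed)

# Plain OFDM symbol: one IFFT over the subcarrier symbols.
x = np.fft.ifft(symbols)
print(f"original PAPR: {papr_db(x):.2f} dB")

# SLM: generate a few phase-rotated candidates, keep the lowest-PAPR one.
best = min(
    (np.fft.ifft(symbols * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))
     for _ in range(8)),
    key=papr_db,
)
print(f"SLM-selected PAPR: {papr_db(best):.2f} dB")
```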
{"title":"A combined PTS & SLM approach with dummy signal insertion for PAPR reduction in OFDM systems","authors":"T. Sravanti, N. Vasantha","doi":"10.1109/ICCCT2.2014.7066735","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066735","url":null,"abstract":"A novel combined approach of SLM (Selective Mapping), PTS (Partial Transmit Sequence) and DSI (Dummy Signal Insertion) is proposed to diminish PAPR (Peak to Average Power Ratio) and OBI (Out of Band Interference) in OFDM (Orthogonal Frequency Division Modulation) systems. The efficiency of OFDM decreases while the cost of installing HPA (High Power Amplifier) increases when the PAPR factor is high. A lot of research has been done to minimize this factor. The proposed method reduces the computational complexity by minimizing the number of IFFT (Inverse Fast Fourier Transform) operation to a half and the results show an effective decrement of PAPR by 0.6 - 1.4 dB. It also proved from the simulation results that it has 3.2 - 4 dB lower OBI when compared against the conventional and existing methods.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"3 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73023946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066721
GPU implementation of Belief Propagation method for Image Restoration using OpenCL
P. Ravibabu, K. S. Rao, Mallesham Dasari
Image-processing applications involve a huge amount of computation because operations are carried out on every pixel of the image. General-purpose computations that are data-independent can run on Graphics Processing Units (GPUs), where the high degree of parallelism speeds up running time. The Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) programming environments are well-known parallel programming languages for GPU-based Single Instruction Multiple Data (SIMD) architectures. This paper presents a parallel implementation of the Belief Propagation (BP) algorithm for image restoration on a GPU using the OpenCL parallel programming environment. The experimental results show that the GPU-based implementation improves the running time of BP for image restoration compared with a sequential implementation. The best and average running times of the BP algorithm on a GPU with 14 multiprocessors (48 cores) are 0.81 ms and 1.46 ms, respectively, when tested on various benchmark images at CIF and VGA resolutions.
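The kernel itself is not given in the abstract, but the data-parallel structure that makes BP suit a GPU is easy to show: one min-sum message update applies identical arithmetic at every pixel. The vectorised NumPy reference below stands in for a per-pixel OpenCL kernel; the cost functions and shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def messages_right(data_cost, smooth_cost):
    """One min-sum BP message update from every pixel to its right
    neighbour, vectorised over the whole image. data_cost has shape
    (H, W, L) for L labels; smooth_cost has shape (L, L). Because the
    same arithmetic runs at every pixel, this maps directly onto a
    per-pixel OpenCL work-item."""
    # msg[y, x, l_q] = min over l_p of data_cost[y, x, l_p] + smooth_cost[l_p, l_q]
    total = data_cost[:, :, :, None] + smooth_cost[None, None, :, :]
    msg = total.min(axis=2)
    return msg - msg.min(axis=2, keepdims=True)  # normalise for stability

H, W, L = 4, 5, 3
rng = np.random.default_rng(1)
data = rng.random((H, W, L))
# Truncated-linear smoothness cost, a common choice for restoration.
smooth = np.minimum(np.abs(np.arange(L)[:, None] - np.arange(L)[None, :]), 2.0)
print(messages_right(data, smooth).shape)  # (4, 5, 3)
```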
{"title":"GPU implementation of Belief Propagation method for Image Restoration using OpenCL","authors":"P. Ravibabu, K. S. Rao, Mallesham Dasari","doi":"10.1109/ICCCT2.2014.7066721","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066721","url":null,"abstract":"The image processing applications involve huge amount of computational complexity as the operations are carried out on each pixel of the image. The General Purpose computations that are data independent can run on Graphics Processing Units (GPU) to enable speedup in running time due to high level of parallelism. Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) programming environments are well known parallel programming languages for GPU-based Single Instruction Multiple Data (SIMD) architectures. This paper presents parallel implementation of Belief Propagation (BP) algorithm for Image Restoration on GPU using OpenCL parallel programming environment. The experimental results shows that, GPU-based implementation improves the running time of BP for image restoration when compared to sequential implmentation of BP. The best and average running time of BP algorithm on GPUs with 14 multiprocessors (48 cores) is 0.81ms and 1.46ms when tested on various benchmark images with CIF and VGA resolution.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"1 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89454175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066706
Performance analysis of CSA using BEC and FZF logic with optimized full adder cell
Shivendra Pandey, A. Khan, Jyotirmoy Pathak, R. Sarma
This paper presents the implementation and comparison of a Carry Select Adder (CSA) using BEC (Binary to Excess-1 Converter) and FZF (First Zero Finding) logic, with the Full Adder (FA) cell optimized by minimizing its transistor count. Results are analyzed and compared for both logic styles with 28T, 10T, and 8T FA cells, while all other basic cells used to implement the BEC- and FZF-based CSAs are kept the same across the three adder cells. The analysis shows that the FZF-based CSA is better in terms of power consumption and Power-Delay Product (PDP) for all three FA cells, whereas the BEC-based CSA proves better in terms of the number of transistors needed for the overall circuit. All designs are implemented with a 1.8 V power supply in 180 nm CMOS process technology in the Cadence Virtuoso environment.
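Behaviourally, the BEC trick is that the carry-in = 1 sum of a carry-select block equals the carry-in = 0 sum plus one, so the duplicate adder can be replaced by an excess-1 converter. A minimal bit-level model of one block follows (LSB-first bit lists; a functional sketch, not the transistor-level designs compared in the paper).

```python
def bec(bits):
    """Binary-to-Excess-1 Converter: returns bits + 1 (LSB-first list).
    In a BEC-based CSA this replaces the duplicate carry-in = 1 adder."""
    out, carry = [], 1
    for b in bits:
        out.append(b ^ carry)
        carry = b & carry
    return out, carry

def csa_bec(a_bits, b_bits, cin):
    """One carry-select block: compute the carry-in = 0 sum with a
    ripple adder, derive the carry-in = 1 result via the BEC, then
    select with the actual carry-in (the multiplexer stage)."""
    sum0, carry = [], 0
    for a, b in zip(a_bits, b_bits):          # ripple-carry, cin = 0
        sum0.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    sum1, carry1 = bec(sum0)                  # cin = 1 path, no second adder
    return (sum1, carry | carry1) if cin else (sum0, carry)

# 4-bit example, LSB first: 5 + 3 with cin = 1 -> 9 = [1, 0, 0, 1], carry 0
print(csa_bec([1, 0, 1, 0], [1, 1, 0, 0], cin=1))
```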
{"title":"Performance analysis of CSA using BEC and FZF logic with optimized full adder cell","authors":"Shivendra Pandey, A. Khan, Jyotirmoy Pathak, R. Sarma","doi":"10.1109/ICCCT2.2014.7066706","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066706","url":null,"abstract":"This paper shows the implementation and comparison of Carry Select Adder (CSA) using BEC (Binary Excess one Converter) and First Zero Finding (FZF) logic implementation techniques with optimization of the Full Adder (FA) cell by minimize number of transistors. The results have been analyzed and compared for implementation of both the above logic styles for 28T, 10T and 8T FA cells where as keeping all other basic cells used for implementation of BEC and FZF based CSA same for all three of adder cells. The analysis shows that the CSA using FZF logic is better in terms of power consumption and Power Delay Product (PDP) for all three FA cells however BEC based CSA proves to be better in terms of number of transistors used to implement the overall circuit. All the designs are implemented 1.8Volt power supply and 180nm CMOS process technology in Cadence Virtuoso environment.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"28 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80180943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066739
Enhanced Test Case Design mechanism for regression & impact testing
Himanshu Joshi, H. Varma, R. Surapaneni
Test case designs and specifications are mostly written descriptively by teams. Although teams put in their best efforts to write test cases that cover impacted requirements and regression-testing scenarios, creating an all-inclusive set of test cases is not possible, and it is often difficult for one person to understand and execute test cases authored by another. This paper examines the existing test case design mechanism and proposes a new technique that overcomes its shortfalls and uses a Test Matrix method for automation.
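A requirement-by-test-case matrix of the kind such a method could automate might look like the sketch below; all identifiers are invented, and this is only one plausible reading of the Test Matrix method.

```python
# Rows are requirements, columns are test cases; 1 means the test exercises it.
test_matrix = {
    #             TC-01  TC-02  TC-03
    "REQ-LOGIN":  [1,     1,     0],
    "REQ-SEARCH": [0,     1,     1],
    "REQ-EXPORT": [0,     0,     0],   # uncovered -> gap in the regression suite
}
TEST_NAMES = ("TC-01", "TC-02", "TC-03")

def uncovered(matrix):
    """Requirements no test case touches: candidates for new tests."""
    return [req for req, row in matrix.items() if not any(row)]

def impacted_tests(matrix, changed_reqs):
    """Select the regression subset for a change to the given requirements."""
    return sorted({TEST_NAMES[i]
                   for req in changed_reqs
                   for i, hit in enumerate(matrix[req]) if hit})

print(uncovered(test_matrix))                        # ['REQ-EXPORT']
print(impacted_tests(test_matrix, ["REQ-SEARCH"]))   # ['TC-02', 'TC-03']
```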
{"title":"Enhanced Test Case Design mechanism for regression & impact testing","authors":"Himanshu Joshi, H. Varma, R. Surapaneni","doi":"10.1109/ICCCT2.2014.7066739","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066739","url":null,"abstract":"Test Case Design and Specifications are mostly written by teams in descriptive manner. Although teams put in their best efforts to write test cases that cover impacted requirements and regression testing scenarios, creating set of all-inclusive test cases is not possible. Also, it becomes difficult for one person to understand & execute test cases authored by another person. This paper examines existing test case design mechanism and proposes a new technique which overcomes the short falls of the existing method and utilizes Test Matrix method for automation.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"21 1","pages":"1-3"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73632082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01 | DOI: 10.1109/ICCCT2.2014.7066722
Context based behavioural verification of composed web services modeled in finite state machines
D. Chenthati, H. Mohanty, A. Damodaram
Monitoring service execution to find runtime errors is of prime interest for resilient service provisioning on the web. Even though services are modelled and verified for structural errors, behavioural errors can still occur for many practical reasons, e.g., undefined user behaviour, network malfunction, and computational errors. This creates a need for runtime checking of service behaviour to ensure correct execution. Runtime checking is always tricky with respect to time, since verification overhead that delays provisioning may discourage service users. This paper addresses runtime behaviour verification with respect to the contexts a service is designed for. A service is modelled as an Augmented Finite State Machine (AFSM), a finite state machine augmented with context information, where the context of a state is defined by the variables and values associated with it. For a composed service, both the execution of the composed service and the interactions among its constituent services are modelled. For runtime verification of service behaviour, we propose a technique that validates context sequence, context co-occurrence, and context timeliness, and we propose a framework for implementing the system.
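The paper's exact validation technique is not reproduced here, but a minimal runtime monitor makes the three checks concrete: context sequence, context co-occurrence, and context timeliness. The transition table, deadline, and example service below are invented for illustration.

```python
import time

class AFSMMonitor:
    """Minimal runtime monitor for a context-augmented FSM. It checks
    legal transition order (context sequence), presence of required
    context variables (co-occurrence), and arrival within a deadline
    (timeliness). The structure is illustrative, not the paper's design."""

    def __init__(self, transitions, deadline_s):
        self.transitions = transitions    # (state, event) -> (next_state, required_vars)
        self.deadline_s = deadline_s
        self.state = "start"
        self.last_t = time.monotonic()

    def step(self, event, context):
        now = time.monotonic()
        if now - self.last_t > self.deadline_s:
            raise RuntimeError(f"timeliness violated before event {event!r}")
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"illegal transition from {self.state!r} on {event!r}")
        next_state, required = self.transitions[key]
        missing = required - set(context)
        if missing:
            raise RuntimeError(f"co-occurrence violated: missing {sorted(missing)}")
        self.state, self.last_t = next_state, now

# Hypothetical two-step composed booking service.
monitor = AFSMMonitor(
    {("start", "book"): ("booked", {"user", "flight"}),
     ("booked", "pay"): ("paid", {"user", "amount"})},
    deadline_s=5.0,
)
monitor.step("book", {"user": "u1", "flight": "AI-101"})
monitor.step("pay", {"user": "u1", "amount": 120.0})
print(monitor.state)  # paid
```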
{"title":"Context based behavioural verification of composed web services modeled in finite state machines","authors":"D. Chenthati, H. Mohanty, A. Damodaram","doi":"10.1109/ICCCT2.2014.7066722","DOIUrl":"https://doi.org/10.1109/ICCCT2.2014.7066722","url":null,"abstract":"Monitoring service execution for finding run time errors is of prime interest in achieving resilient service provisioning for users on web. Though the services are modelled and verified for structural errors still behavioural errors may occur for many practical reasons e.g undefined user network malfunctioning and computational errors. This makes a need for run time checking of service behaviour to ensure correctness in service execution. Run time system checking is always tricky for time consideration as overhead in runtime verification may discourage service user for delay in service provisioning. This paper address run-time behaviour verification with respect to the contexts, a service is designed for. A service is modelled in AFSM Finite State Machine augmented with context information. Context of a state is defined by variables and their values associated with. In case of a composed service communication among constituent services is also modeled both execution of a composed service and interactions among its constituting services. For runtime service behaviour verification, here we propose a technique that validates context sequence, context co-occurrence and context timeliness. A framework is proposed for system implementation.","PeriodicalId":6860,"journal":{"name":"2021 RIVF International Conference on Computing and Communication Technologies (RIVF)","volume":"15 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82290847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}