Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435706
T. Kartheeswaran, V. Senthooran, T. D. D. L. Pemadasa
Today's information technology grows rapidly with new technologies, and data transmission over the internet demands ever more security. Steganography is one solution for keeping data secure on the internet. Audio steganography transfers concealed information by modifying a cover audio file without degrading the quality of the original. In a good steganographic system, the cover medium before embedding and the stego medium after embedding should share the same characteristics. Agent technology distributes services in a flexible manner; since most present-day applications are built as services, data security can also be offered as a service. Agent-based steganography improves the efficiency of a secure steganographic system and offers greater flexibility and availability. In this paper, we present a trusted communication platform of multiple agents that hide a confidential message in a cover audio stream on user request and retrieve the hidden information from the stego audio file. The system provides high availability and flexibility in this context and a more feasible way to trust the message transmission. This work is in progress; the design phase has been completed with satisfactory results.
Title: Multi agent based audio steganography
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
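The embedding step the agents perform can be illustrated by a minimal LSB sketch (assuming audio samples as plain signed integers; the multi-agent platform itself is not shown):

```python
def embed_bits(samples, message_bits):
    """Replace the least significant bit of each sample with one
    bit of the secret message; later samples are left untouched."""
    stego = list(samples)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_bits(samples, n_bits):
    """Recover the hidden bits from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]
```

Because only the lowest bit of each sample changes, the stego stream differs from the cover by at most one quantization step per sample, which is why the audible quality is preserved.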
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435728
M. Tamilselvi, G. Ramkumar
Diabetes is considered one of the major contributors to chronic illness and death among non-infectious diseases. The common method for determining blood glucose concentration is a self-monitoring glucose meter, which requires pricking the finger, drawing blood, and performing a chemical analysis with disposable test strips. The pain and difficulty of this method have led to the development of a noninvasive alternative, which uses a near-infrared sensor to transmit rays through the fingertip and receive them. Near-infrared (NIR) light is passed through the fingertip before and after the flow of blood is occluded. By analyzing the variation in received signal intensity after reflection in the two cases, the glucose present in the blood can be estimated and transmitted wirelessly to a remote PC. The results show the potential of glucose measurement using near-infrared light.
Title: Non-invasive tracking and monitoring glucose content using near infrared spectroscopy
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
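As an illustration of the underlying optics, received intensity can be converted to absorbance with the Beer-Lambert relation, and the change in absorbance between the free-flow and occluded cases mapped to a glucose estimate through a linear calibration. The slope and intercept below are hypothetical placeholders; a real device would fit them against reference measurements:

```python
import math

def absorbance(received, incident):
    """Beer-Lambert absorbance from received vs incident NIR intensity."""
    return -math.log10(received / incident)

def estimate_glucose(i_free, i_occluded, incident, slope, intercept):
    """Map the change in absorbance (free vs occluded blood flow)
    to a glucose estimate via a linear calibration.
    slope/intercept are device-specific and must be fitted."""
    delta_a = absorbance(i_occluded, incident) - absorbance(i_free, incident)
    return slope * delta_a + intercept
```

Occluding the blood flow changes the amount of absorbing blood in the light path, so the difference in absorbance isolates the blood's contribution from that of skin and tissue.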
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435701
V. Srikanth, N. R. Kisore
To date a number of comprehensive techniques have been proposed to defend against buffer overflow attacks. While in theory these techniques aim to detect and defend against all forms of buffer overflow, in practice attackers constantly find ways to bypass the protection mechanisms. In addition, many of the mechanisms proposed in the literature are never absorbed into production systems because they suffer from performance problems, such as high operational overhead in system memory and/or CPU cycles, or incompatibility with legacy systems. Further, none of the proposed security mechanisms guarantees 100% protection against an attacker. On the other hand, with the growth in digital data and in the number of devices connected to the internet, the amount of information lost in a large-scale cyber attack is ever increasing. While a theoretical study of security is often sufficient to identify the weaknesses of IT systems, an empirical evaluation is necessary to perform a cost-benefit analysis between the number of computers hijacked in a large-scale cyber attack (an indirect measure of the information lost) and the buffer overflow protection mechanism adopted. In this paper we propose an architecture for the creation of an experimental test bed to evaluate the effectiveness of a buffer overflow protection mechanism by measuring the overhead incurred versus its effectiveness in defending against a large-scale cyber attack.
Title: Design of experimental test bed to evaluate effectiveness of software protection mechanisms against buffer overflow attacks through emulation
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
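The cost-benefit comparison such a test bed enables can be sketched as a simple expected-value calculation: the loss avoided by a protection mechanism, minus its operational cost. All figures here are hypothetical; the test bed would supply measured compromise rates and overheads:

```python
def net_benefit(hosts, loss_per_host,
                p_compromise_base, p_compromise_protected, overhead_cost):
    """Expected loss avoided by deploying a protection mechanism,
    minus its operational cost. Inputs are illustrative:
    p_compromise_* are per-host compromise probabilities measured
    with and without the mechanism during an emulated attack."""
    avoided = hosts * loss_per_host * (p_compromise_base - p_compromise_protected)
    return avoided - overhead_cost
```

A mechanism is worth deploying, under this simple model, only when the measured reduction in compromises outweighs its measured overhead; that is precisely the tradeoff the test bed is designed to quantify.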
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435692
R. K. Pathak, S. Meena
This paper proposes an image steganography method based on modification of the least significant bits (LSB) of pixels: the LSBs of the cover image (CI) pixels are replaced with the most significant bits (MSB) of the data image (DI), which keeps the algorithm simple. Security is also a major concern in image steganography, so a key-based PN sequence is generated and used to protect the algorithm against steganalytic attack. The algorithm is further extended to detect and locate tampering by malicious attackers, achieved by converting the image into a fixed-point image using the GCD (Gaussian convolution and de-convolution) transform.
Title: LSB based image steganography using PN sequence & GCD transform
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
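A minimal sketch of the embedding idea, assuming 8-bit grayscale pixels as flat lists: a key-seeded shuffle stands in for the key-based PN sequence, and the top k bits of each data-image pixel replace the low k bits of a cover pixel. This is an illustration of the scheme, not the authors' exact algorithm (the GCD-transform tamper detection is omitted):

```python
import random

def pn_order(n_pixels, key):
    """Key-seeded pseudo-random visiting order for cover pixels;
    without the key, an attacker cannot locate the payload."""
    rng = random.Random(key)
    order = list(range(n_pixels))
    rng.shuffle(order)
    return order

def embed(cover, data, key, k=4):
    """Hide the k most significant bits of each data pixel in the
    k least significant bits of a cover pixel chosen by the PN order."""
    stego = list(cover)
    order = pn_order(len(cover), key)
    mask = (1 << k) - 1
    for pos, d in zip(order, data):
        msb = d >> (8 - k)                     # top k bits of the data pixel
        stego[pos] = (stego[pos] & ~mask) | msb
    return stego

def extract(stego, key, n, k=4):
    """Recover the first n embedded pixels (MSB-truncated)."""
    order = pn_order(len(stego), key)
    mask = (1 << k) - 1
    return [(stego[pos] & mask) << (8 - k) for pos in order[:n]]
```

Because only the MSBs of the data image are kept, extraction recovers an approximation of the hidden image, which is the usual tradeoff in MSB-into-LSB substitution.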
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435688
T. Kumuda, L. Basavaraj
Text in camera-captured images contains important and useful information and can be used for identification, indexing, and retrieval. Detecting and localizing text in camera-captured images is still a challenging task due to the high variability of text appearance. In this paper we propose an efficient algorithm for detecting and localizing text in natural scene images. The method is based on texture feature extraction using first- and second-order statistics, and the work is divided into two stages. In the first stage, text regions are detected using texture features, and discriminative functions filter out non-text regions. In the second stage, the detected text regions are merged and localized. Experimental results show that the proposed approach efficiently detects and localizes text of various sizes, fonts, orientations, and languages.
Title: Detection and localization of text from natural scene images using texture features
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
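The kinds of first- and second-order statistics such methods rely on can be sketched as follows. This is a generic illustration with NumPy (histogram statistics plus the contrast of a grey-level co-occurrence matrix); the paper's exact feature set and discriminative functions are not specified here:

```python
import numpy as np

def first_order(patch):
    """First-order texture statistics of an 8-bit grayscale patch:
    mean, variance, and histogram entropy."""
    p = patch.astype(float)
    hist = np.bincount(patch.ravel(), minlength=256) / patch.size
    nz = hist[hist > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return p.mean(), p.var(), entropy

def glcm_contrast(patch, levels=256):
    """Second-order statistic: contrast of the grey-level
    co-occurrence matrix for the horizontal neighbour offset."""
    glcm = np.zeros((levels, levels))
    left, right = patch[:, :-1].ravel(), patch[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)   # count co-occurring pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))
```

Text regions tend to show high local contrast and entropy compared with smooth background, which is what makes features like these discriminative.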
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435815
Sanyam Shukla, Jivitesh Sharma, Shankul Khare, Samruddhi Kochkar, Vanya Dharni
The extreme learning machine (ELM) is a state-of-the-art supervised machine learning technique for classification and regression. A single ELM classifier can, however, produce faulty or skewed results because the weights between the input and hidden layers are initialized randomly. Ensemble methods can be employed to overcome this instability, but ensembles may contain redundant classifiers, i.e., weak or highly correlated ones. Ensemble pruning can remove these redundant classifiers; the pruned ensemble should be not only accurate but also diverse, in order to classify boundary instances correctly. This work proposes an ensemble pruning algorithm that establishes a tradeoff between accuracy and diversity, along with a metric that scores classifiers by their diversity and their contribution to the ensemble. The results show that the pruned ensemble performs as well as, and in some cases better than, the unpruned set in terms of accuracy and diversity, and that the proposed algorithm outperforms VELM. The proposed algorithm reduces the ensemble to less than 60% of its original size (set to 50 classifiers).
Title: A novel sparse ensemble pruning algorithm using a new diversity measure
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
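One plausible instantiation of the accuracy/diversity tradeoff (not the paper's exact metric) is to score each candidate classifier by its accuracy plus its mean disagreement with the classifiers already kept, and select greedily:

```python
def disagreement(p1, p2):
    """Fraction of instances on which two classifiers differ."""
    return sum(a != b for a, b in zip(p1, p2)) / len(p1)

def prune(predictions, labels, size, lam=0.5):
    """Greedily keep `size` classifiers: seed with the most accurate,
    then repeatedly add the candidate maximising
    accuracy + lam * mean disagreement with those already kept."""
    acc = [sum(p == y for p, y in zip(pred, labels)) / len(labels)
           for pred in predictions]
    kept = [max(range(len(predictions)), key=lambda i: acc[i])]
    while len(kept) < size:
        rest = [i for i in range(len(predictions)) if i not in kept]

        def score(i):
            div = sum(disagreement(predictions[i], predictions[j])
                      for j in kept) / len(kept)
            return acc[i] + lam * div

        kept.append(max(rest, key=score))
    return kept
```

The diversity term penalizes classifiers that merely duplicate what the kept set already predicts, which is exactly the redundancy the abstract describes.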
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435712
Deepak Kumar, Pankaj Verma
The rapid growth of wireless services demands more and more spectrum, but spectrum is a limited resource. Recent studies show that the spectrum allocated to licensed, or primary, users (PUs) is not fully utilized. Cognitive radio (CR) addresses this problem by detecting vacant spectrum and using it when it is available. However, when there are multiple unlicensed, or secondary, users (SUs), deciding which SU may access the spectrum is an open issue in cognitive radio networks. In this paper, two fuzzy logic models, Mamdani and Takagi-Sugeno, are used to select the SU that will access the spectrum, based on three input parameters: spectrum efficiency, mobility of the SU, and distance between the PU and SU. From these input parameters, 27 fuzzy rules are framed, and the resulting output gives the selection possibility of an SU to access the spectrum. The two models are compared by computing the correlation value for each input parameter.
Title: Comparative study of Mamdani & Takagi-Sugeno models for spectrum access in cognitive radio networks
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
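A zero-order Takagi-Sugeno evaluation of the SU selection can be sketched with triangular memberships. Only three of the 27 rules are shown, and the term ranges and rule consequents below are made-up illustrative values, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms over inputs normalised to 0..1 (assumed ranges).
low  = lambda x: tri(x, -0.5, 0.0, 0.5)
med  = lambda x: tri(x,  0.0, 0.5, 1.0)
high = lambda x: tri(x,  0.5, 1.0, 1.5)

def selection_possibility(eff, mob, dist):
    """Zero-order Takagi-Sugeno inference: each rule fires with
    strength min(memberships) and contributes a constant consequent;
    the crisp output is the firing-strength-weighted average."""
    rules = [
        (min(high(eff), low(mob), low(dist)),   0.9),  # best candidate
        (min(med(eff),  med(mob), med(dist)),   0.5),
        (min(low(eff),  high(mob), high(dist)), 0.1),  # worst candidate
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A Mamdani variant would replace the constant consequents with fuzzy output sets and defuzzify (e.g., by centroid); comparing the two outputs per input parameter is the correlation study the paper performs.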
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435774
Deepak N R, S. Balaji
The use of Multiple Input Multiple Output (MIMO) over wireless networks has grown rapidly in recent years and is expected to grow further. Multiple transmit and receive antennas can be introduced in the next generation of wireless network standards for real-time image communication, which requires large bandwidth. Representing image data requires a large amount of information, which leads to high data rates and, in turn, high communication energy and distortion in the transmitted image. Various competing MIMO transmission techniques, namely ODQ, BST, OBST, RO, and CO, are used to improve image quality. This paper discusses these MIMO transmission techniques for image quality over 4G wireless networks.
Title: Performance analysis of MIMO-based transmission techniques for image quality in 4G wireless network
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435743
Kalaivani Ravisekaran, Sivakumar Ramakrishnan
The Learning Management System (LMS) has become an indispensable component of virtual learning systems, offering various facilities that enrich the learning process in a virtual environment. However, existing Learning Management Systems need to be extended with additional ubiquitous functionality. This paper therefore proposes a framework that adds a U-conferencing facility to the LMS, making it rich in ubiquitous capability. The framework ensures no latency and support for visually disabled learners in a ubiquitous LMS environment.
Title: Towards development of U-conferencing facility in learning management system
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
Pub Date: 2015-12-01. DOI: 10.1109/ICCIC.2015.7435628
D. Suryanarayana, S. M. Hussain, Prathyusha Kanakam, S. Gupta
Making the Semantic Web (SW) practical comprises numerous tasks at the leading edge of current computer technology. It cannot be carried out without human intervention, but an expert can accomplish it by giving a machine moderate training. Semantic search seeks the actual context of a user's search query based on meaning rather than content matching; most search engines, however, return only a mass list of results instead of using the semantic properties of Web representations to capture user intentions. This paper shows the significance of Web knowledge representation and application integration, the two basic aspects of semantic programming, for retrieving information that matches a user's intentions. A comparative study and analysis of popular search engines that fail to retrieve relevant information for a set of test queries is provided.
Therefore, this paper presents an effective way to determine such information by navigating the semantic Web using the Resource Description Framework (RDF) and the Web Ontology Language (OWL).
Title: Stepping towards a semantic web search engine for accurate outcomes in favor of user queries: Using RDF and ontology technologies
Published in: 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)
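The triple-pattern matching that underlies navigating RDF data can be illustrated with a toy in-memory store (a stand-in for a real RDF library, not the authors' system). `None` in a query pattern acts as a wildcard, much as a variable does in a SPARQL triple pattern:

```python
class TripleStore:
    """A toy RDF-style store: facts are (subject, predicate, object)
    tuples; query patterns with None wildcards mimic how a SPARQL
    engine matches triple patterns against a graph."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the (s, p, o) pattern."""
        return [t for t in self.triples
                if all(q is None or q == v for q, v in zip((s, p, o), t))]
```

Chaining such pattern queries (find a resource, then query its properties) is the graph navigation that lets a semantic engine answer by meaning rather than by keyword matching.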