Pub Date: 2014-11-28. DOI: 10.1109/IC3I.2014.7019822
K. Skouby, P. Lynggaard
In the near future, 5G technologies will connect the world, from the largest megacities to the smallest Internet of Things (IoT) devices, in an always-online fashion. Such a connected hierarchy must combine smart cities, smart homes, and the IoT into one large, coherent infrastructure. This paper suggests a four-layer model that joins and interfaces these elements by deploying technologies such as 5G, the IoT, the Cloud of Things, and distributed artificial intelligence. This new infrastructure offers many advantages and service possibilities, such as an interconnected IoT, smart homes with artificial intelligence, and a platform for new combined smart-home and smart-city services based on big data.
Title: Smart home and smart city solutions enabled by 5G, IoT, AAI and CoT services. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019669
B. Jalender, A. Govardhan, P. Premchand
Currently, when a file is dragged from a website to the local system, it is transferred as a link: the user selects the file and then the target folder, and the standard browser behavior is to create an Internet shortcut file when the item is dropped onto the computer. Transferring the actual file from the website into a folder behaves differently. The main objective of this paper is to develop an algorithm that uses drag and drop to transfer files to any computer, which saves time compared with conventional uploading and downloading. The basic idea of this paper is to classify software reusable components for uploading and downloading; we use a file tree structure to classify the components.
Title: A Novel approach for classifying software reusable components for upload and download. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
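The file-tree classification idea in the abstract above can be sketched as a nested dictionary: each reusable component is filed under a category path, and upload and download both navigate the same tree. The category names and file names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of classifying reusable components in a file tree.
# Each tree node is a dict of subcategories; leaves hold a component list.

def insert(tree, path, component):
    """File a component under the given category path, creating folders."""
    node = tree
    for part in path:                       # walk/create category folders
        node = node.setdefault(part, {})
    node.setdefault("_components", []).append(component)
    return tree

def lookup(tree, path):
    """Return the components stored at a category path."""
    node = tree
    for part in path:
        node = node[part]
    return node.get("_components", [])

repo = {}
insert(repo, ["gui", "widgets"], "Button.java")
insert(repo, ["gui", "widgets"], "Slider.java")
insert(repo, ["net"], "HttpClient.java")
print(lookup(repo, ["gui", "widgets"]))   # ['Button.java', 'Slider.java']
```

A drag-and-drop transfer would then resolve the drop target to one of these category paths before copying the component.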
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019758
Akash Yadav, Anandghan Waghmare, A. Sairam
Cyber-Physical Systems are a powerful and central element of the upcoming Internet of Things (IoT), as they provide spatially distributed interfaces to physical systems and can collect data, pre-process it, and forward it to a back-end database system. Time synchronization is crucial in such networks, both for data aggregation and for duty cycling of the nodes. Existing time synchronization mechanisms that concentrate on improving synchronization accuracy incur large overhead, which may restrict their applicability in such low-power scenarios. In this paper we propose a time synchronization mechanism that exploits the heterogeneity in the power availability of the nodes to provide both high synchronization accuracy and high energy efficiency. Analytical as well as empirical results indicate that the control message overhead of the proposed protocol is considerably lower than that of a popular existing scheme, with only a modest increase in the error rate.
Title: Exploiting node heterogeneity for time synchronization in low power sensor networks. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
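One way to picture heterogeneity-aware synchronization is to let a well-powered "anchor" node answer timestamp requests while battery nodes correct their clocks with a single NTP-style exchange, so low-power nodes send few control messages. This is a hypothetical sketch of the general idea, not the paper's actual protocol; the node names and delay values are invented.

```python
# Sketch: a battery node estimates its clock offset from one
# request/response exchange with a mains-powered anchor node.

def estimate_offset(t1, t2, t3, t4):
    """NTP-style offset: t1/t4 are the requester's send/receive times,
    t2/t3 are the anchor's receive/send times."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

class BatteryNode:
    def __init__(self, local_clock):
        self.clock = local_clock          # skewed local time
        self.offset = 0.0                 # correction toward anchor time

    def sync(self, anchor_time, link_delay=0.001):
        # One round trip with the anchor, symmetric link delay assumed.
        t1 = self.clock
        t2 = anchor_time + link_delay     # anchor receives the request
        t3 = t2                           # anchor replies immediately
        t4 = self.clock + 2 * link_delay  # reply arrives back
        self.offset = estimate_offset(t1, t2, t3, t4)

    def now(self):
        return self.clock + self.offset

node = BatteryNode(local_clock=100.0)     # node clock lags the anchor
node.sync(anchor_time=105.0)
print(round(node.now(), 3))               # corrected toward anchor time
```

With symmetric delays the delay terms cancel, which is why a single exchange suffices and the battery node's radio can stay off most of the time.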
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019682
D. Ganesh, V. Ramaprasad
Online social networks (OSNs) are now a popular way for users to connect, express themselves, and share content. Users in today's OSNs often post a profile consisting of attributes such as geographic location, interests, and schools attended. Such profile information is used on the sites as a basis for grouping users, for sharing content, and for suggesting users who may benefit from interaction. OSNs such as Facebook, Twitter, and LinkedIn have increasingly become de facto portals for billions of regular users worldwide. These OSNs offer attractive means for building relationships and sharing information, but they also cause a number of privacy problems for users. Although OSNs allow a user to restrict access to his or her own shared data, there is at present no effective mechanism to enforce privacy over confidential data associated with multiple users. Our analysis presents an approach to protect shared profiles, relationships, and content associated with multiple users in social networks. It captures the essence of multiparty authorization requirements, together with a multiparty policy specification scheme and an enforcement mechanism. Our access control model extends the features of traditional mechanisms; we perform analysis and design on the new model, and a comparative study covers usability, the problems of previous methods, and the advantages of ours.
Title: Protection of shared data among multiple users for online social networks. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
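The core of multiparty authorization described above is that a shared item (say, a photo) has several stakeholders, each with a policy, and access is granted only when every policy agrees. The sketch below is an illustrative conjunctive-enforcement model, with made-up user names and a simple "friends-only" rule; the paper's actual specification scheme is richer.

```python
# Hypothetical multiparty policy check: a viewer may see a shared item
# only if ALL stakeholders' policies (owner, tagged users, ...) allow it.

def friends_only(friends):
    """Policy factory: allow only the given friend set."""
    return lambda viewer: viewer in friends

def public():
    """Policy factory: allow anyone."""
    return lambda viewer: True

def can_view(item_policies, viewer):
    """Conjunctive enforcement: every stakeholder policy must allow."""
    return all(policy(viewer) for policy in item_policies.values())

photo_policies = {
    "alice": friends_only({"bob", "carol", "dave"}),  # owner
    "bob":   friends_only({"alice", "dave"}),         # tagged user
}

print(can_view(photo_policies, "dave"))   # True: allowed by both
print(can_view(photo_policies, "carol"))  # False: Bob's policy denies
```

A disjunctive or majority-voting combination could be substituted in `can_view` if stakeholders should not each hold a veto.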
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019580
D. K. Ravish, Nayana R. Shenoy, K. Shanthi, S. Nisargh
Heart attacks are the major cause of death in the world today, particularly in India, so predicting them is a major necessity for improving the country's healthcare sector. Accurate and precise prediction of heart disease depends mainly on electrocardiogram (ECG) data and clinical data, which must be fed to a nonlinear disease prediction model. This nonlinear heart function monitoring module must be able to detect arrhythmias such as tachycardia, bradycardia, myocardial infarction, atrial and ventricular fibrillation, atrial and ventricular flutter, and premature ventricular contractions (PVCs). In this paper we develop an efficient method to acquire the clinical and ECG data and use it to train an artificial neural network to diagnose the heart and predict abnormalities, if any. The overall process can be categorized into three steps. First, we acquire the patient's ECG using standard three-lead pre-gelled electrodes; the acquired ECG is then processed, amplified, and filtered to remove any noise captured during the acquisition stage, and the analog signal is converted into digital form by an A/D converter. Second, we acquire four to five relevant clinical measurements: mean arterial pressure (MAP), fasting blood sugar (FBS), heart rate (HR), cholesterol (CH), and age/gender. Finally, we use these two data sources, the ECG and the clinical data, to train the neural network to classify heart disease and predict abnormalities in the heart or its functioning.
Title: Heart function monitoring, prediction and prevention of Heart Attacks: Using Artificial Neural Networks. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
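The clinical-feature pipeline described above can be illustrated with a single logistic unit standing in for the trained network: the measurements (MAP, FBS, HR, cholesterol, age) are normalized to [0, 1] and combined into a risk score. The weights, bias, and normalization ranges below are placeholder assumptions for illustration, not values from the paper.

```python
import math

def normalize(value, lo, hi):
    """Scale a raw measurement into [0, 1] for the network input."""
    return (value - lo) / (hi - lo)

def predict_risk(features, weights, bias):
    """One logistic neuron: weighted sum followed by a sigmoid."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

patient = [
    normalize(110, 70, 130),   # mean arterial pressure, mmHg
    normalize(150, 70, 200),   # fasting blood sugar, mg/dL
    normalize(95, 40, 180),    # heart rate, bpm
    normalize(260, 120, 300),  # cholesterol, mg/dL
    normalize(61, 20, 90),     # age, years
]
weights = [1.2, 0.9, 0.8, 1.1, 0.7]      # placeholder learned weights
risk = predict_risk(patient, weights, bias=-2.5)
print(round(risk, 3))                    # probability-like score in (0, 1)
```

In the full system this unit would be replaced by a multilayer network trained jointly on the ECG-derived and clinical features.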
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019794
Vedpal, N. Chauhan, Harish Kumar
Software reuse is the use of existing artifacts to create new software, and inheritance is the foremost technique of reuse. However, the inherent complexity of the inheritance hierarchies found in the object-oriented paradigm also affects testing. Every time the software changes, new test cases are added to the existing test suite, so effective regression testing with fewer test cases is needed to reduce cost and time. In this paper a hierarchical test case prioritization technique is proposed that considers various factors affecting error propagation through inheritance. Prioritization takes place at two levels: first the classes are prioritized, and then the test cases of the prioritized classes are ordered. To show the effectiveness of the proposed technique, it was applied to and analyzed on a C++ program.
Title: A hierarchical test case prioritization technique for object oriented software. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
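The two-level ordering described above can be sketched directly: rank the classes by some fault-proneness score (the paper derives it from inheritance-related factors; the scores here are invented), then order each class's test cases by their own scores. Class and test names are illustrative.

```python
# Minimal two-level test case prioritization:
# level 1 orders classes, level 2 orders tests within each class.

def prioritize(classes):
    """classes maps name -> (class_score, {test_name: test_score});
    returns (class, test) pairs in execution order."""
    ordered = []
    ranked = sorted(classes.items(), key=lambda kv: kv[1][0], reverse=True)
    for cls, (_, tests) in ranked:                       # level 1: classes
        for test in sorted(tests, key=tests.get, reverse=True):
            ordered.append((cls, test))                  # level 2: tests
    return ordered

suite = {
    "Base":    (0.4, {"t1": 0.9, "t2": 0.2}),
    "Derived": (0.8, {"t3": 0.5, "t4": 0.7}),  # deeper in the hierarchy
}
print(prioritize(suite))
```

All of `Derived`'s tests run before any of `Base`'s, reflecting the intuition that errors propagate down the inheritance hierarchy.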
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019590
Shreekanth T, V. Udayashankara
An Optical Braille Character Recognition (OBR) system is in significant need, both to preserve Braille documents and make them available in future to the large population of visually impaired people, and to make bi-directional communication between sighted and visually impaired people feasible. Recognizing and transcribing a double-sided Braille document into its corresponding natural-language text is a challenging task, because the front-side dots (recto) overlap with the back-side dots (verso) in an inter-point Braille document. In such cases, the usual template-matching method for distinguishing recto and verso dots is not efficient. In this paper a new system for double-sided Braille dot recognition is proposed that employs a highly efficient, adaptive two-stage technique to differentiate the recto and verso dots of inter-point Braille using the projection profile method. We present (i) a horizontal projection profile for Braille line segmentation, (ii) a vertical projection profile for Braille word segmentation, and (iii) the integration of horizontal and vertical projection profiles with distance thresholding for Braille character segmentation. We demonstrate the effectiveness of this segmentation technique on a large dataset consisting of 754 words from Hindi Devanagari Braille documents with varying image resolutions and different word patterns. A recognition rate of 96.9% has been achieved.
Title: A two stage Braille Character segmentation approach for embossed double sided Hindi Devanagari Braille documents. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
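The horizontal-projection step above can be demonstrated on a tiny binary image (1 = dot pixel): summing each row gives a profile, and runs of non-zero rows delimit one Braille line. The sample image is invented for illustration; the same idea applied column-wise gives the vertical profile used for word segmentation.

```python
# Line segmentation by horizontal projection profile on a binary image.

def horizontal_profile(image):
    """Sum of dot pixels in each row."""
    return [sum(row) for row in image]

def segment_lines(image):
    """Return (start, end) row ranges whose profile is non-zero."""
    profile = horizontal_profile(image)
    lines, start = [], None
    for i, value in enumerate(profile):
        if value and start is None:
            start = i                      # a line of dots begins
        elif not value and start is not None:
            lines.append((start, i - 1))   # blank row ends the line
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

image = [
    [0, 1, 0, 1],   # row 0: dots -> line 1
    [1, 0, 0, 0],   # row 1: dots -> line 1
    [0, 0, 0, 0],   # row 2: blank gap between lines
    [0, 1, 1, 0],   # row 3: dots -> line 2
]
print(segment_lines(image))   # [(0, 1), (3, 3)]
```

Distance thresholding between detected runs is what then separates recto from verso dot rows, since the two grids are offset from each other.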
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019655
J. Das, P. Mukherjee, S. Majumder, Prosenjit Gupta
Recommender systems (RS) are widely used to provide automatic personalized suggestions for information, products, and services, and collaborative filtering (CF) is one of the most popular recommendation techniques. However, with the rapid growth of the Web in terms of users and items, the majority of RS using CF suffer from problems such as data sparsity and scalability. In this paper, we present a recommender system based on data clustering techniques to deal with the scalability problem associated with the recommendation task, and we use different voting systems as the algorithms that combine opinions from multiple users when recommending items of interest to a new user. The proposed work uses the DBSCAN clustering algorithm to cluster the users and then applies voting algorithms to recommend items to a user depending on the cluster to which the user belongs. The idea is to partition the users of the RS with the clustering algorithm and apply the recommendation algorithm separately to each partition: our system recommends items to a user in a specific cluster using only the rating statistics of the other users of that cluster. This reduces the running time of the algorithm, since we avoid computations over the entire dataset. Our objective is to improve the running time while maintaining acceptable recommendation quality. We have tested the algorithm on the Netflix Prize dataset.
Title: Clustering-based recommender system using principles of voting theory. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
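The voting step can be sketched with a Borda count, one classic voting-theory rule: within the active user's cluster (found by DBSCAN, not shown here), each member ranks the items it has rated, and the positional scores are summed into a recommendation order. The cluster members, items, and preference orders below are invented; the paper compares several voting rules, of which Borda is only one example.

```python
# Borda-count aggregation of cluster members' preference rankings.

def borda(rankings, num_candidates):
    """Each ranking lists items best-first; the item at position p
    earns (num_candidates - 1 - p) points. Returns items by total score."""
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (num_candidates - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

cluster_rankings = [
    ["m1", "m3", "m2"],   # member A's preference order over movies
    ["m3", "m1", "m2"],   # member B
    ["m3", "m2", "m1"],   # member C
]
print(borda(cluster_rankings, num_candidates=3))   # ['m3', 'm1', 'm2']
```

Because only the active user's cluster votes, the cost per recommendation scales with the cluster size rather than the full user base.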
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019793
Archana Nandibewoor, R. Hegadi, Prashant Adiver
Hyperspectral remote sensing is one of the emerging technologies that can be used to study vegetation. A hyperspectral satellite image of the western part of Indiana was adopted for our study and used to calculate different spectral indices. This paper presents the spectral indices that show significant changes with variation in vegetation and can therefore be used to monitor it: NDVI (normalized difference vegetation index), SRPI (simple ratio pigment index), red edge (Clrededge), and SG (VI green). All of these indices showed significant changes with changes in chlorophyll and nitrogen concentration, and plots of reflectance against wavelength showed different curves for different areas. From this study it can be inferred that hyperspectral data can also be used to identify areas of dense forest, farmland, and bare land. Satellite images can thus yield a great deal of information that remains to be explored.
Title: Identification of vegetation from satellite derived hyper spectral indices. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).
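NDVI, the first index listed above, is computed per pixel from the near-infrared and red reflectance bands as (NIR - RED) / (NIR + RED); values near +1 indicate dense vegetation and values near 0 indicate bare ground. The reflectance values below are invented for illustration, not taken from the Indiana scene.

```python
# Per-pixel NDVI from near-infrared and red reflectance.

def ndvi(nir, red):
    """Normalized difference vegetation index in [-1, 1]."""
    return (nir - red) / (nir + red)

# (NIR, RED) reflectance pairs for three hypothetical pixels:
# dense vegetation, sparse vegetation, bare soil.
pixels = [(0.50, 0.08), (0.30, 0.25), (0.20, 0.19)]
values = [round(ndvi(nir, red), 3) for nir, red in pixels]
print(values)   # the vegetated pixel scores highest
```

The other indices in the paper (SRPI, red edge, SG) are analogous band-ratio formulas over different wavelength pairs.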
Pub Date: 2014-11-01. DOI: 10.1109/IC3I.2014.7019797
K. Prakash, T. V. Ananthan, V. Rajavarman
The rapid growth of the World Wide Web has led to a dramatic increase in accessible information. Today, people use the Web for a large variety of activities, including travel planning, entertainment, and research. However, the tools available for collecting, organizing, and sharing web content have not kept pace with this rapid growth, and a major complication arises when web documents are displayed in regional languages: understanding the content of the document, and later communicating it orally or in text, becomes difficult. This is the area the current paper addresses. To overcome the difficulty, a novel concept-based mining model is proposed that describes how knowledge is created in the mind of an illiterate user. The paper first presents how letters and words, which form the basis of text-based communication, can be used to represent content. Artificial neural network training then allows a comparative study against the statistical interpretation studied earlier.
Title: Neural network framework for multilingual Web documents. Published in: 2014 International Conference on Contemporary Computing and Informatics (IC3I).