Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692085
Runumi Devi, D. Mehrotra, Hajer Baazaoui-Zghal
The availability of publication data is significant for research development, and a global publication dataset on the semantic web would greatly help the research community. The W3C provides a semantic web standard, the RDB to RDF Mapping Language (R2RML), which allows customized mappings from relational databases to RDF datasets to be expressed. This paper discusses a convergence approach, PubWorld, that uses R2RML to generate mapping files from three disparate relational databases: two publication databases and one world database. The publication data are made shareable directly from the mapping files by converting them into local ontologies and merging these, together with one existing ontology, into a single global ontology. The Header-Dictionary-Triples (HDT) compression technique is used to store the global ontology, achieving large spatial savings. SPARQL (SPARQL Protocol and RDF Query Language) queries executed with Jena ARQ on both the RDF and HDT versions show similar running times.
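A toy illustration of the merge-and-query idea in the abstract (not the paper's actual R2RML/Jena pipeline): local "ontologies" represented as sets of (subject, predicate, object) triples, merged into one global graph and queried with a small SPARQL-like pattern matcher. All names are made up.

```python
# Toy sketch: local triple sets merged into one global graph, then queried.

def merge_graphs(*graphs):
    """Union several local triple sets into one global graph."""
    merged = set()
    for g in graphs:
        merged |= set(g)
    return merged

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard."""
    return [t for t in graph
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

pub_a = {("paper1", "hasAuthor", "Devi"), ("paper1", "year", "2018")}
pub_b = {("paper2", "hasAuthor", "Roy")}
world = {("Devi", "country", "India")}

global_graph = merge_graphs(pub_a, pub_b, world)
authors = match(global_graph, p="hasAuthor")
print(len(global_graph), len(authors))  # 4 triples total, 2 author triples
```

The real system expresses such patterns in SPARQL over the HDT-compressed store; the wildcard matcher above only mirrors the basic triple-pattern semantics.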
Title: PubWorld: An R2RML Mapping Driven Approach to Transform Relational Database Data into Shareable Format (2018 IEEE 8th International Advance Computing Conference (IACC))
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692133
Chhanda Roy, C. Chaudhuri
Research on the automation of examination systems has been carried out in past and recent years, but most of it targets online examinations with either multiple-choice questions or, at best, very short descriptive answers. The primary goal of this paper is to propose a framework in which textual papers set for subjective questions are supplemented with model answer points, facilitating the evaluation procedure in a semi-automated manner. The proposed framework also accommodates reward and penalty schemes. In the reward scheme, additional valid points provided by examinees earn them bonus marks; by incrementally upgrading the question case-base with these extra answer points, the examiner builds automatic fairness into the checking procedure. In the penalty scheme, unfair means adopted among neighboring examinees can be detected by maintaining seat plans as a neighborhood graph, and the degree of penalization can then be impartially ascertained by computing the similarity among adjoining answer scripts. The main question bank and the model answer points are all maintained using Case-Based Reasoning strategies.
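A hypothetical sketch of the penalty scheme described above: a seat-plan neighborhood graph, with similarity between adjoining answer scripts measured by Jaccard overlap of their answer-point sets. The names, threshold, and similarity choice are illustrative assumptions, not the paper's exact method.

```python
# Flag neighboring examinees whose answer-point sets overlap suspiciously.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_pairs(neighbors, scripts, threshold=0.8):
    """Return neighbor pairs whose scripts are suspiciously similar."""
    flagged = []
    for u, adj in neighbors.items():
        for v in adj:
            if u < v and jaccard(scripts[u], scripts[v]) >= threshold:
                flagged.append((u, v))
    return flagged

# Seat plan as a neighborhood graph: s1 - s2 - s3
neighbors = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}
scripts = {
    "s1": ["pointA", "pointB", "pointC"],
    "s2": ["pointA", "pointB", "pointC"],   # identical to s1 -> flagged
    "s3": ["pointD"],
}
print(flag_pairs(neighbors, scripts))  # -> [('s1', 's2')]
```

The flagged pair's overlap score could then scale the penalty, matching the paper's idea of an impartially ascertained degree of penalization.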
Title: Case Based Modeling of Answer Points to Expedite Semi-Automated Evaluation of Subjective Papers
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692122
N. Meena, Buddha Singh
Network connectivity and sensing coverage are the most important Quality of Service issues in the design of a Wireless Sensor Network (WSN). Coverage refers to how well the sensors monitor or track the target area. In hostile environments where human intrusion is difficult or impossible, sensors are dropped from airplanes, so their distribution over the area cannot be uniform: some areas may be left uncovered while sensors elsewhere overlap. Redundant sensors improve coverage and connectivity but increase energy depletion, and monitoring coverage holes is an important task because of their harmful effect on WSNs. In this paper, we propose a model that extends network lifetime and maximizes the coverage rate using a multi-objective optimization approach, aiming for maximum coverage, minimum energy consumption, and maximum network lifetime simultaneously. The non-dominated sorting genetic algorithm (NSGA-II) is used to optimize the coverage problem. Simulation results show that the proposed method improves the coverage probability and lifetime of the network while maintaining its connectivity.
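The multi-objective core can be illustrated by extracting the Pareto (non-dominated) front over two objectives, maximizing coverage while minimizing energy, in the spirit of NSGA-II's non-dominated sorting. The candidate values below are made up for illustration.

```python
# Extract the non-dominated front from candidate sensor configurations.

def dominates(a, b):
    """a=(coverage, energy) dominates b when it is no worse on both
    objectives and strictly better on at least one."""
    (cov_a, en_a), (cov_b, en_b) = a, b
    return (cov_a >= cov_b and en_a <= en_b
            and (cov_a > cov_b or en_a < en_b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# (coverage rate, energy used) for candidate configurations
candidates = [(0.90, 5.0), (0.80, 3.0), (0.85, 6.0), (0.95, 5.5), (0.70, 3.5)]
front = pareto_front(candidates)
print(front)  # the dominated (0.85, 6.0) and (0.70, 3.5) are filtered out
```

NSGA-II additionally ranks the remaining solutions into successive fronts and applies crowding-distance selection; the dominance test above is the building block all of that rests on.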
Title: Coverage Maximization using Multi-Objective Optimization Approach for Wireless Sensor Network in Real Time Environment
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692136
Jayashree Nair
Machine Translation is a branch of research under Computational Linguistics that deals with the automatic or semi-automatic translation of one natural (human) language into another. The language being translated is termed the Source Language (SL), and the language into which translation is done is termed the Target Language (TL). This paper presents an English-to-Indian-languages machine translation technique based on rules of grammar, namely word declensions (inflections) and the sentence-formation rules of the target languages. Declensions are variations or inflections of words, and Indian languages are richly declensional. This study focuses on generating noun-declension case markers for English to Indian languages in declension-rule-based machine translation. The paper also describes the various approaches to machine translation along with their system architectures. The proposed declension-based RBMT system is explained with its architecture, each module and its functionality are elaborated in detail, and the system's input and output are described with an example. Related research systems, such as ANUSAARAKA and ANGLABHARATI, are also explored.
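A minimal sketch of a declension-rule lookup for case-marker generation. The Hindi postpositions below (ko, se, ka, mein) are standard case markers, but this toy table and function are our own illustration, not the paper's rule base or architecture.

```python
# Toy case-marker table: grammatical case -> Hindi postposition.
CASE_MARKERS = {
    "accusative": "ko",     # object marker
    "instrumental": "se",   # "with / by"
    "genitive": "ka",       # possessive (gender agreement ignored here)
    "locative": "mein",     # "in"
}

def decline(noun_translit, case):
    """Attach the case marker as a separate postposition after the noun."""
    marker = CASE_MARKERS.get(case)
    if marker is None:
        raise ValueError("no rule for case: " + case)
    return noun_translit + " " + marker

print(decline("ladke", "accusative"))  # -> ladke ko ("to the boy")
```

A full RBMT pipeline would first parse the English sentence to identify each noun's grammatical case, then apply such rules per target language along with gender and number agreement, which this fragment deliberately omits.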
Title: Generating Noun Declension-case markers for English to Indian Languages in Declension Rule based MT Systems
Quite often, when the problem under study involves binary classification, we face unbalanced class labels: the negative class dominates the positive class, so the model cannot learn enough of the minority class's complexity to classify it correctly. Bagging and boosting classifiers have recently gained popularity due to their robustness to unbalanced class labels; both use the notion of an ensemble to generalize the model and predict on unseen data. In this paper we explore the improvement in classification performance achieved by bagging and boosting classifiers on an unbalanced binary classification dataset.
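A toy, pure-Python sketch of bagging on unbalanced data (illustrative only, not the paper's experimental setup): bootstrap samples each train a simple one-feature threshold "stump", and the ensemble predicts by majority vote. The data and stump rule are made up.

```python
import random

def train_stump(xs, ys):
    """Choose (threshold, sign) minimizing training errors on one feature."""
    best, best_err = None, len(xs) + 1
    for t in sorted(set(xs)):
        for sign in (1, -1):
            err = sum((1 if sign * (x - t) >= 0 else 0) != y
                      for x, y in zip(xs, ys))
            if err < best_err:
                best_err, best = err, (t, sign)
    return best

def bagging_predict(models, x):
    votes = sum(1 if sign * (x - t) >= 0 else 0 for t, sign in models)
    return 1 if 2 * votes >= len(models) else 0

random.seed(0)
# Unbalanced toy data: ten negatives near 0, two positives near 5.
xs = [0.1, 0.3, 0.2, 0.4, 0.0, 0.5, 0.3, 0.1, 0.2, 0.4, 5.0, 5.2]
ys = [0] * 10 + [1] * 2

models = []
for _ in range(25):  # 25 bootstrap rounds
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    models.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))

print(bagging_predict(models, 5.3), bagging_predict(models, 0.2))
```

Most bootstrap samples contain at least one minority example, so the vote recovers the positive class even though any single resample might miss it; boosting instead reweights the points each round toward those currently misclassified.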
Title: Review of Bagging and Boosting Classification Performance on Unbalanced Binary Classification
Authors: Yashasvi Singhal, Ayushi Jain, Shreya Batra, Yash Varshney, Megha Rathi
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692138
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692102
Laxmi Chaudhary, Buddha Singh
Finding communities in a complex network is a tedious task. In this paper, we propose a Fast Cosine Shared Link (FCSL) method for unveiling and analyzing the concealed community structure of a network, using the cosine similarity measure to quantify node similarity. We also evaluate the time taken to identify the communities. Substantial experiments show the potential of the proposed method to successfully find communities in real-world network datasets; our method outperforms other existing techniques, slightly improving on their results while producing reliable output. Performance is evaluated in terms of the communities found, the modularity value, and the time taken to detect the communities.
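The similarity measure behind a shared-link method can be sketched as cosine similarity over node neighborhoods: the number of shared neighbors of two nodes, normalized by the geometric mean of their degrees. The tiny graph below is illustrative, not from the paper's datasets.

```python
import math

def cosine_similarity(adj, u, v):
    """|N(u) & N(v)| / sqrt(|N(u)| * |N(v)|) over neighbor sets."""
    nu, nv = set(adj[u]), set(adj[v])
    if not nu or not nv:
        return 0.0
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"a", "b"},
}
print(cosine_similarity(adj, "c", "d"))  # shared neighbors {a, b} -> 1.0
```

Pairs scoring above a threshold would be merged into the same community, and modularity can then score the resulting partition.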
Title: Community Detection using Fast Cosine Shared Link Method
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692116
Nikita Goel, D. Jain, Adwitiya Sinha
Owing to changing climatic conditions, crops often get affected, and agricultural yield decreases drastically as a result. If conditions worsen, crops may become vulnerable to infections caused by fungal, bacterial, viral, and other disease-causing agents. One way to prevent plant loss is real-time identification of plant diseases. Our proposed model provides an automatic method to detect leaf disease using a trained dataset of pomegranate leaf images. The test set is used to check whether an image entered into the system shows disease; if not, the leaf is considered healthy, otherwise the disease of that leaf is predicted and a prevention measure is proposed automatically. Further, the rodent causing the disease is also identified through image analysis performed on images certified by biologists and scientists. The model reports the accuracy of results generated using different cluster sizes, optimized experimentally, with image segmentation, and provides useful estimation and prediction of the disease-causing agent along with necessary precautions.
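The segmentation step can be illustrated with a tiny 1-D k-means over pixel intensities separating "healthy" from "lesion" regions. This is our own sketch, not the paper's exact pipeline; the paper tunes the cluster count experimentally, while here k=2 is fixed and the grayscale values are made up.

```python
def kmeans_1d(values, k=2, iters=20):
    # spread the initial centers across the sorted values
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Dark lesion pixels (~40) versus healthy leaf pixels (~200)
pixels = [38, 42, 40, 199, 201, 205, 36, 198, 202]
centers, clusters = kmeans_1d(pixels, k=2)
# assume the smaller cluster is the lesion region (toy assumption)
lesion_fraction = min(len(c) for c in clusters) / len(pixels)
print(centers, lesion_fraction)
```

A real system would cluster in a color space on full images and feed the segmented lesion region to a classifier; the lesion-is-minority assumption above is purely for the toy data.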
Title: Prediction Model for Automated Leaf Disease Detection & Analysis
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692123
Jyotsna Singh, Ahsaas Bajaj, A. Mittal, Ansh Khanna, Rishabh Karwayun
Images have become a standard medium for information consumption and storage, largely replacing text in domains such as museums, news, medicine, and remote sensing. Images constitute the majority of data consumed on the Internet today, and the volume is constantly increasing. Most of these images are unlabeled and devoid of keywords. The swift, continuous growth in image use, combined with this unlabeled character, has created demand for efficient and accurate content-based image retrieval (CBIR) systems. A considerable number of such systems derive features from a query image and return the most similar images. This paper presents one such system, which uses the color and texture information of images to retrieve the best possible results: a Color Coherence Vector (CCV) for color feature extraction and Gabor filters for texture features. The retrieval results were significantly better than those reported by several popular studies.
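A hedged sketch of the Color Coherence Vector idea (illustrative, not the paper's implementation): the pixels of each discretized color are split into "coherent" (part of a connected region of at least tau pixels) and "incoherent" counts via a 4-neighbor flood fill, here on a toy 3x3 grid.

```python
from collections import deque

def ccv(grid, tau=3):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    counts = {}  # color -> [coherent_pixels, incoherent_pixels]
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            color = grid[r][c]
            seen[r][c] = True
            queue, size = deque([(r, c)]), 0
            while queue:  # flood-fill the connected same-color region
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grid[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            counts.setdefault(color, [0, 0])
            counts[color][0 if size >= tau else 1] += size
    return counts

grid = [
    ["g", "g", "r"],
    ["g", "g", "r"],
    ["r", "g", "g"],
]
print(ccv(grid))  # g forms one 6-pixel coherent region; r pixels scatter
```

Unlike a plain color histogram, the coherent/incoherent split distinguishes an image with one large green region from one with the same amount of green scattered across many small patches.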
Title: Content Based Image Retrieval using Gabor Filters and Color Coherence Vector
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692141
P. R. Deshmukh, R. Phalnikar
Document classification plays a vital role in biomedical research, extracting knowledge from medical literature such as journals, articles, and reports. To extract meaningful information such as signs, symptoms, diagnoses, and treatments of a disease through classification, context must be considered. The need to automatically extract key information from medical text is widely accepted, and search-based approaches have proven limited in this ability. This paper presents a novel method of identifying information for a particular disease using Gaussian Naïve Bayes with a feature-weighting approach, classifying documents by context. Considering both the importance of each term and the probability of every feature of the disease during classification enhances the effectiveness of the analysis. Experimental results show that our method improves the performance of the classification system over traditional classification systems.
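A sketch of Gaussian Naive Bayes with feature weighting, illustrating the general technique rather than the paper's exact scheme: each feature's log-likelihood is scaled by an importance weight before being summed with the class log-prior. The weights, features, and class names below are made up.

```python
import math

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fit(X, y):
    """Per-class log-prior, feature means, and feature variances."""
    params = {}
    for cls in set(y):
        rows = [x for x, c in zip(X, y) if c == cls]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                 for col, m in zip(zip(*rows), means)]
        params[cls] = (math.log(len(rows) / len(X)), means, varis)
    return params

def predict(params, x, weights):
    def score(cls):
        log_prior, means, varis = params[cls]
        return log_prior + sum(w * gaussian_logpdf(v, m, s2)
                               for w, v, m, s2 in zip(weights, x, means, varis))
    return max(params, key=score)

# Feature 0 separates the classes; feature 1 is noise and gets a low weight.
X = [[1.0, 5.0], [1.2, 5.1], [0.9, 4.9], [3.0, 5.0], [3.2, 5.2], [2.9, 4.8]]
y = ["flu", "flu", "flu", "cold", "cold", "cold"]
params = fit(X, y)
print(predict(params, [1.1, 5.0], weights=[1.0, 0.2]))  # -> flu
```

In a document setting the features would be term statistics and the weights term-importance scores, so that discriminative terms dominate the class decision.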
Title: Identifying Contextual Information in Document Classification using Term Weighting
Pub Date: 2018-12-01 | DOI: 10.1109/IADCC.2018.8692090
Hardik Gupta, A. Anil, Rinki Gupta
Analysis of lower-limb motion is required in fields including health monitoring, robotics, rehabilitation science, biometrics, and consumer electronics. Motion sensors such as accelerometers are prominently used in such analysis since they are non-invasive and readily available at low cost. However, the literature shows that fusing accelerometer data with recordings from other sensor types improves the recognition of human activities. In this paper, the use of surface electromyography (sEMG) alongside accelerometers is explored to recognize nine activities of daily living. The effect of placing the sEMG sensor on the two most commonly reported muscle locations on the leg, the soleus and the tibialis anterior, is studied in detail to determine appropriate sensor positioning for human activity recognition and hence reduce the number of sensors required for classification. Using actual data, it is demonstrated that adding sEMG to the accelerometer improves overall classification accuracy from around 94.5% to 98.2%; in particular, the classification of stationary activities improves with the inclusion of sEMG. Moreover, placement of the sEMG sensor on the soleus muscle aids classification more than placement on the tibialis anterior.
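A common way to fuse such sensors is to compute per-window statistics for each channel and concatenate them into one feature vector for the classifier. The sketch below is our illustration of that generic scheme; the window contents, channels, and feature choice (mean, standard deviation, RMS) are assumptions, not the paper's feature set.

```python
import math

def window_features(samples):
    """Mean, standard deviation, and RMS of one signal window."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return [mean, math.sqrt(var), rms]

def fuse(channels):
    """Concatenate per-channel window features into a single vector."""
    vec = []
    for ch in channels:
        vec.extend(window_features(ch))
    return vec

accel_z = [0.1, -0.1, 0.1, -0.1]    # toy accelerometer window
semg = [0.02, 0.40, 0.35, 0.03]     # toy sEMG envelope window
features = fuse([accel_z, semg])
print(len(features))  # -> 6 (3 features x 2 channels)
```

For stationary activities the accelerometer window is nearly flat, so the sEMG channel's features carry most of the discriminative information, which is consistent with the improvement the paper reports for those activities.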
Title: On the combined use of Electromyogram and Accelerometer in Lower Limb Motion Recognition