Aircraft flight control systems comprise primary and secondary control systems. The primary flight control system operates the airplane's elevators, ailerons, rudder, and horizontal stabilizer trim actuator. Simulink models are used to design and simulate such systems. Sensor failures are common, so the control system must be robust against them; voting logic is used to select a good sensor in a multi-sensor environment. Designing voter logic requires scenarios that represent potentially catastrophic situations so that its effectiveness can be studied. This research proposes a strategy that finds a worst-case scenario for an autopilot using an orthogonal-array-based algorithm and Design of Experiments (DOE). The voter logic is tested against this scenario and is shown to improve behavior substantially.
{"title":"A Worst Case Benchmark Problem to Validate Voter Logic","authors":"N. A. Kumar, Y. Jeppu, Krishna Chandramouli","doi":"10.1109/WCCCT.2014.17","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.17","url":null,"abstract":"Aircraft flight control systems consist of primary controls and secondary control systems. Primary flight control systems provide the operation for the airplane's elevator, aileron rudder and horizontal stabilizer trim actuator. Simulink models are used to design and simulate such systems. Sensor failures happen very often and the control system is designed to be robust against sensor failures. Voting logic is used to select a good sensor in a multi sensor environment. The design of voter logic requires scenarios which provide catastrophic disastrous situations to study their effectiveness. This research proposes a strategy that finds a worst case scenario for an autopilot namely using an orthogonal array based algorithm and Design of Experiments (DOE). The voter logic is tested against such a situation and proves to improve the situation drastically.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124923338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a mobile healthcare system with efficient Quality of Service (QoS) in a cloud environment. The application estimates the probability of various cardiovascular conditions such as heart attack and stroke. It obtains attribute values such as weight, height, gender, age, blood pressure, lipid profile, and blood sugar from the mobile user and transfers the data to the cloud, where predefined processing techniques are applied. A QoS system placed in the cloud helps the user receive the service with the expected quality. The QoS system is continuously monitored on performance metrics such as mobility, latency, and throughput to check whether the Service Level Agreement (SLA) is met.
{"title":"QoS Aware Healthcare System on Mobile Clouds","authors":"Priya Pandey, Karthika, P. Krishna, B. Sarojini","doi":"10.1109/WCCCT.2014.48","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.48","url":null,"abstract":"Mobile Health care system with efficient Quality of Service (QoS) in cloud environment has been proposed in this paper. This health care application determines the probability of various cardio vascular diseases like heart attack, stroke etc. It obtains requested value of the attributes like weight, height, gender, age, blood pressure, lipid profile, sugar etc from the mobile user and transfers the data to the cloud where predefined processing techniques are applied on it. An efficient QoS system has been placed in the cloud which helps the user to get the service with expected quality. The QoS system is monitored continuously for various performance metrics like mobility, latency, throughput to check whether Service Level Agreement (SLA) is attained.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125068007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study applies data-mining techniques to diabetes research, giving insight into predictive patterns that can forecast the incidence of diabetes mellitus. Clinical patient records and pathology test reports inherently form data sets that can be mined for diabetes research. Hidden knowledge rules can be extracted into new hypotheses for improving standards and quality of healthcare for diabetes patients. Primary data-mining methods such as rule classification and decision trees are used.
{"title":"A Predictive Approach for Diabetes Mellitus Disease through Data Mining Technologies","authors":"S. Sankaranarayanan, T. Perumal","doi":"10.1109/WCCCT.2014.65","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.65","url":null,"abstract":"This study addresses for applying data-mining techniques in diabetes research which gives a rational insight to model predicate patterns that can forecast incidence of Diabetes Mellitus disease (DMD) in human race. Clinical Patient records and Pathological test reports inherently represent data sets which may be applied to data mining for diabetes research. Hidden knowledge rules may be extracted to new hypothesis for improving standards and quality in the field of health care for diabetes patients. Primary Data mining methods such as Rule classification and Decision trees are used.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114070424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, methods for automated extraction of multiple features of cytoplasm and nuclei from cervical cytology images are described. Image edges are first enhanced with an edge-sharpening filter. A Gaussian mixture model fitted by Expectation Maximization, together with K-means clustering, then segments the image into its components: background, nucleus, and cytoplasm. Features are identified for both multi-cell and single-cell cervical cytology images. For multi-cell images, the nucleus-to-cytoplasm ratio is calculated; from cells with a single nucleus, features such as center, perimeter, area, and mean intensity of nucleus and cytoplasm are extracted. These features may be used to determine the stage of cancer.
{"title":"Multiple Feature Extraction from Cervical Cytology Images by Gaussian Mixture Model","authors":"G. Lakshmi, K. Krishnaveni","doi":"10.1109/WCCCT.2014.89","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.89","url":null,"abstract":"In this paper, methods for automated extraction of multiple features of cytoplasm and nuclei from cervical cytology images are described. Edges of the image are enhanced by Edge Sharpening filter. Then Gaussian mixture model using Expectation Maximization and K-means clustering is used to segment the image into its components as background, nucleus and cytoplasm. Features have been identified for both multiple and single cervical cytology cells. For multiple cell images, nucleus to cytoplasm ratio is calculated. A mixture of features like center, perimeter, area, mean intensity of nucleus and cytoplasm are extracted from cells with single nucleus. These features may be used to determine the stage of cancer.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124281011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse coding is an unsupervised feature-learning algorithm for finding succinct, high-level representations of inputs, and it has successfully paved a way for deep learning. Our objective is to use high-level representations of unlabeled data to aid unsupervised learning tasks. Compared with labeled data, unlabeled data is easier to acquire because it does not require particular class labels; this makes deep learning broader and applicable to practical problems. The main limitation of sparse coding is its use of a quadratic loss function and a Gaussian noise model, so it performs poorly when applied to binary, integer-valued, or other non-Gaussian data. We therefore first propose an algorithm that solves the L1-regularized convex optimization problem to obtain high-level representations of unlabeled data. From this we derive a solution describing an approach to deep learning using sparse codes.
{"title":"Sparse Coding: A Deep Learning Using Unlabeled Data for High - Level Representation","authors":"Mrs. R. Vidya, M. Phil","doi":"10.1109/WCCCT.2014.69","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.69","url":null,"abstract":"Sparse coding algorithm is an learning algorithm mainly for unsupervised feature for finding succinct, a little above high - level Representation of inputs, and it has successfully given a way for Deep learning. Our objective is to use High - Level Representation data in form of unlabeled category to help unsupervised learning task. When compared with labeled data, unlabeled data is easier to acquire because, unlike labeled data it does not follow some particular class labels. This really makes the Deep learning wider and applicable to practical problems and learning. The main problem with sparse coding is it uses Quadratic loss function and Gaussian noise mode. So, its performs is very poor when binary or integer value or other Non-Gaussian type data is applied. Thus first we propose an algorithm for solving the L1 - regularized convex optimization algorithm for the problem to allow High - Level Representation of unlabeled data. Through this we derive a optimal solution for describing an approach to Deep learning algorithm by using sparse code.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130834237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}