A machine learning adaptive approach to remove impurities over Bigdata

Akash Devgun
DOI: 10.1109/ICECCE.2014.7086616
Published in: 2014 International Conference on Electronics, Communication and Computational Engineering (ICECCE), November 2014

Abstract

Big data refers to vast stores of information collected from many locations and sources. A big-data repository is typically defined as a centralized store with a standard structural specification, but information drawn from diverse sources does not always fit this structure. Such information suffers from a number of associated impurities, including incompleteness, duplicate records, and a lack of association between dataset attributes. To represent this information in an organized, structured form, an algorithmic approach is required that can identify these impurities and accept only validated data. In the present work, a two-stage model based on a machine learning approach is defined to transform unstructured data into structured form. In the first stage, a fuzzy-based model analyzes the user data along two dimensions: impurity-type analysis and association analysis. Fuzzy rules are applied to quantify the degree of impurity and the degree of associativity. Once this analysis is complete, the final stage performs the transformation: the unstructured data is converted to structured data through an ontology-driven mapping defined over domain constructs and data constructs. The work is implemented in a Java environment. The results obtained from the system show reliable and robust information mapping, enabling effective information tracking over the dataset.
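The abstract does not give the paper's actual fuzzy rules, but the first stage it describes — scoring each record's degree of impurity (incompleteness, duplication) with fuzzy memberships and combining them — can be sketched as follows. This is a minimal illustrative sketch in Java (the paper's stated implementation language); the class, method names, and ramp thresholds are assumptions, not taken from the paper.

```java
import java.util.*;

// Hypothetical sketch of the paper's first stage: compute a fuzzy
// "degree of impurity" per record from incompleteness and duplication
// memberships, combined with a fuzzy OR (max). All names and thresholds
// here are illustrative assumptions.
public class ImpurityAnalyzer {

    // Piecewise-linear ramp membership: 0 at or below lo, 1 at or above hi.
    static double ramp(double x, double lo, double hi) {
        if (x <= lo) return 0.0;
        if (x >= hi) return 1.0;
        return (x - lo) / (hi - lo);
    }

    // Incompleteness membership: fraction of null/blank fields in the
    // record, passed through the ramp.
    static double incompleteness(Map<String, String> record) {
        long missing = record.values().stream()
                .filter(v -> v == null || v.isBlank())
                .count();
        double ratio = record.isEmpty() ? 0.0 : (double) missing / record.size();
        return ramp(ratio, 0.1, 0.6);
    }

    // Duplication membership: similarity to the most similar other record,
    // measured here as the fraction of fields with equal values.
    static double duplication(Map<String, String> record,
                              List<Map<String, String>> dataset) {
        double best = 0.0;
        for (Map<String, String> other : dataset) {
            if (other == record) continue;
            long equal = record.entrySet().stream()
                    .filter(e -> Objects.equals(e.getValue(), other.get(e.getKey())))
                    .count();
            best = Math.max(best, (double) equal / record.size());
        }
        return ramp(best, 0.5, 0.9);
    }

    // Combined degree of impurity: fuzzy OR (max) of the memberships.
    static double impurityDegree(Map<String, String> record,
                                 List<Map<String, String>> dataset) {
        return Math.max(incompleteness(record), duplication(record, dataset));
    }

    public static void main(String[] args) {
        Map<String, String> complete = Map.of("name", "A", "city", "Delhi", "year", "2014");
        Map<String, String> sparse = new HashMap<>();
        sparse.put("name", "B");
        sparse.put("city", null);
        sparse.put("year", "");
        List<Map<String, String>> data = List.of(complete, sparse);
        System.out.printf("complete: %.2f%n", impurityDegree(complete, data));
        System.out.printf("sparse:   %.2f%n", impurityDegree(sparse, data));
    }
}
```

Records whose combined impurity degree exceeds a chosen cutoff would then be rejected or routed to the second (ontology-driven transformation) stage rather than accepted directly into the structured repository.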