Deep kernelized dimensionality reducer for multi-modality heterogeneous data

Journal of Ambient Intelligence and Humanized Computing | Computer Science (Zone 3, Q1) | Pub Date: 2024-05-09 | DOI: 10.1007/s12652-024-04804-z
Arifa Shikalgar, Shefali Sonavane
{"title":"针对多模态异构数据的深度核化降维器","authors":"Arifa Shikalgar, Shefali Sonavane","doi":"10.1007/s12652-024-04804-z","DOIUrl":null,"url":null,"abstract":"<p>Data mining applications use high-dimensional datasets, but still, a large number of extents causes the well-known ‘Curse of Dimensionality,' which leads to worse accuracy of machine learning classifiers due to the fact that most unimportant and unnecessary dimensions are included in the dataset. Many approaches are employed to handle critical dimension datasets, but their accuracy suffers as a result. As a consequence, to deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-Noising Auto encoder based on feature learning was proposed (DKSDA). Because of the layered property, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking into account many qualities. It will examine all the multimodalities and all hidden potential modalities using two fine-tuning stages, the input has random noise along with feature vectors, and a stack of de-noising auto-encoders is generated. This SDA processing decreases the prediction error caused by the lack of analysis of concealed objects among the multimodalities. In addition, to handle a huge set of data, a new layer of Spatial Pyramid Pooling (SPP) is introduced along with the structure of Convolutional Neural Network (CNN) by decreasing or removing the remaining sections other than the key characteristic with structural knowledge using kernel function. The recent studies revealed that the DKSDA proposed has an average accuracy of about 97.57% with a dimensionality reduction of 12%. By enhancing the classification accuracy and processing complexity, pre-training reduces dimensionality.</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep kernelized dimensionality reducer for multi-modality heterogeneous data\",\"authors\":\"Arifa Shikalgar, Shefali Sonavane\",\"doi\":\"10.1007/s12652-024-04804-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Data mining applications use high-dimensional datasets, but still, a large number of extents causes the well-known ‘Curse of Dimensionality,' which leads to worse accuracy of machine learning classifiers due to the fact that most unimportant and unnecessary dimensions are included in the dataset. Many approaches are employed to handle critical dimension datasets, but their accuracy suffers as a result. As a consequence, to deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-Noising Auto encoder based on feature learning was proposed (DKSDA). Because of the layered property, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking into account many qualities. It will examine all the multimodalities and all hidden potential modalities using two fine-tuning stages, the input has random noise along with feature vectors, and a stack of de-noising auto-encoders is generated. This SDA processing decreases the prediction error caused by the lack of analysis of concealed objects among the multimodalities. 
In addition, to handle a huge set of data, a new layer of Spatial Pyramid Pooling (SPP) is introduced along with the structure of Convolutional Neural Network (CNN) by decreasing or removing the remaining sections other than the key characteristic with structural knowledge using kernel function. The recent studies revealed that the DKSDA proposed has an average accuracy of about 97.57% with a dimensionality reduction of 12%. By enhancing the classification accuracy and processing complexity, pre-training reduces dimensionality.</p>\",\"PeriodicalId\":14959,\"journal\":{\"name\":\"Journal of Ambient Intelligence and Humanized Computing\",\"volume\":\"23 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Ambient Intelligence and Humanized Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12652-024-04804-z\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Ambient Intelligence and Humanized Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12652-024-04804-z","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

Data mining applications routinely work with high-dimensional datasets, and the large number of dimensions gives rise to the well-known 'curse of dimensionality': because many unimportant and unnecessary dimensions are retained, the accuracy of machine learning classifiers degrades. Many approaches have been used to handle such critically high-dimensional datasets, but their accuracy suffers as a result. To address this, a hybrid Deep Kernelized Stacked De-noising Autoencoder (DKSDA) based on feature learning is proposed. Owing to its layered structure, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction that takes multiple data properties into account. It examines all modalities, including hidden potential ones, through two fine-tuning stages: random noise is added to the input feature vectors, and a stack of de-noising autoencoders is built on the corrupted inputs. This stacked de-noising autoencoder (SDA) processing reduces the prediction error that otherwise arises from failing to analyze concealed objects across the modalities. In addition, to handle very large datasets, a Spatial Pyramid Pooling (SPP) layer is introduced into the Convolutional Neural Network (CNN) structure, which uses a kernel function and structural knowledge to reduce or remove the remaining sections other than the key characteristics. Recent experiments show that the proposed DKSDA achieves an average accuracy of about 97.57% with a dimensionality reduction of 12%, and that pre-training reduces dimensionality while improving classification accuracy and lowering processing complexity.
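The abstract describes the pipeline only at a high level. The following minimal PyTorch sketch illustrates the two ingredients it names: a stacked de-noising autoencoder that corrupts the input feature vectors with random noise before reconstruction, and a spatial-pyramid-pooling step that turns variable-size convolutional feature maps into a fixed-length vector. The layer sizes, the noise level `noise_std`, and the pyramid levels are illustrative assumptions, not values taken from the paper, and the sketch is not the authors' reference implementation.

```python
# Illustrative sketch of the components named in the abstract (assumed
# hyper-parameters; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenoisingAutoencoder(nn.Module):
    """One layer of the stack: corrupt the input, then reconstruct it."""

    def __init__(self, in_dim, hidden_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # random corruption of the input
        code = self.encoder(noisy)                         # reduced representation
        recon = self.decoder(code)                         # reconstruction of the clean input
        return code, recon


def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a (N, C, H, W) feature map at several grid sizes and concatenate,
    producing a fixed-length vector regardless of H and W."""
    pooled = [F.adaptive_max_pool2d(feature_map, level).flatten(1) for level in levels]
    return torch.cat(pooled, dim=1)  # shape: (N, C * sum(l * l for l in levels))


if __name__ == "__main__":
    # Stack two de-noising autoencoders: the code of the first feeds the second.
    dae1 = DenoisingAutoencoder(in_dim=128, hidden_dim=64)
    dae2 = DenoisingAutoencoder(in_dim=64, hidden_dim=32)

    x = torch.randn(8, 128)            # a batch of 128-dimensional feature vectors
    code1, recon1 = dae1(x)
    code2, recon2 = dae2(code1)        # 32-dimensional reduced representation
    # Reconstruction losses used for greedy layer-wise pre-training of the stack.
    loss = F.mse_loss(recon1, x) + F.mse_loss(recon2, code1.detach())

    fmap = torch.randn(8, 16, 13, 17)  # variable-size CNN feature maps
    pooled = spatial_pyramid_pool(fmap)
    print(code2.shape, pooled.shape, loss.item())  # (8, 32), (8, 336), scalar loss
```

A typical SDA workflow, consistent with the abstract's mention of pre-training and two fine-tuning stages, pre-trains each layer greedily on reconstruction and then fine-tunes the whole stack end-to-end for classification; exactly how the kernel function enters the encoder is not specified in the abstract, so it is omitted here.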

Source journal
Journal of Ambient Intelligence and Humanized Computing (Computer Science, Artificial Intelligence; Computer Science, Information Systems)
CiteScore: 9.60
Self-citation rate: 0.00%
Articles published: 854

About the journal: The purpose of JAIHC is to provide a high-profile, leading-edge forum for academics, industrial professionals, educators and policy makers involved in the field to contribute, and to disseminate the most innovative research and developments in all aspects of ambient intelligence and humanized computing, such as intelligent/smart objects, environments/spaces, and systems. The journal discusses various technical, safety, personal, social, physical, political, artistic and economic issues. The research topics covered by the journal include (but are not limited to): Pervasive/Ubiquitous Computing and Applications; Cognitive wireless sensor networks; Embedded Systems and Software; Mobile Computing and Wireless Communications; Next Generation Multimedia Systems; Security, Privacy and Trust; Service and Semantic Computing; Advanced Networking Architectures; Dependable, Reliable and Autonomic Computing; Embedded Smart Agents; Context awareness, social sensing and inference; Multi-modal interaction design; Ergonomics and product prototyping; Intelligent and self-organizing transportation networks & services; Healthcare Systems; Virtual Humans & Virtual Worlds; Wearable sensors and actuators.