{"title":"针对多模态异构数据的深度核化降维器","authors":"Arifa Shikalgar, Shefali Sonavane","doi":"10.1007/s12652-024-04804-z","DOIUrl":null,"url":null,"abstract":"<p>Data mining applications use high-dimensional datasets, but still, a large number of extents causes the well-known ‘Curse of Dimensionality,' which leads to worse accuracy of machine learning classifiers due to the fact that most unimportant and unnecessary dimensions are included in the dataset. Many approaches are employed to handle critical dimension datasets, but their accuracy suffers as a result. As a consequence, to deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-Noising Auto encoder based on feature learning was proposed (DKSDA). Because of the layered property, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking into account many qualities. It will examine all the multimodalities and all hidden potential modalities using two fine-tuning stages, the input has random noise along with feature vectors, and a stack of de-noising auto-encoders is generated. This SDA processing decreases the prediction error caused by the lack of analysis of concealed objects among the multimodalities. In addition, to handle a huge set of data, a new layer of Spatial Pyramid Pooling (SPP) is introduced along with the structure of Convolutional Neural Network (CNN) by decreasing or removing the remaining sections other than the key characteristic with structural knowledge using kernel function. The recent studies revealed that the DKSDA proposed has an average accuracy of about 97.57% with a dimensionality reduction of 12%. 
By enhancing the classification accuracy and processing complexity, pre-training reduces dimensionality.</p>","PeriodicalId":14959,"journal":{"name":"Journal of Ambient Intelligence and Humanized Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep kernelized dimensionality reducer for multi-modality heterogeneous data\",\"authors\":\"Arifa Shikalgar, Shefali Sonavane\",\"doi\":\"10.1007/s12652-024-04804-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Data mining applications use high-dimensional datasets, but still, a large number of extents causes the well-known ‘Curse of Dimensionality,' which leads to worse accuracy of machine learning classifiers due to the fact that most unimportant and unnecessary dimensions are included in the dataset. Many approaches are employed to handle critical dimension datasets, but their accuracy suffers as a result. As a consequence, to deal with high-dimensional datasets, a hybrid Deep Kernelized Stacked De-Noising Auto encoder based on feature learning was proposed (DKSDA). Because of the layered property, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking into account many qualities. It will examine all the multimodalities and all hidden potential modalities using two fine-tuning stages, the input has random noise along with feature vectors, and a stack of de-noising auto-encoders is generated. This SDA processing decreases the prediction error caused by the lack of analysis of concealed objects among the multimodalities. 
In addition, to handle a huge set of data, a new layer of Spatial Pyramid Pooling (SPP) is introduced along with the structure of Convolutional Neural Network (CNN) by decreasing or removing the remaining sections other than the key characteristic with structural knowledge using kernel function. The recent studies revealed that the DKSDA proposed has an average accuracy of about 97.57% with a dimensionality reduction of 12%. By enhancing the classification accuracy and processing complexity, pre-training reduces dimensionality.</p>\",\"PeriodicalId\":14959,\"journal\":{\"name\":\"Journal of Ambient Intelligence and Humanized Computing\",\"volume\":\"23 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Ambient Intelligence and Humanized Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12652-024-04804-z\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Ambient Intelligence and Humanized Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12652-024-04804-z","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Deep kernelized dimensionality reducer for multi-modality heterogeneous data
Data mining applications operate on high-dimensional datasets, where the large number of dimensions causes the well-known 'curse of dimensionality': the accuracy of machine learning classifiers degrades because many unimportant and redundant dimensions are included in the data. Many approaches exist for handling such datasets, but their accuracy suffers as a result. To address this, a hybrid Deep Kernelized Stacked De-noising Autoencoder (DKSDA) based on feature learning is proposed. Owing to its layered structure, the DKSDA can manage vast amounts of heterogeneous data and performs knowledge-based reduction by taking many attributes into account. It examines all modalities, including hidden potential ones, using two fine-tuning stages: random noise is added to the input feature vectors, and a stack of de-noising autoencoders is trained on the corrupted input. This stacked de-noising autoencoder (SDA) processing decreases the prediction error caused by leaving hidden structure among the modalities unanalyzed. In addition, to handle very large datasets, a Spatial Pyramid Pooling (SPP) layer is introduced into the Convolutional Neural Network (CNN) structure, which uses a kernel function and structural knowledge to suppress or remove regions other than the key features. Recent experiments show that the proposed DKSDA achieves an average accuracy of about 97.57% with a dimensionality reduction of 12%. Pre-training thus reduces dimensionality while improving classification accuracy and processing complexity.
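The abstract does not include an implementation, but the stacked de-noising autoencoder it builds on is a standard technique: each layer is trained to reconstruct the clean input from a noise-corrupted copy, and the learned codes of one layer become the training input of the next. The following is a minimal NumPy sketch of that idea (tied weights, Gaussian corruption); it is an illustration of the general SDA scheme, not the paper's DKSDA.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dae(X, hidden, noise_std=0.1, lr=0.05, epochs=300):
    """Train one de-noising auto-encoder layer with tied weights:
    corrupt the input with Gaussian noise, encode, decode, and
    learn to reconstruct the *clean* input."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise_std, X.shape)  # corruption step
        H = np.tanh(Xn @ W + b)                       # encode
        R = H @ W.T + c                               # decode (tied weights)
        err = (R - X) / n                             # target is the clean X
        dH = (err @ W) * (1.0 - H**2)                 # back-prop through tanh
        W -= lr * (err.T @ H + Xn.T @ dH)
        b -= lr * dH.sum(axis=0)
        c -= lr * err.sum(axis=0)
    return W, b, c

def encode(X, W, b):
    return np.tanh(X @ W + b)

# Stack two layers: the codes of the first DAE feed the second,
# progressively reducing 20 input dimensions down to 5.
X = rng.random((200, 20))
W1, b1, _ = train_dae(X, hidden=10)
H1 = encode(X, W1, b1)
W2, b2, _ = train_dae(H1, hidden=5)
Z = encode(H1, W2, b2)   # reduced representation
assert Z.shape == (200, 5)
```

Training each layer on corrupted input while scoring reconstruction against the clean input is what distinguishes a de-noising autoencoder from a plain one, and is the mechanism the abstract credits with recovering hidden structure across modalities.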
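Spatial Pyramid Pooling, the second component named in the abstract, pools a feature map over grids of several sizes so that any input resolution yields a fixed-length vector. A pure-NumPy sketch of the standard SPP operation (again illustrative, not the paper's exact layer) looks like this:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over an n x n grid for each
    pyramid level n, producing a fixed-length vector regardless of
    the spatial size H x W."""
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # split the map into n x n cells and max-pool each cell
        h_edges = np.linspace(0, H, n + 1).astype(int)
        w_edges = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, h_edges[i]:h_edges[i + 1],
                               w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes yield the same output length:
# 8 channels * (1 + 4 + 16) grid cells = 168 values either way.
a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
b = spatial_pyramid_pool(np.random.rand(8, 32, 32))
assert a.shape == b.shape == (8 * (1 + 4 + 16),)
```

Because the output length depends only on the channel count and the pyramid levels, an SPP layer lets a CNN accept heterogeneous input sizes, which is why the abstract pairs it with multi-modality data.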
Journal Introduction:
The purpose of JAIHC is to provide a high-profile, leading-edge forum for academics, industrial professionals, educators and policy makers in the field to contribute to and disseminate the most innovative research and developments in all aspects of ambient intelligence and humanized computing, such as intelligent/smart objects, environments/spaces, and systems. The journal discusses various technical, safety, personal, social, physical, political, artistic and economic issues. Research topics covered by the journal include (but are not limited to):
Pervasive/Ubiquitous Computing and Applications
Cognitive wireless sensor network
Embedded Systems and Software
Mobile Computing and Wireless Communications
Next Generation Multimedia Systems
Security, Privacy and Trust
Service and Semantic Computing
Advanced Networking Architectures
Dependable, Reliable and Autonomic Computing
Embedded Smart Agents
Context awareness, social sensing and inference
Multi-modal interaction design
Ergonomics and product prototyping
Intelligent and self-organizing transportation networks & services
Healthcare Systems
Virtual Humans & Virtual Worlds
Wearable sensors and actuators