Kernelized global-local discriminant information preservation for unsupervised domain adaptation
Lekshmi R, Rakesh Kumar Sanodiya, Babita Roslind Jose, Jimson Mathew
Applied Intelligence, vol. 53, no. 21, pp. 25412-25434, published 2023-08-08
DOI: 10.1007/s10489-023-04706-1 (https://link.springer.com/article/10.1007/s10489-023-04706-1)
Journal Impact Factor: 3.4; JCR Q2 (Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Visual recognition has become indispensable in applications such as object detection, biometric tracking, autonomous vehicles, and social media platforms. Images vary in resolution, illumination, perspective, and noise, resulting in a significant mismatch between the training and testing domains. Unsupervised domain adaptation (DA) has proven an effective way to reduce these differences by transferring knowledge from a richly labeled source domain to an unlabeled target domain. Real-world datasets, however, are non-linear and high-dimensional. Although kernelization can handle the non-linearity in the data, the dimensionality still needs to be reduced, since the salient features of the data lie in a low-dimensional subspace. Current dimensionality reduction approaches in DA preserve either the global or the local information of the data manifold. In particular, both the manifold's static (subject-invariant) and dynamic (intra-subject-variant) information need to be considered during knowledge transfer. To preserve both, the Globality-Locality Preserving Projection (GLPP) method is applied to the labeled source domain. The other objectives are preserving the discriminant information and the variance of the target data, and minimizing the distribution and subspace differences between the domains. With all these objectives, we propose a method called Kernelized Global-Local Discriminant Information Preservation for unsupervised DA (KGLDIP). KGLDIP computes a projection matrix for each domain and then reduces the discrimination discrepancy between the two domains both geometrically and statistically. Extensive experiments on five standard datasets show that the proposed algorithm outperforms other state-of-the-art DA approaches.
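The abstract's objective of "minimizing the distribution differences between the domains" is commonly quantified in kernelized DA methods with the Maximum Mean Discrepancy (MMD). The sketch below is a generic, illustrative MMD computation under an RBF kernel, not the authors' KGLDIP implementation; the function names and the `gamma` parameter are our own illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through an RBF kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # source samples Xs and target samples Xt in the RBF kernel space.
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. a mean-shifted one:
same = mmd2(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
shifted = mmd2(rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2)))
print(same < shifted)
```

Methods in this family minimize such a discrepancy term while simultaneously optimizing the projection matrices, so the two domains become statistically aligned in the learned subspace.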
Journal description:
With a focus on research in artificial intelligence and neural networks, this journal addresses real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and instead require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.