{"title":"A survey and experimental study for embedding-aware generative models: Features, models, and any-shot scenarios","authors":"Jiaqi Yue, Jiancheng Zhao, Liangjun Feng, Chunhui Zhao","doi":"10.1016/j.jprocont.2024.103297","DOIUrl":null,"url":null,"abstract":"<div><p>In the era of industrial artificial intelligence, grappling with data insufficiency remains a formidable challenge that stands at the forefront of our progress. The embedding-aware generative model emerges as a promising solution, tackling this issue head-on in the realm of zero-shot learning by ingeniously constructing a generator that bridges the gap between semantic and feature spaces. Thanks to the predefined benchmark and protocols, the number of proposed embedding-aware generative models for zero-shot learning is increasing rapidly. We argue that it is time to take a step back and reconsider the embedding-aware generative paradigm. The main work of this paper is two-fold. First, embedding features in benchmark datasets are somehow overlooked, which potentially limits the performance of generative models, while most researchers focus on how to improve them. Therefore, we conduct a systematic evaluation of 10 representative embedding-aware generative models and prove that even simple representation modifications on the embedding features can improve the performance of generative models for zero-shot learning remarkably. So it is time to pay more attention to the current embedding features in benchmark datasets. Second, based on five benchmark datasets, each with six any-shot learning scenarios, we systematically compare the performance of ten typical embedding-aware generative models for the first time, and we give a strong baseline for zero-shot learning and few-shot learning. 
Meanwhile, a comprehensive generative model repository, namely, generative any-shot learning repository, is provided, which contains the models, features, parameters, and scenarios of embedding-aware generative models for zero-shot learning and few-shot learning. Any results in this paper can be readily reproduced with only one command line based on generative any-shot learning.</p></div>","PeriodicalId":50079,"journal":{"name":"Journal of Process Control","volume":"143 ","pages":"Article 103297"},"PeriodicalIF":3.3000,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Process Control","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0959152424001379","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
In the era of industrial artificial intelligence, data insufficiency remains a formidable challenge at the forefront of progress. The embedding-aware generative model has emerged as a promising solution, tackling this issue head-on in zero-shot learning by constructing a generator that bridges the semantic and feature spaces. Thanks to predefined benchmarks and protocols, the number of proposed embedding-aware generative models for zero-shot learning is growing rapidly. We argue that it is time to take a step back and reconsider the embedding-aware generative paradigm. The main work of this paper is two-fold. First, while most researchers focus on improving the models themselves, the embedding features in benchmark datasets are largely overlooked, which potentially limits the performance of generative models. We therefore conduct a systematic evaluation of ten representative embedding-aware generative models and show that even simple representation modifications to the embedding features can remarkably improve the zero-shot performance of generative models; the embedding features currently used in benchmark datasets thus deserve more attention. Second, on five benchmark datasets, each with six any-shot learning scenarios, we systematically compare the performance of ten typical embedding-aware generative models for the first time, providing a strong baseline for both zero-shot and few-shot learning. We also provide a comprehensive generative model repository, namely the generative any-shot learning repository, which contains the models, features, parameters, and scenarios of embedding-aware generative models for zero-shot and few-shot learning. Every result in this paper can be reproduced with a single command line based on generative any-shot learning.
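The abstract describes two core ideas: a generator conditioned on semantic (class-attribute) embeddings that synthesizes features for unseen classes, and simple representation modifications of those embeddings. The following is a minimal, hypothetical sketch of both, not the paper's actual architecture: the class name `EmbeddingAwareGenerator`, the one-hidden-layer form, and all dimensions are illustrative assumptions, with L2 normalization standing in as one example of a "simple representation modification".

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(a, axis=-1, eps=1e-12):
    """One simple representation modification: rescale each embedding to unit length."""
    return a / (np.linalg.norm(a, axis=axis, keepdims=True) + eps)

class EmbeddingAwareGenerator:
    """Illustrative one-hidden-layer generator G: (semantic embedding, noise) -> feature.

    Real embedding-aware generative models train G adversarially or with a VAE
    objective; here the weights stay random purely to show the data flow.
    """
    def __init__(self, sem_dim, noise_dim, feat_dim, hidden=64):
        self.noise_dim = noise_dim
        self.W1 = rng.normal(0.0, 0.1, (sem_dim + noise_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, feat_dim))

    def generate(self, sem, n_samples):
        """Synthesize n_samples feature vectors for one class embedding `sem`."""
        sem = np.tile(sem, (n_samples, 1))                 # repeat the class embedding
        z = rng.normal(size=(n_samples, self.noise_dim))   # per-sample noise
        h = np.maximum(np.concatenate([sem, z], axis=1) @ self.W1, 0.0)  # ReLU layer
        return h @ self.W2                                 # map into the feature space

# Usage: synthesize features for an unseen class from its (normalized) attributes.
attributes = l2_normalize(rng.normal(size=(85,)))  # e.g. an 85-dim attribute vector
G = EmbeddingAwareGenerator(sem_dim=85, noise_dim=16, feat_dim=2048)
fake_features = G.generate(attributes, n_samples=300)
print(fake_features.shape)  # → (300, 2048)
```

Once such synthetic features exist for unseen classes, zero-shot learning reduces to training an ordinary classifier on them, which is the bridge between semantic and feature spaces the abstract refers to.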
Journal description:
This international journal covers the application of control theory, operations research, computer science and engineering principles to the solution of process control problems. In addition to the traditional chemical processing and manufacturing applications, the scope of process control problems involves a wide range of applications that includes energy processes, nano-technology, systems biology, bio-medical engineering, pharmaceutical processing technology, energy storage and conversion, smart grid, and data analytics among others.
Papers on the theory in these areas will also be accepted provided the theoretical contribution is aimed at the application and the development of process control techniques.
Topics covered include:
• Control applications
• Process monitoring
• Plant-wide control
• Process control systems
• Control techniques and algorithms
• Process modelling and simulation
• Design methods
Advanced design methods exclude well established and widely studied traditional design techniques such as PID tuning and its many variants. Applications in fields such as control of automotive engines, machinery and robotics are not deemed suitable unless a clear motivation for the relevance to process control is provided.