The past decade has witnessed remarkable achievements in computer vision, owing to the rapid development of deep learning. With advances in computing power and deep learning algorithms, we can now train robust, state-of-the-art models on large-scale datasets containing millions or even hundreds of millions of examples. In spite of this impressive success, current deep learning methods tend to rely on massive amounts of annotated training data and lack the capability to learn from limited exemplars.
However, constructing a million-scale annotated dataset like ImageNet is time-consuming, labour-intensive and, in many applications, simply infeasible. In certain fields, only very limited annotated examples can be gathered, for reasons such as privacy or ethical constraints. Consequently, one of the pressing challenges in computer vision is to develop approaches capable of learning from limited annotated data. The purpose of this Special Issue is to collect high-quality articles on learning from limited annotations for computer vision tasks (e.g. image classification, object detection, semantic segmentation, instance segmentation and many others), to publish new ideas, theories, solutions and insights on this topic, and to showcase their applications.
For this Special Issue we received 29 papers, all of which underwent peer review. Of the 29 originally submitted papers, nine were accepted.
The nine accepted papers can be grouped into two main categories: theory and applications. The papers in the first category are by Liu et al., Li et al. and He et al. The papers in the second category offer direct solutions to various computer vision tasks; these are by Ma et al., Wu et al., Rao et al., Sun et al., Hou et al. and Gong et al. A brief presentation of each of the papers in this Special Issue follows.
All of the papers selected for this Special Issue show that the field of learning from limited annotations for computer vision tasks is steadily moving forward. The promise of weakly supervised learning paradigms will remain a source of inspiration for new techniques in the years to come.