In real-world applications of active learning frameworks, human oracles are often imperfect, and label noise is introduced into the learning process. This issue can be mitigated by further training the oracle using previous knowledge acquired by the model. However, it remains unclear whether model-informed oracle training can significantly improve performance. This study investigates whether recursive feedback between the model and the oracle can induce a knowledge augmentation effect, defined as a statistically significant improvement in model performance after receiving feedback from a self-data-trained oracle. To this end, we implemented a bidirectional active learning framework in which the model assists oracle learning by selectively transferring prior knowledge. In a closed-loop environment without external data, the model selects informative samples from an unlabeled pool, queries the oracle for labels, and retrains on the updated dataset. Simultaneously, the oracle is updated by learning from samples in the model’s training data that the oracle itself finds highly uncertain. This framework was empirically validated through a behavioral experiment involving 252 clinicians performing a medical image interpretation task. The results showed that model-informed oracle training enhanced both oracle accuracy and model performance. Moreover, when oracle learning was constrained by a fixed learning budget, a sampling strategy jointly balancing uncertainty and representativeness yielded the strongest effect. These findings provide compelling empirical evidence of the knowledge augmentation effect arising from human learning within a closed-loop active learning framework.
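The closed-loop interaction described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the entropy-based uncertainty score, and the example probabilities are all assumptions, and the representativeness term of the best-performing strategy is omitted for brevity.

```python
import math

def entropy(p):
    """Binary prediction entropy in nats; higher means more uncertain."""
    p = min(max(p, 1e-9), 1.0 - 1e-9)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def select_uncertain(probs, k):
    """Uncertainty sampling: indices of the k highest-entropy predictions.
    (The abstract's strongest budget-constrained strategy additionally
    balances representativeness, which this sketch leaves out.)"""
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

def bidirectional_round(model_pool_probs, oracle_train_probs, k_query, k_teach):
    """One closed-loop round: the model queries the oracle on its k_query
    most uncertain unlabeled pool samples, while the oracle studies the
    k_teach already-labeled training samples it is itself least sure about."""
    query_idx = select_uncertain(model_pool_probs, k_query)    # model -> oracle label queries
    teach_idx = select_uncertain(oracle_train_probs, k_teach)  # oracle's learning set
    return query_idx, teach_idx

# Example: predictions near 0.5 are the most uncertain on either side.
q, t = bidirectional_round([0.95, 0.52, 0.10, 0.48, 0.99], [0.9, 0.5, 0.2], 2, 1)
print(sorted(q), t)  # → [1, 3] [1]
```

Each round would then be followed by retraining the model on the newly labeled queries and updating the oracle on its teaching set, repeating until the labeling budget is exhausted.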