Abstract: Traditional active learning methods select examples based only on the predictions of the current model. However, these methods neglect the information carried by previously trained models, which reflects the stability of each unlabeled example's prediction sequence during the active learning process. Therefore, a novel active learning method with instability sampling is proposed, which estimates the potential utility of each unlabeled example for improving model performance based on the differences among the predictions of previous models. The proposed method measures the instability of an unlabeled example by the difference between the posterior probabilities predicted by previous models, and the example with the largest instability is selected for querying. Extensive experiments were conducted on multiple datasets with diverse classification models, and the experimental results validate the effectiveness of the proposed method.
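The abstract does not specify the exact instability measure, so the following is only a minimal illustrative sketch (Python with scikit-learn; the function and variable names are hypothetical). It assumes instability is scored as the mean L1 distance between the posterior probability vectors predicted for an example by consecutive models, and the unlabeled example with the largest score is queried each round.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def instability_scores(prob_history):
    """Instability of each pool example: mean L1 distance between the
    posterior probability vectors predicted in consecutive rounds.
    prob_history: list of (n_examples, n_classes) arrays, one per model."""
    diffs = [np.abs(p2 - p1).sum(axis=1)
             for p1, p2 in zip(prob_history[:-1], prob_history[1:])]
    return np.mean(diffs, axis=0)

# Toy pool-based active-learning loop.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           random_state=0)
labeled = [int(np.flatnonzero(y == c)[0]) for c in np.unique(y)]  # one seed per class
unlabeled = [i for i in range(len(y)) if i not in labeled]
prob_history = []  # posteriors of every model trained so far, over the full pool

for _ in range(20):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    prob_history.append(model.predict_proba(X))

    if len(prob_history) < 2:
        query = unlabeled[0]  # no prediction sequence yet; fall back to any example
    else:
        scores = instability_scores([p[unlabeled] for p in prob_history])
        query = unlabeled[int(np.argmax(scores))]  # most unstable example

    labeled.append(query)   # query the oracle (labels are already known in this toy setup)
    unlabeled.remove(query)
```

Keeping the posteriors of the full pool for every round makes the per-round instability computation a simple pairwise difference over the stored history; other aggregations (e.g., only the last two models, or a decay-weighted sum) would fit the same template.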