Abstract: With the rapid advancement of artificial intelligence and deep learning, deep neural networks have achieved remarkable predictive performance, yet they still face critical challenges, including the limited interpretability of black-box models, heavy reliance on large-scale data, and insufficient robustness in noisy and small-sample scenarios. Prototype learning, a paradigm that combines intuitive interpretability with efficient knowledge representation, characterizes data distributions and semantic centers by constructing representative prototypes, thereby offering a new theoretical perspective and technical foundation for building trustworthy, transparent, and efficient artificial intelligence systems. This paper systematically reviews recent advances in prototype learning. First, we formalize the fundamental concepts of prototype learning and present its mathematical formulations. Next, we discuss prototype construction paradigms from three perspectives: statistical machine learning, deep feature-driven modeling, and semantic representation learning. We then analyze prototype-based methods for single- and multi-modal data augmentation and fusion, highlighting their role in alleviating data quality bottlenecks. Building on this, we examine how prototype learning is applied in interpretable deep network modeling, fuzzy rule inference, causal attribution, and time-series analysis. Furthermore, we explore emerging research directions, including prototype-guided generative learning, prototype-based enhancement of large-model capabilities, and prototype-based graph learning. Finally, we summarize development trends in prototype learning and discuss its future potential in frontier areas such as generative AI, large-model collaboration, and sustainable learning.