Pre-training and Fine-tuning: A unified approach for solving graph combinatorial optimization problems

CLC Number: TP18

Abstract:

Graph combinatorial optimization problems arise widely in real-world applications. Because these problems are NP-hard, traditional exact algorithms struggle on large-scale instances, while existing machine learning approaches often rely on problem-specific model designs, which limits their general applicability. To address this, a unified solving framework based on a pre-training and fine-tuning paradigm is proposed, aiming to improve generalization capability and solving efficiency across diverse graph combinatorial optimization tasks. The framework first reduces different combinatorial optimization problems to a unified form, constructing a consistent representation space. A cross-problem pre-training mechanism is then designed to learn knowledge shared across diverse problem instances. Finally, multiple fine-tuning strategies are introduced so that the model can quickly adapt to varying problem distributions at test time. Experimental results demonstrate that the proposed method achieves superior generalization performance and stable solving efficiency on multiple classic combinatorial optimization tasks while maintaining a single model architecture, offering a feasible path toward a general-purpose combinatorial optimization solver.
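The abstract does not specify which unified form the framework reduces problems to; one common choice for such reductions is QUBO (quadratic unconstrained binary optimization), where every problem becomes "maximize xᵀQx over binary x". The following sketch, under that assumption, reduces two classic graph problems mentioned in this line of work (Maximum Independent Set and Max-Cut) to a shared QUBO representation, so a single downstream model or solver can treat them uniformly. All function names are illustrative, not taken from the paper.

```python
import itertools


def mis_to_qubo(n, edges, penalty=2.0):
    """Maximum Independent Set as QUBO: maximize sum_i x_i - penalty * sum_{(i,j) in E} x_i x_j.

    A penalty > 1 makes picking two adjacent vertices worse than picking one.
    """
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = 1.0           # reward for including vertex i
    for i, j in edges:
        Q[i][j] -= penalty      # penalize including both endpoints of an edge
    return Q


def maxcut_to_qubo(n, edges):
    """Max-Cut as QUBO: maximize sum_{(i,j) in E} (x_i + x_j - 2 x_i x_j).

    The summand is 1 exactly when the edge crosses the cut.
    """
    Q = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] += 1.0          # x_i term (x_i^2 = x_i for binary x)
        Q[j][j] += 1.0          # x_j term
        Q[i][j] -= 2.0          # -2 x_i x_j cross term
    return Q


def qubo_value(Q, x):
    """Evaluate the shared objective x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))


def brute_force(Q, n):
    """Tiny reference solver over the unified form (exponential; demo only)."""
    best = max(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_value(Q, x))
    return best, qubo_value(Q, best)
```

On a triangle graph, both reductions give the expected optima through the same solver: the best independent set has one vertex (objective 1), and the best cut crosses two edges (objective 2), illustrating how one representation space can host heterogeneous problems.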

History
  • Received: April 15, 2025
  • Revised: November 29, 2025
  • Accepted: December 05, 2025