Abstract: To address the limitations of large language models (LLMs) in reasoning and decision-making within complex dynamic game scenarios, a dynamic-chain-of-reasoning-and-decision (DCoRD) method is proposed, motivated by the constraints of the text-generation mechanism and reasoning process of LLMs. DCoRD consists of a reasoning-decision framework and a dynamic decision option library, serving as structured prompt engineering to enhance the reasoning and decision-making abilities of LLMs. By incorporating task objectives to constrain output formats and content scope, the method reduces model hallucinations and improves decision accuracy. Four approaches were compared in a StarCraft II environment: free-generation mode, traditional chain-of-thought, chain-of-draft, and the proposed DCoRD method. Experimental results demonstrate that DCoRD significantly reduces token consumption and response latency while improving decision accuracy and task alignment, offering new theoretical and methodological insights for applying LLMs to game-theoretic decision tasks.