Memory Optimization Method for Control Flow Computation Graph
DOI:
Authors:
Affiliations:

1. School of Internet, Anhui University; 2. iFLYTEK Co., Ltd.

Author biography:

Corresponding author:

CLC number:

TP18

Fund project:

National Natural Science Foundation of China (62406003)




    Abstract:

    Artificial intelligence (AI) chips face on-chip memory limits in deep learning. Current optimization methods focus on static computation graphs, leaving room to improve memory efficiency for dynamic graphs. To overcome this limitation, a memory optimization framework for control-flow computation graphs was developed. The framework realized operator-level memory reuse within subgraphs and further achieved recursive reuse across subgraphs by exploiting control-flow characteristics. In addition, a ping-pong buffering strategy for weight data was introduced to mitigate the memory wall between on-chip and off-chip memory, thereby allowing overlapping of memory access and computation operations within subgraphs. Validation on the domestic LUNA AI chip demonstrated that the proposed framework improves on-chip memory utilization by 1% to 5.9% compared with existing methods. Moreover, the strategy effectively alleviates the memory wall problem by reducing data transfer time between on-chip and off-chip memory, resulting in execution efficiency improvements of up to 29%.
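The intra-subgraph memory reuse described in the abstract can be illustrated with a liveness-based offset planner: tensors whose lifetimes do not overlap may share the same on-chip offset. The sketch below is a generic, hypothetical illustration of this idea, not the paper's actual LUNA implementation; the names `Tensor` and `plan_offsets` are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    size: int        # bytes of on-chip memory required
    first_use: int   # index of the producing operator
    last_use: int    # index of the last consuming operator

def plan_offsets(tensors):
    """Greedy placement: process tensors in order of first use and put
    each one in the lowest gap not occupied by a still-live tensor."""
    placed = []   # (offset, tensor) for every tensor placed so far
    offsets = {}
    for t in sorted(tensors, key=lambda x: x.first_use):
        # Offsets still occupied when t comes alive.
        live = sorted(((off, u) for off, u in placed
                       if u.last_use >= t.first_use),
                      key=lambda p: p[0])
        candidate = 0
        for off, u in live:
            if off - candidate >= t.size:
                break                      # gap below this block fits
            candidate = max(candidate, off + u.size)
        offsets[t.name] = candidate
        placed.append((candidate, t))
    peak = max((offsets[t.name] + t.size for t in tensors), default=0)
    return offsets, peak
```

For a chain A -> B -> C of 100-byte tensors, A is dead by the time C is produced, so C reuses A's offset and the peak footprint is 200 bytes instead of 300.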
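The ping-pong buffering strategy for weight data can likewise be sketched in a hardware-agnostic way: while one buffer feeds the compute unit, the next layer's weights are fetched into the other buffer, overlapping transfer with computation. This is a minimal simulation of that overlap using a worker thread in place of a DMA engine; `run_layers`, `fetch`, and `compute` are hypothetical names, not the paper's API.

```python
import threading

def run_layers(layers, fetch, compute):
    """layers: ordered layer ids; fetch(l) loads layer l's weights;
    compute(l, w) runs layer l with weights w."""
    buffers = [None, None]          # the two "ping-pong" weight buffers
    results = []
    buffers[0] = fetch(layers[0])   # pre-load weights for the first layer
    for i, layer in enumerate(layers):
        cur, nxt = i % 2, (i + 1) % 2
        prefetch = None
        if i + 1 < len(layers):
            # Fetch the next layer's weights into the other buffer
            # while this layer computes (the overlap being modeled).
            def worker(j=i + 1, b=nxt):
                buffers[b] = fetch(layers[j])
            prefetch = threading.Thread(target=worker)
            prefetch.start()
        results.append(compute(layer, buffers[cur]))
        if prefetch is not None:
            prefetch.join()         # next weights are ready before reuse
    return results
```

With fetch and compute of comparable cost, each layer's weight transfer hides behind the previous layer's computation, which is the effect the abstract credits for the up-to-29% execution-efficiency gain.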

History
  • Received: 2025-05-06
  • Revised: 2025-09-18
  • Accepted: 2025-09-19
  • Published online:
  • Publication date: