Cite this article: QIAO Peng, HE Zhouyu, LI Rongchun, et al. Optimizing operator computation of MiniGo on high-performance heterogeneous accelerator[J]. Journal of National University of Defense Technology, 2024, 46(1): 131-140.
|
|
|
DOI: 10.11887/j.cn.202401014
Received: 2022-12-15
Funding: National Key Laboratory Stable Support Project (WDZC20205500104)
|
Optimizing operator computation of MiniGo on high-performance heterogeneous accelerator |
QIAO Peng1,2, HE Zhouyu1,2, LI Rongchun1,2, JIANG Jingfei1,2 |
(1. College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China; 2. National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China)
|
Abstract: |
An efficient parallel computing method was proposed based on the characteristics of the high-performance heterogeneous accelerator and the training mode of MiniGo. The on-chip computing resources were reasonably planned to achieve pipelined parallel execution across the heterogeneous devices. A shared-memory programming scheme was designed around the storage segments shared by the heterogeneous devices to reduce data transmission costs. According to the multiple computing resources within a digital signal processing (DSP) cluster, combined with the compute and memory-access characteristics of the operators, different parallel optimization strategies were designed for different operators. In addition, an easy-to-use high-performance operator library was implemented for TensorFlow. The experimental results show that this method realizes multi-core parallel computing of typical operators: the convolution operator achieved a speedup of 24.69 over single-core execution, and compared with a cut-down 8-core FT2000+ CPU, the speedups of training and self-play execution were 3.83 and 1.5, respectively.
Keywords: heterogeneous computing; operator optimization; convolutional neural networks; reinforcement learning
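
The abstract states only that the optimized operators are exposed to TensorFlow as an easy-to-use high-performance library; as a rough, hypothetical sketch of how such a library typically plugs in, the following C++ code registers a custom operator with TensorFlow and hands the computation off to an external accelerator kernel. The op name AccelConv2D, the kernel class, and the commented-out accel_conv2d_launch call are illustrative assumptions, not the paper's actual API.

// Minimal sketch of a TensorFlow custom-op wrapper (C++).
// Assumptions: the op name "AccelConv2D" and the accelerator runtime
// call accel_conv2d_launch(...) are hypothetical placeholders.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

// Declare the op's interface so TensorFlow graphs can reference it.
REGISTER_OP("AccelConv2D")
    .Input("input: float")
    .Input("filter: float")
    .Output("output: float")
    .SetShapeFn(shape_inference::UnknownShape);

class AccelConv2DOp : public OpKernel {
 public:
  explicit AccelConv2DOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}

  void Compute(OpKernelContext* ctx) override {
    const Tensor& input = ctx->input(0);
    const Tensor& filter = ctx->input(1);

    // Simplification for the sketch: SAME padding, stride 1, and equal
    // channel counts, so the output shape matches the input shape.
    Tensor* output = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, input.shape(), &output));

    // Hypothetical handoff to the accelerator runtime: the kernel would
    // run in parallel across the DSP cluster's cores, ideally reading
    // the tensors in place from the CPU/DSP shared storage segment
    // rather than copying them.
    // accel_conv2d_launch(input.flat<float>().data(),
    //                     filter.flat<float>().data(),
    //                     output->flat<float>().data());
  }
};

// Bind the kernel to the op. The compiled shared object is then loaded
// from Python with tf.load_op_library("libaccel_ops.so"), after which the
// op can be called like any built-in TensorFlow operator.
REGISTER_KERNEL_BUILDER(Name("AccelConv2D").Device(DEVICE_CPU), AccelConv2DOp);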
|
|
|
|
|