Programming tensor cores from an image processing DSL

Title: Programming tensor cores from an image processing DSL
Authors: Sioutas, S.; Stuijk, S.; Basten, T.; Somers, L.; Corporaal, H.
Publication year: 2020
Abstract: Tensor Cores (TCUs) are specialized units first introduced by NVIDIA in the Volta microarchitecture to accelerate matrix multiplications for deep learning and linear algebra workloads. While these units can provide significant speedups for specific applications, they remain difficult to program for the average user. In this paper, we extend the Halide DSL and compiler with the ability to utilize these units when generating code for a CUDA-based NVIDIA GPGPU. To this end, we introduce a new scheduling directive along with custom lowering passes that automatically transform a Halide AST so that code can be generated for the TCUs. We evaluate the generated code and show that it achieves over 5x speedup compared to manual Halide schedules without TCU support, while remaining within 20% of the NVIDIA cuBLAS implementations for mixed-precision GEMM and within 10% of manual CUDA implementations with WMMA intrinsics.
Subjects: GPGPUs; Halide; Matrix multiplication; Tensor cores
To reference this document use: http://resolver.tudelft.nl/uuid:d4523a67-6acb-4fcf-8db8-2466a66fbabe
DOI: https://doi.org/10.1145/3378678.3391880
TNO identifier: 878019
Publisher: ACM
ISBN: 9781450371315
Source: Proceedings of the 23rd International Workshop on Software and Compilers for Embedded Systems (SCOPES 2020), 25 May 2020, pp. 36-41
Bibliographical note: 23rd International Workshop on Software and Compilers for Embedded Systems, SCOPES 2020; Schloss Rheinfels, St. Goar, Germany; 25-26 May 2020; Code 160114
Document type: conference paper
Files: To receive the publication files, please send an e-mail request to the TNO Library.
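For context, the operation the abstract refers to as mixed-precision GEMM, which Tensor Cores execute at the warp level via WMMA fragments, is D = A·B + C with half-precision (FP16) inputs and single-precision (FP32) accumulation. The sketch below is only an illustration of that numerical contract using NumPy on a 16x16x16 tile (the basic WMMA tile shape); it is not the paper's Halide implementation, and the function name `mixed_precision_gemm` is chosen here for illustration.

```python
import numpy as np

def mixed_precision_gemm(A, B, C):
    """Emulate D = A @ B + C as Tensor Cores compute it:
    operands rounded to FP16, products accumulated in FP32."""
    # Round inputs to half precision, as WMMA operand fragments hold FP16.
    A16 = A.astype(np.float16)
    B16 = B.astype(np.float16)
    # Widen back to FP32 before multiplying so accumulation stays in FP32,
    # matching the FP32 accumulator fragment.
    return A16.astype(np.float32) @ B16.astype(np.float32) + C.astype(np.float32)

# One 16x16x16 tile, the canonical warp-level WMMA shape.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
C = np.zeros((16, 16), dtype=np.float32)
D = mixed_precision_gemm(A, B, C)
```

The FP16 rounding of the inputs is the source of the small accuracy loss relative to a pure FP32 GEMM, which is why comparisons against cuBLAS (as in the abstract) are made against its mixed-precision GEMM routines rather than full-precision ones.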