Towards an Approach to Identify Divergences in Hardware Designs for HPC Workloads
Abstract
Developing efficient hardware accelerators for the mathematical kernels used in scientific applications and machine learning has traditionally been a labor-intensive task. These accelerators typically require low-level programming in Verilog or another hardware description language, along with significant manual optimization effort. To alleviate this challenge, high-level hardware design tools such as Chisel and High-Level Synthesis have recently emerged. However, as with any compiler, some of the generated hardware may be suboptimal compared to expert-crafted designs. Understanding where these inefficiencies arise is crucial, as it provides valuable insights for both users and tool developers. In this paper, we propose a methodology that hierarchically decomposes mathematical kernels (such as Fourier transforms, matrix multiplication, and QR factorization) into a set of common building blocks, or primitives. These primitives are implemented in each programming environment, and the larger algorithms are then assembled from them. Furthermore, we employ an automated approach to measure the achievable frequency and required resources of each design. Performing this experimentation at every level of the hierarchy provides fairer comparisons between designs and offers guidance that helps both tool developers and hardware designers adopt better practices.
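As a minimal, hypothetical illustration of what such a primitive might look like (not taken from the paper), the sketch below expresses a multiply-accumulate building block in Chisel; the module name, bit widths, and single pipeline register are assumptions chosen for exposition, standing in for the kind of design point whose achievable frequency and resource usage an automated flow would measure.

```scala
// Hypothetical sketch: a multiply-accumulate (MAC) primitive in Chisel,
// the kind of reusable building block that FFT, matrix-multiplication,
// and QR kernels could be assembled from. Names and widths are illustrative.
import chisel3._

class MacPrimitive(width: Int) extends Module {
  val io = IO(new Bundle {
    val a   = Input(SInt(width.W))
    val b   = Input(SInt(width.W))
    val acc = Input(SInt((2 * width).W))
    val out = Output(SInt((2 * width + 1).W))
  })

  // Full-precision product plus accumulator; +& keeps the carry bit so
  // no result bits are silently dropped.
  val sum = (io.a * io.b) +& io.acc

  // One pipeline register: a simple design knob whose effect on achievable
  // frequency and resource usage an automated sweep could report.
  io.out := RegNext(sum)
}
```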