A Multi-Level Compiler Backend for Accelerated Micro-Kernels Targeting RISC-V ISA Extensions
High-performance micro-kernels must fully exploit today’s diverse and specialized hardware to deliver peak performance for deep neural networks (DNNs). While higher-level optimizations for DNNs are offered by numerous compilers (e.g., MLIR, TVM, OpenXLA), performance-critical micro-kernels are left to specialized code generators or hand-written assembly. Even though widely adopted compilers (e.g., LLVM, GCC) offer highly tuned backends, their CPU-focused input abstraction, structure-less internal representation, and general-purpose best-effort design inhibit tailored code generation for innovative hardware. We argue it is time to widen the classical hourglass backend and embrace progressive lowering across a diverse set of structured abstractions to bring domain-specific code generation to compiler backends. We demonstrate this concept by implementing a custom backend for a RISC-V-based accelerator with hardware loops and streaming registers, leveraging knowledge about the hardware at levels of abstraction that match its custom ISA. We use incremental register allocation over structured IRs, dropping classical spilling heuristics, and show up to 90% FPU utilization across key DNN kernels. By breaking the backend hourglass, we re-open the path from domain-specific abstractions to specialized hardware.