We introduce VEGA, an AI-driven system that eases the development of compiler backends for new targets. Our approach categorizes functions from existing backends into function groups, each comprising the target-specific implementations of a standard compiler interface function, abstracted as a single function template. Generating a new backend then reduces to customizing these function templates to the requirements of the new target. To capitalize on AI's capabilities in code generation, VEGA maps the statements in each target-specific version of a function template into feature vectors, distinguishing target-independent from target-specific properties. Leveraging a pre-trained model, VEGA can efficiently auto-generate a version of each function template tailored to a specific target, enabling the construction of a complete compiler backend for a new target based solely on its target description files.
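As a loose illustration of this idea, the sketch below tags each token of a backend statement as target-specific or target-independent and then instantiates the statement for a new target by rewriting only the flagged tokens. All names and the token vocabulary here are hypothetical and do not come from VEGA's implementation; a real system would mine the vocabulary from the target description files and use a learned model rather than string matching.

```python
import re

# Hypothetical vocabulary of target-specific names; in a real system this
# would be derived from the reference backend's target description files.
TARGET_SPECIFIC = {"RISCV", "GPR"}

def to_feature_vector(statement):
    """Map a statement to (token, flag) pairs: flag 1 = target-specific,
    flag 0 = target-independent (shared across backends)."""
    tokens = re.findall(r"\w+|::|\S", statement)
    return [(t, int(any(s in t for s in TARGET_SPECIFIC))) for t in tokens]

def retarget(statement, mapping):
    """Instantiate a template statement for a new target by rewriting
    only the tokens flagged as target-specific."""
    def sub(m):
        tok = m.group(0)
        if any(s in tok for s in TARGET_SPECIFIC):
            for old, new in mapping.items():
                tok = tok.replace(old, new)
        return tok
    return re.sub(r"\w+", sub, statement)
```

For example, `retarget("return &RISCV::GPRRegClass;", {"RISCV": "XCore", "GPR": "GR"})` rewrites only the two target-specific tokens, leaving the target-independent skeleton of the statement untouched.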

We evaluated VEGA on three distinct targets: a CPU processor (RISC-V), a customized processor with instruction extensions (RI5CY), and an IoT processor (xCORE). VEGA proved highly efficient, generating each compiler backend in under an hour and thus substantially improving developer productivity. Across the three targets, VEGA achieved accuracy rates of 71.5%, 73.2%, and 62.2% over all generated functions, significantly outperforming the traditional fork-flow method, which yielded less than 8% accuracy. Moreover, VEGA reports explicit confidence scores for generated functions and statements, allowing developers to easily identify the areas that require only minimal manual intervention. This research is poised to significantly enhance the effectiveness of traditional compiler backend development.

Mon 3 Mar

Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:20
ML Tools & Optimization
Main Conference at Casuarina Ballroom (Level 2)
Chair(s): Jeronimo Castrillon TU Dresden, Germany
14:00
20m
Talk
VEGA: Automatically Generating Compiler Backends Using a Pre-Trained Transformer Model
Main Conference
Ming Zhong (SKLP, Institute of Computing Technology, CAS), Fang Lv (Institute of Computing Technology, Chinese Academy of Sciences), Lulin Wang (SKLP, ICT, CAS, Beijing, China), Lei Qiu (SKLP, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences), Yingying Wang (SKLP, ICT, CAS, Beijing, China), Ying Liu (Institute of Computing Technology, Chinese Academy of Sciences), Huimin Cui (Institute of Computing Technology, Chinese Academy of Sciences), Xiaobing Feng (ICT, CAS), Jingling Xue (UNSW Sydney)
14:20
20m
Talk
IntelliGen: Instruction-Level Auto-Tuning for Tensor Program with Monotonic Memory Optimization
Main Conference
Zixuan Ma, Haojie Wang, Jingze Xing, Shuhong Huang, Liyan Zheng, Chen Zhang, Huanqi Cao, Kezhao Huang, Mingshu Zhai, Shizhi Tang, Penghan Wang, Jidong Zhai (all Tsinghua University)
14:40
20m
Talk
GraalNN: Context-Sensitive Static Profiling with Graph Neural Networks
Main Conference
Lazar Milikic, Milan Cugurovic, Vojin Jovanovic (all Oracle Labs)
15:00
20m
Talk
LLM-Vectorizer: LLM-based Verified Loop Vectorizer
Main Conference
Jubi Taneja (Microsoft Research), Avery Laird (University of Toronto), Cong Yan (Microsoft Research), Madan Musuvathi (Microsoft Research), Shuvendu K. Lahiri (Microsoft Research)