Enhancing Deployment-time Predictive Model Robustness for Code Analysis and Optimization
Supervised machine learning techniques have shown promising results in code analysis and optimization problems. However, a learning-based solution can be brittle: minor changes in hardware or application workloads, such as a new CPU architecture or code pattern, may jeopardize decision accuracy and ultimately undermine model robustness. We introduce Prom, an open-source Python toolkit for enhancing the robustness and performance of predictive models against such changes during deployment. Prom achieves this by using statistical assessments to identify test samples prone to misprediction and then leveraging feedback on these samples to improve the deployed model. We showcase Prom by applying it to 13 representative machine learning models across 5 code analysis and optimization tasks. Our extensive evaluation demonstrates that Prom can successfully identify an average of 97% (up to 100%) of mispredictions. By relabeling up to 5% of the Prom-identified samples through incremental learning, Prom can help a deployed model achieve performance comparable to that attained during its training phase.
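To illustrate the kind of statistical assessment the abstract describes, the sketch below flags test samples whose predictions look unreliable using a conformal-prediction-style p-value computed from a held-out calibration set. This is a minimal illustration under stated assumptions, not Prom's actual implementation: the nonconformity score (one minus the probability the model assigns to a label), the p-value threshold `alpha`, and the toy data are all assumptions introduced here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonconformity(probs, labels):
    # Score: 1 minus the probability the model assigns to the given label.
    # Higher score = the sample "conforms" less to what the model expects.
    return 1.0 - probs[np.arange(len(labels)), labels]

def flag_mispredictions(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Flag test predictions that look statistically unreliable.

    A test sample is flagged when the fraction of calibration samples
    with a nonconformity score at least as large as its own (a crude
    p-value) falls below `alpha`. Prom's exact statistical test may
    differ; this is only a conformal-prediction-style sketch.
    """
    cal_scores = nonconformity(cal_probs, cal_labels)
    preds = test_probs.argmax(axis=1)
    test_scores = nonconformity(test_probs, preds)
    # p-value per test sample: share of calibration scores >= its score.
    pvals = (cal_scores[None, :] >= test_scores[:, None]).mean(axis=1)
    return preds, pvals < alpha  # True = likely misprediction

# Toy calibration set: simulated class probabilities on a 3-class task,
# with the simplifying assumption that the model is right on calibration data.
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)

# Two test samples: one confident prediction, one borderline prediction.
test_probs = np.array([[0.98, 0.01, 0.01],
                       [0.40, 0.35, 0.25]])
preds, flagged = flag_mispredictions(cal_probs, cal_labels, test_probs)
```

In a deployment loop like the one the abstract outlines, the flagged samples would be sent for relabeling, and the small relabeled set used to incrementally update the deployed model.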
Mon 3 Mar (times in Pacific Time, US & Canada)
Session: 10:00 - 11:00

10:00 (20m, Talk) Synthesis of Sorting Kernels. Main Conference.

10:20 (20m, Talk) Tensorize: Fast Synthesis of Tensor Programs from Legacy Code using Symbolic Tracing, Sketching and Solving. Main Conference. Alexander Brauckmann (University of Edinburgh), Luc Jaulmes (University of Edinburgh, United Kingdom), José Wesley De Souza Magalhães (University of Edinburgh), Elizabeth Polgreen (University of Edinburgh), Michael F. P. O'Boyle (University of Edinburgh)

10:40 (20m, Talk) Enhancing Deployment-time Predictive Model Robustness for Code Analysis and Optimization. Main Conference. Huanting Wang (University of Leeds), Patrick Lenihan (University of Leeds), Zheng Wang (University of Leeds)