
Pre-Training Representations of Binary Code Using Contrastive Learning
  • Yifan Zhang
  • Chen Huang
  • Yueke Zhang
  • Kevin Cao
  • Scott Anderson
  • Huajie Shao
  • Kevin Leach
  • Yu Huang
Yifan Zhang
Vanderbilt University

Corresponding Author:[email protected]



Binary code analysis and comprehension are critical in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary code lacks high-level semantic information and is more difficult for human engineers to understand and analyze. Moreover, limited prior work has explored incorporating multiple program representations into binary analysis. In this paper, we present ContraBin, a contrastive learning technique that integrates source code and comment information with binaries to create an embedding capable of aiding binary analysis and comprehension tasks. Specifically, ContraBin comprises three components: (1) a primary contrastive learning method for initial pre-training, (2) a simplex interpolation method to integrate source code, comments, and binary code, and (3) an intermediate representation learning algorithm to train a binary code embedding. We evaluate the effectiveness of ContraBin on four indicative downstream tasks related to binary code: algorithmic functionality classification, function name recovery, code summarization, and reverse engineering. The results show that ContraBin considerably improves performance on all four tasks, measured by accuracy, mean average precision, and BLEU scores as appropriate. ContraBin is the first language representation model to incorporate source code, binary code, and comments into contrastive code representation learning, and it is intended to contribute to the field of binary code analysis. The dataset used in this study is available for further research.
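To make the abstract's pipeline concrete, the sketch below illustrates the two core ideas it names: simplex interpolation (a convex combination of source, comment, and binary embeddings with weights drawn from the probability simplex) and a contrastive objective (here, a standard InfoNCE loss) that pulls the interpolated representation toward its matching binary embedding. This is a minimal, hedged illustration using NumPy and randomly generated embeddings; the function names, Dirichlet sampling, and loss choice are our assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def simplex_weights(k, rng):
    # Assumed: sample interpolation weights uniformly from the (k-1)-simplex
    # via a symmetric Dirichlet; weights are nonnegative and sum to 1.
    return rng.dirichlet(np.ones(k))

def interpolate(embeddings, weights):
    # Convex combination of modality embeddings
    # (e.g., source code, comments, binary code), each of shape (batch, dim).
    return sum(w * e for w, e in zip(weights, embeddings))

def info_nce(anchors, positives, temperature=0.07):
    # Standard InfoNCE contrastive loss over a batch: each anchor's positive
    # is the same-index row of `positives`; all other rows act as negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: in ContraBin these would come from modality-specific encoders.
rng = np.random.default_rng(0)
batch, dim = 4, 8
source = rng.normal(size=(batch, dim))
comment = rng.normal(size=(batch, dim))
binary = rng.normal(size=(batch, dim))

w = simplex_weights(3, rng)
mixed = interpolate([source, comment, binary], w)
loss = info_nce(mixed, binary)
```

In this sketch, a training step would backpropagate `loss` through the encoders so that binary embeddings align with the interpolated multi-modal view; varying the simplex weights across steps gradually transfers source and comment semantics into the binary representation.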