
Privacy-Preserving Fine-Tuning of Artificial Intelligence (AI) Foundation Models with Federated Learning, Differential Privacy, Offsite Tuning, and Parameter-Efficient Fine-Tuning (PEFT)
Jun Zhao
Nanyang Technological University

Corresponding Author: [email protected]



Artificial Intelligence (AI) Foundation Models (FMs), pre-trained on massive datasets, have recently emerged as a pivotal asset for a wide array of tasks. Examples of FMs include Large Language Models (LLMs), Large Vision Models (LVMs), and Large Multimodal Models (LMMs). The adaptability of FMs, achieved through fine-tuning, enables these models to perform exceptionally across diverse domains. However, the fine-tuning process often entails data centralization, which raises privacy concerns. For instance, in healthcare, a hospital may wish to fine-tune an AI model on patient records, but transmitting these records to a central server exposes sensitive data. To mitigate these privacy challenges, this research seeks to employ privacy-preserving technologies such as federated learning (FL), differential privacy (DP), and emulator-based tuning (i.e., offsite tuning) in combination with parameter-efficient fine-tuning (PEFT) techniques to refine FMs without compromising data privacy.
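To make the combination of these techniques concrete, the following is a minimal, illustrative sketch (not the paper's actual method) of how FL, DP, and PEFT can interact in one training round: each client trains only small LoRA-style low-rank adapters over a frozen weight matrix (PEFT), clips and noises its adapter gradient (DP-SGD-style privatization), and a server averages the privatized updates (FL). All dimensions, learning rates, and noise levels below are hypothetical toy values chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight: a stand-in for one layer of a foundation model.
d_out, d_in, r = 8, 8, 2          # r is the low-rank adapter dimension
W = rng.standard_normal((d_out, d_in))

# LoRA-style adapters: only A and B would be trained (PEFT); W stays frozen.
A = np.zeros((d_out, r))
B = rng.standard_normal((r, d_in)) * 0.01

def forward(x):
    # Effective weight is the frozen W plus the low-rank update A @ B.
    return (W + A @ B) @ x

def privatize(grad, clip=1.0, sigma=0.5):
    # DP-SGD-style step: clip the update's L2 norm, then add Gaussian noise.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

def client_update(x, y):
    # Gradient of 0.5 * ||(W + A@B)x - y||^2 with respect to A,
    # computed on one client's local example and then privatized.
    err = forward(x) - y
    return privatize(np.outer(err, B @ x))

# One toy federated round: four clients contribute privatized adapter
# gradients; the server averages them and applies the step to A only.
clients = [(rng.standard_normal(d_in), rng.standard_normal(d_out))
           for _ in range(4)]
avg_grad = np.mean([client_update(x, y) for x, y in clients], axis=0)
A -= 0.1 * avg_grad
```

The key point the sketch illustrates is that only the small adapter matrices (here 8x2 and 2x8, versus the 8x8 frozen weight) are communicated and privatized, which is what makes the FL + DP combination tractable for large FMs.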