
Hypothetical Framework for CPU Micro Containerization: Bridging the Performance Gap with GPUs in AI
M Mostagir Bhuiyan

Corresponding Author: [email protected]


Abstract

The growing scale of Artificial Intelligence (AI) and Machine Learning (ML) models drives computational demands that motivate alternatives to the traditional reliance on Graphics Processing Units (GPUs). This paper introduces a new approach for making fuller use of Central Processing Units (CPUs) through a micro-containerization concept. The proposed approach theoretically partitions CPU cores into isolated, efficient processing units called 'micro containers', emulating GPU-style parallel processing with the aim of approaching the efficiency of GPU-based environments. Through a theoretical study of the architecture and its operational dynamics, the paper shows how micro-containerization can help democratize access to advanced computational resources. The presented micro-containerization model is scalable and cost-effective, and it has the potential to reshape how computational strategies handle data-intensive machine learning workloads. By bridging current hardware constraints and future computational needs, micro-containerization offers a path toward more accessible and sustainable high-performance computing for a wide class of AI applications.
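To make the core idea concrete, the sketch below illustrates one possible reading of "micro containers": worker processes pinned to disjoint CPU cores, each handling a slice of a data-parallel workload. This is an illustrative assumption, not the paper's implementation; the names `micro_container` and `dot_chunk` are hypothetical, and the example assumes Linux, where Python's `os.sched_setaffinity` is available.

```python
# Minimal sketch (assumed interpretation, not the paper's method): pin worker
# processes to distinct CPU cores so each acts as an isolated "micro container"
# executing one shard of a data-parallel task.
import os
import multiprocessing as mp


def micro_container(core_id, task, *args):
    """Run `task` restricted to a single CPU core (hypothetical helper)."""
    os.sched_setaffinity(0, {core_id})  # bind the current process to one core (Linux only)
    return task(*args)


def dot_chunk(a, b):
    """Toy data-parallel workload: partial dot product of two sequences."""
    return sum(x * y for x, y in zip(a, b))


if __name__ == "__main__":
    a = list(range(1_000_000))
    b = list(range(1_000_000))
    n_cores = os.cpu_count() or 1

    # Shard the data across one "micro container" per available core.
    chunks = [(a[i::n_cores], b[i::n_cores]) for i in range(n_cores)]
    with mp.Pool(n_cores) as pool:
        partials = pool.starmap(
            micro_container,
            [(core, dot_chunk, ca, cb) for core, (ca, cb) in enumerate(chunks)],
        )
    print("dot product:", sum(partials))
```

In this reading, the per-core affinity plays the role of isolation between micro containers, while the pool distributes shards much as a GPU distributes work across its compute units; the paper's actual mechanism may differ.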
14 May 2024: Submitted to TechRxiv
20 May 2024: Published in TechRxiv