BCEdge: SLO-Aware DNN Inference Services with Adaptive Batching on Edge Platforms
  • Ziyang Zhang ,
  • Huan Li ,
  • Yang Zhao ,
  • Changyao Lin ,
  • Jie Liu
Ziyang Zhang
Harbin Institute of Technology

Corresponding Author:[email protected]


Abstract

As deep neural networks (DNNs) are applied to a wide range of edge intelligent applications, it is critical for edge inference platforms to achieve both high throughput and low latency. Edge platforms that serve multiple DNN models pose new challenges for scheduler design. First, each request may have a different service level objective (SLO) to improve quality of service (QoS). Second, the edge platform should be able to efficiently schedule multiple heterogeneous DNN models so that system utilization is improved. To meet these two goals, this paper proposes BCEdge, a novel learning-based scheduling framework that enables adaptive batching and concurrent execution of DNN inference services on edge platforms. We define a utility function to evaluate the trade-off between throughput and latency. The scheduler in BCEdge leverages maximum entropy-based deep reinforcement learning (DRL) to maximize utility by automatically co-optimizing 1) the batch size and 2) the number of concurrently executing models. Our prototype, implemented on different edge platforms, shows that BCEdge improves utility by up to 37.6% on average compared to state-of-the-art solutions, while satisfying SLOs.
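The abstract does not give the exact form of the utility function, but a minimal sketch can illustrate the intended throughput/latency trade-off with an SLO penalty. All names, coefficients, and the penalty shape below are assumptions for illustration only, not the paper's actual formulation.

```python
# Hypothetical sketch of an SLO-aware utility in the spirit of BCEdge:
# reward throughput, penalize latency, and heavily penalize SLO misses.
# Coefficients (alpha, beta, penalty weight) are illustrative assumptions.

def utility(throughput_rps: float, latency_ms: float, slo_ms: float,
            alpha: float = 1.0, beta: float = 1.0) -> float:
    """Trade off throughput against latency under a latency SLO."""
    reward = alpha * throughput_rps - beta * latency_ms
    if latency_ms > slo_ms:
        # Steep extra penalty once the service level objective is violated.
        reward -= 10.0 * (latency_ms - slo_ms)
    return reward

# Larger batch sizes and higher concurrency raise throughput but also
# latency; a DRL scheduler would search (batch size, concurrency) settings
# that maximize a utility of this kind.
print(utility(200.0, 40.0, 50.0))  # within SLO
print(utility(260.0, 80.0, 50.0))  # SLO violated, penalized
```

Under these assumed weights, a configuration that gains throughput but breaches the SLO scores far worse than a slower, SLO-compliant one, which is the qualitative behavior the paper's scheduler optimizes for.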