Multi-User Linearly-Separable Distributed Computing

In this work, we explore the problem of multi-user linearly-separable
distributed computation, where N servers help compute the desired
functions (jobs) of K users, and where each desired function can be
written as a linear combination of up to L (generally non-linear)
subtasks (or sub-functions). Each server computes some of the subtasks
and communicates a function of its computed outputs to some of the users,
and each user then combines its received data to recover its desired
function. We explore the trade-off between computation and communication:
how many servers must compute each subtask versus how much data each
user must receive.

For a matrix F representing the linearly-separable form of the set of requested functions, our problem becomes equivalent to the open problem of sparse matrix factorization F=DE over finite fields, where a sparse decoding matrix D and a sparse encoding matrix E imply reduced communication and computation costs, respectively. This paper establishes a novel relationship between our distributed computing problem, matrix factorization, syndrome decoding and covering codes. To reduce the computation cost, the above D is drawn from covering codes or from a class of so-called 'partial covering' codes introduced here, whose study yields the computation cost results that we present.
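As a minimal illustrative sketch (not the paper's construction), the factorization F = DE over GF(2) can be checked numerically: D is a K x N decoding matrix whose row supports indicate which servers each user hears from, and E is an N x L encoding matrix whose row supports indicate which subtasks each server computes. All matrix values and dimensions below are hypothetical.

```python
import numpy as np

# Hypothetical sizes: K users, N servers, L subtasks.
K, N, L = 3, 4, 5

# Sparse decoding matrix D (K x N): nonzeros in row k mark the
# servers that transmit to user k, so D's density tracks the
# communication cost.
D = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1]], dtype=int)

# Sparse encoding matrix E (N x L): nonzeros in row n mark the
# subtasks computed at server n, so E's density tracks the
# computation cost.
E = np.array([[1, 1, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0],
              [1, 0, 0, 0, 1]], dtype=int)

# The recoverable function matrix F = DE over the finite field GF(2).
F = (D @ E) % 2

comm_cost = int(D.sum())  # total symbols received across all users
comp_cost = int(E.sum())  # total subtask computations across all servers
print(F)
print(comm_cost, comp_cost)
```

The design question the paper studies is the reverse direction: given a target F, find a factorization F = DE in which D (and E) are as sparse as possible.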


Oct 2023