CoCo-FL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization
Time: Tuesday, July 12th, 6pm - 7pm PDT
Location: Level 2 Lobby
Description: We study the problem of federated learning (FL) where participating devices have heterogeneous communication and computation resources. Most state-of-the-art methods address this heterogeneity by scaling the width of the trained part of the neural network (NN). However, width scaling tightly couples the communication and computation requirements, so resources are wasted and accuracy is reduced. We present the first technique that optimizes independently for specific communication and computation requirements by training only a subset of layers while freezing and quantizing the others. This technique outperforms the state of the art in both final accuracy and convergence speed.
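The core idea of freezing and quantizing part of the network can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the layer representation, the `configure_round` helper, and the uniform 8-bit quantizer are assumptions chosen for clarity.

```python
# Toy sketch (assumed structure, not the paper's code): each round, a
# resource-constrained device freezes and quantizes a prefix of layers
# and trains only the remaining ones. Frozen layers need no gradient
# computation or weight upload, so computation and communication costs
# can be reduced independently of the trained layers' width.

def quantize(weights, bits=8):
    """Uniform symmetric quantization of a frozen layer's weights."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in weights]

def configure_round(layers, n_frozen):
    """Freeze (and quantize) the first n_frozen layers; train the rest."""
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= n_frozen
        if not layer["trainable"]:
            layer["weights"] = quantize(layer["weights"])
    return layers

# Toy model with three layers; a weak device freezes the first two.
model = [{"weights": [0.31, -0.72, 0.05]} for _ in range(3)]
configured = configure_round(model, n_frozen=2)
print([layer["trainable"] for layer in configured])  # [False, False, True]
```

In a real framework the same effect would be achieved by disabling gradients for frozen layers (e.g., `requires_grad=False` in PyTorch) and transmitting their weights in a low-bit format.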