Computationally Efficient Auto-Weighted Aggregation for Heterogeneous Federated Learning
Published in IEEE International Conference on Edge Computing and Communications (EDGE), 2022
Recommended citation: Z. Talukder and M. A. Islam, "Computationally Efficient Auto-Weighted Aggregation for Heterogeneous Federated Learning," 2022 IEEE International Conference on Edge Computing and Communications (EDGE), 2022, pp. 12-22, doi: 10.1109/EDGE55608.2022.00015. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9860312
Abstract: Federated Learning (FL) offers a privacy-preserving, massively distributed Machine Learning (ML) paradigm in which many clients cooperate to train a shared model. FL, however, is susceptible to data heterogeneity because its clients draw on diverse data sources. Prior works employ auto-weighted model aggregation to mitigate this heterogeneity and minimize the impact of unfavorable model updates, but existing approaches require extensive computation for statistical analysis of clients' model updates. To circumvent this, we propose FedASL (Federated Learning with Auto-weighted Aggregation based on Standard Deviation of Training Loss), which uses only the local training loss of FL clients to auto-weight the model aggregation. Our evaluation on three datasets under various data corruption scenarios reveals that FedASL effectively thwarts data corruption from bad clients while incurring as little as one-tenth the computation cost of existing approaches.
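To illustrate the core idea of weighting aggregation by loss statistics, here is a minimal Python sketch. It assumes a Gaussian-style kernel that down-weights clients whose local training loss deviates strongly from the population mean; the exact FedASL weighting rule is defined in the paper, so this is only a stand-in showing how weights can be derived from training losses alone, without statistical analysis of the model updates themselves.

```python
import numpy as np

def auto_weights_from_losses(losses):
    """Illustrative auto-weighting from local training losses.

    NOTE: this Gaussian-style down-weighting is an assumption for
    illustration; the actual FedASL rule is given in the paper.
    """
    losses = np.asarray(losses, dtype=float)
    mu, sigma = losses.mean(), losses.std()
    if sigma == 0.0:
        # All clients report identical losses: fall back to uniform weights.
        return np.full_like(losses, 1.0 / len(losses))
    w = np.exp(-((losses - mu) ** 2) / (2.0 * sigma**2))
    return w / w.sum()  # normalize so the weights sum to 1

def aggregate(client_models, losses):
    """Weighted average of client parameters (each a list of arrays)."""
    w = auto_weights_from_losses(losses)
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*client_models)]

# Example: the fourth client reports an implausibly large loss
# (e.g., corrupted local data) and receives a near-zero weight.
losses = [0.42, 0.39, 0.45, 3.10]
print(auto_weights_from_losses(losses))
```

Because the weights depend only on scalar losses already produced during local training, this style of aggregation avoids the per-parameter statistics over model updates that make prior auto-weighting approaches expensive.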