End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things

Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

121 Scopus citations

Abstract

Federated learning (FL) and split neural networks (SplitNN) are state-of-the-art distributed machine learning techniques that enable machine learning without directly accessing raw data on clients or end devices. In theory, such distributed machine learning techniques have great potential in distributed applications, in which data are typically generated and collected on the client side while the collected data must be processed by an application deployed on the server side. However, there is still a significant gap in evaluating the performance of those techniques concerning their practicality in Internet of Things (IoT)-enabled distributed systems built from resource-constrained devices. This work is the first attempt to provide empirical comparisons of FL and SplitNN in real-world IoT settings in terms of learning performance and device implementation overhead. We consider a variety of datasets, different model architectures, multiple clients, and various performance metrics. For learning performance (i.e., model accuracy and convergence time), we empirically evaluate both FL and SplitNN under different types of data distributions, such as imbalanced and non-independent and identically distributed (non-IID) data. We show that the learning performance of SplitNN is better than that of FL under an imbalanced data distribution but worse than that of FL under an extreme non-IID data distribution. For implementation overhead, we mount both FL and SplitNN on Raspberry Pi devices and comprehensively evaluate their overhead, including training time, communication overhead, power consumption, and memory usage. Our key observation is that under the IoT scenario where communication traffic is the primary concern, FL appears to perform better than SplitNN because FL has a significantly lower communication overhead. However, our experimental results also demonstrate that neither FL nor SplitNN can be applied to a heavy model, e.g., one with several million parameters, on resource-constrained IoT devices, because the training cost would be too expensive for such devices. Source code is released and available at: https://github.com/Minki-Kim95/Federated-Learning-and-Split-Learning-with-raspberry-pi.
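For context on the first of the two paradigms compared above, the following is a minimal sketch of one FL communication round in the FedAvg style: each client trains a copy of the global model on its local data, and only model weights cross the network, which is why FL's per-round traffic is independent of the local dataset size. The model, data loaders, and hyperparameters are illustrative assumptions, not the paper's exact experimental setup.

```python
# A minimal FedAvg-style round in PyTorch (a sketch, not the paper's code).
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=0.01):
    """Client side: train a copy of the current global model on local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    """Server side: collect locally trained weights and average them.
    Only the state_dicts travel between clients and server."""
    states = [local_update(global_model, dl) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```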
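A corresponding sketch of a single split-learning (SplitNN) training step makes the source of its higher communication overhead visible: the cut-layer activations ("smashed data") go to the server for every batch, and their gradients come back. The cut point, layer sizes, and optimizers are illustrative assumptions; in a real deployment, the two halves run on different machines and the tensors marked below would be serialized over the network.

```python
# A minimal split-learning step in PyTorch (a sketch, not the paper's code).
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 10))
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_step(x, y):
    """One training step, with the client/server boundary made explicit."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client-side forward up to the cut layer; the detached activations
    # are what would cross the network to the server, once per batch.
    smashed = client_net(x)
    smashed_remote = smashed.detach().requires_grad_(True)

    # Server-side forward, loss, and backward down to the cut layer.
    loss = loss_fn(server_net(smashed_remote), y)
    loss.backward()
    server_opt.step()

    # The server returns the gradient at the cut layer; the client
    # finishes backpropagation through its own layers.
    smashed.backward(smashed_remote.grad)
    client_opt.step()
    return loss.item()
```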

Original language: English
Title of host publication: Proceedings - 2020 International Symposium on Reliable Distributed Systems, SRDS 2020
Publisher: IEEE Computer Society
Pages: 91-100
Number of pages: 10
ISBN (Electronic): 9781728176260
DOIs
State: Published - Sep 2020
Event: 39th International Symposium on Reliable Distributed Systems, SRDS 2020 - Virtual, Shanghai, China
Duration: 21 Sep 2020 → 24 Sep 2020

Publication series

Name: Proceedings of the IEEE Symposium on Reliable Distributed Systems
Volume: 2020-September
ISSN (Print): 1060-9857

Conference

Conference: 39th International Symposium on Reliable Distributed Systems, SRDS 2020
Country/Territory: China
City: Virtual, Shanghai
Period: 21/09/20 → 24/09/20

Keywords

  • distributed machine learning
  • federated learning
  • IoT
  • split learning
