
Details

Author(s) / Contributors
Title
A New Look and Convergence Rate of Federated Multitask Learning With Laplacian Regularization
Is part of
  • IEEE Transactions on Neural Networks and Learning Systems, 2024-06, Vol.35 (6), p.8075-8085
Place / Publisher
United States: IEEE
Year of publication
2024
Source
IEEE Electronic Library Online
Descriptions/Notes
  • Non-independent and identically distributed (non-IID) data among clients is considered the key factor that degrades the performance of federated learning (FL). Several approaches to handling non-IID data, such as personalized FL and federated multitask learning (FMTL), are of great interest to research communities. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multitask learning. Then, we introduce a new view of the FMTL problem, which shows for the first time that the formulated FMTL problem can be used for both conventional FL and personalized FL. We also propose two algorithms, FedU and decentralized FedU (dFedU), to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform the conventional algorithms FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, as well as pFedMe and Per-FedAvg in personalized FL settings.
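The Laplacian regularization mentioned in the abstract can be illustrated with a minimal sketch. Assuming the common formulation min over client models w_1..w_n of sum_i F_i(w_i) + (eta/2) * sum_{i,j} a_ij * ||w_i - w_j||^2, the penalty pulls the models of related clients toward each other. All names here (eta, adjacency, fedu_mix) are illustrative, not taken from the paper's actual code:

```python
# Hedged sketch of a Laplacian-regularized multitask penalty and one
# descent step on it; client models are plain lists of floats.

def laplacian_penalty(models, adjacency, eta):
    """Penalty (eta/2) * sum_{i,j} a_ij * ||w_i - w_j||^2 over client models."""
    total = 0.0
    n = len(models)
    for i in range(n):
        for j in range(n):
            diff = sum((wi - wj) ** 2 for wi, wj in zip(models[i], models[j]))
            total += adjacency[i][j] * diff
    return 0.5 * eta * total

def fedu_mix(models, adjacency, eta, lr):
    """One gradient step of each w_i on the penalty only:
    w_i <- w_i - lr * eta * sum_j a_ij * (w_i - w_j),
    which moves related clients' models closer together."""
    n, d = len(models), len(models[0])
    new_models = []
    for i in range(n):
        grad = [0.0] * d
        for j in range(n):
            for k in range(d):
                grad[k] += adjacency[i][j] * (models[i][k] - models[j][k])
        new_models.append([models[i][k] - lr * eta * grad[k] for k in range(d)])
    return new_models

# Two clients with symmetric weight a_01 = a_10 = 1: the step halves
# the gap between their (one-dimensional) models and reduces the penalty.
models = [[0.0], [2.0]]
adjacency = [[0.0, 1.0], [1.0, 0.0]]
updated = fedu_mix(models, adjacency, eta=1.0, lr=0.25)
```

In the full algorithms this mixing step would be interleaved with local training on each client's own loss F_i; the sketch isolates only the regularization term.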
Language
English
Identifiers
ISSN: 2162-237X
eISSN: 2162-2388
DOI: 10.1109/TNNLS.2022.3224252
Title ID: cdi_ieee_primary_9975151
