
Details

Author(s) / Contributors
Title
Towards Efficient Secure Aggregation for Model Update in Federated Learning
Is part of
  • GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020, p.1-6
Place / Publisher
IEEE
Year of publication
2020
Source
IEL
Descriptions / Notes
  • Currently, large numbers of IoT devices generate huge amounts of data in edge networks, which opens up many research directions and applications for machine learning. However, traditional machine learning requires the data to be sent to a server and trained centrally, which wastes bandwidth and exposes individuals' privacy. Federated learning instead trains models locally on each device and sends only model updates to the central server for aggregation. However, the security of the model updates during aggregation must also be carefully addressed. Existing works mainly rely on secure multiparty computation or differential privacy, which either depend on heavy encryption or reduce accuracy. In this paper, we propose an efficient secure aggregation method for model updates in federated learning: the model updates from each participant are pre-processed, and only a portion of the processed updates is encrypted with functional encryption for inner products, which still protects the whole parameter vector and thus achieves efficient aggregation of the model update vectors. Security analysis and experimental evaluation demonstrate that our scheme aggregates the model updates efficiently without losing security.
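  • Note: the abstract names functional encryption for inner products as the primitive protecting the encrypted portion of each processed update. As a rough illustration only, and not the paper's actual construction, the following toy DDH-style Python sketch (in the spirit of Abdalla et al.'s inner-product functional encryption) shows how a key holder can learn only the inner product of an encrypted vector x with an authorized weight vector y. The modulus size, the generator, the quantization of updates to small non-negative integers, and all function names are assumptions made for the example; the parameters are far too small for any real security.

    # Toy inner-product functional encryption sketch (illustrative, insecure).
    # The decryptor with a key for y learns <x, y> and nothing else about x.
    import random

    P = 2 ** 61 - 1          # toy prime modulus (assumption; far too small in practice)
    G = 3                    # fixed group element used as the base (assumption)

    def setup(length):
        """Master secret s = (s_1..s_l) and public key h_i = G^s_i mod P."""
        s = [random.randrange(1, P - 1) for _ in range(length)]
        h = [pow(G, si, P) for si in s]
        return s, h

    def encrypt(h, x):
        """Encrypt an integer vector x component-wise: ct_i = h_i^r * G^x_i."""
        r = random.randrange(1, P - 1)
        ct0 = pow(G, r, P)
        cts = [(pow(hi, r, P) * pow(G, xi, P)) % P for hi, xi in zip(h, x)]
        return ct0, cts

    def keygen(s, y):
        """Functional key for the weight vector y: sk_y = <s, y>."""
        return sum(si * yi for si, yi in zip(s, y))

    def decrypt(ct, y, sk_y, bound):
        """Recover <x, y>: compute G^<x,y>, then brute-force a small discrete log."""
        ct0, cts = ct
        num = 1
        for ci, yi in zip(cts, y):
            num = (num * pow(ci, yi, P)) % P
        # Divide out G^(r * <s, y>) using Fermat inversion modulo the prime P.
        target = (num * pow(pow(ct0, sk_y, P), P - 2, P)) % P
        acc = 1
        for v in range(bound + 1):
            if acc == target:
                return v
            acc = (acc * G) % P
        raise ValueError("inner product outside search bound")

    # Example: the aggregator learns only the sum of a quantized update slice.
    s, h = setup(length=4)
    x = [3, 7, 5, 1]          # quantized slice of one client's model update (assumed)
    y = [1, 1, 1, 1]          # weight vector the aggregator is authorized for
    ct = encrypt(h, x)
    sk = keygen(s, y)
    print(decrypt(ct, y, sk, bound=1000))   # prints 16, the inner product only

    In this sketch the ciphertext hides the individual coordinates of x, while the functional key for y reveals exactly one linear combination of them; how the paper pre-processes updates, selects the encrypted portion, and combines contributions from multiple participants is described in the full text.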
Language
English
Identifiers
eISSN: 2576-6813
DOI: 10.1109/GLOBECOM42002.2020.9347960
Title ID: cdi_ieee_primary_9347960
