Federated Learning — Paper Notes (FedBoost: Communication-Efficient Algorithms for Federated Learning)

Date: 2021-08-09

Main idea:

  Unlike gradient compression and model compression, FedBoost is an ensemble learning algorithm: it reduces both server-to-client and client-to-server communication costs, improving communication efficiency.

Background on ensemble learning: 集成学习 (ensemble learning) 原理详解, by 春华秋实 on the CSDN blog.
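To make the communication savings concrete, here is a minimal sketch of one FedBoost-style round, assuming numpy, a squared-loss objective, and fixed linear base predictors; the sampling sizes, data, and function names are illustrative assumptions, not the paper's exact algorithm. Each round, the server ships only a sampled subset of the pre-trained base predictors (the server-to-client savings), and clients return updates for only those predictors' mixture weights (the client-to-server savings).

```python
# Illustrative sketch of one FedBoost-style round (not the paper's exact API).
import numpy as np

rng = np.random.default_rng(0)

# q pre-trained base predictors; here each is just a fixed linear map.
num_predictors, dim = 5, 3
base_weights = [rng.normal(size=dim) for _ in range(num_predictors)]

def predict(k, x):
    """Output of base predictor h_k on inputs x."""
    return x @ base_weights[k]

# Ensemble mixture weights alpha live on the server.
alpha = np.full(num_predictors, 1.0 / num_predictors)

def client_update(alpha, sampled, x, y, lr=0.1):
    """One client's gradient step on the sampled predictors only.

    The client downloads just the sampled base predictors and returns
    updates for their mixture weights, so both directions of
    communication scale with the sample size, not the full ensemble.
    """
    preds = np.stack([predict(k, x) for k in sampled])   # (s, n)
    ensemble = alpha[sampled] @ preds                    # (n,)
    residual = ensemble - y
    grad = preds @ residual / len(y)                     # dL/d alpha_k
    return alpha[sampled] - lr * grad

# One round: the server samples a subset of predictors and some clients,
# then averages the returned weight updates.
sampled = rng.choice(num_predictors, size=2, replace=False)
client_data = [(rng.normal(size=(8, dim)), rng.normal(size=8)) for _ in range(3)]
updates = [client_update(alpha, sampled, x, y) for x, y in client_data]
alpha[sampled] = np.mean(updates, axis=0)
print(alpha)
```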

Main advantages:

  1. Pre-trained base predictors: base predictors can be pre-trained on publicly available data, thus reducing the need for user data in training.

  2. Convergence guarantee: ensemble methods often require training relatively few parameters, which typically results in far fewer rounds of optimization and faster convergence compared to training the entire model from scratch.

  3. Adaptation or drifting over time: user data may change over time, but, in the ensemble approach, we can keep the base predictors fixed and retrain the ensemble weights whenever the data changes.

  4. Differential privacy (DP): federated learning can be combined with global DP to provide an additional layer of privacy. Training only the ensemble weights via federated learning is well-suited for DP, since the utility-privacy trade-off depends on the number of parameters being trained. Furthermore, this learning problem is typically a convex optimization problem, for which DP convex optimization can give better privacy guarantees (see the sketch after this list).
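To illustrate points 2 and 4, the sketch below shows why retraining only the ensemble weights is a small convex problem: with the base predictors' outputs fixed, fitting mixture weights on the probability simplex is convex in alpha. The mean-squared-error loss, the standard Euclidean simplex projection, and all names here are illustrative assumptions, not taken from the paper.

```python
# Illustrative convex optimization of ensemble weights (not from the paper).
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fit_ensemble_weights(H, y, lr=0.05, steps=200):
    """Projected gradient descent on mean squared error.

    H: (q, n) matrix of fixed base-predictor outputs on n examples,
    y: (n,) targets. Only the q mixture weights are learned, so far
    fewer rounds are needed than retraining the predictors themselves.
    """
    q, n = H.shape
    alpha = np.full(q, 1.0 / q)
    for _ in range(steps):
        grad = H @ (alpha @ H - y) / n
        alpha = project_simplex(alpha - lr * grad)
    return alpha

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 50))
y = 0.7 * H[0] + 0.3 * H[2]        # true mixture of two predictors
print(fit_ensemble_weights(H, y))  # weights concentrate on rows 0 and 2
```

Because only q weights are trained over a convex objective, each retraining (e.g., after data drift, point 3) is cheap, and the DP utility-privacy trade-off benefits from the small parameter count (point 4).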

Original source: https://www.cnblogs.com/iscanghai/p/15119820.html