  • 1
    UID:
    almahu_9949683954902882
    Format: 1 online resource (436 pages)
    Edition: First edition.
    ISBN: 0-443-19038-0
    Content: Federated Learning: Theory and Practice provides a holistic treatment of federated learning as a distributed learning system with various forms of decentralized data and features. Part I of the book begins with a broad overview of optimization fundamentals and modeling challenges, covering various aspects of communication efficiency, theoretical convergence, and security. Part II features emerging challenges stemming from many socially driven concerns of federated learning as a future public machine learning service. Part III concludes the book with a wide array of industrial applications of federated learning, as well as ethical considerations, showcasing its immense potential for driving innovation while safeguarding sensitive data. Federated Learning: Theory and Practice provides a comprehensive and accessible introduction to federated learning that is suitable for researchers and students in academia, as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors. Presents the fundamentals and a survey of key developments in the field of federated learning; provides emerging, state-of-the-art topics that build on the fundamentals; contains industry applications; gives an overview of visions of the future.
    Note: Front Cover -- Federated Learning -- Copyright -- Contents -- Contributors -- Preface -- 1 Optimization fundamentals for secure federated learning -- 1 Gradient descent-type methods -- 1.1 Introduction -- 1.2 Basic components of GD-type methods -- 1.2.1 Search direction -- 1.2.2 Step-size -- 1.2.3 Proximal operator -- 1.2.4 Momentum -- 1.2.5 Dual averaging variant -- 1.2.6 Structure assumptions -- 1.2.7 Optimality certification -- 1.2.8 Unified convergence analysis -- 1.2.9 Convergence rates and complexity analysis -- 1.2.10 Initial point, warm-start, and restart -- 1.3 Stochastic gradient descent methods -- 1.3.1 The algorithmic template -- 1.3.2 SGD estimators -- 1.3.3 Unified convergence analysis -- 1.4 Concluding remarks -- Acknowledgments -- References -- 2 Considerations on the theory of training models with differential privacy -- 2.1 Introduction -- 2.2 Differential private SGD (DP-SGD) -- 2.2.1 Clipping -- 2.2.2 Mini-batch SGD -- 2.2.3 Gaussian noise -- 2.2.4 Aggregation at the server -- 2.2.5 Interrupt service routine -- 2.2.6 DP principles and utility -- 2.2.7 Normalization -- 2.3 Differential privacy -- 2.3.1 Characteristics of a differential privacy measure -- 2.3.2 (ε,δ)-differential privacy -- 2.3.3 Divergence-based DP measures -- 2.4 Gaussian differential privacy -- 2.4.1 Gaussian DP -- 2.4.2 Subsampling -- 2.4.3 Composition -- 2.4.4 Tight analysis of DP-SGD -- 2.4.5 Strong adversarial model -- 2.4.6 Group privacy -- 2.4.7 DP-SGD's trade-off function -- 2.5 Future work -- 2.5.1 Using synthetic data -- 2.5.2 Adaptive strategies -- 2.5.3 DP proof: a weaker adversarial model -- 2.5.4 Computing environment with less adversarial capabilities -- References -- 3 Privacy-preserving federated learning: algorithms and guarantees -- 3.1 Introduction -- 3.2 Background and preliminaries -- 3.2.1 The FedAvg algorithm. 
, 3.2.2 Differential privacy -- 3.3 DP guaranteed algorithms -- 3.3.1 Sample-level DP -- 3.3.1.1 Algorithms and discussion -- 3.3.2 Client-level DP -- 3.3.2.1 Clipping strategies for client-level DP -- 3.3.2.2 Algorithms and discussion -- 3.4 Performance of clip-enabled DP-FedAvg -- 3.4.1 Main results -- 3.4.1.1 Convergence theorem -- 3.4.1.2 DP guarantee -- 3.4.2 Experimental evaluation -- 3.5 Conclusion and future work -- References -- 4 Assessing vulnerabilities and securing federated learning -- 4.1 Introduction -- 4.2 Background and vulnerability analysis -- 4.2.1 Definitions and notation -- 4.2.1.1 Horizontal federated learning -- 4.2.1.2 Vertical federated learning -- 4.2.2 Vulnerability analysis -- 4.2.2.1 Clients' updates -- 4.2.2.2 Repeated interaction -- 4.3 Attacks on federated learning -- 4.3.1 Training-time attacks -- 4.3.1.1 Byzantine attacks -- 4.3.1.2 Backdoor attacks -- 4.3.2 Inference-time attacks -- 4.4 Defenses -- 4.4.1 Protecting against training-time attacks -- 4.4.1.1 In Situ defenses -- 4.4.1.2 Post Facto defenses -- 4.4.2 Protecting against inference-time attacks -- 4.5 Takeaways and future work -- References -- 5 Adversarial robustness in federated learning -- 5.1 Introduction -- 5.2 Attack in federated learning -- 5.2.1 Targeted data poisoning attack -- 5.2.1.1 Label flipping -- 5.2.1.2 Backdoor -- 5.2.1.2.1 Trigger-based backdoor -- 5.2.1.2.2 Semantic backdoor -- 5.2.2 Untargeted model poisoning attack -- 5.3 Defense in federated learning -- 5.3.1 Vector-wise defense -- 5.3.2 Dimension-wise defense -- 5.3.3 Certification -- 5.3.4 Personalization -- 5.3.5 Differential privacy -- 5.3.6 The gap between distributed training and federated learning -- 5.3.7 Open problems and further work -- 5.4 Conclusion -- References -- 6 Evaluating gradient inversion attacks and defenses -- 6.1 Introduction -- 6.2 Gradient inversion attacks. 
, 6.3 Strong assumptions made by SOTA attacks -- 6.3.1 The state-of-the-art attacks -- 6.3.2 Strong assumptions -- Assumption 1: Knowledge of BatchNorm statistics -- Assumption 2: Knowing or being able to infer private labels -- 6.3.3 Re-evaluation under relaxed assumptions -- Relaxation 1: Not knowing BatchNorm statistics -- Relaxation 2: Not knowing private labels -- 6.4 Defenses against the gradient inversion attack -- 6.4.1 Encrypt gradients -- 6.4.2 Perturbing gradients -- 6.4.3 Weak encryption of inputs (encoding inputs) -- 6.5 Evaluation -- 6.5.1 Experimental setup -- 6.5.2 Performance of defense methods -- 6.5.3 Performance of combined defenses -- 6.5.4 Time estimate for end-to-end recovery of a single image -- 6.6 Conclusion -- 6.7 Future directions -- 6.7.1 Gradient inversion attacks for text data -- 6.7.2 Gradient inversion attacks in variants of federated learning -- 6.7.3 Defenses with provable guarantee -- References -- 2 Emerging topics -- 7 Personalized federated learning: theory and open problems -- 7.1 Introduction -- 7.2 Problem formulation of pFL -- 7.3 Review of personalized FL approaches -- 7.3.1 Mixing models -- 7.3.2 Model-based approaches: meta-learning -- 7.3.3 Multi-task learning -- 7.3.4 Weight sharing -- 7.3.5 Clients clustering -- 7.4 Personalized FL algorithms -- 7.4.1 pFedMe -- 7.4.2 FedU -- 7.5 Experiments -- 7.5.1 Experimental settings -- 7.5.2 Comparison -- 7.6 Open problems -- 7.6.1 Transfer learning -- 7.6.2 Knowledge distillation -- 7.7 Conclusion -- References -- 8 Fairness in federated learning -- 8.1 Introduction -- 8.2 Notions of fairness -- 8.2.1 Equitable fairness -- 8.2.2 Collaborative fairness -- 8.2.3 Algorithmic fairness -- 8.3 Algorithms to achieve fairness in FL -- 8.3.1 Algorithms to achieve equitable fairness -- 8.3.2 Algorithms to achieve collaborative fairness. 
, 8.3.3 Algorithms to achieve algorithmic fairness -- 8.4 Open problems and conclusion -- Acknowledgments -- References -- 9 Meta-federated learning -- 9.1 Introduction -- 9.2 Background -- 9.2.1 Federated learning -- 9.2.2 Secure aggregation -- 9.2.3 Robust aggregation rules and defenses -- 9.3 Problem definition and threat model -- 9.4 Meta-federated learning -- 9.5 Experimental evaluation and discussion -- 9.5.1 Datasets and experiment setup -- 9.5.2 Utility of meta-FL -- 9.5.3 Robustness of meta-FL -- 9.6 Conclusion -- References -- 10 Graph-aware federated learning -- 10.1 Introduction -- 10.2 Decentralized federated learning -- 10.3 Multi-center federated learning -- 10.4 Graph-knowledge based federated learning -- 10.4.1 Applications of BiG-FL -- 10.4.2 Algorithm design for BiG-FL -- 10.5 Numerical evaluation of GFL models -- 10.5.1 Results on synthetic data -- 10.5.2 Results on real-world data for NLP -- 10.6 Summary -- References -- 11 Vertical asynchronous federated learning: algorithms and theoretic guarantees -- 11.1 Introduction -- 11.1.1 This chapter -- 11.1.2 Related work -- 11.2 Vertical federated learning -- 11.2.1 Problem statement -- 11.2.2 Asynchronous client updates -- 11.2.3 Types of flexible update rules -- 11.3 Convergence analysis -- 11.3.1 Convergence under bounded delay -- 11.3.2 Convergence under stochastic unbounded delay -- 11.4 Perturbed local embedding for smoothness -- 11.4.1 Local perturbation -- 11.4.2 Enforcing smoothness -- 11.5 Numerical tests -- 11.5.1 VAFL for federated logistic regression -- 11.5.2 VAFL for federated deep learning -- 11.5.2.1 Training on ModelNet40 dataset -- 11.5.2.2 Training on MIMIC-III dataset -- Acknowledgments -- References -- 12 Hyperparameter tuning for federated learning - systems and practices -- 12.1 Introduction -- 12.2 Systems resources -- 12.3 Cross-device FL hyperparameters. , 12.3.1 Client-side hyperparameters -- 12.3.1.1 Batch size -- 12.3.1.2 Learning rate -- 12.3.1.3 Local epochs -- 12.3.2 Server-side hyperparameters -- 12.3.2.1 Clients per round -- 12.3.2.2 Client timeout / client participation ratio -- 12.3.2.3 Number of rounds -- 12.3.2.4 Staleness -- 12.4 System challenges in FL HPO -- 12.4.1 Data privacy -- 12.4.2 Data heterogeneity -- 12.4.3 Resource limitations -- 12.4.4 Scalability -- 12.4.5 Resource heterogeneity -- 12.4.6 Dynamic data -- 12.4.7 Participation fairness and client dropouts -- 12.5 State-of-the-art -- 12.6 Conclusion -- References -- 13 Hyper-parameter optimization in federated learning -- 13.1 Introduction -- 13.1.1 FL-HPO problem definition -- 13.1.2 Challenges and goals -- 13.2 State-of-the-art FL-HPO approaches -- 13.3 FLoRA: a single-shot FL-HPO approach -- 13.3.1 The main algorithm: leveraging local HPOs -- 13.3.1.1 Why adaptive HPO? -- 13.3.1.2 Why asynchronous HPO? 
-- 13.3.2 Loss surface aggregation -- 13.3.3 Optimality guarantees for FLoRA -- 13.4 Empirical evaluation -- 13.5 Conclusion -- References -- 14 Federated sequential decision making: Bayesian optimization, reinforcement learning, and beyond -- 14.1 Introduction -- 14.2 Federated Bayesian optimization -- 14.2.1 Background on Bayesian optimization -- 14.2.2 Background on federated Bayesian optimization -- 14.2.3 Overview of representative existing works on FBO -- 14.2.4 Algorithms for FBO -- 14.2.5 Theoretical and empirical results for FBO -- 14.3 Federated reinforcement learning -- 14.3.1 Background on reinforcement learning -- 14.3.2 Background on federated reinforcement learning -- 14.3.3 Overview of representative existing works on FRL -- 14.3.4 Frameworks and algorithms for FRL -- 14.3.5 Theoretical and empirical results for FRL -- 14.4 Related work -- 14.4.1 Federated Bayesian optimization. , 14.4.2 Federated reinforcement learning.
    Additional Edition: Print version: Nguyen, Lam M. Federated Learning. San Diego : Elsevier Science & Technology, c2024. ISBN 9780443190377
    Language: English
  • 2
    UID:
    gbv_535028288
    Format: 354 pages, illustrations
    Language: Vietnamese
    Keywords: Vietnam ; Village ; Education ; Holism
  • 3
    Book
    Hà-nội : Nxb-Giáo-dục-viện-khoa-học-ngân-hàng
    UID:
    gbv_527256498
    Format: 562 pages, illustrations
    Language: English
    Keywords: English ; French ; Vietnamese ; Credit system ; Multilingual dictionary
  • 4
    UID:
    edoccha_9961073600802883
    Format: 1 online resource (viii, 52 pages) : color illustrations.
    Series Statement: ARL-TR ; 5193
    Note: Title from PDF title screen (viewed on July 23, 2010). , "May 2010."
    Language: English
  • 5
    UID:
    edocfu_9961073600802883
    Format: 1 online resource (viii, 52 pages) : color illustrations.
    Series Statement: ARL-TR ; 5193
    Note: Title from PDF title screen (viewed on July 23, 2010). , "May 2010."
    Language: English
  • 6
    UID:
    edocfu_9960178582902883
    Format: 1 online resource (35 p.)
    ISBN: 1-4983-6686-4 , 1-4755-7100-3
    Content: The global financial crisis highlighted the dangers of financial systems that have grown too large too quickly. This paper reexamines financial deepening, focusing the analysis on what emerging markets can learn from the experience of the advanced economies. It finds that the benefits of financial deepening for growth and stability remain large for most emerging markets, but there are limits on its size and speed. When financial deepening outpaces the strength of the supervisory framework, it leads to excessive risk taking and instability. Encouragingly, the regulatory reforms that promote financial-sector depth are essentially the same as those that contribute to greater stability. Better regulation, not necessarily more regulation, thus opens greater possibilities for achieving both development and stability.
    Additional Edition: ISBN 1-4983-0204-1
    Language: English
  • 7
    UID:
    edoccha_9958125009402883
    Format: 1 online resource (44 p.)
    ISBN: 1-4983-0064-2 , 1-4983-0194-0 , 1-5135-6948-1
    Content: The global financial crisis experience shone a spotlight on the dangers of financial systems that have grown too big too fast. This note reexamines financial deepening, focusing on what emerging markets can learn from the advanced economy experience. It finds that gains for growth and stability from financial deepening remain large for most emerging markets, but there are limits on size and speed. When financial deepening outpaces the strength of the supervisory framework, it leads to excessive risk taking and instability. Encouragingly, the set of regulatory reforms that promote financial depth is essentially the same as those that contribute to greater stability. Better regulation, not necessarily more regulation, thus leads to greater possibilities both for development and stability.
    Additional Edition: ISBN 1-5135-3330-4
    Language: English
  • 8
    UID:
    edoccha_9960178582902883
    Format: 1 online resource (35 p.)
    ISBN: 1-4983-6686-4 , 1-4755-7100-3
    Content: The global financial crisis highlighted the dangers of financial systems that have grown too large too quickly. This paper reexamines financial deepening, focusing the analysis on what emerging markets can learn from the experience of the advanced economies. It finds that the benefits of financial deepening for growth and stability remain large for most emerging markets, but there are limits on its size and speed. When financial deepening outpaces the strength of the supervisory framework, it leads to excessive risk taking and instability. Encouragingly, the regulatory reforms that promote financial-sector depth are essentially the same as those that contribute to greater stability. Better regulation, not necessarily more regulation, thus opens greater possibilities for achieving both development and stability.
    Additional Edition: ISBN 1-4983-0204-1
    Language: English
  • 9
    UID:
    edoccha_9961421183902883
    Format: 1 online resource (436 pages)
    Edition: First edition.
    ISBN: 0-443-19038-0
    Content: Federated Learning: Theory and Practice provides a holistic treatment of federated learning as a distributed learning system with various forms of decentralized data and features. Part I of the book begins with a broad overview of optimization fundamentals and modeling challenges, covering various aspects of communication efficiency, theoretical convergence, and security. Part II features emerging challenges stemming from many socially driven concerns of federated learning as a future public machine learning service. Part III concludes the book with a wide array of industrial applications of federated learning, as well as ethical considerations, showcasing its immense potential for driving innovation while safeguarding sensitive data. Federated Learning: Theory and Practice provides a comprehensive and accessible introduction to federated learning that is suitable for researchers and students in academia, as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors. Presents the fundamentals and a survey of key developments in the field of federated learning; provides emerging, state-of-the-art topics that build on the fundamentals; contains industry applications; gives an overview of visions of the future.
    Note: Front Cover -- Federated Learning -- Copyright -- Contents -- Contributors -- Preface -- 1 Optimization fundamentals for secure federated learning -- 1 Gradient descent-type methods -- 1.1 Introduction -- 1.2 Basic components of GD-type methods -- 1.2.1 Search direction -- 1.2.2 Step-size -- 1.2.3 Proximal operator -- 1.2.4 Momentum -- 1.2.5 Dual averaging variant -- 1.2.6 Structure assumptions -- 1.2.7 Optimality certification -- 1.2.8 Unified convergence analysis -- 1.2.9 Convergence rates and complexity analysis -- 1.2.10 Initial point, warm-start, and restart -- 1.3 Stochastic gradient descent methods -- 1.3.1 The algorithmic template -- 1.3.2 SGD estimators -- 1.3.3 Unified convergence analysis -- 1.4 Concluding remarks -- Acknowledgments -- References -- 2 Considerations on the theory of training models with differential privacy -- 2.1 Introduction -- 2.2 Differential private SGD (DP-SGD) -- 2.2.1 Clipping -- 2.2.2 Mini-batch SGD -- 2.2.3 Gaussian noise -- 2.2.4 Aggregation at the server -- 2.2.5 Interrupt service routine -- 2.2.6 DP principles and utility -- 2.2.7 Normalization -- 2.3 Differential privacy -- 2.3.1 Characteristics of a differential privacy measure -- 2.3.2 (ε,δ)-differential privacy -- 2.3.3 Divergence-based DP measures -- 2.4 Gaussian differential privacy -- 2.4.1 Gaussian DP -- 2.4.2 Subsampling -- 2.4.3 Composition -- 2.4.4 Tight analysis of DP-SGD -- 2.4.5 Strong adversarial model -- 2.4.6 Group privacy -- 2.4.7 DP-SGD's trade-off function -- 2.5 Future work -- 2.5.1 Using synthetic data -- 2.5.2 Adaptive strategies -- 2.5.3 DP proof: a weaker adversarial model -- 2.5.4 Computing environment with less adversarial capabilities -- References -- 3 Privacy-preserving federated learning: algorithms and guarantees -- 3.1 Introduction -- 3.2 Background and preliminaries -- 3.2.1 The FedAvg algorithm. 
, 3.2.2 Differential privacy -- 3.3 DP guaranteed algorithms -- 3.3.1 Sample-level DP -- 3.3.1.1 Algorithms and discussion -- 3.3.2 Client-level DP -- 3.3.2.1 Clipping strategies for client-level DP -- 3.3.2.2 Algorithms and discussion -- 3.4 Performance of clip-enabled DP-FedAvg -- 3.4.1 Main results -- 3.4.1.1 Convergence theorem -- 3.4.1.2 DP guarantee -- 3.4.2 Experimental evaluation -- 3.5 Conclusion and future work -- References -- 4 Assessing vulnerabilities and securing federated learning -- 4.1 Introduction -- 4.2 Background and vulnerability analysis -- 4.2.1 Definitions and notation -- 4.2.1.1 Horizontal federated learning -- 4.2.1.2 Vertical federated learning -- 4.2.2 Vulnerability analysis -- 4.2.2.1 Clients' updates -- 4.2.2.2 Repeated interaction -- 4.3 Attacks on federated learning -- 4.3.1 Training-time attacks -- 4.3.1.1 Byzantine attacks -- 4.3.1.2 Backdoor attacks -- 4.3.2 Inference-time attacks -- 4.4 Defenses -- 4.4.1 Protecting against training-time attacks -- 4.4.1.1 In Situ defenses -- 4.4.1.2 Post Facto defenses -- 4.4.2 Protecting against inference-time attacks -- 4.5 Takeaways and future work -- References -- 5 Adversarial robustness in federated learning -- 5.1 Introduction -- 5.2 Attack in federated learning -- 5.2.1 Targeted data poisoning attack -- 5.2.1.1 Label flipping -- 5.2.1.2 Backdoor -- 5.2.1.2.1 Trigger-based backdoor -- 5.2.1.2.2 Semantic backdoor -- 5.2.2 Untargeted model poisoning attack -- 5.3 Defense in federated learning -- 5.3.1 Vector-wise defense -- 5.3.2 Dimension-wise defense -- 5.3.3 Certification -- 5.3.4 Personalization -- 5.3.5 Differential privacy -- 5.3.6 The gap between distributed training and federated learning -- 5.3.7 Open problems and further work -- 5.4 Conclusion -- References -- 6 Evaluating gradient inversion attacks and defenses -- 6.1 Introduction -- 6.2 Gradient inversion attacks. 
, 6.3 Strong assumptions made by SOTA attacks -- 6.3.1 The state-of-the-art attacks -- 6.3.2 Strong assumptions -- Assumption 1: Knowledge of BatchNorm statistics -- Assumption 2: Knowing or being able to infer private labels -- 6.3.3 Re-evaluation under relaxed assumptions -- Relaxation 1: Not knowing BatchNorm statistics -- Relaxation 2: Not knowing private labels -- 6.4 Defenses against the gradient inversion attack -- 6.4.1 Encrypt gradients -- 6.4.2 Perturbing gradients -- 6.4.3 Weak encryption of inputs (encoding inputs) -- 6.5 Evaluation -- 6.5.1 Experimental setup -- 6.5.2 Performance of defense methods -- 6.5.3 Performance of combined defenses -- 6.5.4 Time estimate for end-to-end recovery of a single image -- 6.6 Conclusion -- 6.7 Future directions -- 6.7.1 Gradient inversion attacks for text data -- 6.7.2 Gradient inversion attacks in variants of federated learning -- 6.7.3 Defenses with provable guarantee -- References -- 2 Emerging topics -- 7 Personalized federated learning: theory and open problems -- 7.1 Introduction -- 7.2 Problem formulation of pFL -- 7.3 Review of personalized FL approaches -- 7.3.1 Mixing models -- 7.3.2 Model-based approaches: meta-learning -- 7.3.3 Multi-task learning -- 7.3.4 Weight sharing -- 7.3.5 Clients clustering -- 7.4 Personalized FL algorithms -- 7.4.1 pFedMe -- 7.4.2 FedU -- 7.5 Experiments -- 7.5.1 Experimental settings -- 7.5.2 Comparison -- 7.6 Open problems -- 7.6.1 Transfer learning -- 7.6.2 Knowledge distillation -- 7.7 Conclusion -- References -- 8 Fairness in federated learning -- 8.1 Introduction -- 8.2 Notions of fairness -- 8.2.1 Equitable fairness -- 8.2.2 Collaborative fairness -- 8.2.3 Algorithmic fairness -- 8.3 Algorithms to achieve fairness in FL -- 8.3.1 Algorithms to achieve equitable fairness -- 8.3.2 Algorithms to achieve collaborative fairness. 
, 8.3.3 Algorithms to achieve algorithmic fairness -- 8.4 Open problems and conclusion -- Acknowledgments -- References -- 9 Meta-federated learning -- 9.1 Introduction -- 9.2 Background -- 9.2.1 Federated learning -- 9.2.2 Secure aggregation -- 9.2.3 Robust aggregation rules and defenses -- 9.3 Problem definition and threat model -- 9.4 Meta-federated learning -- 9.5 Experimental evaluation and discussion -- 9.5.1 Datasets and experiment setup -- 9.5.2 Utility of meta-FL -- 9.5.3 Robustness of meta-FL -- 9.6 Conclusion -- References -- 10 Graph-aware federated learning -- 10.1 Introduction -- 10.2 Decentralized federated learning -- 10.3 Multi-center federated learning -- 10.4 Graph-knowledge based federated learning -- 10.4.1 Applications of BiG-FL -- 10.4.2 Algorithm design for BiG-FL -- 10.5 Numerical evaluation of GFL models -- 10.5.1 Results on synthetic data -- 10.5.2 Results on real-world data for NLP -- 10.6 Summary -- References -- 11 Vertical asynchronous federated learning: algorithms and theoretic guarantees -- 11.1 Introduction -- 11.1.1 This chapter -- 11.1.2 Related work -- 11.2 Vertical federated learning -- 11.2.1 Problem statement -- 11.2.2 Asynchronous client updates -- 11.2.3 Types of flexible update rules -- 11.3 Convergence analysis -- 11.3.1 Convergence under bounded delay -- 11.3.2 Convergence under stochastic unbounded delay -- 11.4 Perturbed local embedding for smoothness -- 11.4.1 Local perturbation -- 11.4.2 Enforcing smoothness -- 11.5 Numerical tests -- 11.5.1 VAFL for federated logistic regression -- 11.5.2 VAFL for federated deep learning -- 11.5.2.1 Training on ModelNet40 dataset -- 11.5.2.2 Training on MIMIC-III dataset -- Acknowledgments -- References -- 12 Hyperparameter tuning for federated learning - systems and practices -- 12.1 Introduction -- 12.2 Systems resources -- 12.3 Cross-device FL hyperparameters. , 12.3.1 Client-side hyperparameters -- 12.3.1.1 Batch size -- 12.3.1.2 Learning rate -- 12.3.1.3 Local epochs -- 12.3.2 Server-side hyperparameters -- 12.3.2.1 Clients per round -- 12.3.2.2 Client timeout / client participation ratio -- 12.3.2.3 Number of rounds -- 12.3.2.4 Staleness -- 12.4 System challenges in FL HPO -- 12.4.1 Data privacy -- 12.4.2 Data heterogeneity -- 12.4.3 Resource limitations -- 12.4.4 Scalability -- 12.4.5 Resource heterogeneity -- 12.4.6 Dynamic data -- 12.4.7 Participation fairness and client dropouts -- 12.5 State-of-the-art -- 12.6 Conclusion -- References -- 13 Hyper-parameter optimization in federated learning -- 13.1 Introduction -- 13.1.1 FL-HPO problem definition -- 13.1.2 Challenges and goals -- 13.2 State-of-the-art FL-HPO approaches -- 13.3 FLoRA: a single-shot FL-HPO approach -- 13.3.1 The main algorithm: leveraging local HPOs -- 13.3.1.1 Why adaptive HPO? -- 13.3.1.2 Why asynchronous HPO? 
-- 13.3.2 Loss surface aggregation -- 13.3.3 Optimality guarantees for FLoRA -- 13.4 Empirical evaluation -- 13.5 Conclusion -- References -- 14 Federated sequential decision making: Bayesian optimization, reinforcement learning, and beyond -- 14.1 Introduction -- 14.2 Federated Bayesian optimization -- 14.2.1 Background on Bayesian optimization -- 14.2.2 Background on federated Bayesian optimization -- 14.2.3 Overview of representative existing works on FBO -- 14.2.4 Algorithms for FBO -- 14.2.5 Theoretical and empirical results for FBO -- 14.3 Federated reinforcement learning -- 14.3.1 Background on reinforcement learning -- 14.3.2 Background on federated reinforcement learning -- 14.3.3 Overview of representative existing works on FRL -- 14.3.4 Frameworks and algorithms for FRL -- 14.3.5 Theoretical and empirical results for FRL -- 14.4 Related work -- 14.4.1 Federated Bayesian optimization. , 14.4.2 Federated reinforcement learning.
    Additional Edition: Print version: Nguyen, Lam M. Federated Learning. San Diego : Elsevier Science & Technology, c2024. ISBN 9780443190377
    Language: English
  • 10
    UID:
    edocfu_9961421183902883
    Format: 1 online resource (436 pages)
    Edition: First edition.
    ISBN: 0-443-19038-0
    Content: Federated Learning: Theory and Practice provides a holistic treatment of federated learning as a distributed learning system with various forms of decentralized data and features. Part I of the book begins with a broad overview of optimization fundamentals and modeling challenges, covering various aspects of communication efficiency, theoretical convergence, and security. Part II features emerging challenges stemming from many socially driven concerns of federated learning as a future public machine learning service. Part III concludes the book with a wide array of industrial applications of federated learning, as well as ethical considerations, showcasing its immense potential for driving innovation while safeguarding sensitive data. Federated Learning: Theory and Practice provides a comprehensive and accessible introduction to federated learning that is suitable for researchers and students in academia, as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors. Presents the fundamentals and a survey of key developments in the field of federated learning; provides emerging, state-of-the-art topics that build on the fundamentals; contains industry applications; gives an overview of visions of the future.
    Note: Front Cover -- Federated Learning -- Copyright -- Contents -- Contributors -- Preface -- 1 Optimization fundamentals for secure federated learning -- 1 Gradient descent-type methods -- 1.1 Introduction -- 1.2 Basic components of GD-type methods -- 1.2.1 Search direction -- 1.2.2 Step-size -- 1.2.3 Proximal operator -- 1.2.4 Momentum -- 1.2.5 Dual averaging variant -- 1.2.6 Structure assumptions -- 1.2.7 Optimality certification -- 1.2.8 Unified convergence analysis -- 1.2.9 Convergence rates and complexity analysis -- 1.2.10 Initial point, warm-start, and restart -- 1.3 Stochastic gradient descent methods -- 1.3.1 The algorithmic template -- 1.3.2 SGD estimators -- 1.3.3 Unified convergence analysis -- 1.4 Concluding remarks -- Acknowledgments -- References -- 2 Considerations on the theory of training models with differential privacy -- 2.1 Introduction -- 2.2 Differential private SGD (DP-SGD) -- 2.2.1 Clipping -- 2.2.2 Mini-batch SGD -- 2.2.3 Gaussian noise -- 2.2.4 Aggregation at the server -- 2.2.5 Interrupt service routine -- 2.2.6 DP principles and utility -- 2.2.7 Normalization -- 2.3 Differential privacy -- 2.3.1 Characteristics of a differential privacy measure -- 2.3.2 (ε,δ)-differential privacy -- 2.3.3 Divergence-based DP measures -- 2.4 Gaussian differential privacy -- 2.4.1 Gaussian DP -- 2.4.2 Subsampling -- 2.4.3 Composition -- 2.4.4 Tight analysis of DP-SGD -- 2.4.5 Strong adversarial model -- 2.4.6 Group privacy -- 2.4.7 DP-SGD's trade-off function -- 2.5 Future work -- 2.5.1 Using synthetic data -- 2.5.2 Adaptive strategies -- 2.5.3 DP proof: a weaker adversarial model -- 2.5.4 Computing environment with less adversarial capabilities -- References -- 3 Privacy-preserving federated learning: algorithms and guarantees -- 3.1 Introduction -- 3.2 Background and preliminaries -- 3.2.1 The FedAvg algorithm. 
, 3.2.2 Differential privacy -- 3.3 DP guaranteed algorithms -- 3.3.1 Sample-level DP -- 3.3.1.1 Algorithms and discussion -- 3.3.2 Client-level DP -- 3.3.2.1 Clipping strategies for client-level DP -- 3.3.2.2 Algorithms and discussion -- 3.4 Performance of clip-enabled DP-FedAvg -- 3.4.1 Main results -- 3.4.1.1 Convergence theorem -- 3.4.1.2 DP guarantee -- 3.4.2 Experimental evaluation -- 3.5 Conclusion and future work -- References -- 4 Assessing vulnerabilities and securing federated learning -- 4.1 Introduction -- 4.2 Background and vulnerability analysis -- 4.2.1 Definitions and notation -- 4.2.1.1 Horizontal federated learning -- 4.2.1.2 Vertical federated learning -- 4.2.2 Vulnerability analysis -- 4.2.2.1 Clients' updates -- 4.2.2.2 Repeated interaction -- 4.3 Attacks on federated learning -- 4.3.1 Training-time attacks -- 4.3.1.1 Byzantine attacks -- 4.3.1.2 Backdoor attacks -- 4.3.2 Inference-time attacks -- 4.4 Defenses -- 4.4.1 Protecting against training-time attacks -- 4.4.1.1 In Situ defenses -- 4.4.1.2 Post Facto defenses -- 4.4.2 Protecting against inference-time attacks -- 4.5 Takeaways and future work -- References -- 5 Adversarial robustness in federated learning -- 5.1 Introduction -- 5.2 Attack in federated learning -- 5.2.1 Targeted data poisoning attack -- 5.2.1.1 Label flipping -- 5.2.1.2 Backdoor -- 5.2.1.2.1 Trigger-based backdoor -- 5.2.1.2.2 Semantic backdoor -- 5.2.2 Untargeted model poisoning attack -- 5.3 Defense in federated learning -- 5.3.1 Vector-wise defense -- 5.3.2 Dimension-wise defense -- 5.3.3 Certification -- 5.3.4 Personalization -- 5.3.5 Differential privacy -- 5.3.6 The gap between distributed training and federated learning -- 5.3.7 Open problems and further work -- 5.4 Conclusion -- References -- 6 Evaluating gradient inversion attacks and defenses -- 6.1 Introduction -- 6.2 Gradient inversion attacks. 
, 6.3 Strong assumptions made by SOTA attacks -- 6.3.1 The state-of-the-art attacks -- 6.3.2 Strong assumptions -- Assumption 1: Knowledge of BatchNorm statistics -- Assumption 2: Knowing or being able to infer private labels -- 6.3.3 Re-evaluation under relaxed assumptions -- Relaxation 1: Not knowing BatchNorm statistics -- Relaxation 2: Not knowing private labels -- 6.4 Defenses against the gradient inversion attack -- 6.4.1 Encrypt gradients -- 6.4.2 Perturbing gradients -- 6.4.3 Weak encryption of inputs (encoding inputs) -- 6.5 Evaluation -- 6.5.1 Experimental setup -- 6.5.2 Performance of defense methods -- 6.5.3 Performance of combined defenses -- 6.5.4 Time estimate for end-to-end recovery of a single image -- 6.6 Conclusion -- 6.7 Future directions -- 6.7.1 Gradient inversion attacks for text data -- 6.7.2 Gradient inversion attacks in variants of federated learning -- 6.7.3 Defenses with provable guarantee -- References -- 2 Emerging topics -- 7 Personalized federated learning: theory and open problems -- 7.1 Introduction -- 7.2 Problem formulation of pFL -- 7.3 Review of personalized FL approaches -- 7.3.1 Mixing models -- 7.3.2 Model-based approaches: meta-learning -- 7.3.3 Multi-task learning -- 7.3.4 Weight sharing -- 7.3.5 Clients clustering -- 7.4 Personalized FL algorithms -- 7.4.1 pFedMe -- 7.4.2 FedU -- 7.5 Experiments -- 7.5.1 Experimental settings -- 7.5.2 Comparison -- 7.6 Open problems -- 7.6.1 Transfer learning -- 7.6.2 Knowledge distillation -- 7.7 Conclusion -- References -- 8 Fairness in federated learning -- 8.1 Introduction -- 8.2 Notions of fairness -- 8.2.1 Equitable fairness -- 8.2.2 Collaborative fairness -- 8.2.3 Algorithmic fairness -- 8.3 Algorithms to achieve fairness in FL -- 8.3.1 Algorithms to achieve equitable fairness -- 8.3.2 Algorithms to achieve collaborative fairness. 
, 8.3.3 Algorithms to achieve algorithmic fairness -- 8.4 Open problems and conclusion -- Acknowledgments -- References -- 9 Meta-federated learning -- 9.1 Introduction -- 9.2 Background -- 9.2.1 Federated learning -- 9.2.2 Secure aggregation -- 9.2.3 Robust aggregation rules and defenses -- 9.3 Problem definition and threat model -- 9.4 Meta-federated learning -- 9.5 Experimental evaluation and discussion -- 9.5.1 Datasets and experiment setup -- 9.5.2 Utility of meta-FL -- 9.5.3 Robustness of meta-FL -- 9.6 Conclusion -- References -- 10 Graph-aware federated learning -- 10.1 Introduction -- 10.2 Decentralized federated learning -- 10.3 Multi-center federated learning -- 10.4 Graph-knowledge based federated learning -- 10.4.1 Applications of BiG-FL -- 10.4.2 Algorithm design for BiG-FL -- 10.5 Numerical evaluation of GFL models -- 10.5.1 Results on synthetic data -- 10.5.2 Results on real-world data for NLP -- 10.6 Summary -- References -- 11 Vertical asynchronous federated learning: algorithms and theoretic guarantees -- 11.1 Introduction -- 11.1.1 This chapter -- 11.1.2 Related work -- 11.2 Vertical federated learning -- 11.2.1 Problem statement -- 11.2.2 Asynchronous client updates -- 11.2.3 Types of flexible update rules -- 11.3 Convergence analysis -- 11.3.1 Convergence under bounded delay -- 11.3.2 Convergence under stochastic unbounded delay -- 11.4 Perturbed local embedding for smoothness -- 11.4.1 Local perturbation -- 11.4.2 Enforcing smoothness -- 11.5 Numerical tests -- 11.5.1 VAFL for federated logistic regression -- 11.5.2 VAFL for federated deep learning -- 11.5.2.1 Training on ModelNet40 dataset -- 11.5.2.2 Training on MIMIC-III dataset -- Acknowledgments -- References -- 12 Hyperparameter tuning for federated learning - systems and practices -- 12.1 Introduction -- 12.2 Systems resources -- 12.3 Cross-device FL hyperparameters. , 12.3.1 Client-side hyperparameters -- 12.3.1.1 Batch size -- 12.3.1.2 Learning rate -- 12.3.1.3 Local epochs -- 12.3.2 Server-side hyperparameters -- 12.3.2.1 Clients per round -- 12.3.2.2 Client timeout / client participation ratio -- 12.3.2.3 Number of rounds -- 12.3.2.4 Staleness -- 12.4 System challenges in FL HPO -- 12.4.1 Data privacy -- 12.4.2 Data heterogeneity -- 12.4.3 Resource limitations -- 12.4.4 Scalability -- 12.4.5 Resource heterogeneity -- 12.4.6 Dynamic data -- 12.4.7 Participation fairness and client dropouts -- 12.5 State-of-the-art -- 12.6 Conclusion -- References -- 13 Hyper-parameter optimization in federated learning -- 13.1 Introduction -- 13.1.1 FL-HPO problem definition -- 13.1.2 Challenges and goals -- 13.2 State-of-the-art FL-HPO approaches -- 13.3 FLoRA: a single-shot FL-HPO approach -- 13.3.1 The main algorithm: leveraging local HPOs -- 13.3.1.1 Why adaptive HPO? -- 13.3.1.2 Why asynchronous HPO? 
-- 13.3.2 Loss surface aggregation -- 13.3.3 Optimality guarantees for FLoRA -- 13.4 Empirical evaluation -- 13.5 Conclusion -- References -- 14 Federated sequential decision making: Bayesian optimization, reinforcement learning, and beyond -- 14.1 Introduction -- 14.2 Federated Bayesian optimization -- 14.2.1 Background on Bayesian optimization -- 14.2.2 Background on federated Bayesian optimization -- 14.2.3 Overview of representative existing works on FBO -- 14.2.4 Algorithms for FBO -- 14.2.5 Theoretical and empirical results for FBO -- 14.3 Federated reinforcement learning -- 14.3.1 Background on reinforcement learning -- 14.3.2 Background on federated reinforcement learning -- 14.3.3 Overview of representative existing works on FRL -- 14.3.4 Frameworks and algorithms for FRL -- 14.3.5 Theoretical and empirical results for FRL -- 14.4 Related work -- 14.4.1 Federated Bayesian optimization. , 14.4.2 Federated reinforcement learning.
    Additional Edition: Print version: Nguyen, Lam M. Federated Learning. San Diego : Elsevier Science & Technology, c2024. ISBN 9780443190377
    Language: English