Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Differentially Private Federated Learning: A Client-level Perspective

Description:

Federated learning is a privacy-preserving, decentralized learning protocol introduced by Google in which multiple clients jointly learn a model without centralizing their data; centralization is pushed from data space to parameter space (https://research.google.com/pubs/pub44822.html) [1].
Differential privacy in deep learning is concerned with preserving the privacy of individual data points: https://arxiv.org/abs/1607.00133 [2].
In this work, we combine both notions by making federated learning differentially private. We focus on preserving privacy for a client's entire data set, not just for single data points. For more information, please refer to: https://arxiv.org/abs/1712.07557v2.
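To make the mechanism concrete, here is a minimal numpy sketch of one client-level differentially private federated averaging round, in the spirit of the paper but not taken from this code base: each sampled client's whole model update is clipped in L2 norm, and Gaussian noise calibrated to that clipping bound is added before averaging.

    import numpy as np

    def dp_federated_round(global_weights, client_updates, m, sigma, clip):
        # Illustrative sketch only, not the repo's implementation.
        # m: number of sampled clients, sigma: noise multiplier,
        # clip: L2 bound on each client's whole update.
        idx = np.random.choice(len(client_updates), size=m, replace=False)
        total = np.zeros_like(global_weights)
        for i in idx:
            delta = client_updates[i]
            # Scale the update so its L2 norm is at most `clip`; this
            # bounds any single client's influence (the sensitivity).
            total += delta / max(1.0, np.linalg.norm(delta) / clip)
        # Gaussian noise calibrated to the per-client sensitivity `clip`.
        noise = np.random.normal(0.0, sigma * clip, size=global_weights.shape)
        return global_weights + (total + noise) / m

    # Toy usage: 100 clients, 10-dimensional model.
    w = np.zeros(10)
    updates = [np.random.randn(10) for _ in range(100)]
    w = dp_federated_round(w, updates, m=30, sigma=6.0, clip=1.0)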

This code simulates a federated setting and enables federated learning with differential privacy. The privacy accountant used is from https://arxiv.org/abs/1607.00133 [2]. The files accountant.py, utils.py, and gaussian_moments.py are taken from https://github.com/tensorflow/models/tree/master/research/differential_privacy.
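For intuition on why a dedicated accountant is used, the snippet below computes the much looser (epsilon, delta) guarantee one would get from the classical Gaussian-mechanism bound with naive composition over the communication rounds; the moments accountant in accountant.py tracks the privacy loss far more tightly. This is a self-contained illustration, not code from the repository.

    import math

    def loose_epsilon(sigma, delta, rounds):
        # Classical Gaussian-mechanism bound: one release with noise
        # multiplier sigma (noise std / L2 sensitivity) is (eps, delta)-DP
        # with eps = sqrt(2 * ln(1.25/delta)) / sigma, valid for eps < 1.
        eps_per_round = math.sqrt(2 * math.log(1.25 / delta)) / sigma
        # Naive sequential composition over all rounds.
        return rounds * eps_per_round, rounds * delta

    eps, total_delta = loose_epsilon(sigma=6.0, delta=1e-5, rounds=100)
    print("naive bound: (%.1f, %.0e)-DP" % (eps, total_delta))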

Note that the privacy agent is not completely set up yet (especially for more than 100 clients); it either has to be specified manually, or the parameters 'm' (clients sampled per round) and 'sigma' (noise scale) need to be set explicitly, as sketched below.
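As a purely hypothetical illustration of what specifying these parameters by hand might look like (none of the names below come from this code base), a fixed schedule could be as simple as:

    # Hypothetical stand-in for a privacy agent: returns a constant number
    # of sampled clients m and noise scale sigma for every round.
    class FixedPrivacyAgent:
        def __init__(self, m, sigma):
            self.m = m
            self.sigma = sigma

        def get_m(self, round_idx):
            return self.m

        def get_sigma(self, round_idx):
            return self.sigma

    agent = FixedPrivacyAgent(m=30, sigma=6.0)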

Authors:

R. C. Geyer, T. Klein, M. Nabi

Requirements

TensorFlow 1.4.1

Download and Installation

  1. Install TensorFlow 1.4.1.
  2. Download the files as a ZIP archive, or clone the repository to your local hard drive.
  3. Change to the directory of the download. If using macOS, simply run:
    bash RUNME.sh

    This will download the MNIST datasets, create the clients, and start the learning process.
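Since the code targets the TensorFlow 1.x API, a quick version check before running the script can save a confusing failure later (a small sanity-check suggestion, not part of the repository):

    import tensorflow as tf

    # The code base is written against TensorFlow 1.4.1; newer releases
    # removed or changed much of the API it relies on.
    assert tf.__version__.startswith("1.4"), (
        "Expected TensorFlow 1.4.x, found %s" % tf.__version__)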

For more information on the individual functions, please refer to their doc strings.

Known Issues

No known issues.

How to obtain support

This project is provided 'as-is', and bug reports are not guaranteed to be fixed.

Citations

If you use this code or the pre-trained models in your research, please cite:

@ARTICLE{2017arXiv171207557G,
   author = {{Geyer}, R.~C. and {Klein}, T. and {Nabi}, M.},
    title = {{Differentially Private Federated Learning: A Client Level Perspective}},
  journal = {ArXiv e-prints},
archivePrefix = {arXiv},
   eprint = {1712.07557},
 primaryClass = {cs.CR},
 keywords = {Computer Science - Cryptography and Security, Computer Science - Learning, Statistics - Machine Learning},
     year = 2017,
    month = dec,
   adsurl = {http://adsabs.harvard.edu/abs/2017arXiv171207557G},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

References

[1] https://research.google.com/pubs/pub44822.html
[2] M. Abadi et al., "Deep Learning with Differential Privacy": https://arxiv.org/abs/1607.00133

License

Copyright (c) 2017 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the LICENSE file.

About the tool


Developing organisation(s):

  • SAP SE

Tool type(s):


Objective(s):


Country of origin:


Type of approach:





Programming languages:

  • Python


Github stars:

  • 259

Github forks:

  • 66


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.