
Adversarial Robustness Toolbox (ART v0.1)


This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.

The library is still under development. Feedback, bug reports and extension requests are highly appreciated.

Supported attack and defense methods

The Adversarial Robustness Toolbox contains implementations of a number of state-of-the-art attacks, along with several defense methods.
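
As a rough illustration of how the toolbox is meant to be used, the sketch below wraps a small Keras model and crafts adversarial examples with the Fast Gradient Method. Note that the module paths, class names and parameters shown (art.estimators.classification.KerasClassifier, art.attacks.evasion.FastGradientMethod, clip_values, eps) follow later releases of ART and are only assumptions here; the v0.1 package layout may differ.

# Sketch only: module paths and class names follow later ART releases and are
# assumptions for v0.1.
import numpy as np
import tensorflow as tf
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod

# ART's KerasClassifier expects graph-mode execution under TensorFlow 2.
tf.compat.v1.disable_eager_execution()

# A small Keras model on MNIST-shaped inputs, for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Wrap the model so ART's attacks can query its gradients and predictions.
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method (FGSM).
x_test = np.random.rand(16, 28, 28, 1).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)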

Setup

The Adversarial Robustness Toolbox is designed to run with Python 3 (and most likely Python 2 with small changes). You can either download the source code or clone the repository into your directory of choice:

git clone https://github.com/IBM/adversarial-robustness-toolbox

To install the library and its dependencies, run the following from the project folder:

pip install .

The library comes with a basic set of unit tests. To check your install, you can run all the unit tests by running the following from the library folder:

bash run_tests.sh

The configuration file config/config.ini lets you set custom paths for data. By default, data is downloaded to the data folder as follows:

[DEFAULT]
profile=LOCAL

[LOCAL]
data_path=./data
mnist_path=./data/mnist
cifar10_path=./data/cifar-10
stl10_path=./data/stl-10

If the datasets are not present at the indicated path, loading them will also download the data.
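
For instance, a downstream script could read these paths with Python's standard configparser module. The snippet below is only a sketch of that pattern and does not mirror the toolbox's own data-loading code.

import configparser

# Read the active profile and its data paths from config/config.ini
# (sketch only; the toolbox's loaders may handle this differently).
config = configparser.ConfigParser()
config.read("config/config.ini")

profile = config["DEFAULT"]["profile"]       # e.g. "LOCAL"
mnist_path = config[profile]["mnist_path"]   # e.g. "./data/mnist"
print(f"MNIST data expected at: {mnist_path}")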

Running scripts

The library contains three main scripts for:

  • training a classifier (train.py)
  • crafting adversarial examples on a trained model (generate_adversarial.py)
  • testing model accuracy on different test sets (test_accuracies.py)

Detailed instructions for each script are available by typing:

python3 <script_name> -h
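
For example, to see the options of each of the three scripts:

python3 train.py -h
python3 generate_adversarial.py -h
python3 test_accuracies.py -h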

Documentation

Documentation is available here.

Some examples of how to use the toolbox when writing your own code can be found in the examples folder. See examples/README.md for more information about what each example does. To run an example, use the following command:

python3 examples/<example_name>.py
