SecML Malware


Python library for creating adversarial attacks against Windows malware detectors. Built on top of SecML, SecML Malware includes most of the attacks proposed in the state of the art. A pre-trained MalConv model, trained by EndGame, is included for testing.

Included Attacks

Installation

Navigate to the folder where you want to clone the project. I recommend creating a new environment (I use conda):

conda create -n secml_malware_env python=3.9
conda activate secml_malware_env
pip install git+https://github.com/zangobot/ember.git
pip install secml-malware

If you are an Apple Silicon user, please install lightgbm from conda:

conda install -c conda-forge lightgbm

Optional - Nevergrad

If you want to speed up blackbox attacks, you can install Nevergrad. See the blackbox tutorial for more information on its usage.
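To illustrate the kind of derivative-free search that Nevergrad accelerates, here is a toy, library-independent sketch: a black-box optimizer sees only scores, never gradients, exactly as a blackbox attack only sees the detector's output. The objective function and the random-search strategy below are illustrative stand-ins, not part of secml-malware or Nevergrad:

```python
import random

def black_box_score(x: float) -> float:
    # Stand-in objective: in a real blackbox attack this would be the
    # detector's maliciousness score for a manipulated sample.
    return (x - 3.0) ** 2

def random_search(budget: int = 500, seed: int = 0) -> float:
    # Simple hill climbing with random perturbations: propose a nearby
    # candidate, keep it only if the score improves. Nevergrad-style
    # optimizers implement far smarter versions of this loop.
    rng = random.Random(seed)
    best_x = 0.0
    best_score = black_box_score(best_x)
    for _ in range(budget):
        candidate = best_x + rng.uniform(-1.0, 1.0)
        score = black_box_score(candidate)
        if score < best_score:
            best_x, best_score = candidate, score
    return best_x

print(random_search())  # converges near the minimum at x = 3.0
```

The same ask-evaluate-keep loop underlies blackbox attacks on malware detectors: the search space is the set of functionality-preserving manipulations, and the score is the classifier's response.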

How to use

Activate your environment, and import the secml_malware package inside your script:

import secml_malware
print(secml_malware.__version__)

The tests included in this project show how the library can be used for applying the manipulations to the input programs. There is also an example Jupyter notebook tutorial that shows how to build and apply a standard attack.

Docker

There is also a Dockerfile that can be used to start a container and test the library without messing with virtual environments!

docker build --tag secml_malware:0.3.2 .
docker run --rm -it secml_malware:0.3.2 bash

The container is also shipped with ipython, for a more interactive experience with this library.

Cite

If you use our library, please cite us!

@misc{demetrio2021secmlmalware,
      title={secml-malware: A Python Library for Adversarial Robustness Evaluation of Windows Malware Classifiers}, 
      author={Luca Demetrio and Battista Biggio},
      year={2021},
      eprint={2104.12848},
      archivePrefix={arXiv},
      primaryClass={cs.CR}
}

Also, depending on the manipulations / formalization you are using, please cite our work:

Content shifting and DOS header extension manipulations or RAMEn formalization

@article{demetrio2021adversarial,
    title={Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection},
    author={Luca Demetrio and Scott E. Coull and Battista Biggio and Giovanni Lagorio and Alessandro Armando and Fabio Roli},
    journal={ACM Transactions on Privacy and Security},
    year={2021},
    publisher={ACM}
}

GAMMA

@article{demetrio2021functionality,
  title={Functionality-preserving black-box optimization of adversarial windows malware},
  author={Demetrio, Luca and Biggio, Battista and Lagorio, Giovanni and Roli, Fabio and Armando, Alessandro},
  journal={IEEE Transactions on Information Forensics and Security},
  year={2021},
  publisher={IEEE}
}

Partial DOS manipulation

@inproceedings{demetrio2019explaining,
  title={Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries},
  author={Luca Demetrio and Battista Biggio and Giovanni Lagorio and Fabio Roli and Alessandro Armando},
  booktitle={ITASEC19},
  volume={2315},
  year={2019}
}

Bug reports

If you encounter something strange, feel free to open an issue! Bugs are inevitable: let me know, and I'll try to fix them as soon as possible.

Testing

I provide a small test suite for the attacks implemented in this library. If you want to run it, you must ADD GOODWARE/MALWARE samples yourself! There are two distinct folders:

secml_malware/data/goodware_samples
secml_malware/data/malware_samples/test_folder

Please add samples to both folders (only if you want to run the internal tests).
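Before launching the test suite, you may want to verify both folders are populated. This small helper is not part of the library; it is just a hedged convenience sketch using the standard library:

```python
from pathlib import Path

def count_samples(folder: str) -> int:
    # Count regular files in a sample folder; returns 0 if the
    # folder does not exist yet.
    path = Path(folder)
    if not path.exists():
        return 0
    return sum(1 for p in path.iterdir() if p.is_file())

# The two folders the internal tests read from:
for folder in ("secml_malware/data/goodware_samples",
               "secml_malware/data/malware_samples/test_folder"):
    print(folder, count_samples(folder))
```

If either count is zero, the tests that exercise the manipulations will have no input to work on.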