- Security Policy for Machine Learning Systems
- Machine Learning Privacy-Preserving Techniques
- Tools for Securing Machine Learning
- Security Threats to Machine Learning
- ML Security Regulations and Standards
A ThalesGroup policy framework to secure machine learning datasets, models, the underlying platform, and the humans involved, and to ensure compliance with internal and external regulations.
Available at ML Security Policy, together with ML Security Requirements and ML Security Guidelines
Learn about cutting-edge privacy-preserving techniques for machine learning, including Differential Privacy, Federated Learning, Homomorphic Encryption, Secure Multi-Party Computation (SMPC), and Privacy-Preserving Data Synthesis, in this comprehensive GitHub repository. Explore how these methods safeguard sensitive data while enabling collaborative analysis and model training.
Available at ML privacy-preserving techniques
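To give a flavour of one of these techniques, below is a minimal, hypothetical sketch of the Laplace mechanism for Differential Privacy. The helper name `laplace_mechanism`, the toy dataset, and the chosen privacy budget are assumptions for illustration only, not code from the linked repository.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    Noise is drawn from a Laplace distribution scaled to the query's
    sensitivity and the privacy budget epsilon (smaller epsilon = more noise).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the count of records matching a condition.
# A counting query has sensitivity 1, because adding or removing one
# record changes the count by at most 1.
ages = np.array([23, 35, 41, 29, 52, 38, 44])
true_count = int(np.sum(ages > 30))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private count: {private_count:.1f}")
```

In practice the privacy budget epsilon is fixed per release and accumulated across queries; the repository linked above covers these techniques in depth.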
Discover essential security tools for source code vulnerability detection, comprehensive attack and defense tools, ML supply chain security solutions, and privacy and compliance tools. Additionally, explore techniques for securing Jupyter notebooks, ensuring robust protection for your data, code, and models. Embrace a holistic approach to cybersecurity and data privacy in your development and analysis workflows.
Available at ML security tools
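As a small illustration of the ML supply chain security theme, the sketch below shows one common practice: verifying a model artifact's cryptographic digest before loading it. The file name, expected digest, and helper names are hypothetical assumptions, not tools from the linked repository.

```python
import hashlib
from pathlib import Path

# Hypothetical expected digest, e.g. published alongside the model artifact.
EXPECTED_SHA256 = "0123456789abcdef"  # placeholder, not a real digest

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to limit memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: Path, expected: str) -> None:
    """Raise if the model file's digest does not match the expected value."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"Integrity check failed for {path}: {actual} != {expected}")

# Usage (hypothetical file name):
# verify_model_artifact(Path("model.onnx"), EXPECTED_SHA256)
```

Checks of this kind complement the scanning and compliance tools described above by catching tampered or substituted model files before they are deserialized.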
Available at ML Security Threats
- Conference: OWASP LASCON 2024
- Agenda: ML lifecycle/workflow, AI for Cyber vs Cyber for AI, Cyber Attacks, Risks, Threats, Thales Security Framework, Recommendations and more.
The presentation deck is available at View Documentation (PDF)
This project is licensed under the Creative Commons Attribution-NoDerivs 4.0 International (CC BY-ND 4.0) License. You can view the full license text here.
For further information or to contribute to this project, you can reach out to the following contacts: