This repository contains sample code that demonstrates how to use the Azure AI Content Safety service to detect and moderate potentially harmful content in text and images (stay tuned for more modalities). Azure AI Content Safety is a cloud-based service that uses machine learning and computer vision to help you create a safer and more inclusive online environment for your users and customers.
The repository is organized into two folders: `dotnet` and `python`. The `dotnet` folder contains C# console applications that show how to use the Content Safety .NET SDK to analyze text and images and to manage blocklists for text moderation (a blocklist sketch follows the scenario list below). The `python` folder contains Python scripts that show how to do the same with the Content Safety Python SDK. Both cover the following scenarios:
- Text moderation: detect hate, sexual, self-harm, and violence content in text (see the Python sketch after this list).
- Image moderation: detect hate, sexual, self-harm, and violence content in images (also sketched below).
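
For example, a minimal text-moderation call with the Python SDK might look like the sketch below. The environment variable names are placeholders for the endpoint and key of your own Content Safety resource, and the response shape shown follows the 1.x `azure-ai-contentsafety` package; check the SDK reference for the version you install.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

# Placeholder environment variables holding your resource's endpoint and key.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

try:
    # Analyze a text snippet against the hate, sexual, self-harm, and violence categories.
    response = client.analyze_text(AnalyzeTextOptions(text="Sample text to moderate."))
except HttpResponseError as e:
    print(f"Analyze text failed: {e}")
    raise

# Each analyzed category comes back with a severity score (0 = safe).
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```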
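Image moderation follows the same pattern, with the image supplied as raw bytes via `ImageData`; the file path below is a placeholder for your own image:

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Placeholder path: send the image as raw bytes.
with open("sample_image.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)

# Severity per category, as with text analysis.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```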
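The blocklist samples build on text moderation: you create a named blocklist, add items to it, and then pass the blocklist name when analyzing text. The following is a hedged sketch assuming the blocklist API shape of the 1.x Python SDK (`BlocklistClient`, `AddOrUpdateTextBlocklistItemsOptions`, and the `blocklist_names` parameter of `AnalyzeTextOptions`); the blocklist name and items are made up for illustration:

```python
import os

from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    AnalyzeTextOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
credential = AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])

# Create (or update) a blocklist and add a custom term to it.
blocklist_client = BlocklistClient(endpoint, credential)
blocklist_name = "SampleBlocklist"  # hypothetical name for illustration
blocklist_client.create_or_update_text_blocklist(
    blocklist_name=blocklist_name,
    options=TextBlocklist(blocklist_name=blocklist_name, description="Demo blocklist"),
)
blocklist_client.add_or_update_blocklist_items(
    blocklist_name=blocklist_name,
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="forbidden phrase")]
    ),
)

# Analyze text with the blocklist attached; matches are reported alongside category scores.
client = ContentSafetyClient(endpoint, credential)
response = client.analyze_text(
    AnalyzeTextOptions(
        text="This contains a forbidden phrase.",
        blocklist_names=[blocklist_name],
        halt_on_blocklist_hit=False,
    )
)
for match in response.blocklists_match or []:
    print(f"Blocklist {match.blocklist_name} matched: {match.blocklist_item_text}")
```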
To run the sample code, you will need an Azure subscription and an Azure AI Content Safety resource, which you can create for free in the Azure portal. You will also need to install the SDK for your language of choice: the `azure-ai-contentsafety` package for Python (for example, `pip install azure-ai-contentsafety`) or the `Azure.AI.ContentSafety` NuGet package for .NET.
We hope you find this repository useful and informative. If you have any questions or feedback, please feel free to open an issue or a pull request.