This is a Python server that uses the Hugging Face Transformers library to convert text into embeddings and returns the embeddings as JSON.
These instructions will help you set up and run the server on your local machine.
- Python 3.9 (or a compatible version)
- Docker (optional, for containerization)
- Clone the repository:

  ```shell
  git clone https://github.com/fairDataSociety/huggingface-vectorizer.git
  cd huggingface-vectorizer
  ```

- Install the required Python packages:

  ```shell
  pip install -r requirements.txt
  ```
To run the server locally, execute the following command:

```shell
python app.py --model-name sentence-transformers/all-MiniLM-L6-v2
```
By default, the server listens on port 9876. To use a different port, modify the code in `app.py`.
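If you would rather set the port from the command line instead of editing the code, one possible approach is to add a CLI option alongside the existing `--model-name` flag. This is only a sketch: `--port` is a hypothetical flag, and the real `app.py` may parse its arguments differently.

```python
# Hypothetical sketch: the actual app.py may parse arguments differently.
import argparse

def parse_args(argv=None):
    """Parse the server's command-line options (--port is an assumed flag)."""
    parser = argparse.ArgumentParser(description="Hugging Face vectorizer server")
    parser.add_argument("--model-name", required=True,
                        help="Model to load, e.g. sentence-transformers/all-MiniLM-L6-v2")
    parser.add_argument("--port", type=int, default=9876,
                        help="Port to listen on (defaults to 9876)")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Would start server with {args.model_name} on port {args.port}")
```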
You can also run the server in a Docker container. First, build the Docker image:

```shell
docker build -t fairdatasociety/huggingface-vectorizer .
```

Then run the container:

```shell
docker run -p 9876:9876 fairdatasociety/huggingface-vectorizer --model-name sentence-transformers/all-MiniLM-L6-v2
```
- `/health`: A health check endpoint that returns "OK" when the server is running.
- `/vectorize`: Accepts a JSON request with a "query" field containing the text to vectorize. Returns the embeddings as a JSON response.
You can send POST requests to the `/vectorize` endpoint to obtain embeddings for text. For example:

```shell
curl -X POST -H "Content-Type: application/json" -d '{"query": ["your text here"]}' http://localhost:9876/vectorize
```
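The same request can be made from Python using only the standard library. This is a client-side sketch: it assumes the server is running locally on the default port and that the response body is JSON, as described above.

```python
import json
import urllib.request

def build_payload(texts):
    """Build the JSON body expected by /vectorize: a "query" list of strings."""
    return json.dumps({"query": texts}).encode("utf-8")

def vectorize(texts, url="http://localhost:9876/vectorize"):
    """POST texts to the vectorizer and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=build_payload(texts),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a running server, e.g. started as shown earlier.
    print(vectorize(["your text here"]))
```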
Contributions are welcome! If you'd like to contribute to the project, please follow these steps:
- Fork the repository on GitHub.
- Create a new branch with a descriptive name for your feature or bug fix.
- Make your changes and commit them with clear messages.
- Push your branch to your fork on GitHub.
- Create a pull request to the main repository.
TODO