- Getting Started
- Project Description
- External APIs and Services
- End Points
- Contributors
- FE Repo
- BE Repo
- Deployment
- Python 3.12.5
- Django 5.1
Turlink AI Service is a microservice of Turlink! This microservice is built in Python 3.12.5 and consumes the OpenAI API. There is one exposed endpoint that allows each shortened link to receive a brief AI-generated description in a three-bullet-point format.
Turlink was designed and built by a team of 8 developers as part of the Capstone project at the Turing School of Software and Design. The Turlink AI Service was built by 2 backend developers from the Capstone team.
Setup
- Fork and/or clone this repo from GitHub. In your terminal, run `$ git clone git@github.com:turingschool/turlink-ai-service.git`
- Create a virtual environment with `$ python -m venv myenv`
- Activate your virtual environment with `$ source myenv/bin/activate`
- Change into the cloned directory with `$ cd turlink-ai-service`
- In your virtual environment, run `$ pip install django`
- In your virtual environment, run `$ pip install djangorestframework`
- In your virtual environment, run `$ pip install requests`
Testing
Tests are written with Python's built-in `unittest` module, so there is nothing extra to install. From the terminal, run a specific test file with:
`$ python -m unittest <follow directory path to test specific files>`
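As a sketch of what such a test file might contain (the helper and class names here are illustrative, not taken from this repo), assuming the service formats summaries as numbered bullets:

```python
import unittest


def format_summary(points):
    """Join points into the numbered three-bullet format the service
    returns, e.g. "1. a\n2. b\n3. c". Illustrative helper, not repo code."""
    return "\n".join(f"{i}. {p}" for i, p in enumerate(points, start=1))


class FormatSummaryTest(unittest.TestCase):
    def test_three_points_are_numbered(self):
        result = format_summary(["example 1", "example 2", "example 3"])
        self.assertEqual(result, "1. example 1\n2. example 2\n3. example 3")
```

A file like this runs with `$ python -m unittest path/to/test_file.py`.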
In our application, we use the OpenAI API to produce AI-generated descriptions for Turlink. We accomplish this by wrapping the OpenAI chat completions endpoint behind a single exposed endpoint. Lastly, for better readability, we format each description as three bullet points.
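As a rough sketch of how such a chat-completions request might be assembled (the model name, prompt wording, and function name are assumptions for illustration, not the repo's actual code):

```python
# Hypothetical helper showing the shape of a chat-completions request
# body; the model choice and prompt wording are assumptions.
def build_chat_payload(link: str, model: str = "gpt-4o-mini") -> dict:
    """Build the JSON body for POST https://api.openai.com/v1/chat/completions."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Summarize the website at {link} in exactly "
                    "three numbered bullet points."
                ),
            }
        ],
    }


print(build_chat_payload("www.example.com")["messages"][0]["content"])
```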
Create One Link Description

Request:

```
POST /api/v1/openai
Content-Type: application/json
Accept: application/json
```

Body:

```json
{
  "link": "www.example.com"
}
```

Response: status: 201

```json
{
  "data": {
    "link": "www.example.com",
    "summary": "1. example 1\n2. example 2\n3. example 3"
  }
}
```
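For illustration, here is one way to call this endpoint from Python using the `requests` package installed during setup; the base URL `http://localhost:8000` is an assumption about running the Django dev server locally. The request is only prepared, not sent, so its shape can be inspected:

```python
import requests

# Build (but don't send) the POST request so its shape is visible.
# http://localhost:8000 is an assumed local dev-server address.
req = requests.Request(
    "POST",
    "http://localhost:8000/api/v1/openai",
    json={"link": "www.example.com"},
    headers={"Accept": "application/json"},
)
prepared = req.prepare()
print(prepared.method, prepared.url)  # POST http://localhost:8000/api/v1/openai

# To actually send it: requests.Session().send(prepared), or simply
# requests.post(url, json={"link": "www.example.com"}).
```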
Get AI Text

Request:

```
GET /api/v1/ping
Content-Type: application/json
Accept: application/json
```

Response: status: 200

```json
{
  "data": {
    "link": "www.example.com",
    "summary": "1. example 1\n2. example 2\n3. example 3"
  }
}
```