1. AI Workflow

Badges: Backend Tests · Frontend Tests · Deploy Frontend to IBM Cloud · Deploy Backend to IBM Cloud · Deploy Database to IBM Cloud

URL:

https://ai-workflow.classroom-eu-gb-1-bx2-4x1-d4ceb080620f0ec34cd169ad110144ef-0000.eu-gb.containers.appdomain.cloud

(Screenshot: the final application)



1.1. Project Brief

"Create a tool which will allow a user to connect a defined set of 3rd party applications and execute actions when conditions have been met e.g. when a tweet containing 'ibm' is detected place content of tweet onto a Google sheet and use Watson tone analyser to determine of tweet was positive or negative on tone. If it is a positive tweet then place the content of tweet onto Google slide."


1.2. User Stories

In this section, we document the 3 main users of such an application.

1.2.1. End user 1: John McNamara

As an individual at IBM, John wants to find out what people are saying about the company on Twitter and present his findings. He would like a tool where he can configure actions on a website: he asks the website to fetch tweets relating to IBM, send them to the Tone Analyser, and sort them into different slides. For example, when an IBM customer posts a tweet complaining that the IBM Cloud sometimes does not work well, the application automatically recognises its tone and puts it into the pile of slides for unhappy users.

1.2.2. End user 2: IBM Marketing Team

As part of the IBM marketing team, they want to analyse the success of IBM's marketing campaigns and the public's opinion of IBM products, so that future campaigns can be more successful and feedback on how the products can be improved can be passed on to the developers.

1.2.3. End user 3: IBM Software Developers

As a software developer, the team would like to create a bot that replies to users' tweets. Using the application, a developer can take the data collected by the application and stored in the database, train a machine learning model on it, and create an AI chatbot that replies to tweets; e.g. if a user praises an IBM service, the developer's bot will be able to reply with a thank-you.


1.3. Ethics

Since the project only needs to work on a conceptual level, we did not need to collect real tweets; we have instead set up a few dummy tweets to showcase functionality. Ethics approval is still essential, however: see Ethics.md. Our route is route B, specifically because only we and the client can access the functionality.


1.4. Tech Stack

The following tech stack was used to build the application:

(Diagram: tech stack)


1.5. Deployment Instructions

1.5.1. Requirements

The requirements differ depending on how you are deploying this website. If you are using the exact same deployment method as us (IBM IKS with Ingresses), then the following are required:

  • Install kubectl
  • Clone the repository: git clone git@github.com:mitchlui/ai-workflow.git

If you are deploying with Docker Compose instead, installing Docker and the relevant Compose tooling is sufficient.

1.5.2. Environment and Credentials

1.5.2.1. Frontend

The const CLIENT_ID in client/src/settings.js has to be changed to your own OAuth Client ID, which you can read more about in Google's Cloud Console.

The return value of the function API_DOMAIN() must also be changed to your desired backend host. The default one is for testing only and is only usable by those maintaining the repository.

1.5.2.2. Backend

A .env file is needed. A sample file, .env.sample, can be found in server/; it should look like this:

TWITTER_BEARER_TOKEN=""
IBM_TONE_ANALZER_KEY=""
GOOGLE_SECRET=""
DATABASE_IP=""

To get a Twitter bearer token, go to Twitter's Developer Portal and create an app. Once you have created an app, go to the "Keys and Access Tokens" tab and click "Create my access token". Copy the "Bearer Token" and paste it into the .env file.
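
As a quick sanity check that the token works, something like the following can be run (a minimal sketch using the requests library against the Twitter API v2 recent-search endpoint; the query "ibm" simply mirrors the project brief):

import os
import requests

# Assumes TWITTER_BEARER_TOKEN has been exported or loaded from the .env file.
token = os.environ["TWITTER_BEARER_TOKEN"]

# Twitter API v2 recent search; max_results must be between 10 and 100.
response = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {token}"},
    params={"query": "ibm", "max_results": 10},
)
response.raise_for_status()
for tweet in response.json().get("data", []):
    print(tweet["id"], tweet["text"])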

The IBM Tone Analyser API key can be generated by going to the IBM Cloud website and activating the service.
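
For reference, calling the service from Python with the official ibm-watson SDK looks roughly like this (a sketch; the service URL is region-specific, so the eu-gb one below is an assumption and should be replaced with the URL from your instance's credentials page):

import os
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Assumes IBM_TONE_ANALZER_KEY has been loaded from the .env file.
authenticator = IAMAuthenticator(os.environ["IBM_TONE_ANALZER_KEY"])
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
# Region-specific URL; replace with the one from your service's credentials page.
tone_analyzer.set_service_url("https://api.eu-gb.tone-analyzer.watson.cloud.ibm.com")

result = tone_analyzer.tone(
    tone_input={"text": "I love the IBM Cloud!"},
    content_type="application/json",
).get_result()
print(result["document_tone"]["tones"])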

To get a Google secret, a credentials.json file is also needed in server/routers/internal/: follow this guide, rename the downloaded file, and put it in the aforementioned directory. Then change the value of .web.client_secret in that file to null and put that value in GOOGLE_SECRET in the .env file.
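
The nulling-out step can also be scripted; here is a sketch of the steps just described (the paths follow this repository's layout):

import json
from pathlib import Path

creds_path = Path("server/routers/internal/credentials.json")
creds = json.loads(creds_path.read_text())

# Move the client secret out of credentials.json and into the .env file.
secret = creds["web"]["client_secret"]
creds["web"]["client_secret"] = None
creds_path.write_text(json.dumps(creds, indent=2))

with open("server/.env", "a") as env_file:
    env_file.write(f'GOOGLE_SECRET="{secret}"\n')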

The DATABASE_IP secret is the IP of the database used in production. Keeping it in the .env file ensures the secret is not exposed to the public.
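
How the backend reads these variables is up to you; one common approach is python-dotenv (a sketch, assuming the script is run from the server/ directory; the repository may load its environment differently):

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

TWITTER_BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]
IBM_TONE_ANALZER_KEY = os.environ["IBM_TONE_ANALZER_KEY"]
GOOGLE_SECRET = os.environ["GOOGLE_SECRET"]
DATABASE_IP = os.environ["DATABASE_IP"]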


1.6. Development Instructions

If you are developing this project, you will need Node.js (for npm) and Python 3 (for pip3) installed.

Ensure that you have the dependencies installed as well once you have cloned and entered the repository:

cd client && npm install
cd ..
cd server && pip3 install -r requirements.txt
cd ..

1.6.1. To Test Development Build

Running the ./make_compose.sh script will create a Docker Compose network and build the containers needed to run the app, along with any needed dependencies.

./make_compose.sh

The script creates a Compose network with three containers: frontend, backend and database. The frontend container is a React website with Sign in with Google support and a flow-based programming tool called Rete.js.

By default the containers run on the following ports:

  • frontend: 8080
  • backend: 5001
  • database: 27017
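
Once the containers are up, a quick smoke test against the default ports can look like this (a sketch; /docs assumes FastAPI's interactive docs are enabled, and the database is skipped because MongoDB does not speak HTTP):

import requests

# Default ports from the list above; adjust if you changed the Compose setup.
for name, url in {
    "frontend": "http://localhost:8080/",
    "backend": "http://localhost:5001/docs",
}.items():
    status = requests.get(url, timeout=5).status_code
    print(f"{name}: HTTP {status}")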

For the Beta, it displays a website created using React and built on IBM's Carbon Design System. The form allows you to run a default workflow.

The backend container is a Python FastAPI REST application that interacts with the database and acts as a portal to other services, e.g. the tone analyser and other 3rd party APIs.
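
To illustrate the shape of such a service (a toy sketch, not the repository's actual routes; the /health endpoint is made up for the example):

from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    # Simple liveness probe to confirm the container is up.
    return {"status": "ok"}

# Run locally with: uvicorn main:app --port 5001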

There is also a dongo container, which runs the MongoDB database used to store user data. The name comes from Mitch blending "docker" and "mongo" in his head and misspeaking, saying "dongo" instead of "mongo".
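
Connecting to the dongo container from the backend looks roughly like this (a pymongo sketch; the database and collection names are hypothetical, not the ones used in this repository):

import os
from pymongo import MongoClient

# DATABASE_IP comes from the .env file; 27017 is the default port listed above.
client = MongoClient(f"mongodb://{os.environ['DATABASE_IP']}:27017/")
db = client["ai_workflow"]       # hypothetical database name
workflows = db["workflows"]      # hypothetical collection name

workflows.insert_one({"owner": "john", "nodes": []})
print(workflows.count_documents({}))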

For documentation regarding the frontend, backend and database, please consult the docs folder.


1.7. Continuous Integration

We decided to use GitHub Actions workflows that trigger whenever we open a pull request into main or stable.

There are two tests, one for the frontend and one for the backend.

Tests can be found in client/App.test.js and server/test_application.py.


1.8. Continuous Delivery

We decided to use a GitHub action that triggers whenever we push to main.

The repository should have two secrets:

  1. ICR_NAMESPACE (The namespace of your cluster)
  2. IBM_CLOUD_API_KEY (An API key to access IBM Cloud)

Detailed instructions can be found in the actual action file.

IMPORTANT: Please run the ibm_cloud_setup.sh shell script prior to running the action.

./ibm_cloud_setup.sh <REGISTRY_HOSTNAME> <IBM_CLOUD_API_KEY> <ICR_NAMESPACE>

The action builds three images (one for each service detailed above) and pushes them to IBM's IKS, where our client can access the platform.

To test locally, use act:

act -j <job> --container-architecture linux/amd64 -s IBM_CLOUD_API_KEY="xxx" -s ICR_NAMESPACE="xxx" 

To find out available jobs, type act -l.

Other secrets are also needed; go to the repository's Settings page to see them.


1.9. Wikis and Poster

1.9.1. Wiki Link

Our wiki contains our development progress and the achievements and goals for every release version.

1.9.2. CS in the city poster

Our poster gives an overview of our project, containing the client's user stories, the problems we encountered, our approaches to solving them, and some screenshots of our program.