This is a backend-focused project to demonstrate:
- A video processing pipeline for streaming services.
- Usage of various AWS services.
✅ Infrastructure as Code with Terraform. Deploy everything with a single command.
✅ Secure services with strict roles and policies
✅ Video processing with AWS Elemental MediaConvert to produce HLS playlist
✅ S3 to store raw videos and processed HLS playlist
✅ Event processing with AWS Lambda and SQS
✅ CloudWatch event rule to filter finished MediaConvert jobs
✅ User management with AWS Cognito
✅ Database with DynamoDB
✅ Frontend with Next.js and React
Out of scope:
- A beautiful, feature-complete frontend: this is a backend-focused project.
- High availability: I use minimal infrastructure to keep costs down.
You'll need an AWS account with a credit card on file to use MediaConvert.
Please install the following tools:
Make sure you are signed in to AWS CLI.
Then run the following commands:

```shell
terraform init
terraform apply
```
Every uploaded video must have an owner, which means users must sign in before uploading videos.
I use Amplify's React UI to handle sign-in and sign-up, and store the access token in cookie storage.
After the user is authenticated, the frontend will request a presigned POST URL from the backend.
The backend will:
- Generate a new ULID, which is used as both the S3 object key and the video ID in the database.
- Generate a presigned POST URL with `x-amz-user-id` set to the owner of the video, and limit the upload size to 100 MB.
- Return the presigned POST URL and the video ID to the frontend.
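The backend steps above can be sketched roughly like this. This is a minimal illustration, not the project's actual code: the helper names are mine, and in practice the URL itself would come from an SDK call such as boto3's `generate_presigned_post`, which takes exactly these `Fields` and `Conditions` arguments.

```python
import secrets
import time

# Crockford base32 alphabet used by ULID (no I, L, O, U).
ULID_ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def new_ulid() -> str:
    """Generate a 26-character ULID: 48-bit millisecond timestamp + 80 random bits."""
    ts = int(time.time() * 1000)
    value = (ts << 80) | secrets.randbits(80)
    chars = []
    for _ in range(26):
        chars.append(ULID_ALPHABET[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))

def presigned_post_params(owner_id: str, max_bytes: int = 100 * 1024 * 1024):
    """Build the fields/conditions that would be passed to generate_presigned_post.
    The ULID doubles as the S3 object key and the video ID in the database."""
    video_id = new_ulid()
    fields = {"x-amz-user-id": owner_id}
    conditions = [
        {"x-amz-user-id": owner_id},             # pin the owner on the upload
        ["content-length-range", 0, max_bytes],  # reject uploads over 100 MB
    ]
    return video_id, fields, conditions
```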
The video is uploaded to the presigned POST URL.
Once the video is uploaded, the frontend asks the backend to create a new video record in the database with the specified video ID.
The backend verifies that the `x-amz-user-id` metadata of the uploaded object matches the owner of the video.
The new video record will have these fields:
- title
- description
- status, set to `processing`. Videos with this status will not show up on the home page.
- createdAt
- userId
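A rough sketch of the record creation step, assuming the `x-amz-user-id` metadata has already been read from the uploaded object (e.g. with a HEAD request); the function name and error handling are my own, not the project's:

```python
import time

def create_video_record(video_id: str, title: str, description: str,
                        object_user_id: str, owner_id: str) -> dict:
    """Validate ownership and build the video record. `object_user_id` is the
    x-amz-user-id metadata read from the uploaded S3 object; raising on a
    mismatch keeps videos out of the table unless the uploader owns them."""
    if object_user_id != owner_id:
        raise PermissionError("uploaded object does not belong to this user")
    return {
        "id": video_id,
        "title": title,
        "description": description,
        "status": "processing",          # hidden from the home page until done
        "createdAt": int(time.time() * 1000),
        "userId": owner_id,
    }
```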
Once the video is uploaded, an event will be sent to an SQS queue which is consumed by a Lambda function.
The Lambda function will:
- Use `ffprobe` to get the video resolution.
- Calculate the resolutions for the HLS playlist.
- A 720p source video will produce 720p, 480p, 360p and 240p resolutions.
- A 370p source video will produce 360p and 240p resolutions.
- Send a job to AWS Elemental MediaConvert to generate the HLS playlist with the calculated resolutions.
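The resolution calculation in the steps above can be expressed as a small pure function. The ladder values come from the bullets above; the function name is an assumption:

```python
LADDER = [720, 480, 360, 240]  # candidate HLS renditions, highest first

def hls_resolutions(source_height: int) -> list[int]:
    """Keep every rendition no taller than the source, per the rules above:
    a 720p source yields the full ladder, a 370p source only 360p and 240p."""
    return [h for h in LADDER if h <= source_height]
```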
To use `ffprobe`, I need to include the `ffprobe` binary in a separate Lambda layer.
The Lambda layer is packaged in a separate repository and can be downloaded easily with curl.
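A sketch of how the Lambda might invoke `ffprobe` from the layer and parse its JSON output. The flags shown are standard `ffprobe` options; the helper names and the split into two functions are my choice, not necessarily the project's:

```python
import json
import subprocess

# Standard ffprobe invocation: first video stream, width/height only, JSON out.
FFPROBE_ARGS = [
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=width,height", "-of", "json",
]

def probe_resolution(path: str) -> tuple[int, int]:
    """Run the layer's ffprobe binary on a local file and return (width, height)."""
    out = subprocess.run(FFPROBE_ARGS + [path], capture_output=True,
                         check=True, text=True).stdout
    return parse_resolution(out)

def parse_resolution(ffprobe_json: str) -> tuple[int, int]:
    """Extract the first video stream's dimensions from ffprobe's JSON output."""
    stream = json.loads(ffprobe_json)["streams"][0]
    return stream["width"], stream["height"]
```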
When a MediaConvert job finishes successfully, its results are stored in a separate HLS playlist bucket.
This bucket is publicly readable, so no signed URL is required to fetch the thumbnail and the HLS playlist.
The video's status needs to be updated to either `done` or `failed` by:
- Using a CloudWatch event rule, filtered for `COMPLETE` or `ERROR` job events coming from MediaConvert.
- SQS and Lambda will handle the filtered event, get the video ID from it, and update the video's status.
- Aside from that, the raw video of the finished job will be deleted.
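The status update step might look roughly like this, assuming the MediaConvert job was submitted with the video ID in its `userMetadata` (a common pattern, but an assumption here); the DynamoDB update and the raw-object deletion are omitted:

```python
def status_from_event(event: dict) -> tuple[str, str]:
    """Map a filtered MediaConvert job state change event to (video_id, status).
    The rule only lets COMPLETE and ERROR through, so anything that is not
    COMPLETE is treated as failed."""
    detail = event["detail"]
    video_id = detail["userMetadata"]["videoId"]  # assumed job metadata key
    status = "done" if detail["status"] == "COMPLETE" else "failed"
    return video_id, status
```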