ZERO-SHOT-VIDEO-GENERATION

👍🏻 ELEC8900: ML - Project [SEMESTER II]

  • GENG8900: ML

Machine Learning Project


Introduction

The Zero-Shot Video Generation project transforms text prompts into video content. This README guides you through setting up and running the project on your local machine.

Prerequisites

  • Python 3.8+
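To confirm the interpreter on your path meets this requirement, a quick check like the one below is enough (a minimal sketch, not a script shipped with the repository):

# check_python.py — verify the interpreter is new enough for this project
import sys
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])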

Installation

Step 1: Clone the Repository

git clone https://github.com/Amey-Thakur/ZERO-SHOT-VIDEO-GENERATION
# Or download and extract the zip file

Step 2: Install Dependencies

Navigate to the project directory and run:

pip install -r requirements.txt

This installs necessary libraries like torch, numpy, opencv, gradio, and moviepy.
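To confirm the environment is ready before launching the app, a quick import smoke test is enough. This is a minimal sketch based on the dependencies named above, not a script from the repository:

# smoke_test.py — hypothetical check that the core dependencies import correctly
import torch
import numpy
import cv2      # the opencv package imports as cv2
import gradio
import moviepy

print("torch:", torch.__version__)
print("numpy:", numpy.__version__)
print("gradio:", gradio.__version__)
print("All core dependencies imported successfully.")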

Step 3: Start the Project

In the project directory, run:

python app.py

This starts the server and initializes the Text2Video model.
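For context, the flow inside app.py is roughly the one sketched below: load the text-to-video model once, wrap it in a Gradio interface, and launch the local server. The function and widget names here are illustrative placeholders, not the project's actual code:

# app_sketch.py — hypothetical outline of a Gradio Text2Video app (not the project's app.py)
import gradio as gr

def generate_video(prompt: str) -> str:
    """Placeholder: run the Text2Video model on `prompt` and return the path to the rendered .mp4."""
    # In the real project, this is where the model produces frames and
    # opencv/moviepy writes them out as a video file.
    raise NotImplementedError("Replace with the project's model call")

demo = gr.Interface(
    fn=generate_video,
    inputs=gr.Textbox(label="Prompt", placeholder="Describe the video you want"),
    outputs=gr.Video(label="Generated video"),
    title="Zero-Shot Video Generation",
)

demo.launch()  # serves on http://127.0.0.1:7860 by default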

Usage

Step 4: Access the Local Server

Copy the localhost link (typically http://127.0.0.1:7860) from the terminal into your web browser.

Step 5: Interact with the Text2Video Model

Enter the desired text into the model's interface on the server page to start video generation.

Step 6: Generate the Video

Submit the prompt to trigger the model, which processes the text and generates the video.

Step 7: View the Results

The generated video is displayed on the webpage and can be played directly in the browser.
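If you also want a local copy of the output rather than only viewing it in the browser, generated frames can be written to an .mp4 with moviepy, which the installation step above already pulls in. This is a generic sketch, not code from the repository; it assumes the frames are available as a list of H x W x 3 uint8 numpy arrays:

# save_video.py — hypothetical helper for writing generated frames to disk with moviepy (1.x import path)
import numpy as np
from moviepy.editor import ImageSequenceClip

def save_frames_as_mp4(frames, path="output.mp4", fps=8):
    """frames: list of H x W x 3 uint8 numpy arrays in RGB order."""
    clip = ImageSequenceClip([np.asarray(f) for f in frames], fps=fps)
    clip.write_videofile(path, codec="libx264", audio=False)

# Example call with dummy solid-colour frames, just to show the shape of the API:
if __name__ == "__main__":
    dummy = [np.full((256, 256, 3), i * 10, dtype=np.uint8) for i in range(24)]
    save_frames_as_mp4(dummy, "demo.mp4", fps=8)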

Conclusion

Enjoy transforming text into videos with ease and efficiency! For further details, refer to the project documentation.


👉🏻 Presented as a part of the 3rd Semester Project @ University of Windsor 👈🏻

👷 Project Authors: Amey Thakur, Jithin Gijo and Ritika Agarwal (Batch of 2024)

✌🏻 Back To Engineering ✌🏻