
Text_Generation

Fine-tuning DistilGPT-2 for Text Generation

The dataset used for fine-tuning is the ROCStories dataset. Dataset link: https://cs.rochester.edu/nlp/rocstories/

Install the required packages from requirements.txt (run the following command in the terminal):

pip install -r requirements.txt

Overview

This project uses PyTorch, the Transformers library, and Hugging Face's DistilGPT-2 model. The training and evaluation loops are written in PyTorch and use GPU acceleration for efficient fine-tuning. The primary objective is to fine-tune the model for text generation so that it produces coherent, context-aware text sequences. Perplexity is used for evaluation.
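
A minimal sketch of what such a fine-tuning loop can look like is shown below. It assumes the "distilgpt2" checkpoint from Hugging Face; the stories list, sequence length, batch size, learning rate, and epoch count are illustrative placeholders, not the exact settings used in this repository.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2").to(device)

# Placeholder: load the ROCStories texts here (one story per string).
stories = ["Tom wanted a new bike. He saved his allowance all summer. ..."]

enc = tokenizer(stories, truncation=True, max_length=128,
                padding="max_length", return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"]),
                    batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for input_ids, attention_mask in loader:
        input_ids = input_ids.to(device)
        attention_mask = attention_mask.to(device)
        # For causal LM fine-tuning the labels are the input ids themselves,
        # with padding positions masked out of the loss.
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100
        loss = model(input_ids=input_ids, attention_mask=attention_mask,
                     labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()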

Dependencies

  • PyTorch
  • Transformers library
  • Hugging Face DistilGPT-2 model
  • GPU for accelerated fine-tuning
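
As mentioned in the overview, the fine-tuned model is evaluated with perplexity, i.e. the exponential of the average per-token cross-entropy on held-out text. A sketch of such an evaluation helper is below; the perplexity function name and the eval_loader it consumes are illustrative (not this repository's actual code), and the token-count weighting is approximate because the Transformers loss shifts labels internally.

import math
import torch

@torch.no_grad()
def perplexity(model, loader, device):
    """Exponential of the average per-token cross-entropy over a held-out set."""
    model.eval()
    total_loss, total_tokens = 0.0, 0
    for input_ids, attention_mask in loader:
        input_ids = input_ids.to(device)
        attention_mask = attention_mask.to(device)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # exclude padding from the loss
        loss = model(input_ids=input_ids, attention_mask=attention_mask,
                     labels=labels).loss
        n_tokens = int((labels != -100).sum())
        total_loss += loss.item() * n_tokens
        total_tokens += n_tokens
    return math.exp(total_loss / total_tokens)

# Example usage: ppl = perplexity(model, eval_loader, device)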
