
# Text_Generation

Fine-tuning DistilGPT-2 for text generation.

The dataset used for fine-tuning is the ROCStories dataset. Dataset link: https://cs.rochester.edu/nlp/rocstories/

Install the required packages from requirements.txt (run the following command in the terminal):

pip install -r requirements.txt

## Overview

This project uses PyTorch, the Transformers library, and Hugging Face's GPT-2 model. The training and evaluation loops are written in PyTorch and use GPU acceleration for efficient fine-tuning. The primary objective is to fine-tune the GPT-2 model for text generation, enabling it to produce coherent, context-aware text sequences. Perplexity is used as the evaluation metric.
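A minimal sketch of such a fine-tuning loop is shown below. The file name `rocstories.txt` (one story per line), the hyperparameters, and the epoch count are illustrative assumptions, not the project's exact setup:

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumed setup: DistilGPT-2 from the Hugging Face hub; "rocstories.txt" is a
# hypothetical file containing one ROCStories story per line.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = GPT2LMHeadModel.from_pretrained("distilgpt2").to(device)

class StoryDataset(Dataset):
    def __init__(self, path, max_length=128):
        with open(path) as f:
            stories = [line.strip() for line in f if line.strip()]
        self.enc = tokenizer(stories, truncation=True, max_length=max_length,
                             padding="max_length", return_tensors="pt")

    def __len__(self):
        return self.enc["input_ids"].size(0)

    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.enc.items()}

loader = DataLoader(StoryDataset("rocstories.txt"), batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):  # epoch count is an illustrative choice
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        labels = batch["input_ids"].clone()
        labels[batch["attention_mask"] == 0] = -100  # exclude padding from loss
        # With labels supplied, the model returns the causal LM cross-entropy,
        # shifting labels internally so each token predicts the next one.
        loss = model(input_ids=batch["input_ids"],
                     attention_mask=batch["attention_mask"],
                     labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```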

## Dependencies

- PyTorch
- Transformers library
- Hugging Face GPT-2 model
- GPU for accelerated fine-tuning
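
Since perplexity is the evaluation metric named in the Overview, here is a minimal sketch of computing it over a held-out `DataLoader`, followed by a sample generation call. It reuses `model`, `tokenizer`, and `device` from the fine-tuning sketch above; the prompt and sampling settings are illustrative:

```python
import math
import torch

@torch.no_grad()
def evaluate_perplexity(model, loader, device):
    """Perplexity = exp(mean token-level cross-entropy over the eval set)."""
    model.eval()
    total_loss, total_tokens = 0.0, 0
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        labels = batch["input_ids"].clone()
        labels[batch["attention_mask"] == 0] = -100  # skip padding
        out = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"], labels=labels)
        # The model shifts labels by one position, so the first token of each
        # sequence is never predicted; count the loss targets accordingly.
        n = (labels[:, 1:] != -100).sum().item()
        total_loss += out.loss.item() * n
        total_tokens += n
    return math.exp(total_loss / total_tokens)

# Sampling a continuation from a prompt (settings are illustrative):
prompt = tokenizer("Once upon a time", return_tensors="pt").to(device)
ids = model.generate(**prompt, max_new_tokens=60, do_sample=True, top_p=0.9,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```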