
Fine-Tuning Large Language Models and Non-LLM Models

This repository contains the code and results for fine-tuning several large language models (LLMs) and non-LLM models on a text-classification task. The models included in this study are listed below.

Overview

The goal of this project is to compare the performance of different fine-tuned models on a specific task. The accuracies achieved by these models are as follows:

| Model | Accuracy (%) |
| --- | --- |
| distillbert_neural_network | 40.32 |
| bert_base_uncased | 66.67 |
| bart_large_mnli | 67.30 |
| llama3_8b | 74.60 |
| mistral_7b | 75.23 |
| gemma_7b | 77.11 |
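
Each accuracy above is the percentage of test examples classified correctly. A minimal sketch of that metric (the function name and percentage scaling are assumptions; the repository's actual evaluation script is not shown here):

```python
def accuracy(predictions, labels):
    """Percentage of predictions that match the reference labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Example: 3 of 4 predictions correct -> 75.0
score = accuracy(["a", "a", "b", "b"], ["a", "b", "b", "b"])
```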

Results

Plot: bar chart comparing the accuracies of the fine-tuned models listed above.

Dataset Preprocessing

  • Dropped the `sq`, `sub_topic`, and `sub_sub_topic` columns
  • Removed all links and emojis
  • Replaced numbers with words
  • Dropped NaN values
  • Removed empty rows
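
The steps above can be sketched with pandas. This is an illustrative reconstruction, not the repository's actual script: the `text` column name is an assumption, and the digit-by-digit number spelling is a minimal stand-in for a proper number-to-words converter (a library such as `num2words` could be used instead):

```python
import re
import pandas as pd

# Minimal digit-word map; a stand-in for a full number-to-words converter
_DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
                "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def number_to_words(num: str) -> str:
    # Spell out each digit, e.g. "42" -> "four two"
    return " ".join(_DIGIT_WORDS[d] for d in num)

def clean_text(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", "", text)                  # remove links
    text = text.encode("ascii", "ignore").decode()                     # strip emojis / non-ASCII
    text = re.sub(r"\d+", lambda m: number_to_words(m.group()), text)  # numbers -> words
    return re.sub(r"\s+", " ", text).strip()                           # normalize whitespace

# Toy data; column names other than the dropped three are assumptions
df = pd.DataFrame({
    "text": ["Visit https://example.com", "I have 2 cats", None, ""],
    "sq": [1, 2, 3, 4],
    "sub_topic": ["a", "b", "c", "d"],
    "sub_sub_topic": ["w", "x", "y", "z"],
})

df = df.drop(columns=["sq", "sub_topic", "sub_sub_topic"])  # drop unused columns
df = df.dropna(subset=["text"])                             # drop NaN values
df["text"] = df["text"].map(clean_text)
df = df[df["text"] != ""]                                   # remove empty rows
```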
