Finetuning Some Wizard Models With QLoRA
A Streamlit application for Reddit posts, powered by OpenAI, Pinecone, and LangChain
A collection of examples for training or fine-tuning LLMs.
A winner of the NeurIPS LLM 2023 Competition
Finetune an LLM to generate SQL from text on Intel GPUs (XPUs) using QLoRA
Natural Language Processing Class Project, Spring '23: Analysing and Generating Sports Fans' Responses from Reddit Sports Subreddits
Factuality check of the SemRep Predications
This is a package for generating questions and answers from unstructured data to be used for NLP tasks.
Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
A payload compression toolkit that makes it easy to create ideal data structures for LLMs; from training data to chain payloads.
Fine-tune large language models (LLMs) using the Hugging Face Transformers library (see the QLoRA sketch after this list).
Gemma-2b-it fine-tuned on a dataset of Python code so that it learns Python syntax and can assist with debugging, offering practical guidance to programmers.
Enter the realm of truth detection with GPT-Truth: fine-tuning GPT-3.5 for unparalleled accuracy in identifying deceptive opinions.
LLM (Large Language Model) FineTuning
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
Our research project for the NLP class at the University of Ljubljana, to which I am one of the contributors.
Collecting data for building Lucknow's first LLM
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
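Many of the repositories above follow the same QLoRA recipe: quantize a frozen base model to 4-bit with bitsandbytes and train small LoRA adapters on top via PEFT. Below is a minimal sketch of that setup, assuming the Hugging Face transformers, peft, and bitsandbytes packages are installed; the base model name and LoRA hyperparameters are illustrative placeholders, not taken from any listed project.

```python
# Minimal QLoRA setup sketch: load a base model in 4-bit and attach LoRA adapters.
# Model name and hyperparameters are placeholders for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

# 4-bit NF4 quantization keeps the frozen base weights small (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # enable checkpointing, cast norms

# Small trainable low-rank adapters on the attention projections (the "LoRA" part).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained

# Training then proceeds with the usual Trainer / SFTTrainer loop on a
# tokenized instruction dataset; only the adapters receive gradient updates.
```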