URIAL: Tuning-Free Alignment via In-Context Learning

This is part of the Re-Align project by AI2 Mosaic. Please find more information on our website: https://allenai.github.io/re-align/.

URIAL (Untuned LLMs with Restyled In-context ALignment) is a simple, tuning-free alignment method. It achieves effective alignment purely through in-context learning (ICL), requiring as few as three constant stylistic examples and a system prompt to reach performance comparable to SFT/RLHF.

URIAL Prompts

🖼️ See the figure in the repository for an illustration of URIAL and other tuning-free alignment methods.

Content

As discussed in the paper, a URIAL prompt consists of K-shot stylistic in-context examples and a system prompt. The folder urial_prompts contains the prompt files used below (e.g., inst_1k.txt).
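
To make this concrete, here is a minimal sketch (not code from this repo) of how such a prompt can be assembled: the URIAL prefix file is simply prepended to a new user query. The "# Query:" / "# Answer:" markers and the example query are assumptions for illustration; the actual format is defined by the files in urial_prompts and by src/infer.py.

# Minimal sketch (not the repo's code): build a URIAL-style prompt by
# prepending the restyled in-context examples to a new user query.
def build_urial_prompt(prefix_path: str, query: str) -> str:
    # The prefix file holds the system prompt plus the K stylistic examples.
    with open(prefix_path, "r", encoding="utf-8") as f:
        urial_prefix = f.read()
    # The "# Query:" / "# Answer:" markers below are illustrative assumptions.
    return f"{urial_prefix}\n# Query:\n{query}\n\n# Answer:\n"

prompt = build_urial_prompt("urial_prompts/inst_1k.txt", "How do I bake bread at home?")
print(prompt[-300:])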

Installation

conda create -n re_align python=3.10
conda activate re_align
pip install -r requirements.txt

URIAL Inference

Below we show an example of how to run inference experiments with URIAL prompts on a base LLM (here mistralai/Mistral-7B-v0.1 on the just_eval data):

version="inst_1k"
output_dir="result_dirs/urial/${version}/"
python src/infer.py \
        --interval 1 \
        --model_path "mistralai/Mistral-7B-v0.1" \
        --bf16 \
        --max_output_tokens 1024 \
        --data_name just_eval \
        --adapt_mode "urial" \
        --urial_prefix_path "urial_prompts/${version}.txt" \
        --repetition_penalty 1.1 \
        --output_folder $output_dir
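
For intuition, here is a minimal, self-contained sketch of URIAL-style inference using plain Hugging Face transformers. It is not the repo's src/infer.py: the model name, bf16, max output tokens, and repetition penalty mirror the command above, while the prompt markers, the example query, and the decoding details are assumptions for illustration.

# Minimal sketch of URIAL-style inference with transformers (not src/infer.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # same base model as in the command above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"  # corresponds to --bf16
)

# URIAL prefix (system prompt + stylistic examples) followed by the new query;
# the markers and the query are illustrative assumptions.
with open("urial_prompts/inst_1k.txt", encoding="utf-8") as f:
    prompt = f.read() + "\n# Query:\nHow do I bake bread at home?\n\n# Answer:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,     # --max_output_tokens 1024
    repetition_penalty=1.1,  # --repetition_penalty 1.1
    do_sample=False,
)
# Decode only the newly generated tokens; the real script may also truncate
# the continuation at the next in-context marker.
print(tokenizer.decode(outputs[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))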

Supported models include:

  • meta-llama/Llama-2-7b-hf
  • TheBloke/Llama-2-70B-GPTQ (with the --gptq flag)
  • other similar models on huggingface.co

👀 More models will be supported once vLLM support is added. Please stay tuned!

Evaluation with Just-Eval

Please find more details about our evaluation here: https://github.com/Re-Align/just-eval

Installation of Just-Eval

pip install git+https://github.com/Re-Align/just-eval.git
export OPENAI_API_KEY=<your secret key>

Reformatting output data

For example, if the output data is result_dirs/urial/inst_1k/Mistral-7B-v0.1.json, then run the following command to reformat the output data to result_dirs/urial/inst_1k/Mistral-7B-v0.1.to_eval.json.

python src/scripts/reformat.py result_dirs/urial/inst_1k/Mistral-7B-v0.1.json
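
Before scoring, you can sanity-check the reformatted file with a quick inspection like the one below. The snippet only assumes that the .to_eval.json file is a JSON list of records; the exact field names are defined by just-eval, not by this sketch.

# Quick sanity check of the reformatted file (assumes a JSON list of records).
import json

path = "result_dirs/urial/inst_1k/Mistral-7B-v0.1.to_eval.json"
with open(path, encoding="utf-8") as f:
    records = json.load(f)

print(f"{len(records)} records")              # scoring below covers examples 0-1000
print("fields:", sorted(records[0].keys()))   # field names come from just-eval's format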

Run Scoring

to_eval_file="result_dirs/urial/inst_1k/Mistral-7B-v0.1.to_eval.json"
run_name="Mistral-URIAL"
# GPT-4 for first five aspects on 0-800 examples 
just_eval \
    --mode "score_multi" \
    --model "gpt-4-0314" \
    --start_idx 0 \
    --end_idx 800 \
    --first_file $to_eval_file \
    --output_file "result_dirs/just-eval_results/${run_name}.score_multi.gpt-4.json"

# GPT-3.5-turbo for the safety aspect on 800-1000 examples
just_eval \
    --mode "score_safety" \
    --model "gpt-3.5-turbo-0613" \
    --first_file $to_eval_file \
    --start_idx 800 --end_idx 1000 \
    --output_file "result_dirs/just-eval_results/${run_name}.score_safety.chatgpt.json"
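
The two runs above score disjoint slices (examples 0-800 for the multi-aspect scores, 800-1000 for safety). If you want to confirm both result files were written, a small check like the one below works; it only assumes each output file can be loaded as JSON.

# Check both just-eval result files (assumes each loads as JSON).
import json

run_name = "Mistral-URIAL"
paths = [
    f"result_dirs/just-eval_results/{run_name}.score_multi.gpt-4.json",
    f"result_dirs/just-eval_results/{run_name}.score_safety.chatgpt.json",
]
for p in paths:
    with open(p, encoding="utf-8") as f:
        data = json.load(f)
    print(f"{p}: {len(data)} entries")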

Citation

@article{Lin2023ReAlign,
    author = {Bill Yuchen Lin and Abhilasha Ravichander and Ximing Lu and Nouha Dziri and Melanie Sclar and Khyathi Chandu and Chandra Bhagavatula and Yejin Choi},
    journal = {ArXiv preprint},
    title = {The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning},
    year = {2023}
}
