# NCU-IISR: Prompt Engineering on GPT-4 to Solve Biological Problems in BioASQ 11b Phase B

This is the repository for the paper above. It includes how we prompt ChatGPT and extract the results via the OpenAI API.
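The prompt-and-extract flow can be sketched roughly as follows. The model name, prompt wording, and helper name are illustrative assumptions, not taken from the paper or this repository:

```python
import json

# Hypothetical helper (name and prompt wording are assumptions): build the
# request body that would be sent to the OpenAI chat-completions endpoint
# for one BioASQ Phase B question.
def build_payload(question_body, snippets, question_type):
    context = "\n".join(snippets)
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    f"You answer biomedical {question_type} questions "
                    "using only the provided snippets."
                ),
            },
            {
                "role": "user",
                "content": f"Snippets:\n{context}\n\nQuestion: {question_body}",
            },
        ],
    }

payload = build_payload(
    "Is metformin a first-line therapy for type 2 diabetes?",
    ["Metformin is recommended as first-line therapy for type 2 diabetes."],
    "yesno",
)
print(json.dumps(payload, indent=2))
```

The payload is what you would pass to the chat-completions endpoint; the answer is then extracted from the returned message content.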

The results are recorded here, or you can check them on the official BioASQ website.

## To-Do

- Use the new OpenAI API to force responses in JSON format
- Compare selecting the top-n snippets with summarizing the snippets
- Post-process factoid-type questions that have more than five entries
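For the last item, a minimal post-processing sketch (the helper name and the case-insensitive dedup rule are assumptions; BioASQ factoid answers allow at most five candidate entries):

```python
def postprocess_factoid(entities, max_entries=5):
    """Deduplicate extracted entities (case-insensitive, order-preserving),
    then truncate to the BioASQ factoid limit of five entries."""
    seen, kept = set(), []
    for e in entities:
        key = e.strip().lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(e.strip())
    return kept[:max_entries]

print(postprocess_factoid(["BRCA1", "brca1", "TP53", "EGFR", "KRAS", "MYC", "ALK"]))
# -> ['BRCA1', 'TP53', 'EGFR', 'KRAS', 'MYC']
```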

## Citation

```bibtex
@inproceedings{hsueh2023bioasq,
  title        = {NCU-IISR: Prompt Engineering on GPT-4 to Solve Biological Problems in BioASQ 11b Phase B},
  author       = {Chun-Yu Hsueh and Yu Zhang and Yu-Wei Lu and Jen-Chieh Han and Wilailack Meesawad and Richard Tzong-Han Tsai},
  year         = 2023,
  booktitle    = {Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023},
  publisher    = {CEUR-WS.org},
  series       = {CEUR Workshop Proceedings},
  volume       = 3497,
  pages        = {114--121},
  url          = {https://ceur-ws.org/Vol-3497/paper-009.pdf},
  editor       = {Mohammad Aliannejadi and Guglielmo Faggioli and Nicola Ferro and Michalis Vlachos}
}
```

## Working Note

### Considered Systems

1. BioBERT[^1]
2. BioGPT (GPT-2 based)[^2]
3. ChatGPT (GPT-4)
4. ChatGPT (GPT-3.5)
5. Fine-tuned/transferred GPT-3, prompt tuning

### Scoring

### Others

[^1]: Previous Method Paper and Note
[^2]: The result on PapersWithCode is inaccurate; follow the PubMedQA homepage instead.