can-ai-code: Self-evaluating interview for AI coders #491
Labels
github
GitHub tools such as the CLI, Actions, Issues, and Pages
llm-applications
Topics related to practical applications of Large Language Models in various fields
llm-evaluation
Evaluating Large Language Models performance and behavior through human-written evaluation sets
New-Label
Choose this option if the existing labels are insufficient to describe the content accurately
openai
OpenAI APIs, LLMs, Recipes and Evals
source-code
Code snippets
A self-evaluating interview for AI coding models, written by humans and taken by AI.
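The core idea is that each interview question is written by a human but ships with machine-checkable expectations, so a model's answer can be graded without a human in the loop. A minimal sketch of that loop is below; the question, the stub model, and the function names are illustrative assumptions, not the project's actual API.

```python
# Hedged sketch of a self-evaluating coding interview: a human-written
# question carries machine-checkable assertions, so a model's answer
# can be graded automatically. All names here are hypothetical.

interview = {
    "prompt": "Write a function signum(x) returning -1, 0, or 1.",
    "checks": [("signum(-5)", -1), ("signum(0)", 0), ("signum(7)", 1)],
}

def stub_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns candidate source code.
    return (
        "def signum(x):\n"
        "    return (x > 0) - (x < 0)\n"
    )

def grade(answer_code: str, checks) -> float:
    # Execute the candidate code in a fresh namespace, then score it
    # by the fraction of checks whose result matches the expectation.
    ns = {}
    exec(answer_code, ns)
    passed = sum(eval(expr, ns) == expected for expr, expected in checks)
    return passed / len(checks)

score = grade(stub_model(interview["prompt"]), interview["checks"])
print(score)
```

A fractional score (rather than pass/fail) lets partially correct answers be compared across models, which is how leaderboard-style evaluations typically rank them.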
Key Ideas
News
mlabonne/Beyonder-4x7B-v2 (AWQ only; FP16 was mega slow).

Suggested labels
{
  "label-name": "interview-evaluation",
  "description": "Self-evaluating interview for AI coding models",
  "repo": "the-crypt-keeper/can-ai-code",
  "confidence": 96.49
}