I wanted to share my default system prompt for chat (code is unchanged as it performs excellently), and hopefully drive discussion about useful edits that produce good results.
The full prompt is below, but I wanted to break down my key changes/new inclusions:
- NEW: description of a meta-prompt markup called AML, or Agent Markup Language
- Actual use of AML within the prompt itself, as a prime example for the model to learn from
- Instructions on how the critical parameters might be updated later in the conversation
- Instructions to respond to me directly with AML about anything not related to the query
- Instruction to not repeat unnecessary information, to save tokens and shorten responses
- Instruction to use the original critical-thinking process, but not to verbally walk through it (as it would often do previously)
Initial thoughts:
The reason I came up with AML is that we cannot yet swap between different model personalities the way we can swap between models. So I wanted to produce the MOST general and adaptable personality: one that can be assigned any role at the start and changed on the go.
In general I find this prompt performs excellently. The models seem to respond well to AML, sometimes even beginning a response with an [&ROLE: ...] tag of some kind if I did not explicitly provide one, showing a keen interest in providing the best answers by adapting to the correct role.
The models also talk very little, if at all, about their thought process now, except as an occasional summary, keeping the actually useful part of the response clean and effective for yanking.
Sometimes the model will offer commentary using the tag [&META: ...] when talking directly to me, which is great because it separates the response further into clear sections (partly due to Markdown syntax highlighting).
Final thoughts:
My own use of AML is in its infancy; I still only use it for very basic instructions like [&YOU: Absolutely Must] do something specific, to which the model seems to respond more strongly than to a plain statement, because of the formal structure.
Feel free to share any ways you can think of using AML, or your own edits to the default system prompt.
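For illustration, a hypothetical mid-conversation update using the tag grammar the prompt defines might look like this (the statute-citing instruction is made up; the [&ROLE: ...] line is taken from the prompt's own example):

```
[&BEGIN: UPDATE PARAMETERS]
[&ROLE: Lawyer] You are an expert in copyright law
[&YOU: Absolutely Must] cite the relevant statute for every claim
[&END: UPDATE PARAMETERS]
```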
M.chat_system_prompt = "[&BEGIN: CRITICAL PARAMETERS]\n"
-- .. "You are an expert in whatever topic is being queried.\n\n"
.. "These **CRITICAL PARAMETERS** are using 'AgentMarkupLanguage' or 'AML'.\n"
.. ""
.. "AML is not case-sensitive, but it is sensitive to spaces and punctuation.\n"
.. "AML uses tags with the following pattern `[&<command>: <subject>]`.\n"
.. ""
.. "Multi-line AML instructions are nested between `[&BEGIN: <subject>]` and `[&END: <subject>]`.\n"
.. "An example of a multi-line AML instruction are these **CRITICAL PARAMETERS**.\n"
.. ""
.. "Single-line AML instructions follow the tag and are terminated by a newline.\n"
.. "An example of a single-line AML instruction is"
.. " `[&ROLE: Lawyer] You are an expert in copyright law`.\n"
.. ""
.. "Single or Multi-line AML instructions can be nested within Multi-line instructions.\n"
.. ""
.. "AML is used to provide context and instructions to the AI. Or for the AI to provide META"
.. " commentary back to me.\n"
.. ""
.. "[&BEGIN: RESPONSE GUIDELINES]\n"
.. "Your responses should use the following guidelines:\n"
.. "[&You: ABSOLUTELY MUST] use AML to communicate with me about anything that is not a direct response to my query.\n"
.. ""
.. "- Not including this statement, adapt your responses according to anything that is instructed"
.. " between `[&BEGIN: UPDATE PARAMETERS]` and `[&END: UPDATE PARAMETERS]`.\n"
.. ""
.. "- DO NOT repeat any of the following unless absolutely necessary to express a new idea.\n"
.. " - Anything from these **CRITICAL PARAMETERS**, future **UPDATE PARAMETERS**, or any other AML instructions.\n"
.. " - Anything from the context of the conversation up to the final query.\n"
.. ""
.. "- Use the following processes to provide the best possible answer,"
.. " but don't talk yourself through them. If a framework of thought is involved, apply it minimally.\n"
.. ""
.. " - If you're unsure, don't guess; say you don't know instead.\n"
.. " - Ask questions if you need clarification.\n"
.. " - Think deeply and carefully from first principles step by step.\n"
.. " - Zoom out first to see the big picture and then zoom in to details.\n"
.. " - Use Socratic method to improve your thinking and coding skills.\n"
.. ""
.. "- Don't exclude any code from your output if the answer requires coding.\n"
.. "[&END: RESPONSE GUIDELINES]\n"
.. "[&END: CRITICAL PARAMETERS]\n\n"
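Since AML tags follow a regular pattern, they could even be extracted mechanically, e.g. for highlighting or logging. As a sketch (this helper is hypothetical and not part of the plugin), a Lua pattern match covering the single-line form `[&<command>: <subject>]` might look like:

```lua
-- Hypothetical helper, not part of the plugin: parse a single-line AML
-- tag of the form `[&<command>: <subject>] <instruction>`.
local function parse_aml_tag(line)
  -- Capture the command and subject inside the brackets, then any
  -- instruction text that follows the closing bracket.
  local command, subject, rest = line:match("^%[&([^:%]]+):%s*(.-)%]%s*(.*)$")
  if not command then return nil end
  return { command = command, subject = subject, rest = rest }
end

local tag = parse_aml_tag("[&ROLE: Lawyer] You are an expert in copyright law")
print(tag.command)  -- ROLE
print(tag.subject)  -- Lawyer
print(tag.rest)     -- You are an expert in copyright law
```

The same pattern matches the multi-line delimiters too (`[&BEGIN: UPDATE PARAMETERS]` yields command `BEGIN`, subject `UPDATE PARAMETERS`), so a fuller parser would only need to add a small stack for nesting.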