
Improve Copilot + local AI setup instructions #7413

Open
wants to merge 1 commit into base: master
Conversation

@azigler (Contributor) commented Sep 18, 2024

I noticed this comment from a user who had trouble configuring Copilot with Ollama and realized that the documentation does not include specific instructions for the URL needed to access the OpenAI-compatible API. This PR clarifies that step, in line with what's taught in the Academy course.

@azigler azigler added 1: Dev Review Requires review by a core committer 2: Editor Review Requires review by an editor labels Sep 18, 2024
@azigler azigler self-assigned this Sep 18, 2024
@cwarnermm (Member) left a comment


Thank you, @azigler!

@@ -82,7 +82,7 @@ Configure a large language model (LLM) for your Copilot integration by going to

 1. Deploy your model, for example, on `Ollama <https://ollama.com/>`_.
 2. Select **OpenAI Compatible** in the **AI Service** dropdown.
-3. Enter the URL to your AI service from your Mattermost deployment in the **API URL** field.
+3. Enter the URL to your AI service from your Mattermost deployment in the **API URL** field. Be sure to include the port, and append `/v1` to the end of the URL. (e.g., `http://localhost:11434/v1` for Ollama)
Member

Suggested change
3. Enter the URL to your AI service from your Mattermost deployment in the **API URL** field. Be sure to include the port, and append `/v1` to the end of the URL. (e.g., `http://localhost:11434/v1` for Ollama)
3. Enter the URL to your AI service from your Mattermost deployment in the **API URL** field. Be sure to include the port, and append ``/v1`` to the end of the URL. (e.g., ``http://localhost:11434/v1`` for Ollama)
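
For anyone following this setup, it can help to confirm the OpenAI-compatible API is actually reachable at the `/v1` URL before entering it in the Copilot settings. Below is a minimal sketch in Python, assuming a default local Ollama install listening on port 11434 and a model named `llama3` already pulled (both are assumptions; substitute your own host, port, and model name):

```python
# Minimal sanity check for an OpenAI-compatible endpoint such as Ollama's.
# Assumes Ollama is listening on its default port 11434 and that a model
# named "llama3" has already been pulled -- adjust both to your setup.
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # note the /v1 suffix

# List the models the endpoint exposes (OpenAI-compatible GET /v1/models).
with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    print(json.load(resp))

# Send a one-line chat completion (OpenAI-compatible POST /v1/chat/completions).
payload = json.dumps({
    "model": "llama3",  # illustrative model name; use one you have pulled
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}).encode("utf-8")
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

If both requests succeed, the same base URL (including the port and the `/v1` suffix) is what belongs in the **API URL** field.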

@cwarnermm cwarnermm removed the 2: Editor Review Requires review by an editor label Sep 18, 2024
Newest code from mattermost has been published to preview environment for Git SHA bfa8ac9

1 similar comment

@azigler (Contributor, Author) commented Sep 18, 2024

@crspeller Should we include a recommendation for disabling tools? Is that a broad recommendation? If so, should we revisit the option and its default setting? As a minor note, it's a bit confusing that the field is named "Disable Tools" but presents a true/false toggle, so the toggle reads as the semantic opposite of the end result, which might confuse users. What about renaming it "Enable Tools" and defaulting it to whatever you think is most appropriate? A quick sketch of the inversion follows.
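
To illustrate the naming inversion described above, here is a hypothetical config sketch in Python; the field names and defaults are illustrative only, not the plugin's actual schema:

```python
# Hypothetical illustration of the "disable" vs. "enable" naming inversion;
# these field names and defaults are NOT the plugin's actual schema.

# With a "disable" flag, the toggle reads opposite to the outcome:
config = {"disable_tools": False}   # False here means tools ARE enabled
tools_enabled = not config["disable_tools"]

# With an "enable" flag, the toggle and the outcome line up:
config = {"enable_tools": True}     # True here means tools ARE enabled
tools_enabled = config["enable_tools"]
```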
