
Repo sync #35736

Merged · 2 commits · Dec 20, 2024
@@ -87,10 +87,10 @@ By default, {% data variables.product.prodname_copilot_chat_short %} uses the `G

By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 family of models, members of your enterprise can choose to use these models rather than the default `GPT 4o` model.

The o1 family of models includes two models:
The o1 family of models includes three models:

* `o1-preview`: This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to this model per day.
* `o1-mini`: This is the faster version of the `o1-preview` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model per day.
* `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, particularly in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.
* `o1-mini`: This is the faster version of the `o1` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model per day.

### {% data variables.product.prodname_copilot_short %} Metrics API access

@@ -67,7 +67,7 @@ The skills you can use in {% data variables.product.prodname_copilot_chat_dotcom

{% data reusables.copilot.copilot-chat-models-beta-note %}

{% data reusables.copilot.copilot-chat-models-list %}
{% data reusables.copilot.copilot-chat-models-list-o1 %}

### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

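The `{% data %}` swap above changes which reusable this page pulls in. In the github/docs repo, a reference such as `reusables.copilot.copilot-chat-models-list-o1` resolves to a Markdown file under `data/reusables/` (the new file added later in this PR). A minimal Python sketch of that naming convention, using a hypothetical `resolve_reusable` helper rather than the site's actual Liquid data loader:

```python
from pathlib import Path

def resolve_reusable(tag_path: str, repo_root: str = ".") -> Path:
    """Map a dot-separated reusable reference, e.g. the
    'reusables.copilot.copilot-chat-models-list-o1' used above,
    to its Markdown file under data/reusables/.

    Hypothetical helper for illustration only; the docs site
    resolves {% data %} tags with its own loader.
    """
    parts = tag_path.split(".")
    if parts[0] != "reusables":
        raise ValueError(f"not a reusable reference: {tag_path}")
    return Path(repo_root, "data", *parts[:-1], parts[-1] + ".md")

# The swap in this PR: the page now pulls in the o1-specific model list.
print(resolve_reusable("reusables.copilot.copilot-chat-models-list-o1"))
# data/reusables/copilot/copilot-chat-models-list-o1.md
```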
@@ -153,7 +153,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que

{% data reusables.copilot.copilot-chat-models-beta-note %}

{% data reusables.copilot.copilot-chat-models-list %}
{% data reusables.copilot.copilot-chat-models-list-o1 %}

### Changing your AI model

@@ -308,7 +308,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que

{% data reusables.copilot.copilot-chat-models-beta-note %}

{% data reusables.copilot.copilot-chat-models-list %}
{% data reusables.copilot.copilot-chat-models-list-o1-preview %}

### Changing your AI model

@@ -5,6 +5,9 @@ The following models are currently available through multi-model {% data variabl
* **o1-preview:** This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-preview is hosted on Azure.
* **o1-mini:** This is the faster version of the `o1-preview` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-mini is hosted on Azure.

> [!NOTE]
> Support for the `o1` model, replacing `o1-preview`, is coming soon to {% data variables.product.prodname_vs %}.

For more information about the o1 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.

For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see "[AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
10 changes: 10 additions & 0 deletions data/reusables/copilot/copilot-chat-models-list-o1.md
@@ -0,0 +1,10 @@
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:

* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT-4o is hosted on Azure.
* **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
* **o1:** This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
* **o1-mini:** This is the faster version of the `o1` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-mini is hosted on Azure.

For more information about the o1 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.

For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see "[AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot)."
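The descriptions above link to the OpenAI Platform documentation for the o1 family. As a rough illustration of the `gpt-4o` versus o1 trade-off they describe, the sketch below calls the underlying model families directly through the OpenAI Python SDK. This is not how Copilot Chat selects models (that happens in the chat UI); the prompt, the `OPENAI_API_KEY` environment variable, and direct API access are assumptions for the example, and the per-day limits quoted above apply to Copilot Chat, not to this API.

```python
# Illustrative only: comparing a fast general-purpose model with a
# reasoning-focused o1-family model via the OpenAI Platform API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Outline an O(n log n) algorithm for counting inversions in an array."

# Versatile default, analogous to Copilot Chat's GPT 4o default model.
fast = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Reasoning-focused model: slower, better suited to multi-step problems.
deep = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(fast.choices[0].message.content)
print(deep.choices[0].message.content)
```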
5 changes: 5 additions & 0 deletions src/audit-logs/data/fpt/organization.json
@@ -3129,6 +3129,11 @@
"description": "A user requested to bypass secret scanning push protection.",
"docs_reference_links": "/code-security/secret-scanning/working-with-push-protection#requesting-bypass-privileges-when-working-with-the-command-line"
},
{
"action": "secret_scanning_scan.completed",
"description": "A secret scanning scan has completed on this repository.",
"docs_reference_links": "/code-security/secret-scanning/about-secret-scanning"
},
{
"action": "security_configuration.create",
"description": "A security configuration was created",
5 changes: 5 additions & 0 deletions src/audit-logs/data/ghec/enterprise.json
@@ -3914,6 +3914,11 @@
"description": "A user requested to bypass secret scanning push protection.",
"docs_reference_links": "/code-security/secret-scanning/working-with-push-protection#requesting-bypass-privileges-when-working-with-the-command-line"
},
{
"action": "secret_scanning_scan.completed",
"description": "A secret scanning scan has completed on this repository.",
"docs_reference_links": "/code-security/secret-scanning/about-secret-scanning"
},
{
"action": "security_configuration.create",
"description": "A security configuration was created",
5 changes: 5 additions & 0 deletions src/audit-logs/data/ghec/organization.json
@@ -3129,6 +3129,11 @@
"description": "A user requested to bypass secret scanning push protection.",
"docs_reference_links": "/code-security/secret-scanning/working-with-push-protection#requesting-bypass-privileges-when-working-with-the-command-line"
},
{
"action": "secret_scanning_scan.completed",
"description": "A secret scanning scan has completed on this repository.",
"docs_reference_links": "/code-security/secret-scanning/about-secret-scanning"
},
{
"action": "security_configuration.create",
"description": "A security configuration was created",
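The three JSON additions above register the new `secret_scanning_scan.completed` event in the audit log reference for organizations and enterprises. As a minimal sketch of how you might look for that event on GitHub Enterprise Cloud, the example below filters an organization's audit log through the REST API; the organization name and token variable are placeholders, and a token with the `read:audit_log` scope is assumed.

```python
# Sketch: filter an organization's audit log (GitHub Enterprise Cloud)
# for the newly documented `secret_scanning_scan.completed` action.
# Assumes `requests` is installed and GITHUB_TOKEN holds a token with
# the read:audit_log scope; YOUR-ORG is a placeholder.
import os
import requests

resp = requests.get(
    "https://api.github.com/orgs/YOUR-ORG/audit-log",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    params={"phrase": "action:secret_scanning_scan.completed", "per_page": 50},
)
resp.raise_for_status()

for event in resp.json():
    print(event["action"], event.get("repo"), event.get("@timestamp"))
```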
2 changes: 1 addition & 1 deletion src/audit-logs/lib/config.json
@@ -3,5 +3,5 @@
"apiOnlyEvents": "This event is not available in the web interface, only via the REST API, audit log streaming, or JSON/CSV exports.",
"apiRequestEvent": "This event is only available via audit log streaming."
},
"sha": "9a2840e598a2a275532831e9bc3dc60b677c7926"
"sha": "cf8e25bad05e4b14ca3b701b3ecfe9e5d0187544"
}