[Bug]: aws_bedrockagent_agent resource fails to create due to inconsistent result after apply #37168
Another use case: I tried to provide the following configuration:

```hcl
resource "aws_bedrockagent_agent" "forex_asst" {
  agent_name              = "ForexAssistant"
  agent_resource_role_arn = aws_iam_role.bedrock_agent_forex_asst.arn
  description             = "An assistant that provides forex rate information."
  foundation_model        = data.aws_bedrock_foundation_model.this.model_id
  instruction             = "You are an assistant that looks up today's currency exchange rates. A user may ask you what the currency exchange rate is for one currency to another. They may provide either the currency name or the three-letter currency code. If they give you a name, you may first need to look up the currency code by its name."
  prompt_override_configuration {
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/pre_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "PRE_PROCESSING"
      inference_configuration {
        max_length     = 2048
        stop_sequences = ["\n\nHuman:"]
        temperature    = 0
        top_k          = 250
        top_p          = 1
      }
    }
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "ORCHESTRATION"
      inference_configuration {
        max_length     = 2048
        stop_sequences = ["$invoke$", "$answer$", "$error$"]
        temperature    = 0
        top_k          = 250
        top_p          = 1
      }
    }
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/kb_resp_gen.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"
      inference_configuration {
        max_length     = 2048
        stop_sequences = ["\n\nHuman:"]
        temperature    = 0
        top_k          = 250
        top_p          = 1
      }
    }
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/post_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "POST_PROCESSING"
      inference_configuration {
        max_length     = 2048
        stop_sequences = ["$invoke$", "$answer$", "$error$"]
        temperature    = 0
        top_k          = 250
        top_p          = 1
      }
    }
  }
}
```

I get the following validation error:

```text
│ operation error Bedrock Agent: CreateAgent, https response error StatusCode: 400, RequestID: 9409d5c8-be89-4983-a0e3-410178033863, ValidationException:
│ BasePromptTemplate is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove BasePromptTemplate and retry your
│ request.;InferenceConfiguration is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry
│ your request.;PromptState is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove PromptState and retry your
│ request.;BasePromptTemplate is incompatible with prompt type: KNOWLEDGE_BASE_RESPONSE_GENERATION when promptCreationMode is DEFAULT. Remove
│ BasePromptTemplate and retry your request.;InferenceConfiguration is incompatible with prompt type: KNOWLEDGE_BASE_RESPONSE_GENERATION when
│ promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry your request.;PromptState is incompatible with prompt type:
│ KNOWLEDGE_BASE_RESPONSE_GENERATION when promptCreationMode is DEFAULT. Remove PromptState and retry your request.;BasePromptTemplate is incompatible with
│ prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove BasePromptTemplate and retry your request.;InferenceConfiguration is incompatible
│ with prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry your request.;PromptState is incompatible with
│ prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove PromptState and retry your request.
```

After I fixed these validation issues in the configuration like so:

```hcl
resource "aws_bedrockagent_agent" "forex_asst" {
  agent_name              = "ForexAssistant"
  agent_resource_role_arn = aws_iam_role.bedrock_agent_forex_asst.arn
  description             = "An assistant that provides forex rate information."
  foundation_model        = data.aws_bedrock_foundation_model.this.model_id
  instruction             = "You are an assistant that looks up today's currency exchange rates. A user may ask you what the currency exchange rate is for one currency to another. They may provide either the currency name or the three-letter currency code. If they give you a name, you may first need to look up the currency code by its name."
  prompt_override_configuration {
    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/pre_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "PRE_PROCESSING"
      # inference_configuration {
      #   max_length     = 2048
      #   stop_sequences = ["\n\nHuman:"]
      #   temperature    = 0
      #   top_k          = 250
      #   top_p          = 1
      # }
    }
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "ORCHESTRATION"
      inference_configuration {
        max_length     = 2048
        stop_sequences = ["$invoke$", "$answer$", "$error$"]
        temperature    = 0
        top_k          = 250
        top_p          = 1
      }
    }
    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/kb_resp_gen.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"
      # inference_configuration {
      #   max_length     = 2048
      #   stop_sequences = ["\n\nHuman:"]
      #   temperature    = 0
      #   top_k          = 250
      #   top_p          = 1
      # }
    }
    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/post_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "POST_PROCESSING"
      # inference_configuration {
      #   max_length     = 2048
      #   stop_sequences = ["$invoke$", "$answer$", "$error$"]
      #   temperature    = 0
      #   top_k          = 250
      #   top_p          = 1
      # }
    }
  }
}
```

I then get the inconsistent state error, because the state is returning all attributes:

```text
aws_bedrockagent_agent.forex_asst: Creating...
```
```text
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("KNOWLEDGE_BASE_RESPONSE_GENERATION")})
│ does not correlate with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("POST_PROCESSING")}) does not correlate
│ with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("PRE_PROCESSING")}) does not correlate
│ with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
```
I am getting the same thing with the AWS provider version I'm using. I am trying to create a relatively default agent but want to customize it. Here's what I am trying to configure: my code is very similar to @acwwat's.
The agents are created, but the version/alias is not. I suspect it is because of this error.
Just to add to this one, I am experiencing the same problem: whenever I override the prompt templates, it produces the same error.
I ran into this as well, and since I am only using the post-processing piece for my agent, I essentially "blanked out" the other options, allowing for a clean apply.
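The exact configuration from that comment was not preserved. As an illustrative sketch only (not the commenter's actual code), "blanking out" the unused prompt types could mean leaving every non-overridden `prompt_configurations` block in `DEFAULT` mode with no template, state, or inference settings:

```hcl
# Sketch: override only POST_PROCESSING; leave the other three prompt
# types in DEFAULT mode with no extra attributes. All values here are
# assumed for illustration.
prompt_override_configuration {
  prompt_configurations {
    prompt_creation_mode = "DEFAULT"
    prompt_type          = "PRE_PROCESSING"
  }
  prompt_configurations {
    prompt_creation_mode = "DEFAULT"
    prompt_type          = "ORCHESTRATION"
  }
  prompt_configurations {
    prompt_creation_mode = "DEFAULT"
    prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"
  }
  prompt_configurations {
    base_prompt_template = file("${path.module}/prompt_templates/post_processing.txt")
    prompt_creation_mode = "OVERRIDDEN"
    prompt_state         = "ENABLED"
    prompt_type          = "POST_PROCESSING"
  }
}
```

Whether this exact shape applies cleanly depends on the provider version in use; on affected versions, the inconsistent-result error described above may still occur.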
This functionality has been released in v5.64.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
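To pick up the fix, the provider can be pinned to at least that release with a standard `required_providers` block (a generic sketch):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # 5.64.0 is the release noted above as containing the fix.
      version = ">= 5.64.0"
    }
  }
}
```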
Terraform Core Version
1.6.6
AWS Provider Version
5.47.0
Affected Resource(s)
aws_bedrockagent_agent
Expected Behavior
The resource is created or updated successfully.
Actual Behavior
The resource fails to create or update due to the validation error below.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
You'll also need to place the orchestration.txt file in a prompt_templates folder in the same location as the Terraform configuration.
Steps to Reproduce
Debug Output
No response
Panic Output
No response
Important Factoids
My goal is to customize only one of the four prompt configurations, since they are very verbose and would be hard to repeat in Terraform. I'm not sure if it is possible, but it would be great if the resource could use the state for the blocks that are not specified, for consistency.
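If that behavior were supported, the goal above would reduce to declaring only the one overridden prompt type, roughly like this sketch (values taken from the configuration earlier in this issue; whether the provider then back-fills the other three prompt types from state is exactly what this issue asks about):

```hcl
# Sketch: declare only the ORCHESTRATION override and omit the other
# three prompt_configurations blocks entirely.
prompt_override_configuration {
  prompt_configurations {
    base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
    parser_mode          = "DEFAULT"
    prompt_creation_mode = "OVERRIDDEN"
    prompt_state         = "ENABLED"
    prompt_type          = "ORCHESTRATION"
    inference_configuration {
      max_length     = 2048
      stop_sequences = ["$invoke$", "$answer$", "$error$"]
      temperature    = 0
      top_k          = 250
      top_p          = 1
    }
  }
}
```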
References
No response
Would you like to implement a fix?
None