r/aws_msk_cluster - new storage_info arg #24767
Conversation
deprecate ebs_volume_size
considers refactor from deprecated ebs_volume_size to storage_info
accept api err of no updates to ebs volume
deprecate ebs_volume_size and replace with storage_info
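For illustration, the migration this PR enables might look like the following. This is a hedged sketch of an `aws_msk_cluster` configuration; the attribute names follow the schema discussed in this PR, and the volume size is an arbitrary example value:

```hcl
resource "aws_msk_cluster" "example" {
  # ... other required arguments omitted ...

  broker_node_group_info {
    # Deprecated argument:
    # ebs_volume_size = 1000

    # Replacement nested block:
    storage_info {
      ebs_storage_info {
        volume_size = 1000
      }
    }
  }
}
```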
Force-pushed from c0068d0 to daedf01
LGTM 🚀
Thanks for the PR, @GlennChia. It's looking good. I have a few suggestions for updates.
CheckDestroy: testAccCheckClusterDestroy,
Steps: []resource.TestStep{
	{
		Config: testAccClusterBrokerNodeGroupInfoStorageInfoConfig2(rName, original_volume_size, "kafka.m5.4xlarge"),
testAccClusterBrokerNodeGroupInfoStorageInfoConfig1, testAccClusterBrokerNodeGroupInfoStorageInfoConfig2, and testAccClusterBrokerNodeGroupInfoStorageInfoConfig3 should have more descriptive names identifying which parameters are set.
Resolved in 4cc2c44
{
	ResourceName:            resourceName,
	ImportState:             true,
	ImportStateVerify:       true,
	ImportStateVerifyIgnore: []string{
		"current_version",
	},
},
This import isn't needed, since other tests cover importing a configuration using the deprecated broker_node_group_info.ebs_volume_size.
Resolved in 3158ad8
{
	ResourceName:            resourceName,
	ImportState:             true,
	ImportStateVerify:       true,
	ImportStateVerifyIgnore: []string{
		"current_version",
	},
},
This import isn't needed, since other tests cover importing a configuration using broker_node_group_info.storage_info.ebs_storage_info.volume_size.
Resolved in 3158ad8
internal/service/kafka/cluster.go (outdated)
	}
}

if d.HasChange("broker_node_group_info.0.storage_info") {
The way these conditionals are ordered could lead to two UpdateBrokerStorageWithContext requests to the API. The top-level conditional could instead be if d.HasChanges("broker_node_group_info.0.ebs_volume_size", "broker_node_group_info.0.storage_info"). From there you could build the kafka.UpdateBrokerStorageInput based on what parameters are set.
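The single-branch approach suggested here can be sketched as a self-contained Go program. Note the types below are hypothetical stand-ins for the AWS SDK's kafka.BrokerEBSVolumeInfo (kept minimal so the sketch compiles on its own); this illustrates the shape of the logic, not the provider's actual code:

```go
package main

import "fmt"

// BrokerEBSVolumeInfo is a hypothetical stand-in for the AWS SDK type,
// used only to keep this sketch self-contained.
type BrokerEBSVolumeInfo struct {
	KafkaBrokerNodeId     string
	VolumeSizeGB          int64
	ProvisionedThroughput *int64 // nil when not configured
}

// buildTargetEBSVolumeInfo mirrors the review suggestion: always build a
// single volume-info entry, and attach provisioned throughput only when
// that parameter is set, so the API is called at most once.
func buildTargetEBSVolumeInfo(volumeSizeGB int64, throughputMiBps *int64) []*BrokerEBSVolumeInfo {
	info := &BrokerEBSVolumeInfo{
		KafkaBrokerNodeId: "All",
		VolumeSizeGB:      volumeSizeGB,
	}
	if throughputMiBps != nil {
		info.ProvisionedThroughput = throughputMiBps
	}
	return []*BrokerEBSVolumeInfo{info}
}

func main() {
	tp := int64(250)
	withTP := buildTargetEBSVolumeInfo(1000, &tp)
	withoutTP := buildTargetEBSVolumeInfo(1000, nil)
	fmt.Println(len(withTP), *withTP[0].ProvisionedThroughput, withoutTP[0].ProvisionedThroughput == nil)
	// prints: 1 250 true
}
```

The design choice is that the optional parameter decorates one shared struct rather than selecting between two near-duplicate struct literals.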
Resolved in 917a661
internal/service/kafka/cluster.go (outdated)
input := &kafka.UpdateBrokerStorageInput{
	ClusterArn:     aws.String(d.Id()),
	CurrentVersion: aws.String(d.Get("current_version").(string)),
}
if v, ok := d.GetOk("broker_node_group_info.0.storage_info.0.ebs_storage_info.0.provisioned_throughput"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
	input.TargetBrokerEBSVolumeInfo = []*kafka.BrokerEBSVolumeInfo{
		{
			KafkaBrokerNodeId:     aws.String("All"),
			ProvisionedThroughput: expandProvisionedThroughput(v.([]interface{})[0].(map[string]interface{})),
			VolumeSizeGB:          aws.Int64(int64(d.Get("broker_node_group_info.0.storage_info.0.ebs_storage_info.0.volume_size").(int))),
		},
	}
} else {
	input.TargetBrokerEBSVolumeInfo = []*kafka.BrokerEBSVolumeInfo{
		{
			KafkaBrokerNodeId: aws.String("All"),
			VolumeSizeGB:      aws.Int64(int64(d.Get("broker_node_group_info.0.storage_info.0.ebs_storage_info.0.volume_size").(int))),
		},
	}
}
This could be simplified to something like the following:

input := &kafka.UpdateBrokerStorageInput{
	ClusterArn:     aws.String(d.Id()),
	CurrentVersion: aws.String(d.Get("current_version").(string)),
}
ebsVolumeInfo := &kafka.BrokerEBSVolumeInfo{
	KafkaBrokerNodeId: aws.String("All"),
	VolumeSizeGB:      aws.Int64(int64(d.Get("broker_node_group_info.0.storage_info.0.ebs_storage_info.0.volume_size").(int))),
}
if v, ok := d.GetOk("broker_node_group_info.0.storage_info.0.ebs_storage_info.0.provisioned_throughput"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
	ebsVolumeInfo.ProvisionedThroughput = expandProvisionedThroughput(v.([]interface{})[0].(map[string]interface{}))
}
input.TargetBrokerEBSVolumeInfo = []*kafka.BrokerEBSVolumeInfo{ebsVolumeInfo}
Resolved in 9766872
Co-authored-by: Graham Davison <g.m.davison@computer.org>
Force-pushed from 353d2b0 to 9766872
Thanks for the review @gdavison. I also removed the import test in 6e7218a after upgrading the instance type.
Looks good, @GlennChia! 🚀
--- PASS: TestAccKafkaCluster_enhancedMonitoring (1517.86s)
--- PASS: TestAccKafkaCluster_EncryptionInfo_encryptionAtRestKMSKeyARN (1522.11s)
--- PASS: TestAccKafkaCluster_EncryptionInfoEncryptionInTransit_inCluster (1523.78s)
--- PASS: TestAccKafkaCluster_EncryptionInfoEncryptionInTransit_clientBroker (1524.23s)
--- PASS: TestAccKafkaCluster_tags (1525.65s)
--- PASS: TestAccKafkaClusterDataSource_basic (1581.76s)
--- PASS: TestAccKafkaCluster_ClientAuthenticationTLS_certificateAuthorityARNs (1608.87s)
--- PASS: TestAccKafkaCluster_BrokerNodeGroupInfo_storageInfo (1713.09s)
--- PASS: TestAccKafkaCluster_loggingInfo (1758.40s)
--- PASS: TestAccKafkaCluster_openMonitoring (1786.87s)
--- PASS: TestAccKafkaCluster_Info_revision (1947.46s)
--- PASS: TestAccKafkaCluster_numberOfBrokerNodes (2520.16s)
--- PASS: TestAccKafkaCluster_kafkaVersionDowngrade (2993.91s)
--- PASS: TestAccKafkaCluster_basic (1478.53s)
--- PASS: TestAccKafkaCluster_ClientAuthenticationSASL_scram (3024.41s)
--- PASS: TestAccKafkaCluster_ClientAuthenticationSASL_iam (3170.76s)
--- PASS: TestAccKafkaCluster_BrokerNodeGroupInfo_ebsVolumeSize (1765.18s)
--- PASS: TestAccKafkaCluster_ClientAuthenticationTLS_initiallyNoAuthentication (3449.99s)
--- PASS: TestAccKafkaCluster_BrokerNodeGroupInfo_modifyEbsVolumeSizeToStorageInfo (4556.84s)
--- PASS: TestAccKafkaCluster_BrokerNodeGroupInfo_publicAccessSaslIam (5109.58s)
--- PASS: TestAccKafkaCluster_kafkaVersionUpgradeWithInfo (5676.89s)
--- PASS: TestAccKafkaCluster_BrokerNodeGroupInfo_instanceType (6346.58s)
--- PASS: TestAccKafkaCluster_kafkaVersionUpgrade (5588.27s)
This functionality has been released in v4.15.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
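To pick up this change, a Terraform configuration can pin the provider to at least that release. A minimal sketch of the version constraint:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.15.0"
    }
  }
}
```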
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Closes #24464