From 7d2c47c850c7bed76f534dd78c24a1273676a03c Mon Sep 17 00:00:00 2001
From: aws-sdk-python-automation This action returns details for a specified legal hold. The details are the body of a legal hold in JSON format, in addition to metadata. This operation returns the metadata and details specific to the backup index associated with the specified recovery point. Returns a list of all frameworks for an Amazon Web Services account and Amazon Web Services Region. This operation returns a list of recovery points that have an associated index, belonging to the specified account. Optional parameters you can include are: MaxResults; NextToken; SourceResourceArns; CreatedBefore; CreatedAfter; and ResourceType. Updates whether the Amazon Web Services account is opted in to cross-account backup. Returns an error if the account is not an Organizations management account. Use the DescribeGlobalSettings API to determine the current settings. This operation updates the settings of a recovery point index. Required: BackupVaultName, RecoveryPointArn, and IAMRoleArn The timezone in which the schedule expression is set. By default, ScheduleExpressions are in UTC. You can modify this to a specified timezone. IndexActions is an array you use to specify how backup data should be indexed. Each BackupRule can have 0 or 1 IndexAction, as each backup can have up to one index associated with it. Within the array is ResourceType. Only one will be accepted for each BackupRule. Specifies a scheduled task used to back up a selection of resources. The timezone in which the schedule expression is set. By default, ScheduleExpressions are in UTC. You can modify this to a specified timezone. There can be up to one IndexAction in each BackupRule, as each backup can have 0 or 1 backup index associated with it. Within the array is ResourceTypes. Only 1 resource type will be accepted for each BackupRule. Valid values: Specifies a scheduled task used to back up a selection of resources. The type of vault in which the described recovery point is stored. 
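The MaxResults/NextToken pagination described above for listing indexed recovery points can be sketched as follows. This is an illustrative sketch only, not SDK code: `FakeBackupClient` is a hypothetical in-memory stand-in for a boto3 `backup` client, and the request/response field names (`NextToken`, `MaxResults`, `IndexedRecoveryPoints`) follow the documentation text but the exact shapes are assumptions.

```python
# Hypothetical sketch of NextToken pagination for ListIndexedRecoveryPoints.
# FakeBackupClient stands in for a boto3 "backup" client; it is NOT the
# real service and exists only to make the pagination loop runnable.

class FakeBackupClient:
    """In-memory stand-in that pages through indexed recovery points."""

    def __init__(self, points, page_size=2):
        self._points = points
        self._page_size = page_size

    def list_indexed_recovery_points(self, NextToken=None, MaxResults=None):
        start = int(NextToken) if NextToken else 0
        size = MaxResults or self._page_size
        page = self._points[start:start + size]
        resp = {"IndexedRecoveryPoints": page}
        if start + size < len(self._points):
            # A NextToken is returned only when more items remain.
            resp["NextToken"] = str(start + size)
        return resp


def collect_indexed_recovery_points(client, **kwargs):
    """Follow NextToken until the service stops returning one."""
    points, token = [], None
    while True:
        if token:
            kwargs["NextToken"] = token
        resp = client.list_indexed_recovery_points(**kwargs)
        points.extend(resp.get("IndexedRecoveryPoints", []))
        token = resp.get("NextToken")
        if not token:
            return points
```

With a real client the same loop applies unchanged; only the client object differs.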
This is the current status for the backup index associated with the specified recovery point. Statuses are: A recovery point with an index that has the status of A string in the form of a detailed message explaining the status of a backup index associated with the recovery point. The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created. Accepted characters include lowercase letters, numbers, and hyphens. An ARN that uniquely identifies a recovery point; for example, An ARN that uniquely identifies a recovery point; for example, An ARN that uniquely identifies the backup vault where the recovery point index is stored. For example, A string of the Amazon Resource Name (ARN) that uniquely identifies the source resource. The date and time that a backup index was created, in Unix format and Coordinated Universal Time (UTC). The value of The date and time that a backup index was deleted, in Unix format and Coordinated Universal Time (UTC). The value of The date and time that a backup index finished creation, in Unix format and Coordinated Universal Time (UTC). The value of This is the current status for the backup index associated with the specified recovery point. Statuses are: A recovery point with an index that has the status of A detailed message explaining the status of a backup index associated with the recovery point. Count of items within the backup index associated with the recovery point. 0 or 1 index action will be accepted for each BackupRule. Valid values: This is an optional array within a BackupRule. IndexAction consists of one ResourceTypes. An ARN that uniquely identifies a recovery point; for example, A string of the Amazon Resource Name (ARN) that uniquely identifies the source resource. This specifies the IAM role ARN used for this operation. 
For example, arn:aws:iam::123456789012:role/S3Access The date and time that a backup was created, in Unix format and Coordinated Universal Time (UTC). The value of The resource type of the indexed recovery point. The date and time that a backup index was created, in Unix format and Coordinated Universal Time (UTC). The value of This is the current status for the backup index associated with the specified recovery point. Statuses are: A recovery point with an index that has the status of A string in the form of a detailed message explaining the status of a backup index associated with the recovery point. An ARN that uniquely identifies the backup vault where the recovery point index is stored. For example, This is a recovery point that has an associated backup index. Only recovery points with a backup index can be included in a search. The next item following a partial list of returned recovery points. For example, if a request is made to return The maximum number of resource list items to be returned. A string of the Amazon Resource Name (ARN) that uniquely identifies the source resource. Returns only indexed recovery points that were created before the specified date. Returns only indexed recovery points that were created after the specified date. Returns a list of indexed recovery points for the specified resource type(s). Accepted values include: Include this parameter to filter the returned list by the indicated statuses. Accepted values: A recovery point with an index that has the status of This is a list of recovery points that have an associated index, belonging to the specified account. The next item following a partial list of returned recovery points. For example, if a request is made to return The type of vault in which the described recovery point is stored. This is the current status for the backup index associated with the specified recovery point. 
Statuses are: A recovery point with an index that has the status of A string in the form of a detailed message explaining the status of a backup index associated with the recovery point. Contains detailed information about the recovery points stored in a backup vault. The type of vault in which the described recovery point is stored. This is the current status for the backup index associated with the specified recovery point. Statuses are: A recovery point with an index that has the status of A string in the form of a detailed message explaining the status of a backup index associated with the recovery point. Contains detailed information about a saved recovery point. The backup option for a selected resource. This option is only available for Windows Volume Shadow Copy Service (VSS) backup jobs. Valid values: Set to Include this parameter to enable index creation if your backup job has a resource type that supports backup indexes. Resource types that support backup indexes include: Index can have 1 of 2 possible values, either To create a backup index for an eligible To delete a backup index, set value to The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created. Accepted characters include lowercase letters, numbers, and hyphens. An ARN that uniquely identifies a recovery point; for example, This specifies the IAM role ARN used for this operation. For example, arn:aws:iam::123456789012:role/S3Access Index can have 1 of 2 possible values, either To create a backup index for an eligible To delete a backup index, set value to The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created. 
An ARN that uniquely identifies a recovery point; for example, This is the current status for the backup index associated with the specified recovery point. Statuses are: A recovery point with an index that has the status of Index can have 1 of 2 possible values, either A value of A value of This operation retrieves metadata of a search job, including its progress. This operation retrieves the metadata of an export job. An export job is an operation that transmits the results of a search job to a specified S3 bucket in a .csv file. An export job allows you to retain results of a search beyond the search job's scheduled retention of 7 days. This operation returns a list of all backups (recovery points) in a paginated format that were included in the search job. If a search does not display an expected backup in the results, you can call this operation to display each backup included in the search. Any backups that were not included because they have a Only recovery points with a backup index that has a status of This operation returns a list of a specified search job. This operation returns a list of search jobs belonging to an account. This operation exports search results of a search job to a specified destination S3 bucket. This operation returns the tags for a resource type. This operation creates a search job which returns recovery points filtered by SearchScope and items filtered by ItemFilters. You can optionally include ClientToken, EncryptionKeyArn, Name, and/or Tags. This operation starts a job to export the results of a search job to a designated S3 bucket. This operation ends a search job. Only a search job with a status of This operation puts tags on the resource you indicate. This operation removes tags from the specified resource. User does not have sufficient access to perform this action. You do not have sufficient access to perform this action. This timestamp includes recovery points only created after the specified time. 
This timestamp includes recovery points only created before the specified time. This filters by recovery points within the CreatedAfter and CreatedBefore timestamps. Updating or deleting a resource can cause an inconsistent state. Identifier of the resource affected. Type of the resource affected. This exception occurs when a conflict with a previous successful operation is detected. This generally occurs when the previous operation did not have time to propagate to the host serving the current request. A retry (with appropriate backoff logic) is the recommended response to this exception. This number is the sum of all backups that have been scanned so far during a search job. This number is the sum of all items that have been scanned so far during a search job. This number is the sum of all items that match the item filters in a search job in progress. This contains information about results retrieved from a search job that may not have completed. You can include 1 to 10 values. If one file path is included, the results will return only items that match the file path. If more than one file path is included, the results will return all items that match any of the file paths. You can include 1 to 10 values. If one is included, the results will return only items that match. If more than one is included, the results will return all items that match any of the included values. You can include 1 to 10 values. If one is included, the results will return only items that match. If more than one is included, the results will return all items that match any of the included values. You can include 1 to 10 values. If one is included, the results will return only items that match. If more than one is included, the results will return all items that match any of the included values. 
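The CreatedAfter/CreatedBefore bounds described above can be sketched as a simple time-window check. This is a hedged illustration, not the service implementation; in particular, whether the boundary instants themselves are included or excluded is an assumption of this sketch (strict comparisons are used here).

```python
# Illustrative sketch of CreatedAfter / CreatedBefore filtering of recovery
# points by creation time. Boundary inclusivity is an assumption.
from datetime import datetime, timezone


def within_window(creation_time, created_after=None, created_before=None):
    """Return True when creation_time falls inside the optional bounds.

    created_after  -> keep only points created after this instant.
    created_before -> keep only points created before this instant.
    Either bound may be omitted (None).
    """
    if created_after is not None and creation_time <= created_after:
        return False
    if created_before is not None and creation_time >= created_before:
        return False
    return True
```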
This contains arrays of objects, which may include CreationTimes time condition objects, FilePaths string objects, LastModificationTimes time condition objects, These are one or more items in the results that match values for the Amazon Resource Name (ARN) of recovery points returned in a search of Amazon EBS backup metadata. These are one or more items in the results that match values for the Amazon Resource Name (ARN) of source resources returned in a search of Amazon EBS backup metadata. The name of the backup vault. These are one or more items in the results that match values for file systems returned in a search of Amazon EBS backup metadata. These are one or more items in the results that match values for file paths returned in a search of Amazon EBS backup metadata. These are one or more items in the results that match values for file sizes returned in a search of Amazon EBS backup metadata. These are one or more items in the results that match values for creation times returned in a search of Amazon EBS backup metadata. These are one or more items in the results that match values for Last Modified Time returned in a search of Amazon EBS backup metadata. These are the items returned in the results of a search of Amazon EBS backup metadata. This is the unique string that identifies a specific export job. This is the unique ARN (Amazon Resource Name) that belongs to the new export job. The status of the export job is one of the following: This is a timestamp of the time the export job was created. This is a timestamp of the time the export job completed. A status message is a string that is returned for an export job. A status message is included for any status other than The unique string that identifies the Amazon Resource Name (ARN) of the specified search job. This is the summary of an export job. This specifies the destination Amazon S3 bucket for the export job. And, if included, it also specifies the destination prefix. 
This contains the export specification object. Required unique string that specifies the search job. Returned name of the specified search job. Returned summary of the specified search job scope, including: TotalBackupsToScanCount, the number of recovery points returned by the search. TotalItemsToScanCount, the number of items returned by the search. Returns numbers representing BackupsScannedCount, ItemsScanned, and ItemsMatched. A status message will be returned for either a search job with a status of For example, a message may say that a search contained recovery points unable to be scanned because of a permissions issue. The encryption key for the specified search job. Example: The date and time that a search job completed, in Unix format and Coordinated Universal Time (UTC). The value of The current status of the specified search job. A search job may have one of the following statuses: The search scope is all backup properties input into a search. Item Filters represent all input item properties specified when the search was created. The date and time that a search job was created, in Unix format and Coordinated Universal Time (UTC). The value of The unique string that identifies the specified search job. The unique string that identifies the Amazon Resource Name (ARN) of the specified search job. This is the unique string that identifies a specific export job. Required for this operation. This is the unique string that identifies the specified export job. The unique Amazon Resource Name (ARN) that uniquely identifies the export job. This is the current status of the export job. The date and time that an export job was created, in Unix format and Coordinated Universal Time (UTC). The value of The date and time that an export job completed, in Unix format and Coordinated Universal Time (UTC). 
The value of A status message is a string that is returned for a search job with a status of The export specification consists of the destination S3 bucket to which the search results were exported, along with the destination prefix. The unique string that identifies the Amazon Resource Name (ARN) of the specified search job. Unexpected error during processing of request. Retry the call after number of seconds. An internal server error occurred. Retry your request. This array can contain CreationTimes, ETags, ObjectKeys, Sizes, or VersionIds objects. This array can contain CreationTimes, FilePaths, LastModificationTimes, or Sizes objects. Item Filters represent all input item properties specified when the search was created. Contains either EBSItemFilters or S3ItemFilters The unique string that specifies the search job. The next item following a partial list of returned backups included in a search job. For example, if a request is made to return The maximum number of resource list items to be returned. The recovery points returned in the results of a search job The next item following a partial list of returned backups included in a search job. For example, if a request is made to return The unique string that specifies the search job. The next item following a partial list of returned search job results. For example, if a request is made to return The maximum number of resource list items to be returned. The results consist of either EBSResultItem or S3ResultItem. The next item following a partial list of search job results. For example, if a request is made to return Include this parameter to filter list by search job status. The next item following a partial list of returned search jobs. For example, if a request is made to return The maximum number of resource list items to be returned. The search jobs among the list, with details of the returned search jobs. The next item following a partial list of returned backups included in a search job. 
For example, if a request is made to return The search jobs to be included in the export job can be filtered by including this parameter. The unique string that specifies the search job. The next item following a partial list of returned backups included in a search job. For example, if a request is made to return The maximum number of resource list items to be returned. The operation returns the included export jobs. The next item following a partial list of returned backups included in a search job. For example, if a request is made to return The Amazon Resource Name (ARN) that uniquely identifies the resource. List of tags returned by the operation. The value of an item included in one of the search item filters. A string that defines what values will be returned. If this is included, avoid combinations of operators that will return all possible values. For example, including both The long condition contains a Request references a resource which does not exist. Hypothetical identifier of the resource affected. Hypothetical type of the resource affected. The resource was not found for this request. Confirm that the resource information, such as the ARN or type, is correct and exists, then retry the request. These are items returned in the search results of an Amazon S3 search. These are items returned in the search results of an Amazon EBS search. This is an object representing the item returned in the results of a search for a specific resource type. This specifies the destination Amazon S3 bucket for the export job. This specifies the prefix for the destination Amazon S3 bucket for the export job. This specification contains a required string of the destination bucket; optionally, you can include the destination prefix. You can include 1 to 10 values. If one value is included, the results will return only items that match the value. If more than one value is included, the results will return all items that match any of the values. You can include 1 to 10 values. 
If one value is included, the results will return only items that match the value. If more than one value is included, the results will return all items that match any of the values. You can include 1 to 10 values. If one value is included, the results will return only items that match the value. If more than one value is included, the results will return all items that match any of the values. You can include 1 to 10 values. If one value is included, the results will return only items that match the value. If more than one value is included, the results will return all items that match any of the values. You can include 1 to 10 values. If one value is included, the results will return only items that match the value. If more than one value is included, the results will return all items that match any of the values. This contains arrays of objects, which may include ObjectKeys, Sizes, CreationTimes, VersionIds, and/or Etags. These are items in the returned results that match recovery point Amazon Resource Names (ARN) input during a search of Amazon S3 backup metadata. These are items in the returned results that match source Amazon Resource Names (ARN) input during a search of Amazon S3 backup metadata. The name of the backup vault. This is one or more items returned in the results of a search of Amazon S3 backup metadata that match the values input for object key. These are items in the returned results that match values for object size(s) input during a search of Amazon S3 backup metadata. These are one or more items in the returned results that match values for item creation time input during a search of Amazon S3 backup metadata. These are one or more items in the returned results that match values for ETags input during a search of Amazon S3 backup metadata. These are one or more items in the returned results that match values for version IDs input during a search of Amazon S3 backup metadata. 
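The 1-to-10-value rule repeated above amounts to OR semantics within a single item filter. A minimal sketch of that matching logic follows; it is an illustration, not the service implementation, and the assumption that different filter properties are combined with AND is this sketch's own.

```python
# Sketch of item-filter matching: within one property, any listed value
# matches (OR); across properties, every filter must match (AND — an
# assumption of this sketch). Simple equality stands in for the service's
# richer operator conditions.


def matches_filters(item, filters):
    """item: dict of property name -> value (e.g. {'ObjectKey': ...}).
    filters: dict of property name -> list of 1 to 10 accepted values."""
    for prop, values in filters.items():
        if item.get(prop) not in values:
            return False
    return True
```

For example, a filter with two ObjectKeys returns items matching either key, while adding a VersionIds filter further narrows the results.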
These are the items returned in the results of a search of Amazon S3 backup metadata. This is the status of the search job backup result. This is the status message included with the results. This is the resource type of the search. The Amazon Resource Name (ARN) that uniquely identifies the backup resources. The Amazon Resource Name (ARN) that uniquely identifies the source resources. This is the creation time of the backup index. This is the creation time of the backup (recovery point). This contains the information about recovery points returned in results of a search job. The unique string that specifies the search job. The unique string that identifies the Amazon Resource Name (ARN) of the specified search job. This is the name of the search job. This is the status of the search job. This is the creation time of the search job. This is the completion time of the search job. Returned summary of the specified search job scope, including: TotalBackupsToScanCount, the number of recovery points returned by the search. TotalItemsToScanCount, the number of items returned by the search. A status message will be returned for either a search job with a status of For example, a message may say that a search contained recovery points unable to be scanned because of a permissions issue. This is information pertaining to a search job. The resource types included in a search. Eligible resource types include S3 and EBS. This is the time a backup resource was created. The Amazon Resource Name (ARN) that uniquely identifies the source resources. The Amazon Resource Name (ARN) that uniquely identifies the backup resources. These are one or more tags on the backup (recovery point). The search scope is all backup properties input into a search. This is the count of the total number of backups that will be scanned in a search. This is the count of the total number of items that will be scanned in a search. 
The summary of the specified search job scope, including: TotalBackupsToScanCount, the number of recovery points returned by the search. TotalItemsToScanCount, the number of items returned by the search. This request was not successful because a service quota was exceeded. Identifier of the resource. Type of resource. This is the code unique to the originating service with the quota. This is the code specific to the quota type. The request was denied due to exceeding the permitted quota limits. List of tags returned by the operation. Include alphanumeric characters to create a name for this search job. The encryption key for the specified search job. Include this parameter to allow multiple identical calls for idempotency. A client token is valid for 8 hours after the first request that uses it is completed. After this time, any request with the same token is treated as a new request. This object can contain BackupResourceTypes, BackupResourceArns, BackupResourceCreationTime, BackupResourceTags, and SourceResourceArns to filter the recovery points returned by the search job. Item Filters represent all input item properties specified when the search was created. Contains either EBSItemFilters or S3ItemFilters The unique string that identifies the Amazon Resource Name (ARN) of the specified search job. The date and time that a job was created, in Unix format and Coordinated Universal Time (UTC). The value of The unique string that specifies the search job. The unique string that specifies the search job. This specification contains a required string of the destination bucket; optionally, you can include the destination prefix. Include this parameter to allow multiple identical calls for idempotency. A client token is valid for 8 hours after the first request that uses it is completed. After this time, any request with the same token is treated as a new request. Optional tags to include. 
A tag is a key-value pair you can use to manage, filter, and search for your resources. Allowed characters include UTF-8 letters, numbers, spaces, and the following characters: + - = . _ : /. This parameter specifies the role ARN used to start the search results export jobs. This is the unique ARN (Amazon Resource Name) that belongs to the new export job. This is the unique identifier that specifies the new export job. The unique string that specifies the search job. The value of the string. A string that defines what values will be returned. If this is included, avoid combinations of operators that will return all possible values. For example, including both This contains the value of the string and can contain one or more operators. The Amazon Resource Name (ARN) that uniquely identifies the resource. This is the resource that will have the indicated tags. Required tags to include. A tag is a key-value pair you can use to manage, filter, and search for your resources. Allowed characters include UTF-8 letters, numbers, spaces, and the following characters: + - = . _ : /. Request was unsuccessful due to request throttling. This is the code unique to the originating service. This is the code unique to the originating service with the quota. Retry the call after number of seconds. The request was denied due to request throttling. This is the timestamp value of the time condition. A string that defines what values will be returned. If this is included, avoid combinations of operators that will return all possible values. For example, including both A time condition denotes a creation time, last modification time, or other time. The Amazon Resource Name (ARN) that uniquely identifies the resource where you want to remove tags. This required parameter contains the tag keys you want to remove from the source. The input fails to satisfy the constraints specified by an Amazon service. The input fails to satisfy the constraints specified by a service. 
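The operator warning repeated above (combining EQUALS_TO and NOT_EQUALS_TO on the same value returns everything) can be demonstrated with a small sketch. EQUALS_TO and NOT_EQUALS_TO appear in the documentation text; the other operator names and the OR combination of conditions are assumptions of this illustration.

```python
# Sketch of evaluating operator conditions on a numeric (long) value.
# EQUALS_TO / NOT_EQUALS_TO come from the documentation above; the
# remaining operators and the OR semantics are assumptions.


def evaluate(value, operator, target):
    """Apply one operator condition to a value."""
    ops = {
        "EQUALS_TO": value == target,
        "NOT_EQUALS_TO": value != target,
        "GREATER_THAN_EQUAL_TO": value >= target,   # assumed name
        "LESS_THAN_EQUAL_TO": value <= target,       # assumed name
    }
    return ops[operator]


def any_condition_matches(value, conditions):
    """conditions: list of (operator, target) pairs, OR'd together."""
    return any(evaluate(value, op, t) for op, t in conditions)
```

As the documentation warns, `[("EQUALS_TO", 4), ("NOT_EQUALS_TO", 4)]` matches every possible value, so such combinations defeat the purpose of filtering.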
Backup Search is the recovery point and item level search for Backup. For additional information, see: DescribeGlobalSettings
API to determine the current settings.
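The StartSearchJob pieces described earlier (SearchScope, ItemFilters, plus optional ClientToken, Name, and Tags) can be assembled roughly as follows. This is a hedged sketch of the request body only: the field names follow the documentation text, but the exact shapes are assumptions, and the example ARN is hypothetical.

```python
# Hedged sketch of assembling a StartSearchJob request body. Field names
# follow the documentation above; exact request shapes are assumptions.


def build_start_search_job_request(resource_types, source_resource_arns,
                                   name=None, client_token=None):
    request = {
        "SearchScope": {
            "BackupResourceTypes": resource_types,       # e.g. ["S3", "EBS"]
            "SourceResourceArns": source_resource_arns,
        },
    }
    if name:
        request["Name"] = name
    if client_token:
        # Per the documentation, a client token stays valid for 8 hours
        # after the first request using it completes (idempotency).
        request["ClientToken"] = client_token
    return request
```

A real call would pass these keyword arguments to a boto3 backupsearch client rather than building a bare dict.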
"
}
},
"documentation":"EBS
for Amazon Elastic Block StoreS3
for Amazon Simple Storage Service (Amazon S3)PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault
.CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.
"
+ }
+ },
+ "documentation":"EBS
for Amazon Elastic Block StoreS3
for Amazon Simple Storage Service (Amazon S3)arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
"
+ },
+ "IndexCreationDate":{
+ "shape":"timestamp",
+ "documentation":"EBS
for Amazon Elastic Block StoreS3
for Amazon Simple Storage Service (Amazon S3)CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault
.MaxResults
number of indexed recovery points, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.
",
+ "location":"querystring",
+ "locationName":"resourceType"
+ },
+ "IndexStatus":{
+ "shape":"IndexStatus",
+ "documentation":"EBS
for Amazon Elastic Block StoreS3
for Amazon Simple Storage Service (Amazon S3)PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.MaxResults
number of indexed recovery points, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.\"WindowsVSS\":\"enabled\"
to enable the WindowsVSS
backup option and create a Windows VSS backup. Set to \"WindowsVSS\":\"disabled\"
to create a regular backup. The WindowsVSS
option is not enabled by default.
EBS
for Amazon Elastic Block StoreS3
for Amazon Simple Storage Service (Amazon S3)ENABLED
or DISABLED
.ACTIVE
recovery point that does not yet have a backup index, set value to ENABLED
.DISABLED
.arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.ENABLED
or DISABLED
.ACTIVE
recovery point that does not yet have a backup index, set value to ENABLED
.DISABLED
.arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.PENDING
| ACTIVE
| FAILED
| DELETING
ACTIVE
can be included in a search.ENABLED
or DISABLED
.ENABLED
means a backup index for an eligible ACTIVE
recovery point has been created.DISABLED
means a backup index was deleted.FAILED
status from a permissions issue will be displayed, along with a status message.ACTIVE
will be included in search results. If the index has any other status, its status will be displayed along with a status message.RUNNING
can be stopped.CREATED
; RUNNING
; FAILED
; or COMPLETED
.COMPLETED
without issues.
"
+ },
+ "CurrentSearchProgress":{
+ "shape":"CurrentSearchProgress",
+ "documentation":"ERRORED
or a status of COMPLETED
jobs with issues.arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
.CompletionTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.RUNNING
; COMPLETED
; STOPPED
; FAILED
; TIMED_OUT
; or EXPIRED
.CompletionTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.CreationTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.CreationTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.FAILED
, along with steps to remedy and retry the operation.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of search job results, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.MaxResults
number of backups, NextToken
allows you to return more items in your list starting at the location pointed to by the next token.EQUALS_TO
and NOT_EQUALS_TO
with a value of 4
will return all values.Value
and can optionally contain an Operator
.
"
+ },
+ "StatusMessage":{
+ "shape":"String",
+ "documentation":"ERRORED
or a status of COMPLETED
jobs with issues.
"
+ },
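The CompletionTime and CreationTime values described above are Unix timestamps in UTC, accurate to milliseconds. A quick plain-Python check of the documented example value (no AWS calls involved):

```python
from datetime import datetime, timezone

# The documentation's example value: 1516925490.087 seconds since the epoch.
ts = 1516925490.087
dt = datetime.fromtimestamp(ts, tz=timezone.utc)

# Friday, January 26, 2018 12:11:30.087 AM UTC
print(dt.strftime("%A, %B %d, %Y %H:%M:%S"))
```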
+ "ServiceQuotaExceededException":{
+ "type":"structure",
+ "required":[
+ "message",
+ "resourceId",
+ "resourceType",
+ "serviceCode",
+ "quotaCode"
+ ],
+ "members":{
+ "message":{
+ "shape":"String",
+ "documentation":"CompletionTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.EQUALS_TO
and NOT_EQUALS_TO
with a value of 4
will return all values.EQUALS_TO
and NOT_EQUALS_TO
with a value of 4
will return all values.
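The MaxResults/NextToken contract described above is the usual token-based pagination loop. A generic sketch (the `call_api` callable is a stand-in for any of the List* operations, not a real client method):

```python
# Generic NextToken pagination loop per the documented contract.
def list_all(call_api, **params):
    items, token = [], None
    while True:
        page = call_api(**params, **({"NextToken": token} if token else {}))
        items.extend(page.get("Items", []))
        token = page.get("NextToken")
        if not token:
            return items

# Fake two-page API for demonstration:
pages = [{"Items": [1, 2], "NextToken": "t1"}, {"Items": [3]}]
def fake_api(**params):
    return pages[1] if params.get("NextToken") == "t1" else pages[0]

print(list_all(fake_api))  # [1, 2, 3]
```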
The properties for a task definition that describes the container and volume definitions of an Amazon ECS task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.
" }, + "EksAnnotationsMap":{ + "type":"map", + "key":{"shape":"String"}, + "value":{"shape":"String"} + }, "EksAttemptContainerDetail":{ "type":"structure", "members":{ @@ -2037,6 +2042,10 @@ "shape":"String", "documentation":"The path on the container where the volume is mounted.
" }, + "subPath":{ + "shape":"String", + "documentation":"A sub-path inside the referenced volume instead of its root.
" + }, "readOnly":{ "shape":"Boolean", "documentation":"If this value is true
, the container has read-only access to the volume. Otherwise, the container can write to the volume. The default value is false
.
Key-value pairs used to identify, sort, and organize Kubernetes (kube) resources. Can contain up to 63 uppercase letters, lowercase letters, numbers, hyphens (-), and underscores (_). Labels can be added or modified at any time. Each resource can have multiple labels, but each key must be unique for a given object.
" + }, + "annotations":{ + "shape":"EksAnnotationsMap", + "documentation":"Key-value pairs used to attach arbitrary, non-identifying metadata to Kubernetes objects. Valid annotation keys have two segments: an optional prefix and a name, separated by a slash (/).
The prefix is optional and must be 253 characters or less. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), and it must end with a slash (/).
The name segment is required and must be 63 characters or less. It can include alphanumeric characters ([a-z0-9A-Z]), dashes (-), underscores (_), and dots (.), but must begin and end with an alphanumeric character.
Annotation values must be 255 characters or less.
Annotations can be added or modified at any time. Each resource can have multiple annotations.
" + }, + "namespace":{ + "shape":"String", + "documentation":"The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Batch places Batch Job pods in this namespace. If this field is provided, the value can't be empty or null. It must meet the following requirements:
1-63 characters long
Can't be set to default
Can't start with kube
Must match the following regular expression: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
For more information, see Namespaces in the Kubernetes documentation. This namespace can be different from the kubernetesNamespace
set in the compute environment's EksConfiguration
, but must have identical role-based access control (RBAC) roles as the compute environment's kubernetesNamespace
. For multi-node parallel jobs, the same value must be provided across all the node ranges.
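The annotation-key and namespace rules above can be sketched as small validators. These helpers are illustrative only (they are not part of botocore or the Batch API):

```python
import re

# Namespace rules from the documentation above: 1-63 characters, not
# "default", must not start with "kube", and must match the given pattern.
_NS_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_namespace(ns: str) -> bool:
    if not 1 <= len(ns) <= 63:
        return False
    if ns == "default" or ns.startswith("kube"):
        return False
    return _NS_RE.match(ns) is not None

# Annotation-key rules: optional DNS-subdomain prefix (253 chars or less)
# before a "/", then a required name segment (63 chars or less, alphanumeric
# at both ends, with dashes, underscores, and dots allowed inside).
_NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")

def is_valid_annotation_key(key: str) -> bool:
    prefix, _, name = key.rpartition("/")
    if prefix and len(prefix) > 253:
        return False
    return len(name) <= 63 and _NAME_RE.match(name) is not None

print(is_valid_namespace("batch-jobs"))              # True
print(is_valid_namespace("kube-system"))             # False
print(is_valid_annotation_key("example.com/owner"))  # True
```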
Describes and uniquely identifies Kubernetes resources. For example, the compute environment that a pod runs in or the jobID
for a job running in the pod. For more information, see Understanding Kubernetes Objects in the Kubernetes documentation.
Describes and uniquely identifies Kubernetes resources. For example, the compute environment that a pod runs in or the jobID
for a job running in the pod. For more information, see Understanding Kubernetes Objects in the Kubernetes documentation.
The name of the persistentVolumeClaim
bounded to a persistentVolume
. For more information, see Persistent Volume Claims in the Kubernetes documentation.
An optional boolean value indicating if the mount is read only. Default is false. For more information, see Read Only Mounts in the Kubernetes documentation.
" + } + }, + "documentation":"A persistentVolumeClaim
volume is used to mount a PersistentVolume into a Pod. PersistentVolumeClaims are a way for users to \"claim\" durable storage without knowing the details of the particular cloud environment. See the information about PersistentVolumes in the Kubernetes documentation.
Specifies the configuration of a Kubernetes secret
volume. For more information, see secret in the Kubernetes documentation.
Specifies the configuration of a Kubernetes persistentVolumeClaim
bounded to a persistentVolume
. For more information, see Persistent Volume Claims in the Kubernetes documentation.
Specifies an Amazon EKS volume for a job definition.
" diff --git a/botocore/data/cleanroomsml/2023-09-06/service-2.json b/botocore/data/cleanroomsml/2023-09-06/service-2.json index f0878885bc..d9804c45da 100644 --- a/botocore/data/cleanroomsml/2023-09-06/service-2.json +++ b/botocore/data/cleanroomsml/2023-09-06/service-2.json @@ -1094,7 +1094,8 @@ "sqlParameters":{ "shape":"ProtectedQuerySQLParameters", "documentation":"The protected SQL query parameters.
" - } + }, + "sqlComputeConfiguration":{"shape":"ComputeConfiguration"} }, "documentation":"Defines the Amazon S3 bucket where the seed audience for the generating audience is stored.
" }, @@ -5419,7 +5420,7 @@ }, "dataSource":{ "shape":"ModelInferenceDataSource", - "documentation":"Defines he data source that is used for the trained model inference job.
" + "documentation":"Defines the data source that is used for the trained model inference job.
" }, "description":{ "shape":"ResourceDescription", diff --git a/botocore/data/cloudfront/2020-05-31/service-2.json b/botocore/data/cloudfront/2020-05-31/service-2.json index 6f29f51d32..450b373251 100644 --- a/botocore/data/cloudfront/2020-05-31/service-2.json +++ b/botocore/data/cloudfront/2020-05-31/service-2.json @@ -4519,11 +4519,11 @@ }, "OriginReadTimeout":{ "shape":"integer", - "documentation":"Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the origin response timeout. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.
For more information, see Origin Response Timeout in the Amazon CloudFront Developer Guide.
" + "documentation":"Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the origin response timeout. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.
For more information, see Response timeout (custom origins only) in the Amazon CloudFront Developer Guide.
" }, "OriginKeepaliveTimeout":{ "shape":"integer", - "documentation":"Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.
For more information, see Origin Keep-alive Timeout in the Amazon CloudFront Developer Guide.
" + "documentation":"Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.
For more information, see Keep-alive timeout (custom origins only) in the Amazon CloudFront Developer Guide.
" } }, "documentation":"A custom origin. A custom origin is any origin that is not an Amazon S3 bucket, with one exception. An Amazon S3 bucket that is configured with static website hosting is a custom origin.
" @@ -5113,7 +5113,7 @@ }, "DefaultRootObject":{ "shape":"string", - "documentation":"The object that you want CloudFront to request from your origin (for example, index.html
) when a viewer requests the root URL for your distribution (https://www.example.com
) instead of an object in your distribution (https://www.example.com/product-description.html
). Specifying a default root object avoids exposing the contents of your distribution.
Specify only the object name, for example, index.html
. Don't add a /
before the object name.
If you don't want to specify a default root object when you create a distribution, include an empty DefaultRootObject
element.
To delete the default root object from an existing distribution, update the distribution configuration and include an empty DefaultRootObject
element.
To replace the default root object, update the distribution configuration and specify the new object.
For more information about the default root object, see Creating a Default Root Object in the Amazon CloudFront Developer Guide.
" + "documentation":"When a viewer requests the root URL for your distribution, the default root object is the object that you want CloudFront to request from your origin. For example, if your root URL is https://www.example.com
, you can specify CloudFront to return the index.html
file as the default root object. You can specify a default root object so that viewers see a specific file or object, instead of another object in your distribution (for example, https://www.example.com/product-description.html
). A default root object avoids exposing the contents of your distribution.
You can specify the object name or a path to the object name (for example, index.html
or exampleFolderName/index.html
). Your string can't begin with a forward slash (/
). Only specify the object name or the path to the object.
If you don't want to specify a default root object when you create a distribution, include an empty DefaultRootObject
element.
To delete the default root object from an existing distribution, update the distribution configuration and include an empty DefaultRootObject
element.
To replace the default root object, update the distribution configuration and specify the new object.
For more information about the default root object, see Specify a default root object in the Amazon CloudFront Developer Guide.
" }, "Origins":{ "shape":"Origins", @@ -12865,6 +12865,14 @@ "VpcOriginId":{ "shape":"string", "documentation":"The VPC origin ID.
" + }, + "OriginReadTimeout":{ + "shape":"integer", + "documentation":"Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the origin response timeout. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.
For more information, see Response timeout (custom origins only) in the Amazon CloudFront Developer Guide.
" + }, + "OriginKeepaliveTimeout":{ + "shape":"integer", + "documentation":"Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.
For more information, see Keep-alive timeout (custom origins only) in the Amazon CloudFront Developer Guide.
" } }, "documentation":"An Amazon CloudFront VPC origin configuration.
" diff --git a/botocore/data/codepipeline/2015-07-09/service-2.json b/botocore/data/codepipeline/2015-07-09/service-2.json index 9ae1256b9b..f3b79d28d2 100644 --- a/botocore/data/codepipeline/2015-07-09/service-2.json +++ b/botocore/data/codepipeline/2015-07-09/service-2.json @@ -342,7 +342,7 @@ {"shape":"ValidationException"}, {"shape":"InvalidNextTokenException"} ], - "documentation":"Lists the rules for the condition.
" + "documentation":"Lists the rules for the condition. For more information about conditions, see Stage conditions. For more information about rules, see the CodePipeline rule reference.
" }, "ListTagsForResource":{ "name":"ListTagsForResource", @@ -1394,7 +1394,7 @@ "members":{ "category":{ "shape":"ActionCategory", - "documentation":"A category defines what kind of action can be taken in the stage, and constrains the provider type for the action. Valid categories are limited to one of the following values.
Source
Build
Test
Deploy
Invoke
Approval
A category defines what kind of action can be taken in the stage, and constrains the provider type for the action. Valid categories are limited to one of the following values.
Source
Build
Test
Deploy
Invoke
Approval
Compute
The rules that make up the condition.
" } }, - "documentation":"The condition for the stage. A condition is made up of the rules and the result for the condition.
" + "documentation":"The condition for the stage. A condition is made up of the rules and the result for the condition. For more information about conditions, see Stage conditions. For more information about rules, see the CodePipeline rule reference.
" }, "ConditionExecution":{ "type":"structure", @@ -2375,7 +2375,7 @@ "members":{ "category":{ "shape":"ActionCategory", - "documentation":"Defines what kind of action can be taken in the stage. The following are the valid values:
Source
Build
Test
Deploy
Approval
Invoke
Defines what kind of action can be taken in the stage. The following are the valid values:
Source
Build
Test
Deploy
Approval
Invoke
Compute
The name of the rule that is created for the condition, such as CheckAllResults.
" + "documentation":"The name of the rule that is created for the condition, such as VariableCheck
.
The action configuration fields for the rule.
" }, + "commands":{ + "shape":"CommandList", + "documentation":"The shell commands to run with your commands rule in CodePipeline. All commands are supported except multi-line formats. While CodeBuild logs and permissions are used, you do not need to create any resources in CodeBuild.
Using compute time for this action will incur separate charges in CodeBuild.
The input artifacts fields for the rule, such as specifying an input file for the rule.
" @@ -4453,7 +4457,7 @@ "documentation":"The action timeout for the rule.
" } }, - "documentation":"Represents information about the rule to be created for an associated condition. An example would be creating a new rule for an entry condition, such as a rule that checks for a test result before allowing the run to enter the deployment stage.
" + "documentation":"Represents information about the rule to be created for an associated condition. An example would be creating a new rule for an entry condition, such as a rule that checks for a test result before allowing the run to enter the deployment stage. For more information about conditions, see Stage conditions. For more information about rules, see the CodePipeline rule reference.
" }, "RuleDeclarationList":{ "type":"list", diff --git a/botocore/data/ecs/2014-11-13/service-2.json b/botocore/data/ecs/2014-11-13/service-2.json index 8f5a1fdf9a..71db4e50d2 100644 --- a/botocore/data/ecs/2014-11-13/service-2.json +++ b/botocore/data/ecs/2014-11-13/service-2.json @@ -2646,7 +2646,7 @@ }, "maximumPercent":{ "shape":"BoxedInteger", - "documentation":"If a service is using the rolling update (ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state.
You can't specify a custom maximumPercent
value for a service that uses either the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types and has tasks that use the EC2 launch type.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
" + "documentation":"If a service is using the rolling update (ECS
) deployment type, the maximumPercent
parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desiredCount
(rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA
service scheduler and has a desiredCount
of four tasks and a maximumPercent
value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent
value for a service using the REPLICA
service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state.
You can't specify a custom maximumPercent
value for a service that uses either the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types and has tasks that use the EC2 launch type.
If the service uses either the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
The cluster that hosts the service. This can either be the cluster name or ARN. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performanceIf you don't specify a cluster, default
is used.
The cluster that hosts the service. This can either be the cluster name or ARN. Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. If you don't specify a cluster, default
is used.
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs
log driver to route logs to Amazon CloudWatch include the following:
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false
.
Your IAM policy must include the logs:CreateLogGroup
permission before you attempt to use awslogs-create-group
.
Required: Yes
Specify the Amazon Web Services Region that the awslogs
log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
Required: Yes
Make sure to specify a log group that the awslogs
log driver sends its log streams to.
Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.
Use the awslogs-stream-prefix
option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id
.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
Required: No
This option defines a multiline start pattern in Python strftime
format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format
and awslogs-multiline-pattern
options.
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format
is also configured.
You cannot configure both the awslogs-datetime-format
and awslogs-multiline-pattern
options.
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
Required: No
Valid values: non-blocking
| blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking
mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout
and stderr
streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking
mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size
option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs
container log driver.
Required: No
Default value: 1m
When non-blocking
mode is used, the max-buffer-size
log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk
log router, you need to specify a splunk-token
and a splunk-url
.
When you use the awsfirelens
log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit
option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens
to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region
and a name for the log stream with delivery_stream
.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region
and a data stream name with stream
.
When you export logs to Amazon OpenSearch Service, you can specify options like Name
, Host
(OpenSearch Service endpoint without protocol), Port
, Index
, Type
, Aws_auth
, Aws_region
, Suppress_Type_Name
, and tls
.
When you export logs to Amazon S3, you can specify the bucket using the bucket
option. You can also specify region
, total_file_size
, upload_timeout
, and use_put_object
as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs
log driver to route logs to Amazon CloudWatch include the following:
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false
.
Your IAM policy must include the logs:CreateLogGroup
permission before you attempt to use awslogs-create-group
.
Required: Yes
Specify the Amazon Web Services Region that the awslogs
log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
Required: Yes
Make sure to specify a log group that the awslogs
log driver sends its log streams to.
Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.
Use the awslogs-stream-prefix
option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id
.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
Required: No
This option defines a multiline start pattern in Python strftime
format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format
and awslogs-multiline-pattern
options.
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format
is also configured.
You cannot configure both the awslogs-datetime-format
and awslogs-multiline-pattern
options.
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
Required: No
Valid values: non-blocking
| blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking
mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout
and stderr
streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking
mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size
option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs
container log driver.
Required: No
Default value: 1m
When non-blocking
mode is used, the max-buffer-size
log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk
log router, you need to specify a splunk-token
and a splunk-url
.
When you use the awsfirelens
log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit
option to limit the number of events that are buffered in memory before they are sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available to the buffer inside Docker.
Other options you can specify when using awsfirelens
to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region
and a name for the log stream with delivery_stream
.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region
and a data stream name with stream
.
When you export logs to Amazon OpenSearch Service, you can specify options like Name
, Host
(OpenSearch Service endpoint without protocol), Port
, Index
, Type
, Aws_auth
, Aws_region
, Suppress_Type_Name
, and tls
. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket
option. You can also specify region
, total_file_size
, upload_timeout
, and use_put_object
as options.
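The per-destination options listed above can be sketched as awsfirelens logConfiguration blocks (a hypothetical illustration; all resource names, regions, and values are placeholders):

```python
# Hedged sketch: awsfirelens logConfiguration options for two of the
# destinations described above. All resource names, regions, and option
# values are hypothetical placeholders.
firehose_config = {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",
        "region": "us-east-1",                      # Amazon Web Services Region
        "delivery_stream": "my-delivery-stream",    # Firehose stream name
    },
}

s3_config = {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "s3",
        "region": "us-east-1",
        "bucket": "my-log-bucket",
        "total_file_size": "1M",
        "upload_timeout": "1m",
        "use_put_object": "On",
    },
}
```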
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The operating system that your task definitions run on. A platform family is specified only for tasks using the Fargate launch type.
" + }, + "enableFaultInjection":{ + "shape":"BoxedBoolean", + "documentation":"Enables fault injection when you register your task definition and allows for fault injection requests to be accepted from the task's containers. The default value is false
.
The ephemeral storage settings to use for tasks run with the task definition.
" + }, + "enableFaultInjection":{ + "shape":"BoxedBoolean", + "documentation":"Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is false
.
The details of a task definition that describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.
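The new enableFaultInjection flag above is a boolean on the task definition. A hedged sketch of passing it when registering a task definition (family and container values are hypothetical; only the request payload is assembled here, no API call is made):

```python
# Hedged sketch: a RegisterTaskDefinition payload that opts the task's
# containers in to fault injection requests. Family and container values
# are hypothetical placeholders.
register_kwargs = {
    "family": "chaos-test",
    "containerDefinitions": [
        {"name": "app", "image": "public.ecr.aws/docker/library/busybox:latest"}
    ],
    "enableFaultInjection": True,  # default is False
}
# e.g. boto3.client("ecs").register_task_definition(**register_kwargs)
```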
" diff --git a/botocore/data/m2/2021-04-28/service-2.json b/botocore/data/m2/2021-04-28/service-2.json index f42b9c245d..129a76a80d 100644 --- a/botocore/data/m2/2021-04-28/service-2.json +++ b/botocore/data/m2/2021-04-28/service-2.json @@ -1228,6 +1228,10 @@ "shape":"EntityName", "documentation":"The name of the runtime environment. Must be unique within the account.
" }, + "networkType":{ + "shape":"NetworkType", + "documentation":"The network type required for the runtime environment.
" + }, "preferredMaintenanceWindow":{ "shape":"String50", "documentation":"Configures the maintenance window that you want for the runtime environment. The maintenance window must have the format ddd:hh24:mi-ddd:hh24:mi
and must be less than 24 hours. The following two examples are valid maintenance windows: sun:23:45-mon:00:15
or sat:01:00-sat:03:00
.
If you do not provide a value, a random system-generated value will be assigned.
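The ddd:hh24:mi-ddd:hh24:mi format above can be approximated with a regular expression (a sketch for illustration only; the service performs its own validation, including the "less than 24 hours" rule, which this check does not enforce):

```python
import re

# Hedged sketch: a regex approximating the documented ddd:hh24:mi-ddd:hh24:mi
# maintenance-window format. This does not enforce the under-24-hours rule.
_DAY = r"(?:mon|tue|wed|thu|fri|sat|sun)"
_TIME = r"(?:[01]\d|2[0-3]):[0-5]\d"
WINDOW_RE = re.compile(rf"^{_DAY}:{_TIME}-{_DAY}:{_TIME}$")

def looks_like_maintenance_window(value: str) -> bool:
    """Return True if value matches the documented window format."""
    return WINDOW_RE.match(value) is not None
```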
" @@ -1760,6 +1764,10 @@ "shape":"EntityName", "documentation":"The name of the runtime environment.
" }, + "networkType":{ + "shape":"NetworkType", + "documentation":"The network type supported by the runtime environment.
" + }, "status":{ "shape":"EnvironmentLifecycle", "documentation":"The status of the runtime environment
" @@ -2368,6 +2376,10 @@ "shape":"EntityName", "documentation":"The name of the runtime environment. Must be unique within the account.
" }, + "networkType":{ + "shape":"NetworkType", + "documentation":"The network type supported by the runtime environment.
" + }, "pendingMaintenance":{ "shape":"PendingMaintenance", "documentation":"Indicates the pending maintenance scheduled on this environment.
" @@ -3036,6 +3048,13 @@ "max":2000, "min":1 }, + "NetworkType":{ + "type":"string", + "enum":[ + "ipv4", + "dual" + ] + }, "NextToken":{ "type":"string", "pattern":"^\\S{1,2000}$" diff --git a/botocore/data/synthetics/2017-10-11/service-2.json b/botocore/data/synthetics/2017-10-11/service-2.json index e241625299..dc279bb736 100644 --- a/botocore/data/synthetics/2017-10-11/service-2.json +++ b/botocore/data/synthetics/2017-10-11/service-2.json @@ -513,7 +513,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:canary:[0-9a-z_\\-]{1,255}" + "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:canary:[0-9a-z_\\-]{1,255}" }, "CanaryCodeInput":{ "type":"structure", @@ -1093,7 +1093,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_]+(:(\\$LATEST|[a-zA-Z0-9-_]+))?" + "pattern":"arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_]+(:(\\$LATEST|[a-zA-Z0-9-_]+))?" 
}, "GetCanaryRequest":{ "type":"structure", @@ -1204,7 +1204,7 @@ "type":"string", "max":128, "min":1, - "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:group:[0-9a-z]+" + "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:group:[0-9a-z]+" }, "GroupIdentifier":{ "type":"string", @@ -1260,7 +1260,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:(aws[a-zA-Z-]*)?:kms:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:key/[\\w\\-\\/]+" + "pattern":"arn:(aws[a-zA-Z-]*)?:kms:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:key/[\\w\\-\\/]+" }, "ListAssociatedGroupsRequest":{ "type":"structure", @@ -1446,7 +1446,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:(canary|group):[0-9a-z_\\-]+" + "pattern":"arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:(canary|group):[0-9a-z_\\-]+" }, "ResourceList":{ "type":"list", @@ -1755,7 +1755,7 @@ }, "BaseCanaryRunId":{ "shape":"String", - "documentation":"Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are nextrun
to use the screenshots from the next run after this update is made, lastrun
to use the screenshots from the most recent run before this update was made, or the value of Id
in the CanaryRun from any past run of this canary.
Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are nextrun
to use the screenshots from the next run after this update is made, lastrun
to use the screenshots from the most recent run before this update was made, or the value of Id
in the CanaryRun from a run of this canary in the past 31 days. If you specify the Id
of a canary run older than 31 days, the operation returns a 400 validation exception error.
An object that specifies what screenshots to use as a baseline for visual monitoring by this canary. It can optionally also specify parts of the screenshots to ignore during the visual monitoring comparison.
Visual monitoring is supported only on canaries running the syn-puppeteer-node-3.2 runtime or later. For more information, see Visual monitoring and Visual monitoring blueprint
" @@ -1784,6 +1784,10 @@ "SecurityGroupIds":{ "shape":"SecurityGroupIds", "documentation":"The IDs of the security groups for this canary.
" + }, + "Ipv6AllowedForDualStack":{ + "shape":"NullableBoolean", + "documentation":"Set this to true
to allow outbound IPv6 traffic on VPC canaries that are connected to dual-stack subnets. The default is false
If this canary is to test an endpoint in a VPC, this structure contains information about the subnets and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
" @@ -1802,6 +1806,10 @@ "SecurityGroupIds":{ "shape":"SecurityGroupIds", "documentation":"The IDs of the security groups for this canary.
" + }, + "Ipv6AllowedForDualStack":{ + "shape":"NullableBoolean", + "documentation":"Indicates whether this canary allows outbound IPv6 traffic if it is connected to dual-stack subnets.
" } }, "documentation":"If this canary is to test an endpoint in a VPC, this structure contains information about the subnets and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
" diff --git a/tests/functional/endpoint-rules/account/endpoint-tests-1.json b/tests/functional/endpoint-rules/account/endpoint-tests-1.json index ac318cb0f9..640b9eadf6 100644 --- a/tests/functional/endpoint-rules/account/endpoint-tests-1.json +++ b/tests/functional/endpoint-rules/account/endpoint-tests-1.json @@ -1,31 +1,50 @@ { "testCases": [ { - "documentation": "For region aws-global with FIPS disabled and DualStack disabled", + "documentation": "For custom endpoint with region not set and fips disabled", "expect": { "endpoint": { - "properties": { - "authSchemes": [ - { - "name": "sigv4", - "signingName": "account", - "signingRegion": "us-east-1" - } - ] - }, - "url": "https://account.us-east-1.amazonaws.com" + "url": "https://example.com" } }, "params": { - "Region": "aws-global", + "Endpoint": "https://example.com", + "UseFIPS": false + } + }, + { + "documentation": "For custom endpoint with fips enabled", + "expect": { + "error": "Invalid Configuration: FIPS and custom endpoint are not supported" + }, + "params": { + "Endpoint": "https://example.com", + "UseFIPS": true + } + }, + { + "documentation": "For custom endpoint with fips disabled and dualstack enabled", + "expect": { + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" + }, + "params": { + "Endpoint": "https://example.com", "UseFIPS": false, - "UseDualStack": false + "UseDualStack": true } }, { "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-east-1" + } + ] + }, "url": "https://account-fips.us-east-1.api.aws" } }, @@ -39,6 +58,14 @@ "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-east-1" + } + ] + }, "url": "https://account-fips.us-east-1.amazonaws.com" } }, @@ -52,6 +79,14 @@ 
"documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-east-1" + } + ] + }, "url": "https://account.us-east-1.api.aws" } }, @@ -69,7 +104,6 @@ "authSchemes": [ { "name": "sigv4", - "signingName": "account", "signingRegion": "us-east-1" } ] @@ -84,75 +118,76 @@ } }, { - "documentation": "For region aws-cn-global with FIPS disabled and DualStack disabled", + "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingName": "account", "signingRegion": "cn-northwest-1" } ] }, - "url": "https://account.cn-northwest-1.amazonaws.com.cn" + "url": "https://account-fips.cn-northwest-1.api.amazonwebservices.com.cn" } }, "params": { - "Region": "aws-cn-global", - "UseFIPS": false, - "UseDualStack": false - } - }, - { - "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://account-fips.cn-north-1.api.amazonwebservices.com.cn" - } - }, - "params": { - "Region": "cn-north-1", + "Region": "cn-northwest-1", "UseFIPS": true, "UseDualStack": true } }, { - "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled", + "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://account-fips.cn-north-1.amazonaws.com.cn" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "cn-northwest-1" + } + ] + }, + "url": "https://account-fips.cn-northwest-1.amazonaws.com.cn" } }, "params": { - "Region": "cn-north-1", + "Region": "cn-northwest-1", "UseFIPS": true, "UseDualStack": false } }, { - "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled", + "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack 
enabled", "expect": { "endpoint": { - "url": "https://account.cn-north-1.api.amazonwebservices.com.cn" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "cn-northwest-1" + } + ] + }, + "url": "https://account.cn-northwest-1.api.amazonwebservices.com.cn" } }, "params": { - "Region": "cn-north-1", + "Region": "cn-northwest-1", "UseFIPS": false, "UseDualStack": true } }, { - "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled", + "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { "properties": { "authSchemes": [ { "name": "sigv4", - "signingName": "account", "signingRegion": "cn-northwest-1" } ] @@ -161,59 +196,91 @@ } }, "params": { - "Region": "cn-north-1", + "Region": "cn-northwest-1", "UseFIPS": false, "UseDualStack": false } }, { - "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://account-fips.us-gov-east-1.api.aws" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://account-fips.us-gov-west-1.api.aws" } }, "params": { - "Region": "us-gov-east-1", + "Region": "us-gov-west-1", "UseFIPS": true, "UseDualStack": true } }, { - "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://account-fips.us-gov-east-1.amazonaws.com" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://account-fips.us-gov-west-1.amazonaws.com" } }, "params": { - "Region": "us-gov-east-1", + "Region": "us-gov-west-1", "UseFIPS": true, "UseDualStack": false } }, { - "documentation": "For region us-gov-east-1 
with FIPS disabled and DualStack enabled", + "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://account.us-gov-east-1.api.aws" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://account.us-gov-west-1.api.aws" } }, "params": { - "Region": "us-gov-east-1", + "Region": "us-gov-west-1", "UseFIPS": false, "UseDualStack": true } }, { - "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://account.us-gov-east-1.amazonaws.com" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://account.us-gov-west-1.amazonaws.com" } }, "params": { - "Region": "us-gov-east-1", + "Region": "us-gov-west-1", "UseFIPS": false, "UseDualStack": false } @@ -233,6 +300,14 @@ "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-iso-east-1" + } + ] + }, "url": "https://account-fips.us-iso-east-1.c2s.ic.gov" } }, @@ -257,6 +332,14 @@ "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-iso-east-1" + } + ] + }, "url": "https://account.us-iso-east-1.c2s.ic.gov" } }, @@ -281,6 +364,14 @@ "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isob-east-1" + } + ] + }, "url": "https://account-fips.us-isob-east-1.sc2s.sgov.gov" } }, @@ -305,6 +396,14 @@ "documentation": "For region us-isob-east-1 with 
FIPS disabled and DualStack disabled", "expect": { "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isob-east-1" + } + ] + }, "url": "https://account.us-isob-east-1.sc2s.sgov.gov" } }, @@ -315,54 +414,131 @@ } }, { - "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled", + "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "eu-isoe-west-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://example.com" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "eu-isoe-west-1" + } + ] + }, + "url": "https://account-fips.eu-isoe-west-1.cloud.adc-e.uk" } }, "params": { - "Region": "us-east-1", + "Region": "eu-isoe-west-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "eu-isoe-west-1", "UseFIPS": false, - "UseDualStack": false, - "Endpoint": "https://example.com" + "UseDualStack": true } }, { - "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled", + "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://example.com" + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "eu-isoe-west-1" + } + ] + }, + "url": "https://account.eu-isoe-west-1.cloud.adc-e.uk" } }, "params": { + "Region": "eu-isoe-west-1", "UseFIPS": false, - "UseDualStack": false, - "Endpoint": "https://example.com" + 
"UseDualStack": false } }, { - "documentation": "For custom endpoint with fips enabled and dualstack disabled", + "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack enabled", "expect": { - "error": "Invalid Configuration: FIPS and custom endpoint are not supported" + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { - "Region": "us-east-1", + "Region": "us-isof-south-1", "UseFIPS": true, - "UseDualStack": false, - "Endpoint": "https://example.com" + "UseDualStack": true } }, { - "documentation": "For custom endpoint with fips disabled and dualstack enabled", + "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack disabled", "expect": { - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isof-south-1" + } + ] + }, + "url": "https://account-fips.us-isof-south-1.csp.hci.ic.gov" + } }, "params": { - "Region": "us-east-1", + "Region": "us-isof-south-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isof-south-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isof-south-1" + } + ] + }, + "url": "https://account.us-isof-south-1.csp.hci.ic.gov" + } + }, + "params": { + "Region": "us-isof-south-1", "UseFIPS": false, - "UseDualStack": true, - "Endpoint": "https://example.com" + "UseDualStack": false } }, { diff --git a/tests/functional/endpoint-rules/backupsearch/endpoint-tests-1.json 
b/tests/functional/endpoint-rules/backupsearch/endpoint-tests-1.json new file mode 100644 index 0000000000..5986f9074b --- /dev/null +++ b/tests/functional/endpoint-rules/backupsearch/endpoint-tests-1.json @@ -0,0 +1,313 @@ +{ + "testCases": [ + { + "documentation": "For custom endpoint with region not set and fips disabled", + "expect": { + "endpoint": { + "url": "https://example.com" + } + }, + "params": { + "Endpoint": "https://example.com", + "UseFIPS": false + } + }, + { + "documentation": "For custom endpoint with fips enabled", + "expect": { + "error": "Invalid Configuration: FIPS and custom endpoint are not supported" + }, + "params": { + "Endpoint": "https://example.com", + "UseFIPS": true + } + }, + { + "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-east-1" + } + ] + }, + "url": "https://backup-search-fips.us-east-1.api.aws" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": true + } + }, + { + "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-east-1" + } + ] + }, + "url": "https://backup-search.us-east-1.api.aws" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": false + } + }, + { + "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "cn-northwest-1" + } + ] + }, + "url": "https://backup-search-fips.cn-northwest-1.api.amazonwebservices.com.cn" + } + }, + "params": { + "Region": "cn-northwest-1", + "UseFIPS": true + } + }, + { + "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + 
"signingRegion": "cn-northwest-1" + } + ] + }, + "url": "https://backup-search.cn-northwest-1.api.amazonwebservices.com.cn" + } + }, + "params": { + "Region": "cn-northwest-1", + "UseFIPS": false + } + }, + { + "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://backup-search-fips.us-gov-west-1.api.aws" + } + }, + "params": { + "Region": "us-gov-west-1", + "UseFIPS": true + } + }, + { + "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-gov-west-1" + } + ] + }, + "url": "https://backup-search.us-gov-west-1.api.aws" + } + }, + "params": { + "Region": "us-gov-west-1", + "UseFIPS": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-iso-east-1" + } + ] + }, + "url": "https://backup-search-fips.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-iso-east-1" + } + ] + }, + "url": "https://backup-search.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isob-east-1" + } + ] + }, + "url": "https://backup-search-fips.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + 
"Region": "us-isob-east-1", + "UseFIPS": true + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isob-east-1" + } + ] + }, + "url": "https://backup-search.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false + } + }, + { + "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "eu-isoe-west-1" + } + ] + }, + "url": "https://backup-search-fips.eu-isoe-west-1.cloud.adc-e.uk" + } + }, + "params": { + "Region": "eu-isoe-west-1", + "UseFIPS": true + } + }, + { + "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "eu-isoe-west-1" + } + ] + }, + "url": "https://backup-search.eu-isoe-west-1.cloud.adc-e.uk" + } + }, + "params": { + "Region": "eu-isoe-west-1", + "UseFIPS": false + } + }, + { + "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isof-south-1" + } + ] + }, + "url": "https://backup-search-fips.us-isof-south-1.csp.hci.ic.gov" + } + }, + "params": { + "Region": "us-isof-south-1", + "UseFIPS": true + } + }, + { + "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "properties": { + "authSchemes": [ + { + "name": "sigv4", + "signingRegion": "us-isof-south-1" + } + ] + }, + "url": "https://backup-search.us-isof-south-1.csp.hci.ic.gov" + } + }, + "params": { + "Region": "us-isof-south-1", + "UseFIPS": false + } + }, + { + "documentation": "Missing region", + 
"expect": { + "error": "Invalid Configuration: Missing Region" + } + } + ], + "version": "1.0" +} \ No newline at end of file From c34e3c388a9bf03f1860f6616363b7e4246bc8ee Mon Sep 17 00:00:00 2001 From: aws-sdk-python-automation