diff --git a/clients/client-ecs/src/commands/CreateCapacityProviderCommand.ts b/clients/client-ecs/src/commands/CreateCapacityProviderCommand.ts
index d1c8acacf857..cab7d7718c5a 100644
--- a/clients/client-ecs/src/commands/CreateCapacityProviderCommand.ts
+++ b/clients/client-ecs/src/commands/CreateCapacityProviderCommand.ts
@@ -104,6 +104,16 @@ export interface CreateCapacityProviderCommandOutput extends CreateCapacityProvi
 *
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ * The RunTask could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * PROVISIONING per cluster has been reached. For information
+ * about the service quotas, see Amazon ECS
+ * service quotas.
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/CreateClusterCommand.ts b/clients/client-ecs/src/commands/CreateClusterCommand.ts
index e7b572f1a47b..02f023ac2787 100644
--- a/clients/client-ecs/src/commands/CreateClusterCommand.ts
+++ b/clients/client-ecs/src/commands/CreateClusterCommand.ts
@@ -178,6 +178,16 @@ export interface CreateClusterCommandOutput extends CreateClusterResponse, __Met
 *
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * The following list includes additional causes for the error:
+ * The RunTask could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * PROVISIONING per cluster has been reached. For information
+ * about the service quotas, see Amazon ECS
+ * service quotas.
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/CreateServiceCommand.ts b/clients/client-ecs/src/commands/CreateServiceCommand.ts
index 34b727d2c0dd..c5d204c34cc2 100644
--- a/clients/client-ecs/src/commands/CreateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/CreateServiceCommand.ts
@@ -108,8 +108,8 @@ export interface CreateServiceCommandOutput extends CreateServiceResponse, __Met
 *
 * When creating a service that uses the EXTERNAL deployment controller, you
* can specify only parameters that aren't controlled at the task set level. The only
* required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
- * When the service scheduler launches new tasks, it determines task placement. For information
- * about task placement and task placement strategies, see Amazon ECS
+ * When the service scheduler launches new tasks, it determines task placement. For
+ * information about task placement and task placement strategies, see Amazon ECS
* task placement in the Amazon Elastic Container Service Developer Guide
 * Starting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
 *
 * These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
 *
- * For information about the maximum number of task sets and otther quotas, see Amazon ECS
+ * For information about the maximum number of task sets and other quotas, see Amazon ECS
 * service quotas in the Amazon Elastic Container Service Developer Guide.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DeleteAttributesCommand.ts b/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
index cc37319dccb7..a7154e1f5b07 100644
--- a/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteAttributesCommand.ts
@@ -76,8 +76,8 @@ export interface DeleteAttributesCommandOutput extends DeleteAttributesResponse,
*
* @throws {@link TargetNotFoundException} (client fault)
* The specified target wasn't found. You can view your available container instances
- * with ListContainerInstances. Amazon ECS container instances are
- * cluster-specific and Region-specific.
 *
 * Base exception class for all service exceptions from ECS service.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DeleteClusterCommand.ts b/clients/client-ecs/src/commands/DeleteClusterCommand.ts
index 78f9d06f0de3..b05fbe5733dc 100644
--- a/clients/client-ecs/src/commands/DeleteClusterCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteClusterCommand.ts
@@ -131,6 +131,16 @@ export interface DeleteClusterCommandOutput extends DeleteClusterResponse, __Met
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * You can't delete a cluster that has registered container instances. First, deregister
diff --git a/clients/client-ecs/src/commands/DeleteServiceCommand.ts b/clients/client-ecs/src/commands/DeleteServiceCommand.ts
index eb5754ed872c..c6c509129332 100644
--- a/clients/client-ecs/src/commands/DeleteServiceCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteServiceCommand.ts
@@ -348,6 +348,16 @@ export interface DeleteServiceCommandOutput extends DeleteServiceResponse, __Met
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts b/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
index 7888c3b56f25..1c1ce3394214 100644
--- a/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/DeleteTaskSetCommand.ts
@@ -129,6 +129,16 @@ export interface DeleteTaskSetCommandOutput extends DeleteTaskSetResponse, __Met
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts b/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
index 982c4a245fef..4ab33faa407c 100644
--- a/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeCapacityProvidersCommand.ts
@@ -97,6 +97,16 @@ export interface DescribeCapacityProvidersCommandOutput extends DescribeCapacity
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeClustersCommand.ts b/clients/client-ecs/src/commands/DescribeClustersCommand.ts
index 7f08e74087ed..170cae289983 100644
--- a/clients/client-ecs/src/commands/DescribeClustersCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeClustersCommand.ts
@@ -140,6 +140,16 @@ export interface DescribeClustersCommandOutput extends DescribeClustersResponse,
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts b/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
index 35029fffe8ed..8223837d8660 100644
--- a/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeContainerInstancesCommand.ts
@@ -151,6 +151,16 @@ export interface DescribeContainerInstancesCommandOutput extends DescribeContain
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts b/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
index 2087ba458bd3..97885c31ede6 100644
--- a/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
+++ b/clients/client-ecs/src/commands/DescribeTaskSetsCommand.ts
@@ -144,6 +144,16 @@ export interface DescribeTaskSetsCommandOutput extends DescribeTaskSetsResponse,
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * These errors are usually caused by a server issue.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListClustersCommand.ts b/clients/client-ecs/src/commands/ListClustersCommand.ts
index d10e4d15c706..4e09360d53d3 100644
--- a/clients/client-ecs/src/commands/ListClustersCommand.ts
+++ b/clients/client-ecs/src/commands/ListClustersCommand.ts
@@ -60,6 +60,16 @@ export interface ListClustersCommandOutput extends ListClustersResponse, __Metad
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts b/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
index fdd113d2a2fc..837a7433d1ea 100644
--- a/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
+++ b/clients/client-ecs/src/commands/ListContainerInstancesCommand.ts
@@ -65,6 +65,16 @@ export interface ListContainerInstancesCommandOutput extends ListContainerInstan
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListServicesCommand.ts b/clients/client-ecs/src/commands/ListServicesCommand.ts
index 1b06362de3f2..994d47829991 100644
--- a/clients/client-ecs/src/commands/ListServicesCommand.ts
+++ b/clients/client-ecs/src/commands/ListServicesCommand.ts
@@ -64,6 +64,16 @@ export interface ListServicesCommandOutput extends ListServicesResponse, __Metad
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts b/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
index 0b56a4f0174a..b4ebb02b5357 100644
--- a/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
+++ b/clients/client-ecs/src/commands/ListTaskDefinitionsCommand.ts
@@ -65,6 +65,16 @@ export interface ListTaskDefinitionsCommandOutput extends ListTaskDefinitionsRes
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/ListTasksCommand.ts b/clients/client-ecs/src/commands/ListTasksCommand.ts
index 63cd9b3731ad..164c24934301 100644
--- a/clients/client-ecs/src/commands/ListTasksCommand.ts
+++ b/clients/client-ecs/src/commands/ListTasksCommand.ts
@@ -70,6 +70,16 @@ export interface ListTasksCommandOutput extends ListTasksResponse, __MetadataBea
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
 *
 * These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts b/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
index 250918126c6e..081605d863b7 100644
--- a/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
+++ b/clients/client-ecs/src/commands/PutAccountSettingDefaultCommand.ts
@@ -63,6 +63,16 @@ export interface PutAccountSettingDefaultCommandOutput extends PutAccountSetting
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.
 * The following list includes additional causes for the error: The
 *
 * The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/PutAttributesCommand.ts b/clients/client-ecs/src/commands/PutAttributesCommand.ts
index 185532518e61..97e7086e9d85 100644
--- a/clients/client-ecs/src/commands/PutAttributesCommand.ts
+++ b/clients/client-ecs/src/commands/PutAttributesCommand.ts
@@ -84,8 +84,8 @@ export interface PutAttributesCommandOutput extends PutAttributesResponse, __Met
*
* @throws {@link TargetNotFoundException} (client fault)
* The specified target wasn't found. You can view your available container instances
- * with ListContainerInstances. Amazon ECS container instances are
- * cluster-specific and Region-specific.
+ *
*
* @throws {@link ClusterNotFoundException} (client fault)
+ * The RunTask could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * PROVISIONING per cluster has been reached. For information
+ * about the service quotas, see Amazon ECS
+ * service quotas.
+ *
+ * scaling and there is a capacity error because the quota of tasks in the
+ * PROVISIONING
per cluster has been reached. For information
+ * about the service quotas, see Amazon ECS
+ * service quotas.
Base exception class for all service exceptions from ECS service.
diff --git a/clients/client-ecs/src/commands/PutClusterCapacityProvidersCommand.ts b/clients/client-ecs/src/commands/PutClusterCapacityProvidersCommand.ts
index 662f2ffd9205..6540eba5601f 100644
--- a/clients/client-ecs/src/commands/PutClusterCapacityProvidersCommand.ts
+++ b/clients/client-ecs/src/commands/PutClusterCapacityProvidersCommand.ts
@@ -151,6 +151,16 @@ export interface PutClusterCapacityProvidersCommandOutput
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/RegisterContainerInstanceCommand.ts b/clients/client-ecs/src/commands/RegisterContainerInstanceCommand.ts
index 30fad1007bde..7b37ae5a481c 100644
--- a/clients/client-ecs/src/commands/RegisterContainerInstanceCommand.ts
+++ b/clients/client-ecs/src/commands/RegisterContainerInstanceCommand.ts
@@ -179,6 +179,16 @@ export interface RegisterContainerInstanceCommandOutput extends RegisterContaine
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
 * <p>The specified parameter isn't valid. Review the available parameters for the API
 * request.</p>
diff --git a/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts b/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
index ec0950a4ca30..74ad0dd4d959 100644
--- a/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
+++ b/clients/client-ecs/src/commands/RegisterTaskDefinitionCommand.ts
@@ -39,9 +39,7 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
 * policy that's associated with the role. For more information, see IAM
 * Roles for Tasks in the Amazon Elastic Container Service Developer Guide.</p>
 * <p>You can specify a Docker networking mode for the containers in your task definition
- * with the <code>networkMode</code> parameter. The available network modes correspond to
- * those described in Network
- * settings in the Docker run reference. If you specify the <code>awsvpc</code>
+ * with the <code>networkMode</code> parameter. If you specify the <code>awsvpc</code>
 * network mode, the task is allocated an elastic network interface, and you must specify a
 * <a>NetworkConfiguration</a> when you create a service or run a task with
 * the task definition. For more information, see Task Networking
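For the `awsvpc` network mode described above, the `NetworkConfiguration` supplied at service-creation or run-task time has roughly the following shape. This is a hedged sketch: the types here are minimal local stand-ins for the real ones in `@aws-sdk/client-ecs`, and the subnet/security-group IDs are the placeholder values from the doc example.

```typescript
// Minimal local stand-ins for the documented NetworkConfiguration shape.
// (Assumption: the real types come from @aws-sdk/client-ecs.)
interface AwsVpcConfiguration {
  subnets: string[];                          // required for awsvpc tasks
  securityGroups?: string[];
  assignPublicIp?: "ENABLED" | "DISABLED";
}

interface NetworkConfiguration {
  awsvpcConfiguration?: AwsVpcConfiguration;
}

// A task using the awsvpc network mode must be given subnets when it runs.
const networkConfiguration: NetworkConfiguration = {
  awsvpcConfiguration: {
    subnets: ["subnet-12344321"],             // placeholder IDs from the doc example
    securityGroups: ["sg-12344321"],
    assignPublicIp: "DISABLED",
  },
};
```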
@@ -81,6 +79,13 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
* },
* ],
* essential: true || false,
+ * restartPolicy: { // ContainerRestartPolicy
+ * enabled: true || false, // required
+ * ignoredExitCodes: [ // IntegerList
+ * Number("int"),
+ * ],
+ * restartAttemptPeriod: Number("int"),
+ * },
* entryPoint: [
* "STRING_VALUE",
* ],
@@ -333,6 +338,13 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
* // },
* // ],
* // essential: true || false,
+ * // restartPolicy: { // ContainerRestartPolicy
+ * // enabled: true || false, // required
+ * // ignoredExitCodes: [ // IntegerList
+ * // Number("int"),
+ * // ],
+ * // restartAttemptPeriod: Number("int"),
+ * // },
* // entryPoint: [
* // "STRING_VALUE",
* // ],
@@ -590,6 +602,16 @@ export interface RegisterTaskDefinitionCommandOutput extends RegisterTaskDefinit
*
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
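The `restartPolicy` fields added to the task-definition container shape in this diff (`enabled`, `ignoredExitCodes`, `restartAttemptPeriod`) can be pictured with a small sketch. This is illustrative only: the interface below is a local stand-in for the generated `ContainerRestartPolicy` type, and the eligibility check is not the agent's actual restart algorithm.

```typescript
// Local stand-in for the generated ContainerRestartPolicy shape (assumption:
// the real type lives in @aws-sdk/client-ecs models).
interface ContainerRestartPolicy {
  enabled: boolean;               // required
  ignoredExitCodes?: number[];    // exit codes that should NOT trigger a restart
  restartAttemptPeriod?: number;  // seconds a container must run before restarting
}

// Illustrative check only: would an exit be eligible for a restart under the
// given policy? (Not the ECS agent's real implementation.)
function restartEligible(policy: ContainerRestartPolicy, exitCode: number): boolean {
  if (!policy.enabled) return false;
  return !(policy.ignoredExitCodes ?? []).includes(exitCode);
}
```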
 * <p>The specified parameter isn't valid. Review the available parameters for the API
 * request.</p>
diff --git a/clients/client-ecs/src/commands/RunTaskCommand.ts b/clients/client-ecs/src/commands/RunTaskCommand.ts
index 7b13358bbd45..e3d8270ccae8 100644
--- a/clients/client-ecs/src/commands/RunTaskCommand.ts
+++ b/clients/client-ecs/src/commands/RunTaskCommand.ts
@@ -386,6 +386,16 @@ export interface RunTaskCommandOutput extends RunTaskResponse, __MetadataBearer
 *
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
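Callers of `RunTask` may want to distinguish the documented managed-scaling `PROVISIONING`-quota case (transient capacity pressure, reasonable to retry) from ordinary validation errors. A hedged sketch; the error shape below is a simplified stand-in that only assumes the thrown exception carries a `name` and a `message`.

```typescript
// Simplified stand-in for an SDK service exception (assumption: the real
// typed exceptions from @aws-sdk/client-ecs expose `name` and `message`).
interface ServiceError {
  name: string;
  message: string;
}

// The PROVISIONING-quota failure is transient, so retrying with backoff is a
// reasonable response, unlike most ClientException causes.
function isProvisioningQuotaError(err: ServiceError): boolean {
  return err.name === "ClientException" && err.message.includes("PROVISIONING");
}
```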
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/StartTaskCommand.ts b/clients/client-ecs/src/commands/StartTaskCommand.ts
index eade8a0cd359..7d0794305f25 100644
--- a/clients/client-ecs/src/commands/StartTaskCommand.ts
+++ b/clients/client-ecs/src/commands/StartTaskCommand.ts
@@ -333,6 +333,16 @@ export interface StartTaskCommandOutput extends StartTaskResponse, __MetadataBea
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/StopTaskCommand.ts b/clients/client-ecs/src/commands/StopTaskCommand.ts
index 5e1017155cc3..99ba27e5e055 100644
--- a/clients/client-ecs/src/commands/StopTaskCommand.ts
+++ b/clients/client-ecs/src/commands/StopTaskCommand.ts
@@ -35,8 +35,8 @@ export interface StopTaskCommandOutput extends StopTaskResponse, __MetadataBeare
 * <code>SIGKILL</code> value is sent and the containers are forcibly stopped. If the
 * container handles the <code>SIGTERM</code> value gracefully and exits within 30 seconds
 * from receiving it, no <code>SIGKILL</code> value is sent.</p>
- * <p>For Windows containers, POSIX signals do not work and runtime stops the container by sending
- * a
+ * <p>For Windows containers, POSIX signals do not work and runtime stops the container by
+ * sending a
 * <p>The default 30-second timeout can be configured on the Amazon ECS container agent with
@@ -232,6 +232,16 @@ export interface StopTaskCommandOutput extends StopTaskResponse, __MetadataBeare
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
 * <p>The specified cluster wasn't found. You can view your available clusters with <a>ListClusters</a>. Amazon ECS clusters are Region specific.</p>
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
 * <p>The specified parameter isn't valid. Review the available parameters for the API
 * request.</p>
diff --git a/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts b/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
index 84f76366bdd8..965c08044c56 100644
--- a/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
+++ b/clients/client-ecs/src/commands/SubmitContainerStateChangeCommand.ts
@@ -78,6 +78,16 @@ export interface SubmitContainerStateChangeCommandOutput extends SubmitContainer
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The These errors are usually caused by a server issue. These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/TagResourceCommand.ts b/clients/client-ecs/src/commands/TagResourceCommand.ts
index 5aaf38df237e..c478cd833d67 100644
--- a/clients/client-ecs/src/commands/TagResourceCommand.ts
+++ b/clients/client-ecs/src/commands/TagResourceCommand.ts
@@ -63,6 +63,16 @@ export interface TagResourceCommandOutput extends TagResourceResponse, __Metadat
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific. These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific. These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified parameter isn't valid. Review the available parameters for the API
diff --git a/clients/client-ecs/src/commands/UpdateClusterCommand.ts b/clients/client-ecs/src/commands/UpdateClusterCommand.ts
index 4e107e51c41f..3f164d0118b2 100644
--- a/clients/client-ecs/src/commands/UpdateClusterCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateClusterCommand.ts
@@ -152,6 +152,16 @@ export interface UpdateClusterCommandOutput extends UpdateClusterResponse, __Met
* These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific. These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific. These errors are usually caused by a client action. This client action might be using
* an action or resource on behalf of a user that doesn't have permissions to use the
* action or resource. Or, it might be specifying an identifier that isn't valid. The following list includes additional causes for the error: The The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific. Any A container instance has completed draining when it has no more CTRL_SHUTDOWN_EVENT
. For more information, see Unable to react to graceful shutdown
+ * CTRL_SHUTDOWN_EVENT
. For more information, see Unable to react to graceful shutdown
* of (Windows) container #25982 on GitHub.
+ *
*
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link InvalidParameterException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ServerException} (server fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link InvalidParameterException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link InvalidParameterException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ *
 *
 * @throws {@link ClusterNotFoundException} (client fault)
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
 * <p>Any <code>PENDING</code> or <code>RUNNING</code> tasks that do not belong to a service
 * aren't affected. You must wait for them to finish or stop them manually.</p>
 * <p>A container instance has completed draining when it has no more <code>RUNNING</code>
 * tasks. You can verify this using <a>ListTasks</a>.</p>
* ACTIVE
status and once it has reached that status the Amazon ECS scheduler
* can begin scheduling tasks on the instance again.
These errors are usually caused by a client action. This client action might be using * an action or resource on behalf of a user that doesn't have permissions to use the * action or resource. Or, it might be specifying an identifier that isn't valid.
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateServiceCommand.ts b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
index 28eaba8bdd3f..9416699a2241 100644
--- a/clients/client-ecs/src/commands/UpdateServiceCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateServiceCommand.ts
@@ -602,6 +602,16 @@ export interface UpdateServiceCommandOutput extends UpdateServiceResponse, __Met
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts b/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
index 08e2de8a37fc..6794cba10a78 100644
--- a/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateServicePrimaryTaskSetCommand.ts
@@ -133,6 +133,16 @@ export interface UpdateServicePrimaryTaskSetCommandOutput
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts b/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
index 2236b11f9df8..8c8af456b1ce 100644
--- a/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateTaskProtectionCommand.ts
@@ -103,6 +103,16 @@ export interface UpdateTaskProtectionCommandOutput extends UpdateTaskProtectionR
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts b/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
index a312a183d6ff..5dac762f1c12 100644
--- a/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
+++ b/clients/client-ecs/src/commands/UpdateTaskSetCommand.ts
@@ -6,7 +6,8 @@ import { MetadataBearer as __MetadataBearer } from "@smithy/types";
 import { ECSClientResolvedConfig, ServiceInputTypes, ServiceOutputTypes } from "../ECSClient";
 import { commonParams } from "../endpoint/EndpointParameters";
-import { UpdateTaskSetRequest, UpdateTaskSetResponse } from "../models/models_0";
+import { UpdateTaskSetRequest } from "../models/models_0";
+import { UpdateTaskSetResponse } from "../models/models_1";
 import { de_UpdateTaskSetCommand, se_UpdateTaskSetCommand } from "../protocols/Aws_json1_1";
 /**
@@ -133,6 +134,16 @@ export interface UpdateTaskSetCommandOutput extends UpdateTaskSetResponse, __Met
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
The specified cluster wasn't found. You can view your available clusters with ListClusters. Amazon ECS clusters are Region specific.
diff --git a/clients/client-ecs/src/models/index.ts b/clients/client-ecs/src/models/index.ts
index 9eaceb12865f..1657800f73ce 100644
--- a/clients/client-ecs/src/models/index.ts
+++ b/clients/client-ecs/src/models/index.ts
@@ -1,2 +1,3 @@
 // smithy-typescript generated code
 export * from "./models_0";
+export * from "./models_1";
diff --git a/clients/client-ecs/src/models/models_0.ts b/clients/client-ecs/src/models/models_0.ts
index ecd0c438903a..d59c3f8d774b 100644
--- a/clients/client-ecs/src/models/models_0.ts
+++ b/clients/client-ecs/src/models/models_0.ts
@@ -45,6 +45,16 @@ export type AgentUpdateStatus = (typeof AgentUpdateStatus)[keyof typeof AgentUpd
 * <p>These errors are usually caused by a client action. This client action might be using
 * an action or resource on behalf of a user that doesn't have permissions to use the
 * action or resource. Or, it might be specifying an identifier that isn't valid.</p>
+ * <p>The following list includes additional causes for the error:</p>
+ * <ul>
+ * <li>
+ * <p>The <code>RunTask</code> could not be processed because you use managed
+ * scaling and there is a capacity error because the quota of tasks in the
+ * <code>PROVISIONING</code> per cluster has been reached. For information
+ * about the service quotas, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html">Amazon ECS
+ * service quotas</a>.</p>
+ * </li>
+ * </ul>
 * <code>FARGATE_SPOT</code> capacity providers. The Fargate capacity providers are
 * available to all accounts and only need to be associated with a cluster to be used in a
 * capacity provider strategy.</p>
- * <p>With <code>FARGATE_SPOT</code>, you can run interruption
- * tolerant tasks at a rate that's discounted compared to the <code>FARGATE</code> price.
- * <code>FARGATE_SPOT</code> runs tasks on spare compute capacity. When Amazon Web Services needs the
- * capacity back, your tasks are interrupted with a two-minute warning.
- * <code>FARGATE_SPOT</code> only supports Linux tasks with the X86_64 architecture on
- * platform version 1.3.0 or later.</p>
+ * <p>With <code>FARGATE_SPOT</code>, you can run interruption tolerant tasks at a rate
+ * that's discounted compared to the <code>FARGATE</code> price. <code>FARGATE_SPOT</code>
+ * runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are
+ * interrupted with a two-minute warning. <code>FARGATE_SPOT</code> only supports Linux
+ * tasks with the X86_64 architecture on platform version 1.3.0 or later.</p>
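A capacity provider strategy combining `FARGATE` and `FARGATE_SPOT` as described above looks roughly like this. A hedged sketch: the interface is a local stand-in for the generated strategy-item type, and the base/weight values are only an illustration.

```typescript
// Local stand-in for the generated CapacityProviderStrategyItem shape
// (assumption: the real type lives in @aws-sdk/client-ecs).
interface CapacityProviderStrategyItem {
  capacityProvider: string;
  weight?: number;  // relative share of launched tasks
  base?: number;    // minimum tasks kept on this provider before weights apply
}

// Illustrative strategy: keep a steady base of 2 tasks on regular Fargate and
// split additional tasks 1:3 in favor of the cheaper, interruptible Spot pool.
const strategy: CapacityProviderStrategyItem[] = [
  { capacityProvider: "FARGATE", base: 2, weight: 1 },
  { capacityProvider: "FARGATE_SPOT", weight: 3 },
];
```

The docs cap a strategy at 6 capacity providers, which this two-entry example is well under.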
 * <p>A capacity provider strategy may contain a maximum of 6 capacity providers.</p>
 * @public
 */
@@ -1477,15 +1486,13 @@ export interface DeploymentConfiguration {
 * the task towards the minimum healthy percent total.
 *
- * <p>The default value for a replica service for
* @public */ @@ -1477,15 +1486,13 @@ export interface DeploymentConfiguration { * the task towards the minimum healthy percent total. * * - *The default value for a replica service for
- * minimumHealthyPercent
is 100%. The default
- * minimumHealthyPercent
value for a service using
- * the DAEMON
service schedule is 0% for the CLI,
- * the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The default value for a replica service for minimumHealthyPercent
is
+ * 100%. The default minimumHealthyPercent
value for a service using the
+ * DAEMON
service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the
+ * APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the
- * desiredCount
multiplied by the
- * minimumHealthyPercent
/100, rounded up to the
- * nearest integer value.
desiredCount
multiplied by the minimumHealthyPercent
/100,
+ * rounded up to the nearest integer value.
* If a service is using either the blue/green (CODE_DEPLOY
) or
* EXTERNAL
deployment types and is running tasks that use the
* EC2 launch type, the minimum healthy
@@ -1651,8 +1658,7 @@ export type AssignPublicIp = (typeof AssignPublicIp)[keyof typeof AssignPublicIp
/**
*
An object representing the networking details for a task or service. For example
- * <code>awsvpcConfiguration=\{subnets=["subnet-12344321"],securityGroups=["sg-12344321"]\}</code>
- * </p>
+ * <code>awsVpcConfiguration=\{subnets=["subnet-12344321"],securityGroups=["sg-12344321"]\}</code>.</p>
* @public
*/
export interface AwsVpcConfiguration {
@@ -1883,16 +1889,12 @@ export interface Secret {
/**
 * <p>The log configuration for the container. This parameter maps to <code>LogConfig</code>
- * in the Create a container section of the Docker Remote API and the
- * <code>--log-driver</code> option to
- * <code>docker
- * run</code>
- * .</p>
+ * in the Create a container section of the Docker Remote API and the
+ * <code>--log-driver</code> option to <code>docker
+ * run</code>.</p>
 * <p>By default, containers use the same logging driver that the Docker daemon uses.
 * However, the container might use a different logging driver than the Docker daemon by
- * specifying a log driver configuration in the container definition. For more information
- * about the options for different supported log drivers, see Configure logging
- * drivers in the Docker documentation.</p>
+ * specifying a log driver configuration in the container definition.</p>
 * <p>Understand the following when specifying a log configuration for your
 * containers.</p>
+ * specifying a log driver configuration in the container definition. *Understand the following when specifying a log configuration for your * containers.
 * <code>splunk</code>, and <code>awsfirelens</code>.</p>
 * <p>For tasks hosted on Amazon EC2 instances, the supported log drivers are
 * <code>awslogs</code>, <code>fluentd</code>, <code>gelf</code>,
- * <code>json-file</code>, <code>journald</code>,
- * <code>logentries</code>,<code>syslog</code>, <code>splunk</code>, and
- * <code>awsfirelens</code>.
+ * <code>json-file</code>, <code>journald</code>,<code>syslog</code>,
+ * <code>splunk</code>, and <code>awsfirelens</code>.
*
* This parameter requires version 1.18 of the Docker Remote API or greater on
@@ -1936,12 +1937,12 @@ export interface LogConfiguration {
 * <code>splunk</code>, and <code>awsfirelens</code>.</p>
 * <p>For tasks hosted on Amazon EC2 instances, the supported log drivers are
 * <code>awslogs</code>, <code>fluentd</code>, <code>gelf</code>,
- * <code>json-file</code>, <code>journald</code>,
- * <code>logentries</code>,<code>syslog</code>, <code>splunk</code>, and
- * <code>awsfirelens</code>.
- * <p>For more information about using the <code>awslogs</code> log driver, see Using
- * the awslogs log driver in the Amazon Elastic Container Service Developer Guide.</p>
- * <p>For more information about using the <code>awsfirelens</code> log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide.</p>
+ * <code>json-file</code>, <code>journald</code>, <code>syslog</code>,
+ * <code>splunk</code>, and <code>awsfirelens</code>.
+ * <p>For more information about using the <code>awslogs</code> log driver, see Send
+ * Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.</p>
+ * <p>For more information about using the <code>awsfirelens</code> log driver, see Send
+ * Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.</p>
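A typical `awslogs` log configuration for a container definition looks like the following sketch. The interface is a local stand-in for the generated `LogConfiguration` type, and the log group, Region, and prefix values are placeholders.

```typescript
// Local stand-in for the generated LogConfiguration shape (assumption: the
// real type lives in @aws-sdk/client-ecs).
interface LogConfiguration {
  logDriver: string;                 // e.g. "awslogs", "splunk", "awsfirelens"
  options?: Record<string, string>;  // driver-specific options
}

// Example awslogs configuration; group/region/prefix values are placeholders.
const logConfiguration: LogConfiguration = {
  logDriver: "awslogs",
  options: {
    "awslogs-group": "/ecs/my-app",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "web",
  },
};
```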
If you have a custom driver that isn't listed, you can fork the Amazon ECS container
* agent project that's available
@@ -2183,16 +2184,12 @@ export interface ServiceConnectConfiguration {
/**
* The log configuration for the container. This parameter maps to LogConfig
- * in the Create a container section of the Docker Remote API and the
- * --log-driver
option to
- * docker
- * run
- * .--log-driver
option to docker
+ * run.
By default, containers use the same logging driver that the Docker daemon uses. * However, the container might use a different logging driver than the Docker daemon by - * specifying a log driver configuration in the container definition. For more information - * about the options for different supported log drivers, see Configure logging - * drivers in the Docker documentation.
+ * specifying a log driver configuration in the container definition. *Understand the following when specifying a log configuration for your * containers.
*splunk
, and awsfirelens
.
* For tasks hosted on Amazon EC2 instances, the supported log drivers are
* awslogs
, fluentd
, gelf
,
- * json-file
, journald
,
- * logentries
,syslog
, splunk
, and
- * awsfirelens
.
json-file
, journald
, syslog
,
+ * splunk
, and awsfirelens
.
* This parameter requires version 1.18 of the Docker Remote API or greater on @@ -2646,8 +2642,8 @@ export interface CreateServiceRequest { * infrastructure.
*Fargate Spot infrastructure is available for use but a capacity provider - * strategy must be used. For more information, see Fargate capacity providers in the - * Amazon ECS Developer Guide.
+ * strategy must be used. For more information, see Fargate capacity providers in the Amazon ECS + * Developer Guide. *The EC2
launch type runs your tasks on Amazon EC2 instances registered to your
* cluster.
The platform version that your tasks in the service are running on. A platform version
* is specified only for tasks using the Fargate launch type. If one isn't
* specified, the LATEST
platform version is used. For more information, see
- * Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
If you do not use Elastic Load Balancing, we recommend that you use the startPeriod
in
in
* the task definition health check parameters. For more information, see Health
* check.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can - * specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). + *
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you + * can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). * During that time, the Amazon ECS service scheduler ignores health check status. This grace * period can prevent the service scheduler from marking tasks as unhealthy and stopping * them before they have time to come up.
@@ -2848,7 +2845,9 @@ export interface CreateServiceRequest { *Specifies whether to propagate the tags from the task definition to the task. If no * value is specified, the tags aren't propagated. Tags can only be propagated to the task * during task creation. To add tags to a task after task creation, use the TagResource API action.
- *You must set this to a value other than NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
You must set this to a value other than NONE
when you use Cost Explorer.
+ * For more information, see Amazon ECS usage reports
+ * in the Amazon Elastic Container Service Developer Guide.
The default is NONE
.
Specify an Key Management Service key ID to encrypt the ephemeral storage for deployment.
+ *Specify a Key Management Service key ID to encrypt the ephemeral storage for + * deployment.
* @public */ kmsKeyId?: string; @@ -4207,8 +4207,8 @@ export interface DeleteAttributesResponse { /** *The specified target wasn't found. You can view your available container instances - * with ListContainerInstances. Amazon ECS container instances are - * cluster-specific and Region-specific.
* @public */ kmsKeyId?: string; @@ -4207,8 +4207,8 @@ export interface DeleteAttributesResponse { /** *The specified target wasn't found. You can view your available container instances - * with ListContainerInstances. Amazon ECS container instances are - * cluster-specific and Region-specific.
+ * with ListContainerInstances. Amazon ECS container instances are cluster-specific and + * Region-specific. * @public */ export class TargetNotFoundException extends __BaseException { @@ -4538,8 +4538,10 @@ export type EnvironmentFileType = (typeof EnvironmentFileType)[keyof typeof Envi * parameter in a container definition, they take precedence over the variables contained * within an environment file. If multiple environment files are specified that contain the * same variable, they're processed from the top down. We recommend that you use unique - * variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide. - *Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
+ * variable names. For more information, see Use a file to pass + * environment variables to a container in the Amazon Elastic Container Service Developer Guide. + *Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations + * apply.
*You must use the following platforms for the Fargate launch type:
*The file type to use. Environment files are objects in Amazon S3. The only supported value is
- * s3
.
The file type to use. Environment files are objects in Amazon S3. The only supported value
+ * is s3
.
An object representing a container health check. Health check parameters that are
* specified in a container definition override any Docker health checks that exist in the
* container image (such as those specified in a parent image or from the image's
- * Dockerfile). This configuration maps to the HEALTHCHECK
parameter of docker run.
HEALTHCHECK
parameter of docker run.
* The Amazon ECS container agent only monitors and reports on the health checks specified * in the task definition. Amazon ECS does not monitor Docker health checks that are @@ -4761,17 +4763,18 @@ export interface FirelensConfiguration { *
The following are notes about container health check support:
*If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't
- * cause a container to transition to an UNHEALTHY
status. This is by design,
- * to ensure that containers remain running during agent restarts or temporary
- * unavailability. The health check status is the "last heard from" response from the Amazon ECS
- * agent, so if the container was considered HEALTHY
prior to the disconnect,
- * that status will remain until the agent reconnects and another health check occurs.
- * There are no assumptions made about the status of the container health checks.
If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this
+ * won't cause a container to transition to an UNHEALTHY
status. This
+ * is by design, to ensure that containers remain running during agent restarts or
+ * temporary unavailability. The health check status is the "last heard from"
+ * response from the Amazon ECS agent, so if the container was considered
+ * HEALTHY
prior to the disconnect, that status will remain until
+ * the agent reconnects and another health check occurs. There are no assumptions
+ * made about the status of the container health checks.
Container health checks require version 1.17.0
or greater of the Amazon ECS
or greater of the Amazon ECS
- * container agent. For more information, see Updating the
+ * 1.17.0
or greater of the
+ * Amazon ECS container agent. For more information, see Updating the
* Amazon ECS container agent.
CMD-SHELL, curl -f http://localhost/ || exit 1
*
* An exit code of 0 indicates success, and non-zero exit code indicates failure. For
- * more information, see HealthCheck
in the Create a container
- * section of the Docker Remote API.
HealthCheck
in tthe docker create-container command
* @public
*/
command: string[] | undefined;
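The `CMD-SHELL` form above can be sketched as follows. The interface is a local stand-in for the SDK's `HealthCheck` shape, and the timing values are illustrative assumptions, not defaults from the docs:

```typescript
// Local stand-in for the SDK's HealthCheck shape.
interface HealthCheckSketch {
  command: string[];    // ["CMD-SHELL", "..."] or ["CMD", "arg1", ...]
  interval?: number;    // seconds between checks
  timeout?: number;     // seconds before a single check counts as failed
  retries?: number;     // consecutive failures before UNHEALTHY
  startPeriod?: number; // grace period before failures start counting
}

// Exit code 0 means healthy; any non-zero exit code means unhealthy.
const healthCheck: HealthCheckSketch = {
  command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
  interval: 30,
  timeout: 5,
  retries: 3,
};
```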
@@ -4846,19 +4848,16 @@ export interface HealthCheck {
}
/**
- * The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more information about the default capabilities - * and the non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run - * reference. For more detailed information about these Linux capabilities, + *
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities, * see the capabilities(7) Linux manual page.
* @public */ export interface KernelCapabilities { /** *The Linux capabilities for the container that have been added to the default
- * configuration provided by Docker. This parameter maps to CapAdd
in the
- * Create a container section of the Docker Remote API and the
- * --cap-add
option to docker
- * run.
CapAdd
in the docker create-container command and the
+ * --cap-add
option to docker
+ * run.
* Tasks launched on Fargate only support adding the SYS_PTRACE
kernel
* capability.
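The Fargate restriction above (only `SYS_PTRACE` may be added) can be made concrete with a small sketch; the interface is a local stand-in for the SDK's `KernelCapabilities` shape, and the non-Fargate capability names are just common examples:

```typescript
// Local stand-in for the SDK's KernelCapabilities shape.
interface KernelCapabilitiesSketch {
  add?: string[];  // maps to CapAdd / --cap-add
  drop?: string[]; // maps to CapDrop / --cap-drop
}

const onFargate: boolean = true;

// On Fargate, only SYS_PTRACE may be added; on EC2, any documented
// capability name is allowed (NET_ADMIN and MKNOD here are examples).
const capabilities: KernelCapabilitiesSketch = {
  add: onFargate ? ["SYS_PTRACE"] : ["NET_ADMIN"],
  drop: onFargate ? [] : ["MKNOD"],
};
```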
The Linux capabilities for the container that have been removed from the default
- * configuration provided by Docker. This parameter maps to CapDrop
in the
- * Create a container section of the Docker Remote API and the
- * --cap-drop
option to docker
- * run.
CapDrop
in the docker create-container command and the
+ * --cap-drop
option to docker
+ * run.
* Valid values: Any host devices to expose to the container. This parameter maps to
- * "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" |
* "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" |
* "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" |
@@ -4988,8 +4986,7 @@ export interface LinuxParameters {
/**
*
Devices
in the Create a container section of the
- * Docker Remote API and the --device
option to docker run.Devices
in tthe docker create-container command and the --device
option to docker run.
If you're using tasks that use the Fargate launch type, the
* devices
parameter isn't supported.
The value for the size (in MiB) of the /dev/shm
volume. This parameter
- * maps to the --shm-size
option to docker
- * run.
--shm-size
option to docker
+ * run.
* If you are using tasks that use the Fargate launch type, the
* sharedMemorySize
parameter is not supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This
- * parameter maps to the --tmpfs
option to docker run.
--tmpfs
option to docker run.
* If you're using tasks that use the Fargate launch type, the
* tmpfs
parameter isn't supported.
0
and 100
. If the swappiness
parameter is not
* specified, a default value of 60
is used. If a value is not specified for
* maxSwap
then this parameter is ignored. This parameter maps to the
- * --memory-swappiness
option to docker run.
+ * --memory-swappiness
option to docker run.
* If you're using tasks that use the Fargate launch type, the
* swappiness
parameter isn't supported.
hostPort
can be left blank or it must be the same value as the
* containerPort
.
* Most fields of this parameter (containerPort
, hostPort
,
- * protocol
) maps to PortBindings
in the
- * Create a container section of the Docker Remote API and the
- * --publish
option to
- * docker
- * run
- * . If the network mode of a task definition is set to
+ * protocol
) maps to PortBindings
in the docker create-container command and the
+ * --publish
option to docker
+ * run
. If the network mode of a task definition is set to
* host
, host ports must either be undefined or match the container port
* in the port mapping.
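The host-port rule above can be expressed as a small check. The interface is a local stand-in for the SDK's `PortMapping` shape; the validation helper is this sketch's own, not an SDK function:

```typescript
// Local stand-in for the SDK's PortMapping shape.
interface PortMappingSketch {
  containerPort: number;
  hostPort?: number;
  protocol?: "tcp" | "udp";
}

// Documented rule: with the awsvpc or host network mode, hostPort must be
// left blank or equal containerPort; bridge mode has no such restriction.
function hostPortValid(networkMode: string, m: PortMappingSketch): boolean {
  if (networkMode === "awsvpc" || networkMode === "host") {
    return m.hostPort === undefined || m.hostPort === m.containerPort;
  }
  return true;
}

const mapping: PortMappingSketch = { containerPort: 80, hostPort: 80, protocol: "tcp" };
```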
The value for the specified resource type.
- *When the type is GPU
, the value is the number of physical GPUs
the
- * Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for
- * all containers in a task can't exceed the number of available GPUs on the container
- * instance that the task is launched on.
When the type is InferenceAccelerator
, the value
matches
- * the deviceName
for an InferenceAccelerator specified in a task definition.
When the type is GPU
, the value is the number of physical
+ * GPUs
the Amazon ECS container agent reserves for the container. The number
+ * of GPUs that's reserved for all containers in a task can't exceed the number of
+ * available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator
, the value
matches the
+ * deviceName
for an InferenceAccelerator specified in a task definition.
You can enable a restart policy for each container defined in your + * task definition, to overcome transient failures faster and maintain task availability. When you + * enable a restart policy for a container, Amazon ECS can restart the container if it exits, without needing to replace + * the task. For more information, see Restart individual containers + * in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
+ * @public + */ +export interface ContainerRestartPolicy { + /** + *Specifies whether a restart policy is enabled for the + * container.
+ * @public + */ + enabled: boolean | undefined; + + /** + *A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit + * codes. By default, Amazon ECS does not ignore + * any exit codes.
+ * @public + */ + ignoredExitCodes?: number[]; + + /** + *A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be
+ * restarted only once every restartAttemptPeriod
seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum
+ * restartAttemptPeriod
of 60 seconds and a maximum restartAttemptPeriod
of 1800 seconds.
+ * By default, a container must run for 300 seconds before it can be restarted.
A list of namespaced kernel parameters to set in the container. This parameter maps to
- * Sysctls
in the Create a container section of the
- * Docker Remote API and the --sysctl
option to docker run. For example, you can configure
+ * Sysctls
in tthe docker create-container command and the --sysctl
option to docker run. For example, you can configure
* net.ipv4.tcp_keepalive_time
setting to maintain longer lived
* connections.
We don't recommend that you specify network-related systemControls
@@ -5478,7 +5505,7 @@ export type UlimitName = (typeof UlimitName)[keyof typeof UlimitName];
* the nofile
resource limit parameter which Fargate
* overrides. The nofile
resource limit sets a restriction on
* the number of open files that a container can use. The default
- * nofile
soft limit is 1024
and the default hard limit
+ * nofile
soft limit is 65535
and the default hard limit
* is 65535
.
You can specify the ulimit
settings for a container in a task
* definition.
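Overriding the `nofile` limit described above can be sketched as follows; the interface is a local stand-in for the SDK's `Ulimit` shape, and the soft limit chosen is an arbitrary example below the documented hard limit:

```typescript
// Local stand-in for the SDK's Ulimit shape.
interface UlimitSketch {
  name: string;      // e.g. "nofile"
  softLimit: number;
  hardLimit: number;
}

// Per the docs above, Fargate's nofile defaults are soft 65535 / hard 65535
// unless a task definition overrides them like this.
const nofile: UlimitSketch = { name: "nofile", softLimit: 16384, hardLimit: 65535 };

const consistent = nofile.softLimit <= nofile.hardLimit;
```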
The name of a container. If you're linking multiple containers together in a task
* definition, the name
of one container can be entered in the
* links
of another container to connect the containers.
- * Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name
in the
- * Create a container section of the Docker Remote API and the
- * --name
option to docker
- * run.
name
in tthe docker create-container command and the
+ * --name
option to docker
+ * run.
* @public
*/
name?: string;
@@ -5550,10 +5576,9 @@ export interface ContainerDefinition {
* repository-url/image:tag
* or
* repository-url/image@digest
- *
. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image
in the
- * Create a container section of the Docker Remote API and the
- * IMAGE
parameter of docker
- * run.
+ * . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image
in the docker create-container command and the
+ * IMAGE
parameter of docker
+ * run.
* When a new task starts, the Amazon ECS container agent pulls the latest version of @@ -5594,8 +5619,7 @@ export interface ContainerDefinition { /** *
The number of cpu
units reserved for the container. This parameter maps
- * to CpuShares
in the Create a container section of the
- * Docker Remote API and the --cpu-shares
option to docker run.
CpuShares
in the docker create-container commandand the --cpu-shares
option to docker run.
* This field is optional for tasks using the Fargate launch type, and the
* only requirement is that the total amount of CPU reserved for all containers within a
* task be lower than the task-level cpu
value.
On Linux container instances, the Docker daemon on the container instance uses the CPU - * value to calculate the relative CPU share ratios for running containers. For more - * information, see CPU share - * constraint in the Docker documentation. The minimum valid CPU share value - * that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you - * can use CPU values below 2 in your container definitions. For CPU values below 2 - * (including null), the behavior varies based on your Amazon ECS container agent + * value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value + * that the Linux kernel allows is 2, and the + * maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you + * can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 + * (including null) or above 262144, the behavior varies based on your Amazon ECS container agent * version:
*+ * Agent versions greater than or equal to + * 1.84.0: CPU values greater than 256 vCPU are passed to Docker as + * 256, which is equivalent to 262144 CPU shares.
+ *On Windows container instances, the CPU limit is enforced as an absolute limit, or a
* quota. Windows containers only have access to the specified amount of CPU that's
@@ -5648,8 +5677,7 @@ export interface ContainerDefinition {
* to exceed the memory specified here, the container is killed. The total amount of memory
* reserved for all containers within a task must be lower than the task
* memory
value, if one is specified. This parameter maps to
- * Memory
in the Create a container section of the
- * Docker Remote API and the --memory
option to docker run.
Memory
in thethe docker create-container command and the --memory
option to docker run.
* If using the Fargate launch type, this parameter is optional.
*If using the EC2 launch type, you must specify either a task-level
* memory value or a container-level memory value. If you specify both a container-level
@@ -5672,8 +5700,7 @@ export interface ContainerDefinition {
* However, your container can consume more memory when it needs to, up to either the hard
* limit specified with the memory
parameter (if applicable), or all of the
* available memory on the container instance, whichever comes first. This parameter maps
- * to MemoryReservation
in the Create a container section of
- * the Docker Remote API and the --memory-reservation
option to docker run.
MemoryReservation
in the the docker create-container command and the --memory-reservation
option to docker run.
* If a task-level memory value is not specified, you must specify a non-zero integer for
* one or both of memory
or memoryReservation
in a container
* definition. If you specify both, memory
must be greater than
@@ -5700,12 +5727,9 @@ export interface ContainerDefinition {
* without the need for port mappings. This parameter is only supported if the network mode
* of a task definition is bridge
. The name:internalName
* construct is analogous to name:alias
in Docker links.
- * Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. For more information about linking Docker containers, go to
- * Legacy container links
- * in the Docker documentation. This parameter maps to Links
in the
- * Create a container section of the Docker Remote API and the
- * --link
option to docker
- * run.
Links
in the docker create-container command and the
+ * --link
option to docker
+ * run.
* This parameter is not supported for Windows containers.
*localhost
. There's no loopback for port mappings on Windows, so you
* can't access a container's mapped port from the host itself.
* This parameter maps to PortBindings
in the
- * Create a container section of the Docker Remote API and the
- * --publish
option to docker
- * run. If the network mode of a task definition is set to none
,
+ * the docker create-container command and the
+ * --publish
option to docker
+ * run. If the network mode of a task definition is set to none
,
* then you can't specify port mappings. If the network mode of a task definition is set to
* host
, then host ports must either be undefined or they must match the
* container port in the port mapping.
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the + * task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
+ * @public + */ + restartPolicy?: ContainerRestartPolicy; + /** *Early versions of the Amazon ECS container agent don't properly handle
@@ -5770,17 +5801,16 @@ export interface ContainerDefinition {
* arguments as command
array items instead.
The entry point that's passed to the container. This parameter maps to
- * Entrypoint
in the Create a container section of the
- * Docker Remote API and the --entrypoint
option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.
Entrypoint
in tthe docker create-container command and the --entrypoint
option to docker run.
* @public
*/
entryPoint?: string[];
/**
* The command that's passed to the container. This parameter maps to Cmd
in
- * the Create a container section of the Docker Remote API and the
- * COMMAND
parameter to docker
- * run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each
+ * the docker create-container command and the
+ * COMMAND
parameter to docker
+ * run. If there are multiple arguments, each
* argument is a separated string in the array.
The environment variables to pass to a container. This parameter maps to
- * Env
in the Create a container section of the
- * Docker Remote API and the --env
option to docker run.
Env
in the docker create-container command and the --env
option to docker run.
* We don't recommend that you use plaintext environment variables for sensitive * information, such as credential data.
@@ -5800,13 +5829,11 @@ export interface ContainerDefinition { /** *A list of files containing the environment variables to pass to a container. This
- * parameter maps to the --env-file
option to docker run.
--env-file
option to docker run.
* You can specify up to ten environment files. The file must have a .env
* file extension. Each line in an environment file contains an environment variable in
* VARIABLE=VALUE
format. Lines beginning with #
are treated
- * as comments and are ignored. For more information about the environment variable file
- * syntax, see Declare default
- * environment variables in file.
If there are environment variables specified using the environment
* parameter in a container definition, they take precedence over the variables contained
* within an environment file. If multiple environment files are specified that contain the
@@ -5819,8 +5846,7 @@ export interface ContainerDefinition {
/**
*
The mount points for data volumes in your container.
- *This parameter maps to Volumes
in the Create a container
- * section of the Docker Remote API and the --volume
option to docker run.
This parameter maps to Volumes
in the the docker create-container command and the --volume
option to docker run.
Windows containers can mount whole directories on the same drive as
* $env:ProgramData
. Windows containers can't mount directories on a
* different drive, and mount point can't be across drives.
Data volumes to mount from another container. This parameter maps to
- * VolumesFrom
in the Create a container section of the
- * Docker Remote API and the --volumes-from
option to docker run.
VolumesFrom
in tthe docker create-container command and the --volumes-from
option to docker run.
* @public
*/
volumesFrom?: VolumeFrom[];
@@ -5914,7 +5939,7 @@ export interface ContainerDefinition {
* later, then they contain the required versions of the container agent and
* ecs-init
. For more information, see Amazon ECS-optimized Linux AMI
* in the Amazon Elastic Container Service Developer Guide.
- * The valid values are 2-120 seconds.
+ *The valid values for Fargate are 2-120 seconds.
* @public */ startTimeout?: number; @@ -5954,9 +5979,9 @@ export interface ContainerDefinition { /** *The hostname to use for your container. This parameter maps to Hostname
- * in the Create a container section of the Docker Remote API and the
- * --hostname
option to docker
- * run.
--hostname
option to docker
+ * run.
* The hostname
parameter is not supported if you're using the
* awsvpc
network mode.
The user to use inside the container. This parameter maps to User
in the
- * Create a container section of the Docker Remote API and the
- * --user
option to docker
- * run.
The user to use inside the container. This parameter maps to User
in the docker create-container command and the
+ * --user
option to docker
+ * run.
When running tasks using the host
network mode, don't run containers
* using the root user (UID 0). We recommend using a non-root user for better
@@ -6018,16 +6042,14 @@ export interface ContainerDefinition {
/**
*
The working directory to run commands inside the container in. This parameter maps to
- * WorkingDir
in the Create a container section of the
- * Docker Remote API and the --workdir
option to docker run.
WorkingDir
in the docker create-container command and the --workdir
option to docker run.
* @public
*/
workingDirectory?: string;
/**
* When this parameter is true, networking is off within the container. This parameter
- * maps to NetworkDisabled
in the Create a container section
- * of the Docker Remote API.
NetworkDisabled
in the docker create-container command.
* This parameter is not supported for Windows containers.
*When this parameter is true, the container is given elevated privileges on the host
* container instance (similar to the root
user). This parameter maps to
- * Privileged
in the Create a container section of the
- * Docker Remote API and the --privileged
option to docker run.
Privileged
in the the docker create-container command and the --privileged
option to docker run
* This parameter is not supported for Windows containers or tasks run on Fargate.
*When this parameter is true, the container is given read-only access to its root file
- * system. This parameter maps to ReadonlyRootfs
in the
- * Create a container section of the Docker Remote API and the
- * --read-only
option to docker
- * run.
ReadonlyRootfs
in the docker create-container command and the
+ * --read-only
option to docker
+ * run.
* This parameter is not supported for Windows containers.
*A list of DNS servers that are presented to the container. This parameter maps to
- * Dns
in the Create a container section of the
- * Docker Remote API and the --dns
option to docker run.
Dns
in the the docker create-container command and the --dns
option to docker run.
* This parameter is not supported for Windows containers.
*A list of DNS search domains that are presented to the container. This parameter maps
- * to DnsSearch
in the Create a container section of the
- * Docker Remote API and the --dns-search
option to docker run.
DnsSearch
in the docker create-container command and the --dns-search
option to docker run.
* This parameter is not supported for Windows containers.
*A list of hostnames and IP address mappings to append to the /etc/hosts
- * file on the container. This parameter maps to ExtraHosts
in the
- * Create a container section of the Docker Remote API and the
- * --add-host
option to docker
- * run.
ExtraHosts
in the docker create-container command and the
+ * --add-host
option to docker
+ * run.
* This parameter isn't supported for Windows containers or tasks that use the
* awsvpc
network mode.
A list of strings to provide custom configuration for multiple security systems. For - * more information about valid values, see Docker - * Run Security Configuration. This field isn't valid for containers in tasks + *
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks * using the Fargate launch type.
*For Linux tasks on EC2, this parameter can be used to reference custom * labels for SELinux and AppArmor multi-level security systems.
@@ -6108,10 +6123,9 @@ export interface ContainerDefinition { * For more information, see Using gMSAs for Windows * Containers and Using gMSAs for Linux * Containers in the Amazon Elastic Container Service Developer Guide. - *This parameter maps to SecurityOpt
in the
- * Create a container section of the Docker Remote API and the
- * --security-opt
option to docker
- * run.
This parameter maps to SecurityOpt
in the docker create-container command and the
+ * --security-opt
option to docker
+ * run.
The Amazon ECS container agent running on a container instance must register with the
* ECS_SELINUX_CAPABLE=true
or ECS_APPARMOR_CAPABLE=true
@@ -6119,8 +6133,6 @@ export interface ContainerDefinition {
* security options. For more information, see Amazon ECS Container
* Agent Configuration in the Amazon Elastic Container Service Developer Guide.
For more information about valid values, see Docker - * Run Security Configuration.
*Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | * "credentialspec:CredentialSpecFilePath"
* @public @@ -6130,24 +6142,21 @@ export interface ContainerDefinition { /** *When this parameter is true
, you can deploy containerized applications
* that require stdin
or a tty
to be allocated. This parameter
- * maps to OpenStdin
in the Create a container section of the
- * Docker Remote API and the --interactive
option to docker run.
OpenStdin
in the docker create-container command and the --interactive
option to docker run.
* @public
*/
interactive?: boolean;
/**
* When this parameter is true
, a TTY is allocated. This parameter maps to
- * Tty
in the Create a container section of the
- * Docker Remote API and the --tty
option to docker run.
Tty
in tthe docker create-container command and the --tty
option to docker run.
* @public
*/
pseudoTerminal?: boolean;
/**
* A key/value map of labels to add to the container. This parameter maps to
- * Labels
in the Create a container section of the
- * Docker Remote API and the --label
option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
+ * Labels
in the docker create-container command and the --label
option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
*
A list of ulimits
to set in the container. If a ulimit
value
* is specified in a task definition, it overrides the default values set by Docker. This
- * parameter maps to Ulimits
in the Create a container section
- * of the Docker Remote API and the --ulimit
option to docker run. Valid naming values are displayed
+ * parameter maps to Ulimits
in tthe docker create-container command and the --ulimit
option to docker run. Valid naming values are displayed
* in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default
* resource limit values set by the operating system with the exception of
* the nofile
resource limit parameter which Fargate
* overrides. The nofile
resource limit sets a restriction on
* the number of open files that a container can use. The default
- * nofile
soft limit is 1024
and the default hard limit
+ * nofile
soft limit is 65535
and the default hard limit
* is 65535
.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '\{\{.Server.APIVersion\}\}'
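The `nofile` defaults described above can be overridden per container through `ulimits`. A hedged sketch of the documented `Ulimit` shape (`name`, `softLimit`, `hardLimit`); the container name and image are illustrative assumptions:

```typescript
// Sketch of the ulimits fragment of a container definition. Field names
// follow the Ulimit data type documented above; values are illustrative.
const webContainer = {
  name: "web",
  image: "public.ecr.aws/nginx/nginx:latest", // assumed image
  ulimits: [
    {
      name: "nofile",   // the resource limit Fargate overrides by default
      softLimit: 65535, // matches the documented Fargate soft limit
      hardLimit: 65535, // matches the documented Fargate hard limit
    },
  ],
};
```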
*
The log configuration specification for the container.
- *This parameter maps to LogConfig
in the
- * Create a container section of the Docker Remote API and the
- * --log-driver
option to docker
- * run. By default, containers use the same logging driver that the Docker
+ *
This parameter maps to LogConfig
in the docker create-container command and the
+ * --log-driver
option to docker
+ * run. By default, containers use the same logging driver that the Docker
* daemon uses. However the container can use a different logging driver than the Docker
* daemon by specifying a log driver with this parameter in the container definition. To
* use a different logging driver for a container, the log system must be configured
* properly on the container instance (or on a different log server for remote logging
- * options). For more information about the options for different supported log drivers,
- * see Configure
- * logging drivers in the Docker documentation.
Amazon ECS currently supports a subset of the logging drivers available to the Docker * daemon (shown in the LogConfiguration data type). Additional log @@ -6209,18 +6214,16 @@ export interface ContainerDefinition { /** *
The container health check command and associated configuration parameters for the
- * container. This parameter maps to HealthCheck
in the
- * Create a container section of the Docker Remote API and the
- * HEALTHCHECK
parameter of docker
- * run.
HealthCheck
in the docker create-container command and the
+ * HEALTHCHECK
parameter of docker
+ * run.
* @public
*/
healthCheck?: HealthCheck;
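As a concrete illustration of the `HealthCheck` shape referenced above, here is a plain-object sketch. The `CMD-SHELL` form and the curl probe are conventional examples, and the timing values are illustrative assumptions:

```typescript
// Sketch of a container health check. command uses the CMD-SHELL form, so the
// probe runs under the container's default shell; times are in seconds.
const healthCheck = {
  command: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
  interval: 30,    // time between probes
  timeout: 5,      // how long a probe may run before it counts as a failure
  retries: 3,      // consecutive failures before the container is UNHEALTHY
  startPeriod: 60, // grace period during container startup
};
```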
/**
* A list of namespaced kernel parameters to set in the container. This parameter maps to
- * Sysctls
in the Create a container section of the
- * Docker Remote API and the --sysctl
option to docker run. For example, you can configure
+ * Sysctls
in the docker create-container command and the --sysctl
option to docker run. For example, you can configure
* net.ipv4.tcp_keepalive_time
setting to maintain longer lived
* connections.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported
- * value is 20
GiB and the maximum supported value is
+ *
The total amount, in GiB, of ephemeral storage to set for the task. The minimum
+ * supported value is 20
GiB and the maximum supported value is
* 200
GiB.
docker plugin ls
to retrieve the driver name from
* your container instance. If the driver was installed using another method, use Docker
- * plugin discovery to retrieve the driver name. For more information, see Docker
- * plugin discovery. This parameter maps to Driver
in the
- * Create a volume section of the Docker Remote API and the
- * xxdriver
option to docker
- * volume create.
+ * plugin discovery to retrieve the driver name. This parameter maps to Driver
in the docker create-volume command and the
--driver
option to docker
+ * volume create.
* @public
*/
driver?: string;
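The `driver`, `driverOpts`, and `labels` fields described in this section combine into a Docker volume configuration on a task definition volume. A sketch using the built-in `local` driver; the scope, option, and label values are illustrative assumptions:

```typescript
// Sketch of a task-definition volume backed by a Docker volume. The fields
// mirror `docker volume create --driver ... --opt ... --label ...`.
const volume = {
  name: "shared-data",
  dockerVolumeConfiguration: {
    scope: "task",   // volume lifetime tied to the task (vs. "shared")
    driver: "local", // driver name, e.g. as listed by `docker plugin ls`
    driverOpts: { type: "tmpfs", device: "tmpfs" }, // illustrative options
    labels: { team: "platform" },                   // illustrative metadata
  },
};
```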
/**
* A map of Docker driver-specific options passed through. This parameter maps to
- * DriverOpts
in the Create a volume section of the
- * Docker Remote API and the xxopt
option to docker
- * volume create.
DriverOpts
in the docker create-volume command and the --opt
option to docker
+ * volume create.
* @public
*/
driverOpts?: RecordCustom metadata to add to your Docker volume. This parameter maps to
- * Labels
in the Create a volume section of the
- * Docker Remote API and the xxlabel
option to docker
- * volume create.
Labels
in the docker create-volume command and the --label
option to docker
+ * volume create.
* @public
*/
labels?: RecordThe short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the - * task permission to call Amazon Web Services APIs on your behalf. For more information, see Amazon ECS - * Task Role in the Amazon Elastic Container Service Developer Guide.
- *IAM roles for tasks on Windows require that the -EnableTaskIAMRole
- * option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some
- * configuration code to use the feature. For more information, see Windows IAM roles
- * for tasks in the Amazon Elastic Container Service Developer Guide.
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent - * permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required - * depending on the requirements of your task. For more information, see Amazon ECS task - * execution IAM role in the Amazon Elastic Container Service Developer Guide.
+ * permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
 * @public
 */
 executionRoleArn?: string;
@@ -6990,13 +6984,11 @@ export interface TaskDefinition {
 * to use a non-root user.
 *
If the network mode is awsvpc
, the task is allocated an elastic network
- * interface, and you must specify a NetworkConfiguration value when you create
+ * interface, and you must specify a NetworkConfiguration value when you create
* a service or run a task with the task definition. For more information, see Task Networking in the
* Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the
* same task on a single container instance when port mappings are used.
For more information, see Network - * settings in the Docker run reference.
* @public */ networkMode?: NetworkMode; @@ -7081,6 +7073,9 @@ export interface TaskDefinition { * this field is optional. Any value can be used. If you use the Fargate launch type, this * field is required. You must use one of the following values. The value that you choose * determines your range of valid values for thememory
parameter.
+ * If you use the EC2 launch type, this field is optional. Supported values
+ * are between 128
CPU units (0.125
vCPUs) and 10240
+ * CPU units (10
vCPUs).
The CPU units cannot be less than 1 vCPU when you use Windows containers on * Fargate.
*If task
is specified, all containers within the specified
* task share the same process namespace.
If no value is specified, the - * default is a private namespace for each container. For more information, - * see PID settings in the Docker run - * reference.
+ * default is a private namespace for each container. *If the host
PID mode is used, there's a heightened risk
- * of undesired process namespace exposure. For more information, see
- * Docker security.
This parameter is not supported for Windows containers.
*none
is specified, then IPC resources
* within the containers of a task are private and not shared with other containers in a
* task or on the container instance. If no value is specified, then the IPC resource
- * namespace sharing depends on the Docker daemon setting on the container instance. For
- * more information, see IPC
- * settings in the Docker run reference.
+ * namespace sharing depends on the Docker daemon setting on the container instance.
* If the host
IPC mode is used, be aware that there is a heightened risk of
- * undesired IPC namespace expose. For more information, see Docker
- * security.
If you are setting namespaced kernel parameters using The total amount, in GiB, of the ephemeral storage to set for the task. The minimum
- * supported value is systemControls
for
* the containers in the task, the following will apply to your IPC resource namespace. For
* more information, see System
@@ -8477,14 +8466,15 @@ export interface Container {
export interface TaskEphemeralStorage {
/**
* 20
GiB and the maximum supported value is
 200
- * GiB.20
GiB and the maximum supported value is
+ * 200
GiB.
Specify an Key Management Service key ID to encrypt the ephemeral storage for the task.
+ *Specify a Key Management Service key ID to encrypt the ephemeral storage for the + * task.
* @public */ kmsKeyId?: string; @@ -10799,9 +10789,7 @@ export interface RegisterTaskDefinitionRequest { /** *The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent - * permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required - * depending on the requirements of your task. For more information, see Amazon ECS task - * execution IAM role in the Amazon Elastic Container Service Developer Guide.
+ * permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
 * @public
 */
 executionRoleArn?: string;
@@ -10828,13 +10816,11 @@ export interface RegisterTaskDefinitionRequest {
 * to use a non-root user.
 *
If the network mode is awsvpc
, the task is allocated an elastic network
- * interface, and you must specify a NetworkConfiguration value when you create
+ * interface, and you must specify a NetworkConfiguration value when you create
* a service or run a task with the task definition. For more information, see Task Networking in the
* Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the
* same task on a single container instance when port mappings are used.
For more information, see Network - * settings in the Docker run reference.
* @public */ networkMode?: NetworkMode; @@ -11018,12 +11004,9 @@ export interface RegisterTaskDefinitionRequest { *If task
is specified, all containers within the specified
* task share the same process namespace.
If no value is specified, the - * default is a private namespace for each container. For more information, - * see PID settings in the Docker run - * reference.
+ * default is a private namespace for each container. *If the host
PID mode is used, there's a heightened risk
- * of undesired process namespace exposure. For more information, see
- * Docker security.
This parameter is not supported for Windows containers.
*none
is specified, then IPC resources
* within the containers of a task are private and not shared with other containers in a
* task or on the container instance. If no value is specified, then the IPC resource
- * namespace sharing depends on the Docker daemon setting on the container instance. For
- * more information, see IPC
- * settings in the Docker run reference.
+ * namespace sharing depends on the Docker daemon setting on the container instance.
* If the host
IPC mode is used, be aware that there is a heightened risk of
- * undesired IPC namespace expose. For more information, see Docker
- * security.
If you are setting namespaced kernel parameters using An optional tag specified when a task is started. For example, if you automatically
* trigger a task to run a batch process job, you could apply a unique identifier for that
* job to your task with the systemControls
for
* the containers in the task, the following will apply to your IPC resource namespace. For
* more information, see System
@@ -11569,9 +11549,9 @@ export interface RunTaskRequest {
* startedBy
parameter. You can then identify which
- * tasks belong to that job by filtering the results of a ListTasks call
- * with the startedBy
value. Up to 128 letters (uppercase and lowercase),
- * numbers, hyphens (-), and underscores (_) are allowed.startedBy
value. Up to 128 letters (uppercase and lowercase), numbers,
+ * hyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy
parameter
* contains the deployment ID of the service that starts it.
To specify a specific revision, include the revision number in the ARN. For example,
* to specify revision 2, use
* arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2
.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all - * revisions, use + *
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify
+ * all revisions, use
* arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*
.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
* @public @@ -11659,7 +11639,6 @@ export interface RunTaskResponse { /** *A full description of the tasks that were run. The tasks that were successfully placed * on your cluster are described here.
- * * @public */ tasks?: Task[]; @@ -11753,9 +11732,9 @@ export interface StartTaskRequest { *An optional tag specified when a task is started. For example, if you automatically
* trigger a task to run a batch process job, you could apply a unique identifier for that
* job to your task with the startedBy
parameter. You can then identify which
- * tasks belong to that job by filtering the results of a ListTasks call
- * with the startedBy
value. Up to 36 letters (uppercase and lowercase),
- * numbers, hyphens (-), and underscores (_) are allowed.
startedBy
value. Up to 36 letters (uppercase and lowercase), numbers,
+ * hyphens (-), forward slash (/), and underscores (_) are allowed.
* If a task is started by an Amazon ECS service, the startedBy
parameter
* contains the deployment ID of the service that starts it.
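The `startedBy` convention described above — tag tasks at start, then filter with `ListTasks` on the same value — can be sketched with plain objects mirroring the documented request shapes. The cluster name and task definition family are illustrative assumptions:

```typescript
// Tag tasks with a job identifier at start time, then find them again by
// filtering ListTasks on the same value. Field names follow the documented
// RunTask/ListTasks request shapes; concrete values are illustrative.
const startedBy = "batch-job/1234"; // letters, numbers, -, /, _ are allowed

const runTaskInput = {
  cluster: "default",
  taskDefinition: "batch-worker", // hypothetical task definition family
  count: 2,
  startedBy,
};

const listTasksInput = {
  cluster: "default",
  startedBy, // returns only tasks that were started with this value
};

// Both requests must carry the identical tag for the filter to match.
console.log(runTaskInput.startedBy === listTasksInput.startedBy); // true
```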
Details about the task set.
- * @public - */ - taskSet?: TaskSet; -} - /** * @internal */ diff --git a/clients/client-ecs/src/models/models_1.ts b/clients/client-ecs/src/models/models_1.ts new file mode 100644 index 000000000000..fd0da70788da --- /dev/null +++ b/clients/client-ecs/src/models/models_1.ts @@ -0,0 +1,13 @@ +// smithy-typescript generated code +import { TaskSet } from "./models_0"; + +/** + * @public + */ +export interface UpdateTaskSetResponse { + /** + *Details about the task set.
+ * @public + */ + taskSet?: TaskSet; +} diff --git a/clients/client-ecs/src/protocols/Aws_json1_1.ts b/clients/client-ecs/src/protocols/Aws_json1_1.ts index 1fee3b94304f..f4104f54aff4 100644 --- a/clients/client-ecs/src/protocols/Aws_json1_1.ts +++ b/clients/client-ecs/src/protocols/Aws_json1_1.ts @@ -198,6 +198,7 @@ import { ContainerInstanceField, ContainerInstanceHealthStatus, ContainerOverride, + ContainerRestartPolicy, ContainerStateChange, CreateCapacityProviderRequest, CreateClusterRequest, @@ -370,11 +371,11 @@ import { UpdateTaskProtectionRequest, UpdateTaskProtectionResponse, UpdateTaskSetRequest, - UpdateTaskSetResponse, VersionInfo, Volume, VolumeFrom, } from "../models/models_0"; +import { UpdateTaskSetResponse } from "../models/models_1"; /** * serializeAws_json1_1CreateCapacityProviderCommand @@ -2772,6 +2773,8 @@ const de_UpdateInProgressExceptionRes = async ( // se_ContainerOverrides omitted. +// se_ContainerRestartPolicy omitted. + // se_ContainerStateChange omitted. // se_ContainerStateChanges omitted. @@ -2903,6 +2906,8 @@ const se_CreateTaskSetRequest = (input: CreateTaskSetRequest, context: __SerdeCo // se_InferenceAccelerators omitted. +// se_IntegerList omitted. + // se_KernelCapabilities omitted. // se_KeyValuePair omitted. @@ -3353,6 +3358,8 @@ const de_ContainerInstances = (output: any, context: __SerdeContext): ContainerI // de_ContainerOverrides omitted. +// de_ContainerRestartPolicy omitted. + /** * deserializeAws_json1_1Containers */ @@ -3652,6 +3659,8 @@ const de_InstanceHealthCheckResultList = (output: any, context: __SerdeContext): return retVal; }; +// de_IntegerList omitted. + // de_InvalidParameterException omitted. // de_KernelCapabilities omitted. 
diff --git a/codegen/sdk-codegen/aws-models/ecs.json b/codegen/sdk-codegen/aws-models/ecs.json index 6f4fad9674ec..bfcc1fe5e498 100644 --- a/codegen/sdk-codegen/aws-models/ecs.json +++ b/codegen/sdk-codegen/aws-models/ecs.json @@ -1518,7 +1518,7 @@ } }, "traits": { - "smithy.api#documentation": "An object representing the networking details for a task or service. For example\n\t\t\t\tawsvpcConfiguration={subnets=[\"subnet-12344321\"],securityGroups=[\"sg-12344321\"]}
\n
An object representing the networking details for a task or service. For example\n\t\t\t\tawsVpcConfiguration={subnets=[\"subnet-12344321\"],securityGroups=[\"sg-12344321\"]}
.
The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTask or CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.
\nOnly capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE
or UPDATING
status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateCapacityProvider API operation.
\nTo use a Fargate capacity provider, specify either the FARGATE
or\n\t\t\t\tFARGATE_SPOT
capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.
With FARGATE_SPOT
, you can run interruption\n\t\t\ttolerant tasks at a rate that's discounted compared to the FARGATE
price.\n\t\t\t\tFARGATE_SPOT
runs tasks on spare compute capacity. When Amazon Web Services needs the\n\t\t\tcapacity back, your tasks are interrupted with a two-minute warning.\n\t\t\t\tFARGATE_SPOT
only supports Linux tasks with the X86_64 architecture on\n\t\t\tplatform version 1.3.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
" + "smithy.api#documentation": "The details of a capacity provider strategy. A capacity provider strategy can be set\n\t\t\twhen using the RunTask or CreateCluster APIs or as\n\t\t\tthe default capacity provider strategy for a cluster with the CreateCluster API.
\nOnly capacity providers that are already associated with a cluster and have an\n\t\t\t\tACTIVE
or UPDATING
status can be used in a capacity\n\t\t\tprovider strategy. The PutClusterCapacityProviders API is used to\n\t\t\tassociate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity\n\t\t\tprovider must already be created. New Auto Scaling group capacity providers can be\n\t\t\tcreated with the CreateCapacityProvider API operation.
\nTo use a Fargate capacity provider, specify either the FARGATE
or\n\t\t\t\tFARGATE_SPOT
capacity providers. The Fargate capacity providers are\n\t\t\tavailable to all accounts and only need to be associated with a cluster to be used in a\n\t\t\tcapacity provider strategy.
With FARGATE_SPOT
, you can run interruption tolerant tasks at a rate\n\t\t\tthat's discounted compared to the FARGATE
price. FARGATE_SPOT
\n\t\t\truns tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are\n\t\t\tinterrupted with a two-minute warning. FARGATE_SPOT
only supports Linux\n\t\t\ttasks with the X86_64 architecture on platform version 1.3.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
" } }, "com.amazonaws.ecs#CapacityProviderStrategyItemBase": { @@ -1762,7 +1762,7 @@ } }, "traits": { - "smithy.api#documentation": "These errors are usually caused by a client action. This client action might be using\n\t\t\tan action or resource on behalf of a user that doesn't have permissions to use the\n\t\t\taction or resource. Or, it might be specifying an identifier that isn't valid.
", + "smithy.api#documentation": "These errors are usually caused by a client action. This client action might be using\n\t\t\tan action or resource on behalf of a user that doesn't have permissions to use the\n\t\t\taction or resource. Or, it might be specifying an identifier that isn't valid.
\nThe following list includes additional causes for the error:
\nThe RunTask
could not be processed because you use managed\n\t\t\t\t\tscaling and there is a capacity error because the quota of tasks in the\n\t\t\t\t\t\tPROVISIONING
per cluster has been reached. For information\n\t\t\t\t\tabout the service quotas, see Amazon ECS\n\t\t\t\t\t\tservice quotas.
The name of a container. If you're linking multiple containers together in a task\n\t\t\tdefinition, the name
of one container can be entered in the\n\t\t\t\tlinks
of another container to connect the containers.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--name
option to docker\n\t\t\trun.
The name of a container. If you're linking multiple containers together in a task\n\t\t\tdefinition, the name
of one container can be entered in the\n\t\t\t\tlinks
of another container to connect the containers.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name
in the docker create-container command and the\n\t\t\t\t--name
option to docker\n\t\t\trun.
The image used to start a container. This string is passed directly to the Docker\n\t\t\tdaemon. By default, images in the Docker Hub registry are available. Other repositories\n\t\t\tare specified with either \n repository-url/image:tag\n
or \n repository-url/image@digest\n
. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\tIMAGE
parameter of docker\n\t\t\t\trun.
When a new task starts, the Amazon ECS container agent pulls the latest version of\n\t\t\t\t\tthe specified image and tag for the container to use. However, subsequent\n\t\t\t\t\tupdates to a repository image aren't propagated to already running tasks.
\nImages in Amazon ECR repositories can be specified by either using the full\n\t\t\t\t\t\tregistry/repository:tag
or\n\t\t\t\t\t\tregistry/repository@digest
. For example,\n\t\t\t\t\t\t012345678910.dkr.ecr.
\n\t\t\t\t\tor\n\t\t\t\t\t\t012345678910.dkr.ecr.
.\n\t\t\t\t
Images in official repositories on Docker Hub use a single name (for example,\n\t\t\t\t\t\tubuntu
or mongo
).
Images in other repositories on Docker Hub are qualified with an organization\n\t\t\t\t\tname (for example, amazon/amazon-ecs-agent
).
Images in other online repositories are qualified further by a domain name\n\t\t\t\t\t(for example, quay.io/assemblyline/ubuntu
).
The image used to start a container. This string is passed directly to the Docker\n\t\t\tdaemon. By default, images in the Docker Hub registry are available. Other repositories\n\t\t\tare specified with either \n repository-url/image:tag\n
or \n repository-url/image@digest\n
. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image
in the docker create-container command and the\n\t\t\t\tIMAGE
parameter of docker\n\t\t\t\trun.
When a new task starts, the Amazon ECS container agent pulls the latest version of\n\t\t\t\t\tthe specified image and tag for the container to use. However, subsequent\n\t\t\t\t\tupdates to a repository image aren't propagated to already running tasks.
\nImages in Amazon ECR repositories can be specified by either using the full\n\t\t\t\t\t\tregistry/repository:tag
or\n\t\t\t\t\t\tregistry/repository@digest
. For example,\n\t\t\t\t\t\t012345678910.dkr.ecr.
\n\t\t\t\t\tor\n\t\t\t\t\t\t012345678910.dkr.ecr.
.\n\t\t\t\t
Images in official repositories on Docker Hub use a single name (for example,\n\t\t\t\t\t\tubuntu
or mongo
).
Images in other repositories on Docker Hub are qualified with an organization\n\t\t\t\t\tname (for example, amazon/amazon-ecs-agent
).
Images in other online repositories are qualified further by a domain name\n\t\t\t\t\t(for example, quay.io/assemblyline/ubuntu
).
The number of cpu
units reserved for the container. This parameter maps\n\t\t\tto CpuShares
in the Create a container section of the\n\t\t\tDocker Remote API and the --cpu-shares
option to docker run.
This field is optional for tasks using the Fargate launch type, and the\n\t\t\tonly requirement is that the total amount of CPU reserved for all containers within a\n\t\t\ttask be lower than the task-level cpu
value.
You can determine the number of CPU units that are available per EC2 instance type\n\t\t\t\tby multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page\n\t\t\t\tby 1,024.
\nLinux containers share unallocated CPU units with other containers on the container\n\t\t\tinstance with the same ratio as their allocated amount. For example, if you run a\n\t\t\tsingle-container task on a single-core instance type with 512 CPU units specified for\n\t\t\tthat container, and that's the only task running on the container instance, that\n\t\t\tcontainer could use the full 1,024 CPU unit share at any given time. However, if you\n\t\t\tlaunched another copy of the same task on that container instance, each task is\n\t\t\tguaranteed a minimum of 512 CPU units when needed. Moreover, each container could float\n\t\t\tto higher CPU usage if the other container was not using it. If both tasks were 100%\n\t\t\tactive all of the time, they would be limited to 512 CPU units.
\nOn Linux container instances, the Docker daemon on the container instance uses the CPU\n\t\t\tvalue to calculate the relative CPU share ratios for running containers. For more\n\t\t\tinformation, see CPU share\n\t\t\t\tconstraint in the Docker documentation. The minimum valid CPU share value\n\t\t\tthat the Linux kernel allows is 2. However, the CPU parameter isn't required, and you\n\t\t\tcan use CPU values below 2 in your container definitions. For CPU values below 2\n\t\t\t(including null), the behavior varies based on your Amazon ECS container agent\n\t\t\tversion:
\n\n Agent versions less than or equal to 1.1.0:\n\t\t\t\t\tNull and zero CPU values are passed to Docker as 0, which Docker then converts\n\t\t\t\t\tto 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux\n\t\t\t\t\tkernel converts to two CPU shares.
\n\n Agent versions greater than or equal to 1.2.0:\n\t\t\t\t\tNull, zero, and CPU values of 1 are passed to Docker as 2.
\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a\n\t\t\tquota. Windows containers only have access to the specified amount of CPU that's\n\t\t\tdescribed in the task definition. A null or zero CPU value is passed to Docker as\n\t\t\t\t0
, which Windows interprets as 1% of one CPU.
The number of cpu
units reserved for the container. This parameter maps\n\t\t\tto CpuShares
in the docker create-container command and the --cpu-shares
option to docker run.
This field is optional for tasks using the Fargate launch type, and the\n\t\t\tonly requirement is that the total amount of CPU reserved for all containers within a\n\t\t\ttask be lower than the task-level cpu
value.
You can determine the number of CPU units that are available per EC2 instance type\n\t\t\t\tby multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page\n\t\t\t\tby 1,024.
\nLinux containers share unallocated CPU units with other containers on the container\n\t\t\tinstance with the same ratio as their allocated amount. For example, if you run a\n\t\t\tsingle-container task on a single-core instance type with 512 CPU units specified for\n\t\t\tthat container, and that's the only task running on the container instance, that\n\t\t\tcontainer could use the full 1,024 CPU unit share at any given time. However, if you\n\t\t\tlaunched another copy of the same task on that container instance, each task is\n\t\t\tguaranteed a minimum of 512 CPU units when needed. Moreover, each container could float\n\t\t\tto higher CPU usage if the other container was not using it. If both tasks were 100%\n\t\t\tactive all of the time, they would be limited to 512 CPU units.
\nOn Linux container instances, the Docker daemon on the container instance uses the CPU\n\t\t\tvalue to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value\n\t\t\tthat the Linux kernel allows is 2, and the\n\t\t\tmaximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you\n\t\t\tcan use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2\n\t\t\t(including null) or above 262144, the behavior varies based on your Amazon ECS container agent\n\t\t\tversion:
\n\n Agent versions less than or equal to 1.1.0:\n\t\t\t\t\tNull and zero CPU values are passed to Docker as 0, which Docker then converts\n\t\t\t\t\tto 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux\n\t\t\t\t\tkernel converts to two CPU shares.
\n\n Agent versions greater than or equal to 1.2.0:\n\t\t\t\t\tNull, zero, and CPU values of 1 are passed to Docker as 2.
\n\n Agent versions greater than or equal to\n\t\t\t\t\t\t1.84.0: CPU values greater than 256 vCPU are passed to Docker as\n\t\t\t\t\t256, which is equivalent to 262144 CPU shares.
\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a\n\t\t\tquota. Windows containers only have access to the specified amount of CPU that's\n\t\t\tdescribed in the task definition. A null or zero CPU value is passed to Docker as\n\t\t\t\t0
, which Windows interprets as 1% of one CPU.
The amount (in MiB) of memory to present to the container. If your container attempts\n\t\t\tto exceed the memory specified here, the container is killed. The total amount of memory\n\t\t\treserved for all containers within a task must be lower than the task\n\t\t\t\tmemory
value, if one is specified. This parameter maps to\n\t\t\t\tMemory
in the Create a container section of the\n\t\t\tDocker Remote API and the --memory
option to docker run.
If using the Fargate launch type, this parameter is optional.
\nIf using the EC2 launch type, you must specify either a task-level\n\t\t\tmemory value or a container-level memory value. If you specify both a container-level\n\t\t\t\tmemory
and memoryReservation
value, memory
\n\t\t\tmust be greater than memoryReservation
. If you specify\n\t\t\t\tmemoryReservation
, then that value is subtracted from the available\n\t\t\tmemory resources for the container instance where the container is placed. Otherwise,\n\t\t\tthe value of memory
is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" + "smithy.api#documentation": "The amount (in MiB) of memory to present to the container. If your container attempts\n\t\t\tto exceed the memory specified here, the container is killed. The total amount of memory\n\t\t\treserved for all containers within a task must be lower than the task\n\t\t\t\tmemory
value, if one is specified. This parameter maps to\n\t\t\tMemory
in the docker create-container command and the --memory
option to docker run.
If using the Fargate launch type, this parameter is optional.
\nIf using the EC2 launch type, you must specify either a task-level\n\t\t\tmemory value or a container-level memory value. If you specify both a container-level\n\t\t\t\tmemory
and memoryReservation
value, memory
\n\t\t\tmust be greater than memoryReservation
. If you specify\n\t\t\t\tmemoryReservation
, then that value is subtracted from the available\n\t\t\tmemory resources for the container instance where the container is placed. Otherwise,\n\t\t\tthe value of memory
is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" } }, "memoryReservation": { "target": "com.amazonaws.ecs#BoxedInteger", "traits": { - "smithy.api#documentation": "The soft limit (in MiB) of memory to reserve for the container. When system memory is\n\t\t\tunder heavy contention, Docker attempts to keep the container memory to this soft limit.\n\t\t\tHowever, your container can consume more memory when it needs to, up to either the hard\n\t\t\tlimit specified with the memory
parameter (if applicable), or all of the\n\t\t\tavailable memory on the container instance, whichever comes first. This parameter maps\n\t\t\tto MemoryReservation
in the Create a container section of\n\t\t\tthe Docker Remote API and the --memory-reservation
option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for\n\t\t\tone or both of memory
or memoryReservation
in a container\n\t\t\tdefinition. If you specify both, memory
must be greater than\n\t\t\t\tmemoryReservation
. If you specify memoryReservation
, then\n\t\t\tthat value is subtracted from the available memory resources for the container instance\n\t\t\twhere the container is placed. Otherwise, the value of memory
is\n\t\t\tused.
For example, if your container normally uses 128 MiB of memory, but occasionally\n\t\t\tbursts to 256 MiB of memory for short periods of time, you can set a\n\t\t\t\tmemoryReservation
of 128 MiB, and a memory
hard limit of\n\t\t\t300 MiB. This configuration would allow the container to only reserve 128 MiB of memory\n\t\t\tfrom the remaining resources on the container instance, but also allow the container to\n\t\t\tconsume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
" + "smithy.api#documentation": "The soft limit (in MiB) of memory to reserve for the container. When system memory is\n\t\t\tunder heavy contention, Docker attempts to keep the container memory to this soft limit.\n\t\t\tHowever, your container can consume more memory when it needs to, up to either the hard\n\t\t\tlimit specified with the memory
parameter (if applicable), or all of the\n\t\t\tavailable memory on the container instance, whichever comes first. This parameter maps\n\t\t\tto MemoryReservation
in the docker create-container command and the --memory-reservation
option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for\n\t\t\tone or both of memory
or memoryReservation
in a container\n\t\t\tdefinition. If you specify both, memory
must be greater than\n\t\t\t\tmemoryReservation
. If you specify memoryReservation
, then\n\t\t\tthat value is subtracted from the available memory resources for the container instance\n\t\t\twhere the container is placed. Otherwise, the value of memory
is\n\t\t\tused.
For example, if your container normally uses 128 MiB of memory, but occasionally\n\t\t\tbursts to 256 MiB of memory for short periods of time, you can set a\n\t\t\t\tmemoryReservation
of 128 MiB, and a memory
hard limit of\n\t\t\t300 MiB. This configuration would allow the container to only reserve 128 MiB of memory\n\t\t\tfrom the remaining resources on the container instance, but also allow the container to\n\t\t\tconsume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 6 MiB of memory for your containers.
\nThe Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a\n\t\t\tcontainer. So, don't specify less than 4 MiB of memory for your containers.
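The memory / memoryReservation relationship documented above (the hard limit must exceed the soft limit when both are set) can be sketched in the SDK's own language. The field names mirror the ECS container definition; `isValidMemoryConfig` is a hypothetical helper for illustration, not an SDK or service API.

```typescript
// Illustrative sketch of the memory settings described above. Field names
// mirror the ECS container definition; isValidMemoryConfig is hypothetical.
interface ContainerMemorySettings {
  memory?: number;            // hard limit, in MiB
  memoryReservation?: number; // soft limit, in MiB
}

function isValidMemoryConfig(
  c: ContainerMemorySettings,
  taskLevelMemory?: number
): boolean {
  // When both values are set, memory must be strictly greater than memoryReservation.
  if (c.memory !== undefined && c.memoryReservation !== undefined) {
    return c.memory > c.memoryReservation;
  }
  // On the EC2 launch type, at least one of the two must be set at the
  // container level unless a task-level memory value is specified.
  return (
    taskLevelMemory !== undefined ||
    c.memory !== undefined ||
    c.memoryReservation !== undefined
  );
}

// The docs' own example: reserve 128 MiB, allow bursts up to a 300 MiB hard limit.
const burstFriendly: ContainerMemorySettings = { memory: 300, memoryReservation: 128 };
console.log(isValidMemoryConfig(burstFriendly)); // true
```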
" } }, "links": { "target": "com.amazonaws.ecs#StringList", "traits": { - "smithy.api#documentation": "The links
parameter allows containers to communicate with each other\n\t\t\twithout the need for port mappings. This parameter is only supported if the network mode\n\t\t\tof a task definition is bridge
. The name:internalName
\n\t\t\tconstruct is analogous to name:alias
in Docker links.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. For more information about linking Docker containers, go to\n\t\t\t\tLegacy container links\n\t\t\tin the Docker documentation. This parameter maps to Links
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--link
option to docker\n\t\t\trun.
This parameter is not supported for Windows containers.
\nContainers that are collocated on a single container instance may be able to\n\t\t\t\tcommunicate with each other without requiring links or host port mappings. Network\n\t\t\t\tisolation is achieved on the container instance using security groups and VPC\n\t\t\t\tsettings.
\nThe links
parameter allows containers to communicate with each other\n\t\t\twithout the need for port mappings. This parameter is only supported if the network mode\n\t\t\tof a task definition is bridge
. The name:internalName
\n\t\t\tconstruct is analogous to name:alias
in Docker links.\n\t\t\tUp to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links
in the docker create-container command and the\n\t\t\t\t--link
option to docker\n\t\t\trun.
This parameter is not supported for Windows containers.
\nContainers that are collocated on a single container instance may be able to\n\t\t\t\tcommunicate with each other without requiring links or host port mappings. Network\n\t\t\t\tisolation is achieved on the container instance using security groups and VPC\n\t\t\t\tsettings.
\nThe list of port mappings for the container. Port mappings allow containers to access\n\t\t\tports on the host container instance to send or receive traffic.
\nFor task definitions that use the awsvpc
network mode, only specify the\n\t\t\t\tcontainerPort
. The hostPort
can be left blank or it must\n\t\t\tbe the same value as the containerPort
.
Port mappings on Windows use the NetNAT
gateway address rather than\n\t\t\t\tlocalhost
. There's no loopback for port mappings on Windows, so you\n\t\t\tcan't access a container's mapped port from the host itself.
This parameter maps to PortBindings
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--publish
option to docker\n\t\t\t\trun. If the network mode of a task definition is set to none
,\n\t\t\tthen you can't specify port mappings. If the network mode of a task definition is set to\n\t\t\t\thost
, then host ports must either be undefined or they must match the\n\t\t\tcontainer port in the port mapping.
After a task reaches the RUNNING
status, manual and automatic host\n\t\t\t\tand container port assignments are visible in the Network\n\t\t\t\t\tBindings section of a container description for a selected task in\n\t\t\t\tthe Amazon ECS console. The assignments are also visible in the\n\t\t\t\t\tnetworkBindings
section DescribeTasks\n\t\t\t\tresponses.
The list of port mappings for the container. Port mappings allow containers to access\n\t\t\tports on the host container instance to send or receive traffic.
\nFor task definitions that use the awsvpc
network mode, only specify the\n\t\t\t\tcontainerPort
. The hostPort
can be left blank or it must\n\t\t\tbe the same value as the containerPort
.
Port mappings on Windows use the NetNAT
gateway address rather than\n\t\t\t\tlocalhost
. There's no loopback for port mappings on Windows, so you\n\t\t\tcan't access a container's mapped port from the host itself.
This parameter maps to PortBindings
in the docker create-container command and the\n\t\t\t\t--publish
option to docker\n\t\t\t\trun. If the network mode of a task definition is set to none
,\n\t\t\tthen you can't specify port mappings. If the network mode of a task definition is set to\n\t\t\t\thost
, then host ports must either be undefined or they must match the\n\t\t\tcontainer port in the port mapping.
After a task reaches the RUNNING
status, manual and automatic host\n\t\t\t\tand container port assignments are visible in the Network\n\t\t\t\t\tBindings section of a container description for a selected task in\n\t\t\t\tthe Amazon ECS console. The assignments are also visible in the\n\t\t\t\t\tnetworkBindings
section of DescribeTasks\n\t\t\t\tresponses.
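The awsvpc port-mapping rule stated above (leave hostPort blank, or make it equal to containerPort) is easy to capture as a check. The shape follows the ECS PortMapping member; the validation function is illustrative, not SDK code.

```typescript
// Illustrative check for the awsvpc rule described above: hostPort may be
// left unset, or it must equal containerPort. Not a real SDK API.
interface PortMapping {
  containerPort: number;
  hostPort?: number;
  protocol?: "tcp" | "udp";
}

function isValidAwsvpcMapping(m: PortMapping): boolean {
  return m.hostPort === undefined || m.hostPort === m.containerPort;
}

const web: PortMapping = { containerPort: 80, protocol: "tcp" };
console.log(isValidAwsvpcMapping(web)); // true: hostPort left blank
```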
If the essential
parameter of a container is marked as true
,\n\t\t\tand that container fails or stops for any reason, all other containers that are part of\n\t\t\tthe task are stopped. If the essential
parameter of a container is marked\n\t\t\tas false
, its failure doesn't affect the rest of the containers in a task.\n\t\t\tIf this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application\n\t\t\tthat's composed of multiple containers, group containers that are used for a common\n\t\t\tpurpose into components, and separate the different components into multiple task\n\t\t\tdefinitions. For more information, see Application\n\t\t\t\tArchitecture in the Amazon Elastic Container Service Developer Guide.
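The essential-container rule above (an essential container stopping stops the whole task, a non-essential one doesn't) can be made concrete with a small sketch. The status shape and helper below are hypothetical illustrations of the documented behavior, not SDK types.

```typescript
// Sketch of the essential-container rule above: if any essential container
// stops, the whole task stops. Shapes are illustrative, not SDK types.
interface ContainerStatus {
  name: string;
  essential: boolean; // defaults to true when omitted in a real definition
  running: boolean;
}

// Hypothetical helper: a task keeps running only while every essential
// container is still running.
function taskShouldStop(containers: ContainerStatus[]): boolean {
  return containers.some((c) => c.essential && !c.running);
}

const task: ContainerStatus[] = [
  { name: "web", essential: true, running: true },
  { name: "log-router", essential: false, running: false }, // non-essential failure
];
console.log(taskShouldStop(task)); // false: only a non-essential container stopped
```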
" } }, + "restartPolicy": { + "target": "com.amazonaws.ecs#ContainerRestartPolicy", + "traits": { + "smithy.api#documentation": "The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the\n\t\t\ttask. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
" + } + }, "entryPoint": { "target": "com.amazonaws.ecs#StringList", "traits": { - "smithy.api#documentation": "Early versions of the Amazon ECS container agent don't properly handle\n\t\t\t\t\tentryPoint
parameters. If you have problems using\n\t\t\t\t\tentryPoint
, update your container agent or enter your commands and\n\t\t\t\targuments as command
array items instead.
The entry point that's passed to the container. This parameter maps to\n\t\t\t\tEntrypoint
in the Create a container section of the\n\t\t\tDocker Remote API and the --entrypoint
option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.
Early versions of the Amazon ECS container agent don't properly handle\n\t\t\t\t\tentryPoint
parameters. If you have problems using\n\t\t\t\t\tentryPoint
, update your container agent or enter your commands and\n\t\t\t\targuments as command
array items instead.
The entry point that's passed to the container. This parameter maps to\n\t\t\tEntrypoint
in the docker create-container command and the --entrypoint
option to docker run.
The command that's passed to the container. This parameter maps to Cmd
in\n\t\t\tthe Create a container section of the Docker Remote API and the\n\t\t\t\tCOMMAND
parameter to docker\n\t\t\t\trun. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each\n\t\t\targument is a separated string in the array.
The command that's passed to the container. This parameter maps to Cmd
in\n\t\t\tthe docker create-container command and the\n\t\t\t\tCOMMAND
parameter to docker\n\t\t\t\trun. If there are multiple arguments, each\n\t\t\targument is a separate string in the array.
The environment variables to pass to a container. This parameter maps to\n\t\t\t\tEnv
in the Create a container section of the\n\t\t\tDocker Remote API and the --env
option to docker run.
We don't recommend that you use plaintext environment variables for sensitive\n\t\t\t\tinformation, such as credential data.
\nThe environment variables to pass to a container. This parameter maps to\n\t\t\tEnv
in the docker create-container command and the --env
option to docker run.
We don't recommend that you use plaintext environment variables for sensitive\n\t\t\t\tinformation, such as credential data.
\nA list of files containing the environment variables to pass to a container. This\n\t\t\tparameter maps to the --env-file
option to docker run.
You can specify up to ten environment files. The file must have a .env
\n\t\t\tfile extension. Each line in an environment file contains an environment variable in\n\t\t\t\tVARIABLE=VALUE
format. Lines beginning with #
are treated\n\t\t\tas comments and are ignored. For more information about the environment variable file\n\t\t\tsyntax, see Declare default\n\t\t\t\tenvironment variables in file.
If there are environment variables specified using the environment
\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Specifying Environment\n\t\t\t\tVariables in the Amazon Elastic Container Service Developer Guide.
A list of files containing the environment variables to pass to a container. This\n\t\t\tparameter maps to the --env-file
option to docker run.
You can specify up to ten environment files. The file must have a .env
\n\t\t\tfile extension. Each line in an environment file contains an environment variable in\n\t\t\t\tVARIABLE=VALUE
format. Lines beginning with #
are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment
\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Specifying Environment\n\t\t\t\tVariables in the Amazon Elastic Container Service Developer Guide.
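The precedence rule above (variables from the environment parameter win over ones loaded from environment files) amounts to a plain merge. This is an illustration of the documented behavior, not SDK code.

```typescript
// Illustrative merge implementing the precedence described above: environment
// files are applied first, then the container definition's environment
// variables override any duplicates.
type EnvMap = Record<string, string>;

function resolveEnvironment(fromFiles: EnvMap, fromDefinition: EnvMap): EnvMap {
  return { ...fromFiles, ...fromDefinition };
}

const fileVars = { LOG_LEVEL: "debug", REGION: "us-east-1" };
const definitionVars = { LOG_LEVEL: "info" }; // overrides the file value
console.log(resolveEnvironment(fileVars, definitionVars).LOG_LEVEL); // "info"
```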
The mount points for data volumes in your container.
\nThis parameter maps to Volumes
in the Create a container\n\t\t\tsection of the Docker Remote API and the --volume
option to docker run.
Windows containers can mount whole directories on the same drive as\n\t\t\t\t$env:ProgramData
. Windows containers can't mount directories on a\n\t\t\tdifferent drive, and mount point can't be across drives.
The mount points for data volumes in your container.
\nThis parameter maps to Volumes
in the docker create-container command and the --volume
option to docker run.
Windows containers can mount whole directories on the same drive as\n\t\t\t\t$env:ProgramData
. Windows containers can't mount directories on a\n\t\t\tdifferent drive, and a mount point can't span drives.
Data volumes to mount from another container. This parameter maps to\n\t\t\t\tVolumesFrom
in the Create a container section of the\n\t\t\tDocker Remote API and the --volumes-from
option to docker run.
Data volumes to mount from another container. This parameter maps to\n\t\t\tVolumesFrom
in the docker create-container command and the --volumes-from
option to docker run.
Time duration (in seconds) to wait before giving up on resolving dependencies for a\n\t\t\tcontainer. For example, you specify two containers in a task definition with containerA\n\t\t\thaving a dependency on containerB reaching a COMPLETE
,\n\t\t\tSUCCESS
, or HEALTHY
status. If a startTimeout
\n\t\t\tvalue is specified for containerB and it doesn't reach the desired status within that\n\t\t\ttime then containerA gives up and not start. This results in the task transitioning to a\n\t\t\t\tSTOPPED
state.
When the ECS_CONTAINER_START_TIMEOUT
container agent configuration\n\t\t\t\tvariable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires\n\t\t\tthe following platforms:
\nLinux platform version 1.3.0
or later.
Windows platform version 1.0.0
or later.
For tasks using the EC2 launch type, your container instances require at\n\t\t\tleast version 1.26.0
of the container agent to use a container start\n\t\t\ttimeout value. However, we recommend using the latest container agent version. For\n\t\t\tinformation about checking your agent version and updating to the latest version, see\n\t\t\t\tUpdating the Amazon ECS\n\t\t\t\tContainer Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI,\n\t\t\tyour instance needs at least version 1.26.0-1
of the ecs-init
\n\t\t\tpackage. If your container instances are launched from version 20190301
or\n\t\t\tlater, then they contain the required versions of the container agent and\n\t\t\t\tecs-init
. For more information, see Amazon ECS-optimized Linux AMI\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The valid values are 2-120 seconds.
" + "smithy.api#documentation": "Time duration (in seconds) to wait before giving up on resolving dependencies for a\n\t\t\tcontainer. For example, you specify two containers in a task definition with containerA\n\t\t\thaving a dependency on containerB reaching a COMPLETE
,\n\t\t\tSUCCESS
, or HEALTHY
status. If a startTimeout
\n\t\t\tvalue is specified for containerB and it doesn't reach the desired status within that\n\t\t\ttime, then containerA gives up and doesn't start. This results in the task transitioning to a\n\t\t\t\tSTOPPED
state.
When the ECS_CONTAINER_START_TIMEOUT
container agent configuration\n\t\t\t\tvariable is used, it's enforced independently from this start timeout value.
For tasks using the Fargate launch type, the task or service requires\n\t\t\tthe following platforms:
\nLinux platform version 1.3.0
or later.
Windows platform version 1.0.0
or later.
For tasks using the EC2 launch type, your container instances require at\n\t\t\tleast version 1.26.0
of the container agent to use a container start\n\t\t\ttimeout value. However, we recommend using the latest container agent version. For\n\t\t\tinformation about checking your agent version and updating to the latest version, see\n\t\t\t\tUpdating the Amazon ECS\n\t\t\t\tContainer Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI,\n\t\t\tyour instance needs at least version 1.26.0-1
of the ecs-init
\n\t\t\tpackage. If your container instances are launched from version 20190301
or\n\t\t\tlater, then they contain the required versions of the container agent and\n\t\t\t\tecs-init
. For more information, see Amazon ECS-optimized Linux AMI\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
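The start-timeout behavior above (containerA waits for containerB to reach its desired status, then gives up) can be sketched with a dependency entry plus a range check. The dependsOn shape follows the ECS container definition; the validation helper and the 2-120 range check (the Fargate range quoted above) are illustrative.

```typescript
// Illustrative validation of the Fargate startTimeout range stated above
// (2-120 seconds). Not an SDK helper.
function isValidFargateStartTimeout(seconds: number): boolean {
  return Number.isInteger(seconds) && seconds >= 2 && seconds <= 120;
}

// Sketch of a dependency as it would appear in a container definition:
// containerA waits for containerB to become HEALTHY, giving up after 90 s.
const containerA = {
  name: "containerA",
  dependsOn: [{ containerName: "containerB", condition: "HEALTHY" }],
  startTimeout: 90,
};
console.log(isValidFargateStartTimeout(containerA.startTimeout)); // true
```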
" } }, "stopTimeout": { @@ -2400,103 +2406,103 @@ "hostname": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The hostname to use for your container. This parameter maps to Hostname
\n\t\t\tin the Create a container section of the Docker Remote API and the\n\t\t\t\t--hostname
option to docker\n\t\t\t\trun.
The hostname
parameter is not supported if you're using the\n\t\t\t\t\tawsvpc
network mode.
The hostname to use for your container. This parameter maps to Hostname
\n\t\t\tin the docker create-container command and the\n\t\t\t\t--hostname
option to docker\n\t\t\t\trun.
The hostname
parameter is not supported if you're using the\n\t\t\t\t\tawsvpc
network mode.
The user to use inside the container. This parameter maps to User
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--user
option to docker\n\t\t\trun.
When running tasks using the host
network mode, don't run containers\n\t\t\t\tusing the root user (UID 0). We recommend using a non-root user for better\n\t\t\t\tsecurity.
You can specify the user
using the following formats. If specifying a UID\n\t\t\tor GID, you must specify it as a positive integer.
\n user
\n
\n user:group
\n
\n uid
\n
\n uid:gid
\n
\n user:gid
\n
\n uid:group
\n
This parameter is not supported for Windows containers.
\nThe user to use inside the container. This parameter maps to User
in the docker create-container command and the\n\t\t\t\t--user
option to docker\n\t\t\trun.
When running tasks using the host
network mode, don't run containers\n\t\t\t\tusing the root user (UID 0). We recommend using a non-root user for better\n\t\t\t\tsecurity.
You can specify the user
using the following formats. If specifying a UID\n\t\t\tor GID, you must specify it as a positive integer.
\n user
\n
\n user:group
\n
\n uid
\n
\n uid:gid
\n
\n user:gid
\n
\n uid:group
\n
This parameter is not supported for Windows containers.
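The accepted user formats listed above (user, user:group, uid, uid:gid, user:gid, uid:group) all reduce to "name or numeric ID, optionally followed by a colon and another name or numeric ID". The regex below is an illustrative sketch and may be stricter or looser than the service's actual validation.

```typescript
// Illustrative pattern for the user formats listed above: a user name or
// numeric UID, optionally followed by ":" and a group name or numeric GID.
// This is a sketch, not the service's real validation.
const USER_PATTERN = /^[A-Za-z0-9_.-]+(:[A-Za-z0-9_.-]+)?$/;

function isPlausibleUser(value: string): boolean {
  return USER_PATTERN.test(value);
}

console.log(isPlausibleUser("1000:1000")); // true (uid:gid)
console.log(isPlausibleUser("nobody"));    // true (user)
```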
\nThe working directory to run commands inside the container in. This parameter maps to\n\t\t\t\tWorkingDir
in the Create a container section of the\n\t\t\tDocker Remote API and the --workdir
option to docker run.
The working directory to run commands inside the container in. This parameter maps to\n\t\t\tWorkingDir
in the docker create-container command and the --workdir
option to docker run.
When this parameter is true, networking is off within the container. This parameter\n\t\t\tmaps to NetworkDisabled
in the Create a container section\n\t\t\tof the Docker Remote API.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, networking is off within the container. This parameter\n\t\t\tmaps to NetworkDisabled
in the docker create-container command.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, the container is given elevated privileges on the host\n\t\t\tcontainer instance (similar to the root
user). This parameter maps to\n\t\t\t\tPrivileged
in the Create a container section of the\n\t\t\tDocker Remote API and the --privileged
option to docker run.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nWhen this parameter is true, the container is given elevated privileges on the host\n\t\t\tcontainer instance (similar to the root
user). This parameter maps to\n\t\t\tPrivileged
in the docker create-container command and the --privileged
option to docker run.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nWhen this parameter is true, the container is given read-only access to its root file\n\t\t\tsystem. This parameter maps to ReadonlyRootfs
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--read-only
option to docker\n\t\t\t\trun.
This parameter is not supported for Windows containers.
\nWhen this parameter is true, the container is given read-only access to its root file\n\t\t\tsystem. This parameter maps to ReadonlyRootfs
in the docker create-container command and the\n\t\t\t\t--read-only
option to docker\n\t\t\t\trun.
This parameter is not supported for Windows containers.
\nA list of DNS servers that are presented to the container. This parameter maps to\n\t\t\t\tDns
in the Create a container section of the\n\t\t\tDocker Remote API and the --dns
option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS servers that are presented to the container. This parameter maps to\n\t\t\tDns
in the docker create-container command and the --dns
option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS search domains that are presented to the container. This parameter maps\n\t\t\tto DnsSearch
in the Create a container section of the\n\t\t\tDocker Remote API and the --dns-search
option to docker run.
This parameter is not supported for Windows containers.
\nA list of DNS search domains that are presented to the container. This parameter maps\n\t\t\tto DnsSearch
in the docker create-container command and the --dns-search
option to docker run.
This parameter is not supported for Windows containers.
\nA list of hostnames and IP address mappings to append to the /etc/hosts
\n\t\t\tfile on the container. This parameter maps to ExtraHosts
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--add-host
option to docker\n\t\t\t\trun.
This parameter isn't supported for Windows containers or tasks that use the\n\t\t\t\t\tawsvpc
network mode.
A list of hostnames and IP address mappings to append to the /etc/hosts
\n\t\t\tfile on the container. This parameter maps to ExtraHosts
in the docker create-container command and the\n\t\t\t\t--add-host
option to docker\n\t\t\t\trun.
This parameter isn't supported for Windows containers or tasks that use the\n\t\t\t\t\tawsvpc
network mode.
A list of strings to provide custom configuration for multiple security systems. For\n\t\t\tmore information about valid values, see Docker\n\t\t\t\tRun Security Configuration. This field isn't valid for containers in tasks\n\t\t\tusing the Fargate launch type.
\nFor Linux tasks on EC2, this parameter can be used to reference custom\n\t\t\tlabels for SELinux and AppArmor multi-level security systems.
\nFor any tasks on EC2, this parameter can be used to reference a\n\t\t\tcredential spec file that configures a container for Active Directory authentication.\n\t\t\tFor more information, see Using gMSAs for Windows\n\t\t\t\tContainers and Using gMSAs for Linux\n\t\t\t\tContainers in the Amazon Elastic Container Service Developer Guide.
\nThis parameter maps to SecurityOpt
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--security-opt
option to docker\n\t\t\t\trun.
The Amazon ECS container agent running on a container instance must register with the\n\t\t\t\t\tECS_SELINUX_CAPABLE=true
or ECS_APPARMOR_CAPABLE=true
\n\t\t\t\tenvironment variables before containers placed on that instance can use these\n\t\t\t\tsecurity options. For more information, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
For more information about valid values, see Docker\n\t\t\t\tRun Security Configuration.
\nValid values: \"no-new-privileges\" | \"apparmor:PROFILE\" | \"label:value\" |\n\t\t\t\"credentialspec:CredentialSpecFilePath\"
" + "smithy.api#documentation": "A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks\n\t\t\tusing the Fargate launch type.
\nFor Linux tasks on EC2, this parameter can be used to reference custom\n\t\t\tlabels for SELinux and AppArmor multi-level security systems.
\nFor any tasks on EC2, this parameter can be used to reference a\n\t\t\tcredential spec file that configures a container for Active Directory authentication.\n\t\t\tFor more information, see Using gMSAs for Windows\n\t\t\t\tContainers and Using gMSAs for Linux\n\t\t\t\tContainers in the Amazon Elastic Container Service Developer Guide.
\nThis parameter maps to SecurityOpt
in the docker create-container command and the\n\t\t\t\t--security-opt
option to docker\n\t\t\t\trun.
The Amazon ECS container agent running on a container instance must register with the\n\t\t\t\t\tECS_SELINUX_CAPABLE=true
or ECS_APPARMOR_CAPABLE=true
\n\t\t\t\tenvironment variables before containers placed on that instance can use these\n\t\t\t\tsecurity options. For more information, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
Valid values: \"no-new-privileges\" | \"apparmor:PROFILE\" | \"label:value\" |\n\t\t\t\"credentialspec:CredentialSpecFilePath\"
" } }, "interactive": { "target": "com.amazonaws.ecs#BoxedBoolean", "traits": { - "smithy.api#documentation": "When this parameter is true
, you can deploy containerized applications\n\t\t\tthat require stdin
or a tty
to be allocated. This parameter\n\t\t\tmaps to OpenStdin
in the Create a container section of the\n\t\t\tDocker Remote API and the --interactive
option to docker run.
When this parameter is true
, you can deploy containerized applications\n\t\t\tthat require stdin
or a tty
to be allocated. This parameter\n\t\t\tmaps to OpenStdin
in the docker create-container command and the --interactive
option to docker run.
When this parameter is true
, a TTY is allocated. This parameter maps to\n\t\t\t\tTty
in the Create a container section of the\n\t\t\tDocker Remote API and the --tty
option to docker run.
When this parameter is true
, a TTY is allocated. This parameter maps to\n\t\t\tTty
in the docker create-container command and the --tty
option to docker run.
A key/value map of labels to add to the container. This parameter maps to\n\t\t\t\tLabels
in the Create a container section of the\n\t\t\tDocker Remote API and the --label
option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
A key/value map of labels to add to the container. This parameter maps to\n\t\t\tLabels
in the docker create-container command and the --label
option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
A list of ulimits
to set in the container. If a ulimit
value\n\t\t\tis specified in a task definition, it overrides the default values set by Docker. This\n\t\t\tparameter maps to Ulimits
in the Create a container section\n\t\t\tof the Docker Remote API and the --ulimit
option to docker run. Valid naming values are displayed\n\t\t\tin the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile
resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile
resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile
soft limit is 1024
and the default hard limit\n\t\t\t\t\t\t\tis 65535
.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
This parameter is not supported for Windows containers.
\nA list of ulimits
to set in the container. If a ulimit
value\n\t\t\tis specified in a task definition, it overrides the default values set by Docker. This\n\t\t\tparameter maps to Ulimits
in tthe docker create-container command and the --ulimit
option to docker run. Valid naming values are displayed\n\t\t\tin the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile
resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile
resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile
soft limit is 65535
and the default hard limit\n\t\t\t\t\t\t\tis 65535
.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
This parameter is not supported for Windows containers.
\nThe log configuration specification for the container.
\nThis parameter maps to LogConfig
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--log-driver
option to docker\n\t\t\t\trun. By default, containers use the same logging driver that the Docker\n\t\t\tdaemon uses. However the container can use a different logging driver than the Docker\n\t\t\tdaemon by specifying a log driver with this parameter in the container definition. To\n\t\t\tuse a different logging driver for a container, the log system must be configured\n\t\t\tproperly on the container instance (or on a different log server for remote logging\n\t\t\toptions). For more information about the options for different supported log drivers,\n\t\t\tsee Configure\n\t\t\t\tlogging drivers in the Docker documentation.
Amazon ECS currently supports a subset of the logging drivers available to the Docker\n\t\t\t\tdaemon (shown in the LogConfiguration data type). Additional log\n\t\t\t\tdrivers may be available in future releases of the Amazon ECS container agent.
\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
The Amazon ECS container agent running on a container instance must register the\n\t\t\t\tlogging drivers available on that instance with the\n\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS
environment variable before\n\t\t\t\tcontainers placed on that instance can use these log configuration options. For more\n\t\t\t\tinformation, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
The log configuration specification for the container.
\nThis parameter maps to LogConfig
in the docker create-container command and the\n\t\t\t\t--log-driver
option to docker\n\t\t\t\trun. By default, containers use the same logging driver that the Docker\n\t\t\tdaemon uses. However, the container can use a different logging driver than the Docker\n\t\t\tdaemon by specifying a log driver with this parameter in the container definition. To\n\t\t\tuse a different logging driver for a container, the log system must be configured\n\t\t\tproperly on the container instance (or on a different log server for remote logging\n\t\t\toptions).
Amazon ECS currently supports a subset of the logging drivers available to the Docker\n\t\t\t\tdaemon (shown in the LogConfiguration data type). Additional log\n\t\t\t\tdrivers may be available in future releases of the Amazon ECS container agent.
\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
The Amazon ECS container agent running on a container instance must register the\n\t\t\t\tlogging drivers available on that instance with the\n\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS
environment variable before\n\t\t\t\tcontainers placed on that instance can use these log configuration options. For more\n\t\t\t\tinformation, see Amazon ECS Container\n\t\t\t\t\tAgent Configuration in the Amazon Elastic Container Service Developer Guide.
The container health check command and associated configuration parameters for the\n\t\t\tcontainer. This parameter maps to HealthCheck
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\tHEALTHCHECK
parameter of docker\n\t\t\t\trun.
The container health check command and associated configuration parameters for the\n\t\t\tcontainer. This parameter maps to HealthCheck
in the docker create-container command and the\n\t\t\t\tHEALTHCHECK
parameter of docker\n\t\t\t\trun.
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\t\tSysctls
in the Create a container section of the\n\t\t\tDocker Remote API and the --sysctl
option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time
setting to maintain longer lived\n\t\t\tconnections.
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\tSysctls
in the docker create-container command and the --sysctl
option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time
setting to maintain longer lived\n\t\t\tconnections.
Specifies whether a restart policy is enabled for the\n\t\t\tcontainer.
", + "smithy.api#required": {} + } + }, + "ignoredExitCodes": { + "target": "com.amazonaws.ecs#IntegerList", + "traits": { + "smithy.api#documentation": "A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit\n\t\t\tcodes. By default, Amazon ECS does not ignore\n\t\t\tany exit codes.
" + } + }, + "restartAttemptPeriod": { + "target": "com.amazonaws.ecs#BoxedInteger", + "traits": { + "smithy.api#documentation": "A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be\n\t\t\trestarted only once every restartAttemptPeriod
seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum\n\t\t\trestartAttemptPeriod
of 60 seconds and a maximum restartAttemptPeriod
of 1800 seconds.\n\t\t\tBy default, a container must run for 300 seconds before it can be restarted.
You can enable a restart policy for each container defined in your\n\t\t\ttask definition, to overcome transient failures faster and maintain task availability. When you\n\t\t\tenable a restart policy for a container, Amazon ECS can restart the container if it exits, without needing to replace\n\t\t\tthe task. For more information, see Restart individual containers\n\t\t\t\tin Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
" + } + }, "com.amazonaws.ecs#ContainerStateChange": { "type": "structure", "members": { @@ -3103,7 +3136,7 @@ } ], "traits": { - "smithy.api#documentation": "Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount
,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nIn addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
\nYou can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations
is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more infomation, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING
state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING
state and are reported as\n\t\t\thealthy by the load balancer.
There are two service scheduler strategies available:
\n\n REPLICA
- The replica scheduling strategy places and\n\t\t\t\t\tmaintains your desired number of tasks across your cluster. By default, the\n\t\t\t\t\tservice scheduler spreads tasks across Availability Zones. You can use task\n\t\t\t\t\tplacement strategies and constraints to customize task placement decisions. For\n\t\t\t\t\tmore information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
\n DAEMON
- The daemon scheduling strategy deploys exactly one\n\t\t\t\t\ttask on each active container instance that meets all of the task placement\n\t\t\t\t\tconstraints that you specify in your cluster. The service scheduler also\n\t\t\t\t\tevaluates the task placement constraints for running tasks. It also stops tasks\n\t\t\t\t\tthat don't meet the placement constraints. When using this strategy, you don't\n\t\t\t\t\tneed to specify a desired number of tasks, a task placement strategy, or use\n\t\t\t\t\tService Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent
is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent
is 0%.
If a service uses the ECS
deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING
state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING
state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING
state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy . If they're in the\n\t\t\t\tRUNNING
state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy . The\n\t\t\tdefault value for minimum healthy percent is 100%.
If a service uses the ECS
deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING
or\n\t\t\t\tPENDING
state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING
state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.
If a service uses either the CODE_DEPLOY
or EXTERNAL
\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING
state.\n\t\t\tThis is while the container instances are in the DRAINING
state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For information\n\t\t\tabout task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n
\nStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
", + "smithy.api#documentation": "Runs and maintains your desired number of tasks from a specified task definition. If\n\t\t\tthe number of tasks running in a service drops below the desiredCount
,\n\t\t\tAmazon ECS runs another copy of the task in the specified cluster. To update an existing\n\t\t\tservice, see the UpdateService action.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nIn addition to maintaining the desired count of tasks in your service, you can\n\t\t\toptionally run your service behind one or more load balancers. The load balancers\n\t\t\tdistribute traffic across the tasks that are associated with the service. For more\n\t\t\tinformation, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
\nYou can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or\n\t\t\tupdating a service. volumeConfigurations
is only supported for REPLICA\n\t\t\tservice and not DAEMON service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in\n\t\t\tthe RUNNING
state. Tasks for services that use a load balancer are\n\t\t\tconsidered healthy if they're in the RUNNING
state and are reported as\n\t\t\thealthy by the load balancer.
There are two service scheduler strategies available:
\n\n REPLICA
- The replica scheduling strategy places and\n\t\t\t\t\tmaintains your desired number of tasks across your cluster. By default, the\n\t\t\t\t\tservice scheduler spreads tasks across Availability Zones. You can use task\n\t\t\t\t\tplacement strategies and constraints to customize task placement decisions. For\n\t\t\t\t\tmore information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
\n DAEMON
- The daemon scheduling strategy deploys exactly one\n\t\t\t\t\ttask on each active container instance that meets all of the task placement\n\t\t\t\t\tconstraints that you specify in your cluster. The service scheduler also\n\t\t\t\t\tevaluates the task placement constraints for running tasks. It also stops tasks\n\t\t\t\t\tthat don't meet the placement constraints. When using this strategy, you don't\n\t\t\t\t\tneed to specify a desired number of tasks, a task placement strategy, or use\n\t\t\t\t\tService Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment\n\t\t\tis initiated by changing properties. For example, the deployment might be initiated by\n\t\t\tthe task definition or by your desired count of a service. This is done with an UpdateService operation. The default value for a replica service for\n\t\t\t\tminimumHealthyPercent
is 100%. The default value for a daemon service\n\t\t\tfor minimumHealthyPercent
is 0%.
If a service uses the ECS
deployment controller, the minimum healthy\n\t\t\tpercent represents a lower limit on the number of tasks in a service that must remain in\n\t\t\tthe RUNNING
state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of your desired number of tasks (rounded up to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING
state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can deploy without using additional cluster capacity. For example, if you\n\t\t\tset your service to have desired number of four tasks and a minimum healthy percent of\n\t\t\t50%, the scheduler might stop two existing tasks to free up cluster capacity before\n\t\t\tstarting two new tasks. If they're in the RUNNING
state, tasks for services\n\t\t\tthat don't use a load balancer are considered healthy . If they're in the\n\t\t\t\tRUNNING
state and reported as healthy by the load balancer, tasks for\n\t\t\tservices that do use a load balancer are considered healthy . The\n\t\t\tdefault value for minimum healthy percent is 100%.
If a service uses the ECS
deployment controller, the maximum percent parameter represents an upper limit on the\n\t\t\tnumber of tasks in a service that are allowed in the RUNNING
or\n\t\t\t\tPENDING
state during a deployment. Specifically, it represents it as a\n\t\t\tpercentage of the desired number of tasks (rounded down to the nearest integer). This\n\t\t\thappens when any of your container instances are in the DRAINING
state if\n\t\t\tthe service contains tasks using the EC2 launch type. Using this\n\t\t\tparameter, you can define the deployment batch size. For example, if your service has a\n\t\t\tdesired number of four tasks and a maximum percent value of 200%, the scheduler may\n\t\t\tstart four new tasks before stopping the four older tasks (provided that the cluster\n\t\t\tresources required to do this are available). The default value for maximum percent is\n\t\t\t200%.
If a service uses either the CODE_DEPLOY
or EXTERNAL
\n\t\t\tdeployment controller types and tasks that use the EC2 launch type, the\n\t\t\t\tminimum healthy percent and maximum percent values are used only to define the lower and upper limit\n\t\t\ton the number of the tasks in the service that remain in the RUNNING
state.\n\t\t\tThis is while the container instances are in the DRAINING
state. If the\n\t\t\ttasks in the service use the Fargate launch type, the minimum healthy\n\t\t\tpercent and maximum percent values aren't used. This is the case even if they're\n\t\t\tcurrently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you\n\t\t\tcan specify only parameters that aren't controlled at the task set level. The only\n\t\t\trequired parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For\n\t\t\tinformation about task placement and task placement strategies, see Amazon ECS\n\t\t\t\ttask placement in the Amazon Elastic Container Service Developer Guide\n
\nStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
", "smithy.api#examples": [ { "title": "To create a new service", @@ -3262,7 +3295,7 @@ "launchType": { "target": "com.amazonaws.ecs#LaunchType", "traits": { - "smithy.api#documentation": "The infrastructure that you run your service on. For more information, see Amazon ECS\n\t\t\t\tlaunch types in the Amazon Elastic Container Service Developer Guide.
\nThe FARGATE
launch type runs your tasks on Fargate On-Demand\n\t\t\tinfrastructure.
Fargate Spot infrastructure is available for use but a capacity provider\n\t\t\t\tstrategy must be used. For more information, see Fargate capacity providers in the\n\t\t\t\t\tAmazon ECS Developer Guide.
\nThe EC2
launch type runs your tasks on Amazon EC2 instances registered to your\n\t\t\tcluster.
The EXTERNAL
launch type runs your tasks on your on-premises server or\n\t\t\tvirtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a\n\t\t\t\tlaunchType
is specified, the capacityProviderStrategy
\n\t\t\tparameter must be omitted.
The infrastructure that you run your service on. For more information, see Amazon ECS\n\t\t\t\tlaunch types in the Amazon Elastic Container Service Developer Guide.
\nThe FARGATE
launch type runs your tasks on Fargate On-Demand\n\t\t\tinfrastructure.
Fargate Spot infrastructure is available for use but a capacity provider\n\t\t\t\tstrategy must be used. For more information, see Fargate capacity providers in the Amazon ECS\n\t\t\t\t\tDeveloper Guide.
\nThe EC2
launch type runs your tasks on Amazon EC2 instances registered to your\n\t\t\tcluster.
The EXTERNAL
launch type runs your tasks on your on-premises server or\n\t\t\tvirtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a\n\t\t\t\tlaunchType
is specified, the capacityProviderStrategy
\n\t\t\tparameter must be omitted.
The platform version that your tasks in the service are running on. A platform version\n\t\t\tis specified only for tasks using the Fargate launch type. If one isn't\n\t\t\tspecified, the LATEST
platform version is used. For more information, see\n\t\t\t\tFargate platform versions in the Amazon Elastic Container Service Developer Guide.
The platform version that your tasks in the service are running on. A platform version\n\t\t\tis specified only for tasks using the Fargate launch type. If one isn't\n\t\t\tspecified, the LATEST
platform version is used. For more information, see\n\t\t\t\tFargate platform\n\t\t\t\tversions in the Amazon Elastic Container Service Developer Guide.
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy\n\t\t\tElastic Load Balancing target health checks after a task has first started. This is only used when your\n\t\t\tservice is configured to use a load balancer. If your service has a load balancer\n\t\t\tdefined and you don't specify a health check grace period value, the default value of\n\t\t\t\t0
is used.
If you do not use an Elastic Load Balancing, we recommend that you use the startPeriod
in\n\t\t\tthe task definition health check parameters. For more information, see Health\n\t\t\t\tcheck.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can\n\t\t\tspecify a health check grace period of up to 2,147,483,647 seconds (about 69 years).\n\t\t\tDuring that time, the Amazon ECS service scheduler ignores health check status. This grace\n\t\t\tperiod can prevent the service scheduler from marking tasks as unhealthy and stopping\n\t\t\tthem before they have time to come up.
" + "smithy.api#documentation": "The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy\n\t\t\tElastic Load Balancing target health checks after a task has first started. This is only used when your\n\t\t\tservice is configured to use a load balancer. If your service has a load balancer\n\t\t\tdefined and you don't specify a health check grace period value, the default value of\n\t\t\t\t0
is used.
If you do not use Elastic Load Balancing, we recommend that you use the startPeriod
in\n\t\t\tthe task definition health check parameters. For more information, see Health\n\t\t\t\tcheck.
If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you\n\t\t\tcan specify a health check grace period of up to 2,147,483,647 seconds (about 69 years).\n\t\t\tDuring that time, the Amazon ECS service scheduler ignores health check status. This grace\n\t\t\tperiod can prevent the service scheduler from marking tasks as unhealthy and stopping\n\t\t\tthem before they have time to come up.
" } }, "schedulingStrategy": { @@ -3341,7 +3374,7 @@ "propagateTags": { "target": "com.amazonaws.ecs#PropagateTags", "traits": { - "smithy.api#documentation": "Specifies whether to propagate the tags from the task definition to the task. If no\n\t\t\tvalue is specified, the tags aren't propagated. Tags can only be propagated to the task\n\t\t\tduring task creation. To add tags to a task after task creation, use the TagResource API action.
\nYou must set this to a value other than NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
The default is NONE
.
Specifies whether to propagate the tags from the task definition to the task. If no\n\t\t\tvalue is specified, the tags aren't propagated. Tags can only be propagated to the task\n\t\t\tduring task creation. To add tags to a task after task creation, use the TagResource API action.
\nYou must set this to a value other than NONE
when you use Cost Explorer.\n\t\t\tFor more information, see Amazon ECS usage reports\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
The default is NONE
.
Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL
deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nFor information about the maximum number of task sets and otther quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "Create a task set in the specified cluster and service. This is used when a service\n\t\t\tuses the EXTERNAL
deployment controller type. For more information, see\n\t\t\t\tAmazon ECS deployment\n\t\t\t\ttypes in the Amazon Elastic Container Service Developer Guide.
On March 21, 2024, a change was made to resolve the task definition revision before authorization. When a task definition revision is not specified, authorization will occur using the latest revision of a task definition.
\nFor information about the maximum number of task sets and other quotas, see Amazon ECS\n\t\t\tservice quotas in the Amazon Elastic Container Service Developer Guide.
" } }, "com.amazonaws.ecs#CreateTaskSetRequest": { @@ -4254,7 +4287,7 @@ "minimumHealthyPercent": { "target": "com.amazonaws.ecs#BoxedInteger", "traits": { - "smithy.api#documentation": "If a service is using the rolling update (ECS
) deployment type, the\n\t\t\t\tminimumHealthyPercent
represents a lower limit on the number of your\n\t\t\tservice's tasks that must remain in the RUNNING
state during a deployment,\n\t\t\tas a percentage of the desiredCount
(rounded up to the nearest integer).\n\t\t\tThis parameter enables you to deploy without using additional cluster capacity. For\n\t\t\texample, if your service has a desiredCount
of four tasks and a\n\t\t\t\tminimumHealthyPercent
of 50%, the service scheduler may stop two\n\t\t\texisting tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following\n\t\t\tshould be noted:
\nA service is considered healthy if all essential containers within the tasks\n\t\t\t\t\tin the service pass their health checks.
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for 40 seconds after a task reaches a RUNNING
\n\t\t\t\t\tstate before the task is counted towards the minimum healthy percent\n\t\t\t\t\ttotal.
If a task has one or more essential containers with a health check defined,\n\t\t\t\t\tthe service scheduler will wait for the task to reach a healthy status before\n\t\t\t\t\tcounting it towards the minimum healthy percent total. A task is considered\n\t\t\t\t\thealthy when all essential containers within the task have passed their health\n\t\t\t\t\tchecks. The amount of time the service scheduler can wait for is determined by\n\t\t\t\t\tthe container health check settings.
\nFor services that do use a load balancer, the following should be\n\t\t\tnoted:
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for the load balancer target group health check to return a\n\t\t\t\t\thealthy status before counting the task towards the minimum healthy percent\n\t\t\t\t\ttotal.
\nIf a task has an essential container with a health check defined, the service\n\t\t\t\t\tscheduler will wait for both the task to reach a healthy status and the load\n\t\t\t\t\tbalancer target group health check to return a healthy status before counting\n\t\t\t\t\tthe task towards the minimum healthy percent total.
\nThe default value for a replica service for\n\t\t\tminimumHealthyPercent
is 100%. The default\n\t\t\tminimumHealthyPercent
value for a service using\n\t\t\tthe DAEMON
service schedule is 0% for the CLI,\n\t\t\tthe Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the\n\t\t\tdesiredCount
multiplied by the\n\t\t\tminimumHealthyPercent
/100, rounded up to the\n\t\t\tnearest integer value.
If a service is using either the blue/green (CODE_DEPLOY
) or\n\t\t\t\tEXTERNAL
deployment types and is running tasks that use the\n\t\t\tEC2 launch type, the minimum healthy\n\t\t\t\tpercent value is set to the default value and is used to define the lower\n\t\t\tlimit on the number of the tasks in the service that remain in the RUNNING
\n\t\t\tstate while the container instances are in the DRAINING
state. If a service\n\t\t\tis using either the blue/green (CODE_DEPLOY
) or EXTERNAL
\n\t\t\tdeployment types and is running tasks that use the Fargate launch type,\n\t\t\tthe minimum healthy percent value is not used, although it is returned when describing\n\t\t\tyour service.
If a service is using the rolling update (ECS
) deployment type, the\n\t\t\t\tminimumHealthyPercent
represents a lower limit on the number of your\n\t\t\tservice's tasks that must remain in the RUNNING
state during a deployment,\n\t\t\tas a percentage of the desiredCount
(rounded up to the nearest integer).\n\t\t\tThis parameter enables you to deploy without using additional cluster capacity. For\n\t\t\texample, if your service has a desiredCount
of four tasks and a\n\t\t\t\tminimumHealthyPercent
of 50%, the service scheduler may stop two\n\t\t\texisting tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following\n\t\t\tshould be noted:
\nA service is considered healthy if all essential containers within the tasks\n\t\t\t\t\tin the service pass their health checks.
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for 40 seconds after a task reaches a RUNNING
\n\t\t\t\t\tstate before the task is counted towards the minimum healthy percent\n\t\t\t\t\ttotal.
If a task has one or more essential containers with a health check defined,\n\t\t\t\t\tthe service scheduler will wait for the task to reach a healthy status before\n\t\t\t\t\tcounting it towards the minimum healthy percent total. A task is considered\n\t\t\t\t\thealthy when all essential containers within the task have passed their health\n\t\t\t\t\tchecks. The amount of time the service scheduler can wait for is determined by\n\t\t\t\t\tthe container health check settings.
\nFor services that do use a load balancer, the following should be\n\t\t\tnoted:
\nIf a task has no essential containers with a health check defined, the service\n\t\t\t\t\tscheduler will wait for the load balancer target group health check to return a\n\t\t\t\t\thealthy status before counting the task towards the minimum healthy percent\n\t\t\t\t\ttotal.
\nIf a task has an essential container with a health check defined, the service\n\t\t\t\t\tscheduler will wait for both the task to reach a healthy status and the load\n\t\t\t\t\tbalancer target group health check to return a healthy status before counting\n\t\t\t\t\tthe task towards the minimum healthy percent total.
\nThe default value for a replica service for minimumHealthyPercent
is\n\t\t\t100%. The default minimumHealthyPercent
value for a service using the\n\t\t\t\tDAEMON
service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the\n\t\t\tAPIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the\n\t\t\t\tdesiredCount
multiplied by the minimumHealthyPercent
/100,\n\t\t\trounded up to the nearest integer value.
If a service is using either the blue/green (CODE_DEPLOY
) or\n\t\t\t\tEXTERNAL
deployment types and is running tasks that use the\n\t\t\tEC2 launch type, the minimum healthy\n\t\t\t\tpercent value is set to the default value and is used to define the lower\n\t\t\tlimit on the number of the tasks in the service that remain in the RUNNING
\n\t\t\tstate while the container instances are in the DRAINING
state. If a service\n\t\t\tis using either the blue/green (CODE_DEPLOY
) or EXTERNAL
\n\t\t\tdeployment types and is running tasks that use the Fargate launch type,\n\t\t\tthe minimum healthy percent value is not used, although it is returned when describing\n\t\t\tyour service.
Specify an Key Management Service key ID to encrypt the ephemeral storage for deployment.
" + "smithy.api#documentation": "Specify an Key Management Service key ID to encrypt the ephemeral storage for\n\t\t\tdeployment.
" } } }, @@ -5537,19 +5570,19 @@ "driver": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The Docker volume driver to use. The driver value must match the driver name provided\n\t\t\tby Docker because it is used for task placement. If the driver was installed using the\n\t\t\tDocker plugin CLI, use docker plugin ls
to retrieve the driver name from\n\t\t\tyour container instance. If the driver was installed using another method, use Docker\n\t\t\tplugin discovery to retrieve the driver name. For more information, see Docker\n\t\t\t\tplugin discovery. This parameter maps to Driver
in the\n\t\t\tCreate a volume section of the Docker Remote API and the\n\t\t\t\txxdriver
option to docker\n\t\t\t\tvolume create.
The Docker volume driver to use. The driver value must match the driver name provided\n\t\t\tby Docker because it is used for task placement. If the driver was installed using the\n\t\t\tDocker plugin CLI, use docker plugin ls
to retrieve the driver name from\n\t\t\tyour container instance. If the driver was installed using another method, use Docker\n\t\t\tplugin discovery to retrieve the driver name. This parameter maps to Driver
in the docker create-container command and the\n\t\t\t\t<code>--driver</code>
option to docker\n\t\t\t\tvolume create.
A map of Docker driver-specific options passed through. This parameter maps to\n\t\t\t\tDriverOpts
in the Create a volume section of the\n\t\t\tDocker Remote API and the xxopt
option to docker\n\t\t\t\tvolume create.
A map of Docker driver-specific options passed through. This parameter maps to\n\t\t\t\tDriverOpts
in the docker create-volume command and the <code>--opt</code>
option to docker\n\t\t\t\tvolume create.
Custom metadata to add to your Docker volume. This parameter maps to\n\t\t\t\tLabels
in the Create a volume section of the\n\t\t\tDocker Remote API and the xxlabel
option to docker\n\t\t\t\tvolume create.
Custom metadata to add to your Docker volume. This parameter maps to\n\t\t\t\tLabels
in the docker create-container command and the <code>--label</code>
option to docker\n\t\t\t\tvolume create.
The file type to use. Environment files are objects in Amazon S3. The only supported value is\n\t\t\t\ts3
.
The file type to use. Environment files are objects in Amazon S3. The only supported value\n\t\t\tis s3
.
A list of files containing the environment variables to pass to a container. You can\n\t\t\tspecify up to ten environment files. The file must have a .env
file\n\t\t\textension. Each line in an environment file should contain an environment variable in\n\t\t\t\tVARIABLE=VALUE
format. Lines beginning with #
are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment
\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
\nYou must use the following platforms for the Fargate launch type:
\nLinux platform version 1.4.0
or later.
Windows platform version 1.0.0
or later.
Consider the following when using the Fargate launch type:
\nThe file is handled like a native Docker env-file.
\nThere is no support for shell escape handling.
\nThe container entry point interperts the VARIABLE
values.
A list of files containing the environment variables to pass to a container. You can\n\t\t\tspecify up to ten environment files. The file must have a .env
file\n\t\t\textension. Each line in an environment file should contain an environment variable in\n\t\t\t\tVARIABLE=VALUE
format. Lines beginning with #
are treated\n\t\t\tas comments and are ignored.
If there are environment variables specified using the environment
\n\t\t\tparameter in a container definition, they take precedence over the variables contained\n\t\t\twithin an environment file. If multiple environment files are specified that contain the\n\t\t\tsame variable, they're processed from the top down. We recommend that you use unique\n\t\t\tvariable names. For more information, see Use a file to pass\n\t\t\t\tenvironment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations\n\t\t\tapply.
\nYou must use the following platforms for the Fargate launch type:
\nLinux platform version 1.4.0
or later.
Windows platform version 1.0.0
or later.
Consider the following when using the Fargate launch type:
\nThe file is handled like a native Docker env-file.
\nThere is no support for shell escape handling.
\nThe container entry point interperts the VARIABLE
values.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported\n\t\t\tvalue is 20
GiB and the maximum supported value is\n\t\t\t\t200
GiB.
The total amount, in GiB, of ephemeral storage to set for the task. The minimum\n\t\t\tsupported value is 20
GiB and the maximum supported value is\n\t\t\t\t200
GiB.
A string array representing the command that the container runs to determine if it is\n\t\t\thealthy. The string array must start with CMD
to run the command arguments\n\t\t\tdirectly, or CMD-SHELL
to run the command with the container's default\n\t\t\tshell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list\n\t\t\tof commands in double quotes and brackets.
\n\n [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ]
\n
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
\n\n CMD-SHELL, curl -f http://localhost/ || exit 1
\n
An exit code of 0 indicates success, and non-zero exit code indicates failure. For\n\t\t\tmore information, see HealthCheck
in the Create a container\n\t\t\tsection of the Docker Remote API.
A string array representing the command that the container runs to determine if it is\n\t\t\thealthy. The string array must start with CMD
to run the command arguments\n\t\t\tdirectly, or CMD-SHELL
to run the command with the container's default\n\t\t\tshell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list\n\t\t\tof commands in double quotes and brackets.
\n\n [ \"CMD-SHELL\", \"curl -f http://localhost/ || exit 1\" ]
\n
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
\n\n CMD-SHELL, curl -f http://localhost/ || exit 1
\n
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For\n\t\t\tmore information, see <code>HealthCheck</code>
in the docker create-container command.</p>
An object representing a container health check. Health check parameters that are\n\t\t\tspecified in a container definition override any Docker health checks that exist in the\n\t\t\tcontainer image (such as those specified in a parent image or from the image's\n\t\t\tDockerfile). This configuration maps to the HEALTHCHECK
parameter of docker run.
The Amazon ECS container agent only monitors and reports on the health checks specified\n\t\t\t\tin the task definition. Amazon ECS does not monitor Docker health checks that are\n\t\t\t\tembedded in a container image and not specified in the container definition. Health\n\t\t\t\tcheck parameters that are specified in a container definition override any Docker\n\t\t\t\thealth checks that exist in the container image.
\nYou can view the health status of both individual containers and a task with the\n\t\t\tDescribeTasks API operation or when viewing the task details in the console.
\nThe health check is designed to make sure that your containers survive agent restarts,\n\t\t\tupgrades, or temporary unavailability.
\nAmazon ECS performs health checks on containers with the default that launched the\n\t\t\tcontainer instance or the task.
\nThe following describes the possible healthStatus
values for a\n\t\t\tcontainer:
\n HEALTHY
-The container health check has passed\n\t\t\t\t\tsuccessfully.
\n UNHEALTHY
-The container health check has failed.
\n UNKNOWN
-The container health check is being evaluated,\n\t\t\t\t\tthere's no container health check defined, or Amazon ECS doesn't have the health\n\t\t\t\t\tstatus of the container.
The following describes the possible healthStatus
values based on the\n\t\t\tcontainer health checker status of essential containers in the task with the following\n\t\t\tpriority order (high to low):
\n UNHEALTHY
-One or more essential containers have failed\n\t\t\t\t\ttheir health check.
\n UNKNOWN
-Any essential container running within the task is\n\t\t\t\t\tin an UNKNOWN
state and no other essential containers have an\n\t\t\t\t\t\tUNHEALTHY
state.
\n HEALTHY
-All essential containers within the task have\n\t\t\t\t\tpassed their health checks.
Consider the following task health example with 2 containers.
\nIf Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, the task health is UNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tHEALTHY
, the task health is UNHEALTHY
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tthe task health is UNKNOWN
.
If Container1 is HEALTHY
and Container2 is HEALTHY
,\n\t\t\t\t\tthe task health is HEALTHY
.
Consider the following task health example with 3 containers.
\nIf Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, and Container3 is UNKNOWN
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, and Container3 is HEALTHY
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tHEALTHY
, and Container3 is HEALTHY
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tand Container3 is HEALTHY
, the task health is\n\t\t\t\t\tUNKNOWN
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tand Container3 is UNKNOWN
, the task health is\n\t\t\t\t\tUNKNOWN
.
If Container1 is HEALTHY
and Container2 is HEALTHY
,\n\t\t\t\t\tand Container3 is HEALTHY
, the task health is\n\t\t\t\t\tHEALTHY
.
If a task is run manually, and not as part of a service, the task will continue its\n\t\t\tlifecycle regardless of its health status. For tasks that are part of a service, if the\n\t\t\ttask reports as unhealthy then the task will be stopped and the service scheduler will\n\t\t\treplace it.
\nThe following are notes about container health check support:
\nIf the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't\n\t\t\t\t\tcause a container to transition to an UNHEALTHY
status. This is by design,\n\t\t\t\t\tto ensure that containers remain running during agent restarts or temporary\n\t\t\t\t\tunavailability. The health check status is the \"last heard from\" response from the Amazon ECS\n\t\t\t\t\tagent, so if the container was considered HEALTHY
prior to the disconnect,\n\t\t\t\t\tthat status will remain until the agent reconnects and another health check occurs.\n\t\t\t\t\tThere are no assumptions made about the status of the container health checks.
Container health checks require version 1.17.0
or greater of the Amazon ECS\n\t\t\t\t\tcontainer agent. For more information, see Updating the\n\t\t\t\t\t\tAmazon ECS container agent.
Container health checks are supported for Fargate tasks if\n\t\t\t\t\tyou're using platform version 1.1.0
or greater. For more\n\t\t\t\t\tinformation, see Fargate\n\t\t\t\t\t\tplatform versions.
Container health checks aren't supported for tasks that are part of a service\n\t\t\t\t\tthat's configured to use a Classic Load Balancer.
\nAn object representing a container health check. Health check parameters that are\n\t\t\tspecified in a container definition override any Docker health checks that exist in the\n\t\t\tcontainer image (such as those specified in a parent image or from the image's\n\t\t\tDockerfile). This configuration maps to the HEALTHCHECK
parameter of docker run.
The Amazon ECS container agent only monitors and reports on the health checks specified\n\t\t\t\tin the task definition. Amazon ECS does not monitor Docker health checks that are\n\t\t\t\tembedded in a container image and not specified in the container definition. Health\n\t\t\t\tcheck parameters that are specified in a container definition override any Docker\n\t\t\t\thealth checks that exist in the container image.
\nYou can view the health status of both individual containers and a task with the\n\t\t\tDescribeTasks API operation or when viewing the task details in the console.
\nThe health check is designed to make sure that your containers survive agent restarts,\n\t\t\tupgrades, or temporary unavailability.
\nAmazon ECS performs health checks on containers with the default that launched the\n\t\t\tcontainer instance or the task.
\nThe following describes the possible healthStatus
values for a\n\t\t\tcontainer:
\n HEALTHY
-The container health check has passed\n\t\t\t\t\tsuccessfully.
\n UNHEALTHY
-The container health check has failed.
\n UNKNOWN
-The container health check is being evaluated,\n\t\t\t\t\tthere's no container health check defined, or Amazon ECS doesn't have the health\n\t\t\t\t\tstatus of the container.
The following describes the possible healthStatus
values based on the\n\t\t\tcontainer health checker status of essential containers in the task with the following\n\t\t\tpriority order (high to low):
\n UNHEALTHY
-One or more essential containers have failed\n\t\t\t\t\ttheir health check.
\n UNKNOWN
-Any essential container running within the task is\n\t\t\t\t\tin an UNKNOWN
state and no other essential containers have an\n\t\t\t\t\t\tUNHEALTHY
state.
\n HEALTHY
-All essential containers within the task have\n\t\t\t\t\tpassed their health checks.
Consider the following task health example with 2 containers.
\nIf Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, the task health is UNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tHEALTHY
, the task health is UNHEALTHY
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tthe task health is UNKNOWN
.
If Container1 is HEALTHY
and Container2 is HEALTHY
,\n\t\t\t\t\tthe task health is HEALTHY
.
Consider the following task health example with 3 containers.
\nIf Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, and Container3 is UNKNOWN
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tUNKNOWN
, and Container3 is HEALTHY
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is UNHEALTHY
and Container2 is\n\t\t\t\t\tHEALTHY
, and Container3 is HEALTHY
, the task health is\n\t\t\t\t\t\tUNHEALTHY
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tand Container3 is HEALTHY
, the task health is\n\t\t\t\t\tUNKNOWN
.
If Container1 is HEALTHY
and Container2 is UNKNOWN
,\n\t\t\t\t\tand Container3 is UNKNOWN
, the task health is\n\t\t\t\t\tUNKNOWN
.
If Container1 is HEALTHY
and Container2 is HEALTHY
,\n\t\t\t\t\tand Container3 is HEALTHY
, the task health is\n\t\t\t\t\tHEALTHY
.
If a task is run manually, and not as part of a service, the task will continue its\n\t\t\tlifecycle regardless of its health status. For tasks that are part of a service, if the\n\t\t\ttask reports as unhealthy then the task will be stopped and the service scheduler will\n\t\t\treplace it.
\nThe following are notes about container health check support:
\nIf the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this\n\t\t\t\t\twon't cause a container to transition to an UNHEALTHY
status. This\n\t\t\t\t\tis by design, to ensure that containers remain running during agent restarts or\n\t\t\t\t\ttemporary unavailability. The health check status is the \"last heard from\"\n\t\t\t\t\tresponse from the Amazon ECS agent, so if the container was considered\n\t\t\t\t\t\tHEALTHY
prior to the disconnect, that status will remain until\n\t\t\t\t\tthe agent reconnects and another health check occurs. There are no assumptions\n\t\t\t\t\tmade about the status of the container health checks.
Container health checks require version 1.17.0
or greater of the\n\t\t\t\t\tAmazon ECS container agent. For more information, see Updating the\n\t\t\t\t\t\tAmazon ECS container agent.
Container health checks are supported for Fargate tasks if\n\t\t\t\t\tyou're using platform version 1.1.0
or greater. For more\n\t\t\t\t\tinformation, see Fargate\n\t\t\t\t\t\tplatform versions.
Container health checks aren't supported for tasks that are part of a service\n\t\t\t\t\tthat's configured to use a Classic Load Balancer.
\nThe Linux capabilities for the container that have been added to the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapAdd
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--cap-add
option to docker\n\t\t\t\trun.
Tasks launched on Fargate only support adding the SYS_PTRACE
kernel\n\t\t\t\tcapability.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"
\n
The Linux capabilities for the container that have been added to the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapAdd
in the docker create-container command and the\n\t\t\t\t--cap-add
option to docker\n\t\t\t\trun.
Tasks launched on Fargate only support adding the SYS_PTRACE
kernel\n\t\t\t\tcapability.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"
\n
The Linux capabilities for the container that have been removed from the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapDrop
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--cap-drop
option to docker\n\t\t\t\trun.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"
\n
The Linux capabilities for the container that have been removed from the default\n\t\t\tconfiguration provided by Docker. This parameter maps to CapDrop
in the docker create-container command and the\n\t\t\t\t--cap-drop
option to docker\n\t\t\t\trun.
Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" |\n\t\t\t\t\"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" |\n\t\t\t\t\"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" |\n\t\t\t\t\"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\"\n\t\t\t\t| \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" |\n\t\t\t\t\"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" |\n\t\t\t\t\"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" |\n\t\t\t\"WAKE_ALARM\"
\n
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more information about the default capabilities\n\t\t\tand the non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run\n\t\t\t\treference. For more detailed information about these Linux capabilities,\n\t\t\tsee the capabilities(7) Linux manual page.
" + "smithy.api#documentation": "The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities,\n\t\t\tsee the capabilities(7) Linux manual page.
" } }, "com.amazonaws.ecs#KeyValuePair": { @@ -6595,7 +6634,7 @@ "devices": { "target": "com.amazonaws.ecs#DevicesList", "traits": { - "smithy.api#documentation": "Any host devices to expose to the container. This parameter maps to\n\t\t\t\tDevices
in the Create a container section of the\n\t\t\tDocker Remote API and the --device
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tdevices
parameter isn't supported.
Any host devices to expose to the container. This parameter maps to\n\t\t\tDevices
in the docker create-container command and the <code>--device</code>
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tdevices
parameter isn't supported.
The value for the size (in MiB) of the /dev/shm
volume. This parameter\n\t\t\tmaps to the --shm-size
option to docker\n\t\t\t\trun.
If you are using tasks that use the Fargate launch type, the\n\t\t\t\t\tsharedMemorySize
parameter is not supported.
The value for the size (in MiB) of the /dev/shm
volume. This parameter\n\t\t\tmaps to the --shm-size
option to docker\n\t\t\t\trun.
If you are using tasks that use the Fargate launch type, the\n\t\t\t\t\tsharedMemorySize
parameter is not supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This\n\t\t\tparameter maps to the --tmpfs
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\ttmpfs
parameter isn't supported.
The container path, mount options, and size (in MiB) of the tmpfs mount. This\n\t\t\tparameter maps to the --tmpfs
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\ttmpfs
parameter isn't supported.
This allows you to tune a container's memory swappiness behavior. A\n\t\t\t\tswappiness
value of 0
will cause swapping to not happen\n\t\t\tunless absolutely necessary. A swappiness
value of 100
will\n\t\t\tcause pages to be swapped very aggressively. Accepted values are whole numbers between\n\t\t\t\t0
and 100
. If the swappiness
parameter is not\n\t\t\tspecified, a default value of 60
is used. If a value is not specified for\n\t\t\t\tmaxSwap
then this parameter is ignored. This parameter maps to the\n\t\t\t\t--memory-swappiness
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tswappiness
parameter isn't supported.
If you're using tasks on Amazon Linux 2023 the swappiness
parameter isn't\n\t\t\t\tsupported.
This allows you to tune a container's memory swappiness behavior. A\n\t\t\t\tswappiness
value of 0
will cause swapping to not happen\n\t\t\tunless absolutely necessary. A swappiness
value of 100
will\n\t\t\tcause pages to be swapped very aggressively. Accepted values are whole numbers between\n\t\t\t\t0
and 100
. If the swappiness
parameter is not\n\t\t\tspecified, a default value of 60
is used. If a value is not specified for\n\t\t\t\tmaxSwap
then this parameter is ignored. This parameter maps to the\n\t\t\t\t--memory-swappiness
option to docker run.
If you're using tasks that use the Fargate launch type, the\n\t\t\t\t\tswappiness
parameter isn't supported.
If you're using tasks on Amazon Linux 2023, the <code>swappiness</code>
parameter isn't\n\t\t\t\tsupported.
The log driver to use for the container.
\nFor tasks on Fargate, the supported log drivers are awslogs
,\n\t\t\t\tsplunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\tawslogs
, fluentd
, gelf
,\n\t\t\t\tjson-file
, journald
,\n\t\t\t\tlogentries
,syslog
, splunk
, and\n\t\t\t\tawsfirelens
.
For more information about using the awslogs
log driver, see Using\n\t\t\t\tthe awslogs log driver in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens
log driver, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
If you have a custom driver that isn't listed, you can fork the Amazon ECS container\n\t\t\t\tagent project that's available\n\t\t\t\t\ton GitHub and customize it to work with that driver. We encourage you to\n\t\t\t\tsubmit pull requests for changes that you would like to have included. However, we\n\t\t\t\tdon't currently provide support for running modified copies of this software.
\nThe log driver to use for the container.
\nFor tasks on Fargate, the supported log drivers are awslogs
,\n\t\t\t\tsplunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\tawslogs
, fluentd
, gelf
,\n\t\t\t\tjson-file
, journald
, syslog
,\n\t\t\t\tsplunk
, and awsfirelens
.
For more information about using the awslogs
log driver, see Send\n\t\t\t\tAmazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens
log driver, see Send\n\t\t\t\tAmazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
If you have a custom driver that isn't listed, you can fork the Amazon ECS container\n\t\t\t\tagent project that's available\n\t\t\t\t\ton GitHub and customize it to work with that driver. We encourage you to\n\t\t\t\tsubmit pull requests for changes that you would like to have included. However, we\n\t\t\t\tdon't currently provide support for running modified copies of this software.
\nThe log configuration for the container. This parameter maps to LogConfig
\n\t\t\tin the Create a container section of the Docker Remote API and the\n\t\t\t\t--log-driver
option to \n docker\n\t\t\t\t\trun
\n .
By default, containers use the same logging driver that the Docker daemon uses.\n\t\t\tHowever, the container might use a different logging driver than the Docker daemon by\n\t\t\tspecifying a log driver configuration in the container definition. For more information\n\t\t\tabout the options for different supported log drivers, see Configure logging\n\t\t\t\tdrivers in the Docker documentation.
\nUnderstand the following when specifying a log configuration for your\n\t\t\tcontainers.
\nAmazon ECS currently supports a subset of the logging drivers available to the\n\t\t\t\t\tDocker daemon. Additional log drivers may be available in future releases of the\n\t\t\t\t\tAmazon ECS container agent.
\nFor tasks on Fargate, the supported log drivers are awslogs
,\n\t\t\t\t\t\tsplunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\t\t\tawslogs
, fluentd
, gelf
,\n\t\t\t\t\t\tjson-file
, journald
,\n\t\t\t\t\t\tlogentries
,syslog
, splunk
, and\n\t\t\t\t\t\tawsfirelens
.
This parameter requires version 1.18 of the Docker Remote API or greater on\n\t\t\t\t\tyour container instance.
\nFor tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must\n\t\t\t\t\tregister the available logging drivers with the\n\t\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS
environment variable before\n\t\t\t\t\tcontainers placed on that instance can use these log configuration options. For\n\t\t\t\t\tmore information, see Amazon ECS container agent configuration in the\n\t\t\t\t\tAmazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the\n\t\t\t\t\tunderlying infrastructure your tasks are hosted on, any additional software\n\t\t\t\t\tneeded must be installed outside of the task. For example, the Fluentd output\n\t\t\t\t\taggregators or a remote host running Logstash to send Gelf logs to.
\nThe log configuration for the container. This parameter maps to LogConfig
\n\t\t\tin the docker create-container command and the\n\t\t\t\t--log-driver
option to docker\n\t\t\t\t\trun.
By default, containers use the same logging driver that the Docker daemon uses.\n\t\t\tHowever, the container might use a different logging driver than the Docker daemon by\n\t\t\tspecifying a log driver configuration in the container definition.
\nUnderstand the following when specifying a log configuration for your\n\t\t\tcontainers.
\nAmazon ECS currently supports a subset of the logging drivers available to the\n\t\t\t\t\tDocker daemon. Additional log drivers may be available in future releases of the\n\t\t\t\t\tAmazon ECS container agent.
\nFor tasks on Fargate, the supported log drivers are awslogs
,\n\t\t\t\t\t\tsplunk
, and awsfirelens
.
For tasks hosted on Amazon EC2 instances, the supported log drivers are\n\t\t\t\t\t\tawslogs
, fluentd
, gelf
,\n\t\t\t\t\t\tjson-file
, journald
, syslog
,\n\t\t\t\t\t\tsplunk
, and awsfirelens
.
This parameter requires version 1.18 of the Docker Remote API or greater on\n\t\t\t\t\tyour container instance.
\nFor tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must\n\t\t\t\t\tregister the available logging drivers with the\n\t\t\t\t\t\tECS_AVAILABLE_LOGGING_DRIVERS
environment variable before\n\t\t\t\t\tcontainers placed on that instance can use these log configuration options. For\n\t\t\t\t\tmore information, see Amazon ECS container agent configuration in the\n\t\t\t\t\tAmazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the\n\t\t\t\t\tunderlying infrastructure your tasks are hosted on, any additional software\n\t\t\t\t\tneeded must be installed outside of the task. Examples include the Fluentd output\n\t\t\t\t\taggregators or a remote host running Logstash that you send Gelf logs to.
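To make the log-driver notes above concrete, here is a minimal sketch in plain TypeScript (no SDK dependency). The `logDriver` and `options` field names mirror the `LogConfiguration` shape this model documents; the log group, Region, and prefix values are made-up examples.

```typescript
// Sketch of a LogConfiguration for the awslogs driver, one of the
// drivers supported on both the Fargate and EC2 launch types.
interface LogConfiguration {
  logDriver: string;                // e.g. "awslogs", "splunk", "awsfirelens"
  options?: Record<string, string>; // driver-specific options
}

// Hypothetical values, for illustration only.
const logConfiguration: LogConfiguration = {
  logDriver: "awslogs",
  options: {
    "awslogs-group": "/ecs/example-app",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "web",
  },
};

console.log(logConfiguration.logDriver); // "awslogs"
```

On EC2 container instances, remember that the chosen driver must also appear in the agent's `ECS_AVAILABLE_LOGGING_DRIVERS` variable, as noted above.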
\nPort mappings allow containers to access ports on the host container instance to send\n\t\t\tor receive traffic. Port mappings are specified as part of the container\n\t\t\tdefinition.
\nIf you use containers in a task with the awsvpc
or host
\n\t\t\tnetwork mode, specify the exposed ports using containerPort
. The\n\t\t\t\thostPort
can be left blank or it must be the same value as the\n\t\t\t\tcontainerPort
.
Most fields of this parameter (containerPort
, hostPort
,\n\t\t\t\tprotocol
) map to PortBindings
in the\n\t\t\tCreate a container section of the Docker Remote API and the\n\t\t\t\t--publish
option to \n docker\n\t\t\t\t\trun
\n . If the network mode of a task definition is set to\n\t\t\t\thost
, host ports must either be undefined or match the container port\n\t\t\tin the port mapping.
You can't expose the same container port for multiple protocols. If you attempt\n\t\t\t\tthis, an error is returned.
\nAfter a task reaches the RUNNING
status, manual and automatic host and\n\t\t\tcontainer port assignments are visible in the networkBindings
section of\n\t\t\t\tDescribeTasks API responses.
Port mappings allow containers to access ports on the host container instance to send\n\t\t\tor receive traffic. Port mappings are specified as part of the container\n\t\t\tdefinition.
\nIf you use containers in a task with the awsvpc
or host
\n\t\t\tnetwork mode, specify the exposed ports using containerPort
. The\n\t\t\t\thostPort
can be left blank or it must be the same value as the\n\t\t\t\tcontainerPort
.
Most fields of this parameter (containerPort
, hostPort
,\n\t\t\tprotocol
) map to PortBindings
in the docker create-container command and the\n\t\t\t\t--publish
option to docker\n\t\t\t\t\trun
. If the network mode of a task definition is set to\n\t\t\t\thost
, host ports must either be undefined or match the container port\n\t\t\tin the port mapping.
You can't expose the same container port for multiple protocols. If you attempt\n\t\t\t\tthis, an error is returned.
\nAfter a task reaches the RUNNING
status, manual and automatic host and\n\t\t\tcontainer port assignments are visible in the networkBindings
section of\n\t\t\t\tDescribeTasks API responses.
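The `hostPort` rule described above can be sketched as a small check (plain TypeScript; the `containerPort`/`hostPort`/`protocol` field names follow the port-mapping shape documented here):

```typescript
// With the awsvpc or host network mode, hostPort must be left blank
// or must equal containerPort.
interface PortMapping {
  containerPort: number;
  hostPort?: number;
  protocol?: "tcp" | "udp";
}

function isValidForAwsvpc(mapping: PortMapping): boolean {
  return mapping.hostPort === undefined || mapping.hostPort === mapping.containerPort;
}

console.log(isValidForAwsvpc({ containerPort: 80 }));                 // true: hostPort left blank
console.log(isValidForAwsvpc({ containerPort: 80, hostPort: 80 }));   // true: same value
console.log(isValidForAwsvpc({ containerPort: 80, hostPort: 8080 })); // false: mismatch
```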
Registers a new task definition from the supplied family
and\n\t\t\t\tcontainerDefinitions
. Optionally, you can add data volumes to your\n\t\t\tcontainers with the volumes
parameter. For more information about task\n\t\t\tdefinition parameters and defaults, see Amazon ECS Task\n\t\t\t\tDefinitions in the Amazon Elastic Container Service Developer Guide.
You can specify a role for your task with the taskRoleArn
parameter. When\n\t\t\tyou specify a role for a task, its containers can then use the latest versions of the\n\t\t\tCLI or SDKs to make API requests to the Amazon Web Services services that are specified in the\n\t\t\tpolicy that's associated with the role. For more information, see IAM\n\t\t\t\tRoles for Tasks in the Amazon Elastic Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition\n\t\t\twith the networkMode
parameter. The available network modes correspond to\n\t\t\tthose described in Network\n\t\t\t\tsettings in the Docker run reference. If you specify the awsvpc
\n\t\t\tnetwork mode, the task is allocated an elastic network interface, and you must specify a\n\t\t\t\tNetworkConfiguration when you create a service or run a task with\n\t\t\tthe task definition. For more information, see Task Networking\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
Registers a new task definition from the supplied family
and\n\t\t\t\tcontainerDefinitions
. Optionally, you can add data volumes to your\n\t\t\tcontainers with the volumes
parameter. For more information about task\n\t\t\tdefinition parameters and defaults, see Amazon ECS Task\n\t\t\t\tDefinitions in the Amazon Elastic Container Service Developer Guide.
You can specify a role for your task with the taskRoleArn
parameter. When\n\t\t\tyou specify a role for a task, its containers can then use the latest versions of the\n\t\t\tCLI or SDKs to make API requests to the Amazon Web Services services that are specified in the\n\t\t\tpolicy that's associated with the role. For more information, see IAM\n\t\t\t\tRoles for Tasks in the Amazon Elastic Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition\n\t\t\twith the networkMode
parameter. If you specify the awsvpc
\n\t\t\tnetwork mode, the task is allocated an elastic network interface, and you must specify a\n\t\t\t\tNetworkConfiguration when you create a service or run a task with\n\t\t\tthe task definition. For more information, see Task Networking\n\t\t\tin the Amazon Elastic Container Service Developer Guide.
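As a rough sketch of the paragraph above, here is the shape of a RegisterTaskDefinition input that uses the `awsvpc` network mode (plain object, no SDK client; the role ARN and image are hypothetical placeholders):

```typescript
// Illustrative RegisterTaskDefinition input: family, containerDefinitions,
// and the awsvpc network mode.
const registerInput = {
  family: "example-family",
  networkMode: "awsvpc",
  taskRoleArn: "arn:aws:iam::111122223333:role/example-task-role", // hypothetical
  containerDefinitions: [
    {
      name: "web",
      image: "example/web:latest", // placeholder image
      portMappings: [{ containerPort: 80 }],
    },
  ],
};

// Because networkMode is "awsvpc", a NetworkConfiguration must be supplied
// later, when the task definition is used to create a service or run a task.
const needsNetworkConfiguration = registerInput.networkMode === "awsvpc";
console.log(needsNetworkConfiguration); // true
```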
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required\n depending on the requirements of your task. For more information, see Amazon ECS task\n execution IAM role in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "networkMode": { "target": "com.amazonaws.ecs#NetworkMode", "traits": { - "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none
, bridge
, awsvpc
, and host
.\n If no network mode is specified, the default is bridge
.
For Amazon ECS tasks on Fargate, the awsvpc
network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, the default network mode
or awsvpc
can be used. If the network\n mode is set to none
, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host
and awsvpc
network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge
mode.
With the host
and awsvpc
network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host
\n network mode) or the attached elastic network interface port (for the\n awsvpc
network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host
network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc
, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
For more information, see Network\n settings in the Docker run reference.
" + "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none
, bridge
, awsvpc
, and host
.\n If no network mode is specified, the default is bridge
.
For Amazon ECS tasks on Fargate, the awsvpc
network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, the default network mode
or awsvpc
can be used. If the network\n mode is set to none
, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host
and awsvpc
network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge
mode.
With the host
and awsvpc
network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host
\n network mode) or the attached elastic network interface port (for the\n awsvpc
network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host
network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc
, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
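The trade-off described above (dynamic host port mapping is only available with `bridge`, since `host` and `awsvpc` map container ports directly) can be summarized as:

```typescript
// Which network modes allow dynamic host port mapping, per the text above.
// "none" disallows port mappings entirely; "host" and "awsvpc" map ports
// directly, so only "bridge" supports dynamic host ports.
type NetworkMode = "none" | "bridge" | "awsvpc" | "host";

function supportsDynamicHostPorts(mode: NetworkMode): boolean {
  return mode === "bridge";
}

console.log(supportsDynamicHostPorts("bridge")); // true
console.log(supportsDynamicHostPorts("awsvpc")); // false
```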
The process namespace to use for the containers in the task. The valid\n values are host
or task
. On Fargate for\n Linux containers, the only valid value is task
. For\n example, monitoring sidecars might need pidMode
to access\n information about other containers running in the same task.
If host
is specified, all containers within the tasks\n that specified the host
PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task
is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container. For more information,\n see PID settings in the Docker run\n reference.
\nIf the host
PID mode is used, there's a heightened risk\n of undesired process namespace exposure. For more information, see\n Docker security.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The process namespace to use for the containers in the task. The valid\n values are host
or task
. On Fargate for\n Linux containers, the only valid value is task
. For\n example, monitoring sidecars might need pidMode
to access\n information about other containers running in the same task.
If host
is specified, all containers within the tasks\n that specified the host
PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task
is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container.
\nIf the host
PID mode is used, there's a heightened risk\n of undesired process namespace exposure.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The IPC resource namespace to use for the containers in the task. The valid values are\n host
, task
, or none
. If host
is\n specified, then all containers within the tasks that specified the host
IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task
is specified, all containers within the specified task\n share the same IPC resources. If none
is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance. For\n more information, see IPC\n settings in the Docker run reference.
If the host
IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure. For more information, see Docker\n security.
If you are setting namespaced kernel parameters using systemControls
for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host
IPC mode, IPC namespace related\n systemControls
are not supported.
For tasks that use the task
IPC mode, IPC namespace related\n systemControls
will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe IPC resource namespace to use for the containers in the task. The valid values are\n host
, task
, or none
. If host
is\n specified, then all containers within the tasks that specified the host
IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task
is specified, all containers within the specified task\n share the same IPC resources. If none
is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance.
If the host
IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls
for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host
IPC mode, IPC namespace related\n systemControls
are not supported.
For tasks that use the task
IPC mode, IPC namespace related\n systemControls
will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
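The `ipcMode` sharing semantics above can be sketched as a small lookup (plain TypeScript; the mode names are the documented valid values):

```typescript
// Who shares the IPC namespace for each ipcMode value, summarizing the
// paragraph above.
type IpcMode = "host" | "task" | "none";

function ipcSharedWith(mode: IpcMode): string {
  switch (mode) {
    case "host":
      return "host EC2 instance";      // shared with the container instance
    case "task":
      return "containers in the task"; // shared within the task only
    case "none":
      return "nobody";                 // private per container
  }
}

console.log(ipcSharedWith("task")); // "containers in the task"
```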
\nThe value for the specified resource type.
\nWhen the type is GPU
, the value is the number of physical GPUs
the\n\t\t\tAmazon ECS container agent reserves for the container. The number of GPUs that's reserved for\n\t\t\tall containers in a task can't exceed the number of available GPUs on the container\n\t\t\tinstance that the task is launched on.
When the type is InferenceAccelerator
, the value
matches\n\t\t\tthe deviceName
for an InferenceAccelerator specified in a task definition.
The value for the specified resource type.
\nWhen the type is GPU
, the value is the number of physical\n\t\t\t\tGPUs
the Amazon ECS container agent reserves for the container. The number\n\t\t\tof GPUs that's reserved for all containers in a task can't exceed the number of\n\t\t\tavailable GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator
, the value
matches the\n\t\t\t\tdeviceName
for an InferenceAccelerator specified in a task definition.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy
parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call\n\t\t\twith the startedBy
value. Up to 128 letters (uppercase and lowercase),\n\t\t\tnumbers, hyphens (-), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy
parameter\n\t\t\tcontains the deployment ID of the service that starts it.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy
parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call with\n\t\t\tthe startedBy
value. Up to 128 letters (uppercase and lowercase), numbers,\n\t\t\thyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy
parameter\n\t\t\tcontains the deployment ID of the service that starts it.
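A minimal sketch of the `startedBy` constraint above (up to 128 characters drawn from letters, numbers, hyphens, forward slashes, and underscores); the helper name is illustrative:

```typescript
// Validate a startedBy value against the documented character set and
// length limit, then use it both when starting and when filtering tasks.
function isValidStartedBy(value: string): boolean {
  return value.length <= 128 && /^[A-Za-z0-9_\/-]+$/.test(value);
}

console.log(isValidStartedBy("batch-job/2024-01-15_run1")); // true
console.log(isValidStartedBy("bad value with spaces"));     // false
```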
The family
and revision
(family:revision
) or\n\t\t\tfull ARN of the task definition to run. If a revision
isn't specified,\n\t\t\tthe latest ACTIVE
revision is used.
The full ARN value must match the value that you specified as the\n\t\t\t\tResource
of the principal's permissions policy.
When you specify a task definition, you must either specify a specific revision, or\n\t\t\tall revisions in the ARN.
\nTo specify a specific revision, include the revision number in the ARN. For example,\n\t\t\tto specify revision 2, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2
.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all\n\t\t\trevisions, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*
.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
", + "smithy.api#documentation": "The family
and revision
(family:revision
) or\n\t\t\tfull ARN of the task definition to run. If a revision
isn't specified,\n\t\t\tthe latest ACTIVE
revision is used.
The full ARN value must match the value that you specified as the\n\t\t\t\tResource
of the principal's permissions policy.
When you specify a task definition, you must either specify a specific revision, or\n\t\t\tall revisions in the ARN.
\nTo specify a specific revision, include the revision number in the ARN. For example,\n\t\t\tto specify revision 2, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2
.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify\n\t\t\tall revisions, use\n\t\t\t\tarn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*
.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
", "smithy.api#required": {} } }, @@ -9609,7 +9648,7 @@ "tasks": { "target": "com.amazonaws.ecs#Tasks", "traits": { - "smithy.api#documentation": "A full description of the tasks that were run. The tasks that were successfully placed\n\t\t\ton your cluster are described here.
\n " + "smithy.api#documentation": "A full description of the tasks that were run. The tasks that were successfully placed\n\t\t\ton your cluster are described here.
" } }, "failures": { @@ -10618,7 +10657,7 @@ "startedBy": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy
parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call\n\t\t\twith the startedBy
value. Up to 36 letters (uppercase and lowercase),\n\t\t\tnumbers, hyphens (-), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, the startedBy
parameter\n\t\t\tcontains the deployment ID of the service that starts it.
An optional tag specified when a task is started. For example, if you automatically\n\t\t\ttrigger a task to run a batch process job, you could apply a unique identifier for that\n\t\t\tjob to your task with the startedBy
parameter. You can then identify which\n\t\t\ttasks belong to that job by filtering the results of a ListTasks call with\n\t\t\tthe startedBy
value. Up to 36 letters (uppercase and lowercase), numbers,\n\t\t\thyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, the startedBy
parameter\n\t\t\tcontains the deployment ID of the service that starts it.
Stops a running task. Any tags associated with the task will be deleted.
\nWhen StopTask is called on a task, the equivalent of docker\n\t\t\t\tstop
is issued to the containers running in the task. This results in a\n\t\t\t\tSIGTERM
value and a default 30-second timeout, after which the\n\t\t\t\tSIGKILL
value is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM
value gracefully and exits within 30 seconds\n\t\t\tfrom receiving it, no SIGKILL
value is sent.
For Windows containers, POSIX signals do not work and the runtime stops the container by sending\n\t\t\ta CTRL_SHUTDOWN_EVENT
. For more information, see Unable to react to graceful shutdown\n\t\t\t\tof (Windows) container #25982 on GitHub.
The default 30-second timeout can be configured on the Amazon ECS container agent with\n\t\t\t\tthe ECS_CONTAINER_STOP_TIMEOUT
variable. For more information, see\n\t\t\t\t\tAmazon ECS Container Agent Configuration in the\n\t\t\t\tAmazon Elastic Container Service Developer Guide.
Stops a running task. Any tags associated with the task will be deleted.
\nWhen StopTask is called on a task, the equivalent of docker\n\t\t\t\tstop
is issued to the containers running in the task. This results in a\n\t\t\t\tSIGTERM
value and a default 30-second timeout, after which the\n\t\t\t\tSIGKILL
value is sent and the containers are forcibly stopped. If the\n\t\t\tcontainer handles the SIGTERM
value gracefully and exits within 30 seconds\n\t\t\tfrom receiving it, no SIGKILL
value is sent.
For Windows containers, POSIX signals do not work and the runtime stops the container by\n\t\t\tsending a CTRL_SHUTDOWN_EVENT
. For more information, see Unable to react to graceful shutdown\n\t\t\t\tof (Windows) container #25982 on GitHub.
The default 30-second timeout can be configured on the Amazon ECS container agent with\n\t\t\t\tthe ECS_CONTAINER_STOP_TIMEOUT
variable. For more information, see\n\t\t\t\t\tAmazon ECS Container Agent Configuration in the\n\t\t\t\tAmazon Elastic Container Service Developer Guide.
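The StopTask signal sequence above can be sketched as a tiny model (illustrative only; the real timeout is enforced by the container agent, configured via `ECS_CONTAINER_STOP_TIMEOUT`):

```typescript
// SIGTERM is sent first; SIGKILL follows only if the container outlives
// the stop timeout (default 30 seconds).
function signalsSent(exitAfterSeconds: number, stopTimeoutSeconds = 30): string[] {
  return exitAfterSeconds <= stopTimeoutSeconds
    ? ["SIGTERM"]              // container exited gracefully
    : ["SIGTERM", "SIGKILL"];  // container was forcibly stopped
}

console.log(signalsSent(5));  // [ 'SIGTERM' ]
console.log(signalsSent(45)); // [ 'SIGTERM', 'SIGKILL' ]
```

In practice this means a container should handle SIGTERM, finish its cleanup within the stop timeout, and exit on its own to avoid the SIGKILL.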
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\t\tSysctls
in the Create a container section of the\n\t\t\tDocker Remote API and the --sysctl
option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time
setting to maintain longer lived\n\t\t\tconnections.
We don't recommend that you specify network-related systemControls
\n\t\t\tparameters for multiple containers in a single task that also uses either the\n\t\t\t\tawsvpc
or host
network mode. Doing this has the following\n\t\t\tdisadvantages:
For tasks that use the awsvpc
network mode including Fargate,\n\t\t\t\t\tif you set systemControls
for any container, it applies to all\n\t\t\t\t\tcontainers in the task. If you set different systemControls
for\n\t\t\t\t\tmultiple containers in a single task, the container that's started last\n\t\t\t\t\tdetermines which systemControls
take effect.
For tasks that use the host
network mode, the network namespace\n\t\t\t\t\t\tsystemControls
aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the\n\t\t\tfollowing conditions apply to your system controls. For more information, see IPC mode.
\nFor tasks that use the host
IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls
aren't supported.
For tasks that use the task
IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls
values apply to all containers within a\n\t\t\t\t\ttask.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
A list of namespaced kernel parameters to set in the container. This parameter maps to\n\t\t\tSysctls
in the docker create-container command and the --sysctl
option to docker run. For example, you can configure\n\t\t\t\tnet.ipv4.tcp_keepalive_time
setting to maintain longer lived\n\t\t\tconnections.
We don't recommend that you specify network-related systemControls
\n\t\t\tparameters for multiple containers in a single task that also uses either the\n\t\t\t\tawsvpc
or host
network mode. Doing this has the following\n\t\t\tdisadvantages:
For tasks that use the awsvpc
network mode including Fargate,\n\t\t\t\t\tif you set systemControls
for any container, it applies to all\n\t\t\t\t\tcontainers in the task. If you set different systemControls
for\n\t\t\t\t\tmultiple containers in a single task, the container that's started last\n\t\t\t\t\tdetermines which systemControls
take effect.
For tasks that use the host
network mode, the network namespace\n\t\t\t\t\t\tsystemControls
aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the\n\t\t\tfollowing conditions apply to your system controls. For more information, see IPC mode.
\nFor tasks that use the host
IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls
aren't supported.
For tasks that use the task
IPC mode, IPC namespace\n\t\t\t\t\t\tsystemControls
values apply to all containers within a\n\t\t\t\t\ttask.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
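A minimal sketch of a `systemControls` entry setting the kernel parameter mentioned above (the `namespace`/`value` field names mirror the documented shape; the value is illustrative):

```typescript
// One systemControls entry, equivalent in spirit to the --sysctl option:
// tune net.ipv4.tcp_keepalive_time to maintain longer-lived connections.
interface SystemControl {
  namespace: string; // the namespaced kernel parameter
  value: string;     // its value, as a string
}

const systemControls: SystemControl[] = [
  { namespace: "net.ipv4.tcp_keepalive_time", value: "500" }, // illustrative value
];

console.log(systemControls[0].namespace); // "net.ipv4.tcp_keepalive_time"
```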
The specified target wasn't found. You can view your available container instances\n\t\t\twith ListContainerInstances. Amazon ECS container instances are\n\t\t\tcluster-specific and Region-specific.
", + "smithy.api#documentation": "The specified target wasn't found. You can view your available container instances\n\t\t\twith ListContainerInstances. Amazon ECS container instances are cluster-specific and\n\t\t\tRegion-specific.
", "smithy.api#error": "client" } }, @@ -11473,19 +11512,19 @@ "taskRoleArn": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the\n\t\t\ttask permission to call Amazon Web Services APIs on your behalf. For more information, see Amazon ECS\n\t\t\t\tTask Role in the Amazon Elastic Container Service Developer Guide.
\nIAM roles for tasks on Windows require that the -EnableTaskIAMRole
\n\t\t\toption is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some\n\t\t\tconfiguration code to use the feature. For more information, see Windows IAM roles\n\t\t\t\tfor tasks in the Amazon Elastic Container Service Developer Guide.
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the\n\t\t\ttask permission to call Amazon Web Services APIs on your behalf. For informationabout the required\n\t\t\tIAM roles for Amazon ECS, see IAM\n\t\t\t\troles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "executionRoleArn": { "target": "com.amazonaws.ecs#String", "traits": { - "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. The task execution IAM role is required\n depending on the requirements of your task. For more information, see Amazon ECS task\n execution IAM role in the Amazon Elastic Container Service Developer Guide.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent\n permission to make Amazon Web Services API calls on your behalf. For informationabout the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
" } }, "networkMode": { "target": "com.amazonaws.ecs#NetworkMode", "traits": { - "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none
, bridge
, awsvpc
, and host
.\n If no network mode is specified, the default is bridge
.
For Amazon ECS tasks on Fargate, the awsvpc
network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, the default network mode
or awsvpc
can be used. If the network\n mode is set to none
, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host
and awsvpc
network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge
mode.
With the host
and awsvpc
network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host
\n network mode) or the attached elastic network interface port (for the\n awsvpc
network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host
network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc
, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
For more information, see Network\n settings in the Docker run reference.
" + "smithy.api#documentation": "The Docker networking mode to use for the containers in the task. The valid values are\n none
, bridge
, awsvpc
, and host
.\n If no network mode is specified, the default is bridge
.
For Amazon ECS tasks on Fargate, the awsvpc
network mode is required. \n For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, the default network mode
or awsvpc
can be used. If the network\n mode is set to none
, you cannot specify port mappings in your container\n definitions, and the task's containers do not have external connectivity. The\n host
and awsvpc
network modes offer the highest networking\n performance for containers because they use the EC2 network stack instead of the\n virtualized network stack provided by the bridge
mode.
With the host
and awsvpc
network modes, exposed container\n ports are mapped directly to the corresponding host port (for the host
\n network mode) or the attached elastic network interface port (for the\n awsvpc
network mode), so you cannot take advantage of dynamic host port\n mappings.
When using the host
network mode, you should not run\n containers using the root user (UID 0). It is considered best practice\n to use a non-root user.
If the network mode is awsvpc
, the task is allocated an elastic network\n interface, and you must specify a NetworkConfiguration value when you create\n a service or run a task with the task definition. For more information, see Task Networking in the\n Amazon Elastic Container Service Developer Guide.
If the network mode is host
, you cannot run multiple instantiations of the\n same task on a single container instance when port mappings are used.
The number of cpu
units used by the task. If you use the EC2 launch type,\n\t\t\tthis field is optional. Any value can be used. If you use the Fargate launch type, this\n\t\t\tfield is required. You must use one of the following values. The value that you choose\n\t\t\tdetermines your range of valid values for the memory
parameter.
The CPU units cannot be less than 1 vCPU when you use Windows containers on\n\t\t\tFargate.
\n256 (.25 vCPU) - Available memory
values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory
values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory
values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory
values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory
values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
8192 (8 vCPU) - Available memory
values: 16 GB and 60 GB in 4 GB increments
This option requires Linux platform 1.4.0
or\n later.
16384 (16 vCPU) - Available memory
values: 32 GB and 120 GB in 8 GB increments
This option requires Linux platform 1.4.0
or\n later.
The number of cpu
units used by the task. If you use the EC2 launch type,\n\t\t\tthis field is optional. If you use the Fargate launch type, this\n\t\t\tfield is required. You must use one of the following values. The value that you choose\n\t\t\tdetermines your range of valid values for the memory
parameter.
If you use the EC2 launch type, this field is optional. Supported values\n\t\t\tare between 128
CPU units (0.125
vCPUs) and 10240
\n\t\t\tCPU units (10
vCPUs).
The CPU units cannot be less than 1 vCPU when you use Windows containers on\n\t\t\tFargate.
\n256 (.25 vCPU) - Available memory
values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory
values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory
values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory
values: 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory
values: 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
8192 (8 vCPU) - Available memory
values: 16 GB and 60 GB in 4 GB increments
This option requires Linux platform 1.4.0
or\n later.
16384 (16 vCPU) - Available memory
values: 32 GB and 120 GB in 8 GB increments
This option requires Linux platform 1.4.0
or\n later.
The process namespace to use for the containers in the task. The valid\n values are host
or task
. On Fargate for\n Linux containers, the only valid value is task
. For\n example, monitoring sidecars might need pidMode
to access\n information about other containers running in the same task.
If host
is specified, all containers within the tasks\n that specified the host
PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task
is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container. For more information,\n see PID settings in the Docker run\n reference.
\nIf the host
PID mode is used, there's a heightened risk\n of undesired process namespace exposure. For more information, see\n Docker security.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The process namespace to use for the containers in the task. The valid\n values are host
or task
. On Fargate for\n Linux containers, the only valid value is task
. For\n example, monitoring sidecars might need pidMode
to access\n information about other containers running in the same task.
If host
is specified, all containers within the tasks\n that specified the host
PID mode on the same container\n instance share the same process namespace with the host Amazon EC2\n instance.
If task
is specified, all containers within the specified\n task share the same process namespace.
If no value is specified, the\n default is a private namespace for each container.
\nIf the host
PID mode is used, there's a heightened risk\n of undesired process namespace exposure.
This parameter is not supported for Windows containers.
\nThis parameter is only supported for tasks that are hosted on\n Fargate if the tasks are using platform version 1.4.0
or later\n (Linux). This isn't supported for Windows containers on\n Fargate.
The IPC resource namespace to use for the containers in the task. The valid values are\n host
, task
, or none
. If host
is\n specified, then all containers within the tasks that specified the host
IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task
is specified, all containers within the specified task\n share the same IPC resources. If none
is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance. For\n more information, see IPC\n settings in the Docker run reference.
If the host
IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure. For more information, see Docker\n security.
If you are setting namespaced kernel parameters using systemControls
for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host
IPC mode, IPC namespace related\n systemControls
are not supported.
For tasks that use the task
IPC mode, IPC namespace related\n systemControls
will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe IPC resource namespace to use for the containers in the task. The valid values are\n host
, task
, or none
. If host
is\n specified, then all containers within the tasks that specified the host
IPC\n mode on the same container instance share the same IPC resources with the host Amazon EC2\n instance. If task
is specified, all containers within the specified task\n share the same IPC resources. If none
is specified, then IPC resources\n within the containers of a task are private and not shared with other containers in a\n task or on the container instance. If no value is specified, then the IPC resource\n namespace sharing depends on the Docker daemon setting on the container instance.
If the host
IPC mode is used, be aware that there is a heightened risk of\n undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls
for\n the containers in the task, the following will apply to your IPC resource namespace. For\n more information, see System\n Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host
IPC mode, IPC namespace related\n systemControls
are not supported.
For tasks that use the task
IPC mode, IPC namespace related\n systemControls
will apply to all containers within a\n task.
This parameter is not supported for Windows containers or tasks run on Fargate.
\nThe total amount, in GiB, of the ephemeral storage to set for the task. The minimum \t\t\n\t\t\tsupported value is 20
GiB and the maximum supported value is 200
\n\t\t\tGiB.
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum\n\t\t\tsupported value is 20
GiB and the maximum supported value is\n\t\t\t\t200
GiB.
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
" + "smithy.api#documentation": "Specify an Key Management Service key ID to encrypt the ephemeral storage for the\n\t\t\ttask.
" } } }, @@ -12285,7 +12324,7 @@ } }, "traits": { - "smithy.api#documentation": "The ulimit
settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile
resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile
resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile
soft limit is 1024
and the default hard limit\n\t\t\t\t\t\t\tis 65535
.
You can specify the ulimit
settings for a container in a task\n\t\t\tdefinition.
The ulimit
settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default\n\t\t\t\t\t\t\tresource limit values set by the operating system with the exception of\n\t\t\t\t\t\t\tthe nofile
resource limit parameter which Fargate\n\t\t\t\t\t\t\toverrides. The nofile
resource limit sets a restriction on\n\t\t\t\t\t\t\tthe number of open files that a container can use. The default\n\t\t\t\t\t\t\t\tnofile
soft limit is 65535
and the default hard limit\n\t\t\t\t\t\t\tis 65535
.
You can specify the ulimit
settings for a container in a task\n\t\t\tdefinition.
Modifies the status of an Amazon ECS container instance.
\nOnce a container instance has reached an ACTIVE
state, you can change the\n\t\t\tstatus of a container instance to DRAINING
to manually remove an instance\n\t\t\tfrom a cluster, for example to perform system updates, update the Docker daemon, or\n\t\t\tscale down the cluster size.
A container instance can't be changed to DRAINING
until it has\n\t\t\t\treached an ACTIVE
status. If the instance is in any other status, an\n\t\t\t\terror is returned.
When you set a container instance to DRAINING
, Amazon ECS prevents new tasks\n\t\t\tfrom being scheduled for placement on the container instance and replacement service\n\t\t\ttasks are started on other container instances in the cluster if the resources are\n\t\t\tavailable. Service tasks on the container instance that are in the PENDING
\n\t\t\tstate are stopped immediately.
Service tasks on the container instance that are in the RUNNING
state are\n\t\t\tstopped and replaced according to the service's deployment configuration parameters,\n\t\t\t\tminimumHealthyPercent
and maximumPercent
. You can change\n\t\t\tthe deployment configuration of your service using UpdateService.
If minimumHealthyPercent
is below 100%, the scheduler can ignore\n\t\t\t\t\t\tdesiredCount
temporarily during task replacement. For example, if\n\t\t\t\t\t\tdesiredCount
is four tasks, a minimum of 50% allows the\n\t\t\t\t\tscheduler to stop two existing tasks before starting two new tasks. If the\n\t\t\t\t\tminimum is 100%, the service scheduler can't remove existing tasks until the\n\t\t\t\t\treplacement tasks are considered healthy. Tasks for services that do not use a\n\t\t\t\t\tload balancer are considered healthy if they're in the RUNNING
\n\t\t\t\t\tstate. Tasks for services that use a load balancer are considered healthy if\n\t\t\t\t\tthey're in the RUNNING
state and are reported as healthy by the\n\t\t\t\t\tload balancer.
The maximumPercent
parameter represents an upper limit on the\n\t\t\t\t\tnumber of running tasks during task replacement. You can use this to define the\n\t\t\t\t\treplacement batch size. For example, if desiredCount
is four tasks,\n\t\t\t\t\ta maximum of 200% starts four new tasks before stopping the four tasks to be\n\t\t\t\t\tdrained, provided that the cluster resources required to do this are available.\n\t\t\t\t\tIf the maximum is 100%, then replacement tasks can't start until the draining\n\t\t\t\t\ttasks have stopped.
Any PENDING
or RUNNING
tasks that do not belong to a service\n\t\t\taren't affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more RUNNING
\n\t\t\ttasks. You can verify this using ListTasks.
When a container instance has been drained, you can set it to\n\t\t\t\tACTIVE
status. Once it has reached that status, the Amazon ECS scheduler\n\t\t\tcan begin scheduling tasks on the instance again.
Modifies the status of an Amazon ECS container instance.
\nOnce a container instance has reached an ACTIVE
state, you can change the\n\t\t\tstatus of a container instance to DRAINING
to manually remove an instance\n\t\t\tfrom a cluster, for example to perform system updates, update the Docker daemon, or\n\t\t\tscale down the cluster size.
A container instance can't be changed to DRAINING
until it has\n\t\t\t\treached an ACTIVE
status. If the instance is in any other status, an\n\t\t\t\terror is returned.
When you set a container instance to DRAINING
, Amazon ECS prevents new tasks\n\t\t\tfrom being scheduled for placement on the container instance and replacement service\n\t\t\ttasks are started on other container instances in the cluster if the resources are\n\t\t\tavailable. Service tasks on the container instance that are in the PENDING
\n\t\t\tstate are stopped immediately.
Service tasks on the container instance that are in the RUNNING
state are\n\t\t\tstopped and replaced according to the service's deployment configuration parameters,\n\t\t\t\tminimumHealthyPercent
and maximumPercent
. You can change\n\t\t\tthe deployment configuration of your service using UpdateService.
If minimumHealthyPercent
is below 100%, the scheduler can ignore\n\t\t\t\t\t\tdesiredCount
temporarily during task replacement. For example, if\n\t\t\t\t\t\tdesiredCount
is four tasks, a minimum of 50% allows the\n\t\t\t\t\tscheduler to stop two existing tasks before starting two new tasks. If the\n\t\t\t\t\tminimum is 100%, the service scheduler can't remove existing tasks until the\n\t\t\t\t\treplacement tasks are considered healthy. Tasks for services that do not use a\n\t\t\t\t\tload balancer are considered healthy if they're in the RUNNING
\n\t\t\t\t\tstate. Tasks for services that use a load balancer are considered healthy if\n\t\t\t\t\tthey're in the RUNNING
state and are reported as healthy by the\n\t\t\t\t\tload balancer.
The maximumPercent
parameter represents an upper limit on the\n\t\t\t\t\tnumber of running tasks during task replacement. You can use this to define the\n\t\t\t\t\treplacement batch size. For example, if desiredCount
is four tasks,\n\t\t\t\t\ta maximum of 200% starts four new tasks before stopping the four tasks to be\n\t\t\t\t\tdrained, provided that the cluster resources required to do this are available.\n\t\t\t\t\tIf the maximum is 100%, then replacement tasks can't start until the draining\n\t\t\t\t\ttasks have stopped.
Any PENDING
or RUNNING
tasks that do not belong to a service\n\t\t\taren't affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more RUNNING
\n\t\t\ttasks. You can verify this using ListTasks.
When a container instance has been drained, you can set it to\n\t\t\t\tACTIVE
status. Once it has reached that status, the Amazon ECS scheduler\n\t\t\tcan begin scheduling tasks on the instance again.