fix the last of the commands

dfangl committed Oct 6, 2021
1 parent 7358ba4 commit d0a18f3
Showing 19 changed files with 105 additions and 106 deletions.
33 changes: 16 additions & 17 deletions content/en/docs/Integrations/aws-cli/index.md
@@ -13,38 +13,37 @@ All CLI commands that access [services that are implemented in LocalStack]({{< r
There are two ways to use the CLI:

* Use our `awslocal` drop-in replacement (see the install note after this list):
```
awslocal kinesis list-streams
```
{{< command >}}
$ awslocal kinesis list-streams
{{< / command >}}
* Configure AWS test environment variables and add the `--endpoint-url=<localstack-url>` flag to your `aws` CLI invocations.
For example:
```
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"
{{< command >}}
$ export AWS_ACCESS_KEY_ID="test"
$ export AWS_SECRET_ACCESS_KEY="test"
$ export AWS_DEFAULT_REGION="us-east-1"

aws --endpoint-url=http://localhost:4566 kinesis list-streams
```
$ aws --endpoint-url=http://localhost:4566 kinesis list-streams
{{< / command >}}
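If `awslocal` is not installed yet, it can be installed via `pip` (a minimal sketch; `awscli-local` is the package name used by the LocalStack project):

{{< command >}}
$ pip install awscli-local
{{< / command >}}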

## AWS CLI

If `aws` is not installed already, use the command below to install it.

```
pip install awscli
```
{{< command >}}
$ pip install awscli
{{< / command >}}
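To verify the installation, print the CLI version (the exact output depends on the installed release):

{{< command >}}
$ aws --version
{{< / command >}}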

### Setting up local region and credentials to run LocalStack

`aws` requires the region and the credentials to be set in order to run AWS commands.
Create the default configuration and credentials.
The command below will prompt for the Access Key ID, Secret Access Key, region, and output format.
The config and credentials files will be created under the `~/.aws` folder.

```
aws configure --profile default
# Config & credential file will be created under ~/.aws folder
```
{{< command >}}
$ aws configure --profile default
{{< / command >}}
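Alternatively, the same profile can be created non-interactively via `aws configure set` (a sketch using the `test` values recommended in the note below):

{{< command >}}
$ aws configure set aws_access_key_id test --profile default
$ aws configure set aws_secret_access_key test --profile default
$ aws configure set region us-east-1 --profile default
{{< / command >}}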

{{< alert >}}
**Note** Please use `test` as the value for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to make pre-signed URLs for S3 buckets work.
8 changes: 4 additions & 4 deletions content/en/docs/Integrations/pulumi/index.md
@@ -22,10 +22,10 @@ This guide follows the instructions from Pulumi's [Get Started with Pulumi and A

First, run the following commands and follow the instructions in the CLI to create a new project.

```
mkdir quickstart && cd quickstart
pulumi new aws-typescript
```
{{< command >}}
$ mkdir quickstart && cd quickstart
$ pulumi new aws-typescript
{{< / command >}}

We use the default configuration values:

12 changes: 6 additions & 6 deletions content/en/docs/Integrations/spring-cloud-function/index.md
@@ -79,16 +79,16 @@ install the Gradle build tool on your machine.

Then run the following command to initialize a new Gradle project:

```shell
gradle init
```
{{< command >}}
$ gradle init
{{< / command >}}

After initialization, you will find the Gradle wrapper script `gradlew`.
From now on, we will use the wrapper instead of the globally installed Gradle binary:

```
./gradlew <command>
```
{{< command >}}
$ ./gradlew <command>
{{< / command >}}
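For example, a full build including tests uses the standard Gradle lifecycle task:

{{< command >}}
$ ./gradlew build
{{< / command >}}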

### Project Settings

@@ -8,8 +8,8 @@ description: >

A basic version of Elastic Container Registry (ECR) is available to store application images. ECR is often used in combination with other APIs that deploy containerized apps, like ECS or EKS.

```
$ awslocal ecr create-repository --repository-name repo1
{{< command >}}
$ awslocal ecr create-repository --repository-name repo1
{
"repository": {
"repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/repo1",
@@ -18,10 +18,10 @@ $ awslocal ecr create-repository --repository-name repo1
"repositoryUri": "localhost:4510/repo1"
}
}
```
{{< / command >}}

You can then build and tag a new Docker image, and push it to the repository URL (`localhost:4510/repo1` in the example above):
```
{{< command >}}
$ cat Dockerfile
FROM nginx
ENV foo=bar
@@ -36,4 +36,4 @@ fe08d5d042ab: Pushed
f2cb0ecef392: Pushed
latest: digest: sha256:4dd893a43df24c8f779a5ab343b7ef172fb147c69ed5e1278d95b97fe0f584a5 size: 948
...
```
{{< / command >}}
@@ -12,7 +12,7 @@ Please note that EKS requires an existing local Kubernetes installation. In rece
![Kubernetes in Docker](kubernetes.png)

The example below illustrates how to create an EKS cluster configuration (assuming you have [`awslocal`](https://github.com/localstack/awscli-local) installed):
```
{{< command >}}
$ awslocal eks create-cluster --name cluster1 --role-arn r1 --resources-vpc-config '{}'
{
"cluster": {
@@ -30,5 +30,5 @@ $ awslocal eks list-clusters
"cluster1"
]
}
```
{{< / command >}}
Simply configure your Kubernetes client (e.g., `kubectl` or another SDK) to point to the `endpoint` specified in the `create-cluster` output above. Depending on whether you're calling the Kubernetes API from the local machine or from within a Lambda, you may have to use different endpoint URLs (`https://localhost:6443` vs `https://172.17.0.1:6443`).
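For example, a minimal `kubectl` setup could look like the sketch below (the cluster and context names are made up, and `--insecure-skip-tls-verify` assumes the local endpoint serves a self-signed certificate):

{{< command >}}
$ kubectl config set-cluster localstack-eks --server=https://localhost:6443 --insecure-skip-tls-verify=true
$ kubectl config set-context localstack-eks --cluster=localstack-eks
$ kubectl config use-context localstack-eks
$ kubectl get nodes
{{< / command >}}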
4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/elastic-mapreduce/index.md
@@ -9,12 +9,12 @@ description: >
LocalStack Pro allows running data analytics workloads locally via the [EMR](https://aws.amazon.com/emr) API. EMR utilizes various tools in the [Hadoop](https://hadoop.apache.org/) and [Spark](https://spark.apache.org) ecosystem, and your EMR instance is automatically configured to connect seamlessly to the LocalStack S3 API.

To create a virtual EMR cluster locally from the command line (assuming you have [`awslocal`](https://github.com/localstack/awscli-local) installed):
```
{{< command >}}
$ awslocal emr create-cluster --release-label emr-5.9.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.large
{
"ClusterId": "j-A2KF3EKLAOWRI"
}
```
{{< / command >}}

The command above will spin up one or more Docker containers on your local machine that can be used to run analytics workloads using Spark, Hadoop, Pig, and other tools.
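Once the cluster is running, a workload can be submitted as a step. The sketch below reuses the `ClusterId` from the output above and assumes the stock Spark examples JAR that ships with EMR images:

{{< command >}}
$ awslocal emr add-steps --cluster-id j-A2KF3EKLAOWRI \
    --steps 'Type=Spark,Name=SparkPi,ActionOnFailure=CONTINUE,Args=[--class,org.apache.spark.examples.SparkPi,/usr/lib/spark/examples/jars/spark-examples.jar,10]'
{{< / command >}}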

8 changes: 4 additions & 4 deletions content/en/docs/Local AWS Services/elasticache/index.md
@@ -9,7 +9,7 @@ description: >
A basic version of [ElastiCache](https://aws.amazon.com/elasticache/) is provided. By default, the API is started on http://localhost:4598 and supports running a local Redis instance (Memcached support coming soon).

After starting LocalStack Pro, you can test the following commands:
```
{{< command >}}
$ awslocal elasticache create-cache-cluster --cache-cluster-id i1
{
"CacheCluster": {
@@ -20,14 +20,14 @@ $ awslocal elasticache create-cache-cluster --cache-cluster-id i1
}
}
}
```
{{< / command >}}

Then use the returned port number (`4530`) to connect to the Redis instance:
```
{{< command >}}
$ redis-cli -p 4530 ping
PONG
$ redis-cli -p 4530 set foo bar
OK
$ redis-cli -p 4530 get foo
"bar"
```
{{< / command >}}
28 changes: 14 additions & 14 deletions content/en/docs/Local AWS Services/glue/index.md
@@ -15,7 +15,7 @@ In order to run Glue jobs, some additional dependencies have to be fetched from
## Creating Databases and Table Metadata

The commands below illustrate the creation of some very basic entries (databases, tables) in the Glue data catalog:
```
{{< command >}}
$ awslocal glue create-database --database-input '{"Name":"db1"}'
$ awslocal glue create-table --database-name db1 --table-input '{"Name":"table1"}'
$ awslocal glue get-tables --database-name db1
@@ -27,30 +27,30 @@ $ awslocal glue get-tables --database-name db1
}
]
}
```
{{< / command >}}

## Running Scripts with Scala and PySpark

Assuming we would like to deploy a simple PySpark script `job.py` in the local folder, we can first copy the script to an S3 bucket:
```
{{< command >}}
$ awslocal s3 mb s3://glue-test
$ awslocal s3 cp job.py s3://glue-test/job.py
```
{{< / command >}}
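The script itself can be minimal. As an illustration (the content below is a made-up placeholder, not part of the sample repository), a trivial `job.py` could be created like this:

{{< command >}}
$ cat > job.py <<'EOF'
# minimal job: just prove the script executes
print("Hello from local Glue!")
EOF
{{< / command >}}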

Next, we can create a job definition:
```
{{< command >}}
$ awslocal glue create-job --name job1 --role r1 \
--command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
```
{{< / command >}}
... and finally start the job:
```
{{< command >}}
$ awslocal glue start-job-run --job-name job1
{
"JobRunId": "733b76d0"
}
```
{{< / command >}}
The returned `JobRunId` can be used to query the status of the job execution, until it becomes `SUCCEEDED`:
```
{{< command >}}
$ awslocal glue get-job-run --job-name job1 --run-id 733b76d0
{
"JobRun": {
@@ -59,7 +59,7 @@ $ awslocal glue get-job-run --job-name job1 --run-id 733b76d0
"JobRunState": "SUCCEEDED"
}
}
```
{{< / command >}}

For a more detailed example illustrating how to run a local Glue PySpark job, please refer to this [sample repository](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs).

@@ -75,12 +75,12 @@ CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://tes
```

Then this command will import these DB/table definitions into the Glue data catalog:
```
{{< command >}}
$ awslocal glue import-catalog-to-glue
```
{{< / command >}}

... and finally they will be available in Glue:
```
{{< command >}}
$ awslocal glue get-databases
{
"DatabaseList": [
@@ -112,7 +112,7 @@ $ awslocal glue get-tables --database-name db2
}
]
}
```
{{< / command >}}

## Further Reading

4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/iam/index.md
@@ -14,7 +14,7 @@ The environment configuration `ENFORCE_IAM=1` is required to enable this feature
{{< /alert >}}

Below is a simple example that illustrates the use of IAM policy enforcement. It first attempts to create an S3 bucket with the default user (which fails), then creates a user and attempts to create a bucket with that user (which fails again), and finally attaches a policy to the user to allow `s3:CreateBucket`, which allows the bucket to be created.
```
{{< command >}}
$ awslocal s3 mb s3://test
make_bucket failed: s3://test An error occurred (AccessDeniedException) when calling the CreateBucket operation: Access to the specified resource is denied
$ awslocal iam create-user --user-name test
@@ -32,7 +32,7 @@ $ awslocal iam create-policy --policy-name p1 --policy-document '{"Version":"201
$ awslocal iam attach-user-policy --user-name test --policy-arn arn:aws:iam::000000000000:policy/p1
$ awslocal s3 mb s3://test
make_bucket: test
```
{{< / command >}}

### Supported APIs

4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/iot/index.md
@@ -9,12 +9,12 @@ description: >
Basic support for [IoT](https://aws.amazon.com/iot/) (including IoT Analytics, IoT Data, and related APIs) is provided in the Pro version. The main endpoints for creating and updating entities are currently implemented, as well as the CloudFormation integrations for creating them.

The IoT API ships with a built-in MQTT message broker. In order to get the MQTT endpoint, the `describe-endpoint` API can be used; for example, using [`awslocal`](https://github.com/localstack/awscli-local):
```
{{< command >}}
$ awslocal iot describe-endpoint
{
"endpointAddress": "localhost:4520"
}
```
{{< / command >}}

This endpoint can then be used with any MQTT client to send/receive messages (e.g., using the endpoint URL `mqtt://localhost:4520`).
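For example, with the Mosquitto command-line clients (installed separately), a quick publish/subscribe round trip could look like this sketch (the topic name is arbitrary):

{{< command >}}
$ mosquitto_sub -h localhost -p 4520 -t "demo/topic" &
$ mosquitto_pub -h localhost -p 4520 -t "demo/topic" -m "hello from LocalStack"
{{< / command >}}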

12 changes: 6 additions & 6 deletions content/en/docs/Local AWS Services/multi-account-setups/index.md
@@ -9,12 +9,12 @@ description: >
Unlike the open source LocalStack, which uses a single hardcoded account ID (`000000000000`), the Pro version allows you to use multiple instances for different AWS account IDs in parallel.

In order to set up a multi-account environment, simply configure the `TEST_AWS_ACCOUNT_ID` to include a comma-separated list of account IDs. For example, use the following to start up LocalStack with two account IDs:
```
{{< command >}}
$ TEST_AWS_ACCOUNT_ID=000000000001,000000000002 SERVICES=s3 localstack start
```
{{< / command >}}

You can then use `AWS_ACCESS_KEY_ID` to address resources in the two separate account instances:
```
{{< command >}}
$ AWS_ACCESS_KEY_ID=000000000001 aws --endpoint-url=http://localhost:4566 s3 mb s3://bucket-account-one
make_bucket: bucket-account-one
$ AWS_ACCESS_KEY_ID=000000000002 aws --endpoint-url=http://localhost:4566 s3 mb s3://bucket-account-two
@@ -23,13 +23,13 @@ $ AWS_ACCESS_KEY_ID=000000000001 aws --endpoint-url=http://localhost:4566 s3 ls
2020-05-24 17:09:41 bucket-account-one
$ AWS_ACCESS_KEY_ID=000000000002 aws --endpoint-url=http://localhost:4566 s3 ls
2020-05-24 17:09:53 bucket-account-two
```
{{< / command >}}

Note that using an invalid account ID should result in a 404 (not found) error response from the API:
```
{{< command >}}
$ AWS_ACCESS_KEY_ID=123000000123 aws --endpoint-url=http://localhost:4566 s3 ls
An error occurred (404) when calling the ListBuckets operation: Not Found
```
{{< / command >}}

{{< alert >}}
**Note:** For now, the account ID is encoded directly in the `AWS_ACCESS_KEY_ID` client-side variable, for simplicity. In a future version, we will support proper access key IDs issued by the local IAM service, which will then internally be translated to corresponding account IDs.
4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/rds/index.md
@@ -9,7 +9,7 @@ description: >
LocalStack supports a basic version of [RDS](https://aws.amazon.com/rds/) for testing. Currently, it is possible to spin up PostgreSQL databases on the local machine; support for MySQL and other DB engines is under development and coming soon.

The local RDS service also supports the [RDS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html), which allows executing data queries over a JSON/REST interface. Below is a simple example that illustrates (1) creation of an RDS database, (2) creation of a SecretsManager secret with the DB password, and (3) running a simple `SELECT 123` query via the RDS Data API.
```
{{< command >}}
$ awslocal rds create-db-instance --db-instance-identifier db1 --db-instance-class c1 --engine postgres
...
$ awslocal secretsmanager create-secret --name dbpass --secret-string test
@@ -28,4 +28,4 @@ $ awslocal rds-data execute-statement --database test --resource-arn arn:aws:rds
{ "doubleValue": 123 }
]]
}
```
{{< / command >}}
4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/route53/index.md
@@ -9,14 +9,14 @@ description: >
The Route53 API in LocalStack Pro allows you to create hosted zones and to manage DNS entries (e.g., A records) which can then be queried via the built-in DNS server.

The example below illustrates the creation of a hosted zone `example.com`, registration of an A record named `test.example.com` that points to `1.2.3.4`, and finally querying the DNS record by using the `dig` command against the DNS server running on `localhost` (inside the LocalStack container, on port `53`):
```
{{< command >}}
$ zone_id=$(awslocal route53 create-hosted-zone --name example.com --caller-reference r1 | jq -r '.HostedZone.Id')
$ awslocal route53 change-resource-record-sets --hosted-zone-id $zone_id --change-batch 'Changes=[{Action=CREATE,ResourceRecordSet={Name=test.example.com,Type=A,ResourceRecords=[{Value=1.2.3.4}]}}]'
$ dig @localhost test.example.com
...
;; ANSWER SECTION:
test.example.com. 300 IN A 1.2.3.4
```
{{< / command >}}
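The record can also be verified against the Route53 API itself, reusing the `$zone_id` captured above:

{{< command >}}
$ awslocal route53 list-resource-record-sets --hosted-zone-id $zone_id
{{< / command >}}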

{{< alert >}}
**Note**: Using the built-in DNS capabilities requires privileged access for the LocalStack container (please also refer to the `DNS_ADDRESS` configuration variable).
4 changes: 2 additions & 2 deletions content/en/docs/Local AWS Services/ses/index.md
@@ -11,9 +11,9 @@ The Pro version ships with extended support for Simple Email Service (SES), includin
Please refer to the [Configuration section]({{< ref "#configuration" >}}) for instructions on how to configure the connection parameters of your SMTP server (`SMTP_HOST`/`SMTP_USER`/`SMTP_PASS`).

Once your SMTP server has been configured, you can use the SES user interface in the Web app to create a new email account (e.g., `user1@yourdomain.com`), and then send an email via the command line (or your SES client SDK):
```
{{< command >}}
$ awslocal ses send-email --from user1@yourdomain.com --message 'Body={Text={Data="Lorem ipsum dolor sit amet, consectetur adipiscing elit, ..."}},Subject={Data=Test Email}' --destination 'ToAddresses=recipient1@example.com'
```
{{< / command >}}
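If you prefer the CLI over the Web app for registering the sender address, the standard SES verification call is a possible alternative (this assumes LocalStack's SES implementation covers `verify-email-identity`):

{{< command >}}
$ awslocal ses verify-email-identity --email-address user1@yourdomain.com
{{< / command >}}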

The [Web user interface](https://app.localstack.cloud) then allows you to interactively browse through the sent email messages, as illustrated in the screenshot below:

@@ -42,9 +42,9 @@ services:
Open the `Run/Debug Configurations` window and create a new `Shell Script` with
the following content, which blocks until a container exposing port 5050 shows up in `docker ps`:

```shell
while [[ -z $(docker ps | grep :5050) ]]; do sleep 1; done
```
{{< command >}}
$ while [[ -z $(docker ps | grep :5050) ]]; do sleep 1; done
{{< / command >}}

![Run/Debug Configurations](../img-inteliji-debugger-1.png)

Expand Down