Set the AWS region env var and persist it for Gitpod workspaces:

```sh
export AWS_REGION="ca-central-1"
gp env AWS_REGION="ca-central-1"
```
Add to the requirements.txt

```txt
aws-xray-sdk
```

Install python dependencies

```sh
pip install -r requirements.txt
```
Add to app.py

```py
import os

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

xray_url = os.getenv("AWS_XRAY_URL")
xray_recorder.configure(service='Cruddur', dynamic_naming=xray_url)
XRayMiddleware(app, xray_recorder)
```
Add aws/json/xray.json

```json
{
  "SamplingRule": {
    "RuleName": "Cruddur",
    "ResourceARN": "*",
    "Priority": 9000,
    "FixedRate": 0.1,
    "ReservoirSize": 5,
    "ServiceName": "Cruddur",
    "ServiceType": "*",
    "Host": "*",
    "HTTPMethod": "*",
    "URLPath": "*",
    "Version": 1
  }
}
```
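The sampling rule's FixedRate of 0.1 and ReservoirSize of 5 mean X-Ray records the first 5 matching requests each second and then roughly 10% of the rest. A minimal sketch of that decision in Python — a simplified model, since the real daemon coordinates the reservoir across instances:

```python
import random

def should_sample(requests_this_second, reservoir_size=5, fixed_rate=0.1):
    """Simplified model of an X-Ray sampling decision for one request.

    The first `reservoir_size` requests in a given second are always
    sampled; after that, each request is sampled with probability
    `fixed_rate`. Illustration only, not the real algorithm.
    """
    if requests_this_second < reservoir_size:
        return True
    return random.random() < fixed_rate

# With 1000 requests in one second, roughly 5 + 0.1 * 995 are sampled.
sampled = sum(should_sample(i) for i in range(1000))
```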
```sh
FLASK_ADDRESS="https://4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}"
aws xray create-group \
  --group-name "Cruddur" \
  --filter-expression "service(\"$FLASK_ADDRESS\") {fault OR error}"
aws xray create-sampling-rule --cli-input-json file://aws/json/xray.json
```
GitHub aws-xray-daemon, X-Ray Docker Compose example

```sh
wget https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.deb
sudo dpkg -i aws-xray-daemon-3.x.deb
```
Add the xray-daemon service to docker-compose.yml

```yml
  xray-daemon:
    image: "amazon/aws-xray-daemon"
    environment:
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
      AWS_REGION: "us-east-1"
    command:
      - "xray -o -b xray-daemon:2000"
    ports:
      - 2000:2000/udp
```
We need to add these two env vars to our backend-flask in our docker-compose.yml file

```yml
AWS_XRAY_URL: "*4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}*"
AWS_XRAY_DAEMON_ADDRESS: "xray-daemon:2000"
```
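The AWS_XRAY_URL value is a wildcard pattern: with dynamic_naming, the SDK names segments after the request's Host header when it matches the pattern. A quick sketch of the matching idea using Python's fnmatch — the host values here are hypothetical placeholders, and the SDK uses its own matcher with the same `*` semantics:

```python
from fnmatch import fnmatch

# Hypothetical Gitpod host values, for illustration only.
pattern = "*4567-examplews.gitpod.io*"
host = "4567-examplews.gitpod.io"

# dynamic_naming compares the request host against the pattern;
# fnmatch approximates the SDK's '*' wildcard behaviour.
matches = fnmatch(host, pattern)
```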
Check service data for the last 10 minutes:

```sh
EPOCH=$(date +%s)
aws xray get-service-graph --start-time $(($EPOCH-600)) --end-time $EPOCH
```
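The same 10-minute query window can be computed in Python if you'd rather query the service graph through boto3 (stdlib-only sketch; the boto3 call itself is only noted in a comment):

```python
import time

# Same window as the shell snippet: EPOCH=$(date +%s)
end_time = int(time.time())
start_time = end_time - 600  # i.e. --start-time $(($EPOCH-600))

# These values could then be passed to the boto3 xray client, e.g.
# get_service_graph(StartTime=start_time, EndTime=end_time)
window = end_time - start_time
```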
When creating a new dataset in Honeycomb it will provide all these installation instructions.

We'll add the following packages to our requirements.txt

```txt
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp-proto-http
opentelemetry-instrumentation-flask
opentelemetry-instrumentation-requests
```

We'll install these dependencies:

```sh
pip install -r requirements.txt
```
Add to app.py

```py
from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Initialize tracing and an exporter that can send data to Honeycomb
provider = TracerProvider()
processor = BatchSpanProcessor(OTLPSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# Initialize automatic instrumentation with Flask
app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)
RequestsInstrumentor().instrument()
```
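The BatchSpanProcessor buffers finished spans and hands them to the exporter in batches rather than making one network call per span. A simplified stdlib model of that batching behavior — not the real OTel implementation, which also flushes on a schedule and at shutdown:

```python
class ToyBatchProcessor:
    """Simplified model of a batch span processor: buffer finished
    spans and flush them to an exporter once the batch is full."""

    def __init__(self, exporter, max_batch_size=3):
        self.exporter = exporter
        self.max_batch_size = max_batch_size
        self.buffer = []

    def on_end(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.exporter(list(self.buffer))
            self.buffer.clear()

batches = []
processor = ToyBatchProcessor(batches.append, max_batch_size=3)
for name in ["home", "notifications", "search", "messages"]:
    processor.on_end(name)
# Three spans went out in one batch; the fourth is still buffered.
```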
Add the following env vars to backend-flask in docker-compose.yml

```yml
OTEL_EXPORTER_OTLP_ENDPOINT: "https://api.honeycomb.io"
OTEL_EXPORTER_OTLP_HEADERS: "x-honeycomb-team=${HONEYCOMB_API_KEY}"
OTEL_SERVICE_NAME: "${HONEYCOMB_SERVICE_NAME}"
```
You'll need to grab the API key from your Honeycomb account:

```sh
export HONEYCOMB_API_KEY=""
export HONEYCOMB_SERVICE_NAME="Cruddur"
gp env HONEYCOMB_API_KEY=""
gp env HONEYCOMB_SERVICE_NAME="Cruddur"
```
Add to the requirements.txt

```txt
watchtower
```

```sh
pip install -r requirements.txt
```
In app.py

```py
import watchtower
import logging
from time import strftime

# Configuring Logger to Use CloudWatch
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
cw_handler = watchtower.CloudWatchLogHandler(log_group='cruddur')
LOGGER.addHandler(console_handler)
LOGGER.addHandler(cw_handler)
LOGGER.info("some message")

@app.after_request
def after_request(response):
    timestamp = strftime('[%Y-%b-%d %H:%M]')
    LOGGER.error('%s %s %s %s %s %s', timestamp, request.remote_addr, request.method, request.scheme, request.full_path, response.status)
    return response
```
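To see what the after_request hook will emit, here is a stdlib-only sketch of the same log line, with hypothetical values standing in for Flask's request and response objects:

```python
import logging
from io import StringIO
from time import strftime

# Capture the log output in memory instead of a console/CloudWatch handler.
logger = logging.getLogger("access_log_demo")
logger.setLevel(logging.INFO)
stream = StringIO()
logger.addHandler(logging.StreamHandler(stream))

# Hypothetical request/response values for illustration.
timestamp = strftime('[%Y-%b-%d %H:%M]')
logger.info('%s %s %s %s %s %s',
            timestamp, '127.0.0.1', 'GET', 'http',
            '/api/activities/home', '200 OK')

line = stream.getvalue()
```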
We'll log something in an API endpoint

```py
LOGGER.info('Hello Cloudwatch! from /api/activities/home')
```
Set the env vars in your backend-flask for docker-compose.yml

```yml
AWS_DEFAULT_REGION: "${AWS_DEFAULT_REGION}"
AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
```

Passing AWS_REGION doesn't seem to get picked up by boto3, so pass the default region instead.
Create a new project in Rollbar called Cruddur
Add to requirements.txt

```txt
blinker
rollbar
```

Install deps

```sh
pip install -r requirements.txt
```
We need to set our access token

```sh
export ROLLBAR_ACCESS_TOKEN=""
gp env ROLLBAR_ACCESS_TOKEN=""
```

Add to backend-flask for docker-compose.yml

```yml
ROLLBAR_ACCESS_TOKEN: "${ROLLBAR_ACCESS_TOKEN}"
```
Import for Rollbar

```py
import os
import rollbar
import rollbar.contrib.flask
from flask import got_request_exception

rollbar_access_token = os.getenv('ROLLBAR_ACCESS_TOKEN')

@app.before_first_request
def init_rollbar():
    """init rollbar module"""
    rollbar.init(
        # access token
        rollbar_access_token,
        # environment name
        'production',
        # server root directory, makes tracebacks prettier
        root=os.path.dirname(os.path.realpath(__file__)),
        # flask already sets up logging
        allow_logging_basic_config=False)

    # send exceptions from `app` to rollbar, using flask's signal system.
    got_request_exception.connect(rollbar.contrib.flask.report_exception, app)
```
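got_request_exception is a blinker signal: receivers subscribe with connect() and get invoked whenever Flask sends the signal for an unhandled exception. A minimal stdlib stand-in for that connect/send pattern — Signal and report_exception here are illustrative, not the real blinker or Rollbar code:

```python
class Signal:
    """Minimal stand-in for a blinker signal: receivers subscribe
    with connect() and are called when the signal is sent."""

    def __init__(self):
        self.receivers = []

    def connect(self, receiver, sender=None):
        self.receivers.append(receiver)

    def send(self, sender, **kwargs):
        return [receiver(sender, **kwargs) for receiver in self.receivers]

got_request_exception = Signal()

reported = []
def report_exception(app, exception=None):
    # Stand-in for rollbar.contrib.flask.report_exception.
    reported.append(str(exception))

# Same shape as the app.py snippet: connect the reporter, then an
# unhandled exception triggers send() and the reporter fires.
got_request_exception.connect(report_exception, "app")
got_request_exception.send("app", exception=ValueError("boom"))
```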
We'll add an endpoint just for testing Rollbar to app.py

```py
@app.route('/rollbar/test')
def rollbar_test():
    rollbar.report_message('Hello World!', 'warning')
    return "Hello World!"
```
During the original bootcamp cohort, a newer version of Flask was released. This broke the Rollbar implementation due to a change in the Flask API (newer Flask versions removed @app.before_first_request).

If you notice Rollbar is not tracking, utilize the code from this file:
https://github.com/omenking/aws-bootcamp-cruddur-2023/blob/week-x/backend-flask/lib/rollbar.py