Fix spelling #5518

Open · wants to merge 1 commit into base: master

12 changes: 6 additions & 6 deletions .github/workflows/README.md
@@ -2,20 +2,20 @@

There are a few [GitHub secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) to configure to fully leverage the build.

You can use and set the followings secrets also in your fork.
You can also use and set the following secrets in your fork.

## Ngrok Debugging

You can debug a GitHub Action build using [NGROK](https://ngrok.com/).

It is disabled for automated builds triggered by push and pull_request events.

You can trigger a workflow run manually enabling ngrok debugging.
You can trigger a workflow run manually, enabling ngrok debugging.

It will open an ssh connection to the VM and keep it up and running for one hour.
The connection url is showns in the log for debugAction.sh
The connection URL is shown in the log for debugAction.sh

You can then connect to the build vm, and debug it.
You can then connect to the build VM, and debug it.
You need to use a password of your choice to access it.

You can continue the build with `touch /tmp/continue`.
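
For reference, a manually triggerable workflow exposing such a flag might be declared like this (a sketch only; the input name is illustrative, not the repository's actual workflow):

```yaml
# Hypothetical sketch of a workflow_dispatch trigger with an ngrok flag.
on:
  workflow_dispatch:
    inputs:
      enable_ngrok_debug:
        description: "Open an ngrok SSH tunnel into the build VM"
        required: false
        default: "false"
```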
@@ -30,7 +30,7 @@ Then set the following secrets:

## Log Upload

The build uploads the logs to an s3 bucket allowing to inspect them with a browser.
The build uploads the logs to an S3 bucket, allowing you to inspect them with a browser.

You need to create the bucket with the following commands:
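
The actual commands fall outside this hunk; a minimal sketch, assuming the AWS CLI is installed and configured with credentials (the bucket name is illustrative):

```sh
# Create the bucket and make its objects readable from a browser.
aws s3 mb s3://my-openwhisk-build-logs
aws s3api put-bucket-acl --bucket my-openwhisk-build-logs --acl public-read
```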

@@ -53,4 +53,4 @@ To enable upload to the created bucket you need to set the following secrets:

If you want to get notified of what happens on Slack, create an [Incoming Web Hook](https://api.slack.com/messaging/webhooks) and then set the following secret:

- `SLACK_WEBHOOK`: the incoming webhook url provided by slack.
- `SLACK_WEBHOOK`: the incoming webhook URL provided by Slack.
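
A quick way to verify the secret is the standard Slack incoming-webhook call:

```sh
# Posts a test message to the channel bound to the webhook.
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"OpenWhisk build finished"}' "$SLACK_WEBHOOK"
```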
2 changes: 1 addition & 1 deletion ansible/group_vars/all
@@ -273,7 +273,7 @@ nginx:

# These are the variables to define all database relevant settings.
# The authKeys are the users that are initially created to use OpenWhisk.
# The keys are stored in ansible/files and will be inserted into the authentication databse.
# The keys are stored in ansible/files and will be inserted into the authentication database.
# The key db.whisk.actions is the name of the database where all artifacts of the user are stored. These artifacts are actions, triggers, rules and packages.
# The key db.whisk.activation is the name of the database where all activations are stored.
# The key db.whisk.auth is the name of the authentication database where all keys of all users are stored.
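
A hypothetical illustration of the keys these comments describe (the structure and values are made up for illustration, not the file's actual settings):

```yaml
db:
  whisk:
    actions: local_whisks          # user artifacts: actions, triggers, rules, packages
    activation: local_activations  # activation records
    auth: local_subjects           # authentication keys of all users
```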
2 changes: 1 addition & 1 deletion ansible/publish.yml
@@ -16,7 +16,7 @@
#
---
# This playbook updates CLIs and SDKs on an existing edge host.
# Artifacts get built and published to NGINX. This assumes an already running egde host in an Openwhisk deployment.
# Artifacts get built and published to NGINX. This assumes an already running edge host in an OpenWhisk deployment.

- hosts: edge
roles:
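
For context, the playbook would typically be run against an inventory for the target environment, along these lines (hedged; the inventory path depends on your environment layout):

```sh
ansible-playbook -i environments/local publish.yml
```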
12 changes: 6 additions & 6 deletions common/scala/src/main/resources/application.conf
@@ -149,7 +149,7 @@ whisk {
acks = 1
request-timeout-ms = 30000
metadata-max-age-ms = 15000
# max-request-size is defined programatically for producers related to the "completed" and "invoker" topics
# max-request-size is defined programmatically for producers related to the "completed" and "invoker" topics
# as ${whisk.activation.kafka.payload.max} + ${whisk.activation.kafka.serdes-overhead}. All other topics use
# the default of 1 MB.
}
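
To make the arithmetic concrete, a hypothetical helper (not the project's actual wiring) could derive that producer setting from the two config paths named in the comment:

```scala
import com.typesafe.config.ConfigFactory

object MaxRequestSize extends App {
  private val conf = ConfigFactory.load()
  // getBytes understands size strings such as "1m".
  private val payloadMax: Long = conf.getBytes("whisk.activation.kafka.payload.max")
  private val serdesOverhead: Long = conf.getBytes("whisk.activation.kafka.serdes-overhead")
  println(s"max-request-size = ${payloadMax + serdesOverhead}")
}
```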
@@ -182,14 +182,14 @@ whisk {
segment-bytes = 536870912
retention-bytes = 1073741824
retention-ms = 3600000
# max-message-bytes is defined programatically as ${whisk.activation.kafka.payload.max} +
# max-message-bytes is defined programmatically as ${whisk.activation.kafka.payload.max} +
# ${whisk.activation.kafka.serdes-overhead}.
}
creationAck {
segment-bytes = 536870912
retention-bytes = 1073741824
retention-ms = 3600000
# max-message-bytes is defined programatically as ${whisk.activation.kafka.payload.max} +
# max-message-bytes is defined programmatically as ${whisk.activation.kafka.payload.max} +
# ${whisk.activation.kafka.serdes-overhead}.
}
health {
@@ -201,7 +201,7 @@ whisk {
segment-bytes = 536870912
retention-bytes = 1073741824
retention-ms = 172800000
# max-message-bytes is defined programatically as ${whisk.activation.kafka.payload.max} +
# max-message-bytes is defined programmatically as ${whisk.activation.kafka.payload.max} +
# ${whisk.activation.kafka.serdes-overhead}.
}
events {
@@ -586,9 +586,9 @@ whisk {
cache-expiry = 30 seconds #how long to keep spans in cache. Set to appropriate value to trace long running requests
#Zipkin configuration. Uncomment following to enable zipkin based tracing
#zipkin {
# url = "http://localhost:9411" //url to connecto to zipkin server
# url = "http://localhost:9411" //URL to connect to zipkin server
//sample-rate to decide a request is sampled or not.
//sample-rate 0.5 eqauls to sampling 50% of the requests
//sample-rate 0.5 equals to sampling 50% of the requests
//sample-rate of 1 means 100% sampling.
//sample-rate of 0 means no sampling
# sample-rate = "0.01" // sample 1% of requests by default
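
Based on the commented defaults above, an enabled block might look like this (a sketch; only `url` and `sample-rate` are shown):

```hocon
zipkin {
  url = "http://localhost:9411"
  sample-rate = "0.5" // sample 50% of requests
}
```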
@@ -61,7 +61,7 @@ class MessagingActiveAck(producer: MessageProducer, instance: InstanceId, eventS

// An acknowledgement containing the result is only needed for blocking invokes in order to further the
// continuation. A result message for a non-blocking activation is not actually registered in the load balancer
// and the container proxy should not send such an acknowlegement unless it's a blocking request. Here the code
// and the container proxy should not send such an acknowledgement unless it's a blocking request. Here the code
// is defensive and will shrink all non-blocking acknowledgements.
send(if (blockingInvoke) acknowledgement else acknowledgement.shrink).recoverWith {
case t if t.getCause.isInstanceOf[RecordTooLargeException] =>
@@ -115,7 +115,7 @@ abstract class AcknowledgementMessage(private val tid: TransactionId) extends Me
* combines the `CompletionMessage` and `ResultMessage`. The `response` may be an `ActivationId` to allow for failures
* to send the activation result because of event-bus size limitations.
*
* The constructor is private so that callers must use the more restrictive constructors which ensure the respose is always
* The constructor is private so that callers must use the more restrictive constructors which ensure the response is always
* Right when this message is created.
*/
case class CombinedCompletionAndResultMessage private (override val transid: TransactionId,
@@ -167,7 +167,7 @@ case class CompletionMessage private (override val transid: TransactionId,
* This is part of a split phase notification, and does not indicate that the slot is available, which is indicated with
* a `CompletionMessage`. Note that the activation record will not contain any logs from the action execution, only the result.
*
* The constructor is private so that callers must use the more restrictive constructors which ensure the respose is always
* The constructor is private so that callers must use the more restrictive constructors which ensure the response is always
* Right when this message is created.
*/
case class ResultMessage private (override val transid: TransactionId, response: Either[ActivationId, WhiskActivation])
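
The `Either[ActivationId, WhiskActivation]` convention described in these docs can be sketched with simplified stand-in types (illustrative only; the real classes and constructors live elsewhere in the codebase):

```scala
final case class ActivationId(asString: String)
final case class WhiskActivation(activationId: ActivationId)

// A full result travels as Right; when the result is too large for the
// event bus, only its id is sent as Left.
def shrink(response: Either[ActivationId, WhiskActivation]): Either[ActivationId, WhiskActivation] =
  response match {
    case Right(activation) => Left(activation.activationId)
    case left              => left
  }
```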
@@ -139,7 +139,7 @@ trait Container {
endTime = r.interval.end,
logLevel = InfoLevel)
case Failure(t) =>
transid.failed(this, start, s"initializiation failed with $t")
transid.failed(this, start, s"initialization failed with $t")
}
.flatMap { result =>
// if runtime container is shutting down, reschedule the activation message
12 changes: 6 additions & 6 deletions tools/owperf/README.md
@@ -26,7 +26,7 @@ This test tool benchmarks an OpenWhisk deployment for (warm) latency and throughput.
1. Parameter size - controls the size of the parameter passed to the action or event
1. Actions per iteration (a.k.a. _ratio_) - controls how many rules are associated with a trigger [for rules] or how many actions are asynchronously invoked (burst size) at each iteration of a test worker [for actions].
1. "Master apart" mode - Allow the master client to perform latency measurements while the worker clients stress OpenWhisk using a specific invocation pattern in the background. Useful for measuring latency under load, and for comparing latencies of rules and actions under load.
The tool is written in node.js, using mainly the modules of OpenWhisk client, cluster for concurrency, and commander for CLI procssing.
The tool is written in node.js, using mainly the modules of OpenWhisk client, cluster for concurrency, and commander for CLI processing.

### Operation
The general operation of a test is simple:
@@ -39,11 +39,11 @@ The general operation of a test is simple:

Final results are written to the standard output stream (so can be redirected to a file) as a single highly-detailed CSV record containing all the input settings and the output measurements (see below). There is additional control information that is written to the standard error stream and can be silenced in CLI. The control information also contains the CSV header, so it can be copied into a spreadsheet if needed.

It is possible to invoke the tool in "Master apart" mode, where the master client is invoking a different activity than the workers, and at possibly a different (very likely, much slower) rate. In this mode, latency statsitics are computed based solely on the master's data, since the worker's activity is used only as background to stress the OpenWhisk deployment. So one experiment can have the master client invoke rules and another one can have the master client invoke actions, while in both experiments the worker clients perform the same background activity.
It is possible to invoke the tool in "Master apart" mode, where the master client is invoking a different activity than the workers, and at possibly a different (very likely, much slower) rate. In this mode, latency statistics are computed based solely on the master's data, since the worker's activity is used only as background to stress the OpenWhisk deployment. So one experiment can have the master client invoke rules and another one can have the master client invoke actions, while in both experiments the worker clients perform the same background activity.

The tool is highly customizable via CLI options. All the independent test variables are controlled via CLI. This includes number of workers, invocation pattern, OW client configuration, test action sleep time, etc.

Test setup and teardown can be independently skipped via CLI, and/or directly invoked from the external setup script (```setup.sh```), so that setup can be shared between multiple tests. More advanced users can replace the test action with a custom action in the setup script to benchmark action invocation or event-respose throughput and latency of specific applications.
Test setup and teardown can be independently skipped via CLI, and/or directly invoked from the external setup script (```setup.sh```), so that setup can be shared between multiple tests. More advanced users can replace the test action with a custom action in the setup script to benchmark action invocation or event-response throughput and latency of specific applications.

**Clock skew**: OpenWhisk is a distributed system, which means that clock skew is expected between the client machine computing invocation timestamps and the controllers or invokers that generate the timestamps in the activation records. However, this tool assumes that clock skew is bounded to a few msec, since all machine clocks are synchronized, typically using NTP. At such a scale, clock skew is quite small compared to the measured time periods. Some of the time periods are measured using the same clock (see below) and are therefore oblivious to clock skew issues.

@@ -67,7 +67,7 @@ The following time-stamps are collected for each invocation, of either action, o
* **TS** (Trigger Start) - taken from the activation record of the trigger linked to the rules, so applies only to rule tests. All actions invoked by the rules of the same trigger have the same TS value.
* **AS** (Action Start) - taken from the activation record of the action.
* **AE** (Action End) - taken from the activation record of the action.
* **AI** (After Invocation) - taken by the client immmediately after the invocation, for blocking action invocation tests only.
* **AI** (After Invocation) - taken by the client immediately after the invocation, for blocking action invocation tests only.

Based on these timestamps, the following measurements are taken:
* **OEA** (Overhead of Entering Action) - OpenWhisk processing overhead from sending the action invocation or trigger fire to the beginning of the action execution. OEA = AS-BI
@@ -77,7 +77,7 @@ Based on these timestamps, the following measurements are taken:
* **TA** (Trigger to Answer) - the processing time from the start of the trigger process to the start of the action (rule tests only). TA = AS-TS
* **ORA** (Overhead of Returning from Action) - time from action end till being received by the client (blocking action tests only). ORA = AI - AE
* **RTT** (Round Trip Time) - time at the client from action invocation till reply received (blocking action tests only). RTT = AI - BI
* **ORTT** (Overhead of RTT) - RTT at the client exclugin the net action computation time. ORTT = RTT - D
* **ORTT** (Overhead of RTT) - RTT at the client excluding the net action computation time. ORTT = RTT - D

For each measurement, the tool computes average (_avg_), standard deviation (_std_), and extremes (_min_ and _max_).
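
The derived measurements follow directly from the listed formulas; a hedged sketch (field names are shortened, and `d` stands for the net action computation time that ORTT subtracts):

```scala
final case class Stamps(bi: Long, ts: Long, as: Long, ae: Long, ai: Long)

def measurements(s: Stamps, d: Long): Map[String, Long] = {
  val rtt = s.ai - s.bi
  Map(
    "OEA"  -> (s.as - s.bi), // overhead of entering the action
    "TA"   -> (s.as - s.ts), // trigger to answer (rule tests only)
    "ORA"  -> (s.ai - s.ae), // overhead of returning from the action
    "RTT"  -> rtt,           // round trip time (blocking tests only)
    "ORTT" -> (rtt - d)      // RTT excluding net computation time
  )
}
```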

@@ -92,7 +92,7 @@ Throughput is measured w.r.t. several different counters. During post-processin
* **Activations** - number of completed activations inside the time frame, counting both trigger activations (based on TS), and action activations (based on AS and AE).
* **Invocations** - number of successful invocations of complete rules or actions (depending on the activity). This is the "service rate" of invocations (assuming errors happen only because OW is overloaded).

For each counter, the tool reports the total counter value (_abs_), total throughput per second (_tp_), througput of the worker clients without the master (_tpw_) and the master's percentage of throughput relative to workers (_tpd_). The last two values are important mostly for master apart mode.
For each counter, the tool reports the total counter value (_abs_), total throughput per second (_tp_), throughput of the worker clients without the master (_tpw_) and the master's percentage of throughput relative to workers (_tpd_). The last two values are important mostly for master apart mode.

Aside from that, the tool also counts **errors**. Failed invocations - of actions, of triggers, or of actions from triggers (via rules) - are each counted as an error. The tool reports both the absolute error count (_abs_) and the percent out of requests (_percent_).
