
"User Flow" integration test #2159

Open · thehowl opened this issue May 21, 2024 · 1 comment
Labels: 🌟 improvement (performance improvements, refactors ...)

Comments

@thehowl (Member) commented May 21, 2024

A bash script that tests a full user flow: initializing a new module with gno mod init, creating a git repository, setting up some code, testing it, running it, and publishing it on a local chain (gnodev) and/or staging.

Suggested by @moul. It can be long (i.e., 1-3 minutes), but it should make sure that the flow we instruct end-users to follow works and continues working.
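A minimal sketch of what such a script could look like, assuming a standard gno toolchain is installed and that gno mod init and gno test behave as in the current docs. The module path and file contents are placeholders, and the publish step is left commented out, since the exact gnokey invocation depends on keys and network configuration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Work in a throwaway directory so the test is self-contained.
workdir="$(mktemp -d)"
trap 'rm -rf "$workdir"' EXIT
cd "$workdir"

# 1. Initialize a new module and a git repository.
gno mod init gno.land/r/demo/userflow   # module path is a placeholder
git init -q

# 2. Set up some code.
cat > hello.gno <<'EOF'
package userflow

func Hello() string { return "Hello, gno!" }
EOF
cat > hello_test.gno <<'EOF'
package userflow

import "testing"

func TestHello(t *testing.T) {
	if Hello() != "Hello, gno!" {
		t.Fatal("unexpected greeting")
	}
}
EOF
git add .
git -c user.name=userflow -c user.email=userflow@example.com commit -qm "init"

# 3. Test it.
gno test .

# 4. Publish it on a local chain (placeholder: the exact
#    `gnokey maketx addpkg` arguments depend on keys and remote).
# gnokey maketx addpkg ...
```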

@moul (Member) commented Oct 18, 2024

The goal is to have a script located in either the misc/ or contribs/ directory.

This script should be enjoyable to maintain. Unlike unit tests, which check all edge cases, this script focuses on a single "valid" flow that should always work. Its purpose is to explore relationships and compatibility among our tools, including dependencies like the internet and GitHub.

The script will fail relatively often, and many of those failures will be false positives. However, it should be easy to tell them apart: rerun the script, and if the failure reproduces, it is likely a real regression and we can quickly create a more focused unit test; if it does not, it was probably a false positive. Essentially, this test simulates a random user emailing us to say that our website is not working. We don't expect that user to debug; that's our responsibility. At the very least, we will know about the issue before a random external user informs us.
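The "does it fail twice?" heuristic could even be encoded in the script itself. A sketch, where run_flow is a hypothetical function wrapping the whole flow:

```bash
# Run the flow once; on failure, retry once to separate a flaky
# environment (network, GitHub, DNS) from a real regression.
# run_flow is a hypothetical wrapper around all the steps.
if ! run_flow; then
  echo "first run failed; retrying to rule out a false positive..." >&2
  if ! run_flow; then
    echo "failed twice: likely a real regression" >&2
    exit 1
  fi
  echo "passed on retry: likely a false positive (flaky dependency)" >&2
fi
```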

The script will consist of steps. At the top level it operates in a black-and-white manner: it either works or it doesn't. The steps add the gray areas: because we maintain a notion of "where" and "when", a failure also tells us at which point in the flow things broke.

Since we have steps, we can implement a simple cumulative benchmark. Each step will have a duration, and when we sum these durations, we obtain the total time. This will allow us to generate a single graph showing total duration over time, with stacked lines to quickly identify which parts are slower. For instance, we can determine if the slowdown is due to GitHub artifacts, our DNS, or other factors we typically overlook.
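A sketch of how the step mechanism and the cumulative benchmark could look in bash; the step names and the TSV log format are made up for illustration:

```bash
total=0

# step NAME CMD...: run one step, record its duration, and keep a
# cumulative total so per-step and overall timings can be graphed.
step() {
  local name="$1"; shift
  local start end dur
  start=$(date +%s)
  "$@"                    # under `set -e`, a failing step aborts the run
  end=$(date +%s)
  dur=$((end - start))
  total=$((total + dur))
  printf '%s\t%ds\n' "$name" "$dur" >> steps.tsv
}

step "mod-init" gno mod init gno.land/r/demo/userflow
step "test"     gno test .
# ... more steps ...
printf 'total\t%ds\n' "$total" >> steps.tsv
```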

The goal is for this script to be runnable in CI (perhaps only in warn mode?) and locally. More importantly, its unique feature is that it can also be executed in production. If we make the test self-cleaning, or have it operate in randomized namespaces, we can run it every few minutes or hours against our networks to measure performance and functionality: whether everything is green (working) or red (not working).
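Randomized namespaces could be as simple as suffixing a unique identifier onto the package path, so repeated runs against a live network never collide; the path scheme below is only an assumption:

```bash
# Unique, human-readable namespace per run: easy to spot and to
# garbage-collect later.
ns="userflow-$(date +%Y%m%d%H%M%S)-$RANDOM"
pkgpath="gno.land/r/demo/${ns}"    # assumed path scheme, for illustration

# Local self-cleaning; on-chain cleanup would need chain-side support.
workdir="$(mktemp -d)"
trap 'rm -rf "$workdir"' EXIT
```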

Thus, this script serves not only as an integration test for developers and pull requests but also as a hybrid tool for monitoring.

Related:

Projects: Status: Backlog
Development: No branches or pull requests
2 participants