Improvement on interactive testing for Neuron #331
Comments
In terms of automated testing, it is worth considering introducing some RPA (Robotic Process Automation) technologies, such as Alibaba Cloud RPA, UiBot, UiPath, and other similar products. We could synchronize the automation-related projects to a dedicated repository. 按键精灵 (a keystroke macro tool) lacks active maintenance and is not recommended.
Quite informative about RPA; we should use it to facilitate interactive testing.
UiPath and OpenRPA both offer a recording function, which makes editing steps more convenient and reduces the learning curve to some extent. https://www.uipath.com/rpa/robotic-process-automation
I found a way to automate application UIs on macOS (it is free, but requires writing code). The way to open the app is shown in the attached recording (notice the right sidebar): 2023-12-19.15.24.47.mov. We need to download Xcode and use Accessibility Inspector to locate elements and drive specific functions.
In the field of Robotic Process Automation (RPA), few tools offer good support for multiple operating systems at once. This means that if you want to test multiple systems (such as macOS + Windows), you would need to write two sets of testing code; otherwise, testing only a single platform may not be comprehensive enough. If you only need to test a single system (such as macOS), then this approach is fine. However, if we need to support automated testing on multiple platforms, I suggest trying the Electron adapter of Playwright: https://playwright.dev/docs/api/class-electron
Does it work for interactive testing?
Playwright is going to be used in …
I haven't done any actual research, but my guess is that interactive testing should be its strong suit, and UX testing should include the interactive aspect.
Yes, UX testing includes the interactive aspect. In other words, interactive tests complement automated UX tests. UI/UX tests were split into two. But before a release, Neuron should be tested on testnet, so unpredictable activities are inevitable, and interactive tests would be added to handle them, e.g. a transaction is committed much later than expected because the testnet is busy. If this case were automated, it would hit a timeout.
I learned that Playwright is used for software running in a web browser. Can it be used for a desktop app?
Playwright provides experimental support for running on Electron: https://playwright.dev/docs/api/class-electron The Neuron client is based on Electron. |
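For reference, here is a minimal sketch (in TypeScript) of what driving an Electron app through Playwright's experimental Electron support could look like; the entry-point path is a placeholder and would need to point at Neuron's actual compiled main script:

```ts
// Minimal Playwright + Electron sketch. The main-script path below is a
// hypothetical placeholder, not Neuron's real build output.
import { _electron as electron } from 'playwright';

(async () => {
  // Launch the Electron app under Playwright's control.
  const app = await electron.launch({ args: ['path/to/neuron/main.js'] });

  // The first BrowserWindow is exposed as a regular Playwright Page,
  // so the usual locator/screenshot APIs are available.
  const window = await app.firstWindow();
  console.log(await window.title());
  await window.screenshot({ path: 'neuron-home.png' });

  await app.close();
})();
```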
There is also a recently popular library for desktop automation. Using this library, you can achieve cross-platform compatibility. It might be worth researching and exploring it for development purposes.
Recently, a critical bug was discovered in an upcoming Neuron release, and according to the developers' analysis, it has been present in previous versions as well. This raises the first subtopic: the accuracy of test results. While fixing this bug and moving into the acceptance phase, I noticed that the turnaround time for each acceptance round is significantly long, mainly because of the time consumed by synchronization. This highlights the second subtopic: insufficient testing efficiency.
Improving the accuracy and efficiency of testing results is a common challenge, often addressed with the introduction of automation. However, before Neuron is released, it inevitably undergoes a manual full regression. Therefore, the focus narrows down to improving the accuracy and efficiency of the interactive testing process.
My suggestion is to introduce semi-automation to assist in this process.
Taking this bug as an example, the most noticeable symptom is the misidentification of transaction amounts in Nervos DAO. This error is easily detectable since Nervos DAO records shouldn't have such significant amount changes. Yet, in repetitive mechanical tasks, humans are prone to overlooking details, especially in familiar content.
For this example, I propose exporting the transaction list before and after testing and programmatically comparing them to ensure consistency. This involves a semi-automated process where manual data export is followed by automated verification through a program.
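A rough sketch of what the automated verification step could look like, assuming the exports are JSON arrays whose records carry a unique `hash` and an `amount` field (the file names and record shape here are illustrative assumptions, not Neuron's actual export format):

```ts
// Compare two exported transaction lists: every record that existed before
// the test must still exist, with the same amount, afterwards.
import { readFileSync } from 'fs';

interface TxRecord {
  hash: string;
  amount: string;
}

function loadTxs(path: string): TxRecord[] {
  return JSON.parse(readFileSync(path, 'utf8'));
}

const before = loadTxs('transactions-before.json');
const after = loadTxs('transactions-after.json');

const afterByHash = new Map<string, TxRecord>();
for (const tx of after) afterByHash.set(tx.hash, tx);

// Records that disappeared or whose amount changed indicate an inconsistency.
const broken = before.filter((tx) => {
  const found = afterByHash.get(tx.hash);
  return !found || found.amount !== tx.amount;
});

if (broken.length > 0) {
  console.error('Inconsistent records:', broken);
  process.exit(1);
}
console.log('Historical records are consistent.');
```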
While the manual operation in this case is relatively simple, our test cases may involve more complex actions.
For instance, validating a multi-output transaction requires repetitive inputs of addresses and amounts, password input, and observation of balance and transaction record changes after a certain time.
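If Playwright's Electron support works out, the repetitive input part of such a case could be scripted along these lines; the `data-testid` selectors, the output list, and the `WALLET_PASSWORD` environment variable are all hypothetical placeholders:

```ts
// Sketch: fill in a multi-output transaction form automatically, then leave
// the balance/record verification to a human or to the comparison script above.
import { _electron as electron } from 'playwright';

const outputs = [
  { address: 'ckt1...example1', amount: '100' },
  { address: 'ckt1...example2', amount: '200' },
];

(async () => {
  const app = await electron.launch({ args: ['path/to/neuron/main.js'] });
  const window = await app.firstWindow();

  for (const [i, output] of outputs.entries()) {
    if (i > 0) await window.locator('[data-testid="add-output"]').click();
    await window.locator(`[data-testid="address-${i}"]`).fill(output.address);
    await window.locator(`[data-testid="amount-${i}"]`).fill(output.amount);
  }
  await window.locator('[data-testid="password"]').fill(process.env.WALLET_PASSWORD ?? '');
  await window.locator('[data-testid="submit"]').click();

  await app.close();
})();
```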
Another example: to verify that the slider for the Nervos DAO deposit amount updates correctly at different positions, multiple repetitions of sliding actions are needed, with observation of the displayed amount after each operation.
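The slider case could be semi-automated in a similar way; again, the selectors and positions below are assumptions for illustration:

```ts
// Sketch: move the deposit slider to several positions and log the amount
// shown after each move, so a human only has to review the printed values.
import { _electron as electron } from 'playwright';

(async () => {
  const app = await electron.launch({ args: ['path/to/neuron/main.js'] });
  const window = await app.firstWindow();

  const slider = window.locator('[data-testid="deposit-slider"]');
  const box = await slider.boundingBox();
  if (!box) throw new Error('deposit slider not found');

  for (const ratio of [0.1, 0.25, 0.5, 0.75, 0.9]) {
    // Click along the slider track to jump to each position.
    await window.mouse.click(box.x + box.width * ratio, box.y + box.height / 2);
    const shown = await window.locator('[data-testid="deposit-amount"]').innerText();
    console.log(`position ${ratio}: displayed amount ${shown}`);
  }

  await app.close();
})();
```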
Both examples can be accomplished using tools to perform the actions and ultimately rely on human judgment to determine if the results match expectations.
Especially in the first example, using a keystroke-automation tool (such as 按键精灵) to perform steps like "export transaction records A," "input addresses and amounts," "input password," "export transaction records B," and "remove transaction records A from B" allows the tester to judge based solely on the output of the last step. Furthermore, saving this result as a benchmark allows correctness to be checked automatically in subsequent repetitions.
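The last two steps ("remove transaction records A from B" and the benchmark check) could look roughly like this, again assuming JSON exports with `hash` and `amount` fields and hypothetical file names:

```ts
// Sketch: compute B minus A, then compare the new records against a saved
// benchmark on later runs (hashes differ per run, so only amounts are compared).
import { existsSync, readFileSync, writeFileSync } from 'fs';

type Tx = { hash: string; amount: string };
const load = (p: string): Tx[] => JSON.parse(readFileSync(p, 'utf8'));

const a = load('records-a.json');
const b = load('records-b.json');
const hashesInA = new Set(a.map((tx) => tx.hash));

// The records produced by this test run.
const diff = b.filter((tx) => !hashesInA.has(tx.hash));

const benchmarkPath = 'benchmark.json';
if (!existsSync(benchmarkPath)) {
  // First run: save the diff as the benchmark, to be reviewed manually once.
  writeFileSync(benchmarkPath, JSON.stringify(diff, null, 2));
  console.log('Benchmark saved; please review it manually.');
} else {
  const benchmark: Tx[] = load(benchmarkPath);
  const matches =
    diff.length === benchmark.length &&
    diff.every((tx, i) => tx.amount === benchmark[i].amount);
  console.log(matches ? 'Matches the benchmark.' : 'Differs from the benchmark!');
}
```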
The key difference between this approach and full automation is that the various automated steps are not necessarily seamlessly connected but are triggered manually. This provides flexibility and a closer resemblance to the production environment compared to fully automated tests that can only run within test data.
In short, we should introduce tools to automate parts of interactive testing for accuracy and efficiency. It would be close to fully automated testing, but more flexible and closer to real usage, as the final step of the full regression test.