Definition Of Quality
All functionality must work in any browser listed in the following queries:
Test only against the 360x460 viewport (portrait mode). Landscape mode and desktop mode are not prioritized.
Chief call to action must be "above the fold" in the viewport.
All UI controls must respond to user input unless it is indicated that they are disabled.
All PWA checklist items must be supported.
- ServiceWorker installs and caches
- manifest.json exists and all assets it lists exist
- All sizes of supported icons exist and are listed in the manifest
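As an illustration of the manifest requirement above, a minimal `manifest.json` listing multiple icon sizes might look like this (names, paths, and sizes are examples, not prescribed values):

```json
{
  "name": "Example Storefront",
  "short_name": "Storefront",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Every file referenced in `icons` must actually exist in the deployed assets, per the checklist item above.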
This defines the criteria that must be met before a user story can be considered complete. It is applied consistently and serves as an official gate separating items from being In Progress to being Done. The Definition of Done (DoD) ensures that the features delivered are complete in functionality and of high quality.
The most basic requirement for any user story or issue to be Done is that it is built. It should satisfy all Acceptance Criteria defined in the user story. The code should also be written with the Test Plan in mind, covering all known use cases. All new code must be covered with appropriate automated tests where available, and existing code must have updated tests. Code must also be performant and compliant.
Code must meet the following standards:
- Satisfies all Acceptance Criteria
- Satisfies the Test Plan
- New code is covered with appropriate automated tests
- Existing / refactored code must have updated tests
- Code meets performance standards
- Code meets security compliance standards
- Translations have been added for new strings
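For example, strings introduced by a story might be added to a locale file along these lines (the file location and message keys here are hypothetical, for illustration only):

```json
{
  "cartPage.title": "Shopping Cart",
  "cartPage.emptyMessage": "Your cart is empty."
}
```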
The code should not decrease the storefront performance. This can be measured by using Lighthouse.
- No single asset on a landing page should be larger than 400KB. Use bundlesize to measure this metric.
- Total JavaScript payload should not be larger than 500KB. Check this manually or use either Webpack Bundle Analyzer or bundlesize.
- Time to First Contentful Paint (FCP) must be under 4 seconds.
- Time to Largest Contentful Paint (LCP) must be under 4 seconds.
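The per-asset budget above can be enforced with a `bundlesize` entry in `package.json`. A minimal sketch (the paths are illustrative and depend on the project's build output):

```json
{
  "bundlesize": [
    { "path": "./dist/js/*.js", "maxSize": "400 kB" },
    { "path": "./dist/css/*.css", "maxSize": "400 kB" }
  ]
}
```

Note that bundlesize checks each matched file individually, so the total 500KB JavaScript payload budget still needs to be verified manually or with Webpack Bundle Analyzer.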
During code review, developers evaluate a PR from the perspectives of architecture and implementation, answering the following questions:
- Does the proposed solution satisfy the acceptance criteria?
- Does the proposal cover all use cases? Did the review uncover any new ones?
- Is the proposed solution clear and obvious?
- Could any parts be made more clear or obvious?
- Does the proposal utilize our existing tools?
- Does the proposal differ from previous solutions? Why?
- What implications would accepting the proposal have?
- Does the proposal follow the Open/Closed Principle (open for extension, closed for modification)?
- Is the code efficient?
- Are there any errors, gaps, or redundancies?
- Does the code adhere to our accepted best practices?
- Does the code follow our own patterns and precedents?
- Does the code style match our own?
- Have translations been added for new strings?
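The Open/Closed Principle item above can be illustrated with a minimal, hypothetical sketch (the names here are invented for illustration): new behavior is added by registering an extension, not by editing existing code.

```javascript
// A price formatter that is open for extension (register new currencies)
// and closed for modification (adding a currency requires no edits to
// formatPrice itself).
const formatters = new Map();

function registerCurrency(code, format) {
  formatters.set(code, format);
}

function formatPrice(code, amount) {
  const format = formatters.get(code);
  if (!format) {
    throw new Error(`No formatter registered for ${code}`);
  }
  return format(amount);
}

// Existing behavior...
registerCurrency('USD', amount => `$${amount.toFixed(2)}`);
// ...extended later without touching formatPrice:
registerCurrency('EUR', amount => `€${amount.toFixed(2)}`);

console.log(formatPrice('USD', 10)); // "$10.00"
console.log(formatPrice('EUR', 9.5)); // "€9.50"
```

A reviewer applying the checklist would flag a change that adds a currency by inserting a new branch inside `formatPrice`, since that modifies closed code instead of extending it.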
Functionality is deployed to a cloud test environment so that stakeholders can review a demo of the functionality and provide their approvals.
During UX review, a UX designer evaluates the story from a UX perspective, asking the following questions:
- Does the UX match the provided mockups closely enough?
- Is the end result successful or does it need revisions?
During PO review, the Product Owner evaluates the story, asking the following questions:
- Is this important to do now?
- Does this meet the Acceptance Criteria?
During QA, a QA engineer (another team member who has neither written nor peer-reviewed the code) evaluates the code from a functional perspective. The QA engineer goes through a Functional PR checklist and can approve the code once all checklist items are satisfied:
- Have unit tests been included for new code?
- Have unit tests been updated for existing code?
- Have E2E Functional tests been included for new critical user flows?
- Have E2E Functional tests been updated for existing user flows?
- Have manual Zephyr test cases been created for new critical user flows that cannot be automated?
- Does the regression suite pass?
- Do manual workflows work as expected?
- Are there any new build or runtime errors?
- Does the proposal have any unintended side effects?
- Have Lighthouse and WPT scores changed?
Functionality and backwards-incompatible changes are documented in the necessary developer documentation.
- If the code creates a new public API, does it have the necessary jsdoc blocks?
- If the code changes an existing public API, are the jsdoc blocks updated as necessary?
- If the code introduces a new feature, are there notes in a PR, wiki, or markdown page somewhere that answers:
- What it does
- How does it work (at a high level)
- How can storefront developers use it
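As a sketch of the JSDoc requirement above, a new public API function (the function itself is hypothetical, for illustration) might be documented like this:

```javascript
/**
 * Formats a quantity of minor currency units as a display string.
 *
 * @param {number} cents - The amount in minor units (e.g. cents).
 * @param {string} [symbol='$'] - Currency symbol to prefix.
 * @returns {string} The formatted price, e.g. "$12.34".
 */
function formatCents(cents, symbol = '$') {
  return `${symbol}${(cents / 100).toFixed(2)}`;
}

console.log(formatCents(1234)); // "$12.34"
```

When an existing public API changes — a new parameter, a changed return shape — the `@param` and `@returns` tags must be updated in the same PR.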
Functionality and backwards-incompatible changes are documented in the necessary user documentation.
When we merge our code to the delivery branch, we consider it Ready for Release. Once our code is ready to be released to market, we publish release packages to package managers, and then we can call our story "Done".