
Key performance indicators

Back to top

The following KPIs are the primary indicators that will be used during the project's iterations to facilitate scrum retrospectives and reviews.

Individual velocity

Velocity charts for individual team members, evaluated using the following independent factors:

  1. Stress
  2. Difficulty compared to estimation
  3. Performance
  4. Produced quality

A form with questions will be sent out to every group member at the end of every sprint. The answers will be collected by the Scrum Master to assess whether the psychological stress produced by the work environment is healthy and effective or not. The results (1-10 on each question) will be saved for later use in the reflection, which will take place once the product has been shipped. This data will also be distributed to each corresponding individual so that they can evaluate their weekly psychological health.
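A minimal sketch of how the collected answers could be aggregated per sprint and per member is shown below. The 1-10 scale and the four factors come from the text above; the data layout, member names, and function names are illustrative assumptions, not the project's actual tooling.

```python
from statistics import mean

# Illustrative assumption: one dict per submitted form, with a 1-10 score per factor.
FACTORS = ["stress", "difficulty_vs_estimation", "performance", "produced_quality"]

responses = [
    {"member": "alice", "sprint": 3, "stress": 6, "difficulty_vs_estimation": 5,
     "performance": 7, "produced_quality": 8},
    {"member": "bob", "sprint": 3, "stress": 9, "difficulty_vs_estimation": 7,
     "performance": 5, "produced_quality": 6},
]

def sprint_summary(responses, sprint):
    """Average score per factor for one sprint, across the whole group."""
    in_sprint = [r for r in responses if r["sprint"] == sprint]
    return {f: mean(r[f] for r in in_sprint) for f in FACTORS}

def member_history(responses, member):
    """Per-sprint scores for one member, so they can track their own health over time."""
    return {r["sprint"]: {f: r[f] for f in FACTORS}
            for r in responses if r["member"] == member}

print(sprint_summary(responses, 3))
print(member_history(responses, "alice"))
```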

The main reason for this KPI is to get an indication of whether a good work environment has been achieved or not. This KPI will be used as an indicator of group dynamics, both for the whole group and for the smaller teams. If taken seriously, it will also indicate, and warn, each team member if they are overworking themselves during the project.

Another useful reason for using this KPI is to give each individual the ability to reflect on the past sprints more concretely and to optimize their workload for the upcoming sprint.

Produced velocity

Estimations regarding production value will be made for each iteration. The threshold will be kept static, while individual feature estimations will be adapted. If a feature isn't finished, its score will be 0 in that sprint.

The user stories' scores will be estimated by the whole group and delegated to the individual teams. Each delegated user story will then be split up into use cases with a specific score by the associated team, and each user story will further be split into the tasks that will be used when implementing the features.

Each story and each task will be assigned a specific value, and when finished these values accumulate toward the combined group score. This will be visualized with a Burn-Up chart.
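As a rough illustration of how the accumulated group score behind the Burn-Up chart could be computed, the sketch below sums finished points per sprint, with unfinished work counting as 0 as described above. The task list and field names are assumptions made for the example.

```python
# Illustrative assumption: each task carries its estimated points and the sprint it
# was finished in (None = not finished, so it contributes 0 to the score).
tasks = [
    {"name": "login form", "points": 8, "finished_in_sprint": 1},
    {"name": "user database", "points": 13, "finished_in_sprint": 2},
    {"name": "search feature", "points": 20, "finished_in_sprint": None},  # counts as 0
]

def burn_up(tasks, num_sprints):
    """Cumulative finished points after each sprint (the Burn-Up curve)."""
    curve = []
    total = 0
    for sprint in range(1, num_sprints + 1):
        total += sum(t["points"] for t in tasks if t["finished_in_sprint"] == sprint)
        curve.append(total)
    return curve

print(burn_up(tasks, num_sprints=3))  # -> [8, 21, 21]
```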

The objective of this method is to have an estimate of the size of each story/task while being able to show the Product Owner the amount of work that was done in the last sprint. It will also be used to define a set amount of work each sprint that the team should strive for.

Strategy

Each individual group has a goal of 100 points of velocity per sprint. The group will then set its estimations to reach this goal by the end of the sprint by summing the velocity that has been estimated for each assigned user story/task.

The point system will be kept constant, but the velocity of each task will be adapted to new estimations at each Sprint Planning.
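A small sketch of the planning check described above is given here, assuming stories are kept as (name, estimated points) pairs. The 100-point goal comes from the text; the story names and numbers are purely illustrative.

```python
SPRINT_GOAL = 100  # points of velocity per individual group and sprint

def plan_is_sufficient(assigned_stories):
    """Sum the estimated velocity of the assigned stories/tasks and
    compare it against the sprint goal."""
    planned = sum(points for _name, points in assigned_stories)
    return planned, planned >= SPRINT_GOAL

stories = [("login form", 40), ("user database", 35), ("search feature", 30)]
planned, enough = plan_is_sufficient(stories)
print(f"planned {planned} points, goal reached: {enough}")
```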

Results: Produced velocity

The results from the project can be found here: https://docs.google.com/spreadsheets/d/1D-e4eu4Ox63KY1qPNM5qqKBDRvf7SXSrMOx7200cRGw/edit?usp=sharing

Product Owner satisfaction

Strategy

A problem with not having the Product Owner available at all times while working is getting off track and not producing what the owner actually wants. To make sure this does not happen, we are going to use Product Owner satisfaction as a KPI.

At the weekly sprint review, the Product Owner will be asked to grade the value of the current product. This will then be put into perspective with the agreed Produced velocity. If there are big differences, actions regarding adaptation and negotiation with the Product Owner will be taken.
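One possible way to put the Product Owner's grade into perspective with the produced velocity is sketched below. The 1-10 grading scale and the threshold for what counts as a "big difference" are assumptions for illustration only.

```python
def satisfaction_gap(po_grade, produced_points, planned_points, max_grade=10):
    """Compare the PO's grade (assumed 1-10) with the share of planned velocity
    that was actually produced; a large gap suggests the team is producing work
    the owner does not value, or the other way around."""
    grade_ratio = po_grade / max_grade
    velocity_ratio = produced_points / planned_points
    return grade_ratio - velocity_ratio

gap = satisfaction_gap(po_grade=4, produced_points=95, planned_points=100)
if abs(gap) > 0.3:  # illustrative threshold for a "big difference"
    print("Discuss adaptation with the Product Owner at the next review.")
```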

Code/test coverage

Charts based on test coverage of the produced code.

Strategy

To reduce the risk of writing unnecessary or untested code, we have chosen to use code coverage as a KPI. Striving for high code coverage will result in cleaner code that works as intended. We will use a plugin that automatically calculates code coverage. Test coverage does not mean that the code is perfect; the code is only as good as its tests.
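The text does not say which plugin is used or which language the codebase is in. Purely as an illustration, if the code were Python, the coverage.py package could produce the numbers behind such charts; the module and test entry point named below are hypothetical.

```python
import coverage

# Illustrative sketch only: coverage.py records which lines execute while the tests run.
cov = coverage.Coverage()
cov.start()

import mymodule          # hypothetical module under test
mymodule.run_tests()     # hypothetical test entry point

cov.stop()
cov.save()
total_percent = cov.report()  # prints a per-file table and returns the total percentage
print(f"total coverage: {total_percent:.1f}%")
```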