Gaming the system: Some project examples
Earlier this year Liz Keogh gave a talk at QCon London titled 'Learning and Perverse Incentives: The Evil Hat', in which she encouraged people to try gaming the systems they take part in.
Over the last month or so we’ve had two different metrics visibly on show, which makes them prime targets for being gamed.
The first is a metric we included on our build radiator showing how many commits each person has made to the git repository that day.
We originally created it to see which people were embracing git and committing locally, and which were still treating it like Subversion, only committing when they had something to push to the central repository.
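The post doesn’t include the script behind the radiator, but a minimal sketch of how such a per-person daily count could be produced, assuming the radiator machine has a local clone of the repository, might look something like this:

```python
import subprocess
from collections import Counter

def commits_per_author_today(repo_path="."):
    """Count today's commits per author by parsing `git log` output."""
    # --since=midnight restricts the log to commits made today;
    # --format=%an prints only the author name for each commit.
    output = subprocess.run(
        ["git", "log", "--since=midnight", "--format=%an"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    authors = [line for line in output.splitlines() if line.strip()]
    return Counter(authors)

if __name__ == "__main__":
    for author, count in commits_per_author_today().most_common():
        print(f"{author}: {count}")
```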
The other behaviour we wanted to encourage is making lots of small commits: it’s easier for someone browsing 'git log' to see what’s happened over time just by glancing at the commit messages.
Bigger commits tend to mean that changes have been made in multiple places, and perhaps not all of those changes are related to each other.
Since we made that metric visible the number of commits has noticeably increased, and it’s mostly been positive because people tend to push to the central repository quite frequently.
There have, however, been a couple of occasions where people have made 10-15 commits locally over the day, pushed them all at the end of the day, and gone straight to the top of the leaderboard.
The disadvantage of this approach is that the rest of the team isn’t integrating with your changes until right at the end of the day, which can lead to merge hell for them.
There have also been times when people’s counts have been artificially inflated because they’ve checked in, broken the build, and then checked in again to fix it.
Our next trick will be to find a way to combine local commits and remote pushes into a single metric.
Another metric which we’ve recently made visible is the number of points that we’ve completed so far in the iteration.
Previously this data lived in our Project Manager’s head and in Mingle, but since a big part of how the team is judged is based on the number of points 'achieved', the team asked for the score to be made visible.
Since that happened, from my observation, we’ve 'achieved' or got very close to the planned velocity every week, whereas before it was a bit hit and miss.
I think the estimates made on stories have subconsciously started to veer towards the cautious side, whereas previously they were probably more optimistic.
Another change in behaviour I’ve noticed is that people tend to postpone any technical tasks they have to do when we’re near the end of an iteration and instead keep their focus on the story to ensure it gets completed in time.
We’ve also seen a couple of occasions where people stayed two or three hours longer on the last day of the iteration to ensure that stories got signed off so the points could be counted.
It’s been quite interesting to observe how behaviour can change when the visibility of metrics is increased, even when, as with the first metric, it’s actually irrelevant to how the team is perceived.
About the author
I'm currently working on short form content at ClickHouse. I publish short 5 minute videos showing how to solve data problems on YouTube @LearnDataWithMark. I previously worked on graph analytics at Neo4j, where I also co-authored the O'Reilly Graph Algorithms Book with Amy Hodler.