No, it’s not over yet, and we are preparing for the follow-up meeting. We’ve compiled a few more charts, fixed a few mistakes, and will be ready to present them, as well as discuss the topics we didn’t cover last time.
For instance: the charts we produced did not show how many non-reproducible bugs the projects had. Here’s what happens most of the time: the tester encounters abnormal behavior, repeats the scenario, and the behavior doesn’t recur. Due to time constraints, the bug is reported as non-reproducible.
The PMs argued that since the numbers presented at release time do not exclude the non-reproducible bugs, the overall quality of the release looks worse than it would if inspected through “reproducible bugs only” glasses.
Obviously, this is wrong (and there is a general agreement not to dismiss such bugs). Besides, you know that the bugs you can’t reproduce in-house will be reproduced by your customers.
However, the numbers showed there aren’t many of these non-reproducible bugs. So we’re back to square one: we proved that the numbers still paint the correct picture, but we invested time in proving it. I’m not sure what would have happened if the results were different, but we’re not there.
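For the curious, the check itself is trivial. Here’s a minimal sketch of it in Python, assuming a hypothetical CSV export from the bug tracker with a `resolution` column; the file name and column values are illustrative assumptions, not our tracker’s actual schema.

```python
# Minimal sketch: estimate the share of non-reproducible bugs in a
# release. Assumes a hypothetical CSV export ("release_bugs.csv")
# with a "resolution" column; names/values are illustrative only.
import csv

with open("release_bugs.csv", newline="") as f:
    bugs = list(csv.DictReader(f))

total = len(bugs)
non_repro = sum(1 for b in bugs if b["resolution"] == "not-reproducible")
share = non_repro / total if total else 0.0

print(f"{non_repro} of {total} bugs ({share:.1%}) are non-reproducible")
```

If the share is small, say a few percent, excluding those bugs barely moves the totals, which is exactly what we found.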
We feel we’re thrashing instead of working on real improvement: we churn more data, but no action is taken.
We’ll see how this turns out in the next meeting.