Friday, December 14, 2007

If Collective Intelligence is the question, then dashboards are not the answer

I've been thinking recently about collective intelligence and how it applies to software engineering in general and software metrics in particular.

My initial perspective on collective intelligence is that it provides an organizing principle for a process in which a small "nugget" of information is somehow made available to one or more people, who then refine it, communicate it to others, or discard it, as the case may be. Collective intelligence results when mechanisms exist to prevent these nuggets from being viewed as spam (and the people communicating them from being viewed as spammers), along with mechanisms to support the refinement and elaboration of the initial nugget of information into an actual "chunk" of insight or knowledge. As such chunks accumulate over time, they enable long-term organizational learning from collective intelligence processes.

Or something like that. What strikes me about the software metrics community and literature is that when it comes to how "measures become knowledge", the most common approach seems to be:
  1. Management hires some people to be the "Software Process Group"
  2. The SPG goes out and measures developers and artifacts somehow.
  3. The SPG returns to their office and generates some statistics and regression lines.
  4. The SPG reports to management about the "best practices" they have discovered, such as "The optimal code inspection review rate is 200 LOC per hour".
  5. Management issues a decree to developers that, from now on, they are to review code at the rate of 200 LOC per hour.
I'm not sure what to call this, but collective intelligence does not come to mind.

When we started work on Hackystat 8, it became clear that there were new opportunities to integrate our system with technologies like Google Gadgets, Twitter, Facebook, and so forth. I suspect that some of these integrations, like Google Gadgets, will turn out to be little more than neat hacks with only minor impact on the usability of the system. My conjecture is that the Hackystat killer app has nothing to do with "dashboards"; most modern metrics collection systems provide dashboards and people can ignore them just as easily as they ignore their web app predecessors.

On the other hand, integrating Hackystat with a communication facility like Twitter or with a social networking application has much more profound implications, because these integrations have the potential to create brand new ways to coordinate, communicate, and annotate an initial "nugget" generated by Hackystat into a "chunk" of knowledge of wider use to developers. It could also work the other way: an anecdotal "nugget" generated by a developer ("Hey folks, I think we should all run verify before committing our code to reduce continuous integration build failures") could be refined into institutional knowledge (a telemetry graph showing the relationship between verify-before-commit and build success), or discarded (if the telemetry graph shows no relationship).
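To make the verify-before-commit example concrete, here is a minimal sketch of the kind of analysis that would turn that anecdotal nugget into (or refute it as) institutional knowledge. The data, field names, and `success_rate` helper are all hypothetical; a real Hackystat analysis would pull build and commit events from its sensor data services rather than from an inline list.

```python
# Hypothetical commit events: did the developer run verify first,
# and did the subsequent continuous integration build pass?
commits = [
    {"dev": "alice", "verified_before_commit": True,  "build_passed": True},
    {"dev": "bob",   "verified_before_commit": False, "build_passed": False},
    {"dev": "carol", "verified_before_commit": True,  "build_passed": True},
    {"dev": "dave",  "verified_before_commit": False, "build_passed": True},
    {"dev": "erin",  "verified_before_commit": True,  "build_passed": True},
    {"dev": "frank", "verified_before_commit": False, "build_passed": False},
]

def success_rate(events, verified):
    """Fraction of passing builds among commits with the given verify status."""
    group = [e for e in events if e["verified_before_commit"] == verified]
    if not group:
        return None
    return sum(e["build_passed"] for e in group) / len(group)

with_verify = success_rate(commits, True)
without_verify = success_rate(commits, False)
print(f"verify-before-commit pass rate: {with_verify:.0%}")
print(f"no-verify pass rate:            {without_verify:.0%}")
```

If the two rates differ substantially across a real project's history, the nugget graduates into a telemetry-backed chunk of knowledge; if they don't, it gets discarded, which is exactly the refine-or-discard process described above.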
