Thursday, September 27, 2007

Representation of Developer Expertise

Developer expertise is generally represented by "years of experience", a measure that is nearly useless. Does someone have 10 years of experience, or 1 year of experience 10 times over?

The process and product data collected by Hackystat has the potential to provide a much richer and more meaningful representation of developer expertise. Let's restrict ourselves to the domain of Java software development, for the sake of discussion. First, let's consider the kinds of data we could potentially collect as a developer works:
  • When they are working on a Java system.
  • The packages imported by the class that they are editing.
  • The IDE they are using.
  • The IDE features (debugger, build system, refactoring, configuration management) they are using.
  • Their behaviors within the IDE (writing tests, writing production code, running the system, running the debugger, running tests, invoking the build system, etc.)
  • Their configuration management behavior: frequency of commits, degree of churn, and conflicts.
  • The number of developers associated with their current project.
  • The level of interdependency between developers: are files worked on by multiple developers? Has the developer ever worked on code written by someone else? Has someone else ever worked on code written by this developer?
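As a thought experiment, the kinds of data listed above could be captured as a stream of timestamped sensor events. Here is a minimal sketch in Java (all class, field, and method names are hypothetical illustrations, not actual Hackystat APIs):

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical sketch: an expertise profile built from a sensor event stream. */
public class ExpertiseProfile {

    /** One timestamped observation about a developer's activity. */
    public static class SensorEvent {
        final String developer;  // who generated the event
        final Instant when;      // when it occurred
        final String kind;       // "Edit", "UnitTest", "Commit", "DebuggerRun", ...
        final String detail;     // e.g. an imported package or IDE feature

        public SensorEvent(String developer, Instant when, String kind, String detail) {
            this.developer = developer;
            this.when = when;
            this.kind = kind;
            this.detail = detail;
        }
    }

    /** Count events of each kind for one developer: a crude expertise summary. */
    public static Map<String, Integer> summarize(String developer, List<SensorEvent> events) {
        Map<String, Integer> counts = new TreeMap<>();
        for (SensorEvent e : events) {
            if (e.developer.equals(developer)) {
                counts.merge(e.kind, 1, Integer::sum);
            }
        }
        return counts;
    }
}
```

Even a crude summary like this says more about what a developer actually does than "10 years of experience" ever could.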
I believe that capturing this kind of information can provide a much richer and more useful representation of developer expertise. It can provide information useful for determining:
  • Who has experience with a particular tool/technique?
  • Who has complementary skills to my own?
  • Who has recent experience with a given tool/technique?
At a higher level, this information could also be useful in forming groups, by helping identify developers with similar and/or complementary skills.
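The query "who has recent experience with a given tool/technique?" could then be answered by filtering the event stream on a time window. A sketch of this idea, again with hypothetical names:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Hypothetical query over an event stream: who used a tool/technique recently? */
public class ExpertiseQuery {

    /** One timestamped observation about a developer's activity. */
    public static class Event {
        final String developer;
        final Instant when;
        final String detail;  // e.g. "org.junit", "Debugger", "Refactor"

        public Event(String developer, Instant when, String detail) {
            this.developer = developer;
            this.when = when;
            this.detail = detail;
        }
    }

    /** Developers with an event mentioning the tool within the given window. */
    public static Set<String> recentExperience(List<Event> events, String tool,
                                               Instant now, Duration window) {
        Set<String> result = new LinkedHashSet<>();
        for (Event e : events) {
            boolean recent = !e.when.isBefore(now.minus(window));
            if (recent && e.detail.contains(tool)) {
                result.add(e.developer);
            }
        }
        return result;
    }
}
```

The same filter with a longer (or unbounded) window answers the plain "who has experience with X?" question; comparing two developers' summaries gets at complementary skills.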

Important research issues include how to give developers "control" over their profiles, how to display this information, and how to perform queries over it.

Tuesday, September 18, 2007

Twitter, Hackystat, and solving the context switching problem

A recent conversation with Lorin Hochstein at ISERN 2007 has led me to wonder if Twitter + Hackystat solves what we found in prior research to be a fundamental show stopper with the manual metrics collection techniques like the Personal Software Process: the "context switching problem".

Back in the day when we were building automated support for PSP, a basic problem we couldn't solve was developer hatred for having to constantly stop what they were doing in order to tell the tool what they were doing. We called this the "context switching problem", and we came to believe that no amount of tool support for the kind of data that the PSP wants to collect can overcome it, because PSP data intrinsically requires ongoing developer interruption.

I believe a central design win in Twitter is that it does not force a context switch: the messages that you write are so constrained in size and form that they do not interrupt the "flow" of whatever else you're doing. This is fundamentally different from a phone call, a blog entry, an email, or a PSP log entry.

What makes Twitter + Hackystat so synergistic (and, IMHO, so compelling) is that Hackystat sensors can provide a significant amount of useful context to a Twitter message. For example, suppose Developer Joe twitters:

"Argh, I'm so frustrated with JUnit!"

Joe's recent Hackystat event stream could reveal, for example, that he is struggling to resolve a compiler error involving annotations. Developer Jane could see that combination of Twitter and Hackystat information, realize she could help Joe in a couple of minutes, and IM to the rescue.
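One way to sketch the enrichment step: given Joe's message and his recent event stream, attach the events that occurred in the few minutes before he posted. The types below are hypothetical illustrations; this is neither the Twitter API nor the actual Hackystat sensor data format:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical mashup step: augment a status message with recent sensor events. */
public class TweetContext {

    /** One abstracted sensor event from the developer's recent activity. */
    public static class Event {
        final Instant when;
        final String description;  // e.g. "compile error involving annotations"

        public Event(Instant when, String description) {
            this.when = when;
            this.description = description;
        }
    }

    /** Render the message plus any events within `window` before it was posted. */
    public static String augment(String message, Instant posted,
                                 List<Event> stream, Duration window) {
        List<String> context = new ArrayList<>();
        for (Event e : stream) {
            // Keep only events shortly before (and not after) the message.
            if (!e.when.isBefore(posted.minus(window)) && !e.when.isAfter(posted)) {
                context.add(e.description);
            }
        }
        return context.isEmpty()
            ? message
            : message + " [context: " + String.join("; ", context) + "]";
    }
}
```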

A second very cool hypothesis is that this combination overcomes the need to "ask questions the smart way". Indeed, Developer Joe is not asking a question or even asking anyone explicitly for help: he is merely expressing an emotion. The additional Hackystat data turns it into a "smart question" that Developer Jane decides to volunteer to answer.

So, how do we create this mashup? I can see at least two different user interfaces:

(a) Hackystat-centric. Under this model, developers in a group use Twitter in the normal way, but Hackystat is also a member of that community and subscribes to the feed. Then we create a widget in, say, NetVibes that displays the Twitter messages augmented with (possibly abstracted) raw sensor data that provides the additional interesting context. Developers then use this UI to monitor the HackyTwitter conversation.

(b) Twitter-centric. In this case, Hackystat abstractions of events are posted to Twitter and thus become part of the normal Twitter feed. People use Twitter in the normal way, but now they also receive a stream of Twitter messages representing information from Hackystat.
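The Twitter-centric model boils down to formatting a Hackystat abstraction as a short status message. A minimal sketch (the 140-character limit is Twitter's; the class and method names here are made up):

```java
/** Hypothetical Twitter-centric posting: abstract a sensor event into a short status. */
public class HackyPost {

    /** Summarize an abstraction as a <=140 character message, truncating if needed. */
    public static String toStatus(String developer, String summary) {
        String status = developer + ": " + summary;
        // Twitter's classic 140-character limit; truncate with an ellipsis.
        return status.length() <= 140 ? status : status.substring(0, 137) + "...";
    }
}
```

For example, `toStatus("joe", "3 failing JUnit tests in Foo.java over the last 15 minutes")` would show up in the group's feed alongside ordinary tweets.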

How might we test these hypotheses? As an initial step, I propose a simple pilot study in which a software engineering class works on a group project in the "normal" way for a month, then installs Hackystat+Twitter and continues on. After the second month, a questionnaire is administered to get feedback from the students on how their communication and coordination changed from month one to month two, and what benefits and problems the introduction of Hackystat + Twitter created.

If this experiment is successful, then we refine our experimental method and move on to an industrial case study with more longitudinal data collection. For example, we could build upon the case study by Ko, DeLine, and Venolia to see if Hackystat+Twitter reduces the effort developers must expend to find out what their co-workers on a project are currently doing.

What would we call this? I still like the term Project Proprioception.

Sunday, September 16, 2007


I came across the Panopticode project today. It is an interesting approach to metrics aggregation. They motivate their approach by listing the following limitations of current Java metrics tools:
  1. Installation and configuration can be difficult and time consuming
  2. Most only measure the status at a point in time and have no concept of historical data
  3. Each tool represents its data in a proprietary format
  4. It is very difficult to correlate data from different tools
  5. It can be difficult to switch between competing tools that provide similar functions
  6. Most have extremely limited reporting and visualization capabilities
Of course, I agree absolutely. Panopticode provides a way to simplify the download, installation, and invocation of the following tools so far: Emma, JDepend, Checkstyle, and JavaNCSS, and provides an interesting visualization of the results called TreeMaps.

There are some substantial differences between their approach and ours in Hackystat:
  • Panopticode limits itself to the world of Java code built using Ant. This is the simplifying assumption they use to achieve (1). Hackystat is agnostic with respect to language and build technology.
  • Current reports do not appear to include history, so I don't know how they plan to provide (2). Hackystat includes a domain-specific language for analysis and visualization of project history called Software Project Telemetry. This also provides a general solution to problem (4) of correlating data from different tools. Panopticode does not appear to provide a solution to (4), at least from perusing the gallery and documentation pages. I will also be interested to see how they create a scalable solution as the toolset grows to, say, 30 or 40 tools. This is a hard problem that the Telemetry DSL addresses.
  • While I agree with statement (6) that current reporting tools have extremely limited reporting and visualization capabilities, Panopticode seems to currently suffer from that same problem :-) Hackystat, at least with Version 8, will break out of the Java JSP WebApp prison with an architecture well suited to a variety of reporting and visualization approaches, including ambient devices, Ruby on Rails, GWT, and so forth. Finally, while TreeMaps are certainly sexy, I don't really see how they are fundamentally better than the unsexy HTML reports of JavaNCSS, Emma, etc. (at least, I don't see it given the way Panopticode uses TreeMaps at present). If I am trying to find low coverage, Emma's HTML interface gets me there just about as easily as the TreeMap does. TreeMaps are cute and all, but they feel more like syntactic sugar than a fundamental interface paradigm shift.
The project is in its bootstrapping phases, so in some sense it's not fair to compare it to Hackystat, which is in its 6th year of public release. I also think it's an interesting decision to limit oneself to Java/Ant, which I think can really simplify certain problems that Hackystat faces in order to appeal to a broader audience. I look forward to seeing how this project progresses in the future.

Thursday, September 6, 2007

CodeRuler Tournament Setup

I keep having to re-learn how to hack the CodeRuler package each semester for the in-class tournament, so here are the steps:

  1. Expand the file in the eclipse/plugins directory. Move the actual jar file out of the way, so that Eclipse will load plugins/
  2. You will need to update two kinds of files in this directory and restart Eclipse to see the changes. The first file is META-INF/. Second, in the directory, there are pairs of .class and .xml files that implement the sample rulers.
  3. Go back into Eclipse, create a Games project, and create a MyRuler. Copy the student's MyRuler implementation into your template MyRuler. Then refactor this MyRuler to be in the package (still in your Games project directory), with the class name <StudentName>. Save the file to compile the code and generate the <StudentName>.class file in the bin directory.
  4. Finally, copy the <StudentName>.class file into the eclipse/plugins/ directory, create the corresponding .xml file, and update the file.
  5. Restart Eclipse and the student code ruler should now be present as a Sample.
When I do the tournament, I first have the students go through their code in front of the class to explain their strategy. This approach makes it easy to control the tournament and also have the code right at hand for review.