2 min read

Incentivizing Innovation

One of the things I love most about the Ruby community is how easy it is to try out small mutations in practices, which leads to very rapid evolution in best practices. Rather than looking to an authority to design, plan, and implement "best practices" (a la the JSR model), members of the Ruby community try different things and refine those practices rapidly over time.

It is natural to assume, looking from the outside, that the proliferation of practices is dangerous or fracturing. It is not. Instead, it functions more like biological evolution, where small mutations conspire over time to refine and improve the underlying organism. Consider the example of testing. There are a number of testing frameworks used by Rubyists, but their feature-sets have largely converged. As the feature-sets converge on superior solutions (e.g. Rails' flavor of Test::Unit now comes with RSpec-style declarative tests), another round of differentiation occurs, allowing the community to zoom in on the now-smaller differences and let evolution take its course.
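To make that convergence concrete, here is a minimal sketch of what "RSpec-style declarative tests" look like in Rails' flavor of Test::Unit, compared with the classic method-per-test style. The test contents are placeholders, and the second class assumes Rails (ActiveSupport) is loaded:

```ruby
# Classic Test::Unit style: each test is a method whose name starts with "test_".
require "test/unit"

class ClassicCartTest < Test::Unit::TestCase
  def test_cart_starts_empty
    cart = []                      # stand-in for some hypothetical object under test
    assert_equal 0, cart.size
  end
end

# Rails' flavor (ActiveSupport::TestCase) adds an RSpec-style declarative
# macro, so the same assertion reads more like a specification:
class DeclarativeCartTest < ActiveSupport::TestCase
  test "cart starts empty" do
    cart = []
    assert_equal 0, cart.size
  end
end
```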

The analogy isn't perfect, but the basic idea is sound. It's tempting to take a snapshot of the consolidated practices on May 2, 2009, and shout them from the rooftops in more official form, so that those who haven't caught up yet can immediately select the "winning" practices without doing any detailed investigation. Further, some have suggested that we should rank Ruby firms by how well they conform to the most popular practices of the moment. This would allow those looking for a firm to hire to determine whether their potential hires conform to those practices.

Unfortunately, while that might work for a given slice in time, it adds unwelcome, artificial inertia to the practices of today. In addition to the normal inertial forces that resist changes until they are proven (wise forces), firms that want to try out new practices would have to contend with the artificial inertia of being moved down a list ranked by conformance to the established practices.

In effect, it chills experimentation and innovation, and drags on natural evolution.

It makes perfect sense to create a forum for sharing and aggregating the practices that people are finding useful at the moment. What makes less sense is creating a ranked list of "popular" practices, with no obvious mechanism for mediating differences except pure popularity. And even worse is ranking firms by their aggregate level of conformance.

As Rubyists, we need to push back against artificial attempts to encourage conformance and discourage innovation. Rails shops should find ways to advertise the quality of their practices without falling back on appeals to the masses, and those in the market for Rails services should do their due diligence. Measuring the popularity of a practice as a replacement for due diligence is frankly a recipe for failure, and once a real investigation has been done, hollow measures of popularity won't add much.