Yehuda Katz is a member of the Ember.js, Ruby on Rails and jQuery Core Teams; his 9-to-5 home is at the startup he founded, Tilde Inc.. There he works on Skylight, the smart profiler for Rails, and does Ember.js consulting. He is best known for his open source work, which also includes Thor and Handlebars. He travels the world doing open source evangelism and web standards work.

Incentivizing Innovation

One of the things I love the most about the Ruby community is how easy it is to try out small mutations in practices, which leads to very rapid evolution in best practices. Rather than having the community look toward authority to design, plan, and implement “best practices” (a la the JSR model), members of the Ruby community try different things, and have rapidly made refinements to the practices over time.

It is natural to assume, looking from the outside, that the proliferation of practices is dangerous or fracturing. It is not. Instead, it functions more like biological evolution, where small mutations conspire over time to refine and improve the underlying organism. Consider the example of testing. There are a number of testing frameworks used by Rubyists, but they have largely converging feature-sets. As the feature-sets converge on superior solutions (e.g. Rails’ flavor of Test::Unit now comes with Rspec-style declarative tests), another round of differentiation occurs, allowing the community to zoom in on the now smaller differences and allow evolution to take its course.

The analogy isn’t perfect, but the basic idea is sound. It’s tempting to find consolidated practices on May 2, 2009, and find a way to shout them from the rooftops in more official form, so that those who haven’t caught up yet will have a way to immediately select the “winner” practice without having to do detailed investigation. Further, some have suggested that we should rank Ruby firms by how well they conform with the most popular practices of the moment. This would allow those who are looking for a firm to hire to determine whether or not their potential hires conform with those practices.

Unfortunately, while that might work for a given slice in time, it provides unwelcome and artificial inertia for the practices of today. Now, in addition to having to contend with the normal inertial forces that resist changes until they are proven (wise forces), firms that want to try out new practices will need to contend with the artificial inertia imposed by being moved down on a list of firms conforming with other practices.

In effect, it creates a chilling effect on experimentation and innovation, and a drag on natural evolution.

It makes perfect sense to create a forum for sharing and aggregating the practices that people are finding useful at the moment. What makes less sense is creating a ranked list of “popular” practices, with no obvious mechanism for mediating differences except pure popularity. And even worse is ranking firms by their aggregate level of conformance.

As Rubyists, we need to discourage artificial attempts to encourage conformance and discourage innovation. Rails shops should find other ways to advertise the quality of their practices without falling back on appeals to the masses, and those in the market for Rails services should do their due diligence. Measuring the popularity of a practice as a replacement for due diligence is frankly a recipe for failure, and once real investigations have been done, hollow measures of popularity won’t add much.

32 Responses to “Incentivizing Innovation”

Agreed. This is the first brick on the road to hell that other development communities have found themselves in. The greatest strength of the Ruby community is the freedom to innovate without penalty. If RMM catches on as a way to judge shops you’ll quickly find the death of such innovation.

[Edited at the request of the author]

Nice post, Yehuda. Let me be even more pessimistic: I don’t think it’s a given that over time it will even become apparent which practices are “best practices” and which are not. There may not be such a thing as an ideal, eternal, perfectible best practice. To extend the Darwinian metaphor: From my dilettantish reading I believe that biologists generally believe that there’s no such thing as a “perfect” biological form, just that which competes better than others in that given ecological context. The dinosaurs were doing pretty great ’til that meteor hit. Their mass extinction doesn’t make them right or wrong, just unlucky.

Though, that might actually be less pessimistic: I don’t think a site like RMM will lead to any sort of stagnation in practices since I don’t think such a stagnation is close to possible.

But, yes, diversity is good. That way if the field gets hit by a meteor (metaphorically speaking), some of us are likely to survive :)

You and other commenters hit the nail on the head in terms of the RMM most likely breaking the innovation snowball that has been seen in the Rails community since its inception.
I am also worried that the whole idea behind RMM will cause fragmentation in the community. I hope that the community is able to take the driving seat on this, and it doesn’t end up hurting all of us: there’s still time to shape it up to become a useful resource.

I have been pondering how the community in general tends to lean towards a monoculture view of things. In many respects it has reasonable short-term effects, but long-term it has pretty crippling effects.

I always admired Merb for just this reason. Merb core seemed to push the envelope, treating what was already accepted as “good enough” as not truly being good enough.

I would argue that results are what matter and not the process/tool/framework used to get there. For example business people like rails because it is quick to market for their ideas. They don’t give a crap about whether it’s the ruby language or convention over configuration or whatever that helps get them there. They just care that it delivers as promised.

RMM is flawed in that it is focused on the process and not the result. I think that the Ruby community has gotten caught up with testing in a similar way. People are more concerned with “how” to test rather than the reason for testing to begin with. So while I agree with your testing analogy, I think declaring a “winner” is bad news for testing innovation in the long run.

You could apply this same thinking to a number of things in the Rails community. I love the points you make here. I love that you brought this kind of brashness to Merb and that you are continuing to pursue it in Rails. I just ask that you apply it to more than RMM.

In my opinion, these are valid concerns. To my knowledge, this is the first cohesive criticism of RMM that isn’t FUD or just lack of understanding. However, I’d like to point out that if what you’re indicating here were to happen, it would be an unfortunate side effect, rather than the actual intent of the application.

In my mind RMM has always been about census. As a journeyman software craftsman, I have a set of tools that I employ to achieve success. Part of being a journeyman is the investigation into how others achieve success using different tools. RMM allows you to easily investigate others’ patterns, practices and tools for success. Does that mean it couldn’t be used for evil? Of course not, but that is most definitely *not* the intent and I, for one, would welcome discussion about how to prevent gaming of the system or stifling of innovation.

The idea behind the ranking of practices is to allow practices to shift through positions over time (fluctuating naturally as a reflection of our actual practices) so we can have an informal poll of practices which seem to work well for many people. There is obvious value in such a metric, though I do concede that human nature is going to likely assign more weight to it than there should be.

I’ll be the first to admit that the endorsements are a popularity contest, in precisely the same way that WWR endorsements are. At the same time, it provides a system of vouching wherein we can deduce if a firm actually employs the practices they espouse. The original idea there was to try to stop people from gaming the system in precisely the way you envision. It has its own flaws though.

Bearing in mind the above desirable traits that we’ll be unlikely to abandon, do you have any suggestions for how we might improve the system to address some of your concerns?

Speaking as someone who works with .NET in New York and runs a user group, we are _stuck_ with best practices from yesteryear because of the certs system and what it teaches. There’s an industry built on teaching the best practices that have existed for the last 10 years and that is a hard thing to challenge or even get moving in another direction.

I sincerely hope that Ruby doesn’t continue down that path because it’s horrible, boring, and only provides the guarantee that you won’t be able to change the norms that don’t make sense when the time comes.

I hate RMM.
I hate it so much it’s making me think it’s time to start moving on from Rails and Ruby. Which is probably a slight over-reaction on my part, but anyway.

I think you are hitting on the core problem. It’s not that we’re against a forum for sharing practice and process. But the moment we start rating and ranking organisations we are engaging in the worst kind of group think that many of us are trying desperately to avoid.

If I wanted this sort of Best Practice, I would have stayed in the Java world, you know?

@voxdolo There are a few obvious flaws in the current system that made me doubt the sincerity of the folks who are building the tool.

The current RMM tool does not serve as a census of practices, since users endorse and recommend firms, not practices. New firms create their own practices, and those practices are recommended per firm.

Additionally, the first batch of endorsements are mostly Hashrocketeers endorsing Hashrocket, not exactly a great way to engender trust in the system.

I’d like to see the entire “firm” section of the site removed entirely, and I’d like to see conflicting practices explicitly acknowledged and debated. Instead of allowing unrestricted voting on conflicting practices to choose winners, I’d like to see the folks supporting those practices to be given a space to argue for the practices. Ideally, a third party would aggregate those arguments into coherent “for” and “against” positions, and all of the arguments in favor of different practices could be examined by interested parties.

Stephen:

“The idea behind the ranking of practices is to allow practices to shift through positions over time (fluctuating naturally as a reflection of our actual practices) so we can have an informal poll of practices which seem to work well for many people. There is obvious value in such a metric, though I do concede that human nature is going to likely assign more weight to it than there should be.”

What’s the value in such a metric other than as an informal popularity contest? If a specific practice is more popular than another, what am I to make of that? That it’s more effective? That it’s more trendy? That it has yet to be debunked? That it’s more popular among the sorts of firms that are likely to participate in RMM?

And by the way can anybody explain to me why RMM is specific to Rails? I couldn’t find any Rails-specific practices on the “Practices” page.

@voxdolo – you made rude dismissive remarks about my criticism of RMM, but I answered your criticism in detail and politely on Obie’s blog. You never responded, and I think you should, specifically because in your comments here you say that Yehuda’s criticism of RMM is the first cohesive criticism. I assume you mean coherent, but if you believe my criticism did not cohere, I would love to find out why.

Looks like there’s going to be plenty of fodder for lively discussion in Vegas. I’m not concerned.

As someone who’s worked on the system and has been involved with the app since day one, even I regarded it with healthy skepticism initially. I didn’t like the original concept as I understood it at all. I *did* (and obviously still do) like the application itself, but even that didn’t start coming about until our conversation with Corey Haines and his thoughts on its similarity to a vouching system (some of which you can see in the video here: http://vimeo.com/3345483 beginning at 4:18 and specifically at 5:52 [Corey Haines] and 6:12 [Cory Foy]).

There’s a nuance that you’ve got wrong in the application model. Users actually do endorse the firm’s use of a practice. The call to action says: “Endorse Hashrocket”, but the endorsement itself is of the individual practices of the firm, not the firm itself.

In regards to the first batch of endorsements being Rocketeers, that’s true. We’ve put a halt to any further internal endorsement until other firms gain traction, so as not to appear disingenuous.

I think a significant part of the value of RMM is being able to look at the practices of a particular firm. Personally, I want to see what the Pivotal Labs and ENTPs of the world are doing… this reasserts what I think is valuable about the journeyman process and evaluating what your peers are doing and finding success in.

I like the idea of allowing some sort of conversation around individual practices. I think we would lose a lot of value, though, if the conversation became purely academic. When you endorse a firm’s employment of a practice, it keeps things concrete. I might think that pairing all the time is a great *concept* and argue stringently for it, but if I’m not practicing it, my argument is markedly less valuable than if I am.

Thanks for taking some time to think about this. I’m positive we can collectively make RMM something that will benefit the community and this is precisely the sort of dialogue that can make that happen.

I am with Francis on this. What happened to the science in computer science? Show me research that demonstrates Pair Programming is superior. A popularity list is just that. How does it help anyone to see that 32 firms supply drinks to their workers?

This whole thing really smells to me. These concerns were all raised when the idea was initially presented. Yet when the alpha/beta site goes up, it has firm ranking built in. A not so hidden agenda.

Stephen, sorry if this is in the FAQ and I missed this point: When users endorse a firm’s use of a practice, what sort of user are we talking about here? An employee of the firm? An employee of a client?

Also, which of these practices are Rails-specific? Is there a reason that this couldn’t be extended to software development shops in general, whether they use Python or C or Haskell?

Francis:

I suppose the value of the metric is the same as in any census. It can be interesting and perhaps even valuable to the right person to know that in 2007, in Kentucky, there were 92,289 privately held non-farm business establishments out of the 7,601,160 in the United States. Who that person is in our specific case, I’m not sure. I’m certainly interested in seeing how many other firms practice the Pomodoro Technique, as I’m a fan of it and made an app to help implement it (http://github.com/voxdolo/ding) :)

Again, that’s not to say that there’s not potential for abuse… but as previously stated, I think we can all work together to help prevent most of that.

Francis: IMO, there’s nothing specific to Rails about it. There’s been a good amount of discussion internally about its cross-applicability. Personally, I’d love to see it open to other ecosystems.

Francis: re: types of users:

There are basic users: “I have seen firm Y do X”
There are employees of the firm: “We do X”
And there are clients: “When I engaged Y they did X”

The thinking there was that some endorsements mean more than others. In the current implementation, all endorsements are weighted equally, but I’d like to see a future version that weights client endorsements in particular more heavily, as that seems to me a more valuable vouching than an endorsement from an employee or some random person.
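To make the idea concrete, the weighting scheme described above could be sketched roughly as follows. This is purely an illustration, not RMM’s actual model; the weight values and method names are assumptions:

```ruby
# Hypothetical endorsement weights: a client's vouching counts more
# than an employee's, which counts more than a bystander's.
# These numbers are illustrative assumptions, not RMM's real values.
WEIGHTS = { observer: 1.0, employee: 1.5, client: 3.0 }

# endorsements is an array of endorser-type symbols collected for one
# firm/practice pair, e.g. [:client, :employee, :observer].
def endorsement_score(endorsements)
  endorsements.sum { |type| WEIGHTS.fetch(type, 0.0) }
end
```

Under this sketch, a single client endorsement would outweigh two third-party ones, which captures the intuition that “we hired them and saw them do it” is stronger evidence than “I’ve heard they do it.”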

And can you speak a bit more about who’s doing the endorsing and what that implies?

I wonder if the use of the word “census” might be setting the methodological bar fairly high. Usually censuses are expensive, rigorous efforts intended to make sure that everybody is counted. I think in the U.S. we still try to count literally every single person among hundreds of millions, so it’s a good thing we only do it once a decade.

Many online vote-counting efforts, on the other hand, make no efforts to get a decently representative sample, and I don’t believe there are any reputable polling organizations that make significant use of the web in data gathering. It’s easy to limit participation to people you know and trust, but then you’re easily open to charges of stacking the vote with supporters. Any thoughts on how RMM will deal with these sorts of issues?

Oop, we’re cross-commenting. Thanks for answering my q about endorsement.

Christian: Actually my critique is different than that. I think the business of managing programmers, clients, requirements, methodology, etc., is necessarily a social science problem, not a physical science problem, and hence the question of proof is extremely difficult. I’d say that you can prove that, say, adding two chemicals together will produce a certain reaction, since such an experiment can be reasonably isolated. You can’t do that with pair programming, though. You can’t scientifically prove that pair programming is a good or bad practice, just like you can’t scientifically prove that small companies are better or worse than big companies, or that Democrats are better or worse at governing than Republicans. When variables can’t be isolated, you can’t prove things with statistics. So instead, when you’re asked if you want to try out pair programming, you rely on your personal experience, the experience of those you trust and admire, maybe some judgement and wisdom if you’re lucky enough to possess any, and maybe just dumb luck.

Thank you for the great post.

There’s a difference between “evangelizing” and “bullying”. Best-practices are great, but when it becomes an issue of conformance, it goes beyond encouraging or revealing best practices, and adds a ‘weight’ to innovation. Great firms are great because they want to be, not because they are forced to be.

I actually see RMM as a natural part of the evolution of Rails into a more staid piece of technology infrastructure to be approached by larger and larger organizations for their technology stacks. I do find your criticism of RMM valid, however it is concurrently irrelevant to the larger issue.

The point I think you’re missing is that you don’t seem to understand how bigger companies make decisions about technology. They aren’t looking to be on the edge of anything. They don’t *want* to participate in this amorphous, evolving world you (and I) clearly love. Being on the front wave of the technology curve incurs all risk and no gain for them; it either works and their compensation and prestige do not increase, or it doesn’t and they get fired.

With incentives like that, is it any wonder why things like RMM exist? (and CMM, before it) Harken back to the old saying “Nobody ever got fired for buying IBM” and ask yourself why that phrase came into being.

Toby:

NO!!! WE’RE GOING TO BE ROCKST4RS 4EVAR!!!1

Ahem. Yeah, I see your point. I’ve seen a little certification happen up-close and it always seems to be about limiting the range of outcomes. There are many organizations with low appetites for technical risk. They’ll gladly accept more cost and time if it lowers the possibility of catastrophic failure. And I think, on a personnel level, there may be developers whose work could benefit from a lot more restriction since they lack the horse-sense to do good work otherwise.

Quite frankly I don’t think those sorts of programmers overlap much with, say, the attendance at RubyConf, but maybe it’s just a fact of life that we all can’t be hypermotivated craft-oriented people. Some people got into this business only because they heard it was a good living, or (God help them) because they heard it was dependable. If your technology team is made up of a lot of people like that, then your shop might be able to benefit from some pre-canned methodology with a certification process and everything.

Of course, if your technology team is made up of a lot of people like that, and you’re having to go through some certification because of it, and you’re a friend of mine, then I’m going to tell you to quit that lousy job ASAP.

With the removal of the “firms” page, the popularity contest aspect of my critique is resolved.

My remaining critique could probably be resolved by making competing practices more obvious and making it easier to engage in dialogue whose archives can successfully reflect changes over time. In other words, to the extent that there is competition between practices, I would like to downplay the role of unrestricted voting and play up the role of ongoing dialogue.

In order to ensure that the dialogue around competing practices doesn’t become stale, we could allow a combination of freshness and popularity to cause arguments in favor of a particular practice to bubble up (something like the Daily Kos recommended diary algorithm). This lets us use crowdsourcing to rank the arguments for (or against) a particular practice while not allowing popularity to control WHICH practice is preferred.
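A minimal sketch of that kind of freshness-plus-popularity ranking, in the spirit of recommended-list algorithms, might score each argument by its votes decayed over time. The half-life and names here are assumptions for illustration only:

```ruby
# Hypothetical "bubble up" score: an argument's votes decay with age,
# so a fresh argument with modest support can outrank a stale one
# with many old votes. The 24-hour half-life is an arbitrary choice.
HALF_LIFE_HOURS = 24.0

def hotness(votes, age_hours)
  votes * 0.5 ** (age_hours / HALF_LIFE_HOURS)
end
```

For example, an argument posted just now with 10 votes scores the same as a two-day-old argument with 40 votes, which keeps the discussion from calcifying around early winners.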

Francis, the Pair Programming thing was just an example, probably a poor one. My point is there are far better methods in helping to determine these things than a popularity voting model.

@wycats, don’t we have those kinds of discussion happening every day in the community? I really don’t get the point of all this.

I could see technology helping make the discussions more public and easy to browse. That’s the sort of thing I would like to see here.

We could just try to do the c2 wiki all over again, if you guys are up for it :)

Totally agree, but after all web-related programming is pretty damn basic in the greater scope of things, so I guess we can’t feel very smart regardless of how nice our Ruby code looks :)

I am really happy to read this here, after so much discussion in the Ruby community heading in the wrong direction.
The success of Ruby is its open nature, as a language and a community.

Really great comments on here.

Those mentioning that clients don’t actually care too much about the cutting edge, best practices, etc. have an excellent point. While it would be nice if all clients were as engaged with the process of development as we are, we all know that isn’t the case. In fact, some of the most interesting practices to me are the ones that focus on making it easy for clients to engage in dialog on our level and vice versa.

I would love to see the RMM site make an effort to draw in real discussion from clients. It has been mentioned that Hashrocket would like to weight client endorsements heavier, and while that is a start it isn’t what is most interesting to me. I want to hear specifically from clients how something like “Uses Cucumber for Integration Testing” improved their experience. Or even better yet, how some practice helped them increase conversions for their app.

I also like the idea of a sort of face-off between conflicting practices, with a focus on discussion more than declaring a definitive winner. The idea being that it would help draw out insights as to which contexts certain practices work better in than others.

Finally, I still don’t like the name ;) The more this evolves, the less it has to do with Rails, Maturity, or Models & the more it has to do with transparency imo. I think a lot of the confusion around this could have been avoided if the label attached to the discussion did more to steer it in the direction shown by this thread.

Yehuda,

I almost never disagree with what you say, but this is one of those times. Let me explain why, and hopefully you can see it from my perspective.

See, I don’t live in San Francisco or anywhere in California. I don’t live on any coast, or in any big city. I live in a small town with a great college that loves its .NET and Java. There are no Ruby user groups here, and hardly anyone here practices Ruby. Four people in the whole city, really, maybe five.

For me it’s really hard to keep up with all the different ways of doing things, but with the Rails Maturity Model I can see what other companies are doing. It’s a great resource for me to easily find what other successful companies are doing out there.

It’s not about setting the standard of what you should do. It’s more like saying, “hey, this is what all the other companies are doing. Maybe this would be a good place to start.”

Imagine you’re a younger developer who would like to work at one of these places. What kinds of things should you know how to do before sending a resume?

This really provides me with a roadmap to becoming a Mature Rails Developer, so hopefully one day I can become a Mature Rails Innovator, like you, Obie, Ezra, DrNic, the people at Pivotal Labs, Phusion, etc. I wanna know what you guys do, because you’re Mature Rails Firms.

I hope you can see it from my standpoint. Anyway, thanks for writing this post, because without it, I wouldn’t have thought about RMM that hard. :)

To be fair, I do like this site as well.

http://rubytrends.com/

The more the better.
