Yehuda Katz is a member of the Ember.js, Ruby on Rails and jQuery Core Teams; his 9-to-5 home is at the startup he founded, Tilde Inc.. There he works on Skylight, the smart profiler for Rails, and does Ember.js consulting. He is best known for his open source work, which also includes Thor and Handlebars. He travels the world doing open source evangelism and web standards work.

Archive for the ‘Ruby’ Category

Threads (in Ruby): Enough Already

For a while now, the Ruby community has been enamored of the latest new hotness: evented programming and Node.js. It’s gone so far that I’ve heard a number of prominent Rubyists say that JavaScript and Node.js are the only sane way to handle a large number of concurrent users.

I should start by saying that I personally love writing evented JavaScript in the browser, and have been giving talks (for years) about using evented JavaScript to sanely organize client-side code. I think that for the browser environment, events are where it’s at. Further, I don’t have any major problem with Node.js or other ways of writing server-side evented code. For instance, if I needed to write a chat server, I would almost certainly write it using Node.js or EventMachine.

However, I’m pretty tired of hearing that threads (and especially Ruby threads) are completely useless, and that if you don’t use evented code, you may as well be using a single process per concurrent user. To be fair, this was more or less the party line of the Rails team some years ago, but Rails has been threadsafe since Rails 2.2, and Rails users have been taking advantage of it for some time.

Before I start, I should be clear that this post is talking about requests that spend a non-tiny amount of their time utilizing the CPU (normal web requests), even if they also spend a fair amount of time in blocking operations (disk IO, database). I am decidedly not talking about situations, like chat servers, where requests sit idle for huge amounts of time with tiny amounts of intermittent CPU usage.

Threads and IO Blocking

I’ve heard a common misperception that Ruby inherently “blocks” when doing disk IO or making database queries. In reality, Ruby switches to another thread whenever it needs to block for IO. In other words, if a thread needs to wait, but isn’t using any CPU, Ruby’s built-in methods allow another waiting thread to use the CPU while the original thread waits.

If every one of your web requests uses the CPU for 30% of the time, and waits for IO for the rest of the time, you should be able to serve three requests in parallel, coming close to maxing out your CPU.
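A quick way to see this in action is to simulate a few blocking requests with sleep, which, like blocking IO, allows other threads to run. This is a sketch, not a benchmark:

```ruby
# Three "requests" that each block for 0.3 seconds. If a blocked thread
# really froze the whole process, this would take about 0.9 seconds;
# because Ruby switches threads during blocking calls, it takes about 0.3.
start = Time.now

threads = 3.times.map do
  Thread.new { sleep 0.3 } # sleep yields to other threads, just like blocking IO
end
threads.each(&:join)

elapsed = Time.now - start
puts "elapsed: #{elapsed.round(2)}s" # ~0.3s, not ~0.9s
```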

Here’s a couple of diagrams. The first shows how people imagine requests work in Ruby, even in threadsafe mode. The second is how an optimal Ruby environment will actually operate. This example is extremely simplified, showing only a few parts of the request, and assuming equal time spent in areas that are not necessarily equal.



I should be clear that Ruby 1.8 spends too much time context-switching between its green threads. However, if you’re not switching between threads extremely often, even Ruby 1.8’s overhead will amount to a small fraction of the total time needed to serve a request. A lot of the threading benchmarks you’ll see are testing pathological cases involving huge numbers of threads, which look nothing like the profile of a web server.

(if you’re thinking that there are caveats to my “optimal Ruby environment”, keep reading)

“Threads are just HARD”

Another common gripe that pushes people to evented programming is that working with threads is just too hard. Working hard to avoid sharing state and using locks where necessary is just too tricky for the average web developer, the argument goes.

I agree with this argument in the general case. Web development, on the other hand, has an extremely clean concurrency primitive: the request. In a threadsafe Rails application, the framework manages threads and uses an environment hash (one per request) to store state. When you work inside a Rails controller, you’re working inside an object that is inherently unshared. When you instantiate a new instance of an ActiveRecord model inside the controller, it is rooted to that controller, and is therefore not shared between live threads.

It is, of course, possible to use global state, but the vast majority of normal, day-to-day Rails programming (and for that matter, programming in any web framework in any language with a request model) is inherently threadsafe. This means that Ruby will transparently handle switching back and forth between active requests when you do something blocking (file, database, or memcache access, for instance), and you don’t need to personally manage the problems that arise when doing concurrent programming.

This is significantly less true about applications, like chat servers, that keep open a huge number of requests. In those cases, a lot of the application logic happens outside the individual request, so you need to personally manage shared state.
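A minimal sketch of the request-as-primitive model (the class and hash keys here are hypothetical, not Rails internals): each request runs on its own thread with its own env hash and controller instance, so no locking is needed:

```ruby
# Each "request" gets its own env hash and controller instance, rooted
# to its own thread. Nothing here is shared, so no locks are required.
# (On Ruby 1.9, require "thread" for Queue; it is built in on modern Rubies.)
class FakeController
  def initialize(env)
    @env = env # request-local state; invisible to other threads
  end

  def process
    "response for user #{@env["user_id"]}"
  end
end

results = Queue.new
threads = 3.times.map do |i|
  Thread.new do
    env = { "user_id" => i } # one env hash per request
    results << FakeController.new(env).process
  end
end
threads.each(&:join)
```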

Historical Ruby Issues

What I’ve been talking about so far is how stock Ruby ought to operate. Unfortunately, a group of things have historically conspired to make Ruby’s concurrency story look much worse than it actually ought to be.

Most obviously, early versions of Rails were not threadsafe. As a result, all Rails users were operating with a mutex around the entire request, forcing Rails to behave like the first “Imagined” diagram above. Annoyingly, Mongrel, the most common Ruby web server for a few years, hardcoded this mutex into its Rails handler. As a result, if you spun up Rails in “threadsafe” mode a year ago using Mongrel, you would have gotten exactly zero concurrency. Also, even in threadsafe mode (when not using the built-in Rails support), Mongrel spins up a new thread for every request, which is not exactly optimal.

Second, the most common database driver, mysql, is a very poorly behaved C extension. While built-in I/O (file or pipe access) correctly alerts Ruby to switch to another thread when it hits a blocking region, other C extensions don’t always do so. For safety, Ruby does not allow a context switch while in C code unless the C code explicitly tells the VM that it’s ok to do so.

All of the Data Objects drivers, which we built for DataMapper, correctly cause a context switch when entering a blocking area of their C code. The mysqlplus gem, released in March 2009, was designed to be a drop-in replacement for the mysql gem, but fix this problem. The new mysql2 gem, written by Brian Lopez, is a drop-in replacement for the old gem, also correctly handles encodings in Ruby 1.9, and is the new default MySQL driver in Rails.

Because Rails shipped with the (broken) mysql gem by default, even people running on working web servers (i.e. not mongrel) in threadsafe mode would have seen a large amount of their potential concurrency eaten away because their database driver wasn’t alerting Ruby that concurrent operation was possible. With mysql2 as the default, people should see real gains on threadsafe Rails applications.

A lot of people talk about the GIL (global interpreter lock) in Ruby 1.9 as a death knell for concurrency. For the uninitiated, the GIL disallows multiple CPU cores from running Ruby code simultaneously. That does mean that you’ll need one Ruby process (or thereabouts) per CPU core, but it also means that if your multithreaded code is running correctly, you should need only one process per CPU core. I’ve heard tales of six or more processes per core. Since it’s possible to fully utilize a CPU with a single process (even in Ruby 1.8), these applications could get a 4-6x improvement in RAM usage (depending on context-switching overhead) by switching to threadsafe mode and using modern drivers for blocking operations.

JRuby, Ruby 1.9 and Rubinius, and the Future

Finally, JRuby already runs without a global interpreter lock, allowing your code to run in true parallel, and to fully utilize all available CPUs with a single JRuby process. A future version of Rubinius will likely ship without a GIL (the work has already begun), also opening the door to utilizing all CPUs with a single Ruby process.

And all modern Ruby VMs that run Rails (Ruby 1.9’s YARV, Rubinius, and JRuby) use native threads, eliminating the annoying tax that you need to pay for using threads in Ruby 1.8. Again, though, since that tax is small relative to the time for your requests, you’d likely see a non-trivial improvement in latency in applications that spend time in the database layer.

To be honest, a big part of the reason for the poor practical concurrency story in Ruby has been that the Rails project didn’t take it seriously, which made it difficult to get traction for efforts to fix a part of the problem (like the mysql driver).

We took concurrency very seriously in the Merb project, leading to the development of proper database drivers for DataMapper (Merb’s ORM), and a top-to-bottom understanding of parts of the stack that could run in parallel (even on Ruby 1.8), but which weren’t. Rails 3 doesn’t bring anything new to the threadsafety of Rails itself (Rails 2.3 was threadsafe too), but by making the mysql2 driver the default, we have eliminated a large barrier to Rails applications performing well in threadsafe mode without any additional research.

UPDATE: It’s worth pointing to Charlie Nutter’s 2008 threadsafety post, where he talked about how he expected threadsafe Rails to impact the landscape. Unfortunately, the blocking MySQL driver held back some of the promise of the improvement for the vast majority of Rails users.

What’s New in Bundler 1.0.0.rc.1

Taking into consideration the huge amount of feedback we received during the Bundler 0.9 series, we streamlined Bundler 1.0 significantly, and made it fit user expectations better.

Whether you have used bundler before or not, the easiest way to get up to speed is to read the following notes and consult the Bundler documentation for more in-depth information.

(note that the documentation is still being updated for the 1.0 changes, and should be ready for the final release).

Starting a new project with bundler

When you generate a new Rails application, Rails will create a Gemfile for you, which has everything needed to boot your application.

Otherwise, you can use bundle init to create a stub Gemfile, ready to go.
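For reference, a Gemfile is just Ruby, evaluated by Bundler’s DSL; a minimal example (gem names purely illustrative) might look like:

```ruby
# A Gemfile is plain Ruby, evaluated by Bundler's DSL.
source "http://rubygems.org"

gem "rails", "3.0.0.rc"
gem "mysql2"

group :development do
  gem "ruby-debug"
end
```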

First, run bundle install to make sure that you have all the needed dependencies. If you already do, this process will happen instantaneously.

Bundler will automatically create a file called Gemfile.lock. This file is a snapshot of your application’s dependencies at that time.

You SHOULD check both files into version control. This will ensure that all team members (as well as your production server) are working with identical dependencies.

Checking out an existing project using bundler

After checking out an existing project using bundler, check to make sure that the Gemfile.lock snapshot is checked in. If it is not, you may end up using different dependencies than the person who last used and tested the project.

Next, run bundle install. This command will check whether you already have all the required dependencies in your system. If you do not, it will fetch the dependencies and install them.

Updating dependencies

If you modify the dependencies in your Gemfile, first try to run bundle install, as usual. Bundler will attempt to update only the gems you have modified, leaving the rest of the snapshot intact.

This may not be possible, if the changes conflict with other gems in the snapshot (or their dependencies). If this happens, Bundler will instruct you to run bundle update. This will re-resolve all dependencies from scratch.

The bundle update command will update the versions of all gems in your Gemfile, while bundle install will only update the gems that have changed since the last bundle install.

After modifying dependencies, make sure to check in your Gemfile and Gemfile.lock into version control.

By default, gems are installed to your system

If you follow the instructions above, Bundler will install the gems into the same place as gem install.

If necessary, Bundler will prompt you for your sudo password.

You can see the location of a particular gem with bundle show [GEM_NAME]. You can open it in your default editor with bundle open [GEM_NAME].

Bundler will still isolate your application from other gems. Installing your gems into a shared location allows multiple projects to avoid downloading the same gem over and over.

You might want to install your bundled gems to a different location, such as a directory in the application itself. This will ensure that each application has its own copies of the gems, and provides an extra level of isolation.

To do this, run the install command with bundle install /path/to/location. You can use a relative path as well: bundle install vendor.

In RC1, this command will use gems from the system, if they are already there (it only affects new gems). To ensure that all of your gems are located in the path you specified, run bundle install path --disable-shared-gems.

In Bundler 1.0 final, bundle install path will default to --disable-shared-gems.


When deploying, we strongly recommend that you isolate your gems into a local path (using bundle install path --disable-shared-gems). The final version of bundler will come with a --production flag, encapsulating all of the best deployment practices.

For now, please follow these recommendations (described using Capistrano concepts):

  • Make sure to always check in a Gemfile.lock that is up to date. This means that after modifying your Gemfile, you should ALWAYS run bundle install.
  • Symlink the vendor/bundle directory into the application’s shared location (symlink release_path/current/vendor/bundle to release_path/shared/bundled_gems)
  • Install your bundle by running bundle install vendor/bundle --disable-shared-gems

Encodings, Unabridged

I wrote somewhat extensively about the problem of encodings in Ruby 1.9 in general last week.

For those who didn’t read that post, let me start with a quick refresher.

What’s an Encoding?

An encoding specifies how to take a list of characters (such as “hello”) and persist them onto disk as a sequence of bytes. You’re probably familiar with the ASCII encoding, which specifies how to store English characters in a single byte each (using the values 0-127, leaving 128-255 unused).

Another common encoding is ISO-8859-1 (or Latin-1), which uses ASCII’s designations for the first 128 characters, and designates the numbers 128-255 for Latin characters (such as “é” or “ü”).

Obviously, 256 characters isn’t enough for all languages, so there are a number of ISO-8859-* encodings, each of which designates the numbers 128-255 for its own purposes (for instance, ISO-8859-5 uses that space for Cyrillic characters).

Unfortunately, the raw bytes themselves do not contain an “encoding specifier” of any kind, and the exact same bytes can mean something in Western characters, Russian, Japanese, or any other language, depending on the character set that was originally used to store the characters as bytes.
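Ruby 1.9 makes this easy to demonstrate: the very same byte decodes to different characters depending on which encoding you assume it was written in:

```ruby
# The byte 0xE9 means "é" in ISO-8859-1 (Western) but "щ" in
# ISO-8859-5 (Cyrillic). The bytes alone don't tell you which.
byte = [0xE9].pack("C") # a single raw byte, tagged BINARY (ASCII-8BIT)

western  = byte.dup.force_encoding("ISO-8859-1").encode("UTF-8")
cyrillic = byte.dup.force_encoding("ISO-8859-5").encode("UTF-8")

puts western   # => é
puts cyrillic  # => щ
```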

As a general rule, protocols (such as HTTP), provide a mechanism for specifying the encoding. For instance, in HTTP, you can specify the encoding in the Content-Type header, like this: Content-Type: text/html; charset=UTF-8. However, this is not a requirement, so it is possible to receive some content over HTTP and not know its encoding.

This brings us to an important point: Strings have no inherent encoding. By default, Strings are just BINARY data. Since the data
could be encoded using any number of different incompatible encodings, simply combining BINARY data from different sources could easily result in a corrupted String.

When you see a diamond with a question mark inside on the web, or gibberish characters (like a weird A with a 3/4 symbol), you’re seeing a mistaken attempt to combine binary data encoded differently into a single String.

What’s Unicode?

Unicode is an effort to map every known character (to a point) to a number. Unicode does not define an encoding (how to represent those numbers in bytes). It simply provides a unique number for each known character.

Unicode tries to unify characters from different encodings that represent the same character. For instance, the A in ASCII, the A in
ISO-8859-1, and the A in the Japanese encoding SHIFT-JIS all map to the same Unicode character.

Unicode also takes pains to ensure round-tripping between existing encodings and Unicode. Theoretically, this should mean that it’s possible to take some data
encoded using any known encoding, use Unicode tables to map the characters to Unicode numbers, and then use the reverse versions of those tables to map the Unicode numbers back into the original encoding.

Unfortunately, both of these characteristics cause some problems for Asian character sets. First, there have been some historical errors in the process of
unification, which requires the Unicode committee to properly identify which characters in different existing Chinese, Japanese and Korean (CJK) character sets actually represent the same character.

In Japanese, personal names use slight variants of the non-personal-name version of the same character. This would be equivalent to the difference (in English) between “Cate” and “Kate”. Many of these characters (sometimes called Gaiji) cannot be represented in Unicode at all.

Second, there are still hundreds of characters in some Japanese
encodings (such as the Microsoft encoding to SHIFT-JIS called CP932 or Windows-31J) that simply do not round-trip through Unicode.

To make matters worse, Java and MySQL use a different mapping table than the standard Unicode mapping tables (making “This costs ¥5” come out in Unicode as “This costs \5”). The standard Unicode mapping tables handle this particular case correctly (but cannot fully solve the round-tripping problem), but these quirks only serve to further raise doubts about Unicode in the minds of Japanese developers.

For a lot more information on these issues, check out the XML Japanese Profile document created by the W3C to explain how to deal with some of these problems in XML documents.

In the Western world, the encodings in use do not have these problems. For instance, it is trivial to take a String encoded as ISO-8859-1, convert it into Unicode, and then convert it back into ISO-8859-1 when needed.
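In Ruby 1.9 terms, that Western round-trip looks like this:

```ruby
# encoding: UTF-8
# A Latin-1 String survives the trip through Unicode byte-for-byte.
original = "café".encode("ISO-8859-1")
unicode  = original.encode("UTF-8")
back     = unicode.encode("ISO-8859-1")

puts back == original             # => true
puts original.bytes.to_a.inspect  # => [99, 97, 102, 233]  ("é" is 0xE9)
```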

This means that for most of the Western world, it is a good idea to
use Unicode as the “one true character set” inside programming languages. This means that programmers can treat Strings as simple sequences of Unicode code points (several
code points may add up to a single character, such as the ¨ code point, which can be
applied to other code points to form characters like ü). In the Asian world, while this can sometimes be a good strategy, it is often significantly simpler to use the original encoding and handle merging Strings in different encodings together manually (when an appropriate decision about the tradeoffs around fidelity can be made).

Before I continue, I would note that the above is a vast simplification of the
issues surrounding Unicode and Japanese. I believe it to be a fair characterization,
but die-hard Unicode folks, and die-hard anti-Unicode folks would possibly disagree
with some elements of it. If I have made any factual errors, please let me know.

A Digression: UTF-*

Until now, I have talked only about “Unicode”, which simply maps code points
to numbers. Because Unicode uses counting numbers, it can accommodate as many
code points as it wants.

However, it is not an encoding. In other words, it does not specify how to
store the numbers on disk. The most obvious solution would be to use a few
bytes for each character. This is the solution that UTF-32 uses, specifying
that each Unicode character be stored as four bytes (accommodating over 4 billion characters). While this has the advantage of being simple, it also uses huge amounts of memory and disk space compared to the original encodings (like ASCII, ISO-8859-1 and SHIFT-JIS) that it is replacing.

On the other side of the spectrum is UTF-8. UTF-8 uses a single byte for English characters, using the exact same mapping as ASCII. This means that a UTF-8 string that contains only characters found in ASCII will have the identical bytes as a String stored in ASCII.

It then uses the high bit (the bytes representing 128-255) to specify a series of escape characters that can specify a multibyte character. This means that Strings using Western characters use relatively few bytes (often comparable with the original encodings Unicode replaces), because they are in the low area of the Unicode space, while the large number of characters in the Asian languages use more bytes than their native encodings, because they use characters with larger Unicode numbers.

This is another reason some Asian developers resent Unicode; while it does not significantly increase the memory requirements for most Western documents, it does so for Asian documents.

For the curious, UTF-16 uses 16 bits for the most common characters (the BMP, or Basic Multilingual Plane), and 32 bits to represent characters from planes 1 through 16. This means that UTF-8 is most efficient for Strings containing mostly ASCII characters. UTF-8 and UTF-16 are approximately equivalent for Strings containing mostly characters outside ASCII but inside the BMP. For Strings containing mostly characters outside the BMP, UTF-8, UTF-16, and UTF-32 are approximately equivalent. Note that when I say “approximately equivalent”, I’m not saying that they’re exactly the same, just that the differences are small for large Strings.

Of the Unicode encodings, only UTF-8 is compatible with ASCII. By this I mean that if a String is valid ASCII, it is also valid UTF-8. UTF-16 and UTF-32 encode ASCII characters using two or four bytes, respectively.
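You can verify this compatibility directly in Ruby 1.9:

```ruby
# "A" is one byte in ASCII and in UTF-8, two bytes in UTF-16,
# and four bytes in UTF-32.
puts "A".encode("UTF-8").bytes.to_a.inspect     # => [65]
puts "A".encode("UTF-16BE").bytes.to_a.inspect  # => [0, 65]
puts "A".encode("UTF-32BE").bytes.to_a.inspect  # => [0, 0, 0, 65]

# A non-ASCII character needs two bytes in UTF-8.
puts "é".encode("UTF-8").bytes.to_a.inspect     # => [195, 169]
```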

What Ruby 1.9 Does

Accepting that there are two very different ways of handling this problem, Ruby 1.9 has a String API that is somewhat different from most other languages, mostly influenced by the issues I described above in dealing with Japanese in Unicode.

First, Ruby does not mandate that all Strings be stored in a single internal encoding. Unfortunately, this is not possible to do reliably with common Japanese encodings (CP932, aka Windows-31J, has 300 characters that cannot round-trip through Unicode without corrupting data). It is possible that the Unicode committee will some day fully solve these problems to everyone’s satisfaction, but that day has not yet come.

Instead, Ruby 1.9 stores Strings as the original sequence of bytes, but allows a String to be tagged with its encoding. It then provides a rich API for converting Strings from one encoding to another.

string = "hello"                     # by default, string is encoded as "ASCII"
string.force_encoding("ISO-8859-1")  # this simply retags the String as ISO-8859-1
                                     # this will work since ISO-8859-1
                                     # is a superset of ASCII.
string.encode("UTF-8")               # this will ask Ruby to convert the bytes in
                                     # the current encoding to bytes in
                                     # the target encoding, returning a new
                                     # String tagged with the new encoding
                                     # this is usually a lossless conversion, but
                                     # can sometimes be lossy

A more advanced example:

# encoding: UTF-8
# first, tell Ruby that our editor saved the file using the UTF-8 encoding.
# TextMate does this by default. If you lie to Ruby, very strange things
# will happen
utf8 = "hellö"
iso_8859_1 = "hellö".encode("ISO-8859-1")
# Because we specified an encoding for this file, Strings in here default
# to UTF-8 rather than ASCII. Note that if you didn't specify an encoding
# characters outside of ASCII will be rejected by the parser.
utf8 << iso_8859_1
# This produces an error, because Ruby does not automatically try to
# transcode Strings from one encoding into another. In practice, this
# should rarely, if ever happen in applications that can rely on
# Unicode; you'll see why shortly
utf8 << iso_8859_1.encode("UTF-8")
# This works fine, because you first made the two encodings the same

The problems people are really having

The problem of dealing with ISO-8859-1 encoded text and UTF-8 text in the same Ruby program is real, and we’ll see soon how it is handled in Ruby. However, the problems people have been having are not of this variety.

If you examine virtually all of the bug reports involving incompatible encoding exceptions, you will find that one of the two encodings is ASCII-8BIT. In Ruby, ASCII-8BIT is the name of the BINARY encoding.

So what is happening is that a library somewhere in the stack is handing back raw bytes rather than encoded bytes. For a long time, the likely perpetrator here was database drivers, which had not been updated to properly encode the data they were getting back from the database.

There are several other potential sources of binary data, which we will discuss in due course. However, it’s important to note that a BINARY encoded String in Ruby 1.9 is the equivalent of a byte[] in Java. It is a type that cannot be reasonably concatenated onto an encoded String. In fact, it is best to think of BINARY encoded Strings as a different class with many methods in common.
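Here is a miniature version of those bug reports, simulating a buggy driver that returns raw bytes:

```ruby
# encoding: UTF-8
# Simulate a buggy driver handing back raw bytes (BINARY, aka ASCII-8BIT)
# that are really UTF-8 underneath.
from_library = "héllo".dup.force_encoding("ASCII-8BIT")
template     = "ü: " # a normal UTF-8 String

begin
  template + from_library
rescue Encoding::CompatibilityError => e
  puts e.class # Ruby refuses to combine UTF-8 with non-ASCII BINARY data
end

# The fix, once you know the bytes really are UTF-8: retag them.
fixed = from_library.dup.force_encoding("UTF-8")
puts(template + fixed) # => ü: héllo
```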

In practice, as Ruby libraries continue to be updated, you should rarely ever see BINARY data inside of your application. If you do, it is because the library that handed it to you genuinely does not know the encoding, and if you want to combine it with a non-BINARY String, you will need to convert it into an encoded String manually (using force_encoding).

Why this is, in practice, a rare problem

The problem of incompatible encodings is likely to happen in Western applications only when combining ISO-8859-* data with Unicode data.

In practice, most sources of data, without any further work, are already encoded as UTF-8. For instance, the default Rails MySQL connection specifies a UTF-8 client encoding, so even an ISO-8859-1 database will return UTF-8 data.

Many other data sources, such as MongoDB, only support UTF-8 data internally, so their Ruby 1.9-compatible drivers already return UTF-8 encoded data.

Your text editor (TextMate) likely defaults to saving your templates as UTF-8, so the characters in the templates are already encoded in UTF-8.

This is why Ruby 1.8 had the illusion of working. With the exception of some (unfortunately somewhat common) edge-cases, most of your data is already encoded in UTF-8, so simply treating it as BINARY data, and smashing it all together (as Ruby 1.8 does) works fairly reliably.

The only reason why this came tumbling down in Ruby 1.9 is that drivers that should have returned Strings tagged with UTF-8 were returning Strings tagged with BINARY, which Ruby rightly refused to concatenate with UTF-8 Strings. In other words, the vast majority of encoding problems to date are the result of buggy Ruby libraries.

Those libraries, almost entirely, have now been updated. This means that if you use UTF-8 data sources, which you were likely doing by accident already, everything will continue to work as it did in Ruby 1.8.

Digression: force_encoding

When people encounter this problem for the first time, they are often instructed by otherwise well-meaning people to simply call force_encoding("UTF-8") on the offending String.

This will work reliably if the original data is stored in UTF-8, which is often true about the person who made the original suggestion. However, it will mysteriously fail to work (resulting in “�” characters appearing) if the original data is encoded in ISO-8859-1. This can cause major confusion because some people swear up and down that it’s working and others can clearly see that it’s not.

Additionally, since ISO-8859-1 and UTF-8 are both compatible with ASCII, if the characters being force_encoded are ASCII characters, everything will appear to work until a non-ASCII character is entered one day. This further complicates efforts of members of the community to identify and help resolve issues if they are not fluent in the general issues surrounding encodings.
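A few lines show both outcomes; the bytes below are the same String, “résumé”, stored two different ways:

```ruby
# encoding: UTF-8
# The same BINARY data, force_encoded to UTF-8, "works" only if the
# underlying bytes really were UTF-8 in the first place.
utf8_bytes   = "résumé".dup.force_encoding("ASCII-8BIT")
latin1_bytes = "résumé".encode("ISO-8859-1").force_encoding("ASCII-8BIT")

good = utf8_bytes.dup.force_encoding("UTF-8")
bad  = latin1_bytes.dup.force_encoding("UTF-8")

puts good.valid_encoding? # => true:  the advice appears to work
puts bad.valid_encoding?  # => false: "�" for anyone whose data was Latin-1
```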

I’d note that this particular issue (BINARY data entering the system that is actually ISO-8859-1) would cause similar problems in Java and Python, which would either silently assume Unicode, or present a byte[], forcing you to force_encoding it into something like UTF-8.

Where it doesn’t work

Unfortunately, there are a few sources of data that are common in Rails applications that are not already encoded in UTF-8.

In order to identify these cases, we will need to identify the boundary between a Rails application and the outside world. Let’s look at a common web request.

First, the user goes to a URL. That URL is probably encoded in ASCII, but can also contain Unicode characters. The encoding for this part of the request (the URI) is not provided by the browser, but it appears safe to assume that it’s UTF-8 (which is a superset of ASCII). I have tested this in various versions of Firefox, Safari, Chrome, and Internet Explorer, and it seems reliable. I personally thank the Encoding gods for that.

Next, the request goes through the Rack stack, and makes its way into the Rails application. If all has gone well, the Rails application will see the parameters and other bits of the URI exposed through the request object encoded as UTF-8. At the moment (and after this post, it will probably be true for mere days), Rack actually returns BINARY Strings for these elements.

At the moment, Ruby allows BINARY Strings that contain only ASCII characters to be concatenated with any ASCII-compatible encoding (such as UTF-8). I believe this is a mistake, because it will make scenarios such as the current state of Rack work in all tested cases, and then mysteriously cause errors when the user enters a UTF-8 character in the URI. I have already reported this issue and it should be fixed in Ruby. Thankfully, this issue only relates to libraries that are mistakenly returning
BINARY data, so we can cut this off at the pass by fixing Rack to return UTF-8 data here.

Next, that data will be used to make a request of the data store. Because we are likely using a UTF-8 encoded data-store, once the Rack issue is resolved, the request will go through without incident. If we were using an ISO-8859-1 data store (possible, but unlikely), this could pose issues. For instance, we could be looking up a story by a non-ASCII identifier that the database would not find because the request String is encoded in UTF-8.

Next, the data store returns the contents. Again, you are likely using a UTF-8 data store (things like CouchDB and MongoDB return Strings as UTF-8). Your template is likely encoded in UTF-8 (and Rails actually makes the assumption that templates without any encoding specified are UTF-8), so the String from your database should merge with your template without incident.

However, there is another potential problem here. If your data source does not return UTF-8 data, Ruby will refuse to concatenate the Strings, giving you an incompatible encoding error (which will report UTF-8 as incompatible with, for instance, ISO-8859-1). In all of the encoding-related bug reports I’ve seen, I’ve only ever seen reports of BINARY data causing this problem, again, likely because your data source actually is UTF-8.

Next, you send the data back to the browser. Rails defaults to specifying a UTF-8 character set, so the browser should correctly interpret the String, if it got this far. Note that in Ruby 1.8, if you had received data as ISO-8859-1 and stored it in an ISO-8859-1 database, your users would now see “�”, because the browser cannot identify a valid Unicode character for the bytes that came back from the database.

In Ruby 1.9, in this scenario (but not in the much more common scenario where the database returns content as UTF-8, which is common because Rails specifies a UTF-8 client encoding in the default database.yml), you would receive an error rather than sending corrupted data to the client.

If your page included a form, we now have another potential avenue for problems. This is especially insidious because browsers allow the user to change the “document’s character set”, and users routinely fiddle with that setting to “fix” pages that are actually encoded in ISO-8859-1, but are specifying UTF-8 as the character set.

Unfortunately, while browsers generally use the document’s character set for POSTed form data, this is both not reliable and possible for the user to manually change. To add insult to injury, the browsers with the largest problems in this area do not send a Content-Type header with the correct charset to let the server know the character set of the POSTed data.

Newer standards specify an attribute accept-charset that page authors can add to forms to tell the client what character set to send the POSTed data as, but again, the browsers with the largest issues here are also the ones with issues in implementing accept-charset properly.

The most common scenario where you can see this issue is when the user pastes in content from Microsoft Word, and it makes it into the database and back out again as gibberish.

After a lot of research, I have discovered several hacks that, together, should completely solve this problem. I am still testing the solution, but I believe we can make it work in Rails. By Rails 3.0 final, Rails applications should be able to reliably assume that POSTed form data comes in as UTF-8.
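For reference, the approach Rails 3 ultimately shipped combines accept-charset="UTF-8" with a hidden input whose value is a non-ASCII character (the utf8=✓ parameter), which coaxes misbehaving versions of IE into submitting the form as UTF-8. A minimal sketch of such a helper follows; the helper name is made up for illustration:

```ruby
# Sketch of a form-tag helper applying the UTF-8-enforcing hacks:
# accept-charset tells compliant browsers to POST as UTF-8, and the
# hidden non-ASCII parameter forces old IE to comply as well.
def utf8_enforcing_form_tag(action)
  %(<form action="#{action}" method="post" accept-charset="UTF-8">) +
    %(<input type="hidden" name="utf8" value="&#x2713;" />)
end

puts utf8_enforcing_form_tag("/stories")
```

The hidden field matters because IE ignores accept-charset unless the form already contains a character it cannot represent in the page's encoding.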

Moving that data to the server presents another potential encoding problem, but again, if we can rely on the database to be using UTF-8 as the client (or internal) encoding, and the solution for POSTed form data pans out, the data should smoothly get into the database as UTF-8.

But what if we still do have non-UTF-8 data?

Even with all of this, it is still possible that some non-BINARY data sneaks over the boundary and into our Rails application from a non-UTF-8 source.

For this scenario, Ruby 1.9 provides an option called Encoding.default_internal, which allows the user to specify a preferred encoding for Strings. Ruby itself and Ruby’s standard libraries respect this option, so if a program opens, for instance, an IO encoded in ISO-8859-1, Ruby will hand the data to the program transcoded to the preferred encoding.

Libraries, such as database drivers, should also support this option, which means that even if the database is set up to return Strings in some other encoding, the driver should transparently convert those Strings to the preferred encoding before handing them to the program.

Rails can take advantage of this by setting default_internal to UTF-8, which will then ensure that Strings from non-UTF-8 sources still make their way into Rails encoded as UTF-8.

Since I started asking libraries to honor this option a week ago, do_sqlite, do_mysql, do_postgres, Nokogiri, Psych (the new YAML parser in Ruby 1.9), sqlite3, and the MongoDB driver have all added support for this option. The fix should be applied to the MySQL driver shortly, and I am still waiting on a response from the pg driver maintainer.

In short, by the time 1.9.2-final ships, I don’t see any reason why all libraries in use shouldn’t honor this setting.

I’d also add that MongoDB and Nokogiri already return only UTF-8 data, so supporting this option was primarily a matter of correctness. If a driver already deals entirely in UTF-8, it will work transparently with Rails because Rails deals only in UTF-8.

That said, we plan to robustly support scenarios where UTF-8 cannot be used in this way (because encodings are in use that cannot be transparently transcoded at the boundary without data loss), so proper support for default_internal will be essential in the long term.


The vast majority of encoding bugs to date have resulted from outdated drivers that returned BINARY data instead of Strings with proper encoding tags.

The pipeline that brings Strings in and out of Rails is reasonably well-understood, and simply by using UTF-8 libraries for each part of that pipeline, Ruby 1.9 will transparently work.

If you accidentally use non-UTF-8 sources in the pipeline, Ruby 1.9 will throw an error, an improvement over the Ruby 1.8 behavior of simply sending corrupted data to the client.

For this scenario, Ruby 1.9 allows you to specify a preferred encoding, which instructs non-UTF-8 sources to convert Strings in other encodings to UTF-8.

By default, Rails will set this option to UTF-8, which means that you should not see ISO-8859-1 Strings in your Rails application.

By the time Ruby 1.9.2 is released in a few months, this should be a reality, and your experience dealing with Strings in Ruby 1.9 should be superior to the 1.8 experience, because things should generally just work and libraries will have properly considered encoding issues. This means that serving misencoded data should be basically impossible.


When using Rails 3.0 with Ruby 1.9.2-final, you will generally not have to care about encodings.


With all that said, there can be scenarios where you receive BINARY data from a source. This can happen in any language that handles encodings more transparently than Ruby, such as Java and Python.

This is because it is possible for a library to receive BINARY data and not have the necessary metadata to tag it with an encoding.

In this case, you will either need to determine the encoding yourself or treat it as raw BINARY data, and not a String. The reason this scenario is rare is that if there is a way for you to determine the encoding (such as by looking at metadata provided with the bytes), the original library can usually do the same.

If you get into a scenario where you know the encoding, but it is not machine available, you will want to do something like:

data = API.get("data")
data.encoding #=> #<Encoding:ASCII-8BIT> (an alias for BINARY)

# This first tags the data with the encoding that
# you know it is (ISO-8859-1 here, for illustration),
# and then re-encodes it to the default_internal
# encoding, if one was specified
data.force_encoding("ISO-8859-1").encode!

Some of the Problems Bundler Solves

This post does not attempt to convince you to use bundler, or compare it to alternatives. Instead, I will try to articulate some of the problems that bundler tries to solve, since people have often asked. To be clear, users of bundler should not need to understand these issues, but some might be curious.

If you’re looking for information on bundler usage, check out the official Bundler site.

Dependency Resolution

This is the problem most associated with bundler. In short, by asking you to list all of your dependencies in a single manifest, bundler can determine, up front, a valid list of all of the gems and versions needed to satisfy that manifest.

Here is a simple example of this problem:

$ gem install thin
Successfully installed rack-1.1.0
Successfully installed eventmachine-0.12.10
Successfully installed daemons-1.0.10
Successfully installed thin-1.2.7
4 gems installed

$ gem install rails
Successfully installed activesupport-2.3.5
Successfully installed activerecord-2.3.5
Successfully installed rack-1.0.1
Successfully installed actionpack-2.3.5
Successfully installed actionmailer-2.3.5
Successfully installed activeresource-2.3.5
Successfully installed rails-2.3.5
7 gems installed

$ gem dependency actionpack -v 2.3.5
Gem actionpack-2.3.5
  activesupport (= 2.3.5, runtime)
  rack (~> 1.0.0, runtime)

$ gem dependency thin
Gem thin-1.2.7
  daemons (>= 1.0.9, runtime)
  eventmachine (>= 0.12.6, runtime)
  rack (>= 1.0.0, runtime)

$ irb
>> require "thin"
=> true
>> require "actionpack"
Gem::LoadError: can't activate rack (~> 1.0.0, runtime) 
for ["actionpack-2.3.5"], already activated rack-1.1.0
for ["thin-1.2.7"]

What happened here?

Thin declares that it can support any version of Rack above 1.0. ActionPack declares that it can support versions 1.0.x of Rack. When we require thin, it looks for the highest version of Rack that thin can support (1.1), and makes it available on the load path. When we require actionpack, it notes that the version of Rack already on the load path (1.1) is incompatible with actionpack (which requires 1.0.x) and throws an exception.

Thankfully, newer versions of Rubygems provide reasonable information about exactly which gem (“thin 1.2.7”) put Rack 1.1.0 on the load path. Unfortunately, there is often nothing the end user can do about it.

Rails could theoretically solve this problem by loosening its Rack requirement, but that would mean that ActionPack declared compatibility with any future version of Rack, a declaration ActionPack is unwilling to make.

The user can solve this problem by carefully ordering requires, but the user is never in control of all requires, so the process of figuring out the right order to require all dependencies can get quite tricky.

It is conceptually possible in this case, but it gets extremely hard when more than a few dependencies are in play (as in Rails 3).

Groups of Dependencies

When writing applications for deployment, developers commonly want to group their dependencies. For instance, you might use SQLite in development but Postgres in production.

For most people, the most important part of the grouping problem is making it possible to install the gems in their Gemfile, except the ones in specific groups. This introduces two additional problems.

First, consider the following Gemfile:

gem "rails", "2.3.5"
group :production do
  gem "thin"

Bundler allows you to install the gems in a Gemfile minus the gems in a specific group by running bundle install --without production. In this case, since rails depends on Rack, specifying that you don’t want to include thin means no thin, no daemons and no eventmachine but yes rack. In other words, we want to exclude the gems in the group specified, and any dependencies of those gems that are not dependencies of other gems.

Second, consider the following Gemfile:

gem "soap4r", "1.5.8"
group :production do
  gem "dm-salesforce", "0.10.3"

The soap4r gem depends on httpclient >= 2.1.1, while the dm-salesforce gem depends on an exact, pinned version of httpclient.

Initially, when you did bundle install --without production, we did not include gems in the production group in the dependency resolution process.

In this case, consider what happens when both the pinned httpclient version and httpclient 2.2 exist on In development mode, your app will use the latest version (2.2), but in production, when dm-salesforce is included, the older version will be used.

Note that this happened even though you specified only hard versions at the top level, because not all gems use hard versions as their dependencies.

To solve this problem, Bundler downloads (but does not install) all gems, including gems in groups that you exclude (via --without). This allows you to specify gems with C extensions that can only compile in production (or testing requirements that depend on OSX for compilation) while maintaining a coherent list of gems used across all of these environments.

System Gems

In 0.8 and before, bundler installed all gems in the local application. This provided a neat sandbox, but broke the normal path for running a new Rails app:

$ gem install rails
$ rails myapp
$ cd myapp
$ rails server

Instead, in 0.8, you’d have to do:

$ gem install rails
$ rails myapp
$ cd myapp
$ gem bundle
$ rails server

Note that the gem bundle command became bundle install in Bundler 0.9.

In addition, this meant that Bundler needed to download and install commonly used gems over and over again if you were working on multiple apps. Finally, every time you changed the Gemfile, you needed to run gem bundle again, adding a “build step” that broke the flow of early Rails application development.

In Bundler 0.9, we listened to this feedback, making it possible for bundler to use gems installed in the system. This meant that the ideal Rails installation steps could work, and you could share common gems between applications.

However, there were a few complications.

Since we now use gems installed in the system, Bundler resolves the dependencies in your Gemfile against your system sources at runtime, making a list of all of the gems to push onto the load path. Calling Bundler.setup kicks off this process. If you specified some gems not to install, we needed to make sure bundler did not try to find those gems in the system.

In order to solve this problem, we create a .bundle directory inside your application that remembers any settings that need to persist across bundler invocations.

Unfortunately, this meant that we couldn’t simply have people run sudo bundle install because root would own your application’s .bundle directory.

On OSX, root owns all paths that are, by default, in $PATH. It also owns the default GEM_HOME. This has two consequences. First, we could not trivially install executables from bundled gems into a system path. Second, we could not trivially install gems into a place that gem list would see.

In 0.9, we solved this problem by placing gems installed by bundler into BUNDLE_PATH, which defaults to ~/.bundle/#{RUBY_ENGINE}/#{RUBY_VERSION}. rvm, which does not install executables or gems into a path owned by root, helpfully sets BUNDLE_PATH to the same location as GEM_HOME. This means that when using rvm, gems installed via bundle install appear in gem list.

This also means that when not using rvm, you need to use bundle exec to place the executables installed by bundler onto the path and set up the environment.

In 0.10, we plan to bump up the permissions (by shelling out to sudo) when installing gems so we can install to the default GEM_HOME and install executables to a location on the $PATH. This will make executables created by bundle install available without bundle exec and will make gems installed by bundle install available to gem list on OSX without rvm.

Another complication: because gems no longer live in your application, we needed a way to snapshot the list of all versions of all gems used at a particular time, to ensure consistent versions across machines and across deployments.

We solved this problem by introducing a new command, bundle lock, which created a file called Gemfile.lock with a serialized representation of all versions of all gems in use.

However, in order to make Gemfile.lock useful, it would need to work in development, testing, and production, even if you ran bundle install --without production in development and then ran bundle lock. Since we had already decided that we needed to download (but not install) gems even if they were excluded by --without, we could easily include all gems (including those from excluded groups) in the Gemfile.lock.

Initially, we didn’t serialize groups exactly right in the Gemfile.lock, causing inconsistencies between how groups behaved in unlocked and locked mode. Fixing this required a small change in the lock file format, which caused a small amount of frustration for users of early versions of Bundler 0.9.


Very early (0.5 era) we decided that we would support prerelease “gems” that lived in git repositories.

At first, we figured we could just clone the git repositories and add the lib directory to the load path when the user ran Bundler.setup.

We abstracted away the idea of “gem source”, making it possible for gems to be found in system rubygems, remote rubygems, or git repositories. To specify that a gem was located in a git “source”, you could say:

gem "rspec-core", "2.0.0.beta.6", :git => "git://"

This says: “You’ll find version 2.0.0.beta.6 of rspec-core in this git repository.”

However, there were a number of issues involving git repositories.

First, if a prerelease gem had dependencies, we’d want to include those dependencies in the dependency graph. However, simply trying to run rake build was a nonstarter, as a huge number of prerelease gems have dependencies in their rake file that are only available to a tool like bundler once the gem is built (a chicken and egg problem). On the flip side, if another gem depended on a gem provided by a git repository, we were asking users to supply the version, an error-prone process since the version could change in the git repository and bundler wouldn’t be the wiser.

To solve this, we asked gem authors to put a .gemspec in the root of their repository, which would allow us to see the dependencies. A lot of people were familiar with this process, since github had used it for a while for automatically generating gems from git repositories.

At first, we assumed (like github did) that we could execute the .gemspec standalone, out of the context of its original repository. This allowed us to avoid cloning the full repository simply to resolve dependencies. However, a number of gems required files that were in the repository (most commonly, they required a version file from the gem itself to avoid duplication), so we modified bundler to do a full checkout of the repository so we could execute the gemspec in its original context.

Next, we found that a number of git repositories (notably, Rails) actually contained a number of gems. To support this, we allowed any number of .gemspec files in a repository, and would evaluate each in the context of its root. This meant that a git repository was more analogous to a gem source (like itself) than to a single .gem file.

Soon enough, people started complaining that they tried to use prerelease gems like nokogiri from git and bundler wasn’t compiling C extensions. This proved tricky, because the process that Rubygems uses to compile C extensions is more than a few lines, and we wanted to reuse the logic if possible.

In most cases, we were able to solve this problem by having bundler run gem build gem_name.gemspec on the gemspec, and using Rubygems’ native C extension process to compile the gem.

In a related problem, we started receiving reports that bundler couldn’t find rake while trying to compile C extensions. It turns out that Rubygems supports a rake compile mode if you use s.extensions = %w(Rakefile) or something containing mkrf. This essentially means that Rubygems itself has an implicit dependency on Rake. Since we sort the installed gems to make sure that dependencies get installed before the gems that depend on them, we needed to make sure that Rake was installed before any gem.

For git gems, we needed to make sure that Gemfile.lock remembered the exact revision used when bundler took the snapshot. This required some more abstraction, so that each source could emit, and later load back, the information it needed to reinstall everything identically to the snapshotted state.

If a git gem didn’t supply a .gemspec, we needed to create a fake .gemspec that we could use throughout the process, based on the name and version the user specified for the repository. This would allow it to participate in the dependency resolution process, even if the repository itself didn’t provide a .gemspec.

If a repository did provide a .gemspec, and the user supplied a version or version range, we needed to confirm that the version provided matched the version specified in the .gemspec.

We checked out the git repositories into BUNDLE_PATH (again, defaulting to ~/.bundle/#{RUBY_ENGINE}/#{RUBY_VERSION} or $GEM_HOME with rvm) using the --bare option. This allows us to share git repositories like the rails repository, and then make local checkouts of specific revisions, branches or tags as specified by individual Gemfiles.

One final problem: if your Gemfile looks like this:

source ""
gem "nokogiri"
gem "rails", :git => "git://", :tag => "v2.3.4"

You do not expect bundler to pull in the rails gem from, even though the version there is newer. Because bundler treats the git repository as a gem source, it initially pulled in the latest version of the gem, regardless of the source. To solve this problem, we added the concept of “pinned dependencies” to the dependency resolver, allowing us to ask it to skip traversing paths that got the rails dependency from other sources.


Now that we had git repositories working, it was a hop, skip and jump to support any path. We could use all of the same heuristics we used for git repositories (including using gem build to install C extensions and supporting multiple gems in a single location) on any path in the file system.

With so many sources in the mix, we started seeing cases where people had different gems with the exact same name and version in different sources. Most commonly, people would have created a gem from a local checkout of something (like Rack), and then, when the final version of the gem was released to, we were still using the version installed locally.

We tried to solve this problem by forcing a lookup in for the gem, but this conflicted with the needs of people who didn’t want to hit a remote repository when they already had all the gems locally.

When we first started talking to early adopters, they were incredulous that this could happen. “If you do something stupid like that, f*** you”. One by one, those very same people fell victim to the “bug”. Unfortunately, it manifests itself as can't find active_support/core_ext/something_new, which is extremely confusing and can appear to be a generic “bundler bug”. This is especially problematic if the dependencies change in two copies of the gem with identical names and versions.

To solve this problem, we decided that if you had snapshotted the repository via bundle lock and had all of the required gems on your local machine, we would not try to hit a remote. However, if you run bundle install otherwise, we always check to see if there is a newer version in the remote.

In fact, this class of error (two different copies of a gem with the same name and version) has resulted in a fairly intricate prioritization system, which can differ from scenario to scenario. Unfortunately, the principle of least surprise requires that we tweak these priorities for different scenarios.

While it seems that we could just say “if you rake install a gem you’re on your own”, the situation is very common, and people expect things to mostly work even in this scenario. Small tweaks to these priorities have also resulted in small changes in behavior between versions of 0.9 (but only in cases where gems with the exact same name and version, in different sources, provide different code).

In fact, because of the overall complexity of the problem, and because of different ways that these features can interact, very small tweaks to different parts of the system can result in unexpected changes. We’ve gotten pretty good at seeing the likely outcome of these tweaks, but they can be baffling to users of bundler. A major goal of the lead-in to 1.0 has been to increase determinism, even in cases where we have to arbitrarily pick a “right” answer.


This is just a small smattering of some of the problems we’ve encountered while working on bundler. Because the problem is non-trivial (and parts are NP-complete), adding an apparently simple feature can upset the equilibrium of the entire system. More frustratingly, adding features can sometimes change “undefined” behavior that accidentally breaks a working system as a result of an upgrade.

As we head into 0.10 and 1.0, we hope to add some additional features to smooth out the typical workflows, while stabilizing some of the seeming indeterminism in Bundler today. One example is imposing a standard require order for gems in the Gemfile, which is currently “undefined”.

Thanks for listening, and getting to the end of this very long post.

AbstractQueryFactoryFactories and alias_method_chain: The Ruby Way

In the past week, I read a couple of posts that made me really want to respond with a coherent explanation of how I build modular Ruby code.

The first post, by Nick Kallen of Twitter, gushed about the benefits of PerQueryTimingOutQueryFactory and called out Ruby (and a slew of other “hipster” languages) for using language features (like my “favorite” alias_method_chain) and leveraging dynamicism to solve problems that he argues are more appropriately solved with laugh-inducing pattern names:

In a very dynamic language like Ruby, open classes and method aliasing (e.g., alias_method_chain) mitigate this problem, but they don’t solve it. If you manipulate a class to add logging, all instances of that class will have logging; you can’t take a surgical approach and say “just objects instantiated in this context”.

If you haven’t read it yet, you should probably read it now (at least skim it).

As if on cue, a post by Pivot Rob Olson demonstrated the lengths some Rubyists will go to torture alias_method_chain to solve essentially the same problem that Nick addressed.

In short, while I agree in principle with Nick, his examples and the jargon he used demonstrated exactly why so few Rubyists take his point seriously. It is possible to write modular code in Ruby with the same level of flexibility but with far less code and fewer concept hoops to jump through.

Let’s take a look at the problem Rob was trying to solve:

module Teacher
  def initialize
    puts "initializing teacher"
  end
end

class Person
  include Teacher

  def initialize
    puts "initializing person"
  end
end

# Desired output:
# >
# initializing teacher
# initializing person

This is a classic problem involving modularity. In essence, Rob wants to be able to “decorate” the Person class to include teacher traits.

Nick’s response would have been to create a factory that creates a Person proxy decorated with Teacher properties. And he would have been technically correct, but that description obscures the Ruby implementation, and makes it sound like we need new “Factory” and “Decorator” objects, as we do, in fact, need when programming in Java.

In Ruby, you’d solve this problem thusly:

# The base person implementation. Never instantiate this.
# Instead, create a subclass that mixes in appropriate modules.
class AbstractPerson
  def initialize
    puts "Initializing person"
  end
end

# Provide additional "teacher" functionality as a module. This can be
# mixed into subclasses of AbstractPerson, giving super access to
# methods on AbstractPerson
module Teacher
  def initialize
    puts "Initializing teacher"
    super
  end
end

# Our actual Person class. Mix in whatever modules you want to
# add new functionality.
class Person < AbstractPerson
  include Teacher
end

# >
# Initializing teacher
# Initializing person

Including modules essentially decorates existing classes with additional functionality. You can include multiple modules to layer on existing functionality, but you don’t need to create special factory or decorator objects to make this work.

For those following along, the classes used here are “factories”, and the modules are “decorators”. But just as it’s not useful to constantly think about classes as “structs with function pointers” because that’s historically how they were implemented, I’d argue it’s not useful to constantly think about classes and modules as factories and decorators, simply because they’re analogous to those concepts in languages like Java.

The Case of the PerQueryTimingOutQueryFactory

Nick’s example is actually a great example of a case where modularity is important. In this case, he has a base Query class that he wants to extend to add support for timeouts. He wrote his solution in Scala; I’ll transcode it into Ruby.

Feel free to skim the examples that follow. I’m transcoding the Scala into Ruby to demonstrate something which you will be able to understand without fully understanding the examples.

class QueryProxy
  def initialize(query)
    @query = query
  end

  def select
    delegate { { yield } }
  end

  def execute
    delegate { @query.execute }
  end

  def cancel
    @query.cancel
  end

  def delegate
    yield
  end
end

Then, in order to add support for Timeouts, he creates a new subclass of QueryProxy:

class TimingOutQuery < QueryProxy
  def initialize(query, timeout)
    @timeout = timeout
    @query   = query
  end

  def delegate
    Timeout.timeout(@timeout) do
      yield
    end
  rescue Timeout::Error
    raise SqlTimeoutException
  end
end

Next, in order to instantiate a TimingOutQuery, he creates a TimingOutQueryFactory:

class TimingOutQueryFactory
  def initialize(query_factory, timeout)
    @query_factory = query_factory
    @timeout = timeout
  end

  def new(connection, query, *args), query, *args), @timeout)
  end
end

As his coup de grâce, he shows how, now that everything is so modular, it is trivial to extend this system to support timeouts that were per-query.

class PerQueryTimingOutQueryFactory
  def initialize(query_factory, timeouts)
    @query_factory = query_factory
    @timeouts = timeouts
  end

  def new(connection, query, *args), query, *args), @timeouts[query])
  end
end

This is all true. By using factories and proxies, as you would in Java, this Ruby code is modular. It is possible to create a new kind of QueryFactory trivially.

However, this code, by tacking close to vocabulary created to describe Java patterns, rebuilds functionality that exists natively in Ruby. It would be equivalent to creating a Hash of Procs in Ruby when a Class would do.

The Case: Solved

Ruby natively provides factories, proxies and decorators via language features. In fact, that vocabulary obscures the obvious solution to Nick’s problem.

# No need for a proxy at all, so we skip it
module Timeout
  # super allows us to delegate to the Query this
  # module is included into, even inside a block
  def select
    timeout { super }
  end

  def execute
    timeout { super }
  end

  # We get the cancel delegation natively, because
  # we can use subclasses, rather than a separate
  # proxy object, to implement the proxy

  # Since we're not using a proxy, we'll just implement
  # the timeout method directly, and skip "delegate"
  def timeout
    # The Timeout module expects a duration method
    # which classes that include Timeout should provide
    Timeout.timeout(duration) do
      yield
    end
  rescue Timeout::Error
    raise SqlTimeoutException
  end
end

# Classes in Ruby serve double duty as "proxies" and
# "factories". This behavior is part of Ruby semantics.
class TimingOutQuery < Query
  include Timeout

  # implement duration to hardcode the value of 1
  def duration
    1
  end
end

# Creating a second subclass of Query, this time with
# per-query timeout semantics.
class PerQueryTimingOutQuery < Query
  TIMEOUTS = { "query1" => 1, "query2" => 3 }

  include Timeout

  def duration
    TIMEOUTS[query]
  end
end

As Nick would point out, what we’re doing here, from a very abstract perspective, isn’t all that different from his example. Our subclasses are proxies, our modules are decorators, and our classes are serving as factories. However, forcing that verbiage on built-in Ruby language features, in my opinion, only serves to complicate matters. More importantly, by starting to think about the problem in terms of the Java-inspired patterns, it’s easy to end up building code that looks more like Nick’s example than my solution above.

For the record, I think that designing modularly is very important, and while Ruby provides built-in support for these modular patterns, we don’t see enough usage of them. However, we should not assume that the overuse of poor modularity patterns (like alias_method_chain) results from a lack of discussion around proxies, decorators, and factories.

By the way, ActionController in Rails 3 provides an abstract superclass called ActionController::Metal, a series of modules that users can mix in to subclasses however they like, and a pre-built ActionController::Base with all the modules mixed in (to provide the convenient “default” experience). Additionally, users or extensions can easily provide additional modules to mix in to ActionController::Metal subclasses. This is precisely the pattern I am describing here, and I strongly recommend that Rubyists use it more when writing code they wish to be modular.

Postscript: Scala

When researching for this article, I wondered why Nick hadn’t used Scala’s equivalent to Ruby’s modules (traits) in his examples. It would be possible to write Scala code that was extremely similar to my preferred solution to the problem. I asked both Nick and the guys in #scala. Both said that while traits could solve this problem in Scala, they could not be used flexibly enough at runtime.

In particular, Nick wanted to be able to read the list of “decorators” to use at runtime, and compose something that could create queries with the appropriate elements. According to the guys in #scala, it’s a well-understood issue, and Kevin Wright has a compiler plugin to solve this exact problem.

Finally, the guys there seemed to generally agree with my central thesis: that thinking about problems in terms of patterns originally devised for Java can leave a better, more implementation-appropriate solution sitting on the table, even when the better solution can be thought of in terms of the older pattern (with some contortions).

The Craziest F***ing Bug I’ve Ever Seen

This afternoon, I was telling a friend about one of my exploits tracking down a pretty crazy heisenbug, and he said he thought other people would be interested in hearing about it. So let me tell you about it.

Before you continue, if you’re not interested in relatively arcane technical details, feel free to skip this post. It’s here mainly because a friend said he thought people would be interested in it.

Our Story Begins

Our story begins with a small change in the way Ruby 1.9 handles implicit coercion. The most common case of implicit coercion is in Array#flatten. The basic logic is that Ruby checks to see if an element of the Array can itself be coerced into an Array, which it then does before flattening.

There are two steps to the process:

  1. Check to see if the element can be coerced into an Array
  2. If it can, call to_ary on the element, and repeat the process recursively
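The two steps can be sketched in plain Ruby. The flatten_all and try_to_ary names are hypothetical; the real logic lives in C inside Array#flatten:

```ruby
# Step 1: check whether the element claims to be coercible.
def try_to_ary(obj)
  obj.respond_to?(:to_ary) ? obj.to_ary : nil
end

# Step 2: coerce it, and repeat the process recursively.
def flatten_all(array)
  array.each_with_object([]) do |element, result|
    coerced = try_to_ary(element)
    if coerced
      result.concat(flatten_all(coerced))
    else
      result << element
    end
  end
end
```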

In Ruby 1.8, the process is essentially the following:

if obj.respond_to?(:to_ary)
  obj.to_ary
else
  obj
end

In Ruby 1.9, it was changed to:

begin
  obj.to_ary
rescue NoMethodError
  obj
end

Of course, the internal code was implemented in C, but you get the idea. This change subtly affects objects that implement method_missing:

class MyObject
  def method_missing(meth, *)
    if meth == :testing
      puts "TESTING"
    else
      puts "Calling Super"
      super
    end
  end
end

In Ruby 1.8, #flatten will first call #respond_to? which returns false and therefore doesn’t ever trigger the #method_missing. In Ruby 1.9, #method_missing will be triggered blindly, and because the call to super should raise a NoMethodError, we will essentially get the same result.

There are some subtle differences here, but for the vast majority of cases, the behavior is identical or close enough to not matter.
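The difference can be made visible with a small probe object. Here coerce_18 and coerce_19 are hypothetical stand-ins for the two C implementations:

```ruby
# Ruby 1.8 strategy: ask first, then call.
def coerce_18(obj)
  obj.respond_to?(:to_ary) ? obj.to_ary : obj
end

# Ruby 1.9 strategy: call blindly, rescue the failure.
def coerce_19(obj)
  obj.to_ary
rescue NoMethodError
  obj
end

# Records every method_missing hit, then defers to the default behavior
# (super raises NoMethodError, as it would for any missing method).
class Probe
  attr_reader :calls

  def initialize
    @calls = []
  end

  def method_missing(meth, *args)
    @calls << meth
    super
  end
end
```

Under the 1.8 strategy the probe’s method_missing never fires, because respond_to? returns false; under the 1.9 strategy it fires once for :to_ary before the NoMethodError from super is rescued.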

A Weird Quirk

When the above change landed in Ruby 1.9.2’s head, Rails started experiencing weird, intermittent behavior. As you might know, Rails overrides method_missing on NilClass to provide a guess about what object you were expecting instead of nil. This feature is called “whiny nils” and can be enabled or disabled in Rails applications.

def method_missing(method, *args, &block)
  if klass = METHOD_CLASS_MAP[method]
    raise_nil_warning_for klass, method, caller
  else
    super
  end
end

But sometimes, when nil was inside an Array, and the Array was flattened, the flatten failed with a NameError: undefined local variable or method `to_ary' for nil:NilClass, which resulted in a failure in the flatten entirely.

The bug appeared intermittently, apparently unrelated to the code that was calling it. We worked around it by catching the NameError and reraising a NoMethodError, but the existence of the bug was rather baffling.

The Bug Reappears

When working on bundler, Carl and I saw the bug again. This time, we didn’t want to let it go. In this case, we had four virtually identical tests with four identical stack traces leading to NoMethodError or NameError seemingly at random. Something didn’t add up.

After hunting bugs for a few hours (having dug deeply into Ruby’s source), I brought in Evan Phoenix (of Rubinius). At first, Evan was baffled as well, but he correctly pointed out that the key to understanding what was going on was the difference between NameError and NoMethodError in Ruby.

x = Object.new
x.no_method #=> NoMethodError

class Testing
  def vcall_no_method
    no_method #=> NameError
  end

  def call_no_method
    self.no_method #=> NoMethodError
  end
end

In essence, when Ruby sees a method call with implicit self (which might be a typo’ed local variable), it raises a NameError, rather than a NoMethodError.

However, it still didn’t add up, because the call to to_ary was internal, and shouldn’t have been randomly interpreted as a vcall (a call to a method with implicit self) or a normal call.

Tracking it Down

Evan dug deeply into the Ruby source, and found how Ruby was determining whether the call was a vcall or a regular call.

static inline VALUE
method_missing(VALUE obj, ID id, int argc, const VALUE *argv, int call_status)
{
    VALUE *nargv, result, argv_ary = 0;
    rb_thread_t *th = GET_THREAD();

    th->method_missing_reason = call_status;
    th->passed_block = 0;
    // ...
}

Before calling the method_missing method itself, Ruby sets a thread-local variable called call_status that reflects whether or not the original call was a vcall or a normal call.

Upon further examination, Evan discovered that the call to method_missing in the case of type coercion did not change the thread-local method_missing_reason.

As a result, Ruby raised a NoMethodError or NameError based upon whether the last call to method_missing was a vcall or regular call.
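The failure mode is easy to model in miniature. In this toy Dispatcher (all names hypothetical), one code path forgets to update the shared reason flag, so the error chosen depends on whatever the previous call left behind:

```ruby
class Dispatcher
  attr_accessor :missing_reason

  # The well-behaved path records why the lookup failed before erroring.
  def dispatch(kind)
    self.missing_reason = kind
    error_class
  end

  # BUG: like Ruby's coercion path, this forgets to set missing_reason,
  # so it inherits whatever reason the previous dispatch left behind.
  def dispatch_coercion
    error_class
  end

  def error_class
    missing_reason == :vcall ? NameError : NoMethodError
  end
end
```

After a vcall, the buggy path reports NameError; after a regular call, the same code reports NoMethodError. That is exactly the intermittent behavior we saw.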

The fix was a simple one-line patch, which allowed us to remove the workaround in ActiveSupport, but it was by far the craziest f***ing bug I’ve ever seen.

Spinning up a new Rails app

So people have been attempting to get a Rails app up and running recently. I also have some apps in development on Rails 3, so I’ve been experiencing some of the same problems many others have.

The other night, I worked with sferik to start porting merb-admin over to Rails. Because this process involved being on edge Rails, we honed it down to a simple, small, repeatable set of steps.

The Steps

Step 1: Check out Rails

$ git clone git://

Step 2: Generate a new app

$ ruby rails/railties/bin/rails new_app
$ cd new_app

Step 3: Edit the app’s Gemfile

# Add to the top
directory "/path/to/rails", :glob => "{*/,}*.gemspec"
git "git://"
git "git://"

Step 4: Bundle

$ gem bundle


Everything should now work: script/server, script/console, etc.

If you want to check your copy of Rails into your app, you can copy it into the app and then change your Gemfile to point to the relative location.

For instance, if you copy it into vendor/rails, you can make the first line of the Gemfile directory "vendor/rails", :glob => "{*/,}*.gemspec". You’ll want to run gem bundle again after changing the Gemfile, of course.

The Rails 3 Router: Rack it Up

In my previous post about generic actions in Rails 3, I made reference to significant improvements in the router. Some of those have been covered on other blogs, but the full scope of the improvements hasn’t yet been covered.

In this post, I’ll cover a number of the larger design decisions, as well as specific improvements that have been made. Most of these features were in the Merb router, but the Rails DSL is more fully developed, and the stronger emphasis on Rack is a marked improvement over the Merb approach.

Improved DSL

While the old map.connect DSL still works just fine, the new standard DSL is less verbose and more readable.

# old way
ActionController::Routing::Routes.draw do |map|
  map.connect "/main/:id", :controller => "main", :action => "home"
end

# new way
Basecamp::Application.routes do
  match "/main/:id", :to => "main#home"
end

First, the routes are attached to your application, which is now its own object and used throughout Railties. Second, we no longer need map, and the new DSL (match/to) is more expressive. Finally, we have a shortcut for controller/action pairs ("main#home" is {:controller => "main", :action => "home"}).

Another useful shortcut allows you to specify the method more simply than before:

Basecamp::Application.routes do
  post "/main/:id", :to => "main#home", :as => :homepage
end

The :as in the above example specifies a named route, and creates the homepage_url et al helpers as in Rails 2.

Rack It Up

When designing the new router, we all agreed that it should be built first as a standalone piece of functionality, with Rails sugar added on top. As a result, we used rack-mount, which was built by Josh Peek as a standalone Rack router.

Internally, the router simply matches requests to a rack endpoint, and knows nothing about controllers or controller semantics. Essentially, the router is designed to work like this:

Basecamp::Application.routes do
  match "/home", :to => HomeApp
end

This will match requests with the /home path, and dispatch them to a valid Rack application at HomeApp. This means that dispatching to a Sinatra app is trivial:

class HomeApp < Sinatra::Base
  get "/" do
    "Hello World!"
  end
end

Basecamp::Application.routes do
  match "/home", :to => HomeApp
end

The one small piece of the puzzle that might have you wondering at this point is that in the previous section, I showed the usage of :to => "main#home", and now I say that :to takes a Rack application.

Another improvement in Rails 3 bridges this gap. In Rails 3, PostsController.action(:index) returns a fully valid Rack application pointing at the index action of PostsController. So main#home is simply a shortcut for MainController.action(:home), and it otherwise is identical to providing a Sinatra application.
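A stripped-down sketch of the idea (MiniController and its names are hypothetical; the real implementation lives in ActionController::Metal):

```ruby
class MiniController
  # Returns a Rack application that instantiates the controller and
  # invokes the named action for each request.
  def self.action(name)
    lambda do |env|
      body = new.send(name)
      [200, { "Content-Type" => "text/html" }, [body]]
    end
  end
end

class MainController < MiniController
  def home
    "Welcome home"
  end
end

home_app = MainController.action(:home)  # a valid Rack endpoint
```

From the router’s perspective, home_app is indistinguishable from a Sinatra application: it is just something that responds to call with an env and returns a Rack response.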

As I posted before, this is also the engine behind match "/foo", :to => redirect("/bar").

Expanded Constraints

Probably the most common desired improvement to the Rails 2 router has been support for routing based on subdomains. There is currently a plugin called subdomain_routes that implements this functionality as follows:

ActionController::Routing::Routes.draw do |map|
  map.subdomain :support do |support|
    support.resources :tickets
  end
end

This solves the most common case, but the reality is that this is just one common case. In truth, it should be possible to constrain routes based not just on path segments, method, and subdomain, but also based on any element of the request.

The Rails 3 router exposes this functionality. Here is how you would constrain requests based on subdomains in Rails 3:

Basecamp::Application.routes do
  match "/foo/bar", :to => "foo#bar", :constraints => {:subdomain => "support"}
end

These constraints can include path segments as well as any method on ActionDispatch::Request. You could use a String or a regular expression, so :constraints => {:subdomain => /support\d/} would be valid as well.

Arbitrary constraints can also be specified in block form, as follows:

Basecamp::Application.routes do
  constraints(:subdomain => "support") do
    match "/foo/bar", :to => "foo#bar"
  end
end

Finally, constraints can be specified as objects:

class SupportSubdomain
  def self.matches?(request)
    request.subdomain == "support"
  end
end

Basecamp::Application.routes do
  constraints(SupportSubdomain) do
    match "/foo/bar", :to => "foo#bar"
  end
end

Optional Segments

In Rails 2.3 and earlier, there were some optional segments. Unfortunately, their names were hardcoded and not controllable. Since we’re using a generic router, magical optional segment names and semantics would not do. And having exposed support for optional segments in Merb was pretty nice. So we added them.

# Rails 2.3
ActionController::Routing::Routes.draw do |map|
  # Note that :action and :id are optional, and
  # :format is implicit
  map.connect "/:controller/:action/:id"
end

# Rails 3
Basecamp::Application.routes do
  # equivalent
  match "/:controller(/:action(/:id))(.:format)"
end

In Rails 3, we can be explicit about the optional segments, and even nest optional segments. If we want the format to be a prefix path, we can do match "(/:format)/home" and the format is optional. We can use a similar technique to add an optional company ID prefix or a locale.
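To see how nested optional segments compose, here is a toy expansion of the pattern syntax into a regexp. The pattern_to_regexp helper is invented for illustration; the real router compiles patterns via rack-mount:

```ruby
# Convert a route pattern with (...) optional groups and :name segments
# into an anchored regexp with named captures.
def pattern_to_regexp(pattern)
  source = Regexp.escape(pattern)
  source = source.gsub('\(', '(?:').gsub('\)', ')?')      # (...) becomes an optional group
  source = source.gsub(/:(\w+)/) { "(?<#{$1}>[^/.]+)" }   # :name becomes a named capture
  Regexp.new('\A' + source + '\z')
end

route = pattern_to_regexp("/:controller(/:action(/:id))(.:format)")
```

Matching "/main/home/7.json" captures all four segments, while "/main" simply leaves the nested optional groups empty.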

Pervasive Blocks

You may have noticed this already, but as a general rule, if you can specify something as an inline condition, you can also specify it as a block constraint.

Basecamp::Application.routes do
  controller :home do
    match "/:action"
  end
end

In the above example, we are not required to specify the controller inline, because we specified it via a block. You can use this for subdomains and controller restrictions, and for request methods (get etc. also take a block). There is also a scope method that can be used to scope a block of routes under a top-level path:

Basecamp::Application.routes do
  scope "/home" do
    match "/:action", :to => "homepage"
  end
end

The above route would match /home/hello to homepage#hello.


There are additional (substantial) improvements around resources, which I will save for another time, assuming someone else doesn’t get to it first.

Generic Actions in Rails 3

So Django has an interesting feature called “generic views”, which essentially allow you to render a template with generic code. In Rails, the same feature would be called “generic actions” (just a terminology difference).

This was possible, but somewhat difficult in Rails 2.x, but it’s a breeze in Rails 3.

Let’s take a look at a simple generic view in Django, the “redirect_to” view:

urlpatterns = patterns('django.views.generic.simple',
    ('^foo/(?P<id>\d+)/$', 'redirect_to', {'url': '/bar/%(id)s/'}),
)

This essentially redirects "/foo/<id>" to "/bar/<id>/". In Rails 2.3, a way to achieve equivalent behavior was to create a generic controller that handled this:

class GenericController < ApplicationController
  def redirect
    redirect_to(params[:url] % params, params[:options])
  end
end

And then you could use this in your router:

map.connect "/foo/:id", :controller => "generic", :action => "redirect", :url => "/bar/%{id}"

This uses the new Ruby 1.9 interpolation syntax (“%{first} %{last}” % {:first => “hello”, :last => “sir”} == “hello sir”) that has been backported to Ruby 1.8 via ActiveSupport.
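The interpolation itself looks like this (on Ruby 1.9+, or on 1.8 with ActiveSupport loaded):

```ruby
# String#% with a Hash fills in %{name} placeholders by key.
template = "/bar/%{id}"
template % { :id => 17 }                                    # "/bar/17"
"%{first} %{last}" % { :first => "hello", :last => "sir" }  # "hello sir"
```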

Better With Rails 3

However, this is a bit clumsy, and requires us to have a special controller to handle this (relatively simple) case. It also saddles us with the conceptual overhead of a controller in the router itself.

Here’s how you do the same thing in Rails 3:

match "/foo/:id", :to => redirect("/bar/%{id}")

This is built into Rails 3’s router, but the way it works is actually pretty cool. The Rails 3 router is conceptually decoupled from Rails itself, and the :to key points at a Rack endpoint. For instance, the following would be a valid route in Rails 3:

match "/foo", :to => proc {|env| [200, {}, ["Hello world"]] }
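Because the endpoint is just an object with a #call method, you can exercise one directly, no server required:

```ruby
endpoint = proc { |env| [200, { "Content-Type" => "text/plain" }, ["Hello world"]] }

# A Rack response is a three-element array: status, headers, body.
status, headers, body = endpoint.call("PATH_INFO" => "/foo")
```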

The redirect method simply returns a rack endpoint that knows how to handle the redirection:

def redirect(*args, &block)
  options = args.last.is_a?(Hash) ? args.pop : {}

  path      = args.shift || block
  path_proc = path.is_a?(Proc) ? path : proc { |params| path % params }
  status    = options[:status] || 301

  lambda do |env|
    req    = Request.new(env)
    params = path_proc.call(env["action_dispatch.request.path_parameters"])
    url    = req.scheme + '://' + req.host + params
    [status, {'Location' => url, 'Content-Type' => 'text/html'}, ['Moved Permanently']]
  end
end

There are a few things going on here, but the important part is the last few lines, where the redirect method returns a valid Rack endpoint. If you look closely at the code, you can see that the following would be valid as well:

match "/api/v1/:api", :to => 
  redirect {|params| "/api/v2/#{params[:api].pluralize}" }
# and
match "/api/v1/:api", :to => 
  redirect(:status => 302) {|params| "/api/v2/#{params[:api].pluralize}" }
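The same shape works without any Rails machinery at all. Here is a self-contained miniature; mini_redirect and the env keys it reads are simplified stand-ins for what ActionDispatch actually provides:

```ruby
# Build a Rack endpoint that interpolates path parameters into a target
# path and issues a redirect.
def mini_redirect(path, status = 301)
  lambda do |env|
    url = env["rack.url_scheme"] + "://" + env["HTTP_HOST"] +
          (path % env["path_parameters"])
    [status, { "Location" => url, "Content-Type" => "text/html" }, ["Moved Permanently"]]
  end
end

app = mini_redirect("/bar/%{id}")
```

Calling app with a request env for /foo/7 yields a 301 pointing at /bar/7, which is all the router needs from a redirect endpoint.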

Another Generic Action

Another nice generic action that Django provides is allowing you to render a template directly without needing an explicit action. It looks like this:

urlpatterns = patterns('django.views.generic.simple',
    (r'^foo/$',             'direct_to_template', {'template': 'foo_index.html'}),
    (r'^foo/(?P<id>\d+)/$', 'direct_to_template', {'template': 'foo_detail.html'}),
)

This provides a special mechanism for rendering a template directly from the Django router. Again, this could be implemented by creating a special controller in Rails 2 and used as follows:

class GenericController < ApplicationController
  def direct_to_template
    render params[:options]
  end
end

# Router
map.connect "/foo", :controller => "generic", :action => "direct_to_template", :options => {:template => "foo_detail"}

A Prettier API

A nicer way to do this would be something like this:

match "/foo", :to => render("foo")

For the sake of clarity, let’s say that directly rendered templates will come out of app/views/direct unless otherwise specified. Also, let’s say that the render method should work identically to the render method used in Rails controllers themselves, so that render :template => "foo", :status => 201, :content_type => Mime::JSON et al will work as expected.

In order to make this work, we’ll use ActionController::Metal, which exposes a Rack-compatible object with access to all of the powers of a full ActionController::Base object.

class RenderDirectly < ActionController::Metal
  include ActionController::Rendering
  include ActionController::Layouts

  append_view_path Rails.root.join("app", "views", "direct")
  append_view_path Rails.root.join("app", "views")

  layout "application"

  def index
    render *env["generic_views.render_args"]
  end
end

module GenericActions
  module Render
    def render(*args)
      app = RenderDirectly.action(:index)
      lambda do |env|
        env["generic_views.render_args"] = args
        app.call(env)
      end
    end
  end
end

The trick here is that we’re subclassing ActionController::Metal and pulling in just Rendering and Layouts, which gives you full access to the normal rendering API without any of the other overhead of normal controllers. We add both the direct directory and the normal view directory to the view path, which means that any templates you place inside app/views/direct will be used first, but it’ll fall back to the normal view directory for layouts or partials. We also specify that the layout is application, which is not the default in Rails 3 in this case since our metal controller does not inherit from ApplicationController.

Note for the Curious

In all normal application cases, Rails will look up the inheritance chain for a named layout matching the controller name. This means that the Rails 2 behavior, which allows you to provide a layout named after the controller, still works exactly the same as before, and that ApplicationController is just another controller name, and application.html.erb is its default layout.

And then, the actual use in your application:

Rails.application.routes do
  extend GenericActions
  match "/foo", :to => render("foo_index")
  # match "/foo" => render("foo_index") is a valid shortcut for the simple case
  match "/foo/:id", :constraints => {:id => /\d+/}, :to => render("foo_detail")
end

Of course, because we’re using a real controller shell, you’ll be able to use any other options available on the render (like :status, :content_type, :location, :action, :layout, etc.).

Metaprogramming in Ruby: It’s All About the Self

After writing my last post on Rails plugin idioms, I realized that Ruby metaprogramming, at its core, is actually quite simple.

It comes down to the fact that all Ruby code is executed code; there is no separate compile phase. In Ruby, every line of code is executed against a particular self. Consider the following five snippets:

class Person
  def self.species
    "Homo Sapien"
  end
end

class Person
  class << self
    def species
      "Homo Sapien"
    end
  end
end

class << Person
  def species
    "Homo Sapien"
  end
end

Person.instance_eval do
  def species
    "Homo Sapien"
  end
end

def Person.species
  "Homo Sapien"
end

All five of these snippets define a Person.species that returns Homo Sapien. Now consider another set of snippets:

class Person
  def name
    "Matz"
  end
end

Person.class_eval do
  def name
    "Matz"
  end
end

These snippets all define a method called name on the Person class. So Person.new.name will return “Matz”. For those familiar with Ruby, this isn’t news. When learning about metaprogramming, each of these snippets is presented in isolation: another mechanism for getting methods where they “belong”. In fact, however, there is a single unified reason that all of these snippets work the way they do.

First, it is important to understand how Ruby’s metaclass works. When you first learn Ruby, you learn about the concept of the class, and that each object in Ruby has one:

class Person
end

Person.class #=> Class

class Class
  def loud_name
    "#{name.upcase}!"
  end
end

Person.loud_name #=> "PERSON!"

Person is an instance of Class, so any methods added to Class are available on Person as well. What they don’t tell you, however, is that each object in Ruby also has its own metaclass, a Class that can have methods, but is only attached to the object itself.

matz = Object.new
def matz.speak
  "Place your burden to machine's shoulders"
end

What’s going on here is that we’re adding the speak method to matz‘s metaclass, and the matz object inherits from its metaclass and then Object. The reason this is somewhat less clear than ideal is that the metaclass is invisible in Ruby:

matz = Object.new
def matz.speak
  "Place your burden to machine's shoulders"
end

matz.class #=> Object

In fact, matz‘s “class” is its invisible metaclass. We can even get access to the metaclass:

metaclass = class << matz; self; end
metaclass.instance_methods.grep(/speak/) #=> ["speak"]
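On Ruby 1.9 and later there is also a public accessor, Object#singleton_class, which returns the very same object as the class << trick (note that instance_methods returns symbols rather than strings on 1.9+):

```ruby
matz = Object.new
def matz.speak
  "Place your burden to machine's shoulders"
end

# The classic idiom and the 1.9+ accessor produce the same object.
metaclass = class << matz; self; end

metaclass.equal?(matz.singleton_class)  # same object
metaclass.method_defined?(:speak)       # speak lives on the metaclass
```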

At this point in other articles on this topic, you’re probably struggling to keep all of the details in your head; it seems as though there are so many rules. And what’s this class << matz thing anyway?

It turns out that all of these weird rules collapse down into a single concept: control over the self in a given part of the code. Let’s go back and take a look at some of the snippets we looked at earlier:

class Person
  self.name #=> "Person"
  def name
    "Matz"
  end
end

Here, we are adding the name method to the Person class. Once we say class Person, the self until the end of the block is the Person class itself.

Person.class_eval do
  self.name #=> "Person"
  def name
    "Matz"
  end
end

Here, we’re doing exactly the same thing: adding the name method to instances of the Person class. In this case, class_eval is setting the self to Person until the end of the block. This is all perfectly straightforward when dealing with classes, but it’s equally straightforward when dealing with metaclasses:

def Person.species
  self.name #=> "Person"
  "Homo Sapien"
end

As in the matz example earlier, we are defining the species method on Person‘s metaclass. We have not manipulated self, but you can see that using def with an object attaches the method to the object’s metaclass.

class Person
  def self.species
    "Homo Sapien"
  end
  self.name #=> "Person"
end

Here, we have opened the Person class, setting the self to Person for the duration of the block, as in the example above. However, we are defining a method on Person‘s metaclass here, since we’re defining the method on an object (self). Also, you can see that self.name inside the Person class is identical to Person.name outside it.

class << Person
  def species
    "Homo Sapien"
  end
  self.name #=> ""
end

Ruby provides a syntax for accessing an object’s metaclass directly. By doing class << Person, we are setting self to Person‘s metaclass for the duration of the block. As a result, the species method is added to Person‘s metaclass, rather than the class itself.

class Person
  class << self
    def species
      "Homo Sapien"
    end
    self.name #=> ""
  end
end

Here, we combine several of the techniques. First, we open Person, making self equal to the Person class. Next, we do class << self, making self equal to Person‘s metaclass. When we then define the species method, it is defined on Person‘s metaclass.

Person.instance_eval do
  self.name #=> "Person"
  def species
    "Homo Sapien"
  end
end

The last case, instance_eval, actually does something interesting. It breaks apart the self into the self that is used to execute methods and the self that is used when new methods are defined. When instance_eval is used, new methods are defined on the metaclass, but the self is the object itself.
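We can watch both halves of that split in action:

```ruby
class Person; end

seen_self = nil
Person.instance_eval do
  seen_self = self   # the "execution" self is Person itself
  def species        # but def lands on Person's metaclass
    "Homo Sapien"
  end
end
```

So species comes out as a class method on Person, not an instance method, even though self inside the block was Person.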

In some of these cases, the multiple ways to achieve the same thing arise naturally out of Ruby’s semantics. After this explanation, it should be clear that def Person.species, class << Person; def species, and class Person; class << self; def species aren’t three ways to achieve the same thing by design, but that they arise out of Ruby’s flexibility with regard to specifying what self is at any given point in your program.

On the other hand, class_eval is slightly different. Because it takes a block, rather than acting as a keyword, it captures the local variables surrounding it. This can provide powerful DSL capabilities, in addition to controlling the self used in a code block. But other than that, it is exactly identical to the other constructs used here.

Finally, instance_eval breaks apart the self into two parts, while also giving you access to local variables defined outside of it.

In the following table, defines a new scope means that code inside the block does not have access to local variables outside of the block.

mechanism              method resolution    method definition    new scope?
class Person           Person               same                 yes
class << Person        Person’s metaclass   same                 yes
Person.class_eval      Person               same                 no
Person.instance_eval   Person               Person’s metaclass   no

Also note that class_eval is only available to Modules (note that Class inherits from Module) and is an alias for module_eval. Additionally, instance_exec, which was added to Ruby in 1.8.7, works exactly like instance_eval, except that it also allows you to send variables into the block.
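A quick illustration of the difference instance_exec makes:

```ruby
class Person
  def initialize(name)
    @name = name
  end
end

greeting = "Hello"

# instance_exec sets self (so @name resolves against the Person instance)
# while also passing ordinary arguments into the block.
result = Person.new("Matz").instance_exec(greeting) { |g| "#{g}, #{@name}!" }
```

With instance_eval, the block could still see the greeting local through its closure, but it could not receive arguments explicitly.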

UPDATE: Thank you to Yugui of the Ruby core team for correcting the original post, which ignored the fact that self is broken into two in the case of instance_eval.