Ruby's Implementation Does Not Define its Semantics
When I was first getting started with Ruby, I heard a lot of talk about blocks, and how you could "cast" them to Procs by using the & operator when calling methods. Last week, in comments about my last post (Ruby is NOT a Callable Oriented Language (It’s Object Oriented)), I heard that claim again.
To be honest, I never really thought that much about it, and the idea of "casting" a block to a Proc never took hold in my mental model. But when discussing my post with a number of people, I realized that a lot of people do have this concept in their mental model of Ruby.
It is not part of Ruby's semantics.
In some cases, Ruby's internal implementation performs optimizations by eliminating the creation of objects until you specifically ask for them. Those optimizations are completely invisible, and again, not part of Ruby's semantics.
Let's look at a few examples.
Blocks vs. Procs
Consider the following scenarios:
def foo
  yield
end

def bar(&block)
  puts block.object_id
  baz(&block)
end

def baz(&block)
  puts block.object_id
  yield
end
foo { puts "HELLO" } #=> "HELLO"
bar { puts "HELLO" } #=> "2148083200\n2148083200\nHELLO"
Here, I have three methods using blocks in different ways. In the first method (foo), I don't specify the block at all, and simply yield to the implicit block. In the second (bar), I specify a block parameter, print its object_id, and then send it on to the baz method. In the third (baz), I specify the block, print out its object_id, and yield to the block.
In Ruby's semantics, these three uses are identical. In the first case, yield calls an implicit Proc object. In the second case, the method takes an explicit Proc and sends it on to the next method. In the last case, the method takes an explicit Proc, but yields to that same object implicitly.
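To make that equivalence concrete, here is a small sketch of my own (plain standard Ruby, not one of the original examples): yield and calling the named Proc are two spellings of the same operation.
def via_yield
  yield "HELLO"
end

def via_call(&block)
  block.call("HELLO") # invokes the same object the yield keyword would
end

via_yield { |msg| puts msg } #=> "HELLO"
via_call  { |msg| puts msg } #=> "HELLO"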
So what's the & thing for?
In Ruby, in addition to normal arguments, methods can receive a Proc in a special slot. All methods can receive such an argument, and Procs passed in that slot are silently ignored if not yielded to:
def foo
  puts "HELLO"
end
foo { something_crazy } #=> "HELLO"
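As an aside (my own addition, not from the original examples), a method can check that slot with block_given? before yielding, which is the usual way to make a block optional:
def maybe_yield
  if block_given? # true only when something occupies the proc slot
    yield
  else
    puts "no block passed"
  end
end

maybe_yield                  #=> "no block passed"
maybe_yield { puts "HELLO" } #=> "HELLO"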
On the other hand, if you want to pass a Proc object into that slot, so the method can yield to it, you prefix it with an & at the call site:
def foo
  yield
end

my_proc = Proc.new { puts "HELLO" }
foo(&my_proc)
Here, you're telling the foo method that my_proc is not a normal argument; it should be placed into the proc slot and made available to yield.
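For contrast, a quick sketch of my own: without the &, my_proc is just an ordinary positional argument, and foo (which accepts no arguments) rejects it.
def foo
  yield
end

my_proc = Proc.new { puts "HELLO" }

begin
  foo(my_proc) # no &, so this is a normal argument, not the proc slot
rescue ArgumentError => e
  puts e.message # e.g. "wrong number of arguments (given 1, expected 0)"
end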
Additionally, if you want access to the Proc object, you can give it a name:
def foo(&block)
  puts block.object_id
  yield
end
foo { puts "HELLO" } #=> "2148084320\nHELLO"
This simply means that you want access to the implicit Proc in a variable named block.
Because, in most cases, you're passing in a block (using do/end or {}) and calling it using yield, Ruby provides some syntax sugar to make that simple case more pleasing. That does not, however, mean that there is a special block construct in Ruby's semantics, nor does it mean that the & is casting a block to a Proc.
You can tell that blocks are not being semantically wrapped and unwrapped, because blocks passed along via & share the same object_id across methods.
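If you prefer a direct identity check to eyeballing object_ids, equal? (Ruby's object-identity test) tells the same story. This is my own sketch, not one of the original examples:
def outer(&blk)
  inner(blk, &blk) # pass the Proc both as a normal argument and in the proc slot
end

def inner(original, &blk)
  puts original.equal?(blk) # the proc slot hands over the very same object
end

outer { } #=> true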
Mental Models
For the following code there are two possible mental models.
def foo(&block)
  puts block.object_id
  yield
end

b = Proc.new { puts "OMG" }
puts b.object_id
foo(&b) #=> "2148084040\n2148084040\nOMG"
In the first, the &b unwraps the Proc object, and the &block recasts it into a Proc. However, it somehow also wraps it back into the same wrapper it came from in the first place.
In the second, the &b puts the b Proc into the block slot in foo's argument list, and the &block gives the implicit Proc a name. There is no need to explain why the Proc has the same object_id; it is the same Object!
These two mental models are perfectly valid (the first actually reflects Ruby's internal implementation). I claim, however, that those who want to use the first mental model bear the heavy burden of introducing a new concept into the Ruby language, the non-object block, and that as a result it should be generally rejected.
Metaclasses
Similarly, in Ruby's internal implementation, an Object does not get a metaclass until you ask for one.
obj = Object.new
# internally, obj does not have a metaclass here

obj.to_s
# internally, Ruby skips right up to Object when searching for #to_s, since
# it knows that no metaclass exists

def obj.hello
  puts "HELLO"
end
# now, Ruby internally rewrites obj's class pointer to point to a new internal
# metaclass which has the hello method on it

obj.to_s
# Now, Ruby searches obj's metaclass before jumping up to Object

obj.to_s
# Now, Ruby skips the search because it has already cached the method lookup
All of the comments in the above snippet are correct, but semantically, none of them are important. In all cases, Ruby is semantically looking for methods in obj's metaclass, and when it doesn't find any, it searches higher up. In order to improve performance, Ruby skips creating a metaclass if no methods or modules are added to an object's metaclass, but that doesn't change Ruby's semantics. It's just an optimization.
By thinking in terms of Ruby's implementation, instead of Ruby's semantics, you are forced to think about a mutable class pointer and consider the possibility that an object has no metaclass.
Mental Models
Again, there are two possible mental models. In the first, Ruby objects have a class pointer, which they manipulate to point to new metaclass objects which are created only when methods or modules are added to an object. Additionally, Ruby objects have a method cache, which they use to store method lookups. When a method is looked up twice, in this mental model, some classes are skipped because Ruby already knows that they don't have the method.
In the second mental model, all Ruby objects have a metaclass, and method lookup always goes through the metaclass and up the superclass chain until a method is found.
As before, I claim that those who want to impose the first mental model on Ruby programmers have the heavy burden of introducing the new concepts of "class pointer" and "method cache", which are not Ruby objects and have no visible implications on Ruby semantics.
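The second mental model is also the one Ruby's own reflection reports. A small sketch, assuming a Ruby recent enough to expose the metaclass as Object#singleton_class:
obj = Object.new
p obj.singleton_class # every object reports one, e.g. #<Class:#<Object:0x007f...>>

def obj.hello
  puts "HELLO"
end

p obj.singleton_class.instance_methods(false)    #=> [:hello]
p obj.singleton_class.ancestors.include?(Object) #=> true; lookup continues up to Object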
Regular Expression Matches
In Ruby, certain regular expression operations create implicit local variables that reflect parts of the match:
def foo(str)
  str =~ /e(.)l/
  p $~
  p $`
  p $'
  p $1
  p $2
end
foo("hello") #=> #<MatchData "ell" 1:"l">\n"h"\n"o"\n"l"\nnil
This behavior is mostly inherited from Perl, and Matz has said a few times that he would not support Perl's backrefs if he had it to do over again. However, they provide another opportune example of implicit objects in Ruby.
Mental Models
In this case, there are three possible mental models.
In the first mental model, if you don't use any $ local variables, they don't exist. When you use a specific one, it springs into existence. For instance, when using $1, Ruby looks at some internal representation of the last match and retrieves the first capture. If you use $~, Ruby creates a MatchData object out of that internal representation.
In the second mental model, when you call a method that uses regular expressions, Ruby walks back up the stack frames and inserts the $ local variables into the calling frame when it finds the original caller. If you later use the variables, they are already there. Ruby must be a little bit clever, because the most recent frame on the stack (which might include C frames, Java frames, or internal Ruby frames in Rubinius) is not always the calling frame.
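Whatever the implementation does, the observable behavior is that these variables belong to the method that performed the match, not to its callers. A quick check (my own addition, not from the original post):
def match_inside(str)
  str =~ /e(.)l/ # the match happens in this frame...
  p $1           #=> "l" -- ...so $1 is visible here
end

match_inside("hello")
p $~ #=> nil -- the calling frame never sees the match variables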
In the last mental model, when you call a method that uses regular expressions, there is an implicit match object available (similar to the implicit Proc object that is available in methods). The $~ variable is mapped to that implicit object, while the $1 variable is the equivalent of $~[1].
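The correspondence in this last model is easy to check directly (again, my own sketch): $~ is an ordinary MatchData object, and the numbered variables are just index lookups into it.
"hello" =~ /e(.)l/
p $~.class      #=> MatchData
p $1 == $~[1]   #=> true
p $~.pre_match  #=> "h" (the value $` refers to)
p $~.post_match #=> "o" (the value $' refers to)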
Again, this last mental model introduces the least burdensome ideas into Ruby. The first mental model introduces the idea of an internal representation of the last match, while the second mental model (which again, has the upside of being how most implementations actually do it) introduces the concept of stack frames, which are not Ruby objects.
The last mental model uses an actual Ruby object, and does not introduce new concepts. Again, I prefer it.
Conclusion
In a number of places, it is possible to imbue Ruby's semantics with mental models that reflect the actual Ruby implementation, in which an object can be imagined to spring into existence only when it is asked for.
However, these mental models require that Ruby programmers add non-objects to the semantics of Ruby, and that they resort to contortions to explain away Ruby's own efforts to hide these internals behind the higher-level constructs of the language. For instance, while Ruby internally wraps and unwraps Procs when passing them to methods, it makes sure that the Proc object attached to a block is always the same, in an effort to hide the internal details from programmers.
As a result, explaining Ruby's semantics in terms of these internals requires contortions and new constructs that are not natively part of Ruby's object model, and those explanations should be avoided.
To be clear, I am not arguing that it's not useful to understand Ruby's implementation. It can, in fact, be quite useful, just as it's useful to understand how C's calling convention works under the covers. However, just as day-to-day C programmers don't need to think about the emitted assembly, day-to-day Ruby programmers don't need to think about the implementation. And finally, just as C implementations are free to use different calling conventions without breaking existing processor-agnostic C code (or the mental model that C programmers use), Ruby implementations are free to change the internal implementation of these constructs without breaking pure-Ruby code or the mental model Ruby programmers use.