Yehuda Katz is a member of the Ember.js, Ruby on Rails and jQuery Core Teams; he spends his daytime hours at the startup he founded, Tilde Inc. Yehuda is co-author of the best-selling jQuery in Action and Rails 3 in Action. He spends most of his time hacking on open source—his main projects, like Thor, Handlebars and Janus—or traveling the world doing evangelism work. He can be found on Twitter as @wycats and on GitHub.

Archive for the ‘Other’ Category

Inherited Templates With Rails

A nice feature of some template languages (for instance, Django’s) is the ability to create templates that can be “inherited” by other templates. In effect, the goal is to create a template that has some missing content, and let “inheritors” fill in that content downstream.

It’s not terribly difficult to add this feature to Rails; let’s first design the API. First, the parent template (_parent.html.erb):

Before in parent
<%= first_from_child %>
<%= second_from_child %>
After in parent

We just use normal partial locals for the parent API, because they’re a well-understood construct and pretty easy to work with. Next, the “subclass” (_child.html.erb):

Before in child
<% override_template("parent") do |template| -%>
  <% template.set_content(:first_from_child) do -%>first_from_child<% end -%>
  <% template.set_content(:second_from_child) do -%>second_from_child<% end -%>
<% end -%>
After in child

Here, we use Ruby’s block syntax to create a context for supplying the content for the template. We expect the following output:

Before in child
Before in parent
After in parent
After in child

There are two parts to this implementation. First, let’s create the implementation for the override_template method that will be exposed onto ActionView::Base:

module ActionView
  module InheritedTemplates
    def override_template(name, &block)
      locals = Collector.collect(self, &block)
      concat(render :partial => name, :locals => locals)
    end
  end

  class Base
    include InheritedTemplates
  end
end

To start, we create the method, deferring the heavy lifting to the Collector.collect method. The API we’ve designed asks #collect to hand back a Hash of locals, which we can pass unmodified into the “super” template.

The collector is a pretty straightforward implementation, but it deserves some explanation:

class Collector
  def self.collect(view, &block)
    collector = new(view)
    yield collector
    collector.content
  end

  attr_reader :content

  def initialize(view)
    @view = view
    @content = {}
  end

  def set_content(name, &block)
    @content[name] = @view.capture(&block)
  end
end

The first thing we do is define the collect method. It’s a basic DSL implementation, creating an instance of itself and evaluating the block in its context. In order to fully understand what’s going on here, you need to see what the block looks like, once it’s compiled from ERB into pure-Ruby:

override_template("parent") do |template|
  template.set_content(:first_from_child) do; @output_buffer << "first_from_child"; end
  template.set_content(:second_from_child) do; @output_buffer << "second_from_child"; end
end

We start by creating a new collector, yielding the collector to the block passed to override_template. Inside the block, template.set_content is called twice; each call sets a key and value in the @content Hash. When done, the Hash will look like {:first_from_child => "first_from_child", :second_from_child => "second_from_child"}. The collector then returns the content Hash.

When that’s done, override_template simply passes that locals Hash through to render :partial, and concats the output to the buffer.
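The collector pattern itself is framework-independent. Here's a minimal, self-contained sketch of the same idea; note that, as an illustration-only assumption, plain blocks returning strings stand in for ActionView's capture:

```ruby
# A framework-free sketch of the collector DSL described above.
# Instead of capturing ERB output, blocks here simply return strings.
class Collector
  def self.collect
    collector = new
    yield collector
    collector.content
  end

  attr_reader :content

  def initialize
    @content = {}
  end

  def set_content(name, &block)
    @content[name] = block.call
  end
end

locals = Collector.collect do |template|
  template.set_content(:first_from_child)  { "first_from_child" }
  template.set_content(:second_from_child) { "second_from_child" }
end

puts locals[:first_from_child] # prints "first_from_child"
```

The returned Hash is exactly what the "parent" partial expects as its locals.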


Those following my blog might remember that I showed how we can improve block helpers through smarter compilation. If that fix were in place, here’s what _child.html.erb would look like:

Before in child
<%= override_template("parent") do |template| -%>
  <% template.set_content(:first_from_child) do -%>first_from_child<% end -%>
  <% template.set_content(:second_from_child) do -%>second_from_child<% end -%>
<% end -%>
After in child

More importantly, here’s what override_template would look like:

def override_template(name, &block)
  locals = Collector.collect(self, &block)
  render :partial => name, :locals => locals
end

We’re actually saved a bit of pain here in the unfixed case, because override_template can only be called from inside templates, so we don’t need to use block_called_from_erb? to disambiguate. Had this been a normal block helper, override_template would have looked like:

def override_template(name, &block)
  locals = Collector.collect(self, &block)
  response = render(:partial => name, :locals => locals)
  if block_called_from_erb?(&block)
    concat(response)
  else
    response
  end
end

Of course, realizing all of this is non-trivial, which is the reason we’re going to be moving toward the simple block helper syntax above.

Emulating Smalltalk’s Conditionals in Ruby

If you follow my blog, you know that I enjoy emulating language features from other languages (like Python) in pure-Ruby. On the flight back from London, I read through Smalltalk Best Practice Patterns, and was reminded that Smalltalk doesn’t have built-in conditionals.

Instead, they use method calls (aka message sends) to do the heavy lifting:

anObject isNil
  ifTrue:  [ Transcript show: 'true'. ]
  ifFalse: [ Transcript show: 'false'. ].

In contrast, the typical way to do that in Ruby is:

if an_object.nil?
  puts "true"
else
  puts "false"
end

However, the Smalltalk approach is fully compatible with Ruby’s pure-OO approach (which is, in fact, inherited from Smalltalk in the first place). So why not be able to do:

an_object.nil?.
  if_true  { puts "true" }.
  if_false { puts "false" }

In Ruby, unlike Smalltalk, message sends use a “.”, so we need to use periods between the message sends to link them together. With that exception, the semantics of the two languages are roughly identical. And, as usual, the implementation in Ruby is pretty straightforward:

class Object
  def if_true; yield; self; end
  def if_false; self; end
end

class NilClass
  def if_true; self; end
  def if_false; yield; self; end
end

class FalseClass
  def if_true; self; end
  def if_false; yield; self; end
end

In Ruby, false and nil are “falsy”, while all other values are “truthy”. Since everything in Ruby inherits from Object, we can simply define if_true and if_false on Object, FalseClass, and NilClass. If the block should be called (if_true on truthy values, or if_false on falsy values), we yield. In all cases, we return “self” so that we can chain if_true to if_false. And that’s it.

Of course, this means extending core classes, which some people are shy about doing, but it does mirror the semantics of Smalltalk.
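To see the whole thing end to end, here's a condensed, self-contained variant of the same implementation, using class_eval to avoid repeating the falsy definitions, followed by a small demonstration:

```ruby
class Object
  def if_true; yield; self; end   # truthy: run the block, return self
  def if_false; self; end         # truthy: skip the block, return self
end

# nil and false share the inverse behavior
[NilClass, FalseClass].each do |falsy|
  falsy.class_eval do
    def if_true; self; end
    def if_false; yield; self; end
  end
end

log = []
(1 == 1).if_true { log << "equal" }.if_false { log << "unequal" }
nil.if_true { log << "truthy" }.if_false { log << "it was nil" }
log # => ["equal", "it was nil"]
```

Because every branch returns self, if_true and if_false can be chained in either order on any value.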

“The Bar for Success in Our Industry” — A Quibble

I read The bar for success in our industry is too low with interest. I generally like and agree with the argument that, as an entrepreneur, you should build a business around sustainable practices that involve actually making money in the short term. The fact that the advice comes from a successful business like 37signals, which has actually made sustainable, long-term money, lends it further credibility.

I also agree that the analysis of Evernote as a “success” on the basis of projected income earned incrementally (a month at a time) over a year from now is a pretty bizarre way to define present success.

That said, I think their success blinds them to alternative, more risky ways of being successful with technology. In Jason’s post, he analogizes web businesses to stores:

If there was an airline that flew more passengers than anyone else, but lost money on each one, would we call it a success? If there was a restaurant that served more people than anyone else, but lost money on each meal served, would we call it a success? If there was a store that sold more product than anyone else, but took a loss on each one, would we call it a success? Would the business press hold these companies up as business model successes? Would anyone? Interesting, maybe. Promising, sure. But successful? Then what the hell is going on with the coverage of our industry?

There is, however, an entirely different mode of doing business that is well represented in the traditional sphere. While it is certainly more risky, large companies often sink millions or billions of dollars into research, hoping it will pay off by providing them a sellable product. Microsoft, for instance, spent more than 9 billion dollars on research in FY 2008, almost double what it spent on income tax. Around one in six dollars that comes in to Microsoft in revenue is spent on research.

By Jason’s argument, Microsoft Research is not “successful”, and if someone says it is, they are using an unacceptably low bar for success. It would be more appropriate to think of Microsoft as aggregating a large number of risky propositions into a single department, on the gamble that enough of those propositions will be successful to make the expenditure worth it, even if it takes a while for them to pay off.

In effect, venture capitalists are making the exact same gamble. The underlying business model is still the same: plunk down a bunch of cash on research projects that, if successful, will result in significant payoff. The difference is that Internet businesses are very amenable to a more distributed model, where individual folks with ideas do the research, shouldering some of the risk of failure and some of the reward for success.

In other words, these sorts of businesses are simply accumulating value–in followers, users, or even eyeballs. They will be successful when they can sell that value (usually, all at once), just as Microsoft will be successful when they can ship products based on years of research (usually, all at once). According to Jason’s analysis, Twitter is not a success. That is a fundamentally flawed analysis as they have built enough value at this point to be converted into a lump-sum payoff, even if they have not yet cashed in.

However, none of this is to say that as an entrepreneur, you should build your business around the hope of such a payoff. It’s a highly risky way to build a business, and is subsidizing venture capitalists, who don’t actually share in the risk (like Microsoft, they are aggregating risk into a larger research pool). But if taking big risks is your thing, taking on a long-term research project is a perfectly reasonable way to build a business.

Python Decorators in Ruby

This past week, I had a pretty long discussion on StackOverflow about Ruby and Python, touched off by a question about whether there is anything special about Ruby that makes it more amenable to Rails.

My initial response covered blocks (as a self-hosting way to improve the language) and the ability to define new “class keywords” because of the fact that class bodies are just normal executable code. The strongest argument against this was that Python decorators are as powerful as Ruby’s executable class bodies, and therefore can be used to achieve anything that Ruby can achieve.

Python decorators do in fact provide a certain amount of additional power and elegance over Python as it existed before decorators. However, this power is simply a subset of the functionality available in Ruby, because Ruby has always had the ability to add features like decorators to the language without needing language changes.

In other words, while Python has added a new hardcoded feature in order to achieve some of the power of Ruby, adding such a feature to Ruby would be unnecessary because of the underlying structure of the Ruby language.

To show this clearly, I have implemented a port of Python function decorators in under 70 lines of Ruby. I wrote the code test-first, porting the necessary functionality from two Python Decorator tutorials written by Bruce Eckel in October 2008.

The list of features supported:

  • Decorate any object that responds to call via: decorate SomeKlass or decorate some_lambda
  • A simpler syntax for class names that can be used as decorators: SomeKlass(arg)
  • The ability to give the decorator a name that could be used instead of the class name, if desired

Some examples:

class MyDecorator
  def initialize(klass, method)
    @method = method
  end

  def call(this, *args)
    puts "Before MyDecorator with #{args.inspect}"
    @method.bind(this).call(*args) # invoke the decorated method
    puts "After MyDecorator with #{args.inspect}"
  end
end

class WithDecorator
  # we could trivially add these to all classes, but we'll keep it isolated
  extend MethodDecorators

  MyDecorator() # equivalent to decorate MyDecorator
  def my_function(*args)
    puts "Inside my_function with #{args.inspect}"
  end
end

When MyDecorator() is called inside the WithDecorator class, it registers a decorator for the next method that is defined. This is roughly equivalent to Python’s function decoration. Some other examples:

class MyDecorator < Decorator
  decorator_name :surround

  def initialize(klass, method, *args)
    @method, @args = method, args
  end

  def call(this, *args)
    puts "Before MyDecorator(#{@args.inspect}) with #{args.inspect}"
    @method.bind(this).call(*args) # invoke the decorated method
    puts "After MyDecorator(#{@args.inspect}) with #{args.inspect}"
  end
end

class Class
  include MethodDecorators
end

class WithDecorator
  surround(1, 2)
  def function(*args)
    p args
  end
end

In this case, I’ve inherited from the Decorator class, which allows me to give the decorator a name, which can then be used in any class with MethodDecorators mixed in. In this example, I’ve mixed MethodDecorators into Class, which makes decorators available to all classes. Again, I could have made this the default, but I try to avoid making new behaviors global when I can.

This is of course a first pass and I’m sure there are subtle inconsistencies between Python’s decorator implementation and what I have here. My point is just that the feature that was added to Python to add flexibility is merely a subset of the functionality available to Ruby, because Rubyists can implement new declarative features without needing help from the language implementors, and have always been able to.
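For readers curious how a hook like this can register a decorator for "the next method that is defined", here's a hedged, minimal sketch using Ruby's method_added callback. This is not the implementation from my port; the names (MethodDecorators, decorate, Logged, Service) are all illustrative:

```ruby
module MethodDecorators
  # Register a decorator class for the next method defined in the class.
  def decorate(decorator)
    @pending_decorator = decorator
  end

  # Ruby calls this hook every time an instance method is defined.
  def method_added(name)
    return unless @pending_decorator
    decorator = @pending_decorator
    @pending_decorator = nil # clear first, so the redefinition below doesn't recurse

    original = instance_method(name)
    define_method(name) do |*args|
      decorator.new(self.class, original).call(self, *args)
    end
  end
end

# An example decorator: wraps the return value of the decorated method.
class Logged
  def initialize(klass, method)
    @method = method
  end

  def call(this, *args)
    "logged: #{@method.bind(this).call(*args)}"
  end
end

class Service
  extend MethodDecorators

  decorate Logged
  def greet(name)
    "hello, #{name}"
  end
end

Service.new.greet("world") # => "logged: hello, world"
```

Because everything here is plain Ruby running in an executable class body, no language change is needed to get decorator-like declarations.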

Delicious Food

A couple of months ago, I read The End of Overeating, which got me started on a series of books about food. I worked my way through In Defense of Food, The Omnivore’s Dilemma, and Animal, Vegetable, Miracle.

After reading through these books and doing a bunch of auxiliary research, I came to the fairly disturbing conclusion that the food we buy at the grocery store is sorely lacking in required nutrients. That’s especially true for packaged, processed foods, but even the fruits and vegetables in a typical produce section are lacking.

Studies show that fruits and vegetables grown without pesticides or herbicides have significantly higher levels of certain vitamins and antioxidants. And simple replacements of the missing compounds assumes that we have a full understanding of what’s missing. We don’t.

Additionally, breeding fruits and vegetables for high yield, uniform appearance, and long travel distance necessarily reduces other more important factors, like taste and nutritional content. Like I said, studies have found this to be true, but in retrospect, it’s fairly self-evident. Evolution is a process of competition among zero-sum ends. If yield and long-distance travel win out, something else loses out.

With that introduction, over the past few months, I’ve slowly started working toward cooking virtually all of my own food. In the past few weeks, that has expanded to include breads and sauces. In short, the rule I try to follow is: “Only purchase items with a single ingredient in their ingredient list.” And at a very minimum, a rule I got from Michael Pollan’s books: “Only purchase items with a few ingredients, all of which are understandable.”

It probably sounds like I’ve absorbed too much of San Francisco’s culture, but I have to tell you, the quality of the food I’ve been eating has increased dramatically. For the most part, food tastes better, and even when it doesn’t, I really enjoy putting together meals.

In order to make this happen, I’ve purchased a few pieces of equipment. First of all, I purchased a bread-maker. It cost only $100, and I’ve already made two loaves of pretty good tasting whole-wheat bread.


I also purchased a rice cooker, which makes cooking brown rice at home actually possible. Trying to do it on the stove is basically impossible (try Googling instructions for cooking brown rice).

I dramatically increased my consumption of organic fruits and vegetables, much of it locally grown. After a month or so, I can absolutely confirm that the quality and taste of the fruits and vegetables is significantly higher. I’ve also straight-up stopped worrying about the small differences in fat between things like whole milk and skim milk, since those items are mostly garnishes on fruits and vegetables. As a result, I’ve been able to focus on the taste of my ingredients instead of trying to squeeze a few more grams of fat out of a meal, and I’ve still been able to lose a bunch of weight since I started. (Again, in retrospect, it’s not all that surprising that eating a ton more fruits and vegetables, regardless of what else is in the diet, would result in weight loss.)


From The End of Overeating, I also completely cut out snacks (snacks are actually a relatively new Western invention), focusing heavily on three meals a day, which increases the quality and enjoyment of actually eating food that I prepare carefully and well.


Finally, I joined a local CSA, which delivers a weekly box of vegetables. I got my first shipment today, which brought me 1.5 pounds of yellow peaches, 1.5 pounds of black plums, 6 oz. of blueberries, 1 pound of summer squash, 1 bunch of chard, 1/2 pound of gypsy peppers, 1/2 pound of lipstick peppers, 1.5 pounds of heirloom tomatoes, 1 bunch of green leaf lettuce, 1 bunch of red beets, 1 bunch of Nantes carrots, and 1 pound of red onions. All from local farms, all organic, and all for just $30.

In the past, when I heard people talking about stuff like this, they sounded like kooky hippies, so I suspect that’s how I sound to people as well. But if eating tasty, nutritious food that I prepare myself, losing weight, and saving money is a kooky hippie thing to do, I’ll take kooky hippie any day.

And the Pending Tests — They Shall Pass!

While we’ve been working on ActionController::Base, there have been a few tests for fixes in Rails 2.3 that were mostly hacked in; we were waiting for a cleaner underlying implementation to build them on top of. Today, we finally got all the pending tests passing!

There were basically two categories of pending tests:

Layout Selection

Rails 2 had a feature called :exempt_from_layout, which allowed users to specify that a particular kind of template (defaulting to RJS) should be exempt from layouts. The reason for this feature was that people were noticing that their RJS responses were being wrapped in their HTML layouts.

When I first saw this feature, I had a simple question: “Why were RJS templates being wrapped in HTML layouts in the first place?” And if you do want an RJS layout (application.js.erb), why shouldn’t you be allowed to have one?

As it turns out, the reason for this was that any number of parts of Rails render templates, and the layout lookup is completely separate. So when it came time to look up a layout, Rails didn’t realize that it had just rendered an RJS template (as far as it was concerned, since the available formats allow for HTML and JS, either would do).

The solution: Have template and layout lookup go through a deterministic process, and limit layout lookup to the mime type for the template that was actually rendered (not templates that might have been rendered).

The upshot: exempt_from_layout isn’t needed. If you don’t want a layout around your RJS templates, don’t make an application.js.*. If you do, do. Rails will do the right thing.

An aside: the hardest part of Rails layouts involves when to raise an exception for a missing layout. Since Rails allows layouts to be missing in certain circumstances, making sure an exception is raised in other circumstances is quite tricky. In master, we raise an exception if you explicitly provided a layout (layout "foo"), and none exist for any MIME type. Implicit layouts or explicit layouts that exist for another MIME are permitted to be absent. The exception to that rule is render :layout => true, which converts an implicit layout to a required, explicit one.


RJS and HTML Partials

Rails 2 has a bunch of hardcoded rules that allow RJS templates to render HTML partials. This allows page[:foo].replace_html :partial => "some_partial" to render some_partial.html.erb.

Effectively, when you got into an RJS template, the acceptable formats list was hardcoded to [:html]. Again, this blocks the use of RJS partials, and if you supply only an RJS partial, it will not be used (a missing template error would be raised).

When we thought about it, we realized that what is desired is to look for templates matching the mime type of the current template first, and if such a template could not be found, to allow templates matching any mime (with HTML leading the list). If you’re in an RJS template and you call render :partial => "foo", and only a foo.xml.erb exists, we can assume that you mean to render that template.

This handles all of the cases supported by Rails 2, with one small change. If you have both a js and html template, the js template will win inside of RJS. If you didn’t want the js template to win, why did you create it?

That rule now applies to any template. Partials matching the existing mime will be rendered if they exist, but any other mime will work fine as a fallback. So no special rules required for RJS anymore. The same rules apply for other templates (:file etc.) rendered from the context of another template.

One last thing. If you do page[:customer].replace_html :partial => "world", an html template will be required. When we noticed that we could separate out the rules for replace_html from other partials, we realized that we could hardcode more restrictive rules for that specific API than the general API.

And with that, the remaining pending tests pass. I really enjoy being able to solve problems that required hacks in the past by reorganizing the code to produce conceptually nicer solutions.

RubyGems: Problems and (proposed) Solutions

There’s been a fair bit of discussion around RubyGems lately, and some suggestions about what the core problems with RubyGems are.

People have the general sense that there’s something wrong with dependencies, and that it might have something to do with multiple versions being installed in one repository. It also seems (to people) that having require do magical things is Bad(tm). And in general, people like knowing exactly what versions of things are being loaded.

To some degree, all of these concerns are valid, and led to the rather hackish solution that we distributed with Merb called merb.thor. What we did:

  • Created a manifest for your application that described the gems and versions you wanted to use. That same manifest was used at runtime to load those gems.
  • Created a virtual environment just for your application, with a one-version-per-environment rule. This meant that it was always possible to see what versions and gems were being used.
  • Made it reasonably easy to update the local environment when the manifest changed, without requiring knowledge of the dependencies and versions of either the old or new gems.

What we did not do:

  • Put all the gems in a single directory, so normal Ruby require would work.

At first glance, this seems like a very good idea. Instead of relying on magical runtime load-path manipulation, just take, for instance, the merb-core gem, stick its files in a single top-level directory, and add that directory to the load path; then you don’t need Rubygems at runtime.

The problem with this fabulous idea is that there isn’t a consistent way that people use Rubygems. Consider the following scenario:

A gem called “bad-behavior” has a lib load path, but puts server.rb, initializer.rb, and omg.rb at the top level of lib. In omg.rb, the gem does Dir["#{File.dirname(__FILE__)}/*"].each {|f| require f }. This works fine when the gem actually owns the entire directory. But if you drop the gem’s files into a larger, shared file structure (similar to how other package managers handle the problem), its top level is now everyone else’s top level.

Another scenario: A gem called rack-silliness that puts its files in rack/*, and then calls Dir["#{File.dirname(__FILE__)}/*"].each {|f| require f } from rack/silliness.rb. Again, this works fine if the gem owns the entire directory, but if multiple gems put things in rack/*, moving everything to a shared structure will fail.
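The hazard is easy to demonstrate without any real gems. This sketch (the directory and file names are made up) simulates two gems' files merged into one shared top-level directory, and shows that the glob now matches files the gem does not own:

```ruby
require "tmpdir"
require "fileutils"

# Simulate two gems' files merged into one shared top-level directory.
files = Dir.mktmpdir do |root|
  shared = File.join(root, "shared")
  FileUtils.mkdir_p(shared)

  # A file from our hypothetical "bad-behavior" gem...
  File.write(File.join(shared, "server.rb"), "# bad-behavior")
  # ...and a file contributed by some other gem in the same directory.
  File.write(File.join(shared, "other_gem.rb"), "# someone else")

  # The glob that bad-behavior's omg.rb would run now matches both files,
  # so it would blindly require a file it does not own.
  Dir[File.join(shared, "*")].map { |f| File.basename(f) }.sort
end

puts files.inspect # prints ["other_gem.rb", "server.rb"]
```

When each gem owns its own directory, the same glob sees only the gem's own files, which is why the current per-gem layout papers over the problem.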

With all that said, if we *could* use a shared structure, things would automatically fall into place. We wouldn’t need rubygems at runtime. It would be easy to have separate environments with the one-version rule. It would be easy to have local environments. *All within the existing Rubygems structure*.

The solution I promised

So how do we solve this problem? We need to agree to deprecate everything but the following structure for Rubygems:

Given a gem foo, there should be a foo.rb at the top level, and optionally, a foo directory underneath. No other files or directories are allowed.

Update: What I meant here was lib/foo.rb and lib/foo/…, where lib is the directory that gets added to the load path. As a result, the vast majority of existing gems would not need to change.

Other solutions that work with Rubygems but use a single shared directory structure *assume* well-behaved gems only. If we could enforce well-behaved gems, we would both have an excellent solution in Rubygems proper, and make it easier for people to build additional solutions and plugins around the gem format.

So here’s my proposal: For the next version of Rubygems, print a warning if installing a gem that does not comply. Over the next few months, get the few existing gem authors who have non-complying gems to release new versions that comply.

At the same time, I will release a gem plugin that provides virtual environments and local environments for Rubygems (I have already been working on this). It will support the one-version rule, named virtual environments, a gem manifest for applications, and gem resolution (thanks to the hard work by Tim Carey-Smith on gem_resolver).

In the interim, we have a slightly clunky solution that will work well. Instead of putting all gems into a single load-path and using that, we leave the current structure (each gem has its own space). Then, when a gem is installed into an environment, we preresolve all load-paths, and keep a list of them. When you switch into an environment, we add those load-paths to the default set of Ruby load-paths, which will behave exactly the same, but still support misbehaving gems.

In the long-term, all gems will be able to live side-by-side in a single load-path, which will allow us to create a cleaner version of the virtual environments (and will improve startup times, especially on JRuby and Google App Engine, but won’t have any user-facing implications).

So, are we up for finally getting our gem packaging format under control?

P.S. I am aware that rip was just announced, and is attempting to do a lot of the same things. This blog post has been a long time coming (the ideas were hatched a year ago, and many are available today as part of Merb). What I’d like to do here is take the good ideas that exist in Merb, rip, and the Python community and make them native to Rubygems, addressing the problems I outlined above that are inherent to the transition. It’s perfectly fine for rip to simply require well-formed gems, but a solution that gets us from here to there as a community is important.

My Code Directory

So after a couple of weeks, I’ve managed to remain mostly clean. A couple of observations:

  • It’s crucially important to keep the Downloads folder clean. This means quickly finding a more permanent home for downloaded files or throwing them into the trash. The “broken window” impact of an anything-but-empty Downloads directory is higher than I expected.
  • Similarly, a strictly controlled Documents directory is crucial. I have “Presentations”, “Virtual Machines”, and “jQuery Doc Files” (for my book).
  • Applications has turned out to be more difficult than I expected. There’s a fine balance between keeping commonly used things at the top-level and keeping the top-level relatively small. Obviously, this is mostly obviated by LaunchBar, but part of this project is about making it a lot easier for me to understand what is on my system, so having a global trash bin isn’t really acceptable, even if it’s easy to rummage in it.
  • It’s very hard to control what gets installed in the system. I’ve tried to install as much as possible via MacPorts, just because I know I’ll be able to uninstall it later.
  • I’ve been using Adium for IRC, AIM, GTalk, and Twitter. Even though it’s not as good as the special-purpose tools for IRC or Twitter, there’s a lot of value in keeping everything in a single app, and being able to combine IRC users with their GTalk counterparts is a nice side-effect that has started to pay off over time. I definitely don’t expect everyone to do this (especially for Twitter), but I’d recommend playing with it and see if having all of your contacts in a single place is a win for you :)
  • Incidentally, part of what makes Adium for Twitter doable for me is Cotweet, which sends me a batched email every so often of the @wycats mentions on Twitter.

What about my code directory?

  • I divided the top-level into two directories: active and vendor. The active directory is for the projects I’m actively working on and have commit rights to (or a fork of).
  • That includes: rails, merb, rubygems, gem_resolver, and evented_jquery (private for now).
  • My vendor directory includes git or hg repos I’m watching and use frequently. Under vendor, I have a java directory, which includes the davinci project, jruby, jvmscript, ruby2java, and Rhino.
  • Right under vendor, I have adium, bespin, jquery, macports, matzruby, and rubymacros.

This may or may not scale over time, but again, it has the pleasing property of being able to determine, at a glance, what’s going on in my code directory for some aspect of my work. So far, organizing things reasonably well has helped me a lot to find what I need when I need it in the appropriate context.

Quest for a Clean Machine

My last machine finally died a slow, painful death, so I have the opportunity to start with a new, fresh machine. As usual, I begin with high hopes of keeping things clean and easy to navigate, but I anticipate failure. In an effort to stave off that failure, I’ll be blogging specific techniques (successful and failed) that I use to try and keep things organized.

My steps so far:

  • Update the system to the latest OS X. Repeat until there are no updates left.
  • Install Xcode. As an iPhone developer, I downloaded and installed the latest Xcode with iPhone support. Failing that, I would have just downloaded the latest from Apple.
  • I then went into the Help menu in Xcode and subscribed to Core Library, Java 5 API Reference, Apple Xcode 3.1, iPhone OS, and iPhone OS 3.0.
  • Install MacPorts.
  • Install AppZapper. This is the first thing I haven’t done before. AppZapper will remove other traces of an application, in addition to the application icon itself from /Applications.
  • Install LaunchBar. This is equivalent to QuickSilver, but I prefer LaunchBar. After installing LaunchBar, go into the Advanced Tab in Help and select “Hide Dock Icon”. This activates a hack that hides LaunchBar from the dock, while keeping it running.
  • Install Google Chrome. This isn’t strictly necessary, but I’ve been playing with Chrome for the past few days, and the fact that it avoids long-term memory leaks helps stave off the gradual degradation that eventually forces a machine restart.
  • Install and setup Adium.
  • Install bash-completion via ports. sudo port install bash-completion. After completing, add . /opt/local/etc/bash_completion to ~/.profile. Also, add +bash_completion to /opt/local/etc/macports/variants.conf. This will cause all future ports to also install their bash completion, if available.
  • Install git-core via ports. This will include bash_completion, because we added it to the default variants above.
  • Update Rubygems. gem update --system. Wait for it to bulk update your gems. This should be the last time you ever have to do this (thanks Eric!)
  • Use Calaboration to sync my Google Calendars with iCal.
  • Transfer music from backup to iTunes and setup iTunes to be using the correct user.
  • Resync my iPhone with iTunes.
  • Remove System Preferences and Time Machine from my dock.
  • Trash downloaded files from my ~/Downloads.

Next, I’ll talk about how I’m organizing my Code directory :)

Optimistic Sci-Fi

I had forgotten how much I missed optimistic science fiction. When we look back at the history of science fiction, the first decade of the 21st century will be remembered for an adventure into grittiness, pessimism, and exploration of the devils, rather than the angels, of our nature. Perhaps it was a needed excursion, but it has been an exhausting decade.

Perhaps science fiction simply reflects its era, and the last decade has certainly been an exhausting look at the devils of our nature in reality as well as fiction. Just as I am glad to see the pessimism and grittiness of the last decade give way to the hope and optimism of an Obama administration, I was glad to leave the movie theater from the new Star Trek movie feeling like the shadow of the last decade has lifted.

To be fair, television series like Battlestar Galactica were excellently produced, directed, and acted. The genre is better off for its existence; indeed, the darkening of Stargate SG-1, and the darker themes in Stargate Atlantis relative to its parent series made for compelling, interesting science fiction.

But I lost sight of the fact that optimistic stories, those that did not rely on racism, prejudice, and other human failings, had all but vanished from television. Even Star Trek, which always showcased the elimination of racism hundreds of years in our future, trotted out a grittier, conflict-ridden series in Enterprise, in which the ongoing conflict between the humans and Vulcans was a thinly veiled metaphor for modern-day racism and hatred.

The thematic changes are even reflected in the lighting. Even shows from the 90s like Babylon 5, which tried to paint a more realistic, gritty face on science fiction, featured relatively bright settings and an overall optimistic theme. After several seasons in which the heroes battled relatively evil (but sometimes ambiguous) villains, they emerged victorious, and the victory was soaring and uplifting. In contrast, virtually all shows in the last decade have involved morally ambiguous heroes and dark, gritty sets.

Again, there is certainly a place for that sort of thing, but the utter lack of optimistic, bright, unambiguous science fiction over the past decade has fatigued me in a way I hadn’t realized at all until I stepped out of the theater from Star Trek this weekend. Having gotten past the unpleasant future of the Enterprise world, Star Trek (the new movie) features a racism-free future, flawed but uplifting heroes, and bright, large sets that evoke feelings of optimism. I have to say: it felt good.