The Chrome team launched a new beta today, for the first time supporting Mac OS X, Linux and extensions. This release represents the culmination of over a year of work by a large group of people, and I’m very proud of what the team has accomplished. I want to take the opportunity to talk a little bit about extensions.
The Chrome extensions API provides users and developers with the ability to customize the browser while adhering to the Chrome core principles of speed, stability, security and simplicity.
You can watch videos about Google Chrome extensions here.
I’ve been working on browser stuff for a while, and people sometimes find my stories interesting, so here’s a little extensions history…
Years ago, when Netscape was building Netscape 6 (its suite of browser, mail, page editor and IM client), a product requirement was that it be possible to install the browser with or without any of the “optional” components - mail, IM, editor, and so on. When you installed those components, “extra bits” (menu items, toolbar buttons, etc.) would appear in the browser UI allowing you to launch mail, IM, and the rest.
The Netscape front end was implemented in XUL, a cross-platform UI language. The optional components specified things called “overlays” that allowed them to add in their “extra bits” when they were present. The optional components packaged these overlays and the other bits of their logic into “XPI” files that were installed by the install engine.
A few years later, Netscape’s successor Firefox was growing in popularity, and developers were discovering that it was possible to use this component install mechanism to add other functionality to the browser. The mechanism was never really designed for consumption outside of Netscape’s own products, and so not much thought had been given to what APIs should be exposed within the browser UI. It was really an internal API. So when people began making these “extensions” for Firefox, any change to Firefox itself could cause an extension to prevent the browser from starting. In the early days of Firefox, it wasn’t uncommon for a user who had an extension installed to see the error “no XBL binding for browser” when they upgraded to a new version of Firefox.
It became clear that we’d have to fix this before Firefox could reach 1.0, so I put together an extension manager that offered a less hacky install/uninstall path and more importantly added versioning. Given the lack of stable APIs, the system would simply disable extensions not advertised as being compatible with the current version of the browser. This meant developers would have to certify their extension with every new version of Firefox. It wasn’t perfect and was somewhat cumbersome, but it worked, and didn’t require us to freeze a bunch of APIs which we most certainly would have botched if forced to do it in haste.
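To give a flavor of how that gating worked: each extension advertised a minimum and maximum compatible application version, and the extension manager disabled anything whose range didn’t cover the running browser. Here is a much-simplified sketch - the names are mine, and real Firefox version strings were more elaborate than plain dotted numbers:

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Parse a dotted version string like "1.0.2" into numeric parts.
// (Simplified: real toolkit version strings also allowed suffixes.)
std::vector<int> ParseVersion(const std::string& version) {
  std::vector<int> parts;
  std::istringstream stream(version);
  std::string piece;
  while (std::getline(stream, piece, '.'))
    parts.push_back(std::stoi(piece));
  return parts;
}

// Compare two dotted versions component by component; missing
// components are treated as 0. Returns <0, 0, or >0.
int CompareVersions(const std::string& a, const std::string& b) {
  std::vector<int> va = ParseVersion(a), vb = ParseVersion(b);
  size_t n = std::max(va.size(), vb.size());
  for (size_t i = 0; i < n; ++i) {
    int x = i < va.size() ? va[i] : 0;
    int y = i < vb.size() ? vb[i] : 0;
    if (x != y)
      return x < y ? -1 : 1;
  }
  return 0;
}

// An extension stays enabled only if the running browser version falls
// inside the [min, max] range the extension advertises.
bool IsCompatible(const std::string& browser_version,
                  const std::string& min_version,
                  const std::string& max_version) {
  return CompareVersions(browser_version, min_version) >= 0 &&
         CompareVersions(browser_version, max_version) <= 0;
}
```

So an extension declaring a range of 0.9 through 1.0 would be disabled the moment the user updated to 1.5, until its developer shipped a new manifest - exactly the re-certification burden described below.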
Firefox extensions have always been a double-edged sword - they offer immense flexibility at the cost of forcing developers to re-certify with every browser update. When developers are late to do so, users who upgrade quickly have to do without their extensions for a while, or disable the compatibility verification step at their own risk.
I think this historical note is more than just interesting for anecdotal value. If this “back door” into browser customization hadn’t existed by virtue of Netscape’s componentized install, it’s not necessarily a given that Firefox would _ever_ have had extensions. Think about that. It’s awesome that it did, because it’s a feature users love. Because it did, and because users love extensions, now we on the Google Chrome team have the luxury of developing a new API from scratch that represents what we hope is the best of both worlds - customization and Chrome’s core principles.
The Firefox experience was immensely valuable. I am not one of the engineers that has personally done a lot of work on extensions in Google Chrome, but I have enough battle scars to have some thoughts about how they should function. I am really pleased with the approach the Chrome extensions team has decided to take, and so kudos to them for getting us to this first milestone!
I was reading the comments thread on Slashdot about the new Firefox UI design. One comment referenced a Mozilla bug, “418864 - Bookmark contextual dialog is not resizable.”
One comment reads: “I think it’s a poor design not to provide the user an option to [..] revert things back to the way it was before in older Firefox version.”
Regardless of whether the bug itself is worth fixing, I think this specific sentiment is actually dangerous. I’ve noticed a growing amount of it in the discussion threads surrounding Firefox UI design.
It’s dangerous because it means that for every new feature or modification to the UI, the cost doubles to include a legacy code path. Over time, the effect of this is that the codebase bloats without bound, beyond the capabilities of your testing staff, and quality slips. It’s dangerous because it raises the opportunity cost of exploring new UI designs. As a result, things change less.
What I think has made Chrome successful in UI design is that we have pushed hard to retain a consistent core vision that the team at large understands, but where the design is done by a smaller core group. Glen mentioned to me once that his theory is that the more people you add to a design process, the slower it becomes and the less productive output you yield. This might seem at odds with Mozilla’s “inclusive” approach to software development, but I’ll just note that in the early days of Firefox the team was much smaller, and as a result could make rapid progress. There were times when various elements within the community wanted something and the answer was simply “no” - no option for the old way, no promise to support it via extensions. If you can make it work, good for you. But that’s the cost of progress.
These days, Mozilla has some talented visual designers contributing to it. The challenge will be to give them the creative autonomy and the ability to say “no,” so that the UX aspect of the project does not get mired in quicksand.
But my favorite aspect of this launch is that it continues our commitment to ship features frequently. In the early days of Firefox, we did a preview release launch about once a quarter, and the growing community at the time loved how quickly we were making progress and the fact that they could see it so easily. This inspired us when we designed the launch process for Chrome. By focusing on tightly scoped stable releases, we are able to move quickly and deliver features to all Chrome users as quickly as possible (typically once per quarter). People interested in testing newer features can find them in the beta channel, which updates roughly once a month. And those who want the bleeding edge can find the very newest features and experiments on the Dev channel, which updates once a week.
Much has been written about the joys of developing web software and how the development model allows for frequent checkpoints with your user community. By developing a seamless autoupdate system we’ve been able to simulate this with client software and as a result we enjoy many of the same benefits.
The only thing more exciting to me than this system and what it’s allowed us to do so far is what it’s going to allow us to do in the future. 2010 is going to be an exciting year!
My iMac, which 3 years ago was state of the art, now struggles to build Chrome in less than an hour. Sad? No, that’s progress.
What is sad is that the current top of the line iMac is not much better - the best config iMac (3.06GHz/24″/8GB RAM) is still shipping with the archaic Core 2 Duo.
In 2009, now that the Core i7 is out, no one should be selling a high end consumer system powered by the Core 2 Duo - it’s junk fit for the garbage can. And yet here Apple is charging $3200 for one. Not since the days of PowerPC has Apple been so behind in CPU horsepower.
Well sure, you might say: the iMac is consumer grade, not pro grade - not meant for developers compiling software; that’s the realm of the Mac Pro. But why not? When I bought my iMac in late 2006, it was actually the fastest thing you could buy, full stop (except maybe for some crazy $1500 Xeons). I was buying something that not only looked good and saved space, but something I couldn’t complain about when I used it for work. Intel’s kept up its end of the bargain, developing the amazing new Nehalem CPUs, which deliver simply astounding multitasking performance. But despite their being available for almost a year now, Apple has yet to ship them in a consumer line system.
Where does this leave me? With a loud, ugly Windows workstation again. I am not happy about it. But it’s fast enough for me to get work done on.
The Mac Pro might be a contender if it weren’t so brazenly overpriced. The Windows system I bought can now be had for ~$1500; a Mac Pro with matching Chrome build performance costs about $6000. Come on Apple, not all developers are zillionaires comfortable with dropping the price of a brand new Nissan Versa (w/ Cash For Clunkers) on something that will be a paperweight in a couple of years.
“[John Key] says if the law does not work and good parents get criminalised for lightly smacking a child, the law should be changed.
But he says it is hard to put up a case to change the law when no one has yet been prosecuted.”
So wait, some “good parents” need to be unfairly victimized by a law that by his own admission most of the country opposes before it can be repealed? How is that fair?
About two and a half years ago, I bought a 1964 ranch house on a hillside in Los Altos Hills, California. Ever since moving in, I’ve been thinking about fixing it up. The house was neglected, dark, riddled with spiders, and had a series of plumbing problems. Towards the end of 2008, I finally cracked the problem of how to effectively lay out the interior space, and in February I retained an architect to draw up my plans. More recently, I’ve set to work obtaining permits. The required modifications should now be in, and I hope to get started soon. As a premature celebration, I took to the sheetrock in one of the west wing bedrooms with a hammer.
Most recently, I’ve hired an interior designer to help with final finish selections and fixtures. My hope is that once complete this house will sparkle brighter than it ever has, even when it was brand new.
Once construction gets going, I’ll post photos and specs for the changes being made.
I noted with some interest this thread over at mozilla.dev.platform. The Mozilla codebase has historically included a variety of different coding styles, since style for a given file was left up to the person writing the code. The discussion caused me to reflect a little about what we’ve been doing in Chromium, especially since I spent a number of years working in the Mozilla codebase (at times contributing a few strange experimental styles).
The Chromium project, having had the luxury of a clean start, decided to inherit its coding style guidelines from Google. We tend to use C++ for most of the application code except for the Mac front end, which uses Objective-C, for which Google has another style guide. We prefer not to fork third party components but rather develop improvements to them “upstream” in their respective projects, which retain their own style. The most notable of these is WebKit.
In the beginning, I found several aesthetic aspects of the Google C++ style not to my taste. However, as the project has grown, I have found having a uniform style across the entire codebase to be very soothing. You can go from user interface code to the bowels of the network stack and find the same style. It requires fewer subtle context switches. Because the Google C++ style guide tends to be very specific about a great many things, the areas where it is silent stand out more when there are variances. We’ve tried to document where, as a project, we’ve filled in some of these gaps.
One of the interesting things about the Google C++ style guide (the one I am most familiar with) is that in many cases it goes beyond the aesthetic, covering use of language features. Other projects like Mozilla cover this in their portability guidelines, but the Google C++ style guide makes recommendations for other reasons too. For example, multiple implementation inheritance is generally banned, because it is easily misused to create spaghetti object hierarchies that are not easily comprehensible. In fact, more than a few of the style guide sections echo tips from Scott Meyers’ excellent “Effective C++”.
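The guide’s compromise, as I understand it, is that a class may inherit implementation from at most one base class; any further bases must be pure interfaces (nothing but pure virtual methods and a virtual destructor). A small sketch of the allowed shape - the class names here are hypothetical, not from Chromium:

```cpp
#include <string>

// A pure interface: only pure virtual methods plus a virtual destructor.
// Inheriting from any number of these is fine under the guide's rules,
// because interfaces carry no state to tangle the hierarchy.
class Serializable {
 public:
  virtual ~Serializable() {}
  virtual std::string Serialize() const = 0;
};

// A concrete base carrying implementation (state and non-virtual
// methods). A class may inherit from at most one of these.
class NamedObject {
 public:
  explicit NamedObject(const std::string& name) : name_(name) {}
  const std::string& name() const { return name_; }

 private:
  std::string name_;
};

// OK: exactly one implementation base plus a pure interface, so the
// object hierarchy stays a simple tree rather than a tangled diamond.
class Bookmark : public NamedObject, public Serializable {
 public:
  explicit Bookmark(const std::string& name) : NamedObject(name) {}
  std::string Serialize() const override { return "bookmark:" + name(); }
};
```

What the guide forbids is the variant where two or more bases each bring their own state and implementation - that is where the spaghetti hierarchies come from.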
Having as large a style guide as we do, there tend to be a lot of code review comments about conforming to it. Rather than nit-picking that obscures the larger benefit of the work, I think these comments actually serve that larger benefit. We are mindful that our ability to rapidly prototype and ship new ideas is key to our relevance, and maintaining good hygiene is a key component of that. We want newcomers to get started quickly and old-timers to pursue larger changes efficiently. In the end, I feel that the greater good of a uniform style far outweighs the value of an individual developer being able to use their preferred style.