Data Visualization with JavaScript

Data Visualization with JavaScript is a nice-looking online book by Stephen A. Thomas:

If you’re developing web sites or web applications today, there’s a good chance you have data to communicate, and that data may be begging for a good visualization. But how do you know what kind of visualization is appropriate? And, even more importantly, how do you actually create one? Answers to those very questions are the core of this book. In the chapters that follow, we explore dozens of different visualizations and visualization techniques and tool kits. Each example discusses the appropriateness of the visualization (and suggests possible alternatives) and provides step-by-step instructions for including the visualization in your own web pages.

Faster Yosemite Upgrades for Developers

Faster Mac OS X 10.10 Yosemite Upgrades for Developers

Your Yosemite upgrade may take many hours if you’ve got anything non-Apple in your /usr folder (Homebrew, Texlive, or Mactex for example).

I don't think this solves the problem I had with the Yosemite install, but this article has good suggestions for anyone using Homebrew, etc. Note that this also applied to Mavericks, so this might be useful advice again in a year.


My React-in-Brackets Trilogy

I started experimenting with React in February or March, blogged about my experiments starting in May, committed to landing a Brackets feature in September and the latest release of Brackets now ships with React. Here are my three posts on the subject, in reverse chronological order:

I will likely be talking more about simplifying code soon, especially given my 1DevDay Detroit talk coming up next month.

The Brackets Scrum to Kanban Switch

The Adobe Brackets team has switched its development process from Scrum to Kanban. I thought that others might be interested in why we changed the process that we use to build an open source code editor built on web technology.

From What to What?

Wikipedia has a good article about Scrum. Boiled down, it goes something like this:

  • Work is broken into timeboxed sprints (ours were 12 days)
  • A sprint planning meeting determines what work will be done during the sprint, based on estimated “points” for a story and the expected “velocity” (number of points) for the sprint
  • No one is supposed to change the work that is committed to for the sprint (we had some exceptions in practice)
  • At the end of the sprint, we had a review meeting where we showed off the completed work for the sprint
  • There would also be a retrospective meeting where we discussed how the work went and what we should do differently
  • Daily meetings would keep people in sync

Wikipedia also has an article about Kanban. Here’s a quick summary:

  • Visualize the flow of work
  • Limit the amount of work-in-progress (so that the focus is on shipping)
  • Evolve the process as needed

As you can see, Kanban is not very prescriptive. As the Wikipedia article says, you “start with what you do now”. I can imagine Scrum being a more comfortable starting point for people moving from a more “heavyweight” process because it gives you a clear path forward.

Why Change?

We’ve been successfully shipping software with Scrum for a while, so it’s reasonable to ask why we would make this change. Since we’ve been holding retrospectives, couldn’t we just tweak the process along the way?

Based on our findings from our retrospectives, we did make minor tweaks to our way of doing things. But if we made too many changes, or certain kinds of changes, then we’d no longer be doing Scrum.

Here are some of the problems that we sought to fix:

Sprint Boundaries

The end of a sprint was boring, overly exciting, or both at the same time. Some sprints, everyone had to work really hard at the end to get the stories wrapped up and done. Other sprints, we might have a single engineer who's up against the time box while everyone else is just fixing bugs and doing random other work.

It was always an option to start working on the next story in the product backlog, even though it wouldn’t ship during the same sprint. But, it felt unnatural to the process to do so. Besides, we had plenty of bugs we could fix, which brings me to…


Bugs

A sufficiently complex project with a significant number of users who have an easy way to report bugs means that a lot of bugs will be reported. As Brackets has become more popular, so has our bug tracker.

We have a process by which we prioritize bugs, but there’s still an awful lot of them. Our “medium” priority bugs are annoyances that we’d really like to fix. But there are also a lot of features that we’d really like to have.

Our process was such that bugs were counted as “overhead”. They didn’t get assigned “points” and they weren’t prioritized along with the feature work. Bugs also seemed to be the default work that someone would do when they’re not working on a story, like at the end of a sprint.

With a project like Brackets, it’s easy to argue for some features being more important than nearly all open bugs, but we didn’t put the two head-to-head that way. For example, it’s not good that app layout is a bit laggy when you resize the window, but I would bet that the ability to split the editor to show multiple files will make people happier.

This could have been fixed in our Scrum process, but we likely would have had to make some changes to how we pointed stories.

Research Stories

We’ve had a number of features that we wanted to implement but needed to learn more about before we could reasonably come up with criteria and estimates for the implementation.

Our process for this was to create a Research story card, the output of which would generally be well-defined stories to properly create the desired feature.

The trouble with this process is that the developer doing the research may wrap up the work after a few days, and then we’d have to wait until the next sprint to start working on the implementation, because you’re not allowed to change the committed stories for a sprint.

The purpose of putting the research story there in the first place is because our product owner thinks that’s a feature we need to have—in other words, to build the feature in some form. The gap between the research and the implementation felt a bit artificial.

Estimates and Velocity

Software estimation is hard, because almost everything we do has some elements to it that are novel. For most of our stories, our estimates (in points) seemed pretty reasonable. Our typical story cards would wind up in the 2-5 point range, with most landing at 2 or 3. I think the difference between 2 and 3 was pretty fuzzy, and sometimes you’d have a 5 pointer that turned out to be quicker than a 2 pointer.

A 2 point story would pretty much consume someone for an entire sprint (12 days). A 1 point story would probably take someone a couple of days of work.

We would add these points up to determine what we can fit into a sprint, based on our expected velocity. This was successful for the most part, modulo the end of sprint situations that I talked about earlier.

Velocity itself was based on how many points we were able to complete previously. There were sprints in which this got confusing, however. For example, we had a crashing bug come up late last summer that forced us to displace some work (one of those cases where we needed to change the commitment for a sprint). It was hard to calculate the effect that crasher had on our velocity for a couple of sprints.

We could have adjusted the way we did our estimates, possibly rebooting our notion of points. Switching to Kanban, we embrace a bit of uncertainty in the estimates and use calculated averages for…


Release Planning

A question that I have been asked repeatedly during this transition is around planning for specific releases or for a release that coincides with a conference or some other special event.

With Scrum, this sort of planning is built into the process. You estimate all of the stories in question, add them up, and divide by the velocity to find out how many sprints it'll take. Or, compute how many sprints you have and then choose which stories fit best into the time you have.
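The Scrum arithmetic is simple enough to sketch in a few lines of JavaScript (the point values and velocity below are made-up illustration numbers, not real Brackets data):

```javascript
// Scrum-style release planning: total story points / velocity = sprints needed.
function sprintsNeeded(storyPoints, velocity) {
  const total = storyPoints.reduce((sum, p) => sum + p, 0);
  return Math.ceil(total / velocity);
}

const backlog = [2, 3, 5, 2, 3, 1]; // estimated points per story (hypothetical)
console.log(sprintsNeeded(backlog, 6)); // 16 points at 6 points/sprint → 3 sprints
```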

In the Kanban process, you can get the same kinds of useful estimates by computing “cycle time” and “throughput”. Cycle time is the average time it takes for similar-sized stories to work their way across the board. Throughput is how many similar-sized stories are done over a given period of time. Using this combination of data, you can do the same sorts of planning you can do in Scrum. As a bonus, cycle time and throughput are easy to compute, and should be easy to adjust when exceptional conditions arise.
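Here's a rough sketch of how those two metrics fall out of a board's card history. The card shape and timestamps are invented for illustration; a real board tool would supply this data:

```javascript
// Each card records when it entered "In Progress" and when it reached "Done".
const MS_PER_DAY = 86400000;

// Cycle time: average days for a card to cross the board.
function cycleTime(cards) {
  const days = cards.map(c => (c.done - c.started) / MS_PER_DAY);
  return days.reduce((sum, d) => sum + d, 0) / days.length;
}

// Throughput: cards completed per day over a given window.
function throughput(cards, periodDays) {
  return cards.length / periodDays;
}

const done = [
  { started: Date.parse('2014-06-02'), done: Date.parse('2014-06-06') }, // 4 days
  { started: Date.parse('2014-06-03'), done: Date.parse('2014-06-09') }, // 6 days
];
console.log(cycleTime(done));      // 5 days on the board, on average
console.log(throughput(done, 10)); // 0.2 cards/day over a 10-day window
```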

The ability to do release and time planning is a strength of Scrum and something people were concerned about with the move to Kanban. Thankfully, this is territory that others have covered.

Less Time in Meetings

This wasn’t a driving factor, but our Kanban process has us regularly filling up a “Ready” column for the next bit of work to do. We don’t spend more than a few seconds estimating stories (they’re small, medium, or need to be broken down further). We no longer spend any time planning out the next sprint. Our product owner just ensures that our Ready column reflects what he thinks we need to work on next.

What Happens to the Existing Trello Board?

We have a board that we use for tracking our in-progress Kanban work. It’s on Trello today, but we’re not certain it will remain there because there are other tools that are more Kanban-oriented that will do things like calculate cycle time and throughput for us.

The Brackets Trello board that we’ve had since the project’s public release contains a lot of information about features that Brackets users are interested in. Jonathan Dunlap, our product manager, will be revamping that board to better reflect our roadmap. Keep an eye on the Brackets blog for more on the roadmap.


We’ve only just made this switch, so I can’t yet comment on how well it has worked for us. I have enjoyed reading about other people’s experiences with their software development processes and thought I would share in kind. Plus, there’s a large community of smart people out there, and I’m sure there are many suggestions that people might have. If you have a comment on this article, please add to this thread in the Google group.

Finally, Brackets is open source and I thought it would be valuable for the Brackets community to have an idea of how we work.


Web developer personas (are you in there?)

User personas are a useful tool when you want to discuss needs that users have and, ultimately, features that meet those needs. Personas help keep you talking about real people versus some random “user”.

I’d like to build up a collection of web developer personas that we can use in our product and feature planning. The idea is that each persona will represent a (largely)[1] distinct collection of web developers. We can use these personas when we’re crafting product requirements, though we will likely need to customize them to properly describe specific kinds of problems we’re trying to solve.

Are you a web developer? If so, please take a quick look at the initial set of personas I’ve created and let me know if none of them reflects the workflows and tools you use regularly.

Footnotes    (↵ returns to text)

  1. There’s actually quite a bit of overlap in terms of tools that different web developers regularly use, but there is also a wide range of workflows and a wide variety of tools at the edges. For example, almost everyone uses browser-based styling tools for making tweaks to their styles. But, for editing their CSS files, people use all kinds of different editors. Some people might use the ReCSS bookmarklet to reload their CSS automatically. Others use LESS or SASS with some kind of build step before they preview their pages. This variety is part of the reason it’s helpful to have some representative personas and talk in terms of what they’re trying to accomplish, rather than how they do it.

What is a “developer”?

My goal as a product manager at Mozilla is to represent the needs of web developers well and make sure that Mozilla is doing what we can to help them.

I came to the realization last night that when I talk with others about what “developers” need, the picture of that “developer” that appears in different people’s minds can be quite different from my own. In fact, while I may imagine someone sitting at a keyboard getting frustrated at trying to make something work, someone else may be thinking of a company that “wants” to get an app into Mozilla’s Marketplace.

In my view, when you’re figuring out what you need to build, imagining a company is almost always not what you want. Companies don’t do anything. People do. People have a variety of reasons for doing the things they do, and understanding what those people are trying to accomplish is key to building good products.

This is what I like about personas [1]. Personas describe realistic people, allowing you to empathize with them and ensuring that you’d never mistake a person for a company. They can help give clarity to which things are important to build and also help catch gaps. Imagine a coworker coming up to you and saying “Can you believe I just met someone who was trying to make our product [do something outlandish]?”. It’s possible that the person in question is an outlier that you can safely ignore. But, it’s also possible that there are other, similar people out there and adding a new persona to the mix may open up a whole new market.

All of that said, it’s perfectly reasonable in many contexts to talk about a company as a “developer”. “Mozilla is the developer of the popular Firefox web browser”, for example. There are certainly times in product development where talking about a developer as the entity that controls an app in the Marketplace is perfectly reasonable. But, when you get down to planning features, I think it pays to think about individual people.

Footnotes    (↵ returns to text)

  1. In this use, “persona” is a generic industry term, not to be confused with Mozilla Persona, the awesome identity system for the web.

GitHub adds a command line, and so should you!

Yesterday, GitHub announced their new “Command Bar”. I am a fan of command lines, and this is an awesome addition for navigating GitHub. I love being able to get more done without pulling my hands away from the keyboard.

You may remember that we’ve added a command line to the Firefox developer tools. It’s currently in beta, slated for release in about 3 weeks.

My reason for bringing this up is that if you want to add keyboard-controlled command line goodness to your webapp, you can do so easily. The command line in Firefox is actually a separate open source (BSD-licensed) project called GCLI that you can incorporate into your own webapps!
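To give a flavor of the idea, here's a minimal command-registry sketch. To be clear, this is not GCLI's actual API — just the general shape of what a keyboard-driven command line in a webapp boils down to: named commands with handlers, plus a dispatcher that parses input.

```javascript
// A toy command registry -- NOT GCLI's API, just the underlying idea.
const commands = new Map();

function addCommand(name, exec) {
  commands.set(name, exec);
}

function run(input) {
  const [name, ...args] = input.trim().split(/\s+/);
  const exec = commands.get(name);
  if (!exec) return `Unknown command: ${name}`;
  return exec(args);
}

addCommand('echo', args => args.join(' '));
console.log(run('echo hello world')); // "hello world"
```

GCLI layers much more on top of this — typed parameters, completion, inline help — which is exactly why it's worth incorporating rather than rolling your own.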

Fork GCLI on GitHub and let’s see a thousand command lines, um, bloom.

An Important Role for SVG

Yesterday, I came across a link to JustGage, a JavaScript library that uses Raphaël.js to create attractive, dashboard-style gauges. Since it uses Raphaël, which in turn builds upon SVG, these gauges will be resolution-independent. Thanks to vector-based graphics, they’ll look smooth at basically any size.

This morning, I saw John Gruber’s ode to lots of pixels. When Apple introduced the iPhone 4, their first product with a “Retina display”, they took the first massive, single-step leap in pixel density since we started reading off of glowing screens decades ago. As Gruber points out, displays crept up from 72ppi to 110ppi over the decades, before jumping to more than 300ppi on the iPhone 4. Now, we’re seeing high-dpi screens all over the place.

The trouble with the high-dpi displays is that bitmapped image files need to be much higher resolution than we have been making them. On the web, that means more bandwidth used. For apps, that means larger app sizes.

Which brings me back to JustGage. It’s nice to see libraries like that and the popular D3.js, because no matter what the resolution of the display, these visualizations will look good.

Gruber mentions this about the new high-resolution displays:

The sort of rich, data-dense information design espoused by Edward Tufte can now not only be made on the computer screen but also enjoyed on one.

Ironically, it was the combination of JustGage and Gruber’s article that made me think about just how important SVG will be in this new era of screens of many sizes and resolutions. Ironic, because I’m pretty sure that the kinds of visualizations provided by JustGage are exactly the sort of low-density displays that Tufte seems to dislike[1].

Footnotes    (↵ returns to text)

  1. D3, on the other hand, is capable of some very nice displays, including the slopegraph that Tufte himself proposed.

Editing HTML in the browser

Live editing of CSS in the browser works great. You get to see your changes immediately, and there’s nothing like live feedback to keep you rolling, right? The Style Editor in Firefox makes it really easy to edit your CSS and save when you’ve got everything looking just right.

CSS is purely declarative, which makes it a bit more straightforward to implement live editing. Swap out the rules and the new rules take effect without weird side effects.

HTML is declarative as well, but there are two problems with “live HTML editing”:

  1. You’re changing the Document Object Model (DOM), not HTML
  2. What you’re changing is often not the same as what you started out with


You’re changing the DOM, not HTML

I won’t dwell on this, but the first problem is that even if you’re looking at a “static HTML file”, once the browser grabs onto that HTML it turns it into a DOM. The browser’s tools will show you what looks like HTML, but it’s really just a visual representation of the DOM. JavaScript can manipulate the DOM and change things entirely from what was sent down the wire. JavaScript can also add properties to DOM objects that don’t even appear in the “HTML view” that the browser shows you.
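Here's a contrived sketch of that last point, using a toy object rather than real browser internals: serializing back to markup only captures tags, attributes, and content, so anything JavaScript hangs off the object is invisible in the "HTML view".

```javascript
// A toy stand-in for a DOM element; real elements behave the same way in spirit.
function serialize(el) {
  const attrs = Object.entries(el.attributes)
    .map(([k, v]) => ` ${k}="${v}"`).join('');
  return `<${el.tag}${attrs}>${el.text}</${el.tag}>`;
}

const el = { tag: 'div', attributes: { id: 'repo' }, text: 'brackets' };
el.clickCount = 42; // a JS "expando" property -- never appears in the markup

console.log(serialize(el)); // <div id="repo">brackets</div>
```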

Of course, using Firebug (and, soon enough, Firefox as well) you can make live changes to the DOM in an HTML-like view. Some of these changes may mess up things like event handlers, but you can still try out your changes.

The real problem is…

What you’re changing is not what you started with

A fundamental bit of building a webapp is taking some data and plugging it into an HTML/DOM structure so that the user can see it and interact with it. Consider this view of Mozilla’s GitHub repositories:

Let’s say I wanted to add a new bit of information that’s already in the database into this compact repository view. I could open up Firebug and add it to the DOM. This would at least allow me to try out different DOM structures and get the styling right. However, I’d need to go back to my server code (I’m assuming the view above is generated by GitHub’s Rails code) and make the change for real there. Plus, I’d only get to see how the change looks on one of the elements, until I’ve updated the server and reloaded.

Back when I was working on TurboGears (2005), almost all of the stuff that ended up in the DOM got there from HTML that was rendered on the server. Then, as today, people used all kinds of template engines on the server to take a mass of data and munge it into some kind of HTML structure for the browser to display.

Many developers have been experimenting (with varying degrees of success) with moving the conversion from “hunk of data” to DOM into the browser. That means that the browser will take a hunk of data, combine it with some sort of template and then update the DOM with the new content[1]. So, the browser has the data and the template in hand.

What if the browser tracked the relationship between data, template and elements of the DOM? Wouldn’t it be cool to fire up the Page Inspector, point it at one of those repositories in the screenshot above and then tweak the template rather than tweaking the DOM? If you add that new field to the template, then all of the repositories on screen would immediately be refreshed.
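To make that concrete, here's a tiny sketch using plain string templating and invented data. Because the client-side code holds both the data and the template, re-rendering every item after a template tweak is trivial — which is exactly what a template-aware inspector could do live:

```javascript
// Invented repository data, purely illustrative.
const repos = [
  { name: 'brackets', watchers: 29000 },
  { name: 'gcli', watchers: 300 },
];

// Render every item through the current template.
function render(template, items) {
  return items.map(template).join('\n');
}

let template = repo => `<li>${repo.name}</li>`;
console.log(render(template, repos));

// "Tweak the template" -- add a field -- and every item updates on re-render.
template = repo => `<li>${repo.name} (${repo.watchers} watchers)</li>`;
console.log(render(template, repos));
```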

Even better, once you’ve got everything looking the way you want, you could save your template to disk[2], much like you can with a CSS file in our Style Editor right now.

How do we get there from here?

The creators of one of the MV* frameworks could make a browser-based editor that understands the templates used in their system and lets you make on-the-fly tweaks. The disadvantage to that approach is that it only benefits that one framework. And, even for users of that framework, that editor would stand apart from the rest of the browser-based tools (would they create their own CSS editing in addition to template editing?).

Another way to go would be something like the Model-driven Views (MDV) project. The idea there is to add templating directly to HTML and the web platform. MDV is nice because changes to the data are automatically propagated back into what the browser displays. I don’t know if MDV would also automatically propagate changes to the template back into the display, but that would be the idea.

There are many different opinions on how templates should work. I’d be happy if we pick one approach, make it a part of the web and enable smooth live editing of the entire appearance of our sites and apps.


Mardeg writes:

The “Model-driven Views” puts <template> tags inside the html and relies on them being parsed with javascript, whereas it might be feasible to use javascript to generate/alter a template from combined MicroData and explicit role attributes already existing within the page, which wouldn’t affect how the page is viewed with javascript disabled.

If MDV becomes a part of HTML (as in “baked into the browser”), then it may not rely on JS at all. I like the idea of being able to combine a template with microdata. Personally, I think we’re reaching a point where if you want to run web applications you really need to have JS on.

Footnotes    (↵ returns to text)

  1. I’m sure some people just build the DOM imperatively with JavaScript, but I’m going to ignore that case for this post.
  2. OK, this certainly depends on where and how the templates are actually stored. I would personally make it a goal to be able to easily save (or at least copy/paste) templates back to their origins.