Web developer personas (are you in there?)

User personas are a useful tool when you want to discuss the needs users have and, ultimately, the features that meet those needs. Personas help keep you talking about real people rather than some random “user”.

I’d like to build up a collection of web developer personas that we can use in our product and feature planning. The idea is that each persona will represent a (largely)[1] distinct collection of web developers. We can use these personas when we’re crafting product requirements, though we will likely need to customize them to properly describe specific kinds of problems we’re trying to solve.

Are you a web developer? If so, please take a quick look at the initial set of personas I’ve created and let me know if none of them reflects the workflows and tools you use regularly.

Footnotes
  1. There’s actually quite a bit of overlap in terms of tools that different web developers regularly use, but there is also a wide range of workflows and a wide variety of tools at the edges. For example, almost everyone uses browser-based styling tools for making tweaks to their styles. But, for editing their CSS files, people use all kinds of different editors. Some people might use the ReCSS bookmarklet to reload their CSS automatically. Others use LESS or SASS with some kind of build step before they preview their pages. This variety is part of the reason it’s helpful to have some representative personas and talk in terms of what they’re trying to accomplish, rather than how they do it.

r2d2b2g is becoming the Firefox OS Simulator!

It’s been a month since Myk Melez posted an introduction to r2d2b2g, a prototype Firefox add-on that makes it easy to try out B2G (Mozilla’s new, completely web platform-based mobile OS) on your desktop/laptop computer. Just as B2G is growing up to become Firefox OS, r2d2b2g is growing up to become the Firefox OS Simulator.

The work of Myk, Harald and Matt is coming along nicely, and you can try it out today by picking up Wednesday’s r2d2b2g 0.6 release.

Quite a bit has changed since Myk’s blog post, so I’ll give a run-through of the Simulator as it stands today. The biggest difference between the early r2d2b2g releases and the latest is that it is now much easier to install apps that you’re working on into the Simulator. Let’s take a look!

The Simulator has moved into the Web Developer menu.


The Simulator also integrates with the command line on the Developer Toolbar. Use the firefoxos manager command to jump into the Simulator Manager, just as the menu item does.

Here’s the Simulator Manager itself:


On the left, you’ll find some navigation controls including a switch that lets you start and stop the Simulator. The Simulator itself is still B2G Desktop, which is a build of B2G that runs natively on Windows, Mac and Linux. You can also start and stop the Simulator using the firefoxos start and firefoxos stop commands in the Developer Toolbar. The “JS Console” checkbox allows you to start up B2G Desktop’s Error Console so that you can spot errors that might arise while you’re working on your apps.

In the screenshot above, you’ll see that I’ve already installed a couple of apps. You can add apps by providing a URL to a site (with autocompletion based on your open tabs) or, even better, a URL to a manifest (so that the app can have a proper icon and such). You’ll need a manifest file anyhow to submit to the Firefox Marketplace, so you might as well start out with that early on.

You can also locate a manifest file on your local computer so that you can create a packaged app (no web server required!).

In the screenshot above, you’ll see that I installed James Long’s Weather Me app straight from GitHub and Fred Wenzel’s Serpent game from a local clone of its git repository. I’ll note that I did have to tweak the manifest for Serpent a little bit, because it was set up to deploy to GitHub rather than from its local directory. Changing just a couple of fields was all it took and then it worked great!
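For reference, an app manifest is just a small JSON file. The field names below (name, description, launch_path, icons, developer) are the standard Open Web App ones; the app name and values are made up for illustration. launch_path is the kind of field you might have to tweak when switching between a hosted deployment and a local directory:

```json
{
  "name": "My App",
  "description": "A demonstration app manifest",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Jane Example",
    "url": "https://example.com"
  }
}
```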

With those set up, I clicked the switch that says “Stopped” to fire up the Simulator. Then, I unlocked it with a swipe of the mouse, and swiped left on the home screen to get to my apps:

As you can see, the Weather Me and Serpent apps are installed and ready for testing! One new feature I’d like to point out is the home button at the bottom of the Simulator. You no longer have to guess which key to press on your keyboard… just click the on-screen button as you would on a phone.

While hacking away on these apps, I just have to follow some simple rules to see my changes. Hosted apps follow the usual rules for website caching and for working with appcache (which you should!). You can update packaged apps just by clicking the Update button in the dashboard and restarting the Simulator.
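For hosted apps, working with appcache means keeping a cache manifest up to date. A minimal one looks like this (file names are placeholders); the usual trick is that changing anything in the manifest, even the version comment, makes the browser refetch the cached files:

```
CACHE MANIFEST
# v1 - bump this comment to force the browser to refetch everything
index.html
app.js
style.css

NETWORK:
*
```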

Once you’re done working with an app, you can remove it from the manager, which will also remove it from the Simulator (though you may need to restart your Simulator to see it disappear).

The Firefox OS Simulator is the easiest way to try out Firefox OS apps today and to verify how they’ll look before submitting them to the Marketplace.

Install it today and let us know if you run into any problems. There are currently known issues on Windows XP and Linux that we’re working to resolve, but Windows 7 and Mac users should try it now!

New device and a multifactor nuisance

Two-factor authentication (2FA) is great. Thanks to 2FA, even if someone manages to figure out my password, they still need to have physical access to my phone. Well, I actually have two phones that I switch between, so they need access to one of those two phones. I just got a new phone to replace an aging one. I use three different services that support Google Authenticator. Guess what? Now I need to reset the 2FA on all three of those services so that my new device has the secret.

Sure, this is a first world problem, blah blah. But what I’d really love to see is 2FA tied into Persona (BrowserID), with all of the sites I log into supporting Persona. Then I’d only have one password to know and one 2FA secret. It would eliminate the need for a password manager. Convenience and security. Sounds grand, doesn’t it?

What is a “developer”?

My goal as a product manager at Mozilla is to represent the needs of web developers well and make sure that Mozilla is doing what we can to help them.

I came to the realization last night that when I talk with others about what “developers” need, the picture of that “developer” that appears in different people’s minds can be quite different from my own. In fact, while I may imagine someone sitting at a keyboard getting frustrated at trying to make something work, someone else may be thinking of a company that “wants” to get an app into Mozilla’s Marketplace.

In my view, when you’re figuring out what you need to build, imagining a company is almost never what you want. Companies don’t do anything. People do. People have a variety of reasons for doing the things they do, and understanding what those people are trying to accomplish is key to building good products.

This is what I like about personas [1]. Personas describe realistic people, allowing you to empathize with them and ensuring that you’d never mistake a person for a company. They can help give clarity to which things are important to build and also help catch gaps. Imagine a coworker coming up to you and saying “Can you believe I just met someone who was trying to make our product [do something outlandish]?”. It’s possible that the person in question is an outlier that you can safely ignore. But, it’s also possible that there are other, similar people out there and adding a new persona to the mix may open up a whole new market.

All of that said, it’s perfectly reasonable in many contexts to talk about a company as a “developer”. “Mozilla is the developer of the popular Firefox web browser”, for example. There are certainly times in product development where talking about a developer as the entity that controls an app in the Marketplace is perfectly reasonable. But, when you get down to planning features, I think it pays to think about individual people.

Footnotes
  1. In this use, “persona” is a generic industry term, not to be confused with Mozilla Persona, the awesome identity system for the web.

GitHub adds a command line, and so should you!

Yesterday, GitHub announced their new “Command Bar”. I am a fan of command lines, and this is an awesome addition for navigating GitHub. I love being able to get more done without pulling my hands away from the keyboard.

You may remember that we’ve added a command line to the Firefox developer tools. It’s currently in beta, slated for release in about 3 weeks.

My reason for bringing this up is that if you want to add keyboard-controlled command line goodness to your webapp, you can do so easily. The command line in Firefox is actually a separate open source (BSD-licensed) project called GCLI that you can incorporate into your own webapps!
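To be concrete about what “command line goodness” means: at its core it is just a registry of named commands plus a dispatcher that a keyboard-driven UI feeds typed input into. This is a toy sketch of that shape, not GCLI’s actual API (GCLI layers typed parameters, completion and inline hinting on top):

```javascript
// Toy command registry: named commands plus a dispatcher. Not GCLI's
// actual API - just an illustration of the underlying idea.
const commands = new Map();

function addCommand(name, exec) {
  commands.set(name, exec);
}

function execute(input) {
  const [name, ...args] = input.trim().split(/\s+/);
  const exec = commands.get(name);
  if (!exec) {
    throw new Error('unknown command: ' + name);
  }
  return exec(args);
}

addCommand('echo', args => args.join(' '));
execute('echo hello world'); // → 'hello world'
```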

Fork GCLI on GitHub and let’s see a thousand command lines, um, bloom.

An Important Role for SVG

Yesterday, I came across a link to JustGage, a JavaScript library that uses Raphaël.js to create attractive, dashboard-style gauges. Since it uses Raphaël, which in turn builds upon SVG, these gauges will be resolution-independent. Thanks to vector-based graphics, they’ll look smooth at basically any size.

This morning, I saw John Gruber’s ode to lots of pixels. When Apple introduced the iPhone 4, their first product with a “Retina display”, they took the first massive, single-step leap in pixel density since we started reading off of glowing screens decades ago. As Gruber points out, displays crept up from 72ppi to 110ppi over the decades, before jumping to more than 300ppi on the iPhone 4. Now, we’re seeing high-dpi screens all over the place.

The trouble with high-dpi displays is that bitmapped image files need to be much higher resolution than we have been making them. On the web, that means more bandwidth used. For apps, that means larger app sizes.

Which brings me back to JustGage. It’s nice to see libraries like that and the popular D3.js, because no matter what the resolution of the display, these visualizations will look good.

Gruber mentions this about the new high resolution displays:

The sort of rich, data-dense information design espoused by Edward Tufte can now not only be made on the computer screen but also enjoyed on one.

Ironically, it was the combination of JustGage and Gruber’s article that made me think about just how important SVG will be in this new era of screens of many sizes and resolutions. Ironic, because I’m pretty sure that the kinds of visualizations provided by JustGage are exactly the sort of low-density displays that Tufte seems to dislike[1].

Footnotes
  1. D3, on the other hand, is capable of some very nice displays, including the slopegraph that Tufte himself proposed.

Seen on Mars #2

Update: on Google+, Aaron Shaver points out that there was nothing from Mars in this post! D’oh! In a week filled with news from Mars, on a blog with “Mars” in the name, I really have to include something. So, here we go: did the Curiosity rover photograph the crash landing of the spacecraft that took it to Mars?

I now return to the previously written post:

This week, I managed to keep my link gathering going, but I didn’t do as good a job cross-posting these links to Twitter and G+. I’ll figure out a better way to do it.

One thing that may make this all a bit easier is that if I find myself reading something on the iPad and want to quickly write something for this post or Twitter, I can just grab my keyboard. I just picked up a Logitech K760 keyboard (Amazon affiliate link) which allows me to easily switch between typing on my Mac and typing on my iPad.

This will also make taking notes on my iPad easier. Why take notes on an iPad specifically? Because it enables doodling! Alma Hoffman wrote a good article about the value of doodling and drawing pictures in general.

Web Development

Addy Osmani wrote an epic article for Smashing Magazine that introduces JavaScript MVC frameworks, the TodoMVC project for quickly comparing the framework styles, various criteria you can use to evaluate the choices, guidelines on which framework to use when, quotes from users of the various frameworks and thoughts about tools beyond just MVC. If you’re wrestling with non-trivial JavaScript applications, you really should check this out.

A quick rundown from Steven Sanderson of Knockout.js on the 7 front-end libraries/frameworks represented at the Throne of JS. Good reading, if you’re curious about how these libraries are similar and different.

Firefox adds support for @supports in CSS. This most excellent new feature is like Modernizr’s CSS bits built straight into CSS. Paul Irish even reports that Modernizr will use the proposed JS version of @supports. Apply different CSS depending on what the browser supports.
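The syntax is a feature query: you wrap rules in a condition on a property/value pair the browser may or may not understand. A small sketch (selector and property values are illustrative):

```css
/* Applied only if the browser understands flexbox */
@supports (display: flex) {
  nav {
    display: flex;
  }
}

/* Fallback for browsers that do not */
@supports not (display: flex) {
  nav {
    display: table;
  }
}
```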

Daniel Buchner has code that lets you watch for DOM mutations that match a given CSS selector. The cool thing about this approach is that it uses the browser’s own machinery to do the matching, rather than a bunch of custom (and slow) JavaScript.

Rob Campbell wrote about the $() function that comes for free in the browser consoles. Following Firebug’s lead, $() performs a document.getElementById, which is not super useful. After some discussion, Firefox Nightly has already switched to using $() for document.querySelector. ($$() remains document.querySelectorAll).

Want to give Firefox OS (Boot2Gecko) a try? It’s not yet for the faint of heart, but Jeff Griffiths gives the details on trying it out on your desktop machine.

Reveal.js is a nice-looking (yet another) HTML5 presentation library.

Considered tinkering with Go? Jeff Wendling wrote a detailed, step-by-step rundown of building a 1997-esque guestbook with Go.

Finally, in case you missed it, I wrote about live editing of HTML in the browser and the hurdle that I think we need to overcome to get there: templates.

Other Geekery

Tindie is a site where people can sell their electronic creations. There are some neat looking kits there!


Editing HTML in the browser

Live editing of CSS in the browser works great. You get to see your changes immediately, and there’s nothing like live feedback to keep you rolling, right? The Style Editor in Firefox makes it really easy to edit your CSS and save when you’ve got everything looking just right.

CSS is purely declarative, which makes it a bit more straightforward to implement live editing. Swap out the rules and the new rules take effect without weird side effects.

HTML is declarative as well, but there are two problems with “live HTML editing”:

  1. You’re changing the Document Object Model (DOM), not HTML
  2. What you’re changing is often not the same as what you started out with


I won’t dwell on this, but the first problem is that even if you’re looking at a “static HTML file”, once the browser grabs onto that HTML it turns it into a DOM. The browser’s tools will show you what looks like HTML, but it’s really just a visual representation of the DOM. JavaScript can manipulate the DOM and change things entirely from what was sent down the wire. JavaScript can also add properties to DOM objects that don’t even appear in the “HTML view” that the browser shows you.

Of course, using Firebug (and, soon enough, Firefox as well) you can make live changes to the DOM in an HTML-like view. Some of these changes may mess up things like event handlers, but you can still try out your changes.

The real problem is…

What you’re changing is not what you started with

A fundamental bit of building a webapp is taking some data and plugging it into an HTML/DOM structure so that the user can see it and interact with it. Consider this view of Mozilla’s GitHub repositories:

Let’s say I wanted to add a new bit of information that’s already in the database into this compact repository view. I could open up Firebug and add it to the DOM. This would at least allow me to try out different DOM structures and get the styling right. However, I’d need to go back to my server code (I’m assuming the view above is generated by GitHub’s Rails code) and make the change for real there. Plus, I’d only get to see how the change looks on one of the elements, until I’ve updated the server and reloaded.

Back when I was working on TurboGears (2005), almost all of the stuff that ended up in the DOM got there from HTML that was rendered on the server. Then, as today, people used all kinds of template engines on the server to take a mass of data and munge that into some kind of HTML structure for the browser to display.

Many developers have been experimenting (with varying degrees of success) with moving the conversion from “hunk of data” to DOM into the browser. That means that the browser will take a hunk of data, combine it with some sort of template and then update the DOM with the new content[1]. So, the browser has the data and the template in hand.
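Concretely, the “template plus hunk of data” step can be as small as a string substitution. This hypothetical render() function stands in for whatever engine an app actually uses:

```javascript
// A stand-in for a real client-side template engine: fill {{key}}
// placeholders in a template string from a data object.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}

const repoTemplate = '<li>{{name}}: {{watchers}} watchers</li>';
const html = render(repoTemplate, { name: 'gcli', watchers: 42 });
// html is now '<li>gcli: 42 watchers</li>', ready to go into the DOM
```

If the browser kept the template, the data and the resulting elements associated with each other, a tools-driven edit to repoTemplate could re-render every matching element on screen at once.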

What if the browser tracked the relationship between data, template and elements of the DOM? Wouldn’t it be cool to fire up the Page Inspector, point it at one of those repositories in the screenshot above and then tweak the template rather than tweaking the DOM? If you add that new field to the template, then all of the repositories on screen would immediately be refreshed.

Even better, once you’ve got everything looking the way you want, you could save your template to disk[2], much like you can with a CSS file in our Style Editor right now.

How do we get there from here?

The creators of one of the MV* frameworks could make a browser-based editor that understands the templates used in their system and lets you make on-the-fly tweaks. The disadvantage of that approach is that it only benefits that one framework. And, even for users of that framework, the editor would stand apart from the rest of the browser-based tools (would they create their own CSS editing in addition to template editing?).

Another way to go would be something like the Model-driven Views (MDV) project. The idea there is to add templating directly to HTML and the web platform. MDV is nice because changes to the data are automatically propagated back into what the browser displays. I don’t know if MDV would also automatically propagate changes to the template back into the display, but that would be the idea.
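I haven’t checked MDV’s exact binding syntax, but the flavor is roughly this: a template element in the markup with mustache-style placeholders that the platform keeps bound to the underlying data (the attribute and field names below are illustrative):

```html
<!-- Hypothetical MDV-flavored markup -->
<ul>
  <template repeat="{{ repositories }}">
    <li>{{ name }}: {{ watchers }} watchers</li>
  </template>
</ul>
```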

There are many different opinions on how templates should work. I’d be happy if we pick one approach, make it a part of the web and enable smooth live editing of the entire appearance of our sites and apps.


Mardeg writes:

The “Model-driven Views” puts <template> tags inside the html and relies on them being parsed with javascript, whereas it might be feasible to use javascript to generate/alter a template from combined MicroData and explicit role attributes already existing within the page, which wouldn’t affect how the page is viewed with javascript disabled.

If MDV becomes a part of HTML (as in “baked into the browser”), then it may not rely on JS at all. I like the idea of being able to combine a template with microdata. Personally, I think we’re reaching a point where if you want to run web applications you really need to have JS on.

Footnotes
  1. I’m sure some people just build the DOM imperatively with JavaScript, but I’m going to ignore that case for this post.
  2. OK, this certainly depends on where and how the templates are actually stored. I would personally make it a goal to be able to easily save (or at least copy/paste) templates back to their origins.

Seen on Mars #1

Back in control

Recently, I’ve seen a fair number of articles where people are complaining about having their data under the control of one for-profit entity or another. That tension will always be there. One thing I can control, though, is this blog. With better software than I’ve had in the past, I hope to maintain my interesting sets of links (with commentary) on my blog in addition to Twitter and G+.

Software Development

The new Montage framework from Motorola Mobility has some awesome ideas in it. Ars Technica has posted an introduction by the framework’s creators. The article also talks about Ninja and Screening, which are visual tools for building and testing Montage apps.

The Google Summer of Code project to build a graphical event timeline for Firefox is progressing nicely indeed. You can download the add-on now.

Parashuram Narasimhan shows us how we can get going with IndexedDB today, even on browsers that don’t support it, using IndexedDB polyfills.

Metaquery is an interesting approach to breakpoints in web design. Of course, not everyone thinks breakpoints are the right approach, but this is still an interesting library.

Firefox add-ons have been downloaded 3 billion times now. Firebug has nearly 50 million downloads. And those figures are from addons.mozilla.org alone. I know for certain that a significant number of Firebug downloads have come straight from the Firebug site.

Speaking of Firebug, did you know that you can set conditional breakpoints not only for JavaScript but for XHR and even cookies as well?

Lea Verou has introduced Prism, the new JavaScript syntax highlighting library that she extracted from her Dabblet project.

WeasyPrint converts HTML/CSS (including print styles) to beautiful PDFs (well, assuming your original HTML/CSS was beautiful!). Unlike PrinceXML, WeasyPrint is free (BSD-licensed).

How to make a game like Cut the Rope. I wonder if a tutorial like this exists for the web? I enjoy Cut the Rope, personally.

Other tech

Ed Bott finds Microsoft’s new strategy laid out in MSFT’s 10-K. This is a bold shift for Microsoft. It’s hard to imagine Microsoft as the underdog, but to some extent that’s the position they find themselves in.


Better Mac OS defaults for geeks. I likely won’t use all of them, but there are a bunch of useful settings in here.


The Two-Way Conference (MozCamp and more)

A week ago, I had the good fortune of attending and speaking at MozCamp Latin America in Buenos Aires, Argentina. I really enjoyed meeting a whole bunch of new people and appreciated the chance to talk about the Firefox developer tools shipping today and in the near future. The organizers clearly put a lot of effort into getting this conference together (thanks!).

This MozCamp was filled with excitement about B2G and the many other initiatives Mozilla has going on. Beyond the product building we’re doing, there was a lot of energy and enthusiasm for growing the Mozilla community and building on the ideals that Mozilla stands for.

During MozCamp, I spoke with a few people about conferences in general. I think there’s a lot of room to make MozCamp and other conferences better than what we’re used to. These ideas are not new and didn’t originate with me, but they’re worth repeating.

The Format Today

MozCamp was structured like most other conferences that I’ve been to: a packed schedule with multiple tracks of presentations. You get interesting people presenting on useful topics. That’s not a bad thing, but I think it can be better.

If I’m going to deal with the hassle of air travel and spend days away from my family, I’d like to get the most I can out of that time.

A typical presentation slot is 30, 45 or 60 minutes. Of that, there’s maybe 10 minutes of questions and the rest is an “eyes-forward” presentation. I don’t think this is the best use of time. The unique thing about MozCamp (or any conference, for that matter), is that I’m physically there with the other people. The communications bandwidth is much higher. To use that bandwidth for one-way communication seems suboptimal.

There were other issues that I noticed as well:

  1. Attendees at MozCamp had varying levels of English proficiency. This can make it hard for some to keep up with eyes-forward presentations from native English speakers. Plus, a whole day of constantly translating in your mind can get tiring… by the time the second day rolls around (after a possibly late night followed by soccer in the morning!), I would imagine that keeping focused would be difficult.
  2. No slack time for checking email and having hallway conversations. The schedule was packed, leaving mealtime and snack time as the only times to talk (short of skipping sessions). Some of the evening activities suffered from high noise levels as well, eliminating that chance to talk easily.
  3. No slack time also causes issues with schedule slip. On Sunday, the Q&A session threw the rest of the day off by 30 minutes. That’s not surprising, since it was an interesting two-way communication sort of session… but it meant that the rest of the schedule needed to be pushed by half an hour and never got back on track (causing some sessions at the end of the day to be dropped).

The Two-Way Conference

I think conferences, including MozCamp, should try to become more “two-way”. One formula for a session could go something like this:

  • “Speakers” are more like “hosts” or “invited experts”.
  • The expert prepares a page with links to background material and probably a presentation. This page should be available a couple of weeks in advance of the event.
  • That page can also include some suggestions for areas that could benefit from discussion.
  • That page could also have an etherpad or wiki associated with it to collect more ideas in advance (as attendees view the material).
  • At the beginning of the session, the expert provides a lightning-talk-sized intro and, possibly with the help of a facilitator, gets people organized to usefully talk about things or work on something.

A parallel is the recent talk of “flipping the classroom”. The students watch a video outside of class and then use class time to work together or get help from the teacher.

Wouldn’t it be awesome if, instead of 50 minutes of eyes-forward presentation followed by questions, we had 5-10 minutes of organizing, level-setting and topic choosing, followed by 50 minutes of two-way communication?

Some unconferences go so far as to not even have predefined topics and time slots. I’m not going that far. I think that a little bit of structure with some constraints on time can help make the most of the time. Additionally, I have heard from some conference organizers that you can’t even get some companies to sponsor sending people to a conference without presentations from industry experts.

Boriss had a user experience workshop that followed a good format: she did a few minutes of intro followed by demonstrations of applying her UX research suggestions to projects that people were working on. William Reynolds told me that he also ran a workshop session on the topic of getting people involved with Mozilla. Individuals can take matters into their own hands this way, and I wish I had done so myself (there’s always a next time!). I’d like to see conferences that encourage and support this even more.

How can this format help with the problems I talked about?

  1. The sessions spend most of their time in a two-way exchange between the “expert” and the participants, thus making better use of the bandwidth.
  2. When communication is two-way, there’s more opportunity to overcome language issues than when you have a presenter following a predefined outline. In fact, sessions that involve group discussion could possibly take place in the group’s native language. (Of course, if everyone involved speaks the same language, then there’s no problem. Some of the MozCamp sessions were held in Spanish… of course, that left me, and some of the Brazilians, out.)
  3. More of the stuff that might get discussed on the “hallway track” can now get discussed by more people during sessions.
  4. Ideally, there would be a bit more buffer time to handle schedule slippage and sessions that are so good that people just don’t want to stop.

I still find conferences valuable and will continue to attend them, but I think we can do better.