What is a “developer”?

My goal as a product manager at Mozilla is to represent the needs of web developers well and make sure that Mozilla is doing what we can to help them.

I came to the realization last night that when I talk with others about what “developers” need, the picture of that “developer” that appears in different people’s minds can be quite different from my own. In fact, while I may imagine someone sitting at a keyboard getting frustrated at trying to make something work, someone else may be thinking of a company that “wants” to get an app into Mozilla’s Marketplace.

In my view, when you’re figuring out what you need to build, imagining a company is almost never what you want. Companies don’t do anything. People do. People have a variety of reasons for doing the things they do, and understanding what those people are trying to accomplish is key to building good products.

This is what I like about personas [1]. Personas describe realistic people, allowing you to empathize with them and ensuring that you’d never mistake a person for a company. They can help give clarity to which things are important to build and also help catch gaps. Imagine a coworker coming up to you and saying “Can you believe I just met someone who was trying to make our product [do something outlandish]?”. It’s possible that the person in question is an outlier that you can safely ignore. But, it’s also possible that there are other, similar people out there and adding a new persona to the mix may open up a whole new market.

All of that said, it’s perfectly reasonable in many contexts to talk about a company as a “developer”. “Mozilla is the developer of the popular Firefox web browser”, for example. There are certainly times in product development where talking about a developer as the entity that controls an app in the Marketplace is perfectly reasonable. But, when you get down to planning features, I think it pays to think about individual people.

Footnotes
  1. In this use, “persona” is a generic industry term, not to be confused with Mozilla Persona, the awesome identity system for the web.

GitHub adds a command line, and so should you!

Yesterday, GitHub announced their new “Command Bar”. I am a fan of command lines, and this is an awesome addition for navigating GitHub. I love being able to get more done without pulling my hands away from the keyboard.

You may remember that we’ve added a command line to the Firefox developer tools. It’s currently in beta, slated for release in about 3 weeks.

My reason for bringing this up is that if you want to add keyboard-controlled command line goodness to your webapp, you can do so easily. The command line in Firefox is actually a separate open source (BSD-licensed) project called GCLI that you can incorporate into your own webapps!
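To make the idea concrete, here’s a rough sketch of a keyboard-summoned command box in plain JavaScript. To be clear, this is not GCLI’s API (GCLI gives you argument parsing, completion and much more); the commands and the Ctrl+Shift+P binding below are made-up examples.

```js
// Hypothetical sketch of a tiny in-page command line; GCLI does far more.
const commands = {
  help: () => console.log("commands: " + Object.keys(commands).join(", ")),
  goto: (url) => { location.href = url; },
};

const input = document.createElement("input");
input.placeholder = "command…";
input.hidden = true;
document.body.appendChild(input);

document.addEventListener("keydown", (e) => {
  // Ctrl+Shift+P toggles the command box (binding chosen arbitrarily).
  if (e.ctrlKey && e.shiftKey && e.key === "P") {
    input.hidden = !input.hidden;
    if (!input.hidden) input.focus();
  }
});

input.addEventListener("keydown", (e) => {
  if (e.key !== "Enter") return;
  const [name, ...args] = input.value.trim().split(/\s+/);
  (commands[name] || commands.help)(...args);
  input.value = "";
  input.hidden = true;
});
```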

Fork GCLI on GitHub and let’s see a thousand command lines, um, bloom.

An Important Role for SVG

Yesterday, I came across a link to JustGage, a JavaScript library that uses Raphaël.js to create attractive, dashboard-style gauges. Since it uses Raphaël, which in turn builds upon SVG, these gauges will be resolution-independent. Thanks to vector-based graphics, they’ll look smooth at basically any size.
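If you haven’t played with SVG scaling before, the key is the viewBox: it defines the drawing’s own coordinate system, and the browser re-rasterizes the vectors at whatever size (and pixel density) the element is actually displayed at. A bare-bones sketch in plain JavaScript, with no Raphaël or JustGage involved:

```js
// Sketch: a vector "gauge face" that stays crisp at any rendered size or DPI.
const SVG_NS = "http://www.w3.org/2000/svg";

const svg = document.createElementNS(SVG_NS, "svg");
svg.setAttribute("viewBox", "0 0 100 100"); // internal coordinate system
svg.style.width = "50%";                    // rendered size is independent of those coordinates

const dial = document.createElementNS(SVG_NS, "circle");
dial.setAttribute("cx", "50");
dial.setAttribute("cy", "50");
dial.setAttribute("r", "45");
dial.setAttribute("fill", "none");
dial.setAttribute("stroke", "#444");
dial.setAttribute("stroke-width", "6");

svg.appendChild(dial);
document.body.appendChild(svg);
```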

This morning, I saw John Gruber’s ode to lots of pixels. When Apple introduced the iPhone 4, their first product with a “Retina display”, they took the first massive, single-step leap in pixel density since we started reading off of glowing screens decades ago. As Gruber points out, displays crept up from 72ppi to 110ppi over the decades, before jumping to more than 300ppi on the iPhone 4. Now, we’re seeing high-dpi screens all over the place.

The trouble with the high-dpi displays is that bitmapped image files need to be much higher resolution than we have been making them. On the web, that means more bandwidth used. For apps, that means larger app sizes.

Which brings me back to JustGage. It’s nice to see libraries like that and the popular D3.js, because no matter what the resolution of the display, these visualizations will look good.

Gruber mentions this about the new high resolution displays:

The sort of rich, data-dense information design espoused by Edward Tufte can now not only be made on the computer screen but also enjoyed on one.

Ironically, it was the combination of JustGage and Gruber’s article that made me think about just how important SVG will be in this new era of screens of many sizes and resolutions. Ironic, because I’m pretty sure that the kinds of visualizations provided by JustGage are exactly the sort of low-density displays that Tufte seems to dislike[1].

Footnotes
  1. D3, on the other hand, is capable of some very nice displays, including the slopegraph that Tufte himself proposed.

Seen on Mars #2

Update: on Google+, Aaron Shaver points out that there was nothing from Mars in this post! D’oh! In a week filled with news from Mars, on a blog with “Mars” in the name, I really have to include something. So, here we go: did the Curiosity rover photograph the crash landing of the spacecraft that took it to Mars?

I now return to the previously written post:

This week, I managed to keep my link gathering going, but I didn’t do as good a job cross-posting these links to Twitter and G+. I’ll figure out a better way to do it.

One thing that may make this all a bit easier is that if I find myself reading something on the iPad and want to quickly write something for this post or Twitter, I can just grab my keyboard. I just picked up a Logitech K760 keyboard (Amazon affiliate link) which allows me to easily switch between typing on my Mac and typing on my iPad.

This will also make taking notes on my iPad easier. Why take notes on an iPad specifically? Because it enables doodling! Alma Hoffman wrote a good article about the value of doodling and drawing pictures in general.

Web Development

Addy Osmani wrote an epic article for Smashing Magazine that introduces JavaScript MVC frameworks, the TodoMVC project for quickly comparing the framework styles, various criteria you can use to evaluate the choices, guidelines on which framework to use when, quotes from people who use the various frameworks, and thoughts about tools beyond just MVC. If you’re wrestling with non-trivial JavaScript applications, you really should check this out.

A quick rundown from Steven Sanderson of Knockout.js on the 7 front-end libraries/frameworks represented at the Throne of JS conference. Good reading, if you’re curious about how these libraries are similar and different.

Firefox adds support for @supports in CSS. This most excellent new feature is like having Modernizr’s CSS bits built straight into CSS. Paul Irish even reports that Modernizr will use the proposed JS version of @supports. Apply different CSS depending on what the browser supports.
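For illustration, here’s roughly what the Modernizr-style pattern looks like from the JavaScript side. I’m using CSS.supports() as a stand-in for that proposed JS API, and the class names are arbitrary:

```js
// Toggle a class so stylesheets can branch on feature support, Modernizr-style.
if (window.CSS && CSS.supports && CSS.supports("display", "flex")) {
  document.documentElement.classList.add("flexbox");
} else {
  document.documentElement.classList.add("no-flexbox");
  // a rule like ".no-flexbox .nav li { float: left; }" can then take over
}
```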

Daniel Buchner has code that lets you watch for DOM mutations that match a given CSS selector. The cool thing about this approach is that it uses the browser’s own machinery to do the matching, rather than a bunch of custom (and slow) JavaScript.
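I believe the general trick goes something like the sketch below: attach a (nearly) no-op CSS animation to the selector you care about and listen for the resulting animationstart events, so the matching happens in the browser’s style engine rather than in script. The selector and animation name are made up, and this is my sketch of the idea rather than Buchner’s actual code.

```js
// Let CSS do the selector matching, then listen for the animation event.
const style = document.createElement("style");
style.textContent =
  "@keyframes node-watched { from { opacity: 0.999; } to { opacity: 1; } } " +
  "li.repo { animation: node-watched 0.001s; }";
document.head.appendChild(style);

// Fires for matching elements already in the page and for ones inserted later.
document.addEventListener("animationstart", (e) => {
  if (e.animationName === "node-watched") {
    console.log("element matching li.repo:", e.target);
  }
});
```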

Rob Campbell wrote about the $() function that comes for free in the browser consoles. Following Firebug’s lead, $() performed a document.getElementById, which is not super useful. After some discussion, Firefox Nightly has already switched $() to document.querySelector ($$() remains document.querySelectorAll).
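If you want the same shorthands outside the console, they’re a one-liner each (the selectors below are just examples):

```js
// Console-style helpers for regular page scripts.
const $  = (sel, root = document) => root.querySelector(sel);
const $$ = (sel, root = document) => root.querySelectorAll(sel);

$("#content");   // first element matching the selector (or null)
$$(".repo a");   // all matches, as a NodeList
```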

Want to give Firefox OS (Boot2Gecko) a try? It’s not yet for the faint of heart, but Jeff Griffiths gives the details on trying it out on your desktop machine.

Reveal.js is a nice looking (yet another) HTML5 presentation library.

Considered tinkering with Go? Jeff Wendling wrote a detailed, step-by-step rundown of building a 1997-esque guestbook with Go.

Finally, in case you missed it, I wrote about live editing of HTML in the browser and the hurdle that I think we need to overcome to get there: templates.

Other Geekery

Tindie is a site where people can sell their electronic creations. There are some neat looking kits there!

Editing HTML in the browser

Live editing of CSS in the browser works great. You get to see your changes immediately, and there’s nothing like live feedback to keep you rolling, right? The Style Editor in Firefox makes it really easy to edit your CSS and save when you’ve got everything looking just right.

CSS is purely declarative, which makes it a bit more straightforward to implement live editing. Swap out the rules and the new rules take effect without weird side effects.
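As a tiny illustration of why (the style element’s id here is my own invention): swap the contents of a stylesheet and the browser simply restyles the page, without touching any application state.

```js
// Replace a stylesheet's rules wholesale; the browser restyles, nothing else changes.
let sheet = document.getElementById("live-theme");
if (!sheet) {
  sheet = document.createElement("style");
  sheet.id = "live-theme";
  document.head.appendChild(sheet);
}
sheet.textContent = "h1 { color: tomato; } p { line-height: 1.6; }";
```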

HTML is declarative as well, but there are two problems with “live HTML editing”:

  1. You’re changing the Document Object Model (DOM), not HTML
  2. What you’re changing is often not the same as what you started out with

DOM vs. HTML

I won’t dwell on this, but the first problem is that even if you’re looking at a “static HTML file”, once the browser grabs onto that HTML it turns it into a DOM. The browser’s tools will show you what looks like HTML, but it’s really just a visual representation of the DOM. JavaScript can manipulate the DOM and change things entirely from what was sent down the wire. JavaScript can also add properties to DOM objects that don’t even appear in the “HTML view” that the browser shows you.
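Here’s a small, made-up example of that last point (it assumes the page has a button): JavaScript can hang state off a DOM node that will never show up in any HTML-like view of the document.

```js
const button = document.querySelector("button");   // assumes some <button> exists
button.clickCount = 0;                              // an expando property on the DOM object
button.addEventListener("click", () => button.clickCount++);

console.log(button.outerHTML);
// logs "<button>…</button>" with no trace of clickCount or the click listener
```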

Of course, using Firebug (and, soon enough, Firefox as well) you can make live changes to the DOM in an HTML-like view. Some of these changes may mess up things like event handlers, but you can still try out your changes.

The real problem is…

What you’re changing is not what you started with

A fundamental bit of building a webapp is taking some data and plugging it into an HTML/DOM structure so that the user can see it and interact with it. Consider this view of Mozilla’s GitHub repositories:

Let’s say I wanted to add a new bit of information that’s already in the database into this compact repository view. I could open up Firebug and add it to the DOM. This would at least allow me to try out different DOM structures and get the styling right. However, I’d need to go back to my server code (I’m assuming the view above is generated by GitHub’s Rails code) and make the change for real there. Plus, I’d only get to see how the change looks on one of the elements, until I’ve updated the server and reloaded.

Back when I was working on TurboGears (2005), almost all of the stuff that ended up in the DOM got there from HTML that was rendered on the server.  Then, as today, people used all kinds of template engines on the server to take a mass of data and munge that into some kind of HTML structure for the browser to display.

Many developers have been experimenting (with varying degrees of success) with moving the conversion from “hunk of data” to DOM into the browser. That means that the browser will take a hunk of data, combine it with some sort of template and then update the DOM with the new content[1]. So, the browser has the data and the template in hand.
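In its simplest form, with no particular framework implied (and a made-up element id and data set), that flow looks something like this:

```js
// A hunk of data…
const repos = [
  { name: "gcli", watchers: 120 },
  { name: "rust", watchers: 4500 },
];

// …a template…
const repoTemplate = (repo) =>
  `<li><strong>${repo.name}</strong> (${repo.watchers} watchers)</li>`;

// …combined in the browser to update the DOM.
document.querySelector("#repo-list").innerHTML =
  repos.map(repoTemplate).join("");
```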

What if the browser tracked the relationship between data, template and elements of the DOM? Wouldn’t it be cool to fire up the Page Inspector, point it at one of those repositories in the screenshot above and then tweak the template rather than tweaking the DOM? If you add that new field to the template, then all of the repositories on screen would immediately be refreshed.

Even better, once you’ve got everything looking the way you want, you could save your template to disk[2], much like you can with a CSS file in our Style Editor right now.

How do we get there from here?

The creators of one of the MV* frameworks could make a browser-based editor that understands the templates used in their system and lets you make on-the-fly tweaks. The disadvantage of that approach is that it only benefits that one framework. And, even for users of that framework, that editor would stand apart from the rest of the browser-based tools (would they create their own CSS editing in addition to template editing?).

Another way to go would be something like the Model-driven Views (MDV) project. The idea there is to add templating directly to HTML and the web platform. MDV is nice because changes to the data are automatically propagated back into what the browser displays. I don’t know if MDV would also automatically propagate changes to the template back into the display, but that would be the idea.

There are many different opinions on how templates should work. I’d be happy if we pick one approach, make it a part of the web and enable smooth live editing of the entire appearance of our sites and apps.

Feedback

Mardeg writes:

The “Model-driven Views” puts <template> tags inside the html and relies on them being parsed with javascript, whereas it might be feasible to use javascript to generate/alter a template from combined MicroData and explicit role attributes already existing within the page, which wouldn’t affect how the page is viewed with javascript disabled.

If MDV becomes a part of HTML (as in “baked into the browser”), then it may not rely on JS at all. I like the idea of being able to combine a template with microdata. Personally, I think we’re reaching a point where if you want to run web applications you really need to have JS on.
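For what it’s worth, here’s a rough sketch of the extraction half of Mardeg’s idea; the shape of the resulting object is simply whatever happens to be marked up in the page:

```js
// Pull microdata that's already in the page back into a plain object,
// which a template could then re-render from.
function extractMicrodata(scope) {
  const item = {};
  for (const el of scope.querySelectorAll("[itemprop]")) {
    item[el.getAttribute("itemprop")] = el.textContent.trim();
  }
  return item;
}

const firstItem = document.querySelector("[itemscope]");
if (firstItem) {
  console.log(extractMicrodata(firstItem));
}
```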

Footnotes
  1. I’m sure some people just build the DOM imperatively with JavaScript, but I’m going to ignore that case for this post.
  2. OK, this certainly depends on where and how the templates are actually stored. I would personally make it a goal to be able to easily save (or at least copy/paste) templates back to their origins.

Seen on Mars #1

Back in control

Recently, I’ve seen a fair number of articles where people are complaining about having their data under the control of one for-profit entity or another. That tension will always be there. One thing I can control, though, is this blog. With better software than I’ve had in the past, I hope to maintain my interesting sets of links (with commentary) on my blog in addition to Twitter and G+.

Software Development

The new Montage framework from Motorola Mobility has some awesome ideas in it. Ars Technica has posted an introduction by the framework’s creators. The article also talks about Ninja and Screening, which are visual tools for building and testing Montage apps.

The Google Summer of Code project to build a graphical event timeline for Firefox is progressing nicely indeed. You can download the add-on now.

Parashuram Narasimhan shows us how we can get going with IndexedDB today, using polyfills on browsers that don’t yet support it.

Metaquery is an interesting approach to breakpoints in web design. Of course, not everyone thinks breakpoints are the right approach, but this is still an interesting library.

Firefox add-ons have been downloaded 3 billion times now. Firebug has nearly 50 million downloads. And those figures are from addons.mozilla.org alone. I know for certain that a significant number of Firebug downloads have come straight from the Firebug site.

Speaking of Firebug, did you know that you can set conditional breakpoints not only for JavaScript but for XHR and even cookies as well?

Lea Verou has introduced Prism, the new JavaScript syntax highlighting library that she extracted from her Dabblet project.

WeasyPrint converts HTML/CSS (including print styles) to beautiful PDFs (well, assuming your original HTML/CSS was beautiful!). Unlike PrinceXML, WeasyPrint is free (BSD-licensed).

How to make a game like Cut the Rope. I wonder if a tutorial like this exists for the web? I enjoy Cut the Rope, personally.

Other tech

Ed Bott finds Microsoft’s new strategy laid out in MSFT’s 10-K. This is a bold shift for Microsoft. It’s hard to imagine Microsoft as the underdog, but to some extent that’s the position they find themselves in.

Mac

Better Mac OS defaults for geeks. I likely won’t use all of them, but there are a bunch of useful settings in here.

The Two-Way Conference (MozCamp and more)

A week ago, I had the good fortune of attending and speaking at MozCamp Latin America in Buenos Aires, Argentina. I really enjoyed meeting a whole bunch of new people and appreciated the chance to talk about the Firefox developer tools shipping today and in the near future. The organizers clearly put a lot of effort into getting this conference together (thanks!)

This MozCamp was filled with excitement about B2G and the many other initiatives Mozilla has going on. Beyond the product building we’re doing, there was a lot of energy and enthusiasm for growing the Mozilla community and building on the ideals that Mozilla stands for.

During MozCamp, I spoke with a few people about conferences in general. I think there’s a lot of room to make MozCamp and other conferences better than what we’re used to. These ideas are not new and didn’t originate with me, but they’re worth repeating.

The Format Today

MozCamp was structured like most other conferences that I’ve been to: a packed schedule with multiple tracks of presentations. You get interesting people presenting on useful topics. That’s not a bad thing, but I think it can be better.

If I’m going to deal with the hassle of air travel and spend days away from my family, I’d like to get the most I can out of that time.

A typical presentation slot is 30, 45 or 60 minutes. Of that, there’s maybe 10 minutes of questions and the rest is an “eyes-forward” presentation. I don’t think this is the best use of time. The unique thing about MozCamp (or any conference, for that matter) is that I’m physically there with the other people. The communications bandwidth is much higher. To use that bandwidth for one-way communication seems suboptimal.

There were other issues that I noticed as well:

  1. Attendees at MozCamp had varying levels of English proficiency. This can make it hard for some to keep up with eyes-forward presentations from native English speakers. Plus, a whole day of constantly translating in your mind can get tiring… by the time the second day rolls around (after a possibly late night followed by soccer in the morning!), I would imagine that keeping focused would be difficult.
  2. No slack time for checking email and having hallway conversations. The schedule was packed, leaving mealtime and snack time as the only times to talk (short of skipping sessions). Some of the evening activities suffered from high noise levels as well, eliminating that chance to talk easily.
  3. No slack time also causes issues with schedule slip. On Sunday, the Q&A session threw the rest of the day off by 30 minutes. That’s not surprising, since it was an interesting two-way communication sort of session… but it meant that the rest of the schedule needed to be pushed by half an hour and never got back on track (causing some sessions at the end of the day to be dropped).

The Two-Way Conference

I think conferences, including MozCamp, should try to become more “two-way”. One formula for a session could go something like this:

  • “Speakers” are more like “hosts” or “invited experts”.
  • The expert prepares a page with links to background material and probably a presentation. This page should be available a couple of weeks in advance of the event.
  • That page can also include some suggestions for areas that could benefit from discussion.
  • That page could also have an etherpad or wiki associated with it to collect more ideas in advance (as attendees view the material).
  • At the beginning of the session, the expert provides a lightning-talk-sized intro and, possibly with the help of a facilitator, gets people organized to usefully talk about things or work on something.

A parallel is the recent talk of “flipping the classroom”. The students watch a video outside of class and then use class time to work together or get help from the teacher.

Wouldn’t it be awesome if, instead of 50 minutes of eyes-forward presentation followed by questions, we had 5-10 minutes of organizing, level-setting and topic choosing, followed by 50 minutes of two-way communication?

Some unconferences go so far as to not even have predefined topics and time slots. I’m not going that far. I think that a little bit of structure with some constraints on time can help make the most of the time. Additionally, I have heard from some conference organizers that you can’t even get some companies to sponsor sending people to a conference without presentations from industry experts.

Boriss had a user experience workshop that followed a good format: she did a few minutes of intro followed by demonstrations of applying her UX research suggestions to projects that people were working on. William Reynolds told me that he also ran a workshop session on the topic of getting people involved with Mozilla. Individuals can take matters into their own hands this way, and I wish I had done so myself (there’s always a next time!). I’d like to see conferences that encourage and support this even more.

How can this format help with the problems I talked about?

  1. The sessions spend most of their time in a two-way exchange between the “expert” and the participants, thus making better use of the bandwidth.
  2. When communication is two-way, there’s more opportunity to overcome language issues than when a presenter is following a predefined outline. In fact, sessions that involve group discussion could possibly take place in the group’s native language. (Of course, if everyone involved speaks the same language, then there’s no problem. Some of the MozCamp sessions were held in Spanish… of course, that left me, and some of the Brazilians, out.)
  3. More of the stuff that might get discussed on the “hallway track” can now get discussed by more people during sessions.
  4. Ideally, there would be a bit more buffer time to handle schedule slippage and sessions that are so good that people just don’t want to stop.

I still find conferences valuable and will continue to attend them, but I think we can do better.

When to Build Performance Measurement Tools for Firefox

We’re well on our way to having a full-featured set of tools for web developers that ship with every release of Firefox, in addition to the already great Firebug add-on. In our roadmap, I talk about building “bundled tools for the most common tasks”. Lately, people have been asking me about tools to help web developers improve the performance of their applications.

Firefox is very fast. In fact, Firefox and its competitors are so fast that most web developers only care about one aspect of web application performance: network access. Latency and the amount of data transferred are the biggest issues for most web developers. We’ll be working on providing insight into network access soon in Firefox.

Developers working on three sorts of web applications in particular are asking for deeper insight into what the platform is doing:

  • games
  • complex layouts involving large amounts of data
  • applications that have features you’d traditionally associate with “desktop applications”

Each browser has different performance characteristics, and these developers need tools that give them hints on how to make their apps responsive on each browser. They care about things like garbage collection pauses, repaints and reflows, and hot spots in their JavaScript code where the just-in-time compilers aren’t able to make the JS zoom.

Most web developers aren’t working on these kinds of apps, and we’re focused on building tools that are useful for the “most common tasks”. However, we want these kinds of applications to run well on Firefox. Firefox developer tools really serve two groups: the web developers who use the tools directly, and the hundreds of millions of Firefox users who are looking to experience the web in the best way possible.

I think that our focus needs to remain on building the best tools for the most common tasks. But we also need to accommodate these sophisticated developers. Fortunately, we have more options than just “build it” or “don’t”.

Before a feature ships in Firefox, it goes through a lot of work to ensure that it’s of a quality that’s ready to ship to many millions of people, in many languages. The developers building these performance-intensive apps do not number in the millions, and they are capable of installing add-ons. Some are even willing to produce their own custom builds of Firefox, if that’s what it takes to get the performance data they want.

In my opinion, that’s the planning lever we need to pull here. We can try to get these developers the data they need, albeit in a rough form, in add-ons as soon as possible. Along those lines, Brian Hackett has made his JIT Inspector tool available as an add-on.

If you need help figuring out performance issues with your application in Firefox, get in touch.

Thinking About the Developer Experience for the Web

I’ve been working on developer tools for a while now, and I’m really proud of what we are shipping in Firefox today and the new features that are right around the corner. Browser tools are one of the most important parts of a web developer’s toolbox.

But, there’s a lot more that goes into web development than the browser tools. The video above and the text that follows are some thoughts on the whole of the web developer’s experience.

Web development is great because the platform is so high level and dynamic. That makes it easy to get started. There’s a massive collection of libraries, tools, books, tutorials and more to help web developers get things done once they’ve moved beyond the first steps. In fact, there’s so much out there that it can be hard for someone getting going to decide how to go from idea to done. The riches of the web ecosystem are both a blessing and a curse. It’s more blessing than curse, but that doesn’t make it any easier for newcomers and, in some instances, for experienced developers that are moving into a new area or applying a new technology.

Mozilla’s non-profit mission is to protect openness and innovation on the web. We want to make the web better for everyone, and I think we’re in a good position to help guide developers from idea to published app. Doing so is especially critical for our Apps initiative.

To that end, Daniel Buchner and I will be looking beyond developer tools in our product plans to include the whole of the developer experience. This will first show up in an Apps context, but we’re going to look for ways to apply what we do more broadly.

Firefox 2012 Roadmap for Developer Tools

Firefox in 2012

This week, the Firefox 2012 Roadmaps went live. Mozilla has a lot going on and huge goals for 2012. We have a lot to do and the web will be in a better place as a result.

Developer Tools in 2012

It’s been two months since I posted the 2012 developer tools roadmap. I think the new attention on the larger Firefox roadmap gives me a good opportunity to reflect on where developer tools are now and what’s coming this year.

I want to start by saying that I think the Firefox developer tools team is doing amazing work. They’re shipping big features, great refinements and handling the growth of the community well.

This year, we’re filling in a new set of bundled tools for the most common web development tasks. Two weeks ago, we had our first big release of the year. I’ve been watching the feedback in various channels and it has been overwhelmingly positive. People have seen that we’re offering:

  • Streamlined user interfaces on…
  • Fast and well-tested tools that…
  • Meet the common needs in new and better ways

I really appreciate all of the constructive criticism we’ve gotten. Ryan DeBeasi wrote in “New Developer Tools in Firefox 10 and 11” for Web Designer Depot:

There’s no user agent switcher, no “edit as HTML feature,” no performance-testing tools, no way to inject new tags into a page, no way to activate an element’s hover state. There’s not even a “layout” panel for viewing the dimensions, padding, and margins of your element.

Despite all those limitations, I keep coming back to the Page and Style Inspectors. I come back for the uncluttered interface, the thoughtfully placed panes, and that funky purple chrome. I come back because they’re a pleasure to use, and because they meet my needs most of the time.

Yep! That’s all true. And Tyler Herman’s recent Impression of Firefox’s New Developer Tools offered similar sorts of feedback. Those two articles are representative of much of the reaction I’ve seen surrounding Firefox 10’s release.

We’re happy that you think we’re off to a good start. We know we’ve got more work to do, and we’re listening. In the coming months, watch for new tools and improvements to the ones we’ve shipped.

Along those lines, a huge chunk of work for the new JavaScript debugger has recently made it into Nightly. It needs some more work before it’s usable and shippable, but the debugger is coming. As Panos mentions in that blog post, this debugger builds on entirely new infrastructure. More on that in a moment.

Mobile

Mobile is huge in Mozilla’s roadmap for 2012. We’re working to ship a new Firefox for Android with a screaming fast native UI. The Boot2Gecko project is working on an entirely new mobile phone OS that is truly open source from top to bottom and from the very beginning of the project. Plus, B2G is open web all the way.

Our focus in developer tools is on tools that live in the desktop browser for the common web development cases. That doesn’t mean we’re not thinking about mobile. We don’t think that people want to do mobile phone development on their mobile phones, hence the need for great desktop tools. Tablets are another story, but we still think that desktop OSes will rule development for some time to come.

The new debugger that I mentioned is built around a client/server architecture. In other words, an app running in Firefox for Android could be debugged by a desktop Firefox browser. There are infrastructure changes underway that will help make remote access possible in the Web Console as well. We need to do non-trivial user interface work to expose this kind of feature, but the technical underpinnings are falling into place.

There’s another option that works especially well for Firefox: making the desktop browser pretend to be a mobile device. Firefox on the desktop and Firefox for Android generally follow the same release schedules and rely on the same rendering engine, which means that the desktop browser should be able to behave very much the way the mobile device does. Again, there’s work to be done before we can ship this, and that work depends on us having our core desktop tools fleshed out.

Apps and Add-ons

From the perspective of the code a developer creates, Mozilla’s “Apps” initiative is not all that different from standard web development. Apps use the latest web specs and have a little bit of wrapping around them. But, Mozilla is working to build a whole bunch of infrastructure around Apps to make the end-to-end user experience fantastic.

The tools that we’re building for web developers should be immediately applicable to Apps. Beyond those tools, though, providing the best possible end-to-end developer experience for Apps developers is something we’d like to pursue.

Similarly, Firefox Add-ons built with the Add-on SDK are mostly made from the same kinds of technology as web apps. Ideally, the tools that developers use for the web could also be applied to Add-ons.

Daniel Buchner now reports to me as the product manager for Apps and Add-ons Developer Tools. He’ll be helping to identify the best opportunities we have in these areas.

Firebug

I also wanted to give a shoutout to the splendid work coming out of the Firebug team. Firebug 1.10 (currently in alpha) is going to be the first ever restartless version of Firebug. The code organization changes that they’re making will make Firebug an even easier project to contribute to than it has been in the past. And, of course, new user-facing features are part of 1.10 as well.

Many ways to get involved

Everything we do at Mozilla is open source and for the betterment of the web. Those of us working on developer tools at Mozilla want to make web development easier and more fun. You can help in a variety of ways:

I hope you enjoy the new tools in Firefox 10 and beyond. They’re just the beginning of what we have planned for 2012.