Ron Jeffries on Hours Estimation

Ron Jeffries kicks off the new year with the notion that small (1-2 day) stories are best.

So here we are, in my experienced if not wise opinion:

  • Designing using tasks can have value.
  • Building with tasks is almost always inferior.
  • Estimates can trigger useful conversations.
  • Tracking estimates is always inefficient and often harmful.

Custom messages for Jasmine expectations

jasmine-custom-message gives you a “custom failure message on any jasmine assertion”:

describe('the story', function() {
  it('should finish ok', function() {
    since('all cats are grey in the dark').
    expect('tiger').toEqual('kitty'); // => 'all cats are grey in the dark'
  });
});

Overall, I like Jasmine as a testing library, but this particular limitation has really bothered me.

Data Visualization with JavaScript

Data Visualization with JavaScript is a nice-looking online book by Stephen A. Thomas:

If you’re developing web sites or web applications today, there’s a good chance you have data to communicate, and that data may be begging for a good visualization. But how do you know what kind of visualization is appropriate? And, even more importantly, how do you actually create one? Answers to those very questions are the core of this book. In the chapters that follow, we explore dozens of different visualizations and visualization techniques and tool kits. Each example discusses the appropriateness of the visualization (and suggests possible alternatives) and provides step-by-step instructions for including the visualization in your own web pages.

Faster Yosemite Upgrades for Developers

Faster Mac OS X 10.10 Yosemite Upgrades for Developers

Your Yosemite upgrade may take many hours if you’ve got anything non-Apple in your /usr folder (Homebrew, TeX Live, or MacTeX, for example).

I don't think this solves the problem I had with the Yosemite install, but the article has good suggestions for anyone using Homebrew and the like. Note that the same issue applied to Mavericks, so this might be useful advice again in a year.

My React-in-Brackets Trilogy

I started experimenting with React in February or March, blogged about my experiments starting in May, committed to landing a Brackets feature in September, and the latest release of Brackets now ships with React. Here are my three posts on the subject, in reverse chronological order:

I will likely be talking more about simplifying code soon, especially given my 1DevDay Detroit talk coming up next month.

Daring Fireball at XOXO

John Gruber spoke about his site, Daring Fireball, at XOXO. Daring Fireball looks almost the same as it did when it launched in 2002, but the way Gruber makes a living has evolved a good deal and the story behind that transition is well worth the listen.

The Brackets Scrum to Kanban Switch

The Adobe Brackets team has switched its development process from Scrum to Kanban. I thought that others might be interested in why we changed the process we use to build an open source code editor based on web technology.

From What to What?

Wikipedia has a good article about Scrum. Boiled down, it goes something like this:

  • Work is broken into timeboxed sprints (ours were 12 days)
  • A sprint planning meeting determines what work will be done during the sprint, based on the estimated “points” for each story and the expected “velocity” (number of points) for the sprint
  • No one is supposed to change the work committed to for the sprint (we allowed some exceptions in practice)
  • At the end of the sprint, a review meeting shows off the completed work
  • A retrospective meeting covers how the work went and what we should do differently
  • Daily standup meetings keep everyone in sync

Wikipedia also has an article about Kanban. Here’s a quick summary:

  • Visualize the flow of work
  • Limit the amount of work-in-progress (so that the focus is on shipping)
  • Evolve the process as needed

As you can see, Kanban is not very prescriptive. As the Wikipedia article says, you “start with what you do now”. I can imagine Scrum being a more comfortable starting point for people moving from a more “heavyweight” process because it gives you a clear path forward.

Why Change?

We’ve been successfully shipping software with Scrum for a while, so it’s reasonable to ask why we would make this change. Since we’ve been holding retrospectives, couldn’t we just tweak the process along the way?

Based on the findings from our retrospectives, we did make minor tweaks to our way of doing things. But if we made too many changes, or certain kinds of changes, then we’d no longer be doing Scrum.

Here are some of the problems that we sought to fix:

Sprint Boundaries

The end of a sprint was either boring or overly exciting, and sometimes both at once. In some sprints, everyone had to work really hard at the end to get the stories wrapped up and done. In others, a single engineer might be up against the timebox while everyone else was just fixing bugs and doing random other work.

It was always an option to start working on the next story in the product backlog, even though it wouldn’t ship during the same sprint. But doing so felt unnatural to the process. Besides, we had plenty of bugs we could fix, which brings me to…

Priorities

A sufficiently complex project with a significant number of users who have an easy way to report bugs means that a lot of bugs will be reported. As Brackets has become more popular, so has our bug tracker.

We have a process by which we prioritize bugs, but there’s still an awful lot of them. Our “medium” priority bugs are annoyances that we’d really like to fix. But there are also a lot of features that we’d really like to have.

Our process was such that bugs were counted as “overhead”. They didn’t get assigned “points” and they weren’t prioritized along with the feature work. Bugs also seemed to be the default work that someone would do when they weren’t working on a story, like at the end of a sprint.

With a project like Brackets, it’s easy to argue for some features being more important than nearly all open bugs, but we didn’t put the two head-to-head that way. For example, it’s not good that app layout is a bit laggy when you resize the window, but I would bet that the ability to split the editor to show multiple files will make people happier.

This could have been fixed in our Scrum process, but we likely would have had to make some changes to how we pointed stories.

Research Stories

We’ve had a number of features that we wanted to implement but needed to learn more about before we could reasonably come up with criteria and estimates for the implementation.

Our process for this was to create a Research story card, the output of which would generally be well-defined stories to properly create the desired feature.

The trouble with this process was that the developer doing the research might wrap up the work after a few days, and then we’d have to wait until the next sprint to start on the implementation, because you’re not allowed to change the committed stories for a sprint.

The reason for putting the research story there in the first place is that our product owner thinks it’s a feature we need to have; in other words, the goal is to build the feature in some form. The gap between the research and the implementation felt a bit artificial.

Estimates and Velocity

Software estimation is hard, because almost everything we do has some elements to it that are novel. For most of our stories, our estimates (in points) seemed pretty reasonable. Our typical story cards would wind up in the 2-5 point range, with most landing at 2 or 3. I think the difference between 2 and 3 was pretty fuzzy, and sometimes you’d have a 5 pointer that turned out to be quicker than a 2 pointer.

A 2 point story would pretty much consume someone for an entire sprint (12 days). A 1 point story would probably take someone a couple of days of work.

We would add these points up to determine what we could fit into a sprint, based on our expected velocity. This was successful for the most part, modulo the end-of-sprint situations that I talked about earlier.

Velocity itself was based on how many points we were able to complete previously. There were sprints in which this got confusing, however. For example, we had a crashing bug come up late last summer that forced us to displace some work (one of those cases where we needed to change the commitment for a sprint). It was hard to calculate the effect that crasher had on our velocity for a couple of sprints.

We could have adjusted the way we did our estimates, possibly rebooting our notion of points. Switching to Kanban, we embrace a bit of uncertainty in the estimates and use calculated averages for…

Planning

A question that I have been asked repeatedly during this transition is how to plan for specific releases, or for a release that coincides with a conference or some other special event.

With Scrum, this sort of planning is built into the process. You estimate all of the stories in question, add them up, and divide by the velocity to find out how many sprints it’ll take. Or, figure out how many sprints you have and then choose which stories fit best into that time.
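
As a rough sketch of that arithmetic (the numbers here are made up for illustration, not from our backlog):

// Hypothetical estimates for the remaining stories, in points.
var storyPoints = [2, 3, 2, 5, 3, 2];
var velocity = 6; // points the team has historically completed per sprint

var totalPoints = storyPoints.reduce(function (sum, p) { return sum + p; }, 0);
var sprintsNeeded = Math.ceil(totalPoints / velocity);

console.log(totalPoints + ' points at a velocity of ' + velocity +
            ' means about ' + sprintsNeeded + ' sprints');
// => 17 points at a velocity of 6 means about 3 sprints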

In the Kanban process, you can get the same kinds of useful estimates by computing “cycle time” and “throughput”. Cycle time is the average time it takes for similar-sized stories to work their way across the board. Throughput is how many similar-sized stories are done over a given period of time. Using this combination of data, you can do the same sorts of planning you can do in Scrum. As a bonus, cycle time and throughput are easy to compute, and should be easy to adjust when exceptional conditions arise.
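
Here’s a minimal sketch of how those two numbers fall out of the board data (the story records and dates are hypothetical; a Kanban-oriented tool would compute this for you):

// Hypothetical record of completed medium-sized stories, with the dates
// they entered and left the board.
var done = [
  { started: '2014-01-02', finished: '2014-01-10' },
  { started: '2014-01-06', finished: '2014-01-16' },
  { started: '2014-01-13', finished: '2014-01-21' }
];

var MS_PER_DAY = 24 * 60 * 60 * 1000;

// Cycle time: average days from start to finish.
var cycleTime = done.reduce(function (sum, story) {
  return sum + (new Date(story.finished) - new Date(story.started)) / MS_PER_DAY;
}, 0) / done.length;

// Throughput: stories finished per week over the observed period.
var periodDays = (new Date('2014-01-21') - new Date('2014-01-02')) / MS_PER_DAY;
var perWeek = done.length / (periodDays / 7);

console.log('cycle time: ' + cycleTime.toFixed(1) + ' days');   // 8.7 days
console.log('throughput: ' + perWeek.toFixed(1) + ' per week'); // 1.1 per week

With those two averages, “how many medium stories can we finish by the conference?” becomes simple division, much like points and velocity in Scrum.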

The ability to do release and time planning is a strength of Scrum and something people were concerned about losing in the move to Kanban. Thankfully, this is territory that others have covered.

Less Time in Meetings

This wasn’t a driving factor, but our Kanban process has us regularly filling up a “Ready” column for the next bit of work to do. We don’t spend more than a few seconds estimating stories (they’re small, medium, or need to be broken down further). We no longer spend any time planning out the next sprint. Our product owner just ensures that our Ready column reflects what he thinks we need to work on next.

What Happens to the Existing Trello Board?

We have a board that we use for tracking our in-progress Kanban work. It’s on Trello today, but we’re not certain it will remain there because there are other tools that are more Kanban-oriented that will do things like calculate cycle time and throughput for us.

The Brackets Trello board that we’ve had since the project’s public release contains a lot of information about features that Brackets users are interested in. Jonathan Dunlap, our product manager, will be revamping that board to better reflect our roadmap. Keep an eye on the Brackets blog for more on the roadmap.

Conclusion

We’ve only just made this switch, so I can’t yet comment on how well it has worked for us. I have enjoyed reading about other people’s experiences with their software development processes and thought I would share in kind. Plus, there’s a large community of smart people out there, and I’m sure many of them will have suggestions. If you have a comment on this article, please add to this thread in the Google group.

Finally, Brackets is open source and I thought it would be valuable for the Brackets community to have an idea of how we work.

Further Reading

How the Web Evolves

Two years ago, I made a mistake in posting this overly succinct statement to Google+:

Web SQL Database needs to die. The sooner IndexedDB is in the hands of developers the better.

Tweet-sized statements often don’t capture enough of the nuances of a thought to communicate well.

My big mistake was in not being clear enough about why Web SQL Database needed to die. I tried to explain why the proposed standard was problematic for some browser vendors, but fundamentally my opinion is that we needed some standard way to store reasonable amounts of data for offline and online uses, and to be able to access that data efficiently. Web SQL DB would meet those criteria if it weren’t dead. I’m sympathetic to the issues that some browsers have with Web SQL DB (I did work for Mozilla, after all!), but at the end of it I just really want the web platform to have all of the capabilities it needs. Data storage is a pretty basic thing.

Of course, reality is more complicated than “Web SQL DB is dead”. All of those hundreds of millions of iOS devices today only support Web SQL and not IndexedDB. Many people on the Google+ thread have a strong preference for SQL vs. the API that IndexedDB has to offer. But, the fact remains that the Web SQL Database proposed standard has had a giant disclaimer at the top since 2010 stating that it has reached an impasse.

But this blog post is not about Web SQL DB vs. IndexedDB. Web platform features like these don’t just poof into existence. With today’s process, these features are designed, tested in browsers, formalized, argued about, and standardized by various groups of people. The web is not like a proprietary platform where a vendor suddenly drops a new version with a bunch of new features on everyone. By knowing how the standards come to exist, you can help ensure that the platform does what it needs to do for your apps.

Alex Russell spoke with the people of JavaScript Jabber about TC-39 (the group that standardizes JavaScript), but Alex also has a lot to say about the evolution of the rest of the web platform. If you’ve ever had trouble with the HTML5 application cache while trying to make an offline web app, Alex has been working on a new API, Service Worker, that will make your life better. He’s also been quite involved in the Web Components work. And, of course, as one of the founders of the Dojo Toolkit, he’s been at this for a bit longer than just about anyone.

Yehuda Katz gave a talk a few months ago (“The Future of the Client Side Web”) in which he spoke about how the standards are made and where they’re going. Yehuda also has tons of experience with both server side and client side development, and he’s part of both the Ember and jQuery core teams.

Alex and Yehuda are real-world web developers who have taken the step of helping to build out the standards themselves. They are both part of the W3C’s Technical Architecture Group (TAG). Speaking of which, TAG elections are coming up and the super-sharp David Herman (who has done a ton of amazing work on modules for the next version of JavaScript) and Domenic Denicola (who has helped tremendously in pushing Promises for JavaScript) are running for the TAG.

The web platform is built by real people who want the platform to be the best it can be. Understanding this is the best way to ensure that the web gets the features that you need for your applications.

Optimizing JavaScript Performance Through Custom Memory Allocation

Mozilla’s Kannan Vijayan had an intriguing result in running SunSpider ported to C++, asm.js and Dalvik. In the “Binary Trees” test, asm.js was the fastest by far. Kannan’s untested theory is that it boils down to memory allocation performance:

In asm.js, the compiled C++ is executed using a Javascript typed array as the “machine memory”. The machine memory is allocated once and then used for the entire program runtime. Calls to malloc from asm.js programs perform no syscalls. This may be the reason behind asm.js-compiled C++ doing better than native-compiled C++, but requires further investigation to confirm.

In the Hacker News discussion, there were some comments about the memory performance. duaneb said:

Anyone who has implemented a memory allocator knows how expensive it can be. If you have a simpler algorithm that works with your data, allocate a huge chunk and manage it yourself.
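
To make that concrete, here is a minimal sketch of the “huge chunk” approach in JavaScript: allocate one big typed array up front and hand out slices with a bump allocator. This is illustrative only, not how asm.js output or any particular engine actually manages memory:

// One big allocation up front, in the spirit of asm.js "machine memory".
var heap = new Float64Array(1024 * 1024);
var next = 0;

// Bump allocator: handing out a slice is just advancing an offset, so no
// per-object allocation (and no GC churn) happens while the program runs.
function alloc(count) {
  if (next + count > heap.length) {
    throw new Error('out of preallocated memory');
  }
  var slice = heap.subarray(next, next + count);
  next += count;
  return slice;
}

// Resetting the offset "frees" everything at once.
function reset() {
  next = 0;
}

var vec = alloc(3); // a 3-element view into the big chunk
vec[0] = 1; vec[1] = 2; vec[2] = 3;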

Some game developers reuse objects as a way to avoid unnecessary allocations and GC pauses. An article about Emmet LiveStyle was just posted in Smashing Magazine that talks about how they reused objects in order to save allocations.
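
The object-reuse version of the same idea is a simple pool. This sketch is hypothetical (it’s not code from the LiveStyle article), but it shows the shape of the technique:

// A minimal object pool: recycle particle objects instead of allocating
// new ones, so steady-state frames create no garbage.
var pool = [];

function acquireParticle(x, y) {
  var p = pool.length > 0 ? pool.pop() : { x: 0, y: 0, alive: false };
  p.x = x;
  p.y = y;
  p.alive = true;
  return p;
}

function releaseParticle(p) {
  p.alive = false;
  pool.push(p); // ready to be reused on a later frame
}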

I don’t think that selective reuse of big chunks of memory is a tool in most JavaScript developers’ toolboxes right now, but it seems like a good idea in cases where you need consistent and smooth performance.

Update: Vyacheslav Egorov emailed me a link to Kannan’s graph that includes the JavaScript engine performance. In the binary tree case, the JavaScript implementation was the fastest by far. Perhaps the C++ was not well-tuned.

I don’t want to get too hung up on the benchmarks, because my main point is not “asm.js is faster than C++” (in fact, I’m not stating that at all). My point is that there are non-obvious ways to control memory management in JS that can improve responsiveness.

Speaking at Adobe MAX (Extending Brackets)

In just a few weeks, I’ll be giving a talk at the Adobe MAX conference about extending Brackets with JavaScript. Web developers will find that Brackets is pretty easy to extend using techniques they already know. That’s probably why there are a lot of Brackets extensions already.

People who come to my session will learn how to build Brackets extensions. I plan to have plenty of code samples drawn from real extensions.

If you’re interested in attending, you can still register. Use the code MXSM13 to save $300. (Note also that a MAX pass comes with 1 year of Creative Cloud, which gives you access to a huge amount of software and services.)

If you do come to MAX, definitely find me and say hi!
