Custom messages for Jasmine expectations

jasmine-custom-message gives you a “custom failure message on any jasmine assertion”:

    describe('the story', function() {
      it('should finish ok', function() {
        since('all cats are grey in the dark').
        expect('tiger').toEqual('kitty'); // => 'all cats are grey in the dark'
      });
    });

Overall, I like Jasmine as a testing library, but this particular limitation has really bothered me.

Data Visualization with JavaScript

Data Visualization with JavaScript is a nice looking online book by Stephen A. Thomas:

If you’re developing web sites or web applications today, there’s a good chance you have data to communicate, and that data may be begging for a good visualization. But how do you know what kind of visualization is appropriate? And, even more importantly, how do you actually create one? Answers to those very questions are the core of this book. In the chapters that follow, we explore dozens of different visualizations and visualization techniques and tool kits. Each example discusses the appropriateness of the visualization (and suggests possible alternatives) and provides step-by-step instructions for including the visualization in your own web pages.

Faster Yosemite Upgrades for Developers

Faster Mac OS X 10.10 Yosemite Upgrades for Developers

Your Yosemite upgrade may take many hours if you’ve got anything non-Apple in your /usr folder (Homebrew, Texlive, or Mactex for example).

I don’t think this solves the problem I had with the Yosemite install, but the article has good suggestions for anyone using Homebrew, etc. Note that this also applied to Mavericks, so this advice might be useful again in a year.


My React-in-Brackets Trilogy

I started experimenting with React in February or March, blogged about my experiments starting in May, committed to landing a Brackets feature in September and the latest release of Brackets now ships with React. Here are my three posts on the subject, in reverse chronological order:

I will likely be talking more about simplifying code soon, especially given my 1DevDay Detroit talk coming up next month.

Daring Fireball at XOXO

John Gruber spoke about his site, Daring Fireball, at XOXO. Daring Fireball looks almost the same as it did when it launched in 2002, but the way Gruber makes a living has evolved a good deal and the story behind that transition is well worth the listen.

The Brackets Scrum to Kanban Switch

The Adobe Brackets team has switched its development process from Scrum to Kanban. I thought that others might be interested in why we changed the process that we use to build an open source code editor built on web technology.

From What to What?

Wikipedia has a good article about Scrum. Boiled down, it goes something like this:

  • Work is broken into timeboxed sprints (ours were 12 days)
  • A sprint planning meeting determines what work will be done during the sprint, based on estimated “points” for a story and the expected “velocity” (number of points) for the sprint
  • No one is supposed to change the work that is committed to for the sprint (we had some exceptions in practice)
  • At the end of the sprint, we had a review meeting where we showed off the completed work for the sprint
  • There would also be a retrospective meeting where we discuss how the work went and things we should do differently
  • Daily meetings would keep people in sync

Wikipedia also has an article about Kanban. Here’s a quick summary:

  • Visualize the flow of work
  • Limit the amount of work-in-progress (so that the focus is on shipping)
  • Evolve the process as needed

As you can see, Kanban is not very prescriptive. As the Wikipedia article says, you “start with what you do now”. I can imagine Scrum being a more comfortable starting point for people moving from a more “heavyweight” process because it gives you a clear path forward.
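The “limit work-in-progress” rule is mechanical enough to sketch in code. Here’s a minimal board model in JavaScript; the column names and limits are made up for illustration, not our actual board:

```javascript
// A minimal Kanban board model: columns with WIP limits.
// Column names and limits are illustrative, not the real Brackets board.
function createBoard(columns) {
    var board = {};
    columns.forEach(function (col) {
        board[col.name] = { limit: col.limit, cards: [] };
    });

    return {
        // Add a card to a column, refusing if the column is at its WIP limit.
        add: function (columnName, card) {
            var col = board[columnName];
            if (col.cards.length >= col.limit) {
                return false; // WIP limit reached: finish something first
            }
            col.cards.push(card);
            return true;
        },
        count: function (columnName) {
            return board[columnName].cards.length;
        }
    };
}

var board = createBoard([
    { name: "Ready", limit: 5 },
    { name: "Development", limit: 3 },
    { name: "Done", limit: Infinity }
]);

board.add("Development", "feature A");
board.add("Development", "feature B");
board.add("Development", "feature C");
var accepted = board.add("Development", "one feature too many"); // refused
```

The refusal is the whole point: rather than starting a fourth thing, the team finishes one of the three in flight.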

Why Change?

We’ve been successfully shipping software with Scrum for a while, so it’s reasonable to ask why we would make this change. Since we’ve been holding retrospectives, couldn’t we just tweak the process along the way?

Based on our findings from our retrospectives, we did make minor tweaks to our way of doing things. But if we made too many changes, or certain kinds of changes, then we’d no longer be doing Scrum.

Here are some of the problems that we sought to fix:

Sprint Boundaries

The end of a sprint was boring, overly exciting or both at the same time. Some sprints, everyone had to work really hard at the end to get the stories wrapped up and done. Other sprints, we might have a single engineer that’s up against the time box while everyone else is just fixing bugs and doing random other work.

It was always an option to start working on the next story in the product backlog, even though it wouldn’t ship during the same sprint. But, it felt unnatural to the process to do so. Besides, we had plenty of bugs we could fix, which brings me to…

Priorities

A sufficiently complex project with a significant number of users who have an easy way to report bugs means that a lot of bugs will be reported. As Brackets has become more popular, so has our bug tracker.

We have a process by which we prioritize bugs, but there’s still an awful lot of them. Our “medium” priority bugs are annoyances that we’d really like to fix. But there are also a lot of features that we’d really like to have.

Our process was such that bugs were counted as “overhead”. They didn’t get assigned “points” and they weren’t prioritized along with the feature work. Bugs also seemed to be the default work that someone would do when they’re not working on a story, like at the end of a sprint.

With a project like Brackets, it’s easy to argue for some features being more important than nearly all open bugs, but we didn’t put the two head-to-head that way. For example, it’s not good that app layout is a bit laggy when you resize the window, but I would bet that the ability to split the editor to show multiple files will make people happier.

This could have been fixed in our Scrum process, but we likely would have had to make some changes to how we pointed stories.

Research Stories

We’ve had a number of features that we wanted to implement but needed to learn more about before we could reasonably come up with criteria and estimates for the implementation.

Our process for this was to create a Research story card, the output of which would generally be well-defined stories to properly create the desired feature.

The trouble with this process is that the developer doing the research may wrap up the work after a few days, and then we’d have to wait until the next sprint to start working on the implementation, because you’re not allowed to change the committed stories for a sprint.

The research story was put on the board in the first place because our product owner thinks the feature is one we need to have, which is to say that we intend to build it in some form. The gap between the research and the implementation felt a bit artificial.

Estimates and Velocity

Software estimation is hard, because almost everything we do has some elements to it that are novel. For most of our stories, our estimates (in points) seemed pretty reasonable. Our typical story cards would wind up in the 2-5 point range, with most landing at 2 or 3. I think the difference between 2 and 3 was pretty fuzzy, and sometimes you’d have a 5 pointer that turned out to be quicker than a 2 pointer.

A 2 point story would pretty much consume someone for an entire sprint (12 days). A 1 point story would probably take someone a couple of days of work.

We would add these points up to determine what we can fit into a sprint, based on our expected velocity. This was successful for the most part, modulo the end of sprint situations that I talked about earlier.

Velocity itself was based on how many points we were able to complete previously. There were sprints in which this got confusing, however. For example, we had a crashing bug come up late last summer that forced us to displace some work (one of those cases where we needed to change the commitment for a sprint). It was hard to calculate the effect that crasher had on our velocity for a couple of sprints.

We could have adjusted the way we did our estimates, possibly rebooting our notion of points. Switching to Kanban, we embrace a bit of uncertainty in the estimates and use calculated averages for…

Planning

A question that I have been asked repeatedly during this transition is around planning for specific releases or for a release that coincides with a conference or some other special event.

With Scrum, this sort of planning is built into the process. You estimate all of the stories in question, add them up, and divide by the velocity to find out how many sprints it’ll take. Or, compute how many sprints you have and then choose which stories fit best into the time available.

In the Kanban process, you can get the same kinds of useful estimates by computing “cycle time” and “throughput”. Cycle time is the average time it takes for similar-sized stories to work their way across the board. Throughput is how many similar-sized stories are done over a given period of time. Using this combination of data, you can do the same sorts of planning you can do in Scrum. As a bonus, cycle time and throughput are easy to compute, and should be easy to adjust when exceptional conditions arise.
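As a quick sketch (with invented card data), cycle time and throughput fall out of simple arithmetic over the dates on completed cards:

```javascript
// Sketch: cycle time and throughput computed from completed cards.
// The card data here is invented for illustration.
var doneCards = [
    { size: "medium", startedDay: 0, finishedDay: 6 },
    { size: "medium", startedDay: 2, finishedDay: 10 },
    { size: "medium", startedDay: 5, finishedDay: 11 }
];

// Cycle time: average days for a similar-sized card to cross the board.
function cycleTime(cards) {
    var totalDays = cards.reduce(function (sum, card) {
        return sum + (card.finishedDay - card.startedDay);
    }, 0);
    return totalDays / cards.length;
}

// Throughput: similar-sized cards completed per day over a window.
function throughput(cards, windowDays) {
    return cards.length / windowDays;
}

var avgCycle = cycleTime(doneCards);     // (6 + 8 + 6) / 3 ≈ 6.67 days
var perDay = throughput(doneCards, 12);  // 3 cards / 12 days = 0.25
var daysForTenMore = 10 / perDay;        // ~40 days for 10 similar cards
```

Both numbers come straight from dates the board already records, which is why they’re so cheap to keep up to date.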

The ability to do release and time planning is a strength of Scrum, and it was something people were concerned about losing in the move to Kanban. Thankfully, this is territory that others have covered.

Less Time in Meetings

This wasn’t a driving factor, but our Kanban process has us regularly filling up a “Ready” column for the next bit of work to do. We don’t spend more than a few seconds estimating stories (they’re small, medium, or need to be broken down further). We no longer spend any time planning out the next sprint. Our product owner just ensures that our Ready column reflects what he thinks we need to work on next.

What Happens to the Existing Trello Board?

We have a board that we use for tracking our in-progress Kanban work. It’s on Trello today, but we’re not certain it will remain there because there are other tools that are more Kanban-oriented that will do things like calculate cycle time and throughput for us.

The Brackets Trello board that we’ve had since the project’s public release contains a lot of information about features that Brackets users are interested in. Jonathan Dunlap, our product manager, will be revamping that board to better reflect our roadmap. Keep an eye on the Brackets blog for more on the roadmap.

Conclusion

We’ve only just made this switch, so I can’t yet comment on how well it has worked for us. I have enjoyed reading about other people’s experiences with their software development processes and thought I would share in kind. Plus, there’s a large community of smart people out there, and I’m sure there are many suggestions that people might have. If you have a comment on this article, please add to this thread in the googlegroup.

Finally, Brackets is open source and I thought it would be valuable for the Brackets community to have an idea of how we work.

Further Reading

How the Web Evolves

Two years ago, I made a mistake in posting this overly succinct statement to Google+:

Web SQL Database needs to die. The sooner IndexedDB is in the hands of developers the better.

Tweet-sized statements often don’t capture enough of the nuances of a thought to communicate well.

My big mistake was in not being clear enough about why Web SQL Database needed to die. I tried to explain why the proposed standard was problematic for some browser vendors and such, but fundamentally my opinion is really that we needed to have some standard way to store reasonable amounts of data for offline and online uses and to be able to access that data efficiently. Web SQL DB would meet those criteria if it weren’t dead. I’m sympathetic to the issues that some browsers have with Web SQL DB (I did work for Mozilla, after all!), but at the end of it I just really want the web platform to have all of the capabilities it needs. Data storage is a pretty basic thing.

Of course, reality is more complicated than “Web SQL DB is dead”. All of those hundreds of millions of iOS devices today only support Web SQL and not IndexedDB. Many people on the Google+ thread have a strong preference for SQL vs. the API that IndexedDB has to offer. But, the fact remains that the Web SQL Database proposed standard has had a giant disclaimer at the top since 2010 stating that it has reached an impasse.

But, this blog post is not about WebSQL DB vs. IndexedDB. Web platform features like these don’t just poof into existence. With today’s process, these features are designed, tested in browsers, formalized, argued about and standardized by various groups of people. The web is not like some proprietary platform where a vendor suddenly drops a new version with a bunch of new features on everyone. By knowing how the standards come to exist, you can help ensure that the platform does what it needs to do for your apps.

Alex Russell spoke with the people of JavaScript Jabber about TC-39 (the group that standardizes JavaScript), but Alex also has a lot to say about the evolution of the rest of the web platform as well. If you’ve ever had trouble with HTML5 application cache while trying to make an offline web app, Alex has been working on a new API, Service Worker, that will make your life better. He’s also been quite involved in the Web Components work. And, of course, as one of the founders of the Dojo Toolkit, he’s been at this for a bit longer than just about anyone.

Yehuda Katz gave a talk a few months ago (“The Future of the Client Side Web”) in which he spoke about how the standards are made and where they’re going. Yehuda also has tons of experience with both server side and client side development and he’s part of both Ember and jQuery core teams.

Alex and Yehuda are real-world web developers who have taken the step of helping to build out the standards themselves. They are both part of the W3C’s Technical Architecture Group (TAG). Speaking of which, TAG elections are coming up and the super-sharp David Herman (who has done a ton of amazing work on modules for the next version of JavaScript) and Domenic Denicola (who has helped tremendously in pushing Promises for JavaScript) are running for the TAG.

The web platform is built by real people who want the platform to be the best it can be. Understanding this is the best way to ensure that the web gets the features that you need for your applications.

Optimizing JavaScript Performance Through Custom Memory Allocation

Mozilla’s Kannan Vijayan had an intriguing result in running SunSpider ported to C++, asm.js and Dalvik. In the “Binary Trees” test, asm.js was the fastest by far. Kannan’s untested theory is that it boils down to memory allocation performance:

In asm.js, the compiled C++ is executed using a Javascript typed array as the “machine memory”. The machine memory is allocated once and then used for the entire program runtime. Calls to malloc from asm.js programs perform no syscalls. This may be the reason behind asm.js-compiled C++ doing better than native-compiled C++, but requires further investigation to confirm.

In the Hacker News discussion there were some comments there about the memory performance. duaneb said:

Anyone who has implemented a memory allocator knows how expensive it can be. If you have a simpler algorithm that works with your data, allocate a huge chunk and manage it yourself.

Some game developers reuse objects as a way to avoid unnecessary allocations/GC. An article was just posted in Smashing Magazine about Emmet LiveStyle which talks about how they reused objects in order to save allocations.

I don’t think that selective reuse of big chunks of memory is a tool in most JavaScript developers’ toolboxes right now, but it seems like a good idea in cases where you need consistent and smooth performance.
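For the curious, the reuse idea boils down to a small object pool. This sketch is mine, not code from the article:

```javascript
// Minimal object pool: reuse objects instead of allocating new ones,
// trading a little bookkeeping for fewer allocations and GC pauses.
function ParticlePool(size) {
    this.free = [];
    for (var i = 0; i < size; i++) {
        this.free.push({ x: 0, y: 0, active: false });
    }
}

ParticlePool.prototype.acquire = function (x, y) {
    // Reuse a spare object if we have one; allocate only when the pool is empty.
    var p = this.free.pop() || { x: 0, y: 0, active: false };
    p.x = x;
    p.y = y;
    p.active = true;
    return p;
};

ParticlePool.prototype.release = function (p) {
    p.active = false;
    this.free.push(p); // back into the pool; nothing for the GC to collect
};

var pool = new ParticlePool(2);
var first = pool.acquire(1, 2);
pool.release(first);
var second = pool.acquire(3, 4); // the same object, reused
```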

Update: Vyacheslav Egorov emailed me a link to Kannan’s graph that includes the JavaScript engine performance. In the binary tree case, the JavaScript implementation was the fastest by far. Perhaps the C++ was not well-tuned.

I don’t want to get too hung up on the benchmarks, because my main point is not “asm.js is faster than C++” (in fact, I’m not stating that at all). My point is that there are ways to control memory management in JS that may be a non-obvious path to improving responsiveness.

Speaking at Adobe MAX (Extending Brackets)

In just a few weeks, I’ll be giving a talk at the Adobe MAX conference about extending Brackets with JavaScript. Web developers will find that Brackets is pretty easy to extend using techniques they already know. That’s probably why there are a lot of Brackets extensions already.

People who come to my session will learn how to build Brackets extensions. I plan to have plenty of code samples drawn from real extensions.

If you’re interested in attending, you can still register. Use the code MXSM13 to save $300. (Note also that a MAX pass comes with 1 year of Creative Cloud, which gives you access to a huge amount of software and services.)

If you do come to MAX, definitely find me and say hi!

Brackets Quick Open: That’s No Regex!

Ever since I first used TextMate several years ago, I’ve been hooked on “Quick Open” (the feature that lets you jump to any file in your project with just a few keystrokes). I have worked on Brackets’ version of this feature and I thought I’d take advantage of Brackets being open source to talk about how it works.

The current incarnation of Quick Open in Brackets is the third. It started very simply, just matching substrings in the filename if I remember correctly. Today, it relies on a module called StringMatch which:

  • searches across the entire file path, not just the name
  • gives preference to matches in the last part of the path (the name)
  • gives preference to matches on “special characters”
  • produces a score that attempts to get at a most likely match

Those “gives preference” bits are what make StringMatch a lot trickier than just applying a regex, and my goal with this post is to touch on some of what went into producing good results. While working on Quick Open, which is in src/search/QuickOpen.js, I wanted to be able to type “qo” and have that file be the top match. With thousands of files in our repository, there are doubtless quite a few that match “qo”. Getting to the best match was key.

StringMatch in Action

The stringMatch function

The StringMatch module has a function called stringMatch that takes a string to match against, the query string and an optional “specials” data structure. The function is stateless and doesn’t use any indexes (see the section on speed toward the end).

stringMatch returns information about the match (if the query matches the string) to allow highlighting and sorting by score. stringMatch itself doesn’t do very much. Its job is to glue together the other functions in the module.

Special Characters

I mentioned a “specials” data structure and how preference is given to “special” characters. What are these mysterious special beings? As stated in the comments for findSpecialCharacters:

  • the first character
  • “/” and the character following the “/” (path separators)
  • “_”, “.” and “-” and the character following it (other characters used to separate parts of filenames)
  • an uppercase character that follows a lowercase one (think camelCase)

So, the job of findSpecialCharacters is to create a list of indexes of these special characters of the string that is being checked against the query. Additionally, findSpecialCharacters returns the index into the list of special characters that represents the beginning of the “last segment”. The code in findSpecialCharacters is straightforward, simply reading each character of the string and keeping track of what it sees as it goes along.
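Here’s a simplified sketch of that scan. The real findSpecialCharacters in StringMatch tracks more state than this, but the shape is the same:

```javascript
// Simplified sketch of the special-character scan. The real
// findSpecialCharacters in StringMatch handles more detail than this.
function findSpecials(str) {
    var specials = [0];               // the first character is always special
    var lastSegmentSpecialsIndex = 0; // where the last segment's specials begin

    for (var i = 1; i < str.length; i++) {
        var c = str[i];
        var prev = str[i - 1];
        if (c === "/") {
            specials.push(i);
            if (i + 1 < str.length) {
                specials.push(i + 1); // the character after the separator
            }
            lastSegmentSpecialsIndex = specials.length - 1;
            i++; // the next character has already been handled
        } else if (c === "_" || c === "." || c === "-") {
            specials.push(i);
            if (i + 1 < str.length) {
                specials.push(i + 1);
            }
            i++;
        } else if (c >= "A" && c <= "Z" && prev >= "a" && prev <= "z") {
            specials.push(i); // an uppercase following a lowercase: camelCase
        }
    }
    return { specials: specials, lastSegmentSpecialsIndex: lastSegmentSpecialsIndex };
}

var scan = findSpecials("src/search/QuickOpen.js");
// scan.specials includes the "Q" (index 11) and "O" (index 16) of QuickOpen
```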

The stringMatch function will call findSpecialCharacters if it’s not given the specials data structure at the beginning. But, as we’ll see in the section on speed, hanging on to the result of findSpecialCharacters gives us a bit of a boost.

Searching the String

stringMatch calls a function called _wholeStringSearch which is responsible for taking the string and the query and producing the list of matches. This happens in two steps:

  1. the query is run against the “last segment” (the file name)
  2. anything that’s left over from that search is matched against the part of the string before the last segment

There’s a separate function (_lastSegmentSearch) that’s responsible for trying the query against the last segment, which is not as simple as it might seem.

With a query of “qo”, it’s easy to see how that can match in the last segment of src/search/QuickOpen.js. But, imagine that I typed “sqo” instead. “sqo” is clearly still a match, but searching for “sqo” in just the last segment fails. So, it seeks out the largest match that it can get in the last segment. It would find that “qo” matches in QuickOpen.js and hunt starting at the beginning of the string for the remaining “s”.

It’s entirely possible that the entire query doesn’t match against the last segment, which means that all of the work of checking each substring of the query against the last segment was a waste. On the other hand, users will generally have a certain file in mind that they want to open and I’d be surprised if they weren’t typing any characters from the file name.
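The shrinking-query idea can be sketched like this. It’s a simplification: a plain subsequence check stands in for the real match walk that _lastSegmentSearch performs:

```javascript
// Sketch of the last-segment strategy: try the whole query against the file
// name, then peel characters off the front until something fits.
function isSubsequence(query, str) {
    var qi = 0;
    query = query.toLowerCase();
    str = str.toLowerCase();
    for (var si = 0; si < str.length && qi < query.length; si++) {
        if (str[si] === query[qi]) {
            qi++;
        }
    }
    return qi === query.length;
}

function lastSegmentSearch(query, lastSegment) {
    for (var start = 0; start < query.length; start++) {
        var tail = query.substring(start);
        if (isSubsequence(tail, lastSegment)) {
            return {
                matchedInLastSegment: tail,
                remainder: query.substring(0, start) // matched against the rest of the path
            };
        }
    }
    return null; // nothing from the query matched the file name
}

var search = lastSegmentSearch("sqo", "QuickOpen.js");
// "sqo" doesn't fit, but "qo" does, leaving "s" for the rest of the path
```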

Generating the Matches

The _generateMatchList function is the craziest bit in the file. There’s a reason it has 70 lines of comments preceding the function.

Why does it have to be so crazy? It all comes back to those preferences for which characters to search first. If I have a file called QuoteOpus.js and I search for “qo”, I want the match to hit the “Q” and the capital “O” (the special characters), not the “Q” and the lowercase “o” in “Quote”. I also want QuickOpen.js to be a better match than Quote.js, even though both of those match “qo”.

There’s a loop toward the bottom of the function that is responsible for doggedly trying everything until it finds a match that works:


    while (true) {
        // keep looping until we've either exhausted the query or the string
        while (queryCounter < query.length && strCounter < str.length && strCounter <= deadBranches[queryCounter]) {
            if (state === SPECIALS_MATCH) {
                if (!findMatchingSpecial()) {
                    state = ANY_MATCH;
                }
            }

            if (state === ANY_MATCH) {
                // we look character by character for matches
                if (query[queryCounter] === str[strCounter]) {
                    // got a match! record it, and switch back to searching specials
                    queryCounter++;
                    result.push(new NormalMatch(strCounter++));
                    state = SPECIALS_MATCH;
                } else {
                    // no match, keep looking
                    strCounter++;
                }
            }
        }

        // if we've finished the query, or we haven't finished the query but we have no
        // more backtracking we can do, then we're all done searching.
        if (queryCounter >= query.length || (queryCounter < query.length && !backtrack())) {
            break;
        }
    }

We’re going to keep looking at characters until we hit the end of the string or the end of the query. We start out looking for matches among the special characters. Failing that, we’ll start looking character-by-character through the string until we find a match among the non-special characters. Searching for “qo” in Quote.js is going to find a match at the “Q” which it’s perfectly happy with. Then, it’ll try to match the “o” against the “.” (the next special character). Nope. No luck with the “j”, which is also special because it comes after the “.”. So, then it goes back and checks out the “u” before finally finding the “o” that matches.

Finding the special character matches is pretty straightforward, because we already have a list of indexes into the string where the special characters are located. The findMatchingSpecial function uses a counter to keep track of which special matched last and then proceeds to walk through the list of specials from there.
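In sketch form (simplified from the real findMatchingSpecial):

```javascript
// Sketch of the specials walk: advance a counter through the precomputed
// special-character indexes, looking for the next special that matches the
// current query character. Simplified from the real findMatchingSpecial.
function makeSpecialsMatcher(str, specials) {
    var specialsCounter = 0;
    return function (queryChar) {
        while (specialsCounter < specials.length) {
            var index = specials[specialsCounter];
            specialsCounter++;
            if (str[index].toLowerCase() === queryChar.toLowerCase()) {
                return index; // matched; the next call resumes from here
            }
        }
        return -1; // no special matched; the caller falls back to ANY_MATCH
    };
}

// "Quote.js": specials at the "Q", the "." and the "j"
var nextSpecial = makeSpecialsMatcher("Quote.js", [0, 5, 6]);
nextSpecial("q"); // 0: the "Q" matches
nextSpecial("o"); // -1: neither "." nor "j" matches, so fall back to scanning
```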

I rather wish the story could end there: scan through the matching specials and then scan through the non-special characters when that fails. In fact, I did initially end there. But, it turned out that there were cases that I missed entirely when I did that. And that’s where the deadBranches and backtrack() come into play in the loop above.

Here’s the example pulled from the comments of _generateMatchList:

/*
 * A contrived example will help illustrate how the searching and backtracking works. It's a bit long,
 * but it illustrates different pieces of the algorithm which can be tricky. Let's say that we're
 * searching the string "AzzBzzCzdzezzDgxgEF" for "abcdex".
 *
 * To start with, it will match "abcde" from the query to "A B C D E" in the string (the spaces 
 * represent gaps in the matched part of the string), because those are all "special characters".
 * However, the "x" in the query doesn't match the "F" which is the only character left in the
 * string.
 * 
 * Backtracking kicks in. The "E" is pulled off of the match list.
 * deadBranches[4] is set to the "g" before the "E". This means that for the 5th
 * query character (the "e") we know that we don't have a match beyond that point in the string.
 *
 * To resume searching, the backtrack function looks at the previous match (the "D") and starts
 * searching in character-by-character (ANY_MATCH) mode right after that. It fails to find an
 * "e" before it gets to deadBranches[4], so it has to backtrack again.
 *
 * This time, the "D" is pulled off the match list.
 * deadBranches[3] is set to the "z" before the "D", because we know that for the "dex" part of the
 * query, we can't make it work past the "D". We'll resume searching with the "z" after the "C".
 *
 * Doing an ANY_MATCH search, we find the "d". We then start searching specials for "e", but we
 * stop before we get to "E" because deadBranches[4] tells us that's a dead end. So, we switch
 * to ANY_MATCH and find the "e".
 *
 * Finally, we search for the "x". We don't find a special that matches, so we start an ANY_MATCH
 * search. Then we find the "x", and we have a successful match.
 */

In the process of working on this, I read a bit about dynamic programming, which is an interesting topic and one of the ways in which you can solve the longest common substring problem (something I initially thought might be valuable for StringMatch). The idea that I ended up applying is that when we’re searching the string we can keep track of parts of the query that we’ve tried and found do not work past a certain point of the string. Those are the deadBranches. When we hit one, we know we need to backtrack farther.

What does backtracking look like?

    // This function implements the backtracking that is done when we fail to find
    // a match with the query using the "search for specials first" approach.
    //
    // returns false when it is not able to backtrack successfully
    function backtrack() {

        // The idea is to pull matches off of our match list, rolling back
        // characters from the query. We pay special attention to the special
        // characters since they are searched first.
        while (result.length > 0) {
            var item = result.pop();

            // nothing in the list? there's no possible match then.
            if (!item) {
                return false;
            }

            // we pulled off a match, which means that we need to put a character
            // back into our query. strCounter is going to be set once we've pulled
            // off the right special character and know where we're going to restart
            // searching from.
            queryCounter--;

            if (item instanceof SpecialMatch) {
                // pulled off a special, which means we need to make that special available
                // for matching again
                specialsCounter--;

                // check to see if we've gone back as far as we need to
                if (item.index < deadBranches[queryCounter]) {
                    // we now know that this part of the query does not match beyond this
                    // point
                    deadBranches[queryCounter] = item.index - 1;

                    // since we failed with the specials along this track, we're
                    // going to reset to looking for matches consecutively.
                    state = ANY_MATCH;

                    // we figure out where to start looking based on the new
                    // last item in the list. If there isn't anything else
                    // in the match list, we'll start over at the starting special
                    // (which is generally the beginning of the string, or the
                    // beginning of the last segment of the string)
                    item = result[result.length - 1];
                    if (!item) {
                        strCounter = specials[startingSpecial] + 1;
                        return true;
                    }
                    strCounter = item.index + 1;
                    return true;
                }
            }
        }
        return false;
    }

The basic idea here is that we see that we’ve reached a point after which our query doesn’t match the remaining bit of the string, so we rewind any matches that we found to take us back before the last deadBranches point. After we’ve done that, we hang on to our newly identified “if you reach this part of the string with that part of the query, all hope is lost” point.

_generateMatchList returns a list of SpecialMatch and NormalMatch objects. We just need to return the list of matched indexes and the type of object shows whether the index was a special character or not (which is used in scoring).
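A sketch of that shape, with simplified stand-ins for the real constructors:

```javascript
// Sketch of the match-list shape. The real SpecialMatch and NormalMatch
// constructors live in StringMatch; these are simplified stand-ins.
function SpecialMatch(index) { this.index = index; }
function NormalMatch(index) { this.index = index; }

// Matching "qo" against "Quote.js": the "Q" (index 0) is a special,
// the "o" (index 2) is an ordinary character match.
var matchList = [new SpecialMatch(0), new NormalMatch(2)];

var matchedIndexes = matchList.map(function (m) {
    return m.index;                    // which characters to highlight
});
var specialCount = matchList.filter(function (m) {
    return m instanceof SpecialMatch;  // the match type feeds into scoring
}).length;
```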

Turning Matched Characters into Ranges and a Score

The _computeRangesAndScore function does a fairly straightforward transformation of the matched characters list into a set of ranges in the string that are used in highlighting which parts matched and the final score. There’s nothing very magical here, but you can see that there are 7 different components that go into the score at this point:

    if (DEBUG_SCORES) {
        scoreDebug = {
            special: 0,
            match: 0,
            lastSegment: 0,
            beginning: 0,
            lengthDeduction: 0,
            consecutive: 0,
            notStartingOnSpecial: 0
        };
    }

You can also see that with 7 different parts to the score, being able to set a flag to see how each part adds to the whole comes in handy. Coming up with the exact numbers used in scoring was not very scientific; they came from looking at various comparisons and adjusting the parameters until the results felt like the kind of results we wanted to see.

Test-driven to Get It Right

I used test-driven development to create StringMatch (you can check out the tests), and I’m really glad that I did. As I was working out the algorithm, I was able to keep iterating to improve it with the confidence that previous cases that worked well were continuing to work well.

Speed

On the one hand, we’re just matching up characters. But, as you can tell from the preceding sections, we’re doing a lot of character comparisons. We keep trying to fit the query string into the subject string in different ways until we find the one that fits best. Luckily, computers are fast and JavaScript just-in-time compilers are pretty good, so this code has generally performed well enough for now.

At this point, we’ve only picked up the low hanging fruit in making StringMatch performance better. In profiling, I found that _findSpecialCharacters was taking around 8% of the total time when searching normally and that only needs to run once per string that we’re testing against, rather than for each character typed, making it easy to cache during the lifetime of a search.

Another simple optimization we’ve implemented takes advantage of the fact that items are eliminated as possible matches as the user types. As long as the user is adding more characters to the query, anything that didn’t match previously is not going to be a match now and we don’t even need to check.

The StringMatcher class implements these optimizations in a neatly compartmentalized way. Brackets creates a StringMatcher for a single query and throws it away at the end, so that it does not need to worry about stale caches.
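The non-match cache is easy to sketch. This is a simplification of what StringMatcher actually keeps track of, with a trivial substring matcher standing in for stringMatch:

```javascript
// Sketch of the non-match cache: once a string fails to match a query, it
// can't match any query that merely extends that query, so skip it entirely.
function Matcher(matchFn) {
    this._matchFn = matchFn;
    this._noMatch = {}; // string -> a query that already failed for it
}

Matcher.prototype.match = function (str, query) {
    var failedQuery = this._noMatch[str];
    // If the new query starts with one that already failed, don't bother.
    if (failedQuery !== undefined && query.indexOf(failedQuery) === 0) {
        return null;
    }
    var result = this._matchFn(str, query);
    if (!result) {
        this._noMatch[str] = query;
    }
    return result;
};

var calls = 0;
function contains(str, query) {
    calls++;
    return str.toLowerCase().indexOf(query.toLowerCase()) !== -1 ? { str: str } : null;
}

var matcher = new Matcher(contains);
matcher.match("QuickOpen.js", "zz");  // runs the matcher; fails and is cached
matcher.match("QuickOpen.js", "zzq"); // skipped: "zzq" extends the failed "zz"
// calls is still 1
```

Throwing the matcher away at the end of each query session is what keeps this cache from ever going stale.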

StringMatch all the things!

StringMatch is also used in Brackets’ Go To Definition feature (which finds functions in your JavaScript files, for example). StringMatch is better than regex and substring searching when you’ve got an idea of the structure of the string being searched and what the users will expect the results to look like.

Brackets is open source! Have ideas on how to improve StringMatch or anything else? Join in the fun!

(Special thanks to Peter Flynn for a very thorough review of the StringMatch code which resulted in a good deal of cleanup in both the code and algorithm. Thanks also to Randy Edmunds for providing feedback on the first draft of this post.)

© 2014 Blue Sky On Mars
