Using Design and Technology to Streamline Ediscovery Document Review
One of the five core company values here at Everlaw is respect for users. That means we work tirelessly to optimize the experience for the people using our products, bringing the superior software you’re used to in your consumer life—the blink-and-you’ll-miss-it speed of Google and the blissful usability of Apple products—to your litigation toolkit.
What Apple, Frog, and other design leaders can tell you, however, is that the best designs often belie the enormous technical effort necessary to bring them to life. The same is true at Everlaw, where we’ve designed the world’s fastest and most intuitive document review interface on a foundation of deep technical expertise and fanatical attention to detail.
For example, we recently optimized the coding portion of the document review window. We wanted to make sure that reviewers could quickly and easily annotate documents with their desired codes without inundating them with too many options. We also wanted to make the experience graceful for every type of case, from ones with just a few codes to ones with dozens of categories and scores of codes.
Senior Engineer Zach Travis led the project. We sat down with Zach to ask him about one of the most innovative features we added with this redesign: automated category suggestions based on recent coding behavior.
Zach, what were the overarching goals for this redesign?
We wanted the review process to be as streamlined as possible for reviewers who look through hundreds of documents a day. We wanted to make common actions, such as adding a specific code, very easy (1 click/key). At the same time, there are a lot of options available to reviewers, and all of them need to remain available in full, so we needed to balance the two goals of simplicity and completeness. We also wanted everything to be keyboard and mobile friendly!
Zach’s first task was to understand how people used the current interface, so he used our activity-tracking system to capture and analyze over 50,000 coding events on Everlaw.
What did you learn about user activity from this data?
We had some ideas about how our users were reviewing docs, but we wanted to see if they were true. Out of those 50,000 coding events, I collected stats for each user (one way to compute these is sketched after the list):
How many docs they looked at
The proportions of docs where:
they coded it and it wasn’t coded before
they coded it and it was already coded
they didn’t code it and it wasn’t coded
they didn’t code it and it was already coded
The categories they were applying in a given view and how often these combinations occurred
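To make this aggregation concrete, here is a minimal Python sketch of the per-user stats described above. The event field names (user, already_coded, applied_categories) are hypothetical stand-ins for whatever Everlaw’s activity-tracking system actually records.

```python
from collections import Counter, defaultdict

def summarize_user_activity(events):
    """Aggregate per-user viewing/coding stats from raw coding events.

    `events` is an iterable of dicts like (field names are illustrative):
      {"user": "alice", "doc": 42, "already_coded": False,
       "applied_categories": ["Responsiveness", "Privilege"]}
    """
    stats = defaultdict(lambda: {
        "docs_viewed": 0,
        "coded_uncoded": 0,    # coded a doc that wasn't coded before
        "coded_coded": 0,      # added codes to an already-coded doc
        "skipped_uncoded": 0,  # left an uncoded doc uncoded
        "skipped_coded": 0,    # viewed an already-coded doc without coding
        "category_combos": Counter(),  # which category combinations were applied
    })
    for e in events:
        s = stats[e["user"]]
        s["docs_viewed"] += 1
        applied = bool(e["applied_categories"])
        if applied and not e["already_coded"]:
            s["coded_uncoded"] += 1
        elif applied:
            s["coded_coded"] += 1
        elif not e["already_coded"]:
            s["skipped_uncoded"] += 1
        else:
            s["skipped_coded"] += 1
        if applied:
            s["category_combos"][frozenset(e["applied_categories"])] += 1
    # Convert the four outcome counts into proportions of docs viewed.
    for s in stats.values():
        n = s["docs_viewed"]
        for key in ("coded_uncoded", "coded_coded",
                    "skipped_uncoded", "skipped_coded"):
            s[key] /= n
    return stats
```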
This info confirmed a few hypotheses. There were generally two classes of users: those who viewed mostly coded docs and didn’t do any coding themselves (admins), and those who mostly viewed and coded uncoded docs (reviewers). It was pretty rare for a coded document to have more codes added by a later viewer. Coding was also mostly concentrated in one or two categories or combinations thereof, but these varied by user. The full history for a given user was also useful because I could see how often the set of categories used changed between docs (e.g. if two categories were both very common, are reviewers switching between them on every doc, or are they using mostly one, then mostly the other?).
How did this inform your design of the review interface?
We saw an opportunity to optimize for these common review patterns by “suggesting” categories, putting them in the summary bar without any codes selected. The data also helped us answer a host of related questions: When should we give suggestions? How many categories are in common use? If it’s a lot, will suggesting one or two be useful or annoying? Is this true for all users? For all cases?
We used our analysis to nail down these design decisions. If a doc already had codes, we wouldn’t suggest any new ones, since it’s unlikely the user would be adding more (and maybe they’re an admin-type reviewer). If there weren’t any codes, even suggesting one or two would often be very useful. But which ones? We tested different suggestion approaches (e.g. the last categories you used, the categories you use most frequently across all docs, some combination of the two measures) by taking a user’s entire viewing/coding history on a case and analyzing how each approach performed given the codes the user actually applied next.
Once we had an initial design and algorithm based on this analysis, we also did some hands-on user testing to get feedback and tweak the design. We made sure to keep the less popular codes out of the way but still accessible with a single click.
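Here is a hedged sketch of how such a backtest might look: replay a user’s chronological history and score a strategy by how often its top suggestions cover the categories the user actually applied. The two strategy functions are illustrative examples of the recency and frequency approaches Zach mentions, not Everlaw’s actual algorithm.

```python
from collections import Counter

def hit_rate(history, strategy, k=2):
    """Replay a user's coding history (a chronological list of sets of
    categories applied per doc) and measure how often the top-k suggested
    categories cover what the user actually applied next."""
    hits = trials = 0
    for i, actual in enumerate(history):
        if not actual:
            continue  # no coding on this doc, nothing to predict
        suggested = set(strategy(history[:i])[:k])
        trials += 1
        if actual <= suggested:  # every applied category was suggested
            hits += 1
    return hits / trials if trials else 0.0

def most_recent(prior):
    """Suggest the categories used on the most recently coded doc."""
    for cats in reversed(prior):
        if cats:
            return sorted(cats)
    return []

def most_frequent(prior):
    """Suggest categories ranked by overall frequency so far."""
    counts = Counter(c for cats in prior for c in cats)
    return [c for c, _ in counts.most_common()]
```

Comparing, say, `hit_rate(history, most_recent)` against `hit_rate(history, most_frequent)` per user would show which approach (or a blend of the two) should drive the suggestions.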
What impact has this had for Everlaw users?
Currently, for 85% or more of the documents in a given case, our suggestion mechanism ensures that the categories the user wants are already displayed, so we’ve made a huge improvement to the average reviewer’s workflow. At the same time, by emphasizing these best guesses at the expense of codes the user is much less likely to use, we keep the interface clutter-free.
We continue to make small changes as we get feedback and see how the review interface is used.
Our commitment to user interface improvement doesn’t end at review, of course. We’re working to improve the entire litigation workflow by making the most cutting-edge technology accessible with elegant and intuitive design. Have a pet peeve or inefficiency you’d like us to address? Let us know at contact@everlaw.com!