Whitepaper

Early Lessons in Designing Gen AI

I recently gave a talk where I described how we've finally been making some real progress in the design of legal tech products. At the end of that talk, however, I cautioned that the proliferation of LLMs and Gen AI seems to have brought about a bit of a reversion to the old days of designing for the tech and not the users. This has brought me out of my multi-year blog-writing hibernation to say... please stop that.

LLMs, as we know, are very good at language-based tasks and *can* take text input in order to generate outputs. As a result, it is very tempting to drop in an open chat UI as the way users provide inputs to, and interact with, the LLM. Chat UIs are extremely simple to build, require almost no thought about the user experience, and leave the heavy lifting to prompt engineering and nailing the accuracy of the output. I've seen a lot of organizations in legal launch chat UIs, likely assuming this was the proper UI without giving much thought to the user or the product. So I thought we'd offer some of the lessons we've learned designing a number of these products for clients, and share one particular design journey that took us from an easy UI to a hard, thoughtful UX that accommodates a range of users and needs, and ultimately to a highly valuable product for users.

A Legal Gen AI Design Journey

When Litera first came to us about designing their new Firm Intelligence solutions -- two coordinated products -- we were really excited about the possibilities. These new solutions would, thanks to a proprietary LLM, allow firms to query their deal data and gain advanced insights in seconds. Knowing the team wanted to move fast and get a proof of concept out the door, we went to work and provided wireframes of a very basic chatbot-style interface overnight to get the engineering team moving.

Very simplified landing page option

We then learned that we had more time for careful thought and consideration around the UX of this product, and given the power of this tool and the newness of the experience, we wanted to provide the user with a lot more support. The following shows the progression of the concept.

Simple landing page with more guidance for the user

We realized users would need even more guidance, so we explored a "Guided Prompt" option on the landing page.

Based on early testing with clients, we learned that, for deal-critical work, users would be more likely to prefer the comprehensive, surgical, high-powered approach of the filtering experience to the loose and open-ended approach of the chat interface. Therefore, we created an entry point that allows a user to select either option, prioritizing the opportunity to leverage deal points data to answer any deal-related question quickly and accurately.

The user now has two distinct options for querying deal data

If a user wants a more comprehensive view of their firm's deals, they have a highly visual user interface that represents their firm's deal point data and does not require any guesswork on their part.

Note that they can still move to the natural language experience from here as well.

A user who wants to use the natural language chat will be taken into that experience.
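To make this dual entry point concrete, here is a minimal TypeScript sketch of the idea, under the assumption of a single query layer sitting behind both experiences. Every name here (DealQuery, callLlm, and so on) is hypothetical and is not drawn from Litera's actual product or API.

```typescript
// Hypothetical types -- a sketch of how both entry points could resolve to one query layer.

interface DealQuery {
  practiceArea?: string;        // e.g. "M&A"
  dealPoint?: string;           // e.g. "indemnification cap"
  closedAfter?: Date;
  freeTextCriteria?: string;    // anything the structured filters can't express
}

interface DealResult {
  dealId: string;
  summary: string;
  sourceDocumentIds: string[];  // so the user can trace every answer back to a document
}

// Entry point 1: the visual filtering experience builds a DealQuery directly from UI controls.
function queryFromFilters(filters: DealQuery): DealQuery {
  return filters; // no LLM required; the user's selections are already structured
}

// Entry point 2: the chat experience asks an LLM to translate a prompt into the same DealQuery.
// `callLlm` is a stand-in for whatever model endpoint the product uses.
async function queryFromPrompt(
  prompt: string,
  callLlm: (instruction: string) => Promise<string>
): Promise<DealQuery> {
  const raw = await callLlm(
    `Translate this request into a JSON DealQuery: ${prompt}`
  );
  return JSON.parse(raw) as DealQuery; // real code would validate the model's output here
}

// Either way, the same backend search runs against the firm's deal point data.
async function runDealQuery(
  query: DealQuery,
  search: (q: DealQuery) => Promise<DealResult[]>
): Promise<DealResult[]> {
  return search(query);
}
```

The point of the sketch is that the filter UI and the chat UI are just two ways of producing the same structured query; choosing between them, or offering both, is a UX decision rather than a technical constraint.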

We can't lose sight of the fact that users still want what they've always wanted -- results. They may be able to get more useful, actionable outputs faster now thanks to new tech, but you can't force them to perform wizardry to get there. Our job as designers and developers of software is to select the best interface for the job. Just because a chat interface is possible doesn't mean it is the optimal way to bring the power of an LLM to your users.

Some Initial Rules for Designing Gen AI Products

We've had the opportunity to work on a number of LLM/Gen AI products here at T&P, and as such have developed a few core theses that we look forward to continuing to test and refine as we get more client products out to market and in front of users:

  1. Don't assume a chat UI is the right UI.
  2. If a chat UI is the right UI, you need to accommodate a range of users, from novice to power users who are more comfortable with prompting.
  3. In legal, for the foreseeable future, you should never expect that a user will be "trained" to write good prompts.
  4. Don't dump users into an AI experience without adequate (human) support.
  5. Consider whether generated content outputs should be visual.
  6. Users need to know what the source of the data is, and need to be able to access it and, ideally, verify it (see the sketch after this list).
  7. Users need to know where their inputs will be fed.
  8. There are many use cases where the user should know the confidence score of the output.
  9. It is not one-and-done; AI-based solutions require ongoing monitoring and updating, more so than non-AI-based software solutions.
  10. Users should be able to provide feedback on the quality of the results they're getting.
  11. AI is not always the right technology for the problem at hand.
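Several of these rules, particularly 6, 8, and 10, are really claims about what information should travel with every generated answer. Here is a minimal, hypothetical TypeScript sketch of what that could look like in a response payload; the shape and field names are ours for illustration, not from any specific product.

```typescript
// A hypothetical shape for a generated answer that honors rules 6, 8, and 10:
// sources the user can open and verify, a confidence signal, and a feedback hook.

interface SourceReference {
  documentId: string;
  title: string;
  url: string;                  // lets the user access and verify the underlying document (rule 6)
}

interface GeneratedAnswer {
  answerId: string;
  text: string;
  sources: SourceReference[];   // rule 6: where the answer came from
  confidence: number;           // rule 8: a 0-1 score, surfaced when the use case calls for it
}

interface AnswerFeedback {
  answerId: string;
  helpful: boolean;             // rule 10: a lightweight quality signal from the user
  comment?: string;
}

// Collecting feedback is also what makes rule 9 workable: it gives the team
// something concrete to monitor and act on after launch.
function recordFeedback(feedback: AnswerFeedback, store: AnswerFeedback[]): void {
  store.push(feedback);
}
```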

We anticipate that our understanding of these concepts will change, so we'll come back to this list as we get more products out into the market and the technology evolves.