Generative AI considerations in aiR for Case Strategy

Generative AI technology, specifically large language models (LLMs), is central to the capabilities of aiR for Case Strategy. While powerful, these technologies carry risks, and it’s important that you understand their limitations as you use the product.

Broadly speaking, we expect users of aiR for Case Strategy to always have a human overseeing the work produced by the AI. In particular, it is possible for the AI to do the following:

  • Omit something important, such as failing to extract a key fact from a document.
  • State something incorrect about an important matter, such as giving a low score to a key fact, or describing part of the case inaccurately in a deposition outline.
  • Fabricate something important, such as creating a fact that is not present in the document.

While our testing has produced very little evidence of these behaviors, we cannot be certain that they will never occur, and we want you to act with appropriate care.

Further system limitations are explained in more detail below.

Non-determinism

Generative AI produces somewhat different responses even when asked the same question. In aiR for Case Strategy, this manifests in a few ways:

  • The number of facts generated from a document can differ from run to run.
  • The wording of facts can differ from run to run.
  • The score, rationale, and other descriptions related to the facts can differ from run to run.
  • Witness summaries and deposition outlines will differ across runs, even when the input facts list and the prompt criteria are the same.
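
The product does not expose its model settings, but the effect is the same as sampling from any generative model at a nonzero temperature. The toy sketch below is purely illustrative and not part of the product; it shows how sampling alone causes run-to-run variation in the number and wording of outputs:

```python
import random

def generate_facts(document_text, seed=None):
    """Toy stand-in for an LLM call: sampling makes every run differ."""
    rng = random.Random(seed)  # a real LLM samples tokens in a similar spirit
    candidate_facts = [
        "Jordan emailed the draft contract on May 3.",
        "The draft contract was sent in early May.",
        "A contract draft circulated among the parties.",
    ]
    # Each run may select a different number and wording of facts.
    count = rng.randint(1, len(candidate_facts))
    return rng.sample(candidate_facts, count)

same_document = "Email: Jordan sent the draft contract on May 3."
print(generate_facts(same_document))  # run 1
print(generate_facts(same_document))  # run 2: may disagree on count and wording
```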

Possibility of hallucinations

aiR for Case Strategy uses techniques to reduce the probability of what are known as hallucinations. For our purposes, a hallucination occurs when the LLM outputs something that is not grounded in the document from which the output is generated. Although we have not witnessed this behavior in rigorous testing, these events are still possible. Here are some issues that could potentially occur:

  • The AI could create facts that do not reflect what is in the document.
  • Details of facts, such as the fact date, might differ from what is actually in the document.
  • Witness summaries or deposition outlines could reference things that are not in the input documents or facts.

We strongly recommend that an attorney or administrator carefully review the outputs to verify their accuracy as part of the overall workflow.
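
One lightweight way to support that review is a grounding spot-check: confirm that verifiable details in each generated fact, such as dates, actually appear in the source document's extracted text. The sketch below is a hypothetical aid, not a product feature, and the field names are assumptions:

```python
def flag_ungrounded_facts(facts, extracted_text):
    """Flag facts whose stated date never appears in the source text.

    `facts` is a list of dicts like {"description": ..., "date": ...};
    these field names are illustrative, not the product schema.
    """
    text = extracted_text.lower()
    flagged = []
    for fact in facts:
        date = (fact.get("date") or "").lower()
        # A date the document never mentions is a common hallucination sign.
        if date and date not in text:
            flagged.append((fact, "date not found in extracted text"))
    return flagged

facts = [{"description": "The contract was signed.", "date": "May 3, 2021"}]
print(flag_ungrounded_facts(facts, "The contract was executed on June 12, 2021."))
```

Exact string matching like this is crude; a human reviewer should still confirm every fact, flagged or not.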

Extracted text only

Facts are generated based on the extracted text of documents. Any information that is stored in metadata or images, or that is otherwise unavailable to someone reading the extracted text, cannot be used to inform generated facts. We recommend that you avoid including documents with misleading or empty extracted text when generating facts.
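
If you want to screen such documents out programmatically before a run, a simple length check on the extracted text often catches empty or image-only files. The threshold and data shapes below are assumptions for illustration:

```python
MIN_TEXT_LENGTH = 25  # illustrative threshold; tune for your data

def worth_submitting(extracted_text):
    """Skip documents whose extracted text is missing or trivially short,
    e.g. image-only files where OCR produced little or nothing."""
    return bool(extracted_text) and len(extracted_text.strip()) >= MIN_TEXT_LENGTH

documents = {
    "DOC-001": "Hi team, attached is the signed contract for review.",
    "DOC-002": "   ",  # image-only file with no usable extracted text
}
to_submit = [doc_id for doc_id, text in documents.items() if worth_submitting(text)]
print(to_submit)  # ['DOC-001']; DOC-002 gives the model nothing to read
```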

Fact fields used for generating case documents

To generate witness summaries and deposition outlines, aiR for Case Strategy uses only the main fields of the submitted facts, such as description, issues, and date, along with the document summaries for the documents linked to those facts. In particular, metadata and extracted text from those documents, apart from the document summary, are not currently used to construct case documents.
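
As a hypothetical illustration of what does and does not reach the model when a case document is built (the field names are assumptions, not the real schema):

```python
fact = {
    "description": "Taylor approved the budget increase.",
    "issues": ["Damages"],
    "date": "2021-06-14",
}
linked_document = {
    "summary": "Email thread approving the Q3 budget.",   # used
    "metadata": {"custodian": "Taylor"},                   # not used
    "extracted_text": "Full text of the email thread...",  # not used
}

# Only the fact's main fields and the linked document's summary are combined.
generation_input = {**fact, "document_summary": linked_document["summary"]}
```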

No memory from document to document

Documents are sent to the AI for fact generation one at a time. Each document is processed using the instructions that Relativity provides and the information you specified in the prompt criteria. For this reason, information that is present only in another document will not be used in the formation of a fact. For example, a document might reference only an early code name of a project that is relevant to the case. If nothing in the prompt criteria defines that early code name as equivalent to the later project name, then aiR for Case Strategy will not be able to make the connection, and it could look past the document even though the project itself is relevant to the case. If the descriptions were similar enough, aiR for Case Strategy might still catch it, but in general it's good to establish aliases like this in the prompt criteria that you provide.
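
For example, you could establish the alias directly in your prompt criteria. The wording and project names below are invented for illustration:

```python
# Hypothetical prompt criteria text establishing a code-name alias.
prompt_criteria = (
    "The project at issue was originally code-named 'Bluejay' and was later "
    "renamed 'Project Meridian'. Treat references to either name as "
    "references to the same project."
)
```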

Another effect of this behavior is that similar or near-duplicate input documents can yield duplicate facts, because the LLM does not know that the same fact has already been found in another document.
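
If duplicate facts become a problem in your review, a simple post-generation pass can collapse exact repeats after normalizing the wording. This is a sketch of one possible cleanup step, not a product feature, and real near-duplicates would need fuzzier matching:

```python
import re

def dedupe_facts(facts):
    """Collapse facts whose normalized descriptions match exactly."""
    seen = set()
    unique = []
    for fact in facts:
        # Lowercase and strip punctuation so trivial rewordings collide.
        key = re.sub(r"\W+", " ", fact["description"]).strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(fact)
    return unique

facts = [
    {"description": "Jordan emailed the draft contract on May 3."},
    {"description": "Jordan emailed the draft contract on May 3"},
]
print(len(dedupe_facts(facts)))  # 1: the two wordings normalize identically
```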