Best practices

aiR for Review works best after fine-tuning the Prompt Criteria. Analyzing just a few documents at first, comparing the results to human coding, and then adjusting the Prompt Criteria as needed yields more accurate results than diving in with a full document set.

Tips for writing Prompt Criteria

The Prompt Criteria you enter often align with a traditional review protocol or case brief in that they describe the matter, the entities involved, and what is relevant to the legal issues at hand.

When writing Prompt Criteria, use natural language to describe why particular types of documents should be considered relevant. Write the criteria as though you were describing the documents to a human reviewer.

  • Write clearly—use active voice, natural speaking phrases and terms, and explicit language.
  • Be concise—less is more. Summarize lengthy text or include only key passages from a long review protocol. The Prompt Criteria have an overall length limit of 15,000 characters (a quick local check is sketched after this list).
  • Simply describe the case—do not give commands, such as "you will review XX."
  • Use positive phrasing—phrase instructions in a positive way when possible. Avoid negatives ("not" statements) and double negatives.
  • Use natural writing format styles—use whatever writing format makes the most sense to a human reader. For example, bullet points might be useful for the People and Aliases section, but paragraphs might make sense in another section.
  • Is it important?—ask yourself whether each criterion will affect the results; include it only if it is essential.
  • Avoid legal jargon or explanations—for example, don't use "including but not limited to" or "any and all," and don't include explanations of the law.
  • Use ALL CAPS—capital letters help identify essential information for the model to focus on. For example, use "MUST" instead of "should."
  • Identify internal jargon and phrases—the large language model (LLM) has essentially "read the whole Internet." It understands widely used slang and abbreviations, but it does not necessarily know jargon or phrases that are internal to an organization.
  • Identify aliases, nicknames, and uncommon acronyms—for example, a nickname for William may be Bill, or BT may be an abbreviation for the company name Big Thorium.
  • Identify unfamiliar emails—normal company email addresses do not need to be identified, but unfamiliar ones should be. For example, Dave Smith may use both Dave.Smith@AcmeCompany.com and skippy78@gmail.com.
  • Iterate, iterate, iterate—test the Prompt Criteria, review the results, and adjust the criteria to obtain more accurate predictions.
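
The length limit and some of the phrasing tips above can be checked locally before pasting criteria into the dialog. The following is a minimal sketch of a hypothetical helper script, not part of aiR for Review; the file name draft_criteria.txt and the phrase list are assumptions for illustration.

```python
# Hypothetical helper (not part of aiR for Review): check draft Prompt Criteria
# text against the 15,000-character limit and a few discouraged phrases.

DISCOURAGED_PHRASES = [
    "including but not limited to",
    "any and all",
    "you will review",
]

MAX_LENGTH = 15_000  # overall Prompt Criteria length limit


def check_criteria(text: str) -> list[str]:
    """Return a list of warnings for a draft Prompt Criteria string."""
    warnings = []
    if len(text) > MAX_LENGTH:
        warnings.append(f"Criteria is {len(text):,} characters; the limit is {MAX_LENGTH:,}.")
    lowered = text.lower()
    for phrase in DISCOURAGED_PHRASES:
        if phrase in lowered:
            warnings.append(f'Consider rephrasing: contains "{phrase}".')
    return warnings


if __name__ == "__main__":
    draft = open("draft_criteria.txt", encoding="utf-8").read()
    for warning in check_criteria(draft):
        print(warning)
```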

Refer to the helper examples in the Prompt Criteria text boxes of the dialogs for additional guidance on entering criteria in each field.

For more guidance on prompt writing, see the prompt-writing resources on the Community site.

Prompt criteria iteration sample documents

Before setting up the aiR for Review project, create a saved search that contains a small sample of the documents you want reviewed.

For best results:

  • Include roughly 50-100 test documents that are a mix of relevant, not relevant, and challenging documents (one way to draw such a sample is sketched after this list).
  • Make sure they highlight all the key features of your relevance criteria.
  • Have human reviewers code the documents in advance.
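
If the pre-coded documents are available in an export (for example, a CSV with a document identifier and the human coding decision), a balanced sample can be drawn by picking from each coding category. The sketch below is a hypothetical illustration; the file name, column names (DocID, HumanCoding), and sample sizes are assumptions, not product fields or behavior. Known challenging documents would still be hand-picked and added to the saved search.

```python
import csv
import random
from collections import defaultdict

# Hypothetical illustration: draw a balanced iteration sample from a CSV export
# of already-coded documents. File name, column names, and counts are assumptions.
PER_CATEGORY = {"Relevant": 35, "Not Relevant": 35}  # roughly 70 documents total

by_code = defaultdict(list)
with open("coded_documents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_code[row["HumanCoding"]].append(row["DocID"])

random.seed(42)  # reproducible sample
sample = []
for code, count in PER_CATEGORY.items():
    pool = by_code.get(code, [])
    sample.extend(random.sample(pool, min(count, len(pool))))

print(f"Selected {len(sample)} documents for the iteration sample.")
with open("iteration_sample_ids.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(sample))
```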

For more information about choosing documents for the sample, see Selecting a Prompt Criteria Iteration Sample for aiR for Review on the Community site.

See Creating or editing a saved search for details about saved searches.

Prompt criteria iteration workflow

We recommend the following workflow for crafting Prompt Criteria:

  1. For your first analysis, run the Prompt Criteria on a saved search of 50-100 test documents that are a mix of relevant, not relevant, and challenging documents.
  2. Compare the results to human coding. In particular, look for documents that the application coded differently than the human reviewers did and investigate possible reasons, such as unclear instructions, an acronym or code word that needs to be defined, or other blind spots in the Prompt Criteria (a comparison sketch follows these steps).
  3. Tweak the Prompt Criteria to adjust for blind spots.
  4. Repeat steps 1 through 3 until the application predicts coding decisions accurately for the test documents.
  5. Test the Prompt Criteria on a sample of 50 more documents and compare results. Continue tweaking and adding documents until you are satisfied with the results for a diverse range of documents.
  6. Finally, run the Prompt Criteria on a larger set of documents.
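
The comparison in step 2 can be done in the application, but if you export the results, a small script can list the disagreements to investigate. The sketch below is a hypothetical illustration; the file name and column names (DocID, HumanCoding, aiRPrediction) are assumptions about an export layout, not actual aiR for Review fields or APIs.

```python
import csv
from collections import Counter

# Hypothetical illustration: flag documents where the aiR prediction differs
# from the human coding decision. The export file and columns are assumptions.
disagreements = []
tally = Counter()

with open("iteration_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        human, predicted = row["HumanCoding"], row["aiRPrediction"]
        tally["agree" if human == predicted else "disagree"] += 1
        if human != predicted:
            disagreements.append((row["DocID"], human, predicted))

total = sum(tally.values())
if total:
    print(f"Agreement: {tally['agree']}/{total} ({tally['agree'] / total:.0%})")

# Review each disagreement for unclear instructions, undefined acronyms,
# or other blind spots in the Prompt Criteria.
for doc_id, human, predicted in disagreements:
    print(f"{doc_id}: human={human}, aiR={predicted}")
```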

aiR for Review sees only the extracted text of a document. It does not see non-text elements such as advanced formatting, embedded images, or videos. We do not recommend using aiR for Review on documents such as images, videos, or spreadsheets with heavy formulas. Instead, use it on documents whose extracted text accurately represents their content and meaning.
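
One way to keep such documents out of an analysis is to screen the candidate set by file type and extracted text before building the saved search. The sketch below is a hypothetical illustration using a metadata export; the file name, column names, and extension list are assumptions, not product behavior.

```python
import csv

# Hypothetical illustration: screen a metadata export for file types whose
# extracted text is unlikely to represent the document's content.
SKIP_EXTENSIONS = {".jpg", ".png", ".gif", ".mp4", ".mov", ".xlsx", ".xlsm"}

suitable, flagged = [], []
with open("document_metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        name = row["FileName"].lower()
        ext = ("." + name.rsplit(".", 1)[-1]) if "." in name else ""
        has_text = int(row.get("ExtractedTextLength") or 0) > 0
        if ext in SKIP_EXTENSIONS or not has_text:
            flagged.append(row["DocID"])
        else:
            suitable.append(row["DocID"])

print(f"{len(suitable)} documents suitable for text-based analysis; "
      f"{len(flagged)} flagged for other workflows.")
```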