

The Prompt Criteria are a set of inputs that give aiR for Review the context it needs to understand the matter and evaluate each document. Developing the prompt criteria is how you train aiR for Review, your "reviewer," much as you would train a human reviewer. See Best practices for tips and workflow suggestions.
Depending on which type of analysis you chose during set up, you will see a different set of tabs on the left-hand side of the aiR for Review dashboard. The Case Summary tab displays for all analysis types.
When you start to write your first prompt criteria, the fields contain grayed-out helper text that shows examples of what to enter. Use it as a guideline for crafting your own entries.
You can also build prompt criteria from existing case documents, like requests for production or review protocols, by using the prompt kickstarter feature. See Using prompt kickstarter for more information.
For more information on how prompt versioning works and how versions affect the Viewer, see Prompt criteria versioning.
Additional resources on prompt writing are available on the Community site.
The tabs that appear on the Prompt Criteria panel depend on the analysis type you selected during set up. Refer to Setting up the project for more information.
Use the sections below to enter information in the necessary fields.
The set of Prompt Criteria has an overall length limit of 15,000 characters.
The Case Summary gives the Large Language Model (LLM) the broad context surrounding a matter. It includes an overview of the matter, people and entities involved, and any jargon or terms that are needed to understand the document set.
This tab appears regardless of the Analysis Type selected during set up.
Limit the Case Summary to 20 or fewer sentences overall, and to 20 or fewer entries each for People and Aliases, Noteworthy Organizations, and Noteworthy Terms.
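If you draft prompt criteria outside of RelativityOne before pasting them into these fields, a quick pre-check can catch limit overruns early. The following is a minimal illustrative sketch in Python, not part of Relativity: the function and parameter names are hypothetical, while the limits (15,000 characters overall, 20 sentences, and 20 entries per list) are the ones documented above.

```python
# Illustrative pre-check for the documented aiR for Review limits.
# Hypothetical helper, not a Relativity API: the 15,000-character total,
# 20-sentence Case Summary, and 20-entry list limits come from this page.
import re

MAX_TOTAL_CHARS = 15_000
MAX_SUMMARY_SENTENCES = 20
MAX_LIST_ENTRIES = 20

def check_prompt_criteria(case_summary: str,
                          people_and_aliases: list[str],
                          organizations: list[str],
                          terms: list[str],
                          other_sections: list[str]) -> list[str]:
    """Return a list of warnings for any documented limit that is exceeded."""
    warnings = []

    # Rough sentence count: split on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s+", case_summary.strip()) if s]
    if len(sentences) > MAX_SUMMARY_SENTENCES:
        warnings.append(f"Case Summary has {len(sentences)} sentences "
                        f"(limit {MAX_SUMMARY_SENTENCES}).")

    for name, entries in [("People and Aliases", people_and_aliases),
                          ("Noteworthy Organizations", organizations),
                          ("Noteworthy Terms", terms)]:
        if len(entries) > MAX_LIST_ENTRIES:
            warnings.append(f"{name} has {len(entries)} entries "
                            f"(limit {MAX_LIST_ENTRIES}).")

    all_entries = people_and_aliases + organizations + terms
    total_chars = (len(case_summary)
                   + sum(len(e) for e in all_entries)
                   + sum(len(s) for s in other_sections))
    if total_chars > MAX_TOTAL_CHARS:
        warnings.append(f"Prompt criteria total {total_chars} characters "
                        f"(limit {MAX_TOTAL_CHARS}).")

    return warnings
```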
Fill out the following:
Depending on which Analysis Type you chose when setting up the project, the remaining tabs will be Relevance, Key Documents, or Issues. Refer to the appropriate tab section below for more information on filling out each one.
This tab defines the fields and criteria used for determining if a document is relevant to the case. It appears if you selected Relevance or Relevance and Key Documents as the Analysis Type during setup.
Fill out the following:
For best results when writing the Relevance Criteria:
This tab defines the fields and criteria used for determining if a document is "hot" or key to the case. It appears if you selected Relevance and Key Documents as the Analysis Type during setup.
Fill out the following:
This tab defines the fields and criteria used for determining whether a document relates to a set of specific topics or issues. It appears if you selected Issues as the Analysis Type during setup.
Fill out the following:
For best results when writing the Choice Criteria:
aiR for Review's prompt kickstarter enables you to efficiently create a project's set of Prompt Criteria from existing case documents, such as requests for production, review protocols, complaints, or case memos. Upload up to five documents with a combined character count of up to 150,000, and aiR for Review analyzes them to complete the relevant prompt criteria, letting you start a new project with minimal effort. See Job capacity and size limitations for more information on document and prompt limits.
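Before uploading, it can help to confirm that your source documents fit within the kickstarter limits. The sketch below is a hypothetical local pre-check, assuming plain-text copies of the documents are available on disk; the five-document and 150,000-character limits are the ones stated above, and the function and file names are only examples.

```python
# Hypothetical local pre-check for prompt kickstarter source documents.
# Assumes plain-text copies of the files; the five-document and
# 150,000-character limits are documented limits, the code is illustrative.
from pathlib import Path

MAX_DOCUMENTS = 5
MAX_TOTAL_CHARS = 150_000

def check_kickstarter_sources(paths: list[str]) -> None:
    if len(paths) > MAX_DOCUMENTS:
        raise ValueError(f"{len(paths)} documents selected; "
                         f"kickstarter accepts up to {MAX_DOCUMENTS}.")

    total = 0
    for p in paths:
        text = Path(p).read_text(encoding="utf-8", errors="ignore")
        total += len(text)
        print(f"{p}: {len(text):,} characters")

    print(f"Total: {total:,} characters (limit {MAX_TOTAL_CHARS:,})")
    if total > MAX_TOTAL_CHARS:
        raise ValueError("Combined character count exceeds the kickstarter limit.")

# Example usage with hypothetical file names:
# check_kickstarter_sources(["request_for_production.txt", "review_protocol.txt"])
```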
You can repeat the kickstarter process as needed to refine the prompt criteria before running the first analysis job. Once the analysis begins, the Draft with AI option is disabled.
The large language model (LLM) that prompt kickstarter uses depends on aiR for Review's regional availability. For more information, refer to Regional availability of aiR for Review.
To use prompt kickstarter:
If two users edit the same prompt criteria version at the same time, the changes saved last overwrite the other user's changes. Because of this, we recommend having only one user edit a project's prompt criteria at a time. You may find it helpful to define separate roles for users when iterating on prompt changes.
To collaborate outside of RelativityOne, you can also export the contents of the currently displayed Prompt Criteria to an MS Word file using the Export option. For more information, see Exporting prompt criteria.
Each aiR for Review project comes with automatic versioning controls, so that you can compare results from running different versions of the prompt criteria. Each analysis job that uses a unique set of prompt criteria counts as a new version.
When you run an aiR for Review analysis, the initial prompt criteria are saved as Version 1. Editing the criteria creates Version 2, which you can continue to modify until you finalize it by running the analysis again. Subsequent edits follow the same pattern: each new version remains editable until it is finalized by an analysis run.
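The lifecycle described above can be pictured as a small state model: a version stays editable as a draft until an analysis run finalizes it, and the next edit opens a new draft. The sketch below only illustrates that behavior and is not Relativity's implementation; the class and method names are hypothetical.

```python
# Illustrative model of the prompt criteria versioning behavior described above.
# Hypothetical classes and methods, not Relativity's implementation.
from dataclasses import dataclass, field

@dataclass
class PromptCriteriaVersion:
    number: int
    text: str
    finalized: bool = False  # becomes True once an analysis run uses this version

@dataclass
class PromptCriteriaHistory:
    versions: list[PromptCriteriaVersion] = field(default_factory=list)

    def edit(self, new_text: str) -> PromptCriteriaVersion:
        """Edit the criteria. Editing after a finalized version opens a new draft."""
        if not self.versions or self.versions[-1].finalized:
            draft = PromptCriteriaVersion(number=len(self.versions) + 1, text=new_text)
            self.versions.append(draft)
        else:
            self.versions[-1].text = new_text  # keep modifying the current draft
        return self.versions[-1]

    def run_analysis(self) -> PromptCriteriaVersion:
        """Running an analysis finalizes the current draft version."""
        current = self.versions[-1]
        current.finalized = True
        return current

# Example: Version 1 is finalized by the first run; the next edit creates Version 2.
history = PromptCriteriaHistory()
history.edit("Initial criteria")   # draft Version 1
history.run_analysis()             # Version 1 finalized
history.edit("Refined criteria")   # creates draft Version 2
```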
To see dashboard results from an earlier version, click the arrow next to the version name in the project details strip. From there, select the version you want to see.
When you select a prompt criteria version from the dashboard, this also changes the version results you see when you click on individual documents from the dashboard. For example, if you are viewing results from Version 2, clicking on the Control Number for a document brings you to the Viewer with the results and citations from Version 2. If you select Version 1 on the dashboard, clicking the Control Number for that document brings you to the Viewer with results and citations from Version 1.
When you access the Viewer from other parts of Relativity, it defaults to showing the aiR for Review results from the most recent version of the prompt criteria. However, you can change which results appear by using the linking controls on the aiR for Review Jobs tab. For more information, see Managing aiR for Review jobs.