Creating an aiR for Review project

aiR for Review uses generative AI to simulate the actions of a human reviewer, finding and describing relevant documents using the review instructions that you provide. These analyses can be customized to look for relevance, key documents, or specific issues as needed.

The instructions you give aiR for Review are called Prompt Criteria. Prompt Criteria often mimic a traditional review protocol or case brief in that they describe the matter, entities involved, and what is relevant to the legal issues at hand. For best results, we recommend analyzing a small set of documents, tweaking the Prompt Criteria as needed, then finally analyzing a larger set of documents. This lets you see immediately how aiR's coding compares to a human reviewer's coding and adjust the prompts accordingly.

Installing aiR for Review

aiR for Review is available as a secured application from the Application Library. You must have an active aiR for Review contract to use it, and it is not available for repository workspaces.

To install it:

  1. Navigate to the Relativity Applications tab in your workspace.

  2. Select Install from application library.

  3. Select the aiR for Review application.

  4. Click Install.

After installation completes, the following object types will appear in your workspace:

  • aiR Relevance Analysis—records the Relevance results of aiR for Review analysis runs.

  • aiR Issue Analysis—records the Issue results of aiR for Review analysis runs.

  • aiR Key Analysis—records the Key results of aiR for Review analysis runs.

  • aiR for Review Prompt Criteria—records the Prompt Criteria settings and contents for each analysis run. This also records Prompt Criteria drafts for each user.

  • aiR for Review Project—records the details of each aiR for Review project.

The following tabs will also appear:

  • aiR for Review Projects (workspace level)—create and manage aiR for Review projects and view the project dashboard.

  • aiR for Review Jobs (workspace level)—view and manage jobs created by the aiR for Review application within the workspace.

  • aiR for Review Jobs (instance level)—view and manage jobs created by the aiR for Review application across all workspaces in the instance.

For more information on installing applications, see Relativity applications.

Setting up permissions

For detailed information on aiR for Review user permissions, see aiR for Review security permissions.

Choosing an analysis type

aiR for Review supports three types of analysis. Each one is geared towards a different phase of a review or investigation.

For each aiR for Review job, choose one analysis type:

  • Relevance—analyzes whether documents are relevant to a case or situation that you describe, such as documents responsive to a production request.

  • Relevance and Key Documents—analyzes documents for both relevance and whether they are “hot” or key to a case.

  • Issues—analyzes documents for whether they include content that falls under specific categories. For example, you might use this to check whether documents involve coercion, retaliation, or a combination of both.

Based on the analysis type you choose, you will need the following fields:

  • Relevance—one single-choice results field. The field must have at least one choice.

  • Relevance and Key Documents—two single-choice results fields. These should have distinct names such as "Relevant" and "Key," and each field should have at least one choice.

  • Issues—one multi-choice results field. Each of the issues you want to analyze should be represented by a choice on the field.

    Note: Currently, aiR for Review analyzes a maximum of ten issues per run. You can have as many choices for the field as you want, but you can only analyze ten at a time.

aiR for Review does not actually write to these fields. Instead, it uses them for reference when reporting on its predictions.
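
The field requirements above can be summarized in a small data structure, shown below as a purely illustrative sketch; the field names are placeholders, not product requirements.

    # Purely illustrative summary of the required results fields; the field
    # names ("Relevant", "Key", "Issues") are placeholders, not requirements.
    REQUIRED_FIELDS = {
        "Relevance": [
            {"field": "Relevant", "type": "single-choice", "min_choices": 1},
        ],
        "Relevance and Key Documents": [
            {"field": "Relevant", "type": "single-choice", "min_choices": 1},
            {"field": "Key", "type": "single-choice", "min_choices": 1},
        ],
        "Issues": [
            # One choice per issue; aiR analyzes at most ten issues per run.
            {"field": "Issues", "type": "multi-choice", "min_choices": 1,
             "max_analyzed_per_run": 10},
        ],
    }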

Best practices for running aiR for Review

aiR for Review works best after fine-tuning the Prompt Criteria. Analyzing just a few documents at first, comparing the results to human coding, and then adjusting the Prompt Criteria as needed yields more accurate results than diving in with a full document set.

We recommend the following workflow:

  1. For your first analysis, run the Prompt Criteria on a saved search of 50 test documents that are a mix of relevant and not relevant.

  2. Compare the results to human coding. In particular, look for documents that aiR coded differently than the humans did and investigate possible reasons, such as unclear instructions, an undefined acronym or code word, or other blind spots in the Prompt Criteria. (The sketch after this list shows one way to tally the comparison.)

  3. Tweak the Prompt Criteria to adjust for blind spots.

  4. Repeat steps 1 through 3 until aiR predicts coding decisions accurately for the test documents.

  5. Test the Prompt Criteria on a sample of 50 more documents and compare results. Continue tweaking and adding documents until you are satisfied with the results for a diverse range of documents.

  6. Finally, run the Prompt Criteria on a larger set of documents.
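
For step 2, a simple tally of agreements and disagreements between aiR and the human reviewers is often enough to surface blind spots. A minimal sketch, assuming both sets of coding decisions are exported as dictionaries keyed by document identifier (compare_coding is a hypothetical helper, not a product API):

    from collections import Counter

    def compare_coding(human: dict[str, str], air: dict[str, str]) -> Counter:
        """Tally (human call, aiR call) pairs across documents.

        Both inputs map a document identifier to a coding value such as
        "Relevant" or "Not Relevant". Hypothetical helper, not a product API.
        """
        tally = Counter()
        for doc_id, human_call in human.items():
            tally[(human_call, air.get(doc_id, "No Prediction"))] += 1
        return tally

    # Disagreements point to unclear instructions, undefined acronyms or
    # code words, or other blind spots in the Prompt Criteria.
    tally = compare_coding(
        {"DOC-1": "Relevant", "DOC-2": "Not Relevant", "DOC-3": "Relevant"},
        {"DOC-1": "Relevant", "DOC-2": "Relevant", "DOC-3": "Relevant"},
    )
    for (human_call, air_call), count in sorted(tally.items()):
        flag = "" if human_call == air_call else "  <-- investigate"
        print(f"human={human_call!r}  aiR={air_call!r}: {count}{flag}")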

aiR only sees the extracted text of a document. It does not see non-text elements such as advanced formatting, embedded images, or videos. We do not recommend using aiR for Review on documents such as images, videos, or formula-heavy spreadsheets. Instead, use it on documents whose extracted text accurately represents their content and meaning.

Choosing the data source

Before setting up the aiR for Review project, create a saved search that contains a small sample of the documents you want reviewed.

For best results:

  • Include roughly 50 test documents that are a mix of relevant and not relevant.
  • Have human reviewers code the documents in advance.
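
One way to assemble such a sample is to draw evenly from documents that human reviewers have already coded. The sketch below is illustrative; balanced_sample is a hypothetical helper, and its input is assumed to be exported coding decisions.

    import random

    def balanced_sample(human_calls: dict[str, bool], size: int = 50) -> list[str]:
        """Draw a roughly even mix of relevant and not-relevant documents.

        human_calls maps a document identifier to the human relevance call.
        """
        relevant = [d for d, is_relevant in human_calls.items() if is_relevant]
        not_relevant = [d for d, is_relevant in human_calls.items() if not is_relevant]
        half = size // 2
        sample = random.sample(relevant, min(half, len(relevant)))
        sample += random.sample(not_relevant, min(size - len(sample), len(not_relevant)))
        return sample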

For more information about choosing documents for the sample, see Selecting a Prompt Criteria Iteration Sample for aiR for Review on the Community site.

For more information about creating a saved search, see Creating or editing a saved search.

Overview of setting up an aiR for Review project

Setting up an aiR for Review project for the first time has three basic parts:

  1. Create the aiR for Review project

  2. Write the Prompt Criteria

  3. Run the first analysis

At any point in this process, you can save your progress and come back later.

Step 1: Creating the aiR for Review project

To create an aiR for Review project:

  1. On the aiR for Review Projects tab, select New aiR for Review Project.

  2. Fill out the following fields:

    1. Project Name—enter a name for the project.

    2. Description—enter a project description.

    3. Data source—select the saved search that holds your document sample.

    4. Project Prompt Criteria—select one of the following:

      • Start blank—select this if you plan to write new Prompt Criteria from scratch.

      • Copy existing—select this to choose a previously created set of Prompt Criteria and copy it for this project.

    5. Analysis Type—select one of the following. For more information, see Choosing an analysis type.

      • Relevance—analyzes whether documents are relevant to a case or situation that you describe, such as documents responsive to a production request.
      • Relevance and Key Documents—analyzes documents for both relevance and whether they are “hot” or key to a case.

      • Issues—analyzes documents for whether they include content that falls under specific categories.

    6. Project Use Case—choose the option that best describes the purpose of the project.

      • If none of the options describe the project, choose Other to type your own description in the field. It will only be used for this project. Keep this description generic and do not include any confidential or personal information.

      • This field is used for reporting and management purposes. It does not affect how the project runs.

  3. Click Create Project.

After the project is created, the aiR for Review project dashboard appears.

Step 2: Writing the Prompt Criteria

The Prompt Criteria are a set of inputs that give aiR the context it needs to understand the matter and evaluate each document. Writing the Prompt Criteria is a way of training your "reviewer," similar to training a human reviewer.

Depending on which type of analysis you chose, you will see a different set of tabs on the left-hand side of the aiR for Review dashboard. All Prompt Criteria include the Case Summary tab.

General writing guidelines

For all of the setup tabs, keep the following guidelines in mind:

  • Write as if "less is more." Instead of pasting in a long review protocol as-is, summarize where possible and include only key passages. The Prompt Criteria have an overall length limit of 15,000 characters.

  • Phrase things in a positive way when possible. Avoid negatives ("not" statements) and double negatives.

  • Do not include explanations of the law.

  • Do not give the LLM commands, such as "you will review XX." Instead, simply describe the case.

  • Use whatever writing format makes the most sense to a human reader. For example, bullet points might be useful for the People and Aliases section, but paragraphs might make sense in another section.

  • The LLM has essentially “read the whole Internet.” It understands widely used slang and abbreviations, but it does not necessarily know jargon or phrases that are internal to an organization.

When you start to write your first Prompt Criteria, the fields contain grayed-out helper text that shows examples of what to enter. Use this as a guideline for crafting your own entries.

Note: For more guidance on prompt writing, see the prompt-writing resources on the Community site.

Filling out the Case Summary tab

The Case Summary gives the LLM the broad context surrounding a matter. It includes an overview of the matter, people and entities involved, and any jargon or terms that are needed to understand the document set.

Limit the Case Summary to 20 or fewer sentences overall, and to 20 or fewer entries each for People and Aliases, Noteworthy Organizations, and Noteworthy Terms. (A sketch after the steps below shows one way to check these limits.)

To fill out the Case Summary tab:

  1. On the aiR for Review dashboard, click on the Case Summary tab.

  2. Fill out the following:

    • Matter Overview—provide a concise overview of the case. Include the names of the plaintiff and defendant, the nature of the dispute, and other important case characteristics.

    • People and Aliases—list the names and aliases of key custodians who authored or received the documents. Include their role and any other affiliations.

    • Noteworthy Organizations—list the organizations and other relevant entities involved in the case. Highlight any key relationships or other notable characteristics.

    • Noteworthy Terms—list and define any relevant words, phrases, acronyms, jargon, or slang that might be important to the analysis.

    • Additional Context—list any additional information that does not fit the other fields. This section is typically left blank.

  3. After completing the fields on the tab, click Save.
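
As a rough aid for staying within the limits above, the following sketch counts sentences and entries before you paste content into the tab. It is illustrative only; the function and parameter names are placeholders, not part of the product.

    import re

    def check_case_summary(matter_overview: str,
                           people: list[str],
                           organizations: list[str],
                           terms: list[str]) -> list[str]:
        """Warn when Case Summary content exceeds the recommended limits."""
        warnings = []
        sentences = [s for s in re.split(r"[.!?]+", matter_overview) if s.strip()]
        if len(sentences) > 20:
            warnings.append(f"Matter Overview: {len(sentences)} sentences (aim for 20 or fewer)")
        for label, entries in (("People and Aliases", people),
                               ("Noteworthy Organizations", organizations),
                               ("Noteworthy Terms", terms)):
            if len(entries) > 20:
                warnings.append(f"{label}: {len(entries)} entries (aim for 20 or fewer)")
        return warnings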

Depending on which Analysis Type you chose, the remaining tabs will be called Relevance, Key Documents, or Issues. Fill out those tabs according to the guide sections below.

Filling out the Relevance tab

If you chose either Relevance or Relevance and Key Documents as the Analysis Type, you will see the Relevance tab. This defines the fields and criteria used for determining if a document is relevant to the case.

To fill out the Relevance tab:

  1. On the aiR for Review dashboard, click on the Relevance tab.

  2. Fill out the following:

    1. Relevance Field—select a single-choice field that represents whether a document is relevant or non-relevant. This selection cannot be changed after the first job run.

    2. Relevant Choice—select the field choice you use to mark a document as relevant. This selection cannot be changed after the first job run.

    3. Relevance Criteria—summarize the criteria that determine whether a document is relevant. Include:

      • Keywords, phrases, legal concepts, parties, entities, and legal claims

      • Any criteria that would make a document non-relevant, such as relating to a project that is not under dispute

    4. Issues Field (Optional)—select a single-choice or multi-choice field that represents the issues in the case.

      • Choice Criteria—select each of the field choices one by one. For each choice, write a summary in the text box listing the criteria that determine whether that issue applies to a document. For more information, see Filling out the Issues tab.

      Note: aiR does not make Issue predictions during Relevance review, but you can use this field for reference when writing the Relevance Criteria. For example, you could tell aiR that any documents related to these issues are relevant.

  3. After completing the fields on the tab, click Save.

For best results when writing the Relevance Criteria:

  • Limit the Relevance Criteria to 5-10 sentences.

  • Do not paste in the original request for production (RFP); those are often too long and complex to give good results. Instead, summarize it and include key excerpts.

  • Group similar criteria together when you can. For example, if an RFP asks for “emails pertaining to X” and “documents pertaining to X,” write “emails or documents pertaining to X.”

Filling out the Key Documents tab

If you chose Relevance and Key Documents as the Analysis Type, you will see the Key Documents tab. This defines the fields and criteria used for determining if a document is "hot" or key to the case.

To fill out the Key Documents tab:

  1. On the aiR for Review dashboard, click on the Key Documents tab.

  2. Fill out the following:

    1. Key Document Field—select a single-choice field that represents whether a document is key to the case. This selection cannot be changed after the first job run.

    2. Key Document Choice—select the field choice you use to mark a document as key. This selection cannot be changed after the first job run.

    3. Key Document Criteria—summarize the criteria that determine whether a document is key. Include:

      • Keywords, phrases, legal concepts, parties, entities, and legal claims

      • Any criteria that would exclude a document from being key, such as falling outside a certain date range

  3. After completing the fields on the tab, click Save.

For best results, limit the Key Document Criteria to 5-10 sentences.

Filling out the Issues tab

If you chose Issues as the Analysis Type, you will see the Issues tab. This defines the fields and criteria used for determining whether a document relates to a set of specific topics or issues.

To fill out the Issues tab:

  1. On the aiR for Review dashboard, click on the Issues tab.

  2. Fill out the following:

    1. Field—select a multi-choice field that represents the issues in the case. This selection cannot be changed after the first job run.

    2. Choice Criteria—select each of the field choices one by one. For each choice, write a summary in the text box listing the criteria that determine whether that issue applies to a document. Include:

      • Keywords, phrases, legal concepts, parties, entities, and legal claims

      • Any criteria that would exclude a document from relating to that issue, such as falling outside a certain date range

      Note: The field choices cannot be changed after the first job run. However, you can still edit the summary in the text box.

  3. After completing the fields on the tab, click Save.

For best results when writing the Choice Criteria:

  • Limit the criteria description for each choice to 5-10 sentences.

  • Each of the choices must have its own criteria. If a choice has no criteria, either fill it in or remove the choice.

Removing issue choices

aiR analyzes a maximum of 10 choices. If the issue field has more than 10 choices:

  1. Select the choice you want to remove.

  2. Click the Remove Choice button on the right.

  3. Repeat with any other unwanted choices.

These choices cannot be changed after the first job run.

Step 3: Running the first analysis

After filling out the Case Summary and the remaining Prompt Criteria tabs, review the job and run the analysis.

To run the analysis:

  1. On the upper right of the dashboard, click Analyze [X] documents.

    A confirmation modal appears.

  2. Review the confirmation summary. This includes:

    • Total Docs—number of documents to be analyzed.

    • Est. Run Time—estimated time aiR for Review will take to analyze the selected documents and return results. This does not include time waiting in the queue.

    • Est. Time to Start—estimated wait time from when you submit the job to when aiR for Review begins analyzing it. Longer wait times appear when aiR for Review already has other work queued across tenants.

  3. Click Start Analysis.

After the analysis job starts, the results start to appear beside each document in the Analysis Results panel of the dashboard. The Analyze button is disabled, and a Cancel option appears.

When the job completes, the Analyze button is re-enabled.

Note: If you try to run a job that is too large or when too many jobs are already running, an error will appear. You can still save and edit the Prompt Criteria, but you will not be able to start the job. For more information, see Job capacity and size limitations.

After the first analysis completes, use the results to fine-tune the Prompt Criteria.

Editing and collaboration

If two users edit the same Prompt Criteria version at the same time, whoever saves last overwrites the other user's changes. Because of this, we recommend that only one user edit a project's Prompt Criteria at a time. You may find it helpful to define separate roles for users when iterating on prompt changes.

Job capacity and size limitations

Because of the limits of the underlying large language model (LLM), aiR has size limits for the documents and prompts you submit, as well as volume limits for overall jobs.

Size limits

The documents and Prompt Criteria have the following size limits:

  • The Prompt Criteria have an overall length limit of 15,000 characters.

  • We recommend only including documents whose extracted text is 150KB or smaller. Although the LLM can handle some larger documents, most will receive an error.
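
A simple pre-flight check against both limits can save a failed run. The sketch below is illustrative only; it assumes you already have the Prompt Criteria text and each document's extracted text size in hand, and none of these names come from the product.

    PROMPT_LIMIT_CHARS = 15_000          # overall Prompt Criteria limit
    DOC_TEXT_LIMIT_BYTES = 150 * 1024    # recommended extracted text ceiling

    def prompt_within_limit(prompt_criteria: str) -> bool:
        return len(prompt_criteria) <= PROMPT_LIMIT_CHARS

    def oversized_documents(extracted_text_sizes: dict[str, int]) -> list[str]:
        """Return IDs of documents likely to error due to large extracted text."""
        return [doc_id for doc_id, size in extracted_text_sizes.items()
                if size > DOC_TEXT_LIMIT_BYTES]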

Volume limits

The per-instance volume limits for aiR for Review jobs are as follows:

  • Max job size—a single job can include up to 100,000 documents.

  • Total documents running per instance—across all jobs queued or running in an instance, there is a maximum of 150,000 documents.

  • Concurrent large jobs per instance—for jobs with more than 200 documents, only 3 can be queued or running at the same time within an instance.

  • Concurrent small jobs per instance—jobs with 200 or fewer documents have no limit on how many can queue or run at the same time.
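
Taken together, these limits determine whether a new job can be submitted. A minimal sketch of the arithmetic, assuming you track queued and running totals yourself (can_submit and its inputs are hypothetical, not a product API):

    MAX_JOB_DOCS = 100_000        # max job size
    MAX_INSTANCE_DOCS = 150_000   # total documents queued or running per instance
    MAX_LARGE_JOBS = 3            # concurrent jobs with more than 200 documents
    LARGE_JOB_THRESHOLD = 200

    def can_submit(new_job_docs: int,
                   instance_doc_total: int,
                   large_jobs_in_flight: int) -> bool:
        """Check a prospective job against the per-instance volume limits."""
        if new_job_docs > MAX_JOB_DOCS:
            return False
        if instance_doc_total + new_job_docs > MAX_INSTANCE_DOCS:
            return False
        if new_job_docs > LARGE_JOB_THRESHOLD and large_jobs_in_flight >= MAX_LARGE_JOBS:
            return False
        return True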

Speed

After a job is submitted, aiR analyzes roughly 25-50 documents per minute. In general, jobs with 200 or fewer documents finish faster than jobs with more than 200. Job speeds vary widely depending on the number of documents, the overall load on the LLM, and other factors such as the size of the documents.
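
Using the documented throughput of roughly 25-50 documents per minute, you can form a back-of-the-envelope run time estimate (queue time excluded). A purely illustrative sketch:

    def estimated_run_minutes(doc_count: int) -> tuple[float, float]:
        """Best- and worst-case run time at 50 and 25 documents per minute."""
        return doc_count / 50, doc_count / 25

    low, high = estimated_run_minutes(1_000)
    print(f"~{low:.0f}-{high:.0f} minutes")  # ~20-40 minutes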

Understanding documents and billing

For billing purposes, a document unit is a single document. The initial pre-run estimate may be higher than the actual units billed because of canceled jobs or document errors. To find the actual document units that are billed, see Cost Explorer.

A document is billed each time it runs through aiR for Review, regardless of whether it ran previously. For example, iterating four times on the same 50-document sample bills 200 document units.

Caution: Customer may not consolidate documents or otherwise take steps to circumvent the aiR for Review Document Unit limits, including for the purpose of reducing the Customer's costs. If Customer takes such action, Customer may be subject to additional charges and other corrective measures as deemed appropriate by Relativity.