aiR for Review
aiR for Review harnesses the power of large language models (LLMs) to review documents. It goes far beyond existing classifiers by using generative AI both to predict coding decisions and to support those predictions with descriptive text and document excerpts that explain the decisions.
Some benefits of aiR for Review include:
- Highly efficient, low-cost document analysis
- Quick discovery of important issues and criteria
- Consistent, cohesive analysis across all documents
See these related pages:
- Creating an aiR for Review project
- Iterating on the Prompt Criteria
- Managing aiR for Review jobs
- aiR for Review results
- aiR for Review security permissions
- aiR for Review Analysis
See these related trainings, articles, and white papers:
- AI Advantage: Aiming for Prompt Perfection? Level up with Relativity aiR for Review
- A Focus on Security and Privacy in Relativity’s Approach to Generative AI
- Workflows for Applying aiR for Review
- aiR for Review example project
- aiR for Review Prompt Writing Best Practices
- Evaluating aiR for Review Prompt Criteria Performance
- Selecting a Prompt Criteria Iteration Sample for aiR for Review
aiR for Review overview
aiR for Review uses generative AI to simulate the actions of a human reviewer, finding and describing relevant documents according to the review instructions that you provide. It identifies the documents, describes why they are relevant using natural language, and demonstrates relevance using citations from the document.
aiR for Review has three different analysis types:
- Relevance review—predict documents responsive to a request for production.
- Issues review—locate material relating to different legal issues.
- Key documents—find key documents important to a case or investigation, including those that might be critical or embarrassing to one party or another.
Some use cases for aiR for Review include:
- Kickstarting the review process—prioritize the most important documents to give to reviewers.
- First-pass review—determine what you need to produce and discover essential insights.
- Gaining early case insights—learn more about your matter right from the start.
- Internal investigations—find documents and insights that help you understand the story hidden in your data.
- Analyzing productions from other parties—reduce the effort to find important material and get it into the hands of decision makers.
- Quality control for traditional review—compare aiR for Review's coding predictions to decisions made by reviewers to accelerate QC and improve results.
aiR for Review workflow
aiR for Review's process is similar to training a human reviewer: explain the case and its relevance criteria, hand over the documents, and check the results. If aiR misunderstood any part of the relevance criteria, explain that part in more detail, then try again.
Within Relativity, the main steps are:
- Select the documents to review
- Create the aiR for Review project
- Write and submit the review instructions, called Prompt Criteria
- Review the results
When setting up the first analysis, we recommend running it on a sample set of documents that has already been coded by human reviewers. If aiR's predictions differ from the human coding, revise the Prompt Criteria and try again. This could include rewriting unclear instructions, defining an acronym or a code word, or adding more detail to an issue definition. A simple way to quantify how closely aiR's predictions match the human coding is sketched after the phase list below.
Overall, the workflow has three phases:
- Develop—write the Prompt Criteria, test, and tweak until the results align with human review.
- Verify—run the Prompt Criteria on a slightly larger set of documents and compare to results from senior reviewers.
- Run—use the verified Prompt Criteria on much larger sets of documents.
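During the Develop and Verify phases, it helps to measure how often aiR's predictions agree with the human coding and to flag the documents where they disagree. The following is a minimal sketch of such a comparison; the document IDs, field values, and coding designations are hypothetical, and in practice you would export both sets of coding decisions from Relativity.

```python
# Hypothetical sketch: comparing aiR for Review predictions against human coding
# during the Develop and Verify phases. Document IDs and coding values are
# illustrative only; export the real fields from Relativity before comparing.

human_coding = {
    "DOC-001": "Relevant",
    "DOC-002": "Not Relevant",
    "DOC-003": "Relevant",
    "DOC-004": "Not Relevant",
}

air_predictions = {
    "DOC-001": "Relevant",
    "DOC-002": "Relevant",      # disagreement: a candidate for Prompt Criteria revision
    "DOC-003": "Relevant",
    "DOC-004": "Not Relevant",
}

# Documents where aiR and the human reviewers disagree.
conflicts = [
    doc_id
    for doc_id, human_call in human_coding.items()
    if air_predictions.get(doc_id) != human_call
]

agreement_rate = 1 - len(conflicts) / len(human_coding)
print(f"Agreement: {agreement_rate:.0%}")
print("Review these documents and consider revising the Prompt Criteria:", conflicts)
```

Documents where the two sets of decisions conflict are the best candidates for closer reading and for targeted revisions to the Prompt Criteria.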
For more details, see Creating an aiR for Review project. For additional workflow help and examples, see Workflows for Applying aiR for Review on the Community site.
How aiR for Review works
aiR for Review's analysis is powered by Azure OpenAI's GPT-4 Omni large language model. The LLM is designed to understand and generate human language, and it is trained on billions of documents from open datasets and the web.
When you submit Prompt Criteria and a set of documents to aiR for Review, Relativity sends the first document to Azure OpenAI and asks it to review the document according to the Prompt Criteria. After Azure OpenAI returns its results, Relativity sends the next document. The LLM reviews each document independently, and it does not learn from previous documents. Unlike Review Center, which makes its predictions based on learning from the document set, the LLM makes its predictions based on the Prompt Criteria and its built-in training.
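As a conceptual illustration of this document-at-a-time pattern, the sketch below sends each document to Azure OpenAI in its own request using the Azure OpenAI Python SDK. This is not Relativity's implementation; the deployment name, prompt wording, environment variables, and response handling are assumptions made for illustration only.

```python
# Conceptual sketch of the document-at-a-time pattern described above.
# This is NOT Relativity's implementation; the deployment name, prompt wording,
# and environment variables are assumptions for illustration only.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

prompt_criteria = "Relevance: documents discussing the 2015 supply contract..."  # hypothetical
documents = ["Extracted text of document 1...", "Extracted text of document 2..."]

# Each document is sent in its own request, so the model reviews it independently
# and carries no memory of previously reviewed documents.
for doc_text in documents:
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical deployment name for GPT-4 Omni
        messages=[
            {
                "role": "system",
                "content": "Review the document against the criteria. "
                           "Return a prediction, a rationale, and citations.",
            },
            {"role": "user", "content": f"Criteria:\n{prompt_criteria}\n\nDocument:\n{doc_text}"},
        ],
    )
    print(response.choices[0].message.content)
```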
Azure OpenAI does not retain any data from the documents being analyzed. Data you submit for processing by Azure OpenAI is not retained beyond your organization’s instance, nor is it used to train any other generative AI models from Relativity, Microsoft, or any other third party. For more information, see the white paper A Focus on Security and Privacy in Relativity’s Approach to Generative AI.
Note: For European Economic Area (EEA) customers, aiR for Review data may be processed elsewhere in the EU, but it will always be processed in compliance with applicable laws. For more information, please contact your account manager.
For more information on using generative AI for document review, we recommend:
- Relativity Webinar - AI Advantage: How to Accelerate Review with Generative AI
- MIT's Generative AI for Law resources
- The State Bar of California's drafted recommendations for the use of generative AI
Regional availability of aiR for Review
Availability of aiR for Review, and of the large language model (LLM) that powers it, varies by region. The following table shows the dates on which the LLM and aiR for Review became available in each region:
| Region | Current LLM Model | Date Model is Available | Date aiR for Review is Available |
|---|---|---|---|
| United States | GPT-4 Omni | 2024-08-26 | 2024-09-16 |
| United Kingdom | GPT-4 Omni | 2024-08-26 | 2024-09-16 |
| Australia | GPT-4 Omni | 2024-08-26 | 2024-09-16 |
| Canada | GPT-4 Omni | 2024-08-26 | 2024-09-16 |
| Ireland | GPT-4 Omni | 2024-10-01 | 2024-10-01 |
| Netherlands | GPT-4 Omni | 2024-10-01 | 2024-10-01 |
| Germany | GPT-4 Omni | 2024-10-01 | 2024-10-01 |
| Switzerland | GPT-4 Omni | 2024-10-01 | 2024-10-01 |
| France | GPT-4 Omni | 2024-10-14 | 2024-10-14 |
For more details about availability in your region, contact your account representative.
Language support in aiR for Review
The underlying large language model (LLM) used by aiR for Review has been evaluated for use with 83 languages. While aiR for Review itself has been primarily tested on English-language documents, unofficial testing with non-English datasets shows encouraging results.
If you use aiR for Review with non-English datasets, we recommend the following:
- Rigorously follow best practices for writing and iterating on the Prompt Criteria. For more information, see Step 2: Writing the Prompt Criteria and Iterating on the Prompt Criteria.
- Analyze the extracted text as-is. You do not need to translate it into English.
- When possible, write the Prompt Criteria in the same language as the documents being analyzed. Ideally, this is also the subject matter expert's native language. If that is not possible, write the Prompt Criteria in English.
When you view the results of the analysis, all citations stay in the same language as the document they cite. By default, the rationales and considerations are in English. If you want the rationales and considerations to be in a different language, type “Write rationales and considerations in [desired language]” in the Additional Context field of the Prompt Criteria.
For the study used to evaluate Azure OpenAI's GPT-4 model across languages, see MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks on the arXiv website.
Using aiR for Review with emojis
aiR for Review has not been specifically tested for analyzing emojis. However, the underlying LLM does understand Unicode emojis. It also understands other formats that could normally be understood by a human reviewer. For example, an emoji that is extracted to text as :smile: would be understood as smiling.
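As an illustration of how an emoji can reach the model either as a Unicode character or as a text shortcode, the sketch below uses the third-party emoji package to convert between the two forms. The package is not part of aiR for Review, and the exact shortcode names depend on the library's alias set.

```python
# Illustration only: how an emoji can appear either as a Unicode character or as a
# text shortcode in extracted text. The third-party "emoji" package (pip install emoji)
# is used here for demonstration and is not part of aiR for Review.
import emoji

message = "Great news on the deal 😄"

# Some extraction pipelines render emojis as shortcodes such as :smile:.
as_shortcode = emoji.demojize(message)      # emoji replaced with a :shortcode: name
back_to_unicode = emoji.emojize(as_shortcode)  # shortcode converted back to Unicode

print(as_shortcode)
print(back_to_unicode)
```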
Archiving and restoring workspaces with aiR for Review
Workspaces with aiR for Review installed can be archived and restored using the ARM application.
When archiving, check Include Extended Workspace Data under Extended Workspace Data Options. If this option is not checked during the archive process, the aiR for Review features in the restored workspace will not be fully functional. If this happens, you will need to manually reinstall aiR for Review in the restored workspace.
Note: If you restore a workspace that includes previous aiR for Review jobs, the pre-restoration jobs will not appear on the instance-level aiR for Review Jobs tab. The jobs and their results will still be visible at the workspace level.
For more information on using ARM, see ARM Overview.