Last date modified: 2026-Apr-17

aiR Assist (Advanced Access)

Advanced Access (AA) is an opportunity to evaluate and work with Relativity features prior to the General Availability (GA) release. Relativity customers typically participate in AA programs on a feature-by-feature basis. The functionality described in this topic may not be available in all Relativity environments. This topic may not represent the functionality, appearance, or behavior of the GA release version of this feature.

aiR Assist is a conversational search tool integrated within RelativityOne, designed to let legal teams interact with their data using natural language. By leveraging advanced AI, aiR Assist helps surface potential insights, reveal possible connections, and uncover themes more efficiently. This can help teams analyze and interpret legal data more effectively, potentially leading to quicker understanding, better decisions, and defensible workflows when results are validated by users.

It works by searching the extracted text of indexed documents. Users can create up to five indexes per workspace, each supporting up to 50,000 documents. When a query is submitted, aiR Assist identifies the documents deemed most relevant and employs a large language model (LLM) to generate answers, complete with citations from as many as 25 source documents.

See these topics to start using aiR Assist:

How aiR Assist works

aiR Assist operates using a Retrieval-Augmented Generation (RAG) process to deliver grounded, evidence-based responses. This approach combines document retrieval with large language model generation to help support accuracy, transparency, and contextual relevance.

  1. Indexing the documents (indexing step)
    The user identifies documents to query and creates an index.
  2. Asking a question (question step)
    The user asks a question.
  3. Finding relevant documents (retrieval step)
    Each question is matched against the text indexed from the identified documents. aiR Assist performs a similarity search to identify the most relevant content. The documents are divided into smaller passages, and the system selects results that are estimated to correspond most closely to the question.
  4. Generating the answer (generation step)
    The selected passages, along with the original question and system prompt, are passed to the LLM. The model uses this retrieved context to generate a response intended to be coherent, concise, and supported by retrieved content, including up to 25 citations and references to the original sources.
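The four steps above can be sketched in code. This is an illustrative toy, not Relativity's implementation: the chunking size, the word-count cosine scoring (real systems use learned embeddings), and the prompt format are all assumptions for demonstration only.

```python
# Illustrative sketch of a Retrieval-Augmented Generation (RAG) flow.
# NOT Relativity's implementation; scoring and prompt format are assumptions.
from collections import Counter
import math

def chunk(text, size=40):
    """Indexing step: split a document's extracted text into word passages."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)] or [""]

def cosine(a, b):
    """Toy similarity: cosine over word counts (real systems use embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, top_k=25):
    """Retrieval step: rank every passage against the question, keep the top_k."""
    passages = [(doc_id, p) for doc_id, text in documents.items() for p in chunk(text)]
    return sorted(passages, key=lambda x: cosine(question, x[1]), reverse=True)[:top_k]

def build_prompt(question, retrieved):
    """Generation step input: the question plus cited passages for the LLM."""
    context = "\n".join(f"[{doc_id}] {p}" for doc_id, p in retrieved)
    return f"Answer using only the sources below, citing [doc ids].\n{context}\n\nQ: {question}"

docs = {
    "DOC-001": "The vendor offered gifts and incentives to the procurement team.",
    "DOC-002": "Quarterly budget review covered travel expenses and office supplies.",
}
top = retrieve("Which documents discuss gifts or incentives?", docs, top_k=1)
print(top[0][0])  # prints DOC-001, the most similar document
```

In production the prompt built this way would be sent to the LLM along with a system prompt; here the sketch stops at prompt construction to stay self-contained.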

Diagram showing the user workflow and the Relativity back-end process

LLM in use

aiR Assist currently uses an Azure OpenAI LLM to provide contextually grounded responses based on retrieved source material.

Important limits

  • Each index can contain up to 50,000 documents.
  • A maximum of five (5) built indexes can be created per workspace.
  • Individual documents must be 5 MB or smaller; larger files are excluded during indexing.
  • Only documents with extracted text are indexed. The text must be stored in Data Grid (not SQL). Files that do not contain extracted text are excluded automatically from the index.
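The limits above can be expressed as a simple eligibility check. The constants come from this documentation; the record shapes and function names are assumptions for illustration only, not a Relativity API.

```python
# Illustrative check of aiR Assist's documented indexing limits.
# Constants come from the documentation; data shapes are assumptions.
MAX_DOC_BYTES = 5 * 1024 * 1024   # individual documents must be 5 MB or smaller
MAX_DOCS_PER_INDEX = 50_000       # each index can contain up to 50,000 documents
MAX_INDEXES_PER_WORKSPACE = 5     # at most five built indexes per workspace

def eligible_documents(documents):
    """Keep only documents that have extracted text and are 5 MB or smaller."""
    kept = [
        d for d in documents
        if d.get("extracted_text") and d["size_bytes"] <= MAX_DOC_BYTES
    ]
    return kept[:MAX_DOCS_PER_INDEX]  # cap at the per-index document limit

def can_create_index(existing_index_count):
    """A workspace can hold at most five built indexes."""
    return existing_index_count < MAX_INDEXES_PER_WORKSPACE

docs = [
    {"id": 1, "size_bytes": 1_000, "extracted_text": "hello"},
    {"id": 2, "size_bytes": 6 * 1024 * 1024, "extracted_text": "too big"},
    {"id": 3, "size_bytes": 500, "extracted_text": ""},  # no extracted text
]
print([d["id"] for d in eligible_documents(docs)])  # prints [1]
print(can_create_index(5))                          # prints False
```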

Understanding aiR Assist responses

aiR Assist is designed to identify and summarize potentially relevant information from large document sets through natural language interaction. The system operates on a Retrieval-Augmented Generation (RAG) architecture, which retrieves and analyzes the documents most likely to be relevant and generates a response based on retrieved content and supported by citations.

aiR Assist is designed to return contextually relevant and evidence-based information rather than performing exhaustive or “find everything” searches. It does not review every document individually, and some occurrences of keywords or topics may not be included in the response.

The RAG process works best when key evidence is concentrated in a few focused documents. Results may be less accurate when an answer depends on information that is scattered across many documents or stated unclearly.

Language support

aiR Assist currently supports English-language content only. The system has been designed and tested exclusively on English-language datasets to ensure accuracy, reliability, and consistent performance.

At this time, non-English languages are not supported, and aiR Assist has not been formally evaluated or validated for use with multilingual or non-English text. While it may operate with non-English datasets, results can vary in accuracy and completeness, and verification of cited sources is strongly recommended when working with such content.

Future updates may expand language capabilities based on performance testing and model availability.

Supported use cases

Here are some example questions targeting common use cases for aiR Assist:

Early Case Insight
  • Finding potentially important documents: "Can you find me documents that discuss potential gifts or incentives?"
  • Finding documents by theme: "Are there any documents mentioning fraudulent behavior of John Doe?"
  • Understanding actors and roles: "Who was involved in discussions about offering gifts?"

Case Strategy Development
  • Identifying a series of events: "Create a high-level timeline for events that took place before the start of Project Artemis."
  • Understanding communications and relationships between actors: "Who communicated with whom about the contract terms?"

Deposition/Trial Preparation
  • Suggesting exhibits based on key criteria: "List documents to use as exhibits based on [key document criteria]."
  • Confirming conversations or actions took place: "Did John Maxwell send an email about the compliance policy?"

Release notes

This section includes release information and describes the current functionality of the aiR Assist (Advanced Access) application.
