Analyzing aiR for Review results
When aiR for Review analyzes documents, it makes predictions about the relevance of documents to different topics or issues. If it predicts that a document is relevant or relates to an issue, it includes a written justification of that prediction, as well as a counterargument and in-text citations. You can view these predictions, citations, and justifications either from the Viewer, or as fields on document lists.
How aiR for Review analysis results work
When aiR for Review finishes its analysis of a document, it delivers recommendations in the form of a Prediction and Score on how the document should be categorized, along with supporting rationale for the prediction. This analysis has several parts:
- aiR Prediction—the relevance, key, or issue label that aiR predicts should apply to the document. See Predictions versus document coding.
- aiR Score—a numerical score that indicates how strongly relevant the document is or how well it matches the predicted issue. See Understanding document scores.
- aiR Rationale—an explanation of why aiR chose this score and prediction.
- aiR Considerations—a counterargument explaining why the prediction might possibly be wrong.
- aiR Citation [1-5]—excerpts from the document that support the prediction and rationale, with a maximum of five citations.
In general, citations are left empty for non-relevant documents and documents that don't match an issue. However, aiR occasionally provides a citation for a low-scoring document if it helps clarify why the document was marked non-relevant. For example, if aiR is searching for changes of venue, it might cite an email that ends with "Hang on, gotta run, more later" as worth noting, even though it does not consider this a true change of venue request.
You can use this information to help update and improve your prompt criteria.
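To make the shape of these results concrete, the sketch below models a single result as a simple data structure. This is an illustration only; the class and attribute names are assumptions, not the actual field names used in Relativity.

```python
from dataclasses import dataclass, field

@dataclass
class AirAnalysisResult:
    """Illustrative model of one aiR for Review result (hypothetical names)."""
    prediction: str                 # relevance, key, or issue label aiR predicts
    score: int                      # -1 (error) or 0-4; see Understanding document scores
    rationale: str                  # why aiR chose this prediction and score
    considerations: str             # counterargument: why the prediction might be wrong
    citations: list = field(default_factory=list)  # up to five supporting excerpts

    def __post_init__(self):
        if len(self.citations) > 5:
            raise ValueError("aiR returns at most five citations per result")
```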
Predictions versus document coding
Even though aiR refers to the relevance, key, and issue fields during its analysis, it does not actually write to these fields. It is not coding the documents or writing to the coding fields. All of aiR's results are stored in aiR-specific fields, such as the Prediction field. This makes it easier to compare aiR's predictions to human coding while refining the prompt criteria.
If you have refined a set of prompt criteria to the point that you are comfortable adopting those predictions, you can copy those predictions to the coding fields using mass-tagging or other methods.
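As a conceptual illustration only, the snippet below shows the kind of rule you might apply when promoting predictions: copy them to coding only above a chosen score threshold. The helper and attribute names are hypothetical; in practice this step is done with mass-tagging or a similar operation in Relativity, not with this code.

```python
def promote_predictions(documents, min_score=3):
    """Hypothetical sketch: copy aiR predictions to a coding field for
    high-scoring documents only (3-Relevant and above by default)."""
    for doc in documents:
        if doc.air_score >= min_score:
            doc.coding_value = doc.air_prediction  # human reviewers can still override
    return documents
```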
For ideas on how to integrate aiR for Review results into a larger review workflow, see Using aiR for Review with Review Center.
Variability of results
Due to the nature of large language models, output results may vary slightly from one run to another, even using the same inputs. aiR's scores may shift slightly, typically between adjacent levels, such as from 1-not relevant to 2-borderline. Significant changes, like moving from 4-very relevant to 1-not relevant, are rare.
Understanding document scores
aiR scores documents from 0 to 4 according to how relevant they are or how well they match an issue. The higher the number, the more relevant the document is predicted to be.
A score of -1 is assigned to any errored documents. They cannot receive a normal score because they were not properly analyzed.
Below are the aiR for Review scores:
| Score | Description |
|---|---|
| 4 | Very Relevant: The document is predicted very relevant to the issue. aiR found direct, strong evidence that the content relates to the case or issue. Citations show the relevant text. |
| 3 | Relevant: The document is predicted relevant to the issue. Citations show the relevant text. |
| 2 | Borderline Relevant: The document is predicted borderline relevant. aiR found some content that might relate to the case or issue. It usually has citations. |
| 1 | Not Relevant: The document is predicted not relevant. aiR did not find any evidence that it relates to the case or issue. |
| 0 | Junk: The document contains no useful information or is considered “junk” data, such as system files, an empty document, or sets of random characters. |
| -1 | Error: The document either encountered an error or could not be analyzed. For more information, see How document errors are handled. |
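If you export scores for reporting, a lookup like the one below (values taken from the table above) can translate them into readable labels. The dictionary and helper names are illustrative, not part of any Relativity API.

```python
# Score labels from the table above.
AIR_SCORE_LABELS = {
    4: "Very Relevant",
    3: "Relevant",
    2: "Borderline Relevant",
    1: "Not Relevant",
    0: "Junk",
    -1: "Error",
}

def label_for(score):
    """Return the human-readable label for an aiR score."""
    return AIR_SCORE_LABELS.get(score, "Unknown")
```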
Viewing results from the aiR for Review dashboard
Within a project, you can view results using the aiR for Review dashboard. This dashboard includes not only the results fields, but also calculated metrics such as the number of documents with predictions that conflict with human coding.
To view the dashboard, select a project from the aiR for Review Projects tab. For detailed information on the dashboard layout, see Navigating the aiR for Review dashboard.
Viewing results for individual documents from the Viewer
From the Viewer, you can see the aiR for Review results for each individual document. Predictions show up in the left-hand pane, and all citations are automatically highlighted.
You will only see analysis highlights if you have the necessary permissions. Without these, the aiR for Review Analysis icon does not display. For more information, refer to Permissions.
To view a document's aiR for Review results in the Viewer, click the aiR for Review Analysis icon to expand the pane. The aiR for Review Analysis pane displays the following:
- Project name and version of the analysis
- Field choice for which the document was analyzed
- aiR's Prediction
- aiR's Rationale and Considerations
- Citations found in the document (click a citation to jump to it in the document)

For more information, see Viewer documentation on aiR for Review Analysis.
- If you run a new job on documents that were part of a previous job, you may temporarily see both sets of results linked to those documents. The old results will be unlinked after the new job is complete.
- To avoid seeing doubled results, hide the previous result set using the aiR for Review Jobs tab.
Citations and highlighting
A maximum of five citations will be displayed with the document.
To jump to a specific citation, click the citation card. You can also toggle highlighting on or off by clicking the toggle at the top of the aiR for Review Analysis pane.
Citation colors
The highlight color depends on the type of citation (Relevance, Key Document, or Issue). If the same passage is cited by two types of results, the highlight blends their colors.

Citation order
The results in the aiR for Review Analysis pane are ordered first by citation type:
- Relevance citation
- Key Document citation
- Issue citation
The Issue results are ordered according to each issue choice's Order value. For information on changing the choice order, see Choice detail fields.
Finally, duplicate results are ordered from most recent to oldest.
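A minimal sketch of this ordering, assuming hypothetical result objects with a citation type, an issue Order value, and a timestamp; the ranking dictionary and attribute names are illustrative only.

```python
from collections import namedtuple

# Hypothetical result shape for illustration only.
Citation = namedtuple("Citation", "citation_type issue_order timestamp")

CITATION_TYPE_RANK = {"Relevance": 0, "Key Document": 1, "Issue": 2}

def citation_sort_key(c):
    return (
        CITATION_TYPE_RANK.get(c.citation_type, 99),          # Relevance, then Key Document, then Issue
        c.issue_order if c.citation_type == "Issue" else 0,    # Issue choices by their Order value
        -c.timestamp,                                          # duplicates: most recent first
    )

citations = [
    Citation("Issue", 2, 100),
    Citation("Relevance", 0, 100),
    Citation("Issue", 1, 200),
]
ordered = sorted(citations, key=citation_sort_key)
```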
Adding aiR for Review fields to layouts
Because of how aiR for Review results fields are structured, you cannot add them directly to layouts. If the highlighting is not enough, you can add an object list to the layout that shows all linked results. For more information, see Adding and editing an object list.
Filtering and sorting aiR for Review results
Documents have a one-to-many relationship with aiR for Review's results fields. For example, a single document might be linked to several Issue results. This creates some limitations when sorting and filtering results:
- Filter one column at a time in the Document list. Combining filters may include more results than you expect.
- If you need to filter by more than one field at a time, we recommend using search conditions instead.
- You can add these fields to views and widgets, but you cannot sort the view or the widget by these fields.
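The sketch below illustrates one plausible reason combining column filters can over-include, assuming each filter is evaluated independently across a document's linked results; it is an illustration of the behavior to watch for, not a description of Relativity's internals.

```python
# One document linked to two Issue results.
doc_results = [
    {"issue": "Issue A", "score": 4},
    {"issue": "Issue B", "score": 1},
]

# Two independent column filters: Issue = "Issue A" and Score = 1.
matches_issue = any(r["issue"] == "Issue A" for r in doc_results)  # True (first result)
matches_score = any(r["score"] == 1 for r in doc_results)          # True (second result)
print(matches_issue and matches_score)  # True, even though no single linked result
                                        # is both "Issue A" and score 1

# Evaluating both conditions against the same linked result avoids this.
print(any(r["issue"] == "Issue A" and r["score"] == 1 for r in doc_results))  # False
```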
For information on filtering results using the Version Metrics tab, refer to Filtering the Analysis Results using version metrics.
How document errors are handled
If aiR encounters a problem when analyzing a document, it will not return results for that document. Instead, it scores the document as -1 and returns an error message in the Error Details column. Your organization is not charged for any errored documents, and they do not count towards your organization's aiR for Review total document count.
The possible error messages are:
| Error message | Description | Retry? |
|---|---|---|
| Completion is not valid JSON | The large language model (LLM) encountered an error. | Yes |
| Failed to parse completion | The LLM encountered an error. | Yes |
| Document text is empty | The extracted text of the document was empty. | No |
| Document text is too long | The document's extracted text was too long to analyze. | No |
| Document text is too short | There was not enough extracted text in the document to analyze. | No |
| Model API error occurred | A communication error occurred between the LLM and Relativity. This is usually a temporary problem. | Yes |
| Uncategorized error occurred | An unknown error occurred. | Yes |
| Ungrounded citations detected in completion | The results for this document may include an ungrounded citation. For more information, see Ungrounded citations. | Yes |
If the Retry? column says Yes, you may get better results by running that same document a second time. For errors marked No, rerunning that document will always produce an error. If you retry a document and keep receiving the same error, the document may have an underlying problem that aiR for Review cannot process.
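If you track errored documents outside Relativity, a filter like the one below (error messages taken from the table above; the attribute name is hypothetical) can keep you from resubmitting documents whose errors are permanent.

```python
# Errors marked "Yes" in the Retry? column above.
RETRYABLE_ERRORS = {
    "Completion is not valid JSON",
    "Failed to parse completion",
    "Model API error occurred",
    "Uncategorized error occurred",
    "Ungrounded citations detected in completion",
}

def documents_to_retry(errored_docs):
    """Return only documents whose error message suggests a retry may succeed."""
    return [d for d in errored_docs if d.error_details in RETRYABLE_ERRORS]
```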
Ungrounded citations
An ungrounded citation may occur for two reasons:
- The aiR results citation cannot be found anywhere in the document text. This is usually caused by formatting issues, but because the LLM could be citing sentences without a source, aiR marks it as a possible ungrounded citation.
- The aiR results citation comes from some other part of the full prompt rather than the document itself. For example, it might cite text from the prompt criteria instead of the document's extracted text.
When aiR receives the analysis results from the LLM, it checks all citations against the prompt text. Any possible ungrounded citations are marked as errors, and the affected documents receive a score of -1 instead of the score they were originally assigned. If retrying documents with these errors does not succeed, we recommend manually reviewing them instead.
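Below is a simplified sketch of the general idea behind this check, comparing each citation against the document's extracted text after normalizing whitespace. aiR's actual detection also considers other parts of the prompt and handles formatting differences more robustly, so treat this as an illustration only.

```python
import re

def normalize(text):
    """Collapse whitespace and lowercase so minor formatting differences don't matter."""
    return re.sub(r"\s+", " ", text).strip().lower()

def flag_possible_ungrounded(citations, extracted_text):
    """Return citations that cannot be found in the document's extracted text."""
    doc = normalize(extracted_text)
    return [c for c in citations if normalize(c) not in doc]
```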
Actual ungrounded citations are extremely rare. However, highly structured documents, such as Excel spreadsheets and .pdf forms, are more likely to confuse the detector and trigger these errors.