aiR for Review results

When aiR for Review analyzes documents, it makes predictions about the relevance of documents to different topics or issues. If it predicts that a document is relevant or relates to an issue, it includes a written justification of that prediction, as well as a counterargument and in-text citations. You can view these predictions, citations, and justifications either from the Viewer or as fields on document lists.

How aiR for Review analysis results work

When aiR for Review finishes its analysis of a document, it returns a prediction about how the document should be categorized, as well as its reasons for that prediction. This analysis has several parts (see the sketch after the list):

Sample of Analysis Results section.

  • aiR Prediction—the relevance, key, or issue label that aiR predicts should apply to the document. See Predictions versus document coding.
  • aiR Score—a numerical score that indicates how strongly relevant the document is or how well it matches the predicted issue. See Understanding document scores.
  • aiR Rationale—an explanation of why aiR chose this score and prediction.
  • aiR Considerations—a counterargument explaining why the prediction might possibly be wrong.
  • aiR Citation [1-5]—excerpts from the document that support the prediction and rationale.
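
The parts above map naturally onto a simple record. The sketch below is only an illustration of that shape, using hypothetical Python names rather than the actual aiR for Review fields or data model.

```python
from dataclasses import dataclass, field

@dataclass
class AirAnalysisResult:
    """Illustrative record of a single aiR for Review result (not the actual data model)."""
    prediction: str                  # relevance, key, or issue label, e.g. "Relevant"
    score: int                       # -1 for errors, otherwise 0-4 (see Understanding document scores)
    rationale: str                   # why aiR chose this score and prediction
    considerations: str              # counterargument: why the prediction might be wrong
    citations: list[str] = field(default_factory=list)  # up to five supporting excerpts

# Example: a borderline document with a single citation.
result = AirAnalysisResult(
    prediction="Relevant",
    score=2,
    rationale="The email briefly discusses moving an upcoming hearing.",
    considerations="The passage may refer to an internal meeting rather than a change of venue.",
    citations=["Can we move Thursday's hearing to the downtown office?"],
)
```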

In general, citations are left empty for non-relevant documents and documents that don't match an issue. However, aiR occasionally provides a citation for a low-scoring document if it helps clarify why the document was marked non-relevant. For example, if aiR is searching for changes of venue, it might cite an email that ends with "Hang on, gotta run, more later" as worth noting, even though it does not consider this a true change of venue request.

Predictions versus document coding

Even though aiR refers to the relevance, key, and issue fields during its analysis, it does not actually write to these fields. All of aiR's results are stored in aiR-specific fields, such as the Prediction field. This makes it easier to compare aiR's predictions to human coding while refining the prompt criteria.

If you have refined a set of Prompt Criteria to the point that you are comfortable adopting those predictions, you can copy those predictions to the coding fields using mass-tagging or other methods.
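
As a rough sketch of that copy step, the snippet below assumes hypothetical field names (air_prediction, responsive_designation) and an in-memory document list; in practice you would use mass-tagging or another bulk-edit method in Relativity.

```python
# Hypothetical sketch: promote accepted aiR predictions into a coding field.
# Field names and the in-memory structure are assumptions for illustration only.
documents = [
    {"doc_id": "DOC-001", "air_prediction": "Relevant", "responsive_designation": None},
    {"doc_id": "DOC-002", "air_prediction": "Not Relevant", "responsive_designation": None},
    {"doc_id": "DOC-003", "air_prediction": "Relevant", "responsive_designation": "Responsive"},
]

for doc in documents:
    # Only fill coding that a human has not already set, so existing decisions are preserved.
    if doc["responsive_designation"] is None and doc["air_prediction"] == "Relevant":
        doc["responsive_designation"] = "Responsive"

print(documents)
```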

For ideas on how to integrate aiR for Review results into a larger review workflow, see Using aiR for Review with Review Center.

Variability of results

Due to the nature of large language models, output results may vary slightly from one run to another, even using the same inputs. aiR's scores may shift slightly, typically between adjacent levels, such as from 1-not relevant to 2-borderline. Significant changes, like moving from 4-very relevant to 1-not relevant, are rare.

Understanding document scores

aiR scores documents from 0 to 4 according to how relevant they are or how well they match an issue. The higher the number, the more relevant the document is predicted to be. A score of -1 is assigned to any errored documents. Because these documents were not properly analyzed, they cannot receive a normal score.

The aiR for Review scores are:

Score | Description
-1 | The document either encountered an error or could not be analyzed. For more information, see How document errors are handled.
0 | The document contains no useful information or is “junk” data, such as an empty document or random characters.
1 | The document is predicted not relevant. aiR did not find any evidence that it relates to the case or issue.
2 | The document is predicted borderline relevant. aiR found some content that might relate to the case or issue. It usually has citations.
3 | The document is predicted relevant to the issue. Citations show the relevant text.
4 | The document is predicted very relevant to the issue. aiR found direct, strong evidence that the content relates to the case or issue. Citations show the relevant text.
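
If you export scores for reporting, a small lookup like the one below translates the numeric values into the labels above. The score-to-label mapping mirrors the table; the helper itself is only an illustrative assumption.

```python
# Hypothetical helper: map aiR scores to human-readable labels from the table above.
SCORE_LABELS = {
    -1: "Error / not analyzed",
    0: "Junk or empty",
    1: "Not relevant",
    2: "Borderline relevant",
    3: "Relevant",
    4: "Very relevant",
}

def label_for_score(score: int) -> str:
    return SCORE_LABELS.get(score, "Unknown score")

print(label_for_score(4))   # "Very relevant"
print(label_for_score(-1))  # "Error / not analyzed"
```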

Viewing results from the dashboard

Within a project, you can view results using the dashboard. The dashboard includes not only results fields, but also calculated metrics, such as the number of documents with predictions that conflict with human coding.

To view the dashboard, select a project from the aiR for Review Projects tab. For detailed information on the dashboard layout, see Navigating the aiR for Review dashboard.
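
One of those metrics, the conflict count, can also be reproduced outside the dashboard if you export predictions and coding decisions. The sketch below assumes hypothetical export columns and label values; adjust them to match your workspace.

```python
# Hypothetical sketch: count documents where the aiR prediction disagrees with human coding.
# Column names and label values are assumptions; adjust them to match your actual export.
exported_rows = [
    {"doc_id": "DOC-001", "air_prediction": "Relevant", "human_coding": "Responsive"},
    {"doc_id": "DOC-002", "air_prediction": "Relevant", "human_coding": "Not Responsive"},
    {"doc_id": "DOC-003", "air_prediction": "Not Relevant", "human_coding": "Not Responsive"},
]

# Map both value sets onto a shared vocabulary before comparing.
PREDICTION_TO_CODING = {"Relevant": "Responsive", "Not Relevant": "Not Responsive"}

conflicts = [
    row for row in exported_rows
    if row["human_coding"] is not None
    and PREDICTION_TO_CODING.get(row["air_prediction"]) != row["human_coding"]
]

print(f"{len(conflicts)} document(s) with conflicting coding")  # 1 document(s) with conflicting coding
```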

Viewing results for individual documents

From the Viewer, you can see the aiR for Review results for each individual document. Predictions appear in the left-hand pane, and all citations are automatically highlighted.

To view a document's aiR for Review results in the Viewer, click the aiR for Review Analysis icon to expand the pane. The aiR for Review Analysis pane displays the following:

  1. Prompt Criteria version
  2. Analysis Name
  3. Prediction
  4. Rationale and Considerations
  5. Citation

For more information, see Viewer documentation on aiR for Review Analysis.

  • If you run a new job on documents that were part of a previous job, you may temporarily see both sets of results linked to those documents. The old results will be unlinked after the new job is complete.
  • To avoid seeing doubled results, hide the previous result set using the aiR for Review Jobs tab.

Citations and highlighting

A maximum of five citations will be displayed with the document.

To jump to a specific citation, click the citation card. You can also toggle highlighting on or off by clicking the toggle at the top of the aiR for Review Analysis pane.

The highlight colors depend on the type of citation:

Relevance citations are highlighted in orange, Key citations in purple, and Issues citations in green.

If the same passage is cited by two types of results, the highlight blends their colors.

Adding aiR for Review fields to layouts

Because of how aiR for Review results fields are structured, you cannot add them directly to layouts. If the highlighting is not enough, you can add an object list to the layout that shows all linked results. For more information, see Adding and editing an object list.

Filtering and sorting aiR for Review results

Documents have a one-to-many relationship with the aiR for Review results fields. For example, a single document might be linked to several Issue results. This creates some limitations when sorting and filtering results (the sketch after this list shows why):

  • Filter one column at a time in the Document list. Combining filters may include more results than you expect.
  • If you need to filter by more than one field at a time, we recommend using search conditions instead.
  • You can add these fields to views and widgets, but you cannot sort the view or the widget by these fields.
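
The over-inclusion happens because each document can link to several result records, so two single-column filters can be satisfied by different linked rows. The sketch below uses a made-up two-document example to show the effect; it is not tied to any Relativity API.

```python
# Hypothetical sketch: why combining single-column filters over a one-to-many
# relationship can include more documents than intended. Field names are assumptions.
issue_results = [
    {"doc_id": "DOC-001", "issue": "Venue change", "score": 4},
    {"doc_id": "DOC-002", "issue": "Venue change", "score": 1},
    {"doc_id": "DOC-002", "issue": "Scheduling", "score": 4},
]

# Intended question: which documents have a "Venue change" result with a score of 4?
# Filtering one column at a time checks each condition against *any* linked row,
# so DOC-002 qualifies through two different rows and slips into the results.
column_by_column = (
    {r["doc_id"] for r in issue_results if r["issue"] == "Venue change"}
    & {r["doc_id"] for r in issue_results if r["score"] == 4}
)

# Evaluating both conditions against the same linked row (closer to what a
# search condition expresses) returns only the document you actually meant.
same_row = {
    r["doc_id"] for r in issue_results
    if r["issue"] == "Venue change" and r["score"] == 4
}

print(sorted(column_by_column))  # ['DOC-001', 'DOC-002'] -- DOC-002 is unexpected
print(sorted(same_row))          # ['DOC-001']
```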

How document errors are handled

If aiR encounters a problem when analyzing a document, it will not return results for that document. Instead, it scores the document as -1 and returns an error message in the Error Details column. Your organization is not charged for any errored documents, and they do not count towards your organization's aiR for Review total document count.

The possible error messages are:

Error message | Description | Retry?
Completion is not valid JSON | The large language model (LLM) encountered an error. | Yes
Failed to parse completion | The LLM encountered an error. | Yes
Document text is empty | The extracted text of the document was empty. | No
Document text is too long | The document's extracted text was too long to analyze. | No
Document text is too short | There was not enough extracted text in the document to analyze. | No
Model API error occurred | A communication error occurred between the LLM and Relativity. This is usually a temporary problem. | Yes
Uncategorized error occurred | An unknown error occurred. | Yes
Ungrounded citations detected in completion | The results for this document may include an ungrounded citation. For more information, see Ungrounded citations. | Yes

If the Retry? column says Yes, you may get better results by running that same document a second time. For errors marked No, retrying that specific document will always return an error.

If you retry a document and keep receiving the same error, the document may have permanent problems that aiR for Review cannot process.
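
If you script retries of errored documents, the Retry? column translates naturally into a small allow-list. The error strings below are taken from the table above; the helper function around them is only an illustrative assumption.

```python
# Hypothetical sketch: decide whether an errored document is worth retrying,
# based on the "Retry?" column in the table above.
RETRYABLE_ERRORS = {
    "Completion is not valid JSON",
    "Failed to parse completion",
    "Model API error occurred",
    "Uncategorized error occurred",
    "Ungrounded citations detected in completion",
}

NON_RETRYABLE_ERRORS = {
    "Document text is empty",
    "Document text is too long",
    "Document text is too short",
}

def should_retry(error_message: str) -> bool:
    """Return True if rerunning the document might produce results."""
    return error_message in RETRYABLE_ERRORS

print(should_retry("Model API error occurred"))  # True
print(should_retry("Document text is empty"))    # False
```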

Ungrounded citations

An ungrounded citation may occur for two reasons:

  • The citation cannot be found anywhere in the document text. This is usually caused by formatting issues, but because the LLM might be citing sentences without a source, aiR marks it as a possible ungrounded citation.
  • The citation comes from text that was part of the full prompt but not part of the document itself. For example, it might cite text from the Prompt Criteria instead of the document's extracted text.

When aiR receives the analysis results from the LLM, it checks all citations against the prompt text. Any possible ungrounded citations are marked as errors, and the affected documents receive a score of -1 instead of the score they were originally assigned. If retrying documents with these errors does not succeed, we recommend manually reviewing them instead.
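
Conceptually, the grounding check resembles a substring comparison between each citation and the document's extracted text, with some tolerance for formatting differences. The sketch below is a simplified assumption of how such a check might work, not the actual implementation.

```python
# Hypothetical sketch of a grounding check: a citation counts as grounded only if it
# appears in the document's extracted text (rather than, say, the Prompt Criteria).
import re

def normalize(text: str) -> str:
    # Collapse whitespace so line breaks and spacing differences don't cause false alarms.
    return re.sub(r"\s+", " ", text).strip().lower()

def is_grounded(citation: str, extracted_text: str) -> bool:
    return normalize(citation) in normalize(extracted_text)

extracted_text = "Can we move Thursday's hearing to the downtown office?\nThanks, J."
print(is_grounded("move Thursday's hearing to the downtown office", extracted_text))  # True
print(is_grounded("The venue was changed to Chicago.", extracted_text))               # False
```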

Actual ungrounded citations are extremely rare. However, highly structured documents, such as Excel spreadsheets and PDF forms, are more likely to confuse the detector and trigger these errors.