Navigating the aiR for Review dashboard

When you select a project from the aiR for Review Projects tab, a dashboard appears that shows the project's prompt criteria, the list of documents, and controls for editing the project. If the project has been run, the dashboard also displays the results.

Project details strip

At the top of the dashboard, the project details strip displays:


  • Project name—the name chosen for the project. The Analysis Type appears underneath the name.
  • Version selector—if this is the first version of the prompt criteria, the selector reads Version 1 as plain, static text. For later versions, the version number becomes clickable, and you can choose older versions to view their statistics. For more information, see How prompt criteria versioning works.
  • Data source—name of the saved search chosen at project creation and the document count.
    • If you add or remove documents from the saved search, those changes are not reflected in the aiR for Review project until you refresh the data source.
    • To refresh the data source, click the refresh symbol beside the name.
  • Run button—click to analyze the selected documents using the current version of the prompt criteria. If no documents are selected or filtered, it will analyze all documents in the data source.
    • If you are viewing the newest version of the prompt criteria and no job is currently running, this says Analyze [X] documents.
    • If an analysis job is currently running or queued, this button is unavailable and a Cancel option appears.
    • If you are viewing older versions of the prompt criteria, this button is unavailable.
  • Feedback icon—send optional feedback to the aiR for Review development team.

Prompt Criteria panel

On the left side of the dashboard, the Prompt Criteria panel displays tabs that match the project type you chose when creating the project. These tabs contain fields for writing the criteria you want aiR for Review to use when analyzing the documents.


Possible tabs include:

  • Case Summary—appears for all analysis types.
  • Relevance—appears for Relevance and Relevance and Key Documents analysis types.
  • Key Documents—appears for the Relevance and Key Documents analysis type.
  • Issues—appears for the Issues analysis type.

For information on filling out the prompt criteria tabs, see Developing prompt criteria.

For information on building prompt criteria from existing case documents, like requests for production or review protocols, see Using prompt kickstarter.

To export the prompt criteria displayed, see Developing prompt criteria.

If you want to temporarily clear space on the dashboard, click the Collapse symbol in the upper right of the prompt criteria panel. To expand the panel, click the symbol again.

Aspect selector bar

The aspect selector bar appears in the upper middle section of the dashboard for projects that use Issues analysis or Relevance and Key Documents analysis. This lets you choose which metrics, citations, and other results to view in the Analysis Results grid.

  • For a Relevance and Key Documents analysis:
    Aspect selector section

    Two aspect tabs appear: one for the field you selected as the Relevant Choice, and one for the field you selected as the Key Document Choice.

  • For Issues analysis:

    Issues aspect selector.
    An aspect tab appears for every Issues field choice that has choice criteria. They appear in order according to each choice's Order value. For information on changing the choice order, see Choice detail fields.

    Additionally, the All Issues tab offers a comprehensive overview of predictions and scores for all issues and documents within the project. The total number of issue predictions displayed on the All Issues tab is calculated by multiplying the number of issues by the number of documents. For example, if you had 10 issues and 100 documents, issue predictions would equal 1000.
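
    As a quick sketch of the All Issues arithmetic (illustrative Python, not product code; the counts are hypothetical):

        # Every document receives one prediction per issue, so the
        # All Issues tab shows issues x documents predictions in total.
        issues = 10
        documents = 100
        total_issue_predictions = issues * documents
        print(total_issue_predictions)  # 1000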

When you select one of the aspect tabs in the bar, both the project metrics section and analysis results grid update to show results related to that aspect. For example:

  • If you choose the Key Document tab:
    The project metrics section shows how many documents have been coded as key. The Analysis Results grid updates to show predictions, rationales, citations, and all other fields related to whether the document is key.
  • If you choose an issue from the aspect selector:
    The project metrics section and analysis results grid both update to show results related to that specific issue.

Project metrics section

In the middle section of the dashboard, the project metrics section shows the results of previous analysis jobs. There are two tabs: one for the current version's most recent results (Version Metrics tab), and one for a list of historical results for all previous versions (History tab).


Version Metrics tab

The Version [X] Metrics tab shows metrics divided into sections:

Documents

  • Reviewer Uncoded (for Relevance or Relevance and Key Documents analysis only)
  • Reviewer Coded Issues (for Issues analysis only)—total number of documents that reviewers coded as having the selected issue. When the All Issues tab is selected, the counts for each issue are combined, which may result in a number that is higher than the total document count.
  • Analyzed—documents in the data source that have a prediction attached from this prompt criteria version.
  • Not Analyzed—documents in the data source that do not have a prediction attached from this prompt criteria version.
  • Errored—documents that received an error code during analysis. For more information, see How document errors are handled.

Issue Predictions

The Issue Predictions section displays for Issues analysis when the All Issues tab is selected.

  • Not Relevant—issues predicted as junk or not relevant to the current aspect.
  • Borderline—issues predicted as bordering between relevant and not relevant to the current aspect.
  • Relevant—issues predicted as relevant or very relevant to the current aspect.
  • Errored—issues that received an error code during analysis.

aiR Analysis

  • Not Relevant—documents predicted as junk or not relevant to the current aspect.
  • Borderline—documents predicted as bordering between relevant and not relevant to the current aspect.
  • Relevant—documents predicted as relevant or very relevant to the current aspect.

Conflicts

  • Total—total number of documents that have a different coding decision from the predicted result. This is the sum of the Relevant Conflicts and Not Relevant Conflicts fields.
  • Relevant—documents predicted as relevant or very relevant to the current aspect, but the coding decision in the related field says something else.
  • Not Relevant—documents predicted as not relevant to the current aspect, but the coding decision in the related field says relevant.
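
To make the Conflicts arithmetic concrete, here is a minimal Python sketch; the labels and records are simplified, illustrative assumptions rather than the product's actual fields:

    # Each record pairs an aiR prediction with a reviewer's coding decision.
    # The labels below are simplified stand-ins for the real field values.
    docs = [
        {"prediction": "Relevant", "coding": "Not Relevant"},   # Relevant conflict
        {"prediction": "Not Relevant", "coding": "Relevant"},   # Not Relevant conflict
        {"prediction": "Relevant", "coding": "Relevant"},       # agreement, no conflict
    ]

    # Predicted relevant, but the coding decision says something else.
    relevant_conflicts = sum(
        1 for d in docs
        if d["prediction"] == "Relevant" and d["coding"] != "Relevant"
    )

    # Predicted not relevant, but the coding decision says relevant.
    not_relevant_conflicts = sum(
        1 for d in docs
        if d["prediction"] == "Not Relevant" and d["coding"] == "Relevant"
    )

    # Total is the sum of the two conflict counts.
    total_conflicts = relevant_conflicts + not_relevant_conflicts
    print(relevant_conflicts, not_relevant_conflicts, total_conflicts)  # 1 1 2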

The metrics adjust their counts based on the type of results displayed:

  • For Relevance results, relevance-related metrics use the Relevance Field for their counts.
  • For Key Document results, relevance-related metrics use the Key Document Field for their counts.
  • For Issues analysis, relevance-related metrics count documents marked for the selected issue.

For instance, when viewing results for an issue such as Fraud, the aiR Predicted Relevant field displays documents identified as associated with Fraud. When viewing Key Document results, the aiR Predicted Relevant field displays documents identified as key documents.
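
Conceptually, the selected aspect determines which coding field the relevance-related metrics count against. The following sketch shows a hypothetical mapping; the aspect names and returned labels are illustrative only, not actual field names:

    # Hypothetical lookup: which coding field drives the relevance-related
    # metrics for a given aspect. Labels are illustrative only.
    def metric_field(aspect: str) -> str:
        if aspect == "Relevance":
            return "Relevance Field"
        if aspect == "Key Documents":
            return "Key Document Field"
        return f"Issue choice: {aspect}"  # any Issues field choice

    print(metric_field("Fraud"))  # Issue choice: Fraud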

Filtering the Analysis Results using version metrics

To filter the Analysis Results table based on any of the version metrics, click the desired metric in the Version Metrics banner. This narrows the results shown in the table to only documents that are part of the metric. It also auto-selects those documents for the next analysis job. The number of selected documents is reflected in the Run button's text. This makes it easier to analyze a subset of the document set instead of selecting all documents every time. To remove filtering, click Clear selection underneath the Run button.

Sample screen highlighting Version Metrics tab, Reviewer Coded Issues selected, two documents filtered, and Run button showing two documents to analyze.

You can also filter documents in the Analysis Results grid by selecting them in the table. See Filtering and selecting documents for analysis.

History tab

The History tab shows results for all previous versions of the prompt criteria. This table includes all fields from the Version Metrics tab, sorted into rows by version. For a list of all Version Metrics fields and their definitions, see Version Metrics tab.

It also displays two additional columns:

  • Version—the prompt criteria version that was used for this row's results.
  • Timestamp—the time the analysis job ran.

History tab data

Analysis Results section

Below the project metrics section, the Analysis Results section shows a list of all documents in the project. If the documents have aiR for Review analysis results, those results appear beside them in the grid.

Sample of Analysis Results section.

The fields that appear in the grid vary depending on what type of analysis was chosen. For a list of all results fields and their definitions, see aiR for Review results.

aiR for Review's predictions do not overwrite the Relevance, Key, or Issues fields chosen during prompt criteria setup. Instead, the predictions are held in other fields. This makes it easier to distinguish between human coding choices and aiR's predictions.
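
As a conceptual example, a single document's record might look like the sketch below; the field names are hypothetical stand-ins, since the actual prediction field names depend on your project setup:

    # Human coding and aiR predictions live in separate fields,
    # so an analysis run never overwrites a reviewer's decision.
    doc = {
        "Control Number": "DOC-000123",           # hypothetical document ID
        "Relevance": "Relevant",                  # coded by a human reviewer
        "aiR Predicted Relevance": "Borderline",  # written by the analysis job
    }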

To view inline highlighting and citations for an individual document, click on the Control Number link. The Viewer opens and displays results for the selected prompt criteria version. For more information on using aiR for Review in the Viewer, see aiR for Review Analysis.

Filtering and selecting documents for analysis

To manually select documents to include in the next analysis run, check the box beside each individual document in the Analysis Results grid. The number of selected documents is reflected in the Run button's text. To remove filtering, click Clear selection underneath the Run button.

Select documents in the Analysis Results table using box next to document to analyze.

You can also filter the Analysis Results grid by clicking the metrics in the Version Metrics section. See Filtering the Analysis Results using version metrics.

Clearing selected documents

Click the Clear selection link below the Run button to deselect all documents in the Analysis Results grid. This resets your selections so the next analysis includes all documents in the data source.
Clear selection link

Saving selected documents as a list

To save a group of documents in the Analysis Results grid as a list, follow the steps below.

  1. Select the box beside each individual document in the Analysis Results grid that you want to add to the list.
  2. Click the Save as List button at the bottom of the grid.
  3. Enter a unique Name for the document list.
    Save as List dialog.
  4. Enter any Notes in the text box to help describe the list.
  5. Click Save.

For more information on lists, see Lists.