

Validating aiR for Review Prompt Criteria compares aiR for Review's AI-based relevance predictions to human coding decisions. Review Center calculates the validation statistics and helps you organize, track, and manage the human side of the coding process.
When you create a validation set, there are several steps that cross between aiR for Review and Review Center:
1. In aiR for Review, create the validation set.
2. aiR for Review analyzes the documents in the validation set and saves its relevance predictions.
3. In Review Center, create a Prompt Criteria validation queue.
4. Reviewers code the documents in the queue.
5. Review the validation statistics, then accept or reject the results.
During validation, steps 3 through 5 take place in Review Center. For information on the other steps, see Setting up aiR for Review prompt criteria validation.
For a general overview, see aiR for Review Prompt Criteria validation.
After you create the Prompt Criteria validation queue, the coding process is similar to any other validation queue in Review Center. Reviewers code documents using the Review Queues tab, and administrators track and manage the queue through the main Review Center tab.
As coding progresses, the Review Center dashboard displays metrics and controls related to queue progress. The main validation statistics will not appear until all documents have been coded and the validation process is complete. From the dashboard, the queue administrator can pause or cancel the queue, view coding progress, and edit some settings.
The statistics produced during Prompt Criteria validation are similar to the ones produced for a regular Review Center queue, but not identical. For more information, see Prompt Criteria validation statistics.
Reviewers access the validation queue from the Review Queues tab, the same as any other queue.
During review, reviewers code documents in the validation sample just as they would in any other queue. For full reviewer instructions, see Reviewing documents using Review Center.
Validation does not check for human error. We recommend that you conduct your own quality checks to make sure reviewers are coding consistently.
When reviewers have finished coding all the documents in the queue, review the validation statistics. You can use these to determine whether to accept the validation results, or reject them and try again with a different set of Prompt Criteria.
The statistics for Prompt Criteria validation include measures such as elusion rate and recall. The ranges listed below each statistic reflect the margin of error.
The exact criteria for accepting or rejecting may vary depending on your situation, but the goal is for the AI predictions to match the decisions of the human reviewers as closely as possible. In general, look for a low elusion rate and a high recall.
For more information on how the statistics are calculated, see Prompt Criteria validation statistics.
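As a rough illustration of how statistics like these work, the sketch below computes an elusion rate and recall from paired human and AI decisions, along with a normal-approximation margin of error. The data layout, function names, and margin-of-error formula are illustrative assumptions, not Relativity's implementation; the exact calculations are described in Prompt Criteria validation statistics.

    # Illustrative sketch only -- not Relativity's exact formulas.
    # Each sampled document is a (human_call, ai_prediction) pair,
    # where both values are "relevant" or "non-relevant".
    import math

    def validation_stats(docs, z=1.96):
        # Human calls for documents the AI predicted non-relevant.
        ai_non_relevant = [h for h, a in docs if a == "non-relevant"]
        # AI predictions for documents the humans coded relevant.
        human_relevant = [a for h, a in docs if h == "relevant"]

        # Elusion rate: relevant documents hiding in the AI's non-relevant pile.
        elusion = sum(h == "relevant" for h in ai_non_relevant) / len(ai_non_relevant)

        # Recall: share of human-relevant documents the AI also predicted relevant.
        recall = sum(a == "relevant" for a in human_relevant) / len(human_relevant)

        # Normal-approximation margin of error for a sample proportion.
        def margin(p, n):
            return z * math.sqrt(p * (1 - p) / n)

        return {
            "elusion rate": (elusion, margin(elusion, len(ai_non_relevant))),
            "recall": (recall, margin(recall, len(human_relevant))),
        }

In a sketch like this, a low elusion rate and a high recall, each with a narrow range, indicate that the AI predictions track the human decisions closely.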
When the human coding decisions are complete, you can review how effectively the AI matched human decisions, then decide whether to accept the results and use the Prompt Criteria as-is, or whether to reject the results and improve the Prompt Criteria.
After all documents in the validation queue have been reviewed, a ribbon appears underneath the Queue Summary section. This ribbon has two buttons: one to accept the validation results, and one to reject them.
Click Accept to confirm the results and finalize the Prompt Criteria, or click Reject to mark the results as rejected so that you can revise the Prompt Criteria and validate again.
After you make the choice, the Validation Progress strip on the dashboard displays the final validation statistics and a link back to the aiR for Review project. From there, you can either use the finalized Prompt Criteria on a larger document set, or edit the Prompt Criteria and continue improving it.
For information on continuing work in the aiR for Review tab, see Setting up aiR for Review prompt criteria validation.
If you reject this validation, you can run validation again later. Even if you reject the results, Review Center keeps a record of them. For more information, see Viewing results for previous validation queues.
If you change your mind after accepting the validation results, you can still reject them manually from the dashboard. After you have rejected the validation results, you can resume normal reviews in the main queue.
After you have run validation, you can switch back and forth between viewing the statistics for the current validation attempt and any previous validation queues that were completed or rejected. These queues are considered linked. Viewing the statistics for linked queues does not affect which queue is active or interrupt reviewers.
To view a linked queue, select it from the queue drop-down menu on the Review Center dashboard. When you are done viewing the linked queue's statistics, use the same drop-down menu to select the main queue or another linked queue.
The validation process assumes that the Prompt Criteria, document set, and coding decisions will all remain the same. If any of these things change, the validation results will also change. Sometimes this can be solved by recalculating the validation statistics, but often it means creating a new validation queue.
Scenarios where coding decisions on sampled documents have changed, such as documents that were skipped or mis-coded during review and later re-coded, can be fixed by recalculating statistics. In these cases, the sample itself is still valid, but the numbers have changed, so recalculate the validation results to see accurate statistics.
Scenarios where the Prompt Criteria themselves or the underlying document set have changed require a new validation queue. In these cases, recalculating does not help because the sample or the criteria no longer match, so create a new validation queue instead.
If you have re-coded any documents from the validation sample, you can recalculate the results without having to re-run validation. For example, if reviewers had initially skipped documents in the sample or coded them as non-relevant, you can re-code those documents outside the queue, then recalculate the validation results to include the new coding decisions.
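Conceptually, recalculating simply re-runs the same statistics over the updated coding decisions. Continuing the hypothetical sketch above:

    # Hypothetical sample of (human_call, ai_prediction) pairs.
    docs = [
        ("relevant", "relevant"),
        ("non-relevant", "non-relevant"),
        ("non-relevant", "non-relevant"),  # skipped, then coded non-relevant
        ("relevant", "relevant"),
    ]
    before = validation_stats(docs)

    # The skipped document is re-coded as relevant outside the queue.
    docs[2] = ("relevant", docs[2][1])
    after = validation_stats(docs)  # same sample, updated statistics

Because the sample itself is unchanged, only the proportions move, which is why re-coding calls for a recalculation rather than a new validation queue.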
For the full steps to recalculate validation results, see Recalculating validation results.