

Review validation evaluates the accuracy of a Review Center queue. The goal of validation is to estimate the accuracy and completeness of your relevant document set if you were to stop the queue immediately and not produce any unreviewed documents. The primary statistic, the elusion rate, estimates the proportion of uncoded documents that are actually relevant, and therefore how many relevant documents you would leave behind if you stopped the queue. The other statistics give further information about the state of the queue.
Note: Review validation does not check for human error. We recommend that you conduct your own quality checks to make sure reviewers are coding consistently.
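As a rough, hypothetical illustration of what the elusion rate measures (not Relativity's exact calculation, which is described in Review validation statistics), the sketch below estimates elusion from a simple random sample drawn from the discard pile. All numbers and variable names are made up.

```python
# Hypothetical illustration of an elusion estimate from a simple random sample.
# See Review validation statistics for the formulas Review Center actually uses.

discard_pile_size = 50_000   # uncoded documents left behind if the queue stopped now
sample_size = 1_500          # documents drawn at random from the discard pile
relevant_in_sample = 12      # sampled documents that reviewers coded as relevant

# Point estimate of the elusion rate: the fraction of the discard pile
# that is estimated to be relevant.
elusion_rate = relevant_in_sample / sample_size

# Projected number of relevant documents left behind if review stops now.
estimated_eluded = elusion_rate * discard_pile_size

print(f"Elusion rate: {elusion_rate:.2%}")                                   # 0.80%
print(f"Estimated relevant documents left behind: {estimated_eluded:.0f}")   # 400
```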
The following definitions are useful for understanding review validation:
When a Prioritized Review queue is nearing completion, it can become more difficult to find additional relevant documents. As you monitor your queue, the following dashboard charts can help you determine when the queue is ready for validation:
When you believe you have found most of the relevant documents, run validation to estimate the accuracy and completeness of your relevant document set.
For more information on the dashboard charts, see Charts and tables.
When you are ready to validate your progress in a Review Center queue, you can start a linked validation queue that samples documents from the discard pile and serves them to reviewers.
To set up the validation queue:
Validation always samples a specific number of documents, but there are three ways to choose the sample size:
The final margin of error estimates may differ slightly from the ones chosen at setup, depending on the documents found during validation. All validation statistics are reported with a margin of error at a 95% confidence level.
The estimated elusion margin of error depends only on the sample size, and vice versa. Their relationship to the estimated recall margin of error depends on the number of relevant documents that have already been coded and the current size of the discard pile. It may vary among different validation samples, even within the same review.
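As a back-of-the-envelope sketch of how the sample size and the elusion margin of error relate, the example below uses the normal approximation to the binomial at a 95% confidence level. This is a common approximation, not Relativity's exact formula; see Review validation statistics for the formulas Review Center uses.

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence level

def elusion_margin_of_error(sample_size: int, elusion_rate: float = 0.5) -> float:
    """Approximate margin of error for an elusion estimate from a simple random
    sample. The default elusion_rate of 0.5 is the worst case (widest interval)."""
    return Z_95 * math.sqrt(elusion_rate * (1 - elusion_rate) / sample_size)

def sample_size_for_margin(target_moe: float, elusion_rate: float = 0.5) -> int:
    """Invert the formula above: the sample size needed to reach a target
    margin of error at a 95% confidence level."""
    return math.ceil((Z_95 ** 2) * elusion_rate * (1 - elusion_rate) / target_moe ** 2)

print(f"{elusion_margin_of_error(1_500):.2%}")   # ~2.53% worst-case margin at n = 1,500
print(sample_size_for_margin(0.025))             # 1,537 documents for a 2.5% margin
```

In practice, the observed elusion rate is usually well below 0.5, so the final margin of error is typically narrower than this worst case, which is one reason the reported margin can differ slightly from the one chosen at setup.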
For more information on how validation statistics are calculated, see Review validation statistics.
Each validation queue inherits these settings from the main queue:
To change them, edit the validation queue after creating it. For more information, see Editing a validation queue.
Reviewers access the validation queue from the Review Queues tab, just like any other queue. Have reviewers code documents from the sample until all documents have been served.
For best results, we strongly recommend coding every document in the validation queue as positive or negative. Avoid skipping documents or coding them as neutral. For more information, see How validation handles skipped and neutral documents.
Validation statistics are reported on the Review Center dashboard just as they are for any other queue. You can pause validation by clicking the Pause button, or cancel it from the three-dot menu. While validation is in progress, all data in the charts and tables reflects the validation queue.
During validation, the Review Progress section changes to become a Validation Progress section, which shows the progress of the validation queue. To view validation statistics instead, click the arrow next to the section name, then select Validation Stats.
For more information on the validation statistics, see Reviewing validation results.
You can change some of the queue settings at any time during validation.
To edit the validation queue:
For descriptions of the queue settings, see Creating a Review Center queue.
If a reviewer becomes inactive and does not review the last few documents in a validation queue, you can release those documents through the Queue Summary section of the dashboard. For more information, see Editing queues and other actions.
To see which documents are checked out to a reviewer, filter the Reviewed Documents table by the reviewer's name. Documents that are still checked out show a blank Coded Time. For more information, see Reviewed Documents table.
If you want to run your own calculations or view documents in the validation sample, you can track the sampled documents from the Document list page. This process is optional.
To view sampled documents:
Each validation folder contains the documents selected for the sample. It also holds a sub-choice that shows all documents removed from that sample.
To view coding decisions for each document, add Review Center Coding::Value to the document view. For other optional fields, see Tracking reviewer decisions.
After all documents in the validation queue have been reviewed, a ribbon appears underneath the Queue Summary section. This ribbon has two buttons: one to accept the validation results, and one to reject them.
If you click Accept:
If you click Reject:
You can run validation on the queue again at any later time, and you can reject validation rounds as many times as needed. Even if you reject the results, Review Center keeps a record of them. For more information, see Viewing results for previous validation queues.
If you change your mind after accepting the validation results, you can still reject them manually.
To reject the results after accepting them:
After you have rejected the validation results, you can resume normal reviews in the main queue.
After reviewers code all documents in the sample, the queue status changes to Complete. All validation results appear in the Validation Progress section of the Review Center dashboard.
The results include:
For more information about how these statistics are calculated, see Review validation statistics.
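For intuition only, the sketch below shows one common way a recall estimate can be derived from the elusion estimate and the documents already coded relevant. The variable names and numbers are hypothetical, and Relativity's exact formulas are described in Review validation statistics.

```python
# Hypothetical numbers; see Review validation statistics for the exact formulas.
coded_relevant = 9_600        # documents coded relevant in the main queue so far
discard_pile_size = 50_000    # uncoded documents remaining
elusion_rate = 0.008          # estimated from the validation sample

# Relevant documents estimated to remain uncoded (eluded).
estimated_eluded = elusion_rate * discard_pile_size

# Recall: the share of all relevant documents that the review has already found.
estimated_recall = coded_relevant / (coded_relevant + estimated_eluded)

print(f"Estimated recall: {estimated_recall:.1%}")   # 96.0%
```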
If you have re-coded any documents from the validation sample, you can recalculate the results without having to re-run validation. For example, if reviewers had initially skipped documents in the sample or coded them as neutral, you can re-code those documents outside the queue, then recalculate the validation results to include the new coding decisions.
To recalculate validation results:
After you have run validation on a queue, you can switch back and forth between viewing the statistics for the main queue and any linked validation queues that were completed or rejected. Viewing the statistics for linked queues does not affect which queue is active or interrupt reviewers.
To view linked queues:
When you're done viewing the linked queue's stats, you can use the same drop-down menu to select the main queue or other linked queues.
Typically, review validation is linear: the administrator sets up the validation sample, the reviewers code the sample, and the results are calculated from those documents. However, if documents are added or removed, coded documents are re-coded, or other changes are made to the queue being validated, the validity of the results can be affected.
The following scenarios can be fixed by recalculating statistics:
In these cases, the sample itself is still valid, but the numbers have changed. Recalculate the validation results to see accurate statistics.
For instructions on how to recalculate results, see Recalculating validation results.
The following scenarios cannot be fixed by recalculation:
In both of these cases, the validation sample is no longer a random sample of all uncoded or neutral documents. For these situations, we recommend starting a new validation queue.