Elusion Test

An Elusion Test is used to validate the accuracy of an Active Learning project. We recommend running the Elusion Test near the end of the project when you believe the project has stabilized and the low-ranking documents have an acceptably low relevance rate. However, you can run an Elusion Test at any point during the project.

The Elusion Test is available as an additional card. When you run the Elusion Test, you specify the confidence level, margin of error, and rank cutoff; the elusion sample is drawn from uncoded documents below that cutoff. Uncoded documents include documents that were never reviewed and documents that were skipped.

  • Elusion - the proportion of non-produced documents (in this case, documents below the rank cutoff) that are relevant.

Reviewers then code these documents on the same project review field to determine what relevant documents remain; these coding decisions ultimately produce the elusion calculations.

  • Elusion rate - the percentage of documents coded relevant in the elusion sample. A lower elusion rate indicates a project is close to completion.
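As a rough illustration of the definition above (the coding values and counts here are hypothetical, not taken from the application), the elusion rate is simply the relevant fraction of the coded sample, with skipped documents counted as relevant:

```python
# Hypothetical elusion sample of 500 coding decisions.
sample_codes = ["not_relevant"] * 470 + ["relevant"] * 25 + ["skip"] * 5

# Skipped documents are treated as relevant in Elusion Test results.
relevant = sum(code in ("relevant", "skip") for code in sample_codes)
elusion_rate = relevant / len(sample_codes)

print(f"Elusion rate: {elusion_rate:.1%}")  # 30 / 500 = 6.0%
```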

Starting the Elusion Test

The Elusion Test appears along with the other review queues after a new project is created. Starting an Elusion Test disables all other active queues in the project and suspends model updates until the Elusion Test is completed.

To run an Elusion Test, complete the following:

  1. Click Add Reviewers on the Elusion Test and confirm you want to start an Elusion Test.

    Note: We recommend no more than 20 concurrent reviewers per project. Concurrent reviewers are defined as reviewers making coding decisions in an Active Learning queue. There is no limit to how many reviewers you can add to a queue as long as the number of concurrent reviewers remains at 20 or fewer.

  2. Wait for the system to set up the test. Once the queue reads Click to setup the Elusion Test, click the queue.
  3. On the Elusion Test setup window, complete the following fields:
    • Responsive Cutoff - the rank below which the Elusion Test will sample non-coded, predicted non-relevant documents (not reviewed, skipped, suppressed duplicates).

      Note: When you update the responsive cutoff value, the value is updated in all three places where it’s used in the application: Elusion Test, Update ranks, and Project Settings.

    • Sample Type
      • Fixed - creates a random sample of a fixed number of documents.
      • Statistical - creates a random sample set of a size that is based on a given Confidence and Margin of Error.
    • Confidence (%) - the probability that the rate in the sample is a good measure of the rate in the project universe (i.e., within the margin of error). Selecting a higher confidence level requires a larger sample size.
    • Margin of Error (%) - the normal variability between the observed rate in the sample and the true rate in the project universe. Selecting a lower margin of error requires a larger sample size. Margin of error can change if documents were skipped in the Elusion Test.
    • Reviewers - the users that will review documents in the Elusion Test.
  4. Click the green check mark to accept your settings.
  5. Click Start Review.
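To give a sense of how Confidence and Margin of Error drive the size of a Statistical sample, here is a minimal sketch using the standard normal-approximation formula with the conservative prevalence p = 0.5. This is a textbook approximation; the application's exact sample-size method may differ.

```python
import math

# z-scores (standard normal quantiles) for common confidence levels
Z = {90: 1.645, 95: 1.960, 99: 2.576}

def sample_size(confidence_pct: int, margin_of_error_pct: float) -> int:
    """Conservative sample size: n = z^2 * p * (1 - p) / e^2, with p = 0.5."""
    z = Z[confidence_pct]
    e = margin_of_error_pct / 100
    return math.ceil(z * z * 0.25 / (e * e))

print(sample_size(95, 5.0))  # 385
print(sample_size(95, 2.5))  # 1537
```

Note how halving the margin of error roughly quadruples the sample size, which is why tighter settings require substantially more review effort.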

Running an Elusion Test

Elusion Test statistics are reported in Review Statistics and update during an Elusion Test. You can cancel an Elusion Test at any time. You can also pause a review by clicking the Pause Review button.

Reviewers access the queue from the document list like all other queues. Reviewers code documents from the sample until all documents have been served, at which point a completion message appears.

Note: For best results, we strongly recommend coding every document in the Elusion Test and avoiding skipping documents. Skipped documents are considered relevant in Elusion Test results.

When a reviewer saves documents in the Elusion Test, the documents are tagged in the <Project Name> Elusion Test multi-choice field.

Note: Manually coded documents are not sampled for Elusion Tests.

Reviewing Elusion Test results

Once reviewers code all documents in the sample, you can access Elusion Test results by clicking View Elusion Test Results.

Based on the coding of the elusion test sample, the results display the following:

  • Elusion Rate - the percentage of documents coded relevant in the elusion sample. Documents that are skipped during the Elusion Test queue are treated as relevant documents. Therefore, you must code all the documents in the elusion sample to ensure that the calculated elusion rate is a valid estimate for the overall discard pile population.
    The elusion rate is displayed as a range that applies the margin of error to the rate measured in the sample, yielding the expected elusion rate across all uncoded documents predicted not relevant. The rate is rounded to the nearest tenth of a percent.
  • Eluded Documents - the estimated number of eluded documents. This number is subject to the final confidence and margin of error, which can be found in Review Statistics.
  • Pending Documents - the number of documents that have not been submitted to the model, including documents in the elusion test sample and manually-selected documents coded while the elusion test was taking place.

If documents were skipped during the Elusion Test, a warning appears on the modal. You can review these skipped documents, and they'll be reflected in the results as if they were coded during the test. If these documents are coded after you click Complete Project, only the Pending Documents count is updated.
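A hedged sketch of how the result figures above can be derived from the sample. All inputs here are hypothetical, and the exact interval and rounding methods the application uses may differ:

```python
# Hypothetical inputs: sample coding results, the size of the uncoded
# predicted-not-relevant population below the cutoff, and the final
# margin of error reported in review statistics.
sample_relevant = 30
sample_size = 500
discard_pile = 40_000          # uncoded docs below the rank cutoff (assumed)
margin_of_error = 0.021        # final margin of error (assumed)

rate = sample_relevant / sample_size
low = max(0.0, rate - margin_of_error)
high = rate + margin_of_error

# Elusion rate is displayed as a range; eluded documents project the
# sampled rate onto the discard pile.
print(f"Elusion rate: {low:.1%} - {high:.1%}")            # 3.9% - 8.1%
print(f"Eluded documents: ~{round(rate * discard_pile)}")  # ~2400
```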

If you find the results of the Elusion Test acceptable, select whether to Update ranks upon completion, and then click Complete Project to close the project. Once the project is complete, the model remains frozen. Any coding decisions that occurred after the Elusion Test was administered remain suppressed from the model.

Note: Updating ranks upon accepting Elusion Test results will use the Elusion Test Rank Cutoff.

If you don't find the results of the Elusion Test acceptable, click Resume Project, and then click it again to confirm re-opening the Active Learning project. This unlocks the model's learning and submits to the model any documents that were coded since the Elusion Test was administered.

Elusion Test statistics are reported in Review Statistics and persist after an Elusion Test is finished. This data is located under the Elusion Test tab.

See Review Statistics for more information.