Active Learning application history
If your workspace includes projects from the older Active Learning application, you can review their statistics and results from the Active Learning History tab. This tab shows read-only data for each project.
This tab can be accessed through the Review Center dashboard. For more information, see Viewing archived Active Learning projects.
To use this tab, make sure that both Active Learning and Review Center are installed in the workspace. Uninstalling Active Learning removes the project data.
Selecting an Active Learning project
To view the data for a project, select it from the drop-down menu at the top of the page. This menu includes all Active Learning projects from the current workspace.
Project Statistics section
The Project Statistics section shows overall statistics for the selected project. A short sketch of how these counts might be tallied follows the list.
It includes:
- Coded Positive—the number of documents coded with the positive designation on the review field (excludes additional reviewed by family).
- Coded Positive Outside—the number of documents coded outside the Active Learning queue with the positive designation on the review field (excludes additional reviewed by family).
- Coded Negative—the number of documents coded with the negative designation on the review field (excludes additional reviewed by family).
- Coded Negative Outside—the number of documents coded outside the Active Learning queue with the negative designation on the review field (excludes additional reviewed by family).
- Coded Neutral—the number of documents coded with a neutral designation on the review field (excludes additional reviewed by family).
- Coded Neutral Outside—the number of documents coded outside the Active Learning queue with a neutral designation on the review field (excludes additional reviewed by family).
- Skipped—the number of documents that were saved or had Save and Next selected with no coding decision supplied on the review field (excludes additional reviewed by family).
- Project Size—the total number of documents included in the Active Learning project.
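The sketch below is one way these counts could be tallied from individual coding decisions. The record fields, and the assumption that the in-queue counts and the Outside counts are separate buckets, are illustrative only and not part of the application.

```python
from dataclasses import dataclass

# Hypothetical record of one document review; the field names are
# illustrative and not part of the application.
@dataclass
class ReviewedDoc:
    designation: str | None   # "positive", "negative", "neutral", or None if skipped
    coded_in_queue: bool      # True if the decision was made inside the Active Learning queue
    reviewed_by_family: bool  # True if reviewed only as an additional family document

def project_statistics(docs, project_size):
    """Tally the Project Statistics buckets described in the list above."""
    stats = {
        "Coded Positive": 0, "Coded Positive Outside": 0,
        "Coded Negative": 0, "Coded Negative Outside": 0,
        "Coded Neutral": 0, "Coded Neutral Outside": 0,
        "Skipped": 0,
        "Project Size": project_size,
    }
    for doc in docs:
        if doc.reviewed_by_family:
            continue  # the statistics exclude additional reviewed by family
        if doc.designation is None:
            stats["Skipped"] += 1
        else:
            key = "Coded " + doc.designation.capitalize()
            if not doc.coded_in_queue:
                key += " Outside"
            stats[key] += 1
    return stats
```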
Manually Selected section
The Manually Selected section shows the number of document coding decisions made outside the Active Learning queue, grouped by date. A grouping sketch follows the column list.
The columns include:
- Manually-selected Documents—the number of documents coded outside the Active Learning queue.
- Coded Positive—the number of these documents coded with the positive designation on the review field.
- Coded Negative—the number of these documents coded with the negative designation on the review field.
- Coded Neutral—the number of these documents coded with a neutral designation on the review field.
- Date submitted—the date in UTC that the statistics were submitted.
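As a rough illustration of the date grouping, the sketch below tallies out-of-queue coding decisions by the UTC date they were submitted. The data shape and values are invented for the example.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Invented (timestamp, designation) pairs for documents coded outside the
# Active Learning queue; the data shape and values are for illustration only.
decisions = [
    (datetime(2023, 5, 1, 14, 30, tzinfo=timezone.utc), "positive"),
    (datetime(2023, 5, 1, 18, 5, tzinfo=timezone.utc), "negative"),
    (datetime(2023, 5, 2, 9, 12, tzinfo=timezone.utc), "neutral"),
]

rows = defaultdict(lambda: {"Manually-selected Documents": 0,
                            "Coded Positive": 0,
                            "Coded Negative": 0,
                            "Coded Neutral": 0})
for submitted, designation in decisions:
    day = submitted.astimezone(timezone.utc).date()  # group by UTC date submitted
    rows[day]["Manually-selected Documents"] += 1
    rows[day]["Coded " + designation.capitalize()] += 1

for day in sorted(rows):
    print(day, rows[day])
```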
Prioritized Review section
The Prioritized Review section shows the statistics for Prioritized Review queues. The total number of coded documents appears in the section title.
For every 200 documents that are coded (excluding additional reviewed by family), a new row appears in the table. The first row in the table provides a summary for the entire project.
The columns include:
- Prioritized Review—the set of documents the statistics apply to (excludes additional reviewed by family). The counts of Coded Positive, Coded Negative, and Skipped documents should sum to 200.
- # of Reviewers—the number of unique reviewers who reviewed documents in the Prioritized Review queue during this interval.
- Coded Positive—the number of documents coded with the positive designation on the review field (excludes additional reviewed by family).
- Coded Negative—the number of documents coded with the negative designation on the review field (excludes additional reviewed by family).
- Coded Neutral—the number of documents coded with a neutral designation on the review field (excludes additional reviewed by family).
- Skipped—the number of documents that were saved or had Save and Next selected with no coding decision supplied on the review field (excludes additional reviewed by family).
- Index Health—the number of index health documents reviewed in the Prioritized Review queue. These documents are excluded from the relevance rate calculation.
- Highest Ranked—the number of highly ranked documents reviewed in the Prioritized Review queue.
- Highest Ranked Coded Positive—the number of highly ranked documents that were coded with the positive designation in the Prioritized Review queue.
- Relevance Rate—the percentage of documents chosen for being highly ranked that reviewers then confirmed as relevant through their coding decisions. You can calculate the relevance rate manually as Highest Ranked Coded Positive / Highest Ranked (see the sketch after this list).
- Family Group Documents—the number of family documents reviewed in the Prioritized Review queue.
- Positive Family Group Documents—the number of family documents coded with the positive designation in the Prioritized Review queue.
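The Relevance Rate column follows the formula given above. A minimal sketch, with invented counts for the example:

```python
def relevance_rate(highest_ranked_coded_positive, highest_ranked):
    """Relevance Rate = Highest Ranked Coded Positive / Highest Ranked,
    expressed as a percentage. Returns None when no highly ranked
    documents were reviewed in the interval."""
    if highest_ranked == 0:
        return None
    return 100.0 * highest_ranked_coded_positive / highest_ranked

# Invented interval counts: 140 of 180 highly ranked documents coded positive.
print(f"{relevance_rate(140, 180):.1f}%")  # 77.8%
```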
Coverage Review section
The Coverage Review section shows the statistics for Coverage Review queues. The total number of coded documents appears in the section title.
For every 200 documents that are coded, a new row appears in the table. The first row in the table provides a summary for the entire project.
The columns include:
- Coverage Review—the set of documents the statistics apply to.
- # of Reviewers— the number of unique reviewers who reviewed documents in the Coverage Review queue during this interval.
- Coded Positive—the number of documents coded with the positive designation on the review field.
- Coded Negative—the number of documents coded with the negative designation on the review field.
- Coded Neutral—the number of documents coded with a neutral designation on the review field.
- Skipped—the number of documents that were saved or had Save and Next selected with no coding decision supplied on the review field.
Project Validation History section
The Project Validation History section shows the results for projects that were validated. Each row represents a separate validation round.
The columns include:
- Validation—each validation round is named Elusion with Recall or Elusion Test followed by a number. For example, the first Elusion with Recall is "Elusion with Recall 1," and the second is "Elusion with Recall 2."
- Start Date—the date and time when Project Validation was started. This appears in your computer's time zone.
- Status—whether the validation results were accepted or rejected.
- Rank Cutoff—the numeric cutoff for positive prediction, fixed before Project Validation begins.
- Discard Pile Size—the number of documents below the rank cutoff that were not coded when Project Validation was started.
- Elusion Sample Size—the number of documents sampled to measure the elusion rate. This number is computed when Project Validation is started.
- Elusion Coded Relevant—the number of sampled documents from below the cutoff which were coded positive during Project Validation.
- Elusion Coded Not Relevant—the number of sampled documents from below the cutoff which were coded negative during Project Validation.
- Elusion Documents Skipped/Coded Neutral—the number of sampled documents from below the cutoff that were either saved with no coding decision on the review field, or saved with a neutral coding decision.
- Pending Document Count—the number of documents whose coding has changed since the last model build (prior to Project Validation starting). This includes documents coded in Project Validation, and documents coded through other means. For instance, if a reviewer manually codes documents after Project Validation is underway, these will be counted as Pending.
- Elusion Rate (Range)—the error rate of the discard pile. This is calculated as the percentage of sampled, previously uncoded documents from below the cutoff which are coded positive during Project Validation. The range applies this sampled rate to the entire discard pile, using the confidence level provided by the user and the margin of error calculated from the sample size (see the point-estimate sketch after this list).
- Confidence Level for Elusion—the confidence level either entered by the user or calculated when setting up Project Validation.
- Elusion Margin of Error—margin of error calculated based on the elusion sample size, the discard pile size, and the elusion rate on the validation sample.
- Estimated Eluded Documents (Range)—the projected number of eluded documents. This estimates the number of relevant documents you would miss if you produced all documents marked relevant, as well as those with ranks at or above the cutoff.
- Recall Rate (Range at CL80%)—the percentage of truly positive documents which were found by the Active Learning process. A document has been "found" if it was previously coded positive, or if it is uncoded with a rank at or above the cutoff. Recall is calculated on the sample, then estimated for the total population with a confidence level (CL) of 80%.
- Precision Rate (Range)—the percentage of found documents which are truly positive. A document has been "found" if it was previously coded positive, or if it is uncoded with a rank at or above the cutoff. Documents which were predicted positive but coded negative during validation will count against precision. Precision is calculated on the sample, then estimated for the total population with a confidence level (CL) of 80%.
- Precision Margin of Error (CL80%)—the margin of error for precision as estimated from the sample size, the equivalent portion of the whole project, and the observed precision rate on the validation sample.
- Richness Rate (Range)—the percentage of documents which are relevant (positive choice). This is calculated by dividing the number of positive-coded documents in the sample by the total number of documents in the sample. The range predicts the richness for the whole project, subject to a 95% confidence level.
- Richness Margin of Error (CL95%)—the margin of error for richness as estimated from the sample size, the whole project size, and the observed richness rate on the sample.
- Estimated Total Relevant Documents—the estimated number of relevant documents in the whole project. This is calculated by projecting the richness rate across the whole project.
- Total Documents in Project—the number of documents in the project at the time of Project Validation.
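The sketch below computes point estimates for several of these columns from sample counts, following the descriptions above: the elusion rate from the elusion sample, the estimated eluded documents by applying that rate to the discard pile, the richness rate from the validation sample, and the estimated total relevant documents by projecting richness across the project. Treating the full elusion sample as the denominator, along with all parameter names and numbers, is an assumption for illustration; the ranges and margins of error the application reports are not reproduced here.

```python
def validation_point_estimates(
    elusion_coded_relevant,      # sampled below-cutoff documents coded positive during validation
    elusion_sample_size,         # documents sampled for the elusion rate
    discard_pile_size,           # uncoded documents below the rank cutoff at the start
    sample_coded_positive,       # positive-coded documents in the validation sample
    sample_size,                 # total documents in the validation sample
    total_documents_in_project,
):
    """Point estimates only; the ranges shown in the tab also depend on the
    confidence level and margins of error, which are not reproduced here."""
    elusion_rate = elusion_coded_relevant / elusion_sample_size
    estimated_eluded = elusion_rate * discard_pile_size          # rate applied to the discard pile
    richness_rate = sample_coded_positive / sample_size
    estimated_total_relevant = richness_rate * total_documents_in_project
    return {
        "Elusion Rate": f"{100 * elusion_rate:.1f}%",
        "Estimated Eluded Documents": round(estimated_eluded),
        "Richness Rate": f"{100 * richness_rate:.1f}%",
        "Estimated Total Relevant Documents": round(estimated_total_relevant),
    }

# Invented numbers for illustration only.
print(validation_point_estimates(
    elusion_coded_relevant=4, elusion_sample_size=400,
    discard_pile_size=52_000, sample_coded_positive=120,
    sample_size=1_500, total_documents_in_project=100_000))
```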
Model Updates section
The Model Updates section shows each time the Active Learning model was built, as well as the distribution of document ranks after each build. Each row represents a separate model build. A sketch of how ranks map to the table's bands follows the column list.
The columns include:
- Build Date—the date and time the model was built. This appears in your computer's time zone.
- 0-10—the number of documents that ranked between 0 and 10 after this build.
- 11-20—the number of documents that ranked between 11 and 20 after this build.
- 21-30—the number of documents that ranked between 21 and 30 after this build.
- 31-40—the number of documents that ranked between 31 and 40 after this build.
- 41-50—the number of documents that ranked between 41 and 50 after this build.
- 51-60—the number of documents that ranked between 51 and 60 after this build.
- 61-70—the number of documents that ranked between 61 and 70 after this build.
- 71-80—the number of documents that ranked between 71 and 80 after this build.
- 81-90—the number of documents that ranked between 81 and 90 after this build.
- 91-100—the number of documents that ranked between 91 and 100 after this build.
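The sketch below shows one way the rank bands in this table could be derived from integer document ranks between 0 and 100. The banding logic is an assumption based on the column headings; the application's exact boundary handling may differ.

```python
from collections import Counter

def rank_band(rank):
    """Map an integer document rank (0-100) to a band used by the Model Updates table."""
    if rank <= 10:
        return "0-10"
    upper = ((rank - 1) // 10 + 1) * 10
    return f"{upper - 9}-{upper}"

def rank_distribution(ranks):
    """Count documents per rank band after a model build."""
    return Counter(rank_band(r) for r in ranks)

# Invented ranks for illustration only.
print(rank_distribution([3, 10, 11, 47, 50, 51, 99, 100]))
```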